# A note on a universal random variate generator for integer-valued random variables
@article{Barabesi2014ANO,
title={A note on a universal random variate generator for integer-valued random variables},
author={Lucio Barabesi and Luca Pratelli},
journal={Statistics and Computing},
year={2014},
volume={24},
pages={589-596}
}
• Published 5 November 2012
• Mathematics
• Statistics and Computing
A universal generator for integer-valued square-integrable random variables is introduced. The generator relies on a rejection technique based on a generalization of the inversion formula for integer-valued random variables. This approach allows one to create a dominating probability function whose evaluation involves only two integrals depending on the characteristic function of the random variable to be generated. The proposal gives rise to a simple algorithm which may be implemented in a few…
Universal methods for generating random variables with a given characteristic function
• Mathematics, Computer Science
• 2015
Universal generators for absolutely-continuous and integer-valued random variables are introduced, based on a generalization of the rejection technique proposed by Devroye; these give rise to simple algorithms that may be implemented in a few code lines and may show noticeable performance even for some classical families of distributions.
Algorithms for generating random variables with a rational probability-generating function
Two algorithms for generating random variables with a rational probability-generating function are presented. One of them implements the recently developed general range reduction method, and the
The Tempered Discrete Linnik distribution
• Mathematics
Stat. Methods Appl.
• 2018
A new family of integer-valued distributions is introduced by considering a tempered version of the Discrete Linnik law, which is actually a generalization of the well-known Poisson–Tweedie law.
Random variate generation and connected computational issues for the Poisson–Tweedie distribution
• Mathematics, Computer Science
Comput. Stat.
• 2016
A closed form for the probability function, as well as its corresponding integral representation which may be useful for large argument values are introduced.
The computation of the probability density and distribution functions for some families of random variables by means of the Wynn-ρ accelerated Post-Widder formula
The suitable use of the Post-Widder inversion formula for Laplace transforms – coupled with the Wynn’s ρ-algorithm for accelerating sequences – is proposed in order to evaluate the probability density function and the distribution function of a large collection of random variables.
A new family of tempered distributions
• Mathematics
• 2016
Tempered distributions have received considerable attention, both from a theoretical point of view and in several important application fields. The most popular choice is perhaps the Tweedie model,
On the properties of a Takács distribution
• Mathematics
Statistics & Probability Letters
• 2019
Tempered positive Linnik processes and their representations
• Mathematics
• 2021
We study several classes of processes associated with the tempered positive Linnik (TPL) distribution, in both the purely absolutely-continuous and mixed law regimes. We explore four main
Goodness-of-Fit Testing for the Newcomb-Benford Law With Application to the Detection of Customs Fraud
• Computer Science
• 2018
A new way of testing the Newcomb-Benford law for digit sequences is suggested that turns out to be particularly attractive for the detection of frauds in customs data collected from international trade.
Fast Pricing of Energy Derivatives with Mean-reverting Jump Processes
• Mathematics
• 2019
The law of a mean-reverting (Ornstein-Uhlenbeck) process driven by a compound Poisson with exponential jumps is investigated in the context of the energy derivatives pricing. The said distribution
## References
Automatic Nonuniform Random Variate Generation
• Computer Science
• 2011
It is shown how random variate generation algorithms work, and an interface for R is suggested as an example of a statistical library which could be used for simulation or statistical computing.
A simple universal generator for continuous and discrete univariate T-concave distributions
We use inequalities to design short universal algorithms that can be used to generate random variates from large classes of univariate continuous or discrete distributions (including all log-concave
Random variate generation for exponentially and polynomially tilted stable distributions
This work develops exact random variate generators for the polynomially and exponentially tilted unilateral stable distributions and presents a novel double rejection method that is useful whenever densities have an integral representation involving an auxiliary variable.
Non-Uniform Random Variate Generation
This chapter reviews the main methods for generating random variables, vectors and processes in non-uniform random variate generation, and provides information on the expected time complexity of various algorithms before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
Generating random numbers from a distribution specified by its Laplace transform
This paper advocates simulation by the inversion method using a modified Newton-Raphson method, with values of the distribution and density functions obtained by numerical transform inversion, and shows that this algorithm performs well in a series of increasingly complex examples.
Algorithms for Generating Discrete Random Variables with a Given Generating Function or a Given Moment Sequence
The author presents and analyzes various algorithms for generating positive integer-valued random variables when the distribution is described either through the generating function \$\sum _{i =
Short universal generators via generalized ratio-of-uniforms method
• J. Leydold
• Mathematics, Computer Science
Math. Comput.
• 2003
We use inequalities to design short universal algorithms that can be used to generate random variates from large classes of univariate continuous or discrete distributions (including all log-concave
A simple generator for discrete log-concave distributions
• L. Devroye
• Mathematics, Computer Science
Computing
• 2006
We give a short algorithm that can be used to generate random integers with a log-concave distribution (such as the binomial, Poisson, hypergeometric, negative binomial, geometric, logarithmic series
Parameter Estimation for the Discrete Stable Family
• Mathematics
• 2008
The discrete stable family constitutes an interesting two-parameter model of distributions on the non-negative integers with a Paretian tail. The practical use of the discrete stable distribution is
# Entropy and enthalpy
1. Feb 29, 2008
### Mr_Bojingles
I decided to try and learn what entropy is today and I swear to god I've been sitting here for 4 hours and I still don't have the foggiest idea of what the hell it is. It's driving me insane. I can't think anymore because of the stress that's building up from the fact that I just can't comprehend the concept.
What I've read is that it ties into the second law of thermodynamics and that basically it is a measure of the forces that tend to disperse molecules or heat and distribute them uniformly in a closed system. That makes perfect sense to me.
Here's where the contradictions start. Other sites say that entropy is a measure of disorder and that nature tends to go towards an unorganized, disordered state. Personally I see the dispersion of matter or energy to achieve a uniform state as organized and ordered. There's nothing disorganized about that.
What am I missing here? I can't make any sense of the explanations on the internet. Some of them say if you have 2 metal blocks and one is hotter than the other. Let's say block 1 is hot and block 2 is cold. They say that if heat transfers from block 1 to block 2, the entropy of block 1 rises while the entropy of block 2 decreases. If that's the case I have no idea what entropy is.
I'll be too pissed off to go to bed unless I understand the concept, and the way things are looking I'm not going to be sleeping tonight. Can anyone help me understand it? If I can just figure out what entropy is I might be calm enough to go to sleep. I'll leave enthalpy for another day.
Last edited: Feb 29, 2008
2. Feb 29, 2008
### nathan12343
Technically,
$$S = k \ln \Omega$$
Where $$k$$ is Boltzmann's constant and $$\Omega$$ is the multiplicity, or the number of microscopic states the system can take on that have the same energy. It's easy to figure out what $$\Omega$$ is for simple systems, and one generally finds that the multiplicity function is sharply peaked: it's really, really unlikely to find a system far away from the state of maximum entropy. So, if a system starts in a low-entropy state (a block of ice), it will tend to go to a higher-entropy state (a puddle). The second law of thermodynamics isn't really a physical law in the sense of $$F = ma$$, but the statistics always work out such that the probability of the entropy not increasing isn't even worth considering.
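A quick numerical sketch of that sharp peak. The Einstein-solid multiplicity formula and the particular system sizes below are illustrative choices, not something stated in the thread:

```python
from math import comb

# Multiplicity of an Einstein solid: Omega(N, q) = C(q + N - 1, q),
# the number of ways N oscillators can share q energy quanta.
def omega(N, q):
    return comb(q + N - 1, q)

# Two small solids A and B in thermal contact, sharing 100 quanta total.
N_A = N_B = 50
q_total = 100
states = [omega(N_A, qA) * omega(N_B, q_total - qA) for qA in range(q_total + 1)]
total = sum(states)

# The even split is by far the most probable macrostate, and the peak
# only sharpens as N and q grow toward macroscopic numbers.
p_even = states[50] / total
p_lopsided = states[5] / total
print(p_even > 1000 * p_lopsided)  # True
```

Even for this toy system of 100 oscillators, a lopsided energy split is astronomically less likely than the near-even one.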
3. Feb 29, 2008
### Mapes
Entropy is a difficult concept to understand, no doubt about it. It's a "something" that seems to pass from one body to another and can apparently be generated from nowhere.
Forget about the disorder and the dispersal analogies, they're too flawed to work for you. The most fundamental definition of entropy that I know of is $S = -k \sum p_i \ln p_i$ where $p_i$ is the probability of the system being in microstate $i$ (a microstate is when each particle has an assigned quantum state that is compatible with macroscopic observables like temperature and pressure). If all microstates are equally probable, we have the familiar $S=k\ln \Omega$, where $\Omega$ is the number of microstates. The best description that I've heard of the 2nd law is that entropy (= number of microstates) tends to increase, and that entropy is maximized when a system is at equilibrium.
Here is the problem with the disorder argument: disorder is subjective. Who's to say whether a messy room is randomly disordered or whether each item has been carefully positioned, with definite rules, by its owner? Additionally, disorder on the microscale and on the macroscale can be difficult to identify. Alluding to your point, a glass of crushed ice looks disordered, while a glass of water looks ordered. However, the glass of water has far more entropy.
Here is the problem with the energy dispersal argument: Consider two rings of pure material spinning in opposite directions at a very low temperature, arbitrarily close to absolute zero. The system velocity and angular momentum are zero; the only important number is the rotational speed. There are very few possible microstates (tending to one as we approach absolute zero) that are compatible with the system, because random atomic motion is nearly eliminated due to the low temperature. Each atom is pretty much limited to following its circular path with essentially no thermal vibration. The entropy is very low, almost zero.
If the rings are now brought into contact, they will eventually slow each other to a stop by friction. Now the rotational speed is zero and the material is hotter, say at some temperature $T$ well above absolute zero. There is now a huge number of possible microstates, because the random thermal energy could be apportioned to the particles in an enormous number of combinations without us ever knowing the difference. (It doesn't matter whether atom #45,391,567,958,... is going fast and #198,562,994,261,... is going slow or vice versa, as long as the energies add up to put the bulk material at temperature $T$.)
This is where I have a problem with "energy dispersal." The energy isn't more disperse after we connect the rings. The energy didn't go anywhere, the system is closed. Neither has the energy spread out spatially. The average energy of the particles is still the same. I think the dispersal definition falls short here, while the microstates definition explains the spontaneity of the process with no problems.
So I encourage you to think in terms of microstates, as nathan12343 pointed out above. When we heat a system, its number of microstates increases. When we cool a system, its number of microstates decreases (but the energy we removed increases the number of microstates of the surroundings). When we do work on a system, there is no change in the population of microstates. The number of possible microstates in the entire universe tends to increase (this is the Second Law).
It may be useful to think of entropy as something that "flows", but you have to be careful (nothing is actually flowing). Entropy is the conjugate extensive variable to temperature, just as volume is the conjugate extensive variable to pressure. Just as two systems will exchange volume in order to equalize their pressure, two systems will "exchange" entropy to equalize their temperature. But what is really happening is that energy is being transferred, increasing the number of possible microstates in one system while decreasing the possible number in another.
Finally, you should know that entropy is conserved for reversible processes, but entropy is created whenever energy is transferred in response to a gradient in an intensive variable, like temperature or pressure. In fact, "reversible" means that energy is being transferred without any gradient in temperature, pressure, etc. This never occurs in reality, but we can come arbitrarily close, and it's a useful idealization.
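The two entropy formulas in this post can be checked directly. A minimal sketch; the eight-state system and the particular probabilities are made up for illustration:

```python
import math

k = 1.380649e-23  # Boltzmann's constant, J/K

# Gibbs entropy: S = -k * sum_i p_i ln(p_i) over microstate probabilities.
def gibbs_entropy(probs):
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# With equally probable microstates this reduces to S = k ln(Omega).
Omega = 8
S_uniform = gibbs_entropy([1.0 / Omega] * Omega)
S_boltzmann = k * math.log(Omega)

# Any non-uniform distribution over the same states has lower entropy,
# consistent with entropy being maximized at equilibrium, where all
# accessible microstates are equally likely.
S_peaked = gibbs_entropy([0.9] + [0.1 / 7] * 7)
print(S_uniform >= S_peaked)  # True
```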
4. Feb 29, 2008
### Andy Resnick
I'm not sure anyone understands "entropy". It's one of the more difficult concepts, tying in threads from mechanical work, statistics, information, and probably others. Part of the difficulty is that there's not a good definition of 'energy', either.
First, understand that not all energy possessed by an object is accessible. Of the food we eat, we can only use (IIRC) about 80% of the calories.
Second, understand that the amount of energy possessed by an object is due, in part, to its configuration. A folded protein has a different energy than an unfolded protein. A ball up in the sky has a different energy than the ball on the ground.
"Entropy", as I think of it, is a measure of the amount of energy within an object (or system) that we cannot access. That's not very precise, and there's lots of mathematical derivations assigning a value to that amount of energy: in terms of statistics (kT ln(W)), in terms of information (kT ln(2)), in terms of thermodynamic concepts (dS = dQ/T), probably some others.
I really struggled with thermodynamics for a long time, trying to get an intuitive feel for all those different energies (enthalpy, Gibbs, Helmholtz, etc.), Maxwell relations, logarithmic derivatives, and all the time wondering what's the point. It may help to remember that thermodynamics is one of the oldest branches of Physics- it predates knowledge of atoms. The concepts used may appear foreign to us, as we have become accustomed to quantum mechanics and microscopic dynamics.
That, coupled with an embarrassing lack of a decent undergrad text, causes no end of headaches. FWIW, if you can find a copy of Truesdell's "The Tragicomical History of Thermodynamics", you may find it helpful.
5. Feb 29, 2008
### pkleinod
As Andy Resnick pointed out, thermodynamics is one of the oldest branches of physics, so it must be possible to make sense of entropy without recourse to statistical mechanics. Not that there is anything wrong with statistical interpretations of entropy. In fact, many thermodynamic properties can be accurately estimated with the help of statistical mechanics, and the statistical interpretation adds a great deal of insight into the concept of entropy.
Have you attempted to calculate the net entropy change when the two blocks of metal are put into contact with one another? If you can calculate this, it might help in understanding the entropy concept better.
Suppose you consider just one block of metal (the system) at a temperature T2 which is then put into contact with a huge heat bath held at temperature T1 (i.e. a bath with essentially infinite heat capacity) such that T2 > T1. After a while, the block will have cooled to temperature T1 and will have given up an amount of energy Q = C(T2 - T1), where C is the heat capacity of the metal, here assumed to be constant over the temperature range considered. Since the heat bath was kept at temperature T1 the whole time, the entropy change of the bath is Q/T1.
Unfortunately, we don't know the entropy change of the block, because the cooling process was not carried out reversibly. To find the change, the system must be restored reversibly to its original state. This can be done by placing the block in contact with successively hotter heat baths whose temperatures differ from each other by an infinitesimal amount. The heat absorbed by the block in each infinitesimal step is dQ = C dT, and the entropy change of the block is (dQ)/T = (C/T)dT. The total entropy change of the block is then the integral of this between the limits T1 and T2: C log(T2/T1). Now, the entropy is a function of state, depending only on the temperature in this case, so the entropy change of the block in the cooling process is just the negative of this. The net change in entropy for the cooling process is therefore
$$\Delta S_{total} = \Delta S_{bath} + \Delta S_{block}$$
$$= C(T_2 - T_1)/T_1 - C \log (T_2/T_1) > 0$$
I now invite you to calculate the net entropy change for the process you mentioned involving two blocks, one at T1 and one at T2, put into thermal contact with one another. HINT: Assume this process is adiabatic, i.e. there is no heat exchange with the bath during the temperature equilibration. Then restore each block separately to its original state in order to calculate the net entropy change in the process.
An excellent text is "Thermodynamics" by G.N. Lewis and M. Randall (McGraw-Hill, 1961).
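The worked example above, and the suggested two-block exercise, are easy to check numerically. A sketch; the values of C, T1, and T2 are arbitrary illustrative choices, not from the thread:

```python
from math import log

C = 100.0              # heat capacity of each block, J/K (illustrative)
T1, T2 = 300.0, 400.0  # cold/bath and hot temperatures, K (illustrative)

# Worked example: block cooled irreversibly from T2 to T1 in a bath held at T1.
dS_bath = C * (T2 - T1) / T1   # bath absorbs Q = C(T2 - T1) at fixed T1
dS_block = -C * log(T2 / T1)   # state function: minus the reversible heating integral
dS_total = dS_bath + dS_block
print(dS_total > 0)  # True: the irreversible transfer creates entropy

# Suggested exercise: two identical blocks equilibrate adiabatically at
# Tf = (T1 + T2)/2; restore each reversibly to get the entropy changes.
Tf = (T1 + T2) / 2
dS_two_blocks = C * log(Tf / T1) + C * log(Tf / T2)
print(dS_two_blocks > 0)  # True, since Tf^2 > T1*T2 by AM-GM
```

The two-block result C log(Tf²/(T1·T2)) is positive for any T1 ≠ T2, which is the point of the exercise.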
6. Mar 1, 2008
### GT1
Why is there no entropy change when work is done? If work is done by the system, the system has less energy, so why doesn't the entropy change when the energy of the system changes?
7. Mar 1, 2008
### Mapes
Let's look at a practical example: a gas in a closed container. If we allow the gas to expand reversibly and do work on the environment, its energy decreases as you said (and its temperature also decreases). If the volume had stayed constant, the entropy would have decreased too. However, the volume has increased, allowing the gas more possible microstates. This increase in entropy exactly offsets the decrease due to the loss of energy.
Mathematically,
$$dU=T\,dS-p\,dV$$
$$dS=\frac{1}{T}\,dU+\frac{p}{T}\,dV=0$$
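That cancellation can be made concrete with an ideal monatomic gas expanding reversibly and adiabatically. A sketch; the mole count, temperatures, and volumes are arbitrary illustrative values:

```python
from math import log

R = 8.314        # gas constant, J/(mol K)
n = 1.0          # moles (illustrative)
Cv = 1.5 * R     # molar heat capacity of a monatomic ideal gas
gamma = 5.0 / 3.0

# Reversible adiabatic expansion: T * V^(gamma - 1) stays constant.
T1, V1, V2 = 300.0, 1.0, 2.0
T2 = T1 * (V1 / V2) ** (gamma - 1)   # gas cools as it expands

# The entropy lost to cooling is exactly offset by the extra volume
# (more accessible microstates): dS = n Cv ln(T2/T1) + n R ln(V2/V1).
dS = n * Cv * log(T2 / T1) + n * R * log(V2 / V1)
print(abs(dS) < 1e-9)  # True: dS = 0 along the reversible adiabatic path
```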
8. Mar 1, 2008
### GT1
Thanks Mapes!
9. May 5, 2010
### cwconnell
I do not have any quick answers, but entropy is the degradation of the quality of energy. If you have an engine that burns fuel, e.g. an automobile, energy is being degraded and there is no return to that energy level for the closed system. Ever since the universal "bang", and ever since you were born (or maybe 10 in my son's case), you have been going downhill. As time passes, the quality of energy of the universe is decreasing. [This applies to a closed system.]
10. May 5, 2010
### James A. Putnam
I think there is an explanation due as to why Boltzmann's constant 'k' should carry over into statistical mechanics.
James
11. May 6, 2010
### Mapes
Briefly, entropy is proportional to the log of the number of microstates, a dimensionless number. Boltzmann's constant is the constant of proportionality that gives entropy its units (J/K) and connects it to our macroscale measurements of energy and temperature.
12. May 6, 2010
### James A. Putnam
"...Boltzmann's constant is the constant of proportionality that gives entropy its units (J/K) and connects it to our macroscale measurements of energy and temperature. "
It is that connection that I question. Not because I think it is wrong, but because the connection appears to be given from thermodynamics to statistical mechanics for free instead of being derived from statistical mechanics. To me that makes it appear that the equation is still a thermodynamic equation and the statistics part is added on to it, because the statistics are directly related to the reordering of energy. In other words, the success of the equation still is rooted in the thermodynamic derivation of thermodynamic entropy. This is not intended to be taken as an expert opinion.
James
13. May 6, 2010
### Count Iblis
A qualitative explanation I like goes as follows. Macroscopic objects consist of atoms, which in turn consist of subatomic particles, etc. When we describe the world we perceive in practice, we do so in terms of macroscopic variables. We don't specify the exact physical state of objects. Even if we wanted to, a lack of knowledge about the exact fundamental laws of physics would mean that we couldn't do it anyway.
What makes doing physics possible at all is that one can find a closed description of a physical system in terms of only the macroscopic variables plus a handful of extra variables that describe, in a statistical way, the effects of all the degrees of freedom that are left out. In thermodynamics those extra variables are quantities like internal energy, temperature, entropy, etc.
14. May 6, 2010
### Count Iblis
It is a matter of choosing your units. Thermodynamics gives you a phenomenological description that is not able to explain how all the variables are related to each other. That means that you'll end up with variables that are related at the fundamental level, but in the thermodynamic description you cannot see that relationship.
Historically, the thermodynamic relations were found empirically, and units were invented to measure quantities such as temperature. But when a more fundamental theory arises and we can directly compare what used to be incompatible physical quantities, we end up with new constants: in this case the Boltzmann constant, which does the unit conversion from temperature to energy in the old thermodynamic units.
Compare this with special relativity. Einstein found that the inertia of an object is explained by the energy of the object. But in classical physics the two quantities, energy and mass, are unrelated, and we had already defined our supposedly incompatible units for each. Relativity tells us that declaring the two quantities to be incompatible is wrong and that in fact mass is precisely the rest energy of an object. What then happens is that the equations automatically compensate for our previous ignorance by doing the unit conversion inside the equation, i.e. we get the equation E = mc^2 instead of E = m.
15. May 6, 2010
### Studiot
Does anyone know if entropy affects, or applies to, non-material objects such as information states?
16. May 6, 2010
### Andy Resnick
Of course! Look up "information entropy", 'negentropy', and Shannon's information theory.
17. May 6, 2010
### Andy Resnick
That's clearly false, amply demonstrated on the other relevant thread.
18. May 6, 2010
### Count Iblis
I should have written "phenomenologically" or "heuristically". Physics often proceeds in a heuristic way, and only with hindsight can you figure out how things really work. That's why, i.m.o., the historical approach to physics teaching is not so good.
19. May 6, 2010
### Studiot
So if I were to roll a pair of dice, would there be any implications for entropy in relation to the numbers that appeared on the faces? For instance would there be any difference in entropy if a seven came up as opposed to a twelve?
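As a loose analogy only (counting states with k set to 1, not real thermodynamic entropy), the dice question can be put in microstate language: the sum is the macrostate, and the ordered pair of faces is the microstate:

```python
from math import log
from itertools import product

# Count the microstates (ordered pairs of faces) behind each macrostate (the sum).
ways = {}
for a, b in product(range(1, 7), repeat=2):
    ways[a + b] = ways.get(a + b, 0) + 1

# A seven can happen six ways, a twelve only one, so in the S = k ln(Omega)
# sense (with k = 1 here) the "entropy" of a seven is higher.
S7 = log(ways[7])
S12 = log(ways[12])
print(ways[7], ways[12])  # 6 1
```

In this analogy a seven carries more "entropy" than a twelve simply because more microstates produce it, which is why sevens come up more often.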
20. May 6, 2010
### James A. Putnam
I am not quite clear about your explanation: Are you saying that Boltzmann's constant did result from the derivation of statistical mechanics or that it was adopted from thermodynamics and used because it was convenient to use it for unit conversion?
James
# Rotational Motion and Equilibrium
1. Apr 9, 2007
### hbailey
1. The problem statement, all variables and given/known data
A 2.5 kg pulley of radius 0.15m is pivoted about an axis through its center. What constant torque is required for the pulley to reach an angular speed 25rad/s after rotating 3.0 revolutions, starting from rest?
2. Relevant equations
torque = (mr^2)(angular acceleration)
3. The attempt at a solution
First I solved for time using t = omega/angular speed = 6π rad / 25 rad/s = 0.75 s.
Then, I solved for angular acceleration = 33 rad/s^2
Solving for torque, using the above equation, I got 1.9 m-N
The textbook I have says this is the wrong answer. What have I done wrong?
The book says the answer is 0.47 m-N
2. Apr 9, 2007
### e(ho0n3
That seems wrong. Why is omega $6\pi$ radians? Isn't omega the angular speed? What you're calculating is the amount of time it would take for the pulley to rotate $6\pi$ radians at 25 rad/s.
3. Apr 9, 2007
### Dick
In using t=omega/angular speed you are assuming that the angular speed is constant. As it starts from rest, this can't be right.
4. Apr 10, 2007
### chaoseverlasting
Torque/I = angular acceleration. w = 0 + at, where a = angular acceleration. 6pi = 0.5at^2, using the equations of rotational motion. Solve to get the torque.
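Numerically, the book's 0.47 m-N falls out of ω² = 2αθ together with I = ½mr², i.e. treating the pulley as a uniform disc; note this differs from the I = mr² in the "relevant equations" above, which gives roughly twice the answer:

```python
from math import pi

m, r = 2.5, 0.15        # pulley mass (kg) and radius (m)
omega = 25.0            # final angular speed, rad/s
theta = 3.0 * 2.0 * pi  # 3.0 revolutions, in radians

# Constant angular acceleration from rest: omega^2 = 2 * alpha * theta.
alpha = omega**2 / (2.0 * theta)

# Uniform disc: I = (1/2) m r^2, then tau = I * alpha.
I = 0.5 * m * r**2
tau = I * alpha
print(round(tau, 2))  # 0.47 (m-N), matching the book
```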
5. Apr 10, 2007
### hbailey
Oh, instead of omega above, I meant theta.
|
|
# SMD Pin Cutters
#### dknguyen
##### Well-Known Member
Is anyone able to confirm whether the expensive tip cutters such as the Erem 670EP or 670EPF are small enough to cut individual 0.5mm-pitch SMD pins on an IC?
I mean...the catalog has a picture of them doing it. I have a pair of Plato 170SMD cutters, which are no longer made, and as small as they are, they still aren't able to do that, so I find it a bit difficult to believe without a real human being confirming it. They are too expensive at $200 a pair for me to gamble on without knowing that.
#### Attachments
• 627.2 KB Views: 29
#### Pommie
##### Well-Known Member
Most Helpful Member
I've not had any success with cutters on SMD pins. Last time I had to remove one I used a Dremel with a cutting wheel. There is also the technique where you use a fine wire threaded behind the pins and use a combination of tugging the wire and hot air to lift each side.
Mike.
#### dknguyen
##### Well-Known Member
Most Helpful Member
I've not had any success with cutters on SMD pins. Last time I had to remove one I used a Dremel with a cutting wheel. There is also the technique where you use a fine wire threaded behind the pins and use a combination of tugging the wire and hot air to lift each side.
Mike.
Okay. Maybe I'll search for alternative methods.
#### dknguyen
##### Well-Known Member
Most Helpful Member
It's been 4 years now since I retired, but I seem to remember being able to do it with a 622NB. Looks to be the same as the 670 except it is not ESD-safe and half the price.
http://www.qsource.com/[email protected](15197)@(15200)@(20822)@*Sort=5*ava=0]
The jaws of the 670 and 622 look different. The 622 looks more like the traditional angled, tapered jaws, with a cutting edge along the entire length of the jaw. The 670 has relief only at the tip, and the cutting jaws are only at the tip. Do you remember if it was QFPs with 64+ pins you were working on?
#### Ylli
##### Active Member
Yes, the 622 has a cutting edge the entire length of the jaw. Yes to the QFPs. I do remember it was tough getting the first pin cut. After that there was a bit more room to get the cutter in.
#### tomizett
##### Active Member
I've come across the method of using a Stanley knife to slice off the leads - if all you want to do is remove the IC. Having tried it myself a couple of times, I've found the theory is ok, but you have to be very careful to avoid cutting into the board below, and the "wedging" action of the blade forcing the lead away from the IC body can lift the pads off the board. The jury is still out on that technique as far as I am concerned... A Dremel sounds like a good idea though, I'd definitely give that a go. Are you just looking for ways to remove an entire IC, or do you need to be able to isolate individual pins?
#### dknguyen
##### Well-Known Member
Most Helpful Member
I've come across the method of using a Stanley knife to slice off the leads - if all you want to do is remove the IC. Having tried it myself a couple of times, I've found the theory is ok, but you have to be very careful to avoid cutting into the board below, and the "wedging" action of the blade forcing the lead away from the IC body can lift the pads off the board. The jury is still out on that technique as far as I am concerned... A Dremel sounds like a good idea though, I'd definitely give that a go. Are you just looking for ways to remove an entire IC, or do you need to be able to isolate individual pins?
Removal. Currently I am using a 0.3mm conical tip that was dropped on a linoleum floor and therefore has a 1mm bend at the very tip. I wedge it behind the pin at the end of the row, put some slight pressure on it, and twist back and forth, and the bend lifts the pin when it's hot enough. It's worked so far, but it relies on a damaged tip and at times is a bit more forceful than I would like. It's lifted traces before, but so far only ever isolated pads that go nowhere. I do recall having a square chisel X-Acto blade somewhere. I should try that. I tend not to do so well with slicing motions in such a small space. Pushing straight down might work though.
Last edited:
#### large_ghostman
##### Well-Known Member
Most Helpful Member
If it's whole-chip removal and you don't care about the chip, then a Stanley knife works well. Like Mike, if it's handy on the bench then the Dremel is used; don't slip with it, as I can confirm a Dremel goes through copper tracks like a hot knife through butter!! That's a lot of cash for cutters!! I would want them to pick up the soldering iron and desolder it for me at that price!!
#### dknguyen
##### Well-Known Member
Most Helpful Member
If it's whole-chip removal and you don't care about the chip, then a Stanley knife works well. Like Mike, if it's handy on the bench then the Dremel is used; don't slip with it, as I can confirm a Dremel goes through copper tracks like a hot knife through butter!! That's a lot of cash for cutters!! I would want them to pick up the soldering iron and desolder it for me at that price!!
That's exactly the reason I'm uncomfortable with taking a power tool to the PCB. It cuts too much like hot butter. But yeah, I understand what you mean about the price. I was sitting here trying to decide whether two $80 soldering cartridges designed to heat up two different sizes of QFPs would be better. It certainly would be faster, but it sort of locks in the size of QFPs you can work with. I've never yet needed to isolate an individual pin where I couldn't just cut the trace with a drill bit.
I'll try the knife method though. I personally have always found that cutting traces with an X-Acto took more force than I wanted and gave less accuracy than I would like (I use a hand-held twist drill or engraving bit to cut traces now), so it never occurred to me to press straight down on the pin with a chisel X-Acto blade.
Last edited:
#### JimB
##### Super Moderator
Have you ever considered a hot air tool?
JimB
#### dknguyen
##### Well-Known Member
Have you ever considered a hot air tool?
JimB
I didn't really like the times I used one and don't think I can justify the cost for one (at least the kind I would like to have if I did have one).
#### Nigel Goodwin
##### Super Moderator
I didn't really like the times I used one and don't think I can justify the cost for one (at least the kind I would like to have if I did have one).
It's a LOT cheaper than the cutters you're considering
#### jbeng
##### Member
I've removed all kinds of SMDs with no problems at all using a Weller 6966C heat gun with a 6958 reducing baffle.
I've also used the cutters to clip the chip leads for removal and damaged the PCB in the process.
#### Pommie
##### Well-Known Member
I tried to find a video of the wire method but couldn't. To use this method you thread a fine wire under one full side of pins. You then tack one end of the wire to a nearby component. You can then lift the loose end and unsolder the pins one at a time whilst lifting them with the wire. The unsoldering starts at the end away from the tacked end of the wire.
Hope that makes sense to others. It makes sense to me but it's very difficult to describe something like this without pictures.
Mike.
#### dknguyen
##### Well-Known Member
I tried to find a video of the wire method but couldn't. To use this method you thread a fine wire under one full side of pins. You then tack one end of the wire to a nearby component. You can then lift the loose end and unsolder the pins one at a time whilst lifting them with the wire. The unsoldering starts at the end away from the tacked end of the wire.
Hope that makes sense to others. It makes sense to me but it's very difficult to describe something like this without pictures.
Mike.
No, yeah I understood. It was self explanatory the instant you mentioned it. Though, what I had in mind was more to loop the wire back up onto itself rather than tack solder one end and then heat up the entire side of the IC with the iron at the same time and lift (but maybe that's just because I have tips that can do that).
#### Pommie
##### Well-Known Member
You don't heat an entire side, just the end pin, and then work along. So, assuming the tacked end is to the right, heat the leftmost pin whilst tugging gently on the wire. When the pin is detached (and bends up slightly), move on to the next one.
Mike.
#### Western
##### Member
I regularly use a utility knife (like a small stanley knife) to cut off pins to remove micros and memory chips etc.
I've tried the dremel method ... but that can make a real mess with pads if a pin catches and pulls sideways ... ripping the pad and track etc ... apart from slipping as mentioned above. Ask me how I know.
With the method I settled on ... I place the board on a padded surface and hold the knife with both hands ... then roll my fists forward, pressing down on the furthermost pin first ... blade vertical against chip body ... but angled forward at 45 degrees.
If it's a good clean IC ... a couple pins will cut at once ... then lift hands ... move back a mm or two then repeat.
By rolling your fists, you can control the depth of each shearing movement ... and I cut just one or two pins each movement ... allowing the next pin to stop me rolling too far forward and touching the board.
Very satisfying, every little click as a pin lets go.
Definitely NO SLICING.
Most of mine are coated in a thick solid conformal coating ... so that is a lot more painful to cut through ... but I still use the same method ... it's just slower. You don't hear a click ... you have to constantly lift the knife to see the progress.
Some of the larger ones really thickly coated ... I use a v-shaped file first to expose the tops of the pins so I don't have to press so hard.
#### tomizett
##### Active Member
I use hot air when I can, but I do find that (using the small nozzles I have available) it's difficult to get an even heat all round on bigger ICs (144 pin etc) - it's too easy to leave a few pins cold and pull up the pads. Gripping the IC is also tricky. One of those vacuum pickup tools would probably be best, although I've never used one.
Hot air stations can be had for very little money. Obviously, the cheap ones aren't as good as the expensive ones, but they're a world away from not having one at all - I wouldn't go back to being without one now.
#### rjenkinsgb
##### Well-Known Member
I use some cheap cutters by Toolcraft [816745] for fine work; they have completely pointed flush-cut jaws and I have used them to remove surface mount devices in the past, starting from the end of the pin row.
They don't last anywhere near as long as Lindstrom etc., but they are a tenth the price and a lot better than a tenth the lifespan.
They are also cheap enough to file up if you need a different tip shape.
Example:
https://www.rapidonline.com/Toolcra...gonal-Cutters-No-Facet-125mm-50-6135?IncVat=1
arXiv:2006.08137
\section{Introduction}
We apply a microscopic three-cluster model to study the hypernucleus
$_{\Lambda}^{9}$Be. This nucleus is considered as a three-cluster system
$\alpha+\alpha+\Lambda$. Our aim is to examine both discrete and continuous
spectrum states of $_{\Lambda}^{9}$Be. This research is performed within a
microscopic three-cluster model referred to as AMGOB (the Algebraic Model of scattering with
the Gaussian and Oscillator Bases). This model was formulated in Ref. \cite{2009NuPhA.824...37V}.
In Refs. \cite{2009NuPhA.824...37V, 2009PAN....72.1450N, 2014UkrJPh..59.1065N, 2017NuPhA.958...78L,
2017UkrJPh..62..461V}, the AMGOB has been successfully applied to study the
structure of bound and resonance states in the light nuclei $^{7}$Be, $^{7}$Li, $^{8}$Li, $^{8}$B, $^{10}$Be and $^{10}$B. The model has also been
applied in Ref. \cite{2012PAN.75.818V} to study the astrophysical $S$ factors
of capture reactions of astrophysical importance.
For this reason, the model is particularly appealing for investigating
different two-body decay channels of the compound hypernucleus.
The energy of the $1/2^+$ ground state of $_{\Lambda}^{9}$Be is -3.12 MeV with respect to its lowest binary decay threshold $_{\Lambda}^{9}$Be$\rightarrow^5_\Lambda$He$+\alpha$ and -6.63 MeV relative to its three-cluster $2\alpha+\Lambda$ threshold. Unlike $^9$Be, which is a Borromean nucleus, $_{\Lambda}^{9}$Be has a bound two-body subsystem, $_{\Lambda}^{5}$He. That is why it is important to take into account the possibility for
the $_{\Lambda}^{9}$Be hypernucleus to decay via the $_{\Lambda}^{9}$Be$\rightarrow^5_\Lambda$He$+\alpha$ channel.
The ground state of the $^8$Be subsystem is known to be a very narrow resonance just above the $2\alpha$ threshold. Hence spatial correlations between the $\alpha$-particles should also be treated properly.
The AMGOB model allows us to take into account two coupled binary cluster configurations, $^5_\Lambda$He$+\alpha$ and $^8$Be$+\Lambda$, allowing $^5_\Lambda$He and $^8$Be to be polarized.
The term ``cluster polarization'' denotes the change in the energy of a two-cluster subsystem (and, hence, in its shape and/or size) due to the interaction with the third cluster.
The light hypernuclei have been investigated within different models in Refs.
\cite{2009PrPNP..63..339H,
2003PrPNP..51..223H,
2018FrPhy..13.2106H, 2018PhRvC..97c4324K,
2018PhRvC..97b4330K, 2018PhRvC..97f4315W,
2014EPJWC..6609013M, 2009PhRvC..80e4321H,
1999JPhG...25..961P, 2000PhRvL..85..270H, 1988PhRvC..38..854Y, 2014PTEP.2014k3D01F,
2002PhRvL..88h2501A, 2006PrPNP..57..564H,
2015PhRvC..92d4326I, 2018PhRvC..97c4302L,
2011PhRvC..83d4323I, 2015RPPh...78i6301F,
2016NuPhA.954..260V, 2005NuPhA.753..233T,
2004PhRvC..70b4002F, 2004PhRvC..70d7002F, 1997PThPh..97..881H}.
In Ref. \cite{2018PhRvC..97b4330K} the $L^\pi=0^{+}$ ground state and $L^\pi=2^{+}$ excited states
of $_{\Lambda}^{9}$Be have been investigated with a microscopic
cluster model. The spin of the $\Lambda$-hyperon was disregarded in \cite{2018PhRvC..97b4330K} and, hence,
all the states of the $_{\Lambda}^{9}$Be hypernucleus have been classified by the values of the total orbital momentum $L$. For the description of the core nucleus $^{8}$Be, the generator coordinate method of a microscopic $2\alpha$ cluster model has been applied. The $\Lambda$-nucleus potentials have been constructed
by folding $\Lambda N$ interactions with the nuclear density calculated by the
microscopic cluster model. Core polarization has been taken into account by
artificially enhancing the central part of the $NN$ potential. In this scheme,
strengthening of the effective central nuclear interactions acts like an
intensification of the inter-cluster potentials. This procedure simulates the
additional attraction brought to the nuclear system by the $\Lambda N$ interaction. The
optimum value of the enhancement factor was chosen to minimize the energy of
the total system. The authors claimed that particularly remarkable core
polarization effects are found in $_{\Lambda}^{9}$Be, because $^{8}$Be is a
very fragile, quasi-bound $2\alpha$ system. The core polarization
effects have been seen in the changes of the nuclear size and energy caused by
the $\Lambda$-particle in $_{\Lambda}^{9}$Be. A significant shrinkage of
the $2\alpha$ structure in $_{\Lambda}^{9}$Be has also been reported.
In Ref. \cite{2019FBS....60...30L}, energy spectra of bound and resonance states of
$_{\Lambda}^{9}$Be have been calculated within the framework of an $\alpha
+\alpha+\Lambda$ three-body model. The $\alpha-\alpha$ interaction was chosen
so as to reproduce the observed $\alpha-\alpha$ scattering phase shift and the
ground state of $^{8}$Be within the $\alpha-\alpha$ orthogonality condition
model. The $\Lambda\alpha$ interaction was obtained by folding the $\Lambda N$
interaction into the $\alpha$ cluster wave function. The even- and odd-state components of the
$\Lambda N$ interaction have been adjusted so as to reproduce the observed
binding energies of the ground states of $_{\Lambda}^{5}$He and $_{\Lambda}^{9}$Be. For the resonant states of $_{\Lambda}^{9}$Be, the complex scaling method
has been employed. The level structure has been categorized into $^{8}$Be-analogue states, genuine hypernuclear states, and $^{9}$Be-analogue states,
which have already been discussed in \cite{1985PThPS..81...42M,
1983PThPh..69..918B, 1985PThPS..81..147I}, and some new states
located more than 10 MeV above the $\alpha+\alpha+\Lambda$ threshold.
An extensive discussion of the structure of the genuine hypernuclear states of
$_{\Lambda}^{9}$Be, as well as of the $^{8}$Be$^{\ast}$-analogue states, within a
microscopic $\alpha+\alpha(\alpha^{\ast})+\Lambda$ cluster model is also given in
the review paper \cite{2009PrPNP..63..339H}.
In Ref. \cite{2000PAN....63..336F}, the $_{\Lambda}^{9}$Be hypernucleus has been treated
as the $S=1/2$, $T=0$ bound state of the three-cluster system $\alpha
\alpha\Lambda$. The cluster-reduction method was used to solve the $s$-wave
differential Faddeev equations. Phenomenological potentials have been used to
describe the $\Lambda\alpha$ and $\alpha\alpha$ interactions. The authors have
considered boundary-value problems corresponding to the bound states of the
$\alpha\alpha\Lambda$ system and the problem of low-energy alpha-particle
scattering on the $_{\Lambda}^{5}$He hypernucleus. The $s$-wave phase shift for
$\alpha-{}_{\Lambda}^{5}$He scattering has been shown to behave anomalously at relative-motion energies
below 1 MeV, remaining small and positive. The scattering length has been found to be large in magnitude and negative, which the authors attributed to the presence of a virtual level in the $\alpha
\alpha\Lambda$ system near the scattering threshold.
In \cite{2014PhRvL.113s2502W} the first ab initio calculations for p-shell
single-$\Lambda$ hypernuclei using no-core shell model approaches with
explicit hyperons have been presented. In addition to chiral
two- and three-nucleon interactions, they used leading-order (LO) chiral
hyperon-nucleon (YN) interactions and a meson-exchange hyperon-nucleon
interaction. They have shown that the chiral hyperon-nucleon interactions
provide ground-state and excitation energies that generally agree with
experiment within the cutoff dependence. At the same time, they demonstrated
that hypernuclear spectroscopy provides tight constraints on the
hyperon-nucleon interactions.
A peculiarity of $_{\Lambda}^{9}$Be is that the spin-doublet resulting from the
$2^{+}$ state in $^{8}$Be is practically degenerate, with the higher $J$ state
being at slightly lower excitation energy experimentally, contrary to
other light hypernuclei. The LO chiral YN interactions reproduce the
excitation energy of the doublet and the near degeneracy within threshold
extrapolation and convergence uncertainties. However, the order of the levels is
wrong. In contrast, the J\"{u}lich '04 interaction \cite{2005PhRvC..72d4005H} gives
a significant splitting of the spin doublet, in contradiction to experiment.
The energy splitting of the $5/2_{1}^{+}-3/2_{1}^{+}$ doublet states in
$_{\Lambda}^{9}$Be, which was considered to be dominantly composed of the $^{8}$Be$(2_{1}^{+})\otimes\Lambda(s_{1/2})$ configuration, has been studied in \cite{2000PhRvL..85..270H} within a microscopic three-body model $2\alpha+\Lambda$. The Pauli principle between two $\alpha$ clusters has been taken
into account by the orthogonality condition model. The main purpose of Ref.
\cite{2000PhRvL..85..270H} was to demonstrate how the splitting of the spin-doublet states in
$_{\Lambda}^{9}$Be is related to the underlying LS and antisymmetric LS forces
(ALS), which are different between one-boson-exchange models and quark models.
The quark model predicts that the ALS component of the $\Lambda N$ interaction is so
strong as to substantially cancel the LS one, while the one-boson-exchange
models propose a much smaller ALS force and various strengths of the LS force. The $\Lambda
\alpha$ interactions are derived by folding the $\Lambda N$ interaction into
the density of the $\alpha$ cluster. The authors introduced a phenomenological
$\Lambda NN$ three-body force, folding of which leads to both $\Lambda
\alpha\alpha$ and $\Lambda\alpha$ potentials. All the available Nijmegen
one-boson-exchange model $\Lambda N$ interactions lead to a wide range of
splittings of 0.08--0.20 MeV in $_{\Lambda}^{9}$Be. At the same time, the
quark-model $\Lambda N$ interactions, which generally have a large ALS force,
give about half of the smallest one-boson-exchange model prediction for the
splitting. These results are compatible with the experimental data reported in
\cite{1988PhRvC..38..854Y}.
Calculations of the $2\alpha+\Lambda$ system based on the Faddeev methodology, which used two-cluster resonating-group method (RGM) kernels, have been performed in
Ref. \cite{2004PhRvC..70b4002F}. The method used in
\cite{2004PhRvC..70b4002F} is equivalent to the pairwise orthogonality
condition model of three-cluster systems interacting via two-cluster RGM
kernels. The three-range Minnesota force, which describes the $\alpha\alpha$
phase shifts, has been chosen as an effective two-nucleon interaction. A
simple two-range Gaussian potential for each spin-singlet and spin-triplet
state, generated from the phase-shift behavior of the quark-model
hyperon-nucleon interaction, has been used as a $\Lambda N$ force for the
$\Lambda\alpha$ interaction. To solve the Faddeev equation, the authors
discretized the continuous momentum variable for the Jacobi coordinate
vectors. The authors stated that the $L^\pi=0^{+}$ ground state and the $L^\pi=2^{+}$ excited state of $_{\Lambda}^{9}$Be are well described by the contracted $2\alpha$ cluster structure with a weakly coupled $\Lambda$-particle in the dominant $s$-wave component. However, the energy gain for $_{\Lambda}^{9}$Be due to partial waves higher than the $s$-wave is claimed to be about 1.2 MeV, because the oscillatory behavior of the $\alpha\alpha$ relative wave functions requires more partial waves, with a correspondingly larger energy gain.
In the present paper, the structure of bound and resonance states of the $^9_\Lambda$Be hypernucleus is investigated for the states $1/2\leq J\leq7/2$ of positive and negative parity, with special emphasis on the impact of cluster polarization on the spectrum of $^9_\Lambda$Be and on the elements of the scattering matrix. The Pauli exclusion principle between the $\alpha$-clusters is taken into account completely.
We employ an effective single-channel $\Lambda N$ interaction simulating the basic features of the Nijmegen meson-theoretical model NSC97f \cite{2000PhRvL..85..270H}, in which a cut-off parameter $k_F$ was adopted to reproduce the energy of the ground state of $^9_\Lambda$Be with respect to the $2\alpha+\Lambda$ threshold.
As the $NN$ interaction, the modified Hasegawa-Nagata potential is chosen, with the Majorana parameter adjusted to give the experimentally observed energy of the $^9$Be nucleus.
Parameters of the resonance states are determined from an analysis of the energy dependence of the two-channel $S$-matrix for $\alpha-{}_{\Lambda}^{5}$He and $\Lambda-{}^{8}$Be scattering, provided that the $_{\Lambda}^{5}$He and $^{8}$Be subsystems are in their ground states in the entrance and exit channels.
The paper is organized as follows. Formulation of a microscopic three-cluster model used for the investigation of the $^9_\Lambda$Be hypernucleus is given in Section \ref{sec:model}. In Section \ref{sec:results} we analyze how the spectrum of bound and resonance states of $^9_\Lambda$Be depends on the polarization of two-cluster subsystems $^5_\Lambda$He and $^8$Be. The nature of the obtained resonance states in $^9_\Lambda$Be is also discussed in Section \ref{sec:results}. Conclusions are made in Section \ref{sec:concl}.
\section{Formulation of the model}
\label{sec:model}
Let us consider a microscopic Hamiltonian for a system consisting of 8
nucleons (two alpha-particles) and a $\Lambda$ hyperon:%
\begin{eqnarray}
\widehat{H} &=&-\frac{\hbar^{2}}{2m}\sum_{i=1}^{8}\frac{\partial^{2}}%
{\partial\mathbf{r}_{i}^{2}}-\frac{\hbar^{2}}{2m_{\Lambda}}\frac{\partial^{2}%
}{\partial\mathbf{r}_{\Lambda}^{2}}\label{eq:001}\\
&+& \sum_{i<j}^{8}V_{NN}\left( \mathbf{r}%
_{i}-\mathbf{r}_{j}\right) +\sum_{i=1}^{8}V_{N\Lambda}\left( \mathbf{r}%
_{i}-\mathbf{r}_{\Lambda}\right) \nonumber %
\end{eqnarray}
where $m=\left( 938.272+939.565\right) /2=938.919$ MeV/c$^{2}$ is the nucleon mass and $m_{\Lambda}=1115.683(6)$ MeV/c$^{2}$ is the mass of the $\Lambda$
hyperon. It is expedient to use the nucleon mass $m$ as the unit of mass;
the dimensionless hyperon mass is then $\overline{m}_{\Lambda}=m_{\Lambda}/m=1.18826$.
It is assumed that the coordinates of the nucleons and of the hyperon
are defined in the center-of-mass frame, and thus the center-of-mass
motion is eliminated from the Hamiltonian.
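As a quick numerical cross-check of the mass constants quoted above (a Python sketch, not part of the model itself):

```python
# Cross-check of the mass constants quoted in the text.
m_p = 938.272        # proton mass, MeV/c^2
m_n = 939.565        # neutron mass, MeV/c^2
m = (m_p + m_n) / 2  # average nucleon mass, used as the unit of mass
m_lambda = 1115.683  # Lambda-hyperon mass, MeV/c^2

m_bar_lambda = m_lambda / m  # dimensionless hyperon mass

assert abs(m - 938.9185) < 1e-9          # the text rounds this to 938.919
assert abs(m_bar_lambda - 1.18826) < 1e-5
```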
The total Hamiltonian can be separated into nuclear and hypernuclear parts:
\begin{eqnarray}
\widehat{H} & =&\widehat{H}_{NN}+\widehat{H}_{N\Lambda},\nonumber\\
\widehat{H}_{NN} & =&-\frac{\hbar^{2}}{2m}\sum_{i=1}^{8}\frac{\partial^{2}%
}{\partial\mathbf{r}_{i}^{2}}+\sum_{i<j}^{8}V_{NN}\left( \mathbf{r}%
_{i}-\mathbf{r}_{j}\right) ,\label{eq:002A}\\
\widehat{H}_{N\Lambda} & =&-\frac{\hbar^{2}}{2m_{\Lambda}}\frac{\partial^{2}%
}{\partial\mathbf{r}_{\Lambda}^{2}}+\sum_{i=1}^{8}V_{N\Lambda}\left(
\mathbf{r}_{i}-\mathbf{r}_{\Lambda}\right). \label{eq:002B}%
\end{eqnarray}
Eigenfunctions of the Hamiltonian (\ref{eq:001}), characterized by the total angular momentum $J$ and the energy $E$ of the relative motion of the clusters, will be sought in the form:
\begin{eqnarray}
\Psi_{EJ} & =&\sum_{L}\sum_{\alpha=1}^{2}\sum_{\lambda_{\alpha},l_{\alpha}%
}\widehat{\mathcal{A}}\left\{ \Phi_{1}\left( ^{4}He\right) \Phi_{2}\left(
^{4}He\right) \Phi_{3}(\Lambda) \right. \label{eq:010}\\
& \times & \left. f_{\lambda_{\alpha},l_{\alpha};L}^{\left(
E,J\right) }\left( x_{\alpha},y_{\alpha}\right) \left\{ Y_{\lambda_{\alpha}}\left( \widehat{\mathbf{x}%
}_{\alpha}\right) Y_{l_{\alpha}}\left( \widehat{\mathbf{y}}_{\alpha}\right)
\right\} _{L}\right\} _{JM_{J}}.\nonumber
\end{eqnarray}
Here we involve two Faddeev amplitudes $f_{\lambda_{\alpha},l_{\alpha};L}^{\left(
E,J\right) }\left( x_{\alpha},y_{\alpha}\right) $, which represent the
dynamics in the binary channels $^{5}_\Lambda$He+$\alpha$ ($\alpha$=1) and $^{8}$Be+$\Lambda$
($\alpha$=2).
The Jacobi vector $\mathbf{x}_{1}$ ($=x_{1}\cdot
\widehat{\mathbf{x}}_{1}$) determines the distance between an alpha particle and the $\Lambda$-hyperon:
\begin{equation}
\mathbf{x}_1=\sqrt{\frac{4\,\overline{m}_{\Lambda}}{\overline{m}_{\Lambda}+4}
}\left[\mathbf{r}_{\Lambda}-\frac{1}{4}\sum_{i=1}^4\mathbf{r}_{i}\right], \label{eq:0111}
\end{equation}
while the Jacobi vector $\mathbf{y}_{1}$ determines the distance between an alpha particle and the $^5_\Lambda$He binary subsystem:
\begin{equation}
\mathbf{y}_1=\sqrt{\frac{4(\overline{m}_{\Lambda}+4)}{\overline{m}_{\Lambda}+8}
}\left[\frac{1}{4}\sum_{i=5}^8\mathbf{r}_{i}-\frac{1}{\overline{m}_{\Lambda}+4}\left(\mathbf{r}_{\Lambda}+\sum_{i=1}^4\mathbf{r}_{i}\right)\right] \label{eq:0121}
\end{equation}
The second tree of Jacobi coordinates involves the vector $\mathbf{x}_{2}$, which describes the distance between the two alpha particles,
\begin{equation}
\mathbf{x}_2=\sqrt{2}\left[\frac{1}{4}%
\sum_{i=1}^4\mathbf{r}_{i}-\frac{1}{4}\sum_{j=5}^8\mathbf{r}%
_{j}\right] \label{eq:011}%
\end{equation}
and the vector $\mathbf{y}_{2}$, which determines the position of the $\Lambda$-hyperon relative to $^{8}$Be:
\begin{equation}
\mathbf{y}_2=\sqrt{\frac{8\,\overline{m}_{\Lambda}}{\overline{m}_{\Lambda}+8}
}\left[\mathbf{r}_{\Lambda}-\frac{1}{8}\sum_{i=1}^8\mathbf{r}_{i}\right]
\label{eq:012}%
\end{equation}
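As a sanity check on these definitions, each scaling factor is the square root of the reduced mass (in units of the nucleon mass $m$) of the corresponding pair of clusters; this is what lets the kinetic energy operator below take the same one-body form with a single mass $m$ for every Jacobi coordinate. A small Python sketch (not part of the formalism; assumes $\overline{m}_{\Lambda}=1.18826$ from above):

```python
import math

M_BAR_LAMBDA = 1.18826  # dimensionless Lambda-hyperon mass, m_Lambda / m

def reduced_mass(a, b):
    """Reduced mass of two clusters with masses a and b (units of the nucleon mass m)."""
    return a * b / (a + b)

# Scaling factors of the Jacobi vectors x1, y1, x2, y2 as written in the text.
c_x1 = math.sqrt(4 * M_BAR_LAMBDA / (M_BAR_LAMBDA + 4))        # alpha--Lambda pair
c_y1 = math.sqrt(4 * (M_BAR_LAMBDA + 4) / (M_BAR_LAMBDA + 8))  # alpha--(alpha+Lambda)
c_x2 = math.sqrt(2.0)                                          # alpha--alpha pair
c_y2 = math.sqrt(8 * M_BAR_LAMBDA / (M_BAR_LAMBDA + 8))        # Lambda--(2 alpha)

# Each factor equals sqrt(reduced mass) of the corresponding pair of clusters.
assert abs(c_x1 - math.sqrt(reduced_mass(4, M_BAR_LAMBDA))) < 1e-12
assert abs(c_y1 - math.sqrt(reduced_mass(4, 4 + M_BAR_LAMBDA))) < 1e-12
assert abs(c_x2 - math.sqrt(reduced_mass(4, 4))) < 1e-12
assert abs(c_y2 - math.sqrt(reduced_mass(M_BAR_LAMBDA, 8))) < 1e-12
```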
It is worthwhile underlining that the antisymmetrization operator $\widehat
{\mathcal{A}}$ in (\ref{eq:010}) permutes coordinates of nucleons only. It
does not involve a hyperon. Due to this fact, in the second Jacobi tree adopted for describing the channel $^{8}$Be+$\Lambda$
we have got a folding type of function $\Psi_{EJ}$ in (\ref{eq:010}) with
the wave function of $^{8}$Be being antisymmetric. In the first Jacobi tree
associated with the channels $_{\Lambda}^{5}$He+$\alpha$ the
antisymmetrization operator $\widehat{\mathcal{A}}$ invokes the exchange of
nucleons between $_{\Lambda}^{5}$He and an alpha particle and thus makes
antisymmetric a wave function of the compound system $^9_\Lambda$Be.
Equation (\ref{eq:010}) represents the wave function in the $LS$ coupling scheme. The partial orbital momentum $\lambda_{\alpha}$ indicates the internal orbital momentum of $^{5}_\Lambda$He ($\alpha$=1) or $^{8}$Be
($\alpha$=2), while the orbital momentum $l_{\alpha}$ describes the rotation of an alpha particle around $^{5}_\Lambda$He ($\alpha$=1) or the rotation of the $\Lambda$-hyperon around $^{8}$Be ($\alpha$=2). The total orbital momentum $L$ is the vector sum
of the partial orbital momenta: $\overrightarrow{L}=\overrightarrow{\lambda }_{\alpha}+\overrightarrow{l}_{\alpha}$. Since the spins of the alpha-clusters are equal to zero, the total spin of the hypernucleus $^9_\Lambda$Be is determined by the spin of the $\Lambda$-hyperon and equals 1/2. Thus, for a given value of the
total angular momentum $J$, the total orbital momentum $L$ can take two values,
$L=J-1/2$ and $L=J+1/2$. This is true for all values of $J$ and parity $\pi$ except $J^{\pi}=1/2^{-}$, for which the total orbital momentum takes the single value $L=1$.
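This counting of allowed $L$ values can be reproduced by a short enumeration over the partial waves $(\lambda_{\alpha},l_{\alpha})$, using the fact that the parity of a basis state with spinless clusters is $(-1)^{\lambda_{\alpha}+l_{\alpha}}$. A Python sketch (the cutoff `lam_max` is an arbitrary illustration value):

```python
def allowed_L(J2, parity, lam_max=4):
    """Total orbital momenta L compatible with J = J2/2 and the given parity.

    The total spin is 1/2, so L must be (J2 - 1)//2 or (J2 + 1)//2; the parity
    of a basis state is (-1)**(lam + l).  lam_max is just an enumeration cutoff.
    """
    candidates = {(J2 - 1) // 2, (J2 + 1) // 2}
    found = set()
    for lam in range(lam_max + 1):
        for l in range(lam_max + 1):
            if (-1) ** (lam + l) != parity:
                continue
            for L in range(abs(lam - l), lam + l + 1):
                if L in candidates:
                    found.add(L)
    return sorted(found)

# Reproduces the statement in the text: only J^pi = 1/2^- is restricted to L = 1.
assert allowed_L(1, -1) == [1]         # J^pi = 1/2^-
assert allowed_L(1, +1) == [0, 1]      # J^pi = 1/2^+
assert allowed_L(3, -1) == [1, 2]      # J^pi = 3/2^-
assert allowed_L(3, +1) == [1, 2]      # J^pi = 3/2^+
```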
Faddeev three-cluster amplitudes $f_{\lambda_{\alpha},l_{\alpha};L}^{\left(
E,J\right) }\left( x_{\alpha},y_{\alpha}\right) $ are the solutions of an infinite set of integro-differential equations resulting from the Schr\"{o}dinger equation for the wave function
(\ref{eq:010}) with the Hamiltonian (\ref{eq:001}):
\begin{eqnarray}
&& \left[ \widehat{T}_{x_{\alpha},\lambda_{\alpha}}+\widehat{T}_{y_{\alpha
},l_{\alpha}}-E\right] f_{\lambda_{\alpha},l_{\alpha};L}^{\left(
E,J\right) }\left( x_{\alpha},y_{\alpha}\right) \nonumber\\
& +& \sum_{\beta=1}^{2}%
\sum_{\lambda_{\beta},l_{\beta}}\int_{0}^{\infty}\int_{0}^{\infty}%
d\widetilde{x}_{\beta}\widetilde{x}_{\beta}^{2}d\widetilde{y}_{\beta
}\widetilde{y}_{\beta}^{2}
\mathcal{V}_{\lambda_{\alpha},l_{\alpha};\lambda_{\beta},l_{\beta}%
}^{\left( L\right) }\left( x_{\alpha},y_{\alpha};\widetilde{x}_{\beta
},\widetilde{y}_{\beta}\right) \cdot f_{\lambda_{\beta},l_{\beta};L}^{\left( E,J\right)
}\left( \widetilde{x}_{\beta},\widetilde{y}_{\beta}\right)
\label{eq:102} \\
& = & E\sum_{\beta=1}^{2}\sum_{\lambda_{\beta},l_{\beta}}\int_{0}^{\infty}%
\int_{0}^{\infty}d\widetilde{x}_{\beta}\widetilde{x}_{\beta}^{2}d\widetilde
{y}_{\beta}\widetilde{y}_{\beta}^{2}
\mathcal{N}_{\lambda_{\alpha},l_{\alpha};\lambda_{\beta},l_{\beta}%
}^{\left( L\right) }\left( x_{\alpha},y_{\alpha};\widetilde{x}_{\beta
},\widetilde{y}_{\beta}\right) \cdot f_{\lambda_{\beta},l_{\beta};L}^{\left( E,J\right)
}\left( \widetilde{x}_{\beta},\widetilde{y}_{\beta}\right)
,\nonumber
\end{eqnarray}
where%
\[
\widehat{T}_{z,l}=-\frac{\hbar^{2}}{2m}\left[ \frac{d^{2}}{dz^{2}}+\frac
{2}{z}\frac{d}{dz}-\frac{l\left( l+1\right) }{z^{2}}\right]
\]
is the kinetic energy operator associated with the Jacobi vector
$\mathbf{z}=\mathbf{x}_{\alpha}$ or $\mathbf{z}=\mathbf{y}_{\alpha}$, $\mathcal{N}_{\lambda_{\alpha},l_{\alpha};\lambda_{\beta},l_{\beta}}^{\left(L\right)}$ is the exchange part of the norm kernel, and $\mathcal{V}_{\lambda_{\alpha},l_{\alpha};\lambda_{\beta},l_{\beta}}^{\left( L\right)}$ contains a direct and an exchange
part of the potential energy and an exchange term of the kinetic energy of the three-cluster system.
We can reduce the three-cluster problem to a many-channel two-body problem by expanding the wave function $f_{\lambda_{\alpha},l_{\alpha};L}^{\left( E,J\right)
}\left( x_{\alpha},y_{\alpha}\right) $ over the basis of eigenfunctions of the two-cluster Hamiltonian, consisting of bound states $g_{\mathcal{E}^{\sigma}_\alpha\lambda_{\alpha}}\left(x_{\alpha}\right) $ ($\sigma$= 1, 2, \ldots) and continuous-spectrum states $g_{\mathcal{E}_\alpha\lambda_{\alpha}}\left(
x_{\alpha}\right) :$
\begin{eqnarray}
f_{\lambda_{\alpha},l_{\alpha};L}^{\left( E,J\right)
}\left( x_{\alpha
},y_{\alpha}\right) &=& \sum_{\sigma}g_{{\mathcal{E}^{\sigma}_\alpha\lambda_{\alpha}}}
\left( x_{\alpha}\right) \cdot\phi_{E-\mathcal{E}_\alpha^{\sigma},\,l_{\alpha}}\left( y_{\alpha}\right) \label{eq:115} \\
&+&\int d\mathcal{E}_\alpha g_{\mathcal{E}_\alpha\lambda_{\alpha}}\left( x_{\alpha}\right)
\phi_{E-\mathcal{E}_\alpha,\,l_{\alpha}}\left( y_{\alpha}\right) . \nonumber%
\end{eqnarray}
The functions $g_{\mathcal{E}^{\sigma}_\alpha\lambda_{\alpha}}\left(x_{\alpha}\right) $ and $g_{\mathcal{E}_\alpha\lambda_{\alpha}}\left(x_{\alpha}\right)$ satisfy the two-cluster Schr\"{o}dinger equation:
\begin{eqnarray}
& & \left[ \widehat{T}_{x_{\alpha},\lambda_{\alpha}}-\mathcal{E}_\alpha\right] g_{\mathcal{E}_\alpha\lambda
_{\alpha}}\left( x_{\alpha}\right) \label{eq:110}\\
& +&\int
_{0}^{\infty}d\widetilde{x}_{\alpha}\widetilde{x}_{\alpha}^{2}\cdot
\mathcal{V}^{\left( \lambda_{\alpha}\right) }\left( x_{\alpha}%
;\widetilde{x}_{\alpha}\right) \cdot g_{\mathcal{E}_\alpha\lambda_{\alpha}}\left( \widetilde{x}_{\alpha}\right) \nonumber \\
& =&\mathcal{E}_\alpha\int_{0}^{\infty}d\widetilde{x}_{\alpha}\widetilde{x}_{\alpha}^{2}%
\cdot\mathcal{N}^{\left( \lambda_{\alpha}\right) }\left( x_{\alpha
};\widetilde{x}_{\alpha}\right) \cdot g_{\mathcal{E}_\alpha\lambda_{\alpha}}\left( \widetilde{x}_{\alpha}\right) .\nonumber
\end{eqnarray}
The functions $\phi_{E-\mathcal{E}^{\sigma}_\alpha,\,l_{\alpha}}\left( y_{\alpha}\right)$ and $\phi_{E-\mathcal{E}_\alpha,\,l_{\alpha}}\left( y_{\alpha}\right)$ describe the scattering of the third cluster on a two-cluster bound state with energy $\mathcal{E}^{\sigma}_\alpha$ or on a continuous-spectrum state with energy $\mathcal{E}_\alpha$, respectively. The quantities $\mathcal{N}^{\left( \lambda_{\alpha}\right) }\left( x_{\alpha };\widetilde{x}_{\alpha}\right)$ and $\mathcal{V}^{\left( \lambda_{\alpha}\right) }\left( x_{\alpha};\widetilde{x}_{\alpha}\right)$ have the same meaning as the similar quantities in Eq. (\ref{eq:102}), but for two-cluster systems.
So, we first solve the two-cluster equation (\ref{eq:110}) and then use the eigenfunctions of the two-cluster Hamiltonian to find the wave functions $\phi_{E-\mathcal{E}^{\sigma}_\alpha,\,l_{\alpha}}\left( y_{\alpha}\right)$ and $\phi_{E-\mathcal{E}_\alpha,\,l_{\alpha}}\left( y_{\alpha}\right)$. Finally, we obtain the
three-cluster wave function $f_{\lambda_{\alpha},l_{\alpha};L}^{\left( E,J\right)
}\left( x_{\alpha},y_{\alpha}\right)$.
In practical calculations, the integral part of the expansion (\ref{eq:115}) is replaced by a sum over a finite number of discretized states of the two-cluster continuum. The more terms in this sum, the more completely cluster polarization is taken into account.
As in \cite{2017NuPhA.958...78L}, we use a finite number of square-integrable Gaussian functions to expand the two-cluster wave function $g_{\mathcal{E}\lambda_{\alpha}}\left( x_{\alpha}\right):$
\begin{equation}
g_{\mathcal{E}\lambda_{\alpha}}\left( x_{\alpha
}\right) =\sum_{\nu=1}^{N_{G}^{max}}D_{\nu}^{\left( \mathcal{E}\lambda_{\alpha
}\right) }G_{\lambda_{\alpha}}\left( x_{\alpha},b_{\nu}\right) ,
\label{eq:117}%
\end{equation}
where
\begin{eqnarray}
G_{\lambda_{\alpha}}\left( x_{\alpha},b_{\nu}\right) &= &
\sqrt{\frac{2}{b_{\nu}^{3}\Gamma\left( \lambda_{\alpha}+3/2\right) }}\rho
^{\lambda_{\alpha}}\exp\left\{ -\frac{1}{2}\rho^{2}\right\}, \label{eq:118} \\
&&\left(
\rho=\frac{x_{\alpha}}{b_{\nu}}\right) \nonumber %
\end{eqnarray}
is a Gaussian function. The parameters $b_\nu$ are chosen so as to minimize the ground-state energies of the two-body subsystems.
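A minimal numerical sketch of this basis function and of its radial normalization $\int_0^\infty G_{\lambda_{\alpha}}^2\,x^2\,dx=1$ (Python; the values of `b` and `lam` and the integration grid are arbitrary illustration choices):

```python
import math

def gauss_basis(x, b, lam):
    """Normalized Gaussian basis function G_lam(x, b) of Eq. (eq:118), rho = x/b."""
    rho = x / b
    norm = math.sqrt(2.0 / (b**3 * math.gamma(lam + 1.5)))
    return norm * rho**lam * math.exp(-0.5 * rho**2)

# Numerical check of the radial normalization  int_0^inf G^2 x^2 dx = 1
# (midpoint rule on a fine grid; the tail beyond x = 30 is negligible).
b, lam = 1.3, 2
h, n = 0.001, 30000
integral = sum(gauss_basis((i + 0.5) * h, b, lam)**2 * ((i + 0.5) * h)**2 * h
               for i in range(n))
assert abs(integral - 1.0) < 1e-5
```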
To find the wave functions $\phi_{E-\mathcal{E},l_{\alpha}}\left( y_{\alpha}\right) $ of the third cluster interacting with
the two-cluster subsystem labeled by the index $\alpha$ ($\alpha$=1,2), we expand them over the
oscillator basis
\begin{equation}
\phi_{E-\mathcal{E},l_{\alpha}}\left( y_{\alpha}\right) =\sum_{n_{\alpha}%
=0}^{N_{0}-1}C_{n_{\alpha}}^{\left(E-\mathcal{E},\,l_{\alpha}\right) }%
\psi_{n_{\alpha},l_{\alpha}}\left( y_{\alpha},b\right) , \label{eq:120}%
\end{equation}
where%
\begin{eqnarray}
\psi_{n_{\alpha},l_{\alpha}}\left( y_{\alpha},b\right) & =& \left(
-1\right) ^{n_{\alpha}}\mathcal{N}_{n_{\alpha}l_{\alpha}}%
{\tilde \rho}^{l_{\alpha}}e^{-\frac{1}{2}{\tilde \rho}^{2}}L_{n_{\alpha}}^{l_{\alpha}
+1/2}\left({\tilde \rho}^{2}\right) ,\quad\label{eq:121}\\
{\tilde \rho} = \frac{y_{\alpha}}{b}, && \quad\mathcal{N}_{n_{\alpha}l_{\alpha}}%
=\sqrt{\frac{2\Gamma\left( n_{\alpha}+1\right) }{b^{3}~\Gamma\left( n_{\alpha
}+l_{\alpha}+3/2\right) }}\nonumber
\end{eqnarray}
is an oscillator function and $b$ is the oscillator length.
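The oscillator functions of Eq. (\ref{eq:121}) can be sketched and their orthonormality $\int_0^\infty \psi_{n,l}\,\psi_{n',l}\,y^2\,dy=\delta_{nn'}$ verified numerically. In the Python sketch below (an illustration, not the code used in the calculations), the generalized Laguerre polynomial is built from the standard three-term recurrence:

```python
import math

def genlaguerre(n, alpha, x):
    """Generalized Laguerre polynomial L_n^alpha(x), standard three-term recurrence."""
    prev, cur = 0.0, 1.0
    for k in range(n):
        prev, cur = cur, ((2 * k + 1 + alpha - x) * cur - (k + alpha) * prev) / (k + 1)
    return cur

def osc_fn(n, l, y, b):
    """Radial oscillator function psi_{n,l}(y, b) of Eq. (eq:121)."""
    rho = y / b
    norm = math.sqrt(2.0 * math.gamma(n + 1) / (b**3 * math.gamma(n + l + 1.5)))
    return (-1)**n * norm * rho**l * math.exp(-0.5 * rho**2) * genlaguerre(n, l + 0.5, rho**2)

# Orthonormality check  int_0^inf psi_n psi_m y^2 dy = delta_nm  (midpoint rule).
b, l = 1.0, 1
h, npts = 0.001, 25000
def overlap(n, m):
    return sum(osc_fn(n, l, (i + 0.5) * h, b) * osc_fn(m, l, (i + 0.5) * h, b)
               * ((i + 0.5) * h)**2 * h for i in range(npts))

assert abs(overlap(0, 0) - 1.0) < 1e-5
assert abs(overlap(2, 2) - 1.0) < 1e-5
assert abs(overlap(0, 2)) < 1e-5
```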
In Eq.
(\ref{eq:120}), a finite number of oscillator functions appears. In fact, however,
this expansion involves an infinite number of functions, since
we know the asymptotic behavior of the wave function $\phi_{E-\mathcal{E},\,l_{\alpha}}\left( y_{\alpha}\right)$ in coordinate space and of the expansion coefficients
$C_{n_{\alpha}}^{\left(E-\mathcal{E},\,l_{\alpha}\right) }$ in oscillator space.
At large distances ($x_1\ll y_1$) between an $\alpha$-particle and the $^{5}_\Lambda$He subsystem in a state with energy $\mathcal{E}_1$, the wave function $\phi_{E-\mathcal{E}_1,\,l_{1}}\left( y_{1}\right)$ has the following form:
\begin{equation}
\phi_{E-\mathcal{E}_1,l_{1}}\left( y_{1}\right) \approx \delta_{c_{0},c}
\psi_{l_{1}}^{\left( -\right) }\left( k_{1}y_{1}
;\eta_{1}\right) -S_{c_{0},c}\psi_{l_{1}}^{\left( +\right)
}\left( k_{1}y_{1};\eta_{1}\right) , \label{eq:125}%
\end{equation}
where $S_{c_{0},c}$ is the scattering matrix, the index $c=\left\{ \mathcal{E}_{1
}\lambda_{1}l_{1}\right\} $ labels the exit channel and $c_{0}$ indicates the entrance
channel, $\psi_{l_{1}}^{\left(-\right)}$ ($\psi_{l_{1}}^{\left(+\right)}$) is the incoming
(outgoing) Coulomb wave, and $\eta_{1}$ is the Sommerfeld parameter.
The definition of the incoming and outgoing Coulomb wave functions can be
found, for instance, in Ref. \cite{1983RvMP...55..155B}.
The asymptotic behaviour of the wave function $\phi_{E-\mathcal{E}_2,\,l_{2}}\left( y_{2}\right)$, describing the scattering of a $\Lambda$-hyperon on the $^8$Be subsystem in a state with energy $\mathcal{E}_2$, is determined by a superposition of the Hankel functions $H_{l_{2}+1/2}^{\left(\pm\right) }$, since the $\Lambda$-hyperon has no electric charge:
\begin{equation}
\phi_{E-\mathcal{E}_2,l_{2}}\left( y_{2}\right) \approx \delta_{c_{0},c}
H_{l_{2}+1/2}^{\left( -\right) }\left( k_{2}y_{2}\right) -S_{c_{0},c}H_{l_{2}+1/2}^{\left( +\right)
}\left( k_{2}y_{2}\right) . \label{eq:126}%
\end{equation}
The parameters
$k_{1,2}$ and $\eta_{1}$ are defined in our case as%
\begin{eqnarray*}
k_{1,2} & =& \sqrt{\frac{2m\left( E-\mathcal{E}_{1,2}\right) }{\hbar^{2}}},\\
\eta_{1} & =& \frac{Z^2 e^{2}%
}{\sqrt{2\left( E-\mathcal{E}_{1}\right) }}\sqrt{\frac{m}{\hbar^{2}}%
\frac{4\left( 4+\overline{m}_{\Lambda}\right) }{\overline{m}_{\Lambda}+8}},
\end{eqnarray*}
where $Z=2$ is the charge of the $\alpha$-cluster and $E$ is the total energy of the three-cluster system ($E>\mathcal{E}_{1,2}$).
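As a small numerical illustration of the wave-number definition above (not part of the model code), an open-channel momentum can be evaluated as follows; the value $\hbar^{2}/m \approx 41.47$ MeV\,fm$^{2}$ for the nucleon mass is an assumption introduced here only for the sketch.

```python
import math

# Sketch: channel wave number k = sqrt(2 m (E - E_th) / hbar^2) for an open channel.
# HBAR2_OVER_M is the nucleon value hbar^2/m ~ 41.47 MeV*fm^2 (assumed for illustration).
HBAR2_OVER_M = 41.47  # MeV * fm^2

def channel_wave_number(E, E_threshold):
    """Return k in fm^-1 for total energy E (MeV) above the threshold E_threshold (MeV)."""
    if E <= E_threshold:
        raise ValueError("channel is closed: E must exceed the threshold energy")
    return math.sqrt(2.0 * (E - E_threshold) / HBAR2_OVER_M)

# Example: a channel opening 0.70 MeV above the three-cluster threshold,
# evaluated 1 MeV above its own threshold
k = channel_wave_number(1.70, 0.70)
```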
Having obtained all elements $S_{c,\widetilde{c}}$ of the scattering $S$
matrix, we make the following steps to extract important physical information.
First, we use one of the standard parametrizations of the $S$ matrix:%
\[
S_{c,\widetilde{c}}=\eta_{c,\widetilde{c}}\exp\left\{ 2i\delta_{c,\widetilde
{c}}\right\} .
\]
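This parametrization can be inverted numerically for any diagonal $S$-matrix element. A minimal sketch (the function name is illustrative):

```python
import cmath

def phase_and_inelasticity(S_cc):
    """Invert S = eta * exp(2i*delta): return (delta, eta) for a diagonal
    S-matrix element. delta is in radians, defined modulo pi."""
    eta = abs(S_cc)                  # inelasticity parameter, 0 <= eta <= 1
    delta = cmath.phase(S_cc) / 2.0  # phase shift
    return delta, eta

# Example: eta = 0.8, delta = 0.3 rad
S = 0.8 * cmath.exp(2j * 0.3)
delta, eta = phase_and_inelasticity(S)
```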
The diagonal values of the phase shifts $\delta_{c,c}$ and inelastic
parameters $\eta_{c,c}$ are analyzed to study elastic and inelastic processes
in a many-channel system. Second, the $S$ matrix $\left\Vert S_{c,\widetilde
{c}}\right\Vert $ is reduced to the diagonal form or to the representation of
the uncoupled channels. In this representation we obtain a set of eigenphase
shifts $\delta_{\alpha}$ which provides us with additional information on the
processes under consideration. The eigenphase shifts $\delta_{\alpha}$ also
allow us to determine the total and partial widths of resonance states in the
compound system. The details of this scheme and its justifications can be found in
Ref. \cite{2007JPhG...34.1955B}.
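The reduction of the $S$ matrix to uncoupled channels can be sketched with a small two-channel example; the eigenphases and mixing angle below are chosen arbitrarily for illustration.

```python
import cmath
import math

# Sketch: a 2x2 unitary symmetric S matrix, S = R diag(e^{2i d1}, e^{2i d2}) R^T,
# with R a real rotation (mixing) matrix. Its eigenvalues are exp(2i * delta_alpha),
# so the eigenphase shifts follow from diagonalization.
d1, d2, theta = 0.2, 0.9, 0.4          # assumed eigenphases (rad) and mixing angle
c, s = math.cos(theta), math.sin(theta)
e1, e2 = cmath.exp(2j * d1), cmath.exp(2j * d2)
S = [[c*c*e1 + s*s*e2, c*s*(e1 - e2)],
     [c*s*(e1 - e2), s*s*e1 + c*c*e2]]

# Recover the eigenphases from the eigenvalues of S (2x2 case in closed form).
tr = S[0][0] + S[1][1]
det = S[0][0]*S[1][1] - S[0][1]*S[1][0]
disc = cmath.sqrt(tr*tr - 4*det)
eigvals = [(tr + disc) / 2, (tr - disc) / 2]
eigenphases = sorted(cmath.phase(ev) / 2.0 for ev in eigvals)
```

Unitarity of $S$ is reflected in $|\det S| = 1$, which the sketch preserves by construction.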
For numerical investigations of the three-cluster system $_{\Lambda}^{9}$Be,
we have to use a finite basis of Gaussian and oscillator functions.
As was pointed out above, $N_{G}^{max}$
Gaussian functions give us the same number of eigenstates and corresponding
eigenfunctions of a two-cluster Hamiltonian. To study effects of cluster
polarization, we will involve different numbers of these eigenstates; their
actual number is denoted $N_{G}$ (1$\leq N_{G}\leq
N_{G}^{max}$).
The index $N_{f}$ enumerates the basis functions: for
the $\sigma_\alpha$th eigenstate of the $\alpha$th ($\alpha$=1, 2)
two-cluster subsystem with energy $\mathcal{E}_{\sigma_{\alpha}}$ (1$\leq\sigma_{\alpha}\leq N_{G}$)
and for $n_\alpha$ oscillator quanta (0$\leq n_\alpha\leq N_{O}-1$):
\begin{eqnarray}
N_{f} &=&1+n_\alpha+\left(\sigma_{\alpha}-1\right)N_{O}+\left(\alpha-1\right) N_{G}N_{O},\label{nf}\\
& & 1\leq N_f\leq 2N_GN_O. \nonumber
\end{eqnarray}
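The bookkeeping of Eq. (\ref{nf}) can be sketched as a short function; this is only a check of the enumeration, not part of the model code.

```python
def basis_index(n, sigma, alpha, N_O, N_G):
    """Cumulative index N_f of a basis function, cf. Eq. (nf):
    n = 0..N_O-1 oscillator quanta, sigma = 1..N_G eigenstate of a
    two-cluster subsystem, alpha = 1, 2 binary configuration."""
    assert 0 <= n < N_O and 1 <= sigma <= N_G and alpha in (1, 2)
    return 1 + n + (sigma - 1) * N_O + (alpha - 1) * N_G * N_O

# The mapping is a bijection onto 1..2*N_G*N_O:
N_O, N_G = 5, 3
indices = [basis_index(n, s, a, N_O, N_G)
           for a in (1, 2) for s in range(1, N_G + 1) for n in range(N_O)]
```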
Thus, in our calculations we deal with a $2N_G$-channel system for the case of two coupled binary configurations. In the isolated-configuration approximation the number of channels equals $N_G$.
Traditionally, the spectrum of bound states is obtained by
diagonalizing the three-cluster Hamiltonian with $N_{f}$ basis functions.
The diagonalization also reveals a large number of pseudo-bound states, which are specific combinations of scattering states. In what follows, we are going to
study both bound and pseudo-bound states.
To obtain the scattering wave functions and the elements of the scattering $S$-matrix, we
solve a system of nonhomogeneous algebraic equations for the expansion coefficients.
For scattering states, the number of oscillator functions $N_{O}$ determines the border between the internal and asymptotic regions. Obviously, the number of channels and the number of basis functions can be different for calculating bound and scattering states.
We usually perform two types of calculations of the bound and scattering states. In the first type we take into account
the polarizability of clusters by involving the maximal number of eigenstates of the
two-cluster Hamiltonians, i.e. $1\leq\sigma_{\alpha}\leq N_{G}^{\max}$. This will be called the approach with ``soft'' clusters. The second type of
calculations discards the polarizability of clusters by involving only one
eigenfunction ($\sigma_{\alpha}=1$) of the two-cluster Hamiltonians. In such a
case clusters obviously do not change their size and shape, and
thus we call it the approach with ``rigid'' clusters.
\section{Results and Discussions}
\label{sec:results}
In the present paper we restrict the discussion to the case of zero
orbital momenta $\lambda_{\alpha}$ of the binary subsystems $^5_\Lambda$He
and $^8$Be. Hence, the
orbital momentum $l_{\alpha}$ describing the relative motion of a binary
subsystem and the third cluster determines the total orbital momentum
$L=l_{\alpha}$ and the parity. Consequently, a state with a given total
angular momentum and parity $J^\pi$ corresponds to a single value of $L$: positive parity states are characterized by even values of $L$, negative parity states correspond to odd values of $L$, and $J^\pi=1/2^+$ states have zero total orbital momentum.
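With $\lambda_\alpha=0$ the channel spin reduces to the $\Lambda$-hyperon spin $1/2$, so the correspondence $J^\pi \to L$ described above can be spelled out as a short sketch (illustrative only):

```python
def total_L(J2, parity):
    """Return the total orbital momentum L for a state J^pi, where J2 = 2J
    (J is half-integer) and parity is '+' or '-'. With channel spin S = 1/2,
    L is J - 1/2 or J + 1/2, and the parity (-1)^L selects a unique value."""
    candidates = [(J2 - 1) // 2, (J2 + 1) // 2]   # L = J - 1/2 and L = J + 1/2
    want_even = (parity == '+')
    matches = [L for L in candidates if (L % 2 == 0) == want_even]
    assert len(matches) == 1
    return matches[0]

# Labels consistent with the J^pi(L) notation used below:
# 1/2+ -> L=0, 3/2+ -> L=2, 7/2+ -> L=4, 1/2- -> L=1, 5/2- -> L=3
```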
In many publications devoted to the hypernucleus $_{\Lambda}^{9}$Be, the bound and
resonance states were labeled by the total orbital momentum $L$ and the parity
$\pi$, which is justified by the small contribution of the spin-orbit
components of the $\Lambda N$ interaction. To make a bridge between this
notation and ours, we will mark the states of $_{\Lambda}^{9}$Be with three
quantum numbers $J$, $\pi$ and $L$ as $J^{\pi}$($L$).
In the asymptotic region we have taken into account two channels, describing
the scattering of an $\alpha$-particle on the $^5_\Lambda$He subsystem and
of a $\Lambda$-hyperon on the $^8$Be subsystem, provided that both subsystems
are in their ground states. In the internal region three more excited
states for each subsystem have also been considered. Such an approximation
allows for polarization of two-cluster subsystems due to the interaction
with the third cluster at small distances between clusters, but at large
distances binary subsystems
can be only in their ground states.
\subsection{Input parameters}
First of all, we need to select values of the input parameters. We perform and present two sets of calculations, using two different sets of
input parameters. These sets realize two criteria for selecting input
parameters, each reproducing different observable quantities. We denote the two
sets of input parameters $P1$ and $P2$, correspondingly. We start with the
first set, and in Subsections \ref{subsubsec:convergence} and \ref{subsubsec:3/2+} the main results will be discussed with
$P1$. In Subsection \ref{subsubsec:P2} we consider the $P2$ set of input parameters.
We employ the modified Hasegawa-Nagata potential (MHNP) \cite{potMHN1, potMHN2}
as the nucleon-nucleon interaction and use the same nucleon-hyperon
potential as in Ref. \cite{2006PhRvC..74e4312H}. Parameters of the spin-orbit
component of the $\Lambda N$ interaction are taken for the version NSC97f from
Ref. \cite{2000PhRvL..85..270H}.
In Fig. \ref{Fig:HNPoten1Even}\ we display the even components of the
nucleon-hyperon potential.
\begin{figure}[ptbh]
\begin{center}
\includegraphics[width=\columnwidth]{Figure1.pdf}
\end{center}
\caption{The even components of the central part of the $N\Lambda $ potential.}
\label{Fig:HNPoten1Even}
\end{figure}
Traditionally, we choose the oscillator length $b=1.317$ fm to minimize
the threshold energy of the two-cluster subsystems. The value $m=0.4389$ of the
Majorana parameter of the MHNP is adjusted to reproduce the energy and
width of the ground state of the $^{9}$Be nucleus relative to the $\alpha+\alpha+n$
threshold; the standard value of the Majorana parameter is $m=0.4057$.
The cut-off parameter $k_{eff}=0.889$ fm$^{-1}$ of the $\Lambda N$-potential
is selected to reproduce the energy of the ground state of the $^{9}_\Lambda$Be
nucleus with respect to the $\alpha+\alpha+\Lambda$ threshold.
The parameters $b_{\nu}$ ($\nu$=1, 2, 3, 4) of the
Gaussian functions are determined as $b_{\nu}=b_{0}q^{\nu-1}$. In the $P1$ input parameter set,
we selected $b_{0}=0.7$ fm, $q=1.85$ for $^{5}_\Lambda$He and
$b_{0}=1.15$ fm, $q=2.2$ for $^{8}$Be. These values of the parameters minimize
the ground-state energies of the $^5_\Lambda$He and $^{8}$Be nuclei.
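The geometric progression of the Gaussian parameters can be sketched as follows, using the $P1$ values quoted above:

```python
def gaussian_widths(b0, q, n_max=4):
    """Widths b_nu = b0 * q**(nu - 1), nu = 1..n_max, of the Gaussian basis (fm)."""
    return [b0 * q ** (nu - 1) for nu in range(1, n_max + 1)]

widths_He5L = gaussian_widths(0.7, 1.85)   # P1 set for the 5_Lambda-He subsystem
widths_Be8  = gaussian_widths(1.15, 2.2)   # P1 set for the 8Be subsystem
```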
We have also used another set of input parameters ($P2$), which yields the experimental value $-3.12$ MeV for the binding energy of $^5_\Lambda$He. Namely, we used different values $b_{0}=0.25$ fm, $q=1.85$ for the Gaussian functions describing the $^5_\Lambda$He subsystem and
slightly adjusted the cut-off parameter of the $\Lambda N$-potential to $k_{eff}=0.887$ fm$^{-1}$ in order to reproduce the energy of the ground state of the $^{9}_\Lambda$Be nucleus with respect to the $\alpha+\alpha+\Lambda$ threshold. The oscillator length $b=1.317$ fm, the Majorana parameter $m=0.4389$ of the MHNP, and the parameters of the Gaussian functions describing the $^8$Be subsystem remain the same in the $P2$ input parameter set.
Note that with such values of the oscillator length $b$ and the Majorana
parameter $m$ of the MHNP we obtain the resonance states of $^{8}$Be
presented in Table \ref{Tab:8BeSpectr}, where we also show
the experimental parameters of the resonance states in $^{8}$Be. As one can see, the theoretical
results are in fairly good agreement with the available experimental data
of Ref. \cite{2004NuPhA.745..155T}. It was indicated in Ref.
\cite{2017PhRvC..96c4322V} that these input parameters are not optimal for the
resonance structure of $^{8}$Be; however, the optimal values of $b$ and $m$ lead to
overbinding of the compound nucleus $^{9}$Be.
This is a typical problem for many semi-realistic nucleon-nucleon potentials. Thus, in the present
calculations we use those values of $b$ and $m$ which provide the correct
position of the $^{9}$Be ground state with respect to the three-cluster
threshold $\alpha+\alpha+n$.%
\begin{table}[tbp] \centering
\caption{Spectrum of resonance states in $^8$Be calculated in a two-cluster model
and compared with experimental data \cite{2004NuPhA.745..155T}}%
\begin{tabular}
[c]{ccccc}\hline
& \multicolumn{2}{c}{Theory} & \multicolumn{2}{c}{Experiment}\\\hline
$J^{\pi}$ & $E$, MeV & $\Gamma$, MeV & $E$, MeV & $\Gamma$, MeV\\\hline
$0^{+}$ & 0.859 & 0.958 & 0.092 & 5.57 $\times$ 10$^{-6}$\\
$2^{+}$ & 4.138 & 4.809 & 3.12 & 1.513\\
$4^{+}$ & 14.461 & 6.386 & 11.44 & $\approx$3.500\\ %
\hline
\end{tabular}
\label{Tab:8BeSpectr}%
\end{table}%
\subsection{Spectrum of $^9_\Lambda$Be nucleus}
\subsubsection{Convergence of parameters of resonance states, $P1$ input parameters}
\label{subsubsec:convergence}
The spectrum of positive and negative parity states for $1/2\leq J\leq 7/2$ ($0\leq L\leq 4 $) of
the $^9_\Lambda$Be nucleus calculated within the AMGOB model is presented in
Tables \ref{Tab:Spectrum9Be+Gob}, \ref{Tab:Spectrum9Be-Gob}.
Tables \ref{Tab:Spectrum9Be+Gob}, \ref{Tab:Spectrum9Be-Gob} demonstrate that the
polarization of the two-cluster subsystems plays an important role in the formation of
the bound and resonance states of $^9_\Lambda$Be.
\begin{table}[tbph] \centering%
\caption{Spectrum of positive parity states of $^9_\Lambda$Be nucleus obtained
within the AMGOB model with $P1$ set of the input parameters taking into account cluster polarization (soft) or neglecting
it (rigid). $\Gamma$ is a total width of the resonance state,
$\Gamma_{1,2}$ are partial decay widths of the resonance via $^5_\Lambda$He$+\alpha$
and $^8$Be$+\Lambda$ channels, correspondingly. Energy is given in MeV, the
total and partial widths are in keV.}%
\begin{tabular}{ccccccccc}
\hline
& \multicolumn{4}{c}{rigid} & \multicolumn{4}{c}{soft} \\ \hline
$J^{\pi}(L)$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$\\ \hline
$1/2^{+}(0)$& -6.204 & & & &-6.623 & & & \\ \hline
&5.360 & 3170 & 170 & 3000 & 1.792 & 3.3 & 2.3 & 1 \\
& & & & & 2.252 & 60.3 & 11.8 & 48.5\\
& & & & & 2.733 & 384& 0.6 & 383.4\\
& & & & & 3.396 & 349.2& 84.6& 264.6\\
& & & & & 4.107 & 213.6 & 23.3 & 190.3 \\
& & & & & 4.650 & 470.8 & 77.8 & 393\\
& & & & & 5.083 & 10.8 & 8.4 & 2.4\\ \hline
$3/2^{+}(2)$&-3.297 & 35.8 & 35.8 & & -3.543 & 5.5 & 5.5 & \\
& & & & & 2.115 & 0.045 & 0.01 & 0.035\\
& & & & & 2.850 & 0.027 & 0.01 & 0.017\\
& & & & & 3.314 & 206.9 & 0.1 & 206.8\\
& & & & & 3.886 & 8.3 & 2 & 6.3\\
& & & & & 4.387 & 105.7 & 1.6 & 104.1\\
& & & & & 5.212 & 26.2 & 4.3 & 21.9\\
& & & & & 5.610 & 55.8 & 0.4 & 55.4\\ \hline
$5/2^{+}(2)$ & -3.446 & 12 & 12 & & -3.748 & 0.75 & 0.75 & \\
& & & & & 2.115 & 0.043 & 0.01 & 0.033\\
& & & & & 2.849 & 0.041 & 0.037 & 0.004\\
& & & & & 3.312 & 210.42 & 0.07 & 210.35\\
& & & & & 3.885 & 8.3 & 2& 6.3\\
& & & & & 4.386 & 107.8 & 1.3 & 106.5\\
& & & & & 5.209 & 26.7 & 4.2 & 22.5\\
& & & & & 5.630 & 57.64 & 0.17& 57.4\\ \hline
$7/2^{+}(4)$ & 4.726 & 2962.5 & 2962.4 & 0.1 & 2.610 & 0.0091 & 0.0088 & 0.0003\\
& & & & & 3.643 & 0.002 & 0.0007 & 0.0013\\
& & & & & 3.979 & 8.302 & 0.001 & 8.301\\
& & & & & 4.500 & 2625.2 & 2625.2 & 0.01 \\
& & & & & 5.139 & 1.6 & 0.09 & 1.51 \\ \hline
\end{tabular}%
\label{Tab:Spectrum9Be+Gob}
\end{table}
For the positive parity states, cluster polarization decreases the energy of the bound and
resonance states. It also significantly reduces the total width of the resonance
states. For example, cluster polarization decreases the energy of the
$_{\Lambda}^{9}$Be ground 1/2$^{+}$(0) state by 400 keV, and the energy of
the 3/2$_1^{+}$(2) resonance state, identified in experiments as the second excited
state, is reduced by 245 keV, while its total width decreases more than 6
times. The situation is similar for the 5/2$_1^{+}$(2) resonance state, which has
approximately the same energy: cluster polarization shifts the energy of
this resonance down by 302 keV and reduces its width from 12 keV to 0.75 keV.
Besides, allowing for polarization leads to the formation of a large number of
narrow resonance states in the three-cluster continuum above the decay threshold
$^9_\Lambda$Be$\Rightarrow^8$Be$(0_2^+)+\Lambda$, which lies 1.63 MeV above the three-cluster decay threshold.
\begin{table}[htbp] \centering
\caption{Spectrum of negative parity states of $^9_\Lambda$Be nucleus obtained
within the AMGOB model with $P1$ set of the input parameters taking into account cluster polarization (soft) or neglecting
it (rigid). $\Gamma$ is a total width of the resonance state,
$\Gamma_{1,2}$ are partial decay widths of the resonance via $^5_\Lambda$He$+\alpha$
and $^8$Be$+\Lambda$ channels, correspondingly. Energy is given in MeV, the total and partial widths are in keV.}%
\begin{tabular}{ccccccccc}
\hline
& \multicolumn{4}{c}{rigid} & \multicolumn{4}{c}{soft} \\ \hline
$J^{\pi} (L)$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$ \\ \hline
$1/2^{-}(1)$& 0.723 & 4521 & 0.4 & 4520.6 & 0.724 & 4228.5 & 0.4 & 4228.1\\
& & & & & 1.924 & 3.74 & 3.735 & 0.005\\
& & & & & 2.519 & 30.1 & 21.1& 9\\
& & & & & 3.364 & 259.3 & 72.5 & 186.8\\
& & & && 4.350 & 250.8 & 0.1& 250.7\\
& & & & & 5.739 & 148.4 & 1.2& 147.2\\
& & & & & 5.906 & 91.3 & 2.9& 88.4\\ \hline
$3/2^{-}(1)$ & 0.723 & 4507.4 & 0.4& 4507 & 0.724 & 4213.7 & 0.5 & 4213.2\\
& & & & & 1.924 & 3.99 & 3.97 & 0.02\\
& & & & & 2.519 & 31.1 & 22& 9.1\\
& & & & & 3.368 & 258.6 & 72.3& 186.3\\
& & & & & 4.354 & 252.5 & 0.1& 252.4\\
& & & & & 5.733 & 154 & 10& 144\\
& & & & & 5.907 & 94.8 & 2.6 & 92.2\\ \hline
$5/2^{-}(3)$ & & & & & 2.345 & 0.132 & 0.003& 0.129\\
& & & & & 3.227 & 0.1157 & 0.0007& 0.115\\
& & & & & 4.117 & 54.1 & 0.6& 53.5\\
& & & & & 4.403 & 1.5 & 0.4& 1.1\\
& & & & & 5.133 & 42.5 & 11.4& 31.1\\
& & & & & 5.866 & 2.3 & 1.1& 1.2\\ \hline
$7/2^{-}(3)$ & & & & & 2.345 & 0.145 & 0.001& 0.144\\
& & & & & 3.227 & 0.197 & 0.001& 0.197\\
& & & & & 4.117 & 54.3 & 0.6& 53.7\\
& & & & & 4.403 & 1.4 & 0.4& 1\\
& & & & & 5.132 & 41.7 & 11.5& 30.2\\
& & & & & 5.865 & 2.1 & 1.1& 1\\ \hline
\end{tabular}%
\label{Tab:Spectrum9Be-Gob}
\end{table}%
For the negative parity states we observe from Table \ref{Tab:Spectrum9Be-Gob}
that the parameters of the lowest $1/2^{-}_1$(1) and $3/2^{-}_1$(1) resonance states
remain almost intact when cluster polarization is taken into account. The energy
of these resonances is quite close to the energy of 0.7 MeV at which
$^8$Be$(0_1^+)+\Lambda$ scattering becomes possible. Since the latter channel is treated properly
already in the approximation of rigid two-cluster subsystems, allowing for cluster
polarization changes the lowest $1/2^{-}_1$(1) and $3/2^{-}_1$(1) states very little.
It is worth pointing out that the spectra of the $1/2^{-}$(1) and $3/2^{-}$(1) states are
almost degenerate, because they are characterized by the same orbital momentum $L=1$ and the spin-orbit interaction is small.
Analyzing the partial widths of the resonance states tabulated in Tables
\ref{Tab:Spectrum9Be+Gob} and \ref{Tab:Spectrum9Be-Gob}, we observe
that the majority of the resonances decay via the $^8$Be+$\Lambda$ channel.
There are no more than two resonances decaying via the $^5_\Lambda$He+$\alpha$
channel for a given value of $J^\pi$. In all cases except the $5/2^-_6$(3) and $7/2^-_6$(3) resonances,
the partial widths in the two channels differ significantly from one another.
These $5/2^-_6$(3) and $7/2^-_6$(3) states represent the only case in which
the resonances decay with almost equal probability via both channels. It is
also interesting to note the principal change in the partial widths of the lowest
$1/2^+_1$(0) resonance: allowing for cluster polarization not only halves the energy
of this resonance state, but also changes its dominant decay channel.
It was pointed out in Ref. \cite{1985PThPS..81...42M} that the resonance
states 1/2$^{-}$(1) and 3/2$^{-}$(1) in $_{\Lambda}^{9}$Be form the
``genuinely hypernuclear'' rotational band, as there is no such band in the
nucleus $^{9}$Be. The Pauli principle forbids the valence nucleon to occupy
the lowest $p$ orbitals in $^{9}$Be, while there is no such restriction for the
$\Lambda$ particle in $_{\Lambda}^{9}$Be. In this respect it is interesting
to note that there are two types of positive and negative parity states, as
follows from Tables \ref{Tab:Spectrum9Be+Gob} and \ref{Tab:Spectrum9Be-Gob}.
The first type comprises the resonance states with the dominant binary
structure $\Lambda$ + $^{8}$Be. Such resonance states have the partial width
$\Gamma_{2}>\Gamma_{1}$. They are created mainly by rotation of the $\Lambda$
particle around $^{8}$Be, and they are ``genuinely hypernuclear'' states.
In particular, we would like to note that our predictions concerning the energy and width of the lowest 1/2$^{-}_1$(1) and 3/2$^{-}_1$(1) resonance states
strongly correlate with the results obtained in Ref. \cite{1985PThPS..81...42M}, since these states are located just near the $\Lambda$ + $^{8}$Be threshold and have quite large widths.
The second type comprises the resonance states with the
partial width $\Gamma_{1}>\Gamma_{2}$. These resonance states decay mainly via the $\alpha$ + $_{\Lambda
}^{5}$He channel, where the $\Lambda$ particle forms a bound state with
an alpha particle. Rotation of the second alpha particle around $_{\Lambda
}^{5}$He creates a set of resonance states of positive and negative parity.
In the energy region considered, $E\leq 6$ MeV above the $2\alpha+\Lambda$
threshold, four open channels could play a part in the formation of resonance
states. They correspond to the $\sigma=1,2$
eigenstates of the two-cluster subsystems calculated with four Gaussian
functions and listed in Table \ref{Tab:2CSystems}.
We can assume that the appearance of narrow resonances when cluster
polarization is taken into account results from the coupling of channels
belonging to the same cluster configuration: the $^8$Be$(0_1^+)+\Lambda$ and
$^8$Be$(0_2^+)+\Lambda$ channels, and the $^5_\Lambda$He$(0_1^+)+\alpha$ and
$^5_\Lambda$He$(0_2^+)+\alpha$ channels. In particular, the first pair of
channels could be coupled more strongly due to the proximity of their threshold energies.
Referring to Table \ref{Tab:2CSystems}, we observe that in our model the
$^5_\Lambda$He subsystem is overbound by 0.9 MeV compared to its experimental
binding energy. However, this overbinding is much smaller than in other simple
model calculations based upon $\Lambda N$ potentials \cite{1995PhysRep...257..349G}.
In Table \ref{Tab:2CSystems} we also display the mass root-mean-square radius for all
bound and pseudo-bound states. As we see, the ground state of $_{\Lambda}^{5}%
$He is a compact two-cluster system, while the lowest pseudo-bound state,
representing the ground state of $^{8}$Be, is very dispersed, with the large value
of $r_{m}=$5.71 fm. The states $\sigma=$2 and $\sigma=$3 of
$^{8}$Be and the state $\sigma=$2 of $_{\Lambda}^{5}$He are also dispersed,
although with smaller values of $r_{m}$.%
\begin{table}[tbp] \centering
\caption{Energy and mass root-mean-square radius of the two-cluster bound and
pseudo-bound states with $P1$ set of the input parameters. The energy is measured from a two-cluster threshold
indicated in the first column.}
\begin{tabular}
[c]{cccccc}\hline
2C-system & Quantity & $\sigma=1$ & $\sigma=2$ & $\sigma=3$ & $\sigma
=4$\\\hline
$_{\Lambda}^{5}$He=$\alpha+\Lambda$ & $E$, MeV & -4.06 & 2.91 & 21.12 &
139.76\\
& $r_{m}$, fm & 1.71 & 3.25 & 2.20 & 1.48\\
& $\overline{r}_{\alpha}$, fm & 2.62 & 6.72 & 4.06 & 1.78\\%\hline
$^{8}$Be=$\alpha+\alpha$ & $E$, MeV & 0.70 & 1.63 & 9.11 & 53.03\\
& \multicolumn{1}{c}{$r_{m}$, fm} & 5.71 & 4.37 & 3.05 & 1.95\\
& $\overline{r}_{\alpha}$, fm & 15.16 & 7.07 & 2.34 & 1.39\\ \hline
\end{tabular}
\label{Tab:2CSystems}%
\end{table}%
Table \ref{Tab:2CSystems} also shows the average distances $\overline{r}_{\alpha}$ between
the clusters, determined in the following way:%
\[
\overline{r}_{\alpha}=b\sqrt{\left\langle g_{\mathcal{E}_{\sigma}%
,\lambda_{\alpha}}\left\vert x_{\alpha}^{2}\right\vert g_{\mathcal{E}_{\sigma
},\lambda_{\alpha}}\right\rangle /\mu_{\alpha}},
\]
where $\mu_{\alpha}$ is the reduced mass appearing in the definition of the
Jacobi vector $\mathbf{x}_{\alpha}$. The quantity $\overline{r}_{\alpha}$
gives
the most probable relative distance between
the interacting clusters. This quantity has been discussed in the literature. For
example, in Ref. \cite{2018PhRvC..97b4330K}, devoted to the study of the spectrum
of $p$-shell hypernuclei, including $_{\Lambda}^{9}$Be, within a microscopic
cluster model, the average distance between two alpha particles in the ground
state of $^{8}$Be was determined approximately, yielding
$\overline{r}_{\alpha}$=5.99 fm, which is smaller than our estimate
$\overline{r}_{\alpha}$=15.16 fm. We believe that such a large difference in
the average distance between alpha particles can be partially ascribed to the
different nucleon-nucleon potentials involved in Ref.
\cite{2018PhRvC..97b4330K} and in our calculations; the main difference, however, is
related to the way in which this quantity is determined. Note that a detailed investigation
of the average distance between two alpha particles in different states of $^{8}$Be
has been performed in Ref. \cite{2020arXiv200604525K} within the two-cluster resonating group method.
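The definition of $\overline{r}_{\alpha}$ can be illustrated numerically when the state is given by its oscillator expansion coefficients. The sketch below uses the standard diagonal and off-diagonal $r^{2}$ matrix elements of the 3D harmonic oscillator in units of $b^{2}$; the sign of the off-diagonal term depends on the phase convention adopted for the radial functions, so it is an assumption of this sketch.

```python
import math

def mean_square_x(C, l):
    """<x^2> (in units of b^2) for a state sum_n C[n] |n, l> expanded in 3D
    oscillator radial functions; uses <n l|r^2|n l> = 2n + l + 3/2 and
    |<n+1 l|r^2|n l>| = sqrt((n+1)(n + l + 3/2)) (sign convention dependent)."""
    val = sum(c * c * (2 * n + l + 1.5) for n, c in enumerate(C))
    val -= 2.0 * sum(C[n + 1] * C[n] * math.sqrt((n + 1) * (n + l + 1.5))
                     for n in range(len(C) - 1))
    return val

def mean_distance(C, l, b, mu):
    """Most probable cluster separation r_bar = b * sqrt(<x^2> / mu)."""
    return b * math.sqrt(mean_square_x(C, l) / mu)

# A pure 0s oscillator state (C = [1.0], l = 0) gives <x^2> = 3/2
```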
\subsubsection{Spectrum of the $3/2^+$(2) states, $P1$ input parameters}
\label{subsubsec:3/2+}
In Fig. \ref{Fig:SpectrConv32PSCvsCC} we display the spectrum of
the Hamiltonian for the 3/2$^{+}$(2) states obtained with the $P1$ input parameter set. The spectrum is obtained in the
single-configuration (SC) and coupled-configuration (CC) approximations.
Eigenvalues of the Hamiltonian are shown as a function of the number of
basis functions $N_{f}$ defined by Eq. (\ref{nf}). It is necessary to
recall that in the present calculations
the single-configuration approximation involves four different channels
with a binary subsystem in the ground and three excited states. 200
oscillator functions are employed in each channel. Thus the region
1$\leq N_{f}\leq$800 in both panels of Fig.
\ref{Fig:SpectrConv32PSCvsCC} corresponds to the SC approximation, and the
region 801$\leq N_{f}\leq$1600 represents the CC approximation. The range
1$\leq N_{f}\leq$200 shows the spectrum of eigenstates created by basis
functions of the channel $_{\Lambda}^{5}$He$\left( 0_{1}^{+}\right)
$+$\alpha$ (left panel) and $^{8}$Be$\left( 0_{1}^{+}\right) $+$\Lambda$
(right panel), the range 201$\leq N_{f}\leq$400 displays the effect
on the eigenspectrum of the second channel, $_{\Lambda}^{5}$He$\left( 0_{2}%
^{+}\right) $+$\alpha$ and $^{8}$Be$\left( 0_{2}^{+}\right) $+$\Lambda$,
correspondingly, and so on.
One
can see that there are a large number of plateaus which appear in the SC
and CC approximations. The dot-dashed lines indicate the position of the 3/2$^{+}$(2)
resonance states, obtained by solving the dynamic equations
with proper boundary conditions.
\begin{figure*}[ptbh]
\begin{center}
\includegraphics[width=\textwidth]{Figure2.pdf}
\caption{Spectrum of eigenstates of the internal part of Hamiltonian for
3/2$^{+}$(2) states constructed in a single-configuration (SC) and
coupled-configuration (CC) approximations with $P1$ input parameters. The 3/2$^{+}$(2) resonance states are
displayed by the dot-dashed line in the area of the CC approximation. Energy
is measured from the three-cluster decay threshold $\alpha+\alpha+\Lambda$.}%
\label{Fig:SpectrConv32PSCvsCC}%
\end{center}
\end{figure*}
Such a plateau can be a marker of a narrow resonance state in the system
under consideration. Besides, in many-channel systems the plateaus may appear
due to a weak coupling of channels.
It is worthwhile noticing that the second type of plateau may appear, for example,
when a weak spin-orbit interaction couples
states with different values of the total orbital momentum $L$ and/or
total spin $S$ ($LS$ coupling scheme).
Many plateaus which are observed in Fig. \ref{Fig:SpectrConv32PSCvsCC} in the
single-configuration approximations belong to the second type.
Indeed, if we have a closer look at the spectrum we can see that the
channel describing the interaction of the third cluster with the most compact
two-cluster subsystem ($_{\Lambda}^{5}$He$\left(0_{1}^{+}\right)$+$\alpha$ or
$^{8}$Be$\left(0_{1}^{+}\right)$+$\Lambda$) is dominant and noticeably reduces
the energy of the eigenstates. Second in importance to the eigenstates of the
$^9_\Lambda$Be hypernucleus are $_{\Lambda}^{5}$He$\left(0_{2}^{+}\right)+\alpha$
and $^{8}$Be$\left(0_{2}^{+}\right)+\Lambda$ channels.
All other channels are weakly coupled to the dominant channels
and thus contribute much less to the energy of the eigenstates shown in Fig.
\ref{Fig:SpectrConv32PSCvsCC}. The same is also valid for a number of plateaus
occurring in the CC area.
We can also see in Fig. \ref{Fig:SpectrConv32PSCvsCC} that in many cases the
energy of the resonance states is close to the energy of a plateau. The narrower
the resonance state, the closer its energy is to the plateau energy.
\begin{table}[tbph] \centering%
\caption{Spectrum of $J^\pi(L)=3/2^+(2)$ states of $^9_\Lambda$Be nucleus obtained
within the AMGOB model with $P1$ set of the input parameters for different degrees of polarization of two-cluster
subsystems. Energy is in MeV and width is in keV.}%
\begin{tabular}{cccccccc}
\hline
\multicolumn{2}{c}{$N_G=1$} &\multicolumn{2}{c}{$N_G=2$}& \multicolumn{2}{c}{$N_G=3$}& \multicolumn{2}{c}{$N_G=4$} \\ \hline
$E$& $\Gamma $ & $E$ & $\Gamma $ & $E$ & $\Gamma $ & $E$ & $\Gamma $ \\ \hline
-3.297 & 35.8 &-3.430 & 13.9 & -3.539 & 5.6 &-3.543 & 5.5\\
& & 2.115 & 0.08 & 2.115 & 0.08& 2.115 & 0.04\\
& & 2.850 & 0.125 & 2.850 & 0.02& 2.850 & 0.03\\
& & 3.319 & 197.6& 3.314 & 206.9& 3.3136 & 206.9 \\
& & 3.887 & 9.4 & 3.886 & 8.3& 3.886 & 8.3 \\
& & 4.387 & 104.5 & 4.387 & 105.7& 4.387 & 105.7 \\
& & 5.217 & 32.1& 5.212 & 26.2& 5.212 & 26.2 \\
& & 5.643 & 68.6& 5.630 & 55.8& 5.630 & 55.8\\ \hline
\end{tabular}%
\label{Tab:Spectrum9Be3/2+Gob}
\end{table}
As Table \ref{Tab:Spectrum9Be3/2+Gob} suggests, the qualitative change
of the spectrum is caused by taking into account the $0_2^+$ states of the
$^{5}_\Lambda$He and $^{8}$Be subsystems. Allowing for higher excited states
leads
only to slight alterations of energies and widths, but does not change
the number of resonances. From this fact we may conclude that the $0^+_2$
states of the binary subsystems play a large part in the structure of the
resonance states of the $^9_\Lambda$Be nucleus.
In Table \ref{Tab:CoulombRS32P} we compare the spectra of the 3/2$^{+}$(2) resonance
states obtained with and without the Coulomb forces. 200 oscillator functions are
used in both calculations.
\begin{table}[tbph] \centering
\caption{Effects of the Coulomb forces on the energy and width of the 3/2$^{+}$(2) resonance states studied with $P1$ set of the input parameters.}%
\begin{tabular}{cccc}
\hline
\multicolumn{2}{c}{with Coulomb} & \multicolumn{2}{c}{without Coulomb} \\ \hline
$E$, MeV & $\Gamma$, keV & $E$, MeV & $\Gamma $, keV\\ \hline
-3.543 & 5.5 & -5.103 & 0 \\
2.115 & 0.04 & 1.129 & 1.122\\
2.850 & 0.03 & 1.865 & 0.107\\
3.314 & 206.9 & 2.764 & 29.034\\
3.886 & 8.3 & 3.126 & 64.264\\ \hline
\end{tabular}
\label{Tab:CoulombRS32P}
\end{table}
The results presented in Table \ref{Tab:CoulombRS32P} allow us to reveal the
effects of the Coulomb forces on the energies and widths of the resonance states in
$_{\Lambda}^{9}$Be, and to understand peculiarities of the
present model.
We can conclude from Table \ref{Tab:CoulombRS32P} that without the Coulomb forces
the energies of all the resonance states decrease by approximately 1 MeV, and the
lowest $3/2_1^+$(2) resonance state with $E=-3.543$ MeV, which lies below the three-cluster threshold,
is transformed into a bound state. It is interesting to note that the Coulomb
interaction makes the $3/2_4^+$(2) resonance state
wider and the other resonance states narrower.
Figure \ref{Fig:delta_32+} illustrates the behaviour of the phase shifts of elastic
scattering for $J^\pi(L)=3/2^+(2)$ states of $^9_\Lambda$Be hypernucleus for two sets of the input parameters.
\begin{figure}[hptbh]
\begin{center}
\includegraphics[width=0.71\textwidth]{Figure3.pdf}
\caption{Phase shifts of elastic many-channel scattering for $J^\pi(L)=3/2^+(2)$ states of $^9_\Lambda$Be versus energy in
the $_{\Lambda}^{5}$He$(0^+_1)$+$^{4}$He (1, 3) and $^{8}$Be$(0^+_1)$+$\Lambda$ (2, 4) channels. The data are obtained with $P1$ (upper panel) and $P2$ (lower panel) set of input parameters for soft (1, 2) and rigid (3, 4)
two-cluster subsystems. The orbital momenta $l_1$ and $l_2$ of the relative motion of the clusters equal 2.}%
\label{Fig:delta_32+}%
\end{center}
\end{figure}
Fig. \ref{Fig:delta_32+} demonstrates a large impact of cluster polarization
on the phase shifts of elastic scattering. It is interesting to note that the phase
shift of $_{\Lambda}^{5}$He$(0^+_1)$+$^{4}$He scattering calculated with the $P1$ input parameters manifests resonance behaviour
only for the lowest $3/2^+_1$(2) state and the $3/2^+_4$(2) state. All other resonances in the $P1$ calculations become
apparent only in the behaviour of the $^{8}$Be$(0^+_1)$+$\Lambda$ scattering phase shift.
\subsubsection{Spectrum of $^9_\Lambda$Be for $P2$ set of the input parameters}
\label{subsubsec:P2}
The spectrum of positive and negative parity states for $1/2\leq J\leq 7/2$ ($0\leq L\leq 4 $) of
the $^9_\Lambda$Be nucleus calculated within the AMGOB model for the $P2$ set of the input parameters is presented in Tables \ref{Tab:Spectrum9Be+Gob_new}, \ref{Tab:Spectrum9Be-Gob_new}. A narrow $1/2^+ (0)$ resonance with a width of 9 keV appears only 41 keV above the $^5_\Lambda$He+$\alpha$ decay threshold; this state was absent in the $P1$ calculations.
In Fig. \ref{Fig:Spectr9LBeP1P2} we compare the spectra of bound and resonance
states of the hypernucleus $_{\Lambda}^{9}$Be which lie below the
$\alpha+\alpha+\Lambda$ threshold and are obtained with the $P1$ and
$P2$ sets of input parameters. These spectra are calculated with cluster polarization. One sees that the $P2$ set of input
parameters noticeably shifts the energy of the $_{\Lambda}^{5}$He+$\alpha$ threshold and slightly shifts up the energies of the 3/2$^{+}$ (2) and 5/2$^{+}$ (2)
states. Thus these states are transformed into bound states lying close to the
$_{\Lambda}^{5}$He+$\alpha$ threshold. Neglecting cluster polarization turns the bound 3/2$^{+}$ (2) state into a virtual state, while the 5/2$^{+}$ (2) state remains bound with respect to the $_{\Lambda}^{5}$He+$\alpha$ threshold.
\begin{figure}[hptb]
\begin{center}
\includegraphics[width=\columnwidth]{Figure4.pdf}%
\caption{Energy of bound and resonance states in $_{\Lambda}^{9}$Be\ obtained
with two sets of input parameters $P1$ and $P2$ and compared to the experimental data (Exp) from Refs. \cite{2005NuPhA.754...58T, 2002PhRvL..88h2501A}.}%
\label{Fig:Spectr9LBeP1P2}%
\end{center}
\end{figure}
Comparing the data presented in Tables \ref{Tab:Spectrum9Be+Gob_new}, \ref{Tab:Spectrum9Be-Gob_new} with those listed in Tables \ref{Tab:Spectrum9Be+Gob}, \ref{Tab:Spectrum9Be-Gob}, we observe that the $P2$ set of input parameters generates half as many resonances as the $P1$ set. Besides, the majority of the resonances presented in Tables \ref{Tab:Spectrum9Be+Gob_new}, \ref{Tab:Spectrum9Be-Gob_new} decay mainly via the $^5_\Lambda$He+$\alpha$ channel, while the $P1$ set of parameters produced resonances decaying into the $^8$Be+$\Lambda$ channel. Furthermore, almost all resonances (except the $7/2^+(4)$ case) which disappeared in the $P2$ calculations have larger widths and are characterized by partial widths $\Gamma_2\gg\Gamma_1$. The wide $7/2^+(4)$ resonance appeared to be almost insensitive both to the choice of input parameter set and to accounting for cluster polarization. None of the narrow $7/2^+(4)$ resonances appeared in the $P2$ calculations.
\begin{table}[htbph] \centering%
\caption{Spectrum of positive parity states of $^9_\Lambda$Be nucleus obtained
within the AMGOB model with $P2$ input parameter set taking into account cluster polarization (soft) or neglecting
it (rigid). $\Gamma$ is a total width of the resonance state,
$\Gamma_{1,2}$ are partial decay widths of the resonance via $^5_\Lambda$He$+\alpha$
and $^8$Be$+\Lambda$ channels, correspondingly. Energy is given in MeV, the
total and partial widths are in keV.}%
\begin{tabular}{ccccccccc}
\hline
& \multicolumn{4}{c}{rigid} & \multicolumn{4}{c}{soft} \\ \hline
$J^{\pi} (L)$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$\\ \hline
$1/2^{+} (0)$& -6.33 & & & &-6.62 & & & \\ \hline
& -3.01 & 95 & 95 & & -3.089 & 9 & 9 & \\
& & & & & 1.82 & 13.9 & 6.7& 7.2 \\
& & & & & 2.391 & 49.2 & 21.2 & 28 \\
& & & & & 3.309 & 91.2 & 47.5 & 43.7 \\
& & & & & 4.482 & 84.7 & 74.7 & 10\\
& & & & & 4.5 & 85.5 & 83 & 2.5\\ \hline
$3/2^{+} (2)$ & -3.09 & 0.001 & 0.001 & & -3.179 & & & \\
& & & & & 2.116 & 0.36 & 0.22 & 0.14 \\
& & & & & 2.856 & 2.84 & 10.2& 18.2\\
& & & & & 3.895 & 10.3 & 3.9 & 6.4\\
& & & & & 5.22 & 18.2 & 8 & 10.2\\ \hline
$5/2^{+} (2)$ & -3.34 & & & & -3.43 & & & \\
& & & & & 2.116 & 0.35 & 0.21 & 0.14\\
& & & & & 2.856 & 2.70 & 0.93 & 1.77\\
& & & & & 3.894 & 9.87 & 3.67 & 6.2\\
& & & & & 5.22 & 17.7 & 7.6 & 10.1\\ \hline
$7/2^{+} (4)$ & 4.67& 2295.1 & 2295.09& 0.02 & 4.658 & 2283 & 2282.98 & 0.02\\ \hline
\end{tabular}%
\label{Tab:Spectrum9Be+Gob_new}
\end{table}
\begin{table}[htbph] \centering%
\caption{Spectrum of negative parity states of $^9_\Lambda$Be nucleus obtained
within the AMGOB model with $P2$ input parameter set taking into account cluster polarization (soft) or neglecting
it (rigid). $\Gamma$ is a total width of the resonance state,
$\Gamma_{1,2}$ are partial decay widths of the resonance via $^5_\Lambda$He$+\alpha$
and $^8$Be$+\Lambda$ channels, correspondingly. Energy is given in MeV, the
total and partial widths are in keV.}%
\begin{tabular}{ccccccccc}
\hline
& \multicolumn{4}{c}{rigid} & \multicolumn{4}{c}{soft} \\ \hline
$J^{\pi}(L)$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$ & $E$ & $\Gamma $ & $\Gamma_1$ & $\Gamma_2$\\ \hline
$1/2^{-} (1)$ & & & & & 1.934 & 7.37 & 6.32 & 1.05\\
& & & & & 2.574 & 36.29 & 31.29& 5\\
& & & & & 3.554 & 75.1 & 68.8 & 6.3\\
& & & & & 4.837 & 85.47 & 84.84 & 0.63 \\ \hline
$3/2^{-} (1)$ & & & & & 1.934 & 7.46 & 6.83 & 0.63\\
& & & & & 2.574 & 35.78 & 31.44& 4.34\\
& & & & & 3.554 & 73.68 & 68.44& 5.24\\
& & & & & 4.837 & 85.59 & 85.31 & 0.28 \\ \hline
$5/2^{-} (3)$ & & & & & 2.345 & 0.06 & 0.059 & 0.001\\
& & & & & 3.228 & 0.13 & 0.07 & 0.06\\
& & & & & 4.40 & 0.74 & 0.36 & 0.38 \\ \hline
$7/2^{-} (3)$ & & & & & 2.345 & 0.083 & 0.082 & 0.001\\
& & & & & 3.228 & 0.126 & 0.072 & 0.054\\
& & & & & 4.40 & 0.72 & 0.37 & 0.35 \\ \hline
\end{tabular}%
\label{Tab:Spectrum9Be-Gob_new}
\end{table}
Energies of the $_{\Lambda}^{5}$He bound and pseudo-bound states obtained for the $P2$ set of input parameters are tabulated in Table \ref{Tab:2CSystems_new}. We can observe from Table \ref{Tab:2CSystems_new} that the energy of the bound state corresponds to its experimental value, while all the pseudo-bound states have much higher energies than in the $P1$ calculations. This probably means that polarization of the $^5_\Lambda$He subsystem hardly affects the spectrum of the $^9_\Lambda$Be hypernucleus, resulting in a smaller number of resonances in $^9_\Lambda$Be for the $P2$ calculations.
\begin{table}[tbp] \centering
\caption{Energy and mass root-mean-square radius of the $_{\Lambda}^{5}$He=$\alpha+\Lambda$ bound and
pseudo-bound states obtained for $P2$ set of input parameters. The energy is measured from the two-cluster threshold
indicated in the first column.}\vspace{2 mm}
\begin{tabular}
[c]{cccccc}\hline
2C-system & Quantity & $\sigma=1$ & $\sigma=2$ & $\sigma=3$ & $\sigma
=4$\\\hline
$_{\Lambda}^{5}$He=$\alpha+\Lambda$ & $E$, MeV & -3.13 &57.99 & 300.07 &
1013.10\\
& $r_{m}$, fm & 1.53 & 1.52 & 1.33 & 1.28\\ \hline
\end{tabular}
\label{Tab:2CSystems_new}%
\end{table}%
\subsubsection{Comparison with other methods and experiment}
In Table \ref{Tab:DiffMethods} we display
the spectrum of $_{\Lambda}^{9}$Be obtained in Refs. \cite{2019FBS...60...30L}
and \cite{2000FBS....28..103O}, and compare it with our results.
\begin{table}[htbph] \centering
\caption{Spectrum of bound and resonance states in $^{9}_{\Lambda}$Be obtained in different models. Energy is displayed in MeV, width is given in keV.}%
\begin{tabular}
[c]{cccccccccccc}\hline
\multicolumn{3}{c}{\cite{2019FBS...60...30L}} &
\multicolumn{3}{c}{Present paper, P1} & \multicolumn{3}{c}{Present paper, P2} &
\multicolumn{3}{c}{\cite{2000FBS....28..103O}}\\\hline
$L^{\pi}$ & $E$ & $\Gamma$ & $J^{\pi}$ & $E$ & $\Gamma$ & $J^{\pi}$ & $E$ & $\Gamma$ &
$J^{\pi}$ & $E$ & $\Gamma$\\\hline
0$^{+}$ & -6.65 & - & ${1\over2}^{+}$ & -6.62 & - & ${1\over2}^{+}$ & -6.62 & - & ${1\over2}^{+}$ & -7.13 & -
\vspace{1mm}\\
2$^{+}$ & -3.82 & - & ${5\over2}^{+}$ & -3.75 & 0.75 & ${5\over2}^{+}$ & -3.43 & - & ${5\over2}^{+}$ &
-3.89 & - \vspace{1mm}\\%\hline
& & & ${3\over2}^{+}$ & -3.54 & 5.5 & ${3\over2}^{+}$ & -3.18 & - & ${3\over2}^{+}$ &
-3.89 & - \vspace{1mm}\\%\hline
1$^{-}$ & 0.1 & 2500 & ${1\over2}^{-}$ & 0.724 & 4228 & ${1\over2}^{+}$ & -3.09 & 9 & ${3\over2}^{-}$ &
-2.94 & 120\vspace{1mm}\\%\hline
& & & ${3\over2}^{-}$ & 1.92 & 4.0 & ${3\over2}^{-}$ & 1.93 & 7.46 & ${3\over2}^{-}$ & -2.21 &
2160\\%\hline
& & & ${3\over2}^{-}$ & 3.37 & 258.6 & ${3\over2}^{-}$ & 3.55 & 73.68 & ${3\over2}^{-}$ &
3.86 & 5.6\vspace{1mm}\\%\hline
4$^{+}$ & 3.2 & 780& ${7\over2}^{+}$ & 2.61 & 0.01 & ${7\over2}^{+}$ & 4.66 & 2283 & & & \vspace{1mm}\\
3$^{-}$ & 8.0 & 6100& ${7\over2}^{-}$ & 2.35 & 0.15 & ${7\over2}^{-}$ & 2.35 & 0.08 & & & \vspace{1mm}\\ \hline
\end{tabular}
\label{Tab:DiffMethods}%
\end{table}%
As this nucleus has no more than two bound states and a large number of
resonance states, we selected those investigations which determined both
energies and widths of the excited states.
In Ref.
\cite{2019FBS...60...30L} a three-cluster model with an approximate treatment
of the Pauli principle was employed. The $\alpha\Lambda$ interaction was
determined in a folding approximation with the same $\Lambda N$ potential as
in the present paper; however, another value of the parameter $k_{F}$
($k_{F}$ = 0.963 fm$^{-1}$) was selected. The complex scaling method was
employed in Ref. \cite{2019FBS...60...30L} to locate the energies and widths of
resonance states.
As the spin-orbit interaction is disregarded in Ref.
\cite{2019FBS...60...30L}, the authors labelled the states of the $^9_\Lambda$Be
with the total orbital momentum $L$. The ground-state energy obtained
in Ref. \cite{2019FBS...60...30L} is close to our result. There is also a certain
similarity between our results for both input parameter sets and the results of Ref. \cite{2019FBS...60...30L} for the energy
of the $2^{+}_1$ state. However,
in our approach this state is split by the spin-orbit interaction into two states,
5/2$_{1}^{+}$(2) and 3/2$_{1}^{+}$(2). In the $P1$ calculations these states are resonances, since they are located above the lowest $^{5}_\Lambda$He$\left(0_{1}^{+}\right)+\alpha$
threshold, while with the $P2$ input parameters these states are bound. An important point is that our model reproduces the correct order
of the 5/2$_{1}^{+}$(2) and 3/2$_{1}^{+}$(2) states and only slightly overestimates their splitting for both sets of input parameters.
The $L^\pi=1^-$ resonance state presented in Ref. \cite{2019FBS...60...30L} corresponds to the lowest $1/2_1^-(1)$
resonance obtained in our model with the $P1$ set of input parameters. Both states are located close to the $2\alpha+\Lambda$ threshold and have large widths.
There are some differences in the positions and widths of other resonance states. The energy of the $4^+$ resonance listed in Ref. \cite{2019FBS...60...30L} is about 0.6 MeV higher than the energy of the lowest $7/2^+(4)$ resonance obtained with the $P1$ input parameter set of our model, and its width is significantly larger. At the same time, the $7/2^+(4)$ resonance produced by the $P2$ input parameter set is characterized by a higher energy and a larger width than the $4^+$ resonance from Ref. \cite{2019FBS...60...30L}. Finally, the lowest $7/2^-(3)$ resonance state produced by both sets of input parameters in our model has an essentially lower energy and a smaller width than the $3^-$ resonance presented in Ref. \cite{2019FBS...60...30L}.
The Faddeev equation methodology was applied in Ref.
\cite{2000FBS....28..103O} to study bound and resonance states in $_{\Lambda
}^{9}$Be within a macroscopic three-body model. Several effective
$\alpha\Lambda$ interactions and a separable $\alpha\alpha$ interaction
were employed to calculate the spectrum.
The selected $\alpha\Lambda$ and $\alpha\alpha$ interactions lead to
an overbound ground state of $^9_\Lambda$Be in Ref. \cite{2000FBS....28..103O}.
As the spin-orbit interaction is neglected in Ref. \cite{2000FBS....28..103O},
the 3/2$^{+}$ and 5/2$^{+}$ states have the same energy. They are bound states, since
they are located below the $_{\Lambda}^{5}$He+$\alpha$
threshold. Three $3/2^{-}$ resonance states were found in Ref.
\cite{2000FBS....28..103O}; two of them are located below the three-cluster
threshold and one above. The $P2$ set of input parameters of our model produces the lowest $1/2^{+}(0)$ resonance with an energy close to that of the lowest $3/2^-$ resonance from Ref.
\cite{2000FBS....28..103O}.
The $3/2^{-}_3$ state from Ref. \cite{2000FBS....28..103O}
lies near our 3/2$^{-}$ resonance states with energies 3.37 MeV and 3.55 MeV for the $P1$ and $P2$ input parameter sets, respectively. The latter states have, however, larger widths than the 3/2$_{3}^{-}$ resonance
state obtained in Ref. \cite{2000FBS....28..103O}.
In Table \ref{Tab:Spectrum9BeExp} we collected experimental data on the spectrum of the $^9_\Lambda$Be hypernucleus. The references are found in Ref. \cite{2019FBS...60...30L}.
\begin{table}[htbph] \centering%
\caption{Spectrum of $_{\Lambda}^{9}$Be identified in different experiments compared with our results. Energy is given in MeV.}%
\begin{tabular}{ccccccccc}
\hline
Source & \cite{1983PhRvL..51.2085M,
1976PhLB...62..481B} & \cite{2006PrPNP..57..564H,
1998NuPhA.639...93A} & \multicolumn{2}{c}{\cite{2005NuPhA.754...58T%
, 2002PhRvL..88h2501A}} & \multicolumn{2}{c}{Theory, P1} &\multicolumn{2}{c}{Theory, P2}\\ \hline
$J^{\pi }$ & $E$ & $E$ &$J^{\pi }$& $E$ &$J^{\pi }$& $E$ &$J^{\pi }$& $E$
\\ \hline
1/2$^{+}$ & -6.63 & -5.90 & 1/2$^{+}$ & -6.62 & 1/2$^{+}_1$ & -6.62 & 1/2$^{+}_1$ & -6.63 \\
& & & 5/2$^{+}_1$ & -3.606 & 5/2$^{+}_1$ & -3.75 & 5/2$^{+}$ & -3.43 \\
3/2$^{+}$& -3.55 & -2.97 & 3/2$^{+}$ & -3.563 & 3/2$^{+}_1$ & -3.54 & 3/2$^{+}_1$ & -3.18 \\
& & -0.10 & &-0.83 & 1/2$_{1}^{-}$ & 0.724 & 1/2$_{2}^{+}$ & -3.09\\
& & & &2.89 & 5/2$^{+}_3$ & 2.849 & 5/2$^{+}_3$ & 2.856\\
& & & & & 3/2$^{+}_3$ & 2.85& 3/2$^{+}_3$ & 2.856\\
& & 3.62 & & & 3/2$^{-}_4$ & 3.37 & 3/2$^{-}_3$ & 3.55\\\hline
\end{tabular}%
\label{Tab:Spectrum9BeExp}%
\end{table}
As we can see from Table \ref{Tab:Spectrum9BeExp}, there is some consistency
in the experimental data only for the energies of the ground $1/2^+$ state and the
excited $3/2^+$ and $5/2^+$ states.
The energies of these three levels obtained within our model are in good
agreement with the experimental data from the first and last columns. The energy of the lowest $3/2_1^+$ state obtained with the $P1$ input parameter set is closer to the experimental value than that produced in the $P2$ calculations. However, in the latter case the $3/2_1^+$ and $5/2_1^+$ states are located below the $^5_\Lambda$He$+\alpha$ decay threshold, in accordance with the experimental data, while in the $P1$ calculations these states lie above the two-cluster threshold. The energies of the $3/2_3^+$ and $5/2_3^+$ resonance states are close to the experimentally observed level at $E=2.89$ MeV from Refs. \cite{2005NuPhA.754...58T, 2002PhRvL..88h2501A}. Finally, the resonance state with energy $E=3.62$ MeV reported in Refs. \cite{2006PrPNP..57..564H, 1998NuPhA.639...93A} is located near our $3/2_4^-$ and $3/2_3^-$ resonances generated by the $P1$ and $P2$ input parameter sets, respectively.
\subsection{Nature of the resonance states}
As we have seen above, our model generates a large number of resonance states in
$_{\Lambda}^{9}$Be. What is the nature of these resonance states? What factors
are responsible for their appearance? All the results of this Subsection have been obtained with the $P1$ set of input parameters, since this variant of the calculations produces more resonance states.
There are at
least three possible reasons for the formation of resonance states, and thus three
types of resonances emerge in our many-channel model of $_{\Lambda}^{9}$Be.
First, the so-called shape resonance states can be created by a centrifugal barrier,
a Coulomb barrier, or a combination of both.
The centrifugal barrier is present in both $_{\Lambda}^{5}$He+$\alpha$
and $^{8}$Be+$\Lambda$ channels, if the total orbital momentum and parity
allow rotational states with nonzero orbital momentum of relative motion of
the interacting clusters. The Coulomb barrier is present in the $_{\Lambda}^{5}$He+$\alpha$
channel only. Effects of the Coulomb forces on the parameters of the resonance
states are demonstrated in Table \ref{Tab:CoulombRS32P} for the $J^\pi=3/2^{+}$ states.
This example shows that the Coulomb interaction is not responsible for creating
such a rich variety of resonance states. As we have seen, it changes the parameters of
a resonance state and can transform a weakly bound state into a resonance state.
Second, some of the obtained resonance states may be considered
Pauli resonance states. The appearance of a large number of
resonance states in cluster systems was discovered many years ago
(see, for example, Refs.
\cite{1969PhRv..179..971T, 1973PhRvC...8.1649T, 1974PhLB...49..308C,
1982PhRvC..26.2410S,1982NuPhA.380...87K, 1983NuPhA.394..387W,
1985NuPhA.437..367W, 1988PhRvC..38.2013K}).
In Refs. \cite{1982NuPhA.377...84F, 1975PThPh..54..747K} it was
shown with simple two-cluster model systems how the Pauli resonances appear
in resonating group method calculations when
different oscillator lengths are used for the interacting clusters. It is worth
noting that with distinct values of the oscillator lengths for different clusters one achieves a more adequate description of the
compound system. The Pauli resonance states may also appear in the case when the
same oscillator lengths are used but more advanced internal cluster functions are adopted.
Using different oscillator lengths implies invoking monopole excitations of each cluster. Such excitations in light nuclei have an energy of around 20 MeV; consequently, the Pauli resonances appear in a high-energy range. Meanwhile, employing more advanced wave functions of the interacting clusters by considering them as binary structures may result in a set of low-energy internal states which generate the Pauli resonances at low energy. That is the case for the present investigation.
The Pauli resonances are caused by the almost Pauli-forbidden states, which
can be present in the wave function instead of the Pauli-forbidden states,
both in the case of different oscillator lengths and in the case of advanced intrinsic cluster wave functions.
Recall that the forbidden states are the eigenfunctions of the norm
kernel with zero eigenvalues, while the almost Pauli-forbidden states
correspond to nonzero but very small eigenvalues.
In Ref.
\cite{1992NuPhA.548...39K} it was claimed that the Pauli resonances
disappear if the almost forbidden states are removed from the wave function.
It was demonstrated for $^{16}$O+$\alpha$ scattering that by omitting the
eigenstates with eigenvalues less than 1.0$\times$10$^{-2}$ one removes the
Pauli resonances. We applied this algorithm to calculate the 3/2$^{+}$(2) phase
shifts and the parameters of the resonance states. In this case we obtained only one
almost forbidden state, with the eigenvalue 6.4$\times$10$^{-8}$. The other
eigenvalues are spread in the region between 0.2 and 1.8. Elimination of the
almost forbidden state did not change the phase shifts or the
parameters of the 3/2$^{+}$(2) resonances. The same results were obtained
for the 1/2$^{+}$(0) and 1/2$^{-}$(1) states. This lets us conclude that the obtained
resonances are not Pauli resonances. We suggest that the Pauli resonance
states arise at higher energies.
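The elimination test described above can be sketched as follows. This is a minimal illustration only: the helper name is ours, and the eigenvalues are stand-ins mimicking the norm-kernel spectrum quoted above (one almost-forbidden state at $6.4\times10^{-8}$, the physical eigenvalues between 0.2 and 1.8).

```python
def remove_almost_forbidden(eigenpairs, threshold=1.0e-2):
    """Keep only basis states whose norm-kernel eigenvalue exceeds the threshold.

    eigenpairs: list of (eigenvalue, state_label) tuples. Zero eigenvalues
    correspond to Pauli-forbidden states; very small nonzero eigenvalues mark
    almost forbidden states, which can generate spurious Pauli resonances.
    """
    return [(lam, state) for lam, state in eigenpairs if lam >= threshold]

# Illustrative spectrum resembling the 3/2+(2) case discussed in the text:
pairs = [(6.4e-8, "almost-forbidden"), (0.2, "s1"), (0.9, "s2"), (1.8, "s3")]
kept = remove_almost_forbidden(pairs)
# Only the three physical states survive the filter; recomputing the phase
# shifts with the filtered basis tests whether a resonance is a Pauli resonance.
```

If the resonance parameters are unchanged after the filter, as in our calculation, the resonance is not of Pauli origin.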
Third, Feshbach resonances appear in many-channel systems, provided that the
channels have different threshold energies (see the original papers of Feshbach \cite{1958AnPhy...5..357F,1962AnPhy..19..287F}, a review in Ref. \cite{2010RvMP...82.1225C}, and Ref. \cite{2019PTEP.2019l3D02M}, where a Feshbach resonance state was discovered in $^{11}$Li considered as a three-body system $^9$Li$+n+n$). A Feshbach resonance
originates from a bound state in one of the channels with a higher decay-threshold energy.
Through coupling with other channel(s) with
lower threshold energies, the bound state may then decay via the open
channel(s). If the coupling between the channels is weak, the bound
state turns into a resonance state. If the coupling is strong, the bound state can
be dissolved into the continuum.
We have the necessary conditions for
creation of the Feshbach resonance states, since the binary channels involved
in the calculations have different threshold energies (see Table
\ref{Tab:2CSystems}). In the region 1 MeV$<$E$<$6 MeV, where we discovered the majority of the resonance states, there are four closed channels which can generate bound states.
However, to prove that our resonance states are Feshbach resonances, we need to
compare the spectrum of the resonance states with the spectrum of bound states in the closed binary channels treated separately. To obtain the spectrum of the bound states, we
selected from the huge Hamiltonian matrix those blocks which describe
each channel and omitted the blocks which couple them. Thus the problem
of $N_{ch}$ coupled channels is reduced to the problem of $N_{ch}$
independent channels, and each channel is treated separately. As we pointed
out above, there is a weak coupling between the different channels of $_{\Lambda}^{9}$Be. Thus
we expect a correspondence between the energies of the bound and resonance states.
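Schematically, this decoupling amounts to zeroing the off-diagonal blocks of the channel-coupling Hamiltonian matrix and diagonalizing each diagonal block on its own. The sketch below uses a toy $4\times4$ matrix with two channels of two basis states each; the entries are illustrative, not the actual matrix elements.

```python
import math

def decouple(H, sizes):
    """Zero the blocks of H that couple different channels.

    H: square symmetric matrix (list of lists); sizes: dimension of each
    channel block. Returns a block-diagonal matrix describing independent
    single-channel problems.
    """
    n = len(H)
    chan = []  # channel index of each basis state
    for c, s in enumerate(sizes):
        chan += [c] * s
    return [[H[i][j] if chan[i] == chan[j] else 0.0 for j in range(n)]
            for i in range(n)]

def eig2(a, b, d):
    """Eigenvalues of a symmetric 2x2 block [[a, b], [b, d]]."""
    m, r = 0.5 * (a + d), math.hypot(0.5 * (a - d), b)
    return (m - r, m + r)

# Toy 2-channel Hamiltonian; the off-diagonal 2x2 blocks model the weak
# channel coupling that is dropped in the single-channel approximation.
H = [[1.0, 0.2, 0.05, 0.0],
     [0.2, 2.0, 0.0, 0.05],
     [0.05, 0.0, 3.0, 0.1],
     [0.0, 0.05, 0.1, 4.0]]
Hd = decouple(H, [2, 2])
spec1 = eig2(Hd[0][0], Hd[0][1], Hd[1][1])  # channel-1 bound-state spectrum
spec2 = eig2(Hd[2][2], Hd[2][3], Hd[3][3])  # channel-2 bound-state spectrum
```

For weak coupling, these single-channel eigenvalues should lie close to the resonance energies of the full coupled-channel problem, which is the correspondence tested in Fig. \ref{Fig:Feshbach32P}.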
The comparison is made in Fig. \ref{Fig:Feshbach32P}, where we compare the energies
of the 3/2$^{+}$(2) resonance states created by the eight coupled channels (the middle spectrum)
with the energies of bound states in the channels $_{\Lambda}^{5}$He$\left(0_{3}^{+}\right)$+$\alpha$ (the left spectrum) and $^{8}$Be$\left( 0_{3}^{+}\right)$+$\Lambda$ (the right spectrum).
We display only those bound states in all closed
channels which lie in the energy range of interest.
The spectrum of bound states in each channel is obtained with 200 oscillator
functions. The dashed lines indicate the two- and three-body thresholds.
One can see in Fig. \ref{Fig:Feshbach32P} that the energies of the resonance
states obtained in the coupled-channel approximation are very close to the
energies of the bound states calculated within a single-channel approximation.
It is necessary to recall that the threshold energies of the channels
$_{\Lambda }^{5}$He$\left( 0_{3}^{+}\right) +\alpha $ and $^{8}$Be$\left(0_{3}^{+}\right) +\Lambda $ equal 21.12 and 9.11 MeV, respectively, as
indicated in Table \ref{Tab:2CSystems}. Thus all energy levels which are
obtained in the single-channel approximations and lie below these threshold
energies are bound states.
The $_{\Lambda}^{5}$He$\left( 0_{3}^{+}\right) $+$\alpha$ closed channel
generates three resonance states, while the $^{8}$Be$\left(0_{3}^{+}\right)$+$\Lambda$
closed channel creates four resonance states. Thus, the
numerous resonance states emerging from our calculations are mainly
Feshbach resonance states. They complement the shape resonance states created
by the centrifugal and Coulomb barriers, which appear in the low-energy region.
From Fig. \ref{Fig:Feshbach32P} one may conclude that the coupling of the open channels with the $_{\Lambda}^{5}$He$\left( 0_{3}^{+}\right)$+$\alpha$ channel is stronger than with the $^{8}$Be$\left(0_{3}^{+}\right)$+$\Lambda$ channel, since the resonance states are shifted noticeably with respect to the corresponding bound states in the first channel but not with respect to the states of the second channel.
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=\columnwidth]{Figure5.pdf}%
\caption{Spectrum of the 3/2$^{+}$(2) resonance states in $^9_\Lambda$Be obtained in the coupled-channel approximation compared with the bound states created by closed binary channels $_{\Lambda}^{5}$He$\left(0_{3}^{+}\right)
$+$\alpha$ (the left spectrum) and $^{8}$Be$\left( 0_{3}^{+}\right) $+$\Lambda$ (the right spectrum) treated separately. $P1$ set of the input parameters was used.}%
\label{Fig:Feshbach32P}%
\end{center}
\end{figure}
Summarizing, we conclude that the location of the narrow resonances of
the $^9_\Lambda$Be hypernucleus in the energy region above the $^8$Be$(0_1^+)+\Lambda$
threshold depends on the pseudo-bound states of the $^8$Be and $^5_\Lambda$He
two-cluster subsystems which have been taken into account. We can assume that
the experimentally observed levels of the two-body subsystems of the considered
hypernucleus should be accounted for first of all. For the $^9_\Lambda$Be hypernucleus
these are the $0^+$ ground state and the $2^+$ and $4^+$ resonance states of the $^8$Be nucleus,
as well as the $0^+$ ground state of the $^5_\Lambda$He hypernucleus. In the present paper, we properly accounted for the ground states of both two-body subsystems in
the $^9_\Lambda$Be hypernucleus. In addition, we allowed both subsystems to be
polarized without changing their angular momenta. There are strong grounds to believe that
the location of the $^9_\Lambda$Be resonances lying below the $^8$Be$(0_1^+)+\Lambda$
threshold remains unaffected by considering the $2^+$ and $4^+$ pseudo-bound states of the $^8$Be
subsystem instead of the already included $0^+_2$, $0^+_3$ and $0^+_4$ states. However,
this could change the energies of the $^9_\Lambda$Be resonance states lying in the energy range
$E>2$ MeV above the three-cluster threshold.
\section{Conclusions}
\label{sec:concl}
We have applied a microscopic three-cluster model to study the structure of the $_{\Lambda}^{9}$Be hypernucleus. The model treats the Pauli principle correctly and accounts for polarization of the two-cluster subsystems of the hypernucleus when the third cluster is close, by reducing the three-cluster problem to a coupled many-channel two-cluster problem.
A Gaussian basis was used for the description of the two-cluster subsystems, while a harmonic oscillator basis was invoked for the relative motion of the two-cluster subsystem and the remaining cluster.
The hypernucleus $_{\Lambda}^{9}$Be was considered as a three-cluster system
comprising two alpha particles and a $\Lambda$-hyperon. Within the present
model the three-cluster configuration was projected on two binary
configurations, $_{\Lambda}^{5}$He+$\alpha$ and $^{8}$Be+$\Lambda$, with $_{\Lambda}^{5}$He and $^{8}$Be described as two-cluster systems. A finite number of Gaussian functions was used to describe the $\alpha$-$\Lambda$ and $\alpha$-$\alpha$ binary subsystems. This resulted in a discretization of the continuous spectra of $_{\Lambda}^{5}$He and $^{8}$Be. The set of pseudo-bound states of $_{\Lambda}^{5}$He and $^{8}$Be allowed us to take into account the polarizability of these systems.
The spectrum of bound and resonance states in the $_{\Lambda}^{9}$Be hypernucleus was studied in detail. Calculations have been performed with two sets of input parameters. Both sets were selected to reproduce the ground-state energy of $_{\Lambda}^{9}$Be and to minimize the energy of the $^8$Be two-cluster subsystem of this hypernucleus. At the same time, the $P2$ input parameter set was determined to reproduce the binding energy of $_{\Lambda}^{5}$He with respect to its $\alpha+\Lambda$ decay threshold, while the $P1$ input parameter set minimizes the energy of $_{\Lambda}^{5}$He, making it slightly overbound.
It was shown that the cluster polarizability plays a significant role in the formation of bound and resonance states of this hypernucleus. Moreover, polarization of the two-cluster subsystems upon interaction with the third cluster is responsible for the creation of a large number of resonance states, many of which are very narrow, with total widths below 100 keV. We have shown that the majority of the narrow resonances in the energy range 1 MeV$<E<$6 MeV are Feshbach resonances generated by the weak coupling of different binary channels in $^9_\Lambda$Be. The $P2$ set of input parameters generates fewer resonance states than the $P1$ set, owing to the higher energies of the pseudo-bound states of $_{\Lambda}^{5}$He.
There is fairly good agreement between our results and the available experimental
data. However, in our calculations with the first set of input parameters, $P1$, the first 3/2$^{+}$(2) excited state of $^9_\Lambda$Be is a resonance state, since it is located above the lowest $_{\Lambda}^{5}$He+$\alpha$ decay threshold of $^9_\Lambda$Be. This can be ascribed to the selected nucleon-nucleon and nucleon-hyperon potentials, which slightly overbind the $^5_\Lambda$He subsystem.
At the same time, the $P2$ set of input parameters makes the first 3/2$^{+}$(2) and 5/2$^{+}$(2) states bound, in accordance with experiment. The order of the lowest $5/2^+$(2) and $3/2^+$(2) levels corresponds to the experimental data for both sets of input parameters. The energies of the $3/2_3^+$ and $5/2_3^+$ resonance states predicted by our model correspond very well to the experimentally observed level at $E=2.89$ MeV from Refs. \cite{2005NuPhA.754...58T, 2002PhRvL..88h2501A}.
\section{Acknowledgment}
This work was supported in part by the Program of Fundamental Research of
the Physics and Astronomy Department of the National Academy of Sciences of
Ukraine (Project No. 0117U000239).
\section{Introduction}
\label{intro}
As neural language models have become better at generating text, it has become increasingly hard to distinguish human-written text from generated text, and manual detection of such texts is no longer feasible. For that reason, it is desirable to build a system that can automatically detect generated text.
The proposed system will use various features extracted from the text, such as length, punctuation, and word choice, to determine whether the text is human-written or machine-generated. The accuracy of this system can be improved by using machine learning algorithms, which learn how humans generate texts and then use those features for detection.
There are many ways to build such a system, but probably the most reliable one is based on machine learning algorithms. These algorithms can be trained on a large number of examples of both human-written and machine-generated texts. After being trained, they should be able to identify which texts are machine-generated with high accuracy.
This approach already works in other areas, such as spam detection. Some early experiments have shown promising results and indicate that this approach works well for the detection of generated text.
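As a hedged illustration of the feature-based idea, the sketch below extracts a few simple surface features (word count, punctuation density, mean word length). The feature set and the function name are our own illustrative choices; a real detector would feed such vectors into a trained classifier.

```python
import string

def surface_features(text):
    """Extract simple surface features of a text for a downstream classifier.

    Returns (number of words, punctuation density, mean word length),
    loosely mirroring the length / punctuation / word-choice cues
    mentioned in the text.
    """
    words = text.split()
    n_words = len(words)
    n_punct = sum(ch in string.punctuation for ch in text)
    punct_density = n_punct / max(len(text), 1)
    mean_word_len = sum(len(w.strip(string.punctuation)) for w in words) / max(n_words, 1)
    return (n_words, punct_density, mean_word_len)

feats = surface_features(
    "Minstroy outlined ways to reduce the energy intensity of the economy.")
```

Such hand-crafted features are a weak baseline on their own; the models described later replace them with learned representations.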
The DIALOG-22 shared task of RuATD 2022 had two tracks: binary classification and multiclass classification. In this report, we describe the data, how we handled it, the models we used, and the ensembling technique.
\section{Task Definition}
For the binary classification task $F_{binary}$, we frame the generated text detection problem as follows: given a piece of text $X$, label it as either human-written or machine-generated $y_{binary} = \{H,M\}$.
$$F_{binary}: X \rightarrow y_{binary}$$
For the multiclass classification task $F_{multiclass}$, we set up the problem as follows: given a piece of text $X$, label it as one of the 14 classes representing either a human author or one of 13 generative models,
$y_{multiclass}$ = \{
\textit{
M2M-100, Human, OPUS-MT, M-BART50, ruGPT3-Medium, ruGPT3-Small,
mT5-Large, ruGPT3-Large, ruT5-Base-Multitask, mT5-Small,
ruT5-Base, ruGPT2-Large, M-BART, ruT5-Large
}
\}
$$F_{multiclass}: X \rightarrow y_{multiclass}$$
\section{Datasets}
The provided datasets offer train and test splits. Part of the data was generated automatically by different generative models: various language models fine-tuned on different tasks (machine translation, paraphrasing, summarization, simplification, and unconditional text generation) were used to generate texts. The texts written by humans were collected from open sources in different domains \cite{ruatd-2022-binary}, \cite{ruatd-2022-multiclass}.
\begin{table}[h!]
\selectlanguage{russian}
\begin{center}
\begin{tabular}{|l|c|}
\hline \bf Text & \bf Class\\ \hline
Обустройство тротуаров, мостовых (в том числе тротуарной плиткой). & H \\
Минстрой обозначил способы снижения энергоемкости российской экономики. & M \\
В конце 1873 года военный суд вынес решение по делу Франциска Ахиллы Базейн. & M \\
увеличение правовой грамотности и развитие правосознания граждан. & H \\
\hline
\end{tabular}
\end{center}
\selectlanguage{british}
\caption{Example of the binary classification data.}
\label{tabFonts_binary}
\end{table}
\begin{table}[h!]
\selectlanguage{russian}
\begin{center}
\begin{tabular}{|l|c|}
\hline \bf Text & \bf Class\\ \hline
Прочла автобиографию Каутского, Одесса, 1905. & Human \\
Вы не можете быть в печи и в муulin. & M-BART50 \\
Вторая попытка привела к тому же результату. & OPUS-MT \\
Сколько учеников в вашем классе? & M2M-100 \\
\hline
\end{tabular}
\end{center}
\selectlanguage{british}
\caption{Example of the multiclass classification data.}
\label{tabFonts_multiclass}
\end{table}
\section{Related Works}
In this section, we discuss various methods for detecting machine-generated texts.
Over the past years, many approaches to generated-text detection have appeared. The latest works are usually based on transformer models, either fine-tuned on the proposed task or used to obtain probability distributions on which the decision is based \cite{ippolito2019automatic}. Here we list some examples of different methods:
\begin{itemize}
\item First, the mean likelihood over all machine-generated sequences is calculated, then the mean over human-written ones. If the likelihood of a text according to some language model is closer to the machine-generated mean, the text is considered generated \cite{solaiman2019release};
\item GLTR \cite{gehrmann2019gltr} describes a method that uses a language model to compute the probability distribution of the next word given the preceding sequence. For each position, we obtain the rank of the ground-truth next word within this distribution. These ranks are then displayed on a histogram; based on the distribution over bins, we can decide whether the text is generated;
\item BERT fine-tuning on the classification task. Given labels indicating whether a text is machine-generated, we can fine-tune the language model to predict them \cite{solaiman2019release};
\item It is also possible to use human-machine collaboration. The Real or Fake Text (RoFT) tool provides a game-like interface for humans to decide at what point a text begins to be written by a computer \cite{dugan2020roft}.
\end{itemize}
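The rank-histogram idea behind GLTR can be sketched in a few lines. The snippet below is a toy illustration, not the actual GLTR implementation: it assumes a fixed next-token distribution \texttt{probs} (a real detector would condition on the prefix with a language model) and buckets the rank of each observed token, as GLTR does.

```python
from collections import Counter

def token_ranks(tokens, probs):
    """Rank of each observed token under the model's next-token
    distribution (1 = most likely token)."""
    ordering = sorted(probs, key=probs.get, reverse=True)
    return [ordering.index(tok) + 1 for tok in tokens]

def rank_histogram(ranks, bins=(10, 100, 1000)):
    """Bucket ranks GLTR-style: top-10, top-100, top-1000, rest.
    Human-written text tends to put more mass in the rarer buckets."""
    hist = Counter()
    for r in ranks:
        bucket = next((i for i, b in enumerate(bins) if r <= b), len(bins))
        hist[bucket] += 1
    return dict(hist)
```

A text whose tokens fall almost entirely in the top-10 bucket is a candidate for being machine-generated.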
More approaches are described in the survey \cite{jawahar2020automatic}.
\section{Experiment Setup}
In this section, we present the experiment configurations we use to solve binary and multiclass tasks.
\subsection{Data preprocessing}
We decided not to perform any preprocessing of the text itself. Regarding the data split:
\begin{itemize}
\item \textbf{Binary classification}. We concatenated the train set with the validation set, which allowed us to perform 5-fold cross-validation;
\item \textbf{Multiclass classification}. We took the data as is without changing anything.
\end{itemize}
\subsection{Models}
All models we used for fine-tuning were obtained from the Transformers library \cite{wolf2020transformers}. We describe these models below:
\begin{itemize}
\item \textbf{sberbank-ai/sbert\_large\_nlu\_ru}\cite{habr_2020}, \textbf{sberbank-ai/ruBert-large}, \textbf{DeepPavlov/rubert-base-cased} are BERT models \cite{devlin2018bert}. Sber-ai models were fine-tuned on a closed dataset collected by their research group \cite{sberbank-ai}. DeepPavlov's RuBERT was fine-tuned on the Russian part of Wikipedia and news data.
\item \textbf{IlyaGusev/mbart\_ru\_sum\_gazeta} is an mBART model fine-tuned on a dataset for summarization of Russian news \cite{gusev2020dataset}. The original mBART model was pretrained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches focused only on the encoder, the decoder, or reconstructing parts of the text \cite{liu2020multilingual}.
\item \textbf{MoritzLaurer/mDeBERTa-v3-base-mnli-xnli} \cite{he2021debertav3} is a multilingual model suitable for zero-shot classification. The original model was pre-trained by Microsoft on the CC100 multilingual dataset \cite{wenzek-etal-2020-ccnet}.
This model was fine-tuned on the XNLI development dataset and the MNLI train dataset. The XNLI development set consists of translated texts for each of the 15 languages \cite{conneau2018xnli}.
\item \textbf{DeepPavlov/xlm-roberta-large-en-ru-mnli} is an XLM-RoBERTa model \cite{conneau2019unsupervised} which was fine-tuned on the English-Russian part of the MNLI \cite{williams2017broad} dataset.
\end{itemize}
\subsection{Binary classification (with ensembling technique)}
Five chosen models were used in the experiment: \textbf{sberbank-ai/sbert\_large\_nlu\_ru}, \textbf{sberbank-ai/ruBert-large}, \textbf{IlyaGusev/mbart\_ru\_sum\_gazeta}, \textbf{MoritzLaurer/mDeBERTa-v3-base-mnli-xnli}, \textbf{DeepPavlov/xlm-roberta-large-en-ru-mnli}.
The training steps for each of these models were as follows:
\begin{enumerate}
\item We split our training dataset into 5 non-overlapping folds and performed cross-validation (Figure \ref{fig:architcture});
\item for each validation fold, we predicted the target, thereby obtaining out-of-fold predictions for the whole training set;
\item for the test set, we predicted the target with each of the 5 cross-validation models and averaged the results (marked in red in the figure).
\end{enumerate}
As a meta-model, we chose Logistic Regression, trained on the out-of-fold predictions. We then predicted the final results for the test set.
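The out-of-fold data flow can be sketched generically. In the sketch below, \texttt{fit} and \texttt{predict} are placeholders standing in for training and applying one base model (in our case, a fine-tuned transformer); the out-of-fold vector is what the Logistic Regression meta-model would be trained on, and the $k$ fold models' test-time predictions are averaged.

```python
def oof_predictions(X, y, splits, fit, predict):
    """Out-of-fold predictions over the training set, plus the k
    fold models kept for test-time averaging."""
    oof = [None] * len(X)
    models = []
    for train_idx, val_idx in splits:
        model = fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        models.append(model)
        for i in val_idx:
            oof[i] = predict(model, X[i])
    return oof, models

def averaged_test_prediction(models, predict, x):
    """Average the fold models' predictions on one test example."""
    return sum(predict(m, x) for m in models) / len(models)
```

Each training example receives exactly one prediction, made by a model that never saw it, which is what makes the meta-model's training data unbiased.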
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{images/drawio.png}
\caption{Ensembling scheme}
\label{fig:architcture}
\end{figure}
\subsection{Multiclass classification}
For the multiclass classification we chose these models: \textbf{DeepPavlov/rubert-base-cased}, \textbf{DeepPavlov/xlm-roberta-large-en-ru},
\textbf{IlyaGusev/mbart\_ru\_sum\_gazeta}. We fine-tuned these models on the provided train set only, without cross-validation.
\section{Results}
Table \ref{binaryTable} shows each model's accuracy score on the binary classification task. The multiclass classification accuracy is shown in Table \ref{multiclass_classification}. On both tasks, the best-performing single model was \textbf{DeepPavlov/xlm-roberta-large-en-ru-mnli}. In the binary task, we additionally ensembled the models, and the ensemble showed the best accuracy.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|c|}
\hline \bf Model name & \bf Accuracy\\ \hline
sberbank-ai/sbert\_large\_nlu\_ru & 0.79986 $\pm$ 0.003\\
sberbank-ai/ruBert-large & 0.80154 $\pm$ 0.002\\
IlyaGusev/mbart\_ru\_sum\_gazeta & 0.80566 $\pm$ 0.001\\
MoritzLaurer/mDeBERTa-v3-base-mnli-xnli & 0.80710 $\pm$ 0.001\\
\textbf{DeepPavlov/xlm-roberta-large-en-ru-mnli} & \textbf{0.81708 $\pm$ 0.002} \\ \hline
\textbf{Ensemble} & \textbf{0.82995}\footnote{\href{https://www.kaggle.com/competitions/ruatd-2022-bi/leaderboard}{Kaggle private leaderboard link}}\\ \hline
\end{tabular}
\end{center}
\caption{Results for binary classification for different models}
\label{binaryTable}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline \bf Model name & \bf Accuracy(validation) & \bf Accuracy(kaggle public/private) \\ \hline
IlyaGusev/mbart\_ru\_sum\_gazeta & 0.6142 & 0.61459/0.61092 \\
DeepPavlov/rubert-base-cased & 0.6045 & 0.60433/0.60472 \\
\textbf{DeepPavlov/xlm-roberta-large-en-ru-mnli} & \textbf{0.6242} & \textbf{0.62856/0.62644} \\ \hline
\end{tabular}
\end{center}
\caption{Results for multiclass classification for different models}
\label{multiclass_classification}
\end{table}
\section{Conclusion}
In this paper, we described our pipeline for the DIALOG-22 RuATD challenge. Our solution achieved 1st place in the binary classification task using ensembling techniques and 4th place in the multiclass classification task using only a single model. However, the proposed solution requires a lot of computational power, so it cannot be used in real-time systems to detect generated texts. It also shows that methods for distinguishing generated texts from human-written ones still need to be improved.
\section{Introduction and statements of the Theorems}
A classical theorem of Linnik \cite{{Linnik*44a}, {Linnik*44b}} states
that there exists an absolute constant $b >0$ such that, given a positive
integer $q$ and an invertible residue class $a \bmod q$, the smallest
prime in the aforementioned residue class is at most $q^{b}$. Several
efforts have been made to bound $b$ from above when $q$ is large. The
best result to date is that of Xylouris
\cite{{Xylouris*11a},{Xylouris*11}} who proved that $b$ can be taken
to be $5$ provided $q$ is large enough.
Let ${\mathbf K}$ be a number field, $\O_{\mathbf K}$ be its ring
of integers and let ${\mathfrak q}$ be an integral ideal of ${\mathbf K}$. Let us
recall the definition of the (narrow) ray class group.
Let $I({\mathfrak q})$ be the group of fractional ideals of ${\mathbf K}$
which are co-prime to ${\mathfrak q}$ and $P_{{\mathfrak q}}$ be the subgroup of $I({\mathfrak q})$
consisting of principal ideals $(\alpha)$ satisfying
$v_\mathfrak{p}(\alpha -1) \ge v_\mathfrak{p} ({\mathfrak q})$ for all prime
ideals $\mathfrak{p}$ dividing ${\mathfrak q}$ and $\sigma(\alpha) >0$ for all
embeddings $\sigma$ of ${\mathbf K}$ in ${\mathbb R}$. This is the ray class group
associated to the modulus ${\mathfrak q} {\mathfrak q}_{\infty}$, where ${\mathfrak q}_{\infty}$
is the set of all real places of ${\mathbf K}$. However for notational convenience,
we shall simply refer to this as the narrow ray class group
of modulus ${\mathfrak q}$. We set
$H_{{\mathfrak q}}({\mathbf K})=I({\mathfrak q})/P_{{\mathfrak q}}$. When ${\mathfrak q}=\O_{\mathbf K}$, the group $H_{{\mathfrak q}}({\mathbf K})$ is
the usual class group in the narrow sense.
One way to generalize Linnik's problem to number fields is the following:
Given an integral ideal ${\mathfrak q}$ in $\O_{\mathbf K}$ and a class $C$ of $H_{\mathfrak q}({\mathbf K})$, find a
prime ideal $\P \in C$ such that ${\mathfrak N}(\P) \leq A~{\mathfrak N}({\mathfrak q})^b$. This problem
was first considered by Fogels \cite{Fogels*64a} which was then refined by
Weiss \cite{{Weiss*80}, {Weiss*83a}} who proved the following theorem.
\begin{thm}{\rm (Weiss \cite{{Weiss*80},{Weiss*83a}} 1983)}\label{t1}
There exists an effectively computable constant $a$ such that the
following holds. In any number field ${\mathbf K}$ of discriminant $d_{\mathbf K}$,
every element of $H_{{\mathfrak q}}({\mathbf K})$ contains a prime ideal $\P$ of
${\mathbf K}$ such that ${\mathfrak N}\P \le (|d_{\mathbf K}| {\mathfrak N}{\mathfrak q})^a$.
\end{thm}
Here and thereafter, $\mathfrak{N}$ denotes the absolute norm of ${\mathbf K}$ over ${\mathbb Q}$.
Note that the exponent $a$ is not explicit.
There is another generalization of Linnik's problem which was initiated by
Lagarias-Montgomery-Odlyzko~\cite{Lagarias-Montgomery-Odlyzko*79}.
By class field theory, finding a prime ideal in a class of the ray
class group $H_{{\mathfrak q}}({\mathbf K})$ is the same as finding a prime ideal with a
given Artin symbol in the Galois group of the corresponding ray class
field ${\mathbf K}_{{\mathfrak q}}$ over the number field ${\mathbf K}$. In this set-up,
Lagarias, Montgomery and Odlyzko derived an upper bound
for the least norm of such a prime ideal in terms of $|d_{{\mathbf K}_{\mathfrak q} / {\mathbb Q}}|^{a}$
with $a$ inexplicit. For explicit $a$, see the works of
Ahn-Kwon \cite{Ahn-Kwon*19}, Kadiri-Wong \cite{KW}, Thorner-Zaman
\cite{Thorner-Zaman*17} and Zaman \cite{Zaman*16}.
All these bounds are however large in terms of the
exponent of the norm of the modulus ${\mathfrak q}$. For example, if one
takes a rational prime $p$, then
the ray class field is the $p$-th cyclotomic field whose
discriminant has absolute value $p^{p-2}$.
In this article, we would like to show that if one considers a product of three
prime ideals in place of a single prime ideal, one can find prime ideals with
much smaller norm when compared to the norm of the modulus.
Furthermore, we shall make the dependence on the ground field
${\mathbf K}$ explicit.
Our notation recalled below is classical. In brief, $n_{\mathbf K}$, $h_{\mathbf K}$, $R_{\mathbf K}$ and
$d_{\mathbf K}$ are respectively the degree, the class number, the regulator
and the discriminant of a number field ${\mathbf K}$ while $\alpha_{\mathbf K}$ is the
residue of the Dedekind zeta-function at~1. In this set up, we have
the following theorem.
\begin{thm}\label{mainthm}
Let ${\mathbf K}$ be a number field, ${\mathfrak q}$ be an integral ideal of
${\mathbf K}$. Set
\begin{equation}
\label{deftK}
t({\mathbf K})=\max\Bigl(n_{{\mathbf K}}^{ 48n_{{\mathbf K}}^3 }|d_{\mathbf K}|^6(R_{\mathbf K} h_{\mathbf K})^{n_{{\mathbf K}}}, ~\exp(|d_{\mathbf K}|^{30})\Bigr).
\end{equation}
Each element of $H_{{\mathfrak q}}({\mathbf K})$
contains a product of three degree one primes, each of norm
at most $t({\mathbf K})^3{\mathfrak N}{\mathfrak q}^3$.
\end{thm}
As the case ${\mathbf K}=\mathbb{Q}$ has already been treated in
\cite{Ramare-Walker*16,Ramare-Serra-Srivastav*18} by Ramar\'e
together with Serra, Srivastav and Walker, we
may assume that ${\mathbf K}\neq\mathbb{Q}$ and we do so hereafter.
\begin{rmk}
Three features of this result should be underlined: the
dependence on the field ${\mathbf K}$ is completely explicit in terms of
classical invariants of the field, and even numerically
so. However, we did not strive to get small constants. The second point is
that the exponent in ${\mathfrak q}$ is relatively small. The
third point is more technical; the dependence in ${\mathfrak q}$ has the form
${\mathfrak N}{\mathfrak q}^3$ and not ${\mathfrak N}{\mathfrak q}^3$ times a power of $\log{\mathfrak N}{\mathfrak q}$ and this precision
comes from a much more refined treatment. The dependence in ${\mathbf K}$ is most
probably dominated by the term $\exp(|d_{\mathbf K}|^{30})$ that comes from a
possible real zero abnormally close to~1. We control such a zero by
Lemma~\ref{AKzero} due to Kadiri and Wong \cite{KW} (see also \cite{Ahn-Kwon*19}).
\end{rmk}
Theorem~\ref{mainthm} relies on four ingredients of independent
interest. We first need a Brun-Titchmarsh Theorem for elements
of $H_{\mathfrak q}({\mathbf K})$.
\begin{thm}\label{bt-tri}
Let $\b, {\mathfrak q}$ be integral ideals with $(\b, {\mathfrak q}) = \O_{{\mathbf K}}$ and $[\b] \in H_{{\mathfrak q}}({\mathbf K})$.
Then
\begin{equation*}
\sum_{\substack{\P \in [\b],\\ {\mathfrak N}\P\le X}} 1
~\le~
\frac{2 X} { |H_{\mathfrak q}({\mathbf K})| \log (\frac{X}{u({\mathbf K}) {\mathfrak N}{\mathfrak q}})}~,
\quad u({\mathbf K})= n_{{\mathbf K}}^{ 48n_{{\mathbf K}}^3 }|d_{\mathbf K}|^6(R_{\mathbf K} h_{\mathbf K})^{n_{{\mathbf K}}}
\end{equation*}
provided the denominator is~$>0$.
\end{thm}
This is the number field analogue of the classical Brun-Titchmarsh
Theorem for the initial interval, see for instance \cite[Theorem
2]{Montgomery-Vaughan*73} by Montgomery and Vaughan. A
precursor of this result can be found in \cite[Theorem
4]{Hinz-Lodemann*94} by Hinz and Lodemann, though without the
dependence in ${\mathbf K}$ and with a slightly worse upper bound. Like these
two authors, we rely on the Selberg sieve, though with an improved
treatment of the error term, see Theorem~\ref{absolute}, and on an
estimate for the number of integral ideals recalled as
Theorem~\ref{asymfinal} below.
Our second ingredient is a Brun-Titchmarsh Theorem valid for cosets,
the analogue of \cite[Theorem 1.2]{Ramare-Serra-Srivastav*18} when
${\mathbf K}=\mathbb{Q}$, and which is the topic of Section~\ref{btc}. We
notice here that in Theorem~\ref{bt-tri}, we \emph{parametrize} the
class while for Theorem~\ref{bt}, we \emph{capture} elements of a
subgroup by using multiplicative characters.
Our third ingredient is less novel.
\begin{thm}\label{degreeoneprime}
Every element in $H_{\mathfrak q}({\mathbf K})$ contains an integral ideal $\a$ such that
${\mathfrak N}\a \le 10^{25n_{\mathbf K}} n_{{\mathbf K}}^{7n_{{\mathbf K}}} |d_{\mathbf K}|^{4} {\mathfrak N}{\mathfrak q}^3$
and $\a$ is a product of degree one primes.
\end{thm}
\begin{thm}\label{degreeoneprimebis}
Every element in $H_{\mathfrak q}({\mathbf K})$ contains an integral ideal $\a$ such that
\begin{equation*}
{\mathfrak N}\a ~\le~ F_1({\mathfrak q}){\mathfrak N}{\mathfrak q} \, \log(3F({\mathfrak q}))^{n_{\mathbf K}^2}\,
\log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^2,
\quad
B({\mathbf K})=n_{\mathbf K}^{50n_{\mathbf K}^3} (E({\mathbf K})\sqrt{|d_{\mathbf K}|})^{n_{\mathbf K}},
\end{equation*}
and $\a$ is a product of degree one primes. Here
$F({\mathfrak q})=2^{r_1}h_{\mathbf K} \phi({\mathfrak q})/ h_{{\mathbf K},{\mathfrak q}}$, $F_1({\mathfrak q})=2^{r_1}h_{\mathbf K} {\mathfrak N}{\mathfrak q}$ and
the constant $E({\mathbf K})$ is equal to $1000 n_{{\mathbf K}}^{ 12n_{{\mathbf K}}^2 }(R_{\mathbf K}/|\mu_{\mathbf K}|)^{1/n_{\mathbf K}}
\bigl[\log\bigl((2n_{{\mathbf K}})^{4n_{\mathbf K}}R_{{\mathbf K}}/|\mu_{\mathbf K}| \bigr)\bigr]^{n_{{\mathbf K}}}$.
\end{thm}
Theorem~\ref{degreeoneprime} is enough for our purpose, but our proof gives
only the bound
${\mathfrak N}\a\ll_{{\mathbf K},\varepsilon} {\mathfrak N}{\mathfrak q}~|H_{\mathfrak q}({\mathbf K})|^{2(1+ \varepsilon)}$,
while $|H_{\mathfrak q}({\mathbf K})|$ should be enough.
By using techniques from sieve methods, Theorem~\ref{degreeoneprimebis} corrects
this defect as far as the dependence in ${\mathfrak N}{\mathfrak q}$ is concerned, but the
dependence in $d_{\mathbf K}$ becomes much worse.
Our fourth and last ingredient shows that quadratic subgroups
of~$H_{\mathfrak q}({\mathbf K})$ contain a small degree one prime ideal.
\begin{thm}\label{primeinkernel}
Let $\chi$ be a quadratic character on $H_{\mathfrak q}({\mathbf K})$. There exists a
prime ideal $\P$ of degree one in ${\mathbf K}$ such that $(\P, {\mathfrak q}) = \O_{\mathbf K}$,
$\chi(\P)=1$ and
${\mathfrak N}\P \le 8 \cdot (10^{31} n_{\mathbf K}^7)^{n_{\mathbf K}} |d_{\mathbf K}|^4 {\mathfrak N}{\mathfrak q}^2$.
\end{thm}
This bound is modest as far as the exponent of ${\mathfrak N}{\mathfrak q}$ is
concerned, but it is completely explicit. When ${\mathbf K}=\mathbb{Q}$, the
question has been treated by Linnik and Vinogradov
in~\cite{Vinogradov-Linnik*66}. Their better exponent comes from the
use of the Burgess bounds and Siegel's Theorem, while we only rely on convexity (or
equivalently, on a Polya-Vinogradov inequality). The exponent we get
is $3/2+\varepsilon$ for any positive $\varepsilon$.
\section{Notation and Preliminaries}
\label{Notation}
\subsection*{Notation}
Let ${\mathbf K}\neq \mathbb{Q}$ be a number field with discriminant
$|d_{\mathbf K}|\ge3$ (by Minkowski's bound).
Also let us set $n_{\mathbf K} = [{\mathbf K} : {\mathbb Q}]\ge2$ and ${\mathfrak q}$ be an (integral) ideal of ${\mathbf K}$.
The number of real embeddings of ${\mathbf K}$ is denoted by $r_1$
whereas the number of complex ones is denoted by $2r_2$.
The ring of integers of ${\mathbf K}$ is denoted by $\O_{\mathbf K}$,
the narrow ray class group modulo ${\mathfrak q}$ is denoted by $H_{\mathfrak q}({\mathbf K})$
and the (absolute) norm is denoted by ${\mathfrak N}$. Throughout the
article $\P$ will denote a prime ideal in $\O_{\mathbf K}$, $p$ will
denote a rational prime number and for any integral ideals
$\a, \b$, their lcm and gcd in $\O_{{\mathbf K}}$ will be denoted
by $[\a,\b]$ and $(\a,\b)$ respectively. Further
an element of $H_{\mathfrak q}({\mathbf K})$ containing an integral ideal
$\a$ will be denoted by $[\a]$.
A sum over degree one prime ideals will be denoted by $\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\P}$,
and $\mathop{\prod\nolimits^{\hbox to 0pt{$\flat$\hss}}}_\P$ will denote a product over degree one prime
ideals. Similarly $\mathop{\sum\nolimits^{\hbox to 0pt{$\sharp$\hss}}}_{\P}$ and $\mathop{\prod\nolimits^{\hbox to 0pt{$\sharp$\hss}}}_{\P}$ denote respectively
a sum and a product over primes $\P$ that are \emph{not} of degree
one. As a generalisation, the sign $\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\a}$ denotes a summation
over integral ideals $\a$ whose prime factors are all of degree one.
\subsection*{Smoothings}\label{smooth}
We shall work with a generic smoothing function
$w : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ with the following properties:
\begin{itemize}
\item
$w(t)=0$ when $t \ge1$ or $t\le 1/10$,
\item
$w$ does not vanish identically and $|w(t)|\le 1$ throughout,
\item
$w$ is at least $n_{\mathbf K} + 3$ times continuously differentiable,
\item
For every $m\le n_{\mathbf K}+2$, we have
$w^{(m)}(\frac{1}{10})=w^{(m)}(1)=0$, where
$w^{(m)}$ denotes the $m$-th derivative of~$w$.
\end{itemize}
We will henceforth refer to this function as `the smoothing function'.
Its Mellin transform $\check{w}$ is defined by
\begin{equation}
\label{defcheckw}
\check{w}(s)=\int_0^\infty w(t) t^{s-1}dt.
\end{equation}
We show in \lemref{smoothdecay} that this analytic function
decreases at least like $1/(1+|s|)^{n_{\mathbf K} + 3}$ uniformly in any vertical
strip.
For applications, we select the special function $w_0$ described below.
Let
$$
f_k(t)
~=~
\begin{cases}
( 4t(1-t) )^k & \text{ when }t \in [0, 1], \\
0 & \text{ otherwise}
\end{cases}
$$
be as defined on page 348 of \cite{Ramare*17-2} with $k=n_{{\mathbf K}}+4$. We set
\begin{equation}
\label{defw0}
w_0(t)=f_{n_{{\mathbf K}}+4}\left(\tfrac{10}{9}(t-\tfrac1{10})\right).
\end{equation}
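As a quick numerical sanity check of this construction (illustrative only, with the hypothetical choice $n_{\mathbf K}=2$, hence $k=6$), one may verify that $w_0$ is supported on $[1/10,1]$, vanishes at both endpoints, and peaks with value $1$ at the midpoint $t=0.55$:

```python
def f(t, k):
    """f_k(t) = (4 t (1 - t))^k on [0, 1], zero elsewhere."""
    return (4 * t * (1 - t)) ** k if 0 <= t <= 1 else 0.0

def w0(t, n_K=2):
    """w0(t) = f_{n_K+4}((10/9)(t - 1/10)): f_k rescaled to [1/10, 1]."""
    return f(10 * (t - 0.1) / 9, n_K + 4)
```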
\begin{lem}
\label{studyw0}
We have
$\displaystyle
\|w_0\|_\infty=1,\
10\sqrt{n_{{\mathbf K}}}\,\check{w}_0(1)\in[2, 15],\
\|w_0'\|_1=2,\
\|w_0^{(n_{\mathbf K}+3)}\|_\infty\le 4(40n_{\mathbf K})^{n_{\mathbf K} + 3}$.
\end{lem}
\begin{proof}
Indeed, by \cite[Lemma 2.2]{Ramare*17-2}, we have
$\check{w}_0(1)=\|w_0\|_1=\frac{9}{10}\frac{2^{2k}\cdot k!^2}{(2k+1)!}$
with $k=n_{\mathbf K}+4$. Applying the classical explicit Stirling's formula
\begin{equation}
\label{ExplicitStirling}
n!=(n/e)^n\sqrt{2\pi n}~e^{\frac{\theta_+(n)}{12 n}}, \qquad(\theta_+(n) \in [0,1]),
\end{equation}
we find that
\begin{equation*}
\|w_0\|_1
=\frac{9}{10}\frac{\sqrt{\pi k}}{(2k+1)}e^{\frac{\theta_+(k)}{6k} ~-~ \frac{\theta_+(2k)}{24k}}
=\frac{\xi(n_{{\mathbf K}})}{10\sqrt{n_{{\mathbf K}}}\,}, \qquad 2\le \xi(n_{{\mathbf K}})\le 15.
\end{equation*}
Next we check that
\begin{equation*}
\|f_k'\|_1=2\int_0^{1/2}f'_k(t)dt=2.
\end{equation*}
Also, by Leibniz Formula, we find that
\begin{equation*}
f_k^{(k-1)}(t)=4^k\sum_{0\le \ell\le
k-1}\binom{k-1}{\ell}\frac{k!}{(k-\ell)!}t^{k-\ell}
(-1)^{k-1-\ell}\frac{(k-1)!}{(k-(k-1-\ell))!}(1-t)^{k-(k-1-\ell)}
\end{equation*}
so that
$$
|w_0^{(k-1)}(t)|
~\le~
\left(\frac{10}{9}\right)^{k-1} 4^k(k-1)! ~\sum_{0 \le \ell \le k-1}\binom{k-1}{\ell}\binom{k}{k-\ell}
~=~ \left(\frac{10}{9}\right)^{k-1} 4^k(k-1)! \binom{2k-1}{k}
$$
by Vandermonde's identity. We bound this almost trivially:
\begin{equation*}
\left(\frac{10}{9}\right)^{k-1}4^k(k-1)!\binom{2k-1}{k}
~=~
\left(\frac{10}{9}\right)^{k-1}4^k\frac{(2k-1)!}{k!}\le 4^k(2k-1)^{k-1}
\le 4(40n_{\mathbf K})^{n_{\mathbf K} + 3}.
\end{equation*}
\end{proof}
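The closed form $\check{w}_0(1)=\frac{9}{10}\frac{2^{2k}\,k!^2}{(2k+1)!}$ used in the proof makes the claim $10\sqrt{n_{\mathbf K}}\,\check{w}_0(1)\in[2,15]$ easy to check numerically for small degrees (a sketch, covering $2\le n_{\mathbf K}<50$ only; the asymptotic argument above covers the rest):

```python
from math import factorial, sqrt

def w0_l1(n):
    """||w0||_1 = (9/10) * 2^{2k} * k!^2 / (2k+1)!  with k = n + 4."""
    k = n + 4
    return 0.9 * 4**k * factorial(k)**2 / factorial(2 * k + 1)

def xi(n):
    """xi(n) = 10 * sqrt(n) * ||w0||_1, claimed to lie in [2, 15]."""
    return 10 * sqrt(n) * w0_l1(n)
```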
\subsection*{Explicit number of ideals in a ray class below some bound}
We now state the main theorem in \cite{Gun-Ramare-Sivaraman*22b} which
is required for the proofs of \thmref{bt-tri} and \thmref{degreeoneprimebis}.
\begin{thm}\label{asymfinal}
Let ${\mathfrak q}$ be a modulus of ${\mathbf K}$ and $[\a]$ be an element of $H_{{\mathfrak q}}({\mathbf K})$.
For any real number $X \ge 1$, we get
\begin{equation*}
\sum_{\b \in [\a] \atop{ \b \subseteq \O_{{\mathbf K}} \atop{ {\mathfrak N}\b \le X } } } 1
~=~ \frac{\alpha_{{\mathbf K}} \phi({\mathfrak q})}{h_{{\mathbf K},{\mathfrak q}}}
\frac{X}{{\mathfrak N}({\mathfrak q})}
+
{{\rm{O} }}^* \biggl( E({\mathbf K})
F({\mathfrak q})^{\frac{1}{n_{\mathbf K}}}\log(3F({\mathfrak q}))^{n_{\mathbf K}}
\left(\frac{X}{{\mathfrak N}({\mathfrak q})}\right)^{1-\frac{1}{n_{\mathbf K}}}
~+~ n_{{\mathbf K}}^{8n_{\mathbf K}}\frac{R_{\mathbf K}}{|\mu_{\mathbf K}|} F({\mathfrak q}) \biggr),
\end{equation*}
where $F({\mathfrak q})=2^{r_1}\frac{h_{\mathbf K} \phi({\mathfrak q}) }{h_{{\mathbf K},{\mathfrak q}}}\ge1$,
$E({\mathbf K})=1000 n_{{\mathbf K}}^{ 12n_{{\mathbf K}}^2 }(R_{\mathbf K}/|\mu_{\mathbf K}|)^{1/n_{\mathbf K}}
\bigl[\log\bigl((2n_{{\mathbf K}})^{4n_{\mathbf K}}R_{{\mathbf K}}/|\mu_{\mathbf K}| \bigr)\bigr]^{n_{{\mathbf K}}}$
and the notation ${{\rm{O} }}^*$ denotes that the implied constant is less than
or equal to $1$.
\end{thm}
\subsection*{Lower bound for the root discriminant}
The root discriminant is defined by $|d_{\mathbf K}|^{1/n_{\mathbf K}}$.
\begin{lem}
\label{rootdisc}
We have $|d_{\mathbf K}|^{1/n_{\mathbf K}}\ge \pi/2$.
\end{lem}
\begin{proof}
In this proof, we denote $n_{\mathbf K}$ by $n$.
By Minkowski's bound, we find that
$$
\rho=|d_{\mathbf K}|^{1/n_{\mathbf K}}\ge
\frac{\pi}{4}n^2/n!^{2/n}.
$$
The explicit Stirling Formula recalled in \eqref{ExplicitStirling} yields $n!^{2/n}\le
\frac{n^2}{e^2}(\sqrt{2\pi n}~e^{\theta_+(n)/(12n)})^{2/n}\le n^2/2$
when $n\ge 3$ while $2!^{2/2}=2\le 2^2/2$.
We note that, when $n\ge 5$, the Minkowski bound is superseded by the bound
given in~\cite[Eq. (2)]{Liang-Zassenhaus*77} by Liang and
Zassenhaus (the quantity $V_{r_1,r_2}$ is given on line~14, page~18
of their paper, and $\mu_n\ge1$).
\end{proof}
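The chain of inequalities in the proof, $\rho\ge\frac{\pi}{4}\,n^2/n!^{2/n}\ge\pi/2$, can be checked numerically for small degrees (a sketch; equality holds at $n=2$):

```python
from math import pi, factorial

def minkowski_lower_bound(n):
    """Lower bound (pi/4) * n^2 / n!^(2/n) for the root discriminant."""
    return (pi / 4) * n * n / float(factorial(n)) ** (2 / n)
```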
\subsection*{The Dedekind zeta-function}
For $\Re s= \sigma > 1$, the Dedekind zeta-function is defined by
$$
\zeta_{{\mathbf K}}(s)=\sum_{\a \subseteq \O_{{\mathbf K}}, \atop{\a \ne (0) }} \frac{1}{{\mathfrak N}(\a)^s},
$$
where $\a$ ranges over the integral ideals of $\O_{{\mathbf K}}$.
It has only a simple pole at $s=1$ of residue $\alpha_{{\mathbf K}}$, say.
We know from the analytic class number formula that
\begin{equation}\label{acf}
\alpha_{{\mathbf K}} = \frac{2^{r_1} (2\pi)^{r_2} h_{{\mathbf K}} R_{{\mathbf K}}}{|\mu_{{\mathbf K}}| \sqrt{|d_{\mathbf K}|}},
\end{equation}
where $h_{{\mathbf K}}, R_{{\mathbf K}}, d_{{\mathbf K}}$ are as defined in the introduction and
$\mu_{{\mathbf K}}$ is the group of roots of unity in ${\mathbf K}$. As we will see later
(\lemref{HR1}) its order in vertical strips is sufficiently moderate so that
multiplication by $\check{w}(s)$ makes it an $L^1$-function
on any line $\Re s=\sigma\ge -1/2$.
\subsection*{Euler-Kronecker constant}
For a number field ${\mathbf K}$, the Euler-Kronecker constant
is defined as
$$
\gamma_{{\mathbf K}} = \lim_{s \to 1} \left( \frac{\zeta'_{{\mathbf K}}(s)}{\zeta_{{\mathbf K}}(s)}+ \frac{1}{s-1} \right).
$$
We also know that in a neighbourhood of $s=1$
$$
\zeta_{\mathbf K}(s) = \frac{\alpha_{{\mathbf K}}}{s-1} ~+~ \alpha_{{\mathbf K}}\gamma_{{\mathbf K}} ~+~ {{\rm{O} }}(s-1),
$$
where the constant in ${{\rm{O} }}$ depends on ${\mathbf K}$. The constant $\gamma_{\mathbf K}$
is called the `Euler-Kronecker constant' in \cite{Ihara*06} by
Ihara. We use Proposition~3, page~431 of this paper, namely the inequality
\begin{equation}
\label{IharaIneq}
\gamma_{{\mathbf K}}\ge-\tfrac12\log|d_{\mathbf K}|
\end{equation}
(where $\alpha_{\mathbf K}$ in Ihara's paper is given by (1.2.2), $\beta_{\mathbf K}$ by (1.2.3) and $c_{\mathbf K}=1$ by
(1.3.12)). The conclusion we need is also restated as (0.7) therein.
\subsection*{The narrow ray-class group}
By the narrow ray class group $H_{\mathfrak q}({\mathbf K})$, we mean the ray class group
whose modulus is the integral ideal ${\mathfrak q}$ completed with all real
archimedean places. We have
\begin{equation}\label{eq:4}
h_{{\mathbf K}, 1}=|H_1({\mathbf K})| ~{\Big\vert}~ |H_{\mathfrak q}({\mathbf K})| ~{\Big\vert}~ 2^{r_1} \phi({\mathfrak q})|H_1({\mathbf K})|,
\end{equation}
where
\begin{equation} \label{eq:6}
\phi({\mathfrak q})={\mathfrak N}({\mathfrak q})\prod_{\mathfrak p|{\mathfrak q}}\left(1 - \frac{1}{{\mathfrak N}(\mathfrak p)}\right)
\end{equation}
and $H_1({\mathbf K})$ denotes the narrow ray class group
corresponding to $\O_{{\mathbf K}}$. A good reference for this material is the set of
notes by Sutherland \cite{Sutherland*15}.
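For ${\mathbf K}=\mathbb{Q}$ and ${\mathfrak q}=(q)$ with $q>0$, the norm of a prime ideal $(p)$ is $p$, so \eqref{eq:6} reduces to the classical Euler totient; a minimal sketch by trial division:

```python
def phi_rational(q):
    """phi((q)) = N(q) * prod_{p | q} (1 - 1/N(p)) for K = Q: the
    classical Euler totient, via trial-division factorization."""
    result, n, p = q, q, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result
```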
We also have the following theorem in this context.
\begin{lem}[Lang, \cite{Lang*94}, page 127]\label{classnumber}
Let ${\mathfrak q}$ be a modulus of ${\mathbf K}$, $h_{{\mathbf K}, {\mathfrak q}} = |H_{\mathfrak q}({\mathbf K})|$ and
$r_1, h_{{\mathbf K}}$ be as defined earlier. Then
$h_{{\mathbf K}, {\mathfrak q}} \le 2^{r_1} \phi({\mathfrak q}) ~h_{{\mathbf K}} $.
\end{lem}
\subsection*{Characters on the narrow ray-class group}
One way to work with the narrow ray class group $H_{\mathfrak q}({\mathbf K})$ is to consider
its character group. When lifted to the set of all ideals, these are
characters that vanish on ideals which are not co-prime to ${\mathfrak q}$.
An excellent reference is the report \cite{Landau*18-2} where Landau
explains in detail and refines Hecke's theory. We are only interested
in the extensions of the true characters of $H_{\mathfrak q}({\mathbf K})$ and these
are the ones that have finite order. These are the ones that Hecke
considers, while Landau \cite[Lemma~6.34 and (6.7)]{Narkiewicz*04} considers
an extended class of characters that may have infinite orders.
In our case, the notion of conductor of a character goes through, and
the functional equation of the Hecke $L$-function of a primitive
character (``eigentlicher'' in Landau's paper) is
given in \cite[Theorem LVI]{Landau*18-2}.
\section{Some General Lemmas}
\subsection*{On Mellin transform}
\begin{lem}\label{smoothdecay}
When $ \Re s =0$
and $|\Im s | \ge 1$ or when $\Re s\ge 1/2$, we have
\begin{equation*}
|\check{w}(s)| ~\le~
\frac{2^{\frac{n_{\mathbf K}}{2}+3}~\|w^{(n_{\mathbf K}+3)}\|_\infty }{(1 + |s|)^{n_{\mathbf K}+3}}.
\end{equation*}
\end{lem}
\begin{proof}
We set $A= n_{{\mathbf K}} + 3$ and $t=\Im s$ for typographical simplification. Integrating
by parts $A$ times and noting that $w^{(m)}(1)= w^{(m)}(1/10)=0$ for
$0 \le m\le A-1$, we get
\begin{equation*}
|\check{w}(s)| =
\left|\frac{(-1)^A}{s(s+1)\cdots(s+A-1)}\int_{1/10}^1 w^{(A)}(u) ~u^{s+A-1}du \right|
~ \le~
\frac{\|w^{(n_{\mathbf K}+3)}\|_\infty}{|s||s+1| \cdots |s+A-1|}.
\end{equation*}
In case $\Re s=0$, we note that $|t| \ge 1$ implies $1 + |t| \le 2|t|$.
Furthermore we have $\sqrt{2}~|m + it | \ge |t|+1$ for all
$m \ge 1$. This yields the constant $2^{\frac{n_{\mathbf K}}{2} + 2}$.
In case $\Re s\ge 1/2$, we first notice that $3|s|\ge |s|+1$, and
then we prove that $\sqrt{2}~|s+ m| \ge |s|+1$ for $m\ge1$ as before.
To prove this inequality, set $\sigma=\Re s$. It is enough to prove
that $t^2+2(\sigma+m)^2\ge \sigma^2+2|s|+1$. As
$|s|\le |t|+|\sigma|$, it is enough to prove that
$t^2-2|t|+1\ge -\sigma^2+2(1-2m)\sigma+2(1- m^2)$ which is obviously
true. This yields the constant $3\cdot2^{(n_{\mathbf K}+2)/2}$. We majorize
the constant in both cases by $2^{\frac{n_{\mathbf K}}{2}+3}$, which
concludes the proof.
\end{proof}
\begin{lem}
\label{getM}
For $\varepsilon\in(0,1/2]$, we define
\begin{equation}
\label{defM}
M(w,\varepsilon)=\int_{-\infty}^\infty |\check{w}(it)|
(1+|t|)^{\frac{1 +
\varepsilon}{2}n_{\mathbf K}}dt.
\end{equation}
We have
$
M(w,\varepsilon)
\le
2^{2 +\frac{\varepsilon n_{\mathbf K}}{2}}
\bigl(\|w^{(n_{\mathbf K}+3)}\|_\infty
+
10\cdot 2^{\frac{n_{\mathbf K}}{2}}\|w\|_1\bigr)$.
\end{lem}
\begin{proof}
Set $n=n_{\mathbf K}$.
We split this integral according to whether $|t|\ge1$ or not. When
$|t|\ge1$, Lemma~\ref{smoothdecay} applies. When
$|t|\le 1$, we simply use $|\check{w}(it)|\le 10 \|w\|_1$. This
gives us
\begin{equation*}
M(w,\varepsilon)
~\le~
2\frac{2^{\frac{n}{2}+3}\|w^{(n+3)}\|_\infty}
{2^{\frac{1-\varepsilon}{2}n+2}(\frac{1-\varepsilon}{2}n+2)}
+
10\|w\|_12^{\frac{1+\varepsilon}{2}n+2}
~\le~
2^{2+\frac{\varepsilon n}{2}}
\bigl(\|w^{(n+3)}\|_\infty
+
10\cdot 2^{\frac{n}{2}}\|w\|_1\bigr).
\end{equation*}
This completes the proof of the lemma.
\end{proof}
\begin{lem}
\label{getMstar}
For $\varepsilon\in(0,1/2]$ and $r\in\{1,2\}$, we define
\begin{equation}
\label{defMstar}
M^*(\varepsilon,r)=
\int_{\frac{1+\varepsilon}{2}-i\infty}^{\frac{1+\varepsilon}{2}+i\infty} |\check{w}_0(s)|
(1+|s|)^{\frac{1 +
\varepsilon}{2r}n_{\mathbf K}}ds.
\end{equation}
We have
$M^*( \varepsilon,r) ~\le~ 12(57n_{\mathbf K})^{n_{\mathbf K} + 3}$.
\end{lem}
\begin{proof}
Set $n=n_{\mathbf K}$ and $\sigma=(1+\varepsilon)/2$. Lemma~\ref{smoothdecay} applies and gives us
\begin{equation*}
M^*(\varepsilon,r)
~\le~
2\cdot 2^{\frac{n}{2}+3}\|w_0^{(n+3)}\|_\infty
\int_{0}^{ \infty}
\frac{dt}{(1+|\sigma + it|)^{\frac{(2r-1-\varepsilon)n}{2r}+ 3}}.
\end{equation*}
Furthermore
$$
\int_{0}^{ \infty}
\frac{dt}{(1+|\sigma + it|)^{\frac{(2r-1-\varepsilon)n}{2r}+ 3}}
~\le~
\int_{0}^{ \infty}
\frac{dt}{(1+t)^{\frac{(2r-1-\varepsilon)n}{2r}+ 3}}
~\le~
1/2.
$$
Lemma~\ref{studyw0} leads to the bound $12(40\sqrt{2}n)^{n+ 3}$
which is indeed not more than $12 (57 n)^{n + 3}$.
\end{proof}
\subsection*{On the Dedekind zeta-function}
\begin{lem}\label{HR1}
Let $0 < \varepsilon \le 1/2$. In the strip $- \varepsilon \le \sigma\le 1 + \varepsilon$,
the Dedekind zeta-function $\zeta_{{\mathbf K}} (s)$ satisfies the inequality
\begin{equation*}
|\zeta_{{\mathbf K}}(s)| ~\le~ 3 \zeta(1+\varepsilon)^{n_{{\mathbf K}}}\biggl|\frac{s+1}{s-1}\biggr|
\bigl(|d_{{\mathbf K}}|(1+|s|)^{n_{{\mathbf K}}}\bigr)^{\frac{1 + \varepsilon -\sigma}2}.
\end{equation*}
\end{lem}
\begin{proof}
This is Theorem 4 of \cite{Rademacher*59} by Rademacher.
\end{proof}
As a corollary, we deduce the following lemma.
\begin{lem}
\label{boundalphaK}
We have
$\displaystyle
\frac{9\cdot 2^{n_{\mathbf K}}h_{\mathbf K}}{100 \sqrt{|d_{\mathbf K}|}}
\le \alpha_{\mathbf K}
\le 6\left( \frac{2\pi^2}{5} \right)^{n_{\mathbf K}}|d_{\mathbf K}|^{1/4}$
and
$h_{\mathbf K} \le 67 (\pi^2/5 )^{n_{\mathbf K}} |d_{\mathbf K}|^{3/4}$.
\end{lem}
See the book \cite{Pohst-Zassenhaus*89} by Pohst and Zassenhaus for more on lower bounds for $\alpha_{\mathbf K}$.
\begin{proof}
Lemma~\ref{HR1} with the choices $\varepsilon=1/2$ and $s=1$ gives
the upper bound. As
$2^{r_1}(2\pi)^{r_2}\ge 2^{r_1}2^{2r_2}\ge 2^{n_{\mathbf K}}$ and
the ratio of the regulator to $\mu_{\mathbf K}$ is bounded below absolutely
by $0.09$ (see \cite{Friedman*89} by E.~Friedman), Eq.~\eqref{acf}
provides us with the lower bound for $\alpha_{\mathbf K}$. Concerning the
upper bound for $h_{\mathbf K}$, we use~\eqref{acf} again and this
time, derive from it that $\alpha_{\mathbf K}\ge
(9/100)\,2^{n_{\mathbf K}}h_{\mathbf K}/\sqrt{|d_{\mathbf K}|}$, from which the last bound
follows immediately.
\end{proof}
Next we deduce similar bounds for Hecke $L$-functions
corresponding to primitive characters $\chi$ of finite order.
For a Hecke character $\chi$ defined modulo ${\mathfrak q}$,
the Hecke $L$-function associated with $\chi$ is defined, for $\Re s > 1$, by
$$
L_{{\mathfrak q}}(s,\chi) = \sum_{\substack{\a \subseteq \O_{{\mathbf K}} \\ \a \ne 0}} \frac{\chi(\a)}{{\mathfrak N}(\a)^{s}}.
$$
We now state a result which bounds the growth of the
Hecke L-series using the Phragm{\'e}n-Lindel{\"o}f principle.
\begin{lem}[\cite{Rademacher*59}, Theorem 5]\label{HR2}
Let $0 < \varepsilon \le 1/2$. In the strip
$- \varepsilon \le \sigma\le 1 + \varepsilon$, the Hecke $L$-series
associated with the primitive character $\chi$ of finite order and
conductor~${\mathfrak q}$ satisfies the inequality
\begin{equation*}
|L_{{\mathfrak q}}(s,\chi)|
~\le~
\zeta(1+ \varepsilon)^{n_{{\mathbf K}}}
\bigl(|d_{{\mathbf K}}|\mathfrak{N}({\mathfrak q})(1+|s|)^{n_{{\mathbf K}}}\bigr)^{\frac{1+ \varepsilon -\sigma}2}.
\end{equation*}
\end{lem}
\begin{proof}
This is a direct consequence of \cite[Theorem 5]{Rademacher*59} with
$\eta= \varepsilon$, but one should be mindful of the notation,
since Rademacher considers general Hecke Grossencharakteren, not
necessarily of finite order. Things are rather clear when we inspect
the gamma-factor given in~\cite[Bottom of page
202]{Rademacher*59}. We have $a_p=0$ for every complex place,
$v_p=0$ for every place and $a_p\in\{0,1\}$ for real places, $q$ of
them taking the value 1.
\end{proof}
\subsection*{On rational primes}
\begin{lem}
\label{takeiteasy}
For $x \ge1$, we have
$\displaystyle
\sum_{\substack{p^k\le x\\ k\ge2}}1
~\le~
\frac{5\sqrt{x}}{4}.$
\end{lem}
\begin{proof}
We first check this property with Pari/GP for $x\le 10^7$. To extend
this result, let us denote by $S$ the sum to be bounded
above. Inequality \cite[(3.32)]{Rosser-Schoenfeld*62} by Rosser and
Schoenfeld tells us that
\begin{equation}
\label{rosserpsi}
\sum_{p \le x} \log p ~\le~ 1.02\,x
\phantom{m}\text{for } x>0.
\end{equation}
We then readily check
that
\begin{align*}
S
&\le~ \sum_{p\le \sqrt{x}}\frac{\log x}{\log p}
~\le~
2(\log x)\int_2^{\sqrt{x}}\sum_{p\le u} \log p\frac{du}{u(\log
u)^3}
+(\log x)\sum_{p\le \sqrt{x}}\frac{\log p}{(\frac12\log x)^2}
\\&
\le~ 2.04(\log x)\int_2^{\sqrt{x}}\frac{du}{(\log u)^3}
~+~ 4.08~\frac{\sqrt{x}}{\log x}
~\le~ \sqrt{x}
\end{align*}
when $x \ge 10^7$. The lemma follows readily.
\end{proof}
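The Pari/GP verification mentioned in the proof is easy to reproduce; the following Python sketch (an added illustration, not the authors' original computation) counts the prime powers $p^k\le x$ with $k\ge2$ and checks them against $5\sqrt{x}/4$ on a modest range:

```python
import math

def count_prime_powers(x):
    """Number of proper prime powers p**k <= x with exponent k >= 2."""
    limit = math.isqrt(x)                      # k >= 2 forces p <= sqrt(x)
    is_prime = [True] * (limit + 1)
    for i in range(2, math.isqrt(limit) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    count = 0
    for p in range(2, limit + 1):
        if is_prime[p]:
            q = p * p
            while q <= x:                      # count p^2, p^3, ...
                count += 1
                q *= p
    return count

for x in [10, 100, 10**3, 10**4, 10**5, 10**6]:
    assert count_prime_powers(x) <= 1.25 * math.sqrt(x), x
```

For instance, $x=100$ counts the ten prime powers $4,8,9,16,25,27,32,49,64,81$ against the bound $12.5$.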
\begin{lem}
\label{takeiteasyb}
For $x \ge100$, we have
$\displaystyle
\sum_{\substack{p\le x}}\frac{1}{p}
~\le~ 2\log\log x.
$
\end{lem}
\begin{proof}
This is readily checked by Pari/GP for $x \le 10^8$. We conclude the
proof by appealing to \cite[Theorem 5]{Rosser-Schoenfeld*62} by Rosser
and Schoenfeld.
\end{proof}
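The Pari/GP check for this lemma can likewise be mirrored in a few lines of Python (again only an added illustration); since the left-hand side jumps only at primes while $2\log\log x$ increases, it suffices to test the inequality at each prime beyond $100$:

```python
import math

def primes_upto(n):
    """All primes <= n by a basic sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, math.isqrt(n) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

# Running sum of 1/p; check the inequality at every jump point p >= 100.
s = 0.0
for p in primes_upto(10**5):
    s += 1.0 / p
    if p >= 100:
        assert s <= 2 * math.log(math.log(p)), p
```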
\subsection*{On the M\"{o}bius function}
For an ideal $\b$ of $\O_{{\mathbf K}}$, we define the M\"{o}bius
function as
\begin{equation}
\label{defmoebius}
\mu(\b) =
\begin{cases}
1 &\text{ if } \b = \O_{{\mathbf K}}, \\
(-1)^{r} &\text{ if } \b =\P_1\cdots \P_r, ~\text{ where } \P_i \text{ are distinct prime ideals}, \\
0 &\text{ otherwise}.
\end{cases}
\end{equation}
For a positive integer $R$ and for an ideal $\b$ of $\O_{{\mathbf K}}$, we define the truncated
M\"{o}bius function $\mu_{R}$
in the following manner:
\begin{eqnarray*}
\mu_{R} (\b)
=
\begin{cases}
\mu(\b) & \text{ if } \omega(\b) \le R \\
0 & \text{ otherwise}.
\end{cases}
\end{eqnarray*}
Let $\psi_R(\b) = \sum_{\d \mid \b} \mu_{R}(\d)$. Applying the M\"{o}bius
inversion formula and the fact that $\sum_{\d \mid \b}\mu(\d)=1$ if
and only if $\b = \O_{{\mathbf K}}$, we get
$\mu_{R}(\b) = \sum_{\d \mid \b} \mu(\frac{\b}{\d}) \psi_R(\d)$.
In this context, we have the following lemma.
\begin{lem}\label{boundpsi}
For any integral ideal $\d \neq \O_{{\mathbf K}}$, we have
$$
|\psi_{R} (\d)|
~\le~
{{\omega(\d) - 1} \choose R}.
$$
\end{lem}
\begin{proof}
Applying induction on $R$, we get
$$
\psi_{R} (\d)
~=~
\sum_{0 \le k \le R} (-1)^k {{\omega(\d) } \choose k}
~=~
(-1)^R {{\omega(\d) - 1} \choose R}
$$
(see H. Halberstam and H. Richert \cite[p. 46/47]{Halberstam-Richert*79}).
\end{proof}
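The binomial identity underlying this proof is easy to verify mechanically. The short Python check below (an added illustration, using the standard `math.comb`) confirms $\sum_{0\le k\le R}(-1)^k\binom{\omega}{k}=(-1)^R\binom{\omega-1}{R}$ on a small range:

```python
from math import comb

# Identity used for the truncated Moebius function psi_R:
# sum_{k=0}^{R} (-1)^k C(w, k) == (-1)^R C(w-1, R).
for w in range(1, 15):
    for R in range(0, w + 1):
        lhs = sum((-1) ** k * comb(w, k) for k in range(R + 1))
        assert lhs == (-1) ** R * comb(w - 1, R), (w, R)
```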
\section{Degree one primes in ray class groups}
Let ${\mathfrak q}$ be a modulus and let $T$ be a subgroup of index~2 of
$H_{\mathfrak q}({\mathbf K})$. Let $\chi$ be the quadratic character whose kernel
is~$T$. We want to show that there exists a degree one prime $\P$ with
small norm such that $\chi(\P)=-1$ and another prime $\P'$ with small
norm such that $\chi(\P')=1$.
\subsection*{$L$-series for degree one primes}
For a Hecke character of finite order modulo ${\mathfrak q}$, we define
\begin{equation}\label{defFchi}
F(s, \chi) ~=~ \mathop{\prod\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{ \P \nmid {\mathfrak q}}}~ \frac{1}{1 - \chi(\P){\mathfrak N}(\P)^{-s}}.
\end{equation}
\begin{lem}\label{boundFchi}
When $\Re s = (1+ \varepsilon)/2$ for some $0< \varepsilon \le 1/2$ and
$\chi$ is a Hecke character of finite order modulo~${\mathfrak q}$, we have
$$
|F(s,\chi)| ~\le~ \zeta(1+\varepsilon)^{\frac{3n_{{\mathbf K}}}{2}} (|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q}))^{\frac{1+ \varepsilon}{4}}
\theta({\mathfrak q}) (1+|s|)^{\frac{(1+\varepsilon)n_{{\mathbf K}}}{4}},
$$
where $\theta({\mathfrak q})=\prod_{\P|{\mathfrak q}}\frac{\sqrt{{\mathfrak N}\P}}{\sqrt{{\mathfrak N}\P}-1}$. When
$\chi = \chi_{0, {\mathfrak q}}$ is the trivial character modulo ${\mathfrak q}$, we have
$$
|F(s,\chi_{0, {\mathfrak q}})| ~\le~ 27 \zeta(1+\varepsilon)^{\frac{3n_{{\mathbf K}}}{2}} |d_{{\mathbf K}}|^{\frac{1+ \varepsilon}{4}}
\theta({\mathfrak q}) (1+|s|)^{\frac{(1+\varepsilon)n_{{\mathbf K}}}{4}}.
$$
\end{lem}
\begin{proof}
To find an upper bound for $F(s,\chi)$, we first reduce it to Hecke
$L$-series using the following product $F(s,\chi) = L_{{\mathfrak q}}(s,\chi)J(s,\chi)$,
where
\begin{equation}\label{defJ}
J(s,\chi) ~=~ \mathop{\prod\nolimits^{\hbox to 0pt{$\sharp$\hss}}}_{\substack{ \P \nmid {\mathfrak q}}} ~\bigl(1-\chi(\P){\mathfrak N}(\P)^{-s}\bigr).
\end{equation}
We readily find that, when $\Re s = (1+ \varepsilon)/2$, we have
\begin{equation}
\label{eq:7}
|J(s,\chi)| ~\le~ \prod_{p}\bigl(1+p^{-2\sigma}\bigr)^{\frac{n_{{\mathbf K}}}{2}}
~\le~
\zeta(1+\varepsilon)^{ \frac{n_{{\mathbf K}}}{2} }.
\end{equation}
The next step is to reduce $L_{{\mathfrak q}}(s,\chi)$ to
$L_{\mathfrak{f}}(s,\chi^*)$, where $\chi^*$ is the primitive character, say
modulo~$\mathfrak{f}$, inducing~$\chi$. We directly see that
$$
L_{{\mathfrak q}}(s,\chi) = L_{\mathfrak{f}}(s,\chi^{*})\prod_{\P \mid {\mathfrak q} \atop{ \P \nmid \mathfrak{f} }}
(1 - \chi(\P){\mathfrak N}(\P)^{-s}).
$$
Therefore, applying Lemma~\ref{HR2} when $\chi^*$ is not equal to the
constant character~1, and
Lemma~\ref{HR1} otherwise, together with the bound in \eqref{eq:7}, we get
the desired result.
\end{proof}
\subsection*{Products of degree one primes in ray classes modulo ${\mathfrak q}$,
sieve approach}
\begin{lem}
\label{primeboundini}
Let $\mathfrak{b}$ be an integral ideal co-prime to ${\mathfrak q}$.
When $F_1({\mathfrak q}) = 2^{r_1} h_{\mathbf K} {\mathfrak N}{\mathfrak q}$ and
\begin{equation*}
X
~\ge~
\log(3F({\mathfrak q}))^{n_{\mathbf K}^2}\, B({\mathbf K})F_1({\mathfrak q}){\mathfrak N}{\mathfrak q} \, \log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^2,
\quad
B({\mathbf K}) = (n_{\mathbf K}^{50n_{\mathbf K}^2} E({\mathbf K})\sqrt{|d_{\mathbf K}|})^{n_{\mathbf K}},
\end{equation*}
we have
\begin{equation*}
\sum_{\substack{\P \\ {\mathfrak N}\P\ge 40^{n_{\mathbf K}} \\ \deg \P\ge 2}}
\sum_{\substack{\P|\mathfrak{a} \subseteq \O_{{\mathbf K}}
\\ [\mathfrak{a}]
= [\mathfrak{b}] \\ {\mathfrak N}\mathfrak{a}\le X}}
1
~~\le~
\frac{\alpha_{\mathbf K}\phi({\mathfrak q})X}{2^{{n_{\mathbf K}} } {\mathfrak N}{\mathfrak q} ~|H_{\mathfrak q}({\mathbf K})|}.
\end{equation*}
\end{lem}
We can remove the term $\log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^2$ when $n_{\mathbf K} \ge 3$.
\begin{proof}
We denote $n_{\mathbf K}$ by $n$ and set $D=40^n$.
On calling $S$ the sum on the left hand side, Theorem~\ref{asymfinal}
gives us the upper bound, with $Y=\alpha_{{\mathbf K}} \phi({\mathfrak q}) X /({\mathfrak N}{\mathfrak q} h_{{\mathbf K},{\mathfrak q}})$,
\begin{equation*}
S
~\le~
\sum_{\substack{\P \\ {\mathfrak N}\P\ge D,\\ \deg \P\ge2}}\frac{Y}{{\mathfrak N}\P}
~+~
E({\mathbf K})
F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}
\sum_{\substack{\P \\ X\ge{\mathfrak N}\P\ge D,\\ \deg
\P\ge2}}\left(\frac{X}{{\mathfrak N}{\mathfrak q}{\mathfrak N}\P}\right)^{1-\frac{1}{n}}
~+~
n^{8n}\frac{R_{\mathbf K}}{|\mu_{\mathbf K}|} F({\mathfrak q})\sum_{\substack{\P \\ {\mathfrak N}\P\le X,\\ \deg
\P\ge2}}1.
\end{equation*}
We now have to study each of these three sums. We readily bound the
first one from above by
\begin{equation*}
Y\sum_{\substack{p^k\ge D,\\ k\ge2}}\frac{n/2}{p^k}
~\le~ \frac{nY}{2}\int_{D}^{\infty} (\sum_{\substack{D\le p^k\le t,\\ k\ge2}}1 )~\frac{dt}{t^2}
~\le~ \frac{5nY}{8}\int_{D}^{\infty}\frac{dt}{t^{3/2}}
~\le~ \frac{5nY}{4\sqrt{D}}
\end{equation*}
by Lemma~\ref{takeiteasy}. We notice that $(5/4)40^{-n/2}\le
1/2^{2n+1}$ when $n\ge2$.
The same Lemma~\ref{takeiteasy} yields
the bound $\tfrac58 n^{8n+1}R_{\mathbf K} F({\mathfrak q})\sqrt{X}/|\mu_{\mathbf K}|$ for the third term. We
further find that
\begin{align*}
\tfrac58 n^{8n+1}\frac{R_{\mathbf K}}{|\mu_{\mathbf K}|} F({\mathfrak q})\sqrt{X}
&=~\frac{5n^{8n+1}}{8} \frac{R_{\mathbf K} {\mathfrak N}{\mathfrak q} \, h_{{\mathbf K},{\mathfrak q}} F({\mathfrak q}) }{\alpha_{\mathbf K} |\mu_{\mathbf K}| \phi({\mathfrak q})}
\frac{Y\sqrt{X}}{X}
~=~ \frac{5n^{8n+1}}{8} \frac{2^{r_1} h_{{\mathbf K}}R_{\mathbf K}}{\alpha_{\mathbf K}|\mu_{\mathbf K}|}
\frac{{\mathfrak N}{\mathfrak q} \,Y}{\sqrt{X}}
\\&\le
\frac{5n^{8n+1}}{8} (2\pi)^{-r_2}\sqrt{|d_{\mathbf K}|}
~ \frac{{\mathfrak N}{\mathfrak q} \, Y}{\sqrt{X}}
~\le~ n^{9n}\sqrt{|d_{\mathbf K}|} ~\frac{{\mathfrak N}{\mathfrak q} \, Y}{\sqrt{X}}
\end{align*}
by applying the definition of $F({\mathfrak q})$ from Theorem~\ref{asymfinal}
and the expression of $\alpha_{\mathbf K}$ in terms of the invariants of the
field mentioned in~\eqref{acf}. Let us now examine the second term
above, a task for which we distinguish between the case when $n=2$
and $n\ge3$. In the latter situation, we find that
\begin{equation}
\label{compl}
\sum_{\substack{\P \\ X\ge{\mathfrak N}\P\ge D,\\ \deg
\P\ge2}}\frac{1}{({\mathfrak N}\P)^{1-\frac{1}{n}}}
~\le~ \frac{n}{2}\sum_{p\ge2}\frac{1}{p^{2(1-1/3)}}
~\le~ n
\end{equation}
by appealing to Pari/GP.
In this case, we thus find that
\begin{align*}
S/Y
&\le~ \frac{n}{2^{2n+1}}
+
E({\mathbf K})F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}
\frac{nh_{{\mathbf K},{\mathfrak q}}}{\alpha_{\mathbf K}\phi({\mathfrak q})}\left(\frac{{\mathfrak N}{\mathfrak q}}{X}\right)^{{1}/{n}}
~+~
n^{9n}\sqrt{|d_{\mathbf K}|} \frac{{\mathfrak N}{\mathfrak q}}{\sqrt{X}}
\\&\le~
\frac{1}{2^{n+1}}
+
E({\mathbf K})\log(3F({\mathfrak q}))^{n}
\frac{n2^{r_1}h_{{\mathbf K}}}{\alpha_{\mathbf K}}\left(\frac{F({\mathfrak q}){\mathfrak N}{\mathfrak q}}{X}\right)^{{1}/{n}}
~+~
n^{9n}\sqrt{|d_{\mathbf K}|} \frac{{\mathfrak N}{\mathfrak q}}{\sqrt{X}}
\\
&\le~ \frac{1}{2^{n+1}}
+
\frac{100n}{9}E({\mathbf K})\log(3F({\mathfrak q}))^{n}
\sqrt{|d_{\mathbf K}|}\left(\frac{F({\mathfrak q}){\mathfrak N}{\mathfrak q}}{X}\right)^{{1}/{n}}
~+~
n^{9n}\sqrt{|d_{\mathbf K}|} \frac{{\mathfrak N}{\mathfrak q}}{\sqrt{X}}
\end{align*}
by Lemmas~\ref{classnumber} and~\ref{boundalphaK}. Since we have assumed
that
\begin{equation}
\label{eq:2}
X~\ge~ B({\mathbf K})F_1({\mathfrak q}){\mathfrak N}{\mathfrak q} \,\log(3F({\mathfrak q}))^{n^2} ,
\quad
B({\mathbf K}) = (n^{50n^2} E({\mathbf K})\sqrt{|d_{\mathbf K}|})^{n}
\end{equation}
we find
that $S\le Y/2^{n}$ when $n\ge3$. When $n=2$, we only have to replace
the upper bound $n$ in \eqref{compl} by $n\log\log X=2\log\log X$ by
Lemma~\ref{takeiteasyb}. Proceeding as above we reach the inequality
\begin{equation*}
S/Y
~\le~ \frac14
~+~
\frac{200}{9} E({\mathbf K})\log(3F({\mathfrak q}))^2
\log\log X \sqrt{|d_{\mathbf K}|}\left(\frac{F({\mathfrak q}){\mathfrak N}{\mathfrak q}}{X}\right)^{{1}/{2}}
~+~
2^{18} \frac{{\mathfrak N}{\mathfrak q}}{ \sqrt{X}}\sqrt{|d_{\mathbf K}|}.
\end{equation*}
Our hypothesis on $X$ reads
\begin{equation*}
X ~\ge~ \log(3F({\mathfrak q}))^{4}\,B({\mathbf K})F_1({\mathfrak q}){\mathfrak N}{\mathfrak q}\,\log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^2.
\end{equation*}
We just need to notice that
$\log(3F({\mathfrak q}))^{4}\log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^2\le
(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^6$, so that
\begin{align*}
\log\log X
&\le \log\log((B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^7)
\\&\le \biggl(1+ \frac{\log 7}{\log\log(B({\mathbf K}))}\biggr)\log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})
\le 2\log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q}).
\end{align*}
This is enough to conclude.
\end{proof}
\begin{lem}
\label{primeboundinib}
Let $\mathfrak{b}$ be an integral ideal co-prime to ${\mathfrak q}$.
When $F_1({\mathfrak q}) = 2^{r_1} h_{\mathbf K} {\mathfrak N}{\mathfrak q}$ and
\begin{equation*}
X ~\ge~
\log(3F({\mathfrak q}))^{n_{\mathbf K}^2}\, B({\mathbf K})F_1({\mathfrak q}){\mathfrak N}{\mathfrak q} \, \log\log(B({\mathbf K})F({\mathfrak q}){\mathfrak N}{\mathfrak q})^2,\quad
B({\mathbf K})=(n_{\mathbf K}^{50n_{\mathbf K}^2} E({\mathbf K})\sqrt{|d_{\mathbf K}|})^{n_{\mathbf K}},
\end{equation*}
we have
\begin{equation*}
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{\mathfrak{a} \subseteq \O_{{\mathbf K}}
\\ [\mathfrak{a}]
= [\mathfrak{b}] \\ {\mathfrak N}\mathfrak{a}\le X}}~
1
~\ge~
(1/4)\frac{\alpha_{\mathbf K}\phi({\mathfrak q})X}{ {\mathfrak N}{\mathfrak q} |H_{\mathfrak q}({\mathbf K})|}.
\end{equation*}
\end{lem}
\begin{proof}
We denote $n_{\mathbf K}$ by $n$ and set $D=40^n$. Let $M$ be the product of
the prime ideals in $ \O_{{\mathbf K}}$ of degree greater than or equal to $2$,
co-prime to ${\mathfrak q}$ and of norm at most $D$.
Denoting the sum to be evaluated by $S$, a simple combinatorial
argument together with Lemma~\ref{primeboundini} gives us
\begin{equation*}
S
~\ge~
\sum_{\substack{\mathfrak{a} \subseteq \O_{{\mathbf K}}
\\ [\mathfrak{a}] = [\mathfrak{b}]
\\ {\mathfrak N}\mathfrak{a}\le X\\ (\mathfrak{a},M)=1}}
1
~- ~\frac{\alpha_{\mathbf K}\phi({\mathfrak q})X}{2^{n} {\mathfrak N}{\mathfrak q} |H_{\mathfrak q}({\mathbf K})|}
~=~
S(M) ~-~ \frac{\alpha_{\mathbf K}\phi({\mathfrak q})X}{2^{n} {\mathfrak N}{\mathfrak q} |H_{\mathfrak q}({\mathbf K})|}
\end{equation*}
say. We detect the coprimality condition with an elementary instance
of the Brun sieve. We select an odd positive integer $R$ and write
\begin{equation*}
1_{(\mathfrak{a},M)=1}=\sum_{\mathfrak{d}|(M,\mathfrak{a})}\mu(\mathfrak{d})
\ge \sum_{\substack{\mathfrak{d}|(M,\mathfrak{a}),\\
\omega(\mathfrak{d})\le R}}\mu(\mathfrak{d}).
\end{equation*}
We deduce
\begin{equation*}
S(M) ~\ge~
\sum_{\substack{\mathfrak{d}|M,\\
\omega(\mathfrak{d})\le R}}\mu(\mathfrak{d})
\sum_{\substack{\mathfrak{a}' \subseteq \O_{{\mathbf K}}
\\ [\mathfrak{d}\mathfrak{a}']
= [\mathfrak{b}] \\ {\mathfrak N}\mathfrak{a}'\le \frac{ X}{ {\mathfrak N}\mathfrak{d}} }}1.
\end{equation*}
Theorem~\ref{asymfinal}
gives us the lower bound, with $Y=\alpha_{{\mathbf K}} X\phi({\mathfrak q})/({\mathfrak N}{\mathfrak q} \,
h_{{\mathbf K},{\mathfrak q}})$,
\begin{equation*}
S(M)
~\ge~
Y\sum_{\substack{\mathfrak{d}|M,\\
\omega(\mathfrak{d})\le R}}\frac{\mu(\mathfrak{d})}{{\mathfrak N}\mathfrak{d}}
-
E({\mathbf K})
F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}
\sum_{\substack{\mathfrak{d}|M,\\
\omega(\mathfrak{d})\le R}}
\left(\frac{X}{{\mathfrak N}{\mathfrak q}{\mathfrak N}\mathfrak{d}}\right)^{1-\frac{1}{n}}
-~
n^{8n}\frac{R_{\mathbf K}}{|\mu_{\mathbf K}|} F({\mathfrak q}) \,
\sum_{\substack{\mathfrak{d}|M,\\ \omega(\mathfrak{d})\le R}}1.
\end{equation*}
Concerning the main term, we can write
\begin{eqnarray}\label{New}
\sum_{\substack{\mathfrak{d}|M,\\ \omega(\mathfrak{d})\le R}}
\frac{\mu(\mathfrak{d})}{{\mathfrak N}\mathfrak{d}}
~=~
\sum_{\d \mid M} \frac{\mu_{R} (\d)}{{\mathfrak N}\d}
~=~
\sum_{\d \mid M} \frac{1}{{\mathfrak N}\d} \sum_{\b \mid \d} \mu \left(\frac{\d}{\b}\right) \psi_R(\b)
&=&
\sum_{ \b \mid M} \frac{\psi_R(\b)}{{\mathfrak N}\b} \sum_{\c \mid \frac{M}{\b}} \frac{\mu(\c)}{{\mathfrak N}\c} \nonumber \\
&=&
\prod_{\P \mid M} \left( 1- \frac{1}{{\mathfrak N}\P} \right) \sum_{ \b \mid M} \frac{\psi_R(\b)}{\phi(\b)}.
\end{eqnarray}
Applying \lemref{boundpsi}, we have
$$
\sum_{\b \mid M \atop {\mathfrak N}\b > 1} \frac{\psi_R(\b)}{\phi(\b)}
~\le~
\sum_{\b \mid M \atop {\mathfrak N}\b > 1} {{\omega(\b) - 1} \choose R} \frac{1}{\phi(\b)}
~\le~
\sum_{m=R+1}^{\pi_{{\mathbf K}}(40^{n})} {{m - 1} \choose R}
\sum_{\b \mid M \atop {{\mathfrak N}\b > 1 \atop \omega(\b)=m}} \frac{1}{\phi(\b)},
$$
where $\pi_{{\mathbf K}}(x)$ denotes the number of prime ideals in $\O_{{\mathbf K}}$ with norm less
than or equal to $x$. The last quantity is at most
\begin{eqnarray*}
\sum_{m=R+1}^{\pi_{{\mathbf K}}(40^{n})} {{m - 1} \choose R} \left( \sum_{\P \mid M}
\frac{1}{\phi(\P)} \right)^m \frac{1}{m!}
&\le&
\frac{1}{R!} \sum_{m=R+1}^{\pi_{{\mathbf K}}(40^{n})} \frac{1}{(m-R)!}
\left( \sum_{\P \mid M} \frac{1}{{\mathfrak N}\P-1} \right)^m \\
&\le&
\frac{1}{R!} \sum_{m=R+1}^{\pi_{{\mathbf K}}(40^{n})} \frac{1}{(m-R)!}
\left( \sum_{p < 40^{n}}
\frac{n}{2(p^2-1)} \right)^m \\
&\le&
\frac{1}{R!} \sum_{m=R+1}^{\pi_{{\mathbf K}}(40^{n})} \frac{1}{(m-R)!}
\left( \sum_{p < 40^{n}} \frac{{n}}{p^2} \right)^m.
\end{eqnarray*}
We have, however, that
$$
\sum_{p < 40^{n}} \frac{1}{p^2} ~\le~ \int_{1}^{40^{n}} \frac{dt}{t^2} ~\le~ 1.
$$
Therefore, we get
\begin{eqnarray*}
\sum_{\b \mid M \atop {\mathfrak N}\b > 1} \frac{\psi_R(\b)}{\phi(\b)}
& \le &
\frac{1}{R!} \sum_{m=R+1}^{\pi_{{\mathbf K}}(40^{n})} \frac{n^m}{(m-R)!}
~\le~
\frac{n^R }{R!} \sum_{m=R+1}^{\pi_{{\mathbf K}}(40^{n})} \frac{ {n}^{m-R}}{(m-R)!}
~\le~
\frac{ n^R e^{n} }{R!}
~\le~
\frac{e^{n + R\log n }}{R!}.
\end{eqnarray*}
We know that $ R! \ge (\frac{R}{e})^R$ and this implies that
$$
\sum_{\b \mid M \atop {\mathfrak N}\b > 1} \frac{\psi_R(\b)}{\phi(\b)}
~\le~
\frac{e^{n + R\log n }}{R^R}e^R
~\le~
\exp(n + R - R\log R +R \log {n}).
$$
We
select
\begin{equation}
\label{defR}
R=2[5n\log n]+1
\end{equation}
so that
\begin{equation}\label{NEWF}
\exp \left(n - R \log \frac{R}{en} \right)
~\le~
\exp\left(n - (9n\log n ) \log \frac{9\log n }{e} \right)
~\le~
\exp\left(- 4 n \right)
~\le~
50^{-n}.
\end{equation}
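The numerical estimate \eqref{NEWF} for this choice of $R$ can be spot-checked directly. The following Python snippet (an added sanity check, reading $[\,\cdot\,]$ as the floor function and $\log$ as the natural logarithm) verifies $\exp(n-R\log\frac{R}{en})\le 50^{-n}$ for small degrees:

```python
import math

# R = 2[5 n log n] + 1 as chosen above; check exp(n - R log(R/(en))) <= 50^(-n).
for n in range(2, 26):
    R = 2 * math.floor(5 * n * math.log(n)) + 1
    lhs = math.exp(n - R * math.log(R / (math.e * n)))
    assert lhs <= 50.0 ** (-n), (n, lhs)
```

The case $n=2$ is already the tightest one, with $R=13$.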
Combining \eqref{New} and \eqref{NEWF}, we get
$$
\sum_{\substack{\mathfrak{d}|M,\\
\omega(\mathfrak{d})\le R}}
\frac{\mu(\mathfrak{d})}{{\mathfrak N}\mathfrak{d}}
~\ge~
\prod_{\P \mid M} \left( 1- \frac{1}{{\mathfrak N}\P} \right) (1 - 50^{-n})
~\ge~
\left(\frac{6}{\pi^2} \right)^{\frac{n}{2}} \left(1 - 10^{-n} \right),
$$
since all the prime ideals dividing $M$ have degree at least $2$.
This takes care of the main term. Concerning the error term for
$S(M)$, we first notice that the number of rational primes less than $D$ is
at most $3D/(2\log D)$ (see \cite[(3.6)]{Rosser-Schoenfeld*62}). This implies
that the number of prime ideals in
$\O_{{\mathbf K}}$ of norms at most $D$ and of degree $\ge2$ is at most
$[n/2]\,3D/(2\log D)$, which is not more than $2D/3$. Whence
\begin{equation*}
\sum_{\substack{\mathfrak{d}|M,\\
\omega(\mathfrak{d})\le R}}1
~\le~ (2D/3)^R.
\end{equation*}
We have thus reached the inequality
\begin{equation*}
S(M)
~\ge~
Y\bigl(1-{10^{-n}}\bigr)(6/\pi^2)^{n/2}
~-~
(2D/3)^R\biggl(E({\mathbf K})
F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}
\left(\frac{X}{{\mathfrak N}{\mathfrak q}}\right)^{1-\frac{1}{n}}
~+~
n^{8n}\frac{R_{\mathbf K}}{|\mu_{\mathbf K}|} F({\mathfrak q})\biggr).
\end{equation*}
We get
\begin{align*}
\frac{1}{Y}(2D/3)^RE({\mathbf K})
F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}
\left(\frac{X}{{\mathfrak N}{\mathfrak q}}\right)^{1-\frac{1}{n}}
&\le~
40^{12n^2\log n}\frac{ h_{{\mathbf K},{\mathfrak q}}}{\alpha_{{\mathbf K}}
\phi({\mathfrak q})}\left(\frac{F({\mathfrak q}){\mathfrak N}{\mathfrak q}}{X}\right)^{\frac{1}{n}}
E({\mathbf K})
\log(3F({\mathfrak q}))^{n}
\\&\le~
40^{12n^2 \log n}\frac{2^{r_1}h_{\mathbf K}}{\alpha_{{\mathbf K}}}\left(\frac{F({\mathfrak q}){\mathfrak N}{\mathfrak q}}{X}\right)^{\frac{1}{n}}
E({\mathbf K})
\log(3F({\mathfrak q}))^{n}
\\&\le~
40^{12n^2 \log n}\frac{100}{9} \sqrt{|d_{\mathbf K}|}\left(\frac{F({\mathfrak q}){\mathfrak N}{\mathfrak q}}{X}\right)^{\frac{1}{n}}
E({\mathbf K})
\end{align*}
by Lemmas~\ref{classnumber} and~\ref{boundalphaK}. Our lower bound
on~$X$ ensures that this upper bound is $\le 1/10^n$.
The second error term is treated similarly. We write
\begin{align*}
\frac{D^Rn^{8n}}{Y}\frac{R_{\mathbf K}}{|\mu_{\mathbf K}|} F({\mathfrak q})
&\le~
\frac{40^{12n^2\log n}n^{8n}{\mathfrak N}{\mathfrak q} h_{{\mathbf K},{\mathfrak q}}}{\alpha_{{\mathbf K}}
X\phi({\mathfrak q})}\frac{R_{\mathbf K}}{|\mu_{\mathbf K}|} F({\mathfrak q})
\\&=~
\frac{40^{12n^2\log n}n^{8n}{\mathfrak N}{\mathfrak q} h_{{\mathbf K},{\mathfrak q}}}{
Xh_{\mathbf K}\phi({\mathfrak q})2^{r_1}(2\pi)^{r_2}}
\sqrt{|d_{\mathbf K}|}\frac{2^{r_1}\phi({\mathfrak q})h_{\mathbf K}}{h_{{\mathbf K},{\mathfrak q}}}
~\le~
40^{12n^2\log n}n^{8n}\frac{{\mathfrak N}{\mathfrak q}}{X}
\sqrt{|d_{\mathbf K}|}
\end{align*}
by using \eqref{acf} and the definition of $F({\mathfrak q})$. The lower bound
for $X$ again implies that this quantity is at most $1/10^n$.
Combining all our estimates, we find that
\begin{equation*}
S/Y\ge
\bigl(1-{10^{-n}}\bigr)(6/\pi^2)^{n/2}
-\frac{1}{10^n}
-\frac{1}{10^n}
-\frac{1}{2^n}
\ge
\tfrac14(6/\pi^2)^{n/2}
\end{equation*}
as required. This concludes the proof.
\end{proof}
\subsection*{Products of degree one primes in ray classes modulo ${\mathfrak q}$,
analytic approach}
\begin{lem}
\label{primebound}
Let $\mathfrak{b}$ be an integral ideal co-prime to ${\mathfrak q}$.
When $X \ge 10^{25n_{\mathbf K}} n_{{\mathbf K}}^{7n_{{\mathbf K}}}
|d_{\mathbf K}|^{4} {\mathfrak N}{\mathfrak q}^3$, we have
\begin{equation*}
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\mathfrak{a} \subseteq \O_{{\mathbf K}} \atop{ [\mathfrak{a}]
= [\mathfrak{b}]}} w_0\left(\frac{{\mathfrak N}(\mathfrak{a})}{X}\right)
~\ge~
\frac{\alpha_{\mathbf K}\phi({\mathfrak q})\check{w}_0(1)X}{2(1.3)^{n_{\mathbf K}} {\mathfrak N}{\mathfrak q} ~|H_{\mathfrak q}({\mathbf K})|}.
\end{equation*}
\end{lem}
\begin{proof}
On calling $S$ the sum on the left hand side, the orthogonality of characters
readily gives us that (recall~\eqref{defFchi})
\begin{align*}
S
&=~
\frac{1}{|H_{{\mathfrak q}}({\mathbf K})|} \sum_{\chi \in \hat{H}_{{\mathfrak q}}({\mathbf K})} \overline{\chi}(\mathfrak{b})
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{(\mathfrak{a}, {\mathfrak q})=\O_{{\mathbf K}}}
w_0\left(\frac{{\mathfrak N}(\mathfrak{a})}{X}\right) \chi(\mathfrak{a}),
\\
& =~
\frac{1}{|H_{{\mathfrak q}}({\mathbf K})|} \sum_{\chi \in\hat{H}_{{\mathfrak q}}({\mathbf K})}
\frac{\overline{\chi}(\b)}{2\pi i} \int_{2-i\infty}^{2+i\infty} F(s,\chi)\check{w}_0(s) X^{s}ds.
\end{align*}
On using Lemma~\ref{boundFchi}, we find that
\begin{align*}
S
& =~
\frac{\alpha_{{\mathbf K}}\phi({\mathfrak q})J(1, \chi_{0, {\mathfrak q}})}{{\mathfrak N}{\mathfrak q}}
\frac{\check{w}_0(1)X}{|H_{{\mathfrak q}}({\mathbf K})| }
~+~
\frac{1}{|H_{{\mathfrak q}}({\mathbf K})|}\sum_{\chi \in\hat{H}_{{\mathfrak q}}({\mathbf K})} \frac{\overline{\chi}(\b)}{2\pi i}
\int_{\frac{1+\varepsilon}{2}-i\infty}^{\frac{1+\varepsilon}{2}+
i\infty} F(s,\chi)\check{w}_0(s) X^{s}ds
\\
& =~
\frac{\alpha_{{\mathbf K}}\phi({\mathfrak q})J(1, \chi_{0, {\mathfrak q}})}{{\mathfrak N}{\mathfrak q}}
\frac{\check{w}_0(1)X}{|H_{{\mathfrak q}}({\mathbf K})| } +
{{\rm{O} }}^*\left( 5 \zeta(1+\varepsilon)^{\frac{3n_{{\mathbf K}}}{2}}
(|d_{{\mathbf K}}|{\mathfrak N}{\mathfrak q})^{\frac{1+\varepsilon}{4}}\theta({\mathfrak q}) X^{\frac{1+\varepsilon}{2}}
M^*(\varepsilon,2)
\right)
\end{align*}
where $M^*$ is as defined in Lemma~\ref{getMstar}.
Proceeding as in~\eqref{eq:7}, we find that
\begin{equation}\label{lb-j}
J(1, \chi_{0, {\mathfrak q}})\ge \zeta(2)^{-n_{\mathbf K}/2}
\end{equation}
while Lemma~\ref{boundalphaK} provides us with a
lower bound for $\alpha_{\mathbf K}$ and, combined with~\eqref{eq:4}, an upper bound
for $|H_{\mathfrak q}({\mathbf K})|$. This together with Lemma~\ref{studyw0} tells us that the main term above
is at least, with the notation $n=n_{\mathbf K}$,
\begin{equation*}
\frac{9X}{33500 \sqrt{\zeta(2)^{n}}(\pi^2/5)^{n} \sqrt{n}|d_{\mathbf K}|^{5/4}{\mathfrak N}{\mathfrak q} }
~\ge~
\frac{X}{3723(\pi^2/5)^{3n/2} \sqrt{n}|d_{\mathbf K}|^{5/4}{\mathfrak N}{\mathfrak q}}.
\end{equation*}
This is larger than twice the above error term provided we have
\begin{equation}
\label{ineqstep}
X^{\frac{1-\varepsilon}{2}}
~\ge~
83 \cdot 10^{9} n^{\frac{7}{2}} (159n)^{n}
\zeta(1+\varepsilon)^{\frac{3n}{2}}
|d_{\mathbf K}|^{\frac{6+\varepsilon}{4}}\theta({\mathfrak q}) {\mathfrak N}{\mathfrak q}^{\frac{5+\varepsilon}{4}}.
\end{equation}
We select $\varepsilon=1/10$. The previous inequality is implied by
\begin{equation*}
X
~\ge~
19 \cdot 10^{23} n^{\frac{70}{9}} (5476n)^{\frac{20n}{9}}
|d_{\mathbf K}|^{\frac{61}{18}}\theta({\mathfrak q})^{\frac{20}{9}} {\mathfrak N}{\mathfrak q}^{\frac{17}{6}}.
\end{equation*}
Now $(\frac{\sqrt{p}}{\sqrt{p}-1})^{20/9}\le p^{1/6}$ when $p >19$, while
$
\prod_{p\le
19}\theta(p)^{20/9}/{p^{1/6}}
\le 1200$.
As a conclusion, we derive that $\theta({\mathfrak q})^{20/9}\le 1200^{n}{\mathfrak N}{\mathfrak q}^{1/6}$.
Some numerical computations end the proof of our lemma.
\end{proof}
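The closing numerical step of the proof above is easy to reproduce. The Python sketch below (an added check, not the original computation) verifies both the pointwise inequality $(\sqrt{p}/(\sqrt{p}-1))^{20/9}\le p^{1/6}$ beyond $p=19$ and the bound $1200$ for the finite product:

```python
import math

def theta(p):
    """The local factor sqrt(p) / (sqrt(p) - 1)."""
    return math.sqrt(p) / (math.sqrt(p) - 1.0)

# theta(p)^(20/9) <= p^(1/6) for primes p > 19; spot-check up to 100
# (theta(p) decreases to 1 while p^(1/6) increases, so larger p are easier).
for p in [23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]:
    assert theta(p) ** (20 / 9) <= p ** (1 / 6), p

# The finite product over p <= 19 stays below 1200.
prod = 1.0
for p in [2, 3, 5, 7, 11, 13, 17, 19]:
    prod *= theta(p) ** (20 / 9) / p ** (1 / 6)
assert prod <= 1200
```

The product evaluates to roughly $1.15\cdot 10^3$, so the constant $1200$ is fairly tight; the pointwise inequality is also tight at $p=23$.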
\begin{proof}[Proof of Theorems~\ref{degreeoneprime} and \ref{degreeoneprimebis}]
Theorem~\ref{degreeoneprime} follows as an easy consequence of
Lemma~\ref{primebound}, and similarly, Theorem~\ref{degreeoneprimebis} follows
from Lemma~\ref{primeboundinib}.
\end{proof}
\subsection*{Degree one primes in quadratic ray subgroups modulo ${\mathfrak q}$}
\begin{lem}\label{boundLonechiquad}
Let $\chi$ be a quadratic character on $H_{\mathfrak q}({\mathbf K})$. We have
\begin{equation*}
L_{{\mathfrak q}}(1,\chi)
~\ge~
\frac{9\cdot 2^{2n_{\mathbf K}}}{100 \alpha_{{\mathbf K}} |d_{{\mathbf K}}| \sqrt{{\mathfrak N}({\mathfrak q})}}
\phantom{mm}\text{and}\phantom{mm}
F(1,\chi)
~\ge~
\frac{9\cdot 2^{2n_{\mathbf K}}}{100 \alpha_{{\mathbf K}} |d_{{\mathbf K}}| \sqrt{\zeta(2)^{n_{{\mathbf K}}}{\mathfrak N}({\mathfrak q})}}
\end{equation*}
where $d_{{\mathbf K}}$ is the discriminant of ${\mathbf K}$ and
$\alpha_{{\mathbf K}}$ is the residue of the Dedekind zeta function
at $s=1$.
\end{lem}
This lemma improves on \cite[Lemma 2]{Hinz-Lodemann*94} of Hinz and
Lodemann in two ways: the dependence on the base field is explicit
and the lower bound is in $1/\sqrt{{\mathfrak N}{\mathfrak q}}$ instead of $1/[\sqrt{{\mathfrak N}{\mathfrak q}}(\log{\mathfrak N}{\mathfrak q})^2]$.
\begin{proof}
We note that the product $L_{{\mathfrak f}}(s,\chi) \zeta_{{\mathbf K}}(s)$, where ${\mathfrak f} \mid {\mathfrak q}$
is the conductor of $\chi$, is the Dedekind zeta function of a quadratic extension ${\mathbf M}$
of ${\mathbf K}$. By \eqref{acf}, we have
$$
L_{{\mathfrak f}}(1,\chi)\alpha_{{\mathbf K}} = \alpha_{{\mathbf M}} = \frac{2^{r_1}(2\pi)^{r_2}h_{{\mathbf M}}
R_{{\mathbf M}}}{|\mu_{{\mathbf M}}| \sqrt{|d_{{\mathbf M}}|}},
$$
where $r_1$ and $2r_2$ are the number of real and complex embeddings of ${\mathbf M}$. Since this field is of
degree $2n_{\mathbf K}$ over $\mathbb{Q}$, Lemma~\ref{boundalphaK} gives us
\begin{equation}\label{lbd-res}
\alpha_{{\mathbf M}} ~\ge~ \frac{9\cdot 2^{n_{\mathbf M}}}{100 \sqrt{|d_{{\mathbf M}}|}}.
\end{equation}
Finally, since the extension ${\mathbf M}/{\mathbf K}$ is quadratic with conductor of finite part~${\mathfrak f}$,
the conductor-discriminant
formula (\cite[Chapter VII, Point (11.9)]{Neukirch*99} by Neukirch) shows that the relative
discriminant of ${\mathbf M}/{\mathbf K}$ is also ${\mathfrak f}$.
Moreover, by \cite[Chapter III, Corollary (2.10)]{Neukirch*99}, we have
$\sqrt{|d_{{\mathbf M}}|} = |d_{{\mathbf K}}| \sqrt{{\mathfrak N}({\mathfrak f})}$. Comparing Euler products, we also find that
$$
|L_{{\mathfrak q}}(1,\chi)| ~\ge~ |L_{{\mathfrak f}}(1,\chi) | \prod_{\P| {\mathfrak q} \atop{\P\nmid {\mathfrak f}} }\left(1 - \frac{1}{{\mathfrak N}\P}\right).
$$
Since
$$
\frac{\sqrt{{\mathfrak N}{\mathfrak q}}}{\sqrt{{\mathfrak N}{\mathfrak f}}} \prod_{\P| {\mathfrak q} \atop{\P\nmid {\mathfrak f}} }\left(1 - \frac{1}{{\mathfrak N}\P}\right)
~\ge~
\frac{\sqrt{{\mathfrak N}{\mathfrak q}} \prod_{\P| {\mathfrak q}} \left(1 - \frac{1}{{\mathfrak N}\P}\right)}{\sqrt{{\mathfrak N}{\mathfrak f}}\prod_{\P| {\mathfrak f}} \left(1 - \frac{1}{{\mathfrak N}\P}\right)}
~\ge~ 1,
$$
we get the desired bound.
To extend it to $F(1,\chi)$, we notice that
\begin{equation}\label{J-bound}
\left| \frac{L_{{\mathfrak q}}(1,\chi)}{F(1,\chi)} \right|
~\le~
\mathop{\prod\nolimits^{\hbox to 0pt{$\sharp$\hss}}}_{\substack{\P \nmid {\mathfrak q}}}
~\left(1- \frac{1}{{\mathfrak N}\P} \right)^{-1}
~\le~
\zeta(2)^{\frac{n_{{\mathbf K}}}{2}}.
\end{equation}
The second inequality follows on combining this with the lower bound already obtained for $L_{{\mathfrak q}}(1,\chi)$.
\end{proof}
\begin{lem}\label{simAxer}
Let $\chi$ be a quadratic character on $H_{\mathfrak q}({\mathbf K})$. We have
$$
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{\mathfrak{a} \subseteq \O_K \\
(\mathfrak{a},{\mathfrak q})=\O_K }}
({\bf 1}\star\chi)(\mathfrak{a})~w_0\left(\frac{{\mathfrak N}(\mathfrak{a})}{ X}\right)
~ >~
\frac{X}{27
\sqrt{{\mathfrak N}{\mathfrak q}}|d_{\mathbf K}|^2}\frac{\phi({\mathfrak q})}{{\mathfrak N}{\mathfrak q}}
$$
provided that $X\ge 8 \cdot (10^{31} n_{\mathbf K}^7)^{n_{\mathbf K}} |d_{\mathbf K}|^4 {\mathfrak N}{\mathfrak q}^2$.
\end{lem}
\begin{proof}
Let us denote the sum on the left hand side by $S_1(w_0)$ and the principal Hecke character
modulo~${\mathfrak q}$ by $\chi_{0,{\mathfrak q}}$. By mimicking the proof of
Lemma~\ref{primebound}, we find that
\begin{align*}
S_1(w_0)
&=
\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty}F(s,\chi_{0,{\mathfrak q}})F(s,\chi)\check{w}_0(s)X^s
ds
\\
&=
\alpha_{{\mathbf K}, {\mathfrak q}} F(1,\chi)\check{w}_0(1) X + \frac{1}{2\pi i}
\int_{\frac{1+\varepsilon}{2} - i\infty}^{\frac{1+\varepsilon}{2} + i\infty}
F(s,\chi_{0,{\mathfrak q}})F(s,\chi)\check{w}_0(s)X^s ds
\\
&=
\alpha_{{\mathbf K}, {\mathfrak q}} F(1,\chi)\check{w}_0(1) X +
{{\rm{O} }}^*\Bigl(27\zeta(1+\varepsilon)^{3n_{{\mathbf K}}} |d_{{\mathbf K}}|^{\frac{1+\varepsilon}{2}}{\mathfrak N}{\mathfrak q}^{\frac{1+\varepsilon}{4}}
\theta({\mathfrak q})^2X^{\frac{1+\varepsilon}{2}}M^*(\varepsilon,1)\Bigr),
\end{align*}
where $\alpha_{{\mathbf K}, {\mathfrak q}} = \frac{\alpha_{{\mathbf K}}\phi({\mathfrak q})J(1, \chi_{0, {\mathfrak q}})}{{\mathfrak N}{\mathfrak q}}$
is the residue of $F(s, \chi_{0, {\mathfrak q}})$ at $s=1$ and $M^*$ is
as defined in Lemma~\ref{getMstar}. We also have
\begin{equation*}
\alpha_{{\mathbf K}, {\mathfrak q}} = \alpha_{{\mathbf K}} J(1, \chi_{0, {\mathfrak q}})\frac{\phi({\mathfrak q})}{{\mathfrak N}{\mathfrak q}}
~\ge~
\frac{\phi({\mathfrak q})}{{\mathfrak N}{\mathfrak q}}\frac{\alpha_{{\mathbf K}}}{\sqrt{\zeta(2)^{n_{{\mathbf K}}}}}
\end{equation*}
where we have used the inequality~\eqref{lb-j}. The
above is valid for a general smoothing function~$w$ but we restrict ourselves to $w = w_0$.
Applying Lemma~\ref{studyw0} and using yet again the notation $n$ for $n_{\mathbf K}$,
we get
\begin{equation*}
S_1(w_0)
~\ge~
\frac{9(4/\zeta(2))^nX}{500\cdot
\sqrt{n{\mathfrak N}{\mathfrak q}}~|d_{\mathbf K}|}\frac{\phi({\mathfrak q})}{{\mathfrak N}{\mathfrak q}}
~-~
324~\zeta(1+\varepsilon)^{3n} (57n)^{n + 3} |d_{{\mathbf K}}|^{\frac{1+\varepsilon}{2}}{\mathfrak N}{\mathfrak q}^{\frac{1+\varepsilon}{4}}
\theta({\mathfrak q})^2X^{\frac{1 +\varepsilon}{2}}
\end{equation*}
and the first summand is larger than twice the second one provided
that
\begin{equation*}
X^{\frac{1-\varepsilon}{2}}
~\ge~ 7 \cdot 10^9 n^{\frac{7}{2}} (24n)^n\zeta(1+\varepsilon)^{3n}
|d_{{\mathbf K}}|^{\frac{3+\varepsilon}{2}}
\theta({\mathfrak q})^{2}
\frac{{\mathfrak N}{\mathfrak q}}{\phi({\mathfrak q})}{\mathfrak N}{\mathfrak q}^{\frac{3+\varepsilon}{4}} .
\end{equation*}
We select $\varepsilon=1/10$. The previous inequality is implied by
\begin{equation*}
X ~\ge~ 8 \cdot 10^{21}\cdot n^{\frac{70}{9}} (31944 n)^{\frac{20n}{9}}
|d_{\mathbf K}|^{\frac{31}{9}} \left(\theta({\mathfrak q})^{2} \frac{{\mathfrak N}{\mathfrak q}}{\phi({\mathfrak q})} \right)^{\frac{20}{9}} {\mathfrak N}{\mathfrak q}^{\frac{31}{18}}.
\end{equation*}
Since $\theta({\mathfrak q})^{2} \frac{{\mathfrak N}{\mathfrak q}}{\phi({\mathfrak q})} = \prod_{\P | {\mathfrak q}} \frac{{\mathfrak N}\P^2}{({\mathfrak N}\P -1)(\sqrt{{\mathfrak N}\P} -1)^2}$
and
$$
\left(\frac{p^2}{(p-1)(\sqrt{p}-1)^2} \right)^{20/9} ~\le~ p^{1/6}
\text{ when } p > 56 \phantom{m}\text{and}\phantom{m}
\prod_{p\le 56}\frac{1}{p^{1/6}}\left(\frac{p^2}{(p-1)(\sqrt{p}-1)^2} \right)^{20/9}
~\le~ 9 \cdot 10^9,
$$
it suffices that
$$
X ~\ge~ 8 \cdot (10^{31} n^7)^{n} |d_{\mathbf K}|^{\frac{31}{9}} {\mathfrak N}{\mathfrak q}^{\frac{34}{18}}.
$$
We simplify the final statement by noticing that
$\frac{9(4/\zeta(2))^n}{1000\sqrt{n}}\ge \frac{1}{27}$.
\end{proof}
We may now complete the proof of Theorem~\ref{primeinkernel}.
\begin{proof}[Proof of Theorem~\ref{primeinkernel}]
Suppose that the theorem is not true. Then every degree one prime ideal $\P$
co-prime to ${\mathfrak q}$ with norm at most
$
X=
8 \cdot (10^{31} n_{\mathbf K}^7)^{n_{\mathbf K}} |d_{\mathbf K}|^4 {\mathfrak N}{\mathfrak q}^2
$
satisfies $\chi(\P)=-1$. Then for every non square-full ideal
$\mathfrak{a} ~(\neq \mathcal{O}_{{\mathbf K}})$ which decomposes only as a product
of prime ideals of degree one and of norm at most $X$, we have
$(1\star \chi)(\mathfrak{a})=0$: indeed, some such prime $\P$ then divides $\mathfrak{a}$ exactly, and the corresponding factor $1+\chi(\P)$ vanishes. By Lemma~\ref{simAxer}, we get a
contradiction, and
this completes the proof of the theorem.
\end{proof}
\section{Selberg sieve for number fields in sieve dimension one}
\label{Ssnfdone}
In this section, we derive some lemmas which are required
to prove the number field analogues of two versions of the Brun-Titchmarsh Theorem.
Throughout this section, $z$ will denote a real number greater than one and all ideals
considered are integral ideals. For a fixed integral ideal ${\mathfrak q}$, we define
\begin{equation}\label{prod-def}
V(z)
=
\prod_{\P \mid \mathcal{P}(z) \atop{(\P, {\mathfrak q})=\O_{\mathbf K}} } \P
\phantom{mmm}\text{where} \phantom{mmm}
\mathcal{P}(z)
=
\prod _{\P \atop{{\mathfrak N}(\P) \le z} } \P .
\end{equation}
Recall the definition of the M\"{o}bius function in~\eqref{defmoebius}.
Further, for any ideal ${\mathfrak{e}}$ of $\O_{{\mathbf K}}$, we define
\begin{equation}
G_{{\mathfrak{e}}}(z)
=
\sum_{0< {\mathfrak N}(\a) \le z, \atop{(\a, {\mathfrak{e}}) = \O_{{\mathbf K}}}} \frac{\mu^2(\a)}{\phi(\a)},
\quad
G(z)
=
G_{\O_{{\mathbf K}}}(z).
\end{equation}
For some fixed integral ideal ${\mathfrak q}$, we further set
\begin{equation}
\label{deflambdae}
\lambda_{{\mathfrak{e}}}({\mathfrak q})
=
\mu({\mathfrak{e}}) \frac{{\mathfrak N}({\mathfrak{e}})G_{{\mathfrak{e}}{\mathfrak q}}(\frac{z}{{\mathfrak N}({\mathfrak{e}})})}{\phi({\mathfrak{e}})G_{\mathfrak q}(z)}
,\quad
\lambda_{\mathfrak{e}}=\lambda_{\mathfrak{e}}(\O_{\mathbf K}).
\end{equation}
We set $\lambda_{{\mathfrak{e}}}({\mathfrak q}) = 0$ whenever ${\mathfrak N}({\mathfrak{e}}) > z$ or $({\mathfrak{e}}, {\mathfrak q}) \ne \O_{{\mathbf K}}$.
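We note in passing the normalization, immediate from the definition since $\mu(\O_{{\mathbf K}})=1$ and ${\mathfrak N}(\O_{{\mathbf K}})=\phi(\O_{{\mathbf K}})=1$:
\begin{equation*}
\lambda_{\O_{{\mathbf K}}}({\mathfrak q}) ~=~ \frac{G_{{\mathfrak q}}(z)}{G_{{\mathfrak q}}(z)} ~=~ 1.
\end{equation*}
This is the usual Selberg normalization: for any ideal $\b$ with $(\b,V(z))=\O_{{\mathbf K}}$, the sum $\sum_{{\mathfrak{e}}\mid(\b,V(z))}\lambda_{{\mathfrak{e}}}({\mathfrak q})$ reduces to the single term $\lambda_{\O_{{\mathbf K}}}({\mathfrak q})=1$.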
In this set-up, we recall the following two lemmas from \cite{Schaal*68} by
Schaal. The reader may also refer to the beginning of Section~4 of
the paper by Debaene \cite{Debaene*19}.
\begin{lem}
For any ideal ${\mathfrak{e}} \mid V(z)$, one has $|\lambda_{{\mathfrak{e}}}({\mathfrak q})| \le 1$.
\end{lem}
\begin{lem}\label{G(z)bound}
We have
$
\displaystyle G_{\mathfrak q}(z)^{-1}
=
\sum_{{\mathfrak{e}}_1, {\mathfrak{e}}_2 \atop{{\mathfrak{e}}_i \mid V(z)}} \frac{\lambda_{{\mathfrak{e}}_1}({\mathfrak q}) \lambda_{{\mathfrak{e}}_2}({\mathfrak q})}{{\mathfrak N}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]}
$.
\end{lem}
\begin{proof}
In \cite{Schaal*68}, the required definitions are in $(1.25)$ and in
$(3.1)$, except
that we do not impose any additional sieving restriction, so that
$\rho=x$ there; this latter quantity is called~$z$ in our setting.
The bound for $|\lambda_{{\mathfrak{e}}}({\mathfrak q})|$ is given in $(3.2)$. The expression
for $G_{\mathfrak q}(z)^{-1}$ is contained in the last displayed equations at the
bottom of page~293, together with the remark around eq.~$(3.3)$.
\end{proof}
\subsection{A lower bound for $G_{\mathfrak q}(z)$}
\begin{lem}
\label{Mellin1}
When $y>0$ and $k$ is a positive integer, we have
\begin{equation*}
\max(0, 1-y)^k = \frac{1}{2\pi i}
\int_{\Re s=2} y^{-s} \frac{k! ds}{s(s+1) \cdots (s+k)}.
\end{equation*}
\end{lem}
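As an illustration, the case $k=1$ may be checked directly by residues: when $0<y<1$, moving the line of integration to the left across the simple poles at $s=0$ and $s=-1$ collects the residues $1$ and $-y$, so that
\begin{equation*}
\frac{1}{2\pi i}\int_{\Re s=2} y^{-s}\frac{ds}{s(s+1)} ~=~ 1-y,
\end{equation*}
while when $y>1$ the integrand decays as $\Re s\to+\infty$ and the line may be pushed to the right, yielding $0$; both values agree with $\max(0,1-y)$.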
The following theorem gives a lower bound for $G_{\mathfrak q}(z)$.
\begin{thm}\label{Gzbound}
When $z\ge (10^6n_{\mathbf K})^{4 n_{\mathbf K}}|d_{\mathbf K}|^3$, we have
$\displaystyle
G_{\mathfrak q}(z)
\ge
\alpha_{{\mathbf K}} \frac{\phi({\mathfrak q})}{{\mathfrak N}({\mathfrak q})}
\log\frac{z}{e^2 n_{\mathbf K}\sqrt{|d_{{\mathbf K}}|}}$.
\end{thm}
This is a version of Lemma~5 of \cite{Schaal*70} in which the dependence
on the field is explicit. In Lemma 14 of \cite{Debaene*19}, a similar result
is proved, but it relies on $R_{\mathbf K} h_{\mathbf K}$ whereas we prefer to rely on~$d_{\mathbf K}$.
\begin{proof}
We first remove the dependence on ${\mathfrak q}$ by using the following
inequality from \cite{Schaal*70}, page~266, obtained by combining
points (a) and (b) therein:
\begin{equation}
\label{Halberstam-Schaal}
G_{\mathfrak q}(z)\ge
\frac{\phi({\mathfrak q})}{{\mathfrak N}({\mathfrak q})}
\sum_{0< {\mathfrak N}(\a) \le z} \frac{1}{{\mathfrak N}(\a)}.
\end{equation}
We note in passing that it is
straightforward to adapt the inequality \cite[(1.3)]{van-Lint-Richert*65}
of van Lint and Richert to prove that $G_{\mathfrak q}(z)\ge
\frac{\phi({\mathfrak q})}{{\mathfrak N}({\mathfrak q})}G(z)$, which is
more refined than \eqref{Halberstam-Schaal}.
To handle the right-hand side of \eqref{Halberstam-Schaal}, we use Lemma~\ref{Mellin1} with
$y = {\mathfrak{N}(\a)}/{z}$ to get
\begin{align*}
\sum_{0< {\mathfrak N}(\a) \le z} \frac{1}{{\mathfrak N}(\a)}
& \ge
\sum_{\a \atop \a \ne0} \frac{1}{{\mathfrak N}(\a)} \max\left( 0,
~\left(1 - \frac{\mathfrak{N}(\a)}{z} \right)\right)^k \\
& =
\frac{1}{2\pi i} \sum_{\a \atop \a \ne 0} \frac{1}{{\mathfrak N}(\a)}
\int_{\Re s=2} \left(\frac{\mathfrak{N}(\a)}{z}\right)^{-s} \frac{k! }{s(s+1) \cdots (s+k)}~ds.
\end{align*}
Since we are in the region of absolute convergence of $\zeta_{{\mathbf K}}(s)$,
this leads to
\begin{equation*}
\sum_{0< {\mathfrak N}(\a) \le z} \frac{1}{{\mathfrak N}(\a)}
\ge
\frac{1}{2 \pi i} \int_{\Re s=2} \zeta_{{\mathbf K}}(1+s) ~\frac{k! z^s }{s(s+1) \cdots (s+k)} ~ds.
\end{equation*}
We note that the integrand has a double pole at $s=0$ and move the
line of integration to $\Re s = -1/4$. In the neighborhood of $s=0$, we find that
\begin{equation*}
s^2\zeta_{{\mathbf K}}(1+s) \frac{k! z^s}{s(s+1) \cdots (s+k)}
=\frac{\alpha_{\mathbf K} k! z^s}{(s+1) \cdots (s+k)}
+\frac{\alpha_{\mathbf K}\gamma_{\mathbf K} k! s\, z^s}{(s+1) \cdots (s+k)}
+ O(s^2).
\end{equation*}
The residue at $s=0$ is thus given by
\begin{equation*}
r = \lim_{s \to 0} \frac{d}{ds} \left(s^2 \zeta_{{\mathbf K}}(1+s) ~\frac{k! z^s}{s(s+1) \cdots (s+k)}\right)
=
\alpha_{{\mathbf K}} \left( \gamma_{{\mathbf K}} + \log z
-\sum_{\ell=1}^{k} \frac{1}{\ell} \right).
\end{equation*}
The remaining integral is now
\begin{equation*}
I = \frac{1}{2 \pi i} \int_{\Re s = -\frac{1}{4}}
\frac{k! \zeta_{{\mathbf K}}(1+s) z^s}{s(s+1)\cdots(s+k)} ds.
\end{equation*}
Further, Lemma~\ref{HR1} with $\varepsilon=1/4$ gives us
$$
|\zeta_{{\mathbf K}} (3/4 + it ) |
\le
27 \zeta(5/4)^{n_{{\mathbf K}}} (|d_{{\mathbf K}}|(7/4 + |t|)^{n_{{\mathbf K}}})^{1/4}.
$$
This gives
\begin{equation*}
| I |
\le
5 ~\zeta(5/4)^{n_{{\mathbf K}}} k!
\left(\frac{|d_{{\mathbf K}}|}{z}\right)^{1/4}
\int_{\Re s = -\frac{1}{4}} \frac{(7/4 + |t| )^{\frac{n_{{\mathbf K}}}{4} }|ds|}{ |s(s+1)\cdots(s+k)|}.
\end{equation*}
To estimate the integral, say $I_0$, choose $T>0$, $k>n_{\mathbf K}/4$ and write
\begin{align*}
\tfrac12 I_0
& =
\int_{t=0, \Re s = -\frac{1}{4} }^T
\frac{ (7/4 + t )^{ \frac{n_{{\mathbf K}}}{4} } |ds|}{ |s(s+1)\cdots(s+k)|}
~+~
\int_{t=T, \Re s = -\frac{1}{4} }^{\infty}
\frac{ ( 7/4 + t )^{ \frac{n_{{\mathbf K}}}{4} } |ds|}{ |s(s+1)\cdots(s+k)|}
\\
& \le
\frac{ 8 }{(k-1)!} \int_{0}^T
\left( \frac{7}{4}+ t \right)^{\frac{n_{{\mathbf K}}}{4}} dt
~+~
\int_{T}^{\infty} \Bigl(\frac7{4t} + 1\Bigr)^{\frac{n_{{\mathbf K}}}{4}}
\frac{dt}{t^{k+1-\frac{n_{\mathbf K}}{4}}}
\\&\le
\frac{ 32 }{(k-1)!(4+n_{\mathbf K})}(7/4 + T)^{1+\frac{n_{{\mathbf K}}}{4}}
~+~
\Bigl(\frac7{4T} + 1\Bigr)^{\frac{n_{{\mathbf K}}}{4}}
\frac{1}{(k-\frac{n_{\mathbf K}}{4})T^{k-\frac{n_{\mathbf K}}{4}}}.
\end{align*}
Hence, with $T=k/e$,
\begin{align*}
k! I_0
&\le
(7/4 + k/e)^{1+\frac{n_{{\mathbf K}}}{4}}
\biggl(
\frac{ 64 k}{4+n_{\mathbf K}} ~+~ \frac{8\cdot k!}{(4k-n_{\mathbf K})(k/e)^k}
\biggr)
\\&\le
(7/4 + k/e)^{1+\frac{n_{{\mathbf K}}}{4}} \biggl(\frac{ 64 k}{4+n_{\mathbf K}}
~+~\frac{8\cdot \sqrt{2\pi k}e^{\frac1{12k}}}{(4k-n_{\mathbf K})} \biggr)
\end{align*}
by using the explicit Stirling Formula
recalled in \eqref{ExplicitStirling}.
Therefore, with the choice $k = n_{{\mathbf K}}$, we find that
\begin{align*}
| I |
& \le
\biggl(5^{1/n_{\mathbf K}}
\zeta(5/4)\biggl(\frac{7}{4n_{\mathbf K}}+\frac{1}{e}\biggr)^{\frac{1}{n_{\mathbf K}}+\frac{1}{4}}
\biggl(\frac{ 64 n_{\mathbf K}}{4+n_{\mathbf K}}
+\frac{8\cdot \sqrt{2\pi}e^{\frac1{12n_{\mathbf K}}}}{3\sqrt{n_{\mathbf K}}}\biggr)^{1/n_{\mathbf K}}
n_{\mathbf K}^{\frac{1}{n_{\mathbf K}}+\frac{1}{4}} \biggr)^{n_{{\mathbf K}}}
\left(\frac{ |d_{{\mathbf K}}|}{z}\right)^{\frac{1}{4}}
\\
&\le
\bigl(10^{3}n_{\mathbf K}\bigr)^{n_{\mathbf K}}\left(\frac{ |d_{{\mathbf K}}|}{z}\right)^{\frac{1}{4}}.
\end{align*}
Hence
\begin{align*}
\frac{{\mathfrak N}{\mathfrak q}}{\phi({\mathfrak q})} G_{\mathfrak q}(z)
&\ge \alpha_{{\mathbf K}} \left( \gamma_{{\mathbf K}} + \log z
-\log n_{\mathbf K}-1 \right)
-\bigl(10^{3}n_{\mathbf K}\bigr)^{n_{\mathbf K}}\left(\frac{
|d_{{\mathbf K}}|}{z}\right)^{\frac{1}{4}}
\\&\ge
\alpha_{{\mathbf K}} \left( \log z -\log\sqrt{|d_{\mathbf K}|}
-\log n_{\mathbf K}-1
-\bigl(10^{3}n_{\mathbf K}\bigr)^{n_{\mathbf K}}\frac{100 \sqrt{|d_{{\mathbf K}}|}}{9 \cdot 2^{n_{{\mathbf K}}}}\left(\frac{
|d_{{\mathbf K}}|}{z}\right)^{\frac{1}{4}}\right)
\end{align*}
by Lemma~\ref{boundalphaK} and the inequality $\gamma_{\mathbf K}\ge
-\log\sqrt{|d_{\mathbf K}|}$ from \eqref{IharaIneq}.
When $z\ge (10^6n_{\mathbf K})^{4 n_{\mathbf K}}|d_{\mathbf K}|^3$, we have
$z^{1/4}\ge (10^6n_{\mathbf K})^{n_{\mathbf K}}|d_{\mathbf K}|^{3/4}$, so that the last
term inside the parentheses is at most $\frac{100}{9}\cdot 2000^{-n_{\mathbf K}}\le 1$, and
this completes the proof of \thmref{Gzbound}.
\end{proof}
\subsection{Controlling the error term in Selberg's sieve}
Our first lemma borrows from~\cite{Halberstam-Richert*79} by
Halberstam and Richert.
\begin{lem}\label{primes}
When $x\ge1$ and $\alpha\in[0,1)$, we have
$\displaystyle
\sum_{{\mathfrak N}\P\le x}\frac{\log{\mathfrak N}\P}{{\mathfrak N}\P^\alpha}\le
\frac{1.02 \,n_{\mathbf K} x^{1-\alpha}}{1-\alpha}$.
\end{lem}
\begin{proof}
Let $\P_1,\cdots, \P_g$ be prime ideals of $\O_{\mathbf K}$ lying over
a rational prime $p$. We then have
$$
\displaystyle\sum_{1\le i\le g}e_i\log {\mathfrak N}\P_i
=
n_{\mathbf K}\log p,
$$
where $e_i\ge1$ is the ramification index of $\P_i$ above~$p$. This
implies that $\sum_{1\le i\le g}\log {\mathfrak N}\P_i\le n_{\mathbf K}\log
p$. Inequality~\eqref{rosserpsi}
concludes the proof of the lemma
when $\alpha=0$. For the general case, we use summation by parts to write
\begin{align*}
\sum_{{\mathfrak N}\P\le x}\frac{\log{\mathfrak N}\P}{{\mathfrak N}\P^\alpha}
&=
\alpha\int_1^x \sum_{{\mathfrak N}\P\le t}\log{\mathfrak N}\P\frac{dt}{t^{1+\alpha}}
+\sum_{{\mathfrak N}\P\le x}\frac{\log{\mathfrak N}\P}{x^\alpha}
\\&\le~
1.02
\,n_{\mathbf K}\left(\alpha\int_1^x\frac{dt}{t^{\alpha}}+x^{1-\alpha}\right)
~\le~ 1.02
\,n_{\mathbf K}\frac{x^{1-\alpha}}{1-\alpha}
\end{align*}
as required.
\end{proof}
\begin{lem}\label{userhhr}
When $x\ge 1$ and $\alpha\in[0,1)$, we have
\begin{equation*}
\sum_{0< {\mathfrak N}\a\le x}\mu^2(\a)\left(\frac{x}{{\mathfrak N}\a}\right)^\alpha
~\le~
\frac{(1+1.02 n_{\mathbf K})x}{(1-\alpha)(1+\log x)}\sum_{0< {\mathfrak N}\a\le x}\frac{\mu^2(\a)}{{\mathfrak N}\a}.
\end{equation*}
\end{lem}
\begin{proof}
We set $S(x)=\sum_{ 0<{\mathfrak N}\a\le x}\mu^2(\a)\left({x}/{{\mathfrak N}\a}\right)^\alpha$.
We use $\log y\le (y^{1-\alpha}-1)/(1-\alpha)$ when $y>0$ and
readily find that
\begin{align*}
S(x)\log x
&=\sum_{0< {\mathfrak N}\a\le x}\mu^2(\a)\left(\frac{x}{{\mathfrak N}\a}\right)^\alpha\log\frac{x}{{\mathfrak N}\a}
+\sum_{0< {\mathfrak N}\a\le x}\mu^2(\a)\left(\frac{x}{{\mathfrak N}\a}\right)^\alpha\log{\mathfrak N}\a
\\&\le
\frac{x}{1-\alpha}\sum_{0< {\mathfrak N}\a\le x}\frac{\mu^2(\a)}{{\mathfrak N}\a}-S(x)
+\sum_{0< {\mathfrak N}\a\le x}\mu^2(\a)\left(\frac{x}{{\mathfrak N}\a}\right)^\alpha\sum_{\P|\a}\log{\mathfrak N}\P
\\&\le
\frac{x}{1-\alpha}\sum_{0< {\mathfrak N}\a\le x}\frac{\mu^2(\a)}{{\mathfrak N}\a}-S(x)
+\sum_{{\mathfrak N}\P\le x}\log{\mathfrak N}\P
\sum_{0< {\mathfrak N}\a\le x/{\mathfrak N}\P} \mu^2(\a)\left(\frac{x/{\mathfrak N}\P}{{\mathfrak N}\a}\right)^\alpha.
\end{align*}
We invert the summation in the last term and
then appeal to Lemma~\ref{primes} to get
\begin{equation*}
S(x)(1+\log x)
~\le~
\frac{x}{1-\alpha}\sum_{0< {\mathfrak N}\a\le x}\frac{\mu^2(\a)}{{\mathfrak N}\a}
+\frac{1.02 n_{\mathbf K}\,x }{1- \alpha} \sum_{0< {\mathfrak N}\a\le x}\frac{\mu^2(\a)}{{\mathfrak N}\a}
\end{equation*}
and our lemma follows readily.
\end{proof}
\begin{thm}\label{absolute}
When $z\ge1$ and $\alpha\in[0,1)$, we have
\begin{equation*}
\sum_{\substack{{\mathfrak{e}} \mid V(z)}} \frac{|\lambda_{{\mathfrak{e}}}({\mathfrak q})|}{{\mathfrak N}{\mathfrak{e}}^\alpha}
\le
\frac{3.1\,n_{\mathbf K} z^{1-\alpha}}{(1-\alpha)(2+\log z)}
c_1(\alpha)^{n_{\mathbf K}}
+
z^{3(1-\alpha)/4}c_2(\alpha)^{n_{\mathbf K}},
\end{equation*}
where
\begin{equation*}
c_1(\alpha)=
\prod_{p}\biggl(1+\frac{1+p^\alpha}{(p-1)p}\biggr),
\quad
c_2(\alpha)=
\prod_{p}\biggl(1+\frac{1+p^\alpha}{(p-1)p^{\frac{1+ 3\alpha}4}}\biggr).
\end{equation*}
\end{thm}
This theorem replaces Lemma~15 in \cite{Debaene*19}, but notice that we avoid the
high power of $\log z$ and even save an additional $\log z$. We also
mention for future use that
\begin{align}\label{boundc2}
\frac{c_2(\alpha)}{\zeta(\frac{5+3\alpha}{4})\zeta(\frac{5-\alpha}4)}
~=~
\prod_{p}\biggl( 1 + \frac{(p^{1/4} -p^{\alpha/4})p^{\alpha/4}}{p^{5/2} - p^{3/2}}
~+~ \frac{ -(p^{5/4} + p^{1/4}) p^{\alpha} + p^{\frac{6 + 3\alpha}{4} } + p^{ \frac{\alpha}{4} }
+ p^{\frac{5\alpha} {4} } - p^{5/4}}{ p^{\frac{3\alpha}{ 2} } (p^{15/4} - p^{11/4})}
\biggr),
\end{align}
where the right-hand side is uniformly bounded above for $\alpha\in[0,1]$.
\begin{proof}
Set $T = \sum_{{\mathfrak{e}} \mid V(z)} |\lambda_{{\mathfrak{e}}}({\mathfrak q})|/{\mathfrak N}{\mathfrak{e}}^\alpha$. Then from the definition of $\lambda_{{\mathfrak{e}}}({\mathfrak q})$, we get
\begin{align*}
G_{\mathfrak q}(z)T
&=
\sum_{{\mathfrak{e}} \mid V(z)} \frac{{\mathfrak N}{\mathfrak{e}}^{1-\alpha}}{\phi({\mathfrak{e}})}
\sum_{{\mathfrak N}\a \le \frac{z}{{\mathfrak N}{\mathfrak{e}}} ,\atop{(\a, {\mathfrak{e}}{\mathfrak q}) = \O_{{\mathbf K}}}} \frac{\mu^2(\a)}{\phi(\a)}
=
\sum_{{\mathfrak{e}} \mid V(z)} {\mathfrak N}{\mathfrak{e}}^{1-\alpha}
\sum_{{\mathfrak N}\b \le z, \atop{(\b,{\mathfrak q})= \O_{{\mathbf K}}, \atop{{\mathfrak{e}} \mid \b}}}
\frac{\mu^2(\b)}{\phi(\b)}
\\&=
\sum_{{\mathfrak N}\b \le z, \atop{(\b,{\mathfrak q})= \O_{{\mathbf K}}}}
\frac{\mu^2(\b)}{\phi(\b)} \sum_{{\mathfrak{e}} \mid \b} {\mathfrak N}{\mathfrak{e}}^{1-\alpha} .
\end{align*}
We note that, when $\b$ is squarefree, we have
$$
\frac{{\mathfrak N}\b^\alpha}{\phi(\b)} \sum_{{\mathfrak{e}} \mid \b} {\mathfrak N}{\mathfrak{e}}^{1-\alpha}
=
\prod_{\P \mid \b}
\left(
\frac{{\mathfrak N}\P + {\mathfrak N}\P^{\alpha}}{{\mathfrak N}\P-1}\right)
=
\prod_{\P \mid \b} \left(1+\frac{1+{\mathfrak N}\P^\alpha}{{\mathfrak N}\P-1}\right)
=
\sum_{{\mathfrak f}\mid \b} g_\alpha({\mathfrak f})
$$
where $g_\alpha$ is the completely multiplicative function defined on prime ideals
by $g_\alpha(\P)=\frac{1+{\mathfrak N}\P^\alpha}{{\mathfrak N}\P-1}$. Only the values on
squarefree ideals are required, but we specify complete multiplicativity so that
the function is uniquely defined.
Therefore
\begin{align*}
G_{\mathfrak q}(z)T
&=
\sum_{{\mathfrak N}\b \le z, \atop{(\b,{\mathfrak q}) = \O_{{\mathbf K}}}}
\frac{\mu^{2}(\b)}{{\mathfrak N}\b^\alpha}
\sum_{{\mathfrak f} \mid \b} g_\alpha({\mathfrak f})
=
\sum_{{\mathfrak N}{\mathfrak f} \le z} \mu^2({\mathfrak f}) g_\alpha({\mathfrak f})
\sum_{{\mathfrak N}\b \le z, \atop{(\b,{\mathfrak q})=\O_{{\mathbf K}}, \atop{{\mathfrak f} \mid \b}}}
\frac{\mu^{2}(\b)}{{\mathfrak N}\b^\alpha} \\
&\le
\sum_{{\mathfrak N}{\mathfrak f} \le z, \atop{({\mathfrak f},{\mathfrak q}) = \O_{{\mathbf K}}}}
\frac{\mu^2({\mathfrak f}) g_\alpha({\mathfrak f})}{{\mathfrak N}{\mathfrak f}^\alpha}
\sum_{{\mathfrak N}\b_1 \le \frac{z}{{\mathfrak N}{\mathfrak f}}}\frac{\mu^{2}(\b_1)}{{\mathfrak N}\b_1^\alpha}.
\end{align*}
At this level, we use the trivial bound
$$
G_{\mathfrak q}(z) = \sum_{{\mathfrak N}(\a) \le z \atop{(\a, {\mathfrak q})= \O_{{\mathbf K}}}}
\frac{\mu^2(\a)}{{\mathfrak N}\a} \prod_{\P | \a} \sum_{k \ge 0} {\mathfrak N}\P^{-k}
~\ge
\sum_{{\mathfrak N}(\a) \le z, \atop{(\a, {\mathfrak q})= \O_{{\mathbf K}}}} \frac{1}{{\mathfrak N}(\a)}
$$
and Lemma~\ref{userhhr} with $x=z/{\mathfrak N}{\mathfrak f}$ for the inner sum
of the right-hand side when ${\mathfrak N}{\mathfrak f}\le \sqrt{z}$, and with $x=\sqrt{z}$
for the remaining sum. This first gives us
\begin{multline*}
\sum_{{\mathfrak N}(\a) \le z, \atop{(\a, {\mathfrak q})= \O_{{\mathbf K}}}} \frac{1}{{\mathfrak N}(\a)} T
~\le~
\frac{(1+1.02n_{\mathbf K})z^{1-\alpha}}{1-\alpha}
\sum_{{\mathfrak N}{\mathfrak f} \le \sqrt{z}, \atop{({\mathfrak f},{\mathfrak q}) = \O_{{\mathbf K}}}}
\frac{\mu^2({\mathfrak f}) g_\alpha({\mathfrak f})}{{\mathfrak N}{\mathfrak f}(1+\log(z/{\mathfrak N}{\mathfrak f}))}
\sum_{{\mathfrak N}\b_1 \le \frac{z}{{\mathfrak N}{\mathfrak f}}, \atop{ (\b_1, {\mathfrak q})= \O_{{\mathbf K}}}}\frac{\mu^2(\b_1)}{{\mathfrak N}\b_1}
\\+~
z^{(1-\alpha)/2}\sum_{z \ge {\mathfrak N}{\mathfrak f} > \sqrt{z}, \atop{({\mathfrak f},{\mathfrak q}) = \O_{{\mathbf K}}}}
\frac{\mu^2({\mathfrak f}) g_\alpha({\mathfrak f})}{{\mathfrak N}{\mathfrak f}^\alpha}
\sum_{{\mathfrak N}(\a) \le \sqrt{z}, \atop{(\a, {\mathfrak q})= \O_{{\mathbf K}}}} \frac{1}{{\mathfrak N}(\a)}.
\end{multline*}
We readily deduce from this the inequality
\begin{equation*}
T
\le
\frac{3.1\,n_{\mathbf K} z^{1-\alpha}}{(1-\alpha)(2+\log z)}
\sum_{{\mathfrak N}{\mathfrak f} \le z, \atop{({\mathfrak f},{\mathfrak q}) = \O_{{\mathbf K}}}}
\frac{\mu^2({\mathfrak f})g_\alpha({\mathfrak f})}{{\mathfrak N}{\mathfrak f}}
~+~
z^{(1-\alpha)/2}\sum_{z \ge {\mathfrak N}{\mathfrak f} > \sqrt{z}, \atop{({\mathfrak f},{\mathfrak q}) = \O_{{\mathbf K}}}}
\frac{\mu^2({\mathfrak f}) g_\alpha({\mathfrak f})}{{\mathfrak N}{\mathfrak f}^\alpha}.
\end{equation*}
To proceed, we drop
the coprimality condition on~${\mathfrak f}$ and use Rankin's trick for the second
part, noticing that $(z/{\mathfrak N}{\mathfrak f})^{(1-\alpha)/4}\ge 1$ therein. This leads~to
\begin{equation*}
T
\le
\frac{3.1\,n_{\mathbf K} z^{1-\alpha}}{(1-\alpha)(2+\log z)}C_1(\alpha)
~+~ z^{3(1-\alpha)/4}C_2(\alpha).
\end{equation*}
Concerning the constants $C_1(\alpha)$ and $C_2(\alpha)$, we readily find that
\begin{equation*}
\left\{
\begin{aligned}
C_1(\alpha)&=\sum_{{\mathfrak f} \atop {\mathfrak f} \ne 0}\frac{\mu^2({\mathfrak f}) g_\alpha({\mathfrak f})}{{\mathfrak N}{\mathfrak f}}
~ =~
\prod_{\P}\biggl(1+\frac{1+{\mathfrak N}\P^\alpha}{({\mathfrak N}\P-1){\mathfrak N}\P}\biggr)
~\le~
\prod_{p}\biggl(1+\frac{1+p^\alpha}{(p-1)p}\biggr)^{n_{\mathbf K}},
\\
C_2(\alpha)
&=~
\sum_{{\mathfrak f} \atop {\mathfrak f} \ne 0}\frac{\mu^2({\mathfrak f}) g_\alpha({\mathfrak f})}{{\mathfrak N}{\mathfrak f}^{\frac{3 \alpha +1}{4}}}
~=~
\prod_{\P}\biggl(1+\frac{1+{\mathfrak N}\P^\alpha}{({\mathfrak N}\P-1){\mathfrak N}\P^{\frac{3\alpha+1}4}}\biggr)
~\le~
\prod_{p}\biggl(1+\frac{1+p^\alpha}{(p-1)p^{\frac{3\alpha+1}4}}\biggr)^{n_{\mathbf K}}.
\end{aligned}
\right.
\end{equation*}
The theorem follows readily.
\end{proof}
\begin{cor}
\label{absolute0}
When $z\ge1$, we have
$\displaystyle
\sum_{\substack{{\mathfrak{e}} \mid V(z),\\ ({\mathfrak{e}},{\mathfrak q})=\O_{\mathbf K}}} |\lambda_{{\mathfrak{e}}}({\mathfrak q})|
\le
6\cdot
89^{n_{\mathbf K}}\frac{z}{2+\log z}$.
\end{cor}
\begin{proof}
We use Theorem~\ref{absolute} with $\alpha=0$. We numerically find that
\begin{equation*}
1.6\,n_{\mathbf K} \prod_{p}\biggl(1+\frac{2}{(p-1)p}\biggr)^{n_{\mathbf K}}
~\le~
\prod_{p}\biggl(1+\frac{2}{(p-1)p^{1/4}}\biggr)^{n_{\mathbf K}}\le (88.2)^{n_{\mathbf K}}.
\end{equation*}
Furthermore $z^{3/4}\le 4\,z/(2+\log z)$, since the maximum of $z\mapsto(2+\log z)/z^{1/4}$
is $4e^{-1/2}$, attained at $\log z=2$; finally $6\,(88.2)^{n_{\mathbf K}}\le 6\cdot
89^{n_{\mathbf K}}$.
\end{proof}
\begin{cor}
\label{absolute1minus1overnK}
When $z\ge1$, we have
$\displaystyle
\sum_{\substack{{\mathfrak{e}} \mid V(z),\\ ({\mathfrak{e}},{\mathfrak q})=\O_{\mathbf K}}}
\frac{|\lambda_{{\mathfrak{e}}}({\mathfrak q})|}{{\mathfrak N}{\mathfrak{e}}^{1-\frac{1}{n_{\mathbf K}}}}
~\le~
n_{\mathbf K}^{9n_{\mathbf K}}\frac{z^{1/n_{\mathbf K}}}{2+\log z}$.
\end{cor}
\begin{proof}
Let us call $T$ the quantity to be bounded
above. Theorem~\ref{absolute} with $\alpha=1-1/n_{\mathbf K}$ gives the upper bound, with~$n=n_{\mathbf K}$,
\begin{multline*}
\frac{3.1\,n^2 z^{1/n}}{2+\log z}
c_1(1-1/n)^{n}
+
z^{3/(4n)}c_2(1-1/n)^{n}
\\=
\frac{n\, z^{1/n}}{2+\log z}
\Bigl(3.1\,n c_1(1-1/n)^{n} + 4c_2(1-1/n)^{n}\frac{\frac1{2n}+\log(z^{1/(4n)})}{z^{1/(4n)}}\Bigr).
\end{multline*}
The maximum of $y\mapsto(\frac14+\log y)/y$ is attained at $y=e^{3/4}$ and
$c_1(\alpha)\le c_2(\alpha)$, so we can simplify this upper bound to
\begin{equation*}
\frac{n\, z^{1/n}}{2+\log z}(3.1\,n +1.9)c_2(1-1/n)^{n}.
\end{equation*}
Now we want to find an upper bound for $c_2(1-1/n)$.
Applying inequality~\eqref{boundc2} and the fact that $\alpha\ge1/2$
in our case, we get
\begin{equation*}
\frac{c_2(1-1/n)}{\zeta(1+ \frac{1}{4n})\zeta(2-\frac{3}{4n})}
~\le~
\prod_{p}\biggl( 1 + \frac{p^{1/2}}{p^{5/2} - p^{3/2}}
+ \frac{p^2 - p^{3/2} - p^{1/2} + 1}{p^{1/2}(p^{15/4} - p^{11/4})}
\biggr).
\end{equation*}
A direct computation shows that the right-hand side, restricted to the primes $p\le 10^4$, is bounded by
$2.3$, and then using calculus
we derive that ${c_2(1-1/n)}\le 11.5\,n\, \zeta(13/8)$. This leads
to
\begin{equation*}
c_2(1-1/n) ~\le~ 26.45\,n.
\end{equation*}
Furthermore the quantity $n\, (3.1\,n +1.9)(26.45\,n)^{n}$ may be bounded above by
$n^{9n}$. This establishes the corollary.
\end{proof}
\section{Brun-Titchmarsh Theorem for cosets of ray class groups}
\label{btc}
In this section, we prove a number field analogue of Brun-Titchmarsh theorem for
cosets. This result, while being of independent interest, is also crucial in proving our
main theorem.
\begin{thm}\label{bt}
Let $\a, {\mathfrak q}$ be integral ideals with $(\a, {\mathfrak q}) = \O_{{\mathbf K}}$. Also let
$H$ be a subgroup of $H_{{\mathfrak q}}({\mathbf K})$ with index $Y$. When
$X/Y
~\ge~
(10^6n_{\mathbf K})^{8n_{\mathbf K} + 11}|d_{\mathbf K}|^6\sqrt{|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q})}\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n_{\mathbf K}}$,
we have
\begin{equation*}
\sum_{\P \in [\a]H} w\left(\frac{{\mathfrak N}(\P)}{X}\right)
\le
\frac{2 \|w\|_1 X}
{Y\log\frac{u^*(w,{\mathbf K})X}{Y\sqrt{{\mathfrak N}({\mathfrak q})}\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n_{\mathbf K}}}}~,
\quad%
u^*(w,{\mathbf K})=\frac{\|w\|_1/(\|w^{(n_{\mathbf K}+3)}\|_\infty+5\|w\|_1)}{20000\cdot2^{22n_{\mathbf K}}|d_{{\mathbf K}}|^{3/2}},
\end{equation*}
for any \emph{non-negative} smoothing function $w$ as defined in
Section~\ref{Notation}.
\end{thm}
When compared to the usual Brun-Titchmarsh Theorem, this result has
the surprising feature of sometimes being valid even when
$X<{\mathfrak N}{\mathfrak q}$. For instance, when $Y=2$, the bound applies for $X$ as small as
${{\rm{O} }}_{{\mathbf K}}({\mathfrak N}{\mathfrak q}^{\frac12+\epsilon})$.
\begin{proof}
We use the notation $G=H_{{\mathfrak q}}({\mathbf K})$ and $n=n_{\mathbf K}$.
For any ideal $\b$ of $\O_{\mathbf K}$, we define
\begin{eqnarray*}
\delta(\b)
= \frac{1}{Y} \sum_{\chi \in {G\widehat /H }} \chi([\b \a^{-1}]H)
& \text{ and } &
\delta^*(\b)
= \frac{1}{Y} \sum_{\chi \in {G\widehat /H}} \chi^*({[\b \a^{-1}] H})
\end{eqnarray*}
where $G\widehat/H$ denotes the group of characters of $G/H$ and
$\chi^*$ is the primitive character inducing $\chi$. For
any integral ideal $\b$ of $\O_{{\mathbf K}}$ with $(\b, {\mathfrak q}) = \O_{{\mathbf K}}$,
$\overline{\b}$ denotes its class in $G/H$.
In this case, $\delta(\b) = \delta^*(\b)$. For the case $(\b, {\mathfrak q}) \neq \O_{{\mathbf K}}$,
let ${\mathfrak q}(\b)$ be the largest divisor of ${\mathfrak q}$ co-prime to
$\b$ and let $L_{{\mathfrak q}(\b)}$ be the image of $H$ in $H_{{\mathfrak q}(\b)}({\mathbf K}) = G_{{\mathfrak q}(\b)}$, say.
Note that
$$
\delta^*(\b)
=
\frac{1}{Y} \sum_{\chi \in {{G_{{\mathfrak q}(\b)}\widehat/ L_{{\mathfrak q}(\b)}}}} \chi^*({ [\b \a^{-1}]H })
=
\frac{|G_{{\mathfrak q}(\b)}/L_{{\mathfrak q}(\b)}|}{Y} ~~{\bf 1}_{[\a]L_{{\mathfrak q}(\b)}},
$$
where $ {\bf 1}_{ [\a]L_{{\mathfrak q}(\b)} }$ is the characteristic function of $[\a]L_{{\mathfrak q}(\b)}$.
This shows that $\delta^*(\b)$ is non-negative whenever $(\b, {\mathfrak q}) \neq \O_{\mathbf K}$.
Therefore $\delta^*(\b) \ge \delta(\b)$.
\noindent
Consider the sum $T_1=\sum_{\P} \delta(\P) w({{\mathfrak N}(\P)}/{X})$. As
$|w|\le 1$, we readily find that
\begin{align*}
T_1
&=
\sum_{{\mathfrak N}(\P) \le z} \delta(\P) w\left(\frac{{\mathfrak N}(\P)}{X}\right)
+ \sum_{{\mathfrak N}(\P) > z} \delta(\P) w\left(\frac{{\mathfrak N}(\P)}{X}\right)
\\
&\le~
n z + \sum_{(\b, V(z)) = \O_{{\mathbf K}}} \delta^*(\b) w\left(\frac{{\mathfrak N}(\b)}{X}\right)
~\le~
nz + \sum_{\b} \delta^*(\b) w\left(\frac{{\mathfrak N}(\b)}{X}\right) \sum_{{\mathfrak{e}} \mid (\b,V(z))} \mu({\mathfrak{e}}),
\end{align*}
where $V(z)$ is as in Section~\ref{Ssnfdone}.
Since $\sum_{{\mathfrak{e}} \mid (\b,V(z))}\mu ({\mathfrak{e}})\le\bigl(\sum_{{\mathfrak{e}} \mid (\b,V(z))}\lambda_{{\mathfrak{e}}}\bigr)^2$, we have
$$
T_1 - nz
~\le~
\sum_{\b} \delta^*(\b) w\left(\frac{{\mathfrak N}(\b)}{X}\right) \left(\sum_{{\mathfrak{e}} \mid (\b, V(z))} \lambda_{{\mathfrak{e}}}\right)^2
~\le~
\sum_{{\mathfrak{e}}_1, {\mathfrak{e}}_2 \atop{{\mathfrak{e}}_i \mid V(z)}} \lambda_{{\mathfrak{e}}_1} \lambda_{{\mathfrak{e}}_2}
\sum_{[{\mathfrak{e}}_1,{\mathfrak{e}}_2] \mid \b} \delta^*(\b) w\left(\frac{{\mathfrak N}(\b)}{X}\right).
$$
Replacing the definition of $\delta^{*}(\b)$, we get
\begin{align*}
T_1 - nz
& \le~
\sum_{{\mathfrak{e}}_1, {\mathfrak{e}}_2 \atop{{\mathfrak{e}}_i \mid V(z)}} \frac{\lambda_{{\mathfrak{e}}_1}
\lambda_{{\mathfrak{e}}_2}}{Y} \sum_{[{\mathfrak{e}}_1,{\mathfrak{e}}_2] \mid \b} w\left(\frac{{\mathfrak N}(\b)}{X}\right)
\sum_{\chi \in {G\widehat/H}} \chi^{*}({[\b \a^{-1}]H})
\\
&\le~
\sum_{{\mathfrak{e}}_1, {\mathfrak{e}}_2 \atop{{\mathfrak{e}}_i \mid V(z)}} \frac{\lambda_{{\mathfrak{e}}_1} \lambda_{{\mathfrak{e}}_2}}{Y}
\sum_{\chi \in {G\widehat/H} }\chi^{*}({[\a^{-1}]H})
\sum_{[{\mathfrak{e}}_1,{\mathfrak{e}}_2] \mid \b} w\left(\frac{{\mathfrak N}(\b)}{X}\right) \chi^*([\b]H)
\\& \le~
\sum_{{\mathfrak{e}}_1, {\mathfrak{e}}_2 \atop{{\mathfrak{e}}_i \mid V(z)}} \frac{\lambda_{{\mathfrak{e}}_1} \lambda_{{\mathfrak{e}}_2}}{Y}
\sum_{\chi \in {G\widehat/H}}\chi^{*}({[\a^{-1}]H}) \sum_{{\mathfrak m}} w\left(\frac{{\mathfrak N}({\mathfrak m}[{\mathfrak{e}}_1,{\mathfrak{e}}_2])}{X}\right)
\chi^*({[{\mathfrak m}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]~]H}).
\end{align*}
Applying Mellin transforms, we get
$$
\sum_{{\mathfrak m}} w\left(\frac{{\mathfrak N}({\mathfrak m}[{\mathfrak{e}}_1,{\mathfrak{e}}_2])}{X}\right) \chi^*({[{\mathfrak m}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]~]H})
=
\chi^{*}([~[{\mathfrak{e}}_1,{\mathfrak{e}}_2]~]H)\frac{1}{2\pi i}\int_{2 - i \infty}^{2+ i\infty} \check{w}(s) L_{{\mathfrak q}^*}(s,\chi^*)
\frac{X^s ds}{{\mathfrak N}([{\mathfrak{e}}_1,{\mathfrak{e}}_2])^s},
$$
where ${\mathfrak q}^*$ is the conductor of $\chi^*$. We shift the line of integration to the line $\Re s=0$.
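For the reader's convenience, we recall the Mellin-inversion identity underlying this step (a standard reminder, with $\check{w}$ denoting the Mellin transform of $w$):
\begin{equation*}
\check{w}(s)~=~\int_0^\infty w(t)\,t^{s-1}\,dt,
\qquad
w(u)~=~\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty}\check{w}(s)\,u^{-s}\,ds.
\end{equation*}
Applying the second formula with $u={\mathfrak N}({\mathfrak m}[{\mathfrak{e}}_1,{\mathfrak{e}}_2])/X$ and summing over ${\mathfrak m}$ produces the $L$-series $L_{{\mathfrak q}^*}(s,\chi^*)$.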
This leads to the upper bound of $T_1 - nz$ given by
$$
\sum_{{\mathfrak{e}}_1, {\mathfrak{e}}_2 \atop{{\mathfrak{e}}_i \mid V(z)}} \frac{\lambda_{{\mathfrak{e}}_1} \lambda_{{\mathfrak{e}}_2}}{Y}
\left\{ \frac{\alpha_{{\mathbf K}}\phi({\mathfrak q})\check{w}(1)X}{{\mathfrak N}({\mathfrak q}){\mathfrak N}([{\mathfrak{e}}_1,{\mathfrak{e}}_2])}
+
\sum_{\chi \in {G \widehat/H}} \frac{\chi^{*}({[\a^{-1}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]]H})}{2 \pi i}
\int_{-i\infty}^{i\infty} \check{w}(s) L_{{\mathfrak q}^*}(s,\chi^*) \frac{X^s ds}{{\mathfrak N}([{\mathfrak{e}}_1,{\mathfrak{e}}_2])^s} \right\}.
$$
On using \lemref{HR2} when $\chi^*\neq1$ and \lemref{HR1} when
$\chi^*=1$ on the line $\Re s=0$, we see that $T_1 - nz$ is less than
$$
\frac{\alpha_{\mathbf K}\check{w}(1) \phi({\mathfrak q})X}{{\mathfrak N}({\mathfrak q}) Y}
\sum_{{\mathfrak{e}}_1, {\mathfrak{e}}_2 \atop{{\mathfrak{e}}_i \mid V(z)}} \frac{\lambda_{{\mathfrak{e}}_1}
\lambda_{{\mathfrak{e}}_2}}{{\mathfrak N}([{\mathfrak{e}}_1,{\mathfrak{e}}_2])}
+
{{\rm{O} }}^* \biggl( 3
M(w,\varepsilon)\zeta(1+\varepsilon)^{n_{{\mathbf K}}} (|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q}))^{\frac{1+\varepsilon}{2}}
\Bigl(\sum_{{\mathfrak{e}} \atop{{\mathfrak{e}} \mid V(z)}}|\lambda_{{\mathfrak{e}}}|\Bigr)^2 \biggr),
$$
where $0< \varepsilon <1/24$ and
$M(w,\varepsilon)$ is defined and bounded in Lemma~\ref{getM}.
The sum over ${\mathfrak{e}}$ may be treated by Corollary~\ref{absolute0}.
As a partial conclusion, we reach
\begin{equation*}
T_1\le n z+ \frac{\alpha_{\mathbf K}\check{w}(1)X}{YG(z)}
+ 12\cdot 2^{\frac{\varepsilon}{2}n}(\|w^{(n+3)}\|_\infty
+10\cdot 2^{\frac{n}{2}}\|w\|_1)\zeta(1+\varepsilon)^{n} (|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q}))^{\frac{1+\varepsilon}{2}}36\cdot
89^{2n}\frac{z^2}{\log^2 z}.
\end{equation*}
Let us assume that $z\ge (10^6n)^{4n}|d_{\mathbf K}|^3$ to use
Theorem~\ref{Gzbound}. This leads to
\begin{equation*}
T_1\le n z+ \frac{\check{w}(1)X}{Y\log\frac{z}{n_{\mathbf K}^4\sqrt{|d_{{\mathbf K}}|}}}
+ 432 \cdot 2^{\frac{28+\varepsilon}{2}n}(\|w^{(n+3)}\|_\infty
+10\cdot 2^{\frac{n}{2}}\|w\|_1)\zeta(1+\varepsilon)^{n} (|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q}))^{\frac{1+\varepsilon}{2}}
\frac{z^2}{\log^2 z}.
\end{equation*}
We select $\varepsilon=\min(1/2, 2/\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q}))$ and use
\begin{equation*}
\zeta(1+\varepsilon)^{n} (|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q}))^{\frac{1+\varepsilon}{2}}
\le e \bigl(2\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})\bigr)^{n}\sqrt{|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q})}
\end{equation*}
to derive the bound
\begin{equation*}
\frac{Y\log\frac{z}{n^4\sqrt{|d_{{\mathbf K}}|}}}{X \|w\|_1} T_1 ~ \le
1
+ 2600\cdot 2^{16n}\cdot \frac{(\|w^{(n+3)}\|_\infty + 5 \|w\|_1)}{ \|w\|_1}
\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n}\sqrt{|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q})}
\frac{Y z^2}{X\log z}.
\end{equation*}
We select
\begin{equation*}
z^2=\frac{X\|w\|_1/Y}{2600\cdot 2^{16n}(\|w^{(n+3)}\|_\infty + 5 \|w\|_1)
\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n}\sqrt{|d_{{\mathbf K}}|{\mathfrak N}({\mathfrak q})}}.
\end{equation*}
This gives us the inequality
\begin{equation*}
\frac{Y\log\frac{z}{n^4\sqrt{|d_{{\mathbf K}}|}}}{X \|w\|_1} T_1
~ \le~ 1 + \frac{1}{\log \frac{z}{n^4 \sqrt{|d_{{\mathbf K}} |} } }
~ \le~ \frac{1}{ 1 - \log^{-1} \frac{z}{n^4 \sqrt{ |d_{{\mathbf K}}| }} },
\end{equation*}
where we have used the inequality $1+x\le 1/(1-x)$ for $x\in[0,1)$.
Thus
\begin{equation*}
\sum_{\P \in [\a]H} w\left(\frac{{\mathfrak N}(\P)}{X}\right)
~\le~
\frac{2 \|w\|_1 X}{Y\log \frac{X/ (Y \sqrt{{\mathfrak N}{\mathfrak q}} \log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n} ) }{
|d_{{\mathbf K}}|^{3/2} 20000\cdot 2^{22n}\cdot (\|w^{(n+3)}\|_\infty + 5 \|w\|_1) / \|w\|_1 }},
\end{equation*}
where we have used the inequality $2^{16n}n^8\le 2^{22n}$. This completes the
proof of our theorem.
\end{proof}
\section{Brun-Titchmarsh theorem for a single class of ray class groups}
Applying Theorem~\ref{bt} for the trivial subgroup and forgetting
the field-dependence leads to the upper bound
$\frac{2X}{h_{{\mathbf K},{\mathfrak q}}\log(X/{\mathfrak N}{\mathfrak q}^{3/2+\epsilon})}$ while the usual
Brun-Titchmarsh inequality has $\log(X/{\mathfrak N}{\mathfrak q}^{1+\epsilon})$.
\begin{proof}[Proof of Theorem~\ref{bt-tri}]
As before, we denote $n_{\mathbf K}$ by $n$.
We use a Selberg sieve of dimension one as presented in
Section~\ref{Ssnfdone} with coefficients
$(\lambda_{{\mathfrak{e}}} ({\mathfrak q}))_{{\mathfrak N}{\mathfrak{e}}\le z}$.
On denoting by $T_1$ the sum to evaluate, this gives us
\begin{equation*}
T_1
~\le~ \sum_{\substack{\a\in [\b],\\ {\mathfrak N}\a \le X}}
\biggl(\sum_{{\mathfrak{e}} \mid (\a, V(z))} \lambda_{{\mathfrak{e}}}({\mathfrak q})\biggr)^2
~+~ n z
~\le~
\sum_{\substack{{\mathfrak{e}}_1,{\mathfrak{e}}_2}}\lambda_{{\mathfrak{e}}_1}({\mathfrak q})\lambda_{{\mathfrak{e}}_2}({\mathfrak q})
\sum_{\substack{[{\mathfrak{e}}_1,{\mathfrak{e}}_2]|\a\in[\b],\\ {\mathfrak N}\a\le X}}1 ~+~ nz.
\end{equation*}
We write $\a=[{\mathfrak{e}}_1,{\mathfrak{e}}_2]\c$ where $\c$ is an integral ideal
in the class of $\b[{\mathfrak{e}}_1,{\mathfrak{e}}_2]^{-1}$ (this is legal since the lcm
$[{\mathfrak{e}}_1,{\mathfrak{e}}_2]$ is indeed coprime to ${\mathfrak q}$), and of norm $\le
X/{\mathfrak N}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]$. We now apply Theorem~\ref{asymfinal}. This gives us
\begin{multline*}
T_1-n z
~\le~
\frac{\alpha_{{\mathbf K}} \phi({\mathfrak q})}{h_{{\mathbf K},{\mathfrak q}}}
\frac{X}{ {\mathfrak N}{\mathfrak q}}
\sum_{\substack{{\mathfrak{e}}_1,{\mathfrak{e}}_2}}
\frac{\lambda_{{\mathfrak{e}}_1}({\mathfrak q})\lambda_{{\mathfrak{e}}_2}({\mathfrak q})}{{\mathfrak N}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]}
+
\biggl(\sum_{\substack{{\mathfrak{e}}}}
|\lambda_{{\mathfrak{e}}}({\mathfrak q})|\biggr)^2
n^{8n}R_{\mathbf K} F({\mathfrak q})
\\+~
E({\mathbf K})
F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}
\left( \frac{X}{{\mathfrak N}{\mathfrak q}} \right)^{1-\frac{1}{n} }
\sum_{\substack{{\mathfrak{e}}_1,{\mathfrak{e}}_2}}
\frac{|\lambda_{{\mathfrak{e}}_1}({\mathfrak q})\lambda_{{\mathfrak{e}}_2}({\mathfrak q})|}{{\mathfrak N}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]^{1-\frac{1}{n}}}.
\end{multline*}
The first term on the right hand side is handled by Lemma~\ref{G(z)bound}, the second
term is controlled by Corollary~\ref{absolute0} and we now show that the third
one can be controlled by Corollary~\ref{absolute1minus1overnK}. Indeed,
Selberg's diagonalisation process gives us, with $\alpha=1-\frac{1}{n}$,
\begin{equation*}
\sum_{\substack{{\mathfrak{e}}_1,{\mathfrak{e}}_2}}
\frac{|\lambda_{{\mathfrak{e}}_1}({\mathfrak q})\lambda_{{\mathfrak{e}}_2}({\mathfrak q})|}{{\mathfrak N}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]^{\alpha}}
~=~
\sum_{\substack{{\mathfrak{e}}_1,{\mathfrak{e}}_2}}{\mathfrak N}({\mathfrak{e}}_1,{\mathfrak{e}}_2)^{\alpha}
\frac{|\lambda_{{\mathfrak{e}}_1}({\mathfrak q})\lambda_{{\mathfrak{e}}_2}({\mathfrak q})|}{{\mathfrak N}{\mathfrak{e}}_1^\alpha{\mathfrak N}{\mathfrak{e}}_2^{\alpha}}
~ \le~
\sum_{\d, \atop {\mathfrak N}\d\le z}\phi_\alpha(\d)
\biggl(\sum_{\d|{\mathfrak{e}}}
\frac{|\lambda_{{\mathfrak{e}}}({\mathfrak q})|}{{\mathfrak N}{\mathfrak{e}}^\alpha}\biggr)^2,
\end{equation*}
where $\phi_\alpha(\d) = {\mathfrak N}(\d)^{\alpha}$.
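The regrouping above relies on the elementary observation that the gcd is one of its own divisors, namely
\begin{equation*}
{\mathfrak N}({\mathfrak{e}}_1,{\mathfrak{e}}_2)^{\alpha}
~\le~
\sum_{\d \mid ({\mathfrak{e}}_1,{\mathfrak{e}}_2)} \phi_\alpha(\d);
\end{equation*}
inserting this bound and exchanging the order of summation gives the right-hand side.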
We now note that
\begin{align*}
\sum_{\d|{\mathfrak{e}}}
\frac{|\lambda_{{\mathfrak{e}}}({\mathfrak q})|}{{\mathfrak N}{\mathfrak{e}}^\alpha}
&\le~
\sum_{({\mathfrak{e}},\d{\mathfrak q})=1}
\frac{{\mathfrak N}{\mathfrak{e}}{\mathfrak N}\d}{\phi({\mathfrak{e}})\phi(\d){\mathfrak N}\d^\alpha{\mathfrak N}{\mathfrak{e}}^\alpha}
\frac{G_{{\mathfrak{e}}\d{\mathfrak q}}(\frac{z/{\mathfrak N}\d}{{\mathfrak N}{\mathfrak{e}}})}{G_{\mathfrak q}(z)}
\\&\le~
\frac{G_{\d{\mathfrak q}}(z/{\mathfrak N}\d)}{G_{\mathfrak q}(z)}
\frac{{\mathfrak N}\d^{1-\alpha}}{\phi(\d)}
\sum_{({\mathfrak{e}},\d{\mathfrak q})=1}
\frac{{\mathfrak N}{\mathfrak{e}}}{\phi({\mathfrak{e}}){\mathfrak N}{\mathfrak{e}}^\alpha}
\frac{G_{{\mathfrak{e}}\d{\mathfrak q}}(\frac{z/{\mathfrak N}\d}{{\mathfrak N}{\mathfrak{e}}})}{G_{\d{\mathfrak q}}(z/{\mathfrak N}\d)}
\\&\le~ \frac{G_{\d{\mathfrak q}}(z/{\mathfrak N}\d)}{G_{\mathfrak q}(z)}
\frac{{\mathfrak N}\d^{1-\alpha}}{\phi(\d)}
\sum_{({\mathfrak{e}}, \d{\mathfrak q})=1}
\frac{|\lambda_{\mathfrak{e}}({\mathfrak q},z/{\mathfrak N}\d)|}{{\mathfrak N}{\mathfrak{e}}^\alpha},
\end{align*}
where $\lambda_{\mathfrak{e}}({\mathfrak q},z/{\mathfrak N}\d) = \frac{{\mathfrak N}{\mathfrak{e}}}{\phi({\mathfrak{e}})}
\frac{G_{{\mathfrak{e}}\d{\mathfrak q}}(\frac{z/{\mathfrak N}\d}{{\mathfrak N}{\mathfrak{e}}})}{G_{\d{\mathfrak q}}(z/{\mathfrak N}\d)}$. By Corollary~\ref{absolute1minus1overnK},
this is bounded above by
\begin{equation*}
n^{9n}\frac{{\mathfrak N}\d}{\phi(\d)}
\frac{G_{\d{\mathfrak q}}(z/{\mathfrak N}\d)}{G_{\mathfrak q}(z)}\frac{z^{1-\alpha}}{(2+\log(z/{\mathfrak N}\d)){\mathfrak N}\d}.
\end{equation*}
We bound $({\mathfrak N}\d/\phi(\d))G_{\d{\mathfrak q}}(z/{\mathfrak N}\d)$ from above by $G_{\mathfrak q}(z)$,
which follows along the lines of inequality \cite[(1.3)]{van-Lint-Richert*65} of
van Lint and Richert. Hence we have
\begin{eqnarray*}
\sum_{\substack{{\mathfrak{e}}_1,{\mathfrak{e}}_2}}
\frac{|\lambda_{{\mathfrak{e}}_1}({\mathfrak q})\lambda_{{\mathfrak{e}}_2}({\mathfrak q})|}{{\mathfrak N}[{\mathfrak{e}}_1,{\mathfrak{e}}_2]^{\alpha}}
&\le&
n^{18 n}z^{2(1-\alpha)}
\sum_{\d, \atop{0< {\mathfrak N}\d\le z}}\mu^2(\d)\frac{\phi_\alpha(\d)}{(2+\log(z/{\mathfrak N}\d))^2{\mathfrak N}\d^2} \\
&\le&
\frac{ n^{20 n}z^{2(1-\alpha)}}{(\log z)^2}
\sum_{\d, \atop{0< {\mathfrak N}\d\le z^{\alpha}}}
\mu^2(\d)\frac{\phi_\alpha(\d)}{{\mathfrak N}\d^2}
~+~ \frac{n^{18 n}z^{2(1-\alpha)}}{4} \sum_{\d, \atop{z^{\alpha}<{\mathfrak N}\d\le z}}
\mu^2(\d)\frac{\phi_\alpha(\d)}{{\mathfrak N}\d^2}
\Bigl(\frac{{\mathfrak N}\d}{z^\alpha}\Bigr)^{\frac{1-\alpha}{2}} \\
&\le&
n^{20 n}\frac{z^{2(1-\alpha)}}{(\log z)^2} \zeta_{\mathbf K}(2-\alpha)
~+~ n^{18 n}\frac{z^{\frac{(4-\alpha)}{2}(1-\alpha)}}{4}
\zeta_{\mathbf K}\Bigl(1+\frac{1-\alpha}{2}\Bigr).
\end{eqnarray*}
We simplify the upper bound as follows:
\begin{equation*}
n^{20 n}\frac{z^{2/n}}{(\log z)^2}\biggl( (n+1)^{n}
~+~
\frac{(\log z)^2}{4 z^{\frac{n-1}{2n^2}}}(2n+1)^{n}
\biggr)
\end{equation*}
which is bounded above by
\begin{equation*}
n^{20n}\frac{z^{2/n}}{(\log z)^2}\biggl( (n+1)^{n}
~+~
\frac{(\frac{4n^2}{n-1}\log z^{\frac{n-1}{4n^2}})^2}{4 z^{\frac{n-1}{2n^2}}}(2n+1)^{n}
\biggr)
~\le~
n^{27 n}\frac{z^{2/n}}{(\log
z)^2}.
\end{equation*}
Let us resume our study of $T_1$. The estimates above lead to
\begin{multline*}
\frac{h_{{\mathbf K},{\mathfrak q}}}{ X}T_1\le
\frac{\alpha_{\mathbf K} \phi({\mathfrak q}) }{{\mathfrak N}{\mathfrak q} G_{\mathfrak q}(z)}
+
36 \cdot 2^{14n} n^{8n}R_{\mathbf K}
F({\mathfrak q})
\frac{z^2h_{{\mathbf K},{\mathfrak q}}}{{\mathfrak N}{\mathfrak q} }
\frac{{\mathfrak N}{\mathfrak q}/X}{(2+\log z)^2}
\\+~
n \frac{h_{{\mathbf K},{\mathfrak q}} }{{\mathfrak N}{\mathfrak q} }
\frac{z{\mathfrak N}{\mathfrak q}}{X}
~+~
n^{27 n} E({\mathbf K}) F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}
\frac{h_{{\mathbf K},{\mathfrak q}}}{{\mathfrak N}{\mathfrak q} (\log z)^2}
\left( \frac{z^2{\mathfrak N}{\mathfrak q}} {X} \right)^{\frac{1}{n} },
\end{multline*}
where $E({\mathbf K})=1000
n^{ 12n^2 }R_{\mathbf K}^{1/n}
\bigl[\log\bigl((2n)^{4n}R_{{\mathbf K}} \bigr)\bigr]^{n}$ and $F({\mathfrak q})=2^{r_1}\phi({\mathfrak q})h_{\mathbf K}/h_{{\mathbf K},{\mathfrak q}}$ are defined in Theorem~\ref{asymfinal}. We employ Lemma~\ref{Gzbound} to bound $G_{\mathfrak q}(z)$ from below, on assuming
that $z\ge (10^6n)^{4n}|d_{\mathbf K}|^3$, getting
\begin{multline*}
\frac{h_{{\mathbf K},{\mathfrak q}}}{ X}T_1
~\le~
\frac{1}{\log\frac{z}{e^4 n^4\sqrt{|d_{{\mathbf K}}|}}}
\biggl(1
+
36 \cdot 2^{14n} n^{8n}R_{\mathbf K}
2^{r_1}h_{\mathbf K}
\frac{z^2{\mathfrak N}{\mathfrak q} /X}{2+\log z}
\\+~
n 2^{r_1}h_{\mathbf K}
\frac{z{\mathfrak N}{\mathfrak q}\log z}{X}
~+~
n^{27 n} E({\mathbf K}) \frac{F({\mathfrak q})^{\frac{1}{n}}\log(3F({\mathfrak q}))^{n}}{F({\mathfrak q})}
\frac{2^{r_1}h_{\mathbf K}}{ \log z}
\left( \frac{z^2{\mathfrak N}{\mathfrak q}} {X} \right)^{\frac{1}{n} }
\biggr).
\end{multline*}
We notice that $z\log z\le \frac23 z^2/\log z$ when $z > 1$ and that
\begin{align}
\frac{y^{\frac{1}{n}}\log(3y)^{n}}{y}
&=\biggl(\frac{\log(3y)}{y^{\frac{1}{n}-\frac{1}{n^2}}}\biggr)^{n}
=3^{1-\frac1n}\biggl(\frac{\log(3y)}{(3y)^{\frac{1}{n}-\frac{1}{n^2}}}\biggr)^{n}
\notag
\\&=3^{1-\frac1n}\frac{n^{2n}}{(n-1)^n}
\biggl(\frac{\log((3y)^{\frac1n-\frac1{n^2}})}{(3y)^{\frac{1}{n}-\frac{1}{n^2}}}\biggr)^{n}
~\le~ 3\frac{n^{2n}}{e^n(n-1)^n} ~\le~ n^{2n}.
\label{simpl}
\end{align}
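The penultimate inequality in \eqref{simpl} relies on the classical bound
\begin{equation*}
\frac{\log u}{u}~\le~\frac{1}{e}\qquad(u>0),
\end{equation*}
applied with $u=(3y)^{\frac1n-\frac1{n^2}}$, and the final one on $3\le e^n(n-1)^n$, valid for every $n\ge2$.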
On assuming that $z^2\le X/{\mathfrak N}{\mathfrak q}$, we reach the upper bound
\begin{align*}
\frac{h_{{\mathbf K},{\mathfrak q}}}{ X}T_1
&\le~
\frac{1}{\log\frac{z}{n^4\sqrt{|d_{{\mathbf K}}|}}}
\biggl(1
+
\Bigl(36 n^{23 n}R_{\mathbf K}
+
n^{2n}
+
E({\mathbf K})
n^{30 n}
\Bigr)\frac{h_{\mathbf K} }{ \log z} \left( \frac{z^2{\mathfrak N}{\mathfrak q}} {X} \right)^{\frac{1}{n} }\biggr)
\\&\le~
\frac{1}{\log\frac{z}{n^4\sqrt{|d_{{\mathbf K}}|}}}
\biggl(1
+
\Bigl( y
+
1
+
1000
y^{1/n}
(\log y)^{n}
\Bigr)\frac{n^{ 27n^2 }h_{\mathbf K}}{\log z} \left( \frac{z^2{\mathfrak N}{\mathfrak q}} {X} \right)^{\frac{1}{n} }\biggr)
\end{align*}
with $y=(2n)^{4n}R_{\mathbf K}$. Recall that $R_{\mathbf K}\ge 1/5$ by
\cite{Friedman*89}, so that $y\ge13\,000$. A reasoning very similar to the
one that led to inequality~\eqref{simpl}
applies and gives
\begin{equation*}
\frac{h_{{\mathbf K},{\mathfrak q}}}{ X}T_1
~\le~
\frac{1}{\log\frac{z}{n^4\sqrt{|d_{{\mathbf K}}|}}}
\Bigl(1
+
\frac{n^{ 35n^2 }R_{\mathbf K} h_{\mathbf K}}{ \log z} \left( \frac{z^2{\mathfrak N}{\mathfrak q}} {X} \right)^{\frac{1}{n} }\Bigr).
\end{equation*}
Choosing $z^2 = \frac{X}{{\mathfrak N}{\mathfrak q}} (n^{ 35n^2 }R_{\mathbf K} h_{\mathbf K})^{-n}$, we get
\begin{equation*}
\frac{h_{{\mathbf K},{\mathfrak q}}}{ X}T_1
~\le~
\frac{1}{\log\frac{z}{n^4\sqrt{|d_{{\mathbf K}}|}}- 1}
~\le~
\frac{2}{\log \frac{X}{{\mathfrak N}{\mathfrak q}} - \log \left( n^8 |d_{{\mathbf K}}| n^{ 35n^3 } (R_{\mathbf K} h_{\mathbf K})^n \right)}
\end{equation*}
and this completes the proof of our theorem.
\end{proof}
\section{Finding enough primes}
Winckler, in his PhD thesis (see Theorem 1.7 of \cite{Winckler*15}), proved an explicit version
of the Tchebotarev density theorem. The case $L={\mathbf K}$ provides us with the following explicit
version of Landau's prime ideal theorem.
\begin{thm}[Winckler]
\label{LandauPrimeIdealTheorem}
For every $x \ge \exp (110000n_{\mathbf K} (\log(9|d_{\mathbf K}|^8 ))^2 )$, we have
\begin{equation*}
\sum_{{\mathfrak N}\P\le x}1
~=~
\operatorname{Li}(x) ~+~ {{\rm{O} }}^*\bigl(\operatorname{Li}(x^\beta)\bigr) ~+~
{{\rm{O} }}^*\biggl(10^{14}x\exp\frac{-\sqrt{\log x}}{12}\biggr),
\end{equation*}
where $\beta$ is the largest real zero of $\zeta_{\mathbf K}$, if such a zero exists, and $\operatorname{Li}(x)=\int_2^x \frac{dt}{\log t}$ is the
usual \emph{logarithmic integral function}.
\end{thm}
This possible exceptional zero $\beta$ can be controlled by
the next lemma.
\begin{lem}[Kadiri and Wong \cite{KW}, 2022]
\label{AKzero}
Any positive real zero $\rho$ of $\zeta_{{\mathbf K}}(s)$ satisfies
$1 - \rho \ge {1}/{|d_{{\mathbf K}}|^{12}}.$
\end{lem}
We need a lower bound for the number of prime ideals of degree
one, meaning we need to remove the primes of degree $>1$ from the
estimate of Theorem~\ref{LandauPrimeIdealTheorem}. This, and more, is
achieved by using the next Lemma.
\begin{lem}
\label{RemoveParasites}
We have
\begin{equation*}
\sum_{\substack{\P, k,\\ {\mathfrak N}\P^k>p,\\ {\mathfrak N}\P\le X}}1
~\le~
3.64~n_{\mathbf K}\sqrt{X}.
\end{equation*}
\end{lem}
The summation is over the powers $k \ge2$ of primes of any degree, and
over the powers $k \ge1$ of primes of degree~$>1$. We denote by $p$ the rational
prime below~$\P$.
\begin{proof}
Each prime $p$ has at most $n_{\mathbf K}$ prime ideals above itself. For
each such prime, say of norm $p^f$, the contribution to the sum is
at most $\frac{\log X}{f\log p}\le (\log X)/\log 2$, and a prime $p$ that contributes satisfies
$p\le \sqrt{X}$. We use the bound $\pi(x)\le 1.26x/\log x$ from
\cite[Corollary 1]{Rosser-Schoenfeld*62} by Rosser and Schoenfeld valid for
$x=\sqrt{X}>1$. The lemma follows.
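In detail, the constant $3.64$ arises from the computation
\begin{equation*}
\pi(\sqrt{X})\,n_{\mathbf K}\,\frac{\log X}{\log 2}
~\le~
\frac{1.26\sqrt{X}}{\log \sqrt{X}}\,n_{\mathbf K}\,\frac{\log X}{\log 2}
~=~
\frac{2.52}{\log 2}\,n_{\mathbf K}\sqrt{X}
~\le~ 3.64\,n_{\mathbf K}\sqrt{X}.
\end{equation*}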
\end{proof}
\begin{lem}
\label{lowerbound}
When $x\ge \exp(|d_{\mathbf K}|^{30})$, we have
$\displaystyle
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{\P \atop{{\mathfrak N}\P\le x}}}1
~\ge~
\frac{x}{\log x} ~+~ \frac{(1-10^{-10})x}{(\log x)^2}$.
\end{lem}
\begin{proof}
Let $\pi^\flat_{\mathbf K}(x)$ be the quantity to estimate.
We start by combining Theorem~\ref{LandauPrimeIdealTheorem} together
with
Lemma~\ref{RemoveParasites} to obtain
\begin{equation*}
\pi^\flat_{\mathbf K}(x)
\ge
\operatorname{Li}(x)
-\operatorname{Li}(x^\beta)
-10^{14}x\exp\biggl(\frac{-\sqrt{\log x}}{12}\biggr)
-3.64n_{\mathbf K}\sqrt{x}.
\end{equation*}
By Lemma~\ref{AKzero}, we have $\beta\le 1-|d_{\mathbf K}|^{-12}$, while,
by Lemma~\ref{rootdisc}, we have $n_{\mathbf K}\le 3\log|d_{\mathbf K}|$. This gives
us a lower bound solely in terms of the discriminant. We also need
to handle the logarithmic integral. Using integration by parts, we find
that
\begin{equation*}
\operatorname{Li}(y)=\frac{y}{\log y}-\frac{2}{\log 2}
+\int_2^{y}\frac{dt}{\log^2t}
~\ge~ \frac{y}{\log y}-\frac{2}{\log
2}+\frac{\operatorname{Li}(y)}{\log y}.
\end{equation*}
We first deduce from this that $\operatorname{Li}(y)\ge y/\log y$ when
$y\ge 7.5$, by neglecting the additional term $\operatorname{Li}(y)/\log y$.
On incorporating it, we get $\operatorname{Li}(y)\ge \frac{y}{\log
y}+\frac{y}{(\log y)^2}$.
As for an upper bound, we work rather trivially:
\begin{equation*}
\operatorname{Li}(y)
~ \le~ \int_2^{\sqrt{y}}\frac{dt}{\log 2}+\int_{\sqrt{y}}^y\frac{dt}{\log\sqrt{y}}
~\le~ \frac{2y}{\log y}+\frac{\sqrt{y}}{\log 2}
~\le~ \frac{3\,y}{\log y}
\end{equation*}
when $y\ge 20$. This leads to, with $d=|d_{\mathbf K}|$,
\begin{align*}
\biggl(\frac{\log x}{x}\pi^\flat_{\mathbf K}(x)-1\biggr)\log x
&\ge~
1-\frac{3e^{-\frac{\log x}{d^{12}}}\log x}{\beta}
-10^{14}(\log x)^2e^{\frac{-\sqrt{\log x}}{12}}
-12(\log d)\frac{(\log x)^2}{\sqrt{x}},
\\&\ge~
1-6d^{30}e^{-d^{18}}
-10^{14} d^{60} e^{\frac{-d^{15}}{12}}
-12(\log d)d^{60}e^{-\frac{d^{30}}{2}}
\\&\ge~ 1-10^{-10}
\end{align*}
since $d\ge2$. This completes the proof of the lemma.
\end{proof}
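As a numerical aside (purely illustrative, not needed for the proof), one can confirm that the three error terms in the last display are already negligible at $d=2$:

```python
# Illustrative check that the three error terms above are < 1e-10 at d = 2.
import math

d = 2.0
t1 = 6 * d**30 * math.exp(-(d**18))                     # 6 d^30 e^{-d^18}
t2 = 1e14 * d**60 * math.exp(-(d**15) / 12)             # 10^14 d^60 e^{-d^15/12}
t3 = 12 * math.log(d) * d**60 * math.exp(-(d**30) / 2)  # 12 (log d) d^60 e^{-d^30/2}
print(t1 + t2 + t3 < 1e-10)  # prints True (each exponential underflows to 0.0)
```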
\begin{lem}
\label{wlowerbound}
When $x\ge \exp(|d_{\mathbf K}|^{30})$, we have
$\displaystyle
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{\P}}w_0({\mathfrak N}\P/x)
\ge
\frac{x\|w_0\|_1}{\log x}
+\frac{x\|w_0\|_1}{5(\log x)^2}$.
\end{lem}
\begin{proof}
Since $w_0$ is bounded above by~1 and has support within $[0,1]$, we
may again use Lemma~\ref{RemoveParasites} to handle the condition
`$\P$ of degree~1'. Whence, on denoting by $T(w_0)$ the sum to be studied, we find that
\begin{equation*}
T(w_0)
\ge \sum_{\substack{\P}}w_0({\mathfrak N}\P/x)-3.64n_{\mathbf K}\sqrt{x}.
\end{equation*}
As $w_0(y)= - \int_y^1w_0'(t)dt$, we get
\begin{equation*}
T(w_0)\ge
- \int_0^1
\bigl(\sum_{{\mathfrak N}\P\le tx}1\bigr) w_0'(t)dt-3.64n_{\mathbf K}\sqrt{x}.
\end{equation*}
Of course $w_0'(t)=0$ when $t\le 1/10$, so we may assume that $tx\ge x/10$
which is thus larger than $ \exp (110000n_{\mathbf K} (\log(9|d_{\mathbf K}|^8 ))^2
)$. Theorem~\ref{LandauPrimeIdealTheorem} applies and yields
\begin{align*}
T(w_0)
&\ge~
- \int_0^1
\operatorname{Li}(xt) w_0'(t)dt
-\|w_0'\|_1\biggl(\operatorname{Li}(x^\beta)+10^{14}x\exp\frac{-\sqrt{\log x}}{12}\biggr)
-3.64n_{\mathbf K}\sqrt{x}
\\&\ge~
\int_0^1
\frac{xw_0(t)dt}{\log(xt)}
-\|w_0'\|_1\biggl(\operatorname{Li}(x^\beta)+10^{14}x\exp\frac{-\sqrt{\log x}}{12}\biggr)
-3.64n_{\mathbf K}\sqrt{x}
\\&\ge~
\frac{x\|w_0\|_1}{\log x}
+\int_0^1\frac{xw_0(t)\log(1/t)dt}{(\log x)\log(x t)}
-\|w_0'\|_1\biggl(\operatorname{Li}(x^\beta)+10^{14}x\exp\frac{-\sqrt{\log x}}{12}\biggr)
-3.64n_{\mathbf K}\sqrt{x}.
\end{align*}
All of that is valid for a rather general non-negative
function~$w$. When it comes to $w=w_0$, we have
$10\sqrt{n_{{\mathbf K}}}\|w_0\|_1\in[2, 15]$ by
Lemma~\ref{studyw0} and $\|w_0'\|_1=2$. Further,
by applying Lemma~\ref{rootdisc}, we get $\sqrt{n_{{\mathbf K}}} \le
\sqrt{(\log|d_{\mathbf K}|)/\log(\pi/2)}$
which is not more than $|d_{\mathbf K}|^2$. We finally notice that
\begin{equation*}
\|w_0\log(1/t)\|_1
~\ge~ \int_{1/10}^{11/20}w_0(t)\log(1/t)dt
~\ge~ \log(20/11)\frac{\|w_0\|_1}2
~\ge~ \frac{\|w_0\|_1}4.
\end{equation*}
Hence, and following estimates very similar to the ones done during
the proof of Lemma~\ref{lowerbound}, we find that
\begin{align*}
T(w_0)
&\ge~
\frac{x\|w_0\|_1}{\log x}
+\frac{x\|w_0\|_1}{4(\log x)^2}\biggl(
1
- 832 \sqrt{\log |d_{\mathbf K}|} x^{\beta-1}\log x
- 757 (\log |d_{\mathbf K}|)^{3/2}x^{-1/2}(\log x)^2
-10^{-6} \frac{\sqrt{\log |d_{\mathbf K}|}}{\log x}
\biggr)
\\&\ge~
\frac{x\|w_0\|_1}{\log x}
+\frac{x\|w_0\|_1}{5(\log x)^2}
\end{align*}
as required.
\end{proof}
\section{Proof of \thmref{mainthm}}
In order to prove \thmref{mainthm}, we need the following lemmas.
\begin{lem}
\label{simplifytK}
We have $n_{\mathbf K}^{ 48n_{\mathbf K}^3 }(R_{\mathbf K} h_{\mathbf K})^{n_{{\mathbf K}}} \ge
10^{25n_{\mathbf K}} n_{{\mathbf K}}^{7n_{{\mathbf K}}}$.
\end{lem}
\begin{proof}
By ~\cite{Friedman*89}, we have $R_{\mathbf K}/ |\mu_{\mathbf K}| \ge 9/100$ which implies $R_{\mathbf K}\ge 9/100$.
It is thus enough to check the inequality
\begin{equation*}
n_{\mathbf K}^{ 48n_{\mathbf K}^3 } \left( \frac{9}{100} \right)^{n_{{\mathbf K}}} ~\ge~
10^{25n_{\mathbf K}} n_{{\mathbf K}}^{7n_{{\mathbf K}}}
\end{equation*}
which is readily seen to hold true as $n_{{\mathbf K} } \ge 2$.
\end{proof}
\begin{lem}[Kneser's Theorem \cite{Kneser*53}, 1953]\label{KT}
Let $G$ be a finite abelian group and $\mathcal{B}$ be a non-empty
subset of $G$. Also let
$
H = \{ g \in G ~|~ g + \mathcal{B} + \mathcal{B} = \mathcal{B} + \mathcal{B} \}
$
be the stabiliser of the set $\mathcal{B} + \mathcal{B}$. If $\mathcal{B}$ intersects $\lambda$ many cosets
of $H$, then
$$
|\mathcal{B}+\mathcal{B}| \geq (2\lambda -1 )|H|.
$$
\end{lem}
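Purely as an illustration (not used in the argument), Kneser's bound can be verified numerically in a small cyclic group; the helper \texttt{kneser\_check} below is hypothetical demonstration code, written additively:

```python
# Sanity check of Kneser's bound |B+B| >= (2*lambda - 1)*|H| in Z/nZ.
from itertools import product

def kneser_check(n, B):
    """Return (|B+B|, lambda, |H|, bound holds?) for a subset B of Z/nZ."""
    BB = {(a + b) % n for a, b in product(B, B)}
    # Stabiliser H = {g : g + (B+B) = B+B}.
    H = {g for g in range(n) if {(g + x) % n for x in BB} == BB}
    # lambda = number of cosets of H that B intersects.
    lam = len({frozenset((b + h) % n for h in H) for b in B})
    return len(BB), lam, len(H), len(BB) >= (2 * lam - 1) * len(H)

print(kneser_check(12, {0, 3, 6}))  # B inside the subgroup {0,3,6,9}: (4, 1, 4, True)
print(kneser_check(12, {1, 2}))     # trivial stabiliser: (3, 2, 1, True)
```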
\begin{proof}[Proof of Theorem~\ref{mainthm}]
For any unramified prime ideal $\P$ of degree one, we denote ${\mathfrak N}(\P)$ by $p$.
Also let $\mathcal{A}$ be the subset of $H_{{\mathfrak q}}({\mathbf K})$ defined~by
\begin{equation}
\mathcal{A} = \{ [\a] \in H_{{\mathfrak q}}({\mathbf K}) ~|~ \exists ~\P \text{ with } {\mathfrak N}(\P)=p < X, [\P]=[\a]\}.
\end{equation}
Let us set $u({\mathbf K})= n_{{\mathbf K}}^{ 48n_{{\mathbf K}}^3 } |d_{\mathbf K}|^6(R_{\mathbf K} h_{\mathbf K})^{n_{{\mathbf K}}}$. When
$X>u({\mathbf K}){\mathfrak N}{\mathfrak q}$, Theorem~\ref{bt-tri} gives us that
\begin{equation*}
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{{\mathfrak N}\P\le X}}1
~\le~
\frac{2 |\mathcal{A}| X}
{h_{{\mathbf K},{\mathfrak q}}\log\frac{X}{u({\mathbf K}){\mathfrak N}{\mathfrak q}}}.
\end{equation*}
On the other hand, when $X\ge \exp(|d_{\mathbf K}|^{30})$,
Lemma~\ref{lowerbound} ensures that
\begin{equation*}
\frac{X}{2(\log X)^2}+\frac{X}{\log X}~\le~ \mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{{\mathfrak N}\P\le X}}1.
\end{equation*}
On assuming that the required conditions hold true, a comparison of
both inequalities gives us
\begin{equation*}
\frac{|\mathcal{A}|}
{h_{{\mathbf K},{\mathfrak q}}}
~\ge~
\frac{\log\frac{X}{u({\mathbf K}){\mathfrak N}{\mathfrak q}}}{2\log X}
~+~
\frac{\log\frac{X}{u({\mathbf K}){\mathfrak N}{\mathfrak q}}}{4(\log X)^2}.
\end{equation*}
Take $X=(t({\mathbf K}){\mathfrak N}{\mathfrak q})^3$, where $t({\mathbf K})$ is defined in \eqref{deftK}. The above inequality gives us
\begin{equation}
\label{inieq}
\frac{|\mathcal{A}|} {h_{{\mathbf K},{\mathfrak q}}}
~\ge~
\frac13
~+~
\frac{1}{18\log
t({\mathbf K})+ 18\log{\mathfrak N}{\mathfrak q}}.
\end{equation}
From now onwards, let us set $G =H_{\mathfrak q}({\mathbf K})$.
Also let $H$ be the stabilizer of $\mathcal{A}\cdot \mathcal{A}$ in $G$, $y$ be the index of
$H$ in $G$ and $\lambda$ be the number of cosets of $H$ that intersect
$\mathcal{A}$.
By Kneser's Theorem, we have
\begin{equation*}
|\mathcal{A} \cdot \mathcal{A}|
~\ge~
(2\lambda-1)|H|
~=~ \frac{2\lambda-1}{y}|G|.
\end{equation*}
We further know that
\begin{equation*}
\lambda ~\ge~ \Bigl\lceil\frac{|\mathcal{A}|}{|H|}\Bigr\rceil
\end{equation*}
and that $\lambda$ is an integer.
Furthermore, to be sure that $\mathcal{A} \cdot \mathcal{A} \cdot\mathcal{A} =G$, we only
need
\begin{equation}\label{eq:8}
|\mathcal{A}| + \frac{2\lambda-1}{y}|G| ~>~ |G|
\phantom{m}\text{i.e.}\phantom{m}
\frac{|\mathcal{A}|}{|G|}+\frac{2\lambda-1}{y} ~>~ 1.
\end{equation}
This follows from the following observation.
Given any $[\b] \in G$, we consider the
set
$$
[\b]\mathcal{A}^{-1} = \{[\b] [\a]^{-1} ~:~ [\a] \in \mathcal{A}\}.
$$
For any $[\b] \in G$ if $[\b]\mathcal{A}^{-1} \cap \mathcal{A}\cdot\mathcal{A} \neq \emptyset$,
then $[\b] \in \mathcal{A}\cdot\mathcal{A}\cdot\mathcal{A}$. However $|[\b]\mathcal{A}^{-1}| = |\mathcal{A}|$.
Therefore, by the pigeonhole principle, if $|\mathcal{A}| + |\mathcal{A} \cdot \mathcal{A}| > |G|$, then
$\mathcal{A} \cdot \mathcal{A}\cdot \mathcal{A}=G$.
From the lower bound $\lambda \ge |\mathcal{A}|/|H|$, we observe
that it suffices to show that
\begin{equation} \label{eq:9}
3 \frac{|\mathcal{A}|}{|G|} - \frac{1}{y} ~>~ 1.
\end{equation}
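Indeed, since $|H|=|G|/y$, the bound $\lambda\ge|\mathcal{A}|/|H|$ gives $(2\lambda-1)/y\ge 2|\mathcal{A}|/|G|-1/y$, whence
\begin{equation*}
\frac{|\mathcal{A}|}{|G|}+\frac{2\lambda-1}{y}
~\ge~
3\,\frac{|\mathcal{A}|}{|G|}-\frac{1}{y},
\end{equation*}
so that \eqref{eq:9} implies \eqref{eq:8}.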
Let us discuss the possible values of $y$.
\subsubsection*{$\blacksquare$ Large values of $y$}
By~\eqref{inieq}, the above inequality \eqref{eq:9} will be satisfied if we have
$$
\frac{1}{9\log t({\mathbf K})~+~ 9\log {\mathfrak N}{\mathfrak q}}-\frac1y ~>~ 0.
$$
This implies that the inequality \eqref{eq:9} holds when
$y > 9 \log t({\mathbf K}) + 9\log {\mathfrak N}{\mathfrak q}$.
\subsubsection*{$\blacksquare$ The case $y=1$}
This is the case when $H=G$ and
thus $\mathcal{A} \cdot \mathcal{A}=G$. Hence $\mathcal{A} \cdot \mathcal{A} \cdot\mathcal{A} =G$.
\subsubsection*{$\blacksquare$ The case $y=2$}
Here $H$ is a subgroup of index two in $G$. Using Lemma~\ref{simplifytK},
we have
$$
X
~\ge~ ( 10^{25n_{\mathbf K}} n_{{\mathbf K}}^{7n_{{\mathbf K}}} |d_{\mathbf K}|^{4/3} {\mathfrak N}{\mathfrak q})^3
~\ge~ 10^{25n_{\mathbf K}} n_{{\mathbf K}}^{7n_{{\mathbf K}}} |d_{\mathbf K}|^{4} {\mathfrak N}{\mathfrak q}^3.
$$
Theorem~\ref{degreeoneprime} tells us that the subgroup
generated by $\mathcal{A}$ is $G$. It follows that
$\mathcal{A}$ contains an element of $G\setminus H$. By \thmref{primeinkernel},
we have that the subset $\mathcal{A}$ also contains an element of $H$. Indeed, we
have $X\ge 8(10^{31} n_{\mathbf K}^7)^{n_{\mathbf K}} |d_{\mathbf K}|^4 {\mathfrak N}{\mathfrak q}^2$.
As $\mathcal{A}\cdot\mathcal{A}$ is a union of cosets mod~$H$, we conclude
that $ \mathcal{A}\cdot\mathcal{A}=G$, whence $\mathcal{A}\cdot\mathcal{A}\cdot\mathcal{A}=G$.
\subsubsection*{$\blacksquare$ Medium values of $y\not\equiv2 \bmod{3}$}
Since $\mathcal{A}\cdot \mathcal{A}$ is a union of cosets modulo $H$, it is enough to
check that $\mathcal{A}/H\cdot(\mathcal{A}\cdot \mathcal{A}/H)$ covers $G/H$. This means that it
is enough to assume that $\mathcal{A}$ is a union of ($\lambda$ many) cosets modulo~$H$, from
which we infer that we only need to prove that
$\frac{\lambda}{y}+\frac{2\lambda-1}{y}> 1$, i.e.
$\lambda>(y+1)/3$. Note that \eqref{inieq} implies that
$\lambda>y/3$. When $y\equiv 0 \bmod{3}$, this gives $\lambda \ge y/3 +1$.
When $y\equiv 1 \bmod{3}$, the smallest integer exceeding $y/3$ is
$(y+2)/3$, so that the inequality $\lambda>(y+1)/3$ is
satisfied. There remain the values of $y\ge3$ that are $2 \bmod{3}$
and below $9 \log t({\mathbf K})+ 9\log {\mathfrak N}{\mathfrak q}$.
\subsubsection*{$\blacksquare$ Medium values of $y\equiv2 \bmod{3}$ and $y\ge5$}
We use \thmref{bt} for $y \le 9 \log t({\mathbf K})+ 9 \log {\mathfrak N}{\mathfrak q}$ together with
Lemma~\ref{wlowerbound}.
This gives us, with $n_{\mathbf K}=n$,
\begin{align*}
\frac{X\|w_0\|_1}{\log X}
+\frac{X\|w_0\|_1}{5(\log X)^2}
&\le~
\mathop{\sum\nolimits^{\hbox to 0pt{$\flat$\hss}}}_{\substack{\P}}w_0({\mathfrak N}\P/X)
~\le~
\sum_{ [\a]H \in\mathcal{A}/H}\sum_{\P \in [\a]H} w_0\left({\mathfrak N}\P/X\right)
\\&\le~
\lambda
\frac{2 \|w_0\|_1 X}
{y\log\frac{X\|w_0\|_1/[20000 (\|w_0^{(n+3)}\|_\infty
+5\|w_0\|_1)\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n}]}
{2^{22n}y\sqrt{|d_{{\mathbf K}}|^3{\mathfrak N}({\mathfrak q})}}}.
\end{align*}
Lemma~\ref{studyw0} gives us
\begin{equation*}
\frac{\|w_0\|_1}{\|w_0^{(n+3)}\|_\infty +5\|w_0\|_1}
~\ge~ \frac{2\sqrt{n}}{ (40n)^{n+4} + 75 \sqrt{n} }
~\ge~ \frac{\sqrt{n}}{(40n)^{n +4}}.
\end{equation*}
We have thus obtained the inequality
\begin{equation*}
\log\frac{\sqrt{n}}{20000(40n)^{n+4}
2^{22n}|d_{{\mathbf K}}|^{3/2}}
\frac{X}
{9\log (t({\mathbf K}){\mathfrak N}{\mathfrak q})\sqrt{{\mathfrak N}({\mathfrak q})}\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n}}
~\le~ \frac{2\lambda}{y}\log X
\end{equation*}
i.e. also
\begin{equation*}
1-\frac{1}{\log X}
\log\biggl(180000\cdot 2^{22n}(40n)^{n+4}|d_{{\mathbf K}}|^{3/2}
{\log (t({\mathbf K}){\mathfrak N}{\mathfrak q})\sqrt{{\mathfrak N}({\mathfrak q})}\log(|d_{\mathbf K}|{\mathfrak N}{\mathfrak q})^{n}}
\biggr)
~\le~ \frac{2\lambda}{y}.
\end{equation*}
We readily check that $180000\cdot 2^{22n}(40n)^{n+4} |d_{{\mathbf K}}|^{3/2} \le \sqrt{t({\mathbf K})}$, so that the
above inequality implies that
\begin{equation*}
1-\frac1{3\log V}\log(\sqrt{V}(\log V)^{1+n}) ~\le~ \frac{2\lambda}{y},
\end{equation*}
where $V=t({\mathbf K}){\mathfrak N}{\mathfrak q}$,
while we are to prove that $\lambda > (y+1)/3$ (see the discussion of
the case ``Medium values of $y\not\equiv2 \bmod{3}$''). We should thus prove
that $\log(\sqrt{V}(\log V)^{1+n})<\frac{3}{5}\log V$, i.e.
\begin{equation*}
(\log V)^{10 + 10 n} ~<~ V.
\end{equation*}
Again by using Lemma~\ref{rootdisc}, we see that
$n\le \frac{1}{30\log(\pi/2)}\log\log
t({\mathbf K})\le \frac{1}{13}\log\log V$.
With $W=\log V$, we should thus prove that
\begin{equation*}
10\log W ~+~ \frac{10}{13} (\log W)^2 ~<~ W.
\end{equation*}
This inequality is readily checked in the range of $W$ under
consideration, which concludes our proof.
\end{proof}
\noindent
{\bf Acknowledgements.}
Research of this article was partially supported by Indo-French Program in Mathematics (IFPM).
All authors would like to thank IFPM for financial support. The first, second and fourth authors
acknowledge the support of SPARC project~445.
The second author would also like to thank MTR/2018/000201 and DAE number
theory plan project for partial financial support.
\section{Introduction}
There are several problems of mathematical physics in which the only available analytic solution is a divergent and/or truncated power series expansion. Over the past decade, a new approach has evolved to overcome the convergence barrier in series solutions. An asymptotic approximant is a closed-form expression whose expansion in one region is exact up to a specified order and whose asymptotic equivalence in another region is enforced. The remarkable feature of asymptotic approximants is their ability to attain uniform accuracy not only in these two regions, but also at all points in-between, as demonstrated thus far for problems in thermodynamics, astrophysics, and fluid dynamics~\cite{BarlowJCP,BarlowAIChE,Barlow2015,Barlow:2017,Barlow:2017b,Beachley,Belden}. The current need to model and predict viral epidemics motivates us to extend the application of asymptotic approximants to the commonly used Susceptible-Infected-Recovered (SIR) model. This model is formulated as a system of nonlinear ordinary differential equations. Although no exact analytic solution has yet been found for the SIR model, a convergent series solution may be formulated via the homotopy analysis method~\cite{Khan}. Here, we provide an alternative and simple analytic approach. Interestingly, the SIR model shares the same asymptotic features as boundary layer flow over a moving flat plate, for which asymptotic approximants have already been applied~\cite{Barlow:2017}. The analytic nature of the asymptotic approximant derived in what follows is advantageous. Model parameters may be extracted for available COVID-19 data via a least squares (or equivalent) technique without the need for an embedded numerical scheme.
The SIR epidemic model considers the time-evolution of a susceptible population, $S(t)$, interacting with an infected population, $I(t)$, where $t$ is time. This model is expressed as~\cite{Kermack}
\begin{subequations}
\begin{equation}
\frac{dS}{dt}=-rSI
\label{eq:S}
\end{equation}
\begin{equation}
\frac{dI}{dt}=rSI-\alpha I
\label{eq:I}
\end{equation}
with constraints
\begin{equation}
S=S_0,~I=I_0\text{ at }t=0,
\label{eq:constraints}
\end{equation}
\label{eq:SIR}
\end{subequations}
where $r$, $\alpha$, $S_0$, $I_0$ are non-negative constant parameters~\cite{Kermack}. Once~(\ref{eq:SIR}) is solved, the recovered population is extracted as:
\begin{equation}
R(t)=\alpha\int_0^t I(\zeta) d\zeta.
\label{eq:R}
\end{equation}
Equation~(\ref{eq:S}) can be thought of as a standard collision model in a 2${}^\mathrm{nd}$-order chemical reaction, where species $S$ and $I$ ``collide'' to deplete the population of $S$ and create the species $I$. In this interpretation, $r$ is a rate constant, which in practice may be reduced by population behavior such as ``social distancing''. In the case where $\alpha=0$ in~(\ref{eq:I}), the system~(\ref{eq:SIR}) indicates that $S+I=S_{0}+I_{0}$ for all time. For $\alpha \ne 0$, the number of infected is reduced over time in accordance with~(\ref{eq:I}), and the parameter $\alpha$ determines the rate of recovery of infected individuals. The omission of a negative $\alpha I(t)$ term in~(\ref{eq:S}) is an implicit model assumption that the recovered population is no longer susceptible to the disease.
We now manipulate the system~(\ref{eq:SIR}) into an equivalent first-order equation to simplify the analysis that follows. Equations~(\ref{eq:S}) and~(\ref{eq:I}) are divided to obtain
\begin{equation}
\frac{dI}{dS} =\frac{\alpha }{rS} -1.
\label{eq:divide}
\end{equation}
Subsequent integration of~(\ref{eq:divide}) with respect to $S$ and application of the constraints~(\ref{eq:constraints}) yields
\begin{equation}
I=\frac{\alpha }{r} \ln \left(\frac{S}{S_{0} } \right)-S+S_{0} +I_{0} .
\label{eq:algebraic}
\end{equation}
Equation~(\ref{eq:algebraic}) is substituted into equation~(\ref{eq:S}) to obtain
\begin{subequations}
\begin{equation}
\frac{dS}{dt} =\beta S+rS^{2}-\alpha S\ln S
\label{eq:singleODE}
\end{equation}
where
\begin{equation}
\beta =\alpha \ln S_{0} -r(S_{0} +I_{0} ).
\label{eq:beta}
\end{equation}
From equation~(\ref{eq:constraints}), the constraint on $S$ is:
\begin{equation}
S=S_0\text{ at }t=0.
\label{eq:Sconstraint}
\end{equation}
\label{eq:newODE}
\end{subequations}
System~(\ref{eq:newODE}) is equivalent to~(\ref{eq:SIR}): once~(\ref{eq:newODE}) is solved for $S$, the solution for $I$ follows from~(\ref{eq:algebraic}), which may in turn be integrated to find $R$ via~(\ref{eq:R}).
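Since no exact closed-form solution is available, a numerical reference solution of~(\ref{eq:singleODE}) is useful for comparison. The following is a minimal forward-Euler sketch in Python (the step size, end time, and default parameter values are illustrative assumptions, not prescribed by the text):

```python
# Forward-Euler integration of dS/dt = beta*S + r*S^2 - alpha*S*ln(S).
# Step size dt and end time t_end are illustrative choices.
import math

def integrate_S(S0, I0, r, alpha, dt=1e-4, t_end=10.0):
    beta = alpha * math.log(S0) - r * (S0 + I0)   # eq. (6b)
    S, t = float(S0), 0.0
    history = [(t, S)]
    while t < t_end:
        S += dt * (beta * S + r * S * S - alpha * S * math.log(S))
        t += dt
        history.append((t, S))
    return history
```

Higher-order integrators may of course be substituted; forward differencing is used here only as the simplest reference scheme.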
The series solution of~(\ref{eq:newODE}) is given by
\begin{subequations}
\begin{equation}
S=\sum_{n=0}^\infty a_n t^n,~a_0=S_0
\label{eq:series}
\end{equation}
\begin{equation}
a_{n+1}=\frac{1}{n+1}\left[\beta a_n+\displaystyle\sum_{j=0}^na_j\left(ra_{n-j}-\alpha b_{n-j}\right)\right],
\label{eq:coefficients}
\end{equation}
\begin{equation}
b_{n>0}=\frac{1}{n}\sum_{j=0}^{n-1}a_{j+1}\tilde{a}_{n-1-j},~~b_0=\ln a_0,
\label{eq:b}
\end{equation}
\begin{equation}
\tilde{a}_{n>0}=\frac{-1}{a_0}\sum_{j=1}^na_j\tilde{a}_{n-j},~~\tilde{a}_0=\frac{1}{a_0}.
\label{eq:atilde}
\end{equation}
\label{eq:SeriesSolution}
\end{subequations}
The result~(\ref{eq:SeriesSolution}) is obtained by applying Cauchy's product rule~\cite{Churchill} to expand $S^2$ and $S\ln S$ in~(\ref{eq:newODE}). The expansion of $\ln S$ is obtained by first applying Cauchy's product rule to the identity $SS^{-1}=1$ and evaluating like-terms to obtain a recursive expression for the coefficients of the expansion of $S^{-1}$, given by~(\ref{eq:atilde}). The expansion of $S^{-1}$ is subsequently integrated term-by-term to obtain the expansion of $\ln S$, whose coefficients are given by~(\ref{eq:b}). Although the series solution given by~(\ref{eq:SeriesSolution}) is an analytic solution to~(\ref{eq:newODE}), it is only valid within its radius of convergence and is incapable of capturing the long-time behavior of $S$. This motivates the construction of an approximant to analytically continue the series beyond this convergence barrier.
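The recursions~(\ref{eq:coefficients})--(\ref{eq:atilde}) are straightforward to implement. The following Python sketch is an illustrative reimplementation (not the authors' released MATLAB code~\cite{code}):

```python
# Taylor coefficients of S (a_n), ln S (b_n), and 1/S (at_n) about t = 0.
import math

def series_coeffs(S0, I0, r, alpha, N):
    beta = alpha * math.log(S0) - r * (S0 + I0)
    a = [float(S0)]          # a_0 = S_0
    at = [1.0 / S0]          # coefficients of 1/S: at_0 = 1/a_0
    b = [math.log(S0)]       # coefficients of ln S: b_0 = ln a_0
    for n in range(N):
        # a_{n+1} = [beta a_n + sum_{j=0}^n a_j (r a_{n-j} - alpha b_{n-j})]/(n+1)
        a.append((beta * a[n]
                  + sum(a[j] * (r * a[n - j] - alpha * b[n - j])
                        for j in range(n + 1))) / (n + 1))
        # at_{n+1} = -(1/a_0) sum_{j=1}^{n+1} a_j at_{n+1-j}
        at.append(-sum(a[j] * at[n + 1 - j] for j in range(1, n + 2)) / a[0])
        # b_{n+1} = (1/(n+1)) sum_{j=0}^{n} a_{j+1} at_{n-j}
        b.append(sum(a[j + 1] * at[n - j] for j in range(n + 1)) / (n + 1))
    return a, b, at
```

As a check, the leading coefficient satisfies $a_1=-rS_0I_0$, in agreement with $dS/dt=-rSI$ at $t=0$.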
The long-time asymptotic behavior of the system~(\ref{eq:newODE}) is required to develop our asymptotic approximant, and so we proceed as follows. It has been proven in prior literature~\cite{Hethcote} that $S$ approaches a limiting value, $S_\infty$, as $t\to\infty$, and this corresponds to $I\to0$ according to~(\ref{eq:newODE}). The value of $S_\infty$ satisfies equation~(\ref{eq:algebraic}) with $I=0$ as~\cite{Hethcote}
\begin{equation}
\frac{\alpha }{r} \ln \left(\frac{S_\infty }{S_0} \right)-S_{\infty } +S_{0} +I_{0} =0.
\label{eq:infinite}
\end{equation}
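Equation~(\ref{eq:infinite}) is transcendental, but its physically relevant root lies in $(0,\alpha/r)$: on that interval the left-hand side increases from $-\infty$ to a positive maximum, so the root is unique there. A hedged bisection sketch in Python (the bracket endpoints and tolerance are illustrative assumptions):

```python
# Bisection for S_infinity on (0, alpha/r); assumes the left-hand side of
# the transcendental equation is negative at the tiny lower endpoint.
import math

def S_infinity(S0, I0, r, alpha, tol=1e-12):
    g = lambda S: (alpha / r) * math.log(S / S0) - S + S0 + I0
    lo, hi = 1e-15, alpha / r        # g(lo) < 0 < g(hi): unique root in between
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Restricting the root to $(0,\alpha/r)$ also guarantees $rS_\infty-\alpha<0$, which is needed for the decaying long-time behavior derived next.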
We expand $S$ as $t\to \infty $ as follows:
\begin{equation}
S\sim S_{\infty } +S_{1} (t)\text{ where } S_{1} \to 0\text{ as }t\to \infty.
\label{eq:perturb}
\end{equation}
Equation~(\ref{eq:perturb}) is substituted into~(\ref{eq:newODE}) and terms of $O(S_{1}^{2})$ are neglected to achieve the following linearized equation
\begin{subequations}
\begin{equation}
\frac{dS_{1} }{dt} =\kappa S_{1}
\end{equation}
where
\begin{equation}
\kappa =rS_{\infty } -\alpha.
\label{eq:kappa}
\end{equation}
\label{eq:asymptoticDE}
\end{subequations}
In writing~(\ref{eq:kappa}), the definition of $\beta$ in~(\ref{eq:beta}) has been employed. Additionally, to obtain~(\ref{eq:asymptoticDE}), equation~(\ref{eq:infinite}) has been used which eliminates all O(1) terms in the linearized system. The solution to~(\ref{eq:asymptoticDE}) is
\begin{equation}
S_{1} =\varepsilon e^{\kappa t},
\label{eq:S1}
\end{equation}
where $\varepsilon$ is an unknown constant that can only be determined via connection with the short-time behavior. Consistent with the assumptions made, we find $\kappa <0$ such that $S_{1} \to 0$ as $t\to \infty $. Thus the long-time asymptotic behavior of $S$ is given by
\begin{equation}
S\sim S_\infty+\varepsilon e^{\kappa t},~t\to\infty.
\label{eq:asymptotic}
\end{equation}
Higher-order corrections to the expansion~(\ref{eq:asymptotic}) may be obtained by the method of dominant balance~\cite{Bender} as a series of more rapidly damped exponentials of the form $e^{n\kappa t}$ where $n>1$. This long-time asymptotic behavior of successive exponentials mimics that of the Sakiadis boundary layer problem describing flow along a moving plate in a stationary fluid~\cite{Barlow:2017}. It is natural, then, to apply the Sakiadis approximant~\cite{Barlow:2017} to capture this asymptotic behavior while retaining the $t=0$ behavior given by~(\ref{eq:series}). The Sakiadis approximant imposes the exponential form of the long-time asymptotic behavior~(\ref{eq:asymptotic}) for all time; the coefficients of the exponentials are determined by matching their short-time expansion to the known power series developed about $t=0$ in the form of~(\ref{eq:series}). However, here we find that a reciprocal expression that achieves the same $t\to\infty$ behavior~(\ref{eq:asymptotic}) (through its binomial expansion) converges faster than the original Sakiadis approximant.
\begin{figure*}
\begin{tabular}{c}
\includegraphics[width=3.6in]{simpleA}
\includegraphics[width=3.6in]{simpleB}
\end{tabular}
\caption{Analytical and numerical solutions to the SIR model~(\ref{eq:newODE}), where the susceptible ($S$), infected ($I$), and recovered ($R$) populations are plotted versus time, all in arbitrary units. (a) As the number of terms $N$ is increased, the series solution, denoted $S_{S,N}$ (given by~(\ref{eq:series}), dashed curves), diverges and the approximant, denoted $S_{A,N}$ (given by~(\ref{eq:approximant}), solid curves), converges to the exact (numerical) solution ($\bullet$'s). (b) The converged asymptotic approximant for $S$ is used to obtain $R$ and $I$ (from equations~(\ref{eq:R}) and~(\ref{eq:algebraic}), respectively). The model parameter values and initial conditions $\alpha=2$, $r=1/5$, $I_0=25$, and $S_0=75$ are taken from a test case used in~\citet{Khan} to validate the homotopy analysis method.}
\label{fig:simple}
\end{figure*}
\begin{figure*}
\begin{tabular}{c}
\includegraphics[width=3.6in]{bubonicA}
\includegraphics[width=3.6in]{bubonicB}
\end{tabular}
\caption{Analytical and numerical solutions to the SIR model~(\ref{eq:newODE}) where $S$, $I$, and $R$ are in units of people and $t$ is in months. All other notation and labels are the same as in figure~\ref{fig:simple}. The model parameter values and initial conditions $\alpha=2.73$, $r=0.0178$, $I_0=7$, and $S_0=254$ are taken from estimates of the 1666 bubonic plague outbreak in Eyam, England examined in~\citet{Khan}.}
\label{fig:bubonic}
\end{figure*}
\begin{figure*}
\begin{tabular}{c}
\includegraphics[width=3.6in]{convergence}
\includegraphics[width=3.6in]{SIR}
\end{tabular}
\caption{Analytical and numerical solutions to the SIR model~(\ref{eq:newODE}) where $S$, $I$, and $R$ are in units of people and $t$ is in days. All other notation and labels are the same as in figure~\ref{fig:simple}. The model parameter values $\alpha=0.0164$ and $r=2.9236\times10^{-5}$ were obtained via a least-squares fit between the asymptotic approximant and Japan COVID-19 outbreak data~\cite{covid} ($\circ$'s), using initial conditions $I_0=2$ (from the first point in the data set~\cite{covid}) and $S_0=4206$. Here $t=0$ is January 22, 2020 (see main text for interpretation of the COVID-19 data).}
\label{fig:covid}
\end{figure*}
The assumed SIR approximant is given by
\begin{subequations}
\begin{equation}
S_{A,N}=\frac{S_\infty}{1+\displaystyle\sum_{n=1}^N A_n e^{n \kappa t}}
\label{eq:Aform}
\end{equation}
where the $A_n$'s are obtained by taking the reciprocal of both sides of~(\ref{eq:Aform}), expanding each side about $t=0$, and equating like-terms. The coefficients of the subsequent reciprocal expansion of the left-hand side (that of $S^{-1}$) are given by~(\ref{eq:atilde}). After equating like-terms of this expansion with that of the reciprocal of the right-hand side of~(\ref{eq:Aform}), one arrives at the following linear system of equations to solve for the $A_n$ values as
\begin{equation}
\left[ \begin{array}{ccccc}
1^0 & 2^0 & 3^0 & \cdots & N^0 \\ 1^1 & 2^1 & 3^1 & \cdots & N^1 \\ 1^2 & 2^2 & 3^2 & \cdots & N^2 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1^{N-1} & 2^{N-1} & 3^{N-1} & \cdots & N^{N-1}\end{array} \right]\left[ \begin{array}{c}
A_1\\ A_2 \\ A_3 \\ \vdots \\ A_N\end{array} \right]=\vec{f},
\label{eq:matrix}
\end{equation}
\begin{equation}
\vec{f}=S_\infty\left[ \begin{array}{c}
0!~\tilde{a}_0-1/S_\infty\\ 1!~(1/\kappa)~\tilde{a}_1 \\ 2!~ (1/\kappa)^2 ~\tilde{a}_2 \\ \vdots \\ (N-1)!~ (1/\kappa)^{N-1} ~\tilde{a}_{N-1} \end{array} \right],
\label{eq:vector}
\end{equation}
\label{eq:approximant}
\end{subequations}
where~(\ref{eq:matrix}) is a Vandermonde matrix whose inversion is explicitly known~\cite{Turner:1966}. The SIR approximant~(\ref{eq:approximant}) is thus a closed-form expression that, by construction, matches the correct $t\to\infty$ behavior given by~(\ref{eq:asymptotic}) and whose expansion about $t=0$ is exact to $N^\mathrm{th}$-order. A MATLAB code for computing the $A_n$ coefficients is available from the authors~\cite{code}.
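Putting the pieces together, the construction of $S_{A,N}$ may be sketched as follows in Python. This is an illustrative reimplementation, not the authors' released MATLAB code~\cite{code}; the Vandermonde system is solved numerically rather than via the explicit inverse of~\cite{Turner:1966}:

```python
# Build S_{A,N}(t): series coefficients of 1/S feed the right-hand side f,
# and the Vandermonde system yields the exponential coefficients A_n.
import math
import numpy as np

def sir_approximant(S0, I0, r, alpha, N):
    beta = alpha * math.log(S0) - r * (S0 + I0)
    a, at, b = [float(S0)], [1.0 / S0], [math.log(S0)]   # a_n, at_n, b_n
    for n in range(N):
        a.append((beta * a[n] + sum(a[j] * (r * a[n - j] - alpha * b[n - j])
                                    for j in range(n + 1))) / (n + 1))
        at.append(-sum(a[j] * at[n + 1 - j] for j in range(1, n + 2)) / a[0])
        b.append(sum(a[j + 1] * at[n - j] for j in range(n + 1)) / (n + 1))
    # S_infinity by bisection on (0, alpha/r), so that kappa = r*S_inf - alpha < 0
    g = lambda S: (alpha / r) * math.log(S / S0) - S + S0 + I0
    lo, hi = 1e-15, alpha / r
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    S_inf = 0.5 * (lo + hi)
    kappa = r * S_inf - alpha
    # Vandermonde system for the A_n: sum_n n^m A_n = f_m
    M = np.array([[n ** m for n in range(1, N + 1)] for m in range(N)], float)
    f = np.array([S_inf * math.factorial(m) * kappa ** (-m) * at[m]
                  - (1.0 if m == 0 else 0.0) for m in range(N)])
    A = np.linalg.solve(M, f)
    return lambda t: S_inf / (1.0 + sum(A[n - 1] * math.exp(n * kappa * t)
                                        for n in range(1, N + 1)))
```

For larger $N$, the numerical solve becomes ill-conditioned, and the explicit Vandermonde inverse of~\cite{Turner:1966} is preferable.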
Figure~\ref{fig:simple}a provides a typical comparison of the $N$-term series solution~(\ref{eq:SeriesSolution}) denoted by $S_{S,N}$ (and dashed lines), the $N$-term approximant~(\ref{eq:approximant}) denoted by $S_{A,N}$ (solid lines), and the numerical solution (indicated by symbols). Note that the series solution has a finite radius of convergence as evidenced by the poor agreement and divergence from the numerical solution at larger times, even as additional terms are included. By contrast, the approximant converges as additional terms are included. For $N=15$, the approximant is visibly indistinguishable from the numerical solution (obtained by forward differencing) with a maximum relative error on the order of the numerical time-step (here $10^{-2}$) over the time range indicated. Increasing the number of terms beyond $N=15$ does improve accuracy up to a point, but also increases the likelihood of deficient approximants for which the denominator can be zero for certain time values and specific values of $N$. In general, the lowest number of terms that yields the desired accuracy is chosen to avoid this behavior. The convergence of the approximant with increasing $N$ is a necessary condition for a valid approximant. For the problems of mathematical physics to which we have applied asymptotic approximants~\cite{BarlowJCP,BarlowAIChE,Barlow2015,Barlow:2017,Barlow:2017b,Beachley,Belden}, we have observed that convergence of approximants implies excellent agreement with numerical results. There is as-of-yet no proof developed that guarantees this result, but this interesting behavior has been a property of all approximants developed thus far. In figure~\ref{fig:simple}b, the converged ($N=15$) asymptotic approximant for $S$ is used to obtain $R$ and $I$ (from equations~(\ref{eq:R}) and~(\ref{eq:algebraic}), respectively) and is compared with the numerical solution for these quantities.
In figure~\ref{fig:bubonic}, the approximant is applied to a case examined in~\citet{Khan} to model the 1666 bubonic plague outbreak in Eyam, England. In figure~\ref{fig:covid}, the approximant is applied to COVID-19 data for Japan~\cite{covid}. Progressively more terms are required in the approximant to achieve the same relative error in figures~\ref{fig:simple},~\ref{fig:bubonic}, and~\ref{fig:covid}, respectively. For all cases examined, we observe that this trend correlates with the breadth of the initial $S$ plateau.
Note that the reported COVID-19 outbreak data~\cite{covid} in figure~\ref{fig:covid} is originally provided in terms of confirmed cases and recovered individuals per day. The difference between these two quantities is used as an approximation to compare with the quantity $I$ of the SIR model. It is acknowledged that the actual COVID-19 data is influenced by transient effects not included in the SIR model, such as the exposure lag-time; these effects are incorporated in more sophisticated models such as SEIR~\cite{Hethcote}. The approximation of $I$ from COVID-19 data is not restrictive in the current context, as our purpose is to show the efficacy of the closed-form approximant rather than assess the validity of the SIR model.
In figure~\ref{fig:covid}, a least squares fit of the asymptotic approximant to $I$ data is used to extract SIR parameters $\alpha$ and $r$ based on data from the initial stages of the COVID-19 epidemic in Japan. To do so,~(\ref{eq:algebraic}) is used to relate $I$ analytically to the solution for $S$ (here, the approximant $S_{A,30}$); note that $S_\infty$, used in the approximant, is affected by these parameters explicitly according to~(\ref{eq:infinite}). The value of $S_0$ is not provided in the data set~\cite{covid}, and a least-squares algorithm is ineffective at determining an optimal value. Here, we choose the value of $S_0$ to be twice that of the maximum value of $I$ approximated from the data, as it captures a typical curve shape for $S$ seen in applications of the SIR model~\cite{Hethcote}. Regarding the sensitivity of the fitted parameters to the choice of $S_0$, a $100\%$ difference in $S_0$ leads to roughly a $50\%$ difference in $r$ and a $6\%$ difference in $\alpha$. The fit is made especially simple owing to the analytical form of the approximant, which obviates the need to embed a numerical solution in such an algorithm. The population of recovered individuals, $R$, is extracted from the solution for $I$ by direct integration in accordance with~(\ref{eq:R}). Note that the predicted curve for $R$ in figure~\ref{fig:covid}, obtained solely by fitting data for $I$, is in good agreement with approximations from COVID-19 data for the recovered population, and serves as a check on the consistency of the data and algorithm.
It is evident from the results presented here that an asymptotic approximant can be used to provide accurate analytic solutions to the SIR model. Future work should focus on whether the asymptotic approximant technique can yield a closed form solution to more sophisticated epidemic models.
|
2004.07986
|
\section{Missing Proofs in Section \ref{sec:dis}}
\subsection{Proof of Lemma~\ref{lem:lower_bound_on_cost}}
\begin{proof}
Let $Z\in\mathbb{R}^{n\times n}$ be a random matrix. For each $i,j\in[n],$ define random variable $Z_{i,j}$ as
\begin{align*}
Z_{i,j}=\left\{\begin{array}{ll} |\Delta_{i,j}|, & \text{~if~} |\Delta_{i,j}|\leq n; \\ n, &\mathrm{otherwise.}\end{array}\right.
\end{align*}
For $i,j\in [n],$ by Markov's inequality, we have
\begin{align}
\Pr[|\Delta_{i,j}|\geq n]=\Pr[|\Delta_{i,j}|^p\geq n^p]\leq \E[|\Delta_{i,j}|^p]/n^p=O(1/n^p).\label{eq:tail_bound_of_delta}
\end{align}
Notice that
\begin{align*}
\E[|\Delta_{i,j}|^p]=\int_{0}^n x^p f(x) \mathrm{d}x+\int_{n}^{\infty} x^p f(x) \mathrm{d}x=O(1)
\end{align*}
where $f(x)$ is the probability density function of $|\Delta_{i,j}|.$
Thus we have
\begin{align*}
\int_{n}^{\infty} x f(x) \mathrm{d}x\leq\int_{n}^{\infty} x^p/n^{p-1}\cdot f(x) \mathrm{d}x=O(1/n^{p-1}).
\end{align*}
Because $\E[|\Delta_{i,j}|]=1,$ we have
\begin{align}
\int_{0}^{\infty} x f(x) \mathrm{d}x = \E[|\Delta_{i,j}|]-\int_{n}^{\infty} x f(x) \mathrm{d} x \geq 1-O(1/n^{p-1}).\label{eq:tail_bound_of_expectation}
\end{align}
By Equation~\eqref{eq:tail_bound_of_expectation}, we have
\begin{align*}
\E[Z_{i,j}]=\int_{0}^n x f(x) \mathrm{d}x + n \cdot \Pr[|\Delta_{i,j}|\geq n]\geq \int_{0}^n x f(x) \mathrm{d}x\geq 1-O(1/n^{p-1}).
\end{align*}
By Equation~\eqref{eq:tail_bound_of_delta} and $\E[|\Delta_{i,j}|^p]\leq O(1)$, we have
\begin{align*}
\E[Z_{i,j}^2]=\int_{0}^n x^2 f(x) \mathrm{d}x+n^2\Pr[|\Delta_{i,j}|\geq n]\leq O(n^{2-p})+O(n^{2-p})=O(n^{2-p}).
\end{align*}
By the inequality of~\cite{m03},
\begin{align*}
\Pr[\E[\|Z\|_1]-\|Z\|_1\geq \epsilon \E[\|Z\|_1]/2]&\leq\exp\left(\frac{-\epsilon^2\E[\|Z\|_1]^2/4}{2\sum_{i,j}\E[Z_{i,j}^2]}\right)\\
&\leq \exp\left(\frac{-\epsilon^2(n^2-O(n^{3-p}))^2/4}{2n^2\cdot O(n^{2-p})}\right)\\
&\leq e^{-\Theta(n)}.
\end{align*}
Thus with probability at least $1-e^{-\Theta(n)},$ $\|Z\|_1\geq (1-\epsilon/2)\E[\|Z\|_1]\geq (1-\epsilon)n^2,$ where the last inequality follows by $\E[\|Z\|_1]\geq n^2-O(n^{3-p})$ and $1/\epsilon=n^{o(1)}.$ Since $\|\Delta\|_1\geq \|Z\|_1,$ we complete the proof.
\end{proof}
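As a quick numerical illustration of the concentration step (not part of the proof), take half-normal noise scaled so that $\E[|\Delta_{i,j}|]=1$; this is an assumed example distribution with all moments bounded, under which $\|\Delta\|_1$ indeed concentrates near $n^2$:

```python
# Monte Carlo check: ||Delta||_1 for n^2 i.i.d. entries with E|Delta_ij| = 1.
# Half-normal noise is an assumed example; n is illustrative.
import math
import random

random.seed(0)
n = 300
scale = math.sqrt(math.pi / 2)   # scale * E|N(0,1)| = 1
total = sum(abs(random.gauss(0, 1)) * scale for _ in range(n * n))
```

With this seed the sum lands within about one percent of $n^2 = 90000$, consistent with $\|\Delta\|_1\geq(1-\epsilon)n^2$.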
\subsection{Proof of Lemma~\ref{lem:averaging_works}}
\begin{proof}
Let $Z\in\mathbb{R}^{n\times t}$ be a random matrix where $Z_{i,j}$ are i.i.d. random variables with probability density function:
\begin{align*}
g(x)=\left\{\begin{array}{ll}f(x)/\Pr[|\Delta_{1,1}|\leq n^{1/2+1/(2p)}], & \text{~if~} |x|\leq n^{1/2+1/(2p)}; \\ 0, & \mathrm{~otherwise.}\end{array}\right.
\end{align*}
where $f(x)$ is the probability density function of $\Delta_{1,1}$. (Note that in the above equation, $\Pr[|\Delta_{1,1}|\leq n^{1/2+1/(2p)}] > 0$.) Now, we have $\forall a\geq 0$,
\begin{align*}
\Pr\left[\left\|\sum_{j=1}^t\alpha_j\Delta_j\right\|_1\leq a ~\bigg|~ \forall i\in[n],j\in[t],|\Delta_{i,j}|\leq n^{1/2+1/(2p)}\right]=\Pr\left[\left\|\sum_{j=1}^t\alpha_jZ_j\right\|_1\leq a\right].
\end{align*}
Now we look at the $i$-th row of $\sum_{j=1}^t\alpha_jZ_j.$ We have
\begin{align}\label{eq:expectation_of_averaging}
\E\left[\left|\sum_{j=1}^t\alpha_jZ_{i,j}\right|\right]
= & ~ \left(\E\left[\left|\sum_{j=1}^t\alpha_jZ_{i,j}\right|\right]^p\right)^{1/p} \notag \\
\leq & ~ \E\left[\left|\sum_{j=1}^t\alpha_jZ_{i,j}\right|^p\right]^{1/p} \notag\\
\leq & ~ \E\left[\left( \left( \sum_{j=1}^t\alpha_j^2Z_{i,j}^2 \right)^{1/2} \right)^p\right]^{1/p} \notag \\
\leq & ~ \E\left[\sum_{j=1}^t|\alpha_jZ_{i,j}|^p\right]^{1/p} \notag\\
\leq & ~ \left(\sum_{j=1}^t\E[|\alpha_jZ_{i,j}|^p]\right)^{1/p} \notag \\
\leq & ~ \left(\sum_{j=1}^t\E[|Z_{i,j}|^p]\right)^{1/p} \notag\\
\leq & ~ O(t^{1/p}),
\end{align}
where the first inequality follows by Jensen's inequality, the second inequality follows by Remark 3 of~\cite{l97}, the third inequality follows by $\|x\|_2\leq \|x\|_p$ for $p<2$, the fourth inequality follows by linearity of expectation, the fifth inequality follows by $|\alpha_j|\leq 1$, and the last inequality follows by $\E[|Z_{i,j}|^p]=\E[|\Delta_{i,j}|^p\mid |\Delta_{1,1}|\leq n^{1/2+1/(2p)}]\leq \E[|\Delta_{i,j}|^p]/\Pr[|\Delta_{1,1}|\leq n^{1/2+1/(2p)}]=O(1).$
For the second moment, we have
\begin{align}\label{eq:the_bound_for_the_second_moment}
\E\left[\left|\sum_{j=1}^t\alpha_jZ_{i,j}\right|^2\right]&=\sum_{j=1}^t\E\left[\alpha_j^2Z_{i,j}^2\right]+\sum_{j\not=k}\E[\alpha_j\alpha_kZ_{i,j}Z_{i,k}]\notag\\
= & ~ \sum_{j=1}^t\alpha_j^2\E\left[Z_{i,j}^2\right]+\sum_{j\not=k}\alpha_j\alpha_k\E[Z_{i,j}]\E[Z_{i,k}]\notag\\
\leq & ~ \sum_{j=1}^t\E\left[Z_{i,j}^2\right]\notag\\
= & ~ t\cdot 2\int_{0}^{n^{1/2+1/(2p)}} x^2f(x)/\Pr \left[|\Delta_{i,j}|\leq n^{1/2+1/(2p)} \right]\mathrm{d}x\notag\\
\leq & ~ 2t/\Pr \left[ |\Delta_{i,j}|\leq n^{1/2+1/(2p)} \right]\cdot (n^{1/2+1/(2p)})^{2-p}\int_{0}^{n^{1/2+1/(2p)}}x^pf(x)\mathrm{d}x\notag\\
\leq & ~ O(tn^{2-p}),
\end{align}
where the second equality follows by independence of $Z_{i,j}$ and $Z_{i,k},$ the first inequality follows by $|\alpha_j|\leq 1$ and $\E[Z_{i,j}]=\E[Z_{i,k}]=0,$ the third equality follows by the probability density function of $Z_{i,j},$ the second inequality follows by $x^{2-p}\leq (n^{1/2+1/(2p)})^{2-p}$ when $0\leq x\leq n^{1/2+1/(2p)},$ and the last inequality follows by $\E[|\Delta_{i,j}|^p]=O(1),$ $p>1,$ and $\Pr[|\Delta_{i,j}|\leq n^{1/2+1/(2p)}]\geq 1- \E[|\Delta_{i,j}|^p]/(n^{1/2+1/(2p)})^p=1-O(1/n^{p/2+1/2})\geq 1/2.$
For $i\in[n],$ define $X_i=|\sum_{j=1}^t\alpha_jZ_{i,j}|.$ Then, by Bernstein's inequality
\begin{align*}
&\Pr\left[\left\|\sum_{j=1}^t\alpha_jZ_j\right\|_1-\E\left[\left\|\sum_{j=1}^t\alpha_jZ_j\right\|_1\right]\geq 0.5t^{1/p}n\right]\\
=~&\Pr\left[\sum_{i=1}^n X_i-\E\left[\sum_{i=1}^n X_i\right]\geq 0.5t^{1/p}n\right]\\
\leq~&\exp\left(-\frac{0.5\cdot0.5^2t^{2/p}n^2}{\sum_{i=1}^n \E[X_i^2]+\frac{1}{3}n^{1/2+1/(2p)}\cdot0.5t^{1/p}n}\right)\\
\leq~&e^{-n^{\Theta(1)}}.
\end{align*}
The last inequality follows by Equation~\eqref{eq:the_bound_for_the_second_moment}. According to Equation~\eqref{eq:expectation_of_averaging}, with probability at least $1-e^{-n^{\Theta(1)}},$
\begin{align*}
\left\|\sum_{j=1}^t\alpha_jZ_j\right\|_1\leq \E\left[\left\|\sum_{j=1}^t\alpha_jZ_j\right\|_1\right]+0.5t^{1/p}n\leq O(t^{1/p}n).
\end{align*}
\end{proof}
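A small numerical illustration of the lemma's $O(t^{1/p}n)$ scaling (not part of the proof): take $p=2$ with standard normal entries, an assumed example distribution with $\E[|Z_{i,j}|^p]=O(1)$, and sign coefficients $\alpha_j\in\{-1,1\}$:

```python
# ||sum_j alpha_j Z_j||_1 for an n x t matrix of standard normal entries;
# n and t are illustrative sizes.
import math
import random

random.seed(2)
n, t = 400, 16
alphas = [random.choice((-1, 1)) for _ in range(t)]
norm1 = sum(abs(sum(aj * random.gauss(0, 1) for aj in alphas))
            for _ in range(n))
```

Each row sums to an $N(0,t)$ variable, so the entrywise $\ell_1$ norm is on the order of $t^{1/2}n$, matching the lemma's bound with $p=2$.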
\subsection{Proof of Lemma~\ref{lem:hard_one_is_small}}
\begin{proof}
For $i,j\in[n],$ we have
\begin{align*}
\Pr \left[ |\Delta_{i,j}|> n^{1/2+1/(2p)} \right]=\Pr \left[ |\Delta_{i,j}|^p> n^{p/2+1/2} \right]\leq \E \left[|\Delta_{i,j}|^p \right]/n^{p/2+1/2}\leq O(1/n^{p/2+1/2}).
\end{align*}
For column $j$, by taking a union bound,
\begin{align*}
\Pr[j\in H]=\Pr \left[\exists i\in[n],|\Delta_{i,j}|>n^{1/2+1/(2p)} \right]\leq O(1/n^{p/2-1/2}).
\end{align*}
Thus, $\E[|H|]\leq O(n^{1-(p-1)/2}).$ By applying Markov's inequality, we complete the proof.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:remaining_is_small}}
\begin{proof}
For $l\in\mathbb{N}_{\geq 0},$ define $G_l=\{j\mid \|\Delta_j\|_1\in(n\cdot2^l,n\cdot 2^{l+1}]\}.$ We have
\begin{align*}
\E[|G_l|] \leq & ~ \sum_{j=1}^n \Pr \left[ \|\Delta_{j}\|_1\geq n\cdot 2^l \right]\\
= & ~ n\Pr \left[ \|\Delta_{1}\|_1\geq n\cdot 2^l \right]\\
\leq & ~ n\Pr \left[n^{1-1/p}\|\Delta_{1}\|_p\geq n\cdot 2^l \right]\\
= & ~ n\Pr \left[ n^{p-1}\|\Delta_{1}\|_p^p\geq n^p\cdot 2^{lp} \right]\\
\leq & ~ n \E \left[ n^{p-1}\|\Delta_{1}\|_p^p \right]/(n^p\cdot 2^{lp})\\
\leq & ~ O(n/2^{lp}).
\end{align*}
The first inequality follows by the definition of $G_l$. The second inequality follows since $\forall x\in \mathbb{R}^n,\|x\|_1\leq n^{1-1/p}\|x\|_p.$ The third inequality follows by Markov's inequality. The last inequality follows since $\forall i,j\in[n],\E[|\Delta_{i,j}|^p]=O(1).$
Let $l^*\in\mathbb{N}_{\geq 0}$ satisfy $2^{l^*}< \epsilon r$ and $2^{l^*+1}\geq \epsilon r.$ We have
\begin{align*}
\E\left[\sum_{j:\|\Delta_j\|_1\geq n2^{l^*}}\|\Delta_j\|_1\right]&\leq \E\left[\sum_{l=l^*}^\infty|G_l|\cdot n2^{l+1}\right]
=\sum_{l=l^*}^{\infty}\E[|G_l|]\cdot n2^{l+1}\\
&\leq \sum_{l=l^*}^{\infty} O(n/2^{lp})\cdot n2^{l+1}
= \sum_{l=l^*}^{\infty} O(n^2/2^{l(p-1)})\\
&= O(n^2/2^{l^*(p-1)})
= O(n^2/(\epsilon r)^{p-1})\\
&=O(\epsilon n^2).
\end{align*}
By Markov's inequality, with probability at least $.999,$ $\sum_{j:\|\Delta_j\|_1\geq n2^{l^*}}\|\Delta_j\|_1\leq O(\epsilon n^2).$ Conditioned on $\sum_{j:\|\Delta_j\|_1\geq n2^{l^*}}\|\Delta_j\|_1\leq O(\epsilon n^2),$ for any $S\subset[n]$ with $|S|\leq n/r,$ we have
\begin{align*}
\sum_{j\in S}\|\Delta_j\|_1\leq |S|\cdot n2^{l^*}+\sum_{j:\|\Delta_j\|_1\geq n2^{l^*}}\|\Delta_j\|_1\leq \epsilon n^2+O(\epsilon n^2)=O(\epsilon n^2).
\end{align*}
The second inequality follows because $|S|\leq n/r,2^{l^*}\leq \epsilon r$ and $\sum_{j:\|\Delta_j\|_1\geq n2^{l^*}}\|\Delta_j\|_1\leq O(\epsilon n^2).$
\end{proof}
\subsection{Proof of Lemma~\ref{lem:easy_one_is_concentrated}}
\begin{proof}
Let $M=n^{1/2+1/(2p)}.$ Let $Z\in\mathbb{R}^{n}$ be a random vector where $Z_{i}$ are i.i.d. random variables with probability density function
\begin{align*}
g(x)=\left\{\begin{array}{ll}f(x)/\Pr[|\Delta_1|\leq M]& \text{~if~} 0\leq x\leq M ; \\ 0 & \text{~otherwise.}\end{array}\right.
\end{align*}
where $f(x)$ is the probability density function of $|\Delta_1|.$ Then $\forall a>0$
\begin{align*}
\Pr\left[\|\Delta\|_1\leq a\mid \forall i\in[n],|\Delta_i|\leq M\right]=\Pr\left[\|Z\|_1\leq a\right].
\end{align*}
For $i\in [n],$ because $\E[|\Delta_i|]=1,$ it holds that $\E[Z_i]\leq 1.$ We have $\E[\sum_{i=1}^n Z_i]\leq n.$ For the second moment, we have
\begin{align*}
\E[Z_i^2]&=\int_{0}^{M} x^2f(x)/\Pr[|\Delta_1|\leq M]\mathrm{d}x\\
&\leq M^{2-p}/\Pr[|\Delta_1|\leq M]\int_{0}^{M} x^p f(x)\mathrm{d}x\\
&\leq O(M^{2-p})\\
&\leq O(n^{2-p})
\end{align*}
where the second inequality follows by $\E[|\Delta_1|^p]=O(1),$ and $\Pr[|\Delta_1|\leq M]\geq 1-\E[|\Delta_1|^p]/M^p\geq 1/2.$
Then by Bernstein's inequality, we have
\begin{align*}
&\Pr\left[\sum_{i=1}^n Z_i-\E\left[\sum_{i=1}^n Z_i\right]\geq\epsilon n\right]\\
\leq~&\exp\left(\frac{-0.5\epsilon^2n^2}{\sum_{i=1}^n \E[Z_i^2]+\frac{1}{3}M\cdot\epsilon n}\right)\\
\leq~&e^{-n^{\Theta(1)}}.
\end{align*}
Thus,
\begin{align*}
\Pr\left[\|\Delta\|_1\leq (1+\epsilon)n\mid \forall i\in[n],|\Delta_i|\leq M\right]=\Pr\left[\|Z\|_1\leq (1+\epsilon)n\right]\geq 1-e^{-n^{\Theta(1)}}.
\end{align*}
\end{proof}
\subsection{Proof of Lemma~\ref{lem:good_tuple_low_cost}}
\begin{proof}
Recall that $(S_1,S_2,\cdots,S_t,i)$ is equivalent to $(S_{[t]},i)$. Let $(S_{[t]},i)$ be an $(A^*,q,t,1/2)$-good tuple which satisfies $H\cap\left(\bigcup_{j=1}^t S_j\right)=\emptyset.$ Let $C$ be the core of $(S_{[t]},i).$ Let $(x_1,x_2,\cdots,x_t)$ be the coefficients tuple corresponding to $(S_{[t]},i).$ Then we have that
\begin{align*}
&\left\|\frac{1}{|C|}\sum_{j=1}^t A_{S_j}x_j-A_i\right\|_1\\
=~&\left\|\frac{1}{|C|}\sum_{j=1}^t \left(A^*_{S_j}+\Delta_{S_j}\right)x_j-(A^*_i+\Delta_i)\right\|_1\\
\leq~&\left\|\frac{1}{|C|}\sum_{j=1}^t A^*_{S_j}x_j-A^*_i\right\|_1+\|\Delta_i\|_1+\frac{1}{|C|}\left\|\sum_{j=1}^t \Delta_{S_j}x_j\right\|_1\\
=~&\|\Delta_i\|_1+\frac{1}{|C|}\left\|\sum_{j=1}^t \Delta_{S_j}x_j\right\|_1\\
\leq~&\|\Delta_i\|_1+\frac{2}{t}\left\|\sum_{j=1}^t \Delta_{S_j}x_j\right\|_1\\
\leq~&\|\Delta_i\|_1+O\left(\frac{1}{t}\cdot(qt)^{1/p}n\right)\\
=~&\|\Delta_i\|_1+O\left(q^{1/p}n/t^{1-1/p}\right)
\end{align*}
holds with probability at least $1-2^{-n^{\Theta(1)}}.$ The first equality follows using $A=A^*+\Delta.$ The first inequality follows using the triangle inequality. The second equality follows using the definition of the core and the coefficients tuple (see Definition~\ref{def:core_and_good_tuple} and Definition~\ref{def:coefficients_tuple}). The second inequality follows using Definition~\ref{def:core_and_good_tuple}. The third inequality follows by Lemma~\ref{lem:averaging_works} and the condition that $H\cap\left(\bigcup_{j=1}^t S_j\right)=\emptyset.$
Since $\left|\{i\}\cup\left(\bigcup_{j=1}^t S_j\right)\right|=qt+1,$ the total number of $(A^*,q,t,1/2)$-good tuples is upper bounded by $n^{qt+1}\leq 2^{n^{o(1)}}.$ By taking a union bound over all good tuples, we complete the proof.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:label_uniform_samples}}
\begin{proof}
For $j\in[t],$ by symmetry of the choices of $S_j$ and $i$, we have $\Pr[i\in R_{A^*}(S_j\cup\{i\})]\leq k/(q+1).$ Thus, by Markov's inequality,
\begin{align*}
&\Pr[|\{j\in[t]\mid i\in R_{A^*}(S_j\cup\{i\})\}|>0.5t]\\
\leq~& \E[|\{j\in[t]\mid i\in R_{A^*}(S_j\cup\{i\})\}|]/(0.5t)\\
\leq~&2k/q.
\end{align*}
Thus,
\begin{align*}
&\Pr[|\{j\in[t]\mid i\not\in R_{A^*}(S_j\cup\{i\})\}|\geq0.5t]\geq 1-2k/q.
\end{align*}
\end{proof}
\subsection{Proof of Lemma~\ref{lem:easy_to_find_good_tuple}}
\begin{proof}
For $S_1,S_2,\cdots,S_t\in {[n]\choose q}$ with $\sum_{j=1}^t |S_j|=qt,$ define
\begin{align*}
P_{(S_1,S_2,\cdots,S_t)}=\Pr_{i\in [n]\setminus \left(\bigcup_{j=1}^t S_j\right)}[(S_1,S_2,\cdots,S_t,i)\text{~is an $(A^*,q,t,1/2)-$good tuple~}].
\end{align*}
Let set $T$ be defined as follows:
\begin{align*}
\left\{ (S_1,S_2,\cdots,S_t) ~\bigg|~ S_1,S_2,\cdots,S_t\in {[n]\choose q} \text{~with~} \sum_{j=1}^t |S_j|=qt \right\}.
\end{align*}
Let $G$ be the set of all the $(A^*,q,t,1/2)-$good tuples. Then, we have
\begin{align*}
&\Pr_{(S_1,S_2,\cdots,S_t)\sim T}\left[\left|\left\{i\in[n]\setminus \left(\cup_{j=1}^t S_j\right)\mid (S_1,S_2,\cdots,S_t,i)\in G\right\}\right|\geq (1-4k/q)(n-qt)\right]\\
=~&\frac{1}{|T|}\left|\left\{(S_1,S_2,\cdots,S_t)\mid (S_1,S_2,\cdots,S_t)\in T\text{~and~}P_{(S_1,S_2,\cdots,S_t)}\geq 1-4k/q\right\}\right|\\
=~&\frac{1}{|T|}\underset{P_{(S_1,S_2,\cdots,S_t)}\geq 1-4k/q}{\sum_{(S_1,S_2,\cdots,S_t)\in T}} 1\\
\geq~& \frac{1}{|T|}\underset{P_{(S_1,S_2,\cdots,S_t)}\geq 1-4k/q}{\sum_{(S_1,S_2,\cdots,S_t)\in T}} P_{(S_1,S_2,\cdots,S_t)}\\
\geq~&1-2k/q-\frac{1}{|T|}\underset{P_{(S_1,S_2,\cdots,S_t)}< 1-4k/q}{\sum_{(S_1,S_2,\cdots,S_t)\in T}} P_{(S_1,S_2,\cdots,S_t)}\\
\geq~&1-2k/q-(1-4k/q)\\
\geq~&2k/q.
\end{align*}
The second inequality follows from Lemma~\ref{lem:label_uniform_samples}, which implies
\begin{align*}
\frac{1}{|T|}\underset{P_{(S_1,S_2,\cdots,S_t)}< 1-4k/q}{\sum_{(S_1,S_2,\cdots,S_t)\in T}} P_{(S_1,S_2,\cdots,S_t)}+\frac{1}{|T|}\underset{P_{(S_1,S_2,\cdots,S_t)}\geq 1-4k/q}{\sum_{(S_1,S_2,\cdots,S_t)\in T}} P_{(S_1,S_2,\cdots,S_t)}\geq 1-2k/q.
\end{align*}
\end{proof}
\section{Hardness Result}\label{sec:hard}
\paragraph{An overview of the hardness result.}
Recall that we overcame the column subset selection lower bound
of \cite{swz17}, which shows for entrywise $\ell_1$-low rank approximation
that there are matrices for which any subset of $\poly(k)$
columns spans at best a $k^{\Omega(1)}$-approximation. Indeed, we came up with a column subset
of size $\poly(k(\epsilon^{-1} + \log n))$ spanning a $(1+\epsilon)$-approximation. To do this,
we assumed $A = A^* + \Delta$, where $A^*$ is an arbitrary rank-$k$ matrix, and the entries of $\Delta$
are i.i.d. from a distribution with $\E[|\Delta_{i,j}|] = 1$
and $\E[|\Delta_{i,j}|^p] = O(1)$ for any real number $p$ strictly greater than
$1$.
Here we show an assumption on the moments is necessary,
by showing that if instead $\Delta$ is a matrix of i.i.d. Cauchy random variables, for which the
$p$-th moment is undefined or infinite for all $p \geq 1$,
then any subset of $n^{o(1)}$ columns spans at best a $1.002$-approximation. The input
matrix is $A = n^C 1 \cdot 1^\top + \Delta$, where $C > 0$ is a constant, and we show that $n^{\Omega(1)}$
columns need to be chosen to obtain a $1.001$-approximation, even for $k = 1$.
Note that this
result is stronger than that in \cite{swz17} in that it rules out column subset selection even if one
were to choose $n^{o(1)}$ columns; the result in \cite{swz17} requires at most $\poly(k)$ columns, which
for $k = 1$, would just rule out $O(1)$ columns. Our
main goal here is to show that a moment assumption on our distribution is necessary, and our
result also applies to a symmetric noise distribution which is i.i.d. on all entries, whereas
the result of \cite{swz17} requires a specific deterministic pattern (namely, the identity matrix)
on certain entries.
Our main theorem is given in Theorem \ref{thm:dis_l1_hardness}. The outline of the proof is as follows. We first condition on the event that $\|\Delta\|_1 \leq \frac{4.0002}{\pi} n^2 \ln n$, which is shown in Lemma \ref{lem:bound_of_Delta} and follows from standard analysis of sums of absolute values of Cauchy random variables. Thus, it is sufficient to show that if we choose any subset $S$ of $r = n^{o(1)}$ columns, denoted by the submatrix $A_S$, then
$\min_{X \in \mathbb{R}^{r \times n}}\|A_SX-A\|_1 \geq \frac{4.01}{\pi} \cdot n^2 \ln n$, as indeed then $\min_{X \in \mathbb{R}^{r \times n}}\|A_SX-A\|_1 \geq 1.002 \|\Delta\|_1$ and we rule out a $(1+\epsilon)$-approximation for $\epsilon$ a sufficiently small constant. To this end, we instead show for a fixed $S$, that $\min_{X \in \mathbb{R}^{r \times n}}\|A_SX-A\|_1 \geq \frac{4.01}{\pi} \cdot n^2 \ln n$ with probability $1-2^{-n^{\Theta(1)}}$, and then apply a union bound over all $S$. To prove this for a single subset $S$, we argue that for every ``coefficient matrix'' $X$, that
$\|A_SX-A\|_1 \geq \frac{4.01}{\pi} \cdot n^2 \ln n$.
We show in Lemma \ref{lem:for_all_alpha_can_not_be_too_large}, that with probability $1-(1/n)^{\Theta(n)}$ over $\Delta$, simultaneously for all $X$,
if $X$ has a column $X_j$ with $\|X_j\|_1 \geq n^c$ for a constant $c >0$,
then $\|A_SX_j - A_j\|_1 \geq .9n^3$, which is already too large to provide an $O(1)$-approximation.
Note that we need such a high probability bound to later union bound over {\it all $S$}.
Lemma \ref{lem:for_all_alpha_can_not_be_too_large} is in turn shown via a net argument on all $X_j$ (it suffices to prove this for a single $j \in [n]$,
since there are only $n$ different
$j$, so we can union bound over all $j$). The net bounds are given in Definition \ref{def:l1epsnet} and Lemma \ref{lem:l1_eps_net_size},
and the high probability bound for a given coefficient vector $X_j$ is shown in Lemma \ref{lem:for_each_alpha_can_not_be_too_large}, where we use properties
of the Cauchy distribution. Thus, we can assume $\|X_j\|_1 < n^c$ for all $j \in [n]$. We also show in Fact \ref{fac:bound_of_sum_of_alpha},
conditioned on the fact that $\|\Delta\|_1 \leq \frac{4.0002}{\pi} n^2 \ln n$,
it holds that for {\it any} vector $X_j$, if $\|X_j\|_1 < n^c$ and $|1-{\bf 1}^\top X_j| > 1-10^{-20}$,
then $\|A_SX-A\|_1 \geq \|A_SX_j - A_j\|_1 > n^3$. The intuition here is $A = n^{c_0}{\bf 1} \cdot {\bf 1^\top} + \Delta$
for a large constant $c_0$, and $X_j$ does not have enough norm ($\|X_j\|_1 \leq n^c$) or correlation with the vector ${\bf 1}$
($|1-{\bf 1}^\top X_j| > 1-10^{-20}$) to make $\|A_SX_j - A_j\|_1$ small.
Given the above, we can assume both that
$\|X_j\|_1 \leq n^c$ and $|1-{\bf 1}^\top X_j| \leq 1-10^{-20}$ for all columns $j$ of our coefficient matrix $X$. We can also assume
that $\|A_SX-A\|_1 \leq 4n^2 \ln n$, as otherwise such an $X$ already satisfies $\|A_SX-A\|_1 \geq \frac{4.01}{\pi} \cdot n^2 \ln n$ and
we are done.
To analyze $\|A_SX-A_{[n]\setminus S}\|_1 = \sum_{i,j} |(A_SX-A_{[n]\setminus S})_{i,j}|$
in Theorem \ref{thm:dis_l1_hardness}, we then split the sum over ``large coordinates'' $(i,j)$ for which
$|\Delta_{i,j}|> n^{1.0002}$, and ``small coordinates'' $(i,j)$ for which $|\Delta_{i,j}| < n^{.9999}$, and since we
seek to lower bound $\|A_SX-A_{[n] \setminus S}\|_1$, we drop the remaining coordinates $(i,j)$. To handle large coordinates, we observe that since the column span of
$A_S$ is only $r = n^{o(1)}$-dimensional, as one ranges over all vectors $y$ in its span of $1$-norm, say, $O(n^2 \ln n)$, there is only a small subset
$T$, of size at most $n^{.99999}$ of coordinates $i \in [n]$ for which we could ever have $|y_i| \geq n^{1.0001}$.
We show this in Lemma \ref{lem:size_of_the_bad_region}. This uses the property of vectors in low-dimensional subspaces, and has been exploited in earlier works in the context of designing so-called subspace embeddings
\cite{cw13,mm13}. We call $T$ the ``bad region'' for $A_S$.
While the column span of $A_S$ depends on $\Delta_S$, it is independent of $\Delta_{[n] \setminus S}$, and thus it is extremely
unlikely that the large coordinate of $\Delta_S$ ``match up'' with the bad region of $A_S$. This is captured in Lemma \ref{lem:bound_for_large_part},
where we show that if $\|A_SX-A_{[n] \setminus S}\|_1 \leq 4n^2 \ln n$ (as we said we could assume above),
then $\sum_{\textrm{large coordinates }i,j}|(A_SX-A_{[n] \setminus S})_{i,j}|$ is at least $\frac{1.996}{\pi}n^2 \ln n$.
Intuitively, the heavy coordinates make up about
$\frac{2}{\pi}n^2 \ln n$ of the total mass of $\|\Delta\|_1$, by tail bounds of the Cauchy distribution, and for any set $S$ of size $n^{o(1)}$,
$A_S$ fits at most a small portion of this, still leaving us left with $\frac{1.996}{\pi} n^2 \ln n$ in cost. Our goal is to show
that $\|A_SX-A_{[n] \setminus S}\|_1 \geq \frac{4.01}{\pi} \cdot n^2 \ln n$, so we still have a way to go.
We next analyze $\sum_{\textrm{small coordinates }i,j}|(A_SX-A_{[n] \setminus S})_{i,j}|$. Via Bernstein's inequality, in Lemma \ref{lem:small_part_same_sign_delta} we argue
that for any fixed vector $y$ and random vector $\Delta_j$ of i.i.d. Cauchy entries, roughly half of the contribution of coordinates to
$\|\Delta_j\|_1$ will come from coordinates
$i$ for which $\sign(y_i) = \sign((\Delta_j)_i)$ and $|(\Delta_j)_i| \leq n^{.9999}$, giving us a contribution of roughly $\frac{.9998}{\pi} n \ln n$
to the cost. The situation we will actually be in, when analyzing a
column of $A_SX-A_{[n] \setminus S}$, is that of taking the sum of two independent Cauchy vectors, shifted by a multiple of ${\bf 1}^\top$.
We
analyze this setting in Lemma \ref{lem:small_part_same_sign_alpha}, after first conditioning on certain level sets having typical behavior in Lemma \ref{lem:bound_of_Cauchy_level_size}.
This roughly doubles the contribution, giving us roughly
$\frac{1.996}{\pi} n^2 \ln n$ in total from the small coordinates $(i,j)$ on which the
sum of the two independent Cauchy vectors agrees in sign. Combined with the contribution from the heavy coordinates, this gives
us a cost of roughly $\frac{3.992}{\pi} n^2 \ln n$, which still falls short of the $\frac{4.01}{\pi} \cdot n^2 \ln n$ total cost
we are aiming for. Finally, if we sum up two independent Cauchy vectors and look at the contribution to the sum from coordinates which
disagree in sign, due to the anti-concentration of the Cauchy distribution we can still ``gain a little bit of cost'' since the values, although differing in sign, are still likely not to be very close in magnitude. We formalize this
in Lemma \ref{lem:small_part_different_sign}. We combine all of the costs from small coordinates in Lemma \ref{lem:fixed_small_part}, where we show we obtain a contribution
of at least $\frac{2.025}{\pi} n \ln n$ per column, i.e., $\frac{2.025}{\pi} n^2 \ln n$ in total. This is enough, when combined with our earlier $\frac{1.996}{\pi} n^2 \ln n$ contribution
from the heavy coordinates, to obtain an overall $\frac{4.01}{\pi} \cdot n^2 \ln n$ lower bound on the cost, and conclude the proof
of our main theorem in Theorem \ref{thm:dis_l1_hardness}.
In the remaining sections, we will present detailed proofs.
\subsection{A Useful Fact}
\begin{fact}\label{fac:bound_of_sum_of_alpha}
Let $c_0>0$ be a sufficiently large constant. Let $u= n^{c_0} \cdot {\bf 1 } \in \mathbb{R}^n$ and $\Delta\in\mathbb{R}^{n\times (d+1)}.$ If $\sum_{i=1}^{d+1} \| \Delta_{i}\|_1 \leq n^3$ and if $\alpha \in \mathbb{R}^d$ satisfies $ | 1 - {\bf 1}^\top \alpha | > 1/n^{c_1}$ and $\|\alpha\|_1\leq n^c$, where $0<c<c_0-10$ is a constant and $c_1>3$ is another constant depending on $c_0,c$, then
\begin{align*}
\| u - u {\bf 1}^\top \alpha + \Delta_{d+1} - \Delta_{[d]} \alpha \|_1 > n^3.
\end{align*}
\end{fact}
\begin{proof}
\begin{align*}
&\| u - u {\bf 1}^\top \alpha + \Delta_{d+1} - \Delta_{[d]} \alpha \|_1\\
\geq ~& |1-{\bf 1}^\top \alpha|\cdot\|u\|_1-\|\Delta_{d+1}\|_1-\|\Delta_{[d]}\alpha\|_1\\
\geq ~& |1-{\bf 1}^\top \alpha|\cdot n\cdot n^{c_0}-n^3-n^4\|\alpha\|_1\\
\geq~ & |1-{\bf 1}^\top \alpha|\cdot n\cdot n^{c_0}-n^{5+c}\\
\geq~ & n^{c_0+1-c_1}-n^{5+c}\\
\geq~ & n^3.
\end{align*}
The first inequality follows by the triangle inequality. The second inequality follows since $u= n^{c_0} \cdot {\bf 1 } \in \mathbb{R}^n$ and $\sum_{i=1}^{d+1} \| \Delta_{i}\|_1 \leq n^3.$ The third inequality follows since $\|\alpha\|_1\leq n^c.$ The fourth inequality follows since $ | 1 - {\bf 1}^\top \alpha | > 1/n^{c_1}.$ The last inequality follows since $c_0-c_1>c+5.$
\end{proof}
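The chain above uses only the triangle inequality, so it holds for any noise matrix. As a quick numerical sanity check (not part of the proof; the dimensions, noise distribution, and seed below are illustrative choices), the following Python sketch verifies the first inequality on random instances:

```python
import random

def l1(v):
    """l1 norm of a list."""
    return sum(abs(t) for t in v)

def chain_holds(n, d, seed):
    """Check ||u - u 1^T a + D_{d+1} - D_{[d]} a||_1
         >= |1 - 1^T a| * ||u||_1 - ||D_{d+1}||_1 - ||D_{[d]} a||_1
    for u = eta * (1, ..., 1); this is the first inequality in the Fact."""
    rng = random.Random(seed)
    eta = rng.uniform(-10, 10)
    a = [rng.uniform(-2, 2) for _ in range(d)]       # the coefficient vector alpha
    D = [[rng.gauss(0, 1) for _ in range(d + 1)] for _ in range(n)]  # any noise works
    Da = [sum(D[i][j] * a[j] for j in range(d)) for i in range(n)]   # Delta_{[d]} alpha
    lhs = l1([eta - eta * sum(a) + D[i][d] - Da[i] for i in range(n)])
    rhs = abs(1 - sum(a)) * abs(eta) * n - l1([D[i][d] for i in range(n)]) - l1(Da)
    return lhs >= rhs - 1e-9   # small tolerance for floating point

ok = all(chain_holds(20, 5, s) for s in range(50))
```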
\subsection{One-Sided Error Concentration Bound for a Random Cauchy Matrix}
\begin{lemma}[Lower bound on the cost]\label{lem:bound_of_Delta} If $n$ is sufficiently large, then
\begin{align*}
\Pr_{\Delta \sim \{C(0,1)\}^{n\times n}} \left[ \| \Delta \|_1 \leq \frac{4.0002}{\pi} n^2 \ln n \right] \geq 1-O(1/\log\log n).
\end{align*}
\end{lemma}
\begin{proof}
Let $\Delta\in\mathbb{R}^{n\times n}$ be a random matrix such that each entry is an i.i.d. $C(0,1)$ random Cauchy variable. Let $B=n^2\ln\ln n.$ Let $Z\in\mathbb{R}^{n\times n}$ and $\forall i,j\in[n],$
\begin{align*}
Z_{i,j}=\left\{\begin{array}{ll}|\Delta_{i,j}|&|\Delta_{i,j}|<B\\ B & \text{otherwise}\end{array}\right..
\end{align*}
For fixed $i,j\in[n],$ we have
\begin{align*}
\E[Z_{i,j}]&=\frac{2}{\pi}\int_{0}^B \frac{x}{1+x^2} \mathrm{d}x+\Pr[|\Delta_{i,j}|\geq B]\cdot B\\
&= \frac{1}{\pi}\ln(B^2+1)+\Pr[|\Delta_{i,j}|\geq B]\cdot B\\
&\leq \frac{1}{\pi}\ln(B^2+1)+1
\end{align*}
where the inequality follows since $\Pr[|\Delta_{i,j}|\geq B]\leq 1/B$, by the cumulative distribution function of a half Cauchy random variable. We also have $\E[Z_{i,j}]\geq \frac{1}{\pi}\ln(B^2+1).$
For the second moment, we have
\begin{align*}
\E[Z_{i,j}^2]&=\frac{2}{\pi}\int_{0}^B \frac{x^2}{1+x^2} \mathrm{d}x+\Pr[|\Delta_{i,j}|\geq B]\cdot B^2\\
&=\frac{2}{\pi}(B-\tan^{-1} B)+\Pr[|\Delta_{i,j}|\geq B]\cdot B^2\\
&\leq \frac{2}{\pi} B + B\\
&\leq 2B
\end{align*}
where the first inequality follows since $\tan^{-1}(B)\geq 0$ and $\Pr[|\Delta_{i,j}|\geq B]\leq 1/B$, again by the cumulative distribution function of a half Cauchy random variable.
By applying Bernstein's inequality, we have
\begin{align}
&\Pr\left[\|Z\|_1-\E[\|Z\|_1]>0.00001 \E[\|Z\|_1]\right]\notag\\
\leq~&\exp\left(-\frac{0.5\cdot0.00001^2 \E[\|Z\|_1]^2}{n^2\cdot 2B+\frac{1}{3}B\cdot 0.00001 \E[\|Z\|_1]}\right)\notag\\
\leq~&\exp(-\Omega(\ln n/\ln\ln n))\notag\\
\leq~&O(1/\ln n).\label{eq:bernstein_sum_of_Z}
\end{align}
The first inequality follows by the definition of $Z$ and the second moment bound on $Z_{i,j}.$ The second inequality follows from $\E[\|Z\|_1]=\Theta(n^2\ln n)$ and $B=n^2\ln\ln n.$
Notice that
\begin{align*}
~&\Pr \left[ \| \Delta \|_1 > \frac{4.0002}{\pi} n^2 \ln n \right]\\
=~&\Pr \left[ \| \Delta \|_1 > \frac{4.0002}{\pi} n^2 \ln n \mid \forall i,j, |\Delta_{i,j}|<B \right]\Pr\left[\forall i,j, |\Delta_{i,j}|<B\right]\\
~&+\Pr \left[ \| \Delta \|_1 > \frac{4.0002}{\pi} n^2 \ln n \mid \exists i,j, |\Delta_{i,j}|\geq B \right]\Pr\left[\exists i,j, |\Delta_{i,j}|\geq B\right]\\
\leq~&\Pr \left[ \| \Delta \|_1 > \frac{4.0002}{\pi} n^2 \ln n \mid \forall i,j, |\Delta_{i,j}|<B \right]+\Pr\left[\exists i,j, |\Delta_{i,j}|\geq B\right]\\
\leq~&\Pr \left[ \| Z \|_1 > \frac{4.0002}{\pi} n^2 \ln n \right]+\Pr\left[\exists i,j, |\Delta_{i,j}|\geq B\right]\\
\leq~&\Pr \left[ \| Z \|_1 > \frac{4.0002}{\pi} n^2 \ln n \right]+n^2\cdot 1/B\\
\leq~&\Pr \left[ \| Z \|_1 > 1.00001 \E[\|Z\|_1]\right]+n^2\cdot 1/B\\
\leq~&O(1/\log(n))+O(1/\log \log n)\\
\leq~&O(1/\log\log n).
\end{align*}
The second inequality follows by the definition of $Z$. The third inequality follows by the union bound and the cumulative distribution function of a half Cauchy random variable. The fourth inequality follows since $1.00001\cdot\E[\|Z\|_1]\leq 1.00001\cdot n^2(1/\pi\cdot\ln(B^2+1)+1)\leq 4.0002/\pi\cdot n^2\ln n$ when $n$ is sufficiently large. The fifth inequality follows from~\eqref{eq:bernstein_sum_of_Z} and $n^2/B=1/\ln\ln n.$
\end{proof}
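As a numerical illustration of the lemma (not used in the proof), one can sample standard Cauchy variables through the inverse CDF $\tan(\pi(U-1/2))$ and compare $\|\Delta\|_1$ against the $\frac{4}{\pi}n^2\ln n$ scale. The matrix size and seed below are illustrative; because of the heavy tails, the ratio concentrates only slowly and can occasionally overshoot:

```python
import math
import random

def cauchy_l1_mass(n, seed=0):
    """||Delta||_1 for an n x n matrix of i.i.d. standard Cauchy entries,
    sampled via the inverse CDF tan(pi * (U - 1/2))."""
    rng = random.Random(seed)
    return sum(abs(math.tan(math.pi * (rng.random() - 0.5)))
               for _ in range(n * n))

n = 300
mass = cauchy_l1_mass(n)
prediction = 4.0 / math.pi * n * n * math.log(n)   # the (4/pi) n^2 ln n scale
ratio = mass / prediction    # typically of constant order, but heavy-tailed
```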
\subsection{``For Each'' Guarantee}
In the following lemma, we show that for each fixed coefficient vector $\alpha$, if the norm of $\alpha$ is too large, then the fitting cost cannot be small.
\begin{lemma}[For each fixed $\alpha$, the norm cannot be too large]\label{lem:for_each_alpha_can_not_be_too_large}
Let $c>0$ be a sufficiently large constant, $n\geq d\geq 1,$ $u\in\mathbb{R}^n$ be any fixed vector and $\Delta \in \mathbb{R}^{n\times d}$ be a random matrix where $\forall i\in[n],j\in[d],\Delta_{i,j} \sim C(0,1)$ independently. For any fixed $\alpha \in \mathbb{R}^d$ with $\| \alpha \|_1 = n^{c}$,
\begin{align*}
\Pr_{\Delta \sim \{ C(0,1) \}^{n\times d} } [ \| ( u \cdot {\bf 1}^\top + \Delta ) \alpha \|_1 > n^3 ] > 1 - (1/n)^{\Theta(n)}.
\end{align*}
\end{lemma}
\begin{proof}
Let $c$ be a sufficiently large constant.
Let $\alpha\in\mathbb{R}^d$ with $\|\alpha\|_1=n^c$. Let $u\in\mathbb{R}^n$ be any fixed vector. Let $\Delta \in \mathbb{R}^{n\times d}$ be a random matrix where $\forall i\in[n],j\in[d],\Delta_{i,j} \sim C(0,1).$ By the $1$-stability of the Cauchy distribution, $\Delta\alpha\in\mathbb{R}^n$ is a random vector with each entry drawn independently from $C(0,\|\alpha\|_1)$. Since the density of a Cauchy random variable is symmetric and unimodal, shifting by a constant can only decrease the probability of landing in an interval centered at the origin: for each $i\in[n]$ and any $a_i\in\mathbb{R},$
\begin{align*}
\Pr[|(\Delta\alpha)_i+a_i|<n^3]\leq \Pr[|(\Delta\alpha)_i|<n^3].
\end{align*}
If $c>10,$ then due to the cumulative distribution function of Cauchy random variables, for a fixed $i\in [n],$ $\Pr[|(\Delta\alpha)_i|<n^3]<1/n.$ Since $\|(u\cdot{\bf 1}^\top+\Delta)\alpha\|_1<n^3$ implies $|(\Delta\alpha)_i+u_i\cdot{\bf 1}^\top\alpha|<n^3$ for every $i\in[n],$ and the coordinates are independent, we have $\Pr[\|(u\cdot{\bf 1}^\top+\Delta)\alpha\|_1<n^3]<(\frac{1}{n})^n.$ Thus,
\begin{align*}
\Pr_{\Delta \sim \{ C(0,1) \}^{n\times d} } [ \| ( u \cdot {\bf 1}^\top + \Delta ) \alpha \|_1 > n^3 ] > 1 - (1/n)^n.
\end{align*}
\end{proof}
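The proof rests on the $1$-stability of the Cauchy distribution: $\sum_j \alpha_j X_j \sim C(0,\|\alpha\|_1)$ for i.i.d. $X_j\sim C(0,1)$. A small Monte Carlo sketch (illustrative vector, sample size, and seed; not part of the proof) checks this through the fact that the median of $|C(0,\gamma)|$ is exactly $\gamma$:

```python
import math
import random

def std_cauchy(rng):
    """One standard Cauchy sample via the inverse CDF."""
    return math.tan(math.pi * (rng.random() - 0.5))

def combination_median(alpha, m, seed=0):
    """Empirical median of |sum_j alpha_j X_j| over m trials; by 1-stability
    the combination is C(0, ||alpha||_1), whose |.| has median ||alpha||_1."""
    rng = random.Random(seed)
    samples = sorted(abs(sum(a * std_cauchy(rng) for a in alpha))
                     for _ in range(m))
    return samples[m // 2]

alpha = [0.5, -1.5, 2.0, 1.0]            # ||alpha||_1 = 5
med = combination_median(alpha, 20001)   # should be close to 5
```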
\subsection{From ``For Each'' to ``For All'' via an $\epsilon$-Net}
\begin{definition}[$\epsilon$-net for the $\ell_1$-norm ball]\label{def:l1epsnet}
Let $A\in\mathbb{R}^{n\times d}$ have rank $d$, and let $L=\{y\in\mathbb{R}^n\mid y=Ax,x\in\mathbb{R}^d\}$ be the column space of $A.$ An $\epsilon$-net of the $\ell_1$-unit sphere $\mathcal{S}^{d-1}=\{y\mid \|y\|_1=1,y\in L\}\subset L$ is a set $N\subset \mathcal{S}^{d-1}$ of points for which $\forall y\in \mathcal{S}^{d-1},\exists y'\in N$ for which $\|y-y'\|_1\leq \epsilon.$
\end{definition}
\cite{ddhkm09} proved an upper bound on the size of an $\epsilon$-net.
\begin{lemma}[See, e.g., the ball $B$ on page 2068 of~\cite{ddhkm09}]\label{lem:l1_eps_net_size}
Let $A\in\mathbb{R}^{n\times d}$ have rank $d$, and let $L=\{y\in\mathbb{R}^n\mid y=Ax,x\in\mathbb{R}^d\}$ be the column space of $A.$ For $\epsilon\in(0,1),$ an $\epsilon$-net (Definition~\ref{def:l1epsnet}) $N$ of the $\ell_1$-unit sphere $\mathcal{S}^{d-1}=\{y\mid \|y\|_1=1,y\in L\}\subset L$ exists. Furthermore, the size of $N$ is at most $(3/\epsilon)^d.$
\end{lemma}
\begin{lemma}[For all possible $\alpha$, the norm cannot be too large]\label{lem:for_all_alpha_can_not_be_too_large}
Let $n\geq 1,d=n^{o(1)}.$ Let $u = n^{c_0} \cdot {\bf 1} \in \mathbb{R}^n$ denote a fixed vector where $c_0$ is a constant. Let $\Delta \in \mathbb{R}^{n\times d}$ be a random matrix where $\forall i\in[n],j\in[d],\Delta_{i,j} \sim C(0,1)$ independently. Let $c>0$ be a sufficiently large constant. Conditioned on $\|\Delta\|_1\leq n^3,$ with probability at least $1-(1/n)^{\Theta(n)},$ for all $\alpha\in\mathbb{R}^d$ with $\|\alpha\|_1\geq n^c,$ we have $\| ( u \cdot {\bf 1}^\top + \Delta ) \alpha \|_1>0.9n^3.$
\end{lemma}
\begin{proof}
Due to Lemma~\ref{lem:l1_eps_net_size}, there is a set $N\subset\{\alpha\in\mathbb{R}^d\mid \|\alpha\|_1=n^c \}\subset \mathbb{R}^{d}$ with $|N|\leq 2^{\Theta(d\log n)}$ such that $\forall \alpha\in\mathbb{R}^d$ with $\|\alpha\|_1=n^c,$ $\exists \alpha'\in N$ such that $\|\alpha-\alpha'\|_1\leq 1/n^{c'}$ where $c'>c_0+100$ is a constant. By applying Lemma~\ref{lem:for_each_alpha_can_not_be_too_large} and union bounding over all the points in $N,$ with probability at least $1-(1/n)^n\cdot|N|\geq 1-(1/n)^n\cdot 2^{n^{o(1)}}=1-(1/n)^{\Theta(n)},$ $\forall \alpha'\in N,$ $\| ( u \cdot {\bf 1}^\top + \Delta ) \alpha' \|_1>n^3.$ $\forall \alpha\in\mathbb{R}^{d}$ with $\|\alpha\|_1=n^c,$ we can find $\alpha'\in N$ such that $\|\alpha-\alpha'\|_1\leq 1/n^{c'}.$ Let $\gamma=\alpha-\alpha'.$ Then,
\begin{align*}
&\|(u\cdot {\bf 1}^\top+\Delta)\alpha\|_1\\
=~& \|(u\cdot {\bf 1}^\top+\Delta)(\alpha'+\gamma)\|_1\\
\geq~ & \|(u\cdot {\bf 1}^\top+\Delta)\alpha'\|_1-\|(u\cdot {\bf 1}^\top+\Delta)\gamma\|_1\\
\geq~& n^3-\sqrt{n}\|(u\cdot {\bf 1}^\top+\Delta)\gamma\|_2\\
\geq~& n^3-\sqrt{n}(\|u\cdot {\bf 1}^\top\|_2+\|\Delta\|_2)\|\gamma\|_2\\
\geq~& n^3-n^{c_0+50}/n^{c'}\\
\geq~& 0.9n^3.
\end{align*}
The first equality follows from $\alpha=\alpha'+\gamma.$ The first inequality follows by the triangle inequality. The second inequality follows by the relaxation from the $\ell_1$ norm to the $\ell_2$ norm. The third inequality follows from the operator norm and the triangle inequality. The fourth inequality follows using $\|\Delta\|_2\leq\|\Delta\|_1\leq n^3,\|u\|_2\leq n^{c_0+10},\|\gamma\|_2\leq \|\gamma\|_1\leq (1/n)^{c'}.$ The last inequality follows since $c'>c_0+100.$
For $\alpha\in\mathbb{R}^d$ with $\|\alpha\|_1>n^c,$ let $\alpha'=\alpha/\|\alpha\|_1\cdot n^c.$ Then
\begin{align*}
\|(u\cdot {\bf 1}^\top+\Delta)\alpha\|_1\geq \|(u\cdot {\bf 1}^\top+\Delta)\alpha'\|_1 \geq 0.9n^3.
\end{align*}
\end{proof}
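The union bound succeeds because the net has size only $2^{\Theta(d\log n)}$ while each net point fails with probability $(1/n)^n$; for $d=n^{o(1)}$ the first factor is negligible. The sketch below makes this arithmetic concrete in log scale (the parameter values $n$, $d$, $c$, $c'$ are illustrative choices, not the ones forced by the proof):

```python
import math

def log2_net_size(d, n, c, cp):
    """log2 of the net-size bound (3/eps)^d with eps = n^-(cp + c):
    rescaling the sphere ||alpha||_1 = n^c to the unit sphere shrinks
    the required resolution 1/n^cp by an extra factor n^c."""
    return d * (math.log2(3.0) + (cp + c) * math.log2(n))

def log2_failure_bound(n, d, c, cp):
    """log2 of (1/n)^n * |N|: per-point failure probability times net size."""
    return -n * math.log2(n) + log2_net_size(d, n, c, cp)

# With d = n^{o(1)}, the (1/n)^n term dominates and the exponent is negative.
val = log2_failure_bound(n=10**6, d=10, c=20, cp=120)
```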
\subsection{Bounding the Cost from the Large-Entry Part via ``Bad'' Regions}
In this section, we will use the concept of a \emph{well-conditioned basis} in our analysis.
\begin{definition}[Well-conditioned basis~\cite{ddhkm09}]\label{def:well_conditioned_basis_for_l1}
Let $A\in\mathbb{R}^{n\times m}$ have rank $d$. Let $p\in[1,\infty),$ and let $\|\cdot\|_q$ be the dual norm of $\|\cdot\|_p,$ i.e., $1/p+1/q=1.$ If $U\in\mathbb{R}^{n\times d}$ satisfies
\begin{enumerate}
\item $\|U\|_p\leq \alpha,$
\item $\forall z\in\mathbb{R}^d,\|z\|_q\leq \beta\|Uz\|_p,$
\end{enumerate}
then $U$ is an $(\alpha,\beta,p)$ well-conditioned basis for the column space of $A$.
\end{definition}
The following theorem gives an existence result of a well-conditioned basis.
\begin{theorem}[$\ell_1$ well-conditioned basis~\cite{ddhkm09}]\label{thm:l1_well_conditioned_basis}
Let $A\in\mathbb{R}^{n\times m}$ have rank $d$. There exists $U\in\mathbb{R}^{n\times d}$ such that $U$ is a $(d,1,1)$ well-conditioned basis for the column space of $A$.
\end{theorem}
In the following lemma, we consider vectors from a low-dimensional subspace. If a coordinate can be large for some vector in the subspace whose norm is small, then such coordinates are rare. More formally,
\begin{lemma}\label{lem:size_of_the_bad_region}
Let $n\geq 1$ be sufficiently large and let $r=n^{o(1)}$. Given a matrix $U\in \mathbb{R}^{n\times r}$, let $S = \{ y | y = U x , x \in \mathbb{R}^r \}$. Let the set $T$ denote $\{ i\in [n] ~ | ~ \exists y \in S, |y_i| \geq n^{1.0001} \text{~and~} \| y \|_1 < 8 n^2 \ln n \}$. Then we have
\begin{align*}
|T| \leq n^{0.99999 }.
\end{align*}
\end{lemma}
\begin{proof}
Due to Theorem~\ref{thm:l1_well_conditioned_basis}, the column space of $U$ admits an $(r,1,1)$ well-conditioned basis; since $S$ and $T$ depend only on this column space, we may assume without loss of generality that $U$ itself is such a basis. If $i\in T,$ then $\exists x\in \mathbb{R}^r$ such that $|(Ux)_i|\geq n^{1.0001}$ and $\|Ux\|_1<8n^2\ln n.$ Thus, we have
\begin{align*}
n^{1.0001}\leq |(Ux)_i|\leq \|U^i\|_1\|x\|_{\infty}\leq \|U^i\|_1\|Ux\|_1\leq \|U^i\|_1\cdot 8n^2\ln n.
\end{align*}
The first inequality follows using $n^{1.0001}\leq |(Ux)_i|.$ The second inequality follows by H\"{o}lder's inequality. The third inequality follows by the second property of the well-conditioned basis. The fourth inequality follows using $\|Ux\|_1<8n^2\ln n.$ Thus, we have
\begin{align*}
\|U^i\|_1\geq n^{1.0001}/n^{2+o(1)}\geq 1/n^{0.9999+o(1)}.
\end{align*}
Notice that $\sum_{j=1}^n \|U^j\|_1=\|U\|_1\leq r.$ Thus,
\begin{align*}
|T|\leq r\cdot n^{0.9999+o(1)}=n^{0.9999+o(1)}\leq n^{0.99999}.
\end{align*}
\end{proof}
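The counting step at the end of the proof is a Markov-type argument: the number of rows of $\ell_1$ norm at least $\theta$ is at most $\|U\|_1/\theta$. A minimal sketch on a random illustrative matrix (not part of the proof):

```python
import random

def heavy_row_bound(U, theta):
    """Return (number of rows with l1 norm >= theta, the Markov bound
    sum_i ||U^i||_1 / theta); the first never exceeds the second."""
    row_norms = [sum(abs(x) for x in row) for row in U]
    heavy = sum(1 for t in row_norms if t >= theta)
    return heavy, sum(row_norms) / theta

rng = random.Random(1)
U = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(200)]
heavy, bound = heavy_row_bound(U, theta=2.0)
```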
\begin{definition}[Bad region]
Given a matrix $U \in \mathbb{R}^{n\times r}$, we say ${\cal B}(U) = \{ i \in [n] ~|~ \exists y \in \mathrm{colspan}(U) \subset \mathbb{R}^n \mathrm{~s.t.~} |y_i| \geq n^{1.0001} \mathrm{~and~} \| y \|_1 < 8 n^2 \ln n \}$ is the bad region for $U$.
\end{definition}
Next, we state a lower and an upper bound on the probability that a Cauchy random variable falls in a certain range:
\begin{claim}\label{cla:prob_Cauchy_range}
Let $X\sim C(0,1)$ be a standard Cauchy random variable. Then for any $x>1549,$
\begin{align*}
\frac{2}{\pi}\cdot\frac{\ln(1.001)}{x}\geq \Pr[|X|\in (x,1.001x]]\geq \frac{1.999}{\pi}\cdot\frac{\ln(1.001)}{x}.
\end{align*}
\end{claim}
\begin{proof}
When $x>1549,$ $\frac{2}{\pi}\cdot\frac{\ln(1.001)}{x}\geq\frac{2}{\pi}\cdot (\tan^{-1}(1.001x)-\tan^{-1}(x))\geq \frac{1.999}{\pi}\cdot\frac{\ln(1.001)}{x}.$ Indeed, $\tan^{-1}(1.001x)-\tan^{-1}(x)=\int_x^{1.001x}\frac{\mathrm{d}t}{1+t^2}$; the upper bound uses $\frac{1}{1+t^2}\leq\frac{1}{t^2}$ together with $\ln(1.001)\geq\frac{0.001}{1.001}$, and the lower bound uses $\frac{1}{1+t^2}\geq\frac{1}{t^2}-\frac{1}{t^4}$, for which $x>1549$ is large enough to absorb the lower-order term.
\end{proof}
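Since the claim carries explicit constants, it can be checked numerically. The sketch below evaluates $\Pr[|X|\in(x,1.001x]]$ through the identity $\tan^{-1}a-\tan^{-1}b=\tan^{-1}\frac{a-b}{1+ab}$, which avoids catastrophic cancellation at large $x$, and tests the sandwich at a few illustrative values (not part of the proof):

```python
import math

def interval_prob(x):
    """Pr[|X| in (x, 1.001x]] for X ~ C(0,1), via the arctan difference
    identity atan(a) - atan(b) = atan((a - b) / (1 + a*b))."""
    z = (0.001 * x) / (1.0 + 1.001 * x * x)
    return (2.0 / math.pi) * math.atan(z)

def sandwich_holds(x):
    """Both inequalities of the claim at a given x > 1549."""
    upper = (2.0 / math.pi) * math.log(1.001) / x
    lower = (1.999 / math.pi) * math.log(1.001) / x
    return lower <= interval_prob(x) <= upper

ok = all(sandwich_holds(x) for x in [1e4, 1e5, 1e6, 1e8])
```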
We build a level set for the ``large'' noise values, and we show the bad region cannot cover much of the large noise. The reason is that the bad region is small, and for each row, there is always some large noise.
\begin{lemma}\label{lem:analysis_for_large_part}
Given a matrix $U\in \mathbb{R}^{n\times r}$ with $n$ sufficiently large, let $r=n^{o(1)}$, and consider a random matrix $\Delta \in \mathbb{R}^{n\times (n-r)}$ with $\Delta_{i,j} \sim C(0,1)$ independently. Let $L_t = \{ (i,j) ~|~ (i,j) \in [n]\times [n-r], |\Delta_{i,j}| \in (1.001^t , 1.001^{t+1} ] \}$. With probability at least $1-1/2^{n^{\Theta(1)}}$, for all $t \in ( \frac{1.0002\ln n}{\ln 1.001} , \frac{1.9999 \ln n}{\ln 1.001} ) \cap \mathbb{N}$,
\begin{align*}
| L_t \setminus ({\cal B}(U) \times [n-r]) | \geq n(n-r) \cdot 1.998\cdot \ln(1.001)/(\pi\cdot 1.001^t).
\end{align*}
\end{lemma}
\begin{proof}
Let $N=n\cdot(n-r).$ Then according to Claim~\ref{cla:prob_Cauchy_range}, $\forall t\in ( \frac{1.0002\ln n}{\ln 1.001} , \frac{1.9999 \ln n}{\ln 1.001} )\cap \mathbb{N},$ $\E(|L_t|)\geq N\cdot 1.999\cdot \ln(1.001)/(\pi\cdot 1.001^t)\geq n^{\Theta(1)}.$ For a fixed $t,$ by a Chernoff bound, with probability at least $1-1/2^{n^{\Theta(1)}}$, $|L_t|\geq N\cdot 1.9989\cdot \ln(1.001)/(\pi\cdot 1.001^t).$ Due to Lemma~\ref{lem:size_of_the_bad_region}, $|{\cal B}(U) \times [n-r]|\leq n^{0.99999}(n-r)=N/n^{0.00001}.$ Due to the Chernoff bound, with probability at least $1-1/2^{n^{\Theta(1)}},$ $| L_t \cap ({\cal B}(U) \times [n-r]) |<N/n^{0.00001}\cdot 2.0001\cdot \ln(1.001)/(\pi\cdot 1.001^t).$ Thus, with probability at least $1-1/2^{n^{\Theta(1)}},$ $|L_t\setminus ({\cal B}(U) \times [n-r])|\geq N\cdot 1.998\cdot \ln(1.001)/(\pi\cdot 1.001^t).$ By taking a union bound over all $t\in ( \frac{1.0002\ln n}{\ln 1.001} , \frac{1.9999 \ln n}{\ln 1.001} ) \cap \mathbb{N}$, we complete the proof.
\end{proof}
\begin{lemma}[The cost of the large noise part]\label{lem:bound_for_large_part}
Let $n\geq 1$ be sufficiently large, and let $r=n^{o(1)}.$ Given a matrix $U\in \mathbb{R}^{n\times r}$, and a random matrix $\Delta \in \mathbb{R}^{n\times (n-r)}$ with $\Delta_{i,j} \sim C(0,1)$ independently, let ${\cal I}= \{ (i,j) \in [n]\times[n-r] ~ | ~ |\Delta_{i,j}| \geq n^{1.0002} \}$. If $\| \Delta \|_1 \leq 4 n^2\ln n$, then with probability at least $1-1/2^{n^{\Theta(1)}}$, for all $X\in \mathbb{R}^{r\times n}$, either
\begin{align*}
\sum_{(i,j)\in {\cal I}} | (U X - \Delta)_{i,j} | > \frac{1.996}{\pi} n^2 \ln n,
\end{align*}
or
\begin{align*}
\|UX-\Delta\|_1>4n^2\ln n.
\end{align*}
\end{lemma}
\begin{proof}
We have
\begin{align}
&\sum_{(i,j)\in {\cal I}} | (U X - \Delta)_{i,j} | \notag\\
\geq~& \sum_{(i,j)\in {\cal I}\setminus (\mathcal{B}(U)\times[n-r])} | (U X - \Delta)_{i,j} |\notag\\
\geq~&\sum_{(i,j)\in {\cal I}\setminus (\mathcal{B}(U)\times[n-r])} | \Delta_{i,j} |-\sum_{(i,j)\in {\cal I}\setminus (\mathcal{B}(U)\times[n-r])} | (UX)_{i,j} |\label{eq:large_part_fixed_and_random}
\end{align}
Let $N=n(n-r).$ By a Chernoff bound and the cumulative distribution function of a Cauchy random variable, with probability at least $1-1/2^{n^{\Theta(1)}},$ $|{\cal I}|\leq 1.1\cdot N/n^{1.0002}.$ If $\exists (i,j)\in {\cal I}\setminus (\mathcal{B}(U)\times[n-r])$ which has $| (UX)_{i,j} |>n^{1.0001}$, then according to the definition of $\mathcal{B}(U),$ $\|UX\|_1\geq \|(UX)_j\|_1\geq 8n^2\ln n.$ Due to the triangle inequality, $\|UX-\Delta\|_1\geq \|UX\|_1-\|\Delta\|_1\geq 4n^2\ln n.$ If $\forall (i,j)\in {\cal I}\setminus (\mathcal{B}(U)\times[n-r])$ we have $| (UX)_{i,j} |\leq n^{1.0001},$ then
\begin{align}\label{eq:large_part_fixed_part}
\sum_{(i,j)\in {\cal I}\setminus (\mathcal{B}(U)\times[n-r])} | (UX)_{i,j} |\leq |\mathcal{I}|\cdot n^{1.0001}\leq 1.1\cdot N/n^{0.0001}.
\end{align}
Due to Lemma~\ref{lem:analysis_for_large_part}, with probability at least $1-1/2^{n^{\Theta(1)}},$
\begin{align}
&\sum_{(i,j)\in {\cal I}\setminus (\mathcal{B}(U)\times[n-r])} | \Delta_{i,j} |\notag\\
\geq~&\sum_{t\in( \frac{1.0002\ln n}{\ln 1.001} , \frac{1.9999 \ln n}{\ln 1.001} ) \cap \mathbb{N}} 1.001^t \cdot N \cdot 1.998\cdot \ln(1.001)/(\pi\cdot 1.001^t)\notag\\
\geq~&\frac{1.997}{\pi}\cdot N \ln n. \label{eq:large_part_random_part}
\end{align}
We plug~\eqref{eq:large_part_fixed_part} and~\eqref{eq:large_part_random_part} into~\eqref{eq:large_part_fixed_and_random}, from which we have
\begin{align*}
\sum_{(i,j)\in {\cal I}} | (U X - \Delta)_{i,j} |\geq \frac{1.996}{\pi} n^2\ln n.
\end{align*}
\end{proof}
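The level-set bound~\eqref{eq:large_part_random_part} reflects a per-entry computation: the expected Cauchy mass between $a=n^{1.0002}$ and $b=n^{1.9999}$ is $\frac{1}{\pi}\ln\frac{1+b^2}{1+a^2}\approx\frac{2}{\pi}\cdot 0.9997\ln n$. A deterministic numerical check of this integral (illustrative $n$; not part of the proof):

```python
import math

def expected_heavy_mass_per_entry(n):
    """E[|X| * 1{n^1.0002 < |X| <= n^1.9999}] for X ~ C(0,1): integrating
    2x / (pi (1 + x^2)) gives (1/pi) ln((1 + b^2) / (1 + a^2))."""
    a, b = n ** 1.0002, n ** 1.9999
    return (math.log1p(b * b) - math.log1p(a * a)) / math.pi

n = 1e6
exact = expected_heavy_mass_per_entry(n)
approx = (2.0 / math.pi) * (1.9999 - 1.0002) * math.log(n)
```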
\subsection{Cost from the Sign-Agreement Part of the Small-Entry Part}
We use $-y$ to fit $\Delta$ (we think of $A_S \alpha = A_S^* \alpha - y$, and want to minimize $\| - y - \Delta \|_1$). If the sign of $y_j$ is the same as the sign of $\Delta_j$, then the two coordinate values contribute additively to the cost.
\begin{lemma}[The contribution from $\Delta_j$ when $\Delta_j$ and $y_j$ have the same sign]\label{lem:small_part_same_sign_delta}
Suppose we are given a vector $y\in \mathbb{R}^n$ with all coordinates nonzero and a random vector $\Delta \in \mathbb{R}^{n}$ with $\Delta_{j} \sim C(0,1)$ independently. Then with probability at least $1-1/2^{n^{\Theta(1)}},$
\begin{align*}
\sum_{j ~:~ \sign(y_j)=\sign(\Delta_{j}) \mathrm{~and~} |\Delta_{j}| \leq n^{0.9999} } |\Delta_{j}| > \frac{0.9998}{\pi} n \ln n.
\end{align*}
\end{lemma}
\begin{proof}
For $j\in[n]$, define the random variable
\begin{align*}
Z_j=\left\{\begin{array}{ll}\Delta_j&0<\Delta_j\leq n^{0.9999}\\0&\text{otherwise}\end{array}\right..
\end{align*}
Then, we have
\begin{align*}
\Pr\left[\sum_{j ~:~ \sign(y_j)=\sign(\Delta_{j}) \mathrm{~and~} |\Delta_{j}| \leq n^{0.9999} } |\Delta_{j}| > \frac{0.9998}{\pi} n \ln n\right]
=\Pr\left[\sum_{j=1}^n Z_j>\frac{0.9998}{\pi} n \ln n\right].
\end{align*}
Let $B=n^{0.9999}.$
For $j\in [n],$
\begin{align*}
\E[Z_j]=\frac{1}{\pi}\int_{0}^{B} \frac{x}{1+x^2} \mathrm{d}x=\frac{1}{2\pi}\ln(B^2+1).
\end{align*}
Also,
\begin{align*}
\E[Z_j^2]=\frac{1}{\pi}\int_{0}^{B} \frac{x^2}{1+x^2} \mathrm{d}x=\frac{B-\tan^{-1}(B)}{\pi}\leq B.
\end{align*}
By Bernstein's inequality,
\begin{align*}
&\Pr\left[\E\left[\sum_{j=1}^n Z_j\right]-\sum_{j=1}^n Z_j>10^{-5}\E\left[\sum_{j=1}^n Z_j\right]\right]\\
\leq~&\exp\left(-\frac{0.5\cdot \left(10^{-5}\E\left[\sum_{j=1}^n Z_j\right]\right)^2}{\sum_{j=1}^n \E[Z_j^2]+\frac{1}{3}B\cdot 10^{-5}\E\left[\sum_{j=1}^n Z_j\right]}\right)\\
\leq~&\exp\left(-\frac{5\cdot 10^{-11}n^2\ln^2(B^2+1)/(4\pi^2)}{nB+\frac{1}{3}B\cdot 10^{-5}n\ln(B^2+1)/(2\pi)}\right)\\
\leq~&e^{-n^{\Theta(1)}}.\\
\end{align*}
The last inequality follows since $B=n^{0.9999}.$ Thus, we have
\begin{align*}
\Pr\left[\sum_{j=1}^n Z_j<0.9998/\pi\cdot n\ln n\right]\leq\Pr\left[\sum_{j=1}^n Z_j<0.99999n\ln(B^2+1)/(2\pi)\right]\leq e^{-n^{\Theta(1)}}.
\end{align*}
\end{proof}
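A scaled-down simulation of the lemma (the truncation level $B$, sample size, and seed are illustrative stand-ins for $n^{0.9999}$ and large $n$; the lemma is asymptotic and is not established by this check) compares the same-sign truncated mass against the Bernstein prediction $n\ln(B^2+1)/(2\pi)$:

```python
import math
import random

def same_sign_truncated_mass(n, B, seed=0):
    """Sum of |Delta_j| over coordinates with sign(y_j) = sign(Delta_j)
    and |Delta_j| <= B, for y with random nonzero signs and i.i.d.
    standard Cauchy Delta; the mean is n * ln(B^2 + 1) / (2 pi)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y_sign = 1.0 if rng.random() < 0.5 else -1.0
        d = math.tan(math.pi * (rng.random() - 0.5))
        if d * y_sign > 0 and abs(d) <= B:
            total += abs(d)
    return total

n, B = 200000, 500.0
mass = same_sign_truncated_mass(n, B)
pred = n * math.log(B * B + 1) / (2.0 * math.pi)
```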
\begin{lemma}[Bound on level sets of a Cauchy vector]\label{lem:bound_of_Cauchy_level_size}
Suppose we are given a random vector $y\in \mathbb{R}^n$ with $y_i\sim C(0,1)$ chosen independently. Let
\begin{align*}
L_t^- = \{ i \in [n] ~ | ~ -y_i \in (1.001^t, 1.001^{t+1}] \} \mathrm{~and~} L_t^+ = \{ i \in [n] ~ | ~ y_i \in (1.001^t, 1.001^{t+1}] \} .
\end{align*}
With probability at least $1-1/2^{n^{\Theta(1)}}$, for all $t\in(\frac{\ln 1549}{\ln 1.001},\frac{0.9999\ln n}{\ln 1.001})\cap \mathbb{N}$,
\begin{align*}
\min ( |L_t^-|, |L_t^+| ) \geq 0.999n \cdot \frac{1}{\pi} \frac{\ln 1.001}{1.001^t} .
\end{align*}
\end{lemma}
\begin{proof}
For $i\in[n],t\geq \frac{\ln 1549}{\ln 1.001},$ according to Claim~\ref{cla:prob_Cauchy_range}, $\Pr[y_i\in (1.001^t,1.001^{t+1}]]\geq 0.9995/\pi\cdot \ln(1.001)/1.001^t.$ Thus, $\E[|L_t^+|]=\E[|L_t^-|]\geq n\cdot 0.9995/\pi\cdot \ln(1.001)/1.001^t.$ Since $t\leq \frac{0.9999\ln n}{\ln 1.001},$ $1.001^t\leq n^{0.9999},$ we have $\E[|L_t^+|]=\E[|L_t^-|]\geq n^{\Theta(1)}.$ By applying a Chernoff bound,
\begin{align*}
\Pr[|L_t^+|>0.999n/\pi\cdot \ln(1.001)/1.001^t]\geq 1-1/2^{n^{\Theta(1)}}.
\end{align*}
Similarly, we have
\begin{align*}
\Pr[|L_t^-|>0.999n/\pi\cdot \ln(1.001)/1.001^t]\geq 1-1/2^{n^{\Theta(1)}}.
\end{align*}
By taking a union bound over all the $L_t^+$ and $L_t^-,$ we complete the proof.
\end{proof}
\begin{lemma}[The contribution from $y_i$ when $\Delta_i$ and $y_i$ have the same sign]\label{lem:small_part_same_sign_alpha}
Let $u=\eta\cdot\mathbf{1}\in\mathbb{R}^n$ where $\eta\in\mathbb{R}$ is an arbitrary real number. Let $y\in \mathbb{R}^n$ be a random vector with $y_i\sim C(0,\beta)$ independently for some $\beta>0.$ Let $\Delta\in\mathbb{R}^n$ be a random vector with $\Delta_i \sim C(0,1)$ independently. With probability at least $1-1/2^{n^{\Theta(1)}},$
\begin{align*}
\sum_{i ~: ~ \sign((u+y)_i) = \sign(\Delta_{i}) \mathrm{~and~} |\Delta_{i}| \leq n^{0.9999}} |(u+y)_i| \geq \beta\cdot \frac{0.997}{\pi} n\ln n.
\end{align*}
\end{lemma}
\begin{proof}
For all $t\in(\frac{\ln 1549}{\ln 1.001},\frac{0.9999\ln n}{\ln 1.001})\cap \mathbb{N},$ define
\begin{align*}
L_t^- = \{ i \in [n] ~ | ~ -y_i \in (\beta\cdot 1.001^t, \beta\cdot 1.001^{t+1}] \} \mathrm{~and~} L_t^+ = \{ i \in [n] ~ | ~ y_i \in (\beta\cdot1.001^t, \beta\cdot1.001^{t+1}] \} .
\end{align*}
Define
\begin{align*}
G=\{i\in[n]\mid \sign((u+y)_i) = \sign(\Delta_{i})\mathrm{~and~}|\Delta_{i}| \leq n^{0.9999}\}.
\end{align*}
Then $\forall i\in[n],\Pr[i\in G]\geq 0.5-1/n^{0.9999}\geq 0.4999999999.$
Due to Lemma~\ref{lem:bound_of_Cauchy_level_size},
\begin{align*}
\min ( |L_t^-|, |L_t^+| ) \geq 0.999n \cdot \frac{1}{\pi} \frac{\ln 1.001}{1.001^t} \geq n^{\Theta(1)}.
\end{align*}
By a Chernoff bound and a union bound, with probability at least $1-1/2^{n^{\Theta(1)}},$ $\forall t\in(\frac{\ln 1549}{\ln 1.001},\frac{0.9999\ln n}{\ln 1.001})\cap \mathbb{N},$
\begin{align}
&\min ( |L_t^-\cap G|, |L_t^+ \cap G|)\notag\\
\geq~&0.499n \cdot \frac{1}{\pi} \frac{\ln 1.001}{1.001^t}. \label{eq:size_of_pm_level}
\end{align}
Then we have
\begin{align*}
&\sum_{i\in G} |(u+y)_i|\\
\geq~&\sum_{t\in(\frac{\ln 1549}{\ln 1.001},\frac{0.9999\ln n}{\ln 1.001})\cap \mathbb{N}}\left(\sum_{i\in L_t^+,i\in G}|y_i+\eta|+\sum_{i\in L_t^-,i\in G}|-y_i-\eta|\right)\\
\geq~&\sum_{t\in(\frac{\ln 1549}{\ln 1.001},\frac{0.9999\ln n}{\ln 1.001})\cap \mathbb{N}}0.499n \cdot \frac{1}{\pi} \frac{\ln 1.001}{1.001^t}\cdot 2\cdot 1.001^t\cdot \beta\\
\geq~&\beta\cdot\frac{0.997}{\pi}n\ln n
\end{align*}
The second inequality follows by Equation~\eqref{eq:size_of_pm_level} and the triangle inequality, i.e., $\forall a,b,c\in\mathbb{R},|a+c|+|b-c|\geq |a+b|.$
\end{proof}
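The pairing of $L_t^+$ against $L_t^-$ uses only the elementary inequality $|a+c|+|b-c|\geq|a+b|$. A minimal randomized check (illustrative; not part of the proof):

```python
import random

def pairing_inequality(a, b, c):
    """|a + c| + |b - c| >= |a + b|, since (a + c) + (b - c) = a + b."""
    return abs(a + c) + abs(b - c) >= abs(a + b) - 1e-12

rng = random.Random(2)
ok = all(pairing_inequality(rng.uniform(-5, 5), rng.uniform(-5, 5),
                            rng.uniform(-5, 5)) for _ in range(1000))
```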
\subsection{Cost from the Sign-Disagreement Part of the Small-Entry Part}
\begin{lemma}\label{lem:small_part_different_sign}
Given a vector $y\in \mathbb{R}^n$ and a random vector $\Delta\in \mathbb{R}^{n}$ with $\Delta_{i} \sim C(0,1)$ independently, with probability at least $1-1/2^{n^{\Theta(1)}}$,
\begin{align*}
\sum_{i ~ : ~ \sign(y_i) \neq \sign(\Delta_{i}) \mathrm{~and~} |\Delta_{i}| < n^{0.9999} } |y_i + \Delta_{i}| > \frac{0.03}{\pi}n\ln n.
\end{align*}
\end{lemma}
\begin{proof}
For $t\in [0,\frac{0.9999\ln n}{\ln 4})\cap \mathbb{N}$ define
\begin{align*}
L_t=\{i\in[n]\mid \sign(y_i) \neq \sign(\Delta_{i}),|\Delta_i|\in (4^t,4^{t+1}],|\Delta_i|\not\in[|y_i|-4^t,|y_i|+4^t]\}.
\end{align*}
$\forall x\geq 1,y>0,$ we have
\begin{align*}
&\Pr_{X\sim C(0,1)}[|X|\in (x,4x],|X|\not\in [y-x,y+x]]\\
\geq~& \Pr_{X\sim C(0,1)}[|X|\in (3x,4x]]\\
=~ &\frac{2}{\pi}\cdot(\tan^{-1}(4x)-\tan^{-1}(3x))\\
\geq~& \frac{0.1}{\pi}\cdot\frac{\ln(4)}{x}
\end{align*}
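As a quick numerical sanity check (outside the formal proof), the final bound $\frac{2}{\pi}(\tan^{-1}(4x)-\tan^{-1}(3x))\geq \frac{0.1}{\pi}\cdot\frac{\ln 4}{x}$ for $x\geq 1$ can be verified on a grid of values; a minimal sketch:

```python
import math

def cauchy_band_prob(x):
    # Pr[|X| in (3x, 4x]] for X ~ Cauchy(0, 1).
    return (2 / math.pi) * (math.atan(4 * x) - math.atan(3 * x))

def claimed_lower_bound(x):
    # (0.1 / pi) * ln(4) / x, the bound used in the proof.
    return (0.1 / math.pi) * math.log(4) / x

# Check the inequality on a geometric grid of x >= 1.
for i in range(200):
    x = 1.1 ** i
    assert cauchy_band_prob(x) >= claimed_lower_bound(x)
```

As $x\to\infty$, the band probability behaves like $1/(6\pi x)\approx 0.053/x$, comfortably above the claimed $0.044/x$.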
Thus, $\forall i\in[n],t\in [0,\frac{0.9999\ln n}{\ln 4})\cap \mathbb{N},$
\begin{align*}
\Pr[i\in L_t]\geq \frac{0.05}{\pi}\cdot\frac{\ln(4)}{4^t}.
\end{align*}
Thus, $\forall t\in [0,\frac{0.9999\ln n}{\ln 4})\cap \mathbb{N},\E[|L_t|]\geq 0.05n/\pi\cdot \ln(4)/4^t\geq n^{\Theta(1)}.$ By a Chernoff bound and a union bound, with probability at least $1-1/2^{n^{\Theta(1)}},$ $\forall t\in [0,\frac{0.9999\ln n}{\ln 4})\cap \mathbb{N},$ $|L_t|\geq 0.04n/\pi\cdot \ln(4)/4^t.$ Thus, we have, with probability at least $1-1/2^{n^{\Theta(1)}},$
\begin{align*}
&\sum_{i ~ : ~ \sign(y_i) \neq \sign(\Delta_{i}) \mathrm{~and~} |\Delta_{i}| < n^{0.9999} } |y_i + \Delta_{i}|\\
\geq~&\sum_{t\in [0,\frac{0.9999\ln n}{\ln 4})\cap \mathbb{N}}|L_t|\cdot 4^t\\
\geq~&\frac{0.03}{\pi}n\ln n.
\end{align*}
\end{proof}
\subsection{Overall Cost of the Small-Entry Part}
\begin{lemma}[For each]\label{lem:fixed_small_part}
Let $u=\eta\cdot \mathbf{1}\in\mathbb{R}^n$ where $\eta\in\mathbb{R}$ is an arbitrary real number. Let $\alpha\in\mathbb{R}^d$ where $\|\alpha\|_1\geq 1-10^{-20}.$ Let $\Delta\in\mathbb{R}^{n\times(d+1)}$ and $\forall (i,j)\in [n]\times [d+1],\Delta_{i,j}\sim C(0,1)$ are i.i.d. standard Cauchy random variables. Then with probability at least $1-1/2^{n^{\Theta(1)}},$
\begin{align*}
\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha)_j|\geq \frac{2.025}{\pi}n\ln n.
\end{align*}
\end{lemma}
\begin{proof}
Let $G_1=\{j\in[n]\mid |\Delta_{j,d+1}|<n^{0.9999},\sign((u(1-\mathbf{1}^\top\alpha)-\Delta_{[d]}\alpha)_j)=\sign((\Delta_{d+1})_j)\}$ and $G_2=\{j\in[n]\mid |\Delta_{j,d+1}|<n^{0.9999},\sign((u(1-\mathbf{1}^\top\alpha)-\Delta_{[d]}\alpha)_j)\not=\sign((\Delta_{d+1})_j)\}.$ Notice that $\Delta_{[d]}\alpha$ is a random vector with each entry independently drawn from $C(0,\|\alpha\|_1).$
Then with probability at least $1-1/2^{n^{\Theta(1)}},$
\begin{align*}
&\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha)_j|\\
=~&\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}} |(u(1-\mathbf{1}^\top\alpha)-\Delta_{[d]}\alpha+\Delta_{d+1})_j|\\
=~&\sum_{j\in G_1} |(u(1-\mathbf{1}^\top\alpha)-\Delta_{[d]}\alpha+\Delta_{d+1})_j|+\sum_{j\in G_2} |(u(1-\mathbf{1}^\top\alpha)-\Delta_{[d]}\alpha+\Delta_{d+1})_j|\\
=~& \sum_{j\in G_1} |(u(1-\mathbf{1}^\top\alpha)-\Delta_{[d]}\alpha)_j|+\sum_{j\in G_1}|(\Delta_{d+1})_j|+\sum_{j\in G_2} |(u(1-\mathbf{1}^\top\alpha)-\Delta_{[d]}\alpha+\Delta_{d+1})_j|\\
\geq~&\|\alpha\|_1\cdot\frac{0.997}{\pi}\cdot n\ln n+\frac{0.9998}{\pi}n\ln n+\frac{0.03}{\pi}n\ln n\\
\geq~&\frac{2.025}{\pi}n\ln n.
\end{align*}
The first inequality follows by Lemma~\ref{lem:small_part_same_sign_alpha}, Lemma~\ref{lem:small_part_same_sign_delta} and Lemma~\ref{lem:small_part_different_sign}. The second inequality follows by $\|\alpha\|_1\geq 1-10^{-20}.$
\end{proof}
\begin{lemma}[For all]\label{lem:bound_for_small_part}
Let $c>0,c_0>0$ be two arbitrary constants. Let $u=\eta\cdot \mathbf{1}\in\mathbb{R}^n$ where $\eta\in\mathbb{R}$ satisfies $|\eta|\leq n^{c_0}$. Consider a random matrix $\Delta\in\mathbb{R}^{n\times(d+1)}$ with $d=n^{o(1)}$ and $\forall (i,j)\in [n]\times [d+1],\Delta_{i,j}\sim C(0,1)$ are i.i.d. standard Cauchy random variables. Conditioned on $\|\Delta\|_1\leq n^3,$ with probability at least $1-1/2^{n^{\Theta(1)}},$ $\forall \alpha\in\mathbb{R}^{d}$ with $1-10^{-20}\leq\|\alpha\|_1\leq n^c,$
\begin{align*}
\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha)_j|\geq \frac{2.024}{\pi}n\ln n.
\end{align*}
\end{lemma}
\begin{proof}
Let $\mathcal{N}$ be a set of points:
\begin{align*}
\mathcal{N}=\left\{\alpha\in\mathbb{R}^d\mid 1-10^{-20}\leq \|\alpha\|_1\leq n^c\mathrm{~and~} \exists q\in \mathbb{Z}^d,\mathrm{~such~that~}\alpha=q/n^{c+c_0+1000}\right\}.
\end{align*}
Since $d=n^{o(1)},$ we have $|\mathcal{N}|\leq (n^{2c+c_0+2000})^d=2^{n^{o(1)}}.$ By Lemma~\ref{lem:fixed_small_part} and a union bound, with probability at least $1-|\mathcal{N}|/2^{n^{\Theta(1)}}\geq 1-1/2^{n^{\Theta(1)}},$ $\forall \alpha\in \mathcal{N},$ we have
\begin{align*}
\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha)_j|\geq \frac{2.025}{\pi}n\ln n.
\end{align*}
Due to the construction of $\mathcal{N},$ we have $\forall \alpha\in\mathbb{R}^{d}$ with $1-10^{-20}\leq\|\alpha\|_1\leq n^c,$ $\exists \alpha'\in\mathcal{N}$ such that $\|\alpha-\alpha'\|_\infty\leq 1/n^{c+c_0+1000}.$ Let $\gamma=\alpha-\alpha'.$ Then
\begin{align*}
&\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha)_j|\\
=~&\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})(\alpha'+\gamma))_j|\\
\geq~&\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha')_j|-\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|((u\mathbf{1}^\top+\Delta_{[d]})\gamma)_j|\\
\geq~&\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha')_j|-\|(u\mathbf{1}^\top+\Delta_{[d]})\gamma\|_1\\
\geq~&\frac{2.025}{\pi}n\ln n-1/n^{500}\\
\geq~&\frac{2.024}{\pi}n\ln n.
\end{align*}
The first equality follows from $\alpha=\alpha'+\gamma.$ The first inequality follows by the triangle inequality. The third inequality follows from $\|\gamma\|_1\leq 1/n^{c+c_0+800},\|u\mathbf{1}^\top\|_1\leq n^{c_0+10},\|\Delta\|_1\leq n^3,$ and $\forall \alpha'\in \mathcal{N},$
\begin{align*}
\sum_{j\in[n],|\Delta_{j,d+1}|<n^{0.9999}}|(u+\Delta_{d+1}-(u\mathbf{1}^\top+\Delta_{[d]})\alpha')_j|\geq \frac{2.025}{\pi}n\ln n.
\end{align*}
\end{proof}
\subsection{Main Result}
\begin{theorem}[Formal version of Theorem~\ref{thm:intro_l1_hardness}]\label{thm:dis_l1_hardness}
Let $n>0$ be sufficiently large. Let $A=\eta\cdot \mathbf{1}\cdot\mathbf{1}^\top+\Delta\in\mathbb{R}^{n\times n}$ be a random matrix where $\eta=n^{c_0}$ for some sufficiently large constant $c_0,$ and $\forall i,j\in[n],\Delta_{i,j}\sim C(0,1)$ are i.i.d. standard Cauchy random variables. Let $r=n^{o(1)}.$ Then with probability at least $1-O(1/\log\log n),$ $\forall S\subset[n]$ with $|S|=r,$
\begin{align*}
\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1\geq 1.002\|\Delta\|_1.
\end{align*}
\end{theorem}
\begin{proof}
We first argue that for a fixed set $S,$ conditioned on $\|\Delta\|_1\leq 100n^2\ln n,$ with probability at least $1-1/2^{n^{\Theta(1)}},$
\begin{align*}
\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1\geq 1.002\|\Delta\|_1.
\end{align*}
Then we can take a union bound over the at most $n^r=2^{n^{o(1)}}$ possible choices of $S.$ It suffices to show for a fixed set $S,$ $\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1$ is not small.
Without loss of generality, let $S=[r],$ and we want to argue the cost
\begin{align*}
\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1\geq\min_{X\in\mathbb{R}^{r\times n}} \|A_SX_{[n]\setminus S}-A_{[n]\setminus S}\|_1\geq 1.002\|\Delta\|_1.
\end{align*}
Due to Lemma~\ref{lem:bound_of_Delta}, with probability at least $1-O(1/\log\log n),$ $\|\Delta\|_1\leq 4.0002/\pi\cdot n^2\ln n.$ Now, we can condition on $\|\Delta\|_1\leq 4.0002/\pi\cdot n^2\ln n.$
Consider $j\in[n]\setminus S.$ Due to Lemma~\ref{lem:for_all_alpha_can_not_be_too_large}, with probability at least $1-(1/n)^{\Theta(n)},$ for all $X_j\in\mathbb{R}^r$ with $\|X_j\|_1\geq n^c$ for some constant $c>0,$ we have
\begin{align*}
\|A_SX_j-A_j\|_1&=\|(\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+[\Delta_S\ \Delta_j])[X_j^\top\ -1]^\top\|_1\geq 0.9n^3.
\end{align*}
By taking a union bound over all $j\in[n]\setminus S,$ with probability at least $1-(1/n)^{\Theta(n)},$ for all $X\in\mathbb{R}^{r\times n}$ with $\exists j\in[n]\setminus S,\|X_j\|_1\geq n^c,$ we have
\begin{align*}
\|A_SX-A\|_1\geq 0.9n^3.
\end{align*}
Thus, we only need to consider the case $\forall j\in [n]\setminus S,\|X_j\|_1\leq n^c.$ Notice that we condition on $\|\Delta\|_1\leq 4.0002/\pi\cdot n^2\ln n.$ By Fact~\ref{fac:bound_of_sum_of_alpha}, we have that if $\|X_j\|_1\leq n^c$ and $|1-\mathbf{1}^\top X_j|>1-10^{-20},$ then $\|A_SX-A\|_1\geq \|A_SX_j-A_j\|_1> n^3.$
Thus, we only need to consider the case $\forall j\in [n]\setminus S,\|X_j\|_1\leq n^c,|1-\mathbf{1}^\top X_j|\leq 1-10^{-20}.$ $\forall X\in\mathbb{R}^{r\times n}$ with $\forall j\in[n]\setminus S, \|X_j\|_1\leq n^c,|1-\mathbf{1}^\top X_j|\leq 1-10^{-20},$ if $\|A_SX_{[n]\setminus S}-A_{[n]\setminus S}\|_1\leq 4n^2\ln n,$ then
\begin{align*}
&\|A_SX_{[n]\setminus S}-A_{[n]\setminus S}\|_1\\
=~&\|(\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_S)X_{[n]\setminus S}-(\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_{[n]\setminus S})\|_1\\
\geq~& \sum_{i\in[n],j\in[n]\setminus S,|\Delta_{i,j}|\geq n^{1.0002}} |(((\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_S)X_{[n]\setminus S}-\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top)-\Delta_{[n]\setminus S})_{i,j}|\\
&+\sum_{i\in[n],j\in[n]\setminus S,|\Delta_{i,j}|< n^{0.9999}} |((\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_S)X_{[n]\setminus S}-(\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_{[n]\setminus S}))_{i,j}|\\
\geq~& \frac{1.996}{\pi}\cdot n^2\ln n+\sum_{i\in[n],j\in[n]\setminus S,|\Delta_{i,j}|< n^{0.9999}} |((\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_S)X_{[n]\setminus S}-(\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_{[n]\setminus S}))_{i,j}|\\
=~&\frac{1.996}{\pi}\cdot n^2\ln n+\sum_{j\in[n]\setminus S}\sum_{i\in[n],|\Delta_{i,j}|< n^{0.9999}} |((\eta\cdot\mathbf{1}\cdot\mathbf{1}^\top+\Delta_S)X_j-\eta\cdot\mathbf{1}-\Delta_j)_i|\\
\geq~&\frac{1.996}{\pi}\cdot n^2\ln n+\sum_{j\in[n]\setminus S}\frac{2.024}{\pi}n\ln n\\
\geq~&\frac{1.996}{\pi}\cdot n^2\ln n+\frac{2.023}{\pi}n^2\ln n\\
\geq~&\frac{4.01}{\pi}\cdot n^2\ln n
\end{align*}
holds with probability at least $1-1/2^{n^{\Theta(1)}}.$ The first equality follows by the definition of $A$. The first inequality follows by splitting the entries according to $|\Delta_{i,j}|$ and dropping those with $|\Delta_{i,j}|\in[n^{0.9999},n^{1.0002}).$ Notice that $[\mathbf{1}\ \Delta_S]$ has rank at most $r+1=n^{o(1)}.$ Then, due to Lemma~\ref{lem:bound_for_large_part}, and the condition $\|A_SX_{[n]\setminus S}-A_{[n]\setminus S}\|_1\leq 4n^2\ln n,$ the second inequality holds with probability at least $1-1/2^{n^{\Theta(1)}}.$ The second equality follows by grouping the cost by each column. The third inequality holds with probability at least $1-1/2^{n^{\Theta(1)}}$ by Lemma~\ref{lem:bound_for_small_part}, and a union bound over all the columns in $[n]\setminus S.$ The fourth inequality follows by $n-r=n-n^{o(1)}\geq (1-10^{-100})n.$
Thus, conditioned on $\|\Delta\|_1\leq 4.0002/\pi\cdot n^2\ln n,$ with probability at least $1-1/2^{n^{\Theta(1)}},$ we have $\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1\geq \frac{4.01}{\pi}\cdot n^2\ln n.$ By taking a union bound over all the ${n\choose r} = 2^{n^{o(1)}}$ choices of $S,$ we have that conditioned on $\|\Delta\|_1\leq \frac{4.0002}{\pi}n^2\ln n,$ with probability at least $1-1/2^{n^{\Theta(1)}},$ $\forall S\subset [n]$ with $|S|=r=n^{o(1)},$
$\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1\geq \frac{4.01}{\pi}\cdot n^2\ln n.$ Since $4.01/4.0002>1.002,$
\begin{align*}
\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1\geq 1.002 \|\Delta\|_1.
\end{align*}
Since $\|\Delta\|_1\leq \frac{4.0002}{\pi}n^2\ln n$ happens with probability at least $1-O(1/\log \log n),$ this completes the proof.
\end{proof}
\section{$\ell_1$-Norm Column Subset Selection}\label{sec:dis}
We first present two subroutines.
{\bf Linear regression with $\ell_1$ loss.} The first subroutine needed is an approximate $\ell_1$ linear regression solver.
In particular, given a matrix $M\in\mathbb{R}^{n\times d}$, $n$ vectors $b_1,b_2,\cdots,b_n\in\mathbb{R}^n$, and an error parameter $\varepsilon\in(0,1)$, we want to compute $x_1,x_2,\cdots,x_n\in\mathbb{R}^d$ for which $\forall i\in[n]$, we have
\begin{align*}
\|Mx_i-b_i\|_1\leq (1+\varepsilon)\cdot\min_{x\in\mathbb{R}^d} \|Mx-b_i\|_1.
\end{align*}
Furthermore, we also need an estimate $v_i$ of the regression cost $\|Mx_i-b_i\|_1$ for each $i\in[n]$ such that $\|Mx_i-b_i\|_1\leq v_i\leq (1+\varepsilon)\|Mx_i-b_i\|_1$.
Such an $\ell_1$-regression problem can be solved efficiently (see \cite{w14} for a survey).
The total running time to solve these $n$ regression problems simultaneously is at most $\widetilde{O}(n^2)+n\cdot \poly(d\log n)$, and the success probability is at least $0.999$.
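To build intuition for why such solvers exist, note that in the special case $d=1$ the problem $\min_{x\in\mathbb{R}}\|Mx-b\|_1$ admits an exact solution via a weighted median: writing each term as $|a_i|\cdot|x-b_i/a_i|$ (for $a_i\neq 0$), the optimum is a weighted median of the ratios $b_i/a_i$. The sketch below is illustrative only and is not the general solver surveyed in \cite{w14}:

```python
def l1_regression_1d(a, b):
    """Exactly solve min_x sum_i |a[i] * x - b[i]| for scalar x.

    Each term is |a_i| * |x - b_i/a_i| (terms with a_i = 0 are constant),
    so the optimum is a weighted median of the ratios b_i/a_i with weights |a_i|.
    """
    pts = sorted((bi / ai, abs(ai)) for ai, bi in zip(a, b) if ai != 0)
    half = sum(w for _, w in pts) / 2
    acc = 0.0
    for r, w in pts:
        acc += w
        if acc >= half:
            return r
    raise ValueError("all coefficients are zero")

def l1_cost(a, b, x):
    return sum(abs(ai * x - bi) for ai, bi in zip(a, b))
```

For example, `l1_regression_1d([1, 1, 1], [1, 2, 10])` returns the median $2$, which is robust to the outlier $10$ in a way the least-squares solution is not.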
{\bf $\ell_1$ Column subset selection for general matrices.}
The second subroutine needed is an $\ell_1$-low rank approximation solver for general input matrices, though we allow a large approximation ratio.
We use the algorithm proposed by \cite{cgklrw17} for this purpose.
In particular, given an $n\times d$ $(d\leq n)$ matrix $M$ and a rank parameter $k$, the algorithm can output a small set $S\subset[n]$ with size at most $O(k\log n)$, such that
\begin{align*}
\min_{X\in\mathbb{R}^{|S|\times d}}\|M_SX-M\|_1\leq O(k\log k)\cdot \min_{\rank(B)\leq k}\|M-B\|_1.
\end{align*}
Furthermore, the running time is at most $\widetilde{O}(n^2)+n\cdot\poly(k\log n)$, and the success probability is at least $0.999$. Now we can present our algorithm, Algorithm~\ref{alg:dis_l1_algorithm}.
\begin{algorithm}
\begin{algorithmic}[1]\caption{$\ell_1$-Low Rank Approximation with Input Assumption}\label{alg:dis_l1_algorithm}
\Procedure{\textsc{L1NoisyLowRankApprox}}{$A\in\mathbb{R}^{n\times n},k,\varepsilon$} \Comment{Theorem~\ref{thm:dis_l1_algorithm}}
\State Sample a set $I$ from ${[n] \choose s}$ uniformly at random, where $s=\poly(k/\varepsilon).$ \label{sta:uniform_sample}
\State Solve the approximate $\ell_1$-regression problem $\min_{x\in\mathbb{R}^{|I|}}\|A_I x - A_i\|_1$ for each $i\in[n]$, and let $v_i$ be the estimated regression cost.
\label{sta:l1_regression}
\State Compute the set $T=\{i\in[n]\mid v_i\text{ is one of the top }l\text{ largest values among } v_1,v_2,\cdots,v_n\}$, where $l=n/\poly(k/\varepsilon)$.
\State Solve $\ell_1$-column subset selection for $A_T.$ Let the solution be $A_Q$.
\State Solve the approximate $\ell_1$-regression problem $\min_{X\in\mathbb{R}^{(|I|+|Q|)\times n}}\|A_{(I\cup Q)}X - A\|_1$, and let $\hat{X}$ be the solution.
\State Return $A_{(I\cup Q)}$ and $\hat{X}$. \Comment{$A_{(I\cup Q)}\hat{X}$ is a good low rank approximation to $A$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
{\bf Running time.} Uniformly sampling a set $I$ can be done in $\poly(k/\varepsilon)$ time.
According to our $\ell_1$-regression subroutine, solving $\min_x \|A_Ix-A_i\|_1$ for all $i\in[n]$ can be finished in $\widetilde{O}(n^2)+n\cdot\poly(k\log(n)/\varepsilon)$ time.
We only need sorting to compute the set $T$ which takes $O(n\log n)$ time.
By our second subroutine, the $\ell_1$-column subset selection for $A_T$ will take $\widetilde{O}(n^2)+n\cdot\poly(k\log n)$.
The last step only needs an $\ell_1$-regression solver, which takes $\widetilde{O}(n^2)+n\cdot\poly(k\log(n)/\varepsilon)$ time.
Thus, the overall running time is $\widetilde{O}(n^2)+n\cdot\poly(k\log(n)/\varepsilon)$.
The remaining parts in this section will focus on analyzing the correctness of the algorithm.
\subsection{Properties of the Noise Matrix}
Recall that the input matrix $A\in\mathbb{R}^{n\times n}$ can be decomposed as $A^*+\Delta$, where $A^*$ is the ground truth, and $\Delta$ is a random noise matrix.
In particular, $A^*$ is an arbitrary rank-$k$ matrix, and $\Delta$ is a random matrix where each entry is an i.i.d. sample drawn from an unknown symmetric distribution.
The only assumption on $\Delta$ is that each entry $\Delta_{i,j}$ satisfies $\E[|\Delta_{i,j}|^p]=O(\E[|\Delta_{i,j}|]^p)$ for some constant $p>1$, i.e., the $p$-th moment of the noise distribution is bounded.
Without loss of generality, we will suppose $\E[|\Delta_{i,j}|]=1$, $\E[|\Delta_{i,j}|^p]=O(1)$, and $p\in(1,2)$ throughout the paper. In this section, we will present some key properties of the noise matrix.
The following lemma provides a lower bound on $\|\Delta\|_1$.
Once we have such a lower bound, we can focus on finding a solution whose approximation cost is close to that lower bound.
\begin{lemma}[Lower bound on the noise matrix]\label{lem:lower_bound_on_cost}
Let $\Delta\in\mathbb{R}^{n\times n}$ be a random matrix where $\Delta_{i,j}$ are i.i.d. samples drawn from a symmetric distribution.
Suppose $\E[|\Delta_{i,j}|]=1$ and $\E[|\Delta_{i,j}|^p]=O(1)$ for some constant $p\in(1,2)$.
Then, $\forall \epsilon\in(0,1)$ which satisfies $1/\epsilon=n^{o(1)},$ we have
\begin{align*}
\Pr \left[ \|\Delta\|_1\geq(1-\epsilon)n^2 \right]\geq 1-e^{-\Theta(n)}.
\end{align*}
\end{lemma}
The next lemma shows the main reason why we are able to get a small fitting cost when running regression.
Consider a toy example.
Suppose we have a target number $a\in\mathbb{R}$, and another $t$ numbers $a+g_1,a+g_2,\cdots,a+g_t\in\mathbb{R}$, where $g_i$ are i.i.d. samples drawn from the standard Gaussian distribution $N(0,1)$.
If we use $a+g_i$ to fit $a$, then the expected cost is $\E[|a+g_i-a|]=\E[|g_i|]=\sqrt{2/\pi}$.
However, if we use the average of $a+g_1,a+g_2,\cdots,a+g_t$ to fit $a$, then the expected cost is $\E[|\sum_{i=1}^t g_i|/t]$.
Since the $g_i$ are independent, $\sum_{i=1}^t g_i$ is a random Gaussian variable with variance $t$, which means that the above expected cost is $\sqrt{2/\pi }/\sqrt{t}$.
Thus the fitting cost is reduced by a factor $\sqrt{t}$.
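This toy calculation is easy to check empirically; the following Monte-Carlo sketch (illustrative only) estimates $\E[|\frac{1}{t}\sum_{i=1}^t g_i|]$ and compares it against $\sqrt{2/\pi}/\sqrt{t}$:

```python
import math
import random

def mean_abs_of_average(t, trials=20000, seed=0):
    # Monte-Carlo estimate of E[|(g_1 + ... + g_t) / t|] for i.i.d. N(0, 1).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += abs(sum(rng.gauss(0.0, 1.0) for _ in range(t)) / t)
    return total / trials

# Theory: E[|average|] = sqrt(2/pi) / sqrt(t), i.e. a factor sqrt(t)
# smaller than the single-sample cost E[|g_1|] = sqrt(2/pi).
for t in (1, 4, 16):
    theory = math.sqrt(2.0 / math.pi) / math.sqrt(t)
    assert abs(mean_abs_of_average(t) - theory) < 0.1 * theory
```

The heavy-tailed case of Lemma~\ref{lem:averaging_works} replaces the $\sqrt{t}$ gain by a $t^{1-1/p}$ gain, but the mechanism is the same.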
By generalizing the above argument, we obtain the following lemma.
\begin{lemma}[Averaging reduces the noise]\label{lem:averaging_works}
Let $\Delta_1,\Delta_2,\cdots,\Delta_t\in\mathbb{R}^n$ be $t$ random vectors. The $\Delta_{i,j}$ are i.i.d. symmetric random variables with $\E[|\Delta_{i,j}|]=1$ and $\E[|\Delta_{i,j}|^p]=O(1)$ for some constant $p\in(1,2)$. Let $\alpha_1,\alpha_2,\cdots,\alpha_t\in[-1,1]$ be $t$ real numbers. Conditioned on $\forall i\in[n],j\in[t],|\Delta_{i,j}|\leq n^{1/2+1/(2p)},$ with probability at least $1-2^{-n^{\Theta(1)}},$
\begin{align*}
\left\|\sum_{i=1}^t\alpha_i\Delta_i \right\|_1\leq O(t^{1/p}n).
\end{align*}
\end{lemma}
The above lemma needs a condition that each entry in the noise column should not be too large.
Fortunately, we can show that most of the (noise) columns do not have any large entry.
\begin{lemma}[Only a small number of columns have large entries]\label{lem:hard_one_is_small}
Let $\Delta\in\mathbb{R}^{n\times n}$ be a random matrix where the $\Delta_{i,j}$ are i.i.d. symmetric random variables with $\E[|\Delta_{i,j}|]=1$ and $\E[|\Delta_{i,j}|^p]=O(1)$ for some constant $p\in(1,2)$. Let
\begin{align*}
H=\{j\in[n] ~\big|~ \exists i\in[n],|\Delta_{i,j}|>n^{1/2+1/(2p)}\}.
\end{align*}
Then with probability at least $0.999,$ $|H|\leq O(n^{1-(p-1)/2}).$
\end{lemma}
The following lemma shows that any small subset of the columns of the noise matrix $\Delta$ cannot contribute too much to the overall error.
By combining with the previous lemma, the entrywise $\ell_1$ cost of all columns containing large entries can be bounded.
\begin{lemma}\label{lem:remaining_is_small}
Let $\Delta\in\mathbb{R}^{n\times n}$ be a random matrix where $\Delta_{i,j}$ are i.i.d. symmetric random variables with $\E[|\Delta_{i,j}|]=1$ and $\E[|\Delta_{i,j}|^p]=O(1)$ for some constant $p\in(1,2)$. Let $\epsilon\in(0,1)$ satisfy $1/\epsilon=n^{o(1)}.$ Let $r\geq (1/\epsilon)^{1+1/(p-1)}.$ Then, with probability at least $.999,$ $\forall S\subset [n]$ with $|S|\leq n/r,$ $\sum_{j\in S}\|\Delta_j\|_1 = O(\epsilon n^2).$
\end{lemma}
We say a (noise) column is good if it does not have a large entry.
We can show that, with high probability, the entry-wise $\ell_1$ cost of a good (noise) column is small.
\begin{lemma}[Cost of good noise columns]\label{lem:easy_one_is_concentrated}
Let $\Delta\in\mathbb{R}^{n}$ be a random vector where $\Delta_{i}$ are i.i.d. symmetric random variables with $\E[|\Delta_{i}|]=1$ and $\E[|\Delta_{i}|^p]=O(1)$ for some constant $p\in(1,2)$. Let $\epsilon\in(0,1)$ satisfy $1/\epsilon=n^{o(1)}.$ If $\forall i\in[n],|\Delta_i|\leq n^{1/2+1/(2p)},$ then with probability at least $1-2^{-n^{\Theta(1)}},$ $\|\Delta\|_1\leq (1+\epsilon)n.$
\end{lemma}
\subsection{Definition of Tuples and Cores}
In this section, we provide some basic definitions, e.g., of a tuple, a good tuple, the core of a tuple, and a coefficients tuple. These definitions will be heavily used later when we analyze the correctness of our algorithm.
Before we present the definitions, we introduce a notion $R_{A^*}(S)$.
Given a matrix $A^* \in \mathbb{R}^{n_1 \times n_2}$,
for a set $S \subseteq [n_2]$, we define
\begin{align*}
R_{A^*}(S) := \arg\max_{P:P \subseteq S} \left\{ \left|\det\left( (A^*)_P^Q \right)\right| ~\bigg|~ |P| = |Q|= \rank (A^*_S), Q \subseteq [n_1] \right\},
\end{align*}
where for a square matrix $C$, $\det(C)$ denotes the determinant of $C$.
The maximum above is taken over both $P$ and $Q$, while $R_{A^*}(S)$ records only the maximizing set $P$.
By Cramer's rule, if we use the columns of $A^*$ with index in the set $R_{A^*}(S)$ to fit any column of $A^*$ with index in the set $S$, the absolute value of any fitting coefficient will be at most $1$.
The use of Cramer's rule is as follows.
Consider a rank $k$ matrix $M\in\mathbb{R}^{n\times(k+1)}$.
Let $P\subseteq[k+1],Q\subseteq [n],|P|=|Q|=k$ be such that $|\det(M_P^Q)|$ is maximized.
Since $M$ has rank $k$, we know $\det(M_P^Q)\not= 0$ and thus the columns of $M_P$ are independent.
Let $i\in [k+1]\setminus P$.
Then the linear equation $M_Px=M_i$ is feasible and
there is a unique solution $x$.
Furthermore, by Cramer's rule $x_j={\det(M^Q_{[k+1]\setminus\{j\}})}/{\det(M_P^Q)}$.
Since $|\det(M_P^Q)|\geq |\det(M^Q_{[k+1]\setminus\{j\}})|$, we have $\|x\|_{\infty}\leq 1$.
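The max-determinant argument above can be checked mechanically. The sketch below (exhaustive search over $(P,Q)$ with exact rational arithmetic; only practical for tiny matrices, and not the procedure used in the analysis) recovers fitting coefficients with $\|x\|_\infty\leq 1$:

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    # Determinant by Laplace expansion; fine for the tiny matrices used here.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def submatrix(M, rows, cols):
    return [[M[r][c] for c in cols] for r in rows]

def cramer_coefficients(M, k):
    """For an n x (k+1) integer matrix M of rank k, pick (P, Q) maximizing
    |det(M_P^Q)| and express the remaining column i as M_P x via Cramer's
    rule; the max-determinant choice forces |x_j| <= 1 for every j."""
    n, cols = len(M), len(M[0])
    P, Q, best = None, None, 0
    for p in combinations(range(cols), k):
        for q in combinations(range(n), k):
            d = abs(det(submatrix(M, q, p)))
            if d > best:
                P, Q, best = p, q, d
    i = next(j for j in range(cols) if j not in P)
    dP = det(submatrix(M, Q, P))
    x = []
    for j in range(k):
        replaced = list(P)
        replaced[j] = i  # replace the j-th column of M_P by column i
        x.append(Fraction(det(submatrix(M, Q, replaced)), dP))
    return P, i, x

# Rank-2 example: the third column equals 2*(col 0) + 3*(col 1).
M = [[1, 0, 2], [0, 1, 3], [0, 0, 0]]
P, i, x = cramer_coefficients(M, 2)
assert max(abs(v) for v in x) <= 1
assert all(sum(x[j] * M[r][P[j]] for j in range(2)) == M[r][i] for r in range(3))
```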
Small fitting coefficients are good since they will not increase the noise by too much.
For example, suppose $A^*_i=A^*_S x$ and $\|x\|_{\infty}\leq 1$, i.e., the $i$-th column can be fit by the columns with indices in the set $S$ and the fitting coefficients $x\in\mathbb{R}^{|S|}$ are small.
If we use the noisy columns of $A^*_S+\Delta_S$ to fit the noisy column $A^*_i+\Delta_i$, then the fitting cost is at most $\|(A^*_S+\Delta_S)x-(A^*_i+\Delta_i)\|_1\leq \|\Delta_i\|_1+\|\Delta_S x\|_1$.
Since $\|x\|_{\infty}\leq 1$, it is possible to give a good upper bound for $\|\Delta_S x\|_1$.
\begin{definition}[Tuple]\label{def:tuple}
A $(q,t,n)$-tuple is defined to be
$
(S_1,S_2,\cdots,S_t,i),
$
where $\forall j\in[t],S_j\subset [n]$ with $|S_j|=q.$ Let $S=\bigcup_{j=1}^t S_j$.
We require $|S|=qt,$ i.e., $S_1,S_2,\cdots,S_t$ are pairwise disjoint. Furthermore, $i\in[n]$ and $i\not\in S.$ For simplicity, we use $(S_{[t]},i)$ to denote $(S_1,S_2,\cdots,S_t,i)$.
\end{definition}
We next provide the definition of a good tuple.
\begin{definition}[Good tuple]\label{def:core_and_good_tuple}
Given a $\rank$-$k$ matrix $A^*\in\mathbb{R}^{n\times n},$ an $(A^*,q,t,\alpha)$-good tuple is a $(q,t,n)$-tuple $
(S_{[t]},i)
$ which satisfies
\begin{align*}
|\{j\in[t]\mid i\not\in R_{A^*}(S_j\cup \{i\})\}|\geq \alpha \cdot t.
\end{align*}
\end{definition}
We need the definition of the core of a tuple.
\begin{definition}[Core of a tuple]\label{def:core_tuple}
The core of $(S_{[t]},i)$ is defined to be the set
\begin{align*}
\{j\in[t]\mid i\not\in R_{A^*}(S_j\cup \{i\})\}.
\end{align*}
\end{definition}
We define a coefficients tuple as follows.
\begin{definition}[Coefficients tuple]\label{def:coefficients_tuple}
Given a $\rank$-$k$ matrix $A^*\in\mathbb{R}^{n\times n},$ let $(S_{[t]},i)$ be an $(A^*,q,t,\alpha)$-good tuple. Let $C$ be the core of $(S_{[t]},i)$. A coefficients tuple corresponding to $(S_{[t]},i)$ is defined to be $(x_1,x_2,\cdots,x_t)$ where $\forall j\in[t],x_j \in \mathbb{R}^q.$ The vector $x_j \in \mathbb{R}^q$ satisfies:
$x_j = 0$ if $j \in [t] \backslash C$, while
$A_{S_j}^* x_j = A_i^*$ and $\| x_j \|_{\infty}\leq 1$, if $j \in C$.
To guarantee the coefficients tuple is unique, we restrict each vector $x_j \in \mathbb{R}^q$ to be one that has the minimum lexicographic order.
\end{definition}
\subsection{Properties of a Good Tuple and a Coefficients Tuple}
Consider a good tuple $(S_1,S_2,\cdots,S_t,i)$.
By the definition of a good tuple, the size of the core $C$ of the tuple is large.
For each $j\in C$, the coefficients $x_j$ of using $A^*_{S_j}$ to fit $A^*_i$ should have absolute value at most $1$.
Now consider the noisy setting.
As discussed in the previous section, using $A_{S_j}$ to fit $A_i$ has cost at most $\|\Delta_i\|_1+\|\Delta_{S_j}x_j\|_1$.
Although $\|\Delta_{S_j}x_j\|_1$ has a good upper bound, it is not small enough.
To further reduce the $\ell_1$ fitting cost, we can now apply the averaging argument (Lemma~\ref{lem:averaging_works}) over all the fitting choices corresponding to $C$.
Formally, we have the following lemma.
\begin{lemma}[Good tuples imply low fitting cost]\label{lem:good_tuple_low_cost}
Suppose we are given a matrix $A\in\mathbb{R}^{n\times n}$ which satisfies $A=A^*+\Delta$, where $A^*\in\mathbb{R}^{n\times n}$ has rank $k$. Here $\Delta\in\mathbb{R}^{n\times n}$ is a random matrix where $\Delta_{i,j}$ are i.i.d. symmetric random variables with $\E[|\Delta_{i,j}|]=1$ and $\E[|\Delta_{i,j}|^p]=O(1)$ for some constant $p\in(1,2)$. Let $H \subset [n]$ be defined as follows:
\begin{align*}
H= \left\{ j\in[n] ~\bigg|~ \exists i\in[n],|\Delta_{i,j}|>n^{1/2+1/(2p)} \right\}.
\end{align*}
Let $q,t\leq n^{o(1)}.$ Then, with probability at least $1-2^{-n^{\Theta(1)}},$ for all $(A^*,q,t,1/2)$-good tuples $(S_1,S_2,\cdots,S_t,i)$ which satisfy $H\cap\left(\bigcup_{j=1}^t S_j\right)=\emptyset,$ we have
\begin{align*}
\min_{y\in\mathbb{R}^{qt}}\left\|A_{\{\bigcup_{j=1}^t S_j\}}y-A_i\right\|_1\leq\left\|\frac{1}{|C|}\sum_{j=1}^t A_{S_j}x_j-A_i\right\|_1\leq \|\Delta_i\|_1+O\left((q^{1/p}/t^{1-1/p})\cdot n\right),
\end{align*}
where $C$ is the core of $(S_1,S_2,\cdots,S_t,i)$, and $(x_1,x_2,\cdots,x_t)$ is the coefficients tuple corresponding to $(S_1,S_2,\cdots,S_t,i).$
\end{lemma}
We next show that if we choose columns randomly, it is easy to find a good tuple.
\begin{lemma}\label{lem:label_uniform_samples}
Given a rank-$k$ matrix $A^*\in\mathbb{R}^{n\times n}$, let $q>10k,t>0.$ Let $I=\{i_1,i_2,\cdots,i_{qt+1}\}$ be a subset drawn uniformly at random from ${[n]\choose qt+1}$. Let $\pi$ be a uniformly random permutation of $[qt+1]$. $\forall j\in[t],$ let
\begin{align*}
S_j= \left\{ i_{\pi((j-1)q+1)},i_{\pi((j-1)q+2)},\cdots,i_{\pi((j-1)q+q)} \right\}.
\end{align*}
We use $i$ to denote $i_{\pi(qt+1)}$. With probability $\geq 1-2k/q$, $(S_1,S_2,\cdots,S_t,i)$ is an $(A^*,q,t,1/2)-$good tuple.
\end{lemma}
Lemma~\ref{lem:label_uniform_samples} implies that if we randomly choose $S_1,S_2,\cdots,S_t$, then with high probability, there are many choices of $i\in[n]$, such that $(S_1,S_2,\cdots,S_t,i)$ is a good tuple.
Precisely, we can show the following.
\begin{lemma}\label{lem:easy_to_find_good_tuple}
Given a rank-$k$ matrix $A^*\in\mathbb{R}^{n\times n}$, let $q>10k,t>0.$ Let $I=\{i_1,i_2,\cdots,i_{qt}\}$ be a subset drawn uniformly at random from ${[n]\choose qt}$. Let $\pi$ be a uniformly random permutation of $[qt]$. $\forall j\in[t],$ we define $S_j$ as follows:
\begin{align*}
S_j= \left\{ i_{\pi((j-1)q+1)}, i_{\pi((j-1)q+2)}, \cdots, i_{\pi((j-1)q+q)} \right\}.
\end{align*}
Then with probability at least $2k/q$,
\begin{align*}
\left|\left\{i\in[n]\setminus I ~\big|~ (S_1,S_2,\cdots,S_t,i)\mathrm{~is~an~}(A^*,q,t,1/2)\mathrm{-good~tuple~}\right\}\right|\geq (1-4k/q)(n-qt).
\end{align*}
\end{lemma}
\subsection{Main Result}
Now we are able to put all ingredients together to prove our main theorem,
Theorem~\ref{thm:dis_l1_algorithm}.
\begin{theorem}[Formal version of Theorem~\ref{thm:intro_l1_algorithm}]\label{thm:dis_l1_algorithm}
Suppose we are given a matrix $A= A^* +\Delta \in \mathbb{R}^{n\times n}$, where $\rank(A^*)=k$ for $k=n^{o(1)}$, and $\Delta$ is a random matrix for which the $\Delta_{i,j}$ are i.i.d. symmetric random variables with $\E[|\Delta_{i,j}|]=1$ and $\E[|\Delta_{i,j}|^p]=O(1)$ for some constant $p\in(1,2)$. Let $\epsilon\in (0,1/2)$ satisfy $1/\epsilon=n^{o(1)}.$ There is an $\widetilde{O}(n^2+n\poly(k/\epsilon))$ time algorithm (Algorithm~\ref{alg:dis_l1_algorithm}) which can output a subset $S\subset [n]$ with $|S|\leq \poly(k/\epsilon)+O(k\log n)$ for which
\begin{align*}
\min_{X\in \mathbb{R}^{|S| \times n}} \| A_S X - A \|_1 \leq (1 + \epsilon ) \| \Delta \|_1,
\end{align*}
holds with probability at least $99/100$.
\end{theorem}
\begin{proof}
We discussed the running time at the beginning of Section~\ref{sec:dis}. Next, we turn to correctness. Let $q=\Omega\left(\frac{k(k\log k)^{1+\frac{1}{p-1}}}{\epsilon^{1+\frac{1}{p-1}}}\right),$ $t=\frac{q^{\frac{1}{p-1}}}{\epsilon^{1+\frac{1}{p-1}}}.$ Let $r=\Theta(q/k).$ Let
\begin{align*}
I_1= \left\{ i_1^{(1)},i_2^{(1)},\cdots,i_{qt}^{(1)} \right\}, I_2= \left\{i_1^{(2)},i_2^{(2)},\cdots,i_{qt}^{(2)} \right\},\cdots,I_r= \left\{i_1^{(r)},i_2^{(r)},\cdots,i_{qt}^{(r)} \right\},
\end{align*}
be $r$ independent subsets drawn uniformly at random from ${[n]\choose qt}$.
Let $I=\bigcup_{s\in[r]}I_s$, which is the same as that in Algorithm~\ref{alg:dis_l1_algorithm}.
Let $\pi_1,\pi_2,\cdots,\pi_{r}$ be $r$ independent random permutations of $qt$ elements. Due to Lemma~\ref{lem:easy_to_find_good_tuple} and a Chernoff bound, with probability at least $.999,$ $\exists s\in[r]$,
\begin{align*}
\left|\left\{i\in[n]\setminus I_s ~\big|~ (S_1,S_2,\cdots,S_t,i)\text{~is an $(A^*,q,t,1/2)-$good tuple~}\right\}\right|\geq (1-4k/q)(n-qt)
\end{align*}
where
\begin{align*}
S_j= \left\{ i^{(s)}_{\pi_s((j-1)q+1)},i^{(s)}_{\pi_s((j-1)q+2)},\cdots,i^{(s)}_{\pi_s((j-1)q+q)} \right\}, \forall j \in [t].
\end{align*}
Let set $H \subset [n]$ be defined as follows:
\begin{align*}
H=\{j\in[n]\mid \exists i\in[n],|\Delta_{i,j}|>n^{1/2+1/(2p)}\}.
\end{align*}
Then due to Lemma~\ref{lem:hard_one_is_small}, with probability at least $0.999,$ $|H|\leq O(n^{1-(p-1)/2}).$ Thus, for $j\in[r],$ the probability that $H\cap I_j\not=\emptyset$ is at most $O(qt\cdot n^{1-(p-1)/2}/(n-qt))=1/n^{\Omega(1)}.$ By taking a union bound over all $j\in[r],$ with probability at least $1-1/n^{\Omega(1)},$ $\forall j\in[r],I_j\cap H=\emptyset.$ Thus, we can condition on $I_s\cap H=\emptyset.$
Due to Lemma~\ref{lem:good_tuple_low_cost} and $q^{1/p}/t^{1-1/p}=\epsilon,$
\begin{align*}
\left|\left\{i\in[n]\setminus I_s ~\bigg|~ \min_{y\in\mathbb{R}^{qt}}\|A_{I_s}y-A_i\|_1\leq \|\Delta_i\|_1+O(\epsilon n)\right\}\right|\geq (1-4k/q)(n-qt).
\end{align*}
Due to Lemma~\ref{lem:easy_one_is_concentrated} and a union bound over all $i\in[n]\setminus H$, with probability at least $.999,$ $\forall i\not\in H,\|\Delta_i\|_1\leq (1+\epsilon)n$. Thus,
\begin{align*}
\left|\left\{i\in[n]\setminus I_s ~\bigg|~ \min_{y\in\mathbb{R}^{qt}}\|A_{I_s}y-A_i\|_1\leq (1+O(\epsilon)) n\right\}\right|\geq (1-4k/q)(n-qt)-|H|.
\end{align*}
Let
\begin{align*}
T'=[n]\setminus\left\{i\in[n] ~\bigg|~ \min_{y\in\mathbb{R}^{qt}}\|A_{I_s}y-A_i\|_1\leq (1+O(\epsilon)) n\right\}.
\end{align*}
Then $|T'|\leq O(kn/q+n^{1-(p-1)/2})=O(kn/q)=O((\epsilon/(k\log k))^{1+1/(p-1)}n).$
By our selection of $T$ in Algorithm~\ref{alg:dis_l1_algorithm}, $T'$ should be a subset of $T$.
Due to Lemma~\ref{lem:remaining_is_small}, with probability at least $.999,$ $\|\Delta_{T}\|_1\leq O(\epsilon n^2/(k\log k))$. Our second subroutine, mentioned at the beginning of Section~\ref{sec:dis}, can then find a set $Q\subset[n]$ with $|Q|=O(k\log n)$ such that
$\min_{X\in\mathbb{R}^{|Q|\times |T|}}\|A_QX-A_T\|_1\leq O(k\log k)\|\Delta_T\|_1\leq O(\epsilon n^2).$
Thus, we have
$\min_{X\in\mathbb{R}^{(|Q|+q\cdot t\cdot r)\times n}}\|A_{(Q\cup I)}X-A\|_1
\leq
\min_{X_1\in\mathbb{R}^{(q\cdot t\cdot r)\times n}}\|A_{ I}X_1-A_{[n]\setminus T}\|_1 + \min_{X_2\in\mathbb{R}^{|Q|\times n}}\|A_{ Q}X_2-A_{T}\|_1
\leq (1+O(\epsilon)) n^2.$
Due to Lemma~\ref{lem:lower_bound_on_cost}, with probability at least $0.999,$ $\|\Delta\|_1\geq (1-\epsilon)n^2,$ and thus
$\min_{X\in\mathbb{R}^{(|Q|+q\cdot t\cdot r)\times n}}\|A_{(Q\cup I ) }X-A\|_1\leq (1+O(\epsilon)) \|\Delta\|_1.$
\end{proof}
\section{Experiments}\label{sec:exp}
The take-home message from our theoretical analysis is that although the noise distribution may be heavy-tailed, as long as its $p$-th moment exists for some $p>1$, averaging can reduce the noise.
In the spirit of averaging, we found that taking a median works a bit better in practice.
Inspired by our theoretical analysis, we propose a simple heuristic algorithm (Algorithm~\ref{alg:heu}) which can output a rank-$k$ solution. We tested Algorithm~\ref{alg:heu} on both synthetic and real datasets.
\begin{algorithm}
\begin{algorithmic}[1]\caption{Median Heuristic}\label{alg:heu}
\Procedure{\textsc{L1NoisyLowRankApproxHeu}}{$A\in\mathbb{R}^{n\times d},k\geq 1$}
\State Sample a set $I=\{i_1,i_2,\cdots,i_{sk}\}$ from ${[n] \choose sk}$ uniformly at random.
\State Compute $B\in\mathbb{R}^{n\times k}$ s.t., for $t\in[n],q\in[k],$ $B_{t,q}=\mathrm{median}(A_{t,i_{s(q-1)+1}},\cdots,A_{t,i_{sq}})$.
\State Solve $\min_{X\in\mathbb{R}^{k\times d}}\|BX-A\|_1$ and let the solution be $X^*$. Output $BX^*$.
\EndProcedure
\end{algorithmic}
\end{algorithm}
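A minimal NumPy sketch of the median heuristic follows. The function and parameter names are ours, and one step is simplified: the exact $\ell_1$ regression $\min_X\|BX-A\|_1$ called for by the algorithm is replaced here by iteratively reweighted least squares, which only approximates it.

```python
import numpy as np

def l1_low_rank_median(A, k, s=5, irls_iters=25, seed=0):
    """Sketch of the median heuristic: groups of s sampled columns are
    collapsed to their entrywise medians to form B, then min_X ||BX - A||_1
    is solved approximately by iteratively reweighted least squares
    (the algorithm itself calls for an exact l1-regression solver)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    cols = rng.choice(d, size=s * k, replace=False)
    # B_{t,q} = median(A_{t, i_{s(q-1)+1}}, ..., A_{t, i_{sq}})
    B = np.median(A[:, cols].reshape(n, k, s), axis=2)
    X = np.linalg.lstsq(B, A, rcond=None)[0]       # least-squares warm start
    for _ in range(irls_iters):
        W = np.sqrt(1.0 / np.maximum(np.abs(B @ X - A), 1e-8))
        for j in range(d):                         # weighted LS, per column
            X[:, j] = np.linalg.lstsq(B * W[:, j, None],
                                      A[:, j] * W[:, j], rcond=None)[0]
    return B @ X

# on a noiseless rank-1 matrix with positive entries, the group medians
# already lie in the column span, so the fit is essentially exact
A = np.outer(np.linspace(1.0, 2.0, 30), np.linspace(1.0, 2.0, 40))
approx = l1_low_rank_median(A, k=1)
```

For positive-entry rank-$1$ inputs every column is a positive multiple of the same vector, so the entrywise median of any column group stays in the column span; this is why the toy run above recovers $A$ almost exactly.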
{\bf Datasets.}
For each rank-$k$ experiment, we chose a high rank matrix $\hat{A}\in\mathbb{R}^{n\times d}$, applied top-$k$ SVD to $\hat{A}$ and obtained a rank-$k$ matrix $A^*$ as our ground truth matrix.
For our synthetic data experiments, the matrix $\hat{A}\in\mathbb{R}^{500\times 500}$ was generated at random, where each entry was drawn uniformly from $\{0,1,\cdots,9\}$. For real datasets, we chose \textit{isolet}\footnote{\url{https://archive.ics.uci.edu/ml/datasets/isolet}} $(617 \times 1559)$ and \textit{mfeat}\footnote{\url{https://archive.ics.uci.edu/ml/datasets/Multiple+Features}} $(651\times 2000)$ as $\hat{A}$~\cite{an07}. We tested two different noise distributions. One is the standard L\'evy $1.1$-stable distribution~\cite{m60}. The other is constructed from the standard Cauchy distribution: to draw a sample from the constructed distribution, we draw a sample from the Cauchy distribution, keep the sign unchanged, and take the $\frac{1}{1.1}$-th power of the absolute value.
Notice that both distributions have bounded $1.1$-th moment, but do not have a $p$-th moment for any $p>1.1$.
To construct the noise matrix $\Delta\in\mathbb{R}^{n\times d}$, we drew a matrix $\hat{\Delta}$ where each entry is an i.i.d. sample from one of the two noise distributions, and then scaled the noise: $\Delta = \hat{\Delta}\cdot \frac{\|A^*\|_1}{20\cdot n\cdot d}$.
We set $A=A^*+\Delta$ as the input.
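The data-generation recipe above can be sketched in a few lines of NumPy. This is a sketch under our own naming choices (\texttt{make\_input} and its parameters are ours); it covers only the Cauchy-based noise — the $1.1$-stable case would require a stable-law sampler and is omitted.

```python
import numpy as np

def make_input(A_hat, k, seed=0):
    """Builds A = A* + Delta: A* is the top-k SVD truncation of A_hat,
    and Delta is sign-preserving (1/1.1)-th-power Cauchy noise,
    rescaled by ||A*||_1 / (20 n d) as described in the text."""
    n, d = A_hat.shape
    U, S, Vt = np.linalg.svd(A_hat, full_matrices=False)
    A_star = (U[:, :k] * S[:k]) @ Vt[:k]             # rank-k ground truth
    C = np.random.default_rng(seed).standard_cauchy((n, d))
    Delta_hat = np.sign(C) * np.abs(C) ** (1 / 1.1)  # bounded 1.1-th moment
    Delta = Delta_hat * np.abs(A_star).sum() / (20 * n * d)
    return A_star + Delta, A_star, Delta

# small demonstration on a toy matrix
A_hat = (np.arange(12.0).reshape(3, 4) % 5) + 1
A, A_star, Delta = make_input(A_hat, k=2)
```

The constructed noise has a bounded $1.1$-th moment but infinite variance, matching the assumptions of the analysis.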
{\bf Methodologies.} We compare Algorithm~\ref{alg:heu} with SVD, $\poly(k,\log n)$-approximate entrywise $\ell_1$ low rank approximation~\cite{swz17}, and uniform $k$-column subset sampling~\cite{cgklrw17}\footnote{We chose to compare with \cite{swz17,cgklrw17} due to their theoretical guarantees. Though the uniform $k$-column subset sampling described in the experiments of \cite{cgklrw17} is a heuristic algorithm, it is inspired by their theoretical algorithm.}.
For Algorithm~\ref{alg:heu}, we set $s=\min(50,\lfloor n/k\rfloor)$.
For all algorithms, we repeated the experiment the same number of times and compared the best solution obtained by each algorithm.
We report the approximation ratio $\|B-A\|_1/\|\Delta\|_1$ for each algorithm, where $B\in\mathbb{R}^{n\times d}$ is the output rank-$k$ matrix. The results are shown in Figure~\ref{fig:results}. As shown in the figure, Algorithm~\ref{alg:heu} outperformed all of the other algorithms.
\begin{figure*}
\centering
\bgroup
\setlength\tabcolsep{-0.1cm}
\begin{tabular}{ccc}
\textsc{synthetic} & \textsc{isolet} & \textsc{mfeat} \\
\includegraphics[width=0.36\textwidth]{synthetic_stable}&
\includegraphics[width=0.36\textwidth]{isolet_stable}&
\includegraphics[width=0.36\textwidth]{mfeat_stable}\\
\includegraphics[width=0.36\textwidth]{synthetic_cauchy}&
\includegraphics[width=0.36\textwidth]{isolet_cauchy}&
\includegraphics[width=0.36\textwidth]{mfeat_cauchy}\\
\end{tabular}
\egroup
\caption{\small \textbf{Empirical results.} The noise distributions of the experiments in the first row are from a $1.1$-stable distribution. The noise distributions corresponding to the second row are the $1.1$-th root of a Cauchy distribution.
The blue, red, orange and yellow bars denote SVD, the entrywise $\ell_1$-norm low rank algorithm of \cite{swz17}, the uniform $k$-column subset sampling algorithm of \cite{cgklrw17}, and Algorithm~\ref{alg:heu}, respectively.}\label{fig:results}
\end{figure*}
\section{Introduction}
Numerical linear algebra algorithms are fundamental building blocks in many machine learning and data mining tasks.
A well-studied problem is low rank matrix approximation.
The most common version of the problem is also known as Principal Component Analysis (PCA), in which the goal is to find a low rank matrix to approximate a given matrix such that the Frobenius norm of the error is minimized.
The optimal solution of this objective can be obtained via the singular value decomposition (SVD).
Hence, the problem can be solved in polynomial time.
If approximate solutions are allowed, then the running time can be made almost linear in the number of non-zero entries of the given matrix \cite{s06,cw13,mm13,nn13,bdn15,c16}.
An important variant of the PCA problem is the entrywise $\ell_1$-norm low rank matrix approximation problem.
In this problem, instead of minimizing the Frobenius norm of the error, we seek to minimize the $\ell_1$-norm of the error.
In particular, given an $n\times n$ input matrix $A$, and a rank parameter $k$, we want to find a matrix $B$ with rank at most $k$ such that $\|A-B\|_1$ is minimized, where for a matrix $C$, $\|C\|_1$ is defined to be $\sum_{i,j} |C_{i,j}|$.
There are several reasons for using the $\ell_1$-norm as the error measure.
For example, solutions with respect to the $\ell_1$-norm loss are usually more robust than solutions with Frobenius norm loss \cite{h64,clmw11}.
Further, the $\ell_1$-norm loss is often used as a relaxation of the $\ell_0$-loss, which has wide applications including sparse recovery, matrix completion, and robust PCA; see e.g., \cite{xcs10,clmw11}.
Although a number of algorithms have been proposed for the $\ell_1$-norm loss~\cite{kk03,kk05,kimefficient,kwak08,zlsyO12,bj12,bd13,bdb13,meng2013cyclic,mkp13,mkp14,mkcp16,park2016iteratively}, the problem is known to be NP-hard~\cite{gv15}.
The first $\ell_1$-low rank approximation with provable guarantees was proposed by \cite{swz17}.
To cope with NP-hardness, the authors gave a solution with a $\poly(k\log n)$-approximation ratio, i.e., their algorithm outputs a rank-$k$ matrix $B'\in\mathbb{R}^{n\times n}$ for which
\begin{align}
\|A-B'\|_1\leq \alpha\cdot\min_{\rank-k\ B}\|A-B\|_1
\end{align}
for $\alpha=\poly(k\log n)$.
The approximation ratio $\alpha$ was further improved to $O(k\log k)$ by allowing $B'$ to have a slightly larger $k'=O(k\log n)$ rank \cite{cgklrw17}. Such $B'$ with larger rank is referred to as a bicriteria solution.
However, in high precision applications, such approximation factors are too large.
A natural question is if one can compute a $(1+\varepsilon)$-approximate solution efficiently for $\ell_1$-norm low rank approximation.
In fact, a $(1+\varepsilon)$-approximation algorithm was given in \cite{bbbklw17}, but the running time of their algorithm is a prohibitive $n^{\poly(k/\varepsilon)}$.
Unfortunately, \cite{bbbklw17} also shows that, in the worst case, a $2^{k^{\Omega(1)}}$ running time is necessary for any constant-factor approximation, assuming a standard conjecture in complexity theory.
\subsection{Notation}
To describe our results, let us first introduce some notation.
We will use $[n]$ to denote the set $\{1,2,\cdots,n\}$.
We use $A_i$ to denote the $i^{\text{th}}$ column of $A$.
We use $A^j$ to denote the $j^{\text{th}}$ row of $A$.
Let $Q\subseteq[n]$.
We use $A_Q$ to denote the matrix which is comprised of the columns of $A$ with column indices in $Q$.
Similarly, we use $A^Q$ to denote the matrix which is comprised of the rows of $A$ with row indices in $Q$.
We use $[n]\choose t$ to denote the set of all the size-$t$ subsets of $[n]$.
Let $\| A\|_F$ denote the Frobenius norm of a matrix $A$, i.e., $\|A\|_F$ is the square root of the sum of squares of all the entries in $A$. For $1\leq p<2$, we use $\| A \|_p$ to denote the entry-wise $\ell_p$-norm of a matrix $A$, i.e., $\|A\|_p$ is the $p$-th root of the sum of $p$-th powers of the absolute values of the entries of $A$. $\|A\|_1$ is an important special case of $\|A\|_p$, which corresponds to the sum of absolute values of the entries in $A$.
A random variable $X$ has the Cauchy distribution if its probability density function is $f(z) = \frac{1}{\pi (1+z^2)}$.
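The entrywise norms above are straightforward to compute; a small sketch (the helper name is ours):

```python
import numpy as np

def entrywise_norm(A, p):
    """Entrywise l_p norm: the p-th root of the sum of p-th powers of the
    absolute values of the entries. For p = 2 this is the Frobenius norm;
    for p = 1 it is the sum of absolute values of the entries."""
    return float((np.abs(A) ** p).sum() ** (1.0 / p))

A = np.array([[3.0, -4.0], [0.0, 0.0]])
print(entrywise_norm(A, 1))   # 7.0, the sum of absolute values
```

Note that for $p=2$ the entrywise norm coincides with $\|A\|_F$, which is why the text treats $\|A\|_F$ separately only for emphasis.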
\subsection{Our Results}
We propose an efficient bicriteria $(1+\epsilon)$-approximate column subset selection algorithm for the $\ell_1$-norm.
We bypass the running time lower bound mentioned above by making a mild assumption on the input data, and also show that our assumption is necessary in a certain sense.
Our main algorithmic result is described as follows.
\begin{theorem}[Informal version of Theorem~\ref{thm:dis_l1_algorithm}]\label{thm:intro_l1_algorithm}
Suppose we are given a matrix $A= A^* +\Delta \in \mathbb{R}^{n\times n}$, where $\rank(A^*)=k$ for $k=n^{o(1)}$, and $\Delta$ is a random matrix for which the $\Delta_{i,j}$ are i.i.d. symmetric random variables with $\E[|\Delta_{i,j}|^p]=O(\E[|\Delta_{i,j}|]^p)$ for some constant $p>1$. Let $\epsilon\in (0,1/2)$ satisfy $1/\epsilon=n^{o(1)}.$ There is an $\widetilde{O}(n^2+n\poly(k/\varepsilon))$\footnote{We use the notation $\widetilde{O}(f):= O(f\cdot \log^{O(1)} f)$.} time algorithm (Algorithm~\ref{alg:dis_l1_algorithm}) which can output a subset $S\subseteq [n]$ with $|S|\leq \poly(k/\epsilon)+O(k\log n)$ for which
\begin{align*}
\min_{X\in \mathbb{R}^{|S| \times n}} \| A_S X - A \|_1 \leq (1 + \epsilon ) \| \Delta \|_1,
\end{align*}
holds with probability at least $99/100$.
\end{theorem}
Note the running time in Theorem \ref{thm:intro_l1_algorithm} is nearly linear in the number of non-zero entries of $A$,
since for an $n \times n$ matrix with i.i.d. noise drawn from any continuous distribution, the number of non-zero entries of $A$ will be $n^2$ with probability $1$.
We also show the moment assumption of Theorem \ref{thm:intro_l1_algorithm} is
necessary in the following precise sense.
\begin{theorem}[Hardness, informal version of Theorem~\ref{thm:dis_l1_hardness}]\label{thm:intro_l1_hardness}
Let $n>0$ be sufficiently large. Let $A=\eta\cdot \mathbf{1}\cdot\mathbf{1}^\top+\Delta\in\mathbb{R}^{n\times n}$ be a random matrix where $\eta=n^{c_0}$ for some sufficiently large constant $c_0,$ $\mathbf{1}\in\mathbb{R}^n$ is the all-ones vector, and $\forall i,j\in[n],\Delta_{i,j}\sim C(0,1)$ are i.i.d. standard Cauchy random variables. Let $r=n^{o(1)}.$ Then with probability at least $1-O(1/\log\log n),$ $\forall S\subseteq [n]$ with $|S|=r,$
\begin{align*}
\min_{X\in\mathbb{R}^{r\times n}} \|A_SX-A\|_1\geq 1.002\|\Delta\|_1.
\end{align*}
\end{theorem}
\subsection{Our Techniques}
For an overview of our hardness result, we refer readers to the supplementary material, namely, Appendix~\ref{sec:hard}.
In the following, we will outline the main techniques used in our algorithm.
\paragraph{$(1+\epsilon)$-Approximate $\ell_1$-Low Rank Approximation.}
We make the following distributional assumption on the input matrix $A\in\mathbb{R}^{n\times n}$: namely, $A = A^*+ \Delta$ where $A^*$ is an arbitrary
rank-$k$ matrix and the entries of $\Delta$ are i.i.d. from any symmetric
distribution with $\E[|\Delta_{i,j}|] = 1$
and $\E[|\Delta_{i,j}|^p] = O(1)$ for any real number $p$ strictly greater than $1$, e.g., $p = 1.000001$ would
suffice. Note that such an assumption is mild compared to typical noise models which require the noise
be Gaussian or have bounded variance; in our case the random variables may even be heavy-tailed with infinite variance.
In this setting we show it is possible to obtain a subset of $\poly(k(\epsilon^{-1} + \log n))$ columns spanning a
$(1+\epsilon)$-approximation. This provably overcomes the column subset selection lower bound of \cite{swz17}
which shows for entrywise $\ell_1$-low rank approximation that there are matrices for which any subset of $\poly(k)$
columns spans at best a $k^{\Omega(1)}$-approximation.
Consider the following algorithm: sample $\poly(k/\epsilon)$ columns of $A$, and try to cover as many of the remaining columns as possible. Here, by covering a column $i$, we mean that if $A_I$ is the subset of columns sampled, then $\min_y \|A_I y - A_i\|_1 \leq (1+O(\epsilon))n$. The reason for this notion of covering is that we are able to show in Lemma \ref{lem:lower_bound_on_cost} that in this noise model, $\|\Delta\|_1 \geq (1-\epsilon)n^2$ w.h.p., and so if we could cover every column $i$, our overall cost would be $(1+O(\epsilon))n^2$, which would give a $(1+O(\epsilon))$-approximation to the overall cost.
We will not be able to cover all columns, unfortunately, with our initial sample of $\poly(k/\epsilon)$ columns of $A$. Instead, though, we will show that we will be able to cover all but a set $T$ of $\epsilon n/(k \log k)$ of the columns. Fortunately, we show in Lemma \ref{lem:remaining_is_small} another property of the noise matrix $\Delta$ is that {\it all} subsets $S$ of columns of size at most $n/r$, for $r \geq (1/\gamma)^{1+1/(p-1)}$ satisfy $\sum_{j \in S}\|\Delta_j\|_1 = O(\gamma n^2)$. Thus, for the above set $T$ that we do not cover, we can apply this lemma to it with $\gamma = \epsilon/(k \log k)$, and then we know that $\sum_{j \in T}\|\Delta_j\|_1 = O(\epsilon n^2/(k \log k))$, which then enables us to run a previous $\tilde{O}(k)$-approximate $\ell_1$ low rank approximation algorithm \cite{cgklrw17} on the set $T$, which will only incur total cost $O(\epsilon n^2)$, and since by Lemma \ref{lem:lower_bound_on_cost} above the overall cost is at least $(1-\epsilon)n^2$, we can still obtain a $(1+O(\epsilon))$-approximation overall.
The main missing piece of the algorithm to describe is why we are able to cover all but a small fraction of the columns. One thing to note is that our noise distribution may not have a finite variance, and consequently, there {\it can} be very large entries $\Delta_{i,j}$ in some columns. In Lemma \ref{lem:hard_one_is_small}, we show the number of columns in $\Delta$ for which there exists an entry larger than $n^{1/2+1/(2p)}$ in magnitude is $O(n^{(2-p)/2})$, which since $p > 1$ is a constant bounded away from $1$, is sublinear. Let us call this set with entries larger than $n^{1/2+1/(2p)}$ in magnitude the set $H$ of ``heavy" columns; we will not make any guarantees about $H$, rather, we will stuff it into the small set $T$ of columns above on which we will run our earlier $O(k \log k)$-approximation.
For the remaining, non-heavy columns, which constitute almost all of our columns, we show in Lemma \ref{lem:easy_one_is_concentrated} that $\|\Delta_i\|_1 \leq (1+\epsilon) n$ w.h.p. The reason this is important is that recall to cover some column $i$ by a sample set $I$ of columns, we need $\min_y \|A_I y - A_i\|_1 \leq (1+O(\epsilon))n$. It turns out, as we now explain, that we will get $\min_y \|A_Iy-A_i\|_1 \leq \|\Delta_i\|_1 + e_i$, where $e_i$ is a quantity which we can control and make $O(\epsilon n)$ by increasing our sample size $I$. Consequently, since $\|\Delta_i\|_1 \leq (1+\epsilon) n$, overall we will have $\min_y \|A_Iy-A_i\|_1 \leq (1+O(\epsilon))n$, which means that $i$ will be covered. We now explain what $e_i$ is, and why $\min_y \|A_Iy-A_i\|_1 \leq \|\Delta_i\|_1 + e_i$.
Towards this end, we first explain a key insight in this model. Since the $p$-th moment exists for some real number $p > 1$ (e.g., $p = 1.000001$ suffices), {\it averaging} helps reduce the noise of fitting a column $A_i$ by subsets of other columns. Namely, we show in Lemma \ref{lem:averaging_works} that for any $t$ non-heavy columns $\Delta_{i_1}, \ldots, \Delta_{i_t}$ of $\Delta$, and any coefficients $\alpha_1, \alpha_2, \ldots, \alpha_t \in [-1,1]$, $\|\sum_{j=1}^t \alpha_j \Delta_{i_j}\|_1 = O(t^{1/p} n)$, that is, since the individual coordinates of the $\Delta_{i_j}$ are zero-mean random variables, their sum {\it concentrates} as we add up more columns. We do not need bounded variance for this property.
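This averaging effect is easy to observe numerically. The sketch below (dimensions and seed are our choices) draws heavy-tailed noise columns with a bounded $1.1$-th moment and compares the $\ell_1$ norm of their average with the average of their individual $\ell_1$ norms:

```python
import numpy as np

# t heavy-tailed noise columns: symmetric, bounded 1.1-th moment,
# infinite variance (sign-preserving (1/1.1)-th power of a Cauchy).
rng = np.random.default_rng(0)
n, t = 2000, 64
C = rng.standard_cauchy((n, t))
Delta = np.sign(C) * np.abs(C) ** (1 / 1.1)

avg_norm = np.abs(Delta.mean(axis=1)).sum()       # ||(1/t) sum_j Delta_j||_1
mean_col_norm = np.abs(Delta).sum(axis=0).mean()  # (1/t) sum_j ||Delta_j||_1
print(avg_norm / mean_col_norm)                   # strictly below 1
```

The triangle inequality alone only guarantees a ratio of at most $1$; the cancellation of zero-mean coordinates is what pushes it further down, roughly like $t^{-(1-1/p)}$.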
How can we use this averaging property for subset selection? The idea is, instead of sampling a single subset $I$ of $O(k)$ columns and trying to cover each remaining column with this subset as shown in \cite{cgklrw17}, we will sample multiple independent subsets $I_1, I_2, \ldots, I_t$. Each set has size $\poly(k/\varepsilon)$ and we will sample at most $\poly(k/\varepsilon)$ subsets.
By an argument similar to that of \cite{cgklrw17},
for any given column index $i \in [n]$ and for most of these subsets $I_j$, we have that $A^*_i/\|\Delta_i\|_1$ can be expressed as a linear combination of columns $A^*_{\ell}/\|\Delta_{\ell}\|_1, \ell \in I_j,$ via coefficients of absolute value at most $1$. Note that this is only true for most $i$ and most $j$; we develop terminology for this in Definitions \ref{def:tuple}, \ref{def:core_and_good_tuple}, \ref{def:core_tuple}, and \ref{def:coefficients_tuple}, referring to what we call a {\it good core}. We quantify what we mean by most $i$ and most $j$ having this property in Lemmas \ref{lem:label_uniform_samples} and \ref{lem:easy_to_find_good_tuple}.
The key lemma that drives the analysis is Lemma \ref{lem:good_tuple_low_cost}, which shows that $\min_y \|A_Iy-A_i\|_1 \leq \|\Delta_i\|_1 + e_i$, where $e_i = O((q^{1/p}/t^{1-1/p}) n)$, $q$ is the size of each $I_j$, and $t$ is the number of different $I_j$. We need $q$ to be at least $k$, just as before, so that we can be guaranteed that when we adjoin a column index $i$ to $I_j$,
there is some positive probability that $A^*_i/\|\Delta_i\|_1$ can be expressed as a linear combination of columns $A^*_{\ell}/\|\Delta_{\ell}\|_1, \ell \in I_j$, with coefficients of absolute value at most $1$. What is different in our noise model though is the division by $t^{1-1/p}$. Since $p > 1$, if we set $t$ to be a large enough $\poly(k/\epsilon)$, then $e_i = O(\epsilon n)$, and then we will have covered $A_i$, as desired. This captures the main property that averaging the linear combinations for expressing $A^*_i/\|\Delta_i\|_1$ using different subsets $I_j$ gives us better and better approximations to $A^*_i/\|\Delta_i\|_1$. Of course we need to ensure several properties, such as not sampling a heavy column (the averaging in Lemma \ref{lem:averaging_works} does not apply in that case) and ensuring that most of the $I_j$ have small-coefficient linear combinations expressing $A^*_i/\|\Delta_i\|_1$. This is handled in our main theorem, Theorem \ref{thm:dis_l1_algorithm}.
\section{Introduction}
\begin{figure}
\centering
\imagepdftex{0.6\textwidth}{figures/all.pdf_tex}
\caption{Underlying graphs induced by the complete set of paths in the quarter-plane for some well known directed lattice path classes and their vertically constrained\xspace extensions.}
\label{fig:summary}
\end{figure}
Lattice path enumeration is a classic part of combinatorial enumeration, having been explored for over a century.
Research in this area has resulted in well known classes of lattice paths such as those named after Dyck, Motzkin, Schr\"{o}der and Delannoy~\cite{donaghey,Stanley2012, barcuccimotzkin}. Many variations on lattice paths have been examined, and we refer the reader to the survey by Humphreys~\cite{humphreys} for additional historical information.
In the process of creating a mathematical model for a fiber art form known as bobbin lace~\cite{irvine}, we encountered lattice paths very similar to the Motzkin paths but with the addition of vertical steps. An example of one of these partially directed (weakly monotonic in the $x$ direction and self avoiding) paths is shown in Figure \ref{fig:laceconnection}. In recent work, Dziemia\'nczuk~\cite{dziemianczuk} examined lattice paths with a vertical step in the down direction $(\downarrow)$. When only down steps are allowed, the number of paths between the origin and a termination point remains finite. With the addition of both up and down step vectors, an infinite number of paths are possible. Traditionally in such cases the number of paths is restricted to a finite value by considering only paths with a fixed number of steps (see for example the ``slow walk'' example of Niederhausen and Heinrich~\cite{niederhausen}) or by constraining the walk to a certain region of space such as a wedge or a slit~\cite{van2008partially}. In contrast, for the self-avoiding lattice paths encountered in bobbin lace, the constraint that limits the number of walks ending at a specified point to a finite number is that vertical steps $(\uparrow, \downarrow)$ cannot be consecutive.
The significance of this constraint on vertical steps can be understood by looking at the role of the paths in bobbin lace tessellations. A drawing embedded on the torus of a 2-regular directed graph $D(V,E)$ can be used to model the placement of threads in a lace tessellation~\cite{irvine}. In order to describe a workable lace pattern, $D(V,E)$ must have the property that it can be partitioned into a set of osculating paths: paths that meet at a vertex but do not cross transversely. Furthermore, at every vertex of $D(V,E)$, two paths must meet. It is not possible to combine two weakly monotonic lattice paths in this manner if one or both of the paths contains consecutive vertical steps.
\begin{figure}[H]
\centering
\imagepdftex{0.8\textwidth}{figures/laceconnection.pdf_tex}
\caption{Bobbin lace pattern: a) Diagram of threads. b) Unit of pattern. c) Periodic tiling of pattern. d) Execution of lace pattern in cotton thread.}
\label{fig:laceconnection}
\end{figure}
The purpose of this study is to examine vertically constrained\xspace lattice paths on the discrete Cartesian plane $\mathbb{Z} \times \mathbb{Z}$ constructed from the finite set of step vectors $\SVMW=\{\step{1,0}, \step{1,1},$ $\step{1,-1}, \step{0,1}, \step{0,-1} \}$. Here $\SVMW$ is the union of the set of step vectors used in Motzkin paths with the set of up and down step vectors. We use the term \emph{vertically constrained\xspace} to indicate paths that do not have consecutive vertical steps. That is, the following sequences are not permitted: $\uparrow\uparrow$, $\downarrow\downarrow$, as well as the self intersecting sequences $\uparrow\downarrow$ and $\downarrow\uparrow$.
In Section \ref{sec:recurrence} we describe the recurrence relation for vertically constrained\xspace $\SVMW$ paths. In Section \ref{sec:bijection} we identify a bijection between vertically constrained\xspace $\SVMW$ paths and Motzkin paths. In Section \ref{sec:genfunction} we derive a generating function for $\SVMW$ paths in the half-plane making use of the recurrence relation, and in the quarter-plane using the bijection. The same approach is applied in Section \ref{sec:extensions} to extend both the Dyck and Schr\"{o}der classes of paths. We conclude with suggestions for further exploration of this family of partially directed, vertically constrained lattice paths.
\section{Recurrence Relations}
\label{sec:recurrence}
The lattice paths discussed in this paper start at the origin and travel in a weakly monotonic manner to the right. A point that is a horizontal distance $n$ and vertical distance $m$ from the origin will be represented as $(n,m)$.
As shown in Table \ref{table:cases}, the lattice paths can be divided into four classes using two boolean properties:
1) Can the path extend below the $x$-axis? That is, is the path restricted to the upper right quarter-plane (Q) or can it travel anywhere in the right half-plane (H)?
2) Can the leading step be a vertical step vector? In other words, is the leading step, $e_1$, chosen from the set $\SVMW$ (a leading vertical step is allowed) or from the set $\SVMS$ (leading step is not vertical)?
The importance of this second property will become clear when we describe a bijection with Motzkin paths in Section \ref{sec:bijection}.
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Name & Restricted to Q & Leading Step & Description \\ \hline \hline
$\GMWSet$ & False & $e_1 \in \SVMW$ & Leading vertical steps allowed \\ \hline
$\GMWRSet$ & False & $e_1 \in \SVMS$ & No leading vertical steps \\ \hline
$\MWSet$ & True & $e_1 \in \SVMW$ & Restricted to quarter-plane, \\
& & & leading vertical steps allowed \\ \hline
$\MWRSet$ & True & $e_1 \in \SVMS$ & Restricted to quarter-plane, \\
& & & no leading vertical steps \\ \hline
\end{tabular}
\end{center}
\caption{Four classes of vertically constrained\xspace lattice paths}
\label{table:cases}
\end{table}
We define $\GMWSet$ to be the set of partially directed, vertically constrained\xspace lattice paths in the half-plane created from step vectors $\SVMW$. Similarly, we denote the number of paths of type $\GMWSet$ starting at point $(0,0)$ and terminating at point $(n, m)$ by $\GMWCount(n,m)$.
For a proposition $P$, let $\I{P}$ be 1 if $P$ is true and 0 otherwise.
\begin{lemma}
$\GMWCount(n,m)$ satisfies the recurrence relation
\begin{align}
\GMWCount(0,m) &= \I{ m \in \{-1,0,1\} }\label{eq:recurrenceAInitial} \\
\GMWCount(n,m) &= \GMWCount(n{-}1,m{+}2) + 2\GMWCount(n{-}1,m{+}1) + 3\GMWCount(n{-}1,m) \nonumber \\
&\quad{+} \, 2\GMWCount(n{-}1,m{-}1) + \GMWCount(n{-}1,m{-}2). \label{eq:recurrenceA}
\end{align}
\label{theo:recurrenceA}
\end{lemma}
\begin{proof}
Equation \eqref{eq:recurrenceA} is obtained by examining all of the ways in which a $\GMWSet$ path can terminate at a lattice point as illustrated in Figure \ref{fig:validarcs}.
\end{proof}
Such a recursive counting formula is common in lattice path enumeration and adjacent topics. See, for instance, the work of Donaghey and Shapiro~\cite{donaghey} on the Motzkin triangle.
\begin{figure}[H]
\centering
\imagepdftex{0.4\textwidth}{figures/motzkinrecurrence.pdf_tex}
\caption{All possible ways in which a vertically constrained path with steps from the set $\SVMW$ can terminate at a lattice point (indicated by a red dot).}
\label{fig:validarcs}
\end{figure}
In a similar manner, we define $\GMWRSet$ as the set of partially directed, vertically constrained\xspace lattice paths in the half-plane created from step vectors $\SVMW$ in which the leading step is restricted to $\SVMS$. The number of paths that travel from the origin to the point $(n,m)$ is represented by $\GMWRCount(n,m)$. The initial condition is $\GMWRCount(0,m) = \I{m = 0}$. Equation \eqref{eq:recurrenceA} can be rewritten for $\GMWRCount(n,m)$ by substituting $\GMWRCount(a,b)$ for $\GMWCount(a,b)$.
Paths that are restricted to the quarter-plane require special handling for positions close to the $x$-axis. We define $\MWSet$ as the set of partially directed, vertically constrained\xspace lattice paths restricted to the quarter-plane, created from step vectors $\SVMW$. The count of $\MWSet$ paths terminating at $(n,m)$ is given by $\MWCount(n,m)$.
\begin{lemma}
The recurrence relation for $\MWCount(n,m)$ is
\begin{align}
\MWCount(0,m) &= \I{m \in \{0,1\}} \\
\MWCount(n,0) &= \MWCount(n{-}1,2) + 2\MWCount(n{-}1,1) + 2\MWCount(n{-}1,0) \label{eq:r1} \\
\MWCount(n,1) &= \MWCount(n{-}1,3) + 2\MWCount(n{-}1,2) + 3\MWCount(n{-}1,1) + 2\MWCount(n{-}1,0) \label{eq:r2}\\
\MWCount(n,m) &= \MWCount(n{-}1,m{+}2) + 2\MWCount(n{-}1,m{+}1) + 3\MWCount(n{-}1,m) \nonumber \\
&\quad {+} \, 2\MWCount(n{-}1,m{-}1) + \MWCount(n{-}1,m{-}2). \label{eq:r3}
\end{align}
\label{theo:recurrenceAQ}
\end{lemma}
This follows again from considering all ways in which a walk can end at a specific point.
The recurrence relation for the set of $\MWRSet$ paths, in which the leading step is restricted to $\SVMS$, has initial condition $\MWRCount(0,m) = \I{m = 0}$. Equations \eqref{eq:r1},~\eqref{eq:r2} and \eqref{eq:r3} can be rewritten for $\MWRSet$ paths by substituting $\MWRCount(a,b)$ for $\MWCount(a,b)$.
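The recurrences above can be checked numerically against direct dynamic programming over the $\SVMS$ step set; the bisection identities they should match are recorded in Table \ref{table:summary2}. A Python sketch (function names are ours):

```python
def motzkin_dp(length, half_plane=False):
    """Counts of Sigma_MS paths (steps (1,1), (1,0), (1,-1)) of the given
    length, as a dict height -> count; heights below 0 need half_plane."""
    counts = {0: 1}
    for _ in range(length):
        nxt = {}
        for h, c in counts.items():
            for dh in (-1, 0, 1):
                if half_plane or h + dh >= 0:
                    nxt[h + dh] = nxt.get(h + dh, 0) + c
        counts = nxt
    return counts

def mw_count(n, restricted=False):
    """Quarter-plane recurrence; restricted=True uses the
    no-leading-vertical initial condition [m = 0]."""
    f = {0: 1} if restricted else {0: 1, 1: 1}
    for _ in range(n):
        g = {}
        for m in range(max(f) + 3):
            if m == 0:
                v = f.get(2, 0) + 2 * f.get(1, 0) + 2 * f.get(0, 0)
            elif m == 1:
                v = (f.get(3, 0) + 2 * f.get(2, 0)
                     + 3 * f.get(1, 0) + 2 * f.get(0, 0))
            else:
                v = (f.get(m + 2, 0) + 2 * f.get(m + 1, 0) + 3 * f.get(m, 0)
                     + 2 * f.get(m - 1, 0) + f.get(m - 2, 0))
            if v:
                g[m] = v
        f = g
    return f

def gmw_count(n, restricted=False):
    """Half-plane recurrence; initial condition [m in {-1,0,1}] (or [m = 0])."""
    f = {0: 1} if restricted else {-1: 1, 0: 1, 1: 1}
    for _ in range(n):
        g = {}
        for m in range(min(f) - 2, max(f) + 3):
            v = (f.get(m + 2, 0) + 2 * f.get(m + 1, 0) + 3 * f.get(m, 0)
                 + 2 * f.get(m - 1, 0) + f.get(m - 2, 0))
            if v:
                g[m] = v
        f = g
    return f
```

Running the recurrences for small $n$ reproduces the odd-index Motzkin numbers $1, 4, 21, 127$ at height $0$, as the bijection of the next section predicts.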
\section{An Explicit Bijection}
\label{sec:bijection}
\begin{table}[H]
\begin{center}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{|l|rl|l|}
\hline
\multicolumn{1}{|c|}{Motzkin family} & \multicolumn{2}{|c|}{Relationship} & \multicolumn{1}{|c|}{OEIS~\cite{oeis}} \\ \hline \hline
\multirow{2}{4cm}{Motzkin Paths} & $\MWRCount(n,0)$ &$= \MSCount(2n)$ & Bisection of A001006 (also A026945) \\ \cline{2-4}
& $\MWCount(n,0)$ &$= \MSCount(2n{+}1)$ & Bisection of A001006 (also A099250) \\ \hline
\multirow{2}{4cm}{Grand Motzkin Paths} & $\GMWRCount(n,0)$ &$= \GMSCount(2n)$ & Bisection of A002426 (also A082758)\\ \cline{2-4}
& $\GMWCount(n,0)$ &$= \GMSCount(2n{+}1)$ & Bisection of A002426 \\ \hline
\multirow{2}{4cm}{Motzkin Triangle} & $\MWRCount(n,m)$ &$= \MT(2n, m)$ & Row Bisection of A026300 \\ \cline{2-4}
& $\MWCount(n,m)$ &$= \MT(2n{+}1,m)$ & Row Bisection of A026300 \\ \hline
\multirow{2}{4cm}{Triangle of Trinomial Coefficients} &$\GMWRCount(n,m)$ &$= \TT(2n, m)$ & Row Bisection of A027907 \\ \cline{2-4}
&$\GMWCount(n,m)$ &$= \TT(2n{+}1, m)$ & Row Bisection of A027907 \\ \hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption{Correlation between vertically constrained\xspace paths and their Motzkin counterparts. Bisection means mapping to alternating (even or odd) indices or rows of a sequence.}
\label{table:summary2}
\end{table}
Table \ref{table:summary2} lists some well-known lattice path models. Both Motzkin and Grand Motzkin paths use the $\SVMS$ step vectors and start and end on the $x$-axis; the distinction is that Motzkin paths are restricted to the quarter-plane while Grand Motzkin paths are restricted to the half-plane. The Motzkin triangle enumerates $\SVMS$ paths in the quarter-plane that terminate at a more general point $(x,y)$ and the Triangle of Trinomial Coefficients does the same for $\SVMS$ paths in the half-plane.
As indicated in Table \ref{table:summary2}, there is a relationship between the vertically constrained\xspace $\SVMW$ lattice paths and a bisection of the Motzkin paths. We prove this relationship by providing an explicit bijection.
We define the mapping $\phi$, illustrated in Figure \ref{fig:map1}, in which the following substitutions are performed on alternating steps of the Motzkin path:
\begin{eqnarray*}
\phi :& \step{1,1} & \to \step{0,1}\\
\phi :& \step{1,-1} & \to \step{0,-1}\\
\phi :& \step{1,0} & \to \text{remove step}
\end{eqnarray*}
\begin{figure}[H]
\centering
\imagepdftex{0.9\textwidth}{figures/mapping.pdf_tex}
\caption{Conversion of a Motzkin path ($n=7$) to an $\MWSet$ path ($n=3$). Grey steps are modified using the substitution rules of $\phi$. Black steps are unchanged.}
\label{fig:map1}
\end{figure}
\begin{lemma}
The mapping $\phi$ applied to alternating steps starting at the \emph{first} step of a Motzkin (Grand Motzkin) path terminating at $(2n+1,m)$ produces a vertically constrained\xspace path of type $\MWSet$ $(\GMWSet)$ terminating at $(n,m)$.
\label{theo:phi}
\end{lemma}
\begin{proof}
Step vectors in the Motzkin paths belong to $\SVMS$ which is a subset of $\SVMW$. All substituted step vectors in the mapping $\phi$ belong to $\SVMW$, therefore the resulting path contains only step vectors from $\SVMW$. Only alternating steps of the Motzkin path are modified, therefore consecutive vertical step vectors cannot be introduced by $\phi$. The resulting path is thus a vertically constrained\xspace path of type $\MWSet$ $(\GMWSet)$. At each substitution, the horizontal length of the path is decreased by $1$. The mapping starts with a path of length $2n+1$ and horizontally collapses $n+1$ steps resulting in a path of length $n$. Only horizontal displacement is affected by the substitutions of $\phi$ leaving the height of vertices along the path unchanged. Therefore, an operand restricted to one quadrant produces a result restricted to one quadrant and an operand that terminates at height $m$ produces a result that terminates at height $m$.
\end{proof}
\begin{lemma}
The mapping $\phi$ applied to alternating steps starting at the \emph{second} step of a Motzkin (Grand Motzkin) path terminating at $(2n,m)$ produces a path of type $\MWRSet$ $(\GMWRSet)$ (restricted leading step) terminating at $(n,m)$.
\label{theo:phi2}
\end{lemma}
\begin{proof}
The same arguments used in Lemma \ref{theo:phi} apply. Because the mapping starts with the second step of the Motzkin path, the first step of the resulting path belongs to $\SVMS$ and $n$ steps are collapsed horizontally.
\end{proof}
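The substitution rules of $\phi$ are easy to express programmatically. The following Python sketch (an illustration only, not part of the formal development; all names are ours) applies $\phi$ to the alternating steps of a step list and reproduces the length and height bookkeeping used in the two lemmas above.

```python
# Sketch of the mapping phi: substitute the 1st, 3rd, 5th, ... steps of a
# Motzkin path (step vectors (dx, dy)), turning diagonal steps into vertical
# steps and deleting flat steps; the alternate steps are copied unchanged.

def phi(path):
    out = []
    for i, step in enumerate(path):
        if i % 2 == 0:                    # 1st, 3rd, ... step of the path
            if step == (1, 1):
                out.append((0, 1))        # up step -> vertical up
            elif step == (1, -1):
                out.append((0, -1))       # down step -> vertical down
            elif step == (1, 0):
                pass                      # flat step is removed
            else:
                raise ValueError("not a Motzkin step")
        else:
            out.append(step)              # alternate steps are unchanged
    return out

# A Motzkin path of length 2n+1 = 7 collapses to a path of horizontal
# length n = 3 with no two consecutive vertical steps.
motzkin = [(1, 1), (1, 0), (1, 1), (1, -1), (1, 0), (1, -1), (1, 0)]
image = phi(motzkin)
print(image)   # [(0, 1), (1, 0), (0, 1), (1, -1), (1, -1)]
```

The horizontal length of the image is $\sum dx = 3$ and its terminating height $\sum dy = 0$ agrees with that of the original path, as the first lemma asserts.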
The inverse relationship, taking a vertically constrained\xspace $\SVMW$ path to a Motzkin path, can be represented by the mapping $\psi$, illustrated in Figure \ref{fig:unmap1}, in which the following substitutions are performed on the vertical steps at each integral horizontal distance from the origin.
\begin{eqnarray*}
\psi : & \step{0,1} & \to \step{1,1} \\
\psi : & \step{0,-1} & \to \step{1,-1} \\
\psi : & \text{no vertical step} & \to \text{ insert } \step{1,0}
\end{eqnarray*}
\begin{figure}[H]
\centering
\imagepdftex{0.8\textwidth}{figures/unmapping.pdf_tex}
\caption{Conversion of a $\GMWSet$ path ($n=3$) to a Grand Motzkin path ($n=7$).}
\label{fig:unmap1}
\end{figure}
\begin{lemma}
The mapping $\psi$ applied at a horizontal distance $i$ from the origin for all $0 \le i \le n$ to a vertically constrained\xspace $\MWSet$ $(\GMWSet)$ path terminating at $(n,m)$ produces a Motzkin (Grand Motzkin) path terminating at $(2n+1,m)$.
\label{theo:psiA}
\end{lemma}
\begin{proof}
The mapping $\psi$ replaces all vertical steps with steps belonging to $\SVMS$. The resulting path is therefore a Motzkin or Grand Motzkin path. The substitutions do not affect the height of vertices along the path, therefore an input restricted to a quadrant produces an output restricted to a quadrant and an input that terminates at height $m$ produces an output that terminates at height $m$. At each substitution, the horizontal length of the path is increased by $1$. Since $n+1$ substitutions are performed on an operand of length $n$, the resulting path has length $2n+1$.
\end{proof}
\begin{lemma}
The mapping $\psi$ applied at a horizontal distance $i$ from the origin for all $0 < i \le n$ to a vertically constrained\xspace $\MWRSet$ $(\GMWRSet)$ path terminating at $(n,m)$ produces a Motzkin (Grand Motzkin) path terminating at $(2n,m)$.
\end{lemma}
\begin{proof}
The proof for this lemma is similar to the proof for Lemma \ref{theo:psiA}. Because the substitutions begin at $i=1$, the number of substitutions is $n$ and the length of the resulting path is $2n$.
\end{proof}
\begin{theorem}
The map $\phi$ is a bijection from Motzkin (Grand Motzkin) paths of length $2n+1$ to vertically constrained\xspace paths of type $\MWSet$ $(\GMWSet)$ of length $n$, with inverse $\psi$.
\label{theo:bijectionA}
\end{theorem}
\begin{proof}
First we prove that the mapping $\phi$ is an injection. The input for $\phi$ is a Motzkin (Grand Motzkin) path. Through $\phi$, each step vector in the input is either copied directly to the output or uniquely mapped to a step vector that is not an element of $\SVMS$. Distinct inputs therefore cannot generate the same destination path, so $\phi$ is an injection from the Motzkin (Grand Motzkin) paths to the vertically constrained\xspace $\MWSet$ $(\GMWSet)$ paths.
Similarly, $\psi$ is an injection from the vertically constrained\xspace $\MWSet$ $(\GMWSet)$ paths to the Motzkin (Grand Motzkin) paths of length $2n+1$. As both families are finite, the existence of injections in both directions implies that $\phi$ is a bijection, and it is clear that its inverse is $\psi$.
\end{proof}
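As a finite sanity check of the quarter-plane case of this theorem, the following Python sketch (ours, purely illustrative) compares a brute-force count of nonnegative Motzkin-step paths of length $2n+1$ with any endpoint against a count of vertically constrained\xspace $\SVMW$ paths of horizontal length $n$.

```python
# Compare: nonnegative Motzkin-step paths of length 2n+1 (any endpoint)
# versus vertically constrained SVMW paths of horizontal length n
# (steps (1,±1), (1,0), (0,±1); heights >= 0; no consecutive verticals).
from itertools import product

MOTZKIN = [(1, 1), (1, -1), (1, 0)]
SVMW = MOTZKIN + [(0, 1), (0, -1)]

def count_motzkin_prefixes(length):
    total = 0
    for steps in product(MOTZKIN, repeat=length):
        y, ok = 0, True
        for _, dy in steps:
            y += dy
            if y < 0:
                ok = False
                break
        total += ok
    return total

def count_mw(n):
    def dfs(x, y, last_vertical):
        total = 1 if x == n else 0       # every sequence reaching x == n counts
        for dx, dy in SVMW:
            if dx == 0 and last_vertical:
                continue                 # no two consecutive vertical steps
            if x + dx > n or y + dy < 0:
                continue
            total += dfs(x + dx, y + dy, dx == 0)
        return total
    return dfs(0, 0, False)

for n in range(4):
    print(n, count_motzkin_prefixes(2 * n + 1), count_mw(n))
```

For $n=0$ both counts are $2$, and the counts agree for every small $n$ tested.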
\begin{theorem}
The map $\phi$ is a bijection from Motzkin (Grand Motzkin) paths of length $2n$ to vertically constrained\xspace paths of type $\MWRSet$ $(\GMWRSet)$ and length $n$, with inverse $\psi$.
\end{theorem}
\begin{proof}
This can be proven using the same approach as Theorem \ref{theo:bijectionA}.
\end{proof}
Figure \ref{fig:exampleMotzkin} shows that vertically constrained\xspace paths with a restricted leading step are a subset of the unrestricted vertically constrained\xspace paths of the same horizontal length. By the bijective mapping $\phi$, this relationship also exists between even and odd length Motzkin paths: prepending a horizontal step maps each Motzkin path of length $2n$ to a distinct Motzkin path of length $2n+1$.
\begin{figure}[H]
\centering
\imagepdftex{\textwidth}{figures/exampleB.pdf_tex}
\caption{Motzkin paths ($n=4$ and $n=5$) and corresponding $\MWRSet$ and $\MWSet$ paths ($n=2$).}
\label{fig:exampleMotzkin}
\end{figure}
\section{Generating Functions}
\label{sec:genfunction}
\begin{theorem}
For $\GMWSet$ and $\GMWRSet$, the generating functions are
\begin{align*}
a^{H}_{R}(x,y) &:= \sum_{n \ge 0} \sum_{m\in\mathbb{Z}} \GMWRCount(n,m) x^n y^m = \frac{y^2}{y^2{-}x \left(1 {+} y {+} y^2 \right)^2},\\
a^{H}(x,y) &:= \sum_{n \ge 0} \sum_{m\in\mathbb{Z}} \GMWCount(n,m) x^n y^m = \frac{y\left(1{+}y{+}y^2\right)}{y^2{-}x \left(1 {+} y{+}y^2\right)^2},\\
a^{H}_{R}(x,0) &:= \sum_{n \ge 0} \GMWRCount(n,0) x^n = \frac{1}{P(x)}\sqrt{\frac{1{-}3x{+}P(x)}{2}},\\
\intertext{and}
a^{H}(x,0) &:= \sum_{n \ge 0} \GMWCount(n,0) x^n = \frac{1}{P(x)}\sqrt{\frac{1{-}3x{-}P(x)}{2x}}
\end{align*}
where
\[ P(x) := \sqrt{1{-}10x{+}9x^2}.\]
\end{theorem}
\begin{proof}
The generating functions $a^{H}_{R}(x,y)$ and $a^{H}(x,y)$ were derived by expanding terms in the general summation using the recurrence relations of Lemma \ref{theo:recurrenceA}. See, for example, Wilf~\cite[p.~5]{wilf}. The expressions for $a^{H}_{R}(x,0)$ and $a^{H}(x,0)$ were obtained by bisecting the well-known generating function for Grand Motzkin paths~\cite{barcuccimotzkin} and simplifying.
\end{proof}
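The closed forms above can be spot-checked numerically. The sketch below (ours, not part of the paper's development) counts half-plane vertically constrained\xspace paths ending on the $x$-axis with a column-by-column dynamic program: an optional leading vertical step, then for each column a horizontal-advancing step followed by an optional vertical step. It then compares a truncated power series against the closed forms at a small value of $x$, well inside the radius of convergence $1/9$.

```python
# Numeric sanity check of a^H_R(x,0) and a^H(x,0): truncated series from a
# direct count of half-plane vertically constrained SVMW paths ending at
# height 0, versus the closed forms involving P(x) = sqrt(1 - 10x + 9x^2).
import math
from collections import defaultdict

def gmw_counts(nmax, restricted):
    dp = defaultdict(int)
    dp[0] = 1
    if not restricted:
        dp[1] += 1          # optional leading vertical step
        dp[-1] += 1
    counts = [dp[0]]
    for _ in range(nmax):
        new = defaultdict(int)
        for y, c in dp.items():
            for hdy in (-1, 0, 1):       # horizontal-advancing step (1, hdy)
                new[y + hdy] += c        # no trailing vertical step
                new[y + hdy + 1] += c    # trailing vertical up
                new[y + hdy - 1] += c    # trailing vertical down
        dp = new
        counts.append(dp[0])
    return counts

x = 0.02
P = math.sqrt(1 - 10 * x + 9 * x * x)
closed_R = math.sqrt((1 - 3 * x + P) / 2) / P
closed = math.sqrt((1 - 3 * x - P) / (2 * x)) / P
series_R = sum(c * x**n for n, c in enumerate(gmw_counts(25, True)))
series = sum(c * x**n for n, c in enumerate(gmw_counts(25, False)))
print(abs(closed_R - series_R), abs(closed - series))
```

The first few counts produced by the dynamic program ($1, 3, 19, \dots$ restricted; $1, 7, 51, \dots$ unrestricted) agree with the series expansions of the closed forms.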
\begin{corollary}
For $\MWSet$ and $\MWRSet$, the generating functions are as follows:
\begin{align*}
a^{Q}_{R}(x,y) &:= \sum_{n \ge 0} \sum_{m \ge 0} \MWRCount(n,m) x^n y^m
= \frac{R(x)+Q(x)+2}{\left(1{+}Q(x){-}\sqrt{x}(1{+}2y)\right)\left(1{+}R(x)+\sqrt{x}(1{+}2y)\right)},\\
a^{Q}(x,y) &:= \sum_{n \ge 0} \sum_{m \ge 0} \MWCount(n,m) x^n y^m
= \frac{R(x)-Q(x)+2\sqrt{x}(1{+}2y)}{\sqrt{x}\left(1{+}Q(x){-}\sqrt{x}(1{+}2y)\right)\left(1{+}R(x)+\sqrt{x}(1{+}2y)\right)},\\
a^{Q}_{R}(x,0) &:= \sum_{n \ge 0} \MWRCount(n,0) x^n = \frac{1}{2x}\left(1 - \sqrt{ \frac{1{-}3x{+}P(x)}{2}}\right)\\
\intertext{and}
a^{Q}(x,0) &:= \sum_{n \ge 0} \MWCount(n,0) x^n = \frac{1}{2x}\left(\sqrt{\frac{1{-}3x{-}P(x)}{2x}} - 1\right)\\
\intertext{where}
Q(x) &:= \sqrt{1{-}2\sqrt{x}-3x}\\
R(x) &:= \sqrt{1{+}2\sqrt{x}-3x}\\
P(x) &:= Q(x)R(x) = \sqrt{1{-}10x{+}9x^2}
\end{align*}
\end{corollary}
\begin{proof}
Generating functions are derived by bisecting and simplifying the generating functions for the corresponding Motzkin triangle and Motzkin path sequences, given by Barcucci et al.~\cite{barcuccimotzkin}.
\end{proof}
\section{Extension to other lattice paths}
\label{sec:extensions}
Our approach can be used to explore vertically constrained\xspace counterparts to other well known lattice paths, such as Dyck and Schr\"{o}der paths shown in Figure \ref{fig:summary}.
\subsection{Dyck Paths with Vertical Steps}
\label{sec:dyck}
Dyck paths are composed from the step vectors $\SVDS = \{\step{1,1}, \step{1,-1}\}$. We now consider vertically constrained\xspace lattice paths with the vector step set $\SVDW = \{\step{1,1}$, $\step{1,-1}$, $\step{0,1}$, $\step{0,-1}\}$. The four types of vertically constrained\xspace $\SVDW$ paths are presented in Table \ref{table:dyckcases}.
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l|l|} \hline
Name & Restricted to Q & Leading Step \\ \hline \hline
$\GDWSet$ & False & $e_1 \in \SVDW$ \\ \hline
$\GDWRSet$ & False & $e_1 \in \SVDS$ \\ \hline
$\DWSet$ & True & $e_1 \in \SVDW$ \\ \hline
$\DWRSet$ & True & $e_1 \in \SVDS$ \\ \hline
\end{tabular}
\end{center}
\caption{Vertically constrained\xspace lattice paths with vector step set $\SVDW$.}
\label{table:dyckcases}
\end{table}
\subsubsection{Recurrence Relations}
\label{sec:dyckrecurrence}
\begin{figure}[H]
\centering
\imagepdftex{0.3\textwidth}{figures/dyckrecurrence2.pdf_tex}
\caption{All possible ways in which a vertically constrained\xspace path using steps from the set $\SVDW$ can terminate at a lattice point (indicated by red dot).}
\label{fig:dyckrecursion}
\end{figure}
As before, we determine the recurrence relation by considering all ways the vertically constrained\xspace $\SVDW$ paths can terminate at a lattice point.
\begin{lemma}
Let $\GDWCount(n,m)$ be the number of vertically constrained\xspace paths in the half-plane created from step vectors $\SVDW$ and extending from $(0,0)$ to $(n,m)$.
The recurrence relation for $\GDWCount(n,m)$ is
\begin{eqnarray}
&\GDWCount(0,m) &= \I{m \in\{-1, 0, 1\}} \label{eq:gdw1} \\
&\GDWCount(n,m) &= \GDWCount(n{-}1,m{+}2) + \GDWCount(n{-}1,m{+}1) + 2\GDWCount(n{-}1,m) \nonumber \\
& &\quad{+}\, \GDWCount(n{-}1,m{-}1) + \GDWCount(n{-}1,m{-}2). \label{eq:gdw2}
\end{eqnarray}
\end{lemma}
For $\GDWRSet$, in which the first step is restricted to the set $\SVDS$, the initial condition is $\GDWRCount(0,m) = \I{m=0}$. Equation \eqref{eq:gdw2} can be rewritten for $\GDWRSet$ by substituting $\GDWRCount(a,b)$ for $\GDWCount(a,b)$.
The recurrence relation for $\DWSet$ and $\DWRSet$, paths restricted to the quarter-plane, is similar but with additional conditions concerning the first two rows of the triangle.
\begin{lemma}
Let $\DWCount(n,m)$ be the number of vertically constrained\xspace paths in the quarter-plane created from step vectors $\SVDW$ and extending from $(0,0)$ to $(n,m)$.
The recurrence relation for $\DWCount(n,m)$ is
\begin{eqnarray}
&\DWCount(0,m) &= \I{m \in\{0, 1\}}, \nonumber \\[0.2cm]
\mathrlap{\text{otherwise, for $n>0$ and $m=0$ or $1$,}} \nonumber \\[0.2cm]
&\DWCount(n,0) &= \DWCount(n{-}1,2) + \DWCount(n{-}1,1) + \DWCount(n{-}1,0),\label{eq:dw1} \\
&\DWCount(n,1) &= \DWCount(n{-}1,3) + \DWCount(n{-}1,2) + 2\DWCount(n{-}1,1) + \DWCount(n{-}1,0)\label{eq:dw2}\\[0.2cm]
\mathrlap{\text{ and for $n>0$, $m>1$,}} \nonumber \\[0.2cm]
&\DWCount(n,m) &= \DWCount(n{-}1,m{+}2) + \DWCount(n{-}1,m{+}1) + 2\DWCount(n{-}1,m) \nonumber \\
& &\quad + \DWCount(n{-}1,m{-}1) + \DWCount(n{-}1,m{-}2). \label{eq:dw3}
\end{eqnarray}
\end{lemma}
For $\DWRSet$, in which the first step is restricted to the set $\SVDS$, the initial condition is $\DWRCount(0,m) = \I{m=0}$. Equations \eqref{eq:dw1}, \eqref{eq:dw2} and \eqref{eq:dw3} can be rewritten for $\DWRSet$ by substituting $\DWRCount(a,b)$ for $\DWCount(a,b)$.
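The recurrences of this subsection can be checked mechanically for small $n$. The Python sketch below (illustrative only; the names are ours) fills the $\DWCount$ table from the recurrence and compares every entry against a direct enumeration of the paths.

```python
# Mechanical check (small n) of the quarter-plane recurrence for DW(n, m)
# against direct enumeration of vertically constrained SVDW paths:
# steps (1, 1), (1, -1), (0, 1), (0, -1), heights >= 0, and no two
# consecutive vertical steps.
from collections import defaultdict

def dw_recurrence(nmax):
    H = 2 * nmax + 6                     # heights never exceed 2n + 1
    D = [[0] * H for _ in range(nmax + 1)]
    def get(n, m):
        return D[n][m] if 0 <= m < H else 0
    D[0][0] = D[0][1] = 1                # empty path, single vertical up
    for n in range(1, nmax + 1):
        D[n][0] = get(n-1, 2) + get(n-1, 1) + get(n-1, 0)
        D[n][1] = get(n-1, 3) + get(n-1, 2) + 2 * get(n-1, 1) + get(n-1, 0)
        for m in range(2, H):
            D[n][m] = (get(n-1, m+2) + get(n-1, m+1) + 2 * get(n-1, m)
                       + get(n-1, m-1) + get(n-1, m-2))
    return D

def dw_brute(n):
    tally = defaultdict(int)
    def dfs(x, y, last_vertical):
        if x == n:                       # every sequence of length n counts
            tally[y] += 1
        for dx, dy in ((1, 1), (1, -1), (0, 1), (0, -1)):
            if (dx == 0 and last_vertical) or x + dx > n or y + dy < 0:
                continue
            dfs(x + dx, y + dy, dx == 0)
    dfs(0, 0, False)
    return dict(tally)

D = dw_recurrence(5)
for n in range(6):
    brute = dw_brute(n)
    print(n, all(D[n][m] == brute.get(m, 0) for m in range(2 * n + 2)))
```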
\subsubsection{An Explicit Bijection}
\label{sec:dyckbijection}
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l|l|} \hline
Family & Relationship & OEIS~\cite{oeis} \\ \hline \hline
\multirow{2}{4cm}{Motzkin, no flat steps at odd indices} & $\DWCount(n,0) = M_{\invdiameter}^{Q}(2n+1,0)$ & Bisection of A214938 \\
& & \\ \hline
\multirow{2}{4cm}{Motzkin, no flat steps at even indices} & $\DWRCount(n,0) = M_{\invdiameter}^{Q}(2n,0)$ & Bisection of A214938 \\
& & \\ \hline
\multirow{2}{4cm}{Grand Motzkin, no flat steps at odd indices} & $\GDWCount(n,0) = M_{\invdiameter}^{H}(2n+1,0)$ & Bisection of A026520 \\
& & \\ \hline
\multirow{2}{4cm}{Grand Motzkin, no flat steps at even indices} & $\GDWRCount(n,0) = M_{\invdiameter}^{H}(2n,0)$ & Bisection of A026520 \\
& & \\ \hline
\multirow{2}{4cm}{Grand Motzkin, no flat steps at odd indices} & $\GDWCount(n,m) = M_{\invdiameter}^{H}(2n+1,m)$ & \multirow{2}{4cm}{Column bisection of A026519} \\
& & \\ \hline
\multirow{2}{4cm}{Grand Motzkin, no flat steps at even indices} & $\GDWRCount(n,m) = M_{\invdiameter}^{H}(2n,m)$ & \multirow{2}{4cm}{Column bisection of A026519} \\
& & \\ \hline
\end{tabular}
\end{center}
\caption{Correlation between vertically constrained\xspace paths of type $\SVDW$ and subsets of the Motzkin family of paths}
\label{table:dyckoeis}
\end{table}
The $\DWSet$ paths correspond to OEIS sequence A214938, the number of Motzkin paths in which flat steps do not occur at odd indices. The $\GDWSet$ paths correspond to the OEIS table A026519, a prototypical sequence contributed by Kimberling and defined by a recurrence relation. We will show that it corresponds to a subset of Grand Motzkin paths with the constraint that flat steps cannot occur at odd indices.
\begin{theorem}
There is a bijection between the subset of Motzkin (Grand Motzkin) paths terminating at $(2n+1,m)$ that avoid flat steps at \textbf{odd} indices and the vertically constrained\xspace paths of type $\DWSet$ $(\GDWSet)$ terminating at $(n,m)$.
\label{theo:bijectionDyck}
\end{theorem}
\begin{proof}
This bijection can be described using the same $\phi$ and $\psi$ mappings outlined in Section \ref{sec:bijection}.
Start with a Motzkin path (or Grand Motzkin path) of length $2n+1$ in which the step vector $\step{1,0}$ does not occur at \textbf{odd} indices, indexing steps from zero so that the first step has index $0$. The application of $\phi$ to the first step and to every alternate step thereafter removes all horizontal steps during the substitution phase and inserts non-consecutive vertical steps. The resulting path contains only steps found in $\SVDW$ and is of type $\DWSet$ or $\GDWSet$. Inversely, if we start with a $\DWSet$ or $\GDWSet$ path and apply $\psi$, all vertical steps are removed and horizontal steps, when added, occur only at \textbf{even} indices, producing a Motzkin path (or Grand Motzkin path) of the desired subset.
\end{proof}
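This correspondence can be confirmed by machine for small $n$. In the sketch below (ours; steps are indexed from zero, so flat steps are forbidden at the 2nd, 4th, $\dots$ positions) the endpoint tallies of the two families are compared directly.

```python
# Brute-force check: Motzkin paths of length 2n+1 with no flat step at an
# odd index (0-based) versus vertically constrained SVDW paths of length n,
# compared endpoint by endpoint.
from collections import defaultdict
from itertools import product

def motzkin_no_odd_flats(length):
    tally = defaultdict(int)
    # flat steps (1, 0) are allowed only at even positions 0, 2, 4, ...
    choices = [((1, 1), (1, -1), (1, 0)) if i % 2 == 0 else ((1, 1), (1, -1))
               for i in range(length)]
    for steps in product(*choices):
        y, ok = 0, True
        for _, dy in steps:
            y += dy
            if y < 0:
                ok = False
                break
        if ok:
            tally[y] += 1
    return dict(tally)

def dw_paths(n):
    tally = defaultdict(int)
    def dfs(x, y, last_vertical):
        if x == n:
            tally[y] += 1
        for dx, dy in ((1, 1), (1, -1), (0, 1), (0, -1)):
            if (dx == 0 and last_vertical) or x + dx > n or y + dy < 0:
                continue
            dfs(x + dx, y + dy, dx == 0)
    dfs(0, 0, False)
    return dict(tally)

for n in range(4):
    print(n, motzkin_no_odd_flats(2 * n + 1) == dw_paths(n))
```

For $n=1$, for instance, both families tally to $\{0{:}2,\ 1{:}3,\ 2{:}2,\ 3{:}1\}$.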
\begin{theorem}
There is a bijection between the subset of Motzkin (Grand Motzkin) paths terminating at $(2n,m)$ that avoid flat steps at \textbf{even} indices and the vertically constrained\xspace paths of type $\DWRSet$ $(\GDWRSet)$ terminating at $(n,m)$.
\end{theorem}
\begin{proof}
The proof of this theorem follows from that of Theorem \ref{theo:bijectionDyck}. Because $\phi$ is applied to the second step and to alternating steps thereafter, the first step of the image cannot be vertical. Similarly, $\psi$ inserts horizontal steps only at \textbf{odd} indices, so the resulting Motzkin path avoids flat steps at \textbf{even} indices.
\end{proof}
It is important to note that if we reverse a Motzkin path of even length with horizontal steps not allowed at \textbf{even} indices we obtain a Motzkin path with horizontal steps not allowed at \textbf{odd} indices.
\begin{figure}[H]
\centering
\imagepdftex{\textwidth}{figures/example3.pdf_tex}
\caption{Motzkin paths with flat steps not allowed at either even or odd indices and the corresponding $\DWRSet$ or $\DWSet$ paths respectively.}
\label{fig:exampleDyck}
\end{figure}
\subsubsection{Generating Functions}
\begin{theorem} \label{thm:GDW}
The bivariate generating functions for $\GDWRCount(n,m)$ and $\GDWCount(n,m)$ are
\begin{align*}
b^{H}_{R}(x,y) &:= \sum_{n \ge 0} \sum_{m\in\mathbb{Z}} \GDWRCount(n,m) x^n y^m = \frac{y^2}{y^2 {-} x\left(1{+}y^{2} \right)\left(1{+}y{+}y^{2} \right)},\\
b^{H}(x,y) &:= \sum_{n \ge 0} \sum_{m\in\mathbb{Z}} \GDWCount(n,m) x^n y^m = \frac{y\left(1{+}y{+}y^{2}\right)}{y^2 - x\left( 1 {+} y^2\right) \left(1{+}y{+}y^{2} \right)},\\
b^{H}_{R}(x,0) &:= \sum_{n \ge 0} \GDWRCount(n,0) x^n = \sqrt{\frac{2{-}7x{+}2\sqrt{1{-}8x{+}12x^2}}{4{-}31x{+}40x^2{+}12x^3}},\\
\intertext{and}
b^{H}(x,0) &:= \sum_{n \ge 0} \GDWCount(n,0) x^n = \sqrt{ \frac{2{-}4x{-}3x^2{-}2\sqrt{1{-}8x{+}12x^2}}{x\left(4{-}31x{+}40x^2{+}12x^3\right)}}
\end{align*}
\label{theo:gfdw}
\end{theorem}
\begin{proof}
The generating functions for $b^{H}_{R}(x,y)$ and $b^{H}(x,y)$ were derived by expanding terms in the general summation~\cite[p.~5]{wilf} using the recurrence relations from Section \ref{sec:dyckrecurrence}.
To find $b^{H}_{R}(x,0)$, we isolate the terms of $b^{H}_{R}(x,y)$ that are constant in $y$. As $b^{H}_{R}(x,y)$ contains negative powers of $y$, one cannot simply substitute $y=0$. However, as $b^{H}_{R}(x,y)$ is a rational function, we can divide by $y$ and compute the residues of the resulting function at those poles in $y$ which approach the origin as $x$ approaches the origin:
\begin{align*}
b^{H}_{R}(x,0) = \sum_{i=1}^{n} Res\left(\frac{b^{H}_{R}(x,y)}{y}; \rho_i \right)
\end{align*}
where $\rho_1, \dots, \rho_n$ are the poles of $\frac{b^{H}_{R}(x,y)}{y}$ which approach the origin as $x$ approaches the origin.
Write $b^{H}_{R}(x,y)/y = G(x,y)/H(x,y)$ for co-prime polynomials $G$ and $H$. There exist two singularities, roots $r_1$ and $r_2$ of $H(x,y)$, that are finite, and in fact vanish, when $x=0$:
\begin{align*}
r_1 &= \frac{ \sqrt{x(x{+}4)} {-} x {-} \sqrt{4x{-}14x^2{-}2x\sqrt{x(x{+}4)}}}{4x},\\
r_2 &= \frac{{-}\sqrt{x(x{+}4)} {-} x {+} \sqrt{4x{-}14x^2{+}2x\sqrt{x(x{+}4)}}}{4x}.
\end{align*}
The existence of these two roots follows from Proposition 6.1.8 of Stanley~\cite{stanley}, and their Puiseux expansions can be determined implicitly using the Newton polygon of $H$. As $r_1$ and $r_2$ are simple roots of $H$,
\begin{align*}
Res\left( \frac{G(x,y)}{H(x,y)}; y=r_1\right) = \left.\frac{G(x,y)}{\frac{d{H(x,y)}}{d{y}}}\right|_{y=r_1} = \frac{\sqrt{2}}{\sqrt{4+x}\sqrt{2-7x-\sqrt{x(4+x)}}},\\
Res\left( \frac{G(x,y)}{H(x,y)}; y=r_2\right) = \left.\frac{G(x,y)}{\frac{d{H(x,y)}}{d{y}}}\right|_{y=r_2} = \frac{\sqrt{2}}{\sqrt{4+x}\sqrt{2-7x+\sqrt{x(4+x)}}}.
\end{align*}
The sum of these residues simplifies to give the final result, and the same approach can be used to find $b^{H}(x,0)$.
\end{proof}
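As with the Motzkin case, the closed forms can be spot-checked numerically against a direct count. The Python sketch below (ours, illustrative) uses the column decomposition of a half-plane $\SVDW$ path, an optional leading vertical step followed per column by a diagonal step and an optional vertical step, and compares a truncated series with the closed forms at a small $x$ inside the radius of convergence $1/6$.

```python
# Numeric sanity check of b^H_R(x,0) and b^H(x,0): truncated series from a
# count of half-plane vertically constrained SVDW paths ending at height 0
# (diagonal steps (1,±1), vertical steps (0,±1), no consecutive verticals).
import math
from collections import defaultdict

def gdw_counts(nmax, restricted):
    dp = defaultdict(int)
    dp[0] = 1
    if not restricted:
        dp[1] += 1          # optional leading vertical step
        dp[-1] += 1
    counts = [dp[0]]
    for _ in range(nmax):
        new = defaultdict(int)
        for y, c in dp.items():
            for d in (1, -1):            # diagonal step (1, d)
                for v in (0, 1, -1):     # optional trailing vertical step
                    new[y + d + v] += c
        dp = new
        counts.append(dp[0])
    return counts

x = 0.02
root = math.sqrt(1 - 8 * x + 12 * x * x)
den = 4 - 31 * x + 40 * x**2 + 12 * x**3
closed_R = math.sqrt((2 - 7 * x + 2 * root) / den)
closed = math.sqrt((2 - 4 * x - 3 * x**2 - 2 * root) / (x * den))
series_R = sum(c * x**n for n, c in enumerate(gdw_counts(25, True)))
series = sum(c * x**n for n, c in enumerate(gdw_counts(25, False)))
print(abs(closed_R - series_R), abs(closed - series))
```

The leading counts ($1, 2, 8, \dots$ restricted; $1, 4, 20, \dots$ unrestricted) match the series expansions of the two closed forms.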
\begin{corollary}
The bivariate generating function for Grand Motzkin paths in which flat steps are not permitted at odd indices is derived using the bisection relationship of Theorem \ref{theo:bijectionDyck}:
\begin{align*}
m_{\invdiameter}^{H}(x,y) & = b^{H}_{R}(x^2,y)+xb^{H}(x^2,y) = \frac{y^2 +xy(1+y+y^2)}{y^2-x^2(1+y^2)(1+y+y^2)}\\
\intertext{and}
m_{\invdiameter}^{H}(x,0) & = b^{H}_{R}(x^2,0)+xb^{H}(x^2,0) = \sqrt{ \frac{4{+}6x{-}11x^2{-}6x^3{-}3x^4{+}2x\sqrt{1{-}8x^2{+}12x^4}}{4{-}31x^2{+}40x^4{+}12x^6}}
\end{align*}
\end{corollary}
\begin{theorem}
\label{theo:dw}
The generating functions for $\DWRCount(n,m)$ and $\DWCount(n,m)$ are
\begin{align*}
\label{eq:dw4}
b^{Q}_{R}(x,y) & := \sum_{n \ge 0} \sum_{m \ge 0} \DWRCount(n,m) x^n y^m \\
& = \frac{(r_1{-}y)(r_2{-}y)}{(1{-}r_1 r_2)(y^2{-}x(1 + y^2)(1 + y + y^2))}, \\
b^{Q}(x,y) & := \sum_{n \ge 0} \sum_{m \ge 0} \DWCount(n,m) x^n y^m \\
& = \frac{r_1 r_2 y{-}(1{+}y{+}y^2)\left(x(r_1+r_2){-}xy(1{-}r_1 r_2)\right)}{x(1{-}r_1 r_2)(y^2{-}x(1 + y^2)(1 + y + y^2))}, \\
b^{Q}_{R}(x,0) &:= \sum_{n\ge0}\DWRCount(n,0)x^n = \frac{{-}r_1r_2}{x(1{-}r_1r_2)},\\
\intertext{and}
b^{Q}(x,0) &:= \sum_{n\ge0}\DWCount(n,0)x^n = \frac{r_1{+}r_2}{x(1{-}r_1r_2)}.
\end{align*}
\end{theorem}
\begin{proof}
Generating functions for $\DWSet$ and $\DWRSet$ paths (restricted to the quarter-plane) were determined using a combination of term expansion and the \emph{kernel method}~\cite{BanderierBousquet-MelouDeniseFlajoletGardyGouyou-Beauchamps2002}. Term expansion in the general summation gives
\begin{align}
b^{Q}_{R}(x,y) &= \frac{y^2{-}x\left(1{+}y{+}y^{2}\right)b^{Q}_{R}(x,0) {-} xy S(x)}{y^2 {-} x\left( 1 {+} y^2\right) \left(1{+}y{+}y^{2} \right)},\\
b^{Q}(x,y) &= \frac{y^2 {+} y^3 {-} x\left(1 {+} y {+} y^2\right)b^{Q}(x,0) {-}xy T(x)}{y^2 {-} x\left( 1 {+} y^2\right)\left(1{+}y{+}y^{2} \right)},
\end{align}
where
\[
S(x) := \sum_{n\ge0}\DWRCount(n,1)x^n \quad \text{and} \quad
T(x) := \sum_{n\ge0}\DWCount(n,1)x^n. \]
The term `kernel method' refers to a collection of techniques for solving functional equations often appearing in lattice path problems. Clearing fractions in the expression for $b^{Q}_{R}(x,y)$ above implies
\begin{equation} K(x,y) b^{Q}_{R}(x,y) = y^2{-}x\left(1{+}y{+}y^{2}\right)b^{Q}_{R}(x,0) {-} xy S(x), \label{eq:kernel} \end{equation}
where $K(x,y)$ is the \emph{kernel}
\[ K(x,y) = y^2 - x\left(1+y^2\right)\left(1+y+y^{2}\right). \]
As discussed in the proof of Theorem~\ref{thm:GDW}, there are two roots $r_1$ and $r_2$ of $K$ which vanish at $x=0$. Substituting $y=r_1$ and $y=r_2$ into Equation~\eqref{eq:kernel} gives two equations in two unknowns $b^{Q}_{R}(x,0)$ and $S(x)$, which can be solved explicitly.
The same approach can be used to find $b^{Q}(x,y)$ and $b^{Q}(x,0)$.
\end{proof}
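The kernel-method solution can be spot-checked numerically: substituting the explicit small roots $r_1$ and $r_2$ of the kernel into $b^{Q}_{R}(x,0) = -r_1 r_2/\bigl(x(1-r_1 r_2)\bigr)$ and comparing with a truncated series from a direct path count. The Python sketch below is ours and purely illustrative.

```python
# Numeric spot check of the kernel-method result: evaluate the closed form
# b^Q_R(x,0) = -r1 r2 / (x (1 - r1 r2)) at a small x and compare with a
# truncated series counting quarter-plane vertically constrained SVDW paths
# that end on the x-axis and have a non-vertical first step.
import math

def dwr_axis_counts(nmax):
    # dp over columns: diagonal step (1, ±1), then an optional vertical (0, ±1);
    # the path must stay at nonnegative height at every vertex
    dp = {0: 1}
    counts = [1]
    for _ in range(nmax):
        new = {}
        for y, c in dp.items():
            for d in (1, -1):
                for v in (0, 1, -1):
                    y2 = y + d + v
                    if y + d >= 0 and y2 >= 0:
                        new[y2] = new.get(y2, 0) + c
        dp = new
        counts.append(dp.get(0, 0))
    return counts

x = 0.02
s = math.sqrt(x * (x + 4))
r1 = (s - x - math.sqrt(4 * x - 14 * x * x - 2 * x * s)) / (4 * x)
r2 = (-s - x + math.sqrt(4 * x - 14 * x * x + 2 * x * s)) / (4 * x)
closed = -r1 * r2 / (x * (1 - r1 * r2))
series = sum(c * x**n for n, c in enumerate(dwr_axis_counts(25)))
print(closed, series)
```

The first few counts $1, 1, 3, \dots$ agree with the series expansion of the closed form.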
The bijection with Motzkin paths in which flat steps are not permitted at odd indices can then be used to obtain generating functions for this subset of Motzkin paths both in the general triangle case and for paths terminating on the $x$-axis.
\begin{theorem}
The bivariate and univariate generating functions for Motzkin paths in which flat steps are not permitted at odd indices are
\begin{align*}
m_{\invdiameter}^{Q}(x,y) &:= \sum_{n \ge 0} \sum_{m \ge 0}M_{\invdiameter}^{Q}(n,m) x^n y^m\\
& = \frac{x (q_1 {-} y) (q_2 {-} y) + q_1 q_2 y - x^2 (q_1 {+} q_2 {-} y {+} q_1 q_2 y) (1 {+} y {+} y^2)}{x(1 {-} q_1 q_2) (y^2 {-} x^2 (1 {+} y^2) (1 {+} y {+} y^2))},
\intertext{and}
m_{\invdiameter}^{Q}(x,0) &:= \sum_{n\ge0}M_{\invdiameter}^{Q}(n,0)x^n = \frac{x(q_1{+}q_2){-}q_1 q_2}{x^2(1{-}q_1 q_2)},\\
\intertext{where}
q_1 &:= -\frac{x - \sqrt{4 + x^2} + \sqrt{4{-}14x^2{-}2x\sqrt{4 + x^2}}}{4 x},\\
q_2 &:= -\frac{x + \sqrt{4 + x^2} - \sqrt{4{-}14x^2{+}2x\sqrt{4 + x^2}}}{4 x}.
\end{align*}
\label{theo:flat}
\end{theorem}
\begin{proof}
The generating function $m_{\invdiameter}^{Q}(x,y)$ is derived, as a consequence of the bijection relationship, by combining $b^{Q}_{R}(x,y)$ and $b^{Q}(x,y)$. By setting $y=0$ in $m_{\invdiameter}^{Q}(x,y)$, we obtain $m_{\invdiameter}^{Q}(x,0)$.
\end{proof}
\subsection{Schr\"{o}der and Delannoy Paths with Vertical Steps}
\label{sec:schroder}
Delannoy paths are a composition of the step vectors $\SVCS = \{\step{1,1}$, $\step{1,-1}$, $\step{2,0}\}$.\footnote{Note: we take a non-traditional orientation of the Schr\"{o}der and Delannoy paths. Typically they are drawn with steps $\{\step{1,1}$, $\step{1,0}$ and $\step{0,1}\}$ travelling from the south-west to north-east corners of an $n\times n$ square, a $45^{\circ}$ rotation from our representation.}
Schr\"{o}der paths use the same step vectors but are restricted to the first quadrant.
Our vertically constrained\xspace variant is created from the step set $\SVCW = \{\step{1,1}$, $\step{1,-1}$, $\step{2,0}$, $\step{0,2}$, $\step{0,-2}\}$ where the height of the vertical step vector is equal to the width of the horizontal step vector. Table \ref{table:schrodercases} outlines the four cases.
\begin{table}[H]
\begin{center}
\begin{tabular}{|l|l|l|} \hline
Name &Restricted to Q& Leading Step \\ \hline \hline
$\GCWSet$ & False & $e_1 \in \SVCW$ \\ \hline
$\GCWRSet$ & False & $e_1 \in \SVCS$ \\ \hline
$\CWSet$ & True & $e_1 \in \SVCW$ \\ \hline
$\CWRSet$ & True & $e_1 \in \SVCS$ \\ \hline
\end{tabular}
\end{center}
\caption{Vertically constrained\xspace lattice paths with vector step set $\SVCW$.}
\label{table:schrodercases}
\end{table}
\subsubsection{Recurrence Relations}
\label{sec:schroderrecurrence}
\begin{figure}[H]
\centering
\imagepdftex{0.4\textwidth}{figures/schroderrecurrence.pdf_tex}
\caption{All possible ways a vertically constrained\xspace path can terminate at a lattice point (red dot) using step set $\SVCW$.}
\label{fig:schroderrecursion}
\end{figure}
Once again, the recurrence relation for $\GCWSet$ paths is determined by considering all ways in which the path can terminate at a lattice point as shown in Figure \ref{fig:schroderrecursion}.
\begin{lemma}
The recurrence relation for $\GCWCount(n,m)$ is
\begin{eqnarray}
&\GCWCount(0,m) &= \I{m\in\{-2, 0, 2\}}, \text{ and for $n>0$ (with $\GCWCount(p,m)=0$ when $p<0$),}\nonumber \\
&\GCWCount(n,m) &= \GCWCount(n{-}1,m{+}3) + \GCWCount(n{-}2,m{+}2) + 2\GCWCount(n{-}1,m{+}1) + \GCWCount(n{-}2,m) \nonumber \\
& &\quad{+}\, 2\GCWCount(n{-}1,m{-}1) + \GCWCount(n{-}2,m{-}2) + \GCWCount(n{-}1,m{-}3). \nonumber
\end{eqnarray}
\end{lemma}
The recurrence relation for $\CWSet$ (restricted to the quarter-plane) is very similar but with additional conditions concerning steps near the $x$-axis.
\begin{lemma}
The recurrence relation for $\CWCount(n,m)$ is
\begin{eqnarray}
&\CWCount(p,m) &= 0 \text{ if } p<0 \nonumber \\
&\CWCount(0,m) &= \I{m \in\{0, 2\}}\nonumber \\
&\CWCount(n,0) &= \CWCount(n{-}1,3) + \CWCount(n{-}2,2) + 2\CWCount(n{-}1,1) + \CWCount(n{-}2,0)\nonumber \\
&\CWCount(n,1) &= \CWCount(n{-}1,4) + \CWCount(n{-}2,3) + 2\CWCount(n{-}1,2) + \CWCount(n{-}2,1) + \CWCount(n{-}1,0)\nonumber \\
&\CWCount(n,2) &= \CWCount(n{-}1,5) + \CWCount(n{-}2,4) + 2\CWCount(n{-}1,3) + \CWCount(n{-}2,2) + 2\CWCount(n{-}1,1) \nonumber \\
& &\;\; +\, \CWCount(n{-}2,0)\nonumber \\
&\CWCount(n,m) &= \CWCount(n{-}1,m{+}3) + \CWCount(n{-}2,m{+}2) + 2\CWCount(n{-}1,m{+}1) + \CWCount(n{-}2,m) \nonumber \\
& &\quad{+}\, 2\CWCount(n{-}1,m{-}1)+ \CWCount(n{-}2,m{-}2) + \CWCount(n{-}1,m{-}3) \text{ for } m>2.\nonumber
\end{eqnarray}
\end{lemma}
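Both recurrences of this subsection admit a mechanical check for small $n$. The Python sketch below (illustrative, with names of our own choosing) fills the $\CWCount$ table from the recurrence and compares every entry against a direct enumeration.

```python
# Mechanical check (small n) of the quarter-plane recurrence for CW(n, m)
# against direct enumeration of vertically constrained SVCW paths:
# steps (1, 1), (1, -1), (2, 0), (0, 2), (0, -2), heights >= 0, and no two
# consecutive vertical steps.
from collections import defaultdict

SVCW = ((1, 1), (1, -1), (2, 0), (0, 2), (0, -2))

def cw_brute(n):
    tally = defaultdict(int)
    def dfs(x, y, last_vertical):
        if x == n:
            tally[y] += 1
        for dx, dy in SVCW:
            if (dx == 0 and last_vertical) or x + dx > n or y + dy < 0:
                continue
            dfs(x + dx, y + dy, dx == 0)
    dfs(0, 0, False)
    return dict(tally)

def cw_recurrence(nmax):
    H = 3 * nmax + 8                     # heights never exceed 3n + 2
    C = [[0] * H for _ in range(nmax + 1)]
    def get(n, m):
        return C[n][m] if n >= 0 and 0 <= m < H else 0
    C[0][0] = C[0][2] = 1                # empty path, single vertical up
    for n in range(1, nmax + 1):
        C[n][0] = get(n-1, 3) + get(n-2, 2) + 2 * get(n-1, 1) + get(n-2, 0)
        C[n][1] = (get(n-1, 4) + get(n-2, 3) + 2 * get(n-1, 2)
                   + get(n-2, 1) + get(n-1, 0))
        C[n][2] = (get(n-1, 5) + get(n-2, 4) + 2 * get(n-1, 3)
                   + get(n-2, 2) + 2 * get(n-1, 1) + get(n-2, 0))
        for m in range(3, H - 3):
            C[n][m] = (get(n-1, m+3) + get(n-2, m+2) + 2 * get(n-1, m+1)
                       + get(n-2, m) + 2 * get(n-1, m-1) + get(n-2, m-2)
                       + get(n-1, m-3))
    return C

C = cw_recurrence(5)
for n in range(6):
    brute = cw_brute(n)
    print(n, all(C[n][m] == brute.get(m, 0) for m in range(3 * n + 3)))
```

For example, both methods give $\CWCount(1,\cdot) = \{1{:}3,\ 3{:}3,\ 5{:}1\}$ and $\CWCount(2,0) = 11$.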
\subsubsection{An Explicit Bijection}
The mappings $\phi$ and $\psi$ do not apply to the vertically constrained\xspace variants of the Schr\"{o}der-Delannoy paths. In $\SVCW$, the $\rightarrow$ step vector has twice the horizontal displacement of the $\nearrow$ and $\searrow$ step vectors, a property that causes substitutions of the type used in $\phi$ and $\psi$ to produce paths of varying length. The OEIS does not contain entries for objects equinumerous with members of the $\SVCW$ path family.
There does, however, exist a bijection to a subset of Schr\"{o}der-Delannoy paths consisting of paths that are smooth at every third column. By \emph{smooth} at a column $c_i$, we mean that the path does not change direction at the integer $x$-position $c_i$. This can occur either because the edges before and after $x=c_i$ have the same step vector, or because $x=c_i$ intersects a midpoint of an edge that spans multiple columns. Examples of these paths are shown in Figure \ref{fig:exampleSchroder}. For conciseness, we shall refer to this subset of Schr\"{o}der/Delannoy paths as \ssd paths.
We define a mapping $\gamma$, illustrated in Figure \ref{fig:schrmap}, in which the following substitutions are performed on a \ssd path, replacing either a single step or a pair of steps centered at every third column of the integer lattice:
\begin{eqnarray*}
\gamma :& \step{1,1}\step{1,1} & \to \step{0,2}\\
\gamma :& \step{1,-1}\step{1,-1} & \to \step{0,-2}\\
\gamma :& \step{2,0}\step{2,0} & \to \step{2,0}\\
\gamma :& \step{2,0} & \to \text{ remove step}\\
\end{eqnarray*}
\begin{figure}[H]
\centering
\imagepdftex{\textwidth}{figures/schroder_mapping.pdf_tex}
\caption{Bijective mapping from a Schr\"{o}der path (length $=18$) that is smooth at horizontal distances $3i+2$ from the origin (indicated by red dashed lines) for $0 \le i < 6$ to a $\CWRSet$ path (length $=6$).}
\label{fig:schrmap}
\end{figure}
The mapping $\gamma$ is similar to $\phi$, except that, because the horizontal step of $\SVCW$ is twice as wide as the diagonal steps, each vertical step of the image corresponds to a \emph{pair} of diagonal steps in the \ssd path and each horizontal step of the image corresponds to a \emph{pair} of horizontal steps.
We define the inverse mapping $\lambda$, illustrated in Figure \ref{fig:schrunmap}, in which the following substitutions are performed on steps that are centered at each column in the integer lattice:
\begin{eqnarray*}
\lambda : & \step{0,2} & \to \step{1,1}\step{1,1}\\
\lambda : & \step{0,-2} & \to \step{1,-1}\step{1,-1} \\
\lambda : & \step{2,0} & \to \step{2,0}\step{2,0} \\
\lambda : & \text{no step} & \to \text{insert }\step{2,0}
\end{eqnarray*}
\begin{figure}[H]
\centering
\imagepdftex{\textwidth}{figures/schroder_unmapping.pdf_tex}
\caption{Bijective mapping from a $\GCWSet$ path (length $=6$) to a Delannoy path (length $=20$) that is smooth at horizontal distances $3i+1$ from the origin (indicated by red dashed lines) for $0 \le i \le 6$.}
\label{fig:schrunmap}
\end{figure}
\begin{theorem}The mapping $\gamma$ is a bijection from Schr\"{o}der (Delannoy) paths terminating at $(3n+2,m)$ that are smooth at each horizontal distance $3i+1$ from the origin for $0 \le i \le n$ to vertically constrained\xspace paths of type $\CWSet$ $(\GCWSet)$ terminating at $(n,m)$.
\label{theo:bijectionSchrod}
\end{theorem}
\begin{proof}
We start by proving that $\gamma$ applied to a \ssd Schr\"{o}der (Delannoy) path always produces a vertically constrained\xspace $\CWSet$ $(\GCWSet)$ path. The set of step vectors $\SVCS$ is a subset of $\SVCW$ and all substitutions in $\gamma$ are from the set $\SVCW$, therefore the resulting path is made up of steps from $\SVCW$. Because there is a spacing of three columns in the pre-image between $\gamma$ substitutions, there is at least one column between vertical steps in the image satisfying the non-consecutive condition of vertically constrained\xspace $\SVCW$ paths. Each $\gamma$ substitution reduces the horizontal path length by 2 resulting, after $n+1$ substitutions, in a path of length $n$. As in the Motzkin path bijections of Section \ref{sec:bijection}, there is no change in the height of step endvertices ensuring path images remain in the same half or quadrant as their pre-image.
Next we will prove that the vertically constrained\xspace $\SVCW$ paths and \ssd Schr\"{o}der (Delannoy) paths are equinumerous. There are exactly four ways in which a Schr\"{o}der path can be smooth across a column: a repeated pair of steps of the same type $(\rightarrow\rightarrow$, $\nearrow\nearrow$, or $\searrow\searrow)$ or a single horizontal step of length two whose midpoint lies on the column. All four of these cases are uniquely mapped by $\gamma$ to steps in $\SVCW$. Since each \ssd Schr\"{o}der (Delannoy) path under $\gamma$ produces a unique vertically constrained\xspace $\SVCW$ path, the number of vertically constrained\xspace $\SVCW$ paths is greater than or equal to the number of \ssd Schr\"{o}der (Delannoy) paths. Similarly, the mapping $\lambda$ applied to a vertically constrained\xspace $\SVCW$ path can be shown to always produce a unique \ssd Schr\"{o}der (Delannoy) path, so the number of \ssd Schr\"{o}der (Delannoy) paths is greater than or equal to the number of vertically constrained\xspace $\SVCW$ paths. Together, the two inequalities imply that the two sets of paths are equinumerous.
\end{proof}
\begin{theorem}The mapping $\gamma$ is a bijection from Schr\"{o}der (Delannoy) paths terminating at $(3n,m)$ that are smooth at a horizontal distance $3i+2$ from the origin for $0 \le i < n$ to vertically constrained\xspace paths of type $\CWRSet$ $(\GCWRSet)$ terminating at $(n,m)$.
\label{theo:bijectionSchrod2}
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem \ref{theo:bijectionSchrod}. The first step of the \ssd Schr\"{o}der (Delannoy) path is not modified by $\gamma$, therefore there can be no leading vertical step. There are $n$ substitutions, resulting in a vertically constrained\xspace path of length $n$.
\end{proof}
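The correspondence of this theorem can also be confirmed by machine for small $n$. The Python sketch below (ours, illustrative) enumerates Schr\"{o}der paths of length $3n$, keeps those that are smooth at $x = 3i+2$ for $0 \le i < n$, and compares the endpoint tallies with those of $\CWRSet$ paths of length $n$.

```python
# Small-case check of the bijection: Schroder paths of length 3n that are
# smooth at x = 3i+2 (0 <= i < n) versus vertically constrained SVCW paths
# of length n with a non-vertical first step, compared endpoint by endpoint.
from collections import defaultdict

def schroder_smooth(n):
    cols = {3 * i + 2 for i in range(n)}
    tally = defaultdict(int)

    def smooth(steps):
        positions, x = [], 0
        for s in steps:
            positions.append((x, s))
            x += s[0]
        for c in cols:
            ok = False
            for i, (sx, s) in enumerate(positions):
                if sx < c < sx + s[0]:
                    ok = True                      # a wide step spans the column
                elif sx + s[0] == c and i + 1 < len(positions):
                    ok = positions[i + 1][1] == s  # same step on both sides
            if not ok:
                return False
        return True

    def dfs(x, y, steps):
        if x == 3 * n:
            if smooth(steps):
                tally[y] += 1
            return
        for dx, dy in ((1, 1), (1, -1), (2, 0)):
            if x + dx <= 3 * n and y + dy >= 0:
                dfs(x + dx, y + dy, steps + [(dx, dy)])

    dfs(0, 0, [])
    return dict(tally)

def cwr_paths(n):
    tally = defaultdict(int)
    def dfs(x, y, last_vertical, first):
        if x == n:
            tally[y] += 1
        for dx, dy in ((1, 1), (1, -1), (2, 0), (0, 2), (0, -2)):
            if dx == 0 and (last_vertical or first):
                continue             # no consecutive (or leading) vertical step
            if x + dx > n or y + dy < 0:
                continue
            dfs(x + dx, y + dy, dx == 0, False)
    dfs(0, 0, False, True)
    return dict(tally)

for n in range(4):
    print(n, schroder_smooth(n) == cwr_paths(n))
```

For $n=1$, for instance, both families tally to $\{1{:}1,\ 3{:}1\}$.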
\begin{figure}[H]
\centering
\imagepdftex{0.8\textwidth}{figures/schroder_example.pdf_tex}
\caption{Schr\"{o}der paths ($n=6$ and $n=8$) that are smooth at a regular interval of 3 columns and the corresponding $\CWRSet$ and $\CWSet$ paths ($n=2$).}
\label{fig:exampleSchroder}
\end{figure}
\subsubsection{Generating Functions}
\begin{theorem}
The generating functions for $\GCWRCount(n,m)$ and $\GCWCount(n,m)$ are
\begin{align*}
c^{H}_{R}(x,y) &:= \sum_{n \ge 0} \sum_{m\in\mathbb{Z}} \GCWRCount(n,m) x^n y^m = \frac{y^3}{y^3 {-} x\left(1{+}xy{+}y^2\right)\left(1{+}y^2{+}y^4 \right)},\\
c^{H}(x,y) &:= \sum_{n \ge 0} \sum_{m\in\mathbb{Z}} \GCWCount(n,m) x^n y^m = \frac{y\left( 1 {+} y^2 {+} y^4 \right)}{y^3 {-} x\left( 1{+}xy{+}y^2 \right)\left( 1 {+} y^2 {+} y^4 \right)},\\
c^{H}_{R}(x,0) &:= \sum_{n \ge 0} \GCWRCount(n,0) x^n = \frac{d_{\invdiameter}^{H}(x,0)+d_{\invdiameter}^{H}(-x,0)}{2},\\
c^{H}(x,0) &:= \sum_{n \ge 0} \GCWCount(n,0) x^n = \frac{d_{\invdiameter}^{H}(x,0)-d_{\invdiameter}^{H}(-x,0)}{2x},
\intertext{where}
d_{\invdiameter}^{H}(x,0)&= \sum_{y\in\{s_1,s_2,s_3\}}\frac{y^2+x \left(1+y^2+y^4\right)}{3 y^2-2 x y \left(2+4 y^2+3 y^4\right)-x^2 \left(1+3 y^2+5 y^4\right)},
\intertext{and}
s_1,s_2,s_3&:= \text{the three roots of }y^3 {-} x\left(1{+}xy{+}y^2\right)\left(1{+}y^2{+}y^4\right)\text{, in $y$, that remain finite as }x \rightarrow 0.
\end{align*}
\end{theorem}
\begin{proof}
The generating functions for $\GCWSet$ and $\GCWRSet$ paths, expressed as $c^{H}(x,y)$ and $c^{H}_{R}(x,y)$, can be determined by expanding terms in the general summation using the recurrence relations found in Section~\ref{sec:schroderrecurrence}.
To enumerate only the paths that terminate on the $x$-axis, we first combine the general equations, each of which has zero values at alternating cell positions in the triangle representation, to obtain $d_{\invdiameter}^{H}(x,y)$, an enumeration of Delannoy paths that are smooth at regular horizontal distances,
\begin{align*}
d_{\invdiameter}^{H}(x,y)&:= c^{H}_{R}(x,y)+xc^{H}(x,y) = \frac{y^3+xy(1+y^2+y^4)}{y^3 {-} x\left(1{+}xy{+}y^2\right)\left(1{+}y^2{+}y^4 \right)}.
\end{align*}
Because $d_{\invdiameter}^{H}(x,y)$ is a ratio of polynomials, we can once again apply the calculus of residues to get an explicit formula for $d_{\invdiameter}^{H}(x,0)$.
Proposition 6.1.8 of Stanley~\cite{stanley} implies there are three roots of the kernel, $H:=y^3 {-} x\left(1{+}xy{+}y^2\right)\left(1{+}y^2{+}y^4\right)$, that are finite as $x\rightarrow 0$, and they all vanish at the origin. We sum over the residues evaluated at these three roots to obtain an exact form for $d_{\invdiameter}^{H}(x,0)$. Finally, we apply series bisection, taking the even and odd parts of $d_{\invdiameter}^{H}(x,0)$ as in the theorem statement, to recover $c^{H}_{R}(x,0)$ and $c^{H}(x,0)$.
\end{proof}
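The bisection step in the proof can be made concrete with a short Python sketch (illustrative only, with placeholder coefficients rather than the actual series): the even part $(f(x)+f(-x))/2$ keeps the even-index coefficients, while $(f(x)-f(-x))/(2x)$ shifts the odd-index ones down by one degree.

```python
# Series bisection on coefficient lists: f(x) = sum_n a_n x^n.
# (f(x) + f(-x))/2 keeps a_n for even n;
# (f(x) - f(-x))/(2x) moves odd-index coefficients down one degree,
# exactly as in the formulas for c^H_R(x,0) and c^H(x,0) above.

def even_part(coeffs):
    """Coefficients of (f(x) + f(-x))/2."""
    return [a if n % 2 == 0 else 0 for n, a in enumerate(coeffs)]

def odd_part_over_x(coeffs):
    """Coefficients of (f(x) - f(-x))/(2x)."""
    out = [0] * (len(coeffs) - 1)
    for n in range(1, len(coeffs), 2):
        out[n - 1] = coeffs[n]
    return out

# Toy series 1 + 2x + 3x^2 + 4x^3 (placeholder coefficients).
f = [1, 2, 3, 4]
print(even_part(f))        # [1, 0, 3, 0]
print(odd_part_over_x(f))  # [2, 0, 4]
```

Recombining the two outputs as $c^{H}_{R}(x,0)+x\,c^{H}(x,0)$ reproduces the original series, mirroring the definition of $d_{\invdiameter}^{H}(x,0)$.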
\begin{theorem}
The generating functions for $\CWRCount(n,m)$ and $\CWCount(n,m)$ are
\begin{align*}
c^{Q}_{R}(x,y) & := \sum_{n \ge 0} \sum_{m \ge 0} \CWRCount(n,m) x^n y^m \\
&= \frac{-\left(s_1-y\right)\left(s_2-y\right)\left(s_3-y\right)\left(1-s_1 s_2 s_3 y\right)}
{(1+s_1 s_2 s_3 (s_1+s_2+s_3)) \left(y^3 {-} x\left( 1 {+} xy {+} y^2\right)\left( 1 {+} y^2 {+} y^4 \right)\right)}, \\
c^{Q}(x,y) & := \sum_{n \ge 0} \sum_{m \ge 0} \CWCount(n,m) x^n y^m \\
& = \frac{-(s_1-y)(s_2-y)(s_3-y)}{(1+s_1 s_2 s_3 (s_1+s_2+s_3)) \left(y^3 {-} x\left( 1 {+} xy {+} y^2\right)\left( 1 {+} y^2 {+} y^4 \right)\right)}\\
&\quad\times\Bigg(1{+}\frac{1}{2}\left((s_1{+}s_2)^2{+}(s_1{+}s_3)^2{+}(s_2{+}s_3)^2\right)\\
&\qquad + \left(s_1+s_2+s_3+s_1 s_2 s_3\left(s_1 s_2 +s_1 s_3+s_2 s_3 -1\right)\right) y \\
&\qquad + \left(1+s_1 s_2 s_3\left(s_1 + s_2 + s_3\right)\right) y^2\Bigg),\\
c^{Q}_{R}(x,0) & := \sum_{n \ge 0}\CWRCount(n,0) x^n \\
& = \frac{s_1s_2s_3}{x(1+s_1 s_2 s_3(s_1+s_2+s_3))},\\
c^{Q}(x,0) & := \sum_{n \ge 0}\CWCount(n,0) x^n \\
& = \frac{s_1s_2s_3(1{+}\frac{1}{2}((s_1{+}s_2)^2{+}(s_1{+}s_3)^2{+}(s_2{+}s_3)^2))}
{x(1+s_1 s_2 s_3(s_1+s_2+s_3))}.
\end{align*}
\label{theo:gfcwr}
\end{theorem}
\begin{proof}
Again we use a combination of term expansion and the kernel method. By term expansion in the general summation, we obtain
\begin{align}
\label{eq:cw1}
c^{Q}_{R}(x,y) & = \frac{y^3 {-} x\left(1{+}xy{+}2y^{2}{+}y^{4}\right)c^{Q}_{R}(x,0) {-} xy(1{+}xy)W_1(x) {-} xy^2W_2(x)}{y^3 {-} x\left( 1 {+} xy {+} y^2\right)\left( 1 {+} y^2 {+} y^4 \right)}\\
c^{Q}(x,y) & = \frac{y\left( y^2{+}y^4\right) {-} x\left(1{+}xy{+}2y^{2}{+}y^{4}\right)c^{Q}(x,0) {-} xy(1{+}xy)V_1(x) {-} xy^2V_2(x)}{y^3 {-} x\left( 1 {+} xy {+} y^2\right)\left( 1 {+} y^2 {+} y^4 \right)}, \\
\intertext{where}
W_i(x) & := \sum_{n \ge 0}\CWRCount(n,i) x^n, \quad i \in \{1,2\}, \nonumber\\
V_i(x) & := \sum_{n \ge 0}\CWCount(n,i) x^n, \quad i \in \{1,2\}. \nonumber
\end{align}
As in the half-plane case, we use the three roots $s_1,s_2$ and $s_3$ of the kernel $K(x,y) = y^3 {-} x\left( 1 {+} xy {+} y^2\right)\left( 1 {+} y^2 {+} y^4 \right)$, which vanish at the origin, to obtain a system of three equations in the three unknowns $c^{Q}_{R}(x,0)$, $W_1(x)$ and $W_2(x)$; an analogous system determines $c^{Q}(x,0)$, $V_1(x)$ and $V_2(x)$. Solving these systems gives the stated result.
\end{proof}
\section{Future Work}
Inspired by a lattice path model arising in the study of lace, we have studied the vertically constrained\xspace extensions of Motzkin, Dyck, Schr\"{o}der and Delannoy paths. There are many potential generalizations: the same approach can be applied to any set of directed step vectors; the vertical steps, which we have taken to be of the same length as the horizontal steps, could be allowed to vary in length; and the restriction forbidding consecutive vertical steps could be relaxed to allow at most $k$ consecutive vertical steps.
We have demonstrated the existence of a bijective relationship between vertically constrained\xspace paths and subsets of lattice path families with similar step vectors. Because the vertical constraint has clear geometric consequences, it is straightforward to derive a recurrence relation. Consequently, vertically constrained\xspace paths may serve as a useful tool for analyzing subsets of other lattice path families subject to periodic constraints.
\bibliographystyle{plain}
|
cond-mat/0603633
|
\section{Introduction}
After their first derivation by Rokhsar and Kivelson in 1988 in
the context of cuprates,~\cite{rokhsar} the hard-core quantum dimer models (QDM)
have attracted significant attention. The phase diagrams of the QDM on the square
and triangular lattices
have been investigated in great detail,~\cite{leung,syljuasen} and, following the pioneering
work of Moessner and Sondhi on the triangular lattice,~\cite{moessner} the very existence
of a stable resonating valence bond (RVB) phase has been unambiguously
demonstrated.~\cite{ralko}
The presence of a liquid phase with deconfined vison excitations~\cite{balents}
has also been established for a toy model living on the Kagome lattice.~\cite{misguich}
However, the relationship between QDMs and Mott insulators, the physical systems
for which they were
proposed in the first place, is not straightforward. It is well established
by now that the ground state of the S=1/2 Heisenberg model on the square and triangular
lattices exhibits long-range magnetic order of N\'eel and 120-degree type, respectively,
and this type of order cannot be reached within the variational basis of Rokhsar and
Kivelson, which consists of short-range singlet dimers.
For a QDM to be a good effective model, one should thus identify models
for which the subspace of short-range dimer coverings on a certain lattice
is a good variational basis.
The first example of such a case was provided by the S=1/2 Heisenberg model on the
trimerized Kagome lattice.~\cite{mila} Indeed, an effective spin-chirality model
living on a triangular lattice can be derived, and, at the level of a mean-field
decoupling between spin and chirality, the ground state manifold consists of all
dimer coverings on the triangular lattice. Going beyond mean-field is
thus expected to lead to a relevant effective QDM. Using Rokhsar and Kivelson's
prescription, which consists in truncating the Hamiltonian and inverting the overlap
matrix within the basis of dimer coverings, Zhitomirsky has derived such an
effective Hamiltonian~\cite{zhitomirsky} and shown that the main competition is between
kinetic terms involving loops of length 4 and 6 respectively, and {\it not} a competition
between a kinetic and a potential term, as for the QDM derived by Rokhsar and Kivelson.
The next logical step would be to study the properties of this effective QDM. This is far
from easy however. We know from the experience with the standard QDM model on the triangular
lattice that the clusters reachable with exact diagonalizations are much too small to allow
any significant conclusion regarding the presence of an RVB phase, and since there is no
convention leading only to negative off-diagonal matrix elements, it is impossible to
perform quantum Monte Carlo simulations.
Recently, two of us came across another model, for which the low
energy sector consists of almost degenerate singlet coverings on the triangular lattice.
This model is a Kugel-Khomskii~\cite{kugel} model that was derived in the
context of LiNiO$_2$, and the mean-field equations that describe the decoupling of the spin
and orbital degrees of freedom possess an infinite number of locally stable solutions.
These solutions
are almost degenerate and correspond to spin singlet (and orbital triplet) dimers on the
triangular lattice.~\cite{vernay} Following Rokhsar and Kivelson's prescription, an effective
QDM can also be derived (see Appendix~\ref{derivation}). As for the S=1/2 Heisenberg model
on the trimerized Kagome lattice, it
consists of a competition between kinetic terms, with two important differences however.
The main term of length 6 lives on loops that have a shape of large triangles, a term absent
in the other case. But more importantly, the off-diagonal matrix elements are {\it all}
negative.
Since the competition between kinetic processes was never investigated before, we have
decided to concentrate on the minimal model obtained by keeping only the dominant term of
length 6 for clarity. We have checked that the properties of the complete effective model
are similar. This minimal model is described by the Hamiltonian:
\begin{widetext}
\begin{equation}\label{hamilt}
\begin{array}{rcl}
{ H}&=& \ -t \sum
\left(
|\unitlength=1mm
\begin{picture}(6.2,5)
\linethickness{2mm}
\put(0.9,-.7){\line(1,2){1.8}}
\put(3.8,-.7){\line(1,2){1.8}}
\end{picture}
\rangle
\langle
\unitlength=1mm
\begin{picture}(6.5,5)
\linethickness{0.3mm}
\put(3.2,2.6){\line(1,0){3.2}}
\put(0.9,-.7){\line(1,0){3.2}}
\end{picture}
|
+h.c.\right)
- \ t^\prime \sum
\left(
\left|\unitlength=1mm
\begin{picture}(7,6)
\linethickness{2mm}
\put(0.8,-1.7){\line(1,2){1.8}}
\put(6.4,1.6){\line(-1,2){1.8}}
\linethickness{0.2mm}
\put(3.8,-1.7){\line(1,0){3.6}}
\end{picture}
\right\rangle
\left\langle
\unitlength=1mm
\begin{picture}(7,6)
\linethickness{2mm}
\put(1.8,1.6){\line(1,2){1.8}}
\put(7,-1.7){\line(-1,2){1.8}}
\linethickness{0.2mm}
\put(-0.6,-1.7){\line(1,0){3.6}}
\end{picture}
\right|
+h.c.\right)
+ \ V \sum \left(
|\unitlength=1mm
\begin{picture}(6.2,5)
\linethickness{2mm}
\put(0.9,-.7){\line(1,2){1.8}}
\put(3.8,-.7){\line(1,2){1.8}}
\end{picture}
\rangle
\langle
\unitlength=1mm
\begin{picture}(6.2,5)
\linethickness{2mm}
\put(0.9,-.7){\line(1,2){1.8}}
\put(3.8,-.7){\line(1,2){1.8}}
\end{picture}|+
|
\unitlength=1mm
\begin{picture}(6.5,5)
\linethickness{0.3mm}
\put(3.2,2.6){\line(1,0){3.2}}
\put(0.9,-.7){\line(1,0){3.2}}
\end{picture}\rangle
\langle
\begin{picture}(6.5,5)
\linethickness{0.3mm}
\put(3.2,2.6){\line(1,0){3.2}}
\put(0.9,-.7){\line(1,0){3.2}}
\end{picture}
|
\right),
\end{array}
\end{equation}
\end{widetext}
where the sums run over the 4-site and 6-site loops with all possible
orientations. Although the repulsion is a higher order process, we have included
a repulsion term in the Hamiltonian, and we will treat its amplitude
$V$ as a free parameter to be able to make contact with the Rokhsar-Kivelson model
on the triangular lattice. The hopping amplitudes $t$ and $t^\prime$ are negative, and
although the ratio $t^\prime/t$ is in principle fixed by their expression in the
perturbative expansion, we will also treat it as a free parameter.
Our central goal in this paper is to determine the nature of the ground state as
a function of $t^\prime/t$. With respect to what we already know
about QDMs, the main question is whether a competition between kinetic terms can also
lead to a liquid phase. As we shall see, the answer to that question is positive, a
liquid phase being present in a finite region of the phase diagram in the
$t^\prime{-}V$ plane.
The paper is organized as follows. In Section~\ref{method}, we briefly review the basic
preliminaries used in the rest of the paper. The results obtained with exact
diagonalizations are presented in Section~\ref{resed}, those obtained with quantum
Monte Carlo in Section~\ref{resqmc}, and the conclusions in Section~\ref{concl}.
The perturbation calculation which has motivated the investigation of this QDM is finally
presented in Appendix~\ref{derivation}.
\section{The method}\label{method}
In this section, we present a brief introduction to the numerical methods, to the clusters used
in the analysis, and to the physical concepts underlying the determination of the phase diagram.
More details can be found in Ref.~\onlinecite{ralko}.
Let us first discuss the shape of the finite-size clusters.
In general, a finite cluster is defined by two vectors ${\bf T}_1$ and ${\bf T}_2$ and,
in order to have the symmetries for rotations by $2\pi/3$, they have to satisfy:~\cite{bernu}
\begin{eqnarray}
{\bf{T}}_{1} &=& l {\bf{u}}_{1} + m {\bf{u}}_{2} \nonumber \\
{\bf{T}}_{2} &=& -m {\bf{u}}_{1} + (l+m) {\bf{u}}_{2} \nonumber ,
\end{eqnarray}
where $l$ and $m$ are integers and ${\bf u}_1=(1,0)$ and ${\bf u}_2=(1/2,\sqrt{3}/2)$ are
the unitary vectors defining the triangular lattice. The number of sites in the cluster
is $N= l^2+m^2+lm$. In order to have also the axial symmetry, and therefore all the
symmetries of the infinite lattice, we must take either $lm=0$ or $l=m$. The first
possibility corresponds to type-A clusters (with the notation of Ref.~\onlinecite{ralko}),
with $N=l^2$ sites; the second one gives rise to type-B clusters, with
$N=3l^2$ sites (for examples of both cases, see Fig.~\ref{fig:cluster}).
Since for $t^\prime/t=0$ and $V/t=0$ the ground state belongs to a crystalline phase with
a 12-site unit cell,~\cite{moessner,ralko} we will restrict ourselves in this work
to clusters whose number of sites is a multiple of 12, in order not to frustrate
this order. To limit the finite size effects related to the geometry of the
clusters, we will concentrate on type-B clusters, which are always compatible with this
order. Note that the 6-dimer loop kinetic term does not introduce further restrictions
since all it requires is to be able to accommodate 6-site unit cells.
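As a concrete check of these counting rules, a short Python sketch (our own illustration, not code from the paper) enumerates the type-A and type-B cluster sizes and keeps those compatible with the 12-site unit cell:

```python
# Symmetric triangular-lattice clusters have N = l^2 + m^2 + l*m sites.
# Full point-group symmetry requires l*m = 0 (type A, N = l^2) or
# l = m (type B, N = 3*l^2).  The simulations use clusters whose size
# is a multiple of 12, so as not to frustrate the 12-site-cell order.

def cluster_sizes(lmax):
    sizes = []
    for l in range(1, lmax + 1):
        sizes.append(("A", l, 0, l * l))       # type A: m = 0
        sizes.append(("B", l, l, 3 * l * l))   # type B: l = m
    return sizes

compatible = [c for c in cluster_sizes(12) if c[3] % 12 == 0]
print(compatible)
```

Running this recovers the sizes used in the paper: the type-B clusters with 12, 48, 108 and 432 sites, and the type-A cluster with 36 sites.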
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig1.eps}
\caption{\label{fig:cluster}
Example of type-A (left) and type-B (right) clusters with 16 and 12 sites, respectively.}
\end{figure}
A very important concept in the QDM is the existence of topological sectors.
Indeed, on the triangular lattice, the Hilbert space on a torus splits into four disconnected
topological sectors, defined by the parities of the numbers of dimers cutting two
lines that wind around the two axes of the torus; a sector is denoted by $(p,q)$, with $p,q=0$
(respectively $1$) if the corresponding number is even (respectively odd). One can convince oneself,
by a direct inspection of the effect of the 4-site and 6-site terms, that these numbers
are conserved quantities under the action of the Hamiltonian of Eq.~(\ref{hamilt}).
More generally, this is a consequence of the fact that the topological sectors are not
coupled by any {\it local} perturbation. These topological sectors are extremely useful
to distinguish between valence bond solids and spin liquids. Indeed, valence bond solids
are only consistent with some topological sectors, whereas RVB spin-liquid phases are
characterized by topological degeneracy. Therefore, the main goal will be to investigate
whether, in the thermodynamic limit, the topological sectors are degenerate or not.
In that respect, it is useful to remember that the $(0,1)$ and $(1,0)$ sectors are always
degenerate with either $(0,0)$ or $(1,1)$ (depending on the cluster geometry) since they
contain the same configurations rotated by an angle $\pi/3$.~\cite{ralko}
One can thus, without any loss of generality, restrict oneself to the analysis of the
$(0,0)$ and $(1,1)$ sectors.
Therefore, we define the absolute value of the topological gap as:
\begin{equation}\label{topogap}
\Delta E = |E_{00} - E_{11}|,
\end{equation}
where $E_{00}$ and $E_{11}$ are the total ground-state energies for the topological sectors
with $p=q=0$ and $p=q=1$, respectively. This gap is expected to scale to zero with the cluster
size in the RVB phase.
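The parity conservation can be illustrated with a toy Python sketch on a hypothetical square torus (a simplified geometry, not the triangular lattice of the paper): a local plaquette flip changes the number of dimers crossing a fixed cut by $0$ or $\pm 2$, leaving the parity unchanged.

```python
# Toy illustration of winding-parity conservation (hypothetical square-torus
# geometry, simpler than the paper's triangular lattice).  A dimer covering
# is a list of bonds; we count dimers crossing the vertical cut between
# columns L-1 and 0, whose parity defines the topological index p.
L = 4

def crossings(dimers):
    """Number of dimers crossing the cut between columns L-1 and 0."""
    return sum(1 for (x1, _), (x2, _) in dimers if {x1, x2} == {L - 1, 0})

# Columnar covering of a 4x2 torus: all dimers horizontal, none wraps.
cov = [((0, y), (1, y)) for y in range(2)] + [((2, y), (3, y)) for y in range(2)]

# Local plaquette flip on sites (0,0),(1,0),(0,1),(1,1):
# two horizontal dimers become two vertical ones.
flipped = [d for d in cov if d not in (((0, 0), (1, 0)), ((0, 1), (1, 1)))]
flipped += [((0, 0), (0, 1)), ((1, 0), (1, 1))]

print(crossings(cov) % 2, crossings(flipped) % 2)  # parities agree
```

Only a dimer that wraps across the cut, such as one joining columns $3$ and $0$, changes the crossing count, and local moves can only create or destroy such dimers in pairs.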
Finally, in order to detect a possible dimer order, we also consider the static dimer-dimer
correlations
\begin{equation}\label{dimercorr}
D^{i,j}(r-r^\prime) = \langle D^i(r) D^j(r^\prime) \rangle,
\end{equation}
where $D^i(r)$ is the dimer operator defined as follows: It is a diagonal operator in the configuration
space that equals $1$ if there is a dimer from the site $r$ to the site $r+a_i$, with
$a_1=(1,0)$, $a_2=(1/2,\sqrt{3}/2)$, or $a_3=(-1/2,\sqrt{3}/2)$ and vanishes otherwise.
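As a concrete, hypothetical encoding of this definition (configurations as sets of bonds in integer lattice coordinates, where $a_3 = a_2 - a_1$), the operator $D^i(r)$ reduces to a membership test; the sample configurations below are toy fragments, not valid full coverings:

```python
# Indicator implementation of the dimer operator D^i(r): a configuration is
# a set of bonds (frozensets of two sites).  Integer coordinates (n1, n2)
# in the basis (a_1, a_2) avoid floating-point site labels; a_3 = a_2 - a_1.
STEP = {1: (1, 0), 2: (0, 1), 3: (-1, 1)}  # a_1, a_2, a_3 in lattice coords

def D(i, r, config):
    """1 if the bond from r to r + a_i is occupied, 0 otherwise."""
    s = (r[0] + STEP[i][0], r[1] + STEP[i][1])
    return 1 if frozenset((r, s)) in config else 0

def correlation(i, j, r, rp, samples):
    """Estimate of <D^i(r) D^j(r')> averaged over sampled configurations."""
    return sum(D(i, r, c) * D(j, rp, c) for c in samples) / len(samples)

# Two toy configuration fragments (placeholders, not full dimer coverings).
c1 = {frozenset([(0, 0), (1, 0)]), frozenset([(0, 1), (1, 1)])}
c2 = {frozenset([(0, 0), (0, 1)]), frozenset([(1, 0), (1, 1)])}
print(correlation(1, 1, (0, 0), (0, 1), [c1, c2]))  # 0.5: both dimers occur in c1 only
```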
The method used is the same as that employed by Ralko and collaborators~\cite{ralko} to determine
the phase diagram of the Rokhsar-Kivelson QDM on the triangular lattice: our investigations rely on
Lanczos diagonalizations and Green's function Monte Carlo (GFMC) simulations.
In particular, the GFMC is a zero-temperature stochastic technique based on the power method:
Starting from a given wave function and by applying powers of the Hamiltonian, the ground state is
statistically sampled to extract its energy and equal-time correlation functions.
As in other Monte Carlo algorithms, it is very useful to reduce the statistical
fluctuations by importance sampling, through the definition of a suitable
{\it guiding function}.
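The power-method projection at the heart of GFMC can be caricatured deterministically: applying $G = \Lambda\,\mathbb{1} - H$ repeatedly to a trial vector filters out the ground state. The $3\times 3$ matrix and the shift $\Lambda$ below are made-up toy values, with no stochastic sampling involved:

```python
# Deterministic caricature of the projection behind GFMC: repeated
# application of G = Lambda*I - H to a trial vector converges to the
# ground state of a toy 3x3 Hamiltonian (made-up matrix elements).
H = [[0.0, -1.0, 0.0],
     [-1.0, 0.5, -1.0],
     [0.0, -1.0, 1.0]]
LAM = 3.0  # shift chosen above the top of the spectrum

def apply_G(v):
    return [LAM * v[i] - sum(H[i][j] * v[j] for j in range(3))
            for i in range(3)]

def energy(v):
    """Rayleigh quotient <v|H|v> / <v|v>."""
    Hv = [sum(H[i][j] * v[j] for j in range(3)) for i in range(3)]
    return sum(v[i] * Hv[i] for i in range(3)) / sum(x * x for x in v)

v = [1.0, 1.0, 1.0]
for _ in range(200):
    v = apply_G(v)
    norm = max(abs(x) for x in v)
    v = [x / norm for x in v]
print(round(energy(v), 6))  # -1.0, the ground-state energy of the toy matrix
```

In GFMC the matrix-vector products are sampled stochastically rather than done exactly, which is why the guiding function matters so much for the statistical fluctuations.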
Unfortunately, when dealing with dimer models, it is very hard to implement an accurate and,
at the same time, efficient guiding function for the crystalline phases.
This problem is particularly relevant when the 6-site term becomes dominant. In this case, our
simulations suffer from wild statistical fluctuations, deteriorating the convergence of the GFMC.
As a consequence, we are not able to reach the largest available size, the $432$-site cluster,
for all parameters $t^\prime/t$ and $V/t$.
By contrast, given the simple form of the spin-liquid ground state (which reduces to an
equal-weight superposition of all the configurations at the Rokhsar-Kivelson point),
in the disordered region we can use a guiding function with all weights equal
and obtain very small fluctuations (none at the Rokhsar-Kivelson point)
and, therefore, excellent results at very little computational cost.
Combining these facts, the loss of the GFMC convergence can be interpreted as a signal for the
appearance of a crystalline phase. Of course, this is not a quantitative criterion, but, as
it will be shown in the following, it gives reasonable insight into the emergence of a dimer order.
\begin{figure}
\includegraphics[width=0.40\textwidth]{fig2.eps}
\caption{\label{fig:ED_12_36}
(Color online) Difference between $E_{00}$ and $E_{11}$, the total ground-state energies of the
topological sectors with $p=q=0$ and $p=q=1$, respectively. The results are found by exact
diagonalizations for clusters with 12 and 36 sites.}
\end{figure}
\begin{figure}
\includegraphics[width=0.50\textwidth]{fig3.eps}
\caption{\label{fig:correl}
(Color online) Dimer-dimer correlations on the 36-site cluster. The dimer of reference is
the thickest one in the upper-right corner. The thicker the line, the farther
the correlation is from the uniform value $1/36$.
Solid lines are used when the correlation is higher than $1/36$, and dashed lines when
it is lower.}
\end{figure}
Finally it should be mentioned that, since the different topological
sectors are completely decoupled (each dimer configuration belonging to one and only one of them)
and cannot be connected by the terms contained in the Hamiltonian, within the GFMC it is possible to
work in a given topological sector, making it possible to extract the ground-state properties of
each of them.
\section{Exact Diagonalizations}\label{resed}
To get a first idea of the properties of the model, we start with the results
we have obtained with exact diagonalizations of finite clusters for the model of
Eq.~(\ref{hamilt}) with $V/t=0$.
Let us first begin with the ground-state energy for the 12- and 36-site clusters for both
the topological sectors $(0,0)$ and $(1,1)$ (see Fig.~\ref{fig:ED_12_36}).
Note that the 12-site cluster is of type B, whereas the 36-site one is of type A.
For both sizes, a level crossing occurs at $t^\prime/t \sim 2$.
Below that value, the ground state is in the $(1,1)$ topological sector, in agreement with
the earlier results of Ref.~\onlinecite{ralko} for $V/t=0$.
The main difference between the 12-site and the 36-site clusters is that, for the 36-site
cluster, the topological ground-state energies stay very close over a large parameter range
around $t^\prime/t \sim 2$. This could suggest that, upon increasing the size, this level
crossing might evolve into a phase where these energies are rigorously degenerate, giving
rise to a liquid phase without any crystalline order.
In order to give an idea of the various phases, we report in Fig.~\ref{fig:correl} the
dimer-dimer correlations of Eq.~(\ref{dimercorr}) for the 36-site cluster below, at, and above
the level crossing, which takes place at $t^\prime/t=2.2$ for this size.
For small $t^\prime/t$, the correlations show a pattern similar to that of the intermediate
$\sqrt{12} \times \sqrt{12}$ phase of the standard QDM model, already shown in
Ref.~\onlinecite{ralko}. It should be stressed that, since in these calculations the Hamiltonian
has the translational symmetry, the 12-site unit cell is not directly visible from
Fig.~\ref{fig:correl}, and in order to obtain clearer evidence one should break the symmetry by hand.
Nonetheless, as shown in Ref.~\onlinecite{ralko}, these results are in perfect
agreement with the existence of a $\sqrt{12} \times \sqrt{12}$ phase with a crystalline ground state
in the thermodynamic limit.
When $t^\prime/t$ is large, another pattern arises, which has never been observed
in the standard QDM, and which presents a kind of 6-site triangle ordering. In this case,
the dominant kinetic term involving 6 sites [see the second term of the Hamiltonian~(\ref{hamilt})]
induces a dimer pattern with the same symmetry, possibly inducing a
ground state with a 6-site unit cell in the thermodynamic limit. Also in this case the
translational symmetry of the ground state partially masks the existence of a regular dimer pattern.
Unfortunately, we will not be able to confirm this prediction since, as stated before, the GFMC
algorithm has serious problems of convergence inside this phase and for large clusters.
In any case, for intermediate values of $t^\prime/t$, the correlations decay
very rapidly with the distance and are close to those obtained in the liquid phase of the
standard QDM with $V/t \lesssim 1$ and $t^\prime/t=0$.~\cite{ralko}
This fact provides further evidence of the possible existence of an RVB phase between two
ordered phases in the model with competing kinetic terms and without the dimer repulsion.
Of course, exact diagonalization results alone cannot establish the
stabilization of this liquid phase definitively and, therefore, in the
following section, we will consider a more systematic study of the topological gap,
in order to unveil the existence of a wide disordered region that develops from the
Rokhsar-Kivelson point $V/t=1$ of the standard QDM and survives up to $V/t=0$ and
finite $t^\prime/t$.
\section{Green's Function Monte Carlo}\label{resqmc}
In this section, we use the GFMC method to extend the results of the
previous section to larger clusters, with up to $432$ sites, and to map out the
phase diagram in the $t^\prime{-}V$ plane.
\subsection{The case of $V/t=0$}
Let us first describe the results we have obtained for $V/t=0$ and consider the behavior of the
topological gap given in Eq.~(\ref{topogap}).
In Fig.~\ref{fig:topo_gap1}, for clarity, we divided the results into two sets for
$0 \leq t^\prime/t \leq 0.6$ and $t^\prime/t \geq 1$.
The first remarkable feature is that, for most parameters, the gap decreases between
12 and 48 sites, regardless of its behavior for larger sizes. Therefore, the possibility to
study much larger clusters is crucial for this analysis. Indeed, there is a clear change
of behavior for $t^\prime/t \sim 1.6$, a value beyond which our extrapolations give solid
evidence in favor of a vanishing gap in the thermodynamic limit.
On the other hand, for smaller $t^\prime/t$ ratios, we have clear evidence that $\Delta E$
increases for large sizes. Therefore, we come to the important conclusion that
the crystalline $\sqrt{12} \times \sqrt{12}$ phase is destroyed by increasing the
amplitude of the 6-site term and the system is eventually driven into a liquid RVB phase.
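The finite-size analysis amounts to extrapolating $\Delta E$ linearly in $1/\sqrt{N}$. The following sketch uses synthetic gap values, not the data of the figures, to illustrate the two scenarios (gap closing versus gap surviving):

```python
from math import sqrt

# Least-squares fit of Delta(N) = a + b / sqrt(N); the intercept a is the
# thermodynamic-limit estimate of the topological gap.  The gap values
# below are synthetic placeholders, not the paper's data.
def extrapolate(sizes, gaps):
    xs = [1.0 / sqrt(n) for n in sizes]
    k = len(xs)
    mx, my = sum(xs) / k, sum(gaps) / k
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, gaps))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx  # intercept a

sizes = [12, 48, 108, 432]
gap_liquid = [2.0 / sqrt(n) for n in sizes]       # closes as 1/sqrt(N)
gap_solid = [0.3 + 2.0 / sqrt(n) for n in sizes]  # stays open
print(round(abs(extrapolate(sizes, gap_liquid)), 6))  # 0.0
print(round(extrapolate(sizes, gap_solid), 6))        # 0.3
```

A vanishing intercept signals the topological degeneracy of the RVB phase, while a finite intercept signals a crystalline phase.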
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig4.eps}
\caption{\label{fig:topo_gap1}
(Color online) Topological gaps for $V/t=0$ as a function of $1/\sqrt{N}$, where $N$ is the
number of sites and for different values of $t^\prime/t$. Upper panel: Small values of
$t^\prime/t$, where the gap opens for large clusters. Lower panel: Larger values of
$t^\prime/t$. For $t^\prime/t \gtrsim 1.6 $, the finite-size gap closes upon increasing the
cluster size, signaling a liquid phase.}
\end{figure}
\begin{figure}
\includegraphics[width=0.50\textwidth]{fig5.eps}
\caption{\label{fig:parall_cs}
(Color online) Dimer-dimer correlation function for a 108-site cluster along a
horizontal line as a function of the distance for $t^\prime/t=2$ and $V/t=0$. The dashed
line corresponds to $1/36$, the value in the absence of correlations.}
\end{figure}
To have a further confirmation of the existence of this disordered phase, we have calculated
the dimer-dimer correlations. The results obtained on the 108-site cluster for
$t^\prime/t=2$ are reported in Fig.~\ref{fig:parall_cs}, where the correlation functions
for parallel dimers along the same row are plotted as a function of the distance.
Given the small number of clusters available, a precise size scaling of the order parameter
is not possible, and even a meaningful estimate of the correlation length is difficult.
Nevertheless, the behavior is definitely consistent with an exponential decay and the
uncorrelated value of $1/36$ is approached very rapidly, as expected in a liquid phase
without any crystalline order.
Unfortunately, the larger $t^\prime/t$ region is numerically far more difficult to access.
Indeed, as stated above, although the GFMC is in principle numerically exact, we have not
been able to find an efficient guiding function to perform the importance sampling and
wild statistical fluctuations prevent us from reaching safe convergence for large clusters.
In practice, we have access to clusters up to 108 sites, which are still too small to
predict the thermodynamic behavior. For instance, for $V/t=0$, the convergence stops
before one can observe any increase of the topological gap, and the criterion used
for the phase transition on the other side of the RVB phase cannot be used any more.
However, the lack of convergence is a clear sign that one enters a new (crystalline) phase.
So, if the change of behavior of the topological gap with the size cannot be observed,
we take as a definition of the boundary for the phase transition the parameters for which
the convergence is not good any more.
This is expected to be semi-quantitative, and indeed, as we shall see in the next section
when studying the full diagram, the points obtained with this criterion agree reasonably
well with those obtained with the reopening of the topological gap.
In summary, although the region where the 6-site term dominates over the usual 4-site dimer
flip is not accessible by using GFMC, based on our numerical results for small $t^\prime/t$,
we can safely argue that the crystalline $\sqrt{12} \times \sqrt{12}$ phase is destabilized
by increasing the 6-site kinetic term, leading to a true disordered ground state with
topological degeneracy.
\subsection{Phase diagram in the $t^\prime{-}V$ plane}
In this section we prove that the RVB phase found in the previous paragraph for $V/t=0$ is
connected to the one obtained for the standard QDM, i.e., close to the Rokhsar-Kivelson
point and $t^\prime/t=0$. In order to do that, we have investigated a generalization of
the previous model that also includes a repulsion $V$ between dimers facing each other, see
Eq.~(\ref{hamilt}). For $t^\prime/t=0$, this model reduces to the standard QDM, which has
an RVB phase for $0.75 \lesssim V/t \lesssim 1$.~\cite{moessner,ralko}
To map out the complete phase diagram of this model, we have carried out the same analysis
as in the previous paragraph for different values of $V/t$ between 0 and 1. As an example, we show in
Fig.~\ref{fig:topo_gap2} the finite-size scaling of the topological gap for $t^\prime/t=1$ and
several values of $V/t$. The global behavior is the same as before and we have clear
evidence that the topological gap present at $V/t=0$ persists up to $V/t \sim 0.25$, and that
it opens again for $V/t \sim 0.8$. In that case, as
in many other cases, it turned out to be possible to actually observe the opening
of the gap upon leaving again the RVB phase before the convergence problems were too strong.
\begin{figure}
\includegraphics[width=0.45\textwidth]{fig6.eps}
\caption{\label{fig:topo_gap2}
(Color online) Topological gap for $t^\prime/t=1$ and for various $V/t$ as a function of
$1/\sqrt{N}$, where $N$ is the number of sites. Small and large values of $V/t$
have been shown separately in the upper and lower panels for clarity.}
\end{figure}
\begin{figure}
\includegraphics[width=0.40\textwidth]{fig7.eps}
\caption{\label{fig:phasediag}
(Color online) Phase diagram in the $t^\prime{-}V$ plane. A wide disordered region
extends all the way from the standard QDM ($t^\prime/t=0$ axis) to the purely
kinetic QDM ($V/t=0$ axis). The description of the symbols is given in the text.}
\end{figure}
The resulting phase diagram is depicted in Fig.~\ref{fig:phasediag}, where different
symbols have been used depending on whether the boundary was determined from
the behavior of the topological gap with the system size or from the loss of convergence:
Solid circles when the closing of the topological gap was observable and
solid diamonds when the convergence of the GFMC was lost for large systems.
Interestingly enough, these different criteria build a relatively smooth line,
a good indication that it can be interpreted as a phase boundary.
Note that we did not perform simulations for $V/t>1$ where, for a vanishing $t^\prime/t$,
a crystalline phase with staggered dimer order is stabilized. So we cannot exclude that
the boundary extends beyond $V/t=1$ for small $t^\prime/t$.
Remarkably, the RVB phase we found for $V/t=0$ is connected to the RVB phase
reported before for the standard QDM, i.e., $t^\prime/t=0$, the total RVB phase building
up a large stripe that encompasses a significant portion of the phase diagram.
We have also calculated static correlation functions for several values of the
parameters, but they merely confirm the identification of the phases and are not
reported for brevity.
\section{Conclusions}\label{concl}
Coming back to our long-term motivation, namely to find an RVB phase in a
realistic model of Mott insulators, this paper contains significant results of
two sorts. First of all, we have shown (see Appendix~\ref{derivation}) that,
starting from the quasi-degenerate
mean-field ground state of a Kugel-Khomskii spin-orbital model, one can construct
a QDM with two remarkable properties: It describes a competition between
two kinetic terms of comparable magnitude, and all off-diagonal matrix elements in
the dimer basis are negative. This has allowed us to implement the GFMC
and to investigate the results of the competition between
these terms. It turns out that the competition between these kinetic terms leads to
the disappearance of the $\sqrt{12}\times \sqrt{12}$ crystalline order when
$t^\prime/t \sim 1.6$. This transition is similar to the transition into the RVB phase that
happens in the standard QDM upon approaching the Rokhsar-Kivelson point. Indeed,
the two phases can be connected into a single RVB phase in the context of a generalized QDM.
As far as the numerical investigation of the model is concerned, the main open issue
is to pin down the nature of the phase that occurs when the 6-dimer kinetic term dominates.
Unfortunately, the GFMC suffers from severe statistical fluctuations whenever the guiding
function is not accurate, i.e., for clusters larger than 108 sites and large $t^\prime/t$.
Therefore, we cannot make any definite statements on the phase where the 6-site term
dominates.
Another interesting question is of course the nature of the quantum phase transitions
between these phases (continuous or first order). We are currently working on that
rather subtle issue in the context of the standard QDM.
The general features of the RVB phase are consistent with the phenomenology
of LiNiO$_2$, which exhibits neither orbital nor magnetic long-range order. A number of
points deserve further investigation however. The precise form of the QDM does not
seem to be an issue:
The actual model that can be derived along the Rokhsar-Kivelson lines has more terms
(see Appendix~\ref{derivation}), but preliminary results show that the RVB phase is present
in that model as well.
The fact that the RVB phase does not contain the point $t^\prime/t=1.34$ derived in
Appendix~\ref{derivation} is not really an issue either.
First of all, this ratio was determined for vanishing Hund's
rule coupling and one vanishing hopping integral, and its precise value in that case should
at best be taken as
an indication of its order of magnitude in the actual system. Besides, the other 6-site terms
pull the RVB region down to smaller values of the ratio of the 6-dimer term
to the 4-dimer term. What would deserve more
attention is the validity of the
expansion that leads to the effective QDM. The small parameter of the expansion is not
that small ($\alpha=1/\sqrt{2}$, see Appendix~\ref{derivation}), and it would
be very useful to better understand to what extent such an expansion can be controlled.
Nevertheless, the present results strongly suggest that the presence of an RVB liquid phase
between competing ordered phases is a generic feature of QDMs, and that identifying such a
phase in a realistic Mott insulator via an effective QDM is a very promising route.
We acknowledge useful discussions with P. Fazekas, M. Ferrero, and K. Penc. This work was
supported by the Swiss National Fund and by MaNEP. F.B. is supported by CNR-INFM and MIUR
(COFIN 2005).
cond-mat/0603220
\section{Introduction \label{sec:introduction}}
Diffuse-charge dynamics plays an important role in the response of
electrochemical and biological systems subject to time-dependent
voltages or electric fields~\cite{bazant2004}. The classical example
is impedance spectroscopy in
electrochemistry~\cite{sluyters1970,macdonald1990,parsons1990,geddes1997},
but electrochemical relaxation is also being increasingly exploited in
colloids and microfluidics~\cite{squires2005}. For example, alternating
electric fields have been used to pump or mix liquid
electrolytes~\cite{ramos1999,ajdari2000,gonzalez2000,green2002,green2000b,
ramos2003, studer2002,
studer2004,nadal2002b,iceo2004b,iceo2004a,levitan2005}, to separate or
self-assemble colloids near electrodes~\cite{trau1997, yeh1997,
faure1998, green2000a, marquet2002, nadal2002a, ristenpart2003}, and
to manipulate polarizable
particles~\cite{murtsovkin1996,iceo2004b,yariv2005,
squires2006,rose2006b,rose2006,saintillon2006} or biological cells and
vesicles~\cite{helfrich1974, mitov1993,pethig1996}.
\begin{figure}
\begin{center}
\includegraphics[width=3in]{figs/cartoon-sphere}
\begin{minipage}[h]{2.9in}
\caption[Schematic diagram of model problem]{
\label{figure:metal_colloid_sphere_schematic}
(a) Schematic diagram of a metallic colloidal sphere in a binary
electrolyte subjected to an applied electric field, which has the same
relaxation as a metallic hemisphere on a flat insulating
surface, shown in (c). We also consider the analogous
two-dimensional problems of a metallic cylinder (b) and a half
cylinder on an insulating plane (d).
}
\end{minipage}
\end{center}
\end{figure}
In this paper, we analyze some simple problems exemplifying the
nonlinear response of an electrolyte around an ideally polarizable
object due to diffusion and electromigration. As shown in
Figure~\ref{figure:metal_colloid_sphere_schematic}, we consider the
ionic relaxation around a metallic sphere (a) or cylinder (b) subject
to a suddenly applied uniform background electric field, as in
metallic colloids. Equivalently, we consider a metallic hemisphere
(c) or half-cylinder (d) on an insulating plane, to understand
relaxation around metallic structures on channel walls in
micro-electrochemical devices. Although we do not consider fluid
flow, our analysis of nonlinear electrochemical relaxation is a
necessary first step toward understanding associated problems of
induced-charge electro-osmosis in the same
geometries~\cite{iceo2004b,iceo2004a,levitan2005,rose2006b,rose2006,saintillon2006},
and thus it also has relevance for the case of AC electro-osmosis at
planar electrode
arrays~\cite{ramos1999,ajdari2000,gonzalez2000,green2002,green2000b,
ramos2003, studer2002, studer2004}.
In
electrochemistry~\cite{delahay_book,bockris_book,bard_book,newman_book}
and colloid science~\cite{hunter_book,lyklema_book_vol2}, it is common
to assume that the charged double-layer at a metal surface is very
thin and thus remains in quasi-equilibrium, even during charging
dynamics. As a result, for over a century~\cite{bazant2004}, the
standard model of electrochemical relaxation has been an equivalent
circuit, where the neutral bulk is represented by an Ohmic resistor
and the double layer by a surface
impedance~\cite{sluyters1970,macdonald1990,parsons1990,geddes1997},
which reduces to a linear capacitor at an ideally polarizable
surface. For our model problems, this ``RC circuit'' model was first
applied to electrochemical relaxation around a sphere by Simonov and
Shilov~\cite{simonov1977} and around a cylinder by Bazant and
Squires~\cite{iceo2004b,iceo2004a}. Similar RC-circuit analysis has
been applied extensively to planar electrode arrays in microfluidic
devices, following Ramos et al.~\cite{ramos1999} and
Ajdari~\cite{ajdari2000}.
While convenient for mathematical analysis and often sufficiently
accurate, circuit models neglect the possibility of bulk concentration
gradients, which can arise at large applied voltages~\cite{bazant2004}
and/or when the surface is highly
charged~\cite{dukhin1969,hinch1983,hinch1984,shilov1970}, as well as
nonuniform surface transport of ions through the double
layer~\cite{bikerman1940,deryagin1969}. Dukhin and
Shilov~\cite{dukhin1969,shilov1970} and later Hinch, Sherwood, Chew,
and Sen~\cite{hinch1983,hinch1984} made significant progress beyond
the simple circuit model by including bulk diffusion in their studies
of double layer polarization around highly charged spherical particles
(of fixed charge density) in weak applied fields. One of the main
results of their analysis is that for weak applied electric fields,
bulk concentration gradients appear as a small correction (on the
order of the applied electric field) to a uniform background
concentration. In this work, we will lift the weak-field restriction
and consider the nonlinear response of the system to strong applied
fields, using the same mathematical model as in all prior work -- the
Poisson-Nernst-Planck (PNP) equations of dilute solution
theory~\cite{newman_book}. As we shall see, nonlinear response
generally involves non-negligible bulk diffusion.
There are also difficulties with the traditional macroscopic picture
of the double layer as an infinitely thin surface impedance, or
possibly some more general nonlinear circuit element. At the
microscopic level, the double layer has a more complicated structure
with at least two different regimes: a diffuse layer where the ions
move freely in solution and a compact surface layer, where ions may be
condensed in a Stern mono-layer with its own physical features (such
as surface capacitance, surface diffusivity, and surface
roughness)~\cite{delahay_book,bockris_book,bard_book}. The surface
capacitance may also include the effect of a dielectric coating, where
ions do not penetrate~\cite{ajdari2000}. Mathematically, in one
dimension the capacitor model can be derived as an effective boundary
condition for the neutral bulk (Nernst-Planck) equations by asymptotic
analysis of the PNP equations in the thin double-layer
limit~\cite{bazant2004}. Extending this analysis to higher dimensions,
however, requires allowing for tangential ``surface conduction''
through the double layer on the conductor for large applied electric
fields. Here, we derive effective boundary conditions for the neutral
bulk in the PNP model by following a general mathematical method for
surface conservation laws at microscopically diffuse
interfaces~\cite{chu_surf_cons_laws}.
The nonlinear response of electrochemical systems to strong applied
fields seems to be relatively unexplored. To our knowledge, the only
prior mathematical study of nonlinear relaxation with the PNP model is
the recent work of Bazant, Thornton, and Ajdari on the one-dimensional
problem of parallel-plate blocking electrodes applying a sudden pulsed
voltage~\cite{bazant2004}. (The same analysis has been recently
extended to ``modified-PNP'' models accounting for steric effects in
concentrated solutions~\cite{kilic2006}, which would also be an
important extension of our work.) For applied voltages in the
\emph{weakly nonlinear} regime, which is analogous to weak applied
fields in our problems, they find that the relaxation of the cell to
the steady state requires bulk diffusion processes that appear as a
small correction at $O(\eps)$, where the small parameter $\eps$ is the
ratio of the Debye length to a typical scale of the geometry. (In
contrast, in Refs.~\cite{dukhin1969,hinch1983,hinch1984,shilov1970},
it appears that primarily the strength of the applied electric
field is assumed to be small, although the mathematical limit is not
explicitly defined.) For applied voltages in the \emph{strongly
nonlinear} regime, they show that bulk concentration gradients can
no longer be considered small, since $O(1)$ concentration variations
appear. In both regimes, the absorption of neutral salt by the double
layer (and therefore build up of surface ion density) is the key
driving force for bulk diffusion. This positive salt adsorption in
response to an applied voltage, first noted in Ref.~\cite{bazant2004},
is opposite to the classical ``Donnan effect'' of salt
expulsion~\cite{lyklema_book_vol2}, which occurs if the surface
chemically injects or removes ions during the initial creation of the
double layer~\cite{lyklema2005}.
Bazant \emph{et al.} also emphasize the importance of both the charging \emph{and}
the diffusion time scales in the evolution of electrochemical
systems~\cite{bazant2004}.
Because circuit models inherently neglect diffusion processes, the only
characteristic dynamic time scale that appears is the so-called
RC charging time, $\tau_c = \lambda_D L/D$, where $\lambda_D$ is the
Debye length, $L$ is the system size, and $D$ is the characteristic
diffusivity of the ions~\footnote{Note that when charging is driven by
an externally applied voltage or field, the relevant relaxation time for
double layer charging is \emph{not} the often quoted Debye time,
$\tau_D = \lambda_D^2/D$~\cite{hunter_book,lyklema_book_vol1}. The Debye time
is the correct characteristic response time for double layer charging only
in the unphysical scenario where charge is instantaneously placed on the
particle (as opposed to transported through the electrolyte)~\cite{ferry1948}.
This result has been discovered many times by different scientific
communities~\cite{bazant2004,dukhin1993,dukhin1980,kornyshev1981,
macdonald1970} but only recently seems to be gaining widespread
understanding.}.
However, when concentration gradients are present, diffusion processes
and dynamics at the diffusion time scale, $\tau_L = L^2/D$, may be important.
Most theoretical analyses of electrochemical systems only consider
the dynamics at one of these two dominant time scales -- effectively
decoupling the dynamics at the two time scales.
This decoupling of the dynamics is natural when one considers
the wide separation in the time scales that govern the evolution
of the system: $\tau_L \gg \tau_c$.
An interesting discussion of how the two time scales are coupled using
ideas related to time-dependent asymptotic matching is presented
in Ref.~\cite{bazant2004}.
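The separation among these three time scales can be made concrete with a short numerical sketch. The parameter values below are illustrative assumptions (typical of an aqueous electrolyte around a micron-scale particle), not values taken from the text:

```python
# Illustrative (assumed) parameters for an aqueous electrolyte
lambda_D = 10e-9   # Debye length, m (~10 nm for a ~1 mM solution)
L        = 10e-6   # geometric scale (e.g. particle radius), m
D        = 1e-9    # ionic diffusivity, m^2/s

tau_debye = lambda_D**2 / D   # Debye time: relevant only for instantaneously placed charge
tau_c     = lambda_D * L / D  # RC charging time: double-layer charging through the bulk
tau_L     = L**2 / D          # bulk diffusion time: relaxation of concentration gradients

# The hierarchy tau_debye << tau_c << tau_L (here ~0.1 us, ~100 us, ~0.1 s)
# is what makes it tempting to decouple the charging and diffusion dynamics.
# Note that tau_c is exactly the geometric mean of the other two scales.
```

Note the identity $\tau_c = \sqrt{\tau_D \tau_L}$, which follows directly from the definitions and underlies the matched-asymptotics coupling of the two dynamical regimes discussed above.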
The paper is organized as follows. We begin in
section~\ref{sec:mathematical_model} by carefully considering the thin
double-layer limit of our model problems, which leads to effective
boundary conditions for the neutral-bulk equations in
section~\ref{sec:effective_bcs}. The most interesting new boundary conditions
are surface conservation laws, whose physical content we discuss in
detail in section~\ref{sec:surfproc}, where we also define
dimensionless parameters governing the importance of various surface
transport processes. In section~\ref{sec:steady_response}, we explore
the steady response to large applied electric fields in our model
problems; a notable prediction is the formation of recirculating bulk
diffusion currents coupled to surface transport processes. We then
turn to relaxation dynamics in the three regimes identified by Bazant
\emph{et al.}~\cite{bazant2004}, using similar methods, albeit with nontrivial
modifications for two or three dimensions. We begin with the linear
response to a weak field in section~\ref{sec:transient_response},
where we obtain exact solutions using transform methods for arbitrary
double-layer thickness and also consider AC electric fields. Next, in
section~\ref{sec:weakly_nonlinear_dynamics} we use boundary-layer
methods in space {\it and} time to analyze ``weakly nonlinear''
relaxation in somewhat larger fields, in the asymptotic limit of thin
double layers. Finally, in
section~\ref{sec:strongly_nonlinear_dynamics} we comment on the
challenges of ``strongly nonlinear'' relaxation, where bulk diffusion
and surface conduction dominate the dynamics. We conclude in
section~\ref{sec:conclusions} by discussing limitations of the PNP
model and directions for future research.
\section{Mathematical Model \label{sec:mathematical_model}}
\subsection{ PNP Initial-Boundary-Value Problem }
As a model problem, we consider the response of an isolated, ideally
polarizable sphere (or cylinder) subjected to a uniform, applied
electric field, as shown in
Figure~\ref{figure:metal_colloid_sphere_schematic}. For simplicity,
we focus only on a symmetric, binary electrolyte where both ionic
species have the same diffusivity and charge number. In order to
study nonlinear effects and avoid imposing a time scale, we assume
that the uniform electric field is suddenly applied at $t = 0$.
As in most (if not all) prior work on electrochemical dynamics, we
assume the Poisson-Nernst-Planck equations of dilute solution
theory~\cite{newman_book},
\begin{eqnarray}
\frac{\partial C_+}{\partial t} &=&
\nabla \cdot \left ( D \nabla C_+ + \frac{z_+eD}{kT} C_+ \nabla \Phi \right )
\label{eq:dimensional_C+_eqn_bulk_diffusion_time} \\
\frac{\partial C_-}{\partial t} &=&
\nabla \cdot \left ( D \nabla C_- - \frac{z_+eD}{kT} C_- \nabla \Phi \right )
\label{eq:dimensional_C-_eqn_bulk_diffusion_time} \\
-\varepsilon_s \ensuremath{\nabla^2} \Phi &=& z_+e \left( C_+ - C_- \right)
\label{eq:dimensional_poisson_eqn_bulk_diffusion_time}
\end{eqnarray} where $D$ is the diffusivity, $z_+$ is the charge number of the
positively charged species, $e$ is the charge of a proton, $k$ is
Boltzmann's constant, $T$ is the absolute temperature, and
$\varepsilon_s$ is the electric permittivity of the solution. As
usual, we have used the Nernst-Einstein relation to write the
mobility in terms of the diffusivity, $b = D/kT$. It is also
useful to define the chemical potentials of the ions,
\begin{equation}
\mu_\pm = kT \log C_\pm \pm z_+ e \Phi
\label{eq:mu_dilute}
\end{equation}
from which their fluxes are defined as ${\bf F}_\pm = -b C_\pm \nabla
\mu_\pm$.
At the conductor's surface, we assume the same boundary conditions as
in Ref.~\cite{bazant2004}. We adopt a linear surface-capacitance condition
on the electrostatic potential~\cite{bazant2005},
\begin{equation}
\Phi + \lambda_S
\frac{\partial \Phi}{\partial n} = V
\label{eq:dimensional_stern_bc}
\end{equation}
where $\lambda_S$ is a length characterizing the compact-layer
surface capacitance (e.g. due to a Stern monolayer or a thin
dielectric coating) and $V$ is the potential of the conductor,
which is set either externally or by the condition of fixed total
charge~\cite{iceo2004b,iceo2004a,squires2006}. (We will focus on the
case of zero total charge, where symmetry implies $V=0$ in our simple
geometries.) To focus on charging dynamics, we also assume an ideally
polarizable surface with no-flux boundary conditions: \begin{eqnarray} D
\frac{\partial C_+}{\partial n} + \frac{z_+eD}{kT} C_+ \frac{\partial
\Phi}{\partial n}
&=& 0 \label{eq:dimensional_C+_no_flux_bc} \\
D \frac{\partial C_-}{\partial n} - \frac{z_+eD}{kT} C_- \frac{\partial
\Phi}{\partial n} &=& 0 \label{eq:dimensional_C-_rho_no_flux_bc}
\end{eqnarray} where the direction of the unit normal is taken to point inwards
towards the center of the sphere (\emph{i.e., } \emph{outwards} from the region
occupied by the electrolyte solution). In the far field, we assume
that both the concentration and potential profiles tend toward their
initial conditions, given everywhere by
\begin{eqnarray}
C_{\pm}(R,\theta,\phi,t=0) &=& C_o \label{eq:dimensional_C_ic} \\
\Phi(R,\theta,\phi,t=0) &=& -E_o R \left( 1 - \frac{a^3}{R^3} \right)
\cos \theta
\label{eq:dimensional_phi_ic}.
\end{eqnarray}
where $a$ is the radius of the sphere, $C_o$ is the bulk concentration
far away from the conductor, and $E_o$ is the applied electric field.
For the case of the cylinder, the initial condition for the concentration
profile remains the same and the initial electric potential takes the form
\begin{eqnarray}
\Phi(R,\theta,t=0) &=& -E_o R \left( 1 - \frac{a^2}{R^2} \right)
\cos \theta.
\end{eqnarray}
\subsection{Different Contributions to Ion Transport}
The transport equations
(\ref{eq:dimensional_C+_eqn_bulk_diffusion_time}) and
(\ref{eq:dimensional_C-_eqn_bulk_diffusion_time}) represent
conservation laws for the ionic species where the flux of each species
is a combination of diffusion and electromigration. Because this
paper examines ionic fluxes in detail, let us take a moment to fix the
notation that we shall use to discuss different contributions to
charge and mass transport. All fluxes will be denoted by the
variable ${\bf F}$ (or $F$ for scalar components of flux). Superscripts
will be used to distinguish between the diffusion, $(d)$, and
electromigration, $(e)$, contributions to the flux. Subscripts will
be used to denote the species or quantity with which the flux is
associated. Finally, normal and tangential components of a flux will
be denoted through the use of an extra subscript: $n$ for the normal
component and $t$ for the tangential component. As examples,
${\bf F}^{(e)}_+ = -\frac{z_+eD}{kT} C_+ \nabla \Phi$ represents the flux of
the positively charged species due to electromigration and
$F^{(d)}_{n,c} = -D \frac{\partial C}{\partial n}$ represents the flux
of the neutral salt concentration, $C = (C_+ + C_-)/2$, normal to a
surface arising from diffusion. Table
\ref{tab:summary_of_bulk_fluxes} provides a summary of the various
bulk fluxes that shall arise in our discussion. (We also abuse
notation and use the same symbol $\mu$ for chemical potential with
dimensions and scaled to $kT$.)
\begin{table}
\caption{\label{tab:summary_of_bulk_fluxes} Summary of notations
for the various ion fluxes and chemical potentials, before and after scaling. }
\begin{ruledtabular}
\begin{tabular}{ccc}
& Dimensional Formula~\footnote{
In these formulae, $\rho$ is half the charge density (\emph{not} the total
charge density). Also, we have abused notation and used the same variable
$\rho$ for both the dimensional and dimensionless formulae. $\rho$ in the
dimensional formulae is equal to $C_o \rho$ in the dimensionless formulae.
}
& Dimensionless Formula \\
\hline \\
$\mu_\pm$ & $kT \log C_\pm \pm z_+e\Phi$ & $\log c_\pm \pm \phi$ \\[3pt]
\hline \\
${\bf F}_\pm$ & $-b C_\pm \nabla \mu_\pm$ & $-c_\pm \nabla \mu_\pm$ \\[3pt]
\ &
$ -\left( D \nabla C_\pm \pm \frac{z_+eD}{kT} C_\pm \nabla \Phi \right )$ &
$ -\left( \nabla c_\pm \pm c_\pm \nabla \phi \right )$ \\[3pt]
${\bf F}^{(d)}_\pm$ &
$ -D \nabla C_\pm $ &
$ -\nabla c_\pm $ \\[3pt]
${\bf F}^{(e)}_\pm$ &
$ \mp \frac{z_+eD}{kT} C_\pm \nabla \Phi $ &
$ \mp c_\pm \nabla \phi $ \\[3pt]
\hline \\
${\bf F}_c$ &
$ -\left ( D \nabla C + \frac{z_+eD}{kT} \rho \nabla \Phi \right )$ &
$ -\left( \nabla c + \rho \nabla \phi \right )$ \\[3pt]
${\bf F}^{(d)}_c$ &
$ -D \nabla C $ &
$ -\nabla c $ \\[3pt]
${\bf F}^{(e)}_c$ &
$ -\frac{z_+eD}{kT} \rho \nabla \Phi$ &
$ -\rho \nabla \phi$ \\[3pt]
\hline \\
${\bf F}_\rho$ &
$ -\left( D \nabla \rho + \frac{z_+eD}{kT} C \nabla \Phi \right )$ &
$ -\left( \nabla \rho + c \nabla \phi \right )$ \\[3pt]
${\bf F}^{(d)}_\rho$ &
$ -D \nabla \rho $ &
$ -\nabla \rho$ \\[3pt]
${\bf F}^{(e)}_\rho$ &
$ -\frac{z_+eD}{kT} C \nabla \Phi$ &
$ -c \nabla \phi$ \\[3pt]
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Dimensionless Formulation}
To facilitate the analysis of the model problem, it is convenient to
nondimensionalize the governing equations and boundary conditions.
Scaling length by the radius of the sphere, $a$, time by the bulk
diffusion time $a^2 / D$, and the electric potential by the thermal voltage
divided by the cation charge number, $kT/z_+e$, the governing equations
(\ref{eq:dimensional_C+_eqn_bulk_diffusion_time}) --
(\ref{eq:dimensional_poisson_eqn_bulk_diffusion_time})
become
\begin{eqnarray}
\frac{\partial c_+}{\partial t} &=&
\nabla \cdot \left ( \nabla c_+ + c_+ \nabla \phi \right )
\label{eq:c+_eqn_bulk_diffusion_time} \\
\frac{\partial c_-}{\partial t} &=&
\nabla \cdot \left ( \nabla c_- - c_- \nabla \phi \right )
\label{eq:c-_eqn_bulk_diffusion_time} \\
-\eps^2 \ensuremath{\nabla^2} \phi &=& \left( c_+ - c_- \right) / 2
\label{eq:poisson_eqn_bulk_diffusion_time}
\end{eqnarray}
where $c_\pm$, $\phi$, and $t$ are the dimensionless concentrations,
electric potential and time, respectively, the spatial derivatives
are with respect to the dimensionless position,
and $\eps$ is the ratio of the Debye screening length,
$\lambda_D = \sqrt{\frac{\varepsilon_s k T}{2 z_+^2 e^2 C_o}}$, to the
radius of the sphere. The boundary conditions at the surface of the
sphere and the initial conditions become
\begin{eqnarray}
\phi + \delta \eps \frac{\partial \phi}{\partial n} &=& v
\label{eq:stern_bc} \\
\frac{\partial c_+}{\partial n} + c_+ \frac{\partial \phi}{\partial n}
&=& 0 \label{eq:c+_no_flux_bc} \\
\frac{\partial c_-}{\partial n} - c_- \frac{\partial \phi}{\partial n}
&=& 0 \label{eq:c-_no_flux_bc} \\
c_\pm(r,\theta,\phi,t=0) &=& 1 \label{eq:c_pm_ic} \\
\phi(r,\theta,\phi,t=0) &=& -E_o r \left( 1 - \frac{1}{r^3} \right)
\cos \theta \label{eq:phi_ic}.
\end{eqnarray}
In the far field, the dimensionless concentrations approach $1$ and the
electric potential approaches $-E_o r \cos \theta$.
Note that in nondimensionalizing the surface capacitance boundary condition
(\ref{eq:dimensional_stern_bc}), we have chosen to introduce a new
dimensionless parameter $\delta \equiv \lambda_S/\lambda_D$ which makes
it possible to study the effects of the Stern layer capacitance
independently from the double layer thickness~\cite{bazant2005}.
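As a sanity check on the magnitudes of $\eps$ and $\delta$, one can evaluate the Debye length from its definition for representative conditions. The numerical inputs below (a 1 mM aqueous 1:1 electrolyte around a 10 $\mu$m sphere, with a sub-nanometer compact layer) are illustrative assumptions, not values specified in the text:

```python
import math

# Physical constants
e = 1.602176634e-19   # proton charge, C
k = 1.380649e-23      # Boltzmann constant, J/K

# Illustrative (assumed) parameters
eps_s    = 78.4 * 8.8541878128e-12  # permittivity of water, F/m
T        = 298.0                    # temperature, K
z        = 1                        # cation charge number z_+
C_o      = 1.0 * 6.02214076e23      # bulk concentration: 1 mol/m^3 (1 mM), ions/m^3
a        = 10e-6                    # sphere radius, m
lambda_S = 0.5e-9                   # effective compact-layer thickness, m (assumed)

# Debye screening length, lambda_D = sqrt(eps_s k T / (2 z^2 e^2 C_o))
lambda_D = math.sqrt(eps_s * k * T / (2 * z**2 * e**2 * C_o))

eps   = lambda_D / a         # thin-double-layer parameter (~1e-3 here)
delta = lambda_S / lambda_D  # compact/diffuse layer ratio (~0.05 here)
```

For these numbers $\lambda_D \approx 10$ nm, so $\eps \sim 10^{-3}$, which is why the thin double-layer limit $\eps \rightarrow 0$ invoked below is such a good approximation for micron-scale colloids.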
Because the charge density and neutral salt concentration are important for
understanding the behavior of electrochemical transport at high applied
fields, it is often useful to formulate the governing equations in terms
of the average concentration, $c = \left( c_+ + c_- \right)/2$, and half the
charge density, $\rho = \left( c_+ - c_- \right)/2$
\cite{bazant2004,bazant2005, bonnefont2001}. Using these definitions,
(\ref{eq:c+_eqn_bulk_diffusion_time}) --
(\ref{eq:poisson_eqn_bulk_diffusion_time}) can be rewritten as
\begin{eqnarray}
\frac{\partial c}{\partial t} &=&
\nabla \cdot \left ( \nabla c + \rho \nabla \phi \right )
\label{eq:c_eqn_bulk_diffusion_time} \\
\frac{\partial \rho}{\partial t} &=&
\nabla \cdot \left ( \nabla \rho + c \nabla \phi \right )
\label{eq:rho_eqn_bulk_diffusion_time} \\
-\eps^2 \ensuremath{\nabla^2} \phi &=& \rho
\label{eq:c_rho_poisson_eqn_bulk_diffusion_time}
\end{eqnarray}
Throughout our discussion, we shall alternate between this formulation and
equations (\ref{eq:c+_eqn_bulk_diffusion_time}) --
(\ref{eq:poisson_eqn_bulk_diffusion_time}) depending on the context.
The initial and boundary conditions for this set of equations are easily
derived from (\ref{eq:stern_bc}) -- (\ref{eq:phi_ic}). Here we summarize
those boundary conditions that change:
\begin{eqnarray}
\frac{\partial c}{\partial n} + \rho \frac{\partial \phi}{\partial n}
&=& 0 \label{eq:c_no_flux_bc} \\
\frac{\partial \rho}{\partial n} + c \frac{\partial \phi}{\partial n}
&=& 0 \label{eq:rho_no_flux_bc} \\
c(r,\theta,\phi,t=0) &=& 1 \label{eq:c_ic} \\
\rho(r,\theta,\phi,t=0) &=& 0 \label{eq:rho_ic}
\end{eqnarray}
For the remainder of this paper, we shall work primarily with
dimensionless equations. On occasion, we will mention the dimensional
form of various expressions and equations to help make their physical
interpretation more apparent.
\subsection{Electroneutral Bulk Equations}
In the context of electrokinetics, it is desirable to reduce the complexity
of the electrochemical transport problem by replacing the PNP equations
with a simpler set of equations that treats the bulk electrolyte and
the double layer as separate entities.
Circuit models \cite{bazant2004,iceo2004b,ramos2003,iceo2004a} have been
used extensively to achieve this goal by reducing the transport problem to
an electrostatics problem.
However, circuit models make the rather stringent assumption that bulk
concentrations remain uniform at all times.
Unfortunately, at high applied electric fields, this assumption is no longer
valid because concentration gradients become important \cite{bazant2004}.
In the present analysis, we consider an alternative simplification of the
PNP equations that allows for bulk concentration variations.
Since we are interested in colloidal systems where particle diameters are
on the order of microns, $\eps$ is very small, which suggests that
we consider the thin double layer limit ($\eps \rightarrow 0$).
In this limit, the bulk remains locally electroneutral, so it is
acceptable to replace Poisson's equation with the local electroneutrality
condition~\cite{newman_book,chu_thesis_2005}:
\begin{eqnarray}
\sum_i z_i c_i = 0 \label{eq:local_electroneutrality}.
\end{eqnarray}
For the case of symmetric, binary electrolytes,
(\ref{eq:local_electroneutrality}) leads to the electroneutral Nernst-Planck
equations:
\begin{eqnarray}
\frac{\partial c}{\partial t} &=& \nabla^2 c
\label{eq:dlc_c_eqn_bulk} \\
0 &=& \nabla \cdot \left ( c \nabla \phi \right )
\label{eq:dlc_rho_eqn_bulk} \\
\rho &=& 0 \label{eq:dlc_LEN}.
\end{eqnarray}
Notice that the common practice of modeling the bulk electrolyte as a
linear resistor obeying Ohm's law, $\nabla^2\phi=0$, arises from these
equations if the concentration profile is uniform.
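The reduction to Ohm's law can be seen in a one-dimensional sketch (our own illustration under an assumed planar geometry; the paper's actual geometries are spheres and cylinders). Since $\nabla \cdot (c \nabla \phi) = 0$ makes the current density constant, the potential follows a resistance integral and is linear exactly when $c$ is uniform:

```python
def potential_profile(c, n=1000, phi0=0.0, phi1=1.0):
    """Solve d/dx(c dphi/dx) = 0 on [0,1] with phi(0)=phi0, phi(1)=phi1.

    The current density J = -c dphi/dx is constant, so
    phi(x) = phi0 - J * int_0^x dx'/c(x').
    """
    h = 1.0 / n
    # cumulative "resistance" integral int_0^x dx'/c(x'), trapezoidal rule
    r = [0.0]
    for i in range(n):
        x0, x1 = i * h, (i + 1) * h
        r.append(r[-1] + 0.5 * h * (1.0 / c(x0) + 1.0 / c(x1)))
    J = (phi0 - phi1) / r[-1]           # constant current density
    return [phi0 - J * ri for ri in r]  # phi at the n+1 grid points

# Uniform concentration: Ohm's law, phi(x) = x (a linear profile).
phi_ohmic = potential_profile(lambda x: 1.0)
# An (assumed) concentration gradient bends the profile: phi = ln(1+x)/ln 2.
phi_graded = potential_profile(lambda x: 1.0 + x)
```

The second profile illustrates why concentration gradients at high fields invalidate the linear-resistor picture: the same equations, with nonuniform $c$, no longer satisfy $\nabla^2 \phi = 0$.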
We emphasize that these equations only describe the electrochemical
system at the macroscopic level (\emph{i.e., } in the bulk region of the solution).
The microscopic structure within the double layer, where local
electroneutrality breaks down, is completely neglected.
Therefore, any physical effects of double layer structure can only be
incorporated into the mathematical model via effective boundary conditions.
\section{Effective Boundary Conditions Outside the Double
Layer \label{sec:effective_bcs}}
When local electroneutrality is used in place of Poisson's equation,
the physical boundary conditions imposed at electrode surfaces must
be modified to account for the microscopic structure within the double
layer. Fortunately, in the $\eps \rightarrow 0$ limit, the double
layer remains in quasi-equilibrium, so the Gouy-Chapman-Stern
model~\cite{bard_book} can be used to derive effective boundary
conditions for the system. We emphasize that the GCS model is
\emph{not} an assumption; rather, it emerges as the leading-order
approximation in an asymptotic analysis of the thin double-layer
limit. (See Ref.~\cite{bazant2004} for the history of this well known
result.) Because the general form for the effective boundary
conditions of locally electroneutral electrochemical systems has not
been extensively discussed in the literature, we provide a detailed
derivation of these boundary conditions and discuss some associated
dimensionless parameters.
\subsection{Compact-layer Surface Capacitance}
We begin our discussion of effective boundary conditions by
considering the ``Stern boundary condition'' (\ref{eq:stern_bc}),
which describes the (linear) capacitance of a possible compact layer
on the surface. Because Eq.~(\ref{eq:stern_bc}) involves the electric
potential and electric field at the inner edge of the diffuse charge
layer, it cannot be directly used as a boundary condition for the
locally electroneutral equations (\ref{eq:dlc_c_eqn_bulk}) --
(\ref{eq:dlc_LEN}). However, by rearranging (\ref{eq:stern_bc}) and
using the GCS model, it is possible to rewrite the Stern boundary
condition so that it only explicitly involves the electric potential
and average concentration at the macroscopic surface of the electrode
(\emph{i.e., } the outer edge of the diffuse charge
layer)~\cite{bazant2004,bazant2005}:
\begin{equation}
\zeta + 2 \delta \sqrt{c}
\sinh \left( \zeta/2 \right) = v - \phi.
\label{eq:stern_bc_GCS}
\end{equation}
Here $\zeta$ is the potential drop across the diffuse part of the
double layer (\emph{i.e., } zeta-potential), $v$ is the potential of the metal
sphere in the thermal voltage scale, and $\phi$ and $c$ are the values
of the electric potential and average concentration at the
outer edge of the diffuse charge layer.
With dimensions, (\ref{eq:stern_bc_GCS}) is given by
\begin{equation}
\tilde{\zeta}
+ 2 \lambda_S \sqrt{\frac{2 k T C}{\varepsilon_s}}
\sinh \left( \frac{z_+ e \tilde{\zeta}}{2 kT} \right)
= V - \Phi
\end{equation}
where $\tilde{\zeta}$ is the dimensional zeta-potential.
Note that, within the GCS model, the normal derivative of the electric
potential depends on the double-layer structure only through the
zeta-potential.
A more detailed derivation of this form of the Stern boundary condition
can be found in~\cite{bazant2005}.
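In practice, (\ref{eq:stern_bc_GCS}) is a transcendental equation that must be solved numerically for $\zeta$ at each point of the surface. Because the left-hand side is strictly increasing in $\zeta$, a bisection search suffices; the sketch below is our own illustration (the parameter values in the usage lines are assumptions):

```python
import math

def zeta_from_gcs(v_minus_phi, delta, c, tol=1e-12):
    """Solve zeta + 2*delta*sqrt(c)*sinh(zeta/2) = v - phi for zeta.

    The left-hand side is strictly increasing in zeta, and the sinh term
    has the same sign as zeta, so the root lies between 0 and v - phi.
    """
    def f(z):
        return z + 2.0 * delta * math.sqrt(c) * math.sinh(0.5 * z) - v_minus_phi

    lo, hi = sorted((0.0, v_minus_phi))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# With no compact layer (delta = 0) the full drop is across the diffuse layer;
# a finite delta shares the drop with the Stern layer, reducing zeta.
z0 = zeta_from_gcs(2.0, delta=0.0, c=1.0)  # ~ 2.0 (all across the diffuse layer)
z1 = zeta_from_gcs(2.0, delta=0.3, c=1.0)  # smaller: Stern layer takes part of it
```

The monotonicity also shows why the solution is unique: for fixed $c$ and $\delta$, each value of $v - \phi$ maps to exactly one $\zeta$.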
\subsection{Diffuse-layer Surface Conservation Laws \label{sec:ionic_fluxes}}
The effective boundary conditions for ionic fluxes are more
complicated. Because the physical domain for the macroscopic equations
(\ref{eq:dlc_c_eqn_bulk}) -- (\ref{eq:dlc_LEN}) excludes the diffuse
part of the double layer, the no-flux boundary conditions do not apply
-- it is possible for there to exist ion flux between the bulk region and
the double layer. Moreover, there is also the possibility of ion transport
within the double layer itself (often neglected) which must be accounted for.
\begin{figure}
\begin{center}
\includegraphics[width=2.0in]{figs/surface_conservation_law}
\begin{minipage}[h]{3in}
\caption[Schematic diagram of normal and tangential fluxes.]{
\label{figure:surface_conservation_law}
Schematic diagram of normal and tangential fluxes involved in the
surface conservation law (\ref{eq:general_form_surf_conserv_law}).
The shaded circle represents the metal particle; the dotted circle
represents the outer ``edge'' of the double layer.
}
\end{minipage}
\end{center}
\end{figure}
The derivation of the effective flux boundary conditions
(\ref{eq:effective_flux_bc}) is based on a general theory of surface
conservation laws at microscopically diffuse interfaces, which we develop
in Ref.~\cite{chu_surf_cons_laws}. The basic physical idea is to integrate
out the spatial variation within the double layer in the direction
normal to the electrode-electrolyte interface. While intuitively
obvious, carrying out the integration involves careful use of
asymptotic analysis to address the mathematical subtleties of
integration over boundary layers. The
theory tells us that effective flux
boundary conditions have the physically apparent form
\begin{equation} \frac{\partial
\Gamma_i}{\partial t} = -\nabla_s \cdot {\bf J}_{t,i} + F_{n,i}
\label{eq:general_form_surf_conserv_law}
\end{equation}
where $(\nabla_s \cdot )$ denotes surface divergence, ${\bf J}_{t,i}$ is the
tangential flux within and $F_{n,i}$ is the normal flux into the boundary
layer (see Figure~\ref{figure:surface_conservation_law}).
The effective fluxes ${\bf J}_{t,i}$ and $F_{n,i}$ are directly related to the
flux ${\bf F}_i$ for the transport process via
\begin{eqnarray}
{\bf J}_{t,i} &=& \eps \int_0^\infty
\left( \tilde{{\bf F}}_{t,i} - \hat{{\bf F}}_{t,i} \right) dz
\label{eq:general_boundary_layer_fluxes_s} \\
F_{n,i} &=& {\bf F}_i \cdot \hat{n}
\label{eq:general_boundary_layer_fluxes_n}
\end{eqnarray}
where $\tilde{{\bf F}}$ and $\hat{{\bf F}}$ denote the flux within the boundary
layer and the flux in the bulk just outside of the boundary layer, and
the integration is over the entire boundary layer (\emph{i.e., } $z$ is
the inner variable of a boundary layer analysis).
For electrochemical transport, the flux is given by the Nernst-Planck
equation
\begin{equation}
{\bf F}_i = - \left( \nabla c_i + z_i c_i \nabla \phi \right).
\label{eq:nernst_planck_flux}
\end{equation}
Substituting this expression into
(\ref{eq:general_boundary_layer_fluxes_s}) --
(\ref{eq:general_boundary_layer_fluxes_n}), rearranging a bit,
and using the definition of a surface excess
concentration~\cite{chu_surf_cons_laws,hunter_book,lyklema_book_vol2}
\begin{equation}
\Gamma_i \equiv \eps \int_0^\infty \gamma_i dz
= \eps \int_0^\infty \left( \tilde{c}_i - \hat{c}_i \right) dz,
\label{eq:dlc_gamma_def}
\end{equation}
we obtain
\begin{eqnarray}
{\bf J}_{t,i} &=&
-\left( \nabla_s \Gamma_i + z_i \Gamma_i \nabla_s \hat{\phi}
+ \eps z_i \int_0^\infty \tilde{c}_i \nabla_s \tilde{\psi} dz \right)
\label{eq:ion_transport_fluxes_s} \\
F_{n,i} &=& -\frac{\partial c_i}{\partial n}
-z_i c_i \frac{\partial \phi}{\partial n}.
\label{eq:ion_transport_fluxes_n}
\end{eqnarray}
where $\tilde{\psi} \equiv \tilde{\phi} - \hat{\phi}$ is the excess electric potential
within the boundary layer and $\nabla_s$ denotes a surface
gradient.
As before, the tilde ($\tilde{\ }$) and hat ($\hat{\ }$) accents denote
quantities within the boundary layer and in the bulk immediately outside of
the boundary layer.
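In more detail, using $\tilde{\phi} = \hat{\phi} + \tilde{\psi}$, the
tangential components of (\ref{eq:nernst_planck_flux}) evaluated inside and
outside the boundary layer combine as
\begin{eqnarray*}
\tilde{{\bf F}}_{t,i} - \hat{{\bf F}}_{t,i}
&=& -\left( \nabla_s \tilde{c}_i + z_i \tilde{c}_i \nabla_s \tilde{\phi} \right)
+ \left( \nabla_s \hat{c}_i + z_i \hat{c}_i \nabla_s \hat{\phi} \right) \\
&=& -\nabla_s \gamma_i - z_i \gamma_i \nabla_s \hat{\phi}
- z_i \tilde{c}_i \nabla_s \tilde{\psi},
\end{eqnarray*}
so that multiplying by $\eps$, integrating over $z$, and commuting the surface
gradient with the boundary-layer integral yields each term of
(\ref{eq:ion_transport_fluxes_s}).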
Finally, the effective flux boundary conditions for electrochemical
transport follow by using these results in
(\ref{eq:general_form_surf_conserv_law}):
\begin{eqnarray}
\frac{\partial \Gamma_i}{\partial t} &=&
\nabla_s \cdot
\left[
\nabla_s \Gamma_i + z_i \Gamma_i \nabla_s \phi
+ \eps z_i \int_0^\infty \tilde{c}_i \nabla_s \tilde{\psi} ~dz
\right] \nonumber \\
&-& \left( \frac{\partial c_i}{\partial n}
+z_i c_i \frac{\partial \phi}{\partial n} \right)
\label{eq:effective_flux_bc}
\end{eqnarray}
Note that even though our choice of electric potential scale eliminates the
need to explicitly refer to the ionic charge numbers, $z_i$, we opt to
continue using them in the present discussion so that it is clear where the
charge number should appear for alternative choices of the electric potential
scale; in the following discussion, the $z_i$ are essentially the sign of
the ``dimensional'' ionic charge numbers.
There are a few important features of (\ref{eq:effective_flux_bc}) worth
mentioning. First, the surface transport term (first term on the right
hand side) does not always contribute to the leading order effective flux
boundary condition.
Whether the surface conduction term must be retained at leading
order depends on the magnitudes of $\Gamma_i$ (which in turn depends on
the zeta-potential) and the tangential component of the bulk electric
field. Interestingly, when the surface transport term is significant,
the flux boundary condition depends explicitly on the small
parameter~$\eps$. Also, (\ref{eq:effective_flux_bc}) allows for two important
physical effects that only arise for 2- and 3-D systems:
(i) non-uniform double layer charging and
(ii) surface transport within the double layer itself.
The presence of these effects leads to richer behavior for 2- and 3-D systems
compared to the 1D system studied in~\cite{bazant2004}.
To put (\ref{eq:effective_flux_bc}) into a more useful form, we use
the GCS model of the double layer to express the surface flux densities
in terms of the zeta-potential and the bulk concentration.
From the GCS model~\cite{bard_book,newman_book}, we
know that the excess concentration of each ionic species is given by
\begin{equation}
\gamma_i = \tilde{c}_i - \hat{c}_i = \hat{c} \left ( e^{- z_i \tilde{\psi}} - 1 \right )
\label{eq:dlc_gamma}
\end{equation}
and that
\begin{equation}
\frac{\partial \tilde{\psi}}{\partial z} =
-2 \sqrt{\hat{c}} \sinh \left( \frac{z_+ \tilde{\psi}}{2} \right).
\label{eq:dlc_dpsi_dz}
\end{equation}
Using these expressions, it is straightforward to show that the surface
excess concentration is
\begin{equation}
\Gamma_i = \frac{2 \eps \sqrt{\hat{c}}}{\abs{z_i}}
\left( e^{-z_i \zeta/2} - 1 \right).
\label{eq:dlc_Gamma}
\end{equation}
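This expression follows by changing the integration variable from $z$ to
$\tilde{\psi}$ via (\ref{eq:dlc_dpsi_dz}). Taking $z_+ = 1$ in the present
scaling (so that $z_i = \pm 1$) and using the identity
$e^{-z_i \tilde{\psi}} - 1 =
-2 z_i e^{-z_i \tilde{\psi}/2} \sinh(\tilde{\psi}/2)$,
\begin{eqnarray*}
\Gamma_i &=& \eps \int_0^\infty \hat{c}
\left( e^{-z_i \tilde{\psi}} - 1 \right) dz
= \eps \int_\zeta^0
\frac{\hat{c} \left( e^{-z_i \tilde{\psi}} - 1 \right)}
{-2 \sqrt{\hat{c}} \sinh ( \tilde{\psi}/2 )} \, d\tilde{\psi} \\
&=& \eps\, z_i \sqrt{\hat{c}} \int_\zeta^0 e^{-z_i \tilde{\psi}/2}
\, d\tilde{\psi}
= 2 \eps \sqrt{\hat{c}} \left( e^{-z_i \zeta/2} - 1 \right),
\end{eqnarray*}
in agreement with (\ref{eq:dlc_Gamma}).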
Therefore, the first two surface flux density terms in
(\ref{eq:effective_flux_bc}) can be written as
\begin{eqnarray}
\nabla_s \Gamma_i + z_i \Gamma_i \nabla_s \hat{\phi} &=&
\frac{\Gamma_i}{2} \nabla_s \log \hat{c}
+ z_i \Gamma_i \nabla_s \hat{\phi} \nonumber \\
&-& \ \eps~\sgn{z_i} \sqrt{\hat{c}}~e^{- z_i \zeta/2} \nabla_s \zeta.
\label{eq:dlc_surface_flux_terms_1_and_2}
\end{eqnarray}
To evaluate the last term in the surface flux density, we observe that
\begin{equation}
\nabla_s \tilde{\psi} = \frac{\partial \tilde{\psi}}{\partial z}
\left( -\frac{\nabla_s \zeta}
{2 \sqrt{\hat{c}}~\sinh \left( \zeta/2 \right)}
+ \frac{z}{2} \nabla_s \log \hat{c} \right),
\end{equation}
which follows directly by comparing the normal and surface derivatives
of the leading order expression for the electric potential within
the double layer~\cite{bazant2005,bonnefont2001,bazant2004}
\begin{equation}
\tilde{\psi}(z) = 4 \tanh^{-1}\left(\tanh(\zeta/4)
e^{-\sqrt{\hat{c}}z}\right).
\label{eq:psi_gc_solution}
\end{equation}
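Explicitly, writing $T \equiv \tanh(\zeta/4)$, differentiation of
(\ref{eq:psi_gc_solution}) gives
\begin{eqnarray*}
\frac{\partial \tilde{\psi}}{\partial z} &=&
\frac{-4 \sqrt{\hat{c}}\, T e^{-\sqrt{\hat{c}} z}}
{1 - T^2 e^{-2 \sqrt{\hat{c}} z}}, \\
\nabla_s \tilde{\psi} &=&
\frac{4 e^{-\sqrt{\hat{c}} z}}{1 - T^2 e^{-2 \sqrt{\hat{c}} z}}
\left( \frac{{\rm sech}^2(\zeta/4)}{4} \nabla_s \zeta
- T z \nabla_s \sqrt{\hat{c}} \right),
\end{eqnarray*}
and taking the ratio of the two, together with
${\rm sech}^2(\zeta/4)/4\tanh(\zeta/4) = 1/2\sinh(\zeta/2)$ and
$\nabla_s \sqrt{\hat{c}} = \tfrac{1}{2} \sqrt{\hat{c}}\, \nabla_s \log \hat{c}$,
yields the expression for $\nabla_s \tilde{\psi}$ above.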
Using this result, the integral in (\ref{eq:effective_flux_bc}) greatly
simplifies and yields
\begin{eqnarray}
\eps z_i \int_0^\infty \tilde{c}_i \nabla_s \tilde{\psi} ~dz &=&
\eps~\sgn{z_i} \sqrt{\hat{c}}~e^{-z_i \zeta/2} \nabla_s \zeta \nonumber \\
&+& \frac{\Gamma_i}{2} \nabla_s \log \hat{c}
\label{eq:dlc_surface_flux_term_3}.
\end{eqnarray}
Finally, combining (\ref{eq:dlc_surface_flux_terms_1_and_2}) and
(\ref{eq:dlc_surface_flux_term_3}), the effective flux boundary condition
(\ref{eq:effective_flux_bc}) becomes
\begin{eqnarray}
\frac{\partial \Gamma_i}{\partial t} &=&
\nabla_s \cdot
\left[ \Gamma_i \nabla_s \log \hat{c}
+ z_i \Gamma_i \nabla_s \hat{\phi} \right] \nonumber \\
& & - \left( \frac{\partial c_i}{\partial n}
+z_i c_i \frac{\partial \phi}{\partial n} \right)
\label{eq:effective_flux_bc_GCS}
\end{eqnarray}
where (dropping hats) $c$ and $\phi$ are understood to be in the bulk,
just outside the double layer. Notice that the tangential gradients
in the zeta-potential have vanished in this equation so that the
surface flux density of the individual species is independent of
$\nabla_s \zeta$.
In terms of the (dimensionless) chemical potentials of the ions,
\begin{equation}
\mu_i = \log c_i + z_i \hat{\phi},
\end{equation}
the surface conservation law (\ref{eq:effective_flux_bc_GCS}) reduces
to a very simple form,
\begin{equation}
\frac{\partial \Gamma_i}{\partial t} =
\nabla_s\cdot(\Gamma_i\nabla\mu_i) - \hat{n}\cdot c_i\nabla\mu_i \label{eq:mubc}
\end{equation}
where it is clear that the tangential gradient in the {\it bulk}
chemical potential just outside the double layer drives the transport
of the {\it surface} excess concentration of each ion, as well as the
bulk transport. This form of the surface conservation law should hold
more generally, even when the chemical potential does not come from
dilute solution theory (PNP equations).
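To verify this reduction explicitly, note that in the electroneutral bulk
(where $c_i = c$)
\[
\Gamma_i \nabla_s \mu_i
= \Gamma_i \nabla_s \log c + z_i \Gamma_i \nabla_s \hat{\phi},
\qquad
\hat{n} \cdot c_i \nabla \mu_i
= \frac{\partial c_i}{\partial n} + z_i c_i \frac{\partial \phi}{\partial n},
\]
so each term of (\ref{eq:mubc}) matches its counterpart in
(\ref{eq:effective_flux_bc_GCS}).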
Effective boundary conditions similar to
(\ref{eq:effective_flux_bc_GCS}) describing the dynamics of the double
layer have been known for some time. In the late 1960s, Dukhin,
Deryagin, and Shilov essentially used (\ref{eq:effective_flux_bc_GCS})
in their studies of surface conductance and the polarization of the
diffuse charge layer around spherical particles with thin double
layers at weak applied fields~\cite{deryagin1969,dukhin1969,
shilov1970}. Later, Hinch, Sherwood, Chew, and Sen used similar
boundary conditions in their extension of the work of Dukhin \emph{et al. } to
explicitly calculate the tangential flux terms for a range of large
surface potentials and asymmetric
electrolytes~\cite{hinch1983,hinch1984}. A key feature of both of
these studies is the focus on small deviations from bulk equilibrium.
As a result, the effective boundary conditions used are basically
applications of (\ref{eq:effective_flux_bc_GCS}) for weak
perturbations to the background concentration and electric potential.
Our work differs from these previous analyses because we do not
require that the bulk concentration only {\em weakly} deviates from a
uniform profile, and we more rigorously justify the approximation
through the use of matched asymptotics. In addition, our analysis is
not restricted to the use of the GCS model for the double layer;
equations (\ref{eq:ion_transport_fluxes_s}) --
(\ref{eq:effective_flux_bc}) are valid even for more general models of
the boundary layer.
Before moving on, we write the effective boundary conditions in
dimensional form to emphasize their physical interpretation:
\begin{eqnarray}
\frac{\partial \bar{\Gamma}_i}{\partial t} &=&
\nabla_s \cdot
\left[ D \bar{\Gamma}_i
\nabla_s \log \left( \frac{C}{C_o} \right)
+ \frac{z_i e D}{kT} \bar{\Gamma}_i \nabla_s \Phi \right] \nonumber \\
&-& \ \left( D \frac{\partial C_i}{\partial n}
+ \frac{z_i e D}{kT} C_i \frac{\partial \Phi}{\partial n} \right),
\end{eqnarray}
where $\bar{\Gamma}_i$ is the dimensional surface excess concentration
of species $i$ and is defined as
\begin{equation}
\bar{\Gamma}_i \equiv \int_{dl} \left( \tilde{C}_i - C_i \right) dZ
\end{equation}
where the integration is only over the double layer. With the units
replaced, it becomes clear that the effective boundary condition is
a two-dimensional conservation law for the excess surface concentration
$\bar{\Gamma}_i$, with a driving force for the flux and a source term that
depend only on the dynamics away from the surface. Thus, the effective
boundary conditions naturally generalize the simple capacitor picture of
the double layer to allow for flow of ionic species tangentially along
the electrode surface.
\subsection{Surface Charge and Excess Salt Concentration}
Since the governing equations (\ref{eq:dlc_c_eqn_bulk}) -- (\ref{eq:dlc_LEN})
are formulated in terms of the salt concentration and charge density,
it is convenient to derive boundary conditions that are directly related to
these quantities. Toward this end, we define $\eps q$ and $\eps w$ to be
the surface charge density and surface excess salt concentration,
respectively:
\begin{eqnarray}
q &=& \int_0^\infty \tilde{\rho} dz
= \frac{1}{2} \int_0^\infty \left( \gamma_+ - \gamma_- \right) dz
\label{eq:dlc_q_def} \\
w &=& \int_0^\infty \left( \tilde{c} - \hat{c} \right) dz
= \frac{1}{2} \int_0^\infty \left( \gamma_+ + \gamma_- \right) dz
\label{eq:dlc_w_def}.
\end{eqnarray}
By integrating the expressions for the diffuse layer salt
concentration and charge density~\cite{hunter_book,bazant2004,newman_book,
bard_book},
\begin{eqnarray}
\tilde{c} &=& \hat{c} \cosh{\tilde{\psi}}
\label{eq:c_diffuse_layer} \\
\tilde{\rho} &=& - \hat{c} \sinh{\tilde{\psi}}
\label{eq:rho_diffuse_layer}
\end{eqnarray}
and
using (\ref{eq:dlc_dpsi_dz}), both $q$ and $w$ can be expressed as simple
functions of the zeta-potential and the bulk concentration just outside
of the double layer:
\begin{eqnarray}
q &=& -2 \sqrt{\hat{c}} \sinh(\zeta/2)
\label{eq:dlc_q_def_GCS} \\
w &=& 4 \sqrt{\hat{c}} \sinh^2(\zeta/4).
\label{eq:dlc_w_def_GCS}
\end{eqnarray}
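As a concrete check, the closed forms (\ref{eq:dlc_q_def_GCS}) --
(\ref{eq:dlc_w_def_GCS}) can be recovered by numerically integrating the
Gouy--Chapman profile (\ref{eq:psi_gc_solution}). The short Python sketch
below is illustrative only and is not part of the analysis; the values of
$\zeta$ and $\hat{c}$ are arbitrary choices for the check.

```python
import numpy as np

# Sanity check: integrate the Gouy-Chapman profile (eq:psi_gc_solution)
# numerically and compare with the closed forms for q and w.

def psi(z, zeta, c_hat):
    """Excess potential in the diffuse layer, Eq. (psi_gc_solution)."""
    return 4.0 * np.arctanh(np.tanh(zeta / 4.0) * np.exp(-np.sqrt(c_hat) * z))

def diffuse_layer_integrals(zeta, c_hat, z_max=40.0, n=200001):
    """Trapezoid-rule integrals of diffuse-layer charge and excess salt."""
    z = np.linspace(0.0, z_max, n)       # integrand decays like exp(-sqrt(c) z)
    p = psi(z, zeta, c_hat)
    rho = -c_hat * np.sinh(p)            # charge density, Eq. (rho_diffuse_layer)
    excess = c_hat * (np.cosh(p) - 1.0)  # excess salt, Eq. (c_diffuse_layer)
    h = z[1] - z[0]
    trap = lambda f: h * (f.sum() - 0.5 * (f[0] + f[-1]))
    return trap(rho), trap(excess)       # (q, w)

zeta, c_hat = 2.0, 1.5                   # illustrative values
q_num, w_num = diffuse_layer_integrals(zeta, c_hat)
q_gcs = -2.0 * np.sqrt(c_hat) * np.sinh(zeta / 2.0)
w_gcs = 4.0 * np.sqrt(c_hat) * np.sinh(zeta / 4.0) ** 2
```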
Thus, we can combine the effective flux boundary conditions for individual
ions (\ref{eq:effective_flux_bc}) to obtain
\begin{eqnarray}
\eps \frac{\partial q}{\partial t} &=&
\eps \nabla_s \cdot
\left[
\nabla_s q + w \nabla_s \hat{\phi}
+ \int_0^\infty \tilde{c} \nabla_s \tilde{\psi} ~dz
\right] \nonumber \\
& & -\ c \frac{\partial \phi}{\partial n}
\label{eq:q_evolution_eqn} \\
\eps \frac{\partial w}{\partial t} &=&
\eps \nabla_s \cdot
\left[
\nabla_s w + q \nabla_s \hat{\phi}
+ \int_0^\infty \tilde{\rho} \nabla_s \tilde{\psi} ~dz
\right] \nonumber \\
& & -\ \frac{\partial c}{\partial n}.
\label{eq:w_evolution_eqn}
\end{eqnarray}
Notice that, as in the PNP equations written in terms of $c$ and $\rho$,
there is a symmetry between $q$ and $w$ in these equations.
As in the previous section, we can use the GCS model to rewrite
(\ref{eq:q_evolution_eqn}) and (\ref{eq:w_evolution_eqn}) solely
in terms of bulk field variables and the zeta-potential:
\begin{eqnarray}
\eps \frac{\partial q}{\partial t} &=&
\eps \nabla_s \cdot
\left(
q \nabla_s \log \hat{c}
+ w \nabla_s \hat{\phi}
\right)
- c \frac{\partial \phi}{\partial n}
\label{eq:q_evolution_eqn_GCS} \\
\eps \frac{\partial w}{\partial t} &=&
\eps \nabla_s \cdot
\left(
w \nabla_s \log \hat{c}
+ q \nabla_s \hat{\phi}
\right)
- \frac{\partial c}{\partial n},
\label{eq:w_evolution_eqn_GCS}
\end{eqnarray}
which is the form of the effective flux boundary conditions
we shall use in our analysis below.
\section{ Surface Transport
Processes}
\label{sec:surfproc}
Before proceeding to the analysis, we discuss the relative importance
of the various surface transport processes, compared to their neutral
bulk counterparts. For clarity, we summarize our results for
tangential surface fluxes in
Table~\ref{tab:summary_of_surface_fluxes}, with and without
dimensions. As with the associated bulk fluxes summarized in
Table~\ref{tab:summary_of_bulk_fluxes}, we introduce different notations
for contributions by diffusion and electromigration (superscripts
$(d)$ and $(e)$, respectively) to the tangential (subscript $t$)
surface fluxes of cations, anions, charge, and neutral salt
(subscripts $+$, $-$, $q$, and $w$, respectively). This allows us to
define dimensionless parameters comparing the various contributions.
\begin{table*}
\caption{\label{tab:summary_of_surface_fluxes} Summary of surface
flux formulae for electrochemical transport. }
\begin{ruledtabular}
\begin{tabular}{ccc}
& Dimensional Formula$^{a,b,c}$
\footnotetext[1]{
We have abused notation and used the same variable $\Gamma_i$ for both
the dimensional and dimensionless formulae. $\Gamma_i$ in the
dimensional formulae is equal to $C_o a \Gamma_i$ in the dimensionless
formulae.}
\footnotetext[2]{ The dimensional surface charge density, $Q$,
is defined by $Q \equiv C_o \lambda_D q$.}
\footnotetext[3]{ The dimensional excess
neutral salt concentration, $W$, is defined by $W \equiv C_o \lambda_D w$.}
& Dimensionless Formula \\
\hline \\
${\bf J}_{t,\pm}$ &
$ - b \Gamma_\pm \nabla_s \mu_\pm$ & $- \Gamma_\pm \nabla_s \mu_\pm$ \\[3pt]
\ &
$ -\left [
D \Gamma_\pm \nabla_s \log \left( C/C_o \right)
\pm \frac{z_+eD}{kT} \Gamma_\pm \nabla_s \Phi \right ]$ &
$ -\left( \Gamma_\pm \nabla_s \log c \pm \Gamma_\pm \nabla_s \phi
\right )$ \\[3pt]
${\bf J}^{(d)}_{t,\pm}$ &
$ -D \Gamma_\pm \nabla_s \log \left( C/C_o \right) $ &
$ -\Gamma_\pm \nabla_s \log c $ \\[3pt]
${\bf J}^{(e)}_{t,\pm}$ &
$ \mp \frac{z_+eD}{kT} \Gamma_\pm \nabla_s \Phi $ &
$ \mp \Gamma_\pm \nabla_s \phi $ \\[3pt]
\hline \\
${\bf J}_{t,q}$ &
$ -\left [ D Q \nabla_s \log \left( C/C_o \right)
+ \frac{z_+eD}{kT} W \nabla_s \Phi \right ]$ &
$ -\eps \left( q \nabla_s \log c + w \nabla_s \phi \right )$ \\[3pt]
${\bf J}^{(d)}_{t,q}$ &
$ -D Q \nabla_s \log \left (C/C_o \right)$ &
$ -\eps q \nabla_s \log c $ \\[3pt]
${\bf J}^{(e)}_{t,q}$ &
$ -\frac{z_+eD}{kT} W \nabla_s \Phi$ &
$ -\eps w \nabla_s \phi$ \\[3pt]
\hline \\
${\bf J}_{t,w}$ &
$ -\left [ D W \nabla_s \log \left( C/C_o \right)
+ \frac{z_+eD}{kT} Q \nabla_s \Phi \right ]$ &
$ -\eps \left( w \nabla_s \log c + q \nabla_s \phi \right )$ \\[3pt]
${\bf J}^{(d)}_{t,w}$ &
$ -D W \nabla_s \log \left (C/C_o \right)$ &
$ -\eps w \nabla_s \log c$ \\[3pt]
${\bf J}^{(e)}_{t,w}$ &
$ -\frac{z_+eD}{kT} Q \nabla_s \Phi$ &
$ -\eps q \nabla_s \phi$ \\[3pt]
\end{tabular}
\end{ruledtabular}
\end{table*}
J. J. Bikerman pioneered the experimental and theoretical study of
double-layer surface transport~\cite{bikerman1933,bikerman1935} and
first defined the dimensionless ratio of surface current to bulk
current, across a geometrical length scale, $a$, for a uniformly
charged double layer and a uniform bulk
solution~\cite{bikerman1940}. Using our notation, the Bikerman number
is
\begin{equation}
Bi = \frac{J_{t,q}}{a J_0} \label{eq:alphaB}
\end{equation}
where $J_0 = z_+e F_0$ is a reference bulk current in terms of the
typical diffusive flux, $F_0 = DC_0/a$. B. V. Deryagin and
S. S. Dukhin later added contributions from electro-osmotic flow,
using the GCS model of the double layer (as did
Bikerman)~\cite{deryagin1969}, and Dukhin and collaborators then used
this model to study electrophoresis of highly charged particles (with
large, but nearly uniform surface
charge)~\cite{dukhin1969,shilov1970,dukhin1993}. As in other
situations, surface-conduction effects in electrophoresis are
controlled by $Bi$, which thus came to be known in the Russian
literature as the ``Dukhin number'', $Du$. (Dukhin himself denoted it
by $Rel$).
Recently, Bazant, Thornton and Ajdari pointed out
that the (steady) Bikerman-Dukhin number is equal to the ratio of
the excess surface salt concentration to its bulk counterpart at the
geometrical scale $a$,
\begin{equation}
Bi = \frac{\Gamma_+ + \Gamma_-}{2 a C_0}
\end{equation}
and they showed its importance in a one-dimensional problem of
electrochemical relaxation between parallel-plate
electrodes~\cite{bazant2004}. This surprising equivalence (in light
of the definition of $Bi$) demonstrates that surface conduction
becomes important relative to bulk conduction simply because a
significant number of ions are adsorbed in the double layer compared to
the nearby bulk solution. This means that salt adsorption leading to
bulk diffusion should generally occur at the same time as surface
conduction, if the double layer becomes highly charged during the
response to an applied field or voltage, as in our model problems
below.
Unlike prior work, however, our multi-dimensional nonlinear analysis
allows for nonuniform, time-dependent charging of the double layer, so
the Bikerman-Dukhin number can only be defined locally and in
principle could vary wildly across the surface. Moreover, since we
separate the contributions to surface transport from diffusion and
electromigration, we can define some new dimensionless numbers. From
Eqs.~(\ref{eq:q_evolution_eqn_GCS})-(\ref{eq:w_evolution_eqn_GCS}),
we find that the following surface-to-bulk flux ratios are related to
the excess surface-to-bulk ratio of neutral salt concentration
\begin{eqnarray}
\alpha &=& \epsilon w = 4
\frac{\lambda_D}{a}\sqrt{\frac{C}{C_0}}\sinh^2
\left(\frac{z_+e\bar{\zeta}}{4kT}\right) \label{eq:alphadef} \\
&\sim& \frac{|J_{t,q}^{(e)}|}{a J_0} \sim
\frac{|J_{t,w}^{(d)}|}{a F_0}.
\label{eq:alpha}
\end{eqnarray}
Note that when surface diffusion is neglected, as in most prior work,
then $\alpha = Bi$, since surface currents arise from electromigration
alone. The other surface-to-bulk flux ratios are given by the
surface-to-bulk ratio of the charge density,
\begin{eqnarray}
\beta &=& \epsilon |q| = 2
\frac{\lambda_D}{a}\sqrt{\frac{C}{C_0}}\sinh\left(\frac{z_+e\bar{\zeta}}{2kT}\right) \label{eq:betadef} \\
&\sim& \frac{|J_{t,q}^{(d)}|}{a J_0} \sim
\frac{|J_{t,w}^{(e)}|}{a F_0}
\end{eqnarray}
For thin double layers ($\lambda_D \ll a$), we see that the
surface-to-bulk flux ratios, $\alpha$ and $\beta$, are only
significant when $\bar{\zeta}$ significantly exceeds the thermal voltage
$kT/e$.
To better understand how the charge and neutral-salt fluxes are
carried, it is instructive to form the ratio of these numbers,
\begin{eqnarray}
\frac{\alpha}{\beta} &=& \tanh\left(\frac{z_+e\bar{\zeta}}{4kT}\right) \\
&\sim& \frac{|J_{t,w}^{(d)}|}{|J_{t,w}^{(e)}|}
\sim \frac{|J_{t,q}^{(e)}|}{|J_{t,q}^{(d)}|}
\end{eqnarray}
For weakly charged double layers, $z_+e\bar{\zeta} \ll 4kT$, where $\alpha \ll
\beta$, the (small) surface flux of salt is dominated by
electromigration, while the surface flux of charge (surface current)
is dominated by diffusion (if bulk concentration gradients exist). For
highly charged double layers, $z_+e\bar{\zeta} > 4kT$, where $\alpha \sim
\beta$, the contributions to each flux by diffusion and
electromigration become comparable, as co-ions are completely
expelled from the double layer ($|q| \sim w$).
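These flux-ratio relations are easy to tabulate. The Python sketch below is
purely illustrative: it evaluates the dimensionless forms $\alpha = \eps w$
and $\beta = \eps |q|$, with $\eps = \lambda_D/a$ and $\zeta$ the
dimensionless zeta-potential, for a few arbitrary values of $\zeta$.

```python
import math

# Surface-to-bulk flux ratios in dimensionless form:
# alpha = eps*w and beta = eps*|q|, zeta scaled by kT/(z_+ e).

def alpha(eps, c, zeta):
    """Excess-salt ratio, Eq. (alphadef)."""
    return 4.0 * eps * math.sqrt(c) * math.sinh(zeta / 4.0) ** 2

def beta(eps, c, zeta):
    """Diffuse-charge ratio, Eq. (betadef); |sinh| covers either sign of zeta."""
    return 2.0 * eps * math.sqrt(c) * abs(math.sinh(zeta / 2.0))

eps, c = 0.01, 1.0
for zeta in (0.5, 2.0, 4.0, 8.0):
    a, b = alpha(eps, c, zeta), beta(eps, c, zeta)
    # alpha/beta = tanh(zeta/4), independent of eps and c
    print(f"zeta={zeta:4.1f}  alpha={a:.5f}  beta={b:.5f}  alpha/beta={a/b:.5f}")
```

For small $\zeta$ the ratio $\alpha/\beta \approx \zeta/4 \ll 1$, while for
$\zeta \gg 4$ it saturates at one, mirroring the discussion above.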
\section{Nonlinear Steady Response for Thin Double Layers
\label{sec:steady_response}}
\subsection{ Effective Equations }
Using the mathematical model developed in the previous section, we now
examine the steady response of a metal sphere or cylinder with thin
double layers subjected to a large, uniform applied electric field.
At steady-state, the unsteady term is eliminated from the governing
equations (\ref{eq:dlc_c_eqn_bulk}) -- (\ref{eq:dlc_LEN}), so we have
\begin{eqnarray}
0 &=& \nabla^2 c
\label{eq:c_eqn_steady} \\
0 &=& \nabla \cdot \left( c \nabla \phi \right)
\label{eq:phi_eqn_steady}.
\end{eqnarray}
Similarly, the flux boundary conditions (\ref{eq:q_evolution_eqn_GCS}) --
(\ref{eq:w_evolution_eqn_GCS}) become
\begin{eqnarray}
0 &=&
\eps \nabla_s \cdot
\left(
q \nabla_s \log c
+ w \nabla_s \phi
\right)
- c \frac{\partial \phi}{\partial n}
\label{eq:q_bc_steady} \\
0 &=&
\eps \nabla_s \cdot
\left(
w \nabla_s \log c
+ q \nabla_s \phi
\right)
- \frac{\partial c}{\partial n}.
\label{eq:w_bc_steady}
\end{eqnarray}
Notice that we have retained the surface transport terms in
these boundary conditions even though they appear at $O(\eps)$.
At large applied fields, it is no longer valid to order
the terms in an asymptotic expansion by $\eps$ alone.
We also need to consider factors of the form $\eps e^{\zeta}$
or $\eps \sinh(\zeta)$ since $e^{\zeta}$ may be as large as $O(1/\eps E)$
for large applied fields. Since both $q$ and $w$ contain factors
which grow exponentially with the zeta-potential, we cannot discard
the surface conduction terms in (\ref{eq:q_bc_steady}) and
(\ref{eq:w_bc_steady}).
Finally, the Stern boundary condition (\ref{eq:stern_bc_GCS})
remains unchanged because it does not involve any time derivatives.
As mentioned earlier, the steady problem exhibits interesting features
that have not been extensively explored. Unfortunately, the
nonlinearities present in the governing equations and boundary
conditions make it difficult to proceed analytically, so we use
numerical methods to gain insight into the behavior of the system.
In this section, we first briefly describe the numerical model we use to
study the system. We then use the numerical model to study the
development of $O(1)$ bulk concentration variations and their impact
on transport around metal colloid spheres.
\subsection{Numerical Model}
To solve (\ref{eq:c_eqn_steady}) -- (\ref{eq:phi_eqn_steady}) numerically
in a computationally efficient manner, we use a pseudospectral
method~\cite{trefethen_spectral_book,boyd_spectral_book,
fornberg_spectral_book}. For problems in electrochemical transport,
pseudospectral methods are particularly powerful because they naturally
resolve boundary layers by placing more grid points near boundaries of
the physical domain~\cite{trefethen_spectral_book,fornberg_spectral_book,
boyd_spectral_book}.
We further reduce the computational complexity and cost of the numerical
model by taking advantage of the axisymmetry of the problem to reduce
the numerical model to two dimensions (as opposed to using a fully 3-D
description).
For the computational grid, we use a tensor product grid of a
uniformly spaced grid for the polar angle direction and a
shifted semi-infinite rational Chebyshev grid for the radial
direction~\cite{boyd_spectral_book}.
To obtain the discretized form of the differential operators on this grid,
we use Kronecker products of the differentiation matrices for the
individual one-dimensional grids \cite{trefethen_spectral_book}.
The numerical model is then easily constructed using collocation
by replacing field variables and continuous operators in the
mathematical model by grid functions and discrete operators.
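The Kronecker-product construction can be sketched in a few lines. The
example below is illustrative only: it uses small Chebyshev grids in both
directions rather than the uniform angular and shifted rational Chebyshev
radial grids of the actual model, but the ordering convention for the
discrete operators is the same.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and grid on N+1 points
    (the standard construction from Trefethen's Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # "negative sum trick" for the diagonal
    return D, x

# Tensor-product grid: u[i, j] = u(x_i, y_j), flattened in row-major order.
N = 8
D, x = cheb(N)
I = np.eye(N + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**2 * Y                    # smooth (polynomial) test function

# Directional derivatives via Kronecker products of the 1-D matrices;
# spectral differentiation is exact for polynomials of degree <= N.
du_dx = (np.kron(D, I) @ u.ravel()).reshape(N + 1, N + 1)  # = 2*X*Y
du_dy = (np.kron(I, D) @ u.ravel()).reshape(N + 1, N + 1)  # = X**2
```

The same pattern yields higher derivatives, e.g. `np.kron(D @ D, I)` for the
second derivative in the first direction.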
The resulting nonlinear, algebraic system of equations for the values of
the unknowns at the grid points is solved using a standard Newton iteration
with continuation in the strength of the applied electric field.
The Jacobian for the Newton iteration is computed exactly by using a
set of simple \emph{matrix}-based differentiation rules derived
in the appendix of~\cite{chu_thesis_2005}. By using the exact Jacobian,
the Newton iteration retains its rapid (quadratic) convergence; typically
fewer than five iterations are required for each value of the continuation
parameter before the residual of the solution to the discrete system of
equations is reduced to an absolute tolerance of
$10^{-8}$~\footnote{The residual is computed using the $L^\infty$ norm.}.
Directly computing the Jacobian in matrix form has the additional advantage
of making it easy to implement the numerical model in MATLAB, a high-level
programming language with a large library of built-in functions.
It is also worth mentioning that we avoid the
problem of dealing with unbounded far-field values of the electric potential by
formulating the numerical model in terms of $\phi + E r \cos \theta$,
the deviation of the electric potential from that of the uniform applied
electric field, rather than $\phi$ itself.
In our numerical investigations, we used the numerical method described
above with $90$ radial and $75$ angular grid points. This grid resolution
balanced the combined goals of high accuracy and good computational
performance. Figure~\ref{figure:c_and_psi_full_profiles} shows typical
solutions for the concentration and electric potential (relative to the
background applied potential) for large applied electric fields.
A comparison of the concentration and electric potential at the
surface of the sphere for varying values of $E$ is shown in
Figure~\ref{figure:c_s_and_phi_s}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1.6in]{figs/c_full_profile}
\includegraphics[width=1.6in]{figs/psi_full_profile}
\begin{minipage}[h]{3in}
\caption[Numerical solutions for the concentration and excess potential
for $E = 10$]{
\label{figure:c_and_psi_full_profiles}
Numerical solutions for the concentration $c$ (left) and excess electric
potential $\psi$ (right) for $E = 10$, $\eps = 0.01$, $\delta = 1$. Notice
the large gradients in the concentration profile near the surface of the
sphere.
}
\end{minipage}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.6in,height=1.4in]{figs/c_s_varying_E}
\includegraphics[width=1.6in,height=1.4in]{figs/phi_s_varying_E}
\begin{minipage}[h]{3in}
\caption[Bulk concentration and electric potential profiles at surface of
sphere for varying values of the applied electric field]{
\label{figure:c_s_and_phi_s}
Bulk concentration $\hat{c}$ and electric potential $\hat{\phi}$ at the
surface of the sphere for varying values of the applied electric field.
In these figures, $\eps = 0.01$ and $\delta = 1$. Notice that for $E=15$,
the poles ($\theta = 0$ and $\theta=\pi$) are about to be depleted of ions
(\emph{i.e., } $\hat{c} \approx 0$).
}
\end{minipage}
\end{center}
\end{figure}
\subsection{Enhanced Surface Excess Concentration and Surface Conduction}
Perhaps the most important aspects of the steady response at high applied
electric fields are the enhanced surface excess ion concentration and
surface transport within the double layer.
\begin{figure}
\begin{center}
\includegraphics[width=1.6in,height=1.4in]{figs/q_varying_E}
\includegraphics[width=1.6in,height=1.4in]{figs/w_varying_E}
\begin{minipage}[h]{3in}
\caption[Surface charge density and excess surface concentration of
neutral salt for varying values of the applied electric field]{
\label{figure:q_and_w}
Surface charge density $\eps q$ (left) and excess surface concentration of
neutral salt $\eps w$ (right) for varying values of the applied electric
field. In these figures, $\eps = 0.01$ and $\delta = 1$. Notice that for
large applied fields, $\eps w = O(1/E)$ and $\eps q = O(1/E)$ so that the
surface conduction terms in (\ref{eq:q_evolution_eqn_GCS})
and (\ref{eq:w_evolution_eqn_GCS}) are $O(1)$.
}
\end{minipage}
\end{center}
\end{figure}
As shown in Figure~\ref{figure:q_and_w}, at high applied fields,
the excess surface concentrations are $O \left( 1/E \right)$, so surface
transport within the double layer (shown in Figure~\ref{figure:Fq_and_Fw})
becomes non-negligible in the leading order effective flux boundary
conditions (\ref{eq:q_bc_steady}) -- (\ref{eq:w_bc_steady}).
\begin{figure}
\begin{center}
\includegraphics[width=1.6in,height=1.4in]{figs/J_q_varying_E}
\includegraphics[width=1.6in,height=1.4in]{figs/J_w_varying_E}
\begin{minipage}[h]{3in}
\caption[Tangential surface fluxes for the surface charge density and
excess surface concentration of neutral salt for varying values of the
applied electric field]{
\label{figure:Fq_and_Fw}
Tangential surface fluxes for the surface charge density $|{\bf J}_{t,q}|$ (left)
and excess surface concentration of neutral salt $|{\bf J}_{t,w}|$ (right) for
varying values of the applied electric field.
In these figures, $\eps = 0.01$ and $\delta = 1$. Notice that for large
applied fields, the surface fluxes are $O(1)$ quantities and make a
non-negligible leading-order contribution in (\ref{eq:q_evolution_eqn_GCS})
and (\ref{eq:w_evolution_eqn_GCS}).
}
\end{minipage}
\end{center}
\end{figure}
Interestingly, the surface conduction terms, ${\bf J}^{(e)}_{t,q}$ and
${\bf J}^{(e)}_{t,w}$, are the dominant contributions to surface transport
(see Figure~\ref{figure:compare_surf_conduction_diffusion}).
\begin{figure}
\begin{center}
\includegraphics[width=1.6in,height=1.4in]
{figs/Jq_compare_surf_conduction_diffusion}
\includegraphics[width=1.6in,height=1.4in]
{figs/Jw_compare_surf_conduction_diffusion}
\begin{minipage}[h]{3in}
\caption[Comparison of the magnitudes of surface conduction and
surface diffusion for the tangential fluxes of $q$ and $w$]{
\label{figure:compare_surf_conduction_diffusion}
Comparison of the magnitudes of $|{\bf J}^{(d)}|$ (solid
lines) and $|{\bf J}^{(e)}|$ (dashed lines) for the excess surface fluxes of
$\eps q$ (left) and $\eps w$ (right) for an applied electric field value of
$E = 15$. In these figures, $\eps = 0.01$ and $\delta = 1$.
Notice that in both cases, the surface diffusion is on the order
of $1/\hat{c} E$ times the surface conduction.
}
\end{minipage}
\end{center}
\end{figure}
While there are clearly surface gradients in concentration, surface
diffusion is smaller than surface migration by a factor on the order of
$1/\hat{c} E$.
Also, we reiterate that the driving force for surface transport is
solely from surface gradients of the bulk concentration and bulk electric
potential; gradients in the zeta-potential do not play a role because
they are completely canceled out.
An important feature of the surface fluxes, ${\bf J}_{t,q}$ and ${\bf J}_{t,w}$, shown
in Figure~\ref{figure:Fq_and_Fw} is that they are non-uniform.
This non-uniformity is strongly influenced by the non-uniformity in the
tangential electric field (see Figure~\ref{figure:E_t}).
\begin{figure}
\begin{center}
\includegraphics[width=2.5in,height=1.5in]{figs/E_t_varying_E}
\begin{minipage}[h]{3in}
\caption[Tangential component of bulk electric field at surface of sphere
for varying values of the applied electric field]{
\label{figure:E_t}
Tangential component of the bulk electric field, $|E_t|$, at the surface of
the sphere
for varying values of the applied electric field.
In these figures, $\eps = 0.01$ and $\delta = 1$.
}
\end{minipage}
\end{center}
\end{figure}
The non-uniformity of the surface excess salt concentration and
surface charge density also plays a role, but to a lesser extent.
For the steady problem, the surface excess concentration of ions
remains constant in time, so the non-uniformity in the surface fluxes
leads to non-uniform normal fluxes of current and neutral salt from the
bulk into the double layer (see Figure~\ref{figure:normal_fluxes_from_bulk}).
\begin{figure}
\begin{center}
\includegraphics[width=1.6in,height=1.4in]{figs/c_dphi_dn_varying_E}
\includegraphics[width=1.6in,height=1.4in]{figs/dc_dn_varying_E}
\begin{minipage}[h]{3in}
\caption[Normal flux of current and neutral salt into the double
layer for varying values of the applied electric field]{
\label{figure:normal_fluxes_from_bulk}
Normal flux of current $c \frac{\partial \phi}{\partial n}$ (left)
and neutral salt $\frac{\partial c}{\partial n}$ (right) into the double
layer for varying values of the applied electric field.
In these figures, $\eps = 0.01$ and $\delta = 1$.
}
\end{minipage}
\end{center}
\end{figure}
Notice that the normal flux of neutral salt into the double layer, which
is given by $-\partial c/ \partial n$, shows an injection of neutral salt
at the poles ($-\partial c/ \partial n > 0$), where the charging
is strongest, and an ejection of neutral salt at the equator
($-\partial c/\partial n < 0$), where there is essentially no excess
neutral salt build up. This configuration of fluxes leads to the neutral
salt depletion at the poles and accumulation at the equator shown in
Figures~\ref{figure:c_and_psi_full_profiles} and
\ref{figure:c_s_and_phi_s}.
Similarly, the normal current density, $-\hat{c} \partial \phi/ \partial n$,
shows an influx of negative (positive) current density at the north (south)
pole and a positive (negative) current density closer to the equator.
At the equator, there is no normal current density because the
normal diffusion currents of cations and anions exactly balance and
there is no normal electric field to drive a migration current.
\subsection{Bulk Diffusion and Concentration Gradients}
One major consequence of surface conduction is the transport of large
amounts of neutral salt into the double layer. This transport causes strong
concentration gradients to appear near the surface of the sphere
(Figures~\ref{figure:c_and_psi_full_profiles} and
\ref{figure:c_s_and_phi_s}), indicating that the usual assumption of
a uniform concentration profile is invalid at high electric fields.
The presence of these large concentration gradients at relatively low
electric fields ($E \approx 5$) should not be surprising since it is
well-known that large-voltage effects often set in at voltages as low as
a few times the thermal voltage \cite{bazant2004}. The dramatic influence
of the voltage arises from the exponential dependence of double layer
concentrations on the zeta-potential.
Since the transport of neutral salt is driven by concentration gradients,
the presence of these strong variations leads to diffusion currents
(see Figure \ref{figure:diffusion_currents}).
\begin{figure}
\begin{center}
\includegraphics[width=2in,height=2in]{figs/diffusion_currents}
\begin{minipage}[h]{3in}
\caption[Diffusion currents]{
\label{figure:diffusion_currents}
Diffusion currents drive transport of neutral salt near the surface
of the sphere. In this figure, $E = 10$, $\eps = 0.01$ and $\delta = 1$.
Notice that streamlines of neutral salt are closed; current lines
start on the surface of the sphere near the equator and end closer
to the poles.
}
\end{minipage}
\end{center}
\end{figure}
An important feature of these diffusion currents is that
they are closed; current lines start on the surface of the sphere near
the equator where neutral salt is ejected into the bulk (as a result of
neutral salt transport within the double layer) and end close to the
poles where neutral salt is absorbed by the double layer.
These recirculation currents are important because they allow the system
to conserve the total number of cations and anions \emph{locally}.
Without them, the local depletion and accumulation of ions would require
global changes to the bulk concentration (\emph{i.e., } the concentration at infinity
would be affected).
While the presence of diffusion currents is interesting, we must be
careful in how they are interpreted in terms of the motion of individual
ions. In actuality, no ions move purely under the influence of
diffusion. Rather, the cation and anion migration flux densities are
slightly imbalanced due to the presence of a concentration gradient, which
results in a net transport of neutral salt.
\subsection{Individual Ion Currents}
Since cations and anions are the physical entities that are transported
through the electrolyte, it is useful to consider the cation and anion
flux densities individually.
As shown in Figure~\ref{figure:bulk_migration_vs_diffusion}, in the bulk,
the contribution of electromigration to transport dominates diffusion.
Moreover, within a short distance from the sphere, the electromigration
term itself becomes dominated by the contribution from the
applied electric field.
Thus, the concentration gradient only serves to slightly bias the flux
densities so that cation (anion) motion is slightly retarded near the north
(south) pole.
\begin{figure}
\begin{center}
\includegraphics[width=1.6in,height=1.4in]
{figs/compare_bulk_migration_and_diffusion}
\includegraphics[width=1.6in,height=1.4in]
{figs/bulk_diffusion_varying_angles}
\begin{minipage}[h]{3in}
\caption[Comparison of the magnitudes of bulk electromigration and
diffusion at various positions on the surface of sphere]{
\label{figure:bulk_migration_vs_diffusion}
Comparison of the magnitudes of bulk electromigration
$|{\bf F}_\rho^{(e)}|$ (solid
lines) and diffusion $|{\bf F}_c^{(d)}|$ (dashed lines) fluxes as a function of
distance from the surface of the sphere at $\theta = 0, \pi/4$, and
$\pi/2$ (left).
The figure on the right zooms in on the diffusion flux.
In these figures, $E = 10$, $\eps = 0.01$ and $\delta = 1$.
Notice that the magnitude of the electromigration dominates diffusion and that
the electromigration term itself becomes dominated by the contribution from
the applied electric field a short distance from the surface of the sphere.
}
\end{minipage}
\end{center}
\end{figure}
It is more interesting to consider the surface transport of the individual
ions within the double layer. In the northern hemisphere, the double layer
is dominated by anions; similarly, the southern hemisphere is dominated by
cations.
As a result, transport in each hemisphere is primarily due to
only one species (see Figure \ref{figure:J_cplus_and_J_cminus}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1.6in,height=1.4in]{figs/J_cplus_varying_E}
\includegraphics[width=1.6in,height=1.4in]{figs/J_cminus_varying_E}
\begin{minipage}[h]{3in}
\caption[Tangential surface fluxes of cations and anions
for varying values of the applied electric field]{
\label{figure:J_cplus_and_J_cminus}
Tangential surface fluxes for the cation $|{\bf J}_{t,+}|$ (left) and
anion $|{\bf J}_{t,-}|$ (right) for varying values of the applied electric field.
In these figures, $\eps = 0.01$ and $\delta = 1$. Notice that for large
applied fields, the surface fluxes are $O(1)$ quantities (in the
appropriate hemisphere) which leads to a non-negligible leading-order
contribution in (\ref{eq:effective_flux_bc_GCS}).
}
\end{minipage}
\end{center}
\end{figure}
This observation provides a direct explanation for the depletion and
accumulation regions in the concentration profile in terms of the
motion of individual ions.
As mentioned earlier, the transport is from the poles to the
equator because surface conduction is driven by the tangential component of
the bulk electric field (see Figure~\ref{figure:E_t}).
Since the double layer in the northern hemisphere is predominantly anions,
the surface ion transport is from the poles towards the equator.
A similar argument in the southern hemisphere shows that surface transport
of the majority ion is again from the poles towards the equator.
Thus, influx of ions into the double layer occurs in the regions near
the poles and outflux occurs by the equator.
The dominance of a single species within the double layer for each hemisphere
leads to a small conundrum: how does the bulk electrolyte near the
surface of the sphere remain locally electroneutral? In the northern
hemisphere, it would seem that more anions should be absorbed at the pole
and ejected at the equator, leading to a bulk charge imbalance. Analogous
reasoning involving cations leads to the same conclusion in the southern
hemisphere. The resolution to the conundrum comes from remembering that
both diffusion and electromigration contribute to transport.
The imbalance in the normal flux required to sustain an imbalanced
concentration profile in the double layer is achieved by carefully
balancing diffusion (which drives both species in the same direction)
and electromigration (which drives the two species in opposite directions)
so that the normal flux of the appropriate species dominates.
For example, at the north pole, an electric field pointing away from
the surface of the sphere will suppress the cation normal flux towards
the surface while enhancing the anion normal flux.
In this situation, the electric field plays a similar role as the diffusion
potential for electrochemical transport in an electroneutral solution.
Recall that when the cation and anion have different diffusivities, the
electric field acts to slow down the species with the higher diffusivity
and speed up the species with the lower diffusivity in such a way that both
species have equal flux densities.
In the current situation, the electric field serves to create the
necessary imbalance in the cation and anion flux densities so
that the surface excess concentrations within the double layer can be
maintained while preserving local electroneutrality in the bulk.
\section{Linear Relaxation for Arbitrary Double Layer Thickness
\label{sec:transient_response}}
\subsection{ Debye-Falkenhagen Equation }
Although the focus of this paper is on nonlinear relaxation, leading
to the steady state described in the previous section, it is
instructive to first consider linear response to a weak applied field,
where exact solutions are possible. Moreover, the linear analysis also
has relevance for the early stages of nonlinear relaxation before
significant double layer charge has accumulated, such that
$\max\{\zeta(\theta)\} \ll kT/e$, as long as the metal surface is
uncharged when the field is applied. The linear response also allows a
more general analysis, including AC periodic response, in addition to
our model problem with sudden DC forcing.
When the potential drop across the particle is much smaller than the
thermal voltage ($E_o \ll 1$), it is possible to analyze the response of
the system \emph{without} assuming that the double layers are thin; that is,
we need not assume that $\eps$ is small and may describe the system
using the full unsteady PNP equations,
(\ref{eq:c_eqn_bulk_diffusion_time}) -- (\ref{eq:rho_no_flux_bc}).
Instead, we assume that the response of the system is only a small
deviation from the equilibrium solution:
\begin{eqnarray}
c \equiv 1 \ , \ \ \rho \equiv 0 \ , \ \ \phi = -E_o r \cos \theta.
\end{eqnarray}
In this limit, we can linearize the ionic concentrations around
a uniform concentration profile so that $c = 1 + \delta c$.
Using this expression in (\ref{eq:rho_eqn_bulk_diffusion_time}) and
making use of Poisson's equation (\ref{eq:poisson_eqn_bulk_diffusion_time}) to
eliminate the electric potential, we find that the charge density,
$\rho = \left( c_+-c_- \right)/2 = \left( \delta c_+ - \delta c_- \right)/2$,
obeys the (dimensionless) Debye-Falkenhagen equation \cite{debye1928}:
\begin{equation}
\frac{\partial \rho}{\partial t} \approx \ensuremath{\nabla^2}\rho - \frac{1}{\eps^2} \rho
\label{eq:debye_falkenhagen}.
\end{equation}
Similarly, the flux boundary condition corresponding to this equation
reduces to
\begin{equation}
\frac{\partial \rho}{\partial n} + \frac{\partial \phi}{\partial n} = 0
\label{eq:linear_response_rho_bc}.
\end{equation}
Note that (\ref{eq:debye_falkenhagen}) and Poisson's equation
(together with the no-flux and Stern boundary conditions) are a linear
system of partial differential equations.
Thus, we can take advantage of integral transform techniques.
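As a minimal numerical illustration (ours, not part of the original analysis; the grid, $\eps$, and initial profile are arbitrary choices), the mode-by-mode decay implied by (\ref{eq:debye_falkenhagen}) can be verified spectrally in one dimension: every Fourier mode decays at rate $k^2 + 1/\eps^2$, so no charge perturbation can decay more slowly than the $k = 0$ rate $1/\eps^2$.

```python
import numpy as np

# Illustrative spectral check of the dimensionless Debye-Falkenhagen equation
#   d(rho)/dt = Laplacian(rho) - rho/eps^2
# on a periodic 1D interval: each Fourier mode decays as
# exp(-(k^2 + 1/eps^2) t), so bulk charge relaxes at least as fast as the
# k = 0 rate, exp(-t/eps^2).  Grid, eps, and initial data are arbitrary.
eps = 0.1
L, N = 2 * np.pi, 256
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

rho0 = np.exp(-((x - np.pi) ** 2) / 0.1)   # initial charge perturbation
t = 0.005                                  # half a Debye time (eps^2 = 0.01)

# exact mode-by-mode evolution of the linearized equation
rho_t = np.fft.ifft(np.fft.fft(rho0) * np.exp(-(k ** 2 + 1.0 / eps ** 2) * t)).real

decay_bound = np.exp(-t / eps ** 2)        # slowest admissible decay factor
print(rho_t.max() / rho0.max(), decay_bound)
```

Diffusion only spreads the perturbation further, so the observed peak decay is slightly faster than the $k=0$ bound.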
\subsection{Transform Solutions for Arbitrary $\eps$ and $\delta$}
Since for weak applied fields, the model problem is a linear, initial value
problem, it is natural to carry out the analysis using Laplace transforms
in time. Transforming the Debye-Falkenhagen and Poisson equations, we obtain
\begin{eqnarray}
\ensuremath{\nabla^2} \check{\rho} &=& \beta^2 \check{\rho} \\
-\eps^2 \ensuremath{\nabla^2} \check{\phi} &=& \check{\rho}
\label{eq:poisson_eqn_LT}
\end{eqnarray}
where
\begin{equation}
\beta(s)^2 = s + \frac{1}{\eps^2}
\end{equation}
and the check accent $(\check{\ })$ is used to denote Laplace-transformed
variables. Similarly, the boundary conditions become
\begin{eqnarray}
\frac{\partial \check{\rho}}{\partial n} +
\frac{\partial \check{\phi}}{\partial n} &=& 0
\label{eq:lin_response_rho_bc} \\
\check{\phi} + \delta \eps \frac{\partial \check{\phi}}{\partial n} &=& v s^{-1}
\label{eq:lin_response_phi_bc} \\
-\nabla \check{\phi} \rightarrow E_o s^{-1} \ &\textrm{as}& \
r \rightarrow \infty.
\label{eq:lin_response_phi_bc_infinity}
\end{eqnarray}
To solve the resulting boundary value problem, we take advantage
of the spherical geometry to write the solution in terms of
spherical harmonics. Since $\check{\rho}$ satisfies the modified Helmholtz
equation, we can expand it in a series with terms that are products of
spherical harmonics, $Y_l^m(\theta,\phi)$, and modified spherical Bessel
functions, $k_l(\beta r)$. Moreover, we can reduce the series to
a single term
\begin{equation}
\check{\rho}(r,\theta, \phi) = R~k_1(\beta r)~Y_1^0(\theta,\phi)
= R~k_1(\beta r)~\cos \theta
\label{eq:rholap_general_soln}
\end{equation}
by taking into account the symmetries of the charge density:
(i) axisymmetry, (ii) antisymmetry with respect to $\theta = \pi/2$,
and (iii) the dipolar nature of the externally applied field.
Note that we have only retained the modified spherical Bessel function
that decays as $r \rightarrow \infty$ because $\check{\rho}$ vanishes
at infinity.
Similarly, the general solution for $\check{\phi}$ may be expressed as
\begin{equation}
\check{\phi}(r,\theta, \phi) =
- E_o s^{-1} r \cos \theta
+ A + B \frac{\cos \theta}{r^2}
+ C~k_1(\beta r)~\cos \theta
\label{eq:philap_general_soln}
\end{equation}
where the first term accounts for the boundary condition on the electric
field at infinity, the next two terms are the general solution to Laplace's
equation possessing the required symmetries, and the last term
is the particular solution to Poisson's equation. Note that
we have left out the monopolar term in the potential because it is
only necessary for charged spheres. Our analysis is not made any less
general by neglecting this term; the case of a charged sphere in a
weak applied field is handled by treating it as the superposition of
a charged sphere in the absence of an applied field with an uncharged sphere
subjected to an applied field.
The coefficients in (\ref{eq:rholap_general_soln}) and
(\ref{eq:philap_general_soln}) are determined by enforcing Poisson's
equation and the boundary conditions (\ref{eq:lin_response_rho_bc}) --
(\ref{eq:lin_response_phi_bc}). Plugging
(\ref{eq:rholap_general_soln}) and (\ref{eq:philap_general_soln})
into Poisson's equation (\ref{eq:poisson_eqn_LT}), we obtain
\begin{equation}
C = -\frac{R}{\left( \beta \eps \right)^2}.
\end{equation}
To apply the boundary conditions, note that on the sphere
$\frac{\partial}{\partial n} = - \left . \frac{\partial}{\partial r}
\right |_{r=1}$ so that
\begin{equation}
\frac{\partial \check{\phi}}{\partial n} =
E_o s^{-1} \cos \theta
+ 2 B \cos \theta
+ \frac{1}{\left( \beta \eps \right)^2}
\left . \frac{\partial \check{\rho}}{\partial r}
\right |_{r = 1}
\end{equation}
where the last term was obtained by using the relation between $C$ and $R$.
Thus, the no-flux boundary condition (\ref{eq:lin_response_rho_bc})
becomes
\begin{eqnarray}
0 &=&
E_o s^{-1} \cos \theta + 2 B \cos \theta
+ \left (\frac{1}{\left( \beta \eps \right)^2} -1 \right )
\left .\frac{\partial \check{\rho}}{\partial r} \right |_{r=1}
\nonumber \\
&=&
\left \{
\begin{array}{l}
E_o s^{-1} + 2 B \\
+ \ R \left (\frac{1}{\left( \beta \eps \right)^2} -1 \right )
\beta k_1'(\beta)
\end{array}
\right \} \cos \theta
\label{eq:lin_response_noflux_bc_series}
\end{eqnarray}
Similarly, the Stern boundary condition (\ref{eq:lin_response_phi_bc})
becomes
\begin{eqnarray}
v s^{-1} &=& A \ + \nonumber \\
& & \left \{
\begin{array}{l}
\left(\delta \eps - 1\right) E_o s^{-1}
+ \left(1+2\delta \eps \right) B \\
+ \ \frac{R}{( \beta \eps )^2}
\left [ -k_1(\beta) + \delta \eps \beta k_1'(\beta) \right ]
\end{array}
\right \} \cos \theta
\label{eq:lin_response_stern_bc_series}
\end{eqnarray}
By independently equating the coefficients of the different spherical
harmonics in (\ref{eq:lin_response_noflux_bc_series}) and
(\ref{eq:lin_response_stern_bc_series}), we obtain (after a little algebra)
\begin{eqnarray}
A &=& v s^{-1}
\label{eq:lin_response_A} \\
R &=& \frac{-3 E_o s^{-1} \left( \beta \eps \right)^2}
{2 k_1(\beta) + \left [ 1 - \left( \beta \eps \right)^2
\left(1 + 2 \delta \eps \right) \right ]
\beta k_1'(\beta)}
\label{eq:lin_response_R} \\
B &=& -\frac{E_o s^{-1}}{2}
-\frac{R}{2} \left( \frac{1}{\left( \beta\eps \right)^2} - 1\right)
\beta k_1'(\beta)
\label{eq:lin_response_B}.
\end{eqnarray}
Finally, by writing $k_1(x)$ in terms of elementary
functions~\cite{weisstein_modified_spherical_bessel_function},
\begin{equation}
k_1(x) = \frac{e^{-x} (x+1)}{x^2},
\end{equation}
we can express $R$ as
\begin{equation}
R = \frac{-3 E_o s^{-1}}
{ e^{-\beta}
\left [
\left ( 1 + \frac{2}{\beta} + \frac{2}{\beta^2} \right )
\left ( 1 + 2 \delta \eps \right )
- \frac{1}{\left( \beta \eps \right)^2}
\right ]
}. \label{eq:lin_response_R_expanded}
\end{equation}
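As a sanity check (ours, not the authors'), the elementary form of $k_1$ can be compared against the general definition $k_n(x) = \sqrt{2/(\pi x)}\, K_{n+1/2}(x)$ used by the cited reference; note that SciPy's \texttt{spherical\_kn} follows a convention carrying an extra factor of $\pi/2$, which is why we work from $K_{3/2}$ directly.

```python
import numpy as np
from scipy.special import kv

# Check the elementary closed form of k_1 against the definition
# k_n(x) = sqrt(2/(pi x)) K_{n+1/2}(x) (the convention of the cited
# reference; SciPy's spherical_kn carries an extra factor of pi/2).
def k1_closed(x):
    return np.exp(-x) * (x + 1.0) / x ** 2

def k1_bessel(x):
    return np.sqrt(2.0 / (np.pi * x)) * kv(1.5, x)

x = np.linspace(0.5, 20.0, 50)
max_rel_err = np.max(np.abs(k1_closed(x) - k1_bessel(x)) / k1_bessel(x))
print(max_rel_err)
```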
Following Bazant, Thornton, and Ajdari~\cite{bazant2004}, we focus on
times that are long relative to the Debye time ($t = O(\eps^2)$ in
dimensionless units). In this limit, $s \ll 1/\eps^2$ so that
the charge density on the surface of the sphere is given by
\begin{equation}
\check{\rho}(r = 1,\theta,s) \sim
\left ( \frac{-K_\rho s^{-1}}{1+\tau_\rho s} \right) \cos \theta
\end{equation}
with
\begin{eqnarray}
K_\rho &=& \frac{3 E_o (1+\eps)} {2 \gamma} \\
\tau_\rho &=& \eps
\left [
\frac{(1 + 2 \delta \eps) (1 + \eps) - \delta \eps}
{2 \gamma (1+\eps)}
\right ] \\
\gamma &=& \left ( 1 + 2 \delta \eps \right )
\left ( 1 + \eps \right ) + \delta.
\end{eqnarray}
Inverting the Laplace transform, we see that at long times,
the charge at the surface of the sphere has an exponential
relaxation with a characteristic time on the order of $\eps$:
\begin{equation}
\rho(r = 1,\theta,t) \sim
-K_\rho \left ( 1 - e^{-t/\tau_\rho} \right ) \cos \theta.
\label{eq:lin_response_rho_relaxation_long_times}
\end{equation}
Note that $\tau_\rho$ is on the order of the dimensionless RC time, $\eps$,
which is much larger than $\eps^2$, the dimensionless Debye time.
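The quality of this single-mode approximation is easy to spot-check numerically. The sketch below (our check, with illustrative parameter values) compares the exact Laplace-domain amplitude $R(s)\,k_1(\beta)$ from (\ref{eq:lin_response_R_expanded}) with $-K_\rho/\left(s(1+\tau_\rho s)\right)$ at an intermediate $s$ satisfying $s \ll 1/\eps^2$:

```python
import numpy as np

# Spot check (illustrative values): compare the exact Laplace-domain
# surface amplitude R(s) k_1(beta) with -K_rho / (s (1 + tau_rho s)) for
# s well below the Debye rate 1/eps^2.
eps, delta, E_o = 0.01, 1.0, 1.0

def s_times_rho_exact(s):
    beta = np.sqrt(s + 1.0 / eps ** 2)
    denom = ((1 + 2 / beta + 2 / beta ** 2) * (1 + 2 * delta * eps)
             - 1.0 / (beta * eps) ** 2)
    # R k_1(beta): the factors of exp(-beta) in R and k_1 cancel
    return -3.0 * E_o * (beta + 1.0) / beta ** 2 / denom

gamma = (1 + 2 * delta * eps) * (1 + eps) + delta
K_rho = 3 * E_o * (1 + eps) / (2 * gamma)
tau_rho = (eps * ((1 + 2 * delta * eps) * (1 + eps) - delta * eps)
           / (2 * gamma * (1 + eps)))

s = 100.0        # tau_rho * s = O(0.1), yet s << 1/eps^2 = 1e4
exact = s_times_rho_exact(s)
approx = -K_rho / (1 + tau_rho * s)
rel_err = abs(exact - approx) / abs(approx)
print(exact, approx, rel_err)
```

For these values the two expressions agree to better than a percent, and the $s \to 0$ limit of the exact transform reproduces $-K_\rho$ exactly.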
To obtain the linear response of cylinders, we follow the same procedure
as above. The main difference is that the series solution is written
in terms of a cosine series with the radial dependence given by modified
Bessel functions. Without going through the algebra, the results for
cylinders are given in Table~\ref{tab:linear_response} alongside the
analogous results for spheres.
\begin{table*}
\caption{\label{tab:linear_response} Table of formulae for
the linear response of metallic cylinders and spheres to weak applied
electric fields.
}
\begin{ruledtabular}
\begin{tabular}{ccc}
& Cylinder\footnote{$\theta$ measured from vertical axis.} & Sphere \\
\hline \\[1pt]
$\check{\rho}$ & $R K_1(\beta r) \cos \theta$
& $R k_1(\beta r) \cos \theta$ \\[12pt]
$\check{\phi}$ & $vs^{-1} -E_o s^{-1} r \cos \theta
+ \left [\frac{B}{r} - \frac{R K_1(\beta r)}{(\beta \eps)^2}
\right ] \cos \theta$
& $vs^{-1} -E_o s^{-1} r \cos \theta
+ \left [\frac{B}{r^2} - \frac{R k_1(\beta r)}{(\beta \eps)^2}
\right ] \cos \theta$ \\[12pt]
$R$ & $\frac{-2 E_o s^{-1} \left( \beta \eps \right) ^2}
{K_1(\beta)
+ \left [ 1 - (\beta \eps)^2 (1+\delta \eps) \right ] \beta K_1'(\beta)}$
& $\frac{-3 E_o s^{-1}} { e^{-\beta}
\left [
\left ( 1 + \frac{2}{\beta} + \frac{2}{\beta^2} \right )
\left ( 1 + 2 \delta \eps \right )
- \frac{1}{\left( \beta \eps \right)^2}
\right ]}$ \\[16pt]
$B$ & $-E_o s^{-1}
-R \left( \frac{1}{\left( \beta\eps \right)^2} - 1\right)
\beta K_1'(\beta)$
& $-\frac{1}{2} E_o s^{-1}
-\frac{1}{2} R \left( \frac{1}{\left( \beta\eps \right)^2} - 1\right)
\beta k_1'(\beta)$ \\[16pt]
$K_\rho$ & $\frac{2 E_o K_1(1/\eps)}{K_1(1/\eps) - \delta K_1'(1/\eps)}$
& $\frac{3 E_o (1+\eps)} {2
\left [ \left( 1+2\delta\eps \right) \left( 1+\eps \right) + \delta
\right]}$ \\[16pt]
$\tau_\rho$ & $-\eps \left \{
\frac{2 \eps K_1(1/\eps)
+ \left[2+\delta \eps - \delta \frac{K_1'(1/\eps)}{K_1(1/\eps)} \right]
K_1'(1/\eps)
+ \delta K_1''(1/\eps)}{2 \left[K_1(1/\eps) - K_1'(1/\eps) \right]}
\right \}$
& $\eps \left \{
\frac{(1 + 2 \delta \eps) (1 + \eps) - \delta \eps} {2 (1 + \eps)
\left [ \left( 1+2\delta\eps \right) \left( 1+\eps \right) + \delta
\right]} \right \}$ \\[16pt]
$K_q$ & $\frac{2 E_o \eps K_0(1/\eps)}{K_1(1/\eps) - \delta K_1'(1/\eps)}$
& $\frac{3 E_o \eps}
{\left( 1+2\delta\eps \right) \left( 1+\eps \right) + \delta}$
\\[16pt]
$\tau_q$ & $-\eps \left \{
\frac{\left(\eps + \frac{K_0'(1/\eps)}{K_0(1/\eps)} \right) K_1(1/\eps)
+ \left[1+2\delta\eps-\delta \frac{K_0'(1/\eps)}{K_0(1/\eps)} \right]
K_1'(1/\eps)
+ \delta K_1''(1/\eps)}{2 \left[K_1(1/\eps) - K_1'(1/\eps) \right]}
\right \}$
& $\eps \left \{
\frac{(1 + 2 \delta \eps) (1 + \eps) }{2
\left [ \left( 1+2\delta\eps \right) \left( 1+\eps \right) + \delta
\right]} \right \}$ \\[12pt]
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Response to a Weak, Oscillatory Field}
Due to the close relationship between Fourier and Laplace transforms,
the algebra involved in analyzing the response of the sphere to
a weak, oscillatory field is almost identical to the response to
a suddenly applied field. Thus, for sufficiently low frequencies
($\omega \ll 1/\eps^2$), we can immediately write down the response to
a weak, oscillatory field of the form
$E = E_o \ensuremath{\mathrm{Re}} \left ( e^{i \omega t} \right )$:
\begin{equation}
\rho(r = 1,\theta,t) = -K_\rho
\ensuremath{\mathrm{Re}} \left ( \frac{e^{i \omega t}}{1 + i \omega \tau_\rho} \right )
\cos \theta
\label{eq:lin_response_rho_oscillatory_field}.
\end{equation}
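Equation (\ref{eq:lin_response_rho_oscillatory_field}) is a standard first-order (RC) frequency response: the induced charge is attenuated by $1/\sqrt{1+(\omega\tau_\rho)^2}$ and lags the drive by $\arctan(\omega\tau_\rho)$. A small sketch (the values of $K_\rho$ and $\tau_\rho$ below are illustrative stand-ins, not computed from the formulas):

```python
import numpy as np

# First-order (RC) frequency response implied by the oscillatory solution:
# amplitude K_rho/sqrt(1 + (omega tau_rho)^2), phase lag arctan(omega tau_rho).
# K_rho and tau_rho are illustrative stand-ins, not computed values.
K_rho, tau_rho = 0.75, 0.0025

def response(omega):
    H = 1.0 / (1.0 + 1j * omega * tau_rho)
    return K_rho * abs(H), -np.angle(H)    # amplitude, phase lag (radians)

amp_low, lag_low = response(0.01 / tau_rho)   # quasi-static forcing
amp_rc, lag_rc = response(1.0 / tau_rho)      # rolloff at omega = 1/tau_rho
print(amp_rc / K_rho, np.degrees(lag_rc))
```

At $\omega = 1/\tau_\rho$ the amplitude drops to $K_\rho/\sqrt{2}$ with a $45^\circ$ lag, the usual RC rolloff.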
\subsection{Accumulated Surface Charge Density}
Because of its importance in many physical processes, the accumulated
surface charge density, $\check{q}$, is an interesting quantity to consider.
It is easily computed from the (volume) charge density $\rho$ by
integrating it in the radial direction from $r=1$ to $r = \infty$.
While the identification of this integral with a surface charge density
really only makes sense in the thin double-layer limit, the
accumulated surface charge density is still worth examining.
Fortunately, the integral is straightforward because
$k_1(z) = -k_0'(z)$~\cite{weisstein_modified_spherical_bessel_function}
and the radial dependence of the charge density is independent of the angle.
Using these observations, the surface charge density is given by
\begin{eqnarray}
\check{q}(\theta,s) &=& \frac{R~k_0(\beta)}{\beta} \cos \theta
\nonumber \\
&=& \frac{R~e^{-\beta}}{\beta^2} \cos \theta.
\end{eqnarray}
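The radial integral can be confirmed directly: since $k_1 = -k_0'$ with $k_0(x) = e^{-x}/x$, one has $\int_1^\infty k_1(\beta r)\,dr = k_0(\beta)/\beta$. A quick quadrature check (arbitrary $\beta$):

```python
import numpy as np
from scipy.integrate import quad

# Quadrature check of the radial integral behind q: since k_1 = -k_0',
#   int_1^inf k_1(beta r) dr = k_0(beta) / beta,  with k_0(x) = exp(-x)/x.
# beta is an arbitrary test value.
beta = 5.0
k0 = lambda x: np.exp(-x) / x
k1 = lambda x: np.exp(-x) * (x + 1.0) / x ** 2

integral, _ = quad(lambda r: k1(beta * r), 1.0, np.inf)
closed_form = k0(beta) / beta
print(integral, closed_form)
```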
In the long time limit $s \ll 1/\eps^2$, we find that the
surface charge density undergoes an exponential relaxation,
$q(\theta,t) = -K_q \left (1 - e^{-t/\tau_q} \right ) \cos \theta$,
with
\begin{eqnarray}
K_q &=& \frac{3 E_o \eps}{\gamma} \\
\tau_q &=& \eps \left [
\frac{(1 + 2 \delta \eps) (1 + \eps) }{2 \gamma}
\right ].
\end{eqnarray}
As with $\rho$, we see that the characteristic relaxation
time for $q$ is on the order of the dimensionless RC time.
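As with $\rho$, the quoted time constant can be spot-checked against the exact transform. The sketch below (our check, with illustrative parameters) compares the \emph{normalized} exact relaxation $s\check{q}(s)/\lim_{s\to 0} s\check{q}(s)$ with $1/(1+\tau_q s)$:

```python
import numpy as np

# Normalized spot check of the quoted time constant tau_q: the exact
# transform gives s*q_check(s) proportional to -3 E_o / (beta^2 D(s)), and
# s*q_check(s) / lim_{s->0} s*q_check(s) should behave as 1/(1 + tau_q s).
eps, delta = 0.01, 1.0

def s_times_q_exact(s):
    beta = np.sqrt(s + 1.0 / eps ** 2)
    denom = ((1 + 2 / beta + 2 / beta ** 2) * (1 + 2 * delta * eps)
             - 1.0 / (beta * eps) ** 2)
    return -3.0 / (beta ** 2 * denom)   # E_o = 1; exp(-beta) cancels in R k_0

gamma = (1 + 2 * delta * eps) * (1 + eps) + delta
tau_q = eps * (1 + 2 * delta * eps) * (1 + eps) / (2 * gamma)

s = 100.0
ratio = s_times_q_exact(s) / s_times_q_exact(1e-9)
target = 1.0 / (1.0 + tau_q * s)
rel_err = abs(ratio - target) / target
print(ratio, target, rel_err)
```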
\subsection{Time Scales for Linear Response}
We have seen that at long times both $\rho$ and $q$ relax exponentially
with characteristic time scales on the order of the RC time, $\eps$.
However, as Figure \ref{figure:3D_lin_relax_time} shows, the relaxation times
for the two quantities are not exactly the same and have a nontrivial
dependence on the diffuse layer thickness, $\eps$, and Stern capacitance,
$\delta$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1.6in,height=1.4in]{figs/rho_lin_relax_time_3D}
\includegraphics[width=1.6in,height=1.4in]{figs/q_lin_relax_time_3D}
\begin{minipage}[h]{3in}
\caption[Exponential relaxation time constants for the charge density
and accumulated surface charge density at weak applied fields as a function
of $\eps$ and $\delta$]{
\label{figure:3D_lin_relax_time}
Exponential relaxation time constants for the charge density $\rho$ and
the accumulated surface charge density $q$ at weak applied fields
as a function of $\eps$ and $\delta$.
The left panel shows the relaxation time constant for the charge
density at the surface of the sphere, $\tau_\rho(r=1)$.
The right panel shows the relaxation time constant for the accumulated
surface charge density, $\tau_q$.
}
\end{minipage}
\end{center}
\end{figure}
Notice that for infinitely thin double layers ($\eps = 0$), the relaxation
times for the charge density and the accumulated surface charge density
are identical. This behavior is expected since for thin double
layers, almost all of the charge density in the diffuse layer is located
very close to the surface of the particle. In this limit, the relaxation
time has a strong dependence on the Stern capacitance.
For thick double layers ($\eps \gg 1$), this dependence on the Stern
capacitance disappears and the relaxation time curves for all $\delta$
values converge.
Physically, the difference in the relaxation times for the surface charge
and the accumulated charge densities for nonzero $\eps$ values is an
indication of the complex spatio-temporal structure of the double layer
charging. For thin double layers, the difference between $\tau_\rho$ and $\tau_q$
is relatively small because the charge in the double layer is restricted
to a thin region.
For thick double layers, however, the difference in relaxation times is
accentuated because the charge in the double layer is spread out over a
larger spatial region, which does not necessarily charge uniformly.
In fact, the observation that $\tau_\rho(r=1)$ is smaller than $\tau_q$ for
larger values of $\eps$ (Figure \ref{figure:3D_lin_relax_time}) suggests
that regions closer to the surface of the sphere charge faster than regions
that are further away.
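The trends described here follow directly from the sphere-column formulas of Table~\ref{tab:linear_response}; a short sketch evaluating them over a range of $\eps$ (with an arbitrary choice of $\delta$) confirms that $\tau_\rho \le \tau_q$, with equality approached as $\eps \to 0$:

```python
import numpy as np

# Evaluate the sphere-column formulas for tau_rho and tau_q over a range of
# double layer thicknesses (delta chosen arbitrarily): the two time
# constants agree as eps -> 0 and separate, with tau_rho < tau_q, as eps grows.
delta = 1.0
eps = np.logspace(-3, 1, 50)

gamma = (1 + 2 * delta * eps) * (1 + eps) + delta
tau_rho = (eps * ((1 + 2 * delta * eps) * (1 + eps) - delta * eps)
           / (2 * gamma * (1 + eps)))
tau_q = eps * (1 + 2 * delta * eps) * (1 + eps) / (2 * gamma)

print(tau_rho[0] / tau_q[0], tau_rho[-1] / tau_q[-1])
```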
\section{Weakly Nonlinear Relaxation for Thin Double Layers}
\label{sec:weakly_nonlinear_dynamics}
\subsection{ Dynamical Regimes in Space-Time }
For weak applied fields in linear response, the complicated dependence
of the Laplace transform solution for large $s$ (short times) above
hints at the presence of multiple time scales in the charging
dynamics. In this section, using boundary-layer methods, we derive
asymptotic solutions in the thin double-layer limit ($\eps \ll 1$) at
somewhat larger electric fields (defined below) for the concentrations
and electric potential by solving the leading order equations at the
two dominant time scales: (1) the RC time $\lambda_D a/D$ and (2) the
bulk diffusion time $a^2/D$. We proceed by seeking the leading order
term (and in some cases, the first-order correction) to the governing
equations (\ref{eq:c_eqn_bulk_diffusion_time}) --
(\ref{eq:c_rho_poisson_eqn_bulk_diffusion_time}) with both the spatial
\emph{and} the time coordinate scaled to focus on the space-time
region of interest.
For the analysis, it is important to realize that the space-time domain is
divided into five asymptotically distinct regions
(see Figure \ref{figure:five_space_time_regions}).
At the RC time, there exist three spatially significant regions:
(i) an $O(\eps)$ quasi-steady double layer, (ii) an $O(\sqrt{\eps})$
dynamically active diffusion layer, and (iii) a quasi-equilibrium, uniform
bulk electrolyte layer with time-varying harmonic electric potential.
At this time scale, the charging dynamics are completely driven by the
$O(\sqrt{\eps})$ diffusion layer.
At the diffusion time, there are only two important spatial regimes:
(i) a quasi-steady double layer and
(ii) a dynamic bulk that evolves through locally electroneutral,
diffusion processes.
\begin{figure}
\begin{center}
\includegraphics[width=3.1in]{figs/space_time_regions}
\begin{minipage}[h]{3in}
\caption[Five dominant regions of space-time that define the dynamic
response of a metal colloid sphere to an applied electric field]{
\label{figure:five_space_time_regions}
Five asymptotically distinct regions of space-time that govern the
dynamic response of a metal colloid sphere to an applied electric field.
Note the nested spatial boundary layers at the RC time ($t = O(\eps)$).
}
\end{minipage}
\end{center}
\end{figure}
\subsection{Dynamics at the RC Time}
\subsubsection{Uniform Bulk and Equilibrium Double Layers}
To examine the dynamics at the RC time, we rewrite
(\ref{eq:c_eqn_bulk_diffusion_time}) --
(\ref{eq:c_rho_poisson_eqn_bulk_diffusion_time})
by rescaling time to the RC time using $\tRC = t/\eps$:
\begin{eqnarray}
\frac{\partial c}{\partial \tRC} &=&
\eps \nabla \cdot \left ( \nabla c + \rho \nabla \phi \right )
\label{eq:c_eqn_bulk_RC_time} \\
\frac{\partial \rho}{\partial \tRC} &=&
\eps \nabla \cdot \left ( \nabla \rho + c \nabla \phi \right )
\label{eq:rho_eqn_bulk_RC_time} \\
-\eps^2 \ensuremath{\nabla^2} \phi &=& \rho
\label{eq:poisson_eqn_bulk_RC_time}.
\end{eqnarray}
Since the spatial coordinate is scaled to the bulk length, the leading order
solution of these equations describes the dynamics of the bulk during the
double layer charging phase. Substituting a regular asymptotic expansion
of the form
\begin{equation}
c(x,t) \sim c_0 + \eps c_1 + \eps^2 c_2 + \ldots
\end{equation}
into equations
(\ref{eq:c_eqn_bulk_RC_time})-(\ref{eq:poisson_eqn_bulk_RC_time})
yields a hierarchy of partial differential equations.
By sequentially solving the equations using the initial conditions,
it is easy to show that the ``outer'' solutions at the RC charging time
scale are
\begin{eqnarray}
c_0(x,t) &\equiv& 1 \\
c_j(x,t) &\equiv& 0 \ ,\ \ j = 1,2,3, \ldots \\
\rho_j(x,t) &\equiv& 0 \ ,\ \ j = 0,1,2, \ldots
\label{eq:c_and_rho_bulk_RC_time}
\end{eqnarray}
with $\phi_j$ harmonic at all orders. In other words, the bulk
solution can be completely expressed (\emph{without} a series expansion)
as a uniform concentration profile, $c(x,t) \equiv 1$, and
a time-varying harmonic electric potential, $\phi$.
Note that by taking advantage of spherical geometry and axisymmetry in
our problem, we can write the potential as a series in spherical harmonics
with zero zonal wavenumber (\emph{i.e., } Legendre polynomials in $\cos \theta$):
\begin{eqnarray}
\phi(r,\theta,t) &=& -E_o r \cos \theta
+ \sum_{l=0}^{\infty} P_l(\cos \theta) \frac{A_{l}(t)}{r^{l+1}}
\label{eq:phi_bulk_RC_time}
\end{eqnarray}
where the radial dependence of each term has been selected so that
$\phi$ automatically satisfies Laplace's equation at all times.
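That each exterior term in (\ref{eq:phi_bulk_RC_time}) is harmonic can be checked by brute force with the axisymmetric spherical Laplacian, $\nabla^2 f = f_{rr} + (2/r) f_r + (f_{\theta\theta} + \cot\theta\, f_\theta)/r^2$; the finite-difference sketch below (with an arbitrary evaluation point) is ours, not the authors':

```python
import numpy as np
from scipy.special import eval_legendre

# Finite-difference check (illustrative) that each exterior term
# P_l(cos(theta)) / r^(l+1) is harmonic: the axisymmetric spherical
# Laplacian f_rr + (2/r) f_r + (f_tt + cot(theta) f_t)/r^2 should vanish.
def term(l, r, th):
    return eval_legendre(l, np.cos(th)) / r ** (l + 1)

def laplacian(l, r, th, h=1e-3):
    d_rr = (term(l, r + h, th) - 2 * term(l, r, th) + term(l, r - h, th)) / h ** 2
    d_r = (term(l, r + h, th) - term(l, r - h, th)) / (2 * h)
    d_tt = (term(l, r, th + h) - 2 * term(l, r, th) + term(l, r, th - h)) / h ** 2
    d_t = (term(l, r, th + h) - term(l, r, th - h)) / (2 * h)
    return d_rr + 2 * d_r / r + (d_tt + d_t / np.tan(th)) / r ** 2

# arbitrary exterior evaluation point r = 1.7, theta = 1.0 rad
res = max(abs(laplacian(l, 1.7, 1.0)) for l in range(4))
print(res)
```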
At the RC charging time, the $O(\eps)$ double layer is in quasi-equilibrium,
which is easily verified by rescaling the spatial coordinate normal
to the particle surface by the dimensionless Debye length $\eps$.
As mentioned earlier, a quasi-equilibrium double layer possesses a structure
described by the Gouy-Chapman-Stern model. For convenience, we repeat
the GCS solution here:
\begin{eqnarray}
\tilde{c}_\pm = \hat{c} e^{\mp\tilde{\psi}} \ &,&
\ \ \tilde{c} = \hat{c} \cosh{\tilde{\psi}} \ ,
\ \ \tilde{\rho} = - \hat{c} \sinh{\tilde{\psi}} \nonumber \\
\tilde{\psi}(z) &=& 4 \tanh^{-1}\left(\tanh(\zeta/4) e^{-\sqrt{\hat{c}}z}\right).
\label{eq:double_layer_structure}
\end{eqnarray}
Recall that $\tilde{\psi}$ is the excess voltage relative to the bulk,
$\tilde{\psi} = \tilde{\phi} - \hat{\phi}$ and that the tilde and hat accents are
used to indicate quantities within and just outside of the double layer,
respectively. Also, we reiterate that unlike the 1D case, $\zeta$ is
\emph{not} a constant but is a function of position along the electrode
surface.
Since the bulk concentration at the RC time is uniform, the double layer
structure is given by (\ref{eq:double_layer_structure}) with $\hat{c}$ set
equal to $1$.
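For concreteness, the following sketch (an illustrative check, not part of the analysis) evaluates the GCS profile and confirms by finite differences that it satisfies the Poisson--Boltzmann equation $d^2\tilde{\psi}/dz^2 = \hat{c}\sinh\tilde{\psi}$ with $\tilde{\psi}(0) = \zeta$ and decay at infinity:

```python
import math

def psi(z, zeta, c_hat):
    # GCS diffuse-layer potential: 4 atanh( tanh(zeta/4) exp(-sqrt(c_hat) z) )
    return 4.0 * math.atanh(math.tanh(zeta / 4.0)
                            * math.exp(-math.sqrt(c_hat) * z))

def pb_residual(z, zeta, c_hat, h=1e-4):
    # residual of psi'' = c_hat * sinh(psi), by central differences
    d2 = (psi(z + h, zeta, c_hat) - 2.0 * psi(z, zeta, c_hat)
          + psi(z - h, zeta, c_hat)) / h**2
    return d2 - c_hat * math.sinh(psi(z, zeta, c_hat))

zeta, c_hat = 4.0, 1.0        # at the RC time, the bulk value is c_hat = 1
res = pb_residual(0.5, zeta, c_hat)
```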
Notice that at the RC time, bulk concentration gradients have not yet had
time to form, so the variation of the diffuse layer concentration and
charge density along the surface of the sphere is solely due to a
non-uniform zeta-potential.
\subsubsection{$O \left( \sqrt{\eps} \right)$ Diffusion Layer}
The analysis in the previous section leads us to an apparent paradox.
The dynamics of the system seem to have been lost since both the bulk and
the boundary layers are in quasi-equilibrium at leading order; neither
layer drives its own dynamics.
As discussed in \cite{bazant2004}, the resolution to this paradox lies in
the time-dependent flux matching between the bulk and the boundary layer.
Unfortunately, it is inconsistent to directly match the bulk to the
boundary layer; there \emph{must} exist a nested
$O \left( \sqrt{\eps} \right)$ diffusion layer in order to account for the
build up of both surface charge \emph{and} surface excess neutral salt
concentration.
Mathematically, the presence of the diffusion layer at the RC time scale
appears as a dominant balance in the transport equations by rescaling
the spatial coordinate in the normal direction to the surface by
$\sqrt{\eps}$ to obtain:
\begin{eqnarray}
\frac{\partial \bar{c}}{\partial \tRC} &=&
\eps^2 \nabla_s \cdot \left ( \nabla_s \bar{c} + \bar{\rho} \nabla_s \bar{\phi} \right )
\nonumber \\
& & + \ \frac{\partial}{\partial \zdiff}
\left ( \frac{\partial \bar{c}}{\partial \zdiff}
+\bar{\rho} \frac{\partial \bar{\phi}}{\partial \zdiff}
\right )
\label{eq:c_eqn_diffusion_RC_time} \\
\frac{\partial \bar{\rho}}{\partial \tRC} &=&
\eps^2 \nabla_s \cdot \left ( \nabla_s \bar{\rho} + \bar{c} \nabla_s \bar{\phi} \right )
\nonumber \\
& & + \ \frac{\partial}{\partial \zdiff}
\left ( \frac{\partial \bar{\rho}}{\partial \zdiff}
+\bar{c} \frac{\partial \bar{\phi}}{\partial \zdiff}
\right )
\label{eq:rho_eqn_diffusion_RC_time} \\
- \eps^2 \ensuremath{\nabla^2}_s \bar{\phi} - \eps \frac{\partial^2 \bar{\phi}}{\partial \zdiff^2}
&=& \bar{\rho}
\label{eq:poisson_eqn_diffusion_RC_time}.
\end{eqnarray}
Here the bar accent denotes the ``diffusion layer'' solution at the RC time
and $z' = Z/\sqrt{\eps}$ is the spatial coordinate in the direction
normal to the surface. Notice that at this length scale, the system
is \emph{not} in quasi-equilibrium as it is at the bulk and Debye length
scales. It is, however, locally electroneutral at leading order
as a result of (\ref{eq:poisson_eqn_diffusion_RC_time}).
As the double layer charges, it absorbs an $O(\eps)$ amount of charge and
neutral salt from the $O(\sqrt{\eps})$ diffusion layer. Therefore, we
expect concentration changes within the diffusion layer to be on the
order of $\sqrt{\eps}$, which motivates the use of an asymptotic
expansion of the form
\begin{equation}
\bar{c}(x,t) \sim \bar{c}_0 + \eps^{1/2}~\bar{c}_{1/2} + \eps~\bar{c}_1 + \ldots.
\end{equation}
Using this expansion in (\ref{eq:c_eqn_diffusion_RC_time}) --
(\ref{eq:poisson_eqn_diffusion_RC_time}), we find that the leading
order equations are:
\begin{eqnarray}
\frac{\partial \bar{c}_0}{\partial \tRC} &=&
\frac{\partial^2 \bar{c}_0}{\partial \zdiff^2}
\label{eq:O1_c_eqn_diffusion_RC_time} \\
\frac{\partial}{\partial \zdiff}
\left( \bar{c}_0 \frac{\partial \bar{\phi}_0}{\partial \zdiff} \right)
&=& 0
\label{eq:O1_rho_eqn_diffusion_RC_time} \\
\bar{\rho}_0 &=& 0
\label{eq:O1_poisson_eqn_diffusion_RC_time}.
\end{eqnarray}
The initial conditions and boundary condition as
$\zdiff \rightarrow \infty$ for these equations are
$\bar{c}_0 (t = 0) \equiv 1$
and $\bar{c}_0 (\zdiff \rightarrow \infty)= 1$, respectively.
The boundary condition at $\zdiff = 0$ is given by flux matching
with the double layer.
Rescaling space and time in (\ref{eq:q_evolution_eqn_GCS}) and
(\ref{eq:w_evolution_eqn_GCS}), we find that the appropriate flux
boundary conditions for the diffusion layer are
\begin{eqnarray}
\frac{\partial q}{\partial \tRC} &=&
\eps \nabla_s \cdot
\left(
q \nabla_s \log \bar{c}
+ w \nabla_s \bar{\phi}
\right)
+ \frac{1}{\sqrt{\eps}~}~\bar{c} \frac{\partial \bar{\phi}}{\partial \zdiff}
\label{eq:q_flux_bc_diffusion_layer} \\
\frac{\partial w}{\partial \tRC} &=&
\eps \nabla_s \cdot
\left(
w \nabla_s \log \bar{c}
+ q \nabla_s \bar{\phi}
\right)
+ \frac{1}{\sqrt{\eps}~}~\frac{\partial \bar{c}}{\partial \zdiff}
\label{eq:w_flux_bc_diffusion_layer}.
\end{eqnarray}
Thus, the leading order flux boundary conditions,
which appear at $O(1/\sqrt{\eps})$, are
\begin{eqnarray}
\bar{c}_0 \frac{\partial \bar{\phi}_0}{\partial \zdiff} &=& 0 \\
\frac{\partial \bar{c}_0}{\partial \zdiff} &=& 0.
\end{eqnarray}
The leading order solutions in the diffusion layer,
$\bar{c}_0 \equiv 1$ and $\bar{\phi}_0 = \phi~(Z \rightarrow 0)$,
are easily determined by applying the initial and boundary conditions
to (\ref{eq:O1_c_eqn_diffusion_RC_time}) --
(\ref{eq:O1_poisson_eqn_diffusion_RC_time}).
To obtain dynamics, we must examine the first-order correction to
the solution. At the next higher order, the governing equations are
\begin{eqnarray}
\frac{\partial \bar{c}_{1/2}}{\partial \tRC} &=&
\frac{\partial^2 \bar{c}_{1/2}}{\partial \zdiff^2}
\label{eq:Osqrteps_c_eqn_diffusion_RC_time} \\
\frac{\partial^2 \bar{\phi}_{1/2}}{\partial \zdiff^2}
&=& 0
\label{eq:Osqrteps_rho_eqn_diffusion_RC_time} \\
\bar{\rho}_{1/2} &=& 0
\label{eq:Osqrteps_poisson_eqn_diffusion_RC_time}
\end{eqnarray}
where we have made use of the leading order solution to simplify
(\ref{eq:Osqrteps_rho_eqn_diffusion_RC_time}). Again, the initial
conditions and boundary condition at infinity are simple:
$\bar{c}_{1/2} (t = 0) \equiv 0$ and
$\bar{c}_{1/2} (\zdiff \rightarrow \infty)= 0$.
The flux boundary conditions at $\zdiff = 0$, however, are more interesting
because they involve the charging of the double layer:
\begin{eqnarray}
\frac{\partial q}{\partial \tRC} &=&
\left . \frac{\partial \bar{\phi}_{1/2}}{\partial \zdiff} \right |_{z=0}
\label{eq:O1_q_flux_bc_diffusion_layer} \\
\frac{\partial w}{\partial \tRC} &=&
\left . \frac{\partial \bar{c}_{1/2}}{\partial \zdiff} \right |_{z=0}
\label{eq:O1_w_flux_bc_diffusion_layer}.
\end{eqnarray}
Thus, simple diffusion of neutral salt at $x = O(\sqrt{\eps})$ driven by
absorption into the $O(\eps)$ double layer dominates the dynamics of the
diffusion layer. From (\ref{eq:Osqrteps_rho_eqn_diffusion_RC_time})
and (\ref{eq:O1_q_flux_bc_diffusion_layer}), we see that the electric
potential possesses a linear profile with slope given by rate of
surface charge build up in the double layer:
\begin{equation}
\bar{\phi} \sim \phi~(Z \rightarrow 0)
+ \eps^{1/2}~\left( \frac{\partial q}{\partial \tRC} \right) \zdiff
\label{eq:phi_diffusion_layer}.
\end{equation}
Note that the constant term at $O(\sqrt{\eps})$ is forced to be zero by
matching with the bulk electric potential since all higher order corrections
to the bulk potential are identically zero.
\subsubsection{Effective Boundary Conditions Across Entire Diffusion Layer}
It is precisely the fact that the electric potential has the form
(\ref{eq:phi_diffusion_layer}) that justifies the common approach
of asymptotic matching directly between the bulk and the double layer
\cite{iceo2004b, bazant2004, iceo2004a}. For instance,
the time-dependent matching used by Bazant, Thornton, and
Ajdari \cite{bazant2004} rests on the implicit assumption that
the leading order electric field in the diffusion layer is a constant
and appears at $O(\sqrt{\eps})$ so that
\begin{equation}
\left. \frac{\partial \phi}{\partial Z} \right|_{Z=0} =
\left. \frac{1}{\sqrt{\eps}}
\left( \frac{\partial \bar{\phi}}{\partial \zdiff} \right)
\right|_{\zdiff \rightarrow \infty}
\sim \left. \frac{\partial \bar{\phi}_{1/2}}{\partial \zdiff}
\right|_{\zdiff \rightarrow \infty}
= \frac{\partial q}{\partial \tRC}
\label{eq:double_layer_charging_eqn_RC_time}.
\end{equation}
Similarly, the definition of the zeta-potential and the Stern boundary
condition (\ref{eq:stern_bc_GCS}) across the entire diffusion layer
require that $\bar{\phi} \sim \phi~(Z \rightarrow 0)$ at leading order.
\subsubsection{Leading-order Dynamics}
Using the results of our discussion, we now derive the leading order
equations that govern the charging dynamics on the surface of the sphere.
Since charging is non-uniform over the electrode surface, the equations take
the form of partial differential equations on the surface of the sphere.
Defining $\Psi(\theta,\phi) \equiv v - \phi(r=1,\theta,\phi)$,
we can write the Stern boundary condition (\ref{eq:stern_bc_GCS})
and the double layer charging equation
(\ref{eq:double_layer_charging_eqn_RC_time}) as
\begin{eqnarray}
\zeta + 2 \delta \sinh \left( \zeta/2 \right) &=& \Psi
\label{eq:effective_stern_bc_RC_time} \\
C (\Psi) \frac{\partial \Psi}{\partial \tRC}
&=& \frac{\partial \phi}{\partial n}
\label{eq:effective_charging_eqn_RC_time}
\end{eqnarray}
where we have introduced the leading order differential double layer
capacitance $C (\Psi) = -\partial q / \partial \Psi$ and used the
fact that $\hat{c} = 1$ at the RC time.
Together with (\ref{eq:dlc_q_def_GCS}),
(\ref{eq:effective_stern_bc_RC_time}) and
(\ref{eq:effective_charging_eqn_RC_time}) form a complete set of
boundary conditions for the leading order electric potential in the bulk
region.
To compute the double layer capacitance, we can combine
(\ref{eq:dlc_q_def_GCS}) with (\ref{eq:effective_stern_bc_RC_time}) to
obtain
\begin{equation}
C = \frac{1}{\ensuremath{\mathrm{sech}} \left(\zeta/2\right) + \delta}
\label{eq:double_layer_capacitance}.
\end{equation}
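The algebra can be checked numerically: assuming the standard GCS charge--voltage relation $q = -2\sinh(\zeta/2)$ at unit bulk concentration, the sketch below compares (\ref{eq:double_layer_capacitance}) with a direct finite-difference evaluation of $-\partial q/\partial \Psi$ through the Stern condition (\ref{eq:effective_stern_bc_RC_time}):

```python
import math

delta = 0.1

def q(zeta):
    # standard GCS total diffuse charge at unit bulk concentration
    return -2.0 * math.sinh(zeta / 2.0)

def Psi(zeta):
    # total double layer voltage: zeta plus the Stern-layer drop
    return zeta + 2.0 * delta * math.sinh(zeta / 2.0)

def C_closed(zeta):
    # closed form: C = 1 / (sech(zeta/2) + delta)
    return 1.0 / (1.0 / math.cosh(zeta / 2.0) + delta)

def C_numeric(zeta, h=1e-6):
    # C = -dq/dPsi, evaluated parametrically through zeta
    dq = (q(zeta + h) - q(zeta - h)) / (2.0 * h)
    dPsi = (Psi(zeta + h) - Psi(zeta - h)) / (2.0 * h)
    return -dq / dPsi
```

In the two limits, $C \rightarrow 1/(1+\delta)$ as $\zeta \rightarrow 0$ and $C \rightarrow 1/\delta$ as $\zeta \rightarrow \infty$.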
Since $C$ depends on $\Psi$ (via $\zeta$), the charging equation
(\ref{eq:effective_charging_eqn_RC_time}) is nonlinear, which makes
the problem analytically intractable.
For small applied fields (where $\zeta \approx E_o$ is a
reasonable approximation), analytical progress can be made by linearizing
the double-layer capacitance around $\zeta = 0$ to obtain
$C \sim 1/(1+\delta)$. This approximation leads to a linear charging
equation making it possible to solve the equations as an expansion
in spherical harmonics. Substituting the expansion~(\ref{eq:phi_bulk_RC_time})
into (\ref{eq:effective_charging_eqn_RC_time}) and the definition of
$\Psi$, we obtain a decoupled system of ordinary differential equations for
the expansion coefficients:
\begin{eqnarray}
\frac{d A_0}{d\tRC} + (1+\delta) A_0 &=& \frac{d v}{d\tRC} \nonumber \\
\frac{d A_1}{d\tRC} + 2 (1+\delta) A_1 &=&
-(1+\delta) E_o + \frac{d E_o}{d\tRC} \nonumber \\
\frac{d A_l}{d\tRC} + (1+l) (1+\delta) A_l &=& 0 \ , \ l > 1.
\label{eq:harmonic_coef_eqns}
\end{eqnarray}
To retain generality, we have allowed for the possibility that the applied
electric field and surface potential are time-varying.
For the case of a steady surface potential and uniform applied field,
we find that the bulk electric potential is
\begin{equation}
\phi = \frac{v}{r} e^{-(1+\delta) \tRC}
- E_o r \cos \theta \left [
1 + \frac{1}{2 r^3} \left ( 1 - 3 e^{-2 (1+\delta) \tRC} \right )
\right ]
\label{eq:phi_bulk_low_field_RC_time}
\end{equation}
where we have assumed that both the surface potential and the electric field
are suddenly switched on at $\tRC = 0$. The initial condition in this
situation is that of a conducting sphere at potential $v$ in a uniform
applied electric field $E_o$:
$\phi = v - E_o r \cos \theta \left( 1-1/r^3\right)$.
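As a consistency check (a sketch, not the solution method used in the text), the dipole coefficient implied by (\ref{eq:phi_bulk_low_field_RC_time}), $A_1(\tRC) = -E_o/2 + (3E_o/2)\,e^{-2(1+\delta)\tRC}$, can be compared against a direct Runge--Kutta integration of the $A_1$ equation in (\ref{eq:harmonic_coef_eqns}) with $A_1(0) = E_o$:

```python
import math

E0, delta = 1.0, 0.1

def dA1(t, A1):
    # dA1/dt + 2 (1 + delta) A1 = -(1 + delta) E0   (steady applied field)
    return -2.0 * (1.0 + delta) * A1 - (1.0 + delta) * E0

def rk4(f, y, t, t_end, n=2000):
    # classical fourth-order Runge-Kutta integration
    h = (t_end - t) / n
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2.0, y + h * k1 / 2.0)
        k3 = f(t + h / 2.0, y + h * k2 / 2.0)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += h
    return y

A1_numeric = rk4(dA1, E0, 0.0, 2.0)   # A1(0) = E0: conducting-sphere start
A1_exact = -E0 / 2.0 + 1.5 * E0 * math.exp(-2.0 * (1.0 + delta) * 2.0)
```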
Using (\ref{eq:phi_bulk_low_field_RC_time}), the double layer potential and
total diffuse charge are easily determined to be
\begin{eqnarray}
\Psi &=& v \left ( 1 - e^{-(1+\delta) \tRC} \right ) \nonumber \\
& & + \ \frac{3}{2}E_o \cos \theta
\left ( 1 - e^{-2 (1+\delta) \tRC} \right )
\label{eq:Psi_low_field_RC_time} \\
q &\sim& -\frac{v}{1+\delta} \left ( 1 - e^{-(1+\delta) \tRC} \right )
\nonumber \\
& & - \ \frac{3}{2 \left(1+\delta\right) }
E_o \cos \theta \left ( 1 - e^{-2 (1+\delta) \tRC} \right )
\label{eq:q_low_field_RC_time}.
\end{eqnarray}
These results are consistent with the calculations by Bazant and
Squires~\cite{iceo2004b,iceo2004a}, which is expected since the low applied
field limit corresponds to the regime where the total diffuse charge is
linearly related to the zeta-potential (and therefore the total double layer
potential). It is worth mentioning that in the common situation where
the surface potential is set well before $t = 0$, then the first term
in each of the above three expressions is not present.
Analogous double layer charging formulae for cylinders are shown in
Table~\ref{tab:linear_charging_RC_time}.
\begin{table*}
\caption{\label{tab:linear_charging_RC_time} Table of formulae for
double layer charging of metallic cylinders and spheres at weak applied
electric fields at the RC-time. In these formulae, the potential, $v$,
of the particle is set to zero.
}
\begin{ruledtabular}
\begin{tabular}{ccc}
& Cylinder\footnote{$\theta$ measured from vertical axis.} & Sphere \\
\hline \\[1pt]
$\phi$ & $-E_o r \cos \theta
\left [ 1 + \frac{1}{r^2} \left( 1-2e^{-(1+\delta)t} \right)
\right ]$
& $-E_o r \cos \theta
\left [ 1 + \frac{1}{2r^3} \left( 1-3 e^{-2(1+\delta)t} \right)
\right ]$ \\[8pt]
$\Psi$ & $ 2 E_o \cos \theta \left( 1 - e^{-(1+\delta) t} \right) $
& $ \frac{3}{2} E_o \cos \theta
\left( 1 - e^{-2(1+\delta) t} \right) $ \\[8pt]
$q$ & $ -\frac{2}{1+\delta} E_o \cos \theta
\left( 1-e^{-(1+\delta) t} \right) $
& $ -\frac{3}{2 (1+\delta)} E_o \cos \theta
\left( 1-e^{-2(1+\delta) t} \right) $ \\[6pt]
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsubsection{Numerical Model}
For larger applied fields, the nonlinear double layer capacitance forces us
to use numerical solutions to gain physical insight.
Since the bulk electric potential is a time-varying harmonic function,
it is most natural to numerically model the evolution equation for
the potential using a multipole expansion with harmonic terms.
Truncating (\ref{eq:phi_bulk_RC_time}) after a finite number of terms
yields a discrete solution of the form:
\begin{eqnarray}
\hat{\phi}(r,\theta,\tRC) = -E_o r \cos \theta
+ \sum_{l=0}^{N} \frac{A_l(\tRC)}{r^{l+1}} P_l(\cos \theta)
\label{eq:numerical_model_phi_weakly_nonlinear}
\end{eqnarray}
where the unknowns are the time-dependent coefficients in the expansion.
By using (\ref{eq:numerical_model_phi_weakly_nonlinear}) and enforcing
that (\ref{eq:effective_charging_eqn_RC_time}) is satisfied at the
collocation points $\theta_i = i \pi/N$, we derive a system of ordinary
differential equations for the coefficients $A_l(\tRC)$:
\begin{equation}
\mat{C} \mat{P} \frac{d \vec{A}}{d\tRC} =
- E_o \mat{P}(:,2) - \mat{Q} \vec{A}
\label{eq:numerical_model_weakly_nonlinear_RC_time}
\end{equation}
where $\vec{A}$ is the vector of expansion coefficients,
$\mat{P}$ and $\mat{Q}$ are collocation matrices that relate
the expansion coefficients to $\phi$ and $\partial \phi / \partial n$,
respectively, and $\mat{C}$ is a diagonal matrix that
represents the double layer capacitance at the collocation points.
Note that $\mat{P}(:,2)$ (which represents the second column of
$\mat{P}$ in MATLAB notation) is the discrete representation
of $P_1(\cos \theta) = \cos \theta$, so the term $E_o \mat{P}(:,2)$
accounts for the applied background potential.
The explicit forms for the collocation matrices and the discrete
double layer capacitance are given by
\begin{widetext}
\begin{eqnarray}
\mat{P} = \left[ \begin{matrix}
P_0(\cos \theta_0) & P_1(\cos \theta_0) & \ldots & P_N(\cos \theta_0) \\
P_0(\cos \theta_1) & P_1(\cos \theta_1) & \ldots & P_N(\cos \theta_1) \\
\vdots & \vdots & \ddots & \vdots \\
P_0(\cos \theta_N) & P_1(\cos \theta_N) & \ldots & P_N(\cos \theta_N)
\end{matrix} \right] \ \ &,& \ \
\mat{Q} = \left[ \begin{matrix}
P_0(\cos \theta_0) & 2 P_1(\cos \theta_0) & \ldots
& (N+1) P_N(\cos \theta_0) \\
P_0(\cos \theta_1) & 2 P_1(\cos \theta_1) & \ldots
& (N+1) P_N(\cos \theta_1) \\
\vdots & \vdots & \ddots & \vdots \\
P_0(\cos \theta_N) & 2 P_1(\cos \theta_N) & \ldots
& (N+1) P_N(\cos \theta_N)
\end{matrix} \right] \nonumber
\\[6pt]
\mat{C} &=& \ensuremath{\mathrm{diag}} \left( C(\Psi(\theta_0)), C(\Psi(\theta_1)),
\ldots, C(\Psi(\theta_N)) \right).
\end{eqnarray}
\end{widetext}
The system of ODEs for the expansion coefficients
is easily solved using MATLAB's built-in ODE solvers by
multiplying (\ref{eq:numerical_model_weakly_nonlinear_RC_time})
through by $(\mat{C}\mat{P})^{-1}$ and writing a simple function
to evaluate the resulting right-hand side function.
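A minimal Python sketch of this scheme is given below (hypothetical code, not the authors' MATLAB implementation; forward Euler replaces the built-in ODE solvers, and the sign of the right-hand side is fixed by requiring that the constant-capacitance limit reproduce (\ref{eq:harmonic_coef_eqns})). With $v = 0$, the coefficients relax to the steady state $A_1 = -E_o/2$, $A_{l \neq 1} = 0$:

```python
import numpy as np

N, E0, delta, v = 8, 5.0, 0.1, 0.0        # truncation order and parameters

def legendre_table(x, n):
    # P[i, l] = P_l(x_i), built from the three-term recurrence
    P = np.zeros((len(x), n + 1))
    P[:, 0] = 1.0
    P[:, 1] = x
    for l in range(1, n):
        P[:, l + 1] = ((2 * l + 1) * x * P[:, l] - l * P[:, l - 1]) / (l + 1)
    return P

theta = np.arange(N + 1) * np.pi / N      # collocation points theta_i = i pi / N
x = np.cos(theta)
P = legendre_table(x, N)                  # phi(r=1) = -E0 x + P @ A
Q = P * (np.arange(N + 1) + 1.0)          # Q[i, l] = (l + 1) P_l(x_i)

def zeta_of_Psi(Psi):
    # invert the Stern condition zeta + 2 delta sinh(zeta/2) = Psi by Newton
    z = np.array(Psi, dtype=float)
    for _ in range(20):
        f = z + 2.0 * delta * np.sinh(z / 2.0) - Psi
        z = z - f / (1.0 + delta * np.cosh(z / 2.0))
    return z

def capacitance(Psi):
    # C(Psi) = 1 / (sech(zeta/2) + delta)
    z = zeta_of_Psi(Psi)
    return 1.0 / (1.0 / np.cosh(z / 2.0) + delta)

def dA_dt(A):
    # sign chosen so the linearized limit matches the harmonic-coefficient ODEs
    Psi = v - (-E0 * x + P @ A)
    C = np.diag(capacitance(Psi))
    return np.linalg.solve(C @ P, -(E0 * P[:, 1] + Q @ A))  # P[:,1] ~ cos(theta)

A = np.zeros(N + 1)
A[1] = E0                                 # initial equipotential (conducting) sphere
dt = 5e-3
for _ in range(int(20.0 / dt)):           # forward Euler to near-steady state
    A = A + dt * dA_dt(A)
```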
\subsubsection{Dipole-Dominated Charging}
From the numerical solution of the charging equation
(\ref{eq:effective_charging_eqn_RC_time}), we see that when
the sphere is electrically grounded, charging is dominated by the dipolar
contribution to the response
(see Figure~\ref{figure:weakly_nonlinear_dipolar_charging}).
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{figs/weakly_nonlinear_v0_E5_d0_1}
\begin{minipage}[h]{3in}
\caption[Dipolar double layer charging in the weakly nonlinear regime]{
\label{figure:weakly_nonlinear_dipolar_charging}
Time evolution of the dominant coefficients in the Legendre polynomial
expansion of the bulk electric potential in the weakly nonlinear regime
at the RC time. In this figure $v=0$, $E = 5$ and $\delta = 0.1$.
Notice that the dipolar term dominates the solution.
}
\end{minipage}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{figs/weakly_nonlinear_v3_E5_d0_1}
\begin{minipage}[h]{3in}
\caption[Time evolution of dominant expansion coefficients for $\phi$ in
the weakly nonlinear regime at the RC time when sphere has a nonzero
applied voltage]{
\label{figure:weakly_nonlinear_dipolar_charging_v3}
Time evolution of the dominant coefficients in the Legendre polynomial
expansion of the bulk electric potential in the weakly nonlinear regime
at the RC time when the sphere has a nonzero applied voltage.
In this figure $v=3$, $E = 5$ and $\delta = 0.1$.
Note that both symmetric and antisymmetric terms make non-negligible
contributions.
}
\end{minipage}
\end{center}
\end{figure}
While the nonlinear capacitance does in fact allow higher harmonics to
contribute to the response, the higher harmonics only play a small role
even at larger applied fields. As expected, when the sphere is kept at zero
voltage, the antisymmetry between the upper and lower hemisphere is not
broken and only odd terms contribute to the series solution
(\ref{eq:phi_bulk_RC_time}). However, as shown in
Figure~\ref{figure:weakly_nonlinear_dipolar_charging_v3},
if a nonzero potential is applied to the sphere, all harmonics contribute
to the solution. In this case, the dominant contributions come from the
monopolar and dipolar terms.
The dipolar nature of double layer charging forms the foundation of
much of the work on the charging of colloid particles over the past half
century. For instance, the non-equilibrium double layer is often
characterized in terms of the induced dipole moment~\cite{dukhin1993}.
As shown in (\ref{eq:phi_bulk_low_field_RC_time}) --
(\ref{eq:q_low_field_RC_time}), the monopole and dipole contributions are
the \emph{only} contributions in a linearized theory.
Our numerical investigations demonstrate that, even for the nonlinear
theory, the monopole and dipole response dominates the total response.
Our results provide additional theoretical support for the focus
on the dipole response when studying colloid particles in applied fields.
\subsubsection{Extended Double Layer Charging}
The slowing of double layer charging is one important consequence of
nonlinearity in the double layer capacitance (see
Figure~\ref{figure:weakly_nonlinear_retarded_double_layer_charging}).
\begin{figure}[htb]
\begin{center}
\includegraphics[width=1.6in,height=1.4in]{figs/weakly_nonlinear_v0_E5_d1}
\includegraphics[width=1.6in,height=1.4in]{figs/weakly_nonlinear_v0_E5_d0_01}
\begin{minipage}[h]{3in}
\caption[Double layer charging in the weakly nonlinear regime at the
RC time for varying $\delta$ values]{
\label{figure:weakly_nonlinear_retarded_double_layer_charging}
Time evolution of the dominant coefficients in the Legendre polynomial
expansion of the bulk electric potential in the weakly nonlinear regime
at the RC time for low (left) and high (right) Stern capacitance
values. In these figures $v=0$ and $E = 5$.
Note that double layer charging is retarded when $\delta$ is
small but that the slowdown in double layer charging is suppressed
for larger $\delta$ values.
}
\end{minipage}
\end{center}
\end{figure}
However, as shown in
Figure~\ref{figure:weakly_nonlinear_retarded_double_layer_charging},
extended charging occurs only for $\delta \ll 1$.
Mathematically, we see slowed charging only at
small values of $\delta$ because the double layer capacitance
(\ref{eq:double_layer_capacitance}) can grow only as large as
$1/\delta$, a value it approaches at large zeta-potentials.
For sufficiently small $\delta$, charging
is slowed at higher applied fields because the $\ensuremath{\mathrm{sech}} (\zeta / 2)$
term in the denominator of the double layer capacitance
(\ref{eq:double_layer_capacitance}) becomes negligible when
$\zeta \gg -2 \ln (\delta / 2)$.
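A quick numerical illustration of this crossover (illustrative only): at $\zeta^* = -2\ln(\delta/2)$ the two terms in the denominator of (\ref{eq:double_layer_capacitance}) are comparable, and a few units beyond $\zeta^*$ the capacitance is within a few percent of its ceiling $1/\delta$.

```python
import math

delta = 0.05

def C(zeta):
    # double layer capacitance: C = 1 / (sech(zeta/2) + delta)
    return 1.0 / (1.0 / math.cosh(zeta / 2.0) + delta)

zeta_star = -2.0 * math.log(delta / 2.0)   # crossover zeta from the text
# at the crossover, sech(zeta/2) is comparable to delta
ratio_at_star = (1.0 / math.cosh(zeta_star / 2.0)) / delta
# well beyond the crossover, C approaches its ceiling 1/delta
C_far = C(zeta_star + 8.0)
```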
\subsection{Dynamics at the Diffusion Timescale}
In this section, we examine the response of the system at the diffusion
time scale. We find that the only dynamic process is diffusion of neutral
salt within the bulk in response to surface adsorption that occurred at the
RC time scale.
Since the total amount of neutral salt absorbed by the diffuse charge layers
during the charging phase is an $O(\eps)$ quantity, the bulk concentration
only needs to decrease by $O(\eps)$ to compensate.
Thus, we find that dynamics are not present at leading order;
rather, they appear only in higher-order corrections.
Also, at the diffusion time, surface transport, which is
negligible at the RC time scale, becomes important.
\subsubsection{Leading Order Bulk and Double Layer Solutions}
Substituting an asymptotic series into
(\ref{eq:c_eqn_bulk_diffusion_time}) --
(\ref{eq:c_rho_poisson_eqn_bulk_diffusion_time}),
we obtain the leading order bulk equations:
\begin{eqnarray}
\frac{\partial c_0}{\partial t} &=& \ensuremath{\nabla^2} c_0
\label{eq:c0_eqn_bulk_diffusion_time} \\
\ensuremath{\nabla^2} \phi_0 &=& 0.
\label{eq:phi0_eqn_bulk_diffusion_time}
\end{eqnarray}
Applying the initial conditions (obtained by matching to
the solution at the RC time) and the leading order boundary conditions
(derived from the effective flux boundary conditions
(\ref{eq:q_evolution_eqn_GCS}) and (\ref{eq:w_evolution_eqn_GCS}))
yields the simple leading order solutions for a sphere
\begin{eqnarray}
c_0(\vec{x},t) &\equiv& 1 \\
\phi_0(\vec{x},t) &=& -E_o r \cos \theta
\left(1 + \frac{1}{2 r^3} \right)
\end{eqnarray}
and a cylinder
\begin{eqnarray}
c_0(\vec{x},t) &\equiv& 1 \\
\phi_0(\vec{x},t) &=& -E_o r \cos \theta
\left(1 + \frac{1}{r^2} \right).
\end{eqnarray}
Notice that there is no time dependence for either the concentration
or the electric potential. It is worth mentioning that for a general
geometry, the initial condition is a uniform concentration profile
with the electric potential of an insulator in an applied field and the
boundary conditions (which are consistent with the initial conditions) are
\begin{equation}
c_0 \frac{\partial \phi_0}{\partial n} = 0 \ \ \mathrm{and} \ \
\frac{\partial c_0}{\partial n} = 0.
\end{equation}
At the diffusion time scale, the double layer continues to remain in
quasi-equilibrium, so its leading order structure is given by
(\ref{eq:double_layer_structure}) with the bulk concentration set equal
to $1$.
However, unlike the double layer at the RC time scale, the leading
order zeta-potential is \emph{not} evolving, so the leading
order double layer structure is static in time.
\subsubsection{Higher-Order Bulk Diffusion}
In order to see dynamics, we need to consider the first-order correction to
(\ref{eq:c0_eqn_bulk_diffusion_time}) and
(\ref{eq:phi0_eqn_bulk_diffusion_time}):
\begin{eqnarray}
\frac{\partial c_1}{\partial t} &=& \ensuremath{\nabla^2} c_1
\label{eq:c1_eqn_bulk_diffusion_time} \\
\ensuremath{\nabla^2} \phi_1 &=& - \nabla c_1 \cdot \nabla \phi_0
\label{eq:phi1_eqn_bulk_diffusion_time}.
\end{eqnarray}
The boundary conditions for these equations are a bit more complicated.
Using (\ref{eq:q_evolution_eqn_GCS}) -- (\ref{eq:w_evolution_eqn_GCS})
and taking into account the leading order solutions, we find that the
boundary conditions for the $O(\eps)$ equations are
\begin{eqnarray}
q_0~\delta^+(t) &=& \nabla_s \cdot \left( w_0 \nabla_s \phi_0 \right)
- \frac{\partial \phi_1}{\partial n}
\label{eq:Oeps_q_evolution_eqn_GCS} \\
w_0~\delta^+(t) &=& \nabla_s \cdot
\left( q_0 \nabla_s \phi_0 \right)
- \frac{\partial c_1}{\partial n}
\label{eq:Oeps_w_evolution_eqn_GCS}
\end{eqnarray}
where $q_0$ and $w_0$ are the leading order equilibrium surface
charge density and surface excess neutral salt concentration.
As mentioned earlier, at the diffusion time scale, these quantities
are static in time.
Also, note the presence of the delta-functions in time, which account for
the ``instantaneous'' adsorption of charge and neutral salt from the bulk
during the charging phase at the RC time~\cite{bazant2004}.
Mathematically, the appearance of the delta-functions is a consequence
of the connection between the time derivative of double layer quantities at
the two time scales in the asymptotic limit $\eps \rightarrow 0$.
To illustrate this connection, consider the time derivative of the
surface charge density, $q$.
Let $t$ and $\tRC$ be scaled to the diffusion and RC times, respectively, so
that $t = \eps \tRC$.
At these two time scales, the surface charge density can be written as
$q(t)$ and $\tilde{q} \left( \tRC \right)$ which are simply related by
$q(t) = \tilde{q} \left( \tRC \right)$. The time derivatives, however,
are related by
\begin{equation}
\frac{\partial q}{\partial t}(t) =
\frac{1}{\eps}~\frac{\partial \tilde{q}}{\partial \tRC}
\left( t/\eps \right).
\label{eq:deriv_q_at_two_time_scales}
\end{equation}
Therefore, for nonzero $t$, $\frac{\partial q}{\partial t} = 0$ in the
asymptotic limit because
$\frac{\partial \tilde{q}}{\partial \tRC} \left( t/\eps \right)$
approaches zero faster than linearly as $\eps \rightarrow 0$
(as an example, see (\ref{eq:q_low_field_RC_time}) ).
In contrast, for $t=0$, $\frac{\partial q}{\partial t}$ is infinite
because $\frac{\partial \tilde{q}}{\partial \tRC}(0)$ has a fixed
nonzero value.
Next, consider the following integral with $t_2 > 0$:
\begin{equation}
\int_{t_1}^{t_2} \frac{\partial q}{\partial t} dt
= \int_{t_1/\eps}^{t_2/\eps} \frac{\partial \tilde{q}}{\partial \tRC} d\tRC
= \tilde{q} \left( t_2/\eps \right) - \tilde{q} \left(t_1/\eps \right).
\end{equation}
In the asymptotic limit, the integral approaches zero for
nonzero $t_1$ but approaches $\tilde{q}(\infty)$ when $t_1$ equals zero.
Putting the above properties together, we see that
$\frac{\partial q}{\partial t}(t)$ is indeed a one-sided delta-function
of strength $\tilde{q}(\infty) = q_0$.
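This limiting behavior is easy to see numerically. The sketch below (using the illustrative relaxation shape $\tilde{q}(\tRC) = -(1 - e^{-\tRC})$, with the prefactors of (\ref{eq:q_low_field_RC_time}) dropped) shows the integral of $\partial q/\partial t$ concentrating all of its weight at $t = 0$ as $\eps \rightarrow 0$:

```python
import math

def q_tilde(tau):
    # illustrative surface-charge relaxation at the RC time scale
    return -(1.0 - math.exp(-tau))

def dq_integral(t1, t2, eps):
    # integral of dq/dt over [t1, t2] at the diffusion time scale,
    # using q(t) = q_tilde(t / eps)
    return q_tilde(t2 / eps) - q_tilde(t1 / eps)

eps = 1e-6
with_origin = dq_integral(0.0, 1.0, eps)     # interval containing t = 0
without_origin = dq_integral(0.5, 1.0, eps)  # interval excluding t = 0
```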
In contrast to one-dimensional systems, the boundary layers play
a more active role in the evolution of the bulk concentrations because
surface conduction continues to play a role well beyond the initial
injection of ions at $t = 0$. Note, however, that the surface
conduction terms in (\ref{eq:Oeps_q_evolution_eqn_GCS}) and
(\ref{eq:Oeps_w_evolution_eqn_GCS}) are static, so they essentially
impose fixed normal flux boundary conditions on the $O(\eps)$
bulk equations.
\subsubsection{Comparison with the Analysis of Dukhin and Shilov}
It is interesting to compare and contrast our weakly nonlinear analysis with
the work of Dukhin and Shilov \cite{dukhin1969, shilov1970} on the
polarization of the double layer for highly charged, spherical particles
in weak applied fields.
In both cases, bulk concentration variations and diffusion processes appear
as a higher-order correction to a uniform background concentration
and are driven by surface conduction. However, the significance of
the surface conduction term arises for different reasons.
As mentioned earlier, the small parameters that control the
size of the correction are different in the two analyses.
In Dukhin and Shilov's analysis, the small parameters are $\eps$ \emph{and}
$E_o$. Because they essentially use asymptotic series in $E_o$ as the
basis for their analysis, they require that the double layer be highly
charged in order for surface conduction to be of the same order of magnitude
as the $O(E_o)$ normal flux of ions from the bulk.
In other words, because the size of the surface conduction is proportional
to $E_o$, in order for surface conduction to be of the same order of
magnitude as the normal flux, the surface charge density \emph{must} be
an $O(1)$ quantity: $\eps q = 2 \eps \sinh(\zeta_0/2) = O(1)$.
In contrast, we use asymptotic series in $\eps$ in our analysis and do not
restrict $E_o$ to be small, so the surface conduction and normal flux of
ions from the bulk have the same order of magnitude for small $O(\eps)$
surface charge densities regardless of the strength of the applied electric
field (as long as it is not so large that the asymptotic analysis breaks
down). Thus, our weakly nonlinear analysis complements the work of
Dukhin and Shilov by extending their analysis to stronger applied electric
fields and the case where the surface charge density is induced by the
applied field rather than fixed by surface chemistry of the colloid
particle.
\section{Strongly Nonlinear Relaxation
\label{sec:strongly_nonlinear_dynamics}}
\subsection{ Definition of ``Strongly Nonlinear'' }
The weakly nonlinear analysis of the thin double-layer limit in the
previous section assumes that the dimensionless surface charge density
$\alpha$ and surface salt density $\beta$ remain uniformly small. For
our PNP model, we can estimate when this assumption becomes
significantly violated using Eq.~(\ref{eq:alpha}) with $\zeta =
E_0a/(1+\delta)$ and $C=C_0$, which occurs at fields above a critical
value at least a few times the thermal voltage,
\begin{equation}
E_0 > \frac{2kT}{z_+ea}\left(1 + \frac{\lambda_S}{\lambda_D}\right)
\log\left(\frac{a}{\lambda}\right). \label{eq:strong1}
\end{equation}
As discussed in section~\ref{sec:surfproc}, this condition also
implies that surface
adsorption of ions from the bulk is large enough to trigger
significant bulk diffusion and surface transport through the double
layer in steady state. However, as noted by Bazant
\emph{et al. }~\cite{bazant2004}, weakly nonlinear asymptotics
breaks down during relaxation {\it dynamics} at somewhat smaller
voltages, $\alpha/\sqrt{\pi\eps} > 1$, or (with units restored)
\begin{equation}
E_0 > \frac{kT}{z_+ea}\left(1 +
\frac{\lambda_S}{\lambda_D}\right)\log\left(\frac{\pi
a}{\lambda}\right),
\label{eq:strong2}
\end{equation}
since large surface adsorption can occur only {\it temporarily} at
certain positions. (The factor $\pi$ in this formula comes from a
one-dimensional analysis of bulk diffusive relaxation, which does not
strictly apply here, but the rough scale should be correct.)
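To attach rough numbers to these thresholds (an illustrative estimate with assumed parameter values: thermal voltage $kT/e \approx 26$ mV at room temperature, a $10\ \mu\mathrm{m}$ particle radius, a 10 nm Debye length with $\lambda \approx \lambda_D$, $\lambda_S/\lambda_D = 0.1$, and $z_+ = 1$; none of these values come from the text):

```python
import math

kT_over_e = 0.0257    # thermal voltage at 25 C [V]
a = 10e-6             # particle radius [m]        (assumed)
lam = 10e-9           # Debye length lambda [m]    (assumed)
stern = 0.1           # lambda_S / lambda_D        (assumed)
z = 1                 # cation valence

# critical fields from (eq:strong1) and (eq:strong2), in V/m
E_strong1 = (2.0 * kT_over_e / (z * a)) * (1.0 + stern) * math.log(a / lam)
E_strong2 = (kT_over_e / (z * a)) * (1.0 + stern) * math.log(math.pi * a / lam)
```

At these values the threshold voltage across the particle, $E_0 a$, is a few tenths of a volt, i.e.\ roughly $9$--$15\,kT/e$, consistent with the statement that the critical field corresponds to at least a few thermal voltages.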
Beyond the weakly nonlinear regime, there are two main effects that
occur: (1) transient, local depletion of the leading order bulk
concentration and (2) surface conduction at the leading order. As in
the case of a steady applied field, perhaps the greatest impact of
$\alpha = O(1)$ is that we must pay attention to factors of the form
$\eps e^{\zeta}$ or $\eps \sinh(\zeta)$, in addition to factors of
$\eps$, when ordering terms in asymptotic expansions.
In the thin double-layer limit, the boundary layers are still in
quasi-equilibrium, as long as the bulk concentration does not approach
zero, which would typically require Faradaic reactions consuming ions
at the conducting surface~\cite{bazant2005,chu2005}. Since we only
consider ideally polarizable conductors, bulk depletion is driven
solely by adsorption of ions in the diffuse layer, which is unlikely
to exceed diffusion limitation and produce non-equilibrium space
charge, although this possibility has been noted~\cite{bazant2004}.
Therefore, we would like to proceed as in the previous sections and treat the
bulk as locally electroneutral with effective boundary conditions.
\subsection{Leading-Order Equations}
Unfortunately, the analysis of the leading-order equations derived in
this manner does not appear to be as straightforward as the analysis
in the weakly nonlinear limit. The main problem is that it seems
difficult to derive uniformly valid leading-order effective boundary
conditions along the entire surface of the sphere. For this reason,
we merely present the \emph{apparent} leading order equations for the
strongly nonlinear regime and leave a thorough analysis for future
work.
At the leading order in the bulk, we find the usual equations for a neutral
binary electrolyte:
\begin{eqnarray}
\frac{\partial c_0}{\partial t} &=& \ensuremath{\nabla^2} c_0
\label{eq:c_eqn_bulk_strongly_nonlinear} \\
0 &=& \nabla \cdot \left( c_0 \nabla \phi_0 \right)
\label{eq:rho_eqn_bulk_strongly_nonlinear}
\end{eqnarray}
with $\rho = O(\eps^2)$.
The structure of the boundary layers is described by GCS theory with
the concentration and charge density profiles given by
(\ref{eq:double_layer_structure}).
Effective boundary conditions for (\ref{eq:c_eqn_bulk_strongly_nonlinear}) --
(\ref{eq:rho_eqn_bulk_strongly_nonlinear}) are derived in the same manner
as for a steady applied field except that unsteady terms are retained.
Recalling that $q$ and $w$ grow exponentially with the zeta-potential,
we find that both the surface conduction terms and the time derivatives
of the total diffuse charge and excess concentration appear in the leading
order boundary conditions:
\begin{eqnarray}
\eps \frac{\partial \tilde{q}_0}{\partial t} &=&
\eps \nabla_s \cdot \left ( \tilde{w}_0 \nabla_s \phi_0 \right )
- c_0 \frac{\partial \phi_0}{\partial n}
\label{eq:O1_q_effective_bc_strongly_nonlinear} \\
\eps \frac{\partial \tilde{w}_0}{\partial t} &=&
\eps \nabla_s \cdot \left ( \tilde{q}_0 \nabla_s \phi_0 \right )
- \frac{\partial c_0}{\partial n}
\label{eq:O1_w_effective_bc_strongly_nonlinear}.
\end{eqnarray}
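The exponential growth of $\tilde{q}$ and $\tilde{w}$ with the zeta potential can be made concrete with the standard Gouy-Chapman expressions $\tilde{q}=-2\sqrt{c}\,\sinh(\zeta/2)$ and $\tilde{w}=4\sqrt{c}\,\sinh^{2}(\zeta/4)$; the dimensionless scaling assumed in this sketch may differ from the paper's conventions by constant factors:

```python
import math

def gcs_surface_densities(zeta, c=1.0):
    """Dimensionless total diffuse charge q and excess salt w for a
    Gouy-Chapman diffuse layer in quasi-equilibrium with bulk
    concentration c (standard GCS results; scaling assumed here)."""
    q = -2.0 * math.sqrt(c) * math.sinh(zeta / 2.0)
    w = 4.0 * math.sqrt(c) * math.sinh(zeta / 4.0) ** 2
    return q, w

# Both densities e-fold roughly once per two thermal voltages of zeta
# once zeta is large, which is why the eps*exp(zeta) factors matter.
for zeta in (1.0, 5.0, 10.0):
    q, w = gcs_surface_densities(zeta)
    print(f"zeta = {zeta:4.1f}:  q = {q:9.2f},  w = {w:9.2f}")
```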
\subsection{Challenges with Strongly Nonlinear Analysis}
It is important to realize that these equations are mathematically much
more complicated than the analogous equations in the weakly nonlinear regime.
First, the nonlinearity due to the electromigration term explicitly appears
in the bulk equations at leading order; the nonlinearity is \emph{not}
removed by the asymptotic analysis. Furthermore, the diffuse layer
concentrations depend on time explicitly through the bulk concentration
at the surface in addition to the zeta potential:
\begin{equation}
\tilde{c}_\pm(t) = c_\pm(t) e^{\mp \tilde{\psi}(t)}.
\end{equation}
Already, these features of the equations greatly increase the challenge of
analyzing the response of the system.
However, the greatest complication to the mathematical model in the
strongly nonlinear regime is that effective boundary conditions
(\ref{eq:O1_q_effective_bc_strongly_nonlinear}) and
(\ref{eq:O1_w_effective_bc_strongly_nonlinear}) are not uniformly
valid over the surface of the sphere (or cylinder). Near the poles,
the double layer charges quickly, so the time-dependent and surface
transport terms in the effective boundary conditions become $O(1)$ at
very short times. In contrast, the amount of surface charge in the
double layer near the equator is \emph{always} a small quantity.
Thus, it would seem that near the equator, the only significant terms
in the effective boundary conditions are the normal flux terms.
Together, these observations suggest that the appropriate set of
boundary conditions to impose on the governing equations
(\ref{eq:c_eqn_bulk_strongly_nonlinear}) and
(\ref{eq:rho_eqn_bulk_strongly_nonlinear}) depends on the position on
the surface of the sphere. Moreover, the position where the boundary
conditions switch from one set to the other depends on time as
double-layer charging progresses from the pole towards the equator.
\section{Conclusion \label{sec:conclusions}}
\subsection{ Predictions of the PNP Model }
In this paper, we have analyzed electrochemical relaxation around
ideally polarizable conducting spheres and cylinders in response to
suddenly applied background electric fields, using the standard
mathematical model of the Poisson-Nernst-Planck (PNP)
equations. Unlike most (if not all) prior theoretical studies, we have
focused on the nonlinear response to ``large'' electric fields, which
transfer more than a thermal voltage to the double layer after
charging. We have effectively extended the recent nonlinear analysis
of Bazant, Thornton and Ajdari~\cite{bazant2004} for the
one-dimensional charging of parallel-plate blocking electrodes, to
some new situations in higher dimensions, where the potential of the
conductor is not controlled. Instead, electrochemical relaxation in
our model problems is driven by a time-dependent background electric
field applied around the conductor, whose charges are completely free
to relax. We are not aware of any prior analysis of nonlinear
response in such problems (whether or not using the PNP equations),
and yet it is important in many applications, in microfluidics,
colloids, and electrochemical systems, where applied fields or
voltages are often well beyond the linear regime.
We have focused on the structure and dynamics of the double layer and
the development of bulk concentration gradients. A major goal has been
to move beyond the traditional circuit models commonly used to study
the linear response of electrochemical systems. Through a combination
of analytical and numerical results, we have shown that significantly
enhanced ion concentration within the double layer is a generic
feature of nonlinear electrochemical relaxation around a conductor.
By interacting with the bulk electric field, the enhanced
concentration within the double layer leads to large surface current
densities. In addition, because the double layer does not charge
uniformly over the surface, tangential concentration gradients within
the double layer itself lead to surface diffusion. Due to their
coupling to bulk transport processes via normal fluxes of ions into
the double layer, these surface transport processes drive the formation
of bulk concentration gradients.
We have also found that bulk concentration gradients are \emph{always}
present to some degree and play an important role in allowing the
system to relax to the steady-state. For weak applied fields, they
are often neglected because they only appear as a first-order
correction to a uniform background concentration profile. However,
for strong applied fields, the variations in the bulk concentration
become as large as the background concentration, so they cannot be
ignored. These bulk concentration gradients lead to bulk diffusion
currents which result in net circulation of neutral salt in the region
near the metal colloid particle.
Another key contribution of this work is a careful mathematical
derivation of general effective boundary conditions for the bulk
transport equations in the thin double layer limit by applying matched
asymptotic expansions to the PNP equations. We derive a set of
boundary equations that relate the time evolution of excess surface
(double-layer) concentrations to surface transport processes and
normal flux from the bulk. An interesting feature of the effective
boundary conditions is that they explicitly involve the small
parameter $\eps$ and may have different leading order forms depending
on the characteristic time scale and the magnitude of double layer ion
concentrations.
It is beyond the scope of this article, but worth pursuing, to study
``strongly nonlinear'' dynamics in detail, where (according to the PNP
equations) the double layer adsorbs enough ions to significantly
perturb the bulk concentration and to couple surface transport to bulk
transport of ions. The challenging issues discussed at the end of
Section~\ref{sec:strongly_nonlinear_dynamics} need to be addressed in
order to gain a deeper understanding of the rich behavior of metal
colloid systems in this practically relevant regime. Also, it would be
beneficial to validate solutions of the effective bulk and boundary
conditions in the thin double layer limit against solutions for the
full PNP equations. The utility of the thin double layer
approximation cannot be fully appreciated until this comparison is
completed.
\subsection{ The Need for Better Continuum Models }
Unfortunately (or fortunately, depending on one's perspective), the
theoretical challenge of strongly nonlinear electrochemical relaxation
is much deeper than just solving the PNP equations in a difficult
regime: It is clear that the dynamical equations themselves must be
modified to better describe the condensation of ions in highly charged
double layers. As emphasized by Newman~\cite{newman_book}, the PNP
equations are only justified for dilute solutions, since each ion
moves in response to an independent stochastic force with constant
diffusivity and mobility, interacting with other ions only through the
mean long-range Coulomb force. Important effects in concentrated
solutions, such as short-range forces, many-body correlations, steric
constraints, solvent chemistry, and nonlinear mobility, diffusivity,
or permittivity, are all neglected.
It is tempting to trust the PNP equations in the case of a dilute {\it
bulk} electrolyte around an initially uncharged surface. However, as
we have shown, the model predicts its own demise when a large electric
field is applied, due to enormous increases in {\it surface}
concentration in the diffuse part of the double layer, even if the
bulk concentration is small. Note that the condition for strongly
nonlinear relaxation (\ref{eq:strong1}) or (\ref{eq:strong2}) is
similar to the condition for the breakdown of the PNP equations --
both require applied voltages across the double layer only several
times the thermal voltage. In particular, the Gouy-Chapman solution to
the PNP equations predicts an absurd concentration of counterions of
one per $\mbox{\AA}^3$ at the surface for a surprisingly small zeta potential
around $5kT/z_+e \approx 0.2$ V even in a fairly dilute 0.01 M
electrolyte. This critical voltage is routinely applied to the
double layer in electrochemical systems.
Of course, electrochemists are well aware of this problem; in fact,
Stern originally proposed the compact layer of adsorbed ions, outside
the continuum region, as a way to cut off the unbounded double-layer
capacitance~\cite{stern1924}. Since then, various empirical models of
the compact layer have appeared~\cite{delahay_book,bockris_book}, but
steric effects (or other nonlinearities) in concentrated solutions
have received much less attention. Borukhov, Andelman and Orland
recently postulated a continuum free energy for ions, taking into
account steric repulsion with the usual mean-field electrostatics and
minimized it to derive a modified Poisson-Boltzmann equation for
potential in the double layer~\cite{borukhov1997}. Their model
predicts a steady, equilibrium profile of ions with a condensed layer
at the steric limit for large zeta potentials. By extending this
approach to obtain the chemical potential, Kilic, Bazant and
Ajdari~\cite{kilic2006} have derived modified-PNP equations and
studied steric effects on double layer charging, but further model
development is still needed, not only for steric effects, but also for
field-dependent permittivity and/or diffusivity. The validity of a
continuum model at the scale of several atoms in the most condensed
part of the double layer must also be viewed with some skepticism.
Nevertheless, in spite of these concerns, it is a natural first step to
study nonlinear electrochemical relaxation using the standard PNP
equations as we have done. The details of our results will surely
change with modified transport equations and/or surface boundary
conditions, but we expect some key features to be robust. For
example, steric effects could decrease the capacitance of the double
layer at large zeta potentials, but surface conduction and adsorption,
coupled to bulk diffusion, should still occur, albeit perhaps with
smaller magnitude for a given applied field. Also, the mathematical
aspects of our boundary-layer analysis, such as the derivation of the
surface conservation law (\ref{eq:mubc}), could be applied to any
continuum transport equations.
\acknowledgments
This work was supported in part by the Department of Energy Computational
Science Graduate Fellowship Program of the Office of Science and
National Nuclear Security Administration in the Department of Energy
under contract DE-FG02-97ER25308 (KTC) and by the MRSEC Program of the
National Science Foundation under award number DMR 02-13282 (MZB and KTC).
The authors thank A.~Ajdari, J.~Choi and L.~H.~Oleson for many
helpful discussions.
We are particularly grateful to L.~H.~Olesen for pointing out an important
term that we had missed in deriving Eq.~(\ref{eq:effective_flux_bc_GCS}).
cond-mat/0603686
\section{Introduction}
Magnetic traps play an important role in the study of atomic
Bose-Einstein condensates (BEC) \cite{magnetictrap}. In a typical
static magnetic trap, the individual atomic spin couples to the
spatially dependent magnetic (B-) field through the Zeeman effect.
When an atom moves in a region where the direction of the B-field
changes slowly and the strength of the B-field is sufficiently
strong, then, according to the Born-Oppenheimer approximation
\cite{bo1,bo2}, the atomic spin can follow the B-field
adiabatically and remain in the same trapped eigen-state of the
interaction Hamiltonian relative to the
instantaneous direction of the magnetic field ${\vec B}({\vec r})$, where ${%
\vec r}$ is the center of mass position of the atom (or more
precisely, that of the valence electron). In this case the atomic
center of mass experiences an effective spatially varying
potential that is equal to the Zeeman energy and is proportional to
the strength of the B-field.
For weak B-fields, when the atomic Zeeman energy is comparable to
or less than the frequency of the directional variation of the
B-field felt by the moving atom, the spin can no longer follow the
field adiabatically. As a result, nonadiabatic (Majorana) transitions
\cite{Majorona} of the atomic spin may occur. Two potentially
damaging effects can cause nonadiabatic transitions. The first
happens when an atom enters a weak B-field region due to its
translational motion in space. For instance, in a quadrupole trap,
atoms in the weak field seeking state are accelerated towards the
center of the trap where the B-field vanishes. Nonadiabatic
transitions always occur in the vicinity of a zero B-field. To
avoid this region of vanishing B-field, or a spatial ``hole,'' a
number of methods have been developed to effectively plug it,
e.g., with the use of a far-off-resonant optical potential as an
``optical plug" \cite{opticalplug} or the more famous time
averaged orbiting potential (TOP) trap \cite{top}. The second
reason for nonadiabatic transitions is due to the explicit time
dependence of the B-field. Obviously nonadiabatic transitions may
occur if the B-field changes rapidly with time.
Recently, the atomic quantum gas group at Peking University (PKU)
reported interesting observations of multi-component $^{87}$Rb
($F=2$) spinor condensates produced by switching off the B-fields of
an initially spin-polarized single-component condensate in a QUIC
Tech also discovered counter-intuitive meta-stability when
condensates were loaded into an ``unplugged'' magnetic quadrupole
trap \cite{raman}. We decided to present our theoretical studies
in the hope that the theoretical framework for spinor nonadiabatic
level crossing dynamics may be of interest to other groups in the
field of atomic quantum gases. In this paper, we will focus on the
Peking University experiment in a time-dependent QUIC trap
\cite{hansch}. The more involved situation of a condensate in a
quadrupole trap will be discussed elsewhere \cite{peng}. According
to the reported experiment \cite{chenshuai}, the relevant
time dependence of the B-field is relatively simple. After a
single component condensate was created in a QUIC trap, the
various B-field generating currents were switched off in
appropriately chosen orders. Whenever a near-zero level crossing
occurs, multi-component spinor condensates are observed.
This paper summarizes our treatment of level crossing dynamics for
an atomic spin inside an external B-field. The theory is developed
with respect to ``the first scenario," where the vanishing B-field
is due to the different time constants of decay for the B-fields
from the QUIC coil and the bias coil after being shut off as
discussed in Sec. II. An alternative scenario where the B-field
zero is due to different time constants of the decaying B-fields
from the quadrupole coil and the Ioffe coil will be discussed in
Sec. III. Finally we conclude and provide a brief summary in Sec.
IV.
\begin{figure}[h]
\includegraphics[width=5.2cm,height=4.5cm]{fig1.eps} \vspace{0.5cm}
\caption{ The QUIC trap geometry (excluding the bias coils). The arrows
indicate directions of currents in the coils. }
\label{fig1}
\end{figure}
\section{The First Scenario}
The magnetic trap used in the experiment \cite{chenshuai} is
made up of two separate coils, a QUIC coil and a bias coil. The
QUIC coil consists of a quadrupole trap coil and an Ioffe coil in
series, as in the original QUIC trap \cite{hansch}. The
compensating coils for the earth's B-field are separate and always
left on, and thus will not be included explicitly in our model. Before
switching off, the magnetic B-field ${\vec{B}}^{\rm Q}({\vec{r}})$
created by the QUIC coil has the familiar configuration of a
Ioffe-Pritchard trap and can be expressed as
\cite{chenshuai,hansch}
\begin{eqnarray}
{\vec{B}}^{\rm Q}({\vec{r}})=B^{\rm Q}_{\perp }({%
\vec{r}})\mathbf{e}_{\mathbf{\perp }}+B^{\rm Q}_{z}({%
\vec{r}})\mathbf{e}_{\mathbf{z}},
\end{eqnarray}
where the radial and axial QUIC B-field components are $B^{\rm Q}_{\perp }({%
\vec{r}})$ and $B^{\rm Q}_{z }({%
\vec{r}})$, respectively;
\begin{eqnarray}
B^{\rm Q}_{z}({\vec{r}})
&=&B^{\rm Q}_{z}(0)+B^{\rm Q\prime\prime}_{z }z^{2},\nonumber\\
B^{\rm Q}_{\perp }({\vec{r}}) &=&B^{\mathrm{Q\prime}%
}_{ \perp }\sqrt{x^{2}+y^{2}}, \label{ioffe}
\end{eqnarray}%
with the unit vector $\mathbf{e}_{\mathbf{\perp }}$ defined as
\begin{eqnarray}
\mathbf{e}_{\mathbf{\perp }}=\frac{(-x\mathbf{e}_{x}+y\mathbf{e}_{y})}{%
\sqrt{x^{2}+y^{2}}}.
\end{eqnarray}
$B^{\mathrm{Q\prime}}_{\perp}$ and $B^{\rm Q\prime\prime}_{z}$
denote the respective spatial derivatives for the B-fields. The
right handed coordinate system as in Figure \ref{fig1} is chosen
such that $B^{\rm Q}_{z}(0)>0$ and $B^{\mathrm{Q\prime}}_{ \perp
}>0$.
The bias field ${\vec{B}}^{\rm A}$ is in the $z$ direction. It is
created by the bias coils and is approximately constant near the trap
center \cite{chenshuai}.
Before switching off, it can be expressed as ${\vec{B}}^{\rm A}({\vec{r}}%
)=-B^{\rm A}_{ z}\mathbf{e}_{\mathbf{z}}$ satisfying $B^{%
\rm Q}_{z }(0)>B^{\rm A}_{ z}>0$. If the bias field is switched
off first and the two components of the QUIC
field are simultaneously switched off after a time interval of $t_{\mathrm{%
int}}$, then at time $t$, the QUIC field becomes $e^{-t/\tau_{\rm Q}}{\vec{B}}^{%
\rm Q}({\vec{r}})$ and the bias field becomes
$e^{-(t+t_{\mathrm{int}})/\tau_{\rm A}}{%
\vec{B}}^{\rm A}({\vec r})$, i.e., both the QUIC field and the
bias field are assumed to decrease exponentially with decay time
constants $\tau_{\rm Q}$ and $\tau_{\rm A}$, and the quadrupole field and
the Ioffe field are assumed to decay with the same time constant.
Assuming $t=0$ as the instant for shutting off the QUIC field, the
total time-dependent B-field then becomes
\begin{eqnarray}
{\vec{B}}({\vec{r}},t) &=&e^{-t/\tau_{Q}}B^{\rm Q}_{\perp }({\vec{%
r}})\mathbf{e}_{\mathbf{\perp }} \nonumber \\
&&+e^{-t/\tau_{Q}}B^{\rm Q}_{z}({\vec{r}})\mathbf{e}_{\mathbf{z}%
}-e^{-(t+t_{\mathrm{int}})/\tau_{\rm A}}B^{\rm A}_{ z}\mathbf{e}_{\mathbf{z%
}}.
\end{eqnarray}%
$B^{\rm Q}_{\perp }({\vec{r}})$ is proportional to $%
(x^{2}+y^{2})^{1/2}$ near the trap center or the origin; therefore, we have $B^{%
\rm Q}_{z }({\vec{r}}),\ B^{\rm A}_{z}\gg B^{\rm Q}_{\perp }({\vec{r}})$ and $|B^{\rm Q%
}_{z}({\vec{r}})-B^{\rm A}_{z}|\gg B^{\rm Q}_{\perp }({%
\vec{r}})$. Before switching off the QUIC field, the $z$ component of ${\vec{%
B}}({\vec{r}},t)$ takes a positive value $B^{\rm Q%
}_{ z }({\vec{r}})-B^{\rm A}_{ z}$ much larger than the initial
value of the transverse field $B^{\rm Q}_{\perp}({\vec{r}})$. At
$t=0$, all atomic spins are initially polarized in the
eigen-state $|M_{F}=2\rangle $ of $F_{z}$, i.e., the $z$ component
of the atomic hyperfine spin $\vec{F}$. If the QUIC field and the
bias field are switched off simultaneously, i.e.,
$t_{\mathrm{int}}=0$ and $\tau_{Q}=\tau_{A}$, the direction of the
total B-field ${\vec{B}}({\vec{r}},t)$ does not change with time
although the strength of ${\vec{B}}({\vec{r}},t)$ decreases after
the switching off process. Nonadiabatic transitions do not occur
in this case and the initial single component condensate remains a
single component one. If the QUIC field and the bias field
decrease with different time constants $\tau_{\rm Q}\neq \tau_{%
\rm A}$, the direction of ${\vec{B}}({\vec{r}},t)$ changes with
time and nonadiabatic level crossing arises.
In the calculations to follow, we will make a simple approximation
that the atomic spatial position does not change during the
switching-off process. This allows for an easy calculation of
nonadiabatic transition probabilities between different atomic
spin states at a fixed spatial position ${\vec r}$. This is well
justified for the PKU experiment, where the level crossing occurs
over a time window of $\sim 10^{2}~\mu{\mathrm s}$, during which
a condensed atom moves a distance of less than $0.1~\mu{\mathrm m}$,
given a kinetic energy of $\sim 10^{4}~{\mathrm{Hz}}$.
As mentioned above, the $z$ component of the B-field initially
takes a large positive value. After the switching-off, the bias
field decreases much slower than the QUIC field \cite{chenshuai},
i.e., we have $\tau_{\rm A}\gg \tau_{\rm Q}$. At a certain instant
$t_{0}$, the value of $e^{-t_{0}/\tau_{\rm Q}}B^{\rm Q}_{z}$ equals
$e^{-(t_{0}+t_{\mathrm{int}})/\tau_{\rm A}}B^{\rm A}_{ z}$, so the
$z$ component of the total B-field vanishes. As a
result, transitions from the state $|M_{F}=2\rangle $ to
other eigen-states of $F_{z}$ occur because of the finite
transverse B-field $e^{-t/\tau_{Q}}B^{\rm Q}_{\perp }$ in the
vicinity of $t_{0}$. After $t_{0}$, the $z$ component of the
B-field becomes negative because
$e^{-(t+t_{\mathrm{int}})/\tau_{\rm A}}/e^{-t/\tau_{Q}}\gg 1$ for
a large enough $t$; the absolute value of the $z$ component of the
B-field can become again much larger than the transverse
components of ${\vec{B}}({\vec{r}},t)$ for $t\gg t_{0}$.
Therefore, the probabilities for an atom to be in the different
eigen-states of $F_{z}$ approach constant values in the long-time limit.
To compute the nonadiabatic level crossing rates, we note that
transitions mainly occur in the near zero B-field region, i.e.,
for weak B-field. Thus, we only need to consider the linear Zeeman
coupling of an atomic hyperfine spin. Our model Hamiltonian takes
the simple form
\begin{eqnarray}
H=g_{F}\mu_{{\mathrm B}}{\vec B}({\vec r},t)\cdot \vec F.
\label{Hamiltonian}
\end{eqnarray}
Here $g_{F}$ is the Land\'{e} $g$ factor and
$\mathrm{\mu }_{\mathrm{B}}$ is the Bohr magneton. For $^{87}$Rb
atoms under consideration here, the spinor degree of freedom
refers to the $F=2$ manifold with $g_F=1/2$. In their experiment \cite%
{chenshuai}, the initial condition corresponds to
\begin{eqnarray}
|\Psi (0)\rangle =|M_F=2\rangle.
\end{eqnarray}
At large $t\rightarrow+\infty$, the wave function can be expanded
as
\begin{eqnarray}
|\Psi(t\rightarrow+\infty)\rangle =\sum_{M_F=-2}^{2}C_{M_F}( {\vec
r}) e^{-i\phi_{M_F}({\vec r},t)}|M_F\rangle,
\end{eqnarray}
in the complete basis of $F_z$ along the initial quantization
axis. Our problem is to find the steady population distribution
$P_{M_F}( {\vec r}) =|C_{M_F}({\vec r})|^{2}$ in the long time limit.
We will make use of the method of Hioe \cite{hioe} to
calculate the final-state probability distribution due to
nonadiabatic level crossing of a high spin. Because of
the rotational symmetry of our model system (\ref{Hamiltonian}), it can be
mapped onto a spin $1/2$ spinor with the same type of coupling, described by
a Hamiltonian
\begin{eqnarray}
H_{\sigma}=g_F\mu_{\mathrm B}{\vec B}(\vec r,t)\cdot
{\frac{\vec\sigma}{2}}, \label{H12}
\end{eqnarray}
where $\vec\sigma$ is the familiar spin $1/2$ Pauli matrix vector. The
initial condition for the spin $1/2$ state is
\begin{eqnarray}
|\varphi (0)\rangle =[1,0]^{T},
\end{eqnarray}
and the final state can be denoted as
\begin{eqnarray}
|\varphi(t\rightarrow+\infty)\rangle =[\alpha ({\vec
r})e^{i\phi_{\alpha}({\vec r},t)},\beta ({\vec
r})e^{i\phi_{\beta}({\vec r},t)}]^{T}.
\end{eqnarray}
Upon solving this two state problem, $P_{M_F}({\vec r})$ can be
found easily according to the rotation group representation
elements as in Hioe \cite{hioe}. Apart from a globe phase factor,
the evolution operator corresponding to the Hamiltonian
(\ref{Hamiltonian}) can be expressed as
$D^{(2)}=\exp[-i\hat{n}\cdot\vec{F}\theta]$; while the one
corresponding to the Hamiltonian (\ref{H12}) is
$D^{(1/2)}=\exp[-i\hat{n}\cdot(\vec{\sigma}/2)\theta]$. The unit
vector $\hat{n}$ and the angle $\theta$ are determined by
$\vec{B}(\vec{r},t)$. Therefore, $D^{(1/2)}$ and $D^{(2)}$ are the
representation matrices ($D$ matrices) of the {\it same} rotation
operation. The transition probabilities $P_{M_F}$ and
$|\alpha(\vec{r})|^{2}$ can be rewritten as
$P_{M_F}=|D^{(2)}_{M_F,2}|^{2}$ and
$|\alpha(\vec{r})|^{2}=|D^{(1/2)}_{1/2,1/2}|^{2}$. According to
the representation theory of ${\rm SO}(3)$ group \cite{winger},
$|D^{(1/2)}_{1/2,1/2}|$ and $|D^{(2)}_{M_F,2}|$ are functions of
$\sin[\beta/2]$ and $\cos[\beta/2]$ with $\beta$ one of the three
Euler angles of the rotation. Although we do not know the values
of $\hat{n},\theta$, and $\beta$, we can express the transition
probability $P_{M_F}$ in terms of $A({\vec r})=|\alpha ({\vec
r})|^{2}$ as
\begin{eqnarray}
P_{2}({\vec r}) &=&A({\vec r})^{4}, \nonumber \\
P_{1}({\vec r}) &=&4A({\vec r})^{3}[1-A({\vec r})], \nonumber \\
P_{0}({\vec r}) &=&6A({\vec r})^{2}[1-A({\vec r})]^{2}, \label{probability}
\\
P_{-1}({\vec r}) &=&4A({\vec r})[1-A({\vec r})]^{3}, \nonumber \\
P_{-2}({\vec r}) &=&[1-A({\vec r})]^{4}. \nonumber
\end{eqnarray}
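The distribution (\ref{probability}) is simply binomial in $A$, since the five $F=2$ amplitudes are built from four identical spin-1/2 factors; a minimal numerical sketch (the function name is ours):

```python
from math import comb

def spin2_populations(A):
    """Populations P_{M_F} of Eq. (probability): binomial in the
    spin-1/2 survival probability A,
    P_{M_F} = C(4, 2-M_F) A^(2+M_F) (1-A)^(2-M_F)."""
    return {MF: comb(4, 2 - MF) * A ** (2 + MF) * (1 - A) ** (2 - MF)
            for MF in range(-2, 3)}

# Example: a half-and-half crossing, A = 1/2, spreads the five
# components in the ratio 1:4:6:4:1.
print(spin2_populations(0.5))
```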
The two-state problem can be solved to good accuracy with the
Landau-Zener formula \cite{lz}. To this end, we reexpress the
Hamiltonian (\ref{H12}) as
\begin{eqnarray}
H_{\sigma }[t]=q[t]h_{\sigma}[t],
\end{eqnarray}
with $q[t]=e^{-t/\tau_{\rm Q}}$ and the ``normalized'' Hamiltonian
\begin{eqnarray}
h_{\sigma}[t]=g_{\perp }\sigma _{\perp }+\left( g^{\rm Q}_{ z }-
e^{\xi t}g^{\rm A}_{ z }\right) \sigma _{z},
\end{eqnarray}
and the parameters
\begin{eqnarray}
g_{\perp } &=&\frac{1}{2}g_{F}\mu_{\mathrm B}B^{\rm Q}_{\perp }({%
\vec{r}}), \nonumber \\
g^{\rm Q}_{ z } &=&\frac{1}{2}g_{F}\mu_{\mathrm
B}B^{\rm Q}_{z}({\vec{r}}), \nonumber \\
g^{\rm A}_{ z } &=&\frac{1}{2}g_{F}\mu_{\mathrm B}e^{-t_{{\mathrm {int}}}/\tau_{{\mathrm A}}}B^{%
\rm A}_{z }({\vec{r}}), \nonumber \\
\xi &=&(\tau_{\rm A}-\tau_{\rm Q})/{\tau }{_{\rm A}}%
{\tau }{_{\rm Q}}. \label{g}
\end{eqnarray}%
We define a new time variable
\begin{eqnarray}
s =-\tau_{\rm Q}e^{-t/\tau_{\rm Q}};
\end{eqnarray}
then, the time-dependent Schr\"{o}dinger equation
\begin{eqnarray}
i\partial _{t}|\varphi \lbrack t]\rangle =H_{\sigma}[t]|\varphi
\lbrack t]\rangle ,
\end{eqnarray}
becomes
\begin{eqnarray}
i\partial _{s}|\varphi (s )\rangle =h(s )|\varphi (s )\rangle ,
\end{eqnarray}
with $|\varphi (s )\rangle =|\varphi \lbrack t(s )]\rangle $ and $%
h(s )=h[t(s )]$. Consequently, the time interval of the dynamics
$t\in \lbrack
0,\infty ]$ is mapped into $s \in \lbrack -\tau_{\rm Q%
},0].$
As stated above, the $z$ component of ${\vec{B}}({\vec{r}},t)$
takes large positive and negative values, respectively, at $t=0$
and $t=\infty $. Therefore, at $s =-\tau_{\rm Q}$ and $s =0$, the
condition
\begin{eqnarray}
\left\vert g^{\rm Q}_{ z }-e^{\xi t(s)}g^{\rm A%
}_{z }\right\vert \gg g_{\perp },
\end{eqnarray}
is satisfied, while $g^{\rm Q}_{ z }-e^{\xi t(s)}g^{\rm A%
}_{z }$ takes positive and negative values, respectively. In the
Landau-Zener approximation, a linear approximation is always
assumed for the different energy levels. We find that the value $s
_{0}$ of $s$ at the crossing point $t_{0}$ is given by
\begin{eqnarray}
s_{0}=-\tau_{\rm Q}q(t_{0})=-\tau_{\rm Q}\left( \frac{g^{%
\rm A}_{z}}{g^{\rm Q}_{ z }}\right) ^{%
\frac{1}{\xi \tau_{\rm Q}}}.
\end{eqnarray}
At $s =s_{0}$, where the longitudinal B-field vanishes,
\begin{eqnarray}
\left( g^{\rm Q}_{ z }-e^{\xi t(s_{0})}g^{%
\rm A}_{z }\right) =0,
\end{eqnarray}
a linear approximation to the energy levels simply leads to%
\begin{eqnarray}
\left( g^{\rm Q}_{ z }-e^{\xi t(s)}g^{\mathrm{%
A}}_{z }\right) \approx v(s-s _{0}),
\end{eqnarray}
with
\begin{eqnarray*}
v
&=&-\frac{g^{\rm A}_{ z }}{\tau_{\rm Q}}(\xi \tau_{\rm Q%
})\left( \frac{g^{\rm A}_{ z }}{g^{\rm Q}_{ z }}\right)
^{-1-\frac{1}{\xi \tau_{\rm Q}}}.
\end{eqnarray*}%
Using the Landau-Zener formula, we immediately find
\begin{eqnarray}
A({\vec{r}})=\exp \left( -\pi \frac{|g_{\perp }|^{2}}{|v|}\right)
.
\end{eqnarray}
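As a consistency check on this formula, the linearized two-level problem can be integrated numerically and compared with the Landau-Zener prediction; the coupling and sweep rate below are arbitrary illustrative values, not the experimental parameters:

```python
import math

# Verify A = exp(-pi g_perp^2 / |v|) for the linearized two-level
# Hamiltonian h(s) = g_perp*sigma_x + v*(s - s0)*sigma_z (s0 = 0 here),
# by integrating i d|phi>/ds = h(s)|phi> with RK4 (hbar = 1).
g_perp, v = 1.0, 2.0   # illustrative values only

def deriv(s, phi):
    a, b = phi
    # h(s) acting on (a, b): diagonal +/- v*s, off-diagonal g_perp
    return (-1j * (v * s * a + g_perp * b),
            -1j * (g_perp * a - v * s * b))

def rk4_survival(s_max=30.0, ds=1e-3):
    s, phi = -s_max, (1.0 + 0j, 0.0 + 0j)
    for _ in range(int(round(2 * s_max / ds))):
        k1 = deriv(s, phi)
        k2 = deriv(s + ds/2, tuple(p + ds/2*k for p, k in zip(phi, k1)))
        k3 = deriv(s + ds/2, tuple(p + ds/2*k for p, k in zip(phi, k2)))
        k4 = deriv(s + ds,   tuple(p + ds*k   for p, k in zip(phi, k3)))
        phi = tuple(p + ds/6*(a + 2*b + 2*c + d)
                    for p, a, b, c, d in zip(phi, k1, k2, k3, k4))
        s += ds
    return abs(phi[0]) ** 2  # probability of remaining in the initial diabatic state

A_numeric = rk4_survival()
A_lz = math.exp(-math.pi * g_perp ** 2 / abs(v))
print(f"numerical A = {A_numeric:.4f}, Landau-Zener A = {A_lz:.4f}")
```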
For $\tau_{\rm A}\gg \tau_{\rm Q}$, we find $\xi\tau_{\mathrm{Q%
}}\approx 1$, which then leads to
\begin{eqnarray}
A({\vec{r}},t_{\mathrm{int}}) &\simeq &\exp \left( -\pi
\frac{|g_{\perp
}|^{2}g^{\rm A}_{ z }\tau_{\rm Q}}{(g^{\rm Q}_{z})^{2}}%
\right) \nonumber\\
&=&\exp \left( -\pi g_{F}\mu_{\mathrm B}B^{\rm A}_{ z}({\vec{r}})\tau_{%
\rm Q}\frac{B^{\rm Q2}_{ \perp }\left( {\vec{r}}%
\right) }{2B^{\rm Q2}_{z}\left( {\vec{r}}\right) }e^{-%
\frac{t_{\mathrm{int}}}{\tau_{\rm A}}}\right). \hskip 24pt
\label{estimation1}
\end{eqnarray}%
Obviously for a large enough time interval $t_{\mathrm{int}}$ such that $%
e^{-t_{\mathrm{int}}/\tau_{\rm A}}\ll 1$, we have $A({\vec{r}},t_{%
\mathrm{int}})\approx 1$ and $P_{2}({\vec{r}})\approx 1$. A single component
condensate remains a single component one. In fact, if $e^{-t_{\mathrm{int}%
}/\tau_{\rm A}}\ll 1$, the bias field has already decreased to
zero when the QUIC field is switched off. Thus, during the
switching-off of the QUIC field, the direction of the B-field does
not change and nonadiabatic transitions cannot occur.
In the PKU experiment \cite{chenshuai}, the various trap
parameters take the
following values: $B^{\rm Q}_{z}(0)=9$ (Gauss), $B^{%
\rm A}_{z}=7.45$ (Gauss),
$B^{\rm Q\prime\prime}_{z}=4.9\times 10^{2}$ (Gauss-cm$^{-2}$), $B^{\mathrm{%
Q}\prime}_{\perp}=3.0\times 10^{2}$ (Gauss-cm$^{-1}$%
), $\tau_{\rm Q}=40$ (${\mu }$s), $\tau_{\rm A}=3$ (ms). Before
switching-off, the center of the QUIC trap is at
${\vec{r}}_{0}=(0,5\mathrm{\mu m},0)$. Substituting the above
coefficient $A({\vec{r}})$ of Eq. (\ref{estimation1}) into Eq.
(\ref{probability}), we arrive at a simple estimate for the
population distribution $P_{M_{F}}({\vec{r}}_{0})$. More precisely,
the number of atoms in each spin component can be estimated as
\begin{eqnarray}
N_{M_{F}}=\int P_{M_{F}}({\vec{r}})\rho ({\vec{r}})d{\vec{r}},
\end{eqnarray}
where $\rho ({\vec{r}})$ is the density profile of the trapped gas
cloud. Figure \ref{fig2} shows the typical dependence of this
result on the time interval $t_{\mathrm{int}}$. In the calculation
of Figure \ref{fig2}, we set $\rho ({\vec{r}})$ to be the atomic
density distribution given by the Thomas-Fermi approximation
corresponding to the initial values of the QUIC field and the bias
field; that is, the spatial motion of the atoms is neglected. This
approximation relies on the decay time ($3\,$ms) being shorter than
the period ($\sim$4.5--7.5$\,$ms) of the trap potential, and is
therefore somewhat crude. To obtain a more accurate estimate of the
atomic populations, variations of the atomic spatial distribution
during the decay of the bias field should be fully taken into
account.
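As a rough numerical check, the factor $A({\vec{r}}_{0},t_{\mathrm{int}})$ of Eq. (\ref{estimation1}) can be evaluated with the parameters quoted above; the following is a minimal sketch, in which the value $g_{F}\mu_{\mathrm B}/\hbar \approx 2\pi \times 0.70$ MHz/Gauss (i.e., $g_F=1/2$, as for $^{87}$Rb in $F=2$) is an assumption of the sketch rather than a number taken from the text.

```python
import numpy as np

# Rough evaluation of the Landau-Zener factor A(r0, t_int) of
# Eq. (estimation1).  The hyperfine g-factor below (g_F = 1/2,
# appropriate for 87Rb in F = 2) is an assumption of this sketch.
gF_muB = 0.5 * 2 * np.pi * 1.3996e6   # g_F * mu_B / hbar  [rad s^-1 Gauss^-1]

B_A_z    = 7.45            # bias field at the trap center (Gauss)
B_Q_z    = 9.0             # QUIC field z component at r0 (Gauss)
B_Q_perp = 3.0e2 * 5e-4    # B'_perp * y at r0 = (0, 5 um, 0)  (Gauss)
tau_Q    = 40e-6           # QUIC field decay time (s)
tau_A    = 3e-3            # bias field decay time (s)

def A(t_int):
    """Nonadiabatic transition factor of Eq. (estimation1)."""
    exponent = (np.pi * gF_muB * B_A_z * tau_Q
                * B_Q_perp**2 / (2.0 * B_Q_z**2)
                * np.exp(-t_int / tau_A))
    return np.exp(-exponent)

print(A(0.0), A(10 * tau_A))   # A grows toward 1 as t_int increases
```

Consistent with the discussion in the text, $A$ approaches unity once $e^{-t_{\mathrm{int}}/\tau_{\rm A}}\ll 1$.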
\begin{figure}[tbp]
\includegraphics[width=3.25in]{fig2.EPS}\vspace{0.3cm}
\caption{(Color online)
A typical dependence of the population distributions on the time $%
t_{\mathrm{int}}$. The unit of the atomic number is $10^{-4}$.}
\label{fig2}
\end{figure}
The above result is based on the approximation that atoms do not
move during the switching-off process. If atomic motion is
included, a more accurate population distribution can be calculated
by solving the multiple-component Gross-Pitaevskii equation
including the time-dependent B-field. A detailed comparison of
these two approaches is given in Ref. \cite{peng}. Overall, we
find the approximate Landau-Zener solution discussed here holds
well for the parameter regimes of the experiment \cite{chenshuai}.
\section{An alternative scenario}
The QUIC coil consists of a pair of quadrupole coils and an Ioffe
coil. The B-field ${\vec B}^{\mathrm Q}$ of the QUIC trap is the
sum of the B-fields ${\vec{B}}^{\mathrm{qd}}$ from the quadrupole
coils and ${\vec{B}}^{\mathrm{If}}$ from the Ioffe coil. In the
previous section, we simply assumed that the B-fields generated by
these two sets of coils decrease synchronously after the electric
currents are shut off. However, as was discovered in the experiment
\cite{chenshuai}, the magnetic fields ${\vec{B}}^{\mathrm{qd}}$
and ${\vec{B}}^{\mathrm{If}}$ do not always decay with the same
time constant, despite the fact that the two sets of coils forming
the QUIC trap are connected in series.
Assuming different time constants for the decay of ${\vec{B}}^{%
\mathrm{qd}}$ and ${\vec{B}}^{\mathrm{If}}$, the final population
distribution needs to be recalculated.
Before the QUIC field is switched off,
the components of the B-fields ${\vec{B}}^{\mathrm{qd}}$ and ${\vec{B}}^{%
\mathrm{If}}$ are functions of the atomic position, explicitly given by
\begin{equation}
\begin{array}{lll}
B^{\mathrm{qd}}_{z}({\vec r})&=&B^{\mathrm{qd}\prime}(z-z_{0}), \nonumber\\
B^{\mathrm{If}}_{z}({\vec r})&=&B^{\rm Q\prime\prime}_{z }z^{2}
-B^{\mathrm{qd}\prime}(z-z_{0})+B^{\rm Q}_{z}(0), \\
B^{\mathrm{qd}}_{x}({\vec r})&=&-2B^{\mathrm{qd}\prime}x, \\
B^{\mathrm{If}}_{x}({\vec r})&=&(-B^{\mathrm{Q\prime}}_{\perp
}+2B^{\mathrm{qd}\prime%
})x, \\
B^{\mathrm{qd}}_{y}({\vec r})&=&B^{\mathrm{qd}\prime}y, \\
B^{\mathrm{If}}_{y}({\vec r})&=&(B^{\mathrm{Q\prime}}_{\perp
}-B^{\mathrm{qd}\prime })y,
\end{array}
\end{equation}
where $z_{0}$ is the distance between the center of the QUIC trap
(in the absence of gravity) and the center of the quadrupole trap.
In their experiment \cite{chenshuai}, $B^{\mathrm{qd}\prime}$ is
about $150$ (Gauss-cm$^{-1}$) and $z_{0}$ is $0.75$ (cm).
Therefore, in the region near the center of the QUIC trap,
we have $B^{\mathrm{qd}}_{z}=-107$ (Gauss). From $B^{\rm Q%
}_{z}(0)=9$ (Gauss), we find $B^{\mathrm{If}}_{z}=116$ (Gauss).
If ${\vec{B}}^{\mathrm{If}}$ and ${\vec{B}}^{\mathrm{qd}}$
decrease with different time constants $\tau_{\mathrm{If}}$ and
$\tau_{\mathrm{qd}}$ after the switch-off, the B-field from the
quadrupole coils becomes
\begin{eqnarray}
(B^{\mathrm{qd}}_{z}%
\mathbf{e}_{z}+B^{\mathrm{qd}}_{y}\mathbf{e}_{y}+B^{\mathrm{qd}}_{x}\mathbf{e}_{x}) e^{-t/\tau_{%
\mathrm{qd}}}
\end{eqnarray}
and the B-field created by the Ioffe coil becomes
\begin{eqnarray}
(B^{\mathrm{If}}_{z}%
\mathbf{e}_{z}+B^{\mathrm{If}}_{y}\mathbf{e}_{y}+B^{\mathrm{If}}_{x}\mathbf{e}_{x}) e^{-t/\tau_{%
\mathrm{If}}}.
\end{eqnarray}
In this section, we assume the time interval $t_{\mathrm {int}}$
between the switching-off of the B-fields ${\vec B}^{Q}$ and
${\vec B}^{A}$ is sufficiently long, i.e., when the QUIC field is
switched off, the bias field has already decreased to zero. Since
the B-fields $B^{\mathrm {qd}}_{z}$ and $B^{\mathrm{If}}_{z}$ have
different signs, at a time $ \tilde {t}_{0}$ the condition
\begin{eqnarray}
B^{\mathrm{If}}_{z}e^{-\tilde {t}_{0}/\tau_{\mathrm{If}}}+B^{\mathrm{qd}%
}_{z}e^{-\tilde {t}_{0}/\tau_{\mathrm{qd}}}=0
\end{eqnarray}
can be satisfied and the $z$ component of the total B-field
becomes zero. As
before, nonadiabatic transitions happen mainly in the temporal region near $%
\tilde {t}_{0}$. Assuming
\begin{eqnarray}
\tau_{\mathrm{If}}=\frac{\tau_{\mathrm{qd}}}{2},
\end{eqnarray}
the nonadiabatic transition probability $\tilde{A}$ in the
spin-$1/2$ case can again be calculated
with the Landau-Zener method used previously, provided that $\tilde {t}_{0}<\tau_{\mathrm{If%
}}$. In the present case, we find $\tilde {t}_{0}<0.6\tau_{\mathrm{If}}$.
Thus, we obtain
\begin{eqnarray}
\tilde {A}({\vec r})=\exp \left[ -\pi \frac{g_{F}\mu_{\mathrm B}\,\tau_{\mathrm{If}}\sum_{l}(B^{\mathrm{qd}%
}_{l}|B^{\mathrm{If}}_{z}|+B^{\mathrm{If}}_{l}|B^{\mathrm{qd}}_{z}|)^{2}}{|B^{\mathrm{If}}_{z}|^3}\right],
\hskip4pt \label{newA}
\end{eqnarray}
where $l=x,y$; this $\tilde{A}$ is the counterpart of the parameter $A$ in the
first scenario.
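The bound $\tilde{t}_{0}<0.6\tau_{\mathrm{If}}$ quoted above can be checked directly from the zero-crossing condition: with $\tau_{\mathrm{If}}=\tau_{\mathrm{qd}}/2$, the substitution $u=e^{-\tilde{t}_{0}/\tau_{\mathrm{qd}}}$ turns the condition into a quadratic, $B^{\mathrm{If}}_{z}u^{2}+B^{\mathrm{qd}}_{z}u=0$. A minimal sketch, using the field values $B^{\mathrm{If}}_{z}=116$ Gauss and $B^{\mathrm{qd}}_{z}=-107$ Gauss quoted earlier:

```python
import numpy as np

# Solve B_If_z * exp(-t/tau_If) + B_qd_z * exp(-t/tau_qd) = 0 with
# tau_If = tau_qd / 2, as assumed in the text.  With
# u = exp(-t/tau_qd) the condition becomes B_If_z u**2 + B_qd_z u = 0,
# whose nontrivial root is u = -B_qd_z / B_If_z.
B_If_z, B_qd_z = 116.0, -107.0   # Gauss, values quoted in the text
tau_qd = 1.0                     # arbitrary time unit
tau_If = tau_qd / 2.0

u = -B_qd_z / B_If_z             # = exp(-t0/tau_qd)
t0 = -tau_qd * np.log(u)

# Residual of the original zero-crossing condition:
residual = B_If_z * np.exp(-t0 / tau_If) + B_qd_z * np.exp(-t0 / tau_qd)
print(t0 / tau_If)               # ~0.16, consistent with t0 < 0.6 tau_If
```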
Now we discuss a special case. We assume the bias field is
switched off adiabatically, such that the atomic cloud follows the
variation of the total B-field, and moves to the region near the
center ${\vec r}_1$ of the QUIC trap (in the absence of the bias
field). Since the Landau-Zener method is based on the assumption
that the atoms are located in the region where the QUIC field lies
approximately along the $z$ axis before being switched off, the
factor $\tilde {A}$ given in Eq. (\ref{newA}) is applicable if the
trap center ${\vec r}_1$ is near the $z$ axis so that $|B^{\mathrm
{Q}}_{(x/y)}|$ is much smaller than $|B^{\mathrm Q}_{z}|$. For
practical values of ${\vec B}^{\mathrm{If}}$ and ${\vec
B}^{\mathrm{qd}}$, the above condition is satisfied and a good
estimate for the transition probability can again be given by Eq.
(\ref{newA}).
From the directions of the electric currents in the quadrupole
coils and the Ioffe coil as shown in Figure \ref{fig1}, the
B-fields $B^{\mathrm{If}}_{y}$ and $B^{\mathrm {qd}}_{y}$ are
found to have the same sign, while the fields $B^{\mathrm{If}}_{z}$
and $B^{\mathrm {qd}}_{z}$ have opposite signs. Therefore, after
switching off, $B^{\mathrm Q}_{y}$ decays much slower than
$B^{\mathrm Q}_{z}$. At time $\tilde {t}_0$ when $B^{\mathrm
Q}_{z}=0$, $B^{\mathrm Q}_{y}$ has the same order of magnitude as
its initial value. Therefore, if $B^{\mathrm Q}_{y}$ is
sufficiently large in the region near ${\vec r}_1$, the direction
of ${\vec B}^{Q}$ may change slowly enough during the
switching-off process of ${\vec B}^{Q}$ that the atomic spin state
is adiabatically flipped. Then, we have $\tilde A\approx 0$ and
$P_{-2}\approx 1$. For instance, in the experiment of Ref.
\cite{chenshuai}, ${\vec r}_1=(0,30\mu{\mathrm m},0)$ and
$B^{\mathrm Q}_{y}\sim 0.45$ (Gauss). In this case, $85$ percent of
the atoms can be transferred to the state $|M_F=-2\rangle$.
A second case of some interest is when the bias field is switched off suddenly.
Once the bias field is turned off, the atoms begin to oscillate in the
new QUIC trap centered at ${\vec r}_{1}$. If the time interval
between the switching off of ${\vec B}^{\mathrm A}$ and ${\vec
B}^{\mathrm Q}$ is $\tilde t_{\mathrm {int}}$, then the population
distribution can be estimated as
\begin{eqnarray}
N_{M_{F}}=\int d{\vec r}\, {\tilde \rho}({\vec r},\tilde t_{\mathrm
{int}})P_{M_{F}}[\tilde A({\vec r})].
\end{eqnarray}
Here, ${\tilde \rho}({\vec r},\tilde t_{\mathrm {int}})$ is the density
distribution of atoms in the QUIC trap at the time
when the QUIC field is switched off.
\section{Conclusions}
In conclusion, we have presented a detailed theoretical treatment
for the nonadiabatic level crossing dynamics of an atomic spin
coupled to a time dependent magnetic field. When applied to the
condensate experiments in a modified QUIC trap \cite{chenshuai},
our theory provides a satisfactory explanation for the observed
multi-component spinor condensates when the trapping B-fields were
shut off. In the broad context of condensate wave function
engineering and atom optics with degenerate quantum gases, our
work provides useful insights for experiments. For example, in
some proposals \cite{Machida} and experiments \cite{Ketterle} on
the creation of vortex states in a condensate, the internal atomic hyperfine
state is designed to adiabatically follow the external magnetic field and be changed
from $|m_{F}\rangle_{z}$ to $|-m_{F}\rangle_{z}$. Our method can
then also be used to estimate the nonadiabatic effects in these proposals
and experiments \cite{Machida,Ketterle}.
\vskip 12pt We thank the Peking University atomic quantum gas
group, especially its leader Prof. X. Z. Chen for enlightening
discussions. We thank Prof. Chandra Raman for several helpful
communications. Part of this work was completed while one of us
(L.Y.) was a visitor at the Institute of Theoretical Physics of
the Chinese Academy of Sciences in Beijing; he acknowledges the warm
hospitality extended to him by his friends at the Institute. This
work is supported by NSF, NASA, and the Ministry of Education of
China.
cond-mat/0603540
\section{Introduction}
Advances in epitaxial growth technologies have led to the
fabrication of high-quality two-dimensional electron gas (2DEG)
systems that are almost defect-free and upon which electronic
nanostructures can be built. The electron transport properties of
these nanostructures have been studied extensively both experimentally
and theoretically.~\cite{fer97,imr97} The most studied nanostructure
is the quantum point contact (QPC), owing to its simple configuration
and to the pronounced quantization effects in such systems,
as shown in the conductance $G$.~\cite{wee88,wha88,wee88b}
\begin{figure}[b]
\includegraphics[width=0.41\textwidth,angle=0]{fig1.eps}
\caption{ Sketch of the gated QPC, which is connected at each end to
a two-dimensional electron gas electrode. The narrow constriction is
acted upon by an external, transversely polarized, time-dependent
electric field in the millimeter-wave regime. } \label{fig:1}
\end{figure}
These QPC's, when created electrostatically by negatively biasing
a split-gate located on top of a 2DEG,~\cite{wee88,wha88,wee88b}
can be pictured as a narrow constriction connecting
adiabatically at each end to a 2DEG,~\cite{gla88,gla90} as depicted
in Fig.~\ref{fig:1}. The energy levels in the narrow constriction are
quantized into one-dimensional subbands whose density of states (DOS)
is singular near a subband bottom. In the presence of an attractive
scatterer, this singular DOS was found to give rise to dip
structures in $G$,~\cite{chu89,bag90,tek91,nix91,lev92,tak92,kun92}
which are associated with the formation of impurity-induced
quasi-bound states (QBS's)~\cite{bag90} just beneath a subband bottom.
More recently, attention has shifted to QPC's acted upon by
high-frequency fields. These time-dependent fields include
transversely,\,~\cite{hek91,hu93,wys93,wys95,hu96,ala98,%
fed93,jan94,gor94,gri95,maa96,chu96,tag96,tag97} or
longitudinally\,~\cite{fen93,tan99} polarized fields, or simply gate-induced
time-modulated potentials.~\cite{bag92,tan96,tan97} These studies
focus on coherent inelastic scattering by assuming the range of
the time-modulated fields to be shorter than the incoherent mean free
path. A number of interesting transport characteristics were
explored. In the case of zero source-drain bias, it was demonstrated
that electron pumping could occur when an asymmetric split-gate is
acted upon by a time-modulated field.\,\cite{hek91} In the case of a
finite source-drain bias and QPC's of varying widths, the
photon-assisted quantum transport characteristics have been
studied.~\cite{hu93,wys93,wys95,hu96,ala98,fed93,jan94,gor94,gri95,maa96}
One might also be prompted by the impurity-induced QBS
features,~\cite{chu89,bag90,tek91,nix91,lev92,tak92,kun92} and ask
whether QBS's could also manifest themselves in the time-dependent
transport characteristics of QPC's. Indeed, earlier studies have
already found QBS features for cases in which the QPC's have either a
delta-profile oscillating barrier~\cite{bag92} or a transverse field.
\cite{chu96} In a recent study, we considered the more realistic case
of a finite-range longitudinally polarized field,
\cite{tan99} and found QBS features that are associated with
electrons making intrasubband and intersideband transitions to the
vicinity of a subband bottom. Since the allowed transitions depend on
the polarization of the time-modulated field, we opt to investigate
in this paper the QBS features for a finite-range transversely
polarized field.
In theoretical studies of coherent quantum transport in mesoscopic
nanostructures, the transfer-matrix and the scattering-matrix methods
are powerful tools. Both methods enable us to
numerically calculate current transmission
probabilities for arbitrarily shaped configurations.
The conductance of the nanostructure can then be obtained from the
Landauer-B\"uttiker formalism.~\cite{lan57,but86}
However, since the transfer-matrix method fails in systems for
which energy conservation is violated, it is unsuitable when a
time-dependent external field acts upon the system. Thus,
one has to develop a generalized scattering-matrix method to
calculate numerically the current transmission probability of a
time-modulated mesoscopic nanostructure.
In this paper, we develop a generalized scattering method for
a finite-range transversely polarized time-dependent
electric field acting upon a narrow constriction,
as depicted in Fig.~\ref{fig:1}.
This method enables us to calculate not only the current transmission
and reflection probabilities but also the contributions from each
subband and sideband state generated by the transversely polarized
electric field. This transverse field
${\bf E}({\bf x},t)={\cal E}(x) \cos(\omega t) \hat{y}$
has a finite longitudinal profile, namely that ${\cal E}(x)$ covers
a length $L$ that excludes the reservoirs.
The finiteness of the electric field in the $x$ direction breaks
the longitudinal translational invariance,
and hence allows electrons to make intersideband transitions that do not conserve
their longitudinal momenta.~\cite{chu96,tan96}
Moreover, since the transverse electric field is not uniform in
the $y$ direction, the transverse translational invariance is also violated.
Thus the electron-photon scattering processes can include
intersubband transitions. This transmission mechanism is
quite different from the scattering induced by a longitudinal
field\,~\cite{tan99} or a time-modulated potential,~\cite{tan96} in which
the electrons can only make intrasubband transitions
(the subband index remains unchanged).
The rest of this paper is organized as follows. In Sec. II, we
formulate the generalized scattering-matrix method, which incorporates
a time-dependent mode-matching scheme to solve the time-dependent
Schr\"odinger equation.
The method described in Sec. II is implemented numerically in Sec. III.
From our numerical evidence, we predict that QBS features caused by the
transversely polarized field can occur in narrow constrictions.
Concluding remarks are given in Sec. IV.
\section{Model and Method}
In this section, the coherent inelastic scattering problem in the presence
of a finite-range transverse electric field is formulated. The finite-range
time-dependent electric field is divided into a series of segments,
each of which is described by a $\delta$-profile field.~\cite{chu96}
The matching between these sliced regions has to be performed
in the cascading of the scattering
matrices, from which the transmission and reflection coefficients are
obtained. The conductance $G$ is then expressed in terms of these
coefficients.
Previously, we have investigated transport properties of electrons in
narrow constrictions acted upon by a longitudinally polarized
time-dependent electric field.~\cite{tan99} The potential due to the
longitudinal field has a finite range in the $x$ direction but
remains uniform in the $y$ direction. This uniformity in the transverse
direction allows us to propose a matching scheme that avoids slicing the
region covered by the longitudinal time-dependent field. However,
when the narrow constriction is acted upon by a transversely
polarized time-dependent field, the translational invariance in the
$y$ direction is broken --- both intersubband and
intersideband transitions are involved. Hence, we have to develop a
generalized scattering method to formulate the quantum transport problem
when a transverse external field acts upon the narrow constriction.
Since the electric field is
assumed to be applied only to the narrow constriction region, we
need only formulate this scattering problem in the narrow
constriction region. In the ballistic regime, the length of
the narrow constriction, $L_{c}$, is smaller than
the phase-breaking length, $l_{\phi}$,
and hence the electron transport
can be treated as a single-particle problem.
Thus the electron transport can be
formulated by a time-dependent Schr\"odinger equation, given by
\begin{equation}
i\hbar\frac{\partial}{\partial t} \, \Psi({\bf x},t) = {\cal H}({\bf
x},t)\, \Psi({\bf x},t)\, ,
\end{equation}
with the Hamiltonian of the form
\begin{equation}
{\cal H}({\bf x},t) = \frac{1}{2m^{*}}\left[ {\bf p} + {\frac{e}{c}}{\bf A}({\bf x},t) %
\right]^2 + V_c(y).
\label{eq:ham}
\end{equation}
Here ${\bf p}$ denotes the momentum of an electron
and $V_c(y)$ represents the transverse confinement of the
narrow constriction modeled by a quadratic potential.~\cite{but90}
Taking the Coulomb gauge, the effect of the
transversely polarized electric field can be
represented by a vector potential:
\begin{equation}
{\bf A}({\bf x},t) = -{\frac{c}{\omega}}{\cal E}(x)\sin(\omega t)\hat{y}\, ,
\end{equation}
where ${\cal E}(x)$ represents the profile of the external field, which has
amplitude ${\cal E}_{0}$ for $|x|<L/2$ and vanishes otherwise.
For convenience of analysis, below we choose
the length unit $a^{*}=1 / \! k_{{\rm F}}$, the energy unit
$E^{*}=\hbar^2 k_{{\rm F}}^2/(2m^*)$,
the time unit $t^{*}=\hbar / E^{*}$, and express the field amplitude
${\cal E}_{0}$ in units of $E^{*}/(ea^{*})$, where $-e$ denotes the
electron charge, $m^{*}$ the effective mass, and $k_{{\rm F}}$
a typical Fermi wave vector of the reservoir.
Thus we can write the dimensionless transverse confinement as
$V_c(y)=\omega_y^2 y^2$, which gives the
quantized transverse energy levels $\varepsilon_n = (2n+1)\omega_{y}$
and the corresponding wave functions $\phi_n(y)$.
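For orientation, these units can be evaluated for a GaAs-based 2DEG; in the following minimal sketch, the values $m^{*}=0.067\,m_{e}$ and $k_{\rm F}=10^{8}\,$m$^{-1}$ are illustrative assumptions rather than parameters taken from the text.

```python
import math

# Illustrative evaluation of the units a*, E*, t* for a GaAs-based
# 2DEG.  The effective mass and Fermi wave vector below are assumed
# typical values, not parameters quoted in the text.
hbar = 1.054571817e-34        # J s
m_e  = 9.1093837015e-31       # kg
eV   = 1.602176634e-19        # J

m_star = 0.067 * m_e          # GaAs effective mass (assumption)
k_F    = 1.0e8                # Fermi wave vector, m^-1 (assumption)

a_star = 1.0 / k_F                          # length unit (10 nm here)
E_star = hbar**2 * k_F**2 / (2.0 * m_star)  # energy unit
t_star = hbar / E_star                      # time unit

E_star_meV = E_star / eV * 1e3
print(a_star, E_star_meV, t_star)
```

With these assumed numbers, $E^{*}$ is a few meV and $t^{*}$ of order $0.1\,$ps, so driving frequencies $\omega \sim \mathcal{O}(0.1)$ in these units fall in the millimeter-wave range mentioned in the caption of Fig.~1.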
The finite-range electric field is divided into $N_{L}$ slices, so the
width of each slice is given by $\delta L = L/N_{L}$.
Here $N_{L}$ must be
sufficiently large that $\delta L$ is
narrow enough for every slice to be described by a $\delta$-profile
field.~\cite{chu96} The
locations of these $\delta$-profile fields are given by $x_{i}= -L/2 +
(i-1/2)\delta L$, where the positive integer $i=1, 2, \cdots, N_{L}$. With
this slice-wise division of the profile, the Schr\"odinger equation for the
$i$th $\delta$-profile field is then given by
\begin{widetext}
\begin{eqnarray}
i {\partial\over\partial t} \Phi^{(i)}({\bf x},t)
= \left[ -\left( {\frac{\partial^2}{\partial x^2}} +
\frac{\partial^2}{\partial y^2} \right) + \omega_{y}^2 y^2 + \left(i
{\frac{ 2{\cal E}_{0}} {\omega}} \frac{\partial}{\partial y}
\sin(\omega t) + \frac{{\cal E}_{0}^2 }{\omega^2} \sin^2(\omega t)
\right) \delta L \delta (x-x_{i})\right] \Phi^{(i)}({\bf x},t)\, .
\end{eqnarray}
Considering an $n$th subband electron incident from the left-hand side of
the $i$th $\delta$-profile field, with incident energy
$\mu^{\prime}$, the
scattering wave function is given by~\cite{chu96}
\begin{mathletters}
\begin{eqnarray}
\Phi_{n}^{(i)}({\bf x},t) &=&
\phi_{n}(y)\exp\left[ ik_n(\mu^{\prime})x - i\mu^{\prime}t \right]
\nonumber \\
&&+ \displaystyle
\sum_{n^{\prime},m^{\prime}}r^{(i)}_{n^{\prime}n}(m^{
\prime})\phi_{n^{\prime}}(y) \exp\left[
-ik_{n^{\prime}}(\mu^{\prime}+ m^{\prime}\omega )x \right]
\exp\left[ -i(\mu^{\prime}+m^{\prime}\omega)t \right] \hspace{5mm}
{\rm if}\ x < x_i , \label{eq:phi1}\\
\Phi_{n}^{(i)}({\bf x},t) &=& \displaystyle
\sum_{n^{\prime},m^{\prime}}t^{(i)}_{n^{\prime}n}(m^{\prime})
\phi_{n^{\prime}}(y) \exp\left[
ik_{n^{\prime}}(\mu^{\prime}+m^{\prime} \omega )x \right] \exp\left[
-i(\mu^{\prime}+m^{\prime}\omega)t \right] \hspace{5mm} {\rm if}\ x
> x_i , \label{eq:phi2}
\end{eqnarray}
\end{mathletters}
\end{widetext}
where the electron is scattered into the subband $n^{\prime}$ and
sideband $m^{\prime}$. The wave vector $k_{n}(\mu^{\prime})
= \sqrt{\mu^{\prime}-\varepsilon_n}$ is the effective wave vector
for an electron with energy $\mu^{\prime}$ in the $n$th
subband. Here we have defined $\Phi^{(i)}({\bf x},t) =
\sum_n\Phi_n^{(i)}({\bf x},t)$ as a summation over all occupied
incident subbands. The coefficients in Eq.~(\ref{eq:phi1}) and
(\ref{eq:phi2}) have to be determined by the following boundary
conditions:
\begin{equation}
\left.\Phi_n^{(i)}\right|_{x=x_i-\delta} =
\left.\Phi_n^{(i)}\right|_{x=x_i+\delta}\, , \label{eq:bound1}
\end{equation}
and
\begin{eqnarray}
&&{\displaystyle \left. {\frac{\partial\Phi_n^{(i)}}{\partial x}}
\right|_{x=x_i+\delta} -\left. {\frac{\partial\Phi_n^{(i)}}{\partial
x}}
\right|_{x=x_i-\delta} } \\
&=& {\displaystyle \left[ i {\frac{ 2{\cal E}_{0}}{%
\omega}} {\frac{\partial}{\partial y}} \sin(\omega t) + {\frac{{\cal
E}_0^2 }{\omega^2 }} \sin^2(\omega t) \right] \delta L
\Phi_n^{(i)}(x=x_i) }\, .\nonumber
\label{eq:bound2}
\end{eqnarray}
Imposing the boundary conditions (\ref{eq:bound1}) and (\ref{eq:bound2}) to
perform the matching at all times, and using the expression for the matrix
element
\begin{equation}
<l\mid {\frac{\partial}{\partial y}}\mid n^{\prime}> = \sqrt{{\frac{%
\omega_{y}}{2}} }\left[ \sqrt{n^{\prime}}\delta_{l,n^{\prime}-1} - \sqrt{%
n^{\prime}+1} \, \delta_{l,n^{\prime}+1} \right]\, ,
\end{equation}
we obtain the equations relating the reflection coefficients $%
r^{(i)}_{ln}(m) $ and the transmission coefficients $t^{(i)}_{ln}(m)$,
\begin{mathletters}
\begin{equation}
t^{(i)}_{ln}(m) - r^{(i)}_{ln}(m) = \delta_{m,0} \, \delta_{n,l} \, ,
\label{eq:coeff1}
\end{equation}
and
\begin{eqnarray}
&&\delta_{m,0} \ \delta_{n,l} \, k_{n}(\mu') \nonumber \\
&=& k_{l}(\mu^{\prime}+m\omega) \left[ r^{(i)}_{ln}(m) + t^{(i)}_{ln}(m) %
\right] \nonumber \\
&&+ i\frac{{\cal E}_0}{\omega}\delta L\, \sum_{n^{\prime},
m^{\prime}} \left[\delta_{m^{\prime}, m+1} -
\delta_{m^{\prime}, m-1}\right] \nonumber \\
&&\hspace{20mm} \times <l\mid \frac{\partial}{\partial y}\mid
n^{\prime}>
t^{(i)}_{n^{\prime}n}(m^{\prime}) \\
&&+ i\frac{{\cal E}_0^2}{4\omega^2}\delta L\, \left[
2t^{(i)}_{ln}(m) + t^{(i)}_{ln}(m+2) + t^{(i)}_{ln}(m-2) \right]
.\nonumber
\label{eq:coeff2}
\end{eqnarray}
From these expressions for the wave vector, $k_n(\mu)$, along
the channel direction, it is clear that the field-induced
electron transitions do not conserve the longitudinal momenta. Such
transition processes become possible because the time-dependent field
has a finite longitudinal profile that excludes the reservoirs. In
Eq.~(\ref{eq:coeff2}), we can see that the ${\cal E}_0$ term causes
intersubband and intersideband transitions by emission or absorption of
one $\hbar\omega$ (one-photon processes), while the
${\cal E}_0^2$ term contributes
to intersubband and intersideband transitions by
emission or absorption of two $\hbar\omega$ (two-photon processes).
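The harmonic-oscillator matrix element $<l\mid \partial/\partial y \mid n^{\prime}>$ used above can be verified numerically against the eigenfunctions of $-\partial_{y}^{2}+\omega_{y}^{2}y^{2}$; a minimal sketch, in which the value $\omega_{y}=0.7$ is an arbitrary test choice:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Numerical check of <l| d/dy |n'> for the eigenfunctions of
# H = -d^2/dy^2 + omega_y^2 y^2 (dimensionless units of the text).
omega_y = 0.7                    # arbitrary sample confinement
alpha = math.sqrt(omega_y)       # inverse length scale of phi_n(y)

def phi(n, y):
    """Normalized eigenfunction phi_n(y), energy (2n+1)*omega_y."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = math.sqrt(alpha / math.sqrt(math.pi)
                     / (2.0 ** n * math.factorial(n)))
    return norm * hermval(alpha * y, c) * np.exp(-0.5 * (alpha * y) ** 2)

y = np.linspace(-15.0, 15.0, 40001)
dy = y[1] - y[0]
h = 1e-6                         # step for the central difference

def matel(l, n):
    """<l| d/dy |n> by central difference and grid quadrature."""
    dphi = (phi(n, y + h) - phi(n, y - h)) / (2.0 * h)
    return float(np.sum(phi(l, y) * dphi) * dy)

def formula(l, n):
    """Closed-form matrix element quoted in the text."""
    return math.sqrt(omega_y / 2.0) * (
        math.sqrt(n) * (l == n - 1) - math.sqrt(n + 1) * (l == n + 1))
```

For all low-lying $(l,n)$ pairs the quadrature reproduces the $\sqrt{\omega_y/2}\,[\sqrt{n^{\prime}}\delta_{l,n^{\prime}-1}-\sqrt{n^{\prime}+1}\,\delta_{l,n^{\prime}+1}]$ structure, including the selection rule $l=n^{\prime}\pm 1$.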
Solving Eqs.~(\ref{eq:coeff1}) and (\ref{eq:coeff2}), we obtain the
transmission coefficients $t^{(i)}_{ln}(m)$ and reflection coefficients $%
r^{(i)}_{ln}(m)$ of the $i$th slice. For an electron incident from the
right-hand side of the $i$th slice in the same subband, $n$, and energy, $%
\mu^{\prime}$, the transmission coefficient $\widetilde{t}^{(i)}_{ln}(m)$
and the reflection coefficient $\widetilde{r}^{(i)}_{ln}(m)$ differ from
those for an electron incident from the left-hand side of the $i$th slice
only by a phase factor of unit modulus, given by
\begin{mathletters}
\begin{equation}
\widetilde{t}^{(i)}_{ln}(m) = t^{(i)}_{ln}(m)\, \exp\left\{ 2 i \left[
k_{l}(\mu^{\prime}+m\omega) -k_{n}(\mu^{\prime})\right] x_i\right\}\, ,
\end{equation}
and
\begin{equation}
\widetilde{r}^{(i)}_{ln}(m) = r^{(i)}_{ln}(m)\, \exp\left\{-2 i \left[
k_{l}(\mu^{\prime}+m\omega) + k_{n}(\mu^{\prime})\right] x_i\right\} \, .
\end{equation}
\end{mathletters}
In general, for an electron incident from the left-hand side of the $i$th
slice (the $(i-1)$th region) in the subband $n_{i-1}$ and at energy $\mu +
m_{i-1}\omega$, this incident state is denoted as $\alpha_{i-1} =
(n_{i-1},m_{i-1})$. The electron may be transmitted to the right-hand side
of the $i$th slice ($i$th region) into the state $\alpha_i= (n_i,m_i)$ with
a transmission coefficient $t_{\alpha_{i},\alpha_{i-1}}$. Also, the electron
may be reflected to the left-hand side of the $i$th slice ($(i-1)$th region)
into the state $\beta_{i-1}= (n^{\prime}_{i-1},m^{\prime}_{i-1})$ with a
reflection coefficient $r_{\beta_{i-1},\alpha_{i-1}}$. Similarly, for an
electron incident from the right-hand side of the $i$th slice in the incident
state $\beta_i=(n^{\prime}_i,m^{\prime}_i)$, the corresponding transmission
coefficient and reflection coefficient due to this slice are given by $%
\widetilde{t}_{\beta_{i-1},\beta_{i}}$ and $\widetilde{r}_{\beta_{i-1},%
\beta_{i}}$, respectively. After defining these coefficients, we can
establish the scattering matrix equation, given by
\end{mathletters}
\begin{equation}
\left[
\begin{array}{c}
{\bf A}_{i} \\
{\bf B}_{i-1}
\end{array}
\right] = {\bf S}(i-1,i) \left[
\begin{array}{c}
{\bf A}_{i-1} \\
{\bf B}_{i}
\end{array}
\right]\, ,
\label{eq:smat1}
\end{equation}
where ${\bf A}_{i}$ and ${\bf B}_{i}$ are the coefficients of the right- and
the left-going states in the $i$th region, respectively, as illustrated in
Fig.~\ref{fig:2}. Here ${\bf S}(i-1,i)$ is the scattering matrix which
connects the $(i-1)$th to the $i$th region, across the $i$th slice, given by
\begin{equation}
{\bf S}(i-1,i) = \left[
\begin{array}{cc}
{\bf t}(i) & {\bf \widetilde{r}}(i) \\
{\bf r}(i) & {\bf \widetilde{t}}(i)
\end{array}
\right]\, .
\end{equation}
Here, ${\bf t}(i)$ and ${\bf r}(i)$ denote the transmission and
reflection matrices of the right-going electron at the $i$th slice,
respectively, and the tilded matrices refer to the
left-going electron. Matching between these sliced regions has to
be performed in the cascading of the scattering matrices. We should point
out that the reason for using the scattering-matrix formalism, instead of the
transfer-matrix method, is to avoid the truncation schemes required
in dealing with the exponentially growing solutions. The scattering-matrix
method is stable and accurate without any special treatment of the
intermediate states.
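The remark about exponentially growing solutions can be illustrated with a toy evanescent mode: cascading transfer matrices accumulates the growing exponential, while the scattering-matrix cascade only ever multiplies decaying factors. A minimal sketch, in which the decay constant and slice count are arbitrary illustrative values:

```python
import numpy as np

# Toy illustration of transfer-matrix instability for an evanescent
# mode with decay constant kappa across N slices of width dL.
# These numbers are arbitrary; this is not the constriction problem.
kappa, dL, N = 2.0, 1.0, 50

# The transfer matrix of one slice carries both the growing and the
# decaying exponential, so its cascaded product is ill-conditioned:
T = np.array([[np.exp(kappa * dL), 0.0],
              [0.0, np.exp(-kappa * dL)]])
T_tot = np.linalg.matrix_power(T, N)
cond = np.linalg.cond(T_tot)     # ~exp(2*kappa*dL*N), astronomically large

# The scattering-matrix cascade propagates only the decaying factor:
t_slice = np.exp(-kappa * dL)
t_tot = 1.0
for _ in range(N):
    t_tot *= t_slice             # stays bounded: exp(-kappa*dL*N)
```

In double precision the transfer-matrix product overflows for modestly larger $\kappa \delta L\, N$, whereas the cascaded transmission amplitude remains a well-behaved small number.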
\begin{figure}[tbp]
\includegraphics[width=0.41\textwidth,angle=0]{fig2.eps}
\caption{ Sketch of the finite-range transverse electric field,
which is divided into $N_L$ slices. Every slice is described by a $\protect\delta$%
-profile field, and it is between the $(i-1)$th and the $i$th
region. The coefficients of the right and left going states of the
$i$th region are denoted as ${\bf A}_{i}$ and ${\bf B}_{i}$,
respectively, and these coefficients between successive regions are
connected by the interface matrix ${\bf I}(i)$. $\protect\alpha_{i}$
and $\protect\beta_{i}$ are dummy indices of the $i$th region,
including both subband and sideband indices. } \label{fig:2}
\end{figure}
To perform the piece-wise matching, we start by rearranging Eq.~(\ref{eq:smat1}%
) to obtain the following matrix equation
\begin{equation}
\left[
\begin{array}{c}
{\bf A}_{i-1} \\
{\bf B}_{i-1}
\end{array}
\right] = {\bf I}(i) \left[
\begin{array}{c}
{\bf A}_{i} \\
{\bf B}_{i}
\end{array}
\right]\, , \label{eq:i-mat}
\end{equation}
which connects the coefficients of the successive regions across the $i$th
slice. Here ${\bf I}(i)$ is the interface matrix of the $i$th slice,
defined by
\begin{equation}
{\bf I}(i) = \left[
\begin{array}{cc}
{\bf I}_{11}(i) & {\bf I}_{12}(i) \vspace{1mm} \\
{\bf I}_{21}(i) & {\bf I}_{22}(i)
\end{array}
\right]\, ,
\end{equation}
in which
\begin{eqnarray}
{\bf I}_{11}(i) &=& {\bf t}(i)^{-1} ,
\nonumber \\
{\bf I}_{12}(i) &=& -{\bf t}(i)^{-1}
{\bf \widetilde{r}}(i) , \nonumber \\
{\bf I}_{21}(i) &=& {\bf r}(i) {\bf t}(i)^{-1} , \nonumber \\
{\bf I}_{22}(i) &=& {\bf \widetilde{t}}(i)
-{\bf r}(i){\bf t}(i)^{-1} {\bf \widetilde{r}}(i) .
\end{eqnarray}
In general, for the regions up to the ($i-1$)th slice, we have
\begin{equation}
\left[
\begin{array}{c}
{\bf A}_{i-1} \\
{\bf B}_{0}
\end{array}
\right] = {\bf S}(0,i-1) \left[
\begin{array}{c}
{\bf A}_{0} \\
{\bf B}_{i-1}
\end{array}
\right]\, , \label{eq:s-mat2}
\end{equation}
where ${\bf S}(0,i-1)$ is the scattering matrix connecting the $0$th region
to the $(i-1)$th region, defined by
\begin{equation}
{\bf S}(0,i-1) = \left[
\begin{array}{cc}
{\bf S}_{11}(0,i-1) & {\bf S}_{12}(0,i-1) \vspace{1mm} \\
{\bf S}_{21}(0,i-1) & {\bf S}_{22}(0,i-1)
\end{array}
\right]\, .
\end{equation}
Combining Eqs.~(\ref{eq:i-mat}) and (\ref{eq:s-mat2}), the coefficients ${\bf %
A}_{i-1}$ and ${\bf B}_{i-1}$ may be eliminated, and we obtain the
matrix equation connecting the $0$th to the $i$th region, given by
\begin{equation}
\left[
\begin{array}{c}
{\bf A}_{i} \\
{\bf B}_{0}
\end{array}
\right] = {\bf S}(0,i) \left[
\begin{array}{c}
{\bf A}_{0} \\
{\bf B}_{i}
\end{array}
\right]\, .
\end{equation}
The submatrices of this scattering matrix ${\bf S}(0,i)$ are,
explicitly,\cite{ko88}
\begin{widetext}
\begin{eqnarray}
{\bf S}_{11}(0,i) &=& \left[ {\bf I}_{11}(i) - {\bf S}_{12}(0,i-1) {\bf I}%
_{21}(i) \right]^{-1} {\bf S}_{11}(0,i-1)\, , \nonumber \\
{\bf S}_{12}(0,i) &=& \left[ {\bf I}_{11}(i) - {\bf S}_{12}(0,i-1) {\bf I}%
_{21}(i) \right]^{-1} \, \left[ {\bf S}_{12}(0,i-1) {\bf I}_{22}(i)
- {\bf I}_{12}(i) \right]\, , \nonumber \\
{\bf S}_{21}(0,i) &=& {\bf S}_{21}(0,i-1) + {\bf S}_{22}(0,i-1) {\bf I}%
_{21}(i) {\bf S}_{11}(0,i)\, , \nonumber \\
{\bf S}_{22}(0,i) &=& {\bf S}_{22}(0,i-1) {\bf I}_{22}(i) + {\bf S}%
_{22}(0,i-1) {\bf I}_{21}(i) {\bf S}_{12}(0,i)\, .
\label{eq:s(0,i)}
\end{eqnarray}
\end{widetext}
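As a consistency check, the combination rule of Eq.~(\ref{eq:s(0,i)}) can be exercised on small random blocks; a minimal sketch (the block size and matrix entries are arbitrary test data, not a physical slice). Starting from the trivial $S(0,0)$, one combination must reproduce the slice's own blocks, and cascading an identity slice must leave ${\bf S}$ unchanged:

```python
import numpy as np

# Exercise the scattering-matrix combination rule of Eq. (s(0,i)) on
# small random blocks.  M and the matrix entries are test data only.
rng = np.random.default_rng(0)
M = 4
E, Z = np.eye(M), np.zeros((M, M))

def interface(t, r, tt, rt):
    """Interface blocks I11, I12, I21, I22 built from one slice."""
    tinv = np.linalg.inv(t)
    return tinv, -tinv @ rt, r @ tinv, tt - r @ tinv @ rt

def combine(S, I):
    """Combine S(0,i-1) with the interface of slice i -> S(0,i)."""
    S11, S12, S21, S22 = S
    I11, I12, I21, I22 = I
    D = np.linalg.inv(I11 - S12 @ I21)
    S11n = D @ S11
    S12n = D @ (S12 @ I22 - I12)
    S21n = S21 + S22 @ I21 @ S11n
    S22n = S22 @ I22 + S22 @ I21 @ S12n
    return S11n, S12n, S21n, S22n

# Random slice blocks (t made diagonally dominant so it is invertible).
t  = rng.normal(size=(M, M)) + 3 * E
r  = 0.1 * rng.normal(size=(M, M))
tt = rng.normal(size=(M, M)) + 3 * E
rt = 0.1 * rng.normal(size=(M, M))

# One combination from the trivial S(0,0) reproduces the slice itself:
S1 = combine((E, Z, Z, E), interface(t, r, tt, rt))
# Cascading an identity slice (t = tt = 1, r = rt = 0) changes nothing:
S2 = combine(S1, interface(E, Z, E, Z))
```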
This iterative procedure is not as simple as the
transfer-matrix method, which merely multiplies matrices. However,
once the system is acted upon by an external time-modulated
field, the evanescent modes play an important role due to inelastic
scattering. In this situation, we prefer the scattering-matrix
method for its numerical stability.
By iterating Eq.~(\ref{eq:s(0,i)}), we obtain the scattering matrix ${\bf S}%
(0,N_{L})$ which satisfies the matrix equation:
\begin{equation}
\left[
\begin{array}{c}
{\bf A}_{N_{L}} \\
{\bf B}_{0}
\end{array}
\right] = {\bf S}(0,N_{L}) \left[
\begin{array}{c}
{\bf A}_{0} \\
{\bf B}_{N_{L}}
\end{array}
\right]\, .
\end{equation}
This equation describes the electron transport through the whole
time-modulated region. The incident state is assumed to be $\alpha_{{\rm in}%
}=(n_{0},0)$ such that the elements of the incident coefficient ${\bf A}_{0}$
can be expressed as $\delta_{n,n_{0}}\delta_{m,0}$, i.e., only the
$(n_{0},0)$ element is nonzero. Setting ${\bf B}_{N_{L}}={\bf 0}$, we have
\begin{mathletters}
\begin{equation}
{\bf A}_{N_{L}} = {\bf S}_{11}(0,N_{L}) {\bf A}_{0}\, ,
\end{equation}
and
\begin{equation}
{\bf B}_{0} = {\bf S}_{21}(0,N_{L}) {\bf A}_{0}\, .
\end{equation}
The transmission coefficient for an electron incident from the initial state
$\alpha_{{\rm in}}=(n_{0},0)$ and transmitted, by the finite-range
transverse field, into the final state $\alpha_{{\rm f}}=(n_{{\rm f}},m_{%
{\rm f}})$ is denoted by $t_{\alpha_{{\rm f}},\alpha_{{\rm in}}} =
(A_{N_{L}})_{\alpha_{{\rm f}}}$, where $(A_{N_{L}})_{\alpha_{{\rm f}}}$ is
an element of ${\bf A}_{N_{L}}$. The current transmission coefficient,
corresponding to this inelastic scattering process, is then given by
\end{mathletters}
\begin{equation}
T_{\alpha_{{\rm in}}}^{\alpha_{{\rm f}}}=\left[ {\frac{ k_{n_{{\rm f}}}(\mu
+m_{{\rm f}}\omega)}{k_{n_{0}}(\mu) }}\right] \left| t_{\alpha_{{\rm f}%
},\alpha_{{\rm in}}} \right|^2\, .
\end{equation}
Therefore, the zero-temperature conductance may be obtained by summing over
all the possible incident and transmitted states:
\begin{equation}
G = {\frac{2e^2}{h}}\sum_{\alpha_{{\rm in}}} \sum_{\alpha_{{\rm f}}}\,
T_{\alpha_{{\rm in}}}^{\alpha_{{\rm f}}} = {\frac{2e^2}{h}} \sum_{\alpha_{%
{\rm in}}}\, T_{\alpha_{{\rm in}}}\, ,
\end{equation}
where $T_{\alpha_{{\rm in}}}$ is the current transmission coefficient from
the incident state $\alpha_{{\rm in}}$. Since the incident sideband is
specified to be $m=0$ for an arbitrary incident subband $n_{0}$, the
summation $\sum_{\alpha_{{\rm in}}} = \sum_{n_{0}=0}^{N}$, where $N+1$ is
the number of propagating subbands for the chemical potential $\mu$. For
the final states, however, both the subband and sideband indices are
arbitrary, so $\sum_{\alpha_{{\rm f}}} = \sum_{n_{{\rm f}}=0}^{N}
{\sum^{\prime}}_{m_{{\rm f}}}$ is a double sum. Here the prime indicates
that the summation is over $m_{{\rm f}}$ such that
$k_{n_{{\rm f}}}(\mu +m_{{\rm f}}\omega)$ is real, namely that only
propagating channels are included among the scattering states. The
conservation of current, given by the condition
\begin{equation}
\sum_{\alpha_{{\rm f}}}\, \frac{k_{n_{{\rm f}}}(\mu+m_{{\rm f}}\omega)}{%
k_{n_{0}}(\mu)} \, \left[ \, \left| t_{\alpha_{{\rm f}},\alpha_{{\rm in}}}
\right|^2 + \left| r_{\alpha_{{\rm f}},\alpha_{{\rm in}}} \right|^2 \,
\right] = 1\, ,
\end{equation}
is used to check our numerical accuracy.
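In code, the conductance sum and the conservation check fit naturally into one routine. This is a sketch under our own conventions: the dict-of-dict containers and the name `flux` for the ratio $k_{n_{\rm f}}(\mu+m_{\rm f}\omega)/k_{n_0}(\mu)$ are our assumptions, with the amplitudes assumed already computed.

```python
def conductance(t, r, flux, tol=1e-8):
    """Zero-temperature conductance in units of 2e^2/h.

    t[a_in][a_f], r[a_in][a_f]: transmission/reflection amplitudes into
    each propagating final state a_f = (n_f, m_f);
    flux[a_in][a_f]: velocity ratio k_{n_f}(mu + m_f*omega) / k_{n_0}(mu).
    """
    G = 0.0
    for a_in in t:
        # current conservation: flux-weighted |t|^2 + |r|^2 sums to 1
        total = sum(flux[a_in][a] * (abs(t[a_in][a]) ** 2
                                     + abs(r[a_in][a]) ** 2)
                    for a in t[a_in])
        assert abs(total - 1.0) < tol, "current not conserved"
        # each incident channel contributes its summed transmission
        G += sum(flux[a_in][a] * abs(t[a_in][a]) ** 2 for a in t[a_in])
    return G
```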
\section{Numerical Results and Discussion}
\begin{figure}[tbp]
\includegraphics[width=0.41\textwidth,angle=0]{fig4.eps}
\caption{ Conductance $G$ as a function of $X$ for frequency $\protect\omega
= 0.042$ ($\simeq 0.6\Delta\protect\varepsilon$), with time-modulated range
$L = 50$. The amplitudes of the electric field are ${\cal E}_{0} = 0.002$
($\simeq 22.6$ V/cm) in a; $0.003$ ($\simeq 33.9$ V/cm) in b; and $0.004$
($\simeq 45.2$ V/cm) in c. The curves are vertically offset for clarity.
} \label{fig:4}
\end{figure}
In this section the behavior of the conductance $G$ is studied. To
facilitate experimental realization, we fix the length $L$ of the
time-modulated region while varying the field strength ${\cal E}_{0}$; the
angular frequencies are chosen to be $\omega=0.028$
($\nu=\omega/2\pi\cong 61\, {\rm GHz}$) and 0.042 ($\nu\cong 91\, {\rm GHz}$),
as depicted in Figs.~\ref{fig:3} and \ref{fig:4}, respectively. In both
figures the length is $L=50\ (\simeq 0.4\ \mu{\rm m})$. The $G$
characteristics are represented by the dependence on $X$, the suitably
rescaled chemical potential $\mu$. According to this scale, when $\mu$ is
changed by a subband energy level spacing $\Delta\varepsilon$, it
corresponds to $\Delta X=1$, and when $\mu$ is changed by $\hbar\omega$, it
corresponds to $\Delta X=\omega / \Delta\varepsilon = 0.4\ {\rm and}\ 0.6$
for Figs.~\ref{fig:3} and \ref{fig:4}, respectively. In addition, when $X=N$%
, $\mu$ is at the $N$-th subband bottom, namely $N=1,\ 2,\ {\rm or}\ 3$ in
these figures.
\begin{figure}[tbp]
\includegraphics[width=0.41\textwidth,angle=0]{fig3.eps}
\caption{ Conductance $G$ as a function of $X$ for frequency $\protect\omega
= 0.028$ ($\simeq 0.4\Delta\protect\varepsilon$), with time-modulated range
$L = 50$. The amplitudes of the electric field are ${\cal E}_{0} = 0.002$
($\simeq 22.6$ V/cm) in a; $0.003$ ($\simeq 33.9$ V/cm) in b; and $0.004$
($\simeq 45.2$ V/cm) in c. The curves are vertically offset for clarity.
} \label{fig:3}
\end{figure}
In our numerical examples, the narrow constriction is chosen to be that in a
high mobility
${{\rm GaAs-Al}_{x}{\rm Ga}_{1-x}{\rm As}}$ with a typical electron density
$n \sim 2.5 \times 10^{11}$ cm$^{-2}$, and $m^{*} = 0.067 m_{e}$.
Correspondingly, our choice of length unit $a^{*} = 1/k_{{\rm F}} = 79.6$
\AA, energy unit $E^{*} = \hbar^{2}k_{{\rm F}}^{2}/(2m^{*}) = 9$ meV, and
frequency unit $\omega^{*} = E^{*}/\hbar = 13.6$ THz. We also choose
$\omega_y = 0.035$ such that the transverse energy level spacing
$\Delta\varepsilon = 2\omega_{y}=0.07$,
and the effective narrow constriction
width is of the order of $0.1\ \mu{\rm m}$. In the following, in presenting
the dependence of $G$ on $\mu$, it is more convenient to plot $G$ as a
function of $X$ instead, where
\begin{equation}
X = {\frac{\mu}{\Delta\varepsilon}} + {\frac{1}{2}}\, .
\end{equation}
With this conversion, $X$ is in units of $\Delta\varepsilon$, and the
integer part of $X$ is the number of propagating channels through the NC.
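The unit conversions quoted in the preceding paragraph are easy to verify. The sketch below (ours) recomputes them from the stated density and effective mass, using $k_{\rm F}=\sqrt{2\pi n}$ for a two-dimensional electron gas and SI constants.

```python
import math

# material parameters stated in the text (GaAs-AlGaAs)
hbar = 1.054571817e-34            # J s
m_e = 9.1093837015e-31            # kg
m_eff = 0.067 * m_e               # effective mass
n2d = 2.5e11 * 1e4                # electron density, m^-2

k_F = math.sqrt(2 * math.pi * n2d)            # 2D Fermi wave number
a_star = 1.0 / k_F                            # length unit, ~79.6 Angstrom
E_star = hbar ** 2 * k_F ** 2 / (2 * m_eff)   # energy unit, ~9 meV
w_star = E_star / hbar                        # frequency unit, ~13.6 THz

def X_of_mu(mu, d_eps=0.07):
    """Rescaled chemical potential X; mu and d_eps in units of E*."""
    return mu / d_eps + 0.5

# omega = 0.028 (in units of w_star) corresponds to nu ~ 61 GHz
nu_GHz = 0.028 * w_star / (2 * math.pi) / 1e9
```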
In Fig.~\ref{fig:3}, the field amplitudes are chosen to be
${\cal E}_0=0.002,\ 0.003,\ {\rm and}\ 0.004$ for Figs.~\ref{fig:3}a-c,
respectively. The angular frequency is chosen to be $\omega=0.028$, whose
energy quantum corresponds to an interval
$\Delta X=\omega/\Delta\varepsilon =0.4$. The dotted curves are the unperturbed
results. In general, we find suppressed features in $G$ that escalate with
both the chemical potential and ${\cal E}_0$, as depicted in
Figs.~\ref{fig:3}a-c. Besides, there are dip structures in these figures,
which can be understood as the formation of QBS's due to the
time-modulated field.~\cite{chu96,bag92,tan96,tan97}
These QBS's, formed at energies near the subband thresholds, temporarily
trap conduction electrons and give rise to drops in $G$. However, a
trapped electron can also be excited out of the QBS, resulting in a $G$
reduction smaller than unity, $|\Delta G|< 1$, in units of
$2e^2/h$.~\cite{chu96} In contrast, impurity-induced dips result from
purely elastic scattering and thus have a $G$ reduction
$|\Delta G|=1$.~\cite{chu89,bag90}
As ${\cal E}_{0}$ is increased, these dip structures become deeper and
shift slightly toward lower energies. Our results demonstrate the
manifestation of a new, time-dependent electric-field-induced QBS formed
from transitions due to coherent inelastic scattering.
Figures~\ref{fig:3}a-c share common types of dip structures. First, dips
are found at around $X$ = 1.6, 2.6, and 3.6, that is, at $\Delta X=0.4$
beneath a subband bottom. These dips arise from processes in which an
electron in the $N$th subband, at energy $(N+1)-\Delta X$, absorbs an
energy $\hbar\omega$ and becomes bound in the QBS just beneath the
threshold of the $(N+1)$th subband; in other words, the process is
$(\Delta n,\Delta m) = (+1,+1)$. Second, the small dips at around $X$ =
2.4 and 3.4 are attributed to processes in which an electron in the $N$th
subband, at energy $N+\Delta X$, gives away $\hbar\omega$ and is trapped
temporarily at the $(N+1)$th subband bottom; this process corresponds to
$(\Delta n,\Delta m) = (+1,-1)$. Third, the dips at around $X$ = 2.2 and
3.2 are contributed by two different kinds of transitions, namely the
(+2,+2) and (-1,+3) processes. In the former, an electron in the $N$th
subband with energy $(N+2)-2\Delta X$ absorbs $2\hbar\omega$ and reaches
the $(N+2)$th subband bottom; in the latter, an electron makes a
transition from the $N$th subband, with energy $(N-1)+3\Delta X$, to the
$(N-1)$th subband bottom. We point out that all of the above are
intersubband and intersideband transitions.
Moreover, in Figs.~\ref{fig:3}b and c, there are dip structures at around $X$
= 3.0 and 4.0, which correspond to (+2,0) transitions. These structures are
due to intersubband and intrasideband transitions, namely that the electron
energy remains unchanged. Besides, in Fig.~\ref{fig:3}c, there are dips at
around $X$ = 1.8, 2.8, and 3.8, which correspond to (0,-2) transitions.
These multi-photon transitions manifest themselves only at larger applied
field intensities.
In Fig.~\ref{fig:4}, the field amplitudes are chosen to be the same as in
Fig.~\ref{fig:3}. The angular frequency is chosen to be $\omega=0.042$,
whose energy quantum corresponds to an interval
$\Delta X=\omega/\Delta\varepsilon =0.6$.
There are dip structures in these figures
associated with the formation of time-dependent field-induced
QBS's.~\cite{chu96,bag92,tan96,tan97}
Figures~\ref{fig:4}a-c share common types of dip structures. First, dips
are found at around $X$ = 1.4, 2.4, and 3.4, associated with (+1,+1)
processes: an electron in the $N$th subband, at energy $(N+1)-\Delta X$,
absorbs an energy $\hbar\omega$ and becomes bound in a QBS in the
$(N+1)$th subband. Second, the small dips at around $X$ = 2.6 and 3.6,
associated with (+1,-1) processes, correspond to an electron in the $N$th
subband with energy $N+\Delta X$ giving away $\hbar\omega$ to be trapped
temporarily at the $(N+1)$th subband bottom. Third, the dips at around
$X$ = 1.8, 2.8, and 3.8, associated with (+2,+2) processes, correspond to
an electron in the $N$th subband, at energy $(N+2)-2\Delta X$, absorbing
$2\hbar\omega$ and reaching the $(N+2)$th subband bottom.
Moreover, in Figs.~\ref{fig:4}b and c, dip structures at around
$X$ = 3.0 and 4.0 are identified to be (+2,0) intersubband and
intrasideband transitions. Furthermore, in Fig.~\ref{fig:4}c, there are
dips at around $X$ = 2.2 and 3.2, which are associated with (0,-2) transitions.
As mentioned in our previous work,~\cite{tan99} to observe the above
predicted effects, the experimental setup needs to fulfill two requirements.
First, the bolometric
effect due to the absorption of photons in the QPC's end-electrodes has to
be suppressed or totally eliminated. Recent experiments show that the
transport characteristics are masked by the bolometric effect when the
entire QPC, including the end-electrodes, is exposed to the incident
electromagnetic field.~\cite{ala98} Second, the length $L$ of the
time-dependent field region has to be shorter than the wavelength of the
incident field. The purpose is to increase the coupling between the
electrons and the photons by breaking the longitudinal translational
invariance. Indeed, it has been pointed out recently by Yakubo {\it et
al.\/}~\cite{yak96} that the coupling between the photon field and the
conduction electrons can be much enhanced when either the electrons are
confined or the time-dependent field has a localized profile.
Thus the QPC needs to be in the near-field regime of the
time-dependent field.
To avoid the bolometric effect, we suggest applying an ac field to the
split gates of the QPC instead of shining an electromagnetic wave upon the
entire QPC. The split gates are negatively biased with respect to a common
ground and are made of superconducting materials, with superconducting
wires connecting them to an ac-signal generator. Such a generator can be
realized with an IMPATT diode, which has been demonstrated to cover the
complete millimeter-wave range (30-300 GHz).~\cite{bha84}
This proposed experimental setup is expected to generate a transversely
polarized electric field only in the narrow-constriction region while
shielding the two end-electrodes from the time-modulated field. Although
in this work the time-modulated region covers only part of the narrow
constriction, we believe the two situations will exhibit similar features.
Given the availability of millimeter wave sources,~\cite{bha84} the
suggested experimental setup would be manageable by the present
nanotechnology. The features reported in this work, however, are
not limited to millimeter waves.
\section{Conclusion}
A generalized scattering-matrix method has been developed for investigating
coherent quantum transport in narrow constrictions with a transversely
polarized time-dependent electric field. This method allows us to solve
the time-dependent Schr\"odinger equation nonperturbatively in a numerical
sense. Since the energy conservation law is violated in such a
time-modulated system, the conventional transfer-matrix technique is
inapplicable. Using the present numerical method, not only can the
transmission and reflection probabilities of the system be calculated, but
all the subband and sideband states can also be obtained.
The scattering processes due to the time-dependent external field are
both inelastic and coherent. Since this field is transversely polarized,
electrons can make both intersubband and intersideband transitions. This
increases the computational complexity, but also gives rise to richer
features.
Different dip structures
associated with different intersubband and intersideband transitions to
the vicinity of a subband bottom are found.
These dip structures can be understood as
the formation of a QBS at energy near a subband bottom due to its singular
DOS.~\cite{chu96}
Moreover, owing to the tunability of the frequency and intensity of the
field, the proposed configuration can be applied as a high-frequency
detector.
We expect that these dip structures could also be found
when the QPC has a varying width. We hope that the present method will
be utilized to study new transport phenomena in mesoscopic nanostructures.
\begin{acknowledgments}
The authors would like to thank the National Science Council of the
Republic of China for financially supporting this research under
Contract No. NSC88-2112-M-009-028. Computational facilities
supported by the National Center for High-performance Computing are
gratefully acknowledged.
\end{acknowledgments}
|
cs/0603107
|
\section{The false algorithm, simple version}\label{sec:false-algorithm}
Assume Alice wants to share a secret $s$, which we assume for
simplicity\footnote{This assumption might be relaxed; the use of an
infinite set is for expository reasons, see Section~\ref{sec:requirements}.}
is a non-zero rational number $s=p/q\in {\mathbb Q}^{\star}$. For example, $s$ could be the key of a
symmetric key protocol, a password
or even a complete message such as a pair of
coordinates in a map or a time.\par
Alice picks another random rational $t$ and writes $v=(s,t)$ for
the corresponding point in ${\mathbb Q}^{2}$.\par
She chooses a random transformation $A\in GL_{2}({\mathbb Q})$ in the
linear group of ${\mathbb Q}^{2}$ and computes $v_{1}=v\cdot A$. Alice sends $v_{1}$
to Bob.\par
Bob picks another random transformation $B\in GL_{2}({\mathbb Q})$ and
computes $v_{2}=v_{1}\cdot B$, and sends $v_{2}$ back to Alice. Notice
that $v_{1}$ gives no information to Bob or an eavesdropper (Eve) about $s$,
because $t$ is
random and $v_{1}$ can be \emph{any} point in ${\mathbb Q}^{2}$,
depending on $t$ and $A$, which
are both unknown to both Bob and Eve. For a similar reason, the knowledge of $v_{1}$
and $v_{2}$ gives no
useful information about $B$.\par
Alice now computes $v_{3}=v_{2}\cdot A^{-1}$ and sends $v_{3}$ back
to Bob. Again, the knowledge of $v_{1},$
$v_{2}$ and $v_{3}$ is useless in order to retrieve the original $v$.\par
Finally, Bob computes $v_{4}=v_{3}\cdot B^{-1}$.\par
If only $v_{4}=v$...!
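The failure hinted at by the exclamation can be run explicitly over exact rationals. The sketch below is ours, with arbitrary illustrative matrices: it shows that $v_4=v\cdot ABA^{-1}B^{-1}\neq v$ precisely because $A$ and $B$ do not commute, while a scalar (hence central) $B$ would make the round trip succeed.

```python
from fractions import Fraction as F

def mat_vec(v, M):
    """Row vector times 2x2 matrix, v . M, with exact rationals."""
    return (v[0] * M[0][0] + v[1] * M[1][0],
            v[0] * M[0][1] + v[1] * M[1][1])

def mat_inv(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    assert det != 0
    return ((M[1][1] / det, -M[0][1] / det),
            (-M[1][0] / det, M[0][0] / det))

# Alice's secret s, random pad t, random A; Bob's random B (illustrative)
s, t = F(7, 3), F(2, 5)
v = (s, t)
A = ((F(1), F(2)), (F(3), F(5)))   # det = -1, invertible
B = ((F(2), F(1)), (F(1), F(1)))   # det =  1, invertible

v1 = mat_vec(v, A)             # Alice -> Bob
v2 = mat_vec(v1, B)            # Bob -> Alice
v3 = mat_vec(v2, mat_inv(A))   # Alice -> Bob
v4 = mat_vec(v3, mat_inv(B))   # Bob "decrypts": v . A B A^-1 B^-1
```

With these choices `v4 != v`; replacing $B$ by a scalar matrix, which commutes with $A$, restores `v4 == v`.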
\section{The protocol ``would be'' safe}
Let us assume the above algorithm ends up with $v_{4}=v$, and let us
prove its safety under this condition.
\begin{theorem}
The above method of communication is information-theoretically safe,
assuming $v$, $A$
and $B$ (and their inverses, obviously) are kept secret. That is,
the knowledge of the whole communication gives no information on the message.
\end{theorem}
\begin{proof}
We only need to show that an eavesdropper who knows the whole
communication has no clue about what $s$ may be. In other words, it is
enough to show that for any rational $s^{\prime}$, there exist another
rational number $t^{\prime}$ and matrices $A'$, $B'$ producing the same
communication between Alice and Bob (i.e., the same $v_{1}, v_{2}$ and
$v_{3}$). But this is trivial.
\end{proof}
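In the $GL_{2}({\mathbb Q})$ setting the "trivial" step can be made constructive: for any candidate secret $s'$, choose $t'$ so that $(s',t')$ and $v_{3}$ are linearly independent, let $A'$ be the unique matrix sending $(s',t')\mapsto v_{1}$ and $v_{3}\mapsto v_{2}$, and take any $B'$ with $v_{1}\cdot B'=v_{2}$ (for instance $B$ itself); then the transcript $(v_{1},v_{2},v_{3})$ is reproduced. A sketch with exact rationals (helper names and sample values are ours):

```python
from fractions import Fraction as F

def mat_vec(v, M):
    return (v[0] * M[0][0] + v[1] * M[1][0],
            v[0] * M[0][1] + v[1] * M[1][1])

def mat_mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mat_inv(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    assert det != 0
    return ((M[1][1] / det, -M[0][1] / det),
            (-M[1][0] / det, M[0][0] / det))

def explain(v1, v2, v3, s_alt, t_alt):
    """Matrix A' with (s',t') . A' = v1 and v3 . A' = v2: stack the
    constraints as rows R and the targets as rows W, then A' = R^-1 W.
    Note v3 . A' = v2 is equivalent to v2 . A'^-1 = v3, so the whole
    transcript (v1, v2, v3) is explained by the secret s'."""
    R = ((s_alt, t_alt), v3)
    W = (v1, v2)
    return mat_mul(mat_inv(R), W)

# a genuine run (illustrative choices) ...
s, t = F(7, 3), F(2, 5)
A = ((F(1), F(2)), (F(3), F(5)))
B = ((F(2), F(1)), (F(1), F(1)))
v1 = mat_vec((s, t), A)
v2 = mat_vec(v1, B)
v3 = mat_vec(v2, mat_inv(A))

# ... and a fake secret s' = 1 that explains the same transcript
A_alt = explain(v1, v2, v3, F(1), F(1))
```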
\textbf{Remark:\/} The algorithm described above obviously does not work because
$GL_{2}({\mathbb Q})$ is non-commutative (in general, the linear group is noncommutative for dimension greater than $1$).
\section{What is needed?}\label{sec:requirements}
A natural question comes to mind: what are the necessary conditions for a
group action on a set for the above algorithm to provide a valid
system? What we used above is:
\begin{enumerate}
\item A set $S$ (either finite or infinite) (the rational plane in the
example).
\item An action $G\times S^{2}\rightarrow S^{2}$ of a \emph{commutative} group $G$
on $S^{2}$ (in the example, the linear group of the plane, which is
\emph{not} commutative). Commutativity guarantees that, after the above
protocol is carried out completely, one always recovers the original
message.
\item\label{cond:big-enough} Conditions on the action. At least the following ones, but more might be needed:
\begin{itemize}
\item Given $(s,t)\in S^{2}$ and $g\in G$, for any $s^{\prime}\in S$
there are $t^{\prime}\in S$ and $g'\in G$ such that $g\cdot (s,t)
= g^{\prime}\cdot (s^{\prime},t^{\prime})$.
\item For any $(s,t)\in S^{2}$ and $A, B\in G$, there are $(s^{\prime},t^{\prime})$ and $A^{\prime},B^{\prime}\in G$ for which the sequences in the above algorithm are the same:
\begin{align*}
[(s,t)\cdot &A, (s,t)\cdot A \cdot B, (s,t) \cdot A \cdot B \cdot
A^{-1}] = \\ &[(s^{\prime},t^{\prime})\cdot A^{\prime},
(s^{\prime},t^{\prime})\cdot A^{\prime} \cdot B^{\prime},
(s^{\prime},t^{\prime})\cdot A^{\prime} \cdot B^{\prime}\cdot
(A^{\prime})^{-1}].
\end{align*}
\end{itemize}
\end{enumerate}
In fact, we do not need exactly an action of $G$ on $S^{2}$.
\begin{definition}
Let $G$ be a (not necessarily commutative) group acting on a set
$T$. We say that $t\in T$ is \emph{comm-fixed} if $g\cdot t = t$ for
any $g\in \text{Comm}(G)$ (the commutator subgroup of $G$). A subset
$S\subset T$ is \emph{comm-fixed} if
any $s\in S$ is \emph{comm-fixed}.
\end{definition}
It is clear that a subset $S\subset T$ is comm-fixed if and only if, for any
$s\in S$ and any $g,h\in G$, one has $s=h^{-1}g^{-1}hg\cdot s$. From
this, it follows that we do not need exactly an action of a
commutative group on $S^{2}$ but
an action of a (not necessarily commutative) group on a set $X\supset S^{2}$ for which $S^{2}$ is
comm-fixed and which satisfies, at least, condition
(\ref{cond:big-enough}) above.\par
We would like to prove two results; the first one seems relatively easy,
while we have no clue (but are somewhat pessimistic) about the second
one:
\begin{conjecture}\label{con:sufficiency}
With the above conditions on $X$, $S^{2}$ and $G$, the protocol
described in section \ref{sec:false-algorithm}
is information-theoretically safe.
\end{conjecture}
\begin{question}\label{que:existence}
Do there exist $X,S$ and a group $G$ acting on $X$ for
which $S^{2}\subset X$ is comm-fixed and such that the stated
conditions hold?
\end{question}
\textbf{Remark:\/} it is obvious that $S^{2}$ can be replaced by any
set of the same cardinality.
\end{document}
|
1611.07590
|
\section{I. Pair representations}
We present details on the group theory calculation of
the pair representations summarized in Table I of the main text.
\subsection{Characters of induced representations}
Starting from induced representations
Eqs.~(3) and (4) in the main text,
we define characters for the Cooper pair symmetry group
$G_\bold{k}\cup {\cal I} G_\bold{k}=\{{\cal E},\Sigma_z,{\cal I},{\cal I}\Sigma_z\}$ where
$G_\bold{k}=\{{\cal E},\Sigma_z\}$ is the little group.
Anticipating two-fold degeneracy of the single-particle states (as confirmed by Herring's criterion),
all single-particle co-representations
$\Gamma_\bold{k}$ are two-dimensional, i.e.~$\chi\left(P^-({\cal E})\right)=\chi^2\left(\Gamma_\bold{k}({\cal E})\right)=4$ and
$\chi\left(P^-({\cal I})\right)=-\chi\left(\Gamma_\bold{k}({\cal E})\right)=-2$.
Employing the multiplication rule for non-symmorphic group elements,
$(g_1,\bold{t}_1)(g_2,\bold{t}_2)=(g_1g_2,\bold{t}_1+g_1\bold{t}_2)$,
one finds
$[{\cal I}\Sigma_z]^2
=(\sigma_z^2,\bold{t}_i-\bold{t}_\sigma-\sigma_z(\bold{t}_i-\bold{t}_\sigma))$
and
$\chi(P^-({\cal I}\Sigma_z))
=\chi(\Gamma_\bold{k}((E,\bold{t}_i-\bold{t}_\sigma-\sigma_z(\bold{t}_i-\bold{t}_\sigma))))
=2e^{2ik_z \bold{e}_z\cdot(\bold{t}_\sigma-\bold{t}_i)}$,
or
$\rho=e^{2ik_z\bold{e}_z\cdot (\bold{t}_\sigma-\bold{t}_i)}$.
Similarly,
${\cal I}\Sigma_z{\cal I}
=(E,\bold{t}_i-\sigma_z \bold{t}_i-2\bold{t}_\sigma)\Sigma_z$ and
$\chi(P^-(\Sigma_z))
=e^{-i\bold{k}\cdot (2\bold{t}_\sigma+\sigma_z\bold{t}_i-\bold{t}_i)}\chi^2(\Gamma_\bold{k}(\Sigma_z))$.
\subsection{Herring's criterion}
Degeneracies induced by $\Theta$ are conveniently detected by Herring's criterion
from summing characters of double-valued representations $\gamma_\bold{k}$ of the little group~\cite{App_herring,App_lax},
$Z(\gamma_\bold{k})\equiv\sum_{ m\in G_\bold{k}} \chi\left(\gamma_\bold{k}([{\cal I}\Theta m]^2)\right)$.
For centrosymmetric crystals with (generalized) time-reversal symmetry,
the two possible outcomes $Z=0,1$ indicate the presence of Kramers or
band-sticking degeneracies, respectively.
To apply Herring's criterion in our case, we need to sum two characters of the one-dimensional
single-particle representations $\gamma_\bold{k}$. That is,
$\chi\left(\gamma_\bold{k}([{\cal I}\Theta {\cal E}]^2)\right)=
-1$,
and
$\chi\left(\gamma_\bold{k}([{\cal I}\Theta \Sigma_z]^2)\right)=
\chi\left(\gamma_\bold{k}(E,\sigma_z(\bold{t}_\theta+\bold{t}_\sigma)-\bold{t}_\theta-\bold{t}_\sigma)\right)
=e^{i\bold{k}\cdot(\sigma_z[\bold{t}_\theta+\bold{t}_\sigma]-\bold{t}_\theta-\bold{t}_\sigma)}
=e^{2ik_z \bold{e}_z\cdot(\bold{t}_\theta+\bold{t}_\sigma)}
$.
Here we used the multiplication rule for the magnetic group $\theta g_1 \theta g_2=-g_1g_2$, to find
$[{\cal I}\Theta {\cal E}]^2=-{\cal E}$ and
$[{\cal I}\Theta \Sigma_z]^2
=-(\sigma_z^2,\sigma_z(\bold{t}_\theta+\bold{t}_\sigma-\bold{t}_i)-\bold{t}_\theta-\bold{t}_\sigma+\bold{t}_i)$.
\subsection{Co-representations}
The only character of the induced representation that we need to explicitly calculate is
$\chi(P^-(\Sigma_z))=e^{-i\bold{k}\cdot(2\bold{t}_\sigma+\sigma_z\bold{t}_i-\bold{t}_i)}\chi^2(\Gamma_\bold{k}(\Sigma_z))$
fixing the mirror eigenvalue of the Cooper pairs.
To this end we first need to specify co-representations for the two types of degeneracies.
Notice that the one-dimensional single-particle representations satisfy
$\gamma_\bold{k}(\Sigma_z)=\pm i e^{i\bold{k}\cdot (\sigma_z\bold{t}_\sigma+\bold{t}_\sigma)/2}$
and
$\gamma_\bold{k}({\cal I}\Sigma_z{\cal I})=\pm i e^{-i\bold{k}\cdot (\sigma_z\bold{t}_\sigma+\bold{t}_\sigma)/2}$.
Therefore
$\Gamma_\bold{k}(\Sigma_z)=\pm e^{i\bold{k}\cdot (\sigma_z\bold{t}_\sigma+\bold{t}_\sigma)/2}\left(\begin{smallmatrix}
i & \\ & -i \end{smallmatrix}\right)$
in the case of a Kramers degeneracy and
$\Gamma_\bold{k}(\Sigma_z)=\pm e^{i\bold{k}\cdot (\sigma_z\bold{t}_\sigma+\bold{t}_\sigma)/2}\left(\begin{smallmatrix}
i & \\ & i \end{smallmatrix}\right)$ in the case of band sticking. That is, Kramers degeneracies lead to pairs of complex conjugate
representations while band sticking doubles the representations.
This implies that
$\chi(P^-(\Sigma_z))=0$ and $\chi(P^-(\Sigma_z))=-4 e^{-2ik_z\bold{e}_z\cdot(\bold{t}_\sigma-\bold{t}_i) }$ for a Kramers degeneracy
and band-sticking, respectively, or
$\chi(P^-(\Sigma_z))= -4 c_d \rho$
as stated in the main text.
\section{II. Clifford algebra extensions}
We here discuss in detail the Clifford algebra extension method and verify the topological stability of line nodes
of odd parity order parameters.
Let us recall the criterion for band-sticking ($c_d=1$), respectively Kramers degeneracy ($c_d=0$)
given in the main text,
\begin{align}
\label{Appeq:6}
(-1)^{c_d}
&=
e^{2ik_z\bold{e}_z\cdot (\bold{t}_\theta+\bold{t}_\sigma-\bold{t}_i)}.
\end{align}
Only the former allows for line nodes of odd-parity superconductors; to
study the topological stability of this case we
may thus concentrate on the Brillouin zone face $k_z=\pi$ and
translations $\bold{t}_\theta,\bold{t}_\sigma,\bold{t}_i\in\{0,\bold{t}_\perp\}$.
To simplify the presentation we first assume that $\bold{t}_i=0$
and then discuss what changes for $\bold{t}_i=\bold{t}_\perp$.
Topological arguments showing the absence of nodal line odd parity superconductors for $\bold{t}_\theta,\bold{t}_\sigma=0$
have been introduced in Ref.~\cite{App_koba}. Moreover, topological stability
of the $A_u$ line node for $\bold{t}_\theta=0$, $\bold{t}_\sigma=\bold{t}_\perp$ has been discussed
in the recent work Ref.~\cite{App_sato}.
We will first recall the calculation of Ref.~\cite{App_sato} demonstrating topological stability of the
$A_u$ line node protected by a mirror$^*$ symmetry,
then show that the latter
is destabilized by antiferromagnetic order
$\bold{t}_\theta=\bold{t}_\perp$, $\bold{t}_\sigma=\bold{t}_\perp$
but reappears as a $B_u$ line node
once the mirror$^*$ symmetry is reduced to a conventional
mirror symmetry
$\bold{t}_\theta=\bold{t}_\perp$, $\bold{t}_\sigma=0$.
In terms of the representations introduced in the main text, this corresponds to passing from
$\Pi^-_1\to\Pi^-_0\to\Pi_1^+$.
{\it (Anti-)commutation relations:---}Line nodes in odd parity superconductors can only be encountered on
Brillouin zone faces. Restricting to the vicinity of the $k_z=\pi$ zone face one considers
the ``massive" Dirac Hamiltonian introduced in the main text,
$H+H_\parallel=v_z (k_z-\pi) \gamma_1 +v_\parallel k_\parallel \gamma_0$.
Here $\gamma_0$, $\gamma_1$ are generators of a real Clifford algebra and
satisfy the (anti-)commutation relations
\begin{align*}
&\{\gamma_0,\gamma_1\}=0,
\,\,\,
[J,\gamma_0]=0,
\,\,\,
[J,\gamma_1]=0.
\end{align*}
$J$ represents the imaginary unit $i$, and we recall that
$\gamma_0, \gamma_1$ are positive, i.e.~$\gamma_0^2=\gamma_1^2=1$, and
$J$ is negative, i.e.~$J^2=-1$.
The fundamental symmetries to be included into the algebra and discussed in the main text include
combinations of particle-hole symmetry ${\cal C}$, (generalized) time-reversal symmetry $\Theta$ and inversion symmetry
${\cal I}$~\cite{App_koba1}.
The latter satisfy the (anti-)commutation relations ($i=0,1$)
\begin{align}
\{\Theta,J\} &=0,
\,\,\,
\{\Theta,\gamma_i\}=0,
\,\,\,
[\Theta,{\cal C}]=0,
\,\,\,
\{{\cal C},J\}=0,
\\
[{\cal C},\gamma_i]&=0,
\,\,\,
\{{\cal C},{\cal I}\}=0,
\,\,\,
[{\cal I},J]=0,
\,\,\,
\{{\cal I}, \gamma_i\}=0,
\end{align}
and we recall that ${\cal C}$ and ${\cal I}$ are both positive, i.e.~${\cal C}^2=1$ and ${\cal I}^2=1$.
Anti-commutation between particle-hole and
inversion symmetry accounts for the fact that we are here considering odd-parity superconductors.
The commutation relation between $\Theta$ and ${\cal I}$ and the sign of $\Theta^2$
both depend on the presence or absence of antiferromagnetic order, i.e.
\begin{align}
\,\,\,
[\Theta,{\cal I}]&=0, \quad \Theta^2=-1, \quad \text{(PM)}
\\
\{\Theta,{\cal I}\}&=0, \quad \Theta^2=1, \quad \text{(AF)},
\end{align}
where in the second line we restricted ourselves to the case of interest $\bold{t}_\theta=\bold{t}_\perp$
and the Brillouin zone face $k_z=\pi$.
Finally, the mirror symmetry $\Sigma_z$ is negative, i.e. ~$\Sigma_z^2=-1$, and
has the following commutation relations
\begin{align}
\{ \Sigma_z,\gamma_1\}=0,
\,\,
[\Sigma_z,\gamma_0]=0,
\,\,
[\Sigma_z,J]=0,
\,\,
{\cal C}\Sigma_z=\eta_{\cal C} \Sigma_z{\cal C}.
\end{align}
In the last equation $\eta_{\cal C}=+/-$ applies for pairs with positive/negative
mirror eigenvalues, i.e.~for order parameters from representations $B_u$ and $A_u$, respectively.
Commutation relations involving $\Sigma_z$ depend on the absence/presence of magnetic order and
of a non-primitive translation.
That is, for the cases of interest (and concentrating here on $\bold{t}_i=0$)
\begin{align}
\label{AppISigma}
[\Theta, \Sigma_z]&=0 \,\, \text{(PM)},
\,\,\,\,\,
[{\cal I},\Sigma_z]=0 \,\, \text{(mirror)},
\\
\{\Theta, \Sigma_z\}
&=0 \,\, \text{(AF)},
\,\,\,\,\,
\{{\cal I},\Sigma_z \}=0 \,\, \text{(mirror$^*$)},
\end{align}
where the second line applies for $\bold{t}_\theta,\bold{t}_\sigma=\bold{t}_\perp$,
and we again used that for our purposes $k_z=\pi$.
In the absence of time-reversal and mirror plane symmetries,
the Clifford algebra accounting
for all symmetries leaving the line node invariant reads
\begin{align}
\label{appeq:1}
{\cal C}l_{2,2}=\{\gamma_0,\gamma_1,\gamma_2\equiv J{\cal C}{\cal I},\gamma_3\equiv {\cal C}{\cal I} \},
\end{align}
where $2,2$ refers to the two positive ($\gamma_0,\gamma_1$) and two negative ($\gamma_2,\gamma_3$) elements.
{\it Extension problem:---}Eq.~\eqref{appeq:1} is the starting algebra to which
then generators $\gamma_4$, $\gamma_5$ representing $\Theta$ and $\Sigma_z$, respectively,
are added. As detailed below, we will encounter the following three situations:
(i) the elements $\gamma_0,...,\gamma_5$ can all be chosen to mutually
anti-commute and define a real Clifford algebra
${\cal C}l_{p,6-p}$, where $p$ ($6-p$) refers to the number of negative (positive) generators.
(ii) five out of the six elements mutually anti-commute, but {\it commute} with the remaining
{\it positive} element $\bar{\gamma}_i$. The latter effectively decouples,
reducing the real Clifford algebra to ${\cal C}l_{p,5-p}$.
(iii) a situation as in (ii) with {\it negative} $\bar{\gamma}_i$, which
defines a complex unit and the resulting complex Clifford algebra of
anti-commuting elements
${\cal C}l_5$.
The extension problem then compares symmetry groups of the
Clifford algebra obtained after $\gamma_0$ has been removed with that of the full algebra.
The latter is a subset of the former and their quotient defines
the manifold of mass terms,
i.e.~the classifying space ${\cal Q}$.
Removing $\gamma_0$ from the algebra corresponds to passing from
${\cal C}l_{p,q}$ (with $\gamma_0$) to ${\cal C}l_{p,q-1}$ (without $\gamma_0$) in the
real cases, and correspondingly from ${\cal C}l_5$ to ${\cal C}l_4$ in the complex case. The
invariance groups of the corresponding algebras and their quotients define the
(symmetric) classifying space ${\cal Q}$ of the extension problem. Finally,
classifying spaces $R_i$ and $C_i$ for all real and complex extension problems, respectively,
as well as their homotopy groups can be looked up in Ref.~\cite{App_Kitaev}.
As discussed in the main text, we will find that in our context
only the complex extension problem ${\cal C}l_4\to {\cal C}l_5$ has a
topologically nontrivial classifying space, $\pi_0({\cal Q})={\mathbb Z}$. That is,
the presence of a commuting negative symmetry-element $\bar{\gamma}_i$
is equivalent to
the topological protection of a nodal-line superconductor
and should be related to the band sticking.
\begin{table}[t!]
\begin{tabular}{p{1.4cm}|p{.5cm}p{.5cm}p{.5cm}p{.5cm}p{.5cm}p{.5cm}|p{.5cm}p{.56cm}}
\hline
\hline
& $\Theta$ & ${\cal C}$ & $J$ & ${\cal I}$ & $\gamma_1$ & $\gamma_0$ & $\,\,\,\Sigma_z$
\\
\hline
$\gamma_0$& - & + & + & - & - & + & $\,\,\,$+
\\
$\gamma_1$& - & + & + & - & + & - & $\,\,\,$-
\\
$\gamma_2\equiv J{\cal C}{\cal I}$&- & +& - & - & - & - &$-\eta_{\cal C}$
\\
$\gamma_3\equiv {\cal C}{\cal I}$&+ & - & - & - & - & - &$-\eta_{\cal C}$
\\
\hline
$\gamma_4\equiv \Theta{\cal C}$& +& + & + & - & - & - & $\,$ $\eta_{\cal C}$
\\
\hline
\hline
\end{tabular}
\caption{Commutation relations applying for a paramagnet with twofold screw axis ($\Pi^-_1)$.
$+$ and $-$ refer to commutation or anti-commutation, and $\eta_{\cal C}=\pm$ to positive and negative
mirror eigenvalues, i.e.~order parameters from representations $B_u$ and $A_u$, respectively.
\label{tableApp:1}
}
\end{table}
{\it ``Paramagnet with screw axis":---}From commutation relations
summarized in Table~\ref{tableApp:1}, one notices that the negative element
$\Theta{\cal C}$ anti-commutes with all Clifford algebra elements \eqref{appeq:1}.
That is,
the time-reversal invariant system defines the Clifford algebra
${\cal C}l_{3,2}=\{\gamma_0 ,\gamma_1, \gamma_2\equiv J{\cal C}{\cal I}, \gamma_3\equiv{\cal C}{\cal I},\gamma_4\equiv\Theta{\cal C} \}$.
Next, we include the mirror symmetry resulting from a twofold screw axis, i.e.~$\bold{t}_\sigma=\bold{t}_\perp$.
Concentrating on order parameters from $B_u$ with positive mirror eigenvalues, one finds from
Table~\ref{tableApp:1} that the negative element
$\gamma_5\equiv J\Sigma_z\gamma_1$ anti-commutes with all other Clifford algebra generators.
Accounting for all symmetry elements, one thus arrives at the Clifford algebra
${\cal C}l_{4,2}=\{
\gamma_0 ,\gamma_1, \gamma_2\equiv J{\cal C}{\cal I}, \gamma_3\equiv{\cal C}{\cal I},\gamma_4\equiv\Theta{\cal C},
\gamma_5\equiv J\Sigma_z\gamma_1\}$.
The extension problem then reads ${\cal C}l_{4,1}\to {\cal C}l_{4,2}$ with classifying space
$R_5$. The trivial topology $\pi_0(R_5)=0$ confirms the absence of protected line nodes for odd-parity
superconductors from $B_u$, as also found in the group theory analysis.
For order parameters from $A_u$ with negative mirror eigenvalue, one
finds that the negative element $\gamma_5\equiv{\cal C}\Theta\Sigma_z\gamma_1$ {\it commutes} with all Clifford algebra generators.
This element generates the algebra ${\cal C}l_{1,0}=\{\gamma_5\}$ which
introduces the complex unit $i$ into the Clifford algebra accounting for all symmetries. That is, the latter is the complex
algebra ${\cal C}l_5=\{\gamma_0,\gamma_1,\gamma_2,\gamma_3,\gamma_4 \}\otimes {\cal C}l_{1,0}$, and
the extension problem reads ${\cal C}l_4\to{\cal C}l_5$ with classifying space
$C_0$ and non-trivial topology $\pi_0(C_0)={\mathbb Z}$.
The latter shows the topological protection of line nodes for odd-parity superconductors
from $A_u$ discussed in the second entry of Table~III in the main text.
The above example was first discussed in Ref.~\cite{App_sato}.
We next discuss how the results change
in the presence of antiferromagnetic order corresponding to the cases summarized
in the third and fourth entries of Table~III in the main text.
{\it ``Antiferromagnet with screw axis":---}From commutation relations
summarized in Table~\ref{tableApp:2}, one notices that the
negative element $\gamma_4\equiv\Theta{\cal C}J$ anti-commutes with all other Clifford algebra elements.
Upon inclusion of a (generalized) time-reversal symmetry, one thus arrives at the Clifford algebra
${\cal C}l_{3,2}=\{
\gamma_0,\gamma_1,
\gamma_2\equiv J{\cal C}{\cal I},
\gamma_3\equiv{\cal C}{\cal I},
\gamma_4\equiv \Theta{\cal C}J\}$.
Concentrating first on order parameters from $B_u$, it can be verified that
the {\it positive} element
$\gamma_5\equiv{\cal C}\Theta\Sigma_z\gamma_1$ commutes with all other Clifford algebra generators.
The latter defines the algebra ${\cal C}l_{0,1}=\{\gamma_5\}$
and the conjunction of all symmetry elements defines the algebra
${\cal C}l_{3,2}\otimes{\cal C}l_{0,1}$. The extension problem is not modified by the second tensor component,
i.e.~is given by ${\cal C}l_{3,1}\to {\cal C}l_{3,2}$ with a topologically trivial classifying space
$R_6$ with $\pi_0(R_6)=0$.
For order parameters from $A_u$, one finds that the positive element $\gamma_5\equiv\Sigma_z\gamma_1$
anti-commutes with all Clifford algebra generators, defining
the algebra
${\cal C}l_{3,3}=\{
\gamma_0,\gamma_1,
\gamma_2\equiv J{\cal C}{\cal I},
\gamma_3\equiv {\cal C}{\cal I},
\gamma_4\equiv \Theta{\cal C}J,
\gamma_5\equiv \Sigma_z\gamma_1\}$.
The extension problem then reads ${\cal C}l_{3,2}\to {\cal C}l_{3,3}$ with again a topologically trivial classifying space
$R_7$ with $\pi_0(R_7)=0$. This shows that antiferromagnetic order destabilizes the line node encountered for
crystals with twofold screw axes, as also found from our group theory analysis summarized in the third entry of Table~III
of the main text.
\begin{table}[t!]
\begin{tabular}{p{1.4cm}|p{.5cm}p{.5cm}p{.5cm}p{.5cm}p{.5cm}p{.5cm}|p{.5cm}p{.56cm}}
\hline
\hline
& $\Theta$ & ${\cal C}$ & $J$ & ${\cal I}$ & $\gamma_1$ & $\gamma_0$ & $\,\,\,\Sigma_z$
\\
\hline
$\gamma_0$& - & + & + & - & - & + & $\,\,\,$+
\\
$\gamma_1$& - & + & + & - & + & - & $\,\,\,$-
\\
$\gamma_2\equiv J{\cal C}{\cal I}$&+ & +& - & - & - & - &$-\eta_{\cal C}$
\\
$\gamma_3\equiv {\cal C}{\cal I}$&- & - & - & - & - & - &$-\eta_{\cal C}$
\\
\hline
$\gamma_4\equiv \Theta{\cal C}J$& -& - & + & + & - & - & $-\eta_{\cal C}$
\\
\hline
\hline
\end{tabular}
\caption{Commutation relations applying for coexistence with
antiferromagnetic order $\bold{t}_\theta=\bold{t}_\perp$ and a crystal with
twofold screw axis giving rise to $\Sigma_z$ with $\bold{t}_\sigma=\bold{t}_\perp$ ($\Pi^-_0$).
$+$ and $-$ refer to commutation or anti-commutation, and $\eta_{\cal C}=\pm$ to positive and negative
mirror eigenvalues, i.e.~order parameters from representations $B_u$ and $A_u$, respectively.
\label{tableApp:2}
}
\end{table}
\begin{table}[b!]
\begin{tabular}{p{1.4cm}|p{.5cm}p{.5cm}p{.5cm}p{.5cm}p{.5cm}p{.5cm}|p{.5cm}p{.56cm}}
\hline
\hline
& $\Theta$ & ${\cal C}$ & $J$ & ${\cal I}$ & $\gamma_1$ & $\gamma_0$ & $\,\,\,\Sigma_z$
\\
\hline
$\gamma_0$& - & + & + & - & - & + & $\,\,\,$+
\\
$\gamma_1$& - & + & + & - & + & - & $\,\,\,$-
\\
$\gamma_2\equiv J{\cal C}{\cal I}$&+ & +& - & - & - & - & $\,$ $\eta_{\cal C}$
\\
$\gamma_3\equiv {\cal C}{\cal I}$&- & - & - & - & - & - &$\,$ $\eta_{\cal C}$
\\
\hline
$\gamma_4\equiv \Theta{\cal C}J$& -& - & + & + & - & - & $-\eta_{\cal C}$
\\
\hline
\hline
\end{tabular}
\caption{Commutation relations applying for symmorphic superconductors coexisting with
antiferromagnetic order $\bold{t}_\theta=\bold{t}_\perp$ ($\Pi^+_1$).
$+$ and $-$ refer to commutation or anti-commutation, and $\eta_{\cal C}=\pm$ to positive and negative
mirror eigenvalues, i.e.~order parameters from representations $B_u$ and $A_u$, respectively.
\label{tableApp:3}
}
\end{table}
{\it ``Symmorphic antiferromagnet":---}In the case where $\Sigma_z$ is a
conventional mirror symmetry with $\bold{t}_\sigma=0$, we again
include $\gamma_4\equiv\Theta{\cal C}J$ to account for a generalized time-reversal symmetry.
That is, we start
from the Clifford algebra
${\cal C}l_{3,2}=\{
\gamma_0 ,\gamma_1,
\gamma_2\equiv J{\cal C}{\cal I},
\gamma_3\equiv {\cal C}{\cal I},
\gamma_4\equiv \Theta{\cal C}J\}$,
to which
we then add the mirror-plane symmetry.
Concentrating on order parameters from $B_u$ with
positive mirror eigenvalue, we find from commutation relations summarized in Table~\ref{tableApp:3}
that the negative element
$\gamma_5\equiv {\cal C}\Theta J\Sigma_z\gamma_1$ commutes with all other Clifford algebra generators.
The situation is then similar to that encountered for $A_u$ order parameters in
a paramagnet with a twofold screw axis. The Clifford algebra accounting for all symmetry elements
is complex, ${\cal C}l_5={\cal C}l_{3,2}\otimes {\cal C}l_{1,0}$, giving rise to
the extension problem ${\cal C}l_4\to{\cal C}l_5$ with classifying space $C_0$. The nontrivial topology
$\pi_0(C_0)={\mathbb Z}$ again indicates the topological protection of line nodes, now for a representation $B_u$
as defined in the fourth entry of Table~III in the main text.
Finally, for order parameters from $A_u$ with negative mirror eigenvalue, one finds that
the negative element $\gamma_5\equiv J\Sigma_z\gamma_1$ anti-commutes with all Clifford algebra generators.
Including this element, one arrives at the Clifford algebra
${\cal C}l_{4,2}=\{
\gamma_0,\gamma_1,
\gamma_2\equiv J{\cal C}{\cal I},
\gamma_3\equiv {\cal C}{\cal I},
\gamma_4\equiv \Theta{\cal C}J,
\gamma_5\equiv J\Sigma_z\gamma_1\}$.
This defines the extension problem ${\cal C}l_{3,2}\to {\cal C}l_{4,2}$ with trivial classifying space
$R_3$, with $\pi_0(R_3)=0$.
{\it Inversion with finite translation:---}We may now straightforwardly extend the above analysis to the case
$\bold{t}_i=\bold{t}_\perp$. To this end we notice that for general $\bold{t}_i$, $\bold{t}_\sigma$
the second of Eq.~\eqref{AppISigma}
changes to
\begin{align}
{\cal I}\Sigma_z
=
e^{2ik_z\bold{e}\cdot(\bold{t}_\sigma-\bold{t}_i)}\Sigma_z{\cal I},
\end{align}
where we employed that $\bold{t}_\sigma,\bold{t}_i\in\{0,\bold{t}_\perp\}$.
It is then evident that, leaving $\bold{t}_i\in\{0,\bold{t}_\perp\}$ unspecified, the above calculation can be
repeated; this generalizes the demonstration of topological stability of an $A_u$, respectively $B_u$, line node
to those conditions leading to band sticking ($c_d=1$) in Eq.~\eqref{Appeq:6}.
\section{III. More examples}
In the main text, we concentrated on the well-studied case of UPt$_3$. But there are a
number of other unconventional superconductors which are characterized by non-symmorphic space groups,
many of which are suspected to be odd-parity superconductors. These are tabulated in Table~\ref{tableApp:4},
along with the two antiferromagnetic superconductors discussed in the text.
Much of the information on the uranium-based superconductors can be found in Ref.~\onlinecite{App_pfleid}.
There are others we do not discuss, since not enough is known about their properties.
For instance, the topological semimetal Cd$_3$As$_2$ becomes superconducting under pressure with the space
group P2$_1$/c \cite{App_cd3as2}, but its nodal properties and Fermi surface (in the high pressure phase)
are not known at present.
\begin{table}[t!]
\begin{ruledtabular}
\begin{tabular}{ccccc}
& Space Group & GO
& Node & Spin\\
\colrule
UPt$_3$ & P6$_3$/mmc & S,G & line & T\\
Na$_x$CoO$_2$ & P6$_3$/mmc & S,G & line & S,T\\
Li$_2$Pt$_3$B & P4$_1$32 & S,I & line & T\\
UBe$_{13}$ & Fm${\bar 3}$c & G & point & T\\
CrAs & Pnma & S,G,H & line & ?\\
MnP & Pnma & S,G,H & ? & ?\\
UPd$_2$Al$_3$ & P6/mmm & AF & line & S\\
UNi$_2$Al$_3$ & P6/mmm & AF & line & T\\
\end{tabular}
\end{ruledtabular}
\caption{Properties of non-symmorphic superconductors (first six entries)
and antiferromagnetic superconductors (last two entries).
For GO
(group operations), S indicates a screw axis, G a glide plane, I a lack of inversion symmetry, H
a helical magnet, and AF non-symmorphicity induced by antiferromagnetism.
Node means the experimentally indicated
nodal structure (line, point, or ? for unknown),
and Spin denotes singlet (S) or triplet (T) behavior (both have been advocated for Na$_x$CoO$_2$).
}
\label{tableApp:4}
\end{table}
We now discuss each of these cases in turn. The nodal properties of UPt$_3$
have been known for a long time \cite{App_norm92,App_sauls,App_joynt,App_pfleid}. Strong evidence for line nodes
has been found from specific heat \cite{App_sh}, thermal conductivity \cite{App_kappa}, and transverse
ultrasound \cite{App_ultrasound}, with the lack of a change in
the Knight shift below T$_c$ indicating triplet behavior \cite{App_tou}. An E$_{2u}$ order parameter has been inferred from phase
sensitive Josephson tunneling \cite{App_dvh}. Two of the five Fermi surfaces cross the zone face along
the $k_z$ direction \cite{App_mcm}.
Superconducting sodium-doped (and water intercalated) CoO$_2$, with the same space group as
UPt$_3$, has been heavily studied as well \cite{App_takada}. Band structure calculations \cite{App_singh} reveal Fermi surfaces
that also cross the zone face along the $k_z$ direction (later found to be consistent with ARPES \cite{App_arpes}).
Specific heat \cite{App_yang}, Knight shift \cite{App_ihara} and the spin lattice relaxation rate \cite{App_fujimoto} are
consistent with line nodes. Knight shift data \cite{App_ihara} have been interpreted as either a singlet or a triplet
with the $d$-vector orthogonal to $c$.
Li$_2$Pd$_3$B and Li$_2$Pt$_3$B have the P4$_1$32 chiral space group that breaks inversion \cite{App_lipb}.
The Pd version appears to be a fully gapped superconductor, but for the Pt version, specific heat \cite{App_takeya},
penetration depth \cite{App_yuan} and NMR \cite{App_harada,App_nishiyama} are consistent with line nodes.
A lack of change of the Knight shift below T$_c$ \cite{App_harada,App_nishiyama} indicates triplet behavior.
Band structure calculations \cite{App_chandra,App_pickett}
predict Fermi surfaces at the zone face.
Thus, all of these cases (UPt$_3$, Na$_x$CoO$_2$, and Li$_2$Pt$_3$B) are consistent with our
analysis indicating the possibility of line nodes at the zone face for odd-parity representations,
though for Li$_2$Pt$_3$B, parity mixing is possible that could alter the nodal structure.
The well known heavy fermion superconductor UBe$_{13}$ also has a non-symmorphic space group, but one
which only has glide operations. Specific heat \cite{App_ott} and penetration depth \cite{App_lambda} are consistent with point nodes,
which agrees with our previous analysis \cite{App_MN} that glide operations do not induce line nodes.
A lack of change of the Knight shift below T$_c$ \cite{tien} indicates triplet behavior.
Recently, the helical magnets CrAs \cite{App_cras} and MnP \cite{App_mnp} have been found to be superconducting
under high pressures. Both are characterized by the space group Pnma,
which has screw axes associated with all three crystallographic axes, and glide planes associated with two of them.
NQR on CrAs is consistent with line
nodes \cite{App_kotegawa}, and band structure calculations \cite{App_niu} indicate small pockets around the $Y$
point on the zone face. Again, this is consistent with our analysis.
The case of UPd$_2$Al$_3$ was discussed extensively by Nomoto and Ikeda \cite{App_nomoto}. Its sister compound,
UNi$_2$Al$_3$, is strongly suspected of being a triplet superconductor given the lack of change of the
Knight shift below T$_c$ \cite{App_ishida}, as opposed to UPd$_2$Al$_3$ that appears to be a singlet \cite{App_tou}.
NMR indicates line nodes \cite{App_tou2}. UNi$_2$Al$_3$ is an incommensurate
antiferromagnet with a $Q$ vector of $(1/2 \pm \delta, 0, 1/2)$ and moments perpendicular to $c$ \cite{App_schroder}.
Thus for the zone face perpendicular to $c$, these two cases are the same since $Q_z$ is the same, as discussed in the main text.
\section{Introduction}
There are two approaches to the {\it $L^2$-Betti numbers} $\beta_n^{(2)}(\Gamma)$, $n=0,1,2,\dots$, of an arbitrary (countable) discrete group $\Gamma$; one is geometric and the other is algebraic, each of which has individual merits. The first and geometric one due to Cheeger and Gromov \cite{cheeger-gromov} utilizes chain complexes of Hilbert spaces obtained from appropriate simplicial complexes equipped with actions of $\Gamma$, while the second and algebraic one due to L{\"{u}}ck (see his book \cite{luck:survey}) does chain complexes of algebraic $\Gamma$-modules with the help of his `algebraization' of the original Murray-von Neumann dimension.
Following Cheeger-Gromov's geometric approach, Gaboriau \cite{gaboriau:betti} introduced the $L^2$-Betti numbers $\beta_n^{(2)}(\mathcal{R})$ of an arbitrary probability measure preserving (pmp for short) (countable) discrete equivalence relation $\mathcal{R}$. For an arbitrary essentially free, pmp action $\Gamma\curvearrowright (X,\mu)$ of a discrete group he showed, among others, that its orbit equivalence relation $\mathcal{R}_{\Gamma\curvearrowright (X,\mu)}$ satisfies the formula
\begin{equation}\label{a}
\beta_n^{(2)}(\mathcal{R}_{\Gamma\curvearrowright (X,\mu)}) = \beta_n^{(2)}(\Gamma),
\end{equation}
which in turn says that the $\beta_n^{(2)}(\Gamma)$ are orbit equivalence invariants. Under the influence of Gaboriau's work, Sauer \cite{sauer} then adapted L\"{u}ck's algebraic approach to an arbitrary pmp discrete groupoid $G$, and defined the $L^2$-Betti numbers $\beta_n^{(2)}(G)$. The pmp discrete groupoids form a natural class including both the discrete groups and the pmp discrete equivalence relations as its subclasses. By definition, Sauer's $\beta_n^{(2)}(G)$ recovers $\beta_n^{(2)}(\Gamma)$ when $G$ is a discrete group $\Gamma$. Moreover, the formula \eqref{a} is rather easier to prove with his definition, and it turns out that Sauer's $L^2$-Betti numbers agree with Gaboriau's when $G = \mathcal{R}_{\Gamma\curvearrowright (X,\mu)}$ with essentially free, pmp actions $\Gamma\curvearrowright (X,\mu)$. The complete identification between Gaboriau's and Sauer's $L^2$-Betti numbers for pmp discrete equivalence relations was finally settled by Neshveyev and Rustad \cite{neshveyev}. Their proof utilizes more recent technologies developed by Thom \cite{thom}, and turns out to simplify some technical parts of Gaboriau's theory. However, the geometric approach to $L^2$-Betti numbers has yet to be developed in the framework of pmp discrete groupoids, and we will fill this gap in the present notes.
Before his introduction of $L^2$-Betti numbers of pmp discrete equivalence relations, Gaboriau \cite{gaboriau:cost} studied the so-called {\it cost} $C_\mu(\mathcal{R})$ of an arbitrary pmp discrete equivalence relation $\mathcal{R}$ over a probability space $(X,\mu)$ thoroughly, following Levitt's former work \cite{levitt}. He made many non-trivial computations including that $C_\mu(\mathcal{R}_{\mathbb{F}_n\curvearrowright(X,\mu)}) = n$ for any essentially free, pmp action $\mathbb{F}_n \curvearrowright (X,\mu)$ possibly with $n=\infty$. He also proved, in his work \cite{gaboriau:betti} on $L^2$-Betti numbers, the following inequality
\begin{equation}\label{b}
\beta_1^{(2)}(\mathcal{R}) - \beta_0^{(2)}(\mathcal{R}) +1 \leq C_\mu(\mathcal{R}).
\end{equation}
Gaboriau's theory of costs, including this inequality, also seems to be missing for arbitrary pmp discrete groupoids. It is rather straightforward, see \cite{ueda}, \cite{abert-nikolov}, \cite{abert-weiss}, to adapt Levitt--Gaboriau's definition of costs to pmp discrete groupoids. However, it is certainly non-trivial to generalize the main assertions in Gaboriau's theory of costs. In fact, \cite[Proposition I.11]{gaboriau:cost} never holds true for pmp discrete groupoids (see \cite[Remark 12 (1)]{ueda}). Nevertheless, Ueda \cite{ueda} showed that some other important assertions, e.g.~\cite[Proposition II.6, Th{\'e}or{\`e}me IV.15]{gaboriau:cost}, still hold true for arbitrary pmp discrete groupoids, but his work was done in terms of operator algebras. In the present notes we will translate his work into the language of pmp discrete groupoids by supplying the necessary technical ingredients, and then establish the formula \eqref{b} for arbitrary pmp discrete groupoids by generalizing the necessary parts of Gaboriau's theory to the groupoid setting. We also compute the costs of pmp `treeable' groupoids.
As mentioned above, the present notes supply the explanations necessary for unifying previous fundamental works on $L^2$-Betti numbers and costs in the class of pmp discrete groupoids. Hence some parts of the present notes may have been implicitly known so far, though they have not been explored in the literature. We intend to provide the present notes as a reference for the future study of pmp discrete groupoids. We use the necessary material from Sauer's paper \cite{sauer} without explanation, as well as some technical results from \cite{neshveyev}, to keep these notes short. Nevertheless, with the help of only \cite{neshveyev}, \cite{sauer}, and \cite{ueda}, these notes are essentially self-contained.
\section{Pmp discrete groupoids and their von Neumann algebras}
Let $G$ be a discrete (standard) Borel groupoid with unit space $X$ (usually denoted by $G^{(0)}$ instead). The source map and the range map are denoted by $s : G \to X$ and $r : G \to X$, respectively. If the mapping $g \in G \mapsto (r(g),s(g)) \in X\times X$ is injective, we say that $G$ is {\it principal}. In this case, $G$ is nothing but a discrete Borel equivalence relation. A Borel subset $E \subset G$ is said to be {\it one-sheeted} if $s \upharpoonright_E$ and $r \upharpoonright_E$ are injective. The symbol $\mathcal{G}_G$ denotes the set of one-sheeted sets of $G$. Since $s$ and $r$ are countable-to-one maps, the following hold true (due to e.g.~\cite[Theorems 15.1, 15.2, 18.10]{kechris:GTM156}): (i) $G$ can be decomposed into countable disjoint union of elements in $\mathcal{G}_G$; (ii) for each $E \in \mathcal{G}_G$ we have a partially defined Borel isomorphism $\varphi_E := (r\!\upharpoonright_E)\circ(s\!\upharpoonright_E)^{-1} : s(E) \to r(E)$. Assume that $X$ is endowed with a probability measure $\mu$ which is invariant under all $\varphi_E$, $E \in \mathcal{G}_G$. We call such a pair $(G,\mu)$ a {\it pmp discrete groupoid}. Define a (possibly infinite) measure $\mu^G$ on $G$ by $\mu^G(B) = \int_X\, \#(s^{-1}(\{x\})\cap B)\, \mu(dx)$ for every Borel subset $B$ of $G$.
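For a finite toy model, the measure $\mu^G$ and the invariance of $\mu$ under the partial isomorphisms $\varphi_E$ can be checked directly. The sketch below (all names illustrative; it builds the transformation groupoid of a $\mathbb{Z}/2$ action on a four-point unit space, a degenerate but instructive case of the definitions above):

```python
from fractions import Fraction

# Finite toy model of a pmp transformation groupoid: Z/2 acting on
# X = {0,1,2,3} by swapping 0<->1 and 2<->3.
X = [0, 1, 2, 3]
mu = {x: Fraction(1, 4) for x in X}          # uniform invariant measure

def act(g, x):
    return x ^ 1 if g == 1 else x

# Groupoid elements (x, g) with source s(x,g) = x and range r(x,g) = g.x.
G = [(x, g) for x in X for g in (0, 1)]

def s(e):
    return e[0]

def r(e):
    return act(e[1], e[0])

def mu_G(B):
    """mu^G(B) = integral over X of #(s^{-1}({x}) cap B) d mu(x)."""
    return sum(mu[x] * sum(1 for e in B if s(e) == x) for x in X)

units = [(x, 0) for x in X]                  # the unit space inside G
print(mu_G(G), mu_G(units))                  # -> 2 1

# Invariance of mu under phi_E for a one-sheeted set E: here
# E = {(0,1), (2,1)} gives phi_E : {0,2} -> {1,3}.
E = [(0, 1), (2, 1)]
assert sum(mu[s(e)] for e in E) == sum(mu[r(e)] for e in E)
```

The total mass $\mu^G(G)=2$ reflects that each source fiber has $|\mathbb{Z}/2|=2$ elements, while the unit space has mass $\mu(X)=1$.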
\medskip
The {\it groupoid ring} $\mathbb{C}[G]$ of $G$ is defined to be the linear subspace of functions $f \in L^\infty(G,\mu^G)$ such that two functions $x \mapsto \#(s^{-1}(x) \cap \mathop{ {\mathrm{supp} }}\nolimits f)$, $x \mapsto \#(r^{-1}(x) \cap \mathop{ {\mathrm{supp} }}\nolimits f)$ are bounded $\mu$-a.e. The product $(f_1, f_2) \in \mathbb{C}[G]\times\mathbb{C}[G] \mapsto f_1 f_2 \in \mathbb{C}[G]$ and the adjoint $f \in \mathbb{C}[G] \mapsto f^* \in \mathbb{C}[G]$ are defined by $(f_1 f_2)(g) = \sum_{ g_1 g_2 = g} f_1(g_1) f_2(g_2)$ and $(f^*)(g) := \overline{f(g^{-1})}$, respectively. With these operations, $\mathbb{C}[G]$ becomes a $*$-algebra. We remark that if $G$ is a discrete group, then $\mathbb{C}[G]$ is just the usual group ring.
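As a sanity check of the $*$-algebra axioms, the convolution product and adjoint can be implemented in the group-ring special case noted in the remark (here $G=\mathbb{Z}/3$; a minimal sketch, not the general groupoid ring):

```python
import itertools

# Toy model: when G is the finite group Z/n, C[G] is the ordinary group
# ring; a function f is stored as a dict g -> coefficient.
n = 3
elements = range(n)

def conv(f1, f2):
    """(f1 f2)(g) = sum over g1 g2 = g of f1(g1) f2(g2)."""
    out = {g: 0j for g in elements}
    for g1, g2 in itertools.product(elements, repeat=2):
        out[(g1 + g2) % n] += f1[g1] * f2[g2]
    return out

def adjoint(f):
    """f*(g) = conjugate of f(g^{-1})."""
    return {g: f[(-g) % n].conjugate() for g in elements}

f1 = {0: 1 + 0j, 1: 2 + 1j, 2: 0j}
f2 = {0: 0j, 1: 1j, 2: 3 + 0j}
f3 = {0: 1j, 1: 0j, 2: 2 + 0j}

# *-algebra identities: associativity and (f1 f2)* = f2* f1*.
assert conv(conv(f1, f2), f3) == conv(f1, conv(f2, f3))
assert adjoint(conv(f1, f2)) == conv(adjoint(f2), adjoint(f1))
```

For a general groupoid the only change is that the sum runs over composable pairs $g_1 g_2 = g$ within each fiber, with $s(g_1)=r(g_2)$.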
\medskip
The so-called {\it{\rm(}left{\rm)} regular representation} $\mathbb{C}[G] \curvearrowright L^2(G) := L^2(G,\mu^G)$ is defined by $(f \xi)(g) := \sum_{g_1 g_2 = g} f(g_1)\xi(g_2)$ for $f \in \mathbb{C} [G]$ and $\xi \in L^2(G)$, and it generates the {\it groupoid von Neumann algebra} $L(G) = \mathbb{C}[G]''$ on $L^2(G)$. The von Neumann algebra $L(G)$ has a faithful normal tracial state $\tau_G$ defined by a cyclic and separating vector $\mathbbm{1}_X$ (the characteristic function on $X$). Remark that each $u(E) := \mathbbm{1}_E$ inside $L(G)$, $E \in \mathcal{G}_G$, defines a partial isometry in $L(G)$ and that $L(G)$ is generated by these $u(E)$ as a von Neumann algebra, since $G$ is a countable disjoint union of one-sheeted sets. In closing of this section, we give two remarks: (1) If $G$ is a discrete group, then $(L(G),\tau_G)$ is nothing but the {\it group von Neumann algebra} with the canonical tracial state. (2) If $G$ is the {\it transformation groupoid} (see the glossary prior to Lemma \ref{trivial_action} for the definition) arising from a pmp action $\Gamma \curvearrowright X$ of a discrete group, then $L(G)$ is naturally identified with $L^\infty(X)\rtimes\Gamma$, the crossed product of $L^\infty(X)$ by the induced action of $\Gamma$ on $L^\infty(X)$ in the sense of e.g.~\cite[Definition 13.1.3]{kadison-ringrose:bookv2}. The identification is precisely given by $u(E_\gamma) = u_\gamma \otimes \lambda_\gamma$, where $E_\gamma := X \times \{ \gamma \}$, $u_\gamma$ is the unitary representation of $\Gamma$ on $L^2(X)$ associated with the induced action, and $\lambda_\gamma$ denotes the left regular representation.
Throughout the rest of these notes, $(G, \mu)$ denotes a pmp discrete groupoid with unit space $X$.
\section{Geometric approach to $L^2$-Betti numbers of pmp discrete groupoids}
\subsection{Definitions}
We adapt Gaboriau's definition of $L^2$-Betti numbers to arbitrary pmp discrete groupoids with necessary suitable modifications. This and the next subsections are rather self-contained.
\medskip
A {\it {\rm(}standard{\rm)} fiber space} over $(X,\mu)$ is defined to be a pair which consists of a (standard) Borel space $U$ and a Borel map $\pi_U : U \to X$ with countable fibers, and it is usually denoted by $U$ for simplicity. We equip it with a natural measure $\mu_U$ on $U$ defined by $\mu_U(C) := \int_X\, \# (\pi_U^{-1}(\{x\}) \cap C)\, \mu(dx)$ for every Borel subset $C$ of $U$. Any pmp discrete groupoid $(G,\mu)$ produces two fiber spaces with its source and range maps $s, r$. A Borel subset $E$ of a standard fiber space $U$ is called a {\it Borel section} of $U$ if $\pi_U \upharpoonright_E$ is injective. Note that, by \cite[Theorem 18.10]{kechris:GTM156}, any fiber space is a countable disjoint union of its Borel sections. The {\it fiber product} of fiber spaces $U_1,\dots,U_n$ means the fiber space $U_1*\dots*U_n := \{(u_1,\dots,u_n) \in U_1\times\cdots\times U_n\,|\,\pi_{U_1}(u_1) = \cdots = \pi_{U_n}(u_n) \}$ with $\pi_{U_1*\cdots*U_n} : (u_1,\cdots,u_n) \in U_1*\cdots*U_n \mapsto \pi_{U_1}(u_1) = \cdots = \pi_{U_n}(u_n) \in X$.
\medskip
Let $U$ be a fiber space over $(X,\mu)$. We regard $G$ as a standard fiber space with the source map $s$, and get the fiber product $G*U$. In this setup, a {\it left action} of $G$ on $U$ is defined to be a Borel map $(g,u) \in G * U \mapsto g\cdot u \in U$ satisfying the following conditions: (1) $\pi_U(g \cdot u) = r(g)$, (2) $\pi_U(u) \cdot u = u$ (where $\pi_U(u)$ is viewed as an element in $G$ since $X \subseteq G$), (3) $g \cdot (g^\prime \cdot u) = (g g^\prime) \cdot u$. We call such a fiber space with left action of $G$ a {\it {\rm(}standard left{\rm)} $G$-space}. The `groupoid product map' $(g_1,g_2) \in G*G \mapsto g_1 g_2 \in G$ is nothing but a left action of $G$ on the fiber space $r: G \to X$ so that $G$ itself is a $G$-space.
\medskip
Let $U$ be a $G$-space. The left action of $G$ is said to be {\it essentially free} if $g \cdot u = u$ implies $g = \pi_U(u)$ for $\mu_U$-a.e.~$u$. A Borel subset $F$ of $U$ is called a {\it fundamental domain} for the action of $G$ if $\#((G\cdot u) \cap F) = 1$ holds for $\mu_U$-a.e.~$u$. Following Pichot's notion \cite{pichot}, we say that a $G$-space $U$ is {\it quasi-periodic} if the left action of $G$ is essentially free and has a fundamental domain. It is important below that $G$ itself becomes a quasi-periodic $G$-space with fundamental domain $X$. Note that if $G$ is principal or, in other words, an equivalence relation, then any left action of $G$ must be essentially free. We may and do assume, by choosing a smaller co-null subset if necessary, that for any quasi-periodic $G$-space $U$, the $G$-action is exactly free and has an exact fundamental domain.
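The fundamental-domain condition $\#((G\cdot u)\cap F)=1$ can be verified in a finite toy model, taking $G$ itself as a $G$-space with fundamental domain $X$ (illustrative code; the groupoid and conventions are chosen only for the example):

```python
# Transformation groupoid of Z/2 acting on X = {0,1,2,3} by x <-> x^1;
# elements are pairs (x, g) with s(x,g) = x and r(x,g) = g.x.
X = [0, 1, 2, 3]

def act(g, x):
    return x ^ 1 if g == 1 else x

G = [(x, g) for x in X for g in (0, 1)]
units = [(x, 0) for x in X]                  # candidate F = X

def orbit(u):
    """G.u for the left action of G on itself: (x,g).(y,k) = (y, g+k)."""
    y, k = u
    return {(y, (g + k) % 2) for g in (0, 1)}

# F = X is a fundamental domain: every orbit meets it exactly once.
assert all(len(orbit(u) & set(units)) == 1 for u in G)
```

The orbit of $(y,k)$ is $\{(y,0),(y,1)\}$, which meets the unit space precisely in $(y,0)$, matching the claim that $X$ is a fundamental domain for $G$ acting on itself.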
\medskip
The next lemma is crucial and the groupoid counterpart of \cite[Lemme 2.3]{gaboriau:betti}.
\begin{lem} \label{lem:quasi-periodic}
For any quasi-periodic $G$-space $U$, there exists a $G$-equivariant Borel injection from $U$ into a disjoint union $\bigsqcup_{i \in I} G = G\times I$ equipped with the left $G$-space structure as follows: its standard fiber space structure is given by the map $(g,i) \mapsto r(g)$ and its left action of $G$ is diagonal, i.e., $(g_1,(g_2,i)) \mapsto (g_1 g_2, i)$.
\end{lem}
\begin{proof}
As we remarked above, we may assume that the action of $G$ on $U$ is exactly free and has an exact fundamental domain. Let $F$ be an exact fundamental domain for the left action of $G$ on $U$. Since $\pi_U \!\upharpoonright_F : F \to X$ is a countable-to-one Borel map, by \cite[Theorem 18.10]{kechris:GTM156} there exists a countable Borel partition $\{ F_i \}_{i \in I}$ of $F$ such that each $\pi_U\!\upharpoonright_{F_i}$ is injective.
Then we have $U = G \cdot F = \bigsqcup_{i \in I} G \cdot F_i$. Indeed, the first equality follows from the fact that $F$ is an exact fundamental domain and the second is due to the freeness of the action.
Note that, by \cite[Corollary 15.2]{kechris:GTM156}, the map $\pi_U\!\upharpoonright_{F_i} : F_i \to X_i := \pi_U(F_i)$ is a Borel isomorphism so that we have a Borel injection $G \cdot X_i \to U : g \mapsto g \cdot (\pi_U \upharpoonright_{F_i})^{-1} (s(g))$ whose image is $G \cdot F_i$. Thus, by \cite[Corollary 15.2]{kechris:GTM156}, $G \cdot F_i$ is Borel and $f_i : G \cdot F_i \to G \cdot X_i : g \cdot u \mapsto g$ is a Borel isomorphism. Therefore, the desired injection $f : U \to \bigsqcup_{i \in I} G \cdot X_i$ is defined to be $f \upharpoonright_{G \cdot F_i} := f_i$, $i \in I$.
\end{proof}
For any fiber space $U$ over $(X,\mu)$, the symbol $\Gamma(U)$ denotes the space of Borel functions $f : U \to \mathbb{C}$ such that
$S(f)(x) := \# (\pi_U^{-1}(\{x\}) \cap \mathop{ {\mathrm{supp} }}\nolimits (f))$ is finite for $\mu$-a.e.~$x$, where $\mathop{ {\mathrm{supp} }}\nolimits(f):=\{u \in U\,|\,f(u)\neq0\}$. We also define $\Gamma^b(U)$ to be the space of $f \in \Gamma(U) \cap L^\infty(U)$ such that $S(f) \in L^\infty(X)$, and set $\Gamma^{(2)}(U) := L^2(U,\mu_U)$. Note that every function on $U$ is the sum of functions each of which is of the form $(\xi \circ \pi_U) \mathbbm{1}_E$; here $\xi$ is a measurable function on $X$ and $E$ is a Borel section of $U$. In the following, the symbol $\Gamma^\star(U)$ denotes any one of $\Gamma(U)$, $\Gamma^b(U)$, and $\Gamma^{(2)}(U)$.
\medskip
Let $U$ be a $G$-space. Then $\Gamma^\star(U)$ has the following natural left $\mathbb{C}[G]$-module structure:
$$
(f \varphi)(u) := \sum_{g \in r^{-1}(\{\pi_U(u)\})} f(g)\varphi(g^{-1} \cdot u)
$$
for $f \in \mathbb{C}[G]$ and $\varphi \in \Gamma^\star(U)$. If $U$ is quasi-periodic, then $\Gamma^{(2)}(U)$ becomes a Hilbert $L(G)$-module whose {\it Murray-von Neumann dimension} (see \cite[\S 1.1]{luck:survey}) equals the measure of a fundamental domain of $U$. Indeed, since we may assume that $U = \bigsqcup_{i \geq 1} G \cdot X_i$ (see the proof of Lemma \ref{lem:quasi-periodic}), we have $\Gamma^{(2)}(U) = \sideset{}{^\oplus_{i\geq1}}\sum L^2(G) \mathbbm{1}_{X_i}$. Here, note that $(\xi \mathbbm{1}_{X_i}) (g) := \sum_{g_1 g_2 = g} \xi(g_1) \mathbbm{1}_{X_i}(g_2)$ (i.e., the right action of $\mathbbm{1}_{X_i}$), which defines the projection $\xi \mapsto \xi \mathbbm{1}_{X_i}$ in the commutant $L(G)^\prime$. Thus we conclude that $\Gamma^{(2)}(U)$ is a Hilbert $L(G)$-module and that $\dim_{L(G)} \Gamma^{(2)}(U) = \sum_{i \geq 1} \mu(X_i)$, which equals the measure of a fundamental domain.
\medskip
For a $G$-space $U$, any fiber product $U*\cdots*U$ becomes again a $G$-space endowed with the diagonal action of $G$: $(g,(u_1,\dots,u_n)) \mapsto (gu_1,\dots,gu_n)$. A {\it simplicial $G$-complex} is defined to be a sequence $\Sigma = (\Sigma^{(n)})_{n \geq 0}$ of quasi-periodic $G$-spaces such that each $\Sigma^{(n)}$ is a $G$-invariant Borel subset of the $n+1$ times fiber product of $\Sigma^{(0)}$ with the restriction to $\Sigma^{(n)}$ of the left action of $G$ on the fiber product, and moreover such that the following conditions hold:
\begin{enumerate}
\item if $(v_0, \dots, v_n) \in \Sigma^{(n)}$, then $(v_{\sigma(0)}, \dots, v_{\sigma(n)}) \in \Sigma^{(n)}$ for any permutation $\sigma$;
\item if $(v_0, \dots, v_n) \in \Sigma^{(n)}$, then $v_0 \neq v_1$;
\item if $s = (v_0, \dots, v_n) \in \Sigma^{(n)}$, then $\partial_n^j s := (v_0, \dots, \hat{v_j}, \dots, v_n) \in \Sigma^{(n-1)}$ for every $0 \leq j \leq n$, where $\hat{v_j}$ means the removal of $v_j$ from the sequence $(v_0, \dots, v_n)$.
\end{enumerate}
Note that the maps $\partial_n^j : \Sigma^{(n)} \to \Sigma^{(n-1)}$ are measurable. The fiber of $\pi_{\Sigma^{(n)}} : \Sigma^{(n)} \to X$ at $x$ is denoted by $\Sigma^{(n)}_x$. Then, $\Sigma_x := (\Sigma^{(n)}_x)_{n \geq 0}$ becomes a usual simplicial complex; see \cite[Chapter 3]{spanier} for the usual notation on simplicial complexes. We say that $\Sigma$ is {\it contractible} if $\Sigma_x$ is contractible for $\mu$-a.e.~$x$. Similarly, we say that $\Sigma$ is {\it connected} if $\Sigma_x$ is connected for $\mu$-a.e.~$x$. A simplicial $G$-complex $\Sigma$ is said to be {\it uniformly locally bounded} (ULB for short) if $\Sigma^{(0)}$ has a fundamental domain of finite measure and there exists an integer $N$ such that $\# \{ s \in \Sigma_x \,|\, v \in s \} \leq N$ holds for every $v \in \Sigma^{(0)}_x$ and for $\mu$-a.e.~$x$. In that case, every $\Sigma^{(n)}$ has a fundamental domain of finite measure. Indeed, if $F$ is a fundamental domain of $\Sigma^{(0)}$, then $F^{(n)} := \{ (v_0, \dots, v_n) \in \Sigma^{(n)} \, | \, v_0 \in F \}$ is a fundamental domain of $\Sigma^{(n)}$ satisfying $\mu_{\Sigma^{(n)}}(F^{(n)}) \leq N \mu_{\Sigma^{(0)}}(F) < \infty$.
\medskip
The {\it universal $G$-complex} $E G = (E G^{(n)})_{n \geq 0}$ plays an important r\^{o}le, and thus we do give its precise definition in what follows. Set $E G^{(0)} := \bigsqcup_{i \in \mathbb{N}} G = G\times\mathbb{N}$, which becomes a $G$-space with the diagonal action, see Lemma \ref{lem:quasi-periodic}. For $n \geq 1$, define $E G^{(n)}$ to be the set of $(n + 1)$-tuples $(v_0, \dots, v_n) \in EG^{(0)} * \dots * EG^{(0)}$ whose entries are distinct. Since $G$ itself is a quasi-periodic $G$-space with fundamental domain $X$ mentioned before, $EG^{(0)}$ is again a quasi-periodic $G$-space with fundamental domain $\bigsqcup_i X$ which is of infinite measure. Hence $EG$ is a contractible, simplicial $G$-complex, but infinite dimensional and far from being ULB.
\medskip
Let $\Sigma$ be a simplicial $G$-complex. A {\it $G$-subcomplex} of $\Sigma$ is defined to be a simplicial $G$-complex $\Xi$ such that each $\Xi^{(n)}$ is a $G$-invariant subset of $\Sigma^{(n)}$ with the restriction to $\Xi^{(n)}$ of the original left action of $G$. A sequence $(\Xi_i)_{i \geq 1}$ of $G$-subcomplexes is called an {\it exhaustion} of $\Sigma$ if, for every $n \geq 0$, $(\Xi^{(n)}_{i, x})_{i \geq 1}$ is an increasing sequence of subsets of $\Sigma^{(n)}_x$ satisfying $\bigcup_{i \geq 1} \Xi^{(n)}_{i, x} = \Sigma^{(n)}_x$ for $\mu$-a.e.~$x$. An exhaustion $(\Xi_i)_{i \geq 1}$ is said to be ULB if each $\Xi_i$ is ULB. We will prove the existence of ULB exhaustions for any simplicial $G$-complex in the next subsection.
\medskip
For a simplicial $G$-complex $\Sigma$, let $C_n^\star (\Sigma)$ (in analogy with the notation $\Gamma^\star$ introduced before) denote the subspace of $\Gamma^\star (\Sigma^{(n)})$ which consists of functions $f : \Sigma^{(n)} \to \mathbb{C}$ satisfying $f(\sigma^{-1} u) = (\mathrm{sgn}\, \sigma) f(u)$ for every $u \in \Sigma^{(n)}$ and every permutation $\sigma$. For $f \in C_n^\star(\Sigma)$ and $x \in X$, let $f_x$ denote the restriction of $f$ to $\Sigma_x^{(n)}$.
The family $\{ \partial_{n, x} \}_{x \in X}$ of boundary operators on each $\Sigma^{(n)}_x$ defines a $\mathbb{C} [G]$-module map $\partial_n : C_n(\Sigma) \to C_{n -1}(\Sigma)$ as follows: for $f \in C_n(\Sigma)$, define the function $\partial_n f : \Sigma^{(n-1)} \to \mathbb{C}$ by $(\partial_n f) (u) = (\partial_{n, x} f_x)(u)$ for $u \in \Sigma_x^{(n-1)}$. Then, $\partial_n f$ is measurable. Indeed, if $f = (\xi \circ \pi_{\Sigma^{(n)}}) \mathbbm{1}_E$ is supported on a Borel section $E$ of $\Sigma^{(n)}$, then we have $\partial_n f = (\xi \circ \pi_{\Sigma^{(n-1)}}) \sum_{j=0}^n (-1)^j \mathbbm{1}_{\partial_n^j E}$, which is clearly measurable. Thus, we get a chain complex $C_\bullet(\Sigma)$ of $\mathbb{C} [G]$-modules.
If $\Sigma$ is ULB, then we can extend $\partial_n$ uniquely to a bounded $L(G)$-module map $\partial_n^{(2)} : C_n^{(2)}(\Sigma) \to C_{n - 1}^{(2)}(\Sigma)$. Indeed, let $N$ be a constant so that $\# \{ s \in \Sigma_x\, | \, v \in s \} \leq N$ holds for $\mu$-a.e.~$x$ and every $v \in \Sigma^{(0)}_x$. Then, using the formula $(\partial_n f)_x (t) = \sum_{j = 0}^n (-1)^j \sum_{s \in (\partial_n^j)^{-1} (t)} f(s)$ and the Cauchy-Schwarz inequality, we get an estimate $\| \partial_n f \| \leq (n + 1) \sqrt{N} \| f \|$ for every $f \in C_n^{(2)}(\Sigma) \cap C_n(\Sigma)$. Thus, we get a Hilbert $L(G)$-chain complex $C_\bullet^{(2)}(\Sigma)$; see \cite[\S 1.1]{luck:survey} for the terminology of Hilbert chain complexes.
\medskip
We are ready to give the definition of $L^2$-Betti numbers of a simplicial $G$-complex.
\begin{df}\label{geometric-def}
For a ULB simplicial $G$-complex $\Sigma$, define the {\it $n$-th reduced $L^2$-homology} of $\Sigma$ by
\begin{equation}\label{c}
\overline{H}_n^{(2)}(\Sigma, G)
:= H_n^{(2)}(C_\bullet^{(2)}(\Sigma))
= \ker\partial_n^{(2)}/\,\overline{\mathrm{im}\partial_{n + 1}^{(2)}}.
\end{equation}
Here notice that $\overline{H}_n^{(2)}(\Sigma, G)$ becomes a Hilbert space, since we have taken the closure of $\mathrm{im}\partial_{n+1}^{(2)}$.
For an arbitrary simplicial $G$-complex $\Sigma$, we take a ULB exhaustion $\{ \Sigma_i \}_{i \geq 1}$ (possibly with all $\Sigma_i = \Sigma$). Remark here that, for every $i \leq j$, the inclusion $\Sigma_i \subset \Sigma_j$ induces the natural bounded $L(G)$-map $J^{i,j}_{n} : C_n^{(2)}(\Sigma_i) \to C_n^{(2)}(\Sigma_j)$ for every $n \geq 0$ in the following manner: $J^{i, j}_n(f)(u)$ is defined to be $f(u)$ if $u \in \Sigma_i^{(n)}$ and $0$ otherwise. The maps $J^{i, j}_n$ commute with the boundary maps $\partial_n^{(2)}$, that is, $J^{i, j}_\bullet$ is a chain morphism from $C_\bullet^{(2)}(\Sigma_i)$ to $C_\bullet^{(2)}(\Sigma_j)$. Let $H_n^{(2)}(J^{i, j}_\bullet) : H_n^{(2)}(C_\bullet^{(2)}(\Sigma_i)) \to H_n^{(2)}(C_\bullet^{(2)}(\Sigma_j))$ be the natural map induced from the chain morphism $J^{i, j}_\bullet$.
With $\nabla_n (\Sigma_i, \Sigma_j) := \dim_{L(G)} \overline{\mathop{\mathrm{im}}\nolimits H_n^{(2)}(J^{i, j}_\bullet) }$, we define the {\it $n$-th $L^2$-Betti number} of $\Sigma$ by
\begin{equation}\label{d}
\beta_n^{(2)}(\Sigma,\{\Sigma_i\}_{i\geq1},G) = \lim_{i \to \infty} \lim_{j \to \infty} \nabla_n (\Sigma_i, \Sigma_j).
\end{equation}
\end{df}
\begin{rem}\label{rem:double-limit} Let $\{ \Sigma_i \}_{i \geq 1}$ be an increasing sequence of ULB simplicial $G$-complexes. Then, the function $\nabla_n (\Sigma_i, \Sigma_j)$ is increasing in $i$ and decreasing in $j$. In particular, the double limit in \eqref{d} exists.
\end{rem}
\begin{proof}
Take $i \leq j \leq k$ arbitrary. Since the maps $H_n^{(2)}(J^{i, j}_\bullet)$ are induced from inclusions, the equality $H_n^{(2)}(J^{i, k}_\bullet) = H_n^{(2)}(J^{j, k}_\bullet) \circ H_n^{(2)}(J^{i, j}_\bullet)$ holds. Thus the map $H_n^{(2)}(J^{j, k}_\bullet)$ sends $\overline{\mathop{\mathrm{im}}\nolimits H_n^{(2)}(J^{i, j}_\bullet)}$ onto a dense subspace of $\overline{\mathop{\mathrm{im}}\nolimits H_n^{(2)}(J^{i, k}_\bullet)}$. Hence, by the additivity of von Neumann dimension (see \cite[Theorem 1.12 (3)]{luck:survey}), we have $\nabla_n(\Sigma_i, \Sigma_j) \geq \nabla_n(\Sigma_i, \Sigma_k)$, i.e., $\nabla_n$ is decreasing in the second variable. The same factorization also gives $\mathop{\mathrm{im}}\nolimits H_n^{(2)}(J^{i, k}_\bullet) \subset \mathop{\mathrm{im}}\nolimits H_n^{(2)}(J^{j, k}_\bullet)$, and hence $\nabla_n(\Sigma_i, \Sigma_k) \leq \nabla_n(\Sigma_j, \Sigma_k)$, i.e., $\nabla_n$ is increasing in the first variable.
\end{proof}
It is not clear at all whether or not the above definition of $\beta_n^{(2)}(\Sigma,\{\Sigma_i\}_{i\geq1},G)$ is independent of the choice of ULB exhaustion $\{\Sigma_i\}_{i\geq1}$. This issue will be resolved (see Proposition \ref{mainthm1}) in the course of proving the equivalence between the algebraic and the geometric approaches in \S\S 3.3.
\subsection{A construction of ULB exhaustions}
We prove the following proposition:
\begin{prop}\label{exhaustion} The universal $G$-complex $EG$ has a ULB exhaustion, and hence so does any $G$-complex.
\end{prop}
For every $N \geq 1$, define the $G$-subcomplex $(EG)_N$ of $E G$ in the following manner: Set $(EG)_N^{(0)} = \bigsqcup_{i=1}^N G = G\times\{1,\dots,N\}$ that naturally sits in $EG^{(0)}$. For $n \geq 1$, define $(EG)_N^{(n)}$ to be the set of $(n + 1)$-tuples $(v_0, \dots, v_n) \in (EG)_N^{(0)} * \cdots * (EG)_N^{(0)}$ whose entries are distinct.
\begin{lem} The $G$-complex $(EG)_N$ has a ULB exhaustion for every $N \geq 1$.
\end{lem}
\begin{proof}
Fix $N \geq 1$. Let $G = \bigsqcup_{i \geq 1} E_i$ be a decomposition into a countable family of one-sheeted sets; see \S2. For $k \geq 1$, we set $\tilde{E}_k = \bigsqcup_{i = 1}^k E_i$ and define $\Sigma_k = (\Sigma_k^{(n)})_{n \geq 0}$ in the following way: set $\Sigma_k^{(0)} := (EG)_N^{(0)}$; for $n \geq 1$ let $\Sigma_k^{(n)}$ be the set of $((g_0,i_0),\dots,(g_n,i_n)) \in (EG)_N^{(n)}$ such that $g_{j + 1}^{-1} g_j \in \tilde{E}_k \tilde{E}_k^{-1}$ holds for every $0 \leq j \leq n - 1$.
We show that the sequence $(\Sigma_k)_{k \geq 1}$ is a ULB exhaustion of $(EG)_N$.
Remark that, if $( (g_0, i_0), \dots, (g_n, i_n) ) \in (\Sigma_k)^{(n)}$, then $g_j^{-1} g_{j^\prime} \in \tilde{E}_k \tilde{E}_k^{-1}$ holds for every $j \neq j^\prime$. Indeed, by the definition of $(\Sigma_k)^{(n)}$, there exist $h_0, \dots, h_n \in \tilde{E}_k$ so that $g_{j + 1}^{-1} g_j = h_{j + 1} h_j^{-1}$ holds for every $0 \leq j \leq n - 1$. Thus, for $0 \leq j < j^\prime \leq n$, we have $g_{j^\prime}^{-1} g_j = g_{j^\prime}^{-1} g_{j^\prime - 1}\, g_{j^\prime - 1}^{-1} g_{j^\prime - 2} \cdots g_{j + 1}^{-1} g_j = h_{j^\prime} h_{j^\prime - 1}^{-1} h_{j^\prime - 1} h_{j^\prime - 2}^{-1} \cdots h_{j + 1} h_j^{-1} = h_{j^\prime} h_j^{-1} \in \tilde{E}_k \tilde{E}_k^{-1}$. We also have $g_j^{-1} g_{j^\prime} = (g_{j^\prime}^{-1} g_j )^{-1} = h_j h_{j^\prime}^{-1} \in \tilde{E}_k \tilde{E}_k^{-1}$ by taking inverses.
In what follows, we divide the proof into three steps.
\medskip
({\bf Step 1}: Each $\Sigma_k$ is a $G$-subcomplex of $(EG)_N$.) For any $g \in G$ and any $s = ((g_0, i_0), \dots, (g_n, i_n)) \in \Sigma_k^{(n)}$ for which $g \cdot s$ is defined, we have $g \cdot s = ((g g_0, i_0), \dots, (g g_n, i_n))$ and $(g g_{j + 1})^{-1} (g g_j) = g_{j + 1}^{-1} g_j \in \tilde{E}_k \tilde{E}_k^{-1}$ for every $0 \leq j \leq n - 1$. Thus, each $\Sigma_k^{(n)}$ is a $G$-invariant subset of $(EG)_N^{(n)}$. Also, by the above remark, each $\Sigma_k$ is clearly a simplicial $G$-complex. Thus $\Sigma_k$ is a $G$-subcomplex.
\medskip
({\bf Step 2}: Each $\Sigma_k$ is ULB.) Take $x \in X$ and $(g_0, i_0) \in (\Sigma_k)_x^{(0)}$. We show that the number of elements $s \in (\Sigma_k)_x$ containing $(g_0, i_0)$ as a vertex is not larger than a universal constant (i.e., a constant independent of the choice of $x$ and $(g_0, i_0)$).
Choose $s = ((g_0, i_0), \dots, (g_n, i_n)) \in (\Sigma_k)_x^{(n)}$. Then, by the definition of $\Sigma_k^{(n)}$, there exist $h_0, \dots, h_n \in \tilde{E}_k$ so that $g_j^{-1} g_{j^\prime} = h_j h_{j^\prime}^{-1}$ for every $j \neq j^\prime$. Thus, $h_j = h_{j^\prime}$ implies that $g_j^{-1} g_{j^\prime}$ falls in the unit space, and hence $g_j = g_{j^\prime}$, contradicting the definition of $(EG)_N^{(n)}$. Therefore, $h_0, \dots, h_n$ must be different. Also, we have $g_j = g_0 g_0^{-1} g_j = g_0 h_0 h_j^{-1}$ for every $0 \leq j \leq n$.
Define $H_{n, x, g_0}$ to be the set of $(h_0, \dots, h_n) \in \tilde{E}_k \times \cdots \times \tilde{E}_k$ satisfying the following conditions: (1) $h_0, \dots, h_n$ are different; (2) $r(h_0) = s(g_0)$; (3) $s(h_j) = s(h_0)$ for every $0 \leq j \leq n$. Then, by what we have proved in the previous paragraph, the image of the map
$$
H_{n, x, g_0} \times \{ 1, \dots, N \}^n \to (\Sigma_k)_x^{(n)} : ( ( h_0, \dots, h_n), (i_1, \dots, i_n)) \mapsto ( (g_0 h_0 h_j^{-1}, i_j)_{j=0}^n )
$$
is equal to $\{ s \in (\Sigma_k)_x^{(n)}\, | \, \text{the first component of } s \text{ is } (g_0, i_0) \}$. Since $\Sigma_k^{(n)}$ is permutation invariant, every $s \in (\Sigma_k)_x^{(n)}$ containing $(g_0, i_0)$ becomes such a tuple after a permutation of its entries. Therefore, we have $\# \{ s \in (\Sigma_k)_x \, | \, (g_0, i_0) \in s\} \leq \sum_{n = 0}^\infty (n + 1) N^n \times \# H_{n, x, g_0}$.
We give an estimate of $\# H_{n, x, g_0}$ from above. Take $(h_0, \dots, h_n) \in H_{n, x, g_0}$. By the definition of $H_{n, x, g_0}$, we see that $h_0 \in \bigsqcup_{j = 1}^k E_j \cap r^{-1}(s( g_0))$. Since each $E_j$ is a one-sheeted set, we have $\# (E_j \cap r^{-1}(s(g_0))) \leq 1$ for every $1 \leq j \leq k$. Thus, the number of choices for $h_0$ is not larger than $k$. Without loss of generality, we may assume that $h_0 \in E_1 \cap r^{-1}(s(g_0))$. Then, by the definition of $H_{n, x, g_0}$, we have $h_1, \dots, h_n \in \bigsqcup_{j = 2}^k E_j \cap s^{-1}(s(h_0))$. Let $j_l$ denote the index so that $h_l \in E_{j_l} \cap s^{-1}(s(h_0))$ for every $1 \leq l \leq n$. Then, since $h_1, \dots, h_n$ are different and each $E_j$ is one-sheeted, $j_1, \dots, j_n$ must be different. Since $\# (E_j \cap s^{-1}(s(h_0))) \leq 1$ for every $2\leq j \leq k$, the number of choices for $(h_1, \dots, h_n)$ is not larger than the number of sequences $(j_1, \dots, j_n)$ consisting of different elements of $\{ 2, \dots, k\}$. Hence, $\# H_{n, x, g_0} \leq k (k -1) \cdots (k - n)$ if $n \leq k - 1$. Clearly $H_{n , x, g_0} = \emptyset$ if $n \geq k$.
Therefore, we conclude that $\# \{ s \in (\Sigma_k)_x \, | \, (g_0, i_0) \in s\} \leq \sum_{n=0}^{k - 1} (n + 1) N^n k (k -1) \cdots (k - n)$, which is independent of the choice of $(x, g_0)$.
Let us show that $\Sigma_k^{(0)} = (EG)_N^{(0)}$ has a fundamental domain of finite measure. Note that $F_N := \bigsqcup_{i=1}^N X = X \times \{ 1, \dots, N \}$ is a fundamental domain of $\Sigma_k^{(0)} = (EG)_N^{(0)}$. Since $\# ((EG)_{N, x}^{(0)} \cap F_N) = N $ for every $x$, we have $\mu_{\Sigma_k^{(0)}}(F_N) = N < \infty$.
\medskip
({\bf Step 3}: The sequence $(\Sigma_k)_{k \geq 1}$ is an exhaustion of $(EG)_N$.) Each sequence $(\Sigma_k^{(n)})_{k \geq 1}$ is increasing by definition. It suffices to show that $((EG)_N)_x^{(n)} = \bigcup_{k\geq 1} (\Sigma_k)_x^{(n)}$ holds for every $n \geq 0$ and $x \in X$. Take $x \in X$, $n \geq 0$ and $s = ((g_0, i_0), \dots, (g_n, i_n)) \in (EG)_{N, x}^{(n)}$. Since $G = \bigsqcup_{k\geq 1} E_k$, there exists $j \geq 1$ such that $g_0^{\pm 1}, \dots, g_n^{\pm 1} \in \tilde{E}_j$; then $g_{l + 1}^{-1} g_l = g_{l + 1}^{-1} (g_l^{-1})^{-1} \in \tilde{E}_j \tilde{E}_j^{-1}$ for every $0 \leq l \leq n - 1$, so that $s \in \Sigma_{j, x}^{(n)}$. Hence we are done.
\end{proof}
We are ready to prove Proposition \ref{exhaustion}.
\begin{proof} (Proposition \ref{exhaustion})
Let $(\Sigma_{N,k})_{k \geq1}$ be a ULB-exhaustion of $(EG)_N$ for each $N \geq 1$, whose existence was established by the above lemma. Then, the sequence $(\Sigma_{k,k})_{k \geq 1}$ is clearly a ULB-exhaustion of $EG$. Note also that any simplicial $G$-complex $\Xi$ can be embedded $G$-equivariantly into the universal $G$-complex $E G$ thanks to Lemma \ref{lem:quasi-periodic}. Then, the sequence $(\Sigma_k)_{k \geq 1}$ defined by $\Sigma_k^{(n)} := \Sigma_{k,k}^{(n)} \cap \Xi^{(n)}$, $n \geq 0$, becomes a ULB-exhaustion of $\Xi$.
\end{proof}
\subsection{Justification}
We will justify the geometric definition of $L^2$-Betti numbers of pmp discrete groupoids following the idea of Neshveyev and Rustad \cite{neshveyev} (which seems to originate in \cite[Remark 6.76]{luck:survey} dealing with the discrete group case). In what follows, we use L\"{u}ck's extension of the usual Murray-von Neumann dimension to arbitrary modules (see \cite{luck_98},\cite[\S\S 6.1]{luck:survey}), keeping the same symbol $\dim_M$.
The next theorem is the main result of this section. Recall that Sauer \cite{sauer} defined the ($n$-th) $L^2$-Betti number of $G$ by $\beta_n^{(2)}(G) = \dim_{L(G)} \mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C} [G]} (L(G), L^\infty(X))$.
\begin{thm} \label{mainthm2}
If $\Sigma$ is a contractible, simplicial $G$-complex, then $\beta_n^{(2)}(\Sigma,G) = \beta_n^{(2)}(G)$ holds for every $n \geq 0$.
\end{thm}
First, we prove the following proposition:
\begin{prop}\label{mainthm1}
For any simplicial $G$-complex $\Sigma$ and any ULB exhaustion $\{ \Sigma_i \}_{i \geq 1}$ of $\Sigma$, we have
\begin{align*}
\beta_n^{(2)}(\Sigma,\{\Sigma_i\}_{i\geq1},G)
= \dim_{L(G)} H_n(L(G) \otimes_{\mathbb{C}[G]} C_\bullet^b(\Sigma))
= \dim_{L(G)} H_n(L(G) \otimes_{\mathbb{C}[G]} C_\bullet(\Sigma))
\end{align*}
for every $n \geq 0$. In particular, $\beta_n^{(2)}(\Sigma,\{\Sigma_i\},G)$ is independent of the choice of $\{\Sigma_i\}_{i\geq1}$, so that we write $\beta_n^{(2)}(\Sigma,G) := \beta_n^{(2)}(\Sigma,\{\Sigma_i\},G)$ from now on.
\end{prop}
Before proving the proposition, we fix some terminology and record two general lemmas. Let $(M, \tau)$ be a finite von Neumann algebra equipped with a faithful normal tracial state. A morphism $h : Q_1 \to Q_2$ between two $M$-modules is called a {\it $\dim_M$-isomorphism} if both $\dim_M \ker h$ and $\dim_M \mathrm{coker}\, h$ are zero. In that case, $\dim_M(Q_1) = \dim_M(Q_2)$ holds thanks to the additivity of $\dim_M$ (see \cite[Theorem 6.7 (4) (b)]{luck:survey}). See e.g.~\cite[\S2]{sauer} for further nice properties of $\dim_M$-isomorphisms. For an $M$-module $Q$, the {\it rank norm} $[ \xi ]_M$ of $\xi \in Q$ is defined to be $\inf \{ \tau(p) \, | \, p \in M^p, p \xi = \xi \}$. Then $d_M ( \xi, \eta) := [ \xi - \eta ]_M$ defines a pseudo-metric on $Q$. The procedure of completion in the metric $d_M$ defines a functor $c_M$, called the {\it functor of rank completion}, from the category of $M$-modules to itself. See \cite[\S2]{thom} and \cite[Lemma 1.1]{neshveyev} for more on this functor $c_M$ and its connection with the dimension function $\dim_M$.
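For orientation, we record how the rank norm looks in the basic commutative case, which is the one used repeatedly below. For $M = L^\infty(X)$ with $\tau(f) = \int_X f \, d\mu$, the projections are exactly the indicator functions $\mathbbm{1}_Z$ of Borel sets $Z \subset X$. For a quasi-periodic $G$-space $U$ and $\xi \in \Gamma(U)$, on which $L^\infty(X)$ acts through $\pi_U$, the condition $\mathbbm{1}_Z \xi = \xi$ means that $Z$ contains $\pi_U(\mathop{\mathrm{supp}}\nolimits \xi)$ up to null sets, so that, as a sketch under this identification,
$$
[\, \xi \,]_{L^\infty(X)} = \inf \{\, \mu(Z) \, | \, \mathbbm{1}_Z \xi = \xi \,\} = \mu\bigl( \pi_U( \mathop{\mathrm{supp}}\nolimits \xi ) \bigr).
$$
This is the form in which the rank norm enters the estimates below.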
Here we quote two general lemmas from \cite{neshveyev}.
\begin{lem}\label{lem:alg1} (\cite[Lemma 1.3]{neshveyev})
Let $N \subset \mathfrak{M} \subset M$ be a triple of algebras such that $N$ and $M$ are finite von Neumann algebras with faithful normal tracial states $\tau_N$ and $\tau_M$, respectively. Assume that the inclusion $N \subset \mathfrak{M}$ satisfies the following condition: for any $m \in \mathfrak{M}$ and $\epsilon > 0$, there exists a $\delta > 0$ such that if $p \in N^p$ satisfies $\tau_N (p) < \delta$, then $[ m p ]_N < \epsilon$. Then, for any $\dim_N$-isomorphic $\mathfrak{M}$-map $Q_1 \to Q_2$, the induced $\mathfrak{M}$-map $\mathrm{Tor}_n^{\mathfrak{M}} (M, Q_1) \to \mathrm{Tor}_n^{\mathfrak{M}} (M, Q_2)$ is $\dim_M$-isomorphic for every $n \geq 0$.
\end{lem}
\begin{lem}\label{lem:alg2}(\cite[Lemma 1.4]{neshveyev})
Let $N \subset \mathfrak{M} \subset M$ be as in Lemma \ref{lem:alg1}.
Assume that the pair $N \subset \mathfrak{M}$ satisfies the assumption of Lemma \ref{lem:alg1}. Then, for any resolution $P_\bullet$ of an $\mathfrak{M}$-module $Q$ such that each $P_k$ has a $d_N$-dense projective submodule, we have $\dim_M \mathrm{Tor}_n^\mathfrak{M} (M, Q) = \dim_M H_n(M \otimes_\mathfrak{M} P_\bullet)$ for every $n \geq 0$.
\end{lem}
In order to use the above lemmas in our situation, we prove the next two lemmas.
\begin{lem} \label{lem:alg3} The pair $L^\infty(X) \subset \mathbb{C}[G]$ satisfies the assumption of Lemma \ref{lem:alg1}.
\end{lem}
\begin{proof}
It is known, see \cite[Lemma 3.3]{sauer}, that any element in $\mathbb{C}[G]$ is written as a finite sum of elements in $\mathbb{C}[G]$ supported in one-sheeted sets. Hence it suffices to show that $[f \mathbbm{1}_Z]_{L^\infty(X)} \leq \tau(\mathbbm{1}_Z)$ for every $f \in \mathbb{C}[G]$ supported in a one-sheeted set $E$ and every Borel subset $Z$ of $X$. We have $(f \mathbbm{1}_Z) (g) = f(g) \mathbbm{1}_E (g) \mathbbm{1}_Z (r(g)) = f(g) \mathbbm{1}_E (g) \mathbbm{1}_{\varphi_E^{-1}(Z)}(s(g)) = (\mathbbm{1}_{\varphi_E^{-1}(Z)} f )(g)$ for all $g \in G$. Hence we have $[ f \mathbbm{1}_Z ]_{L^\infty(X)} = [ \mathbbm{1}_{\varphi_E^{-1}(Z)} f ]_{L^\infty(X)} \leq \tau (\mathbbm{1}_{\varphi_E^{-1}(Z)}) = \mu(\varphi_E^{-1}(Z)) \leq \mu(Z) = \tau (\mathbbm{1}_Z)$. Here the first inequality simply follows from the definition of the rank norm and the second one from the fact that $\varphi_E$ is $\mu$-preserving.
\end{proof}
\begin{lem} \label{lemformain1}
Let $U$ be a quasi-periodic $G$-space with fundamental domain $F$. Then,
\begin{enumerate}
\item $\Gamma(U)$ has a $d_{L^\infty(X)}$-dense, projective $\mathbb{C}[G]$-submodule;
\item if $\mu_U(F) < \infty$, then the $\mathbb{C}[G]$-map
$h : L(G) \otimes_{\mathbb{C}[G]} \Gamma^b(U) \to \Gamma^{(2)}(U)$ sending $m \otimes \xi$ to $m \cdot \xi$ is a $\dim_{L(G)}$-isomorphism.
\end{enumerate}
\end{lem}
\begin{proof}
By Lemma \ref{lem:quasi-periodic} (or more precisely its proof), we may assume that $U = \bigsqcup_{i=1}^\infty G \cdot X_i$. Consider the projective $\mathbb{C}[G]$-module $P := \bigoplus_{i \geq 1} \mathbb{C}[G]\,\mathbbm{1}_{X_i}$ sitting inside $\Gamma^b(U)$.
(1) Take $f \in \Gamma(U)$. For each $m \geq 1$, define $Y_m := \{ x \in X\, | \, \mathop{ {\mathrm{supp} }}\nolimits f \cap \pi_U^{-1}(x) \subset \bigsqcup_{i=1}^m G \cdot X_i \}$. Then $\{ Y_m \}_m$ is an increasing sequence satisfying $\mu(X \setminus \bigcup_{m = 1}^\infty Y_m) = 0$, and hence $d_{L^\infty(X)}( \mathbbm{1}_{Y_m} f, f) \leq \mu(Y_m^c) \to 0$ as $m \to \infty$. Note that $\mathbbm{1}_{Y_m} f$ is supported in $\bigsqcup_{i = 1}^m G \cdot X_i$. Hence $P$ is $d_{L^\infty(X)}$-dense in $\Gamma(U)$, because $\mathbb{C}[G]$ is $d_{L^\infty(X)}$-dense in $\Gamma(G)$, as we now show. Take $f \in \Gamma(G)$. Let us decompose $G$ into one-sheeted sets $G = \bigsqcup_{i = 1}^\infty E_i$; see \S2. For each $m \geq 1$, define $Z_m$ to be the set of $x \in X$ satisfying $\sup_{g \in s^{-1}(x)} | f(g) | \leq m$ and $(\mathop{ {\mathrm{supp} }}\nolimits f \cap s^{-1}(x)) \subset \bigcup_{i=1}^m E_i \cap s^{-1}(x)$. Clearly, $\mathbbm{1}_{Z_m} f \in \mathbb{C}[G]$ converges to $f$ in $d_{L^\infty(X)}$. Consequently, $P$ is the desired projective $\mathbb{C} [G]$-module.
(2) We have $\Gamma^{(2)}(U) = \sideset{}{^\oplus_{i \geq1}} \sum L^2(G) \mathbbm{1}_{X_i}$, see \S\S 3.1. Under the natural identification $L(G)\otimes_{\mathbb{C}[G]} P = \bigoplus_{i\geq1} L(G)\mathbbm{1}_{X_i}$, the restriction $\tilde{h}$ of $h$ to $L(G)\otimes_{\mathbb{C}[G]}P$ is exactly the inclusion $ \bigoplus_{i \geq 1} L(G) \mathbbm{1}_{X_i} \hookrightarrow \Gamma^{(2)}(U)$. Thanks to the $d_{L(G)}$-density of $L(G)$ in $L^2(G)$ together with $\sum_{i=1}^\infty \mu(X_i) = \mu_U(F) < +\infty$, it is plain to see that $\bigoplus_{i\geq1}L(G)\mathbbm{1}_{X_i}$ is $d_{L(G)}$-dense in $\sum_{i\geq1}^\oplus L^2(G)\mathbbm{1}_{X_i}$ so that $\tilde{h}$ is a $\dim_{L(G)}$-isomorphism. Since $P$ is $d_{L^\infty(X)}$-dense in $\Gamma^b(U)$ as we actually saw in (1) above, the inclusion $P \hookrightarrow \Gamma^b(U)$ is $\dim_{L^\infty(X)}$-isomorphic, and hence so is $L(G) \otimes_{\mathbb{C}[G]} P \hookrightarrow L(G) \otimes_{\mathbb{C}[G]} \Gamma^b(U)$ by Lemma \ref{lem:alg1}. Therefore, by applying the functor $c_{L(G)}$ to $\tilde{h}$ we conclude that $h$ is a $\dim_{L(G)}$-isomorphism.
\end{proof}
Since $C_n^\star(\Sigma)$ is defined as a subspace of $\Gamma^\star(\Sigma^{(n)})$, we need the following lemma.
\begin{lem}\label{lem:auxiliary}
Let $\Sigma$ be a simplicial $G$-complex. Then,
\begin{enumerate}
\item every $C_n(\Sigma)$ has a $d_{L^\infty(X)}$-dense projective $\mathbb{C} [G]$-submodule;
\item if $\Sigma$ is ULB, then the $\mathbb{C} [G]$-map $L(G) \otimes_{\mathbb{C} [G]} C_n^b(\Sigma) \to C_n^{(2)}(\Sigma)$ sending $m \otimes \xi$ to $m \cdot \xi$ is a $\dim_{L(G)}$-isomorphism for every $n \geq 0$.
\end{enumerate}
\end{lem}
\begin{proof}
For a given function $f : \Sigma^{(n)} \to \mathbb{C}$, define the function $A_n f$ on $\Sigma^{(n)}$ by $(A_n f)(u) = ((n + 1) !)^{-1} \sum_{\sigma \in \mathfrak{S}_{n+1}} (\mathrm{sgn}\, \sigma) f(\sigma^{-1} u)$. Clearly, $A_n$ defines a $\mathbb{C}[G]$-module map from $\Gamma^\star(\Sigma^{(n)})$ to $C_n^\star(\Sigma)$ that acts on $C_n^\star(\Sigma)$ trivially.
(1) By Lemma \ref{lemformain1}, $\Gamma(\Sigma^{(n)})$ has a $d_{L^\infty(X)}$-dense projective $\mathbb{C}[G]$-submodule $P$. Therefore, $A_n(P)$ is the desired $d_{L^\infty(X)}$-dense projective $\mathbb{C}[G]$-submodule of $C_n(\Sigma)$ since $A_n$ acts on $C_n(\Sigma)$ trivially and is contractive in $d_{L^\infty(X)}$.
(2) It is plain to see that $\mathop{\mathrm{id}}\nolimits \otimes A_n : L(G) \otimes_{\mathbb{C}[G]} \Gamma^b(\Sigma^{(n)}) \to L(G) \otimes_{\mathbb{C}[G]} C_n^b(\Sigma)$ is an $L(G)$-module map that acts on $L(G) \otimes_{\mathbb{C}[G]} C_n^b(\Sigma)$ trivially. Thus, applying the functor $c_{L(G)}$ and using Lemma \ref{lemformain1}, we conclude that the map $L(G) \otimes_{\mathbb{C}[G]} C_n^b(\Sigma) \to C_n^{(2)}(\Sigma)$ is a $\dim_{L(G)}$-isomorphism.
\end{proof}
Note that since $C_n^{(2)}(\Sigma)$ is the image of the projection $A_n$, we have $\dim_{L(G)} C_n^{(2)}(\Sigma) = ((n + 1) !)^{-1} \mu_{\Sigma^{(n)}}(G \backslash \Sigma^{(n)} )$; here $G \backslash \Sigma^{(n)}$ denotes a fundamental domain of $\Sigma^{(n)}$. In particular, if $\Sigma$ is ULB, then $\dim_{L(G)} C_n^{(2)}(\Sigma)$ is finite for every $n \geq 0$.
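The displayed value can be sketched as follows. By conditions (1) and (2) in the definition of a simplicial $G$-complex, all entries of a simplex are distinct, so the permutation action of $\mathfrak{S}_{n+1}$ on each $\Sigma_x^{(n)}$ is free and the averaging projection $A_n$ has fiberwise trace $((n + 1)!)^{-1}$. Combining this with the identification $\Gamma^{(2)}(U) = \sum^{\oplus}_{i \geq 1} L^2(G) \mathbbm{1}_{X_i}$ from \S\S 3.1, which yields $\dim_{L(G)} \Gamma^{(2)}(U) = \sum_{i \geq 1} \mu(X_i) = \mu_U(F)$ for a fundamental domain $F$, one computes
$$
\dim_{L(G)} C_n^{(2)}(\Sigma)
 = \dim_{L(G)} A_n \Gamma^{(2)}(\Sigma^{(n)})
 = \frac{1}{(n + 1)!} \dim_{L(G)} \Gamma^{(2)}(\Sigma^{(n)})
 = \frac{\mu_{\Sigma^{(n)}}(G \backslash \Sigma^{(n)})}{(n + 1)!}.
$$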
Here is the proof of Proposition \ref{mainthm1}.
\begin{proof} (Proposition \ref{mainthm1})
First, consider the case when $\Sigma$ is ULB. The subspace $\mathop{\mathrm{im}}\nolimits \partial_{n+1}^{(2)}$ and its closure have the same $L(G)$-dimension, since $\partial_{n+1}^{(2)} (\partial_{n+1}^{(2)})^\ast$ maps $\overline{\mathop{\mathrm{im}}\nolimits \partial_{n+1}^{(2)}}$ to $\mathop{\mathrm{im}}\nolimits \partial_{n+1}^{(2)}$ injectively. Thus, one can see that the canonical surjection $q : H_n(C_\bullet^{(2)}(\Sigma)) \to \overline{H}_n^{(2)}(\Sigma,G)$ is a $\dim_{L(G)}$-isomorphism. Since $\Sigma$ is ULB, Lemma \ref{lem:auxiliary} enables us to obtain a $\dim_{L(G)}$-isomorphism $h : L(G) \otimes_{\mathbb{C}[G]} C_n^b(\Sigma) \to C_n^{(2)}(\Sigma)$ so that the induced $L(G)$-map $h_* : H_n(L(G)\otimes_{\mathbb{C}[G]} C_\bullet^b(\Sigma)) \to H_n(C_\bullet^{(2)}(\Sigma))$ is a $\dim_{L(G)}$-isomorphism for every $n \geq 0$. Thus, $q \circ h_* : H_n(L(G) \otimes_{\mathbb{C}[G]}C_\bullet^b(\Sigma)) \to \overline{H}_n^{(2)}(\Sigma,G)$ is a $\dim_{L(G)}$-isomorphism for every $n \geq 0$.
Next, consider the case when $\Sigma$ is an arbitrary simplicial $G$-complex. Let $\{ \Sigma_i \}_{i \geq 1}$ be a ULB-exhaustion of $\Sigma$. By what we have actually proved in the previous paragraph, together with the continuity of $\dim_{L(G)}$ under inductive limit (\cite[Theorem 6.13]{luck:survey}), we have $\beta_n^{(2)}(\Sigma,\{\Sigma_i\}_{i \geq 1}, G) = \dim_{L(G)} H_n(L(G)\otimes_{\mathbb{C}[G]} \bigcup_{i \geq 1} C_\bullet^b(\Sigma_i))$. Since $\bigcup_{i \geq 1}C_n^b(\Sigma_i)$ is $d_{L^\infty(X)}$-dense in $C_n^b(\Sigma)$, Lemma \ref{lem:alg1} shows that the last quantity equals $\dim_{L(G)} H_n(L(G) \otimes_{\mathbb{C}[G]} C_\bullet^b(\Sigma))$. Hence the proof of the first equality is completed.
The second equality immediately follows from the $d_{L^\infty(X)}$-density of $C_\bullet^b(\Sigma)$ in $C_\bullet(\Sigma)$ and Lemma \ref{lem:alg1}.
\end{proof}
We prove Theorem \ref{mainthm2} using Proposition \ref{mainthm1}. This will be done by showing the exactness of the chain complex $\dots \stackrel{\partial_2}{\rightarrow} C_1(\Sigma) \stackrel{\partial_1}{\rightarrow} C_0(\Sigma) \stackrel{\epsilon}{\rightarrow} M(X) \to 0$ of $\mathbb{C}[G]$-modules for a contractible, simplicial $G$-complex $\Sigma$;
here $M(X)$ denotes the space of measurable functions on $X$ and $\epsilon$ denotes the $\mathbb{C} [G]$-module map defined by $\epsilon(f)(x) := \sum_{u \in \Sigma_x^{(0)}} f(u)$ for $x \in X$.
To this end, we fix some terminology and auxiliary facts. Let $V$ be a vector space over $\mathbb{Q}$ of countable dimension. We endow $V$ with the discrete Borel structure. A family $\{ V_x \}_{x \in X}$ of subspaces of $V$ is said to be {\it measurable} if for any measurable map $s : X \to V$, the set $\{ x \in X \, | \, s(x) \in V_x \}$ is measurable. A family $\{ T_x \}_{x \in X}$ of ($\mathbb{Q}$-linear) operators on $V$ is said to be {\it measurable} if for any measurable map $s : X \to V$, the map $X \ni x \mapsto T_x s(x) \in V$ is measurable. One can check that the measurability of a family $\{V_x\}_{x \in X}$ (resp.\ $\{T_x\}_{x \in X}$) is equivalent to that of the map $X \ni x \mapsto V_x \in 2^V$ (resp.\ $X \ni x \mapsto T_x \in V^V$). We quote two lemmas from \cite{neshveyev}.
\begin{lem}\label{lem:proj}(\cite[Lemma 2.4]{neshveyev})
If $\{V_x\}_{x \in X}$ is a measurable family of subspaces of $V$, then there exists a measurable family $\{ p_x \}_{x \in X}$ of projections onto $V_x$.
\end{lem}
\begin{lem}\label{lem:homotopy}(\cite[Lemma 2.5]{neshveyev})
Let $\{T_x \}_{x \in X}$, $\{p_x \}_{x \in X}$ and $\{q_x \}_{x \in X}$ be measurable families of operators on $V$ such that the $p_x$ and the $q_x$ are projections. Assume that, for every $x \in X$, the map $T_x$ maps $\ker q_x$ to $\mathop{\mathrm{im}}\nolimits p_x$ bijectively. Let $S_x$ denote the operator on $V = \ker p_x \oplus \mathop{\mathrm{im}}\nolimits p_x$ defined by $S_x \upharpoonright_{\ker p_x} = 0$ and $S_x \upharpoonright_{\mathop{\mathrm{im}}\nolimits p_x} = (T_x \upharpoonright_{\ker q_x})^{-1}$, so that $T_x S_x = p_x$ and $S_x T_x = \mathop{\mathrm{id}}\nolimits_V - q_x$. Then the family $\{ S_x \}_{x \in X}$ is measurable.
\end{lem}
The next lemma is just a translation of \cite[Proposition 2.6]{neshveyev} into our situation. However, we do give its proof for the sake of completeness.
\begin{lem} \label{resolution}
Let $\Sigma$ be a contractible, simplicial $G$-complex. Then the sequence
$$
\dots \stackrel{\partial_2}{\rightarrow} C_1(\Sigma) \stackrel{\partial_1}{\rightarrow} C_0(\Sigma) \stackrel{\epsilon}{\rightarrow} M(X) \to 0
$$
is contractible as a chain complex of $L^\infty(X)$-modules.
\end{lem}
\begin{proof}
First, we consider the same sequence with rational coefficients. Let $V$ be the vector space which consists of finitely supported functions $f : \mathbb{N} \to \mathbb{Q}$. Clearly, $V$ is of countable dimension. Construct an embedding $C_n(\Sigma_x ; \mathbb{Q}) \to V$ for each $n \geq 0$ as follows: since $\Sigma^{(n)}$ can be written as a disjoint union of its Borel sections, we may regard $\Sigma^{(n)}$ as a fiber subspace of the trivial fiber space $X \times \mathbb{N}$. Then, each $\Sigma_x^{(n)}$ is a subset of $\{ x \} \times \mathbb{N}$. Thus, we can regard $C_n(\Sigma_x; \mathbb{Q})$ as a subspace of $V$ naturally. It is not hard to see that $x \mapsto \ker \partial_{n, x} \subset C_n(\Sigma_x; \mathbb{Q})$ is measurable. Hence, applying Lemma \ref{lem:proj}, we get a measurable family $\{p_{n, x} \}_{x \in X}$ of projections onto $\ker \partial_{n, x}$. The contractibility of $\Sigma$ guarantees that $\partial_{n, x}$ maps $\ker p_{n +1, x}$ to $\mathop{\mathrm{im}}\nolimits p_{n, x}$ bijectively. Thus, applying Lemma \ref{lem:homotopy}, we obtain measurable families $\{ h_{n, x} \}_{x \in X}$ ($n \geq -1$) of operators $h_{n, x} : C_n(\Sigma_x; \mathbb{Q}) \to C_{n + 1}(\Sigma_x; \mathbb{Q})$ satisfying
\begin{equation}\label{h}
\mathrm{id} = h_{n - 1, x} \circ \partial_{n, x} + \partial_{n+1, x} \circ h_{n, x}
\end{equation}
for every $n \geq -1$ (with $C_{-1}(\Sigma_x; \mathbb{Q}) = \mathbb{Q}$, $\partial_{0, x} = \epsilon_x$).
Next, consider the sequence with complex coefficients. By linearity we extend each $h_{n, x}$ to an operator from $C_n(\Sigma_x)$ to $C_{n+1}(\Sigma_x)$, keeping Equation (\ref{h}). It is straightforward to check that the family $\{h_{n, x} \}_{x \in X}$ is measurable. Thus, the formula $(h_n f)(u) = (h_{n, x} f_x)(u)$ ($u \in \Sigma_x^{(n+1)}$) defines an operator $h_n : C_n(\Sigma) \to C_{n +1}(\Sigma)$. Equation (\ref{h}) implies $\mathop{\mathrm{id}}\nolimits = h_{n-1} \circ \partial_n + \partial_{n+1} \circ h_n$, that is, $\{ h_n \}_{n \geq -1}$ is a chain homotopy from $\mathop{\mathrm{id}}\nolimits$ to $0$.
\end{proof}
We are ready to prove Theorem \ref{mainthm2}.
\begin{proof} (Theorem \ref{mainthm2})
Note that $L^\infty(X)$ is $d_{L^\infty(X)}$-dense in $M(X)$, and hence the inclusion map $L^\infty(X) \hookrightarrow M(X)$ is $\dim_{L^\infty(X)}$-isomorphic so that the associated $L(G)$-map from $\mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C}[G]}(L(G),L^\infty(X))$ to $\mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C}[G]}(L(G),M(X))$ is also $\dim_{L(G)}$-isomorphic for every $n \geq 0$. Therefore, $\beta_n^{(2)}(G) = \dim_{L(G)} \mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C}[G]} (L(G),M(X))$. With Lemma \ref{lem:alg2} and Lemma \ref{lem:auxiliary} (1), the resolution of $M(X)$ in Lemma \ref{resolution} enables us to compute
$$
\dim_{L(G)} \mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C}[G]}(L(G),M(X)) = \dim_{L(G)} H_n (L(G) \otimes_{\mathbb{C}[G]} C_\bullet(\Sigma)),
$$
which equals $\beta_n^{(2)}(\Sigma,G)$ by Proposition \ref{mainthm1}.
\end{proof}
\begin{rem}
Berm\'udez \cite{bermudez} gave another expression of Sauer's $\beta_n^{(2)}(G)$ in terms of his generalization of the Connes-Shlyakhtenko $L^2$-Betti numbers \cite{connes-shlyakhtenko}. He defined, for an inclusion $A \subset B$ of unital $*$-algebras that is called a {\it tracial extension}, its $L^2$-Betti numbers denoted by $\beta_n^{(2)}(B / A)$. Every pmp discrete groupoid $G$ defines a tracial extension $L^\infty(X) \subset \mathbb{C} [G]$. He has proved that $\beta_n^{(2)}(\mathbb{C} [G] / L^\infty(X)) = \beta_n^{(2)}(G)$ holds for every $n \geq 0$ (\cite[Theorem 1.2]{bermudez}).
\end{rem}
\medskip
Since the universal complex $EG$ (see \S\S 3.1) is contractible, we have:
\begin{cor} For every $n \geq 0$, we have $\beta_n^{(2)}(G) = \beta_n^{(2)}(EG,G)$.
\end{cor}
As in the proof of Lemma \ref{resolution}, we can also prove the following:
\begin{cor}\label{n-connected}
If $\Sigma$ is an $n$-connected, simplicial $G$-complex, i.e., $\Sigma_x$ is $n$-connected in the usual sense (see e.g.~\cite[Chapter 1, Section 8]{spanier}) for $\mu$-a.e.~$x$, then $\beta_k^{(2)}(\Sigma, G) = \beta_k^{(2)}(G)$ as long as $0 \leq k \leq n$, and moreover, $\beta_{n +1}^{(2)}(\Sigma, G) \geq \beta_{n+1}^{(2)}(G)$.
\end{cor}
\begin{proof}
For $\mu$-a.e.~$x \in X$, the sequence
$$C_{n+1}(\Sigma_x) \stackrel{\partial_{n+1, x}}{\to} \cdots \stackrel{\partial_{2, x}}{\to} C_1(\Sigma_x) \stackrel{\partial_{1, x}}{\to} C_0(\Sigma_x) \stackrel{\epsilon_x}{\to} \mathbb{C} \to 0$$
is exact since $\Sigma_x$ is $n$-connected. Then, by the proof of Lemma \ref{resolution}, we conclude that the sequence $C_{n+1}(\Sigma) \stackrel{\partial_{n+1}}{\to} \cdots \stackrel{\partial_{2}}{\to} C_1(\Sigma) \stackrel{\partial_{1}}{\to} C_0(\Sigma) \stackrel{\epsilon}{\to} M(X) \to 0$ is exact. Taking a projective $\mathbb{C}[G]$-resolution of $\ker \partial_{n+1}$, we get a resolution $P_\bullet$ of $M(X)$ such that $P_k$ is projective for every $k \geq n+2$. For every $k \leq n$, we have $H_k(L(G)\otimes_{\mathbb{C}[G]}P_\bullet) = H_k(L(G)\otimes_{\mathbb{C}[G]}C_\bullet(\Sigma))$, hence $\beta_k^{(2)}(G) = \beta_k^{(2)}(\Sigma,G)$. Since $\mathop{\mathrm{im}}\nolimits \partial_{n+2} \subset \mathop{\mathrm{im}}\nolimits (P_{n+2} \to P_{n+1})$, we get a surjective $L(G)$-map $H_{n+1}(L(G) \otimes_{\mathbb{C}[G]} C_\bullet(\Sigma)) \to H_{n+1}(L(G) \otimes_{\mathbb{C}[G]} P_\bullet)$; implying $\beta_{n+1}^{(2)}(\Sigma, G ) \geq \beta_{n+1}^{(2)}(G)$.
\end{proof}
\section{Costs of pmp discrete groupoids}
\subsection{Various definitions of costs and their equivalence}
We recall some definitions of costs of pmp discrete groupoids and prove their equivalence.
\subsubsection{Measure theoretic approach}
This is a straightforward generalization of Gaboriau's definition~\cite{gaboriau:cost} to pmp discrete groupoids. Let $\mathcal{E}$ be an at most countable family of elements of $\mathcal{G}_G$, the set of one-sheeted sets; see \S2. A non-empty product $E_1^{\epsilon_1} \cdots E_n^{\epsilon_n}$ with $E_i \in \mathcal{E}$, $\epsilon_i \in \{ 1, -1\}$ ($1 \leq i \leq n$) is called a {\it reduced word} in $\mathcal{E}$ if $E_i = E_{i+1}$ implies $\epsilon_i = \epsilon_{i +1}$ for every $1 \leq i \leq n-1$. Let $\mathrm{Wr}(\mathcal{E})$ denote the set of reduced words in $\mathcal{E}$. A family $\mathcal{E}$ is called a {\it graphing} of $G$ if it generates $G$ up to null sets, namely
$$\mu^G( G \setminus (X \cup \bigcup_{W \in \mathrm{Wr}(\mathcal{E})} W )) = 0$$
holds. The {\it cost} of a graphing $\mathcal{E}$ is defined to be
$$C_\mu(\mathcal{E}) := \sum_{E \in \mathcal{E}} \mu^G(E) = \sum_{E \in \mathcal{E}} \mu(s(E)) = \sum_{E \in \mathcal{E}} \mu(r(E)),$$
and that of $G$ is defined to be $C_\mu(G) = \inf \{ C_\mu(\mathcal{E}) |\, \mathcal{E} : \text{graphing of }\, G \}$.
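For orientation, we record a standard upper bound as an illustration (it is not used in the sequel): if $G = X \rtimes \Gamma$ is the transformation groupoid of a pmp action of a group $\Gamma$ generated by a finite set $S$ (transformation groupoids are recalled just before Lemma \ref{trivial_action}), then $\mathcal{E}_S := \{ X \times \{ \gamma \} \}_{\gamma \in S}$ is a graphing of $G$, and hence
$$C_\mu(X \rtimes \Gamma) \leq C_\mu(\mathcal{E}_S) = \sum_{\gamma \in S} \mu(s(X \times \{ \gamma \})) = \sum_{\gamma \in S} \mu(X) = \# S.$$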
There is another expression of costs used by Ab\'ert and Weiss \cite{abert-weiss}. A Borel subset $A \subset G$ is called a {\it generating set} of $G$ if $\mu^G( G \setminus \bigcup_{n \geq 1} (A \cup A^{-1} \cup X )^n ) = 0$ holds. Let $\tilde{C}_\mu(G)$ temporarily denote the number $\inf \{ \mu^G(A) \, | \, A : \text{ generating set of }\, G \}$.
\begin{rem} $C_\mu(G) = \tilde{C}_\mu(G)$.
\end{rem}
\begin{proof}
For any graphing $\mathcal{E}$ of $G$, the set $A_{\mathcal{E}} := \bigcup_{E \in \mathcal{E}} E$ is a generating set of $G$. Thus, we have $\tilde{C}_\mu(G) \leq \mu^G(A_{\mathcal{E}}) \leq \sum_{E \in \mathcal{E}} \mu^G(E) = C_\mu(\mathcal{E})$. Hence $\tilde{C}_\mu(G) \leq C_\mu(G)$. Conversely, take a generating set $A \subset G$. Let $G = \bigsqcup_{i \in I} E_i$ be a countable decomposition of $G$ into one-sheeted sets. Then $\mathcal{E}_A := \{ A \cap E_i \}_{i \in I}$ is a graphing of $G$. Thus, we have $C_\mu(G) \leq C_\mu(\mathcal{E}_A) = \sum_{i \in I} \mu^G(A \cap E_i) = \mu^G(A)$. Hence $C_\mu(G) \leq \tilde{C}_\mu(G)$.
\end{proof}
\subsubsection{Operator algebra approach}
Let $(M, \tau)$ be a finite von Neumann algebra equipped with a faithful normal tracial state, $A$ be a commutative von Neumann subalgebra, and $E^M_A : M \to A$ be the $\tau$-preserving conditional expectation. The {\it normalizing groupoid} of $A$ in $M$ is defined to be the set $\mathcal{G}(M \supset A)$ of partial isometries $v \in M$ satisfying the following: (i) the support projection and the range projection of $v$ belong to $A$; (ii) $v A v^\ast = A v v^\ast$. Let us recall the definition of an $E^M_A$-groupoid, an operator algebraic counterpart of the set of one-sheeted sets.
\begin{df} (\cite[Definition 2]{ueda})
An {\it $E^M_A$-groupoid} is a subset $\mathcal{G}$ of $\mathcal{G}(M \supset A)$ satisfying the following conditions:
\begin{enumerate}
\item If $u$, $v \in \mathcal{G}$ then $u v \in \mathcal{G}$.
\item If $u \in \mathcal{G}$ then $u^{\ast} \in \mathcal{G}$.
\item Every partial isometry in $A$ belongs to $\mathcal{G}$.
\item Let $\{ u_k \}_k$ be a family of elements of $\mathcal{G}$. If both $\{u_k^\ast u_k \}_k$ and $\{u_k u_k^\ast \}_k$ are mutually orthogonal families, then the sum $\sum_k u_k$, which converges in the $\sigma$-strong* topology, belongs to $\mathcal{G}$.
\item For any $u \in \mathcal{G}$ there exists a projection $e \in A$ satisfying $e \leq u^{\ast} u$ and $E_A(u) = e u$.
\item For any $u \in \mathcal{G}$ and $x \in M$ we have $E_A(u x u^\ast) = u E_A(x) u^\ast$.
\end{enumerate}
\end{df}
An at most countable family $\mathcal{U}$ of elements of $\mathcal{G}$ is called a {\it graphing} of $\mathcal{G}$ if $\mathcal{G}^{\prime \prime} = A \vee \mathcal{U}^{\prime \prime}$. The {\it cost} of a graphing $\mathcal{U}$ is defined to be $C_\tau (\mathcal{U}) = \sum_{u \in \mathcal{U}} \tau (u^{\ast} u)$, and that of $\mathcal{G}$ is defined to be $\inf \{ C_\tau(\mathcal{U})\, | \, \mathcal{U} : \text{a graphing of }\, \mathcal{G} \}$.
\subsubsection{Equivalence between two approaches}
In the rest of this section, $(M, \tau)$ and $A$ are $(L(G), \tau_G)$ and $L^\infty(X)$, respectively. Define $\mathcal{G}(G)$ to be the set of elements $u \in M$ of the form $u = a u(E)$, where $a$ is a partial isometry in $A$ and $E$ is a one-sheeted set of $G$. It is easy to see that $\mathcal{G}(G)$ is an $E^M_A$-groupoid and that $\mathcal{G}(G)^{\prime \prime} = M$. The next lemma, which is missing in \cite{ueda}, guarantees the equivalence between the above two approaches.
\begin{lem}\label{equivalence}
$C_{\tau_G}(\mathcal{G}(G)) = C_\mu(G)$.
\end{lem}
\begin{proof}
Let $\mathcal{U}$ be a graphing of $\mathcal{G}(G)$. Then, for each $u \in \mathcal{U}$, there exist a partial isometry $a_u \in A$ and $E_u \in \mathcal{G}_G$ such that $u = a_u u(E_u)$. We show that $\mathcal{E}_{\mathcal{U}} := \{ E_u \}_{u \in \mathcal{U}}$ is a graphing of $G$. Suppose that this is not the case, that is, $\mu^G(G \setminus (X \cup \bigcup_{W \in \mathrm{Wr}(\mathcal{E}_{\mathcal{U}})} W )) > 0$. Then, there exists a non-null one-sheeted set $F$ of $G$ such that $F \subset G \setminus (X \cup \bigcup_{W \in \mathrm{Wr}(\mathcal{E}_{\mathcal{U}})} W )$. Since $\mu(s(F)) = \mu^G(F) \neq 0$, we have $u(F)^* u(F) = \mathbbm{1}_{s(F)} \neq 0$ and hence $u(F) \neq 0$. On the other hand, since $F \cap (X \cup \bigcup_{W \in \mathrm{Wr}(\mathcal{E}_{\mathcal{U}})} W) = \emptyset$, we have $E^M_A(u(F)) = \mathbbm{1}_{X \cap F} = 0$ and $E^M_A(u(W)^* u(F)) = E^M_A(u(W^{-1} \cdot F)) = \mathbbm{1}_{s(W \cap F)} = 0$ for every $W \in \mathrm{Wr}(\mathcal{E}_{\mathcal{U}})$. Thus, by \cite[Lemma 3]{ueda}, we have $u(F) = 0$, which contradicts $u(F) \neq 0$. Hence $\mathcal{E}_{\mathcal{U}}$ is a graphing of $G$. Then, one computes $C_{\tau_G}(\mathcal{U}) = \sum_{u \in \mathcal{U}} \tau_G( u(E_u)^* a_u^* a_u u(E_u)) = \sum_{u \in \mathcal{U}} \tau_G( u(E_u^{-1} E_u)) = \sum_{u \in \mathcal{U}} \mu(s(E_u)) = C_\mu(\mathcal{E}_{\mathcal{U}}) \geq C_\mu(G)$. Since this inequality holds for every graphing $\mathcal{U}$ of $\mathcal{G}(G)$, we obtain $C_{\tau_G}(\mathcal{G}(G)) \geq C_\mu(G)$.
Let $\mathcal{E}$ be a graphing of $G$. We show that $\mathcal{U}_{\mathcal{E}} := \{ u(E) \, | \, E \in \mathcal{E} \}$ is a graphing of $\mathcal{G}(G)$. Let $ \{ W_j \}_{j \geq 0}$ be an enumeration of $\mathrm{Wr}(\mathcal{E}) \cup \{ X \}$ with $W_0 = X$. Define a family $\{ \tilde{W}_j \}_{j \geq 0}$ inductively by $\tilde{W}_0 = W_0$ and $\tilde{W}_n = W_n \setminus (\bigcup_{j=0}^{n-1} \tilde{W}_j)$. Then $G = \bigsqcup_{j \geq 0} \tilde{W}_j$ up to null sets. Take $E \in \mathcal{G}_G$. Since $E = \bigsqcup_{j \geq 0} (E \cap \tilde{W}_j)$ up to null sets, we have $u(E) = \sum_{j \geq 0} u(E \cap \tilde{W}_j)$ in the $\sigma$-strong topology. Since $E \cap \tilde{W}_j \subset W_j$, we have $u(E \cap \tilde{W}_j) = \mathbbm{1}_{r(E \cap \tilde{W}_j)} u (W_j) \in A \vee \mathcal{U}_{\mathcal{E}}^{\prime \prime} $ for every $j \geq 0$. Thus $u(E) \in A \vee \mathcal{U}_{\mathcal{E}}^{\prime \prime}$ for every $E \in \mathcal{G}_G$. Hence, we conclude that $M = A \vee \mathcal{U}_{\mathcal{E}}^{\prime \prime}$, that is, $\mathcal{U}_{\mathcal{E}}$ is a graphing of $\mathcal{G}(G)$. Then, one computes $C_\mu(\mathcal{E}) = \sum_{E \in \mathcal{E}} \mu(s(E)) = \sum_{E \in \mathcal{E}} \tau_G( u(E)^* u(E)) = C_{\tau_G}(\mathcal{U}_{\mathcal{E}}) \geq C_{\tau_G} ( \mathcal{G}(G))$. Since the inequality holds for every graphing $\mathcal{E}$ of $G$, we obtain $C_\mu(G) \geq C_{\tau_G}(\mathcal{G}(G))$.
\end{proof}
\subsection{Some properties of groupoid cost}
We prove that three important results of Gaboriau \cite{gaboriau:cost} remain true for arbitrary pmp discrete groupoids. The first two (Proposition \ref{induction} and Theorem \ref{additivity}) are proved by translating the corresponding results of \cite{ueda} into the pmp discrete groupoid setting, although one could also prove them directly in the groupoid framework by translating the proofs of \cite{ueda}. The last one (Theorem \ref{treeing_attains_cost}), a central result in the theory of costs, is proved directly because it is missing in \cite{ueda}.
\subsubsection{Induction formula} For any Borel subsets $Y_1$, $Y_2 \subset X$, the symbol $G^{Y_1}_{Y_2}$ denotes the set $s^{-1}(Y_1) \cap r^{-1}(Y_2)$. The {\it restriction} $G \upharpoonright_{Y}$ of $G$ to a Borel subset $Y \subset X$ is defined to be $G^Y_Y$.
An at most countable family $\mathcal{E} \subset \mathcal{G}_G$ is called a {\it treeing} of $G$ if $\mu^G(W \cap X) = 0$ for every reduced word $W$ in $\mathcal{E}$. For an $E^M_A$-groupoid $\mathcal{G}$, an at most countable family $\mathcal{U} \subset \mathcal{G}$ is called a {\it treeing} if $E^M_A (w) = 0$ for every reduced word $w$ in $\mathcal{U}$. A pmp discrete groupoid $G$ (resp. an $E^M_A$-groupoid $\mathcal{G}$) is said to be {\it treeable} if it has a treeing which is also a graphing. Note that $E^M_A(u(E)) = 0$ if and only if $\mu^G(E \cap X) = 0$. Indeed, it is easy to see that $E^M_A(u(E)) = u(E \cap X)$.
We prove the following proposition:
\begin{prop}\label{induction}(groupoid version of \cite[Proposition II. 6]{gaboriau:cost})
Let $Y \subset X$ be a Borel subset satisfying $\# G^x_Y \geq 1$ for $\mu$-a.e.~$x \in X$. Then, we have the following:
\begin{enumerate}
\item $C_\mu(G) - 1 = C_\mu(G \upharpoonright_Y) - \mu(Y)$;
\item $G$ is treeable if and only if so is $G \upharpoonright_Y$.
\end{enumerate}
\end{prop}
The next lemma seems standard, but we give its proof for the sake of completeness.
\begin{lem}\label{induction_lem}
For a Borel subset $Y \subset X$, the inequality $\# G^x_Y \geq 1$ holds for $\mu$-a.e.~$x \in X$ if and only if the central support projection $z_M( \mathbbm{1}_Y)$ equals $1$.
\end{lem}
\begin{proof}
Suppose that $z_M(\mathbbm{1}_Y) = 1$. The set $\tilde{Y} := \{ x \in X \, | \, \# G^x_Y \geq 1 \}$ is a $G$-invariant Borel subset that contains $Y$. Thus, we have $\mathbbm{1}_Y \leq \mathbbm{1}_{\tilde{Y}}$, which is a central projection in $M$. Hence $1 = z_M(\mathbbm{1}_Y) \leq \mathbbm{1}_{\tilde{Y}}$, that is, $\mathbbm{1}_{\tilde{Y}} = 1$. This implies that $\# G^x_Y \geq 1$ holds for $\mu$-a.e.~$x \in X$.
Conversely, suppose that $\# G^x_Y \geq 1$ holds for $\mu$-a.e.~$x \in X$. Then, $G \cdot Y := r(s^{-1}(Y))$ is a conull subset, thus $\mathbbm{1}_{G \cdot Y} = 1$. Let $G = \bigsqcup_{i \geq 1} E_i$ be a decomposition into one-sheeted sets; see \S 2. Then, we have $G \cdot Y = \bigcup_{i \geq 1} \varphi_{E_i} (Y)$. Thus, we have $1 = \mathbbm{1}_{G \cdot Y} = \bigvee_{i \geq 1} \mathbbm{1}_{\varphi_{E_i}(Y)} = \bigvee_{i \geq 1} u(E_i) \mathbbm{1}_Y u(E_i)^*$. On the other hand, by an explicit description of the central support, we have $u(E_i) \mathbbm{1}_Y u(E_i)^* \leq z_M(\mathbbm{1}_Y)$ for every $i \geq 1$. Therefore we have $z_M(\mathbbm{1}_Y) = 1$.
\end{proof}
\begin{proof} (Proposition \ref{induction})
(1) We have $z_M(\mathbbm{1}_Y) = 1$ by Lemma \ref{induction_lem}. Applying \cite[Proposition 15]{ueda}, we get $C_\tau(\mathcal{G}(G)) - 1 = C_{\tau \upharpoonright_{\mathbbm{1}_Y M \mathbbm{1}_Y}} ( \mathbbm{1}_Y \mathcal{G}(G) \mathbbm{1}_Y) - \tau (\mathbbm{1}_Y)$. It is not hard to see that $\mathbbm{1}_Y \mathcal{G}(G) \mathbbm{1}_Y = \mathcal{G}(G \upharpoonright_Y)$ and that $\mathbbm{1}_Y M \mathbbm{1}_Y = L(G \upharpoonright_Y)$. Thus, applying Lemma \ref{equivalence}, we get an equality $C_\mu(G) - 1 = C_\mu(G \upharpoonright_Y) - \mu(Y)$.
(2) Thanks to \cite[Proposition 15]{ueda}, it suffices to show that $G$ is treeable if and only if so is $\mathcal{G}(G)$. The only if part is easy. Let $\mathcal{U}$ be a treeing of $\mathcal{G}(G)$ and $\mathcal{E}_{\mathcal{U}}$ be its associated graphing of $G$ (see the proof of Lemma \ref{equivalence}). Then, the family $\{ A \vee \{ u \}^{\prime \prime} \}_{u \in \mathcal{U}}$ is a free family of von Neumann subalgebras with respect to $E^M_A$; see \cite[\S 3.8]{voiculescu-dykema-nica} for the definition of freeness. Since $u(E_u) \in A \vee \{ u \}^{\prime \prime}$ for every $u \in \mathcal{U}$, the freeness of $\{ A \vee \{ u \}^{\prime \prime} \}_{u \in \mathcal{U}}$ implies that $\mathcal{E}_{\mathcal{U}}$ is a treeing of $G$. Hence we are done.
\end{proof}
\subsubsection{Additivity formula} Let $G_1 \supset G_3 \subset G_2$ be subgroupoids of a pmp discrete groupoid $G$ with $G_3 = G_1 \cap G_2$. We say that $G$ is the free product of $G_1$ and $G_2$ with amalgamation over $G_3$, and write $G = G_1 \bigstar_{G_3} G_2$, if the following conditions are satisfied: $G$ is generated by $G_1$ and $G_2$; for any alternating word $E_1 \cdots E_n$ in $\mathcal{G}_{G_1}$ and $\mathcal{G}_{G_2}$ satisfying $\mu^G( E_i \cap G_3) = 0$ for every $1 \leq i \leq n$, we have $\mu^G( (E_1 \cdots E_n) \cap G_3) = 0$. A rigorous (i.e., measurable) construction of free products with amalgamation was given in \cite{kosaki}, but we do not need it here.
\begin{thm}\label{additivity}(groupoid version of \cite[Th\'eor\`eme IV. 15]{gaboriau:cost})
Let $G_1 \supset G_3 \subset G_2$ be subgroupoids of a pmp discrete groupoid $G$ with $G_3 = G_1 \cap G_2$. Assume that $G = G_1 \bigstar_{G_3} G_2$ and that $G_3$ is principal and hyperfinite. Assume further that both $C_\mu(G_1)$ and $C_\mu(G_2)$ are finite. Then, $C_\mu(G) = C_\mu(G_1) + C_\mu(G_2) - C_\mu(G_3)$ holds.
\end{thm}
\begin{proof}
We use the following notation: $\mathcal{G}_i = \mathcal{G}(G_i)$, $N_i = \mathcal{G}_i^{\prime \prime} = L(G_i)$. In order to apply \cite[Theorem 9]{ueda} to our situation, we show the following assertions:
\begin{enumerate}
\item $(M, E^M_{N_3}) = (N_1, E^M_{N_3} \upharpoonright_{N_1}) \bigstar_{N_3} (N_2, E^M_{N_3} \upharpoonright_{N_2})$;
\item $N_3$ is a hyperfinite von Neumann algebra that contains $A$ as a MASA;
\item the smallest $E^M_A$-groupoid $\mathcal{G}_1 \vee \mathcal{G}_2$ which contains $\mathcal{G}_1$ and $\mathcal{G}_2$ equals $\mathcal{G}(G)$.
\end{enumerate}
(1) First, we show that $M$ is generated by $N_1$ and $N_2$. Let $\mathcal{E}_i$ be a graphing of each $G_i$. Since $G$ is generated by $G_1$ and $G_2$, we have $\mu^G( G \setminus (X \cup \bigcup_{W \in \mathrm{Wr}( \mathcal{E}_1 \cup \mathcal{E}_2) } W)) = 0$. Then, by an argument similar to that in the proof of Lemma \ref{equivalence}, we conclude that $u(\mathcal{G}_G) \subset N_1 \vee N_2$. Thus $M = N_1 \vee N_2$.
Next, we show that $u(\mathcal{G}_{G_1})$ and $u(\mathcal{G}_{G_2})$ are $*$-free with amalgamation $N_3$ with respect to $E^M_{N_3}$. It is not hard to see that $E^M_{N_3}(u(E)) = u(E \cap G_3)$ for every $E \in \mathcal{G}_G$. Thus, $\mu^G(E \cap G_3) = 0$ if and only if $E^M_{N_3}(u(E)) = 0$; this fact enables us to show the assertion.
(2) Since $G_3$ is principal, $G_3$ is nothing but a pmp discrete equivalence relation. Hence, $N_3$ is a hyperfinite von Neumann algebra that contains $A$ as a MASA; see \cite[Proposition 2.9]{feldman-moore}.
(3) Let $\mathcal{E}_i$ be a graphing of each $G_i$. Then, by the proof of Lemma \ref{equivalence}, $\mathcal{U}_i := u(\mathcal{E}_i)$ is a graphing of $\mathcal{G}_i$. Also, we have proved that $M = N_1 \vee N_2$. Thus, $\mathcal{U} := \mathcal{U}_1 \cup \mathcal{U}_2$ is a graphing of $\mathcal{G}(G)$. Therefore, for every $u \in \mathcal{G}(G)$, by \cite[Lemma 3]{ueda}, there exists a family $\{ u_w \}_{w \in \mathrm{Wr}(\mathcal{U})} \subset \mathcal{G}(G)$ satisfying the following: (i) every $u_w$ is a product of a partial isometry in $A$ and a reduced word in $\mathcal{U}$; (ii) the support projections and range projections respectively form mutually orthogonal families; (iii) $u = \sum_{w \in \mathrm{Wr}(\mathcal{U})} u_w$ in the $\sigma$-strong$^*$ topology. Since each $u_w$ belongs to $\mathcal{G}_1 \vee \mathcal{G}_2$, the above condition (ii) implies that $u \in \mathcal{G}_1 \vee \mathcal{G}_2$.
Hence we can apply \cite[Theorem 9]{ueda} to our $E^M_A$-groupoids $\mathcal{G}_1 \supset \mathcal{G}_3 \subset \mathcal{G}_2$. Then, by Lemma \ref{equivalence}, we conclude that $C_\mu(G) = C_\mu(G_1) + C_\mu(G_2) - C_\mu(G_3)$ holds; note that both $C_\mu(G_1)$ and $C_\mu(G_2)$ are finite by assumption, and $C_\mu(G_3) \leq 1$ since $G_3$ is a hyperfinite pmp equivalence relation.
\end{proof}
\subsubsection{Any treeing attains the cost}
\begin{thm} \label{treeing_attains_cost} (groupoid version of \cite[Th\'eor\`eme IV. 1]{gaboriau:cost})
If $G$ is generated by a treeing $\mathcal{E}$, then we have $C_\mu(G) = C_\mu(\mathcal{E})$.
\end{thm}
To prove the theorem, we introduce some terminology and lemmas. A Borel subset $A \subset X$ is said to be $G$-{\it invariant} if $r(s^{-1}(A)) \subset A$.
\begin{lem}\label{cost_decomp}
If $X = \bigsqcup_{i \in I} X_i$ is a countable Borel partition by $G$-invariant sets, then we have $C_\mu(G) = \sum_{i \in I} C_\mu(G \upharpoonright_{X_i})$.
\end{lem}
\begin{proof}
Let $\mathcal{E}$ be a graphing of $G$. Since each $X_i$ is $G$-invariant, the family $\mathcal{E}_i := \{ s^{-1}(s(E) \cap X_i) \, | \, E \in \mathcal{E} \}$ is a graphing of each $G \upharpoonright_{X_i}$. Then $C_\mu(\mathcal{E}) = \sum_{i \in I} C_\mu(\mathcal{E}_i) \geq \sum_{i \in I} C_\mu(G \upharpoonright_{X_i})$. Thus $C_\mu(G) \geq \sum_{i \in I} C_\mu(G \upharpoonright_{X_i})$. Conversely, let $\mathcal{E}_i$ be a graphing of each $G \upharpoonright_{X_i}$. Then, $\bigcup_{i \in I} \mathcal{E}_i$ is a graphing of $G$, and hence $C_\mu(G) \leq \sum_{i \in I} C_\mu(\mathcal{E}_i)$. Hence we have $C_\mu(G) \leq \sum_{i \in I} C_\mu(G \upharpoonright_{X_i})$.
\end{proof}
Let $\Gamma$ be a discrete group and $X \times \Gamma \ni (x, \gamma) \mapsto x \gamma \in X$ be a (not necessarily essentially free) pmp action on a probability space $(X, \mu)$. Define a discrete groupoid $X \rtimes \Gamma$ as follows: $X \rtimes \Gamma = X \times \Gamma$ as a Borel space, where $\Gamma$ is endowed with the discrete Borel structure, and the groupoid operations are defined in the following manner: $s : (x, \gamma) \mapsto x \gamma$, $r : (x, \gamma) \mapsto x$ and $(x, \gamma_1) (x \gamma_1, \gamma_2) := (x, \gamma_1 \gamma_2)$. This discrete groupoid clearly becomes pmp with $\mu$. We call the groupoid $X \rtimes \Gamma$ the {\it transformation groupoid} associated with the action.
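As a quick check of the conventions (source on the right, range on the left), a pair $((x, \gamma_1), (x \gamma_1, \gamma_2))$ is composable precisely because $s(x, \gamma_1) = x \gamma_1 = r(x \gamma_1, \gamma_2)$, and the product is compatible with the source and range maps:
$$s((x, \gamma_1)(x \gamma_1, \gamma_2)) = x \gamma_1 \gamma_2 = s(x \gamma_1, \gamma_2), \qquad r((x, \gamma_1)(x \gamma_1, \gamma_2)) = x = r(x, \gamma_1).$$
The inverse of $(x, \gamma)$ is $(x \gamma, \gamma^{-1})$, and the units are the elements $(x, e)$, which are identified with $X$.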
\begin{lem}\label{trivial_action}
For any finite measure space $(Y, \nu)$, we have $C_\nu(Y \rtimes_{\mathop{\mathrm{id}}\nolimits} \mathbb{Z}) = \nu(Y)$.
\end{lem}
\begin{proof}
Since $\{ Y \times \{ 1 \} \}$ is a graphing of $Y \rtimes_{\mathop{\mathrm{id}}\nolimits} \mathbb{Z}$, we have $C_\nu(Y \rtimes_{\mathop{\mathrm{id}}\nolimits} \mathbb{Z}) \leq \nu(Y)$. Conversely, take an arbitrary graphing $\mathcal{E}$ of $Y \rtimes_{\mathop{\mathrm{id}}\nolimits} \mathbb{Z}$. Since the action $\mathbb{Z} \curvearrowright Y$ is trivial, we have $\nu(Y \setminus \bigcup_{E \in \mathcal{E}} s(E)) = 0$. Thus, we have $\nu(Y) \leq \sum_{E \in \mathcal{E}} \nu(s(E)) = C_\nu(\mathcal{E})$. Hence $C_\nu(Y \rtimes_{\mathop{\mathrm{id}}\nolimits} \mathbb{Z}) \geq \nu(Y)$.
\end{proof}
Let $\mathcal{R}_G$ denote the pmp discrete equivalence relation defined to be $(r \times s) (G)$.
\begin{lem}\label{groupoid_geq_equiv} We have $C_\mu(G) \geq C_\mu(\mathcal{R}_G)$.
\end{lem}
\begin{proof}
Take an arbitrary graphing $\mathcal{E}$ of $G$. Then, $\Phi_{\mathcal{E}} := \{ \varphi_E \}_{E \in \mathcal{E}}$ is a graphing of $\mathcal{R}_G$. We have $C_\mu(\mathcal{E}) = C_\mu(\Phi_{\mathcal{E}}) \geq C_\mu(\mathcal{R}_G)$. Hence we have $C_\mu(G) \geq C_\mu(\mathcal{R}_G)$.
\end{proof}
The next lemma is a special case of Theorem \ref{treeing_attains_cost}.
\begin{lem}\label{one_element}
If $G$ is generated by a treeing $\{ E \}$ consisting of a single element, then we have $C_\mu(G) = C_\mu( \{ E \}) = \mu(s(E))$.
\end{lem}
\begin{proof}
Since $\{ E \}$ is a graphing of $G$, we have $C_\mu(G) \leq \mu(s(E))$.
We show the converse inequality. Let $\mathcal{R}_G$ be the pmp discrete equivalence relation associated with $G$, that is, $(x, y) \in \mathcal{R}_G$ if and only if $y = \varphi_E^n(x)$ for some $n \in \mathbb{Z}$. Set $Y := s(E) \cup r(E)$ and $X_0 := X \setminus Y$. Define $X_n := \{ x \in Y\, | \, \# \mathcal{R}_G (x) = n \}$ for every $1 \leq n \leq \infty$. The family $\{ X_n \}_{0 \leq n \leq \infty}$ gives a $G$-invariant partition of $X$, thus Lemma \ref{cost_decomp} implies
$$C_\mu(G) = C_\mu(G \upharpoonright_{X_0}) + \sum_{n \geq 1} C_\mu(G \upharpoonright_{X_n}) + C_\mu(G \upharpoonright_{X_\infty}).$$
We compute each term below.
\medskip
({\bf First term}: $C_\mu(G \upharpoonright_{X_0}) = 0$.) This is trivial since $G \upharpoonright_{X_0} = X_0$.
\medskip
({\bf Second term}: $C_\mu(G \upharpoonright_{X_n}) = \mu(X_n \cap r(E))$ for every $1 \leq n < \infty$.) Define a Borel subset $D_n \subset s(E)$ for every $1 \leq n \leq \infty$ as follows: $D_n := \operatorname{Dom} (\varphi_E^n) \setminus \operatorname{Dom}(\varphi_E^{n + 1})$ for $1 \leq n < \infty$ and $D_\infty := \bigcap_{n \geq 1} \operatorname{Dom}(\varphi_E^n)$; we have $s(E) = \bigsqcup_{n \geq 1} D_n \sqcup D_\infty$. Since $X_n = (X_n \cap D_\infty) \sqcup (X_n \setminus (X_n \cap D_\infty))$ is a $G$-invariant partition, we have $C_\mu(G \upharpoonright_{X_n}) = C_\mu(G \upharpoonright_{X_n \cap D_\infty}) + C_\mu(G \upharpoonright_{X_n \setminus (X_n \cap D_\infty)})$.
First, we compute the first term. Let $F_n \subset X_n \cap D_\infty$ be a fundamental domain for $\mathcal{R}_G \upharpoonright_{X_n \cap D_\infty}$. Then, by the induction formula (Proposition \ref{induction}), we have $C_\mu(G \upharpoonright_{X_n \cap D_\infty}) - \mu(X_n \cap D_\infty) = C_\mu(G \upharpoonright_{F_n}) - \mu(F_n)$. Since $\{ E \}$ is a treeing, we have $G = \bigsqcup_{k \in \mathbb{Z}} E^k$, a disjoint union, with $E^0 = X$, and then $G \upharpoonright_{F_n} = \bigsqcup_{k \in \mathbb{Z}} G \upharpoonright_{F_n} \cap E^{n k}$. For every $k \in \mathbb{Z}$, define a map $G \upharpoonright_{F_n} \cap E^{n k} \to F_n \rtimes_{\mathop{\mathrm{id}}\nolimits} \mathbb{Z} : g \mapsto (s(g), k)$; these maps assemble into an isomorphism $G \upharpoonright_{F_n} \to F_n \rtimes_{\mathop{\mathrm{id}}\nolimits} \mathbb{Z}$. Thus Lemma \ref{trivial_action} implies that $C_\mu(G \upharpoonright_{X_n \cap D_\infty}) = \mu(X_n \cap D_\infty)$.
Next, we compute the second term. Note that $X_n \setminus (X_n \cap D_\infty) = \bigsqcup_{k = 1}^{n -1}(X_n \cap D_k) \sqcup \varphi_E(X_n \cap D_1)$ and that $X_n \cap D_{n -1}$ is a fundamental domain for $\mathcal{R}_G \upharpoonright_{X_n \setminus (X_n \cap D_\infty)}$. Since $G \upharpoonright_{X_n \cap D_{n-1}} = X_n \cap D_{n-1}$, the induction formula implies that $C_\mu(G \upharpoonright_{X_n \setminus (X_n \cap D_\infty)}) = \mu(X_n \setminus (X_n \cap D_\infty)) - \mu(X_n \cap D_{n-1})$.
Hence we have $C_\mu(G \upharpoonright_{X_n}) = \mu(X_n \setminus (X_n \cap D_{n-1}))$. The definition of $\{X_n \}_{1 \leq n < \infty}$ and $\{ D_n \}_{ 1 \leq n \leq \infty}$ implies that $X_n \setminus (X_n \cap D_{n-1}) = X_n \cap r(E)$. Thus we have $C_\mu(G \upharpoonright_{X_n}) = \mu(X_n \cap r(E))$.
\medskip
({\bf Third term}: $C_\mu(G \upharpoonright_{X_\infty}) \geq \mu(X_\infty \cap r(E))$. ) The definition of $X_\infty$ implies that $\mathcal{R}_G$ is an aperiodic (i.e., every orbit is an infinite set) equivalence relation. Thus, by \cite[Proposition III.3 (1)]{gaboriau:cost} and Lemma \ref{groupoid_geq_equiv}, we conclude that $C_\mu(G \upharpoonright_{X_\infty}) \geq \mu(X_\infty \cap r(E))$.
\medskip
Therefore, we have the inequality
$C_\mu(G) \geq \sum_{n \geq 1} \mu(X_n \cap r(E)) + \mu(X_\infty \cap r(E)) = \mu(Y \cap r(E)) = \mu(r(E)) = C_\mu(\{ E \})$, which completes the proof.
\end{proof}
We are ready to prove Theorem \ref{treeing_attains_cost}.
\begin{proof}(Theorem \ref{treeing_attains_cost})
Let $\mathcal{E} = \{ E_i \}_{i =1}^N$ be a treeing which generates $G$. For every $1 \leq i \leq N$, the symbol $G_{E_i}$ denotes the groupoid generated by $E_i$.
First, consider the case when $N$ is finite. Since $\mathcal{E}$ is a treeing, the groupoid $G$ is the free product $G_{E_1} \bigstar_X \cdots \bigstar_X G_{E_N}$. For every $1 \leq i \leq N$ we have $C_\mu(G_{E_i}) = \mu(s(E_i)) < \infty$ by Lemma \ref{one_element}. Thus, by the additivity formula (Theorem \ref{additivity}), we have $C_\mu(G) = \sum_{i=1}^N \mu(s(E_i)) = C_\mu(\mathcal{E})$.
Next, consider the case when $N = \infty$. For $n \geq 1$, set $\mathcal{E}_n := \{ E_i \}_{i=1}^n$ and let $G_{\mathcal{E}_n}$ denote the subgroupoid generated by $\mathcal{E}_n$; similarly, for a family $\mathcal{F} = \{ F_i \}_{i \geq 1}$ set $\mathcal{F}_k := \{ F_i \}_{i=1}^k$. Take an arbitrary graphing $\mathcal{F} = \{ F_i \}_{i \geq 1}$ of $G$. We show $C_\mu(\mathcal{F}) \geq C_\mu(\mathcal{E})$. As in the proof of \cite[IV.39. Th\'eor\`eme IV.1]{gaboriau:cost}, (decomposing each one-sheeted set if necessary) we may and do assume that every $F_i$ is a subset of a reduced word in $\mathcal{E}$. Fix $n \geq 1$. Since $\mathcal{F}$ is a graphing of $G$, there exist an integer $k(n) \geq 1$ and Borel subsets $\tilde{E}_1 \subset E_1, \dots, \tilde{E}_n \subset E_n$ satisfying the following conditions for every $1 \leq j \leq n$: $\mu^G(\tilde{E}_j) \leq 2^{-n}$; any element of $E_j \setminus \tilde{E}_j$ belongs to some word in $\mathcal{F}_{k(n)}$. On the other hand, there exists $m \geq n$ such that every element of $\mathcal{F}_{k(n)}$ is a subset of a word in $\mathcal{E}_m$. Thus, the family $\tilde{\mathcal{F}} := \mathcal{F}_{k(n)} \sqcup \{ \tilde{E}_j \}_{j = 1}^n \sqcup (\mathcal{E}_m \setminus \mathcal{E}_n)$ is a graphing of $G_{\mathcal{E}_m}$. We have $C_\mu(\mathcal{F}_{k(n)}) + \sum_{j=1}^n \mu^G(\tilde{E}_j) + C_\mu(\mathcal{E}_m \setminus \mathcal{E}_n) = C_\mu(\tilde{\mathcal{F}}) \geq C_\mu(G_{\mathcal{E}_m}) = C_\mu(\mathcal{E}_m) = C_\mu(\mathcal{E}_n) + C_\mu(\mathcal{E}_m \setminus \mathcal{E}_n)$. Here the second equality follows from what we have proved in the previous paragraph. Subtracting the finite number $C_\mu(\mathcal{E}_m \setminus \mathcal{E}_n)$ from both sides, we get $C_\mu(\mathcal{F}) \geq C_\mu(\mathcal{F}_{k(n)}) \geq C_\mu(\mathcal{E}_n) - n 2^{-n}$ for every $n \geq 1$, thus we conclude that $C_\mu(\mathcal{F}) \geq C_\mu(\mathcal{E})$. Therefore, we have $C_\mu(G) = C_\mu(\mathcal{E})$.
\end{proof}
The converse of Theorem \ref{treeing_attains_cost} is not true; in \cite[Remark 12 (1)]{ueda} it was pointed out (with a simple example) that \cite[Proposition I.11]{gaboriau:cost}, a result asserting ``any graphing attaining the cost is a treeing'', does not hold in the groupoid setting.
\begin{cor}\label{free_group_action}
Let $n \in \mathbb{N} \cup \{ \infty \}$. For any (not necessarily essentially free) pmp action of the free group $\mathbb{F}_n$ on a probability space $(X, \mu)$, we have $C_\mu(X \rtimes \mathbb{F}_n) = n$.
\end{cor}
\begin{proof}
Let $\{ a_i \}_{i=1}^n$ be free generators of $\mathbb{F}_n$. Then $\mathcal{E} := \{ X \times \{a _i\} \}_{i=1}^n$ is a treeing which generates $X \rtimes \mathbb{F}_n$, thus Theorem \ref{treeing_attains_cost} gives $C_\mu(X \rtimes \mathbb{F}_n) = C_\mu(\mathcal{E}) = n$.
\end{proof}
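As an aside illustrating the corollary (not needed in the sequel), consider the trivial action of $\mathbb{F}_n$ on a one-point probability space. The transformation groupoid is then $\mathbb{F}_n$ itself, $\mu^G$ is the counting measure, and a generating set in the sense of Ab\'ert and Weiss is just a generating set of the group, so the corollary recovers the fact that $\mathbb{F}_n$ cannot be generated by fewer than $n$ elements:
$$C_\mu(\{ \mathrm{pt} \} \rtimes \mathbb{F}_n) = \inf \{ \# A \, | \, A \subset \mathbb{F}_n \ \text{generates} \ \mathbb{F}_n \} = n.$$
In particular, essential freeness of the action plays no role in Corollary \ref{free_group_action}.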
We will give another explanation of the Corollary in \S\S\S 5.3.1.
\section{The Morse inequalities and their corollaries}
\subsection{The Morse inequalities}
Let $\Sigma$ be a simplicial $G$-complex. For simplicity, define $\alpha_k(\Sigma) := \dim_{L(G)} C_k^{(2)}(\Sigma)$. We prove the following theorem:
\begin{thm} \label{morse} (groupoid version of \cite[Proposition 3.19]{gaboriau:betti})
Assume that the number $\alpha_k(\Sigma)$ is finite for every $0 \leq k \leq n$. Then, we have
\begin{align*}
\beta_n^{(2)} ( \Sigma, G) - &\beta_{n-1}^{(2)}(\Sigma, G) + \dots + (-1)^n \beta_0^{(2)}(\Sigma, G) \leq
\alpha_n(\Sigma) - \alpha_{n -1}(\Sigma) + \dots + (-1)^n \alpha_0(\Sigma).
\end{align*}
\end{thm}
To prove the theorem, we need the following general lemma. Let $(M, \tau)$ be a finite von Neumann algebra equipped with a faithful normal tracial state. A morphism $f : V \to W$ between Hilbert $M$-modules is called an {\it $\epsilon$-isomorphism} for $\epsilon > 0$ if both $\dim_M \ker f$ and $\dim_M (W / \overline{\mathop{\mathrm{im}}\nolimits f})$ are at most $\epsilon$. The next lemma is shown in exactly the same way as \cite[Lemme 4.2]{gaboriau:betti}.
\begin{lem}\label{hilb.module}
Let $V_\bullet$ and $W_\bullet$ be Hilbert chain $M$-complexes such that both $\dim_M V_n$ and $\dim_M W_n$ are finite for every $n \geq 0$. Assume that there exists a chain morphism $\iota_\bullet : V_\bullet \to W_\bullet$ that consists of inclusions and that
$$d(V_\bullet, W_\bullet) := \sum_{n = 0}^\infty | \dim_M V_n - \dim_M W_n |$$
is finite. Then, the induced morphism $H_n^{(2)}(\iota_\bullet) : H_n^{(2)}(V_\bullet) \to H_n^{(2)}(W_\bullet)$ is a $d(V_\bullet, W_\bullet)$-isomorphism for every $n \geq 0$.
\end{lem}
If simplicial $G$-complexes $\Sigma \subset \Sigma^\prime$ are ULB, then we can apply the above lemma to the Hilbert chain $L(G)$-complexes $C_\bullet^{(2)}(\Sigma)$ and $C_\bullet^{(2)}(\Sigma^\prime)$. Indeed, we have already seen that $\alpha_k(\Sigma)$ and $\alpha_k(\Sigma^\prime)$ are finite for every $k \geq 0$; see the paragraph following Lemma \ref{lem:auxiliary}. Also, the number $d(C_\bullet^{(2)}(\Sigma), C_\bullet^{(2)}(\Sigma^\prime))$ is finite since every ULB simplicial $G$-complex is finite dimensional.
\begin{proof}(Theorem \ref{morse})
First, we consider the case when $\Sigma$ is ULB. Applying \cite[Lemma 1.18]{luck:survey} to a Hilbert chain $L(G)$-complex $C_\bullet^{(2)}(\Sigma)$, we have $\alpha_k(\Sigma) = \beta_k^{(2)}(\Sigma, G) + b_{k+1} + \dim_{L(G)} \overline{\mathop{\mathrm{im}}\nolimits {\partial_k^{(2)}}^*}$ where $b_k$ denotes $\dim_{L(G)} \overline{\mathop{\mathrm{im}}\nolimits \partial_k^{(2)}}$. Since the $L(G)$-map ${\partial_k^{(2)}}^* : \overline{\mathop{\mathrm{im}}\nolimits \partial_k^{(2)}} \to \overline{\mathop{\mathrm{im}}\nolimits {\partial_k^{(2)}}^*}$ is an injection with dense range, the last term equals $b_k$. Hence we have $\alpha_n(\Sigma) - \alpha_{n -1}(\Sigma) + \dots + (-1)^n \alpha_0(\Sigma) - ( \beta_n^{(2)} ( \Sigma, G) - \beta_{n-1}^{(2)}(\Sigma, G) + \dots + (-1)^n \beta_0^{(2)}(\Sigma, G) ) = b_{n+1} \geq 0$.
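Spelling out the last computation: substituting $\alpha_k(\Sigma) = \beta_k^{(2)}(\Sigma, G) + b_{k+1} + b_k$ term by term, the difference of the two alternating sums telescopes:
$$\sum_{k=0}^{n} (-1)^{n-k} \bigl( \alpha_k(\Sigma) - \beta_k^{(2)}(\Sigma, G) \bigr) = \sum_{k=0}^{n} (-1)^{n-k} ( b_{k+1} + b_k ) = b_{n+1} + (-1)^n b_0 = b_{n+1},$$
where $b_0 = 0$ because $\partial_0^{(2)} = 0$.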
Next, consider the general case. Take a ULB-exhaustion $\{ \Sigma_i \}_{i \geq 1}$ of $\Sigma$. Then, the family $\{ C_k^{(2)}(\Sigma_i) \}_{i \geq 1}$ is an increasing sequence of closed subspaces of $C_k^{(2)}(\Sigma)$ with dense union. Therefore, by \cite[Theorem 1.12, (3)]{luck:survey}, we have $\alpha_k(\Sigma) = \lim_{i \to \infty} \alpha_k(\Sigma_i)$ for every $k \geq 0$. Then, we can also show that $\lim_{i, j \to \infty} d(C_\bullet^{(2)}(\Sigma_i), C_\bullet^{(2)}(\Sigma_j)) = 0$. The additivity of $\dim_{L(G)}$ and Lemma \ref{hilb.module} imply that $| \beta_k^{(2)}(\Sigma_i, G) - \beta_k^{(2)}(\Sigma_j, G) | \leq d(C_\bullet^{(2)}(\Sigma_i), C_\bullet^{(2)}(\Sigma_j))$ for every $k \geq 0$ and $j \geq i$. Hence we have $\beta_k^{(2)}(\Sigma, G) = \lim_{i \to \infty} \beta_k^{(2)}(\Sigma_i, G)$ for every $k \geq 0$. Combining the ULB case with the two limit formulas proved in this paragraph, we get the inequality for $\Sigma$.
\end{proof}
We define $\chi(\Sigma) := \sum_{n\geq 0} ( -1 )^n \alpha_n(\Sigma)$ as long as it is well-defined, that is, the sum converges. Similarly, we define $\chi^{(2)}(\Sigma) := \sum_{n \geq 0} (-1)^n \beta_n^{(2)}(\Sigma, G)$ as long as it is well-defined.
\begin{cor}\label{euler-poincare} (groupoid version of \cite[Proposition 3.20]{gaboriau:betti})
If $\chi(\Sigma)$ is well-defined, then so is $\chi^{(2)}(\Sigma)$ and these two quantities must coincide.
\end{cor}
\begin{proof}
Theorem \ref{morse} shows that
$$
\sum_{k = 0}^{2 n + 1} (-1)^k \alpha_k(\Sigma) \leq \sum_{k=0}^{2 n + 1} (-1)^k \beta_k^{(2)}(\Sigma, G) \leq \sum_{k=0}^{2 n} (-1)^k \beta_k^{(2)}(\Sigma, G) \leq \sum_{k=0}^{2 n} (-1)^k \alpha_k(\Sigma)
$$
for every $n \geq 0$. Letting $n \to \infty$, the outer sums both converge to $\chi(\Sigma)$, so the alternating sums of the $\beta_k^{(2)}(\Sigma, G)$ are squeezed to the same limit, implying the desired result.
\end{proof}
\subsection{Cost versus $L^2$-Betti numbers inequality}
We prove the following theorem:
\begin{thm} \label{costvsbetti}(groupoid version of \cite[Corollaire 3.23]{gaboriau:betti})
We have $\beta_1^{(2)}(G) - \beta_0^{(2)}(G) + 1 \leq C_\mu(G)$. Equality holds if $G$ is treeable.
\end{thm}
To prove the theorem, we introduce some terminology and two lemmas. We say that a graphing of $G$ is {\it disjoint} if it is a disjoint family.
\begin{lem}\label{disjoint_graphing}
$C_\mu(G) = \inf \{ C_\mu(\mathcal{E})\, | \, \mathcal{E} : \text{disjoint graphing of}\, G \}$.
\end{lem}
\begin{proof}
Take an arbitrary graphing $\mathcal{E} = \{ E_i \}_{i \geq 1}$ of $G$. Define a family $\tilde{\mathcal{E}} = \{ \tilde{E}_i \}_{i \geq 1}$ inductively by $\tilde{E}_1 = E_1$ and $\tilde{E}_n = E_n \setminus ( \bigcup_{j=1}^{n-1} \tilde{E}_j)$. Then $\tilde{\mathcal{E}}$ is a disjoint graphing of $G$. We have $C_\mu(\tilde{\mathcal{E}}) = \sum_{i \geq 1} \mu^G(\tilde{E}_i) \leq \sum_{i \geq 1} \mu^G(E_i) = C_\mu(\mathcal{E})$, which completes the proof.
\end{proof}
We need a special simplicial $G$-complex associated with each graphing $\mathcal{E}$ of $G$. In the rest of this subsection, we consider only disjoint graphings. Define $\Sigma_{\mathcal{E}}^{(0)}$ and $\Sigma_{\mathcal{E}}^{(1)}$ as follows:
\begin{align*}
&\Sigma_{\mathcal{E}}^{(0)} = G; \\
&\Sigma_{\mathcal{E}}^{(1)} = \{ (g_0, g_1) \in \Sigma_\mathcal{E}^{(0)} * \Sigma_\mathcal{E}^{(0)}\, |\, g_0 \neq g_1\, \text{and either}\, g_0^{-1} g_1\, \text{or}\, g_1^{-1} g_0\, \text{belongs to some}\, E \in \mathcal{E} \}.
\end{align*}
\begin{lem}\label{graphing_defines_complex}
The pair $\Sigma_{\mathcal{E}} = ( \Sigma_\mathcal{E}^{(0)}, \Sigma_\mathcal{E}^{(1)} )$ defines a connected, simplicial $G$-complex. Moreover, we have $\alpha_1(\Sigma_\mathcal{E}) = C_\mu(\mathcal{E})$.
\end{lem}
\begin{proof}
It is easy to see that $\Sigma_\mathcal{E}$ is a simplicial $G$-complex. Since $\mathcal{E}$ is a graphing, the complex $\Sigma_\mathcal{E}$ is connected. Define a subset $F \subset \Sigma_\mathcal{E}^{(1)}$ as follows: $F = \bigcup_{E \in \mathcal{E}} (A_E^+ \cup A_E^-)$ where $A_E^+ := \{ (g_0, g_1) \in \Sigma_\mathcal{E}^{(1)}\, |\, g_0 \in X, \, g_1 \in E \}$ and $A_E^- := \{ (g_0, g_1) \in \Sigma_\mathcal{E}^{(1)}\, |\, g_1 \in X, \, g_0 \in E \}$. Since each $r \upharpoonright_E$ is injective, we have $A_E^+ \cap A_E^- = \emptyset$ for every $E \in \mathcal{E}$. Thus $F$ is a fundamental domain of $\Sigma_\mathcal{E}^{(1)}$. The disjointness of $\mathcal{E}$ implies that of the family $\{ A_E^+ \cup A_E^- \}_{E \in \mathcal{E}}$. Since $\mu_{\Sigma_{\mathcal{E}}^{(1)}}(A_E^+) = \mu_{\Sigma_{\mathcal{E}}^{(1)}}(A_E^-) = \mu(r(E))$ for every $E \in \mathcal{E}$, we have $\mu_{\Sigma_{\mathcal{E}}^{(1)}}(F) = 2 \sum_{E \in \mathcal{E}} \mu(r(E)) = 2 C_\mu(\mathcal{E})$. On the other hand, since $F$ is a fundamental domain, we have $\mu_{\Sigma_{\mathcal{E}}^{(1)}}(F) = 2 \alpha_1(\Sigma_\mathcal{E})$. Hence we get $\alpha_1(\Sigma_\mathcal{E}) = C_\mu(\mathcal{E})$.
\end{proof}
\begin{rem}
Lemma \ref{graphing_defines_complex} gives another proof of Theorem \ref{treeing_attains_cost}; we can adapt the $\ell^2$-proof of \cite[Th\'eor\`eme IV. 1]{gaboriau:betti} due to Gaboriau \cite[\S 8]{gaboriau:notes} to arbitrary pmp discrete groupoids. To this end, it suffices to note the last assertion of Lemma \ref{graphing_defines_complex} and the fact that a graphing $\mathcal{E}$ is a treeing if and only if the simplicial complex $(\Sigma_{\mathcal{E}})_x$ is a tree for $\mu$-a.e.~$x \in X$.
\end{rem}
We are ready to prove Theorem \ref{costvsbetti}.
\begin{proof}(Theorem \ref{costvsbetti})
First, consider the case when $C_\mu(G)$ is finite. Then there exists a graphing $\mathcal{E}$ satisfying $C_\mu(\mathcal{E}) < \infty$. Since $0 \leq \alpha_1(\Sigma_\mathcal{E}) \leq C_\mu(\mathcal{E}) < \infty$, the number $\chi(\Sigma_\mathcal{E})$ is defined and equal to $1 - \alpha_1(\Sigma_\mathcal{E}) = 1 - C_\mu(\mathcal{E})$ by Lemma \ref{graphing_defines_complex}. Thus, by Corollary \ref{euler-poincare}, we have $\beta_0^{(2)}(\Sigma_\mathcal{E}, G) - \beta_1^{(2)}(\Sigma_\mathcal{E}, G) = 1 - C_\mu(\mathcal{E})$. Since $\Sigma_\mathcal{E}$ is connected, Corollary \ref{n-connected} implies that $\beta_0^{(2)}(\Sigma_\mathcal{E}, G) = \beta_0^{(2)}(G)$ and that $\beta_1^{(2)}(\Sigma_\mathcal{E}, G) \geq \beta_1^{(2)}(G)$. Hence $\beta_1^{(2)}(G) - \beta_0^{(2)}(G) \leq C_\mu(\mathcal{E}) - 1$ holds for every graphing $\mathcal{E}$ of finite cost; that is, the desired inequality holds. If $\mathcal{E}$ is a treeing, then equality holds because $\Sigma_{\mathcal{E}}$ is contractible.
Next, consider the case when $C_\mu(G) = \infty$. Then the inequality is trivial since $\beta_0^{(2)}(G)$ is finite as shown below. The number $\beta_0^{(2)}(G)$ equals the $L(G)$-dimension of the module $\mathop{\mathrm{Tor}}\nolimits_0^{\mathbb{C} [G]} (L(G), L^\infty(X)) \cong L(G) \bigotimes_{\mathbb{C} [G]} L^\infty(X)$, which is a quotient of $L(G)$. Thus, by the additivity of $\dim_{L(G)}$, we conclude that $\beta_0^{(2)}(G) \leq \dim_{L(G)} L(G) = 1$. Hence we are done. If $G$ has a treeing $\mathcal{E} = \{ E_i \}_{i=1}^\infty$, then we have $\beta_1^{(2)}(G) - \beta_0^{(2)}(G) + 1 = \infty$. Indeed, the graphing $\mathcal{E}_i := \{E_1, \dots, E_i \}$ gives a ULB exhaustion $\{ \Sigma_{\mathcal{E}_i} \}_{i \geq 1}$ of $\Sigma_{\mathcal{E}}$. Then, by what we have proved in the previous paragraph, we have $\beta_1^{(2)}(G) - \beta_0^{(2)}(G) + 1 = \lim_{i \to \infty} C_\mu(G_i) = C_\mu(G) = \infty$. Here $G_i$ denotes the groupoid generated by $\mathcal{E}_i$.
\end{proof}
\subsection{An application of the cost versus $L^2$-Betti numbers inequality}
We have already computed the cost $C_\mu(X \rtimes \mathbb{F}_n)$ directly in Corollary \ref{free_group_action}. We give another explanation of the corollary as an application of Theorem \ref{costvsbetti}.
To this end, we compute the $L^2$-Betti numbers $\beta_0^{(2)}(X \rtimes \mathbb{F}_n)$ and $\beta_1^{(2)}(X \rtimes \mathbb{F}_n)$. For the case when the action is essentially free, Sauer \cite[Theorem 5.5]{sauer} proved that these $L^2$-Betti numbers coincide exactly with those of the group. Although it is probably well known that his proof also works when the action is not essentially free, we give an explanation for the sake of completeness.
Define an $L^\infty (X)$-module $L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma$ as a free $L^\infty(X)$-module with a basis $\{ u_\gamma \}_{\gamma \in \Gamma}$. We endow $L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma$ with a ring structure by requiring $u_\gamma f u_{\gamma^{-1}} = f(\cdot \gamma)$ and $u_\gamma u_{\gamma^\prime} = u_{\gamma \gamma^\prime}$ for $\gamma$, $\gamma^\prime \in \Gamma$ and $f \in L^\infty(X)$. Define a map $\iota : L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma \to \mathbb{C} [ X \rtimes \Gamma ]$ by $\iota( \sum_{\gamma} f_\gamma u_\gamma) := \sum_{\gamma} (f_\gamma \circ r) \mathbbm{1}_{X \times \{ \gamma \}}$. It is not hard to see that $\iota$ is an injective $L^\infty(X)$-module map.
\begin{lem}\label{inclusion}
The above inclusion $\iota: L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma \to \mathbb{C} [X \rtimes \Gamma]$ is a $\dim_{L^\infty(X)}$-isomorphism.
\end{lem}
\begin{proof}
It suffices to show the $d_{L^\infty(X)}$-density of $L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma$ in $\mathbb{C} [X \rtimes \Gamma]$. Note that $\varphi \in \mathbb{C} [X \rtimes \Gamma ]$ belongs to $L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma$ if and only if there exists a finite subset $F \subset \Gamma$ such that $\varphi((x, \gamma)) = 0$ for every $\gamma \notin F$ and $\mu$-a.e.~$x \in X$. Take $\varphi \in \mathbb{C} [X \rtimes \Gamma ]$. Choose an enumeration $\Gamma = \{ \gamma_i \}_{i \geq 1}$. For every $n \geq 0$, define $X_n := \{ x \in X \, | \, \varphi((x, \gamma_i)) = 0 \, \text{ for every } \, i > n \}$. Then, by the above remark, we have $\mathbbm{1}_{X_n} \varphi \in L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma$. Also we have $\mu(X_n) \to 1$ as $n \to \infty$. Thus, we have $d_{L^\infty(X)} (\mathbbm{1}_{X_n} \varphi, \varphi) = [ \mathbbm{1}_{X_n^c} \varphi ] \leq \mu(X_n^c) \to 0$ as $n \to \infty$. Hence we are done.
\end{proof}
\begin{thm}\label{betti_of_action} (\cite[Theorem 5.5]{sauer})
$\beta_n^{(2)}(\Gamma) = \beta_n^{(2)}( X \rtimes \Gamma)$ holds for every $n \geq 0$.
\end{thm}
\begin{proof}
We have
\begin{align*}
\beta_n^{(2)}(\Gamma) &= \dim_{L(\Gamma)} \mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C} [\Gamma]} (L(\Gamma), \mathbb{C}) \\
&= \dim_{L(X \rtimes \Gamma)} L(X \rtimes \Gamma) \bigotimes_{L(\Gamma)} \mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C} [\Gamma]} (L(\Gamma), \mathbb{C}) \qquad (\text{\cite[Theorem 2.6]{sauer}})\\
&= \dim_{L(X \rtimes \Gamma)} \mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C} [\Gamma]} (L(X \rtimes \Gamma) \bigotimes_{\mathbb{C} [\Gamma]} L(\Gamma), \mathbb{C}), \qquad (\text{\cite[Theorem 4.3]{sauer}})
\end{align*}
which equals $\dim_{L(X \rtimes \Gamma)} \mathop{\mathrm{Tor}}\nolimits_n^{L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma} (L(X \rtimes \Gamma), L^\infty(X))$, since $L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma$, being a free $L^\infty(X)$-module, is a flat right $\mathbb{C} [\Gamma]$-module. By Lemma \ref{inclusion} and \cite[Lemma 4.8]{sauer}, we can apply \cite[Theorem 4.11]{sauer} to the ring inclusions
$
L^\infty(X) \subset L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma \subset \mathbb{C} [X \rtimes \Gamma] \subset L(X \rtimes \Gamma)
$.
Then, we get $\dim_{L(X \rtimes \Gamma)} \mathop{\mathrm{Tor}}\nolimits_n^{L^\infty(X) \rtimes_{\mathrm{alg}} \Gamma} (L(X \rtimes \Gamma), L^\infty(X)) = \dim_{L(X \rtimes \Gamma)} \mathop{\mathrm{Tor}}\nolimits_n^{\mathbb{C} [X \rtimes \Gamma ]} (L(X \rtimes \Gamma), L^\infty(X)) = \beta_n^{(2)}( X \rtimes \Gamma)$, which completes the proof.
\end{proof}
We are ready to prove Corollary \ref{free_group_action}.
\begin{proof}(Corollary \ref{free_group_action})
Applying Theorem \ref{costvsbetti}, we get $\beta_1^{(2)}(X \rtimes \mathbb{F}_n) - \beta_0^{(2)}( X \rtimes \mathbb{F}_n) \leq C_\mu(X \rtimes \mathbb{F}_n) - 1$. Theorem \ref{betti_of_action} and \cite[Example 4.2]{cheeger-gromov} imply that the left hand side is equal to $\beta_1^{(2)}(\mathbb{F}_n) - \beta_0^{(2)}(\mathbb{F}_n) = n - 1$. Therefore, $C_\mu(X \rtimes \mathbb{F}_n) =n$, since $C_\mu(X \rtimes \mathbb{F}_n) \leq n$ is trivial.
\end{proof}
\begin{rem}
One can give another proof of Lemma \ref{one_element} for the case when $G$ is ergodic in the same way as in the above proof; we can compute the $L^2$-Betti numbers $\beta_0^{(2)}(G)$ and $\beta_1^{(2)}(G)$ using results of Alekseev--Kyed \cite[Corollary 6.8]{alekseev-kyed} and Sauer--Thom \cite[Corollary 1.4]{sauer-thom}. It is probably also possible to prove Lemma \ref{one_element} in the general case in the same way, thanks to \cite[Remark 1.7]{sauer-thom}. However, such a proof is more complicated than the one we gave in \S 4.2.3.
\end{rem}
\subsection*{Acknowledgment}
The author would like to express his gratitude to Professor Yoshimichi Ueda, his supervisor, for helpful comments and constant encouragement.
\section{Introduction}
A fundamental problem towards robust speech processing in real-world acoustic environments is to be able to automatically extract or separate target source signals present in an input mixture recording \cite{BookEVincent}.
To date, state-of-the-art performance on the single-channel speech separation task is achieved by deep learning based models \cite{DPCLHershey2016, LSTMLuo2018, ConvTasnetLuo2018, DPRNNLuo2020, Wavesplit2020Zeghidour}. In particular, end-to-end models which directly process the time-domain samples seem to obtain the best performance \cite{CompehensiveBahmaninezhad2019, HeitDemyst2019}.
Such systems (e.g. Conv-TasNet \cite{ConvTasnetLuo2018}, Dual-path RNN \cite{DPRNNLuo2020} or Wavesplit \cite{Wavesplit2020Zeghidour}) perform so well in separating fully overlapping speech mixtures from the wsj0-2mix dataset \cite{DPCLHershey2016} that the separated speech estimates are almost indistinguishable from the reference signals. This led to the development of WHAM!\cite{Wichern2019WHAM} and WHAMR!\cite{Maciejewski2020WHAMR}, respectively the noisy and reverberant extensions of wsj0-2mix.
While these datasets have moved the field towards more realistic and challenging scenarios, there are still steps to be made. In fact, a recent study reports significant performance drops when Conv-TasNet is trained on wsj0-2mix and tested on other comparable datasets \cite{EmpiricalDolby2020}.
This suggests that, even though Conv-TasNet's separation quality is close to perfect on wsj0-2mix, the ability to generalize to speech coming from a wider range of speakers and recorded in slightly different conditions has not yet been achieved.
Additionally, fully overlapping speech mixtures such as the ones from wsj0-2mix are unnatural. Real-world overlap ratios are typically in the order of 20\% or less in natural meetings \cite{etin2006AnalysisOO} and casual dinner parties \cite{barker2018Chime}. A few studies have shown that speech separation algorithms trained on fully overlapping speech mixtures do not generalize well to such sparsely overlapping mixtures \cite{SparseMenne2019, Wavesplit2020Zeghidour}. Finally, models relying on some kind of speaker identity representation \cite{DPCLHershey2016, Wavesplit2020Zeghidour, DeepCASA2019} cannot easily detect overfitting, since wsj0-2mix's speakers are shared between the training and validation sets.
There have been a few initiatives to address these issues. A sparsely overlapping version of wsj0-2mix proposed in \cite{SparseMenne2019} has shown the limitation of Deep Clustering \cite{DPCLHershey2016} on such mixtures. As the original utterances are the same as the ones from wsj0-2mix, we expect the generalization issue to remain the same.
In \cite{EmpiricalDolby2020}, a new speech separation dataset based on LibriTTS \cite{LibriTTS2019} has been designed. The results show that generalizability is improved thanks to the variability of recording conditions and the larger number of unique speakers in the dataset. Sadly, the dataset is limited to two-speaker mixtures without noise, and has not been open-sourced. LibriCSS \cite{LibriCSSChen_2020}, an open-source dataset for sparsely overlapping continuous speech separation, has recently been released. While it addresses most of the shortcomings of wsj0-2mix, its short 10-hour duration restricts its usage to evaluation rather than training purposes. Real dinner-party recordings \cite{barker2018Chime, segbroeck2019dipco} as well as meeting recordings \cite{ISCI2013, AMI2006} are also available. While these are natural recordings, the clean speech signals for individual sources are not available\footnote{Close-talk signals are available as references, but these are too corrupted for the evaluation of modern separation algorithms.} and thus, speech separation algorithms cannot be directly evaluated in terms of usual speech separation metrics \cite{SDRVincent2006, SISDRLeroux2019}.
In this work, we introduce LibriMix, an open-source dataset for generalizable noisy speech separation composed of two- or three-speaker mixtures, with or without noise. The speech utterances are taken from LibriSpeech \cite{panayotov2015librispeech} and the noise samples from WHAM!\cite{Wichern2019WHAM}.
An additional test set based on VCTK \cite{VCTK} is designed for fair cross-dataset evaluation.
We evaluate the generalization ability of Conv-TasNet when trained on LibriMix or WHAM! and show that LibriMix leads to better generalization in both clean and noisy conditions.
Stepping further towards real-world scenarios, we introduce a sparsely overlapping version of LibriMix's test set with varying amounts of overlap. The scripts used to generate these datasets are publicly released\footnote{\url{https://github.com/JorisCos/LibriMix}}\footnote{\url{https://github.com/JorisCos/VCTK-2Mix}}\footnote{\url{https://github.com/popcornell/SparseLibriMix}}.
The paper is organised as follows. We explain LibriMix's design and give some insights about its characteristics in Section \ref{sec:librimix_dataset}.
In Section \ref{sec:results}, we report experimental results on LibriMix as well as across datasets. We conclude in Section \ref{sec:conclusions}.
\section{Datasets} \label{sec:librimix_dataset}
In the following, we present existing speech separation datasets derived from Wall Street Journal (WSJ0), and introduce our new datasets derived from LibriSpeech. Statistics about the original speech datasets and the speech separation datasets derived from them can be found in Tables \ref{tab:initial_datasets} and \ref{tab:resulting_datasets}, respectively.
\begin{table}[h]
\centering
\caption{Statistics of original speech datasets.}
\label{tab:initial_datasets}
\begin{tabular}{c|c|ccc}
\hline
Dataset & Split & Hours & \begin{tabular}[c]{@{}c@{}}per-spk \\ minutes\end{tabular} & \begin{tabular}[c]{@{}c@{}} \# Speakers\end{tabular} \\ \hline
\multirow{3}{*}{WSJ0} & si\_tr\_s & 25 & 15 & 101 \\
& si\_dt\_05 & 1.5 & 11 & 8 \\
& si\_et\_05 & 2.3 & 14 & 10 \\ \hline
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}LibriSpeech\\ clean\end{tabular}} & train-360 & 364 & 25 & 921 \\
& train-100 & 101 & 25 & 251 \\
& dev & 5.4 & 8 & 40 \\
& test & 5.4 & 8 & 40 \\ \hline
VCTK & test & 44 & 24 & 109 \\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Statistics of derived speech separation datasets.}
\label{tab:resulting_datasets}
\begin{tabular}{c|c|ccc}
\hline
Dataset & Split & \begin{tabular}[c]{@{}c@{}} \# Utterances\end{tabular} & Hours \\ \hline
\multirow{3}{*}{wsj0-\{2,3\}mix} & train & \numprint{20000} & 30 \\
& dev & \numprint{5000} & 8 \\
& test & \numprint{3000} & 5 \\ \hline
\multirow{4}{*}{Libri2Mix} & train-360 & \numprint{50800} & 212 \\
& train-100 & \numprint{13900} & 58 \\
& dev & \numprint{3000} & 11 \\
& test & \numprint{3000} & 11 \\ \hline
\multirow{4}{*}{Libri3Mix} & train-360 & \numprint{33900} & 146 \\
& train-100 & \numprint{9300} & 40 \\
& dev & \numprint{3000} & 11 \\
& test & \numprint{3000} & 11 \\ \hline
\begin{tabular}[c]{@{}c@{}}SparseLibri2Mix\end{tabular} & test & \numprint{3000} & 6 \\ \hline
\begin{tabular}[c]{@{}c@{}}SparseLibri3Mix\end{tabular} & test & \numprint{3000} & 6 \\ \hline
VCTK-2mix & test & \numprint{3000} & 9 \\ \hline
\end{tabular}
\end{table}
\subsection{WSJ0, wsj0-2mix and WHAM!}
The WSJ0 dataset was designed in 1992 as a new corpus for automatic speech recognition (ASR)\cite{WSJ_CSR}.
It consists of read speech from the Wall Street Journal.
It was recorded at 16 kHz using a close-talk Sennheiser HMD414 microphone.
The wsj0-2mix dataset \cite{DPCLHershey2016} uses three subsets of WSJ0: \texttt{si\_tr\_s}, \texttt{si\_dt\_05} and \texttt{si\_et\_05} which all come from the 5k vocabulary part of WSJ0. This represents around 30~h of speech from 119 speakers.
Table \ref{tab:initial_datasets} reports details on speaker and hour distributions within the subsets.
The wsj0-2mix dataset is made of a training set, a validation set and a test set.
The training and validation sets share common speakers from the \texttt{si\_tr\_s} subset and the test set is made from a combination of \texttt{si\_dt\_05} and \texttt{si\_et\_05}.
Speech mixtures are generated by mixing pairs of utterances from different speakers at random signal-to-noise ratios (SNRs). The SNR is drawn uniformly between 0 and 5 dB.
Four variations of the dataset are available, which correspond to two different sampling rates (16~kHz and 8~kHz) and two modes (\textit{min} and \textit{max}).
In the \textit{min} mode, the mixture stops with the shortest utterance.
In the \textit{max} mode, the shortest utterance is padded to the longest one.
The wsj0-2mix equivalent for three-speaker mixtures is called wsj0-3mix and was generated in a similar way \cite{DPCLHershey2016}.
Note that, in order to generate more mixtures, utterances from WSJ0 were used multiple times in the three subsets.
Each utterance is repeated up to fifteen times, with an average of four times.
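The pairing-and-scaling step behind wsj0-\{2,3\}mix can be sketched in a few lines of NumPy. This is a simplified illustration rather than the original generation scripts; the `mix_pair` helper and its interface are our own:

```python
import numpy as np

def mix_pair(s1, s2, snr_db, mode="min"):
    """Mix two utterances at a given SNR of s1 relative to s2."""
    if mode == "min":
        # "min" mode: the mixture stops with the shortest utterance.
        n = min(len(s1), len(s2))
        s1, s2 = s1[:n], s2[:n]
    else:
        # "max" mode: the shortest utterance is zero-padded to the longest.
        n = max(len(s1), len(s2))
        s1 = np.pad(s1, (0, n - len(s1)))
        s2 = np.pad(s2, (0, n - len(s2)))
    # Rescale s1 so that 10*log10(P(s1)/P(s2)) equals snr_db.
    gain = np.sqrt(np.mean(s2**2) / np.mean(s1**2) * 10 ** (snr_db / 10))
    s1 = gain * s1
    return s1 + s2, s1, s2

rng = np.random.default_rng(0)
snr = rng.uniform(0.0, 5.0)  # SNR drawn uniformly in [0, 5] dB
mix, a, b = mix_pair(rng.standard_normal(16000),
                     rng.standard_normal(12000), snr, mode="min")
```

Note that this power-based scaling counts silent samples in the power estimate, which is one motivation for the loudness-based scaling used by LibriMix.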
In the WHAM! dataset, wsj0-2mix was extended to include noisy speech mixtures.
Noise samples recorded in coffee shops, restaurants, and bars were added to the mixtures so that the SNR between the loudest speaker and the noise varies from -6 to +3 dB.
The dataset follows the same structure as wsj0-2mix, with the same four variations and the three same subsets. In addition to separation in clean (\textit{sep\_clean}) and noisy conditions (\textit{sep\_noisy}), other enhancement tasks can be considered.
Statistics on noise durations can be seen in Table \ref{tab:Wham's_noises_time}.
WHAM! noises have been released under the CC BY-NC 4.0 License, but WSJ0 and derived data are proprietary (LDC). Note that no noisy version of wsj0-3mix has been released.
\begin{table}[h]
\centering
\caption{Statistics of WHAM!'s noises.}
\label{tab:Wham's_noises_time}
\begin{tabular}{c|c|cc}
\hline
Datasets & Split & Hours & \begin{tabular}[c]{@{}c@{}}Number of \\ utterances \end{tabular} \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}WHAM!\\ noise\end{tabular}} & train & 58 & \numprint{20000} \\
& dev & 14.7 & \numprint{5000} \\
& test & 9 & \numprint{3000}\\ \hline
\end{tabular}
\end{table}
\subsection{LibriSpeech, LibriMix and sparse LibriMix}
LibriSpeech \cite{panayotov2015librispeech} is a read ASR corpus based on LibriVox audiobooks\footnote{https://librivox.org/}.
To avoid background noise in the reference signals, we only use the \texttt{train-clean-100}, \texttt{train-clean-360}, \texttt{dev-clean}, and \texttt{test-clean} subsets of LibriSpeech.
This represents around 470~h of speech from \numprint{1252} speakers, with a 60k vocabulary.
More statistics are given in Table \ref{tab:initial_datasets}.
We propose a new collection of datasets derived from LibriSpeech and WHAM!'s noises which we call LibriMix. These datasets are entirely open source.
The two main datasets, Libri2Mix and Libri3Mix, consist of clean and noisy, two- and three-speaker mixtures.
Libri2Mix follows the exact same structure as WHAM! and allows for the same tasks.
Mirroring the organization of LibriSpeech, they have two training sets (\texttt{train-100}, \texttt{train-360}), one validation set (\texttt{dev}) and one test set (\texttt{test}).
In order to cover the \texttt{train-360} subset of LibriSpeech without repetition, training noise samples were speed-perturbed with factors of 0.8 and 1.2 as described in \cite{ko2015augmentation}.
Instead of relying on signal power to scale individual utterances as in wsj0-2mix, we rely on loudness units relative to full scale (LUFS) \cite{loudness}\footnote{Available at https://github.com/csteinmetz1/pyloudnorm}, expressed in dB. Based on the ITU-R BS.1770-4 recommendation \cite{loudness}, LUFS measure the perceived loudness of an audio signal. Compared to classical SNRs, LUFS better correlate with human perception, are silence-invariant, and are only mildly sensitive to downsampling.
Speech mixtures are generated by randomly selecting utterances from different speakers.
The loudness of each utterance is uniformly sampled between -25 and -33 LUFS.
Random noise samples with uniformly distributed loudness between -38 and -30 LUFS are then added to the speech mixtures.
The noisy mixtures are then clipped to a maximum absolute amplitude of 0.9, if need be.
The resulting SNRs are normally distributed with a mean of 0~dB and a standard deviation of 4.1~dB in the clean condition and a mean of -2~dB and a standard deviation of 3.6~dB in the noisy condition.
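The loudness-based mixing procedure can be sketched as follows. For brevity, the sketch uses a plain RMS level in dB as a stand-in for true LUFS, which require the K-weighting and gating of ITU-R BS.1770 (as implemented in pyloudnorm); the helper names are ours:

```python
import numpy as np

def level_db(x):
    # Crude loudness proxy: RMS level in dB. True LUFS additionally
    # apply K-weighting and gating (ITU-R BS.1770, e.g. pyloudnorm).
    return 10 * np.log10(np.mean(x**2) + 1e-12)

def set_level(x, target_db):
    # Scale x so its measured level matches target_db.
    return x * 10 ** ((target_db - level_db(x)) / 20)

rng = np.random.default_rng(0)
# Each utterance: uniform loudness in [-33, -25]; noise: [-38, -30].
s1 = set_level(rng.standard_normal(8000), rng.uniform(-33, -25))
s2 = set_level(rng.standard_normal(8000), rng.uniform(-33, -25))
noise = set_level(rng.standard_normal(8000), rng.uniform(-38, -30))
mixture = np.clip(s1 + s2 + noise, -0.9, 0.9)  # clip to 0.9 if need be
```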
Note that in \texttt{train-100} and \texttt{train-360} each utterance is only used once.
For \texttt{dev} and \texttt{test},
the same procedure is repeated enough times to reach \numprint{3000} mixtures.
This results in around 280~h of noisy speech mixtures, against 45~h for WHAM!.
The variety of speakers is much wider in LibriMix's training set with around \numprint{1000} distinct speakers against 100 in WHAM!. The total number of unique words is also much larger, with 60k unique words in LibriMix against 5k in wsj0-2mix.
Stepping towards more realistic, conversation-like scenarios, we also release sparsely overlapping versions of LibriMix's two- and three-speaker test sets. We refer to these datasets as SparseLibri2Mix and SparseLibri3Mix.
For each mixture, we first sample speaker identities, then, for each speaker, we select an utterance from \texttt{test-clean}.
Cycling through the selected utterances, we keep adding sub-utterances whose boundaries were obtained with the Montreal Forced Aligner (MFA) \cite{mcauliffe2017montreal}, until a maximum length of 15~s has been reached. This mixing process ensures that each speaker utters semantically meaningful speech, which is important for future ASR experiments.
We used the same loudness distribution as the non-sparse version but we sampled it for each sub-utterance. This allows for alternating dominant speakers in the mixtures \cite{Wavesplit2020Zeghidour}.
For both two- and three-speaker versions, we produced 500 mixtures for six different amounts of speech overlap: 0\%, 20\%, 40\%, 60\%, 80\%, and 100\%. For three-speaker mixtures we count the amount of three-speaker overlap and not the total overlap, which is higher because two-speaker overlap also occurs.
Note that these overlap ratios reflect the amount of overlap of each sub-utterance with the preceding ones. Because sub-utterances do not all have the same length, the real overlap ratios of the mixtures are lower, as is also the case for the \textit{max} versions of LibriMix and WHAM!.
Because WHAM! noise samples are short on average, the maximum mixture length was restricted to 15~s in order to obtain a reasonable number of samples for testing.
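The placement of sub-utterances for a target overlap ratio can be illustrated as follows. This is a simplified sketch rather than the SparseLibriMix generation script: it ignores MFA word boundaries and per-sub-utterance loudness sampling, and the `place_subutterance` helper is our own:

```python
import numpy as np

def place_subutterance(canvas, sub, prev_end, overlap):
    # Start the new sub-utterance so that a fraction `overlap` of its
    # length overlaps with the previously placed speech.
    start = max(0, prev_end - int(overlap * len(sub)))
    end = start + len(sub)
    if end > len(canvas):
        canvas = np.pad(canvas, (0, end - len(canvas)))
    canvas[start:end] += sub
    return canvas, end

rng = np.random.default_rng(0)
canvas, prev_end = np.zeros(0), 0
# Cycle through the speakers' sub-utterances with 20% overlap.
for ov in [0.0, 0.2, 0.2]:
    sub = rng.standard_normal(4000)
    canvas, prev_end = place_subutterance(canvas, sub, prev_end, ov)
```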
Examples of such sparsely overlapping utterances can be visualized in Fig. \ref{fig:sparselibri2}.
\begin{figure}[h]
\begin{subfigure}{\linewidth}
\centering
\hspace{-0.5cm}
\includegraphics{nooverlap.png}
\caption{}
\label{fig:sub3}
\end{subfigure}
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics{overlap20.png}
\caption{}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics{overlap100.png}
\caption{}
\label{fig:sub2}
\end{subfigure}\\[1ex]
\caption{SparseLibri3Mix example with different 3-speaker overlap percentages:
(a) 0\% overlap, (b) 20\% overlap, (c) 100\% overlap.}
\label{fig:sparselibri2}
\end{figure}
\subsection{VCTK and VCTK-2mix}
We also release VCTK-2mix, an unmatched open-source test set derived from VCTK \cite{VCTK}. VCTK comprises 109 native English speakers reading newspapers.
As VCTK utterances contain a significant amount of silence, we use energy-based voice activity detection to remove silent portions with a 20~dB threshold.
The mixing procedure for VCTK-2mix is identical to that for LibriMix.
The noise samples are also taken from WHAM!'s test set.
The resulting dataset contains around 9~h of speech with \numprint{3000} utterances from 108 speakers.
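The silence-removal step can be sketched with a minimal frame-based energy VAD. This is an illustration rather than the actual preprocessing script; the frame size and helper name are our own choices:

```python
import numpy as np

def trim_silence(x, frame=512, threshold_db=20.0):
    # Drop frames whose energy is more than `threshold_db` below
    # the loudest frame, then concatenate the remaining frames.
    n = len(x) - len(x) % frame
    frames = x[:n].reshape(-1, frame)
    energy_db = 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)
    keep = energy_db > energy_db.max() - threshold_db
    return frames[keep].reshape(-1)

rng = np.random.default_rng(0)
speech = rng.standard_normal(4096)
# Prepend near-silence to emulate leading silence in a VCTK utterance.
padded = np.concatenate([1e-4 * rng.standard_normal(2048), speech])
trimmed = trim_silence(padded)
```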
\section{Results}\label{sec:results}
In order to assess the results achievable using our newly released LibriMix datasets, we use the optimal configuration of Conv-TasNet reported in \cite{ConvTasnetLuo2018} for the separation tasks, as implemented in Asteroid \cite{Asteroid2020}\footnote{\href{https://github.com/mpariente/asteroid}{github.com/mpariente/asteroid}}.
Training is done by maximizing the permutation-invariant, scale-invariant signal-to-distortion ratio (SI-SDR) \cite{PITYu2016, SDRVincent2006} on 3~s segments with a batch size of 24 and Adam \cite{Adam} as the optimizer. All the experiments are performed with the exact same parameters. Since the SI-SDR is undefined for silent sources, results reported on all \textit{max} versions correspond to models trained on the corresponding \textit{min} version.
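For reference, the SI-SDR objective and its permutation-invariant wrapper can be written compactly in NumPy. This is a sketch of the standard definitions, not the Asteroid implementation:

```python
import numpy as np
from itertools import permutations

def si_sdr(est, ref):
    # Scale-invariant SDR in dB: project the (mean-removed) estimate
    # onto the reference, then compare target and residual energies.
    est, ref = est - est.mean(), ref - ref.mean()
    s_target = (np.dot(est, ref) / np.dot(ref, ref)) * ref
    e_noise = est - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / np.dot(e_noise, e_noise))

def pit_si_sdr(ests, refs):
    # Permutation-invariant objective: score the best assignment
    # of estimates to references.
    return max(np.mean([si_sdr(e, r) for e, r in zip(perm, refs)])
               for perm in permutations(ests))

rng = np.random.default_rng(0)
ref1, ref2 = rng.standard_normal(8000), rng.standard_normal(8000)
est1 = ref1 + 0.1 * rng.standard_normal(8000)
est2 = ref2 + 0.1 * rng.standard_normal(8000)
```

Rescaling an estimate leaves its SI-SDR unchanged, which is why a global gain mismatch is not penalized during training.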
\subsection{Results on LibriMix}
The results achieved by Conv-TasNet on the clean and noisy versions of Libri2Mix and Libri3Mix are reported in Table \ref{tab:Baselinebis} and compared with the Ideal Binary Mask (IBM) and the Ideal Ratio Mask (IRM) for a short-time Fourier transform (STFT) window size of 32~ms. Conv-TasNet was trained on \texttt{train-360}, which leads to better performance than \texttt{train-100}. Results are reported in terms of SI-SDR improvement over the input mixture (SI-SDR$_\text{i}$).
We refer to the clean two-speaker separation task as \textit{2spk-C}, to the noisy one as \textit{2spk-N}, etc. We see that for two-speaker mixtures, Conv-TasNet outperforms ideal masks in clean conditions and is on par with them in noisy conditions, as in \cite{ConvTasnetLuo2018, FilterbankDesign2019Pariente}. However, oracle performance is still out of reach for three-speaker mixtures, with and without noise.
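The oracle masks can be computed from the source magnitude spectrograms as follows. This is a common textbook formulation, not the exact evaluation script; some variants define the IRM with squared magnitudes and the IBM with a local SNR threshold:

```python
import numpy as np

def ideal_masks(mags):
    # `mags`: source magnitude spectrograms, shape (n_src, n_freq, n_frames).
    mags = np.asarray(mags)
    # IRM: each source's magnitude divided by the total magnitude.
    irm = mags / (mags.sum(axis=0, keepdims=True) + 1e-12)
    # IBM: assign each time-frequency bin to the dominant source.
    ibm = (mags == mags.max(axis=0, keepdims=True)).astype(float)
    return ibm, irm

rng = np.random.default_rng(0)
mags = np.abs(rng.standard_normal((2, 129, 50)))  # two toy sources
ibm, irm = ideal_masks(mags)
```

Each mask is applied to the mixture STFT magnitude and combined with the mixture phase before inversion to obtain the oracle time-domain estimates.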
\begin{table}[h]
\centering
\caption{SI-SDR$_\text{i}$ (dB) achieved on LibriMix (SI-SDR for the "Input" column).}
\label{tab:Baselinebis}
\begin{tabular}{c|c|cccc}
\hline
& mode & Input & IRM & IBM & Conv-TasNet \\ \hline
\multirow{2}{*}{2spk-C} & 8k min & 0.0 & 12.9 & 13.7 & 14.7 \\
 & 16k max & 0.0 & 14.1 & 14.5 & 16.0 \\ \hline
\multirow{2}{*}{2spk-N} & 8k min & -2.0 & 12.0 & 12.6 & 12.0 \\
 & 16k max & -2.8 & 13.4 & 13.7 & 13.5 \\ \hline
\multirow{2}{*}{3spk-C} & 8k min & -3.4 & 13.1 & 13.9 & 12.1 \\
 & 16k max & -3.7 & 14.5 & 14.9 & 13.0 \\ \hline
\multirow{2}{*}{3spk-N} & 8k min & -4.4 & 12.6 & 13.3 & 10.4 \\
 & 16k max & -5.2 & 14.1 & 14.4 & 10.9 \\ \hline
\end{tabular}
\end{table}
\subsection{Results on SparseLibriMix}
We report the results obtained on the 8~kHz test sets of SparseLibri2Mix and SparseLibri3Mix in Table \ref{tab:sparse_results}, in clean and noisy conditions. We used the same 8~kHz models as in Table \ref{tab:Baselinebis}, which were trained on non-sparse LibriMix.
It can be seen that, for both two- and three-speaker mixtures, the higher the overlap, the lower the SI-SDR$_\text{i}$, as was also shown in \cite{Wavesplit2020Zeghidour}. In the 100\% overlap case, we obtain results similar to the ones in Table \ref{tab:Baselinebis} for the non-sparse, 8~kHz \textit{min} version. The values are slightly higher here because mixtures are not truncated to the shortest utterance. Interestingly, we see that Conv-TasNet performs \textit{worse} than the IRM for smaller overlaps. This suggests that there is still room for improvement in the separation of sparsely overlapping mixtures.
\begin{table}[t]
\centering
\setlength\tabcolsep{3pt}
\small
\caption{SI-SDR$_\text{i}$ (dB) achieved on SparseLibriMix (8~kHz). Conv-TasNet is abbreviated TCN.}
\begin{tabular}{c|cc|cc|cc|cc}
\hline
& \multicolumn{2}{c|}{2spk-C} & \multicolumn{2}{c|}{2spk-N} & \multicolumn{2}{c|}{3spk-C} & \multicolumn{2}{c}{3spk-N} \\ \hline
Overlap & IRM & TCN & IRM & TCN & IRM & TCN & IRM & TCN \\
0\% & 43.7 & 31.9 & 16.1 & 14.5 & 44.2 & 24.8 & 18.7 & 13.0 \\
20\% & 19.6 & 20.0 & 14.7 & 13.9 & 18.1 & 15.8 & 15.6 & 12.1 \\
40\% & 16.2 & 17.6 & 13.8 & 13.2 & 16.4 & 14.4 & 14.9 & 11.7 \\
60\% & 14.9 & 16.3 & 13.3 & 12.7 & 15.5 & 13.8 & 14.4 & 11.5 \\
80\% & 14.2 & 15.7 & 13.0 & 12.5 & 14.6 & 13.1 & 13.9 & 11.0 \\
100\% & 13.8 & 15.3 & 12.7 & 12.2 & 14.3 & 12.5 & 13.6 & 10.7 \\
\hline
\end{tabular}
\label{tab:sparse_results}
\end{table}
\subsection{Dataset comparisons}
The experiments in \cite{EmpiricalDolby2020} have shown that models trained on wsj0-2mix do not generalize well to other datasets. Similarly to \cite{EmpiricalDolby2020}, we investigate the generalization ability of Conv-TasNet when trained on different datasets. We train six different Conv-TasNet models on WHAM! \texttt{train}, LibriMix \texttt{train-100} and \texttt{train-360}, in both clean and noisy conditions. We evaluate each model on the corresponding (clean or noisy) test sets of Libri2Mix, WHAM!, and VCTK-2mix. The results in clean and noisy conditions are shown in Figs.~\ref{fig:sep_clean} and \ref{fig:sep_noisy}, respectively. Note that noise samples are matched across the three noisy test sets. For both clean and noisy separation, we can see that WHAM!-trained models generalize poorly to LibriMix, with a 4~dB SI-SDR drop compared to LibriMix-trained models, while LibriMix-trained models come closer to WHAM!-trained models on the WHAM! test set, with only a 0.8~dB SI-SDR drop. On the clean and noisy versions of VCTK-2mix, WHAM!-trained models perform around 3--4~dB worse than models trained on LibriMix's \texttt{train-360}. The performance drop from models trained on LibriMix's \texttt{train-100} compared to LibriMix's \texttt{train-360} confirms again that the amount of data is key to better generalization and that the amount of data available in WHAM! is insufficient.
Altogether, these results indicate that the clean and noisy versions of LibriMix allow better generalization than the wsj0-mix and WHAM! datasets.
Several factors can influence generalization. While VCTK-2mix was generated with statistics matching those of LibriMix, we argue that this is not the reason, as the results reported in \cite{EmpiricalDolby2020} go in the same direction. Instead, we believe that the number of speakers (100 against 900), the size of the vocabulary (5k against 60k), the recording conditions (same room and same recording material against varying rooms and materials) and the total amount of training data (30~h against 212~h) add up to explain why models trained with LibriMix's \texttt{train-360} offer better generalization.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{clean_graph.pdf}
\caption{Cross-dataset evaluation on the clean separation task. Error bars indicate 95\% confidence intervals.}
\label{fig:sep_clean}
\end{figure}
Results reported in \cite{EmpiricalDolby2020} are somewhat different from the ones we report here, which can be explained by several factors.
First, the VCTK-based two-speaker test set in \cite{EmpiricalDolby2020} was designed using the Matlab scripts from \cite{DPCLHershey2016}. These scripts do not remove silences and compute SNRs based on signal power instead of LUFS. As utterances from VCTK can be filled with silence, this greatly increases the effective SNR range of mixtures. For example, a short utterance in a long silence mixed at 0~dB with a long utterance without silence can produce a mixture where the second source is almost inaudible. This explains the low performance obtained on VCTK in \cite{EmpiricalDolby2020}.
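This effect can be illustrated with a minimal sketch (hypothetical signals, not the actual Matlab scripts): scaling two sources to a nominal 0~dB power ratio over their full length inflates the effective SNR inside the short source's active region when that source is mostly silence.

```python
import numpy as np

# Hypothetical signals: a short burst padded with silence, and a long
# utterance without silence (white noise stands in for speech here).
rng = np.random.default_rng(0)
burst = np.concatenate([rng.standard_normal(800), np.zeros(7200)])
speech = rng.standard_normal(8000)

def scale_to_snr(s1, s2, snr_db):
    """Scale s2 so that the *full-utterance* power ratio s1/s2 equals snr_db."""
    p1, p2 = np.mean(s1**2), np.mean(s2**2)
    return s2 * np.sqrt(p1 / (p2 * 10.0**(snr_db / 10.0)))

scaled = scale_to_snr(burst, speech, 0.0)   # nominally mixed "at 0 dB"
# Within the burst's active region, the power ratio is ~10 dB instead of 0 dB:
active_snr = 10.0 * np.log10(np.mean(burst[:800]**2) / np.mean(scaled[:800]**2))
```

Loudness-based (LUFS) measurement over active speech avoids this inflation, which is why LibriMix mixes sources based on loudness rather than raw signal power.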
Second, the alternative training and test sets are based on LibriTTS \cite{LibriTTS2019} which is itself derived from LibriSpeech \cite{panayotov2015librispeech}. LibriTTS has shorter and cleaner utterances, which could explain the higher performance reported on its test set in \cite{EmpiricalDolby2020}, and the larger drop in performance when tested on wsj0-2mix's test set.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{noisy_graph.pdf}
\caption{Cross-dataset evaluation on the noisy separation task. Error bars indicate 95\% confidence intervals.}
\label{fig:sep_noisy}
\end{figure}
\section{Conclusions} \label{sec:conclusions}
In this work, we introduced LibriMix, a new family of datasets for generalizable single-channel speech separation. Libri2Mix and Libri3Mix enable two- and three-speaker separation in clean and noisy conditions. We report competitive results in all conditions using Asteroid's implementation of Conv-TasNet.
A new independent test set, VCTK-2mix, is also released to enable reproducible cross-dataset evaluation. Experiments show that models trained on Libri2Mix generalize better to VCTK-2mix than models trained with WHAM!. Additionally, Libri3Mix is the first open-source dataset to enable three-speaker noisy separation.
Stepping towards more realistic scenarios, we release SparseLibri2Mix and SparseLibri3Mix, two- and three-speaker test sets consisting of sparsely overlapping speech mixtures with a varying amount of overlap. Initial results on these test sets suggest that there is still room for improvement in this scenario.
Future work includes the design of a training set of sparsely overlapping speech mixtures, as well as a more diverse set of noise samples.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
Hydrogen, in its atomic and molecular form, is the most abundant of chemical species in the interstellar medium. The number and size of detected interstellar molecules have been increasing over the last eighty years.
Polycyclic aromatic hydrocarbons (PAHs) have been investigated as carriers of the diffuse interstellar bands (DIBs) and of the
unidentified infrared (UIR) emission features, and as a source of the anomalous microwave emission \citep{Tielens2008}.
It has been suggested that protonated PAH molecules play a catalytic role in the process of the H$_2$ formation in space \citep{Bauschlicher1998,Hirama2004}.
Electronic transitions in the optical spectral region of protonated hydrogenated coronene, ovalene, pyrene, and circumpyrene have been recently calculated and discussed in the context of the DIBs problem
\citep{Pathak2008,Hammonds2009}. Spectra of
isomers of protonated anthracene and phenanthrene have been measured in neon matrices and studied by the time-dependent density functional theory
\citep{Garkusha2011}.
We have selected here cations of naphthalene and proflavine to study the change
of their overall optical spectra under additions of hydrogen atoms.
The presence of C$_{10}$H$_8$$^{+}$ in the interstellar medium has been discussed \citep{Iglesias-Groth2008,Galazutdinov2011}.
Evidence for the interstellar naphthalene cation has been reported several times in the past and was later shown to be premature
\citep{Galazutdinov2011}.
Proflavine is a substituted PAH molecule with possible applications in astrophysics and astrobiology \citep{Sarre2006,Bonaca2010}.
A theoretical study of reactions has shown that the first H atom attaches to the naphthalene cation C$_{10}$H$_8^+$ in an exothermic reaction of about 60 kcal mol$^{-1}$, whereas the process for the second H atom is exothermic by 45 kcal mol$^{-1}$ \citep{Bauschlicher1998}.
Herbst and Le Page have found that the naphthalene cation efficiently associates with atomic hydrogen at all densities \citep{Herbst1999}.
Calculations have also shown that the addition of H atoms to the naphthalene cation proceeds with little to no barriers
\citep{Ricca2007}.
Electronic absorption spectra of
H-C$_{10}$H$_8$$^{+}$ have been studied by photofragment spectroscopy and theoretical analysis
\citep{Alata2010a,Alata2010b}. It has been found that the protonated naphthalene cation absorbs in the visible part of the spectrum, around 500 nm.
A possibility that C$_{10}$H$_8$ and C$_{10}$H$_8$$^{+}$ are carriers of
DIBs has been studied \citep{Salama1992,Hirata1999,Krelowski2001,Malloci2007b}.
The evidence of the naphthalene cation in the direction of the star Cernis 52 in the Perseus molecular cloud complex has been reported \citep{Iglesias-Groth2008}.
The presence of some known DIBs has been confirmed in this study, and two new bands consistent with laboratory measurements on the naphthalene cation have been observed. It has also been proposed that hydrogen additions produce hydronaphthalene cations which contribute to
the anomalous microwave emission in this cloud \citep{Iglesias-Groth2008}. However, a recent work \citep{Galazutdinov2011} has shown that this report \citep{Iglesias-Groth2008} is premature.
The presence of the naphthalene cation and related species in the interstellar medium deserves further studies.
In addition to PAHs, the related molecules where C and H are substituted by other atoms are also studied in the astrophysical context \citep{Hudgins2005,Sarre2006}.
Nitrogen is abundant in the interstellar medium. Its compounds have been detected in interstellar dust particles and meteorites.
Organic dye proflavine, C$_{13}$H$_{11}$N$_3$ (3,6-diaminoacridine),
has been proposed as one of the molecules which act as a molecular ``midwife'' because of their ability to accelerate DNA and RNA synthesis \citep{Jain2004}. Therefore, the optical spectrum of proflavine and its ions is of astrobiological interest.
Studies of such large biological molecules are important for understanding of prebiotic chemistry \citep{Puletti2010}.
Organic dyes often exhibit strong lines in the visible spectrum.
It has been shown that the optical spectrum of proflavine in aqueous solutions strongly depends on the pH value \citep{DeSilvestri1984,Mennucci2005}. Protons can attach to nitrogen and carbon atoms in the molecule, and the state of proflavine in water and other solvents depends on pH of the solution.
For example, it has been measured that the maximum of absorption in the visible spectrum of proflavine ($\lambda_{max}$) in water at room temperature
changes from 444 nm to 394 nm when pH changes from 7.0 to 14.0 \citep{DeSilvestri1984}.
The visible and UV optical spectra of proflavine and its ions
have been recently studied using the pseudopotential time-dependent density functional theory methods \citep{Bonaca2010}. Positions of spectral lines have been compared with DIBs, but with no definite conclusions.
Because of the sensitivity of the optical properties of proflavine in water to the pH value,
we investigate here the impact of hydrogen additions on the optical spectrum of its cation.
In this work we consider minimal and maximal hydrogenation of two organic cations.
The addition of one, two, and ten H atoms to the naphthalene cation, as well as of one (in two positions) and fourteen hydrogen atoms to the proflavine cation, is studied.
We use the pseudopotential density functional theory (DFT) \citep{Martin2004} to determine the ground states of all base and hydrogenated species.
Calculations of optical spectra of these systems within the pseudopotential time-dependent density functional theory (TDDFT)
methods \citep{Runge1984} are described in section~\ref{methods}. Results are presented and
discussed in section~\ref{results}, while final remarks are given in section~\ref{concl}.
\section{Computational methods}
\label{methods}
Optical spectra are calculated using the Octopus code \citep{Castro2006}.
The geometries of all naphthalene and proflavine systems are minimized independently
by the Quantum ESPRESSO DFT package \citep{Giannozzi2009}. This approach is taken because
Octopus is not in general suitable for a search of optimal geometries.
Conditions as close as possible to simulations of the spectra within the Octopus code have been used in the Quantum ESPRESSO calculations
(i.e., the local density approximation (LDA) and corresponding pseudopotentials, see below and in Bonaca \& Bilalbegovi\' c 2010).
The optimized geometries of cations are calculated without imposing any constraints, and they are
used as inputs in the ground state and time-dependent calculations in the Octopus code.
Optical spectra are obtained using a time propagation method and
the approximated enforced time-reversal symmetry algorithm \citep{Castro2004}.
The ground state is perturbed by an impulsive time-dependent potential and
the time evolution is followed for 15.8 fs.
In the time propagation TDDFT method used in this work, the width of spectral lines is inversely proportional to the total
propagation time \citep{Castro2006,Lopez2003,Malloci2007b,Puletti2010}.
A time step of 0.0012 $\hbar$/eV is applied.
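As a rough order-of-magnitude check (our estimate, using $\hbar \approx 0.658$ eV fs; the Fourier-limited width may differ by an $\mathcal{O}(1)$ factor depending on the damping convention), these settings imply

```latex
\Delta E \sim \frac{2\pi\hbar}{T} \approx \frac{2\pi \times 0.658~\mathrm{eV\,fs}}{15.8~\mathrm{fs}} \approx 0.26~\mathrm{eV},
\qquad
N = \frac{T}{\Delta t} = \frac{15.8~\mathrm{fs}}{0.0012~\hbar/\mathrm{eV}} \approx 2 \times 10^{4}~\text{steps}.
```

i.e., a few tenths of an eV of intrinsic line width and on the order of $10^4$ propagation steps.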
The absolute cross-section $\sigma(\omega)$ is obtained from the dynamical polarisability $\alpha(\omega)$, which is calculated from the Fourier transform
of the time-dependent dipole moment of the system.
Electronic spectra are calculated from
\begin{equation}
\sigma(\omega)=\frac{2\omega}{\pi}\,\mathrm{Im}\,\alpha(\omega).
\label{cross}
\end{equation}
In this equation $\mathrm{Im}\,\alpha(\omega)$ is the imaginary part of the dynamical polarisability.
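This procedure can be sketched as follows (a simplified stand-in for the Octopus implementation; the single-transition dipole signal, the grids, and the damping constant are illustrative assumptions, in $\hbar = 1$ units):

```python
import numpy as np

def absorption_spectrum(dipole, dt, kick, damping=0.1, omega=None):
    """Cross-section sigma(w) = (2 w / pi) Im alpha(w) from a dipole signal,
    where alpha(w) = (1/kick) * integral dt e^{i w t} [mu(t) - mu(0)]."""
    if omega is None:
        omega = np.linspace(0.0, 10.0, 500)        # energies (hbar = 1 units)
    t = np.arange(len(dipole)) * dt
    # Exponential damping stands in for the finite propagation time.
    signal = (dipole - dipole[0]) * np.exp(-damping * t)
    # Riemann-sum Fourier transform of the damped dipole signal.
    alpha = (np.exp(1j * omega[:, None] * t) * signal).sum(axis=1) * dt / kick
    sigma = (2.0 * omega / np.pi) * alpha.imag
    return omega, sigma

# Hypothetical dipole response containing a single transition at 2 (energy units):
dt, kick = 0.005, 0.01
t = np.arange(5000) * dt
mu = kick * np.sin(2.0 * t)
omega, sigma = absorption_spectrum(mu, dt, kick)
```

The damping turns the finite-time signal into Lorentzian lines, consistent with the inverse relation between line width and total propagation time noted above.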
The Troullier-Martins pseudopotentials \citep{Troullier1993} and the TDLDA approximation with the Perdew-Zunger exchange-correlation functional \citep{Perdew1981} are used, as well as a spacing of 0.13 \AA{} in the real-space method. The simulation cell is constructed
from spheres of $4$ \AA{} radius around the atoms.
The TDLDA approximation in the pseudopotential TDDFT method shows a very good stability and produces results in an agreement with experiments, even for large biological molecules \citep{Lopez2003}. It has been found that the results of TDDFT calculations (within the Tamm-Dancoff approximation and
using the BLYP functional) for the excitation energies of several PAH cations
agree with experiments within 0.3 eV \citep{Hirata1999}.
We have found that calculations with the B3LYP functional (still in development in the latest version of the Octopus code) do not change substantially the spectrum of proflavine \citep{Bonaca2010}. It has also been found that spectra of proflavine obtained using the time propagation method in the Octopus code agree with results
of the Lanczos chain TDDFT module in the Quantum ESPRESSO \citep{Walker2006,Bonaca2010}.
It is known that the real-time propagation method of the Octopus code produces the whole optical spectrum up to the far-UV \citep{Castro2006}.
However, for energies above $10$ eV only envelopes of the spectra are accurate.
In addition, there is little astronomical interest in the high-energy region of the spectrum.
To facilitate a comparison with other studies
of PAHs and organic molecules of astrophysical interest carried out by the same TDDFT method \citep{Malloci2007a,Malloci2007b,Bonaca2010,Puletti2010}
we show optical spectra up to 6 eV.
The hydronaphthalene cation C$_{10}$H$_9^+$ and the dihydronaphthalene cation C$_{10}$H$_{10}^+$, with additional H atoms attached at the same positions as in Bauschlicher (1998), are studied.
Two situations of a minimal hydrogenation, where one hydrogen atom is added to the proflavine cation, are also investigated here:
an additional H atom attached as the second hydrogen atom at the central carbon atom (labelled as (I)), and at the opposite nitrogen atom in the same central ring (labelled as (II)).
We also study the fully hydrogenated species of the naphthalene and proflavine cations: H$_{10}$-C$_{10}$H$_8$$^{+}$ and H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$.
\section{Results and Discussion}
\label{results}
\begin{figure}
\vspace{40pt}
\centering
\includegraphics[width=8.0cm]{Fig1.eps}
\caption{
Optical spectra of the naphthalene cation: (a) after the geometry optimization without constraints,
(b) for the cation with the optimized structure of the neutral naphthalene.
Insets: Black balls represent carbon atoms, whereas white ones model hydrogen atoms. Structures are visualized using the XCrySDen program
\citep{Kokalj2003}.}
\label{fig1}
\end{figure}
The optimized structures of all investigated species of naphthalene
and proflavine are shown as insets in Figs.~\ref{fig1}--\ref{fig3}.
In contrast to the naphthalene and proflavine molecules,
carbon skeletons of fully hydrogenated H$_{10}$-C$_{10}$H$_8$$^{+}$ and H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$ (shown in Fig.~\ref{fig2}(c)
and Fig.~\ref{fig3}(c)) are not planar. In addition, hydrogen atoms attached in pairs on peripheral carbon atoms
are positioned below and above the middle plane of carbon rings.
Average C-C distances increase from 1.39 \AA{} in the optimized naphthalene cation to 1.51 \AA{} in H$_{10}$-C$_{10}$H$_8$$^{+}$.
Average N-C distances increase from 1.34 \AA{} in C$_{13}$H$_{11}$N$_3$$^{+}$ to 1.41 \AA{} in H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$. Average C-C distances increase from 1.40 \AA{} in the proflavine cation to 1.51 \AA{} in H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$.
Hydrogen atoms are also attached on the C atoms in pairs above and below rings in optimized
H-C$_{10}$H$_8$$^{+}$ (one pair, inset in Fig.~\ref{fig2}(a)) and H$_2$-C$_{10}$H$_8$$^{+}$ (two pairs, inset in Fig.~\ref{fig2}(b)).
However, carbon skeletons are planar in both these structures with a minimal hydrogenation,
as well as in H-C$_{13}$H$_{11}$N$_3$$^{+}$ (in both positions I and II, insets in Fig.~\ref{fig3}(a) and Fig.~\ref{fig3}(b)).
A similar conclusion about the planarity of H-C$_{10}$H$_8$$^{+}$ has been reached by Alata and coworkers \citep{Alata2010a}.
Optical spectra presented in this work are calculated starting from structures optimized separately for all pure and hydrogenated cations, without any constraints in the geometry
optimizations. We found that the fully optimized naphthalene cation is still a planar structure. However, the charge and small displacements from the atom positions of the neutral naphthalene produce small differences in the spectrum. Optical spectra for the fully optimized cation and for the structure where
one electron is removed from the optimized geometry of a neutral naphthalene \citep{Niederalt1995,Hirata1999,Malloci2007b}
are shown in Fig.~\ref{fig1}. The geometry used as an input for Fig.~\ref{fig1}(b) is taken from the Theoretical spectral database of PAHs \citep{Malloci2007a}.
Our result for the optical spectrum of the naphthalene cation in this situation agrees with the one presented in the database \citep{Malloci2007a}.
Lines for spectra shown in Fig.~\ref{fig1} are compared in Table~\ref{table:1}.
\begin{figure}
\vspace{50pt}
\centering
\includegraphics[width=8.0cm]{Fig2.eps}
\caption{
Optical spectra of hydrogenated species of naphthalene cation: (a) H-C$_{10}$H$_8$$^{+}$,
(b) H$_2$-C$_{10}$H$_8$$^{+}$,
(c) H$_{10}$-C$_{10}$H$_8$$^{+}$.}
\label{fig2}
\end{figure}
\begin{figure}
\vspace{50pt}
\centering
\includegraphics[width=8.0cm]{Fig3.eps}
\caption{
Optical spectra of hydrogenated species of proflavine cation: (a) H-C$_{13}$H$_{11}$N$_3$$^{+}$, additional H atom is attached
at the central C atom (position I),
(b) H-C$_{13}$H$_{11}$N$_3$$^{+}$, additional H atom attached
at the central N atom (position II),
(c) H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$.
Insets: The nitrogen atoms are labelled by ``N'' on the right-hand side of corresponding balls.}
\label{fig3}
\end{figure}
Optical spectra of hydrogenated protonated naphthalene species are shown in Fig.~\ref{fig2}.
Peaks of the naphthalene cation move in H-C$_{10}$H$_8$$^{+}$, H$_2$-C$_{10}$H$_8$$^{+}$, and H$_{10}$-C$_{10}$H$_8$$^{+}$, and their intensities change.
Optical spectra of hydrogenated protonated proflavine species are shown in Fig.~\ref{fig3}.
The most notable fact is that
the spectrum of H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$ (Fig.~\ref{fig3}(c)) is broad and this broadening is much more pronounced than in the fully hydrogenated naphthalene (Fig.~\ref{fig2}(c)).
Spectral lines of all naphthalene and proflavine related systems are presented in Table~\ref{table:1} and Table~\ref{table:2}.
Visible spectra of hydrogenated naphthalene cations extend up to $\sim$ 500 nm, whereas long tails of visible spectra of hydrogenated proflavine cations
spread above 800 nm.
The photofragmentation spectroscopy measurements by Alata and coworkers
have also shown that H-C$_{10}$H$_8$$^{+}$ absorbs in the visible, around 500 nm \citep{Alata2010a,Alata2010b}.
We also present near-UV lines (above 200 nm)
to facilitate a comparison with corresponding spectral measurements of interstellar organic materials \citep{Kwok2008}.
Such experiments are designed to be carried out on the Cosmic Origins Spectrograph (COS) of the Hubble Space Telescope \citep{Osterman2011}.
The near-UV channel of COS shows a high sensitivity in the region between 200 nm and 300 nm
where lines of hydrogenated protonated naphthalene and proflavine exist.
The lines at 281 nm and 422 nm in the spectrum of the proflavine cation \citep{Bonaca2010} move to 273 nm and 454 nm,
and three new lines appear between these two
in the spectrum of H-C$_{13}$H$_{11}$N$_3$$^{+}$ (position I).
Rather strong lines exist at 201 nm, 222 nm, and 244 nm.
An addition of the H atom to
the central N atom produces lines between 206 nm and 470 nm, as shown in Table~\ref{table:2}.
The change of optical lines is the most obvious in the spectrum of H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$, where the UV lines exist at 216 nm and 253 nm.
The intensity then drops, and the lines at 352 nm, 420 nm, and 496 nm are very weak.
It is known that the majority of DIBs are present in the atomic hydrogen gas \citep{Herbig1993,Snow2006}.
In Table~\ref{table:3} we compare positions of several calculated lines with the closest DIBs \citep{Hobbs2008,Hobbs2009,Tuairisg2000}.
The best agreement is for a line of the fully hydrogenated naphthalene cation H$_{10}$-C$_{10}$H$_8$$^{+}$ and a line of
H-C$_{13}$H$_{11}$N$_3$$^{+}$ (position II). Although TDDFT calculations show only trends in DIB positions \citep{Malloci2007a,Malloci2007b,Pathak2008,Hammonds2009},
the agreement presented in Table~\ref{table:3} is good and deserves a further experimental investigation.
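The comparison in Table~\ref{table:3} amounts to matching each calculated line to the nearest catalogued DIB. A trivial sketch with the wavelengths quoted there (the helper function is hypothetical, for illustration only):

```python
# DIB wavelengths (nm) from the surveys cited in Table 3.
dibs = [417.55, 425.90, 450.17, 476.26]

def nearest_dib(line_nm, catalogue):
    """Return the closest catalogued DIB wavelength and the offset in nm."""
    best = min(catalogue, key=lambda d: abs(d - line_nm))
    return best, line_nm - best

# E.g. the 426 nm line of the fully hydrogenated naphthalene cation:
dib, delta = nearest_dib(426.0, dibs)   # closest DIB is 425.90 nm
```

Since the calculated lines are only reliable to whole nanometres, such offsets merely indicate candidates for laboratory follow-up rather than identifications.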
In the recent report of the high resolution spectroscopy observation of C$_{10}$H$_8$$^{+}$ in the Cernis 52, the importance of hydronaphthalene cations has been pointed out \citep{Iglesias-Groth2008}. Although this report has been shown to be premature
\citep{Galazutdinov2011}, the presence of the naphthalene cation and its hydrogenated forms in the interstellar medium remains an important subject
for further investigations.
\section{Conclusions}
\label{concl}
Studies of spectra of polycyclic aromatic hydrocarbon
cations with attached hydrogen atoms are important for the DIB and UIR problems, the anomalous microwave emission, as well as for general properties
of organic matter in the interstellar medium. We study the changes in electronic absorption spectra induced by
hydrogen additions to the naphthalene and proflavine cations using the time-dependent density functional method in its pseudopotential version.
Calculated spectra are based on the overall density of electronic transitions and
show that hydrogen additions substantially change intensities, shapes,
and positions of optical lines in comparison with the spectra of the base cations.
Therefore, the addition of hydrogen atoms is important in astrophysical applications of optical spectra of organic molecules.
Similar conclusions have been found for protonated hydrogenated coronene, ovalene, pyrene, and circumpyrene
\citep{Pathak2008,Hammonds2009}.
Calculated lines of protonated hydrogenated naphthalene and proflavine
are compared with measured DIBs. An especially good agreement exists for the visible line of the fully hydrogenated naphthalene
cation and two visible lines of
the hydroproflavine cations.
The lines in the near-UV spectral region are also presented.
Our calculations should provide guidelines for how the near-UV and visible spectral lines of similar, larger cations and their derivatives change under hydrogenation.
\begin{table*}
\caption{Optical spectral lines of the naphthalene cation and hydrogenated naphthalene cation species. The star labels data
for the naphthalene cation with the optimized geometry of the neutral naphthalene molecule, whereas all other data are obtained
for optimized cations. Because of the limited accuracy of the TDDFT method, the calculated results are rounded to whole numbers.}
\label{table:1}
\centering
\begin{tabular}{l l l l l l l}
\hline
Structure & $\lambda _1$ (nm) & $\lambda _2$ (nm) & $\lambda _3$ (nm) & $\lambda _4$ (nm) & $\lambda _5$ (nm) & $\lambda _6$ (nm)\\
\hline
C$_{10}$H$_{8}$$^{+}$ & 200 & 232 & 250 & 342 & 517 & - \\
C$_{10}$H$_{8}$$^{+}$ (*) & 201 & 234 & 253 & 350 & 528 & - \\
H-C$_{10}$H$_{8}$$^{+}$ & 209 & 233 & 254 & 281 & 367 & 494 \\
H$_2$-C$_{10}$H$_{8}$$^{+}$ & 217 & 238 & 279 & 315 & 395 & 453 \\
H$_{10}$-C$_{10}$H$_{8}$$^{+}$ & 206 & 226 & 246 & 285 & 313 & 426 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{Optical spectral lines of the proflavine cation and hydrogenated proflavine cation species. Labels (I) and (II) denote an H atom attached to the central carbon and to the central nitrogen atom, respectively. Because of the limited accuracy of the TDDFT method, the calculated results are rounded to whole numbers.}
\label{table:2}
{\centering
\begin{tabular}{l l l l l l l l l }
\hline
Structure & $\lambda _1$ (nm) & $\lambda _2$ (nm) & $\lambda _3$ (nm) &$\lambda _4$ (nm)&$\lambda _5$ (nm)&$\lambda _6$ (nm)&$\lambda _7$ (nm)&$\lambda _8$ (nm)\\
\hline
C$_{13}$H$_{11}$N$_3$$^{+}$ & 281$^{1}$ & 422$^{1}$ & - &&&&&\\
H-C$_{13}$H$_{11}$N$_3$$^{+}$ (I) & 201 & 222 & 244 & 273 & 301 & 332 & 375 & 454\\
H-C$_{13}$H$_{11}$N$_3$$^{+}$ (II)& 206 & 228 & 249 & 274 & 343 & 399 & 470 & -\\
H$_{14}$-C$_{13}$H$_{11}$N$_3$$^{+}$ & 216 & 253 & 352 & 420 & 496 & - & - & - \\
\hline
\end{tabular}
}
$^{1}${\citep{Bonaca2010}}
\end{table*}
\begin{table*}
\caption{Lines calculated using the time-dependent density functional theory (TDDFT) method
compared to the nearest DIBs. Because of the limited accuracy of the TDDFT method, the calculated results are rounded to whole numbers.
FWHMs of the experimental lines are also shown.}
\label{table:3}
{\centering
\begin{tabular}{l l l l}
\hline
Structure & $\lambda$(TDDFT) (nm) & $\lambda$(DIB) (nm) & FWHM (nm)\\
\hline
C$_{13}$H$_{11}$N$_3$$^{+}$ & 422$^{1}$ & 417.55$^{2}$ & 1.72\\
H-C$_{13}$H$_{11}$N$_3$$^{+}$ (I) & 454 & 450.17$^{3}$ & 0.30\\
H-C$_{13}$H$_{11}$N$_3$$^{+}$ (II) & 470 & 476.26$^{3}$ & 0.25\\
H$_{10}$-C$_{10}$H$_8$$^{+}$ & 426 & 425.90$^{4}$ & 0.11\\
\hline
\end{tabular}
}
$^{1}${\citep{Bonaca2010}}
$^{2}${\citep{Tuairisg2000}}
$^{3}${\citep{Hobbs2009}}
$^{4}${\citep{Hobbs2008}}
\end{table*}
\section*{Acknowledgments}
This work has been done under the HR-MZOS project 119-1191458-1011, and using computer resources at
the University of Zagreb Computing Centre SRCE. The authors would like to thank
Alberto Castro and Layla Martin-Samos for discussions.
\bibliographystyle{mn2e}
\section{Introduction}
\vspace{-5pt}
Object parts play a crucial role in many recent approaches for fine-grained recognition.
They allow for capturing very localized discriminative features of an object~\cite{Goering14:NPT}.
Learning part models is often done in a completely supervised manner by providing part annotations~\cite{branson14cub75acc,zhang12-ppk}
or labeled bounding boxes~\cite{dpm,Simon14:PDD}.
In contrast, we show how to learn part-models in a completely unsupervised manner, which drastically reduces annotation costs for learning.
Our approach is based on learning constellations of neural activation patterns obtained from pre-learned convolutional neural networks (CNN).
Fig.~\ref{fig:teaser} shows an overview of our approach. Our part hypotheses are outputs of an intermediate CNN layer for
which we compute neural activation maps~\cite{Simon14:PDD,SimonyanSaliency}. Unsupervised part models are either built by
randomly selecting a subset of the part hypotheses or learned by estimating the parameters of a generative spatial part
model. In the latter case, we implicitly find subsets of part hypotheses that ``fire'' consistently in a certain constellation in the images.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{other_img/teaser2.pdf}
\caption{Overview of our approach.
Deep neural activation maps are used to exploit the channels of a CNN as a part detector.
We estimate a part model from completely unsupervised data by selecting part detectors that fire at similar relative locations.
The created part models are then used to extract features at object parts for weakly-supervised classification.
\label{fig:teaser}
}
\end{figure}
Although models for the spatial relationship of parts were already introduced a decade ago~\cite{fergus2003classRecog,fei2006oneShot},
these approaches face major difficulties because their part proposals
are based on hand-engineered local descriptors and detectors without correspondence.
We overcome this problem by using implicit part detectors of a pre-learned CNN, which at the same time greatly simplifies the part-model training.
As shown by \cite{zeiler2013visualizing}, intermediate CNN outputs can often be linked to semantic parts of common objects and
we are therefore using them as part proposals.
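As a simplified illustration (not the gradient-based activation maps of \cite{Simon14:PDD}; here a plain forward activation map and nearest-neighbor coordinate mapping are assumed), a channel's part proposal can be read off as the argmax of its activation map, mapped back to image coordinates:

```python
import numpy as np

def part_proposal(activation_map, image_shape):
    """Locate a part proposal as the argmax of one channel's activation map,
    mapped back to (cell-centred) image coordinates."""
    h, w = activation_map.shape
    iy, ix = np.unravel_index(np.argmax(activation_map), (h, w))
    return ((iy + 0.5) * image_shape[0] / h, (ix + 0.5) * image_shape[1] / w)

# Hypothetical 13x13 map from an intermediate layer for a 227x227 input:
fmap = np.zeros((13, 13))
fmap[4, 7] = 1.0                       # this channel fires at grid cell (4, 7)
y, x = part_proposal(fmap, (227, 227))
```

Each channel thus yields one candidate part location per image, which forms the proposal pool for the selection step described next.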
Our part model learning only has to select a few parts for each view of an object from an already high-quality pool of part proposals.
This allows for a much simpler and faster part model creation without the need to explicitly model the appearance of the individual parts as done in previous works~\cite{fergus2003classRecog,andriluka}.
At the same time, we do not need any ground-truth part locations or bounding boxes.
The obtained approach and learning algorithm improves the state-of-the-art in fine-grained recognition on three datasets including CUB200-2011~\cite{wahCUB_200_2011} if no ground-truth part or bounding box annotations are available at all.
In addition, we show how to use the same approach for generic object recognition on Caltech-256.
This is a major difference to previous work on fine-grained recognition, since most approaches are not directly applicable to other tasks.
For example, our approach is able to achieve state-of-the-art performance on Caltech-256 without the need for expensive dense evaluation on different scales of the image~\cite{Simonyan14veryDeep}.
Furthermore, our work has impact beyond fine-grained recognition, since our method can also be used to guide data augmentation during fine-tuning for image classification.
We demonstrate in our experiments that it even yields a more discriminative CNN compared to a CNN fine-tuned with ground-truth bounding boxes of the object.
In the next section, we give a brief overview over recent approaches in the areas of part constellation models and fine-grained classification.
Sect.~\ref{sec:neural_maps} reviews the approach of Simon~\etal~\cite{Simon14:PDD} for part proposal generation.
In Sect.~\ref{sec:part_discovery}, we present our flexible unsupervised part discovery method.
The remaining paper is dedicated to the experiments on several datasets (Sect.~\ref{sec:experiments}) and conclusions (Sect.~\ref{sec:conclusions}).
\section{Related work\label{sec:related_work}}
\vspace{-5pt}
\myparagraph{Part constellation models}
Part constellation models describe the spatial relationship between object parts.
There are many supervised methods for part model learning which rely on ground-truth part or bounding box annotations~\cite{zhang2013dpd,Goering14:NPT,Simon14:PDD}.
However, annotations are often not available or expensive to obtain.
In contrast, the unsupervised setting does not require any annotation and relies on part proposals instead.
It greatly differs from the supervised setting as the selection of useful parts is crucial.
We focus on unsupervised approaches as these are the most related to our work.
One of the early works in this area is \cite{zobel2000robust}, where facial landmark detection was done by fusing single detections with a coupled ray model.
Similar to our approach, a common reference point is used and the position of the other parts are described by a distribution of their relative polar coordinates.
However, they rely on manually annotated parts while we focus on the unsupervised setting.
Later on, Fergus~\etal~\cite{fergus2003classRecog} and Fei-Fei~\etal~\cite{fei2006oneShot} build models based on generic SIFT interest point detections.
The model includes the relative positions of the object parts as well as their relative scale and appearance.
While their interest point detector delivers a number of detections without any semantics, each of the CNN-based part detectors we use already corresponds to a specific object part proposal.
This allows us to design the part selection much more efficiently and to speed up the inference.
Compared to \cite{fergus2003classRecog,fei2006oneShot}, the run-time complexity decreases from exponential to linear in the number of modeled parts.
Similar computational limitations occur in other works as well, for example \cite{riabchenko14constellation}.
Especially in the case of a large number of part proposals this is a significant benefit.
Yang~\etal~\cite{templatenips} select object part templates from a set of randomly initialized image patches.
They build a part model based on co-occurrence, diversity, and fitness of the templates in a set of training images.
The detected object parts are used for part-based fine-grained classification of birds.
In our application, co-occurrence and fitness are rather weak properties for the selection of CNN-based part
proposals.
For example, detectors of frequently occurring background patterns such as leaves of a tree would likely
be selected by their algorithm.
Instead our work considers the spatial relationship in order to filter unrelated background detectors
that fire on inconsistent relative locations.
Crandall~\etal~\cite{crandall07composite} improve part model learning by jointly considering object and scene-related parts.
However, the number of combinations of possible views of an object and different background patterns is huge.
In contrast, our approach selects the part proposals based on their relative positions, which is simpler and effective since we only need to identify useful part proposals for classification.
In the area of detection, there are numerous approaches based on object parts.
The deformable part model (DPM, \cite{dpm}) is the most popular one.
It learns part constellation models relative to the bounding box with a latent discriminative SVM model.
Most detection methods require at least ground-truth bounding box annotations.
In contrast, our approach does not require such annotations or any negative examples, since we learn
the constellation model in a generative manner and by using object part proposals not restricted to a bounding box.
\myparagraph{Fine-grained recognition with part models}
Fine-grained recognition focuses on visually very similar classes,
where the different object categories sometimes differ only in minor details.
Examples are the recognition of bird species~\cite{wahCUB_200_2011} or car models~\cite{krause2013cars}.
Since the differences of small parts of the objects matter, localized feature extraction using
a part model plays an important role.
One of the earliest works in the area of fine-grained recognition uses an ellipsoid to model the bird pose~\cite{farrell11-bsc} and fuses the obtained parts using very specific kernel functions~\cite{zhang12-ppk}.
Other works build on deformable part models~\cite{dpm}. For example,
the deformable part descriptor method of \cite{zhang2013dpd} uses a supervised version of \cite{dpm}
for training deformable part models, which then allows for pose normalization by comparing corresponding parts.
The work of \cite{gavves2013alignment} and \cite{Goering14:NPT} demonstrated nonparametric part detection for fine-grained recognition.
The basic idea is to transfer human-annotated part positions from similar training examples obtained with nearest neighbor matching.
Chai~\etal~\cite{chai2013symbiotic} use the detections of DPM and the segmentation output of GrabCut to predict part locations.
Branson~\etal~\cite{branson14cub75acc} use the part locations to warp image patches into a pose-normalized representation.
Zhang~\etal~\cite{zhang2014partRCNN} select object part detections from object proposals generated by Selective Search~\cite{uijlings13selSeach}.
The mentioned methods use the obtained part locations to calculate localized features.
Berg~\etal~\cite{berg2013partclass} learn a linear classifier for each pair of parts and classes.
The decision values of numerous such classifiers are used as feature representation.
While all these approaches work well in many tasks, they require ground-truth part annotations at training and often also at test time.
In contrast, our approach does not rely on expensive part location annotations and instead learns the part model in a fully unsupervised manner.
This also follows the recent shift of interest towards less annotation during training \cite{zhang2014partRCNN,xiao14attention,Simon14:PDD}.
Simon~\etal~\cite{Simon14:PDD} present a method which requires bounding boxes of the object during training rather than part annotations.
They also make use of neural activation maps for part discovery; although our approach does not need bounding boxes, we are still able to improve over their results.
The unsupervised scenario that we tackle has also been considered by Xiao~\etal~\cite{xiao14attention}.
They cluster the channels of the last convolutional layers of a CNN into groups.
Patches for the object and each part are extracted based on the activation of each of these groups.
The patches are used to classify the image.
While their work requires a pre-trained classifier for the objects of interest,
we only need a CNN that can be pre-trained on a weakly related object dataset.
\section{Deep neural activation maps\label{sec:neural_maps}}
\vspace{-5pt}
CNNs have demonstrated an amazing potential to learn a complete classification pipeline from scratch without the need to manually define low level features.
Recent CNN architectures \cite{krizhevsky2012imagenet,Simonyan14veryDeep} consist of multiple layers of convolutions, pooling operations,
full linear transformations and non-linear activations.
The convolutional layers convolve the input with numerous kernels.
As shown by \cite{zeiler2013visualizing}, the kernels of the convolutions in early layers are similar to the filter masks used in many popular
low level feature descriptors like HOG or SIFT.
Their work also shows that the later layers are sensitive to increasingly abstract patterns in the image.
These patterns can even correspond to whole objects~\cite{SimonyanSaliency} or parts of objects~\cite{Simon14:PDD} and this is exactly what we exploit.
The output $f$ of a layer before the fully-connected layers is organized in multiple channels $1 \leq p \leq P$ with a two-dimensional arrangement of output elements,
\ie we denote $f$ by $( f_{j,j'}^{(p)}(\bm{I}) )$ where $\bm{I} \in \mathbb{R}^{W \times H}$ denotes the input image and $j$ and $j'$ are indices of the output elements in the channel.
Fig.~\ref{fig:pool5_outputs} shows examples of such a channel output for the last convolutional layer.
As can be seen, the output can be interpreted as detection scores of multiple object part detectors.
Therefore, the CNN has automatically learned implicit part detectors relevant for the dataset it was trained on.
In this case, the visualized channel shows high outputs at locations corresponding to the head of birds and dogs.
\begin{figure}
\centering
{\scriptsize
\setlength{\tabcolsep}{2pt}
\setlength{\arraycolsep}{2pt}
\begin{tabular}{ccc}
\includegraphics[width=0.25\linewidth]{other_img/1_image.png} &
\includegraphics[width=0.25\linewidth]{other_img/1_blob.png} &
\includegraphics[width=0.25\linewidth]{other_img/1_grad.png} \\
\includegraphics[width=0.25\linewidth]{other_img/2_image.png} &
\includegraphics[width=0.25\linewidth]{other_img/2_blob.png} &
\includegraphics[width=0.25\linewidth]{other_img/2_grad.png}\\
&&\\
Input & CNN last conv. output & Neural activation map\\
$\bm{I}$ &
\resizebox{0.32\linewidth}{0.06\linewidth}{$\left[ \begin{array}{ccc} f_{1,1} & \ldots & f_{1,13}\\ \ldots & \ldots & \ldots\\ f_{13,1} & \ldots & f_{13,13} \end{array} \right]$ }&
\resizebox{0.32\linewidth}{0.07\linewidth}{$\left[ \begin{array}{ccc} m_{1,1} & \ldots & m_{1,227}\\ \ldots & \ldots & \ldots\\ m_{227,1} & \ldots & m_{227,227} \end{array} \right]$}
\end{tabular}
\setlength{\tabcolsep}{8pt}
}
\caption{Examples for the output of a channel of the last convolutional layer and the corresponding neural activation maps for two images (index of the channel is skipped to ease
notation). A deep red corresponds to high activation and a deep blue to no activation at all. Activation maps are available in higher resolution and better suited
for part localization. Best viewed in color.
\label{fig:pool5_outputs}}
\end{figure}
A disadvantage of the channel output is its resolution, which would not allow for precise localization of parts.
Due to this reason, we follow the basic idea of \cite{SimonyanSaliency} and \cite{Simon14:PDD} and
compute \emph{deep neural activation maps}. We calculate the gradient
of the average output of the channel $p$ with respect to the input image pixels $I_{x,y}$:
\vspace{-5pt}
\begin{align}
m_{x,y}^{(p)}(\bm{I}) &= \frac{\partial}{\partial I_{x,y}} \sum\limits_{j,j'} f_{j,j'}^{(p)}(\bm{I})
\end{align}
\vspace{-2pt}
The calculation can be easily achieved with a back-propagation pass~\cite{Simon14:PDD}.
The absolute value of the gradient shows which pixels in the image have the largest impact on the
output of the channel. Similar to the actual output of the layer, it allows for localizing image areas this channel is sensitive to.
However, the resolution of the deep neural activation maps is much higher (Fig.~\ref{fig:pool5_outputs}).
In our experiments, we compute part proposal locations for a training image $\bm{I}_{i}$ from these maps by
using the point of maximum activation:
\begin{align}
\bm{\mu}_{i,p} &= \argmax_{x,y} \left| m^{(p)}_{x,y}(\bm{I}_{i}) \right|.
\end{align}
Each channel of the CNN delivers one neural activation map per image and we therefore obtain one part proposal per channel $p$.
RGB images are handled by adding the absolute activation maps of each input channel.
Hence we reduce a deep neural activation map to a 2D location
and do not consider image patches for each part during the part model learning.
In classification, however, image patches are extracted at predicted part locations for feature extraction.
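The reduction of a neural activation map to a single normalized 2D part proposal (Eq.~(2), with RGB channels handled by summing absolute maps) can be sketched in a few lines. The following pure-Python toy snippet assumes the per-input-channel gradient maps have already been computed by back-propagation; the function name and the nested-list layout are illustrative, not part of our implementation.

```python
def part_proposal(grad_maps):
    """Reduce per-input-channel gradient maps to one 2D part proposal.

    grad_maps: list of H x W nested lists (one per input channel, e.g. R, G, B)
    holding the back-propagated gradient of the summed channel output.
    Returns the maximum-activation location normalized to [0, 1]^2, or None
    if the map is all zero (the part is then treated as hidden).
    """
    h, w = len(grad_maps[0]), len(grad_maps[0][0])
    # Sum the absolute gradients over input channels -> activation map.
    m = [[sum(abs(g[y][x]) for g in grad_maps) for x in range(w)]
         for y in range(h)]
    best, best_xy = 0.0, None
    for y in range(h):
        for x in range(w):
            if m[y][x] > best:
                best, best_xy = m[y][x], (x, y)
    if best_xy is None:          # all-zero map (ReLU sparsity) -> hidden
        return None
    x, y = best_xy
    return (x / (w - 1), y / (h - 1))
```

The normalized coordinates match the $\bm{\mu}_{i,p}\in\left[0,1\right]^{2}$ used as input to the part model learning below.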
The implicit part detectors are learned automatically during the training of the CNN.
This is a huge benefit compared to other part discovery approaches like poselets~\cite{poselets}, which do not necessarily produce parts useful for discrimination of classes a priori.
In our case, the dataset used to train the CNN does not necessarily need to be the same as the final dataset
and task for which we want to build part representations.
In addition, determining the part proposals is nearly as fast as the classification with the CNN (only $110$ms per image for $10$ parts on a standard PC with GPU),
which allows for real-time applications.
A video visualizing a bird head detector based on this idea running at 10fps is available at our project website.
We use the part proposals throughout the rest of this paper.
\vspace{-5pt}
\section{Unsupervised part model discovery\label{sec:part_discovery}}
\vspace{-5pt}
In this section, we show how to construct effective part models in an unsupervised manner
given a set of training images of an object class.
The resulting part model is used for localized feature extraction and subsequent fine-grained classification.
In contrast to most previous work, we have a set of robust but not necessarily related part proposals and need to select useful ones for the current object class.
Other approaches like DPM are faced with learning part detectors instead.
The main consequence is that we do not need to care about expensive training of robust part detectors.
Our task simplifies to a selection of useful detectors instead.
As input, we use the normalized part proposal locations $\bm{\mu}_{i,p}\in\left[0,1\right]^{2}$
for training image $i=1,\dots,N$ and part proposal $p=1,\dots,P$. The $P$ part proposals
correspond to the channels of an intermediate output layer of a CNN and $\bm{\mu}_{i,p}$ is determined
by calculating the activation map of channel $p$ for input image
$i$ and locating the maximum response. If the activation map of a channel is equal to $\bm{0}$, the part proposal
is considered hidden. This sparsity naturally occurs due to the rectified linear unit used as a nonlinear activation.
\subsection{Random selection of parts\label{sec:random_selection}}
A simple method to build a part model with multiple parts is to select $M$ random parts from all $P$ proposals.
For all training images, we then extract $M$ feature vectors describing the image region around the
part location. The features are stacked and
a linear SVM is learned using image labels. This can even be combined with fine-tuning of the CNN
used to extract the part features. Further details about part feature representations are given in Sect.~\ref{sec:experiments}.
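The random baseline amounts to nothing more than sampling channel indices; as a minimal sketch (the function name and seeding are hypothetical, and feature stacking plus the linear SVM happen downstream):

```python
import random

def random_part_model(P, M, seed=0):
    """Baseline of Sect. 3.1: pick M of the P channel-based part proposals
    uniformly at random. Features extracted around the M predicted part
    locations are then stacked and fed to a linear SVM."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    return sorted(rng.sample(range(P), M))
```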
In our experiments, we show that for generic object recognition random selection is indeed a valid technique. However,
for fine-grained recognition, we need to select the parts that likely correspond to the same object and not a background artifact.
Furthermore, using all proposals is not an option, since the dimensionality of the feature representation would increase dramatically, rendering training impractical. Therefore, we show in the following how to select only a few parts with a constellation model to boost classification performance and to reduce the computation time for feature calculation significantly.
\subsection{Constellations of neural activations\label{sec:part_constellation}}
The goal is to estimate a star shape model for a subset of selected proposals
using the 2D locations of all part proposals of all training images.
Similar to other popular part models like DPM~\cite{dpm}, our model also incorporates multiple views $v=1,\dots,V$
of the object of interest. For example, the front and the side view
of a car are different, and different parts are required to
describe each view.
Each view consists of a selection of $M$ part proposals denoted by the indicator
variables $b_{v,p} \in \left\{ 0,1\right\}$ and we refer to them as parts.
In addition, there is a set of corresponding shift vectors $\bm{d}_{v,p}\in\left[-1,1\right]^{2}$.
The shift vectors are the ideal relative offset of part $p$ to the common root
location $\bm{a}_{i}$ of the object in image $i$.
The $\bm{a}_i$ are latent variables since no object annotations are given during learning.
Another set of latent variables $s_{i,v}\in\left\{ 0,1\right\}$
denotes the view selection for each training
image. We assume that there is only one target object visible in each image
and hence only one view is selected for each image. Finally, $h_{i,p}\in\left\{ 0,1\right\}$
denotes if part $p$ is visible in image
$i$. In our case, the visibility of a part is provided by the part
proposals and not estimated during learning.
\myparagraph{Learning objective}
We identify the best model for the given training images
by maximum a-posteriori estimation of all model and latent parameters
$\bm{\Gamma} = (\bm{b}, \bm{d}, \bm{s}, \bm{a})$ from provided part proposal locations $\bm{\mu}$:
\vspace{-3pt}
\begin{equation}
\bm{\hat{\Gamma}} = \textstyle\argmax_{ \bm{\Gamma} } \quad
p\left(\bm{\Gamma} \;|\; \bm{\mu}\right).\label{eq:optimization-1}
\end{equation}
\vspace{-2pt}
In contrast to a marginalization over the latent variables, this joint estimation yields a very efficient learning algorithm.
We apply Bayes' rule, use the typical assumption that training images and part proposals are independent
given the model parameters~\cite{andriluka}, assume flat priors for $\bm{a}$ (no prior preference for the object's center)
and $\bm{d}$ (no prior preference for part offsets), and independent priors for $\bm{b}$ and $\bm{s}$:
\vspace{-3pt}
\begin{align}
& \argmax_{\bm{\Gamma}}\; p\left(\bm{\mu} \,|\, \bm{b}, \bm{d}, \bm{s}, \bm{a}\right)\cdot p(\bm{b}) \cdot p(\bm{s}) \nonumber \\
= & \argmax_{\bm{\Gamma}}\; \prod_{i=1}^{N} \left( \prod_{p=1}^{P} p\left(\bm{\mu}_{i,p} \,|\, \bm{b}, \bm{d}, \bm{s}, \bm{a}\right) \right) p(\bm{b}) \cdot p(\bm{s})\label{eq:optimization-2}
\end{align}
\vspace{-2pt}
The term $p\left(\bm{\mu}_{i,p} \,|\, \bm{b},\bm{d}, \bm{s}, \bm{a}\right)$ is
the distribution of the predicted part locations given the model.
If the part $p$ is used in view $v$ of image $i$, we assume
that the part location is normally distributed around the root location plus the shift vector, \ie $\bm{\mu}_{i,p} \sim \mathcal{N}(\bm{d}_{v,p} + \bm{a}_i, \sigma^2_{v,p} \bm{E})$
with $\bm{E}$ denoting the identity matrix.
If the part is not used, there is no
prior information about the location and we assume it to be uniformly
distributed over all possible image locations in $\bm{I}_i$. Hence, the distribution is given by
\begin{align}
&p\left(\bm{\mu}_{i,p} \,|\, \bm{b}, \bm{d}, \bm{s}, \bm{a}\right) =\\
\notag
&\prod\limits_{v=1}^V \mathcal{N}\left( \bm{\mu}_{i,p} \,|\, \bm{a}_{i}+\bm{d}_{v,p}, \sigma^2_{v,p} \bm{E}\right)^{t_{i,v,p}} \cdot \left(\frac{1}{\left|\bm{I}_{i}\right|}\right)^{1-t_{i,v,p}},
\end{align}
where $t_{i,v,p}=s_{i,v} b_{v,p} h_{i,p} \in\left\{ 0,1\right\} $
indicates whether part $p$ is used and visible in view $v$ which is itself active in image $i$.
The prior distribution for the part selection $\bm{b}$ only
captures the constraint that $M$ parts need to be selected, \ie $\forall v: M = \sum_{p=1}^P b_{v,p}$.
The prior for the view selection $\bm{s}$ incorporates our assumption that only a single view
is active in training image $i$, \ie $\forall i: 1 = \sum_{v=1}^V s_{i,v}$. In general,
we denote the feasible set of variables as $\mathcal{M}$.
Exploiting this and applying $\log$ simplifies Eq.~\eqref{eq:optimization-2}
further:
\begin{align}
\notag
&\argmin_{\bm{\Gamma} \in \mathcal{M}} -\sum_{i=1}^{N}\sum_{p=1}^{P} \sum_{v=1}^V t_{i,v,p}\log
\mathcal{N}\left( \bm{\mu}_{i,p} \,|\, \bm{a}_{i}+\bm{d}_{v,p}, \sigma^2_{v,p}\bm{E}\right)
\end{align}
In addition, we assume the variance
$\sigma_{v,p}^2$ to be constant for all parts of all views. Hence, the final formulation of the optimization problem
becomes
\begin{align}
& \argmin_{\bm{\Gamma} \in \mathcal{M}} \sum_{i=1}^{N}\sum_{p=1}^{P}\sum_{v=1}^{V}s_{i,v}b_{v,p}h_{i,p}\left\Vert \bm{\mu}_{i,p}-\bm{a}_{i}-\bm{d}_{v,p}\right\Vert ^{2}\label{eq:opt_problem}
\end{align}
\myparagraph{Optimization}
Eq.~\eqref{eq:opt_problem} is solved by alternately optimizing each
of the model variables $\bm{b}$ and $\bm{d}$, as well as the latent variables $\bm{a}$ and $\bm{s}$,
independently, similar to the standard EM algorithm. For each of the variables $\bm{b}$ and $\bm{s}$, we can calculate
the optimal value by sorting error terms. For example, $b_{v,p}$ is
calculated by analyzing
\vspace{-2pt}
\begin{equation}
\hspace{-8pt}\argmin_{\bm{b} \in \Gamma_{\bm{b}}} \sum_{p=1}^{P}\sum_{v=1}^{V} b_{v,p} \underbrace{\Bigl(\sum_{i=1}^{N} s_{i,v} h_{i,p} \left\Vert \bm{\mu}_{i,p}-\bm{a}_{i}-\bm{d}_{v,p}\right\Vert ^{2} \Bigr)}_{E(v,p)}
\label{eq:optimization_b}
\end{equation}
\vspace{-2pt}
This optimization can be solved intuitively. First, each view is considered independently, as we select a fixed number of parts
for each view without considering the others. For each part proposal, we calculate
$E\left(v,p\right)$. This term describes how well the part proposal $p$
fits to the view $v$. If its value is small, then the part proposal fits well
to the view and should be selected. We now calculate $E\left(v,p\right)$
for all parts of view $v$ and select the $M$ parts with the smallest
value. In a similar manner, the view selection $\bm{s}$ can be determined.
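The part selection step of Eq.~(5) can be sketched as follows. This is a pure-Python toy version with an assumed data layout (`mu[i][p]` holds proposal locations, `s[i][v]` the view assignment, `h[i][p]` the visibility, `d[(v, p)]` the shift vectors), not our actual implementation:

```python
def select_parts(mu, a, d, s, h, V, M):
    """Part selection of Eq. (5): accumulate the error E(v, p) of every
    proposal p under every view v, then keep the M proposals with the
    smallest error per view. Returns b[v] = set of selected proposals."""
    N, P = len(mu), len(h[0])
    b = []
    for v in range(V):
        E = []
        for p in range(P):
            e = 0.0
            for i in range(N):
                # Only images where view v is active and part p is visible
                # contribute to E(v, p).
                if s[i][v] and h[i][p]:
                    dx = mu[i][p][0] - a[i][0] - d[(v, p)][0]
                    dy = mu[i][p][1] - a[i][1] - d[(v, p)][1]
                    e += dx * dx + dy * dy
            E.append(e)
        order = sorted(range(P), key=lambda q: E[q])
        b.append(set(order[:M]))     # M smallest errors -> selected parts
    return b
```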
The root points $\bm{a}$ are obtained for fixed $\bm{b}, \bm{s}$, and $\bm{d}$ by
\begin{equation}
\hat{\bm{a}}_{i} = \sum_{v,p} t_{i,v,p} \left( \bm{\mu}_{i,p}-\bm{d}_{v,p}\right) / \bigl( \sum\limits_{v',p'} t_{i,v',p'} \bigr).
\label{eq:optimization_a}
\end{equation}
Similarly, we obtain the shift vectors $\hat{\bm{d}}_{v,p}$:
\vspace{-2pt}
\begin{equation}
\hat{\bm{d}}_{v,p}= \sum_{i=1}^N t_{i,v,p}\cdot \left( \bm{\mu}_{i,p}-\bm{a}_{i}\right) / \bigl( \sum\limits_{i'=1}^N t_{i',v,p} \bigr).
\end{equation}
\vspace{-2pt}
The formulas are intuitive as, for example, the shift vectors $\bm{d}_{v,p}$
are assigned the mean offset between root point $\bm{a}_{i}$ and predicted
part location $\bm{\mu}_{i,p}$. The mean, however, is only calculated
for images in which part $p$ is used.
This kind of optimization is comparable to the EM-algorithm and thus shares the same challenges.
Especially the initialization of the variables is crucial.
We initialize $\bm{a}$ to be the center of the image and $\bm{s}$ as well as $\bm{b}$ randomly to an
assignment of views and selection of parts for each view, respectively.
The initialization of $\bm{d}$ is avoided by calculating it first.
The value of $\bm{b}$ is used to determine convergence.
This optimization is repeated with different initializations and the result with the best objective value is used.
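The two closed-form updates of the continuous variables (Eq.~(6) for the roots and the subsequent shift-vector formula) can be sketched jointly; the data layout (`t[i][v][p]` as nested lists, shifts keyed by `(v, p)`, fallback values for empty sums) is assumed for illustration only:

```python
def update_roots_and_shifts(mu, t, d, V, P):
    """One coordinate-descent step for the continuous variables of Eq. (4):
    first the latent roots a_i (Eq. 6), then the shift vectors d_{v,p},
    each a mean over the terms with t_{i,v,p} = 1."""
    N = len(mu)
    # Roots: a_i = mean over active (v, p) of (mu_ip - d_vp).
    a = []
    for i in range(N):
        sx = sy = n = 0
        for v in range(V):
            for p in range(P):
                if t[i][v][p]:
                    sx += mu[i][p][0] - d[(v, p)][0]
                    sy += mu[i][p][1] - d[(v, p)][1]
                    n += 1
        # Image center as fallback if no part is active (assumption).
        a.append((sx / n, sy / n) if n else (0.5, 0.5))
    # Shifts: d_vp = mean over images with t_{i,v,p} = 1 of (mu_ip - a_i).
    new_d = {}
    for v in range(V):
        for p in range(P):
            sx = sy = n = 0
            for i in range(N):
                if t[i][v][p]:
                    sx += mu[i][p][0] - a[i][0]
                    sy += mu[i][p][1] - a[i][1]
                    n += 1
            new_d[(v, p)] = (sx / n, sy / n) if n else (0.0, 0.0)
    return a, new_d
```

Alternating these updates with the discrete selections of $\bm{b}$ and $\bm{s}$ yields the EM-like procedure described above.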
\myparagraph{Inference}
The inference step for an unseen test image is similar to the calculations during training.
The parameters $\bm{s}$ and $\bm{a}$ are iteratively estimated
by solving Eq.~(\ref{eq:optimization_b}) and (\ref{eq:optimization_a})
for fixed learned model parameters $\bm{b}$ and $\bm{d}$. The visibility is again
provided directly by the neural activation maps.
\section{Experiments\label{sec:experiments}}
\vspace{-5pt}
The experiments cover three main aspects and applications of our approach.
First, we present a data augmentation technique based on the part models of our approach for fine-tuning, which outperforms fine-tuning on bounding boxes.
Second, we apply our approach to fine-grained classification, a task in which most current approaches rely on ground-truth part annotations~\cite{branson14cub75acc,zhang2014partRCNN,Simon14:PDD}.
Finally, we show how to use the same approach for generic image classification, too, and present the benefits in this area. Code for our method will be made available.
\subsection{Experimental setup}
\vspace{-5pt}
\myparagraph{Datasets}
We use five different datasets in the experiments.
For fine-grained classification, we evaluate our approach on CUB200-2011 \cite{wahCUB_200_2011} (200 classes, 11788 images), NA birds~\cite{horn15nabirds} (555 classes, 48562 images), Stanford dogs~\cite{khosla11stanfordDogs} (120 classes, 20580 images), Oxford flowers 102~\cite{nilsback08-afc} (102 classes, 8189 images), and Oxford-IIIT Pets~\cite{parkhi12-cd} (37 classes, 7349 images).
We use the provided split into training and test and follow the evaluation protocol of the corresponding papers.
Hence, we report the overall accuracy on CUB200-2011 and the mean class-wise accuracy on all other datasets.
For the task of generic object recognition, we evaluate on Caltech 256 \cite{cal256scenesurl}, which contains 30607 images of a diverse set of 256 common objects.
We follow the evaluation protocol of \cite{Simonyan14veryDeep} and randomly select 60 training images and use the rest for testing.
\myparagraph{CNNs and parameters}
Two different CNN architectures were used in our experiments: the widely used architecture of Krizhevsky~\etal~\cite{krizhevsky2012imagenet} (AlexNet) and the more accurate one of Simonyan~\etal~\cite{Simonyan14veryDeep} (VGG19).
In case of NA birds, we use GoogLeNet~\cite{szegedy2014going}.
For details about the architecture, we kindly refer the reader to the corresponding papers.
It is important to note that our approach can be used with any CNN.
Features were calculated using the \emph{relu6}, \emph{relu7} and \emph{pool5/7x7\_s1} layer, respectively.
For the localization of parts, the \emph{pool5} layer was used.
This layer consists of 256 and 512 channels resulting in 256 and 512 part proposals, respectively.
In case of the CUB200-2011, NA birds, Stanford dogs, pets and flowers datasets, fine-tuning with our proposed data augmentation technique is used.
We use two-step fine-tuning \cite{branson14cub75acc} starting with a learning rate of $0.001$ and decrease it to $0.0001$ when there is no change in the loss anymore.
In case of Stanford dogs, the evaluation with CNNs pre-trained on ILSVRC 2012 images is biased as the complete dataset is a subset of the ILSVRC 2012 training image set.
Hence, we removed the testing images of Stanford dogs from the training set of ILSVRC 2012 and learned a CNN from scratch on this modified dataset.
The trained model is available on our website for easy comparison with this work.
If not mentioned otherwise, the learned part models use 5 views and 10 parts per view.
A model is learned for each class separately.
The part model learning is repeated 5 times and the model with the best objective value is taken.
We count in how many images each part is used and select the 10 most frequently used parts for classification.
\myparagraph{Classification framework}
We use the part-based classification approach presented by Simon~\etal~\cite{Simon14:PDD}.
Given the predicted localization of all selected parts, we crop square boxes centered at each part and calculate features for all of them.
The size of these boxes is given by $\sqrt{\lambda \cdot W \cdot H}$, $\lambda \in \left\{\frac{1}{5},\frac{1}{16}\right\}$, where $W$ and $H$ are the width and height of the uncropped image, respectively.
If a part is not visible, the features calculated on a mean image are used instead.
This kind of imputation has comparable performance to zero imputation, but yields a slight performance gain in some cases.
In case of CUB200-2011, we also estimate a bounding box for each image.
Selective Search~\cite{uijlings13selSeach} is applied to each image to generate bounding box proposals.
Each proposal is classified by the CNN and the proposal with the highest classification confidence is used as estimated bounding box.
The features of each part, the uncropped image and the estimated bounding box are stacked and classified using a linear SVM.
In case of CUB200-2011, flipped training images were used as well.
Hyperparameters were optimized using cross-validation on the training data of CUB200-2011 and used for the other datasets as well.
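The part-box geometry described above (a square of area $\lambda \cdot W \cdot H$ centered at the predicted part) can be sketched as follows; clamping to the image border is our assumption, as the handling of boxes exceeding the image is not specified:

```python
import math

def part_box(center, W, H, lam):
    """Square crop of side sqrt(lam * W * H) centered at a predicted part.

    center: normalized (x, y) in [0, 1]^2, W/H: uncropped image size,
    lam: area fraction (e.g. 1/5 or 1/16). Returns (x0, y0, w, h) in pixels,
    clamped so the box stays inside the image (assumption).
    """
    side = math.sqrt(lam * W * H)
    cx, cy = center[0] * W, center[1] * H
    x0 = min(max(cx - side / 2, 0), W - side)
    y0 = min(max(cy - side / 2, 0), H - side)
    return (round(x0), round(y0), round(side), round(side))
```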
\subsection{Data augmentation using part proposals}
\vspace{-5pt}
Fine-tuning is the adaption of a pre-learned CNN to a domain specific dataset.
It significantly boosts the performance in many tasks~\cite{azizpour14CNNanalysis}.
Since the domain specific datasets are often small and thus the training of a CNN is prone to overfitting, the training set is artificially enlarged by using ``data augmentation''.
A common technique used for example by \cite{krizhevsky2012imagenet,Simonyan14veryDeep} is random cropping of a large fixed sized image patch.
This is especially effective if the training images are cropped to the object of interest.
If the images are not cropped and no ground-truth bounding box is available, uncropped images can be used instead.
However, fine-tuning is less effective as shown in Tab.~\ref{tab:augmentation_performance}.
Since ground-truth bounding box annotations are often not available or expensive to obtain, we propose to fine-tune on object proposals filtered by a novel selection scheme instead.
An overview of our approach is shown in Fig.~\ref{fig:patch_filter}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{other_img/patch_filter.pdf}
\caption{Overview of our approach to filter object proposals for fine-tuning of CNNs.
Best viewed in color.
\label{fig:patch_filter}}
\end{figure}
First, we select for each training image the five parts of the corresponding view that fit the model best.
Second, numerous object proposals are generated using Selective Search~\cite{uijlings13selSeach}.
These proposals are very noisy, \ie many only contain background and not the object of interest.
We count how many of the predicted parts are inside of each proposal and select only proposals containing at least three parts.
The remaining patches, $\approx 48$ on average in case of CUB200-2011, are high quality image regions containing the object of interest.
Finally, fine-tuning is performed using the filtered proposals of all training images.
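The filtering step above is a simple counting rule and can be sketched as follows (pure Python; the box and point representations are illustrative assumptions):

```python
def filter_proposals(proposals, parts, min_parts=3):
    """Keep only object proposals that contain enough predicted parts.

    proposals: list of boxes (x0, y0, x1, y1) from Selective Search,
    parts: predicted part locations (x, y) of the best-fitting view,
    min_parts: minimum number of parts a proposal must contain (3 in
    the text). Returns the filtered proposals used for fine-tuning.
    """
    def inside(pt, box):
        x, y = pt
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1

    return [b for b in proposals
            if sum(inside(pt, b) for pt in parts) >= min_parts]
```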
\begin{table}
\centering
\scalebox{0.85}{
\begin{tabular}{llc}
\toprule
Train. Anno. &Method & Accuracy\tabularnewline
\midrule
Bbox&Fine-tuning on cropped images& 67.24\%\tabularnewline
\midrule
None&No fine-tuning & 63.77\%\tabularnewline
None&Fine-tuning on uncropped images& 66.10\%\tabularnewline
\textbf{None}&Fine-tuning on filtered part proposals& \textbf{67.97\%}\tabularnewline
\bottomrule
\end{tabular}
}
\caption{Influence of the augmentation technique used for fine-tuning in case of AlexNet on CUB200-2011.
Classification accuracies were obtained by using 8 parts as described in Sect.~\ref{sec:experiments_finegrained}.
\label{tab:augmentation_performance}}
\end{table}
The result of this approach is shown in Tab.~\ref{tab:augmentation_performance}.
Fine-tuning on these patches not only provides a gain even compared to fine-tuning on cropped images, it also eliminates the need for ground-truth bounding box annotations.
\subsection{Fine-grained recognition without annotations\label{sec:experiments_finegrained}}
\vspace{-5pt}
Most approaches in the area of fine-grained recognition rely on additional annotation like ground-truth part locations or bounding boxes.
Recent works distinguish between several settings based on the amount of annotations required.
The approaches either use part annotations, only bounding box annotations, or no annotation at all.
In addition, the required annotation in training is distinguished from the annotation required at test time.
Our approach only uses the class labels of the training images without additional annotation.
\myparagraph{CUB200-2011}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{cclc}
\toprule
Train. &Test & Method & Accuracy\tabularnewline
Anno. &Anno. & & \tabularnewline
\midrule
Parts&Bbox&Bbox CNN features& 56.00\%\tabularnewline
Parts&Bbox&Berg \etal~\cite{berg2013partclass}& 56.78\%\tabularnewline
Parts&Bbox&Goering \etal~\cite{Goering14:NPT}& 57.84\%\tabularnewline
Parts&Bbox&Chai \etal~\cite{chai2013symbiotic}& 59.40\%\tabularnewline
Parts&Bbox&Simon~\etal~\cite{Simon14:PDD} & 62.53\%\tabularnewline
Parts&Bbox&Donahue \etal~\cite{donahue2013decaf}& 64.96\%\tabularnewline
\midrule
Parts& None&Simon~\etal~\cite{Simon14:PDD} & 60.55\%\tabularnewline
Parts& None&Zhang~\etal~\cite{zhang2014partRCNN} & 73.50\%\tabularnewline
Parts& None&Branson~\etal~\cite{branson14cub75acc} & 75.70\%\tabularnewline
\midrule
Bbox& None&Simon~\etal~\cite{Simon14:PDD} & 53.75\%\tabularnewline
\midrule
None& None&Xiao~\etal~\cite{xiao14attention} (AlexNet)& 69.70\%\tabularnewline
\vspace{7pt}None& None&Xiao~\etal~\cite{xiao14attention} (VGG19)& 77.90\%\tabularnewline
None& None&No parts (AlexNet)& 52.20\%\tabularnewline
None& None&Ours, rand., Sect.~\ref{sec:random_selection} (AlexNet)& $60.30\pm 0.74\%$\tabularnewline
None& None&Ours, const., Sect.~\ref{sec:part_constellation} (AlexNet)& 68.50\%\tabularnewline
None& None&No parts (VGG19)& 71.94\%\tabularnewline
None& None&Ours, rand., Sect.~\ref{sec:random_selection} (VGG19)& $79.44\pm 0.56\%$\tabularnewline
None& None&Ours, const., Sect.~\ref{sec:part_constellation} (VGG19)& \textbf{81.01\%}\tabularnewline
\bottomrule
\end{tabular}
}
\caption{Species categorization performance on CUB200-2011.
\label{tab:cub200_results}}
\end{table}
The results of fine-grained recognition on CUB200-2011 are shown in Tab.~\ref{tab:cub200_results}.
We present three different results for every CNN architecture.
``No parts'' corresponds to global image features only.
``Ours, rand.'' and ``Ours, const.'' are the approaches presented in Sect.~\ref{sec:random_selection} and \ref{sec:part_constellation}.
As can be seen in the table, our approach improves over the work of Xiao~\etal~\cite{xiao14attention} by 3.1\%, a decrease in error of more than 16\%.
It is important to note that their work requires a pre-trained classifier for birds in order to select useful patches for fine-tuning.
In addition, the authors confirmed that they used a much larger bird subset of ImageNet for pre-training of their CNN.
In contrast, our work is easier to adapt to other datasets as we only require a generic pre-trained CNN and no domain specific outside training data.
The gap between our approach and the third best result in this setting, by Simon~\etal~\cite{Simon14:PDD}, is even larger, at more than 27\%.
The table also shows results for the use of no parts and random part selection.
As can be seen, even random part selection improves the accuracy by 8\% on average compared to the use of no parts.
The presented part selection scheme boosts the performance even further to 68.5\% using AlexNet and 81.01\% using VGG19.
\myparagraph{NA birds}
\begin{table}
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{cclc}
\toprule
Train. &Test & Method & Accuracy\tabularnewline
Anno. &Anno. & & \tabularnewline
\midrule
Parts&Parts&Horn~\etal~\cite{horn15nabirds}& 75.0\%\tabularnewline
\midrule
None& None&No parts (GoogLeNet)& 63.9\%\tabularnewline
None& None&Ours, const., Sect.~\ref{sec:part_constellation} (GoogLeNet)& \textbf{76.3\%}\tabularnewline
\bottomrule
\end{tabular}
}
\caption{Species categorization performance on NA Birds.
\label{tab:nabirds_results}}
\end{table}
The results of our approach on the relatively new NA birds dataset are shown in Tab.~\ref{tab:nabirds_results}.
The accuracy without using any parts is only 63.9\%.
Similar to the CUB200-2011 dataset, there is a clear advantage of using parts selected by our approach with an accuracy of 76.3\%.
Interestingly, the accuracy is very close to the one on CUB200, although NA birds has more than 2.5 times as many classes.
We outperform the baseline provided by the authors using the approach of \cite{branson14cub75acc}, even though we are not using any kind of part annotation.
\myparagraph{Stanford dogs}
\begin{table}
\centering
\scalebox{0.885}{%
\begin{tabular}{lc}
\toprule
Method & Accuracy\tabularnewline
\midrule
Chai~\etal~\cite{chai2013symbiotic}& 45.60\%\tabularnewline
Gavves~\etal~\cite{gavves2013alignment}& 50.10\%\tabularnewline
Chen~\etal~\cite{chen15selPooling}& 52.00\%\tabularnewline
\vspace{7pt}GoogLeNet ft~\cite{sermanet2014attention}& 75.00\%\tabularnewline
No parts (AlexNet)& 55.90\%\tabularnewline
Ours, rand., Sect.~\ref{sec:random_selection} (AlexNet)& $63.29\pm 0.97\%$\tabularnewline
Ours, const., Sect.~\ref{sec:part_constellation} (AlexNet)& 68.61\%\tabularnewline
\bottomrule
\end{tabular}
}
\caption{Species categorization performance on Stanford dogs.
\label{tab:dogs_results}}
\end{table}
The accuracy on Stanford dogs is given in Tab.~\ref{tab:dogs_results}.
To the best of our knowledge, there is only one work showing results for a CNN trained from scratch excluding the testing images of Stanford dogs.
Sermanet~\etal~\cite{sermanet2014attention} fine-tuned the very deep GoogLeNet architecture to obtain 75\% accuracy.
In our experiments, we used the much weaker architecture of Krizhevsky~\etal and still reached 68.61\%.
Compared to the other non-deep architectures, this means an improvement of more than 16\%.
\myparagraph{Oxford pets and flowers}
\begin{table}
\centering
\scalebox{0.85}{
\begin{tabular}{lc}
\toprule
Method & Accuracy\tabularnewline
\midrule
Angelova~\etal~\cite{angelova13flower}& 80.66\%\tabularnewline
Murray~\etal~\cite{murray14catsFlowers}& 84.60\%\tabularnewline
Razavian~\etal~\cite{razavian14cnnFeatures}& 86.80\%\tabularnewline
\vspace{7pt}Azizpour~\etal~\cite{azizpour14CNNanalysis}& 91.30\%\tabularnewline
No parts (AlexNet)& 90.35\%\tabularnewline
Ours, rand., Sect.~\ref{sec:random_selection} (AlexNet)& $90.32\pm 0.18\%$\tabularnewline
Ours, const., Sect.~\ref{sec:part_constellation} (AlexNet)& 91.74\%\tabularnewline
No parts (VGG19)& 93.07\%\tabularnewline
Ours, rand., Sect.~\ref{sec:random_selection} (VGG19)& $94.20\pm 0.23\%$\tabularnewline
Ours, const., Sect.~\ref{sec:part_constellation} (VGG19)& \textbf{95.34\%}\tabularnewline
\bottomrule
\end{tabular}
}
\caption{Classification performance on Oxford 102 flowers.
\label{tab:flowers_results}}
\end{table}
\begin{table}
\centering
\scalebox{0.85}{
\begin{tabular}{lc}
\toprule
Method & Accuracy\tabularnewline
\midrule
Bo~\etal~\cite{bo13cats}.& 53.40\%\tabularnewline
Angelova~\etal~\cite{angelova13flower}.& 54.30\%\tabularnewline
Murray~\etal~\cite{murray14catsFlowers}.& 56.80\%\tabularnewline
\vspace{7pt}Azizpour~\etal~\cite{azizpour14CNNanalysis}.& 88.10\%\tabularnewline
No parts (AlexNet)&78.55\%\tabularnewline
Ours, rand., Sect.~\ref{sec:random_selection} (AlexNet)& $82.70\pm 1.64\%$\tabularnewline
Ours, const., Sect.~\ref{sec:part_constellation} (AlexNet)& 85.20\%\tabularnewline
No parts (VGG19)& 88.76\%\tabularnewline
Ours, rand., Sect.~\ref{sec:random_selection} (VGG19)& $90.42\pm 0.94\%$\tabularnewline
Ours, const., Sect.~\ref{sec:part_constellation} (VGG19)& \textbf{91.60\%}\tabularnewline
\bottomrule
\end{tabular}
}
\caption{Species categorization performance on Oxford-IIIT Pets.
\label{tab:cats_dogs_results}}
\end{table}
The results for the Oxford flowers and pets datasets are shown in Tab.~\ref{tab:flowers_results} and \ref{tab:cats_dogs_results}.
Our approach consistently outperforms previous work by a large margin on both datasets.
Similar to the other datasets, randomly selected parts already improve the accuracy by up to 4\%.
Our approach significantly improves this even further and achieves 95.34\% and 91.60\%, respectively.
\myparagraph{Influence of the number of parts}
\begin{table}
\small
\centering
\resizebox{0.9\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[
xlabel=Number of parts used,
ylabel style={align=center},ylabel=Accuracy in \%\\on CUB200-2011,
ymin = 69,
ymax = 82,
legend pos=south east,
width=\linewidth,
height=0.6\linewidth]
\addplot[color=red,mark=x] coordinates {
(0, 70.8)
(1, 76.9)
(2, 78.6)
(4, 79.0)
(5, 79.5)
(6, 79.2)
(9, 79.4)
(10, 79.6)
(16, 79.6)
(25, 79.8)
(36, 79.8)
(49, 79.8)
(64, 79.4)
(81, 79.5)
(100, 79.7)
(256, 79.168)
};
\addplot[color=blue,mark=+] coordinates {
(0, 70.8)
(1, 73.0296)
(2, 72.7764)
(3, 75.0547)
(4, 75.1870)
(5, 74.6116)
(6, 75.1237)
(9, 75.6587)
(10, 76.5907)
(22, 77.5285)
(36, 77.4019)
(60, 78.4950)
(80, 78.5899)
(100, 78.5468)
(256, 79.1681 )
};
\legend{{Ours, constellation},{Ours, random parts}}
\end{axis}
\end{tikzpicture}
}
\caption{Influence of the number of parts on the accuracy on CUB200-2011.
One patch was extracted for each part proposal.
\label{tab:influence_parts}}
\end{table}
Tab.~\ref{tab:influence_parts} provides insight into the influence of the number of parts used in classification.
We compare random part selection to the selection based on the part constellation model.
In contrast to the previous experiments, one patch is extracted per part using $\lambda=\frac{1}{10}$.
While random parts increase the accuracy for any number of parts, the presented scheme clearly selects more relevant parts and improves the accuracy considerably further.
\subsection{From fine-grained to generic classification}
\begin{table}
\centering
\scalebox{0.85}{
\begin{tabular}{lc}
\toprule
Method & Accuracy\tabularnewline
\midrule
Zeiler~\etal~\cite{zeiler2013visualizing}& 74.20\%\tabularnewline
Chatfield~\etal~\cite{chatfield14return}& 78.82\%\tabularnewline
\vspace{7pt}Simonyan~\etal~\cite{Simonyan14veryDeep} + VGG19& $85.10\%$\tabularnewline
No parts (AlexNet)& 71.44\%\tabularnewline
Ours, rand., Sect.~\ref{sec:random_selection} (AlexNet)& $72.39\%$\tabularnewline
Ours, const., Sect.~\ref{sec:part_constellation} (AlexNet)& $72.57\%$\tabularnewline
No parts (VGG19)& $82.44\%$\tabularnewline%
Ours, const., Sect.~\ref{sec:part_constellation} (VGG19)& $84.10\%$\tabularnewline
\bottomrule
\end{tabular}
}
\caption{Accuracy on the Caltech 256 dataset with 60 training images per category.
\label{tab:cal256_res}}
\end{table}
Almost all current approaches in fine-grained recognition are specialized algorithms and it is hardly possible to apply them to generic classification tasks.
The main reason is the common assumption in fine-grained recognition that there are shared semantic parts for all objects.
Does that mean that all the rich knowledge in the area of fine-grained recognition will never be useful for other areas?
Are fine-grained and generic classification so different?
In our opinion, the answer is a clear no and the proposed approach is a good example for that.
There are two main challenges for applying fine-grained classification approaches to other tasks.
First, the semantic part detectors need to be replaced by more abstract interest point detectors.
Second, the selection or training of useful interest point detectors needs to consider that each object class has its own unique shape and set of semantic parts.
Our approach can be applied to generic classification tasks in a natural way.
The first challenge is already solved by using the part detectors of a CNN trained to distinguish a huge number of classes.
Because of this broad training, part proposals can be seen as generic interest point detectors, each focused on a particular pattern.
In contrast to semantic parts, they are not necessarily only recognizing a specific part of a specific object.
Instead, they capture interesting points of many different kinds of objects.
The second challenge is tackled by building class-wise part models and selecting part proposals that are shared among most classes.
However, even a random selection of part detectors already increases the classification accuracy.
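To make this concrete, the idea of reusing CNN channels as generic interest point detectors can be sketched as follows. This is a minimal numpy illustration of the per-channel maximum-activation rule; the function name, shapes, and the simple argmax are our own simplifications, not the authors' exact implementation:

```python
import numpy as np

def part_proposals(feature_maps):
    """Treat each channel of a conv layer as a part detector: the position
    of a channel's maximum activation is that channel's part proposal.

    feature_maps: activations of shape (C, H, W) for one image.
    Returns per-channel (row, col) locations and activation scores.
    """
    C, H, W = feature_maps.shape
    flat = feature_maps.reshape(C, -1)
    idx = flat.argmax(axis=1)                 # strongest response per channel
    rows, cols = np.unravel_index(idx, (H, W))
    return np.stack([rows, cols], axis=1), flat.max(axis=1)
```

A part selection scheme would then keep only those channels whose proposals fire consistently across the training images, rather than using all of them.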
\myparagraph{Caltech 256}
The results of our approach on Caltech 256 are shown in Tab.~\ref{tab:cal256_res}.
The proposed method improves the baseline of global features without oversampling by $1\%$ for AlexNet and by $1.6\%$ for VGG19.
While Simonyan~\etal achieve slightly higher performance, their approach is also much more expensive due to the dense evaluation of the whole CNN over all possible crops at three different scales.
Their best result of $86.2\%$ is achieved with a fusion of two CNN models, which we do not use; it is therefore not directly comparable.
The results clearly show that replacing semantic part detectors with more generic detectors can be enough to apply fine-grained classification approaches in other areas.
Many current approaches in generic image classification rely on ``blind'' parts.
For example, spatial pyramids or other oversampling methods are equivalent to part detectors that always detect something at a fixed position in the image.
Replacing these ``blind'' detections by more sophisticated ones
in combination with class-wise part models is a natural improvement.
\section{Conclusions\label{sec:conclusions}}
\vspace{-5pt}
This paper presents an unsupervised approach for the selection of generic parts for fine-grained and generic image classification.
Given a CNN pre-trained for classification, we exploit the inherent part detectors learned by the network for generic part detection.
A part constellation model is estimated by analyzing the predicted part locations for all training images.
The resulting model contains a selection of useful part proposals as well as their spatial relationship in different views of the object of interest.
We use this part model for part-based image classification in fine-grained and generic object recognition.
In contrast to many recent fine-grained works, our approach is not limited to this area: it surpasses the state of the art in fine-grained recognition and is also beneficial for other tasks such as data augmentation and generic object classification.
This is supported by, among other results, a recognition rate of 81.0\% on CUB200-2011 without additional annotation and $84.1\%$ accuracy on Caltech 256.
In our future work, we plan to use the deep neural activation maps directly as probability maps while maintaining the speed of our current approach.
The estimation of object scale would allow for applying our approach to datasets in which objects only cover a small part of the image.
Our current limitation is the assumption that a single channel corresponds to an object part.
A combination of channels can be considered to improve localization accuracy.
In addition, we plan to learn the constellation models and the subsequent classification jointly in a common framework.
\section{Changelog}
\begin{itemize}
\item V3: Added results for NA birds
\item V2: Updated to camera ready version
\end{itemize}
{\small
\bibliographystyle{ieee}
\section{Introduction}\label{sec:introduction}
An increasingly common task in computational musicology --
specifically, music performance analysis -- consists in
annotating different performances (recordings) of classical
music pieces with structural information (e.g., beat positions)
that defines a temporal grid, in order to then carry out
comparative performance analyses, which require time alignments
between the performances.
As manually annotating many recordings is a very time-consuming
and tedious task, an obvious shortcut would be to manually annotate
only one performance, and then use automatic audio-to-audio
matching algorithms to align additional recordings to it, and
thus also be able to automatically transfer structural annotations.
The work presented here is part of a larger project on the analysis of orchestral music performance. In this musicological context, it is crucial to understand the level of precision one can expect of the empirical data collected. The present study attempts to answer two specific questions: (1) what is the precision / consistency we can expect from human time annotations in such complex music? and (2) can automatic alignment be precise enough to be used for transferring annotations between recordings, instead of tediously annotating each recording manually?
We will approach this by collecting manual annotations from expert musicians, on a small set of carefully selected pieces and recordings (Section \ref{sec:annotation}), analyzing these with statistical methods (Section \ref{sec:eval:annotations}) -- which will also supply us with a ground truth for the subsequent step --, then performing systematic experiments with different audio features and parameters for automatic audio-to-audio alignment (Section \ref{sec:alignment}), quantifying the degree of alignment precision that can be achieved, and relating this to the results from the previous annotation study (Section \ref{sec:eval:alignments}).
\section{Related Work}
\cite{weiss_2016_measure_annotation} presented a case study of opera recordings that were annotated by five annotators, at the bar level.
The authors used the mean values over the annotators as ground-truth values for the respective marker positions and the variance to identify sections possibly problematic to annotate, and offered a qualitative analysis of the musical material and sources for error and disagreement between annotators.
\cite{grachten_alignment_structure} deals with the alignment of recordings with possibly different structure.
Their contribution is relevant for our endeavor in so far as they evaluated different audio features and parameters ranges for an audio-to-audio alignment task on a data set of, among others, symphonies by Beethoven, which matches our data set very well.
\cite{kirchhoff_2011_evaluation_features_alignment} evaluated audio features for the audio-to-audio alignment task using several different data sets.
While many studies of alignment features do not use real human performances but artificial data, we only use ground-truth produced from human annotations (by averaging over multiple annotations per recording) of existing recordings for the evaluation of the alignment task.
Furthermore, the results of our analysis of manual annotations (Step 1) will inform our interpretation of the automatic alignment experiments in Step 2 (by relating the observed alignment errors to the variability within the human annotations), leading to some insights useful for quantitative musicological studies.
\section{Annotation and Ground-truth}
\label{sec:annotation}
\subsection{Annotation vs.~Tapping}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{Boulez_1969_std_along_piece.png}
\caption{
Standard deviations of annotations along a performance, Webern Op21-2, Boulez. Blue: computed from three markers per score event. Magenta: computed from pooled differences (details see text). Orange: median standard deviation (SD); green: median of SD from pooled differences (see boxplots right). Right: boxplots for SD (summary of blue curve) and for SD of pooled values (summary of magenta curve); the central line marks the median.
} \label{fig:std_annotat}
\end{figure*}
Our primary goal is to map the musical time grid as defined by the score, to one or more performances given as audio recordings.
Due to expressive performance, these mapped time points may differ considerably between recordings.
Following \cite{dixon_2005_match}, we will call the occurrence of one or more (simultaneous) score notes a \emph{score event}.
In our case, we were interested in annotating regularly spaced score events, for instance, on the quarter note beats.
Different methods can be employed for marking score events in a recording.
One possibility is to tap along a recording on a keyboard (or other input device) and have the computer store the time-stamps.
We will refer to a sequence of time-stamps produced this way as a \emph{tapping} in the following.
Producing markers this way has been termed \enquote{reverse conducting} by the Mazurka project\footnote{\url{www.mazurka.org.uk/info/revcond/example/}}.
This is to be distinguished from what we will call an \emph{annotation} throughout this paper.
In that case, markers are first placed by tapping along, or even by visually inspecting the audio waveform, and then
iteratively corrected on (repeated) critical listening.
In general, we assume corrected annotations to have smaller deviations from the \enquote{true} time-stamps than uncorrected tappings, especially around changes of tempo.
\subsection{Pieces, Annotators, and Annotation Process}
The annotation work for this study was distributed over a pool of four annotators.
Three are graduates of musicology and one is a student of the violin.
The pieces considered are: Ludwig van Beethoven's Symphony No. 9, 1st movement; Anton Bruckner's Symphony No. 9, 3rd movement; and Anton Webern's Symphony Op. 21, 2nd movement (see Table \ref{table:pieces} for details).
The first two are symphonic movements, played by full classical/romantic period orchestra.
The third is an atonal piece; its second movement has a \enquote{theme and variations} form and requires a much smaller ensemble (clarinets, horns, harp, string section).
While the first two pieces can be considered to be well known even to average listeners of classical music, the Webern piece was expected to be less familiar to the annotators. It is rhythmically quite complicated, with many changes in tempo and many sections ending in a fermata.
We expected it to be a suitable challenge for the annotators as well as for the automatic alignment procedure.
The quarter beat level was chosen as (musically reasonable) temporal annotation granularity, in all three cases. The annotators were asked to mark
all score events (notes or pauses) at the quarter beat level, using the Sonic Visualiser software \cite{SonicVisualiser},
and then to correct markers such that they coincide with the score events when listening to the playback with a ``click" sound together with the recording of the piece.
They also had to annotate ``silent" beats (i.e. general pauses) or even single or multiple whole silent bars with the given granularity.
It is clear that this may create large deviations between annotators at such points, as the way to choose the marker positions is not always obvious or even meaningfully possible in these situations.
Each recording was annotated by three annotators, giving us a total of 21 complete manual annotations\footnote{Supplemental material to this publication is available online at 10.5281/zenodo.3260499}.
\begin{table}[]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}ccccc@{}}
\toprule
Composer & Piece & Part & Section & \# Events \\ \midrule
Beethoven & Sym. 9 & 1st mov. & complete & 1093 \\
A. Bruckner & Sym. 9 & 3rd mov. & 150 - end & 371 \\
A. Webern & Sym. Op. 21 & 2nd mov. & complete & 198 \\ \bottomrule
\end{tabular}%
}
\caption{Annotated works/parts, and number of events. Granularity in all cases: quarter notes.}
\label{table:pieces}
\end{table}
\begin{table}[]
\small
\centering
{
\begin{tabular}{@{}cccccc@{}}
\toprule
Composer & Conductor & Orch. & Year & Dur. & Med. SD \\ \midrule
Beethoven & Karajan & VPO & '47 & 16:00 & 32 \\
& Karajan & BPO & '62 & 15:28 & 32 \\
& Karajan & BPO & '83 & 15:36 & 27 \\ \midrule
A. Bruckner & Karajan & BPO & '75 & 09:30 & 68 \\
& Abbado & VPO & '96 & 10:40 & 52 \\ \midrule
A. Webern & Boulez & LSO & '69 & 03:08 & 47 \\
& Karajan & BPO & '74 & 03:28 & 63 \\ \bottomrule
\end{tabular}%
}
\caption{Annotated recordings.
VPO = Vienna Philharmonic Orchestra, BPO = Berlin Philharmonic, LSO = London Symphony Orchestra.
Each recording was annotated by three annotators.
Med. SD is the median value of standard deviations of the annotations (in milliseconds, rounded to nearest integer), for details see Sec. \ref{sec:eval:annotations}.
}
\label{table:recordings}
\end{table}
\section{Evaluation of Annotations}
\label{sec:eval:annotations}
For a statistical analysis of this rather small number of human annotations, we need to make some idealizing assumptions.
We assume that there is one clear point in time that can be attributed to each respective score event, i.e. there are \enquote{true} time-stamps $\tau_n$, $n = 1, 2, \ldots$ for the score events we sought to annotate.
If each score event is annotated multiple times, the annotated markers $\theta_n$ will show random variation around their true time-stamps, with a certain variance $\sigma^2_n$.
It seems reasonable to assume the respective markers to be realizations of random variables $\varTheta_n$, each following a normal distribution, i.e. $\varTheta_n \sim \mathcal{N}(\tau_n, \sigma_n^2)$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{score_fig_ipe.pdf}
\caption{
Modeling annotations as random variables.
Musical score and waveform of a performance.
Hypothetical true time-stamps $\tau_n$.
Annotation markers $\theta_n$.
Bottom row: pdfs of random variables $\varTheta_n$, each of mean $\tau_n$.
} \label{fig:theta_Theta}
\end{figure}
Thus, for each event to be annotated we would expect (a large number of) annotations to exhibit a normal distribution around some mean $\tau_n$.
This is schematically illustrated in Figure \ref{fig:theta_Theta}.
However, for estimating the parameters of these distributions, rather large numbers of annotations would be required.
\cite{dannenberg_2009_single_tap_error_distribution} has shown that with some additional assumptions, the distribution can be estimated from as little as two sequences of markers.
We follow \cite{dannenberg_2009_single_tap_error_distribution} in the derivations below.
If the variance $\sigma^2$ of the time stamps is assumed to be constant over time (across the whole piece or part to be annotated), subtracting two sequences $\theta_n^1$, $\theta_n^2$ of markers for the same score events, i.e.
\begin{equation} \label{eq:delta_diff}
\Delta_n = \varTheta_n\topi{1} - \varTheta_n\topi{2},
\end{equation}
yields the variable $\Delta_n \sim \mathcal{N}(0, 2 \sigma_\varTheta^2)$.
Note that if the mean of $\Delta_n$ is not zero, we can force it to be zero by suitably offsetting either $\varTheta_n\topi{1}$ or $\varTheta_n\topi{2}$ by the sample mean $\bar{\Delta}$ -- since both sequences are assumed to mark the same events, a non-zero total mean deviation can be viewed as a systematic offset by one of the annotators.
One could then use the differences $\delta_n = \theta_n^1 - \theta_n^2$ to estimate the variance $\sigma_\varTheta^2$ around the true time-stamps:
\begin{equation} \label{eq:estimate_var}
\hat{\sigma}_\varTheta^2 = \frac{1}{2N}\sum_{n = 1}^N (\theta_n\topi{1} - \theta_n\topi{2})^2.
\end{equation}
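Eq.~\eqref{eq:estimate_var} is straightforward to implement; a minimal numpy sketch (function name ours), including the systematic-offset removal discussed above:

```python
import numpy as np

def pairwise_sigma_hat(theta1, theta2):
    """Estimate the annotation SD around the true time-stamps from two
    marker sequences for the same score events (Eq. 2): the differences
    Delta_n are N(0, 2*sigma^2) once a systematic offset is removed."""
    d = np.asarray(theta1, dtype=float) - np.asarray(theta2, dtype=float)
    d -= d.mean()                      # remove systematic annotator offset
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))
```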
In \cite{dannenberg_2009_single_tap_error_distribution}, two example analyses of tap sequences were presented that support these assumptions.
We analyzed our annotation data according to these ideas.
First, for each annotated recording, we calculated the time-stamp differences between each pair of annotations, according to Eq. \eqref{eq:delta_diff}, and tested the resulting distributions for normality, using the Shapiro-Wilk test.
However, for all annotations created, none of the distributions is normal according to these tests.
On visual inspection of the distributions of differences of annotation sequences $\delta_n$ using quantile-quantile plots (see Fig. \ref{fig:qqplot_webern}), the tails of the distributions turned out to be typically significantly heavier than for a normal distribution.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{Boulez_1969_qqplot_diffs_legend.png}
\caption{
Webern Op21-2, Boulez. Quantile-quantile plot of the differences of a pair of annotation sequences for the whole piece. Solid red line fitted to first and third data quartile, dashed lines show $\pm$95\% confidence around this line. Non-normal data deviate strongly from area enclosed by dashed lines.
} \label{fig:qqplot_webern}
\end{figure}
We suspect that this discrepancy with the results given in \cite{dannenberg_2009_single_tap_error_distribution} is most likely due to the higher complexity of our musical material, with large orchestras playing highly polyphonic and rhythmically complex music in varying tempi.
It seems intuitively clear that for some sections, the deviations among annotated markers will be much smaller than in complex parts.
Additionally, as we asked for silent beats to be annotated as well, even during whole silent bars, we should expect substantial deviations for at least a few such events in every recording.
We therefore conclude that at least the assumption of identical variance across a whole piece should be dropped (for more complex material) when more detailed information about local uncertainties of the annotation is desired.
However, it is interesting to note that locally, when the differences for only a few consecutive (around 20-30) annotated time-stamps are pooled, they conform to a normal distribution quite well.
This means that the assumption of about equal variance for the annotation of score events tends to hold for short blocks of time, but rather not globally (for a whole piece), at least for the musical material considered here.
As estimating the standard deviation (as a measure of uncertainty) of each time-stamp's markers is not reliable given only a few annotations, we used an alternative based on the above observation.
For blocks of 24 consecutive score events (with a hop size of 12), the differences of a pair of annotation sequences were \textit{pooled} and used to estimate the standard deviation for each respective block.
The resulting, block-wise constant curve of standard deviations is shown in Fig. \ref{fig:std_annotat} (magenta), along with the simple standard deviation per score event, calculated from three markers (blue), for a specific recording and pair of annotations.
The median of these per-block estimated standard deviations is used as a global estimate of the precision of the annotations for the respective performance, and is given for the respective performance as the right-most column in Table \ref{table:recordings}.
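The block-wise pooling just described can be sketched as follows; this is a minimal numpy illustration with the block and hop sizes from the text (function and variable names are ours):

```python
import numpy as np

def blockwise_sd(theta1, theta2, block=24, hop=12):
    """Pool the differences of a pair of annotation sequences over blocks
    of consecutive score events and estimate one SD per block; the median
    over all blocks serves as a global precision estimate."""
    d = np.asarray(theta1, dtype=float) - np.asarray(theta2, dtype=float)
    d -= d.mean()                      # remove systematic annotator offset
    sds = [
        np.sqrt(np.sum(d[s:s + block] ** 2) / (2 * block))
        for s in range(0, len(d) - block + 1, hop)
    ]
    return np.array(sds), float(np.median(sds))
```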
As can be seen, the values differ substantially across the pieces as well as within the pieces, for different performances.
The right-most boxplot in Fig. \ref{fig:std_annotat} summarizes the per-block estimated standard deviations.
Interestingly, for the 1st movement of Beethoven's Symphony 9 (with its relatively constant tempo), the estimated standard deviation is close to the value presented in \cite{dannenberg_2009_single_tap_error_distribution}, but it is considerably larger for the other pieces that exhibit more strongly varying tempo.
\section{Automatic Alignment}
\label{sec:alignment}
As mentioned above, annotating a large number of performances of the same piece is a time-consuming process.
A more efficient alternative would be to automatically transfer annotations from one recording to a number of unseen recordings, via audio-to-audio alignment.
\subsection{Alignment Procedure and Ground-truth}
The method of choice for (off-line) audio-to-audio alignment is \textit{Dynamic Time Warping (DTW)} \cite{muller_2019_cross_modal}.
Aligning two recordings via DTW involves extracting a sequence of feature vectors from each recording, $X \in \field{R}^{L \times D}$ and $Y \in \field{R}^{M \times D}$.
Using a distance function $d(x_l, y_m)$, the DTW algorithm finds a path of minimum cost, i.e. a mapping between elements $x_l$, $y_m$ of the sequences $X$, $Y$.
An alignment is thus a mapping between pairs of feature vectors (from different recordings), each vector representing a block of consecutive audio samples.
As each audio sample has an associated time-stamp (an integer multiple of the inverse of the sample rate), each feature vector, say $x_l$, can be associated with a time-stamp $t_l^X$ as well, (here) representing the center of the block of audio samples.
The matching of sequence elements is schematically illustrated in Fig.~\ref{fig:alig_ts_gt}, for the \enquote{direction} $X \rightarrow Y$ (note that direction here refers to the evaluation, as will be illustrated next).
For each block of $X$, the matching block of $Y$ is found, and its associated time-stamp $t_m^Y$ is subtracted from the ground-truth time-stamp $g_n^Y$.
This produces the pairwise error sequence $e_n^{X\rightarrow Y}$.
As we have ground-truth annotations for both recordings of a pair available, we can also calculate an error sequence for the \enquote{reverse} direction $Y \rightarrow X$.
The sequences of ground-truth time-stamps were produced from the multiple annotations discussed above (Section \ref{sec:annotation}), by taking for each annotated score event the sample mean across the three annotations per recording.
For computing the alignments, a Python implementation of FastDTW \cite{salvador_2007_fastdtw} was used.
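For illustration, the alignment and the resulting annotation transfer can be sketched with a plain quadratic-time DTW (the experiments used FastDTW for efficiency; the helper names and the nearest-frame lookup below are our own simplifications):

```python
import numpy as np

def dtw_path(X, Y):
    """Plain O(L*M) dynamic time warping between feature sequences X and Y
    (rows are feature vectors); returns the minimum-cost index mapping."""
    L, M = len(X), len(Y)
    D = np.full((L + 1, M + 1), np.inf)
    D[0, 0] = 0.0
    for l in range(1, L + 1):
        for m in range(1, M + 1):
            c = np.linalg.norm(X[l - 1] - Y[m - 1])
            D[l, m] = c + min(D[l - 1, m], D[l, m - 1], D[l - 1, m - 1])
    path, l, m = [], L, M
    while l > 0 and m > 0:             # backtrack along cheapest predecessors
        path.append((l - 1, m - 1))
        step = int(np.argmin([D[l - 1, m - 1], D[l - 1, m], D[l, m - 1]]))
        if step == 0:
            l, m = l - 1, m - 1
        elif step == 1:
            l -= 1
        else:
            m -= 1
    return path[::-1]

def transfer_annotations(ann_X, t_X, t_Y, path):
    """Map annotated time-stamps of recording X onto recording Y: each
    annotation gets the time-stamp of the Y frame matched (via the DTW
    path) to its nearest X frame."""
    t_X, t_Y = np.asarray(t_X), np.asarray(t_Y)
    x_to_y = {}
    for l, m in path:
        x_to_y.setdefault(l, m)
    return np.array([t_Y[x_to_y[int(np.argmin(np.abs(t_X - a)))]]
                     for a in ann_X])
```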
\begin{figure}
\centering
\includegraphics[width=0.5\columnwidth]{alignment_blocks_ts.pdf}
\caption{
Matching feature vectors through DTW, and calculating errors between associated time-stamps $t_m^Y$ and ground-truth time-stamps $g_n^Y$, for direction $X \rightarrow Y$.
This yields the error sequence $e_n^{X \rightarrow Y}$.
} \label{fig:alig_ts_gt}
\end{figure}
\subsection{Choice of Audio Features}
The actual alignment process is preceded by extracting features from the recordings to be aligned.
Different features have been proposed and evaluated for this task in the literature.
We decided to choose only features that have proven to yield highly accurate alignments and thus small alignment errors.
\cite{grachten_alignment_structure} evaluated several different audio features separately on data sets of different music genres, among them symphonies by Beethoven.
They achieved the best results overall by using 50 MFCC (in contrast to 13 or even 5 as used in \cite{kirchhoff_2011_evaluation_features_alignment}), for two different block lengths.
As the results on these corpora, which are similar to ours, were dominated by MFCC, we decided to use these with similar configurations for our experiments.
Additionally, we included a variant of MFCC (in the following addressed as \enquote{MFCC mod}) following an idea described in \cite{muller_2009_chroma_features_}, where 120 MFCC are extracted, then the first $n_{skip}$ are discarded and only the remaining ones used.
However, in contrast to their proposal we skip the subsequent extraction of chroma information and use the MFCC directly.
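The \enquote{MFCC mod} computation reduces to a slice of the coefficient matrix. A minimal sketch follows; the feature matrix is a random stand-in, and the librosa call in the comment indicates one plausible (hypothetical) extraction configuration.

```python
import numpy as np

def mfcc_mod(mfcc, n_skip):
    """'MFCC mod': discard the first n_skip of the 120 coefficients
    and use the remaining ones directly (no chroma step)."""
    return mfcc[n_skip:, :]

# stand-in for e.g. librosa.feature.mfcc(y=y, sr=44100, n_mfcc=120,
#                                        n_fft=4096, hop_length=2048)
feats = np.random.default_rng(0).standard_normal((120, 500))
reduced = mfcc_mod(feats, 20)   # shape (100, 500)
```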
The second family of features that has proven successful for alignment tasks are chroma features, which were tested as an alternative.
For extracting the feature values, the implementations from LibROSA \cite{librosa_2015} were used.
Besides \enquote{classical} chroma features, the variants chroma\_cqt (employing a constant-Q transform) and chroma\_cens were used.
We decided not to include more specialized features incorporating local onset information, such as LNSO / NC \cite{arzt_2012_adaptive_distance} or DLNCO (in combination with chroma), as the results in \cite{grachten_alignment_structure} and \cite{ewert_2009_hires_sync_chroma} suggest they would give no advantage on our corpus.
\subsection{Systematic Experiments Performed}
In order to find the best setup for audio-to-audio alignment for complex orchestral music recordings, we carried out a large number of alignment experiments, by systematically varying the following parameters:
\begin{itemize} [nolistsep]
\item FFT sizes: 1024 to 8192 (chroma), up to 16384 (MFCC)
\item Hop sizes MFCC: half of FFT size, for 16384 fixed to 4096
\item Hop sizes Chroma: 512 and 1024, for each FFT size; additionally 2048 for chroma\_cens and chroma\_cqt
\item Number of MFCC: 13, 20, 30, 40, 50, 80, 100
\item MFCC mod: 120 coefficients, first 10, 20, $\ldots$, 80 discarded
\item Distance measures: Euclidean ($l_2$), city block ($l_1$) and cosine distance.
\end{itemize}
Note that the audio signals were not down-sampled in any of the cases, but used with their full sample rate of 44.1 kHz.
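A sweep of this kind can be organized with \texttt{itertools.product}. The grid below is only indicative: the exact combinations yielding the 312 reported alignments are not fully enumerated in the text, so the counts printed here are those of this illustrative sub-grid.

```python
from itertools import product

fft_sizes = [1024, 2048, 4096, 8192, 16384]      # MFCC variants
n_mfcc = [13, 20, 30, 40, 50, 80, 100]
n_skip = list(range(10, 90, 10))                 # MFCC mod: skip 10..80
distances = ["euclidean", "cityblock", "cosine"]

mfcc_runs = list(product(fft_sizes, n_mfcc, distances))
mfcc_mod_runs = list(product(fft_sizes, n_skip, distances))
print(len(mfcc_runs), len(mfcc_mod_runs))  # 105 120
```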
All in all, a total number of 312 different alignments were computed and evaluated for each performance pair.
Each alignment of each pair of performances was evaluated in both directions.
As it is impossible to display all results in this paper, we will only report a subset of best results in Section \ref{sec:eval:alignments}.
\section{Evaluation of Alignments}
\label{sec:eval:alignments}
\subsection{Alignment Accuracy}
For quantifying the alignment accuracy, we calculated pairwise errors $e_n$ between the ground-truth time-stamps $g_n$ for the respective recording and the matching alignment time-stamps $t_m$ (see Fig. \ref{fig:alig_ts_gt}).
Per pair of recordings, two error sequences are obtained, one for each evaluation direction, i.e. $e_n^{X \rightarrow Y}$ and $e_n^{Y \rightarrow X}$.
As a general global measure of the accuracy of a full alignment, the mean absolute error is used; the maximum absolute error can be seen as a measure of a lack of robustness.
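The two global measures can be computed directly; the time-stamps in the example are hypothetical.

```python
import numpy as np

def alignment_errors(t_aligned, g_truth):
    """Mean and maximum absolute error between matched alignment
    time-stamps and ground-truth time-stamps (same units)."""
    e = np.abs(np.asarray(t_aligned) - np.asarray(g_truth))
    return e.mean(), e.max()

# hypothetical time-stamps in seconds
mean_err, max_err = alignment_errors([1.02, 2.51, 4.10],
                                     [1.00, 2.48, 4.00])
```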
For reporting the best results, we first ranked all alignments whose maximum absolute errors were below 5 seconds by their mean absolute errors.
As a large maximum error is taken to indicate a lack of robustness, the worst-performing settings were thus discarded.
For each pair of recordings, from the remaining error sequences (from originally 312 alignments per pair, each with 2 directions of evaluation), the 10 best results, in terms of mean absolute error, were then kept for further analysis.
The error values for both directions of each specific alignment were then pooled, i.e. the error values were collected and analyzed jointly.
A one-way ANOVA (null hypothesis: no difference in the means) was conducted for the 10 best alignments per pair of recordings, where for all cases the null hypothesis could not be rejected (recording pair with smallest p-value: $F = 0.6$, $p = 0.8$).
Thus, as the different settings of the 10 best alignments do not result in significant differences in terms of mean error performance, the error sequences for those 10 best alignments were collected, to estimate a distribution of the absolute errors.
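The ANOVA step can be reproduced with SciPy's \texttt{f\_oneway}. The error sequences below are synthetic stand-ins for the 10 best alignments of one recording pair, constructed with identical population means.

```python
import numpy as np
from scipy.stats import f_oneway

# ten synthetic error sequences standing in for the 10 best alignments
# of one recording pair (identical population means by construction)
rng = np.random.default_rng(1)
sequences = [rng.normal(loc=0.04, scale=0.02, size=200)
             for _ in range(10)]

F, p = f_oneway(*sequences)
# under the null hypothesis, p is typically well above 0.05,
# i.e. no significant difference between the sequence means
```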
Fig. \ref{fig:alig_eCDF_summarized} shows the empirical cumulative distribution function of the pairwise absolute errors for all 5 alignment (performance) pairs, where each curve is obtained from the 2 error sequences (both evaluation directions) of each of the 10 best alignments for the respective performance pair.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{+eCDF_abs_errors_all_pieces_short.png}
\caption{Cumulative distribution of absolute pairwise errors. Each curve represents pooled errors of 10 best alignments (mean absolute error) for both evaluation directions, per pair of recordings. s9-1: Beethoven, S.9, 1st Mov., s9-3: Bruckner, S.9, 3rd Mov., op21-2: Webern, Op21, 2nd Mov. (+) and (x) markers for median standard deviation of annotation, cf. Table \ref{table:recordings}.}
\label{fig:alig_eCDF_summarized}
\end{figure}
In the following, the settings and results, in terms of mean absolute error and maximum absolute error, for the 10 best alignments are presented.
For the Beethoven piece, we restricted the reporting to one pair of recordings (BPO 1962 vs. VPO 1947) due to limited space (Table \ref{table:results_beethoven}).
As can be seen from Fig. \ref{fig:alig_eCDF_summarized}, the other two pairs do not differ substantially in terms of error performance, and the settings for obtaining these results are almost identical to the ones presented in the table, with an even stronger favor of the MFCC mod feature.
Tables \ref{table:results_webern} and \ref{table:results_bruckner} show the results for the Webern and Bruckner pair of recordings, respectively.
\begin{table}[]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}ccccccc@{}}
\toprule
Feature & \#MFCC & \#skip & fft size & dist. & mean err. & max. err \\ \midrule
MFCC mod & 120 & 20 & 2048 & $l_1$ & 38 & 220 \\
MFCC mod & 120 & 30 & 4096 & $l_2$ & 38 & 411 \\
MFCC mod & 120 & 40 & 2048 & $l_2$ & 39 & 239 \\
MFCC & 100 & - & 8192 & $l_1$ & 40 & 243 \\
MFCC & 80 & - & 2048 & $l_1$ & 40 & 253 \\
MFCC mod & 120 & 10 & 4096 & $l_1$ & 41 & 318 \\
MFCC & 100 & - & 16384 & $l_1$ & 41 & 318 \\
MFCC mod & 120 & 10 & 2048 & $l_2$ & 41 & 370 \\
MFCC mod & 120 & 20 & 16384 & $l_1$ & 42 & 285 \\
MFCC mod & 120 & 20 & 16384 & cos & 43 & 709 \\ \bottomrule
\end{tabular}%
}
\caption{Settings and results for top 10 alignments, Beethoven S9-1, BPO 1962 vs. VPO 1947. The other two cases show almost identical results (omitted for lack of space), with stronger favor of MFCC mod. Errors in ms, rounded to nearest integer.}
\label{table:results_beethoven}
\end{table}
\begin{table}[]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}ccccccc@{}}
\toprule
Feature & \#MFCC & \#skip &fft size & dist. & mean err. & max. err \\ \midrule
MFCC mod & 120 & 30 & 2048 & cos & 116 & 4133 \\
MFCC mod & 120 & 20 & 4096 & cos & 123 & 3901 \\
MFCC mod & 120 & 50 & 2048 & $l_1$ & 127 & 3341 \\
MFCC mod & 120 & 40 & 8192 & cos & 137 & 3597 \\
MFCC mod & 120 & 40 & 2048 & cos & 138 & 4180 \\
MFCC mod & 120 & 60 & 4096 & $l_2$ & 139 & 2639 \\
MFCC mod & 120 & 10 & 16384 & cos & 144 & 4319 \\
MFCC mod & 120 & 20 & 2048 & cos & 145 & 4110 \\
MFCC mod & 120 & 20 & 16384 & cos & 150 & 4226 \\
MFCC mod & 120 & 60 & 16384 & cos & 150 & 4040 \\ \bottomrule
\end{tabular}%
}
\caption{Settings and results for top 10 alignments, Bruckner S9-3. Errors in ms, rounded to nearest integer.}
\label{table:results_bruckner}
\end{table}
\begin{table}[]
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}ccccccc@{}}
\toprule
Feature & \#MFCC & \#skip & fft size & dist. & mean err. & max. err \\ \midrule
MFCC & 80 & - & 2048 & $l_1$ & 62 & 1049 \\
MFCC & 40 & - & 2048 & $l_1$ & 65 & 1026 \\
MFCC & 100 & - & 4096 & $l_1$ & 67 & 980 \\
MFCC & 100 & - & 4096 & $l_2$ & 69 & 980 \\
MFCC mod & 120 & 10 & 4096 & $l_2$ & 69 & 980 \\
MFCC mod & 120 & 20 & 4096 & $l_2$ & 73 & 980 \\
MFCC & 100 & - & 4096 & cos & 76 & 980 \\
MFCC & 80 & - & 2048 & cos & 77 & 980 \\
MFCC & 100 & - & 8192 & $l_1$ & 78 & 1026 \\
MFCC mod & 120 & 20 & 2048 & cos & 82 & 956 \\ \bottomrule
\end{tabular}%
}
\caption{Settings and results for top 10 alignments, Webern Op.21-2. Errors in ms, rounded to nearest integer.}
\label{table:results_webern}
\end{table}
As can be seen from the tables, best results are achieved with either MFCC or the modified MFCC.
There does not seem to be a very clear pattern of which parameter setting gives best results, even within one pair of recordings.
A slight advantage of medium to large FFT sizes is observed, as is one for a larger number of MFCC ($\geq$ 80, a number much larger than what is suggested in the literature for timbre-related tasks).
For the modified MFCC, skipping the first 20 to 40 out of the 120 coefficients seems a good suggestion.
Interestingly, there seems to be no clear relation to the FFT size.
\subsection{Relation to Human Alignment Precision}
We would like to relate the accuracy achieved by automatic alignment methods to the precision with which human annotators mark score events in such recordings.
This will enable us to judge the errors of the alignment methods in such a way that we can not only say which method is best, but also which ones are probably sufficiently good for musicological studies (in relation to how precise human annotations tend to be).
By comparing the global measures of variation of the annotations (Table \ref{table:recordings}) with the mean errors obtained from the alignment study, the following can be stated.
We would like the errors introduced by the alignments to be in the range of the variation introduced by human annotators.
If, for example, the above estimated standard deviations are used for describing an interval (e.g. $\pm$ 1 SD) around the ground-truth annotations, then markers placed by the DTW alignment within such an interval can be taken to be as accurate as an average human annotation.
However, as Tables \ref{table:results_beethoven} to \ref{table:results_webern} reveal, on average the absolute errors are at least slightly larger (or, in the case of the Bruckner performances, much larger) than the estimated standard deviations, but still in a reasonable range, even for larger proportions of the score events (see Figure \ref{fig:alig_eCDF_summarized}).
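The comparison can be operationalized as the fraction of score events whose automatically placed marker falls within $\pm$ 1 SD of the annotations. The numbers below are hypothetical.

```python
import numpy as np

def fraction_within_sd(abs_errors_ms, sd_ms):
    """Fraction of score events whose alignment marker falls within
    +/- 1 SD of the human annotations (all values in ms)."""
    return float(np.mean(np.asarray(abs_errors_ms) <= sd_ms))

# hypothetical pooled absolute errors (ms) and an annotation SD of 50 ms
frac = fraction_within_sd([12, 38, 45, 61, 90, 30], 50.0)  # 4 of 6 events
```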
\section{Discussion and Conclusions}
Given our results, we expect the presented feature settings to be quite suitable as a first step for developing further musicological questions related to comparing multiple performances of one piece.
With careful annotation of one recording, transferring the score event markers to other recordings of the same piece should yield accuracy not much worse than what is to be expected from human annotations.
Detailed analyses of e.g. tempo may still need a moderate amount of manual correction, however.
An interesting application we consider is the exploration of a larger corpus of unseen recordings.
Being able to establish, within a reasonable uncertainty, a common musical grid for a number of performances allows one to search for (a first impression of) commonalities and differences across performances, in parameters such as tempo, or in features extracted directly from the recording, such as loudness, mapped onto the musical grid.
This will e.g. allow the pre-selection of certain performances for more careful human annotation and further more detailed analyses.
Recently, performance-related data for a larger corpus have been presented in \cite{kosta_2018_mazurkabl}.
We hope to have presented some new insights with the data on annotation precision, and the applied methods for their quantification.
Further work could make use of estimates of typical uncertainty of annotations to estimate, or give bounds for, the uncertainty of data derived from these.
One way would be to use simple error propagation to quantify uncertainty of tempo representations, and automatically find (sections of) performances of significantly different tempo within a large corpus of recordings.
\section{Acknowledgments}
This work was supported by the Austrian Science Fund (FWF) under project number P29840, and by the European
Research Council via ERC Grant Agreement 670035, project CON ESPRESSIONE.
We would like to thank the annotators for their work, as well as the anonymous reviewers for their valuable feedback.
Special thanks go to Martin Gasser for fruitful discussions of an earlier draft of this work.
\section*{Keywords}
Stochastic dynamical systems, Modeling neuronal networks, Asymptotic analysis, WKB, Eikonal equations, Escape from an attractor.
\section*{AMS classification}
37A50, 60G40, 92B25,41A60
\section{Introduction}
Trajectories of a dynamical system perturbed by small noise can escape from a basin of attraction and in general the dynamics presents large fluctuations away from a stable attractor \cite{dykman1994observable,dykman1994large,maier1993}. These perturbations can even induce switching in multi-stable systems. Noise can also enhance the response to periodic external stimuli, a phenomenon known as stochastic resonance \cite{lindner2004}. In the case of interaction between noise and a dynamics presenting a Hopf bifurcation, oscillations that would disappear in the deterministic case can be maintained. Finally, noise can induce a shift in bifurcation values \cite{schimanskyGeier1985} or can stabilize an unstable equilibrium \cite{arnold1979,arnold1990,Wihstutz_book2003}.\\
In the context of modeling biological neuronal networks, noise also plays a critical role in defining collective rhythms or large-scale synchronization. To reduce the complexity and difficulty inherent in analyzing large neuronal ensembles, mean-field models are used to study averaged behavior, which corresponds to projecting a high-dimensional system onto a low-dimensional one, as is the case for modeling fast synaptic adaptations \cite{Tsodyks1997}. Such models are used to study bursting activity, synchronization and oscillations in excitatory neuronal networks \cite{BrunelHakim1999,Hansel2001,Holcman_Tsodyks2006,Barak2007,daoduc2015}. However, in such models the distribution of interburst durations remains unclear, although these durations have recently been shown to control the overall neuronal network dynamics \cite{Rouach_CxKO} and can even influence the bursting activity during epilepsy.\\
Studying the escape rate from a basin of attraction for a noisy dynamical system usually consists in collecting trajectories that terminate when they first hit the boundary of the basin of attraction, which occurs with probability one \cite{Matkowsky1977exit, Schuss:Book}. The escape rate and the distribution of exit points can be computed in the small noise limit using the WKB approximation. Another interesting property is that the exit point distribution peaks at a distance $O(\sqrt{\sigma})$ from the saddle-point (where $\sigma$ is the noise amplitude) \cite{Schuss1980,BobrovskySchuss1982}. Metric relations can also play a role in shaping the dynamics, so that when a focus attractor falls into the boundary layer of the basin of attraction, escaping trajectories exhibit periodic oscillations, leading to an escape time distribution which is not exponential, because several eigenvalues are necessary to describe the distribution \cite{maier1996,verechtchaguina2006_1,verechtchaguina2006_2,verechtchaguina2007,daoduc2016,daoducPRE}. \\ In the case of periodically-driven systems, the escape rate scales with the field intensity \cite{smelyanskiy1999time,dykman2005}. In all these examples, escape ends when a trajectory hits the separatrix for the first time, which will not be the case for the systems we wish to study here. We consider here a class of dynamical systems perturbed by a white noise of small amplitude, for which trajectories exiting the basin of attraction can reenter multiple times before eventually escaping to infinity. This effect requires clarifying the difference between exiting and escaping, which we explain below.\\
The manuscript is organized as follows: in the first part, we introduce a reduced stochastic dynamical system for bursting, based on modeling synaptic depression-facilitation for an excitatory neuronal network \cite{Tsodyks1997,daoduc2015}. The distribution of interburst intervals corresponds to the escape from an attractor, and numerical simulations reveal a shift in the distribution of exit points and multiple returns inside the attractor. In the second part, we describe a generic two-dimensional dynamical system containing an attractor and one saddle-point. We analyze the stochastic perturbation and show that the maximum of the probability density function of trajectories before escape is not centered at the attractor, but at a shifted location that depends on the noise amplitude $\sigma$. Finally, we focus on the escape from the basin of attraction. After exiting, trajectories can return inside the basin of attraction multiple times before eventually escaping to infinity. To conclude, each excursion outside and inside the basin of attraction contributes to increasing the total escape time, by a factor of 2 to 3 compared to the first exit time, providing a novel explanation for the heavy-tailed distribution of interburst intervals in experimental time series \cite{Rouach_CxKO}.
\section{Modeling the interburst durations in neuronal networks}
\subsection{Noisy Depression-facilitation model}
Biological neuronal networks exhibit complex patterns of activity, as revealed by time series of a single neuron \cite{hille1978,yuste2010}, a population, or an entire brain area \cite{niedermeyer2005}. To analyse neuronal networks, mean-field models are used to formulate the dynamics as stochastic differential equations. \\
Bursting dynamics is a transient period of time during which an ensemble of neurons discharges; these events can be accounted for by short-term plasticity properties of synapses, such as classical depression and facilitation \cite{Tsodyks1997,Barak2007,daoduc2015}. Bursting is followed by an interburst period, where the network is almost silent. Neuronal population bursts separated by long interbursts can result from two-state synaptic depression \cite{Guerrier2015}. However, the refractory period could also result from other mechanisms such as afterhyperpolarization (AHP), mediated by a long-lasting voltage hyperpolarization transient generated by potassium channels \cite{AHPmodel}.\\
We focus here on a depression-facilitation short-term synaptic plasticity model of neuronal network bursting \cite{Tsodyks1997,daoduc2015frontiers,Holcman_Tsodyks2006}, which consists of the three equations \eqref{sys} for the mean voltage $h$, the depression $y$, and the facilitation $x$. We recall that synaptic depression describes the possible depletion of the vesicular pools necessary for neurotransmission following an action potential. In this phenomenology, facilitation is a synaptic mechanism that reflects a transient increase of the vesicular release probability, possibly mediated by an increase of the local calcium concentration in the pre-synaptic terminal. The associated equation is driven by two opposite forces: one is the return to an equilibrium $X$ with a time constant $t_f$, and the other is an increase induced by the mean firing rate $h^+=\max(h,0)$. Similarly, the depression variable $y$ returns exponentially to steady state with a time constant $t_r$. It can also decrease at a rate proportional to the firing rate $h^+$, the available fraction $y$ of vesicles and the facilitation $x$. The three coupled equations for the mean voltage $h$, the depression $y$, and the synaptic facilitation $x$ are
\begin{eqnarray} \label{sys}
\tau \dot{h} &=& - h + Jxy h^+ +\sqrt{\tau}\sigma \dot{\omega}\nonumber\\
\dot{x} &=& \dfrac{X-x}{t_f} + K(1-x) h^+ \\
\dot{y} &=& \dfrac{1-y}{t_r} - L xy h^+ , \nonumber
\end{eqnarray}
where the population-averaged firing rate $h^+ = \max(h,0)$ is a linear threshold function of the synaptic current. The mean number of connections (synapses) per neuron is accounted for by the parameter $J$, and the term $Jxy$ reflects the combined effect of short-term synaptic plasticity on the network activity. We previously distinguished \cite{daoduc2015} the parameters $K$ and $L$, which describe how the firing rate is transformed into molecular events that change the duration and probability of vesicular release. The time scales $t_f$ and $t_r$ define the recovery of a synapse from the network activity. Finally, $\dot \omega$ is an additive Gaussian white noise and $\sigma$ its amplitude. We now focus on a reduced version of this system to study the distribution of interburst intervals, interpreted as escape from a basin of attraction defined below, with properties similar to those of the system in equation \eqref{Drift}.
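As a quick illustration, system \eqref{sys} can be integrated with an Euler--Maruyama scheme. The sketch below is a minimal Python implementation; all parameter values are illustrative stand-ins chosen only for numerical stability, not the values of the paper's parameter table.

```python
import numpy as np

def simulate_network(T=10.0, dt=1e-3, sigma=5.0, seed=0,
                     tau=0.05, J=4.21, X=0.08, K=0.037, L=0.028,
                     t_f=0.9, t_r=2.9):
    """Euler-Maruyama integration of the depression-facilitation
    system.  Parameter values are illustrative stand-ins, not the
    values of the paper's parameter table."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    h = np.zeros(n)
    x = np.full(n, X)
    y = np.ones(n)
    for i in range(n - 1):
        hp = max(h[i], 0.0)                   # firing rate h^+
        dW = rng.normal(0.0, np.sqrt(dt))     # Brownian increment
        # tau*dh = (-h + J*x*y*h^+)dt + sqrt(tau)*sigma*dW
        h[i + 1] = h[i] + dt * (-h[i] + J * x[i] * y[i] * hp) / tau \
                   + sigma * dW / np.sqrt(tau)
        x[i + 1] = x[i] + dt * ((X - x[i]) / t_f + K * (1 - x[i]) * hp)
        y[i + 1] = y[i] + dt * ((1 - y[i]) / t_r - L * x[i] * y[i] * hp)
    return h, x, y

h, x, y = simulate_network()
```

With this scheme, the facilitation $x$ stays in $[X,1]$ and the depression $y$ in $(0,1]$, consistent with their interpretation as fractions.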
\subsection{Reduction to a two-dimensional system and phase-space analysis}
System \eqref{sys} has 3 critical points: one attractor and two saddle-points. At the attractor $A=(0,X,1)$, the dynamics is very anisotropic \\
$\left(|\lambda_1|= \cfrac{1-JX}{\tau} \approx 12.6 \gg |\lambda_2| = \cfrac{1}{\tau_f} \approx 1.1 \gg |\lambda_3| = \cfrac{1}{\tau_r} \approx 0.34, \text{ using the parameters in table \ref{tableParam}}\right)$, and the dynamics can thus be reduced to the 2D plane $y=\mathrm{const}$, so that
\begin{eqnarray}
\dot{y}=\dfrac{1-y}{\tau_r} - L xy h^+ = 0 \iff y = \cfrac{1}{1+\tau_r Lxh^+},
\end{eqnarray}
and we obtain the simplified system:
\begin{eqnarray}\label{2Dsyst}
\begin{array}{r c l}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\dot{h}&=&\cfrac{h\left(Jx-1-\tau_rLxh^+ \right)}{\tau(1+\tau_rLxh^+)}+\sqrt{\tau}\sigma \dot{\omega}\\
\dot{x} &=& \cfrac{X-x}{\tau_f} + K(1-x) h^+.
\end{array}
\end{eqnarray}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{fig0_3Dmodel.pdf}
\caption{ \textbf{Modeling bursting and interbursting using a facilitation-depression model.} \textbf{A. Dynamics generated by system \eqref{sys}. A.1} A bursting trajectory (black) in the phase-space, characterized by an attractor $A_0$ (yellow), two saddle-points $S$ (blue) and $S_1$ (pink), with a separatrix delimiting the basin of attraction of $A_0$ (cyan surface). \textbf{A.2} Simulated time series of system \eqref{sys} with a noise amplitude $\sigma=5$: mean voltage $h$ (upper), facilitation $x$ (center) and depression $y$ (lower) for $T=100$ s. \textbf{B. Dynamics in the basin of attraction of $A_0$. B.1} Inset near $A_0$ showing an escaping trajectory (black). \textbf{B.2} Phase-space of the 2D projected dynamics \eqref{2Dsyst} for several trajectories ($T=500$ s) associated with three noise levels $\sigma = 1$ (pink), $1.5$ (blue) and $2.5$ (green), rotating around shifted attractors $A_\sigma$ (centers of mass of the trajectories). \textbf{B.3} Distance d($A_0$,$A_\sigma$) for $\sigma \in [0.5,3]$ compared to a numerical fit (yellow). \textbf{C. Escape dynamics with several returns inside the basin of attraction. C.1} Trajectory exiting the basin of attraction. \textbf{C.2} Distribution of exit times, whose tail is well approximated by a single exponential with $\lambda_e=0.09$ (or $\bar{\tau}_e \approx 11$ s).} \label{application}
\end{figure}
The deterministic system \eqref{2Dsyst} for $\sigma=0$ has 3 critical points, two attractors and one saddle-point:
{\bf Attractor $A_0$} is given by $h=0$ and $x=X$. The Jacobian is
\begin{eqnarray}\label{jac_A}
J_{A_0} = \arraycolsep=1.4pt\def\arraystretch{2.0}
\left(
\begin{array}{c c}
\cfrac{-1+JX}{\tau} \phantom{1234}& 0 \\
K(1-X) \phantom{1234}& - \cfrac{1}{\tau_f}\\
\end{array}
\right).
\end{eqnarray}
With the parameters defined in table \ref{tableParam}, the eigenvalues are $(\lambda_1,\lambda_2) = \left(\cfrac{JX-1}{\tau}, -\cfrac{1}{\tau_f}\right) \approx (-12.6, -1.11)$.\\
{\bf Saddle-point $S$} has coordinates $(h_1 \approx 8.07,\, x_1 \approx 0.28)$. Its eigenvalues are $(\lambda_1,\lambda_2) \approx (-5.73,1.43)$.\\
{\bf Attractor $A_2$} has coordinates $(h_2 \approx 28.8,\, x_2 \approx 0.53)$. Its eigenvalues are $(\lambda_1,\lambda_2) \approx (-11.9,-1.33)$.\\
The two attractors are separated by a 1D stable manifold $\Gamma$ passing through the saddle-point $S$ (fig. \ref{application}A, solid black). To study the dynamics around the attractor $A_0$, where we approximate $\dot{y}=0$, we first generated stochastic trajectories and observed two novel phenomena: 1) before exiting, trajectories in the basin of attraction fluctuate around a point that is not the attractor $A_0$, but a shifted point $A_\sigma$ whose position depends on the noise amplitude (fig. \ref{application}B.2-B.3); 2) trajectories that exit the basin of attraction through the separatrix $\Gamma$ can reenter multiple times before finally escaping. In the reduced system \eqref{2Dsyst}, escape is characterized by falling to the second attractor $A_2$. The most interesting unexplained phenomenon, revealed by figs. \ref{application}C.1-C.2, is the single exponential decay of the exit-time distribution, with rate $\lambda_e=0.09$ (or $\bar{\tau}_e=11$ s). This contrasts with the mean escape time $\langle\tau_0\rangle=4.35$ (estimated numerically) from the attractor, for a trajectory starting at the attractor $A_0$ and reaching the separatrix $\Gamma$. The rest of the manuscript is dedicated to understanding how this discrepancy can be resolved, and to studying in more detail the properties 1) and 2) identified numerically. For that purpose, we study below a generic dynamical system that serves as a model, obtain specific computational criteria and, finally, resolve the present enigma.
\section{When a perturbation of a two-dimensional system by a Gaussian noise induces a shift of the density function peak with respect to the attractor position} \label{2D_description}
We consider a class of two-dimensional stochastic dynamical systems described by
\begin{eqnarray}\label{Drift}
\begin{array}{r c l}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\dot{h}&=& b_1(\mbox{\boldmath$s$}) +\sigma \dot{\omega}, \qquad b_1(\mbox{\boldmath$s$}) = -\alpha h + x^2,\\
\dot{x} &=& b_2(\mbox{\boldmath$s$}) = F(h,x),
\end{array}
\end{eqnarray}
where
\begin{eqnarray}
F(h,x)=\left\{
\begin{array}{l c l}
\arraycolsep=1.4pt\def\arraystretch{2.5}
h - \gamma x &\text{for}& h\geq 0\\
- \gamma x &\text{for}& h\leq 0,\\
\end{array} \right.
\end{eqnarray}
We rewrite this process with $\mbox{\boldmath$s$}=(h,x)$ as
\begin{eqnarray} \label{stocProc}
d\mbox{\boldmath$s$} = \mbox{\boldmath$B$}(\mbox{\boldmath$s$})dt + \Xi dW,
\end{eqnarray}
where
\begin{eqnarray} \label{B}
\mbox{\boldmath$B$}(\mbox{\boldmath$s$}) = \begin{pmatrix}
b_1(\mbox{\boldmath$s$}) \\
b_2(\mbox{\boldmath$s$})
\end{pmatrix} \hbox{ and }
\Xi= \begin{pmatrix}
\sqrt{\sigma} & 0 \\
0 & 0
\end{pmatrix}.
\end{eqnarray}
In the following, $\alpha \in ]0,1]$, $\gamma \in ]0, \alpha[$, $\dot{\omega}$ is a Gaussian white noise and $\sigma$ its amplitude. Our goal here is to study some properties of such systems. This system has two critical points, $A = (0,0)$ (fig. \ref{PhaseSpace}A, yellow star) and $S = (\gamma^2 \alpha, \gamma \alpha)$ (fig. \ref{PhaseSpace}A, cyan star). The Jacobian of the system at the point $A$ can be computed either for $h \geq 0$ or for $h \leq 0$, and in both cases we have
\begin{eqnarray}\label{jacAnoDrift}
J_A =
\begin{pmatrix}
-\alpha & 0 \\
1 & -\gamma\\
\end{pmatrix}.
\end{eqnarray}
The attractor $A$ has real eigenvalues $\lambda_1 = -\alpha$ and $\lambda_2 = -\gamma$ (its stable manifolds are shown in fig. \ref{PhaseSpace}A, dotted black lines). The first coordinate of the point $S$ is $h_S=\gamma^2\alpha>0$ and the Jacobian is
\begin{eqnarray}\label{jacSnoDrift}
J_S =
\begin{pmatrix}
-\alpha & 2\alpha \gamma \\
1 & -\gamma\\
\end{pmatrix}.
\end{eqnarray}
Both eigenvalues are real, $\lambda_\pm = \cfrac{1}{2}\left(-(\alpha+\gamma)\pm \sqrt{(\alpha+\gamma)^2+4\alpha \gamma}\right)$, and thus $S$ is a saddle-point (with $\alpha = 1$ and $\gamma = 0.6$, we have $\lambda_+ \approx 0.314$ and $\lambda_-\approx -1.914$). \\
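These eigenvalues can be checked numerically, for instance with the following sketch:

```python
import numpy as np

alpha, gamma = 1.0, 0.6
J_S = np.array([[-alpha, 2 * alpha * gamma],
                [1.0, -gamma]])
lam = np.sort(np.linalg.eigvals(J_S).real)
# one negative and one positive eigenvalue: S is a saddle-point
print(lam)  # approx [-1.914, 0.314]
```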
The separatrix that delimits the basin of attraction of $A$ is the stable manifold of $S$ (fig. \ref{PhaseSpace}A, solid black curve). As we shall describe below, the unstable manifold defines the escape direction (fig. \ref{PhaseSpace}A, yellow curve). It lies between the $x$-nullcline $\Phi_x=\{(x,h) \mid h=\gamma x\}$ (fig. \ref{PhaseSpace}A, red) and the $h$-nullcline $\tilde{\Phi}_h=\{(x,h) \mid h=x^2/\alpha\}$ (purple).\\
\begin{figure}[ht!]
\includegraphics[scale=0.74]{fig1_PhaseSpaceAndAttractors.pdf}
\caption{\textbf{Emergence of a shift in the attractor's position for the noisy dynamical system \eqref{Drift}.} \textbf{A.} Phase-space of system \eqref{Drift}. The basin of attraction associated to the attractor $A$ (yellow) has two stable manifolds (dashed lines) and is delimited by the stable manifold $\Gamma$ (solid black) passing through the saddle-point $S$ (cyan). \textbf{B.} Stochastic trajectory (green) for a noise amplitude $\sigma = 0.09$. The center of mass $A_\sigma$ is shifted towards the right with $x_{A_\sigma}\approx 0.05$. \textbf{C.} Successive intersection points $P_k$ and $Q_k$ of trajectories with $\Phi_x$ (red). \textbf{D.} Distributions $\rho_{P_k}$ and $\rho_{Q_k}$ (500 runs, $\sigma=0.09$) for $1\leq k\leq 50$. The peak of the converging distributions $\rho_{P_{46}}$ to $\rho_{P_{50}}$ (purple) indicates the x-coordinate of the shifted attractor $x_{A_\sigma}\approx 0.05$.}\label{PhaseSpace}
\end{figure}
Numerical simulations reveal that the stochastic trajectories are not centered around the deterministic attractor $A$ (fig. \ref{PhaseSpace}B, green trajectory). Indeed, we computed the shifted attractor as the expectation of the center of mass for each trajectory, before escaping at time $\tau_{\omega}$,
\begin{eqnarray}
A_\sigma=\mathbb{E}_{\omega}\left[\frac{1}{\tau_{\omega}}\int_{0}^{\tau_{\omega}} \mbox{\boldmath$s$}_{\omega}(t)\,dt\right].
\end{eqnarray}
The empirical distribution peaks at the point $A_\sigma$, which we found to be shifted towards the right of the attractor $A$ (fig. \ref{PhaseSpace}B, green star). Our next goal is to study how this shift depends on $\sigma$.
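The shift can be estimated by an Euler--Maruyama simulation of the system and the time average of one long trajectory. The sketch below ignores escape events for simplicity, so it is only a crude Monte Carlo estimate of $A_\sigma$.

```python
import numpy as np

def center_of_mass(sigma=0.09, alpha=1.0, gamma=0.6,
                   T=100.0, dt=1e-3, seed=3):
    """Time average (center of mass) of one trajectory: a crude
    Monte Carlo estimate of A_sigma.  Escape events are ignored
    here for simplicity."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    h = x = 0.0
    sum_h = sum_x = 0.0
    for _ in range(n):
        F = (h if h >= 0.0 else 0.0) - gamma * x   # piecewise field
        h += dt * (-alpha * h + x * x) + sigma * np.sqrt(dt) * rng.normal()
        x += dt * F
        sum_h += h
        sum_x += x
    return sum_h / n, sum_x / n

h_bar, x_bar = center_of_mass()
# x_bar is strictly positive: the center of mass is shifted to the
# right of the attractor A = (0, 0)
```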
\subsection{Numerical study of a shifted maximum density at \boldmath $A_\sigma$ \unboldmath}
To better characterize the noise-induced shift of the density maximum to $A_\sigma$ before trajectories escape the basin of attraction, we ran simulations of model \eqref{Drift} (fig. \ref{PhaseSpace}B, $\sigma =0.09$), where trajectories are simulated for 500 s. We observed that trajectories loop around the $x$-nullcline $\Phi_x$ (red). To characterize the shift in the distribution, we study the distribution of the points $P_0$,
\begin{eqnarray}
\rho_{P_0} = P(\mbox{\boldmath$s$}(t)\in \Phi_x | \mbox{\boldmath$s$}(0)=A)
\end{eqnarray}
where a trajectory starting at $A$ hits $\Phi_x$ for the first time (fig. \ref{PhaseSpace}C, red). We generated the empirical distribution $\rho_{P_0}$ by simulating 150 trajectories (fig. \ref{PhaseSpace}D, red) and found that this distribution is peaked close to $A$. To further understand the dynamics, we then investigated the distribution of the points $Q_0$,
\begin{eqnarray}
\rho_{Q_0}=P(\mbox{\boldmath$s$}(t)\in \Phi_x | \mbox{\boldmath$s$}(0)=P_0)
\end{eqnarray}
where a trajectory starting at $P_0$ hits $\Phi_x$ for the first time (fig. \ref{PhaseSpace}C, orange). This distribution is also peaked and located near $\rho_{P_0}$ (fig. \ref{PhaseSpace}D, orange). We then iterated this process to obtain the successive distributions of the points $P_k$,
\begin{eqnarray}
\rho_{P_k}=P(\mbox{\boldmath$s$}(t)\in \Phi_x | \mbox{\boldmath$s$}(0)=Q_{k-1}),
\end{eqnarray}
of points where a trajectory starting initially at $Q_{k-1}$ hits $\Phi_x$. Similarly, we define the distributions of the points
\begin{eqnarray}
\rho_{Q_k}=P(\mbox{\boldmath$s$}(t)\in \Phi_x | \mbox{\boldmath$s$}(0)=P_k)
\end{eqnarray}
where a trajectory starting at the peak of $\rho_{P_k}$ hits $\Phi_x$ for the first time (fig. \ref{PhaseSpace}D). Interestingly, we observed that the distributions are peaked and progress along $\Phi_x$ towards the separatrix $\Gamma$. However, after a few iterations, the successive distributions $\rho_{P_k}$ and $\rho_{Q_k}$ seem to accumulate toward a shifted equilibrium (fig. \ref{PhaseSpace}D, pink and purple distributions).
\subsection{Computing the steady-state distribution and the distance of the maximum to the attractor \boldmath $A$ \unboldmath using WKB approximation}\label{pseudoEqLoc}
\subsubsection{Steady state Fokker-Planck equation for system \eqref{Drift}}\label{pseudoEqLocPDF}
In this subsection, we study the probability density function (pdf) of the system \eqref{Drift} for trajectories that stay inside the basin of attraction of $A$. We first generated 300s simulations for three values of the noise amplitude, $\sigma=0.03$ (pink), 0.09 (blue) and 0.12 (green) (fig. \ref{phaseSpace_PDF}A), showing that the pseudo-distributions of points that do not escape the basin of attraction are peaked at a point $A_\sigma$ shifted to the right of $A$. We now compute this pdf using the WKB approximation \cite{Schuss1980}. The steady-state pdf $p$ satisfies the stationary Fokker-Planck Equation (FPE)
\begin{eqnarray}\label{FPE}
\cfrac{\sigma}{2}\cfrac{\partial^2 p}{\partial h^2} -(\nabla \cdot \mbox{\boldmath$B$}) p - \mbox{\boldmath$B$} \cdot \nabla p = -\delta_A,
\end{eqnarray}
where $\delta_A$ is the Dirac $\delta$-function at point $A$. Due to the discontinuity of the field at $h=0$, we compute $\nabla \cdot \mbox{\boldmath$B$}$ on the two half spaces $(h \geq 0)$ and $(h \leq 0)$ separately. In the small noise limit $\sigma \to 0$, the WKB solution has the form
\begin{eqnarray}\label{WKB}
p(\mbox{\boldmath$s$}) = K_{\sigma}(\mbox{\boldmath$s$}) e^{-\cfrac{\psi(\mbox{\boldmath$s$})}{\sigma}},
\end{eqnarray}
where $K_{\sigma}$ is a regular function that admits an expansion
\begin{eqnarray}\label{K}
K_{\sigma}(\mbox{\boldmath$s$})=\sum_{i=0}^{\infty} K_i(\mbox{\boldmath$s$}) \sigma^i.
\end{eqnarray}
The eikonal equation is obtained by injecting \eqref{WKB} into \eqref{FPE} and keeping only the leading-order terms in $\sigma$ (i.e., of order $\sigma^{-1}$)
\begin{eqnarray}\label{eikonal}
\mbox{\boldmath$B$} \cdot \nabla \psi + \cfrac{1}{2} \left(\cfrac{\partial \psi}{\partial h}\right)^2 = 0
\end{eqnarray}
and the transport equation is obtained using the order 1 terms:
\begin{eqnarray}\label{transport}
\mbox{\boldmath$B$} \cdot \nabla K_0 + \cfrac{\partial \psi}{\partial h}\cfrac{\partial K_0}{\partial h}=-\left(\nabla \cdot \mbox{\boldmath$B$} + \cfrac{1}{2}\cfrac{\partial ^2 \psi}{\partial h^2}\right) K_0.
\end{eqnarray}
To solve \eqref{eikonal}, we use the method of characteristics with notation $q=(q_1,q_2)=\nabla \psi$. Then eq. \eqref{eikonal} becomes
\begin{eqnarray}
F(\mbox{\boldmath$s$},q,\psi)=\mbox{\boldmath$B$} \cdot q + \cfrac{1}{2}q_1^2=0
\end{eqnarray}
and the characteristics are given by
\begin{eqnarray}\label{characteristics}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\begin{array}{r c l}
\cfrac{dh}{dt}&=&b_1(\mbox{\boldmath$s$})+q_1\\
\cfrac{dx}{dt}&=&b_2(\mbox{\boldmath$s$})\\
\cfrac{dq_1}{dt}&=&-F_h=-\cfrac{\partial b_1}{\partial h}q_1-\cfrac{\partial b_2}{\partial h}q_2\\
\cfrac{dq_2}{dt}&=&-F_x=-\cfrac{\partial b_1}{\partial x}q_1-\cfrac{\partial b_2}{\partial x}q_2\\
\cfrac{d\psi}{dt}&=&\cfrac{1}{2} q_1^2.\\
\end{array}
\end{eqnarray}
To define the initial condition, we can choose a neighborhood $V_A$ of $A$ (positioned at the origin), where $\psi$ has a quadratic approximation
\begin{eqnarray}
\psi(\mbox{\boldmath$s$}) = \cfrac{1}{2}\mbox{\boldmath$s$}^TR\mbox{\boldmath$s$} +o(|\mbox{\boldmath$s$}|^2) \hbox{ for } \mbox{\boldmath$s$} \in V_A,
\end{eqnarray}
and $R$ is a symmetric positive-definite matrix defined by the degenerate matrix equation at $A$
\begin{eqnarray}
(J_A \mbox{\boldmath$s$})^T \cdot \nabla \psi + \cfrac{1}{2}q_1^2 = 0,
\end{eqnarray}
where $J_A$ is the Jacobian matrix at point $A$ defined by relation \eqref{jacAnoDrift}. We obtain
\begin{eqnarray} \label{psiApprox}
\psi(\mbox{\boldmath$s$}) \approx \cfrac{1}{2}\mbox{\boldmath$s$}^T\begin{pmatrix}
2\alpha & 2\gamma\\
2\gamma & 2\gamma
\end{pmatrix} \mbox{\boldmath$s$}.
\end{eqnarray}
The $\psi$ contours are the ellipses given by
\begin{eqnarray}\label{ellipsoids}
\alpha h^2+ 2\gamma xh + \gamma x^2=\epsilon,
\end{eqnarray}
for small $\epsilon>0$. To conclude, we choose as initial conditions one of the small ellipses given by \eqref{ellipsoids}, the value of $\epsilon$ being fixed later on.
\subsubsection{Solution in the subspace \boldmath $ h \leq 0 $ \unboldmath}
A direct integration of system \eqref{characteristics} gives for $t\geq0$
\begin{eqnarray} \label{characHneg}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\begin{array}{r c l}
h(t)&=&\left(h_0-\cfrac{ x_0^2}{\alpha-2\gamma}-\cfrac{q_{1,0}}{2\alpha}\right)e^{\displaystyle -\alpha t} + \cfrac{x_0^2}{\alpha -2\gamma}e^{\displaystyle -2\gamma t} +\cfrac{q_{1,0}}{2\alpha}e^{\displaystyle \alpha t}\\
x(t)&=&x_0e^{\displaystyle -\gamma t}\\
q_1(t)&=&q_{1,0}e^{\displaystyle \alpha t} \\
q_2(t)&=&\left(q_{2,0}-\cfrac{2 x_0 q_{1,0}}{\alpha-2\gamma}\right)e^{\displaystyle \gamma t}-\cfrac{2 x_0 q_{1,0}}{\alpha-2\gamma}e^{\displaystyle (-\gamma+\alpha)t}\\
\psi(t)&=&\cfrac{q_{1,0}^2}{4 \alpha}e^{\displaystyle 2\alpha t},
\end{array}
\end{eqnarray}
where the initial conditions are $h(0)=h_0, x(0)=x_0, q_1(0)=q_{1,0} \text{ and } q_2(0)=q_{2,0}$. Substituting the expression of $x$ and $q_1$ in $h$, we obtain
\begin{eqnarray} \label{psi_xh_hNeg}
\psi(h,x)=\alpha\left(h- \left(h_0-\cfrac{ x_0^2}{\alpha-2\gamma}-\cfrac{q_{1,0}}{2\alpha}\right) \left(\cfrac{x}{x_0}\right)^{\displaystyle \cfrac{\alpha}{\gamma}} - \cfrac{x^2}{\alpha-2\gamma}\right)^2.
\end{eqnarray}
We solve the transport equation \eqref{transport} along the characteristics \eqref{characHneg}. Using \eqref{Drift} and \eqref{psi_xh_hNeg}, we obtain
\begin{eqnarray}
\cfrac{d K_0(\mbox{\boldmath$s$}(t))}{dt}=\gamma K_0(\mbox{\boldmath$s$}(t)),
\end{eqnarray}
yielding
\begin{eqnarray} \label{K0hNeg}
K_0(\mbox{\boldmath$s$}(t))=C e^{\displaystyle \gamma t}.
\end{eqnarray}
For $\alpha > \gamma$ and $\mbox{\boldmath$s$}(t)\in V_A$, the displacement along the characteristic is approximately
\begin{eqnarray} \label{absCurv}
\tilde{\mbox{\boldmath$s$}}(t) \approx \int_0^t x_0 e^{-\gamma u} du \approx - x_0 \cfrac{e^{\displaystyle -\gamma t}-1}{\gamma}.
\end{eqnarray}
Thus
\begin{eqnarray}
K_0 \sim \cfrac{x_0}{x_0 + \gamma \mbox{\boldmath$s$} \cdot e_2} = \cfrac{x_0}{x_0 + \gamma x},
\end{eqnarray}
where $e_2=(0,1)^T$ is the eigenvector associated with the eigenvalue $\lambda_2=-\gamma$. Finally,
\begin{eqnarray} \label{pdfFinal_hneg}
p(\mbox{\boldmath$s$}) \sim \cfrac{x_0}{x_0 + \gamma x} e^{-\cfrac{\psi(\mbox{\boldmath$s$})}{\sigma}}.
\end{eqnarray}
\subsubsection{Solution in the subspace \boldmath $ h \geq 0 $ \unboldmath}
In this case, system \eqref{characteristics} cannot be integrated analytically. In the neighborhood of $A$, $x\ll 1$ and, since $\psi$ is a smooth function, we can neglect the quadratic terms in system \eqref{characteristics}, yielding
\begin{eqnarray} \label{linCharacHpos}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\begin{array}{r c l}
\cfrac{dh}{dt}&\approx&-\alpha h+q_1\\
\cfrac{dx}{dt}&=&-\gamma x + h\\
\cfrac{dq_1}{dt}&=&\alpha q_1-q_2\\
\cfrac{dq_2}{dt}&\approx&-F_x= \gamma q_2\\
\cfrac{d\psi}{dt}&=&\cfrac{1}{2} q_1^2.\\
\end{array}
\end{eqnarray}
Integration of system \eqref{linCharacHpos} gives, for $t\geq 0$
\begin{eqnarray} \label{characHpos}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\begin{array}{r c l}
h(t)&=&H_0e^{\displaystyle -\alpha t} + \cfrac{Q_0}{2\alpha}e^{\displaystyle \alpha t} - \cfrac{q_{2,0}}{\gamma^2-\alpha^2}e^{\displaystyle \gamma t}\\
x(t)&=&X_0e^{\displaystyle -\gamma t}+\cfrac{H_0}{\gamma-\alpha}e^{\displaystyle -\alpha t} +\cfrac{Q_0}{2\alpha (\gamma+\alpha)}e^{\displaystyle \alpha t} - \cfrac{q_{2,0}}{2\gamma(\gamma^2-\alpha^2)}e^{\displaystyle \gamma t}\\
q_1(t)&=&Q_0e^{\displaystyle \alpha t} -\cfrac{q_{2,0}}{\gamma -\alpha}e^{\displaystyle \gamma t}\\
q_2(t)&=&q_{2,0}e^{\displaystyle \gamma t}\\
\psi(t)&=&\cfrac{Q_0^2}{4\alpha}e^{\displaystyle 2\alpha t}+\cfrac{q_{2,0}^2}{4\gamma(\gamma -\alpha)^2}e^{\displaystyle 2 \gamma t}-\cfrac{Q_0 q_{2,0}}{\gamma^2-\alpha^2}e^{\displaystyle (\gamma+\alpha)t},
\end{array}
\end{eqnarray}
where
\begin{eqnarray}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\begin{array}{r c l}
Q_0&=&q_{1,0}+\cfrac{q_{2,0}}{\gamma-\alpha}\\
H_0&=&h_0-\cfrac{Q_0}{2\alpha}+\cfrac{q_{2,0}}{\gamma^2-\alpha^2}\\
X_0&=&x_0-\cfrac{H_0}{\gamma-\alpha}-\cfrac{Q_0}{2\alpha(\gamma+\alpha)}+\cfrac{q_{2,0}}{2\gamma(\gamma^2-\alpha^2)}
\end{array}
\end{eqnarray}
and the initial conditions are $h(0)=h_0, x(0)=x_0, q_1(0)=q_{1,0} \text{ and } q_2(0)=q_{2,0}$.\\
To derive the eikonal solution, we eliminate the time and start with the relation
\begin{eqnarray} \label{approx}
q_2 \approx 2\gamma (h+x),
\end{eqnarray}
which is obtained from \eqref{psiApprox}. Substituting \eqref{approx} in $h$, we obtain near the attractor $A$
\begin{eqnarray} \label{psi_xh_hpos}
\psi(h,x) \approx \cfrac{Q_0^2}{4\alpha}\left(\cfrac{2\gamma (h+x)}{q_{2,0}}\right)^{\cfrac{2\alpha}{\gamma}}+\cfrac{\gamma(h+x)^2}{(\gamma-\alpha)^2}+\cfrac{Q_0 q_{2,0}}{\gamma^2-\alpha^2}\left(\cfrac{2\gamma(h+x)}{q_{2,0}}\right)^{\cfrac{\alpha+\gamma}{\gamma}}.
\end{eqnarray}
Furthermore, we solve the transport equation \eqref{transport} along the characteristics \eqref{characHpos}. Differentiating \eqref{psi_xh_hpos} twice with respect to $h$ gives $\cfrac{\partial ^2 \psi}{\partial h^2} \approx \cfrac{\gamma}{(\alpha-\gamma)^2}$, which leads to
\begin{eqnarray}
\cfrac{d K_0(\mbox{\boldmath$s$}(t))}{dt} \approx \left(\alpha+\gamma -\cfrac{\gamma}{(\gamma-\alpha)^2}\right)K_0(\mbox{\boldmath$s$}(t)).
\end{eqnarray}
Using \eqref{absCurv} we obtain
\begin{eqnarray} \label{K0_hpos}
K_0 \sim \left(\cfrac{x_0}{x_0 + \gamma x}\right)^{\cfrac{\alpha+\gamma}{\gamma}-\cfrac{1}{(\alpha-\gamma)^2}}.
\end{eqnarray}
Finally, for $\mbox{\boldmath$s$} \in V_A$
\begin{eqnarray}\label{pdfFinal}
p(\mbox{\boldmath$s$})\sim K_0(\mbox{\boldmath$s$}) e^{-\cfrac{\psi(\mbox{\boldmath$s$})}{\sigma}},
\end{eqnarray}
where $K_0$ and $\psi$ are defined by relations \eqref{K0_hpos} and \eqref{psi_xh_hpos}, respectively. When $\alpha \neq \gamma$ and $\alpha \neq 2\gamma$, the exponent in \eqref{K0_hpos} is positive when
\begin{eqnarray}
\cfrac{\alpha+\gamma}{\gamma} -\cfrac{1}{(\gamma-\alpha)^2}>0 \iff 1-\left(\cfrac{\gamma}{\alpha}\right)-\left(\cfrac{\gamma}{\alpha}\right)^2+\left(\cfrac{\gamma}{\alpha}\right)^3-\cfrac{\gamma}{\alpha^3}>0,
\end{eqnarray}
that is, for $\cfrac{\gamma}{\alpha}>0.45$. For the range of parameters $0.45\alpha<\gamma<\alpha$ and $\alpha \neq 2\gamma$, the pdf has a maximum located on the $h=0^+$ axis, shifted to the right of $A$ (fig. \ref{phaseSpace_PDF}B, for the three noise amplitudes $\sigma=0.03$ (pink), 0.09 (blue) and 0.12 (green), with $\gamma=0.6$). This maximum gives the position of the shifted attractor $A_\sigma$, which depends on the noise amplitude. When $\cfrac{\gamma}{\alpha}\leq0.45$, the linearization approximation in \eqref{approx} is not valid, and formula \eqref{pdfFinal} cannot be used to approximate the pdf.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{fig2_Analytics_small.pdf}
\caption{\textbf{Position of the shifted attractor \boldmath $A_\sigma$ \unboldmath }\textbf{A.} Simulated pdf of the trajectories for three noise levels $\sigma = 0.03$ (pink), $\sigma = 0.09$ (blue) and $\sigma = 0.12$ (green) for $\gamma = 0.6$ and $\alpha = 1$. \textbf{B.} Analytical distributions for the three noise levels. Inset: same distributions from a different perspective. \textbf{C.} $\cfrac{\partial p}{\partial x}_{|h=0^+}(x)$ for $\sigma \in [0.01,0.2]$ (the crosses indicate the zeros). \textbf{D-E.} Distance $d(A,A_\sigma)$ as a function of $\sigma$. Numerical solution (black stars with numerical fit in pink) and analytical relation \eqref{xM_06} (resp. \eqref{xM_09}, cyan crosses with yellow curve) for $\gamma=0.6$ (resp. 0.9) and $\alpha = 1$. \textbf{F.} Distance $d(A,A_\sigma)$ as a function of $\sigma$. Numerical solution (black stars with numerical fit in pink) compared with the analytical expressions \eqref{xM_06} and \eqref{xM_09} (cyan crosses with yellow curve and magenta crosses with blue curve) in the case $\gamma=0.75$ and $\alpha = 1$, for which neither polynomial approximation is valid.}\label{phaseSpace_PDF}
\end{figure}
\subsubsection{Computing the distance between \boldmath $A_\sigma$ \unboldmath and \boldmath $A$ \unboldmath}\label{pseudoEqLocSol}
To study how the distance between $A$ and $A_\sigma$ depends on the noise amplitude $\sigma$, we use that the maximum of the pdf \eqref{pdfFinal} is given by
\begin{eqnarray}
\nabla p = 0.
\end{eqnarray}
However, the partial derivative along $h$ is discontinuous, and the analytical expressions for the pdf $p$ (\ref{pdfFinal_hneg}, \ref{pdfFinal}) are decreasing with $|h|$ on both halves of the phase space. Thus, based on the numerical observation that the maximum $A_\sigma$ of the pdf is shifted along the $x$ axis, we are left with solving $\cfrac{\partial p}{\partial x}_{|h=0^+}=0$. Using relation \eqref{pdfFinal}, for $h \geq 0$ we obtain
\begin{eqnarray} \label{gradPeq0}
\begin{split}
-\cfrac{\left(\cfrac{\alpha + \gamma}{\gamma}-\cfrac{1}{(\alpha-\gamma)^2}\right) \gamma}{x_0+\gamma x}+\cfrac{1}{\sigma}\left(-\cfrac{Q_0^2}{q_{2,0}}\left(\cfrac{2\gamma}{q_{2,0}}\right)^{\displaystyle \frac{2\alpha}{\gamma}-1}x^{\displaystyle \frac{2\alpha}{\gamma}-1}\right.\\
\left.-\cfrac{2Q_0}{\gamma-\alpha}\left(\cfrac{2\gamma}{q_{2,0}}\right)^{\displaystyle \frac{\alpha}{\gamma}}x^{\displaystyle \frac{\alpha}{\gamma}}
-\cfrac{2\gamma x}{(\alpha-\gamma)^2}\right)=0.
\end{split}
\end{eqnarray}
We rewrite \eqref{gradPeq0} as
\begin{eqnarray}\label{dpdxEq0}
-\cfrac{A_1}{1+\cfrac{\gamma}{x_0}x} - \cfrac{1}{\sigma}\left(A_2x^{\displaystyle \frac{2\alpha}{\gamma}-1}+A_3x^{\displaystyle \frac{\alpha}{\gamma}}+A_4x\right)=0,
\end{eqnarray}
where
\begin{eqnarray}
A_1 = \left(\cfrac{\alpha+\gamma}{\gamma}-\cfrac{1}{(\alpha-\gamma)^2}\right)\cfrac{\gamma}{x_0},
A_2 = \cfrac{Q_0^2}{q_{2,0}}\left(\cfrac{2\gamma}{q_{2,0}}\right)^{\displaystyle \frac{2\alpha}{\gamma}-1},
A_3 = \cfrac{2Q_0}{\gamma-\alpha}\left(\cfrac{2\gamma}{q_{2,0}}\right)^{\displaystyle \frac{\alpha}{\gamma}},
A_4 = \cfrac{2\gamma}{(\alpha-\gamma)^2}.
\end{eqnarray}
The algebraic equation \eqref{gradPeq0} describes the shift along the $x$ axis of the pdf peak relative to the attractor. This equation cannot be solved analytically in general, so we solved it numerically for various values of $\sigma$ (fig. \ref{phaseSpace_PDF}C). \\
However, there are two cases in which equation \eqref{dpdxEq0} can be approximated by a polynomial equation. In the ranges $0.5<\cfrac{\gamma}{\alpha}\leq0.645$ and $0.885\leq\cfrac{\gamma}{\alpha}<1$, we can compute the absolute difference $d(\sigma)=|x_{M,num}(\sigma)-x_M(\sigma)|$ between the numerical result $x_{M,num}(\sigma)$ and the solution $x_M(\sigma)$ of the approximated polynomial equation defined below (equations \ref{xM_06} and \ref{xM_09}) for $\sigma \in [0,0.2]$, using the criterion $d_{max}=\max\limits_{\sigma \in [0,0.2]}d(\sigma)<0.02$ (fig. \ref{phaseSpace_PDF}D-F).
\subsubsection{Approximated expression for the distance \boldmath $A_\sigma-A$ \unboldmath in the range \boldmath $0.5<\cfrac{\gamma}{\alpha}<0.645$}
In this range, we approximate $\cfrac{2\alpha}{\gamma}-1 \approx 2$ and $\cfrac{\alpha}{\gamma} \approx 2$, so that \eqref{dpdxEq0} becomes the third-order polynomial equation
\begin{eqnarray}\label{ploy3approx}
A_1\sigma + A_4x + \left(A_2+A_3+\cfrac{A_4\gamma}{x_0}\right)x^2+ (A_2+A_3)\cfrac{\gamma}{x_0}x^3 = 0.
\end{eqnarray}
The solution is (fig. \ref{phaseSpace_PDF}D, cyan crosses and yellow curve)
\begin{eqnarray}\label{xM_06}
x_M(\sigma) = \left(\cfrac{-q(\sigma)-\sqrt{\Delta(\sigma)}}{2}\right)^{\displaystyle 1/3}+\left(\cfrac{-q(\sigma)+\sqrt{\Delta(\sigma)}}{2}\right)^{\displaystyle 1/3}-\cfrac{c_2}{3c_1},
\end{eqnarray}
where
\begin{eqnarray}\label{interParam_06}
\arraycolsep=1.4pt\def\arraystretch{2.5}
\begin{array}{r c l}
c_1 &=&A_2+A_3+\cfrac{A_4\gamma}{x_0} \approx 397.2\\
c_2 &=& A_2+A_3 \approx 117.5\\
A_4 &=& 7.5\\
A_1 &\approx& -17.7\\
q(\sigma) &=& \cfrac{2c_2^3-9c_1c_2A_4}{27c_1^3}+\cfrac{A_1}{c_1}\sigma \approx 5.5\times 10^{-5}-0.04\sigma\\
\Delta(\sigma) &=& q(\sigma)^2+\cfrac{4}{27}\left(\cfrac{3c_1A_4-c_2^2}{3c_1^2}\right)^3 \approx q(\sigma)^2-1.6\times 10^{-7},
\end{array}
\end{eqnarray}
for the parameter values $\alpha = 1$, $\gamma=0.6$, $h_0=0.001$ and $x_0=0.12$. Expression \eqref{xM_06} is valid as long as $\Delta(\sigma)\geq 0$, that is, for $\sigma>0.0114$.
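As a numerical check, formula \eqref{xM_06} can be read as the Cardano root of the cubic $c_1x^3+c_2x^2+A_4x+A_1\sigma=0$, which is consistent with the expressions for $q(\sigma)$ and $\Delta(\sigma)$ in \eqref{interParam_06}. A minimal sketch, using the numerical coefficients quoted above:

```python
import math

# numerical coefficients from eq. (interParam_06) (alpha=1, gamma=0.6, x0=0.12)
c1, c2, A4, A1 = 397.2, 117.5, 7.5, -17.7

def cbrt(v):
    """Real cube root, valid for negative arguments."""
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def cubic(x, sigma):
    """Left-hand side of the cubic c1*x^3 + c2*x^2 + A4*x + A1*sigma."""
    return c1 * x**3 + c2 * x**2 + A4 * x + A1 * sigma

def x_M(sigma):
    """Cardano solution of the cubic, i.e. formula (xM_06)."""
    q = (2 * c2**3 - 9 * c1 * c2 * A4) / (27 * c1**3) + A1 * sigma / c1
    delta = q * q + (4.0 / 27.0) * ((3 * c1 * A4 - c2 * c2) / (3 * c1 * c1)) ** 3
    if delta < 0:
        raise ValueError("formula (xM_06) requires Delta(sigma) >= 0")
    return (cbrt((-q - math.sqrt(delta)) / 2)
            + cbrt((-q + math.sqrt(delta)) / 2) - c2 / (3 * c1))
```

For $\sigma > 0.0114$, substituting `x_M(sigma)` back into `cubic` gives a residual that is negligible, confirming that \eqref{xM_06} is the root of the stated cubic.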
\subsubsection{Approximated expression for the distance between \boldmath $A_\sigma$ \unboldmath and \boldmath $A$ \unboldmath in the range \boldmath $0.885\leq\cfrac{\gamma}{\alpha}<1$}
In this range, we approximate \eqref{dpdxEq0} by the second-order polynomial equation
\begin{eqnarray}\label{dpdx_ordre2}
\cfrac{\gamma}{x_0}x^2+x+\cfrac{A_1 \sigma}{A_2+A_3+A_4}=0,
\end{eqnarray}
the solution is
\begin{eqnarray}\label{xM_09}
x_M(\sigma)=\cfrac{-1+\sqrt{1-\cfrac{4\gamma}{x_0}\cfrac{A_1 \sigma}{A_2+A_3+A_4}}}{2\gamma}x_0 \approx -0.56 + \cfrac{\sqrt{1+38.45\sigma}}{1.8},
\end{eqnarray}
for the parameter values $\alpha=1$, $\gamma=0.9$ and $\sigma\geq 0$ (fig. \ref{phaseSpace_PDF}E cyan crosses and yellow curve).\\
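The closed form \eqref{xM_09} is the positive root of the quadratic \eqref{dpdx_ordre2} and can be checked directly. The sketch below assumes $x_0=1$ and recovers the effective ratio $A_1/(A_2+A_3+A_4)$ from the numerical constant $38.45$; both values are inferred from the numeric form of \eqref{xM_09} and are not given explicitly in the text.

```python
import math

gamma, x0 = 0.9, 1.0                  # x0 = 1 inferred from the numeric form of (xM_09)
ratio = -38.45 * x0 / (4 * gamma)     # stands in for A1/(A2+A3+A4)

def x_M(sigma):
    """Positive root of (gamma/x0)*x^2 + x + ratio*sigma = 0, eq. (xM_09)."""
    return x0 * (-1 + math.sqrt(1 - 4 * (gamma / x0) * ratio * sigma)) / (2 * gamma)

def residual(sigma):
    """Plug x_M back into the quadratic (dpdx_ordre2); should vanish."""
    x = x_M(sigma)
    return (gamma / x0) * x * x + x + ratio * sigma
```

The residual vanishes for every $\sigma \geq 0$, and $x_M(0)=0$, i.e. the shifted attractor coincides with $A$ in the absence of noise.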
To test the range of validity of our approximations for the position of $A_\sigma$, we ran simulations for $\sigma \in [0.03, 0.12]$ (fig. \ref{noise_Asigma}A, $\sigma =0.12$, green; 0.09, blue; 0.03, pink). We compared the distance $d(A,A_\sigma)$ obtained from numerical simulations (fig. \ref{noise_Asigma}B-C, black stars) with the analytical formula \eqref{xM_06} (resp. \eqref{xM_09}) (fig. \ref{noise_Asigma}B (resp. C), yellow curve) for $\alpha=1$ and $\gamma=0.6$ (resp. $\gamma=0.9$). In the case $\gamma=0.6$, we added an offset to formula \eqref{xM_06} to minimize the absolute difference between the analytical formula and the simulations:
\begin{eqnarray}
\hat{c}=\mathop{\rm argmin}\limits_{c \in [0,0.1]}\sum\limits_{\sigma\in[0.03,0.12]}|x_M(\sigma)-c-x_{M,sim}(\sigma)|,
\end{eqnarray}
where $x_M(\sigma)$ is defined by \eqref{xM_06} and $x_{M,sim}(\sigma)$ is the value obtained from the numerical simulations. Minimizing over a finite set of values, we found $\hat{c}= 0.032$. This offset is probably due to the approximations made in the derivation of formula \eqref{xM_06}. \\
To conclude, we have found analytical expressions (\ref{xM_06} and \ref{xM_09}) for the position of $A_\sigma$ and showed that these expressions approximate the numerical simulations well.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{fig3_NoiseInfluence_Asigma.pdf}
\caption{\textbf{Influence of the noise amplitude on the peak distribution \boldmath $A_\sigma$ \unboldmath. }\textbf{A.} Stochastic trajectories simulated for $T=500$s with three noise levels $\sigma = 0.03$ (pink), 0.09 (blue) and 0.12 (green) and the shifted peaks $A_\sigma$. \textbf{B.} Distance $d(A,A_\sigma)$ vs $\sigma$. Numerical simulations obtained with a stopping time $T=800$s per noise value (black stars), compared to the analytical formula \eqref{xM_06} (yellow) minus a corrective offset $\hat{c}=0.032$. \textbf{C.} Distance $d(A,A_\sigma)$ vs $\sigma$. Numerical simulations generated with a stopping time $T=800$s per noise value (black dots), compared to the analytical formula \eqref{xM_09} (yellow).}\label{noise_Asigma}
\end{figure}
\section{Multiple re-entries and distributions of escape times and points}
In this section, we report a novel mechanism of stochastic escape from an attractor, based on multiple re-entry.
\subsection{The different steps of escape}\label{numerics_escape}
The escape from the basin of attraction of point $A$ can be divided into three steps.
\begin{enumerate}
\item Step 1: starting from the attractor $A$, trajectories fall into the basin of attraction of the shifted equilibrium $A_\sigma$. This step is almost immediate and its duration can be neglected compared with those of steps 2 and 3.
\item Step 2: trajectories fluctuate around the shifted equilibrium $A_\sigma$ until they reach the separatrix $\Gamma$ for the first time.
\item Step 3: trajectories cross $\Gamma$, exiting and reentering the basin of attraction several times before eventually escaping far away (fig. \ref{RTtimes}A-C).
\end{enumerate}
We quantify these excursions occurring in step 3 by counting the number of round-trips (RT) across $\Gamma$. We first study these three steps numerically by simulating 5000 trajectories starting from $A$ and lasting $T=300$s for $\sigma=0.78$. To obtain the distribution of exit times and points on the separatrix $\Gamma$ at each RT, we replace $\Gamma$ by its tangent $T_\Gamma$ at $S$ (fig. \ref{RTtimes}A-C, pink line). Indeed, the distribution of exit points peaks at a distance $O(\sqrt{\sigma})$ from $S$ \cite{BobrovskySchuss1982}, so the difference between the separatrix and its tangent is of second order. This approximation allows us to use the analytical expression of the tangent $T_\Gamma$.\\
We then decompose the escape time into the first time to reach the separatrix $\Gamma$ plus the time spent in successive excursions outside and inside the basin of attraction (fig. \ref{RTtimes}D; the color gradient indicates the contribution of trajectories performing a given number of RT to the total distribution of escape times). This decomposition can be used to estimate the proportion of trajectories that escape at each RT and to evaluate the escape probability after crossing the separatrix, as we will see below.
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{fig4_RTDistributionTrajectories.pdf}
\caption{\textbf{Recurrent exit pattern and contribution to the escape time. }\textbf{A-C.} Trajectory doing zero (resp. one, two) RT before escape (red, resp. orange, yellow); the red (resp. green) arrows indicate exit (resp. reentry) points. \textbf{D.} Distribution of escape times from 5000 trajectories lasting $T=300$s ($\gamma =0.6$, $\alpha=1$ and $\sigma=0.78$), with the contribution of trajectories performing each number of RT around $\Gamma$ before escaping (color gradient).}\label{RTtimes}
\end{figure}
\subsection{First exit time and exit points distributions on the separatrix $\Gamma$}
We study here the influence of the noise on the distributions of first exit times and exit points: we simulated $N=2500$ trajectories starting at $A$ and lasting $T=300$s for $\sigma \in [0.21,1.05]$. The first exit time can be very long for small values of $\sigma$ (fig. \ref{variationSigma}A, orange distribution for $\sigma = 0.21$ and light green for $\sigma = 0.33$) but becomes shorter, with peaked distributions, when $\sigma$ increases (dark green to red). The distribution of first exit points is peaked and located to the left of the saddle-point $S$ (fig. \ref{S2}A, purple, for $\sigma = 0.78$). We found that the distance $d(P_E,S)$ between the peak $P_E$ of this distribution and the saddle-point $S$ is of order $O(\sqrt{\sigma})$ for $\sigma \in [0.15, 1.05]$ (fig. \ref{variationSigma}B), in agreement with the classical theory \cite{BobrovskySchuss1982}. We further observed from numerical simulations that the density of exit points of the trajectories that reenter the basin of attraction at least once is similar to the distribution of first exit points over all trajectories (fig. \ref{S2}A, green), indicating that there is no correlation between the position of the exit points and the phenomenon of reentry into the basin of attraction. Finally, the distribution of first reentry points also peaks to the left of the saddle-point, but spreads on both sides of $S$ (fig. \ref{S2}A, red).
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{fig5_VariationSigma.pdf}
\caption{\textbf{Influence of the noise amplitude on the first exit times and points. }\textbf{A.} Distribution of the first exit times with respect to the noise amplitude $\sigma$ with an inset on short times. \textbf{B.} Distance between the peak $P_E$ of the distribution of first exit points on the separatrix and the saddle-point $S$ with respect to the noise amplitude $\sigma$ for $\gamma=0.6$ and $\alpha=1$ with a square-root fit (magenta).}\label{variationSigma}
\end{figure}
\subsection{Characterization of the mean escape time}\label{analytics_escape}
To obtain a general expression for the mean escape time, we use Bayes' law and condition on the number of RT, so that
\begin{eqnarray} \label{realEscapeTime}
\langle\tau_{esc}\rangle = \sum_{k=0}^{\infty} \langle\tau | k \rangle P_{RT}(k),
\end{eqnarray}
where $\langle \tau | k \rangle$ (resp. $P_{RT}(k)$) is the mean escape time (resp. probability) of trajectories that return $k$ times inside the basin of attraction. Because the RT are independent events, the probability $\tilde{p}$ that a trajectory crossing the separatrix $\Gamma$ escapes does not depend on $k$, yielding, for $k \geq 1$,
\begin{eqnarray}
P_{RT}(k) = \tilde{p}(1-\tilde{p})^{k-1},
\end{eqnarray}
thus
\begin{eqnarray}\label{tEscSummed}
\langle\tau_{esc}\rangle = \langle\tau_{0}\rangle + (\langle\tau_{ext}\rangle+\langle\tau_{int}\rangle)\tilde{p}\sum_{k=1}^\infty k (1-\tilde{p})^{k-1} = \langle\tau_{0}\rangle + \frac{\langle\tau_{ext}\rangle+\langle\tau_{int}\rangle}{\tilde{p}},
\end{eqnarray}
where $\langle\tau_{ext}\rangle$ (resp. $\langle\tau_{int}\rangle$) is the mean time spent outside (resp. inside) the basin of attraction during one RT. To avoid counting as RT the small Brownian fluctuations inherent to the discretization (fig. \ref{escTimes}A, black arrows), we added a second line $\tilde{\Gamma}$, parallel to the separatrix at distance $\delta=0.25$ (fig. \ref{escTimes}B, blue line): a trajectory is considered to have fully exited the basin of attraction once it has crossed both the tangent $T_\Gamma$ and $\tilde{\Gamma}$.\\
Using this procedure, we estimated the probability $\tilde{p}(k)$ of escaping after $k$ RT by counting the proportion of trajectories that reenter the basin of attraction at least once; from numerical simulations we found $\tilde{p}(1)\approx0.40$. We iterated this process for each RT until all trajectories had escaped to infinity and found that the probability $\tilde{p}(k)$ does not depend on $k$ for $k \geq 1$ (fig. \ref{S2}B), thus $\tau_{esc} \approx \tau_0 + 2.5(\tau_{ext}+\tau_{int})$. Finally, for the parameters $\alpha=1$, $\gamma=0.6$ and $\sigma=0.78$, numerical simulations give $\langle\tau_0\rangle \approx 5$s and $\langle\tau_{ext}\rangle+\langle\tau_{int}\rangle \approx 2.6$s (fig. \ref{escTimes}C). \\
To conclude, the process of entering and exiting multiple times increases the mean escape time by a factor of 2.3. In addition, based on simulations for $\sigma\in[0.54,0.90]$, we found that the noise amplitude does not influence the number of RT before escape (fig. \ref{escTimes}D) and that trajectories perform 2.5 RT on average.
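The decomposition \eqref{tEscSummed} can be checked with a small Monte-Carlo sketch: if the number of separatrix crossings before final escape follows the geometric law above with success probability $\tilde{p}$, the mean escape time is $\langle\tau_0\rangle+(\langle\tau_{ext}\rangle+\langle\tau_{int}\rangle)/\tilde{p}$. The numerical values are the ones quoted above ($\tilde p\approx 0.40$, $\langle\tau_0\rangle \approx 5$s, $\langle\tau_{ext}\rangle+\langle\tau_{int}\rangle\approx 2.6$s).

```python
import random

random.seed(1)
p_tilde = 0.40            # escape probability per separatrix crossing
tau0, tau_rt = 5.0, 2.6   # mean first-passage time and mean RT duration (seconds)

def crossings():
    """Number of separatrix crossings until final escape (geometric law)."""
    k = 1
    while random.random() > p_tilde:
        k += 1
    return k

N = 200_000
mean_k = sum(crossings() for _ in range(N)) / N  # should approach 1/p_tilde = 2.5
mean_tau = tau0 + tau_rt * mean_k                # Monte-Carlo version of eq. (tEscSummed)
```

With these values, `mean_tau` is close to $5 + 2.6/0.4 = 11.5$s, matching the factor-of-2.3 increase reported above.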
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.65]{fig6_EscapeTimes.pdf}
\caption{\textbf{Characterization of escape times. }\textbf{A.} Trajectory escaping (red cross) after small Brownian fluctuations at the separatrix (black arrows). \textbf{B.} Trajectory exiting (red crosses) after crossing $T_\Gamma$ (pink) and $\tilde{\Gamma}$ (blue), reentering (green circles) and reexiting (orange crosses) the basin of attraction before escaping. \textbf{C.} Mean time $\langle\tau_{ext}\rangle$ (resp. $\langle\tau_{int}\rangle$) spent outside (resp. inside) the basin of attraction at each RT. \textbf{D.} Distributions of the RT number before trajectories escape to infinity for $\sigma\in[0.54,0.90]$. Inset: mean RT number $\langle RT\rangle$ with respect to the noise amplitude. \textbf{E.} Distribution of exit times for trajectories doing 0 (upper left; resp. 1 (lower left), 2 (upper right), 3 (lower right)) RT before escape. \textbf{F.} Distribution of exit times with the contribution of each RT number (color gradient) and the analytical distribution \eqref{pdfExitTimes} (black). \textbf{G-H.} Application to model \eqref{2Dsyst}.}\label{escTimes}
\end{figure}
\subsection{Characterization of escape times distributions} \label{analytics_escape_distrib}
To determine the distribution of escape times, we condition the escape on the number of RT, so that
\begin{eqnarray}\label{distrib}
P(\tau_{esc}<t)=\sum_{k=0}^{\infty} P(\tau^k<t|k) P_{RT}(k),
\end{eqnarray}
where $P(\tau^k<t|k)$ is the probability distribution of escape times after $k$ RT. Its density is obtained by convolving the density $f_0$ of escape times with 0 RT with the $k$-fold convolution of the density $f_1$ of times for trajectories exiting after a single RT
\begin{eqnarray}\label{pk}
P(\tau^k<t|k)=f_0(t)*f_1(t)^{*k},
\end{eqnarray}
where $f(t)^{*k}=f(t)*f(t)*\dots*f(t)$ ($k$ times). Thus \eqref{distrib} becomes
\begin{eqnarray}\label{pdfExitTimes}
P(\tau_{esc}<t)=\sum_{k=0}^{\infty} f_0(t)*f_1(t)^{*k} \tilde{p}(1-\tilde{p})^{k}.
\end{eqnarray}
To compare this formula with our numerical results, we decided to fit the distributions with
\begin{eqnarray}\label{fitFi}
f_i(t)=c_i t^{\displaystyle a_i} e^{\displaystyle -\lambda_i t}, \quad i=0,1, \quad a_i\geq0, \ \lambda_i\geq0.
\end{eqnarray}
Using the Matlab fit function (fig. \ref{escTimes}C), we obtained for the distribution of escape times without any RT
\begin{eqnarray}\label{F_0}
\tilde{p}f_0(t) = 0.09 t^{\displaystyle 0.09}e^{\displaystyle -0.20 t}
\end{eqnarray}
and for the trajectories doing exactly one RT before escape
\begin{eqnarray}\label{F1}
F_1(t)=\tilde{p}(1-\tilde{p})f_0*f_1(t) = 0.04 t^{\displaystyle 0.75}e^{\displaystyle -0.22 t}.
\end{eqnarray}
To recover the distribution $f_1$, we numerically deconvolved $F_1$ \eqref{F1} from $f_0$ \eqref{F_0} (fig. \ref{escTimes}E, lower left). This procedure allows us to validate our approach by computing the distributions for 2 and 3 RT via \eqref{pk} and comparing them with the empirical distributions (fig. \ref{escTimes}E, upper and lower right). Finally, we decomposed the entire escape-time distribution using \eqref{pdfExitTimes} to evaluate the contribution of each term (fig. \ref{escTimes}E-F).
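Formula \eqref{pdfExitTimes} can be evaluated numerically by discrete convolution on a time grid. A minimal sketch: $f_0$ uses the fitted shape of \eqref{F_0}, while the parameters of $f_1$ are placeholders (the actual $f_1$ was obtained by deconvolution and is not given in closed form); both densities are normalized on the grid.

```python
import math

dt, n = 0.25, 800                    # time grid: t in [0, 200)
t = [i * dt for i in range(n)]

def shape(a, lam):
    """Normalized density proportional to t^a * exp(-lam*t) on the grid."""
    w = [ti**a * math.exp(-lam * ti) for ti in t]
    z = sum(w) * dt
    return [wi / z for wi in w]

f0 = shape(0.09, 0.20)               # 0-RT escape-time density, shape of eq. (F_0)
f1 = shape(0.75, 0.22)               # placeholder single-RT density

def conv(f, g):
    """Discrete convolution (f*g)(t_i) ~ sum_j f(t_j) g(t_{i-j}) dt."""
    return [sum(f[j] * g[i - j] for j in range(i + 1)) * dt for i in range(n)]

# geometric mixture of k-fold convolutions, eq. (pdfExitTimes), truncated at K
p, K = 0.40, 10
g, mix = f0[:], [p * v for v in f0]
for k in range(1, K + 1):
    g = conv(g, f1)
    mix = [m + p * (1 - p)**k * gi for m, gi in zip(mix, g)]

mass = sum(mix) * dt                 # close to 1, up to the truncation at K
```

The total mass of the truncated mixture is $1-(1-\tilde p)^{K+1}$ up to discretization error, confirming that \eqref{pdfExitTimes} defines a properly normalized density.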
\begin{table}[ht!]
\centering
\begin{tabular}{l l l}
Symbol & Parameter & Value \\
\hline
$\tau$ & Time constant for $h$ & 0.05 s\\
$J$ & Synaptic connectivity & 4.21\\
$K$ & Facilitation rate & 0.037 Hz\\
$X$ & Facilitation resting value & 0.08825\\
$L$ & Depression rate & 0.028 Hz\\
$\tau_r$ & Depression time rate & 2.9 s\\
$\tau_f$ & Facilitation time rate & 0.9 s\\
$T$ & Depolarization parameter & 0\\
\hline
\end{tabular}
\caption{Parameters of model \eqref{2Dsyst}.}
\label{tableParam}
\end{table}
\section{Further application and concluding remarks}
\subsection{Distribution of interburst durations} \label{application_2D}
The phase space of system \eqref{2Dsyst}, restricted to the region $\{x \leq 0.5 \mbox{ and } h \leq 30\}$, is topologically equivalent to that of system \eqref{Drift}: it contains one attractor and one saddle-point, and the stable manifold of the saddle-point defines the boundary of the basin of attraction (fig. \ref{application}A). As for system \eqref{Drift}, trajectories fall into the basin of attraction around an attractor shifted towards the saddle-point $S$ (fig. \ref{application}B, red (resp. blue, green) star for $\sigma=1$ (resp. $1.5$, $2.5$)). The shifted attractor position depends on the noise amplitude $\sigma$ (fig. \ref{application}C), and escaping trajectories can return several times inside the basin of attraction before escaping far away (fig. \ref{application}D, one RT). We used formula \eqref{pdfExitTimes} to fit the distribution of exit times (fig. \ref{application}E-F) and obtained, for $\sigma = 6$, the distribution of escape with no return
\begin{eqnarray}
f_0(t) = 0.02 t^{\displaystyle 1.82}e^{\displaystyle -0.40 t}
\end{eqnarray}
and the distribution for a single RT
\begin{eqnarray}
f_1(t) = 0.01 t^{\displaystyle 1.57}e^{\displaystyle -0.26 t}.
\end{eqnarray}
Finally, using numerical simulations, we estimated the escape probability $\tilde{p} \approx 0.37$ by generating trajectories starting from the attractor $A$ and counting the fraction that fully escaped far away versus those that returned inside the basin of attraction (fig. \ref{S3}A). The mean escape time is then given by formula \eqref{tEscSummed}:
\begin{eqnarray}
\langle\tau_{esc}\rangle \approx \langle\tau_0\rangle + 2.7(\langle\tau_{ext}\rangle+\langle\tau_{int}\rangle).
\end{eqnarray}
Using the parameters of table \ref{tableParam}, we obtain $\langle\tau_0\rangle \approx 4.35$ s and $\langle\tau_{ext}\rangle+\langle\tau_{int}\rangle\approx 2.6$ s (fig. \ref{S3}B). We conclude that returns to the attractor increase the escape time from 4.35 s to 11.37 s, i.e. by a factor of 2.6. Moreover, the number of RT before escape does not depend on the noise amplitude (section \ref{analytics_escape} and fig. \ref{S3}C), and trajectories make on average 2.7 RT before escaping (fig. \ref{S3}C, inset). Finally, this escape mechanism could explain long interburst durations occurring in excitatory neuronal networks without the need for any additional refractory mechanism. Indeed, the present computations can be used to study the neuronal interburst dynamics modeled in the mean-field approximation by depression-facilitation equations \cite{Tsodyks1997,daoduc2015}. We also provided a possible explanation for the long interburst durations observed in neuronal networks reported in \cite{Rouach_CxKO}.\\
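The bookkeeping above can be checked numerically: if each exit attempt succeeds with probability $\tilde{p}$, the number of attempts is geometric with mean $1/\tilde{p} \approx 2.7$, which reproduces both the coefficient in the escape-time formula and the quoted factor-2.6 increase. A minimal sketch with the values quoted in the text:

```python
# Numerical check of the escape-time bookkeeping: with escape probability p
# per exit attempt, the number of attempts is geometric with mean 1/p, so
# <tau_esc> = <tau_0> + (1/p) * (<tau_ext> + <tau_int>).
p_tilde = 0.37            # estimated escape probability per exit
tau_0 = 4.35              # mean time to first exit (s)
tau_loop = 2.6            # mean duration of one exterior+interior loop (s)

mean_attempts = 1.0 / p_tilde                # ~2.7 round trips on average
tau_esc = tau_0 + mean_attempts * tau_loop   # ~11.4 s
factor = tau_esc / tau_0                     # ~2.6-fold increase
```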
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{Fig7_Conditions.pdf}
\caption{\textbf{Recurrent escape mechanism. }\textbf{A.} Trajectories exiting the basin of attraction (red), re-entering (purple) and exiting again (yellow) before eventually escaping to infinity (black) after reaching the cone $C_\infty$ (yellow surface). \textbf{B.} Schematic of the escape process divided into four steps. \textbf{C.} Distribution of the first exit (resp. reentry) points (red, resp. purple) on the separatrix $\tilde{\Gamma}$. \textbf{D.} Distributions of successive exit points (first exit, red; second, yellow; third, green) on $ \tilde{\Gamma} $.}\label{recap}
\end{figure}
To conclude this article, motivated by finding a possible mechanism that generates long interburst intervals, we examined a family of stochastic dynamical systems perturbed by a small Gaussian noise. The dynamics exhibits specific properties, such as peaks of the pdf located inside the basin of attraction but shifted away from the attractor. In addition, escaping the basin is characterized by multiple reentries into it. We computed the position of this shifted attractor using the WKB approximation and derived algebraic formulas linking its position to the noise amplitude $\sigma$ (formulas \ref{xM_06} and \ref{xM_09}). We also computed the escape time, decomposed into the time to reach the boundary of the basin of attraction plus the time spent going back and forth through the separatrix (formulas \ref{tEscSummed} and \ref{pdfExitTimes}). Finally, we summarize the generic conditions associated with this escape dynamics:
\begin{enumerate}
\item The distribution of exit points peaks at a distance $O(\sqrt{\sigma})$ from the saddle-point (generically satisfied \cite{BobrovskySchuss1982}).
\item The shallow field near the separatrix allows the trajectories to reenter the basin of attraction with high probability.
\item The peaks of the successive exit points distributions drift towards the saddle-point $S$ (fig. \ref{recap}C).
\item When trajectories enter the cone $C_{\infty}$ (yellow surface in fig. \ref{recap}A-B) where the field increases, they eventually escape to infinity.
\end{enumerate}
{\bf Acknowledgements}
L. Zonca has received support from the FRM (FDT202012010690). This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 882673).
\subsection*{Appendix Figures}
\renewcommand\thefigure{A.\arabic{figure}}
\setcounter{figure}{0}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{FigS1_ARandTauEsc.pdf}
\caption{{\bf Distributions of the first exit points located on the separatrix $\Gamma$ }\textbf{A.} for all trajectories (purple) and first exit for re-entering trajectories (green, 60\% of trajectories), reentry points (red) for $\sigma=0.78$. \textbf{B.} Probability to escape after exiting the basin of attraction for the $k$-th time, $\tilde{p}(k)$ estimated by the proportion of trajectories that reenter the basin of attraction after $k$ RT for noise amplitudes $\sigma \in [0.54,0.90]$.} \label{S2}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.74]{FigS2.pdf}
\caption{{\bf Probability $\tilde{p}$ to escape after exiting the basin of attraction} \textbf{A.} Probability $\tilde{p}$ vs the RT number for $\sigma \in [4,7]$, with a linear fit. \textbf{B.} Mean times $\langle\tau_{ext}\rangle$ (red) and $\langle\tau_{int}\rangle$ (black) vs the RT number. \textbf{C.} Distributions of RT numbers before escape for $\sigma\in[4,7]$. Inset: mean RT number with respect to $\sigma$} \label{S3}
\end{figure}
\newpage
\normalem
\bibliographystyle{ieeetr}
|
2101.07381
|
\section{\label{sec:Introduction}Introduction}
The existence of massive or massless weakly interacting neutral particles ($X$) has been suggested to augment the standard model with motivations that include providing dark matter candidates \cite{DarkMatter}, explaining baryogenesis \cite{Baryogenesis}, revealing the origin of neutrino masses \cite{SK}, and finding solutions to the strong $CP$ problem \cite{StrongCP1, StrongCP2} involving the axion \cite{familon, axion1, axion2, axion3, axion4}.
Pion and kaon decays are potential sources of $X$ particles as discussed by Altmannshofer, Gori, and Robinson \cite{ALP} who investigated a model with axionlike particles involved in pion decay ${\pi}^+{\to}e^+{\nu}X$.
Batell {\it et al.} \cite{DM} studied a model of thermal dark matter emitted in three body meson decay ${\pi}^+(K^+){\to}l^+{\chi}{\phi}$ where $\chi$ and $\phi$ are assumed to be sterile neutrinos.
Light vector bosons emitted in ${\pi}^+(K^+){\to}l^+{\nu}X$ decay have been discussed by Dror \cite{Dror}.
A Nambu-Goldstone boson, the ``Majoron" proposed by Gelmini and Roncadelli \cite{majoron1}, is also a candidate of interest.
It arises in gauge models in which the global baryon-minus-lepton number ($B-L$) symmetry is spontaneously broken \cite{majoron1, majoron2}.
In the Majoron models, neutrino masses arise from the vacuum expectation value of a weak isotriplet scalar Higgs boson.
Barger, Keung, and Pakvasa extended the Majoron model to the decay processes of pions and kaons ${\pi}^+(K^+){\to}l^+{\nu}X$ via Majoron-neutrino couplings \cite{majoron3}.
Other related processes and models have been discussed in Refs. \cite{ref1, ref2, ref3, ref4}.
Three body pion decays ${\pi}^+{\to}l^+{\nu}X$ can be investigated using the decay lepton energy spectra in pion decays.
Figure \ref{fig:MajoronShapes} shows the total and kinetic energy spectra of ${\pi}^+{\to}e^+{\nu}X$ and ${\pi}^+{\to}{\mu}^+{\nu}X$ decays assuming the decay products of $X$ are invisible or have very long lifetimes allowing undetected escape.
The signal shapes were obtained from Eq. (12) in Ref. \cite{DM}.
A previous search for the decay ${\pi}^+{\rightarrow}e^+{\nu}X$ was performed by Picciotto {\it et al.} \cite{Picciotto}, using stopped pions in an active target \cite{Britton}, as a byproduct of the branching ratio measurement $R^{\pi}={\Gamma}[{\pi}^+{\to}e^+{\nu_e}({\gamma})]/{\Gamma}[{\pi}^+{\rightarrow}{\mu}^+{\nu_{\mu}}({\gamma})]$, where ($\gamma$) indicates the inclusion of radiative decays.
The upper limit on the branching ratio was found to be $R^{{\pi}e{\nu}X}={\Gamma}({\pi}^+{\to}e^+{\nu}X)/{\Gamma}({\pi}^+{\to}{\mu}^+{\nu_{\mu}}){\lesssim}4{\times}10^{-6}$ in the mass range $m_X$ from 0 to 125 MeV/$c^2$.
The sensitivity was limited by statistics and the remaining background originated from pion decay-in-flight (${\pi}$DIF) events.
For ${\pi}^+{\to}{\mu}^+{\nu}X$ decay, no comparable studies have been performed.
In the present work, the decays ${\pi}^+{\to}e^+{\nu}X$ and ${\pi}^+{\to}{\mu}^+{\nu}X$ were sought using the full data set of the PIENU experiment \cite{PIENU} corresponding to two orders of magnitude larger statistics than the previous experiment \cite{Picciotto}.
The analyses were based on the searches for heavy neutrinos ${\nu}_H$ in ${\pi}^+{\to}e^+{\nu_H}$ decay \cite{PIENU2} and ${\pi}^+{\to}{\mu}^+{\nu_H}$ decay \cite{PIENU3}, and the decays ${\pi}^+{\to}e^+{\nu}_e{\nu}\bar{\nu}$ and ${\pi}^+{\to}{\mu}^+{\nu}_{\mu}{\nu}\bar{\nu}$ \cite{PIENU4}.
\begin{figure}
\includegraphics[scale=0.45]{MajoronShapes.eps}
\caption{\label{fig:MajoronShapes} Total energy spectra of ${\pi}^+{\to}e^+{\nu}X$ and kinetic energy spectra of ${\pi}^+{\to}{\mu}^+{\nu}X$ decays. (a) ${\pi}^+{\to}e^+{\nu}X$ decay with mass $m_X$ of 0 MeV/$c^2$ (solid black), 40 MeV/$c^2$ (dotted red), and 80 MeV/$c^2$ (dashed blue). (b) ${\pi}^+{\to}{\mu}^+{\nu}X$ decay with mass $m_X$ of 5 MeV/$c^2$ (solid black), 15 MeV/$c^2$ (dotted red), and 25 MeV/$c^2$ (dashed blue).}
\end{figure}
\section{Experiment}
\begin{figure}
\includegraphics[scale=0.4, clip, trim=7cm 0cm 7cm 0cm]{NewDetector.eps}
\caption{\label{fig:detector} Schematic of the PIENU detector \cite{NIMA}.}
\end{figure}
The PIENU detector \cite{NIMA} shown schematically in Fig. \ref{fig:detector} was designed to measure the pion branching ratio $R^{\pi}={\Gamma}[{\pi}^+{\to}e^+{\nu_e}({\gamma})]/{\Gamma}[{\pi}^+{\rightarrow}{\mu}^+{\nu_{\mu}}({\gamma})]$.
The decay positron in ${\pi}^+{\rightarrow}e^+{\nu_e}$ decay has total energy $E_e=69.8$ MeV.
For ${\pi}^+{\to}{\mu}^+{\nu_{\mu}}$ decay followed by ${\mu}^+{\to}e^+{\nu_e}\bar{\nu_{\mu}}$ decay (${\pi}^+{\to}{\mu}^+{\rightarrow}e^+$ decay chain), the decay muon has kinetic energy $T_{\mu}=4.1$ MeV and a range in plastic scintillator of about 1 mm; the total energy of the positron in the subsequent muon decay ${\mu}^+{\to}e^+{\nu}_e\bar{\nu}_{\mu}$ ranges from $E_e=0.5$ to 52.8 MeV.
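The quoted energies follow from two-body kinematics of a pion decaying at rest; a quick cross-check in Python (the PDG-style mass values below are assumptions of this sketch, not taken from the paper):

```python
# Two-body decay P -> d + nu at rest: the charged daughter's total energy is
# E_d = (m_P^2 + m_d^2) / (2 m_P), with the neutrino taken as massless.
M_PI, M_MU, M_E = 139.570, 105.658, 0.511   # masses in MeV/c^2 (assumed)

def daughter_total_energy(m_parent, m_daughter):
    """Total energy of the charged daughter in parent -> daughter + neutrino."""
    return (m_parent**2 + m_daughter**2) / (2 * m_parent)

E_e = daughter_total_energy(M_PI, M_E)            # ~69.8 MeV, as quoted
T_mu = daughter_total_energy(M_PI, M_MU) - M_MU   # ~4.1 MeV kinetic energy
```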
A pion beam with momentum of $75{\pm}1$ MeV/$c$ provided by the TRIUMF M13 beam line \cite{M13} was tracked by two multiwire proportional chambers (WC1 and WC2) and two sets of silicon strip detectors (S1 and S2).
Following WC2, the beam was degraded by two thin plastic scintillators (B1 and B2) to measure time and energy loss for particle identification.
After S2, pions stopped and decayed at rest in the center of an 8 mm thick plastic scintillator target (B3).
The pion stopping rate in B3 was $5{\times}10^4$ ${\pi}^+/$s.
Positrons from pion or muon decay were detected by another silicon strip detector (S3) and a multiwire proportional chamber (WC3) located downstream of B3 to reconstruct tracks and define the acceptance.
Two thin plastic scintillators (T1 and T2) were used to measure the positron time, and its energy was measured by a 48 cm (dia.) $\times$ 48 cm (length) single crystal NaI(T$\ell$) calorimeter surrounded by 97 pure CsI crystals to detect shower leakage.
The energy resolution of the calorimeter for positrons was 2.2\% (FWHM) at 70 MeV.
The pion and positron signals were defined by a coincidence of B1, B2, and B3, and a coincidence of T1 and T2, respectively.
A coincidence of the pion and positron signals within a time window of $-$300 ns to 540 ns with respect to the pion signal was the basis of the main trigger condition.
This was prescaled by a factor of 16 to form an unbiased trigger (Prescaled trigger).
${\pi}^+{\rightarrow}e^+{\nu_e}$ event collection was enhanced by an early time trigger selecting all events occurring between 6 and 46 ns after the arrival of the pion (Early trigger).
The typical trigger rate including calibration triggers was about 600 s$^{-1}$.
To extract the energy and time information, plastic scintillators, silicon strip detectors and CsI crystals, and the NaI(T$\ell$) crystal were read out by 500 MHz, 60 MHz, and 30 MHz digitizers, respectively.
The wire chambers and trigger signals were read by multi-hit time-to-digital converters with 0.625 ns resolution \cite{NIMA}.
\section{\label{sec:pie} ${\pi}^+{\rightarrow}e^+{\nu}X$ decay}
\subsection{\label{subsec:pieselection} Event selection}
\begin{figure*}
\begin{minipage}[b]{1\linewidth}
\includegraphics[scale=0.68]{Suppressed_noCsI.eps}
\end{minipage}\\
\begin{minipage}[b]{1\linewidth}
\includegraphics[scale=0.68]{Suppressed_CsI.eps}
\end{minipage}
\caption{\label{fig:Suppressed} First and third panels from the top: the $E_e$ spectra of ${\pi}^+{\to}e^+{\nu_e}$ decay after ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ suppression cuts for datasets 1 (a) and 2 (c). The black crosses with the statistical uncertainties show the data. Background components illustrated by the dashed and dotted green line, dotted blue line, dashed gray line, and solid red line represent ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ decays, low energy ${\pi}^+{\rightarrow}e^+{\nu_e}$ tail, $\mu$DIF events, and the sum of those three components, respectively (see text). Second and fourth panels from the top: the residual plots shown by the black circles with statistical error bars and hypothetical signals (solid red lines) with a mass of $m_X=80$ MeV/$c^2$ and a branching ratio $R^{{\pi}e{\nu}X}=2.0{\times}10^{-6}$ from datasets 1 (b) and 2 (d) (the branching ratio obtained by the fit at this mass was $R^{{\pi}e{\nu}X}=(-7.1{\pm}7.1){\times}10^{-8})$.}
\end{figure*}
The decay ${\pi}^+{\rightarrow}e^+{\nu}X$ was searched for by fitting the ${\pi}^+{\rightarrow}e^+{\nu_e}$ energy spectra after ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ background suppression.
The cuts used for the pion selection, the rejection of the extra activity in scintillators, and the suppression of ${\pi}^+{\to}{\mu}^+{\to}e^+$ backgrounds were the same as for the analysis of ${\pi}^+{\to}e^+{\nu}_e{\nu}\bar{\nu}$ decay \cite{PIENU4}.
Pions were identified using the energy loss information in B1 and B2.
Events with extra activity in B1, B2, T1 or T2 were rejected.
Since the calibration system for the CsI crystals was not available before November 1, 2010, the data were divided into two sets (dataset 1, before, and dataset 2, after November 1, 2010).
A 15\% solid angle cut was used for dataset 2, and a tighter cut (10\%) was applied to dataset 1 to minimize the effects of electromagnetic shower leakage.
The ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ backgrounds were suppressed using decay time, energy in the target, and tracking information provided by WC1, WC2, S1, and S2 \cite{PIENU3, PIENU4}.
Events were first selected by the Early trigger and a decay time cut $t=7-35$ ns after the pion stop was applied.
The energy loss information in B3 was used because ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ backgrounds deposit larger energy in B3 than ${\pi}^+{\to}e^+{\nu}_e$ decays due to the presence of the decay muon ($T_{\mu}=4.1$ MeV).
After the timing selection and the energy cut in B3, the beam pion tracking cut, which used the angle between WC1, 2 and S1, 2 track segments, was applied to reject events with a larger angle than most ${\pi}^+{\to}e^+{\nu}_e$ events (mostly, $\pi$DIF events before B3) \cite{NIMA}.
Figure \ref{fig:Suppressed} shows the decay positron energy spectra of ${\pi}^+{\rightarrow}e^+{\nu_e}$ decays after ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ background suppression cuts ((a) dataset 1 and (c) dataset 2).
The bumps in the positron energy spectra at about 58 MeV are due to photo-nuclear reactions in the NaI(T$\ell$) \cite{PN}.
The total number of ${\pi}^+{\rightarrow}e^+{\nu_e}$ events was $1.3{\times}10^6$ ($5{\times}10^{5}$ in dataset 1 and $8{\times}10^{5}$ in dataset 2).
\subsection{Energy spectrum fit}
The energy spectrum was fitted with a combination of background terms and a shape to represent the signal.
The background component due to the remaining ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ events was obtained from the data by requiring a late time region $t>200$ ns.
The shape of the low energy ${\pi}^+{\rightarrow}e^+{\nu_e}$ tail was obtained by Monte Carlo (MC) simulation \cite{geant4} including the detector response which was measured using a mono-energetic positron beam \cite{NIMA, PN}.
Because the solid angle cut was reduced and the CsI was not used for dataset 1, the shapes of the low energy ${\pi}^+{\to}e^+{\nu}_e$ tails are slightly different for the two datasets.
Another background came from the decay-in-flight of muons ($\mu$DIF) following ${\pi}^+{\rightarrow}{\mu}^+{\nu_{\mu}}$ decays in B3, which has a time distribution similar to that of ${\pi}^+{\rightarrow}e^+{\nu_e}$ decay.
The shape of the $\mu$DIF event spectrum was obtained by MC simulation.
The signal shapes as shown in Fig. \ref{fig:MajoronShapes} (a) were produced with mass range $m_X$ from 0 to 120 MeV/$c^2$ in 5 MeV/$c^2$ steps by MC simulation including the detector response.
These shapes were normalized to 1 and used for the fit to search for the signals.
To combine the two data sets, simultaneous fitting with a common branching ratio as a free parameter was performed.
The fit in the range of $E_e=5-56$ MeV without any signal resulted in ${\chi}^2/$d.o.f.=1.04 (d.o.f.=402).
The addition of the signals did not change the fit result.
\subsection{Results}
\begin{figure}
\includegraphics[scale=0.45]{BR_UL.eps}
\caption{\label{fig:BR_UL} Results of the 90\% C.L. upper limit branching ratio $R^{{\pi}e{\nu}X}$. Dashed black line: previous TRIUMF results \cite{Picciotto}. Solid red line with filled circles: results from this work.}
\end{figure}
Figure \ref{fig:Suppressed} (b) and (d) show the residual plots without any signal in datasets 1 and 2; hypothetical signals assuming $m_X=80$ MeV/$c^2$ with the branching ratio $R^{{\pi}e{\nu}X}=2.0{\times}10^{-6}$ are also shown.
No significant excess above the statistical uncertainty was observed.
For example, the branching ratio with $m_X=0$ MeV/$c^2$ obtained by the fit was $R^{{\pi}e{\nu}X}=(0.3{\pm}3.2){\times}10^{-7}$.
Figure \ref{fig:BR_UL} shows the 90\% confidence level (C.L.) upper limits for the branching ratio ${\pi}^+{\rightarrow}e^+{\nu}X$ in the mass region from 0 to 120 MeV/$c^2$ calculated using the Feldman and Cousins (FC) approach \cite{FC}.
Since the signal shape at a mass of 55 MeV/$c^2$ is similar to the ${\pi}^+{\to}{\mu}^+{\to}e^+$ energy spectrum, the strong correlation made the sensitivity at this mass worse than at other masses; the fit gave $R^{{\pi}e{\nu}X}=(-0.3{\pm}10.0){\times}10^{-7}$.
The statistical uncertainty dominates because the systematic uncertainties and the acceptance effects are approximately canceled out by taking the ratio of the number of signal events obtained by the fit to the number of pion decays.
The acceptance effect due to the cuts was examined by generating positrons in B3 isotropically with an energy range of $E_e=0-70$ MeV using the MC simulation and the systematic uncertainty was estimated to be $<$5\%.
Compared to the previous TRIUMF experiment \cite{Picciotto}, the limits were improved by an order of magnitude.
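For illustration, the Feldman--Cousins construction used for these limits can be sketched for the simplest textbook case: a unit-variance Gaussian measurement with the physical boundary $\mu \ge 0$. This is a simplified stand-in for the actual analysis (which applies the ordering to the fitted branching-ratio amplitudes), with grid sizes chosen for speed rather than precision:

```python
# Feldman-Cousins 90% C.L. upper limit for the mean mu >= 0 of a
# unit-variance Gaussian measurement x. For each candidate mu, x-cells are
# ranked by the likelihood ratio R(x|mu) = L(x|mu)/L(x|mu_best) with
# mu_best = max(0, x), and accepted until the coverage reaches CL; the
# limit is the largest mu whose acceptance region still contains x0.
import math

def fc_upper_limit(x0, cl=0.90, mu_max=6.0, dmu=0.01, x_lo=-6.0, x_hi=14.0, dx=0.02):
    xs = [x_lo + i * dx for i in range(int((x_hi - x_lo) / dx) + 1)]
    upper = 0.0
    for k in range(int(mu_max / dmu) + 1):
        mu = k * dmu
        cells = []
        for x in xs:
            mu_best = x if x > 0 else 0.0
            log_r = -0.5 * (x - mu) ** 2 + 0.5 * (x - mu_best) ** 2
            p = math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi) * dx
            cells.append((log_r, x, p))
        cells.sort(reverse=True)          # include highest-ranked cells first
        cum, accepted = 0.0, []
        for log_r, x, p in cells:
            accepted.append(x)
            cum += p
            if cum >= cl:
                break
        if min(accepted) <= x0 <= max(accepted):
            upper = mu                    # mu not excluded: raise the limit
    return upper
```

For a measurement right at the boundary ($x_0 = 0$) this reproduces the well-known 90\% upper limit of about $1.64\sigma$, smoothly approaching the central two-sided interval as $x_0$ moves away from zero.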
\section{\label{muon}${\pi}^+{\rightarrow}{\mu}^+{\nu}X$ decay}
The decay ${\pi}^+{\rightarrow}{\mu}^+{\nu}X$ can be sought by a measurement of the muon kinetic energy in ${\pi}^+{\to}{\mu}^+{\nu}$ decay (followed by ${\mu}^+{\to}e^+{\nu}_e\bar{\nu}_{\mu}$ decay) in the target (B3).
In the ${\pi}^+{\to}{\mu}^+{\to}e^+$ decay chain, three hits are expected in B3: the first signal is from the beam pion, the second is from the decay muon, and the third is from the decay positron.
Thus, the second of three pulses in B3 would be due to the muon kinetic energy.
However, the pulse detection logic could not efficiently identify pulses below 1.2 MeV \cite{PIENU2}.
Therefore, the search was divided into two muon energy regions, above and below 1.2 MeV.
The number of Prescaled trigger events used for the analysis was $4{\times}10^9$.
The analysis strategy and event selection cuts were based on the massive neutrino \cite{PIENU3} and three neutrino decay \cite{PIENU4} searches, briefly described in the following sections.
\subsection{\label{sec:MuH} Analysis of the region above 1.2 MeV}
\begin{figure}
\includegraphics[scale=0.45]{FitH.eps}
\caption{\label{fig:FitH} (a) The $T_{\mu}$ spectra of ${\pi}^+{\to}{\mu}^+{\to}e^+$ decay. The black crosses with the statistical uncertainties show the data. The dotted green line, dashed blue line, and solid red line represent a Gaussian distribution centered at 4.1 MeV, ${\pi}^+{\rightarrow}{\mu}^+{\nu}_{\mu}{\gamma}$ decay, and the sum of those two functions, respectively. (b) Residual plots shown by the black circles with statistical error bars in the range $T_{\mu}$=1.3 to 3.4 MeV. The solid red line represents a hypothetical signal with mass of $m_X=15$ MeV/$c^2$ and the branching ratio $R^{{\pi}{\mu}{\nu}X}=6.0{\times}10^{-5}$; the branching ratio obtained by the fit was $R^{{\pi}{\mu}{\nu}X}=(-3.6{\pm}5.1){\times}10^{-6}$.}
\end{figure}
As described in Sec. \ref{subsec:pieselection}, pions were identified using B1 and B2 and events with extra hits in B1, B2, T1, or T2 were rejected.
A solid angle acceptance of about 20\% for the decay positron was used.
To ensure the selected events were from ${\pi}^+{\rightarrow}{\mu}^+{\rightarrow}e^+$ decays, a late positron decay time $t>200$ ns after the pion stop and the positron energy in the NaI(T$\ell$) calorimeter $E_e<55$ MeV were required.
Then, the events with three clearly separated pulses in the target (B3) were selected and the second pulse information was extracted and assigned to the decay muon \cite{PIENU2}.
The muon kinetic energy ($T_{\mu}$) spectrum after the event selection cuts is shown in Fig. \ref{fig:FitH} (a).
As described above, the drop below 1.2 MeV was due to the inefficiency of the pulse detection logic \cite{PIENU2}.
The main background below 3.4 MeV was due to the radiative pion decay ${\pi}^+{\rightarrow}{\mu}^+{\nu_{\mu}}{\gamma}$ (branching fraction $2{\times}10^{-4}$ \cite{pimunug}).
The total number of ${\pi}^+{\to}{\mu}^+{\to}e^+$ events available was 9.1${\times}10^6$.
The decay ${\pi}^+{\rightarrow}{\mu}^+{\nu}X$ was searched for by fitting the $T_{\mu}$ energy spectrum of ${\pi}^+{\to}{\mu}^+{\to}e^+$ decays.
The fit was performed using a Gaussian peak centered at 4.1 MeV (energy resolution ${\sigma}=0.16$ MeV), the ${\pi}^+{\rightarrow}{\mu}^+{\nu}_{\mu}{\gamma}$ decay spectrum obtained by MC simulation \cite{geant4}, and the normalized signal spectra including the energy resolution in B3.
The signal spectra as shown in Fig. \ref{fig:MajoronShapes} (b) were generated with the mass range $0<m_X<26$ MeV/$c^2$ with 1 MeV/$c^2$ steps using MC including detector resolution.
The fit for $T_{\mu}$ from 1.3 to 4.2 MeV without any ${\pi}^+{\rightarrow}{\mu}^+{\nu}X$ signal introduced gave ${\chi}^2/$d.o.f.=1.27 (d.o.f.=53) and the residuals of the fit for the signal sensitive region are shown in Fig. \ref{fig:FitH} (b).
The addition of signal components did not change the fit result.
No significant signal beyond the statistical uncertainty was observed.
For example, the branching ratios for the signals with mass $m_X=0$ MeV/$c^2$ and 26 MeV/$c^2$ obtained by the fit were $R^{{\pi}{\mu}{\nu}X}={\Gamma}({\pi}^+{\to}{\mu}^+{\nu}X)/{\Gamma}({\pi}^+{\to}{\mu}^+{\nu}_{\mu})=(-2.1{\pm}1.3){\times}10^{-4}$ and $(-4.8{\pm}8.8){\times}10^{-6}$, respectively.
Systematic uncertainties and acceptance effects were approximately canceled by taking the ratio of amplitudes for the signal and ${\pi}^+{\to}{\mu}^+{\nu}_{\mu}$ decays.
The systematic uncertainties and acceptance effects due to the cuts were examined by generating decay muons in the target with several kinetic energies in the range $T_{\mu}=0-4.1$ MeV using MC simulation, and the systematic uncertainty was estimated to be $<$5\%.
The black circles in Fig. \ref{fig:BR_UL_Mu_FC} show the result of the 90\% C.L. upper limit branching ratio $R^{{\pi}{\mu}{\nu}X}$ in this energy region calculated using the FC method.
\begin{figure}
\includegraphics[scale=0.45]{BR_UL_Mu_FC.eps}
\caption{\label{fig:BR_UL_Mu_FC} Summary of the 90\% C.L. upper limit branching ratio $R^{{\pi}{\mu}{\nu}X}$ in this work. The black circles show the result of the search in the energy region $T_{\mu}>1.2$ MeV (see text in Sec. \ref{sec:MuH}) and the red squares represent the analysis result in the region $T_{\mu}<1.2$ MeV (see text in Sec. \ref{sec:MuL}).}
\end{figure}
\subsection{\label{sec:MuL} Analysis of the region below 1.2 MeV}
\begin{figure}
\includegraphics[scale=0.45]{FitL.eps}
\caption{\label{fig:FitL} (a) The total energy in the target due to the pion and muon after subtracting 17 MeV. The black crosses with statistical uncertainties show the data. The dotted green line, dashed blue line, and solid red line represent the main peak at 4.1 MeV, quadratic background due to ${\pi}$DIF events, and the sum of those two functions, respectively. (b) Residual plots shown by the black circles with the statistical error bars in the signal region $T_{\mu}=-1.8$ to $1.8$ MeV. The solid red line represents a hypothetical signal with mass of $m_X=33.9$ MeV/$c^2$ and the branching ratio $R^{{\pi}{\mu}{\nu}X}=3.0{\times}10^{-5}$.}
\end{figure}
For $T_{\mu}<1.2$ MeV, the selection of pions, rejection of extra activity in scintillators, the solid angle cut for the decay positron, and the positron energy cut in the NaI(T$\ell$) calorimeter were all the same as in the analysis in the energy region $T_{\mu}>1.2$ MeV.
To minimize ${\pi}$DIF events, the same tracking cut by WC1, WC2, S1, and S2 used in Sec. \ref{subsec:pieselection} was also applied.
After these basic cuts, the energies observed in B3 in a wide time window (700 ns) including pion and positron energies were obtained.
To cleanly subtract the positron contribution from the integrated energy, events with late positron decay $t>300$ ns were selected and the isolated positron energy was subtracted.
After that, the contribution of the average pion kinetic energy ($\sim$17 MeV) was subtracted from the total energy due to the pion and the muon.
Figure \ref{fig:FitL} (a) shows the total energy (corresponding to $T_{\mu}$) after subtracting 17 MeV.
The background at $T_{\mu}<1$ MeV was mainly due to remaining ${\pi}$DIF events.
The number of ${\pi}^+{\to}{\mu}^+{\to}e^+$ events available for the analysis is $1.3{\times}10^8$.
There are two background shapes, the 4.1 MeV peak and the ${\pi}$DIF events.
A quadratic function was used for the ${\pi}$DIF events.
To search for ${\pi}^+{\to}{\mu}^+{\nu}X$ decay, the width of the signal shape was scaled from the width measured at the 4.1 MeV peak.
Figure \ref{fig:FitL} (b) shows the residual plots in the signal region from $-1.8$ to $1.8$ MeV without any signal shape and a hypothetical signal shape assuming a mass of $m_X=33.9$ MeV/$c^2$ with the branching ratio $R^{{\pi}{\mu}{\nu}X}=3.0{\times}10^{-5}$.
The branching ratio obtained by the fit was $(1.0{\pm}2.0){\times}10^{-6}$.
The fit was performed from $-4.0$ to 4.1 MeV, and the fitting range of $-4.0$ to 2.0 MeV (signal region) resulted in ${\chi}^2/$d.o.f.=1.03 (d.o.f.=115); a small deviation remains above 2 MeV due to a slight mismatch arising from the kinetic energy distribution of the beam pion.
The signals of ${\pi}^+{\to}{\mu}^+{\nu}X$ decay were searched for in the mass range of $m_X=26$ to 33.9 MeV/$c^2$, but no significant excess beyond the statistical uncertainty was observed.
The red squares in Fig. \ref{fig:BR_UL_Mu_FC} represent the result of the 90\% C.L. upper limit branching ratio $R^{{\pi}{\mu}{\nu}X}$ in this energy region calculated using the FC approach.
\section{Constraints on the Majoron model}
The Majoron model can be constrained using the experimental value of the pion branching ratio $R^{\pi}$.
The predicted branching ratio including the massless Majoron $X_0$ and a light neutral Higgs $H'$ ($\lesssim$1 MeV/$c^2$) can be written as
\begin{equation}
\frac{{\Gamma}({\pi}{\to}eL^0)/{\Gamma}({\pi}{\to}{\mu}L^0)}{{\Gamma}({\pi}{\to}e{\nu}_e)/{\Gamma}({\pi}{\to}{\mu}{\nu}_{\mu})}=1+157.5g^2
\end{equation}
where $L^0$ denotes the final state ${\nu}$, ${\nu}X_0$, or ${\nu}H'$, and $g$ is the Majoron-neutrino coupling constant \cite{majoron3}.
The upper limit of the ratio $R^{\pi}_{\rm exp}/R^{\pi}_{\rm SM}$ at 90\% C.L. using the current averaged experimental value $R^{\pi}_{\rm exp}=(1.2327{\pm}0.0023){\times}10^{-4}$ \cite{PDG} is
\begin{equation}
\frac{R^{\pi}_{\rm exp}}{R^{\pi}_{\rm SM}}<1.0014.
\end{equation}
Using this limit, the 90\% C.L. upper limit of the coupling constant can be found to be
\begin{equation}
g^2<9{\times}10^{-6},
\end{equation}
an improvement by a factor of three over the previous experiment \cite{Britton}.
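The arithmetic behind this bound is immediate: the Majoron contribution rescales the branching ratio as $R/R_{\rm SM} = 1 + 157.5\,g^2$, so the 90\% C.L. bound $R^{\pi}_{\rm exp}/R^{\pi}_{\rm SM} < 1.0014$ translates directly into a bound on $g^2$. A one-line check:

```python
# Invert R/R_SM = 1 + 157.5 * g^2 at the 90% C.L. bound on the ratio.
ratio_limit = 1.0014
g2_limit = (ratio_limit - 1.0) / 157.5   # ~8.9e-6, quoted as g^2 < 9e-6
```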
\section{Conclusion}
No evidence of the three body pion decays ${\pi}^+{\to}e^+{\nu}X$ or ${\pi}^+{\to}{\mu}^+{\nu}X$ was found and new upper limits were set.
The limits on the branching ratio ${\pi}^+{\to}e^+{\nu}X$ were improved by an order of magnitude over the previous experiment.
For ${\pi}^+{\to}{\mu}^+{\nu}X$ decay, the limits obtained are the first available results.
The Majoron model was also constrained using the pion branching ratio $R^{\pi}$.
\begin{acknowledgments}
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC, No. SAPPJ-2017-00033), and by the Research Fund for the Doctoral Program of Higher Education of China, by CONACYT doctoral fellowship from Mexico, and by JSPS KAKENHI Grant No. 18540274, No. 21340059, No. 24224006, and No. 19K03888 in Japan.
We are grateful to Brookhaven National Laboratory for the loan of the crystals, and to the TRIUMF operations, detector, electronics and DAQ groups for their engineering and technical support.
\end{acknowledgments}
|
2005.13814
|
\section{Introduction}\label{sec:intro}
\input{intro}
\section{Overview}\label{sec:overview}
\input{overview}
\section{The \textsc{ImpEff}\xspace Language}\label{sec:source}
\input{source}
\section{The \textsc{ExEff}\xspace Language}\label{sec:target}
\input{target}
\section{Type Inference \& Elaboration}\label{sec:inference}
\input{inference}
\section{Erasure of Effect Information from \textsc{ExEff}\xspace}\label{sec:erasure}
\input{erasure}
\section{Elaboration to a Language Without Effects}\label{sec:elaboration-to-ocaml}
\input{ml}
\section{Related Work \& Conclusion}\label{sec:conclusion}
\input{conclusion}
\paragraph{\bf Acknowledgements}
We would like to thank the anonymous reviewers, members of the IFIP WG 2.1 group, participants of Dagstuhl seminars 16112~\cite{DBLP:journals/dagstuhl-reports/Bauer0PY16} and 18172~\cite{DBLP:journals/dagstuhl-reports/ChandrasekaranL18}, Gert-Jan Bottu, Mauro Jaskelioff, Filip Koprivec, Žiga Lukšič, Leonidas Lampropoulos, Klara Mardirosian, Ruben Pieters, Alexander Vandenbroucke, Nicolas Wu, and Žiga Zupančič for all their helpful comments and suggestions. Part of this work was funded by the KU Leuven Special Research Fund (BOF), project 3E160354, and by the Fund for Scientific Research - Flanders, project G0D1419N. This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-17-1-0326.
\subsection{The \textsc{SkelEff}\xspace Language}
Figure \ref{fig:erased-syntax} defines the syntax of \textsc{SkelEff}\xspace. The type system
and operational semantics of \textsc{SkelEff}\xspace follow from those of \textsc{ExEff}\xspace, and can be
found in Appendix~\ref{appendix:erased-additional}.
\begin{myfigure}[h]
\input{Figures/erased-syntax}
\vspace{-5mm}
\caption{\textsc{SkelEff}\xspace Syntax}
\label{fig:erased-syntax}
\end{myfigure}
\subsection{Erasure}
Figure \ref{fig:erasure} defines erasure functions $\eraseV{v}$, $\eraseC{c}$,
$\eraseVT{\vty}$, $\eraseCT{\cty}$ and $\eraseEnv{\Gamma}$ for values,
computations, value types, computation types, and type environments
respectively. All five functions take a substitution $\sigma$ from the free
type variables $\alpha$ to their skeleton $\tau$ as an additional parameter.
\begin{myfigure}[h]
\vspace{-3mm}
\input{Figures/erasure}
\vspace{-3mm}
\caption{Definition of type erasure.}
\label{fig:erasure}
\end{myfigure}
Thanks to the skeleton-based design of \textsc{ExEff}\xspace, erasure is straightforward.
All types are erased to their skeletons,
dropping quantifiers for type variables and all occurrences of dirt sets.
Moreover, coercions are dropped from values and computations. Finally,
all binders and elimination forms for type variables, dirt set variables and coercions are dropped
from values and type environments.
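To make the erasure functions concrete, the value-type case can be sketched as a small recursive function. The tuple encoding below is hypothetical and is not the paper's implementation: dirt sets, type and dirt quantifiers, and qualified constraints are dropped, skeleton quantifiers are kept, and type variables are replaced by their skeletons via the substitution $\sigma$.

```python
# Hypothetical tuple encoding of ExEff value types; a sketch of the erasure
# function of Figure "erasure", not the authors' implementation.

def erase_vty(vty, sigma):
    """Erase an ExEff value type to a SkelEff skeleton.
    sigma maps type variables alpha to their skeletons tau."""
    tag = vty[0]
    if tag == "Unit":
        return ("Unit",)
    if tag == "TyVar":                      # alpha erases to its skeleton
        return sigma[vty[1]]
    if tag == "Arrow":                      # A -> B ! Delta  =>  tau1 -> tau2
        _, a, b, _dirt = vty                # the dirt set Delta is dropped
        return ("Arrow", erase_vty(a, sigma), erase_vty(b, sigma))
    if tag == "ForallSkel":                 # forall varsigma. A is kept
        return ("ForallSkel", vty[1], erase_vty(vty[2], sigma))
    if tag == "ForallTy":                   # forall (alpha : tau). A: quantifier
        _, alpha, tau, body = vty           # dropped, alpha mapped to tau
        return erase_vty(body, {**sigma, alpha: tau})
    if tag == "ForallDirt":                 # forall delta. A: quantifier dropped
        return erase_vty(vty[1], sigma)
    if tag == "Qual":                       # pi => A: constraint dropped
        return erase_vty(vty[2], sigma)
    raise ValueError(f"unknown type constructor: {tag}")
```

Running it on the polymorphic type of $\mathit{f}$ from Example~\ref{exa:running-erasure} yields (the encoding of) $\forall \varsigma. (\mathtt{Unit} \to \varsigma) \to \varsigma$, matching the example.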
\begin{example}
\label{exa:running-erasure}
Continuing Example~\ref{exa:running-target}, a monomorphic function
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & (\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset) \to \mathtt{Unit}~!~\emptyset \\
& = & \fun{(g : \mathtt{Unit} \to \mathtt{Unit}~!~\emptyset)}{g~\mathtt{unit}} \\
\mathtt{in}~\ldots \\
\end{array}
\]
is erased to
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & (\mathtt{Unit} \to \mathtt{Unit}) \to \mathtt{Unit} \\
& = & \fun{(g : \mathtt{Unit} \to \mathtt{Unit})}{g~\mathtt{unit}} \\
\mathtt{in}~\ldots \\
\end{array}
\]
while its polymorphic variant
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & \forall \varsigma. \forall \alpha : \varsigma. \forall \alpha' : \varsigma. \forall \delta. \forall \delta'.
(\alpha \le \alpha') \Rightarrow (\delta \le \delta') \Rightarrow (\mathtt{Unit} \to \alpha~!~\delta) \to \alpha' ~!~ \delta' \\
& = & \Lambda \varsigma. \Lambda (\alpha : \varsigma). \Lambda (\alpha' : \varsigma). \Lambda \delta. \Lambda \delta'. \Lambda (\omega : \alpha \le \alpha'). \Lambda (\omega' : \delta \le \delta'). \\
& & \quad \fun{(g : \mathtt{Unit} \to \alpha \,!\, \delta)}{(\cast{(g~\mathtt{unit})}{(\omega \,!\, \omega')})} \\
\mathtt{in}~\ldots \\
\end{array}
\]
is erased to
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & \forall \varsigma. (\mathtt{Unit} \to \varsigma) \to \varsigma \\
& = & \Lambda \varsigma. \fun{(g : \mathtt{Unit} \to \varsigma)}{g~\mathtt{unit}} \\
\mathtt{in}~\ldots \\
\end{array}
\]
Note that in addition to removing all effect annotations and coercions, erasure
removes type quantifiers and abstractions, and replaces $\alpha$ and $\alpha'$ with their skeleton $\varsigma$.
We proceed similarly in applications, where
\[
f~\mathtt{Unit}~\mathtt{Unit}~\mathtt{Unit}~\emptyset~\emptyset~\langle \tyUnit \rangle~\emptyset_\emptyset~(\fun{(x:\mathtt{Unit})}{\return{x}})
\]
is erased simply to
\[
f~\mathtt{Unit}~(\fun{(x:\mathtt{Unit})}{\return{x}})
\]
where only the skeleton application remains. Similarly
\begin{align*}
&f~\mathtt{Unit}~\mathtt{Unit}~\mathtt{Unit}~\{\texttt{Tick}\}~\{\texttt{Tick}, \texttt{Tock}\}~\langle \tyUnit \rangle~(\{\texttt{Tick}\} \cup \emptyset_{\{\texttt{Tock}\}}) \\
&\quad (\fun{x : \mathtt{Unit}}{\operation[\mathtt{Tick}]{x}{y : \mathtt{Unit}}{(\cast{(\return{y})}{\langle \tyUnit \rangle~!~\emptyset_{\{\texttt{Tick}\}}})}})
\end{align*}
is erased to
\begin{align*}
&f~\mathtt{Unit}~(\fun{x : \mathtt{Unit}}{\operation[\mathtt{Tick}]{x}{y : \mathtt{Unit}}{\return{y}}})
\end{align*}
showing that a polymorphic function is applied in exactly the same way to a pure or an impure function.
\end{example}
The expected theorems hold. Firstly, types are preserved by
erasure, where typing for \textsc{SkelEff}\xspace values and computations takes the obvious forms
$\tcErsVal{\Gamma}{v}{\tau}$ and $\tcErsComp{\Gamma}{c}{\tau}$.
\begin{theorem}[Type Preservation]
\label{thm:erasure_type_preservation}
If $\tcTrgVal{\Gamma}{v}{\vty}$ then $\tcErsVal{\eraseEnv[\emptyset]{\Gamma}}{\eraseV[\Gamma]{v}}{\eraseVT[\Gamma]{\vty}}$.
If $\tcTrgComp{\Gamma}{c}{\cty}$ then $\tcErsComp{\eraseEnv[\emptyset]{\Gamma}}{\eraseC[\Gamma]{c}}{\eraseCT[\Gamma]{\cty}}$.
\end{theorem}
Here we abuse notation and use $\Gamma$ as the substitution
from type variables to skeletons required by the erasure functions.
Finally, we have that erasure preserves the operational semantics.
\begin{theorem}[Semantic Preservation]
\label{thm:erasure_semantic_preservation}
If $\smallStepVal{v}{v'}$ then $\congStepVal{\eraseV{v}}{\eraseV{v'}}$.
If $\smallStepComp{c}{c'}$ then $\congStepComp{\eraseC{c}}{\eraseC{c'}}$.
\end{theorem}
In both cases, $\equiv^{\leadsto}$ denotes the congruence closure of the step relation in \textsc{SkelEff}\xspace, defined in
Appendix~\ref{appendix:erased-additional}.
The choice of substitution $\sigma$ does not matter, as types do not affect
runtime behaviour. Moreover, because coercions are dropped during erasure,
they have no essential runtime impact in \textsc{ExEff}\xspace either.
\begin{corollary}[Coercion Irrelevance]
If $\bigStepVal{v}{v_1}$ and $\bigStepVal{\cast{v}{\coercion}}{v_2}$
then $\congStepVal{\eraseV{v_1}}{\eraseV{v_2}}$.
If $\bigStepComp{c}{c_1}$ and $\bigStepComp{\cast{c}{\coercion}}{c_2}$
then $\congStepComp{\eraseC{c_1}}{\eraseC{c_2}}$.
\end{corollary}
\paragraph{\bf Discussion}
The reason we need to use the symmetric congruence
closure of the step relation in our preservation theorem is that the original
\textsc{ExEff}\xspace program and the resulting \textsc{SkelEff}\xspace program do not necessarily operate in
lockstep. Indeed, the erasure of casts with coercions, of type and coercion
binders and of their applications means that the erased program does not have
to step through their reductions. On the other hand, the erasure of type and
coercion binders may expose applications of skeleton binders that the \textsc{SkelEff}\xspace
program has to reduce whereas the original \textsc{ExEff}\xspace program does not.
For example, take the \textsc{ExEff}\xspace term
\[ c_1 = (\lambda (x:\forall \delta.\mathtt{Unit}). \return{(\lambda (y: \mathtt{Unit}). \return{(x~\emptyset)})})~(\Lambda \delta. (\Lambda \varsigma.\mathtt{unit})~\mathtt{Unit}) \]
which $\beta$-reduces to
\[ c_2 = \return{(\lambda (y: \mathtt{Unit}). \return{((\Lambda \delta. (\Lambda \varsigma.\mathtt{unit})~\mathtt{Unit})~\emptyset)})} \]
When we erase $c_1$, we get
\[ \eraseC{c_1} = (\lambda (x:\mathtt{Unit}). \return{(\lambda (y: \mathtt{Unit}). \return{x})})~((\Lambda \varsigma.\mathtt{unit})~\mathtt{Unit}) \]
The erasure of the $\Lambda \delta$ binder exposes a new redex that has precedence. Hence, $\eraseC{c_1}$ steps to
\[ (\lambda (x:\mathtt{Unit}). \return{(\lambda (y: \mathtt{Unit}). \return{x})})~\mathtt{unit} \]
which steps to the irreducible computation
\[ \return{(\lambda (y: \mathtt{Unit}). \return{\mathtt{unit}})} \]
In contrast, $c_2$ erases to a different irreducible computation
\[ \eraseC{c_2} = \return{(\lambda (y: \mathtt{Unit}). \return{((\Lambda \varsigma.\mathtt{unit})~\mathtt{Unit})})} \]
These two irreducible computations can be made equal by reducing under the
$\lambda (y: \mathtt{Unit})$ binder in $\eraseC{c_2}$. The congruence
closure of the step relation allows this reduction under binders. Moreover, the closure
is symmetric because an \textsc{ExEff}\xspace step may defer or block a \textsc{SkelEff}\xspace step
that is exposed by the erasure.
Typically, when type information is erased from call-by-value languages,
type binders are erased by replacing them with other (dummy) binders. For instance,
the expected definition of erasure would be:
\begin{equation*}
\eraseV{\Lambda (\alpha:\tau).v} = \lambda (x:\mathtt{Unit}). \eraseV{v}
\end{equation*}
This replacement is motivated by a desire to preserve the behaviour of the typed
terms. By dropping binders, values might be turned into computations that
trigger their side-effects immediately, rather than at the later point where
the original binder was eliminated. However, there is no call for this circumspect
approach in our setting, as our grammatical partition of terms into values (without
side-effects) and computations (with side-effects) guarantees that this problem
cannot happen when we erase values to values and computations to computations.
Nevertheless, when adding recursion to the language, care is needed to preserve
the termination behavior of values under erasure, though we believe this is not a problem as
appropriate recursive constructs are invoked only at the computation level.
\subsection{Elaboration of \textsc{ImpEff}\xspace into \textsc{ExEff}\xspace}
The greyed parts of Figure~\ref{fig:source-typing} augment the typing rules for
\textsc{ImpEff}\xspace value and computation terms with typing-directed elaboration to
corresponding \textsc{ExEff}\xspace terms. The elaboration is mostly straightforward, mapping
every \textsc{ImpEff}\xspace construct onto its corresponding \textsc{ExEff}\xspace construct while adding
explicit type annotations to binders in Rules \textsc{TmTmAbs},
\textsc{TmHandler} and \textsc{TmOp}. Implicit appeals to subtyping are turned
into explicit casts with coercions in Rules \textsc{TmCastV} and
\textsc{TmCastC}. Rule \textsc{TmLet} introduces explicit binders for skeleton,
type, and dirt variables, as well as for constraints. These last also introduce
coercion variables $\omega$ that can be used in casts.
Binders introduced by Rule \textsc{TmLet} are
eliminated in Rule \textsc{TmVar} by means of explicit application with
skeletons, types, dirts and coercions. The coercions are produced by the auxiliary
judgement $\proveCt{\Gamma}{\coercion}{\pi}$, defined in
Figure~\ref{fig:source-coercion-typing}, which provides a coercion witness for
every subtyping proof.
As a sanity check, we have shown that elaboration preserves types.
\begin{theorem}[Type Preservation]
\label{thm:type_preservation}
\vspace{-1mm}
\begin{itemize}
\item
If $\tcElabVal{\Gamma}{v}{\mathit{A}}{v'}$
then $\tcTrgVal{\elabTyEnv{\Gamma}}{v'}{\elabVty{\mathit{A}}}$.
\item
If $\tcElabComp{\Gamma}{c}{\underline{\mathit{C}}}{c'}$
then $\tcTrgComp{\elabTyEnv{\Gamma}}{c'}{\elabCty{\underline{\mathit{C}}}}$.
\end{itemize}
\end{theorem}
Here $\elabTyEnv{\Gamma}$, $\elabVty{\mathit{A}}$ and $\elabCty{\underline{\mathit{C}}}$ convert
\textsc{ImpEff}\xspace environments and types into \textsc{ExEff}\xspace environments and types; they
are defined in Appendix~\ref{appendix:inference-additional}.
\begin{example}
\label{exa:running-target}
A valid elaboration of the polymorphic expression
\[
\mathtt{let}~\mathit{f} = (\fun{g}{g~\mathtt{unit}})~\mathtt{in}~\ldots
\]
from Example~\ref{exa:running-source} can be
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & (\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset) \to \mathtt{Unit}~!~\emptyset \\
& = & \fun{(g : \mathtt{Unit} \to \mathtt{Unit}~!~\emptyset)}{g~\mathtt{unit}} \\
\mathtt{in}~\ldots \\
\end{array}
\]
if the simple monomorphic typing is used (we have included the signature of $\mathit{f}$ for clarity).
For the polymorphic variant, the elaboration features both type-level abstractions and explicit casts:
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & \forall \varsigma. \forall \alpha : \varsigma. \forall \alpha' : \varsigma. \forall \delta. \forall \delta'.
(\alpha \le \alpha') \Rightarrow (\delta \le \delta') \Rightarrow (\mathtt{Unit} \to \alpha~!~\delta) \to \alpha' ~!~ \delta' \\
& = & \Lambda \varsigma. \Lambda (\alpha : \varsigma). \Lambda (\alpha' : \varsigma). \Lambda \delta. \Lambda \delta'. \Lambda (\omega : \alpha \le \alpha'). \Lambda (\omega' : \delta \le \delta'). \\
& & \quad \fun{(g : \mathtt{Unit} \to \alpha \,!\, \delta)}{(\cast{(g~\mathtt{unit})}{(\omega \,!\, \omega')})} \\
\mathtt{in}~\ldots \\
\end{array}
\]
Here, coercion variables $\omega$ and $\omega'$ are utilized by the body of $\mathit{f}$ for
upcasting $(g~\mathtt{unit})$ to have type $\alpha' ~!~ \delta'$.
Similarly, applications of the latter variant need to include explicit type-level applications and coercion witnesses.
Elaborating the application of $f$ to the pure function $\mathit{id}$ we get
\[
f~\mathtt{Unit}~\mathtt{Unit}~\mathtt{Unit}~\emptyset~\emptyset~\langle \tyUnit \rangle~\emptyset_\emptyset~(\fun{(x:\mathtt{Unit})}{\return{x}})
\]
whereas for the impure $\mathit{tick}$ at a type $\mathtt{Unit}~!~\{\texttt{Tick}, \texttt{Tock}\}$ we get
\begin{align*}
&f~\mathtt{Unit}~\mathtt{Unit}~\mathtt{Unit}~\{\texttt{Tick}\}~\{\texttt{Tick}, \texttt{Tock}\}~\langle \tyUnit \rangle~(\{\texttt{Tick}\} \cup \emptyset_{\{\texttt{Tock}\}}) \\
&\quad (\fun{x : \mathtt{Unit}}{\operation[\mathtt{Tick}]{x}{y : \mathtt{Unit}}{(\cast{(\return{y})}{\langle \tyUnit \rangle~!~\emptyset_{\{\texttt{Tick}\}}})}})
\end{align*}
where $\mathtt{return}$ had to be coerced in order to match the dirt of the operation call.
\end{example}
\subsection{Constraint Generation \& Elaboration}\label{subsec:constraint-generation}
Constraint generation with elaboration into \textsc{ExEff}\xspace is presented in
Figures~\ref{fig:constraint-generation-values} (values)
and~\ref{fig:constraint-generation-computations} (computations).
Before going into the details of each, we first introduce the three auxiliary
constructs they use.
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\small
\[
\begin{array}{r@{~}c@{~}l}
\text{constraint set}~\mathcal{P}, \mathcal{Q} & ::= & \bullet \mid
\tau_1 = \tau_2, \mathcal{P} \mid
\alpha : \tau, \mathcal{P} \mid
\highlight{\omega :} \pi, \mathcal{P} \\
\text{typing environment}~\Gamma & ::= & \epsilon \mid
\Gamma, x : \mathit{S} \\
\text{substitution}~\sigma & ::= & \bullet \mid
\sigma \cdot [\tau/\varsigma] \mid
\sigma \cdot [\mathit{A}/\alpha] \mid
\sigma \cdot [\Delta/\delta] \mid
\sigma \cdot \highlight{[\coercion/\omega]} \\
\end{array}
\]
\end{minipage}\end{shaded}\end{center}
At the heart of our algorithm are sets $\mathcal{P}$,
containing three different kinds of constraints:
\begin{inparaenum}[(a)]
\item skeleton equalities of the form $\tau_1 = \tau_2$,
\item skeleton constraints of the form $\alpha : \tau$, and
\item wanted subtyping constraints of the form $\omega : \pi$.
\end{inparaenum}
The purpose of the first two becomes clear when we discuss constraint solving,
in Section~\ref{subsec:constraint-solving}.
Next, typing environments $\Gamma$ only contain term variable bindings, while
other variables represent unknowns of their sort and may end up being
instantiated after constraint solving.
Finally, during type inference we compute substitutions $\sigma$ for refining
as-yet-unknown skeletons, types, dirts, and coercions. The last kind is
essential, since our algorithm simultaneously performs type inference and
elaboration into \textsc{ExEff}\xspace.
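As a concrete model of this bookkeeping, a substitution can be represented as a finite map from variable names of any sort (skeleton, type, dirt, or coercion) to the corresponding entities, with composition $\sigma' \cdot \sigma$ applying $\sigma'$ to the images of $\sigma$. This encoding is hypothetical and illustrative, not the paper's implementation.

```python
# Hypothetical model: terms of every sort are nested tuples, and variables of
# any sort are ("Var", name) nodes; a substitution is a dict from names to terms.

def apply_subst(sigma, t):
    """Apply a substitution to a term, replacing mapped variables."""
    if isinstance(t, tuple):
        if t[0] == "Var":
            return sigma.get(t[1], t)       # unmapped variables stay put
        return tuple(apply_subst(sigma, x) if isinstance(x, tuple) else x
                     for x in t)
    return t

def compose(sigma2, sigma1):
    """Composition sigma2 . sigma1: first sigma1, then sigma2."""
    out = {v: apply_subst(sigma2, t) for v, t in sigma1.items()}
    out.update({v: t for v, t in sigma2.items() if v not in out})
    return out
```

Composition in this form satisfies the defining property $(\sigma' \cdot \sigma)(t) = \sigma'(\sigma(t))$, which is what the solver relies on each time it accumulates a new mapping via $\sigma' \cdot \sigma$.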
\paragraph{\bf Values.}
\begin{myfigure}[t!]
\input{Figures/constraint-generation-values}
\vspace{-7mm}
\caption{Constraint Generation with Elaboration (Values)}
\label{fig:constraint-generation-values}
\end{myfigure}
Constraint generation for values takes the form
$\inferStVal{\mathcal{Q}}{\Gamma}{v}{\mathit{A}}{\mathcal{Q}'}{\sigma}{v'}$. It takes
as inputs a set of \emph{wanted} constraints $\mathcal{Q}$, a typing environment
$\Gamma$, and an \textsc{ImpEff}\xspace value $v$, and produces a value type $\mathit{A}$, a new set of
wanted constraints $\mathcal{Q}'$, a substitution $\sigma$, and an \textsc{ExEff}\xspace
value $v'$.
In order to support let generalization, our inference algorithm does not keep constraint generation and solving
separate. Instead, the two are interleaved, as
indicated by the additional arguments of our relation:
\begin{inparaenum}[(a)]
\item constraints $\mathcal{Q}$ are passed around in a stateful manner (i.e., they are input
and output), and
\item substitutions $\sigma$ generated from constraint solving constitute
part of the relation output.
\end{inparaenum}
The rules are syntax-directed on the input \textsc{ImpEff}\xspace value.
Rule~\textsc{TmVar} handles
term variables $x$: as usual for constraint-based type inference the rule
instantiates the polymorphic type $(\forall \bar{\varsigma}.
\overline{\alpha:\tau}. \forall \bar{\delta}. \bar{\pi} \Rightarrow
\mathit{A})$ of $x$ with fresh variables; these are placeholders that are determined during constraint
solving. Moreover, the rule
extends the wanted constraints $\mathcal{P}$ with
$\bar{\pi}$, appropriately instantiated. In \textsc{ExEff}\xspace, this corresponds
to explicit skeleton, type, dirt, and coercion applications.
More interesting is Rule~\textsc{TmAbs}, which handles term abstractions. Like in
standard Hindley-Damas-Milner~\citep{DamasMilner}, it generates a fresh type
variable $\alpha$ for the type of the abstracted term variable $x$. In
addition, it generates a fresh skeleton variable $\varsigma$, to capture the (yet
unknown) shape of $\alpha$.
As explained in detail in Section~\ref{subsec:constraint-solving}, the
constraint solver instantiates type variables only through their skeleton
annotations. Because we want to give local constraint solving for the body
$c$ of the term abstraction the opportunity to
produce a substitution $\sigma$ that instantiates $\alpha$, we have to pass in the annotation constraint
$\alpha : \varsigma$, which hints at why we
need to pass constraints in a stateful manner. We apply the resulting
substitution $\sigma$ to the result type $\sigma(\alpha) \to
\underline{\mathit{C}}$ (though $\sigma$ refers to \textsc{ImpEff}\xspace types, we abuse notation to
save clutter and apply it directly to \textsc{ExEff}\xspace entities too).
Finally, Rule~\textsc{TmHand} is concerned with
handlers. Since it is the most complex of the rules, we discuss each of its
premises separately:
Firstly, we infer a type $\mathit{B}_r\,!\,\Delta_r$ for the right hand side of the
$\mathtt{return}$-clause. Since $\alpha_r$ is a fresh unification variable,
just like for term abstraction we require $\alpha_r : \varsigma_r$, for a fresh
skeleton variable $\varsigma_r$.
Secondly, we check every operation clause in $\mathcal{O}$ in order.
For each clause, we generate fresh skeleton, type, and dirt variables
($\varsigma_i$, $\alpha_i$, and $\delta_i$), to account for the (yet unknown)
result type $\alpha_i\,!\,\delta_i$ of the continuation $k$, while inferring
type $\mathit{B}_{\mathtt{Op}_i}\,!\,\Delta_{\mathtt{Op}_i}$ for the right-hand-side $c_{\mathtt{Op}_i}$.
More interesting is the (final) set of wanted constraints $\mathcal{Q}'$.
First, we assign to the handler the overall type
\[
\alpha_\mathit{in}\,!\,\delta_\mathit{in} \Rrightarrow \alpha_\mathit{out}\,!\,\delta_\mathit{out}
\]
where $\varsigma_\mathit{in}, \alpha_\mathit{in}, \delta_\mathit{in},
\varsigma_\mathit{out}, \alpha_\mathit{out}, \delta_\mathit{out}$ are fresh
variables of the respective sorts.
In turn, we require that
\begin{inparaenum}[(a)]
\item the type of the return clause is a subtype of
$\alpha_\mathit{out}\,!\,\delta_\mathit{out}$ (given by the combination of
$\omega_1$ and $\omega_2$),
\item the right-hand-side type of each operation clause is a subtype of the
overall result type: $\sigma^n(\mathit{B}_{\mathtt{Op}_i}\,!\, \Delta_{\mathtt{Op}_i}) \le
\alpha_\mathit{out}\,!\,\delta_\mathit{out}$ (witnessed by
$\omega_{3_i}\,!\,\omega_{4_i}$),
\item the actual types of the continuations $\mathit{B}_i \to \alpha_\mathit{out}\,!\,\delta_\mathit{out}$ in the operation
clauses should be subtypes of their assumed types $\mathit{B}_i \to \sigma^n(\alpha_i\,!\,\delta_i)$ (witnessed by $\omega_{5_i}$),
\item the overall argument type $\alpha_\mathit{in}$ is a subtype of the
assumed type of $x$:~$\sigma^n(\sigma_r(\alpha_r))$ (witnessed by
$\omega_6$), and
\item
the input dirt set $\delta_\mathit{in}$ is a subtype of the resulting dirt
set $\delta_\mathit{out}$, extended with the handled operations $\mathcal{O}$
(witnessed by $\omega_7$).
\end{inparaenum}
All the aforementioned implicit subtyping relations become explicit in the
elaborated term $c_\mathit{res}$, via explicit casts.
\paragraph{\bf Computations.}
\begin{myfigure}[t!]
\input{Figures/constraint-generation-computations}
\vspace{-7mm}
\caption{Constraint Generation with Elaboration (Computations)}
\label{fig:constraint-generation-computations}
\end{myfigure}
The judgement
$\inferStComp{\mathcal{Q}}{\Gamma}{c}{\underline{\mathit{C}}}{\mathcal{Q}'}{\sigma}{c'}$
generates constraints for computations.
Rule~\textsc{TmApp} handles term applications of the form $v_1\,v_2$. After
inferring a type for each subterm ($\mathit{A}_1$ for $v_1$ and $\mathit{A}_2$ for
$v_2$), we generate the wanted constraint $\sigma_2(\mathit{A}_1) \le \mathit{A}_2 \to
\alpha\,!\,\delta$, with fresh type and dirt variables $\alpha$ and
$\delta$, respectively. Associated coercion variable $\omega$ is then used in
the elaborated term to explicitly (up)cast $v_1'$ to the expected type $\mathit{A}_2
\to \alpha\,!\,\delta$.
Rule~\textsc{TmReturn} handles \texttt{return}-computations and is entirely
straightforward.
Rule~\textsc{TmLet} handles polymorphic let-bindings. First, we infer a type $\mathit{A}$
for $v$, as well as wanted constraints $\mathcal{Q}_v$. Then, we simplify
wanted constraints $\mathcal{Q}_v$ by means of function $\mathtt{solve}$ (which we
explain in detail in Section~\ref{subsec:constraint-solving} below), obtaining
a substitution $\sigma_1'$ and a set of {\em residual constraints}
$\mathcal{Q}_v'$.
Generalization of $x$'s type is performed by the auxiliary function $\mathit{split}$,
given by the following clause:
\[
\inferrule*[right=]
{ \bar{\varsigma} = \{ \varsigma \mid (\alpha : \varsigma) \in \mathcal{Q},
\nexists \alpha'. \alpha' \notin \bar{\alpha} \land (\alpha' : \varsigma) \in \mathcal{Q} \}
\\
\bar{\alpha} = \ftv{\mathcal{Q}} \cup \ftv{\mathit{A}}~\backslash~\ftv{\Gamma}
\\
\mathcal{Q}_1 = \{ (\omega : \pi) \mid
(\omega : \pi) \in \mathcal{Q}, \fv{\pi} \not\subseteq \fv{\Gamma} \}
\\
\bar{\delta} = \fdv{\mathcal{Q}} \cup \fdv{\mathit{A}}~\backslash~\fdv{\Gamma} ~~
\\
\mathcal{Q}_2 = \mathcal{Q} - \mathcal{Q}_1
}
{ \splash{\Gamma}{\mathcal{Q}}{\mathit{A}} = \langle \bar{\varsigma}, \overline{\alpha : \tau}, \bar{\delta}, \mathcal{Q}_1, \mathcal{Q}_2 \rangle }
\]
In essence, $\mathit{split}$ generates the type (scheme) of $x$ in parts. Additionally, it
computes the subset $\mathcal{Q}_2$ of the input constraints $\mathcal{Q}$
that do not depend on locally-bound variables. Such constraints can be floated
``upwards'', and are passed as input when inferring a type for $c$. The remainder
of the rule is self-explanatory.
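The behaviour of $\mathit{split}$ can be modeled compactly. In the hypothetical sketch below, variables of all sorts are plain names, each constraint records the set of variables its $\pi$ mentions, and the skeleton bookkeeping is elided; the sketch only illustrates generalization and the partition into $\mathcal{Q}_1$ and $\mathcal{Q}_2$.

```python
# Hypothetical simplification of split: a constraint is an (omega, vars) pair,
# where vars is the set of variables mentioned by its subtyping proposition pi.

def split(env_fvs, constraints, type_fvs):
    """Return (generalized variables, Q1, Q2).
    Q1 mentions locally-bound variables and is generalized over;
    Q2 depends only on the environment and floats upwards."""
    mentioned = set().union(*[vs for _, vs in constraints], set())
    gen = (mentioned | type_fvs) - env_fvs       # free in Q or A, not in Gamma
    q1 = [(w, vs) for (w, vs) in constraints if not vs <= env_fvs]
    q2 = [(w, vs) for (w, vs) in constraints if vs <= env_fvs]
    return gen, q1, q2
```

A constraint all of whose variables occur in the environment lands in $\mathcal{Q}_2$ and floats upwards; the rest become part of the let-bound scheme.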
Rule~\textsc{TmOp} handles operation calls. Observe that in the elaborated term,
we upcast the inferred type to match the expected type in the signature.
Rule~\textsc{TmDo} handles sequences.
The requirement that all computations in a $\mathtt{do}$-construct have the
same dirt set is expressed in the wanted constraints $\sigma_2(\Delta_1)
\le \delta$ and
$\Delta_2 \le \delta$ (where $\delta$ is a fresh dirt variable; the
resulting dirt set), witnessed by coercion variables $\omega_1$
and $\omega_2$. Both coercion variables are used in the
elaborated term to upcast $c_1$ and $c_2$, such that both draw effects from the
same dirt set $\delta$.
Finally, Rule~\textsc{TmHandle} is concerned with effect handling.
After inferring type $\mathit{A}_1$ for the handler $v$, we require that it takes
the form of a handler type, witnessed by coercion variable $\omega_1 :
\sigma_2(\mathit{A}_1) \le (\alpha_1\,!\,\delta_1 \Rrightarrow
\alpha_2\,!\,\delta_2)$, for fresh $\alpha_1,
\alpha_2, \delta_1, \delta_2$. To ensure that the type
$\mathit{A}_2\,!\,\Delta_2$ of $c$ matches the expected type,
we require that $\mathit{A}_2\,!\,\Delta_2 \le \alpha_1\,!\,\delta_1$.
Our syntax does not include coercion variables for computation subtyping;
we achieve the same effect by combining
$\omega_2 : \mathit{A}_2 \le \alpha_1$ and $\omega_3 : \Delta_2 \le \delta_1$.
In the following, notation $\sigma \models \mathcal{Q}$ denotes that the
substitution $\sigma$ is a \emph{solution} of the constraint set $\mathcal{Q}$,
i.e., after applying $\sigma$ to all constraints in $\mathcal{Q}$,
we get derivable judgements according to the rules of Figure~\ref{fig:source-coercion-typing}.
\begin{theorem}[Soundness of Inference]
If $\inferStVal{\bullet}{\Gamma}{v}{\mathit{A}}{\mathcal{Q}}{\sigma}{v'}$ then for any $\sigma' \models \mathcal{Q}$, we have
$\tcElabVal{(\sigma' \cdot \sigma)(\Gamma)}{v}{\sigma'(\mathit{A})}{\sigma'(v')}$, and analogously for computations.
\end{theorem}
\begin{theorem}[Completeness of Inference]
If $\tcElabVal{\Gamma}{v}{\mathit{A}}{v'}$ then we have
$\inferStVal{\bullet}{\Gamma}{v}{\mathit{A}'}{\mathcal{Q}}{\sigma}{v''}$ and there exists $\sigma' \models \mathcal{Q}$ and $\coercion$, such that
$\sigma'(v'') = v'$ and $\proveCt{\sigma(\Gamma)}{\coercion}{\sigma'(\mathit{A}') \le \mathit{A}}$. An analogous statement holds for computations.
\end{theorem}
\subsection{Constraint Solving}\label{subsec:constraint-solving}
The second phase of our inference-and-elaboration algorithm is the constraint
solver. It is implemented by the function $\mathtt{solve}$, whose signature is:
\[
\ruleform{ \mathtt{solve}(\sigma;\, \mathcal{P};\, \mathcal{Q}) = (\sigma',\, \mathcal{P}') }
\]
It takes three inputs: the substitution $\sigma$ accumulated so far, a list of
already-processed constraints $\mathcal{P}$, and a queue of still-to-be-processed
constraints $\mathcal{Q}$. There are two outputs: the substitution $\sigma'$ that
solves the constraints and the residual constraints $\mathcal{P}'$. The substitutions
$\sigma$ and $\sigma'$ contain four kinds of mappings: $\varsigma \mapsto \tau$, $\alpha \mapsto \mathit{A}$, $\delta \mapsto \Delta$ and
$\omega \mapsto \coercion$, which respectively instantiate skeleton variables,
type variables, dirt variables and coercion variables.
\begin{theorem}[Correctness of Solving]
For any set $\mathcal{Q}$, the call $\mathtt{solve}(\bullet; \bullet; \mathcal{Q})$ either results in a failure,
in which case $\mathcal{Q}$ has no solutions, or returns $(\sigma, \mathcal{P})$ such that for any $\sigma' \models \mathcal{Q}$, there exists
$\sigma'' \models \mathcal{P}$ such that $\sigma' = \sigma'' \cdot \sigma$.
\end{theorem}
The solver is invoked with $\mathtt{solve}(\bullet;\, \bullet;\, \mathcal{Q})$, to
process the constraints $\mathcal{Q}$ generated in the first phase of the
algorithm, i.e., with an empty substitution and no processed constraints.
The $\mathtt{solve}$ function is defined by case analysis on the queue.
\paragraph{\bf Empty Queue} When the queue is empty, all constraints
have been processed. What remains are the residual constraints
and the solving substitution $\sigma$, which are both returned as the result
of the solver.
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\small
\vspace{-3mm}
\[
\begin{array}{l}
\mathtt{solve}(\sigma;\, \mathcal{P};\, \bullet) = (\sigma,\, \mathcal{P}) \\
\end{array}
\]
\vspace{-5mm}
\end{minipage}\end{shaded}\end{center}
\paragraph{\bf Skeleton Equalities}
The next set of cases we consider are those where the queue is non-empty
and its first element is an equality between skeletons $\tau_1 = \tau_2$.
We consider seven possible cases based on the structure of $\tau_1$ and $\tau_2$
that together essentially implement conventional unification as used in
Hindley-Milner type inference~\citep{DamasMilner}.
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\small
\vspace{-2mm}
\[
\begin{array}{l}
\mathtt{solve}(\sigma;\, \mathcal{P}; \tau_1 = \tau_2, \mathcal{Q}) = \kpre{match} \tau_1 = \tau_2 \kpost{with} \\
\quad \mathop{\text{\texttt{|}}} \varsigma = \varsigma \mapsto \mathtt{solve}(\sigma;\, \mathcal{P};\, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \varsigma = \tau \mapsto \kpre{if} \varsigma \notin \fsv{\tau}
\kop{then} \letin{\sigma' = [\tau/\varsigma]} \mathtt{solve}(\sigma' \cdot \sigma;\, \bullet; \sigma'(\mathcal{Q}, \mathcal{P}))
\kop{else} \mathtt{fail} \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \tau = \varsigma \mapsto \kpre{if} \varsigma \notin \fsv{\tau}
\kop{then} \letin{\sigma' = [\tau/\varsigma]} \mathtt{solve}(\sigma' \cdot \sigma;\, \bullet; \sigma'(\mathcal{Q}, \mathcal{P}))
\kop{else} \mathtt{fail} \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathtt{Unit} = \mathtt{Unit} \mapsto \mathtt{solve}(\sigma;\, \mathcal{P};\, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} (\tau_1 \to \tau_2) = (\tau_3 \to \tau_4) \mapsto
\mathtt{solve}(\sigma;\, \mathcal{P};\, \tau_1 = \tau_3, \tau_2 = \tau_4, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} (\tau_1 \Rrightarrow \tau_2) = (\tau_3 \Rrightarrow \tau_4) \mapsto
\mathtt{solve}(\sigma;\, \mathcal{P};\, \tau_1 = \tau_3, \tau_2 = \tau_4, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathtt{otherwise} \mapsto \mathtt{fail} \\
\end{array}
\]
\vspace{-2mm}
\end{minipage}\end{shaded}\end{center}
The first case applies when both skeletons are the same skeleton variable $\varsigma$.
Then the equality trivially holds. Hence we drop it and proceed with solving
the remaining constraints.
The next two cases apply when either $\tau_1$ or $\tau_2$ is a skeleton
variable $\varsigma$. If the occurs check fails, there is no finite solution and
the algorithm signals failure. Otherwise, the constraint is solved by
instantiating the $\varsigma$. This additional substitution
is accumulated and applied to all other constraints $\mathcal{P},\mathcal{Q}$. Because
the substitution might have modified some of the already processed constraints
$\mathcal{P}$, we have to revisit them. Hence, they are all pushed back onto the
queue, which is processed recursively.
The next three cases consider three different ways in which the two skeletons
can have the same instantiated top-level structure. In those cases
the equality is decomposed into equalities on the subterms, which are pushed
onto the queue and processed recursively.
The last catch-all case deals with all ways in which the two skeletons can
be instantiated to different structures. Then there is no solution.
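These seven cases amount to conventional first-order unification with an occurs check. The following sketch uses a hypothetical tuple encoding of skeletons and, for brevity, folds the processed set $\mathcal{P}$ into the queue; it is illustrative, not the paper's implementation.

```python
# Skeletons: ("SkelVar", name), ("Unit",), ("Arrow", t1, t2), ("Handler", t1, t2).

def fsv(tau):
    """Free skeleton variables of a skeleton."""
    if tau[0] == "SkelVar":
        return {tau[1]}
    return set().union(*[fsv(t) for t in tau[1:]], set())

def subst(name, tau, t):
    """Substitute tau for the skeleton variable name in t."""
    if t[0] == "SkelVar":
        return tau if t[1] == name else t
    return (t[0],) + tuple(subst(name, tau, x) for x in t[1:])

def unify(queue):
    """Solve a list of skeleton equalities; returns a substitution dict,
    or raises ValueError (mirroring the `fail` cases of solve)."""
    sigma = {}
    while queue:
        t1, t2 = queue.pop(0)
        if t1 == t2:                                   # e.g. varsigma = varsigma
            continue
        if t1[0] == "SkelVar" or t2[0] == "SkelVar":
            v, t = (t1[1], t2) if t1[0] == "SkelVar" else (t2[1], t1)
            if v in fsv(t):                            # occurs check
                raise ValueError("no finite solution")
            queue = [(subst(v, t, a), subst(v, t, b)) for a, b in queue]
            sigma = {w: subst(v, t, s) for w, s in sigma.items()}
            sigma[v] = t
        elif t1[0] == t2[0] and len(t1) == len(t2):    # same top-level structure
            queue = list(zip(t1[1:], t2[1:])) + queue  # decompose into subterms
        else:
            raise ValueError("structure mismatch")
    return sigma
```

The occurs check rules out infinite skeletons such as $\varsigma = \varsigma \to \tau$, exactly as the corresponding $\mathtt{fail}$ case does.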
\paragraph{\bf Skeleton Annotations}
The next four cases consider a skeleton annotation $\alpha : \tau$ at the head
of the queue, and propagate the skeleton instantiation to the type variable.
In the first case, where the skeleton is a variable $\varsigma$, there is nothing to do:
the solver moves the annotation to the processed constraints and proceeds with the remainder
of the queue. In the other three cases, the skeleton is instantiated, and
the solver instantiates the type variable with the corresponding structure, introducing
fresh variables for any subterms; here we implicitly annotate every type variable with its
skeleton: $\alpha^{\tau}$. The instantiating substitution is accumulated and applied
to the remaining constraints, which are processed recursively.
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\small
\vspace{-2mm}
\[
\begin{array}{l@{~}c@{~}l}
\multicolumn{3}{l}{\mathtt{solve}(\sigma;\, \mathcal{P};\, \alpha : \tau, \mathcal{Q}) = \kpre{match} \tau \kpost{with}} \\
\quad \mathop{\text{\texttt{|}}} \varsigma & \mapsto & \mathtt{solve}(\sigma;\, \mathcal{P}, \alpha : \tau;\, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathtt{Unit} & \mapsto &
\letin{\sigma' = [\mathtt{Unit}/\alpha]}
\mathtt{solve}(\sigma' \cdot \sigma;\, \bullet;\, \sigma'(\mathcal{Q}, \mathcal{P})) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \tau_1 \to \tau_2 & \mapsto &
\letin{\sigma' = [(\alpha_1^{\tau_1} \to \alpha_2^{\tau_2}\,!\,\delta)/\alpha]}
\mathtt{solve}(\sigma' \!\cdot\! \sigma;\, \bullet;\, \alpha_1 : \tau_1, \alpha_2 : \tau_2, \sigma'(\mathcal{Q}, \mathcal{P})) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \tau_1 \Rrightarrow \tau_2 & \mapsto &
\letin{\sigma' = [(\alpha_1^{\tau_1}\,!\,\delta_1 \Rrightarrow \alpha_2^{\tau_2}\,!\,\delta_2)/\alpha]}
\mathtt{solve}(\sigma'\! \cdot \! \sigma;\, \bullet;\, \alpha_1 : \tau_1, \alpha_2 : \tau_2, \sigma'(\mathcal{Q}, \mathcal{P})) \\
\end{array}
\]
\vspace{-2mm}
\end{minipage}\end{shaded}\end{center}
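A hypothetical model of these four cases (with dirt variables elided for brevity): when the skeleton is still a variable the annotation is kept, and otherwise the type variable is forced to mirror the skeleton's top-level structure, with fresh skeleton-annotated type variables for the subterms. This is a sketch of the idea, not the paper's implementation.

```python
import itertools

_fresh = itertools.count()  # supply of fresh type-variable names

def instantiate(alpha, tau):
    """Process an annotation alpha : tau.
    Returns (instantiation of alpha, or None if tau is still a variable,
             the new annotation constraints to enqueue)."""
    if tau[0] == "SkelVar":                 # nothing to learn: keep alpha : varsigma
        return None, [(alpha, tau)]
    if tau[0] == "Unit":
        return ("Unit",), []
    if tau[0] in ("Arrow", "Handler"):      # tau1 -> tau2  or  tau1 => tau2
        a1, a2 = f"a{next(_fresh)}", f"a{next(_fresh)}"
        ty = (tau[0], ("TyVar", a1, tau[1]), ("TyVar", a2, tau[2]))
        return ty, [(a1, tau[1]), (a2, tau[2])]
    raise ValueError(tau)
```

The returned constraints $\alpha_1 : \tau_1, \alpha_2 : \tau_2$ go back onto the queue, so instantiation propagates recursively through the skeleton, as in the shaded definition above.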
\paragraph{\bf Value Type Subtyping}
Next are the cases where a subtyping constraint between two value types
$\mathit{A}_1 \le \mathit{A}_2$---evidenced by coercion variable $\omega$---is at
the head of the queue.
We consider six different situations.
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\small
\vspace{-2mm}
\[
\begin{array}{l}
\mathtt{solve}(\sigma;\, \mathcal{P};\, \omega : \mathit{A}_1 \le \mathit{A}_2, \mathcal{Q}) = \kpre{match} \mathit{A}_1 \le \mathit{A}_2 \kpost{with} \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathit{A} \le \mathit{A} \mapsto
\letin{\vty = \elabVty{\mathit{A}}} \mathtt{solve}([\refl{\vty}/\omega] \cdot \sigma;\, \mathcal{P};\, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \alpha^{\tau_1} \le \mathit{A} \mapsto
\letin{\tau_2 = \skeletonOf{\mathit{A}}}
\mathtt{solve}(\sigma;\, \mathcal{P}, \omega : \alpha^{\tau_1} \le \mathit{A};\, \tau_1 = \tau_2, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathit{A} \le \alpha^{\tau_1} \mapsto
\letin{\tau_2 = \skeletonOf{\mathit{A}}}
\mathtt{solve}(\sigma;\, \mathcal{P}, \omega : \mathit{A} \le \alpha^{\tau_1};\, \tau_2 = \tau_1, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} (\mathit{A}_1 \to \mathit{B}_1\,!\,\Delta_1) \le (\mathit{A}_2 \to \mathit{B}_2\,!\,\Delta_2) \mapsto
\letin{\sigma' = [(\omega_1 \to \omega_2\,!\,\omega_3)/\omega]} \\[0.5mm]
\qquad\qquad
\mathtt{solve}(\sigma' \cdot \sigma;\, \mathcal{P};\, \omega_1 : \mathit{A}_2 \le \mathit{A}_1, \omega_2 : \mathit{B}_1 \le \mathit{B}_2, \omega_3 : \Delta_1 \le \Delta_2, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} (\mathit{A}_1\,!\,\Delta_1 \Rrightarrow \mathit{A}_2\,!\,\Delta_2) \le (\mathit{A}_3\,!\,\Delta_3 \Rrightarrow \mathit{A}_4\,!\,\Delta_4) \mapsto
\letin{\sigma' = [(\omega_1\,!\,\omega_2 \Rrightarrow \omega_3\,!\,\omega_4)/\omega]} \\[0.5mm]
\qquad\qquad
\mathtt{solve}(\sigma' \cdot \sigma;\, \mathcal{P};\, \omega_1 : \mathit{A}_3 \le \mathit{A}_1, \omega_2 : \Delta_3 \le \Delta_1, \omega_3 : \mathit{A}_2 \le \mathit{A}_4, \omega_4 : \Delta_2 \le \Delta_4, \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathtt{otherwise} \mapsto \mathtt{fail} \\
\end{array}
\]
\vspace{-2mm}
\end{minipage}\end{shaded}\end{center}
If the two types are equal, the subtyping constraint
holds trivially by reflexivity. The solver thus drops the constraint and
instantiates $\omega$ with the reflexivity coercion $\refl{\vty}$. Note that
each coercion variable appears in only one constraint, so we merely accumulate
the substitution and need not apply it to the other constraints.
In the next two cases, one of the two types is a type variable $\alpha$. Then
we move the constraint to the processed set.
We also add an equality constraint between the skeletons to the queue, thus enforcing the
invariant that only types with the same skeleton are compared.
Through the skeleton equality, the structure of the type (if any) is also
transferred to the type variable.
The next two cases concern two types with the same top-level instantiation. In these cases the solver
decomposes the constraint into constraints on the corresponding subterms and
appropriately relates the evidence of the old constraint to the new ones.
The final case catches all situations where the two types are instantiated
with a different structure and thus there is no solution.
\noindent
Auxiliary function $\skeletonOf{\mathit{A}}$, defined in Appendix~\ref{appendix:inference-additional}, computes the skeleton of $\mathit{A}$.
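The six situations can be sketched as a small dispatch function; the tuple encoding of value types and the result tags are assumptions of this illustration, not part of the calculus:

```python
def decompose(a1, a2):
    """One solver step on a value-type subtyping constraint a1 <= a2.

    Value types are tuples: ('var', name), ('unit',), ('arrow', A, B, D)
    for A -> B ! D, and ('handler', A, D1, B, D2) for A!D1 => B!D2.
    Returns ('refl',), ('defer',), ('split', subconstraints) or ('fail',).
    """
    if a1 == a2:
        return ('refl',)              # discharged by reflexivity
    if a1[0] == 'var' or a2[0] == 'var':
        return ('defer',)             # processed set, plus skeleton equality
    if a1[0] == a2[0] == 'arrow':
        _, A1, B1, D1 = a1
        _, A2, B2, D2 = a2
        # input contravariant; output type and dirt covariant
        return ('split', [(A2, A1), (B1, B2), (D1, D2)])
    if a1[0] == a2[0] == 'handler':
        _, A1, D1, B1, D2 = a1
        _, A3, D3, B3, D4 = a2
        # handlers: contravariant input type and dirt, covariant output
        return ('split', [(A3, A1), (D3, D1), (B1, B3), (D2, D4)])
    return ('fail',)                  # incompatible top-level structure
```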
\paragraph{\bf Dirt Subtyping}
The final six cases deal with subtyping constraints between dirts.
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\small
\vspace{-2mm}
\[
\begin{array}{@{\hspace{0mm}}l}
\mathtt{solve}(\sigma;\ \mathcal{P}; \omega :\Delta \le \Delta', \mathcal{Q}) = \kpre{match} \Delta \le \Delta' \kpost{with} \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathcal{O} \cup \delta \le \mathcal{O}' \cup \delta' \mapsto
\kpre{if} \mathcal{O} \neq \emptyset \kop{then}
\letin{\sigma' = [((\mathcal{O} \backslash \mathcal{O}') \cup \delta'')/\delta',\mathcal{O} \cup \omega'/\omega]} \\[0.5mm]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\mathtt{solve}(\sigma' \cdot \sigma;\ \bullet; (\omega': \delta \leq \sigma'(\Delta')) , \sigma'(\mathcal{Q}, \mathcal{P}) ) \\[0.5mm]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\kop{else} \mathtt{solve}(\sigma;\ \mathcal{P}, (\omega :\Delta \le \Delta') ;\ \mathcal{Q}) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \emptyset \le \Delta' \mapsto \mathtt{solve}([\emptyset_{\Delta'}/\omega] \cdot \sigma;\ \mathcal{P} ;\ \mathcal{Q} ) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \delta \le \emptyset \mapsto \letin{\sigma' = [\emptyset/\delta ;\ \emptyset_\emptyset / \omega]}
\mathtt{solve}( \sigma' \cdot \sigma ;\ \bullet ;\ \sigma'(\mathcal{Q}, \mathcal{P}) ) \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathcal{O} \cup \delta \le \mathcal{O}' \mapsto \\[0.5mm]
~~~~~
\kpre{if} \mathcal{O} \subseteq \mathcal{O}' \kop{then} \letin{\sigma' = [\mathcal{O} \cup \omega'/\omega]} \mathtt{solve}(\sigma' \cdot \sigma;\ \mathcal{P}, (\omega' :\delta \le \mathcal{O}') ;\ \mathcal{Q}) \kop{else} \mathtt{fail} \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathcal{O} \le \mathcal{O}' \mapsto \kpre{if} \mathcal{O} \subseteq \mathcal{O}'
\kop{then} \letin{\sigma' = [\mathcal{O} \cup \emptyset_{\mathcal{O}' \backslash \mathcal{O}}/\omega]} \mathtt{solve}(\sigma' \cdot \sigma;\ \mathcal{P} ;\ \mathcal{Q})
\kop{else} \mathtt{fail} \\[0.5mm]
\quad \mathop{\text{\texttt{|}}} \mathcal{O} \le \mathcal{O}' \cup \delta' \mapsto
\letin{\sigma' = [(\mathcal{O} \backslash \mathcal{O}') \cup \delta'' /\delta',\ \mathcal{O} \cup \emptyset_{(\mathcal{O}' \backslash \mathcal{O}) \cup \delta''}/\omega]}
\mathtt{solve}(\sigma' \cdot \sigma;\ \bullet ;\ \sigma'(\mathcal{Q}, \mathcal{P})) \\
\end{array}
\]
\vspace{-2mm}
\end{minipage}\end{shaded}\end{center}
If the two dirts are of the general form $\mathcal{O} \cup \delta$ and $\mathcal{O}' \cup
\delta'$, we distinguish two subcases. Firstly, if $\mathcal{O}$ is empty, there
is nothing to be done and we move the constraint to the processed set. Secondly,
if $\mathcal{O}$ is non-empty, we partially instantiate $\delta'$ with the operations
that appear in $\mathcal{O}$ but not in $\mathcal{O}'$. We then drop $\mathcal{O}$ from the
constraint, and, after substitution, proceed with processing all constraints.
For instance, for $\{ \mathtt{Op}_1 \} \cup \delta \le \{ \mathtt{Op}_2 \} \cup \delta'$,
we instantiate $\delta'$ to $\{ \mathtt{Op}_1 \} \cup \delta''$---where $\delta''$ is a fresh dirt variable---and proceed with
the simplified constraint $\delta \le \{ \mathtt{Op}_1, \mathtt{Op}_2 \} \cup \delta''$.
Note that, due to the set semantics of dirts, it is not valid to simplify the
above constraint to $\delta \le \{ \mathtt{Op}_2 \} \cup \delta''$. After all,
the substitution $[\delta \mapsto \{ \mathtt{Op}_1 \}, \delta'' \mapsto \emptyset]$ solves both the
former and the original constraint, but not the latter.
The second case, $\emptyset \le \Delta'$, always holds and is discharged by
instantiating $\omega$ to $\emptyset_{\Delta'}$.
The third case, $\delta \le \emptyset$, has only one solution: $\delta \mapsto \emptyset$
with coercion $\emptyset_\emptyset$.
The fourth case, $\mathcal{O} \cup \delta \le \mathcal{O}'$, has as many solutions as there are
subsets of $\mathcal{O}'$, provided that $\mathcal{O} \subseteq \mathcal{O}'$. We then simplify the constraint
to $\delta \le \mathcal{O}'$, which we move to the set of processed constraints.
The fifth case, $\mathcal{O} \le \mathcal{O}'$, holds iff $\mathcal{O} \subseteq \mathcal{O}'$.
The last case, $\mathcal{O} \le \mathcal{O}' \cup \delta'$, is like the first, but
without a dirt variable on the left-hand side. We can satisfy it in a similar
fashion, by partially instantiating $\delta'$ with $(\mathcal{O} \setminus \mathcal{O}')
\cup \delta''$---where $\delta''$ is a fresh dirt variable. The constraint is then
satisfied and can be discarded.
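The first dirt case and the counterexample above can be replayed with plain Python sets; `dirt_step` and `satisfies` are illustrative names for this sketch, not part of the algorithm's specification:

```python
def dirt_step(O, O2):
    """One solver step on O ∪ δ ≤ O2 ∪ δ' when O is non-empty.

    Dirts are sets of operation names.  δ' is instantiated to
    (O \ O2) ∪ δ'' for a fresh δ'', and the residual constraint becomes
    δ ≤ (O ∪ O2) ∪ δ''.  Returns (instantiation_part, residual_upper).
    """
    assert O, "this case requires a non-empty O"
    return O - O2, O | O2

def satisfies(delta, dpp, upper):
    # Does the substitution [δ ↦ delta, δ'' ↦ dpp] solve δ ≤ upper ∪ δ''?
    return delta <= (upper | dpp)
```

For the paper's example $\{\mathtt{Op}_1\} \cup \delta \le \{\mathtt{Op}_2\} \cup \delta'$, `dirt_step({'Op1'}, {'Op2'})` instantiates $\delta'$ with `{'Op1'}` (plus $\delta''$) and yields the residual upper bound `{'Op1', 'Op2'}`; the substitution $\delta \mapsto \{\mathtt{Op}_1\}, \delta'' \mapsto \emptyset$ then satisfies the residual constraint but not the over-simplified bound `{'Op2'}`.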
\subsection{Syntax of \textsc{NoEff}\xspace}\label{sec:ml-syntax}
\begin{myfigure}[t]
\input{Figures/ml-syntax}
\vspace{-5mm}
\caption{\textsc{NoEff}\xspace Syntax}
\label{fig:ml-syntax}
\end{myfigure}
Figure~\ref{fig:ml-syntax} presents the syntax of \textsc{NoEff}\xspace. Notably \textsc{NoEff}\xspace
replaces \textsc{ExEff}\xspace's two syntactic sorts of values and computations by a single
syntactic sort of terms that combines their syntactic forms. The four
absent forms are dirt and skeleton abstraction and application, as \textsc{NoEff}\xspace does not feature
either dirt or skeletons. Similarly, \textsc{ExEff}\xspace's syntactic sorts for value types $\vty$ and computation types $\cty$ are merged
into a single sort of types $A$. Here \textsc{ExEff}\xspace's computation types of the form
$\withdirt{\vty}{\Delta}$ are replaced by \textsc{NoEff}\xspace's computation types $\mkMlCompTy{A}$ without dirt.
The absence of dirt can also be seen in \textsc{NoEff}\xspace's coercion types $\mlCoTy$, which do not feature a form
for dirt subtyping.
Finally, \textsc{NoEff}\xspace features adapted versions of \textsc{ExEff}\xspace's type coercions. Absent
are those related to dirt and skeletons, and the computation type coercion is abstracted to
the form $(\mkMlCompCo{\mlCoercion})$ which does not feature a dirt coercion.
There are also four new coercion forms
($\handToFun{\mlCoercion_1}{\mlCoercion_2}$, $\funToHand{\mlCoercion_1}{\mlCoercion_2}$, $\mlReturn{\mlCoercion}$ and
$\unsafe{\mlCoercion}$) which enable the elaboration from \textsc{ExEff}\xspace into \textsc{NoEff}\xspace;
we explain their semantics when we discuss typing and their purpose when
explaining the elaboration.
\subsection{Typing of \textsc{NoEff}\xspace}\label{sec:ml-typing}
We now turn to typing of \textsc{NoEff}\xspace. First, we introduce \textsc{NoEff}\xspace typing
environments; they are identical to those for \textsc{ExEff}\xspace, modulo dirt and skeleton
information:
\[
\begin{array}{r@{~}c@{~}l}
\Gamma & ::= & \epsilon \mid
\Gamma, \alpha \mid
\Gamma, x : A \mid
\Gamma, \omega : \mlCoTy \\
\end{array}
\]
\noindent
The remainder of this section gives the typing judgements for terms
(Section~\ref{sec:ml-term-typing}) and coercions
(Section~\ref{sec:ml-coercion-typing}); uninteresting judgements like
well-formedness of types ($\tcNoEffTy{\Gamma}{A}$) and well-formedness of
constraints ($\tcNoEffCoTy{\Gamma}{\mlCoTy}$) are included in
Appendix~\ref{appendix:ml-additional}.
\subsubsection{Term Typing}\label{sec:ml-term-typing}
\begin{myfigure}[t]
\input{Figures/ml-term-typing}
\vspace{-5mm}
\caption{\textsc{NoEff}\xspace Term Typing}
\label{fig:ml-term-typing}
\end{myfigure}
Typing for \textsc{NoEff}\xspace terms is given by judgement
$\tcNoEffTm{\Gamma}{\mathit{t}}{A}$, which is presented in
Figure~\ref{fig:ml-term-typing}. The rules are similar to those of
\textsc{ImpEff}\xspace and \textsc{ExEff}\xspace, with the exception of
dirt and skeleton features, which are absent in \textsc{NoEff}\xspace.
There is one subtle point: By design, type $A \Rrightarrow B$ classifies
handlers that handle terms of type $\mkMlCompTy{A}$ and produce results of
type $\mkMlCompTy{B}$. This way, we enforce that handlers always take
computations to computations. If the input is not a computation, we can use a
regular function instead of a handler. So this restriction matters little.
More importantly, by forcing the output to be a computation, we avoid a
potential source of unsoundness in \textsc{NoEff}\xspace. Indeed, because the type system
does not track which operations are performed in the input computation, we
cannot tell whether or not they will all be handled. Of course, we do want any
operation that is not handled to be forwarded to the output, just like in
\textsc{ExEff}\xspace. Hence, because we cannot statically tell in \textsc{NoEff}\xspace whether any
operations will be forwarded, to remain on the safe side we have to assume that
there may be some. Thus, with forwarded operations, the output must be a
computation. We will see that this causes additional difficulties in the elaboration
from \textsc{ExEff}\xspace to \textsc{NoEff}\xspace.
\subsubsection{Coercion Typing}\label{sec:ml-coercion-typing}
\begin{myfigure}[t]
\input{Figures/ml-coercion-typing}
\vspace{-5mm}
\caption{\textsc{NoEff}\xspace Coercion Typing}
\label{fig:ml-coercion-typing}
\end{myfigure}
Coercion typing is given by judgement
$\tcNoEffCoercion{\Gamma}{\mlCoercion}{\mlCoTy}$, presented in
Figure~\ref{fig:ml-coercion-typing}. Most of the rules are straightforward so
we only focus on the four new coercion forms.
The first new coercion form ($\handToFun{\mlCoercion_1}{\mlCoercion_2}$) concerns the
issue of handler typing above. It converts a handler, which expects a
computation as input, into a function, which can be applied to a
non-computation. The next coercion form ($\funToHand{\mlCoercion_1}{\mlCoercion_2}$) is its
dual; it turns a function into a handler that only specifies how to handle the $\mathtt{return}$
case and forwards all operations.
The third new coercion form ($\mlReturn{\mlCoercion}$) promotes a value $\mathit{t}$ of any type $A$
to a computation $\return{\mathit{t}}$ of type $\mkMlCompTy{A}$.
The last new coercion form ($\unsafe{\mlCoercion}$) is the dual of the previous form. It forces
a value of computation type $\mkMlCompTy{A}$ to a value of type $A$. This only
works when the value is of the form $\return{\mathit{t}}$ and in that case yields $\mathit{t}$.
If the computation is of the form $\operation{\mathit{t}_1}{y : B}{\mathit{t}_2}$, the cast
gets stuck; hence its name. We will see that this is the single source of type unsafety in \textsc{NoEff}\xspace, though we claim that programs elaborated from \textsc{ExEff}\xspace into \textsc{NoEff}\xspace only use this coercion
in a safe way and never get stuck.
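The operational reading of the $\mathtt{return}$ and $\mathtt{unsafe}$ coercions can be sketched as follows; the term and coercion encodings (`push_coercion`, `push_value`) are our own illustration, covering only identity value coercions:

```python
def push_coercion(co, term):
    """Apply a computation-level coercion to a term.

    Terms: ('return', t) for return t, ('op', name, k) for an operation
    call with continuation k, or a bare value.  Coercions: ('return', co)
    and ('unsafe', co) with co a value coercion.
    """
    kind, inner = co
    if kind == 'return':
        # return co : A <= Comp B (given co : A <= B): wrap as a computation
        return ('return', push_value(inner, term))
    if kind == 'unsafe':
        if term[0] == 'return':
            # strip the return and expose the underlying value
            return push_value(inner, term[1])
        return ('stuck', term)   # unsafe applied to an operation: stuck
    raise ValueError(co)

def push_value(co, v):
    # Identity value coercions only; enough for this illustration.
    assert co == 'refl'
    return v
```

As in the text, the only way to get stuck is to apply `('unsafe', ...)` to an operation call; elaborated programs are claimed never to do so.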
\subsection{Operational Semantics of \textsc{NoEff}\xspace}\label{sec:ml-opsem}
\begin{myfigure}[t]
\input{Figures/ml-opsem-selected}
\vspace{-5mm}
\caption{\textsc{NoEff}\xspace Operational Semantics (Selected Rules)}
\label{fig:ml-opsem-selected}
\end{myfigure}
Figure~\ref{fig:ml-opsem-selected} presents selected rules of \textsc{NoEff}\xspace's
small-step, call-by-value operational semantics. We omit
the other rules, as they closely follow those for \textsc{ExEff}\xspace, adjusted for the amalgamation of values and computations. The complete operational semantics can be found in Appendix~\ref{appendix:ml-additional}.
The first rule pushes the cast onto the returned value; in contrast to \textsc{ExEff}\xspace, there is no effect information to lose, so this reduction is type-preserving. This enables the second and third rules, which are simplified variants of the corresponding \textsc{ExEff}\xspace rules: because all coercions can be pushed into $\mlTm^R$, there is no need to extract their pure parts before substituting $\mlTm^R$ for a variable. The remaining five rules capture the semantics of the newly introduced coercion forms, exactly as described in Section~\ref{sec:ml-coercion-typing}.
\paragraph{The \textsc{NoEff}\xspace Metatheory}
We have proven a weak form of type safety for \textsc{NoEff}\xspace in terms
of type preservation and (partial) progress theorems. The latter
characterises the way in which well-typed terms can get stuck.
\begin{theorem}[Preservation]
If $\tcNoEffTm{\Gamma}{\mathit{t}}{A}$ and
$\mlSmallStep{\mathit{t}}{\mathit{t}'}$, then $\tcNoEffTm{\Gamma}{\mathit{t}'}{A}$.
\label{thm:ml-preservation}
\end{theorem}
\begin{theorem}[Partial Progress]
\label{thm:ml-progress}
If $\tcNoEffTm{\Gamma}{\mathit{t}}{A}$ then either
\begin{inparaenum}[(a)]
\item
$\mathit{t}$ is a value,
\item
$\mlSmallStep{\mathit{t}}{\mathit{t}'}$, or
\item
$\mathit{t}$ is {\em``stuck''}.
\end{inparaenum}
\end{theorem}
Stuck terms are defined as follows:
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\vspace{-4mm}
\small
\[
\begin{array}{r@{~}c@{~}l}
\mlTm^S & ::= & \cast{\operation{\mlTm^R}{y : A}{\mathit{t}}}{\unsafe{\mlCoercion}} \mid
\mlTm^S~A \mid
\cast{\mlTm^S}{\mlCoercion} \mid
\mlTm^S~\mlCoercion \mid
\mlTm^S~\mathit{t} \mid
\mlTm^R~\mlTm^S \mid
\letval{x}{\mlTm^S}{\mathit{t}} \mid
\return{\mlTm^S} \\
& \mid & \operation{\mlTm^S}{y : A}{\mathit{t}} \mid
\doin{x}{\mlTm^S}{\mathit{t}} \mid
\withhandle{\mlTm^S}{\mathit{t}_c} \mid
\withhandle{\mlTm^R}{\mlTm^S} \\
\end{array}
\]
\vspace{-4mm}
\end{minipage}\end{shaded}\end{center}
The first case is the essential one, while the remaining ones just provide an
evaluation context around it. Hence, terms only get stuck when an unsafe
coercion is applied to an operation. As we have already indicated, we claim
that elaborated \textsc{NoEff}\xspace programs never end up in this situation.
\subsection{Elaboration of \textsc{ExEff}\xspace to \textsc{NoEff}\xspace}\label{sec:eff-to-ml}
\subsubsection{Type Elaboration}
\label{sec:eff-to-ml-types}
Figure~\ref{fig:eff-to-ml-types} presents the elaboration of value types
($\valTyToNoEff{\Gamma}{\vty}{\tau}{A}$) and computation types
($\compTyToNoEff{\Gamma}{\cty}{\tau}{A}$). The latter captures the main
idea of the whole elaboration: when the dirt $\Delta$ of a computation type is
empty, the elaboration of the computation type $\vty \bang \Delta$ is just the
elaboration of the value type $\vty$. If it is non-empty, the elaborated
value type $A$ is wrapped in a computation type, $\mkMlCompTy{A}$.
We cannot always tell whether $\Delta$ is empty, namely when it is
a dirt variable $\delta$. Our conservative solution is to treat
dirt variables as non-empty. This works because we can always represent
a term $\mathit{t}$ of type $A$ in terms of a trivial computation $\kpre{return}{t}$ of
type $\mkMlCompTy{A}$.
Most cases for value types are straightforward, but a few are worth mentioning.
Firstly, to respect the particularities of \textsc{NoEff}\xspace handler types explained in
Section~\ref{sec:ml-term-typing}, we distinguish two different cases for elaborating \textsc{ExEff}\xspace handler types.
Recall that if a computation type has an empty dirt, it is elaborated to some pure type~$A$, not a computation type~$\mkMlCompTy{A}$ that handlers expect. Correspondingly, handler types with empty input dirts are elaborated into
\textsc{NoEff}\xspace function types. If the dirt is non-empty, we unavoidably elaborate to a \textsc{NoEff}\xspace handler type.
Note that in the latter case, we ignore whether or not the output computation type has
an empty dirt; the \textsc{NoEff}\xspace handler type always implicitly assumes an output computation type.
Secondly, since dirts and skeletons are absent from \textsc{NoEff}\xspace, the elaboration drops
universal quantification over skeletons and dirts, as well as dirt subtyping qualifiers.
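The two-level elaboration of types described above can be sketched in Python; the tuple encodings and the names `elab_val`/`elab_comp` are assumptions of this illustration:

```python
def empty(dirt):
    # A dirt is provably empty only when it is the literal empty set;
    # dirt variables (strings) are conservatively treated as non-empty.
    return dirt == frozenset()

def elab_comp(cty):
    """Elaborate an ExEff computation type ('comp', vty, dirt):
    drop the dirt if it is empty, otherwise wrap in a Comp type."""
    _, a, d = cty
    A = elab_val(a)
    return A if empty(d) else ('comp', A)

def elab_val(vty):
    """Elaborate an ExEff value type to a NoEff type (selected cases)."""
    tag = vty[0]
    if tag == 'unit':
        return ('unit',)
    if tag == 'arrow':                        # ('arrow', A, C)
        return ('arrow', elab_val(vty[1]), elab_comp(vty[2]))
    if tag == 'handler':                      # ('handler', C1, C2)
        (_, a1, d1), c2 = vty[1], vty[2]
        if empty(d1):
            # empty input dirt: elaborate the handler to a function type
            return ('arrow', elab_val(a1), elab_comp(c2))
        # non-empty input dirt: a NoEff handler type; the output is
        # always wrapped as a computation, regardless of its dirt
        return ('handler', elab_val(a1), ('comp', elab_val(c2[1])))
    raise ValueError(vty)
```

Note how a handler type with a non-empty input dirt always gets a computation output, mirroring the discussion of \textsc{NoEff}\xspace handler types.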
\begin{myfigure}[t]
\input{Figures/eff-to-ml-types}
\vspace{-5mm}
\caption{Elaboration of \textsc{ExEff}\xspace Types to \textsc{NoEff}\xspace Types}
\label{fig:eff-to-ml-types}
\end{myfigure}
\paragraph{Coercion Elaboration}
We now turn to the elaboration of \textsc{ExEff}\xspace coercions to \textsc{NoEff}\xspace coercions.
Most cases are straightforward and either copy an \textsc{ExEff}\xspace coercion to its \textsc{NoEff}\xspace
counterpart, or drop a dirt- or skeleton-related \textsc{ExEff}\xspace construct that is not
present in \textsc{NoEff}\xspace. Hence, we only discuss the interesting cases here; the
complete definition can be found in Appendix~\ref{appendix:elab-additional}.
Two groups of rules do deserve additional explanation. The first group concerns
the elaboration of handler coercions. If we compare the input dirts of the
source and target handler types of the coercion, there are three different
cases: either both are empty, both are non-empty, or the source input dirt is
non-empty and the target input dirt is empty. The fourth
combination is not possible due to the monotonicity
of subtyping and the contravariance in the input argument.
In the first case, both the source and the target \textsc{ExEff}\xspace type elaborate to
\textsc{NoEff}\xspace function types, and thus the coercion is elaborated to a function
coercion:
\begin{mathpar}
\small
\inferrule*[right=]
{ \coToNoEff{\Gamma}{\coercion_1}{\vty_2~!~\emptyset \le \vty_1~!~\emptyset}{\mlCoercion'_1} \\
\coToNoEff{\Gamma}{\coercion_2}{\cty_1 \le \cty_2}{\mlCoercion'_2}
}
{ \coToNoEff{\Gamma}{\coercion_1 \Rrightarrow \coercion_2}{(\vty_1~!~\emptyset \Rrightarrow \cty_1) \le (\vty_2~!~\emptyset \Rrightarrow \cty_2)}{\mlCoercion'_1 \to \mlCoercion'_2} }
\end{mathpar}
In the second case, both types elaborate to \textsc{NoEff}\xspace handler types, and thus the
whole coercion is elaborated to a \textsc{NoEff}\xspace handler coercion:
\begin{mathpar}
\small
\inferrule*[right=]
{ \fullDirt{\Delta_1} \\
\fullDirt{\Delta_2} \\\\
\coToNoEff{\Gamma}{\coercion_1}{(\vty_2~!~\Delta_2 \le \vty_1~!~\Delta_1)}{\mlCoercion'_1} \\
\coToNoEff{\Gamma}{\coercion_2}{\vty'_1 \le \vty'_2}{\mlCoercion'_2} \\
\tcTrgCo{\Gamma}{\coercion_3}{\Delta'_1 \le \Delta'_2}
}
{ \coToNoEff{\Gamma}
{(\coercion_1 \Rrightarrow (\coercion_2~!~\coercion_3))}
{((\vty_1~!~\Delta_1) \Rrightarrow (\vty'_1~!~\Delta'_1)) \le ((\vty_2~!~\Delta_2) \Rrightarrow (\vty'_2~!~\Delta'_2))}
{\mlCoercion'_1 \Rrightarrow \mkMlCompCo{\mlCoercion'_2}}
}
\end{mathpar}
In the third case the elaborated source type is a handler type and the target
type a function type. Here we use the $\mathtt{handToFun}$ coercion to bridge
between the two. There are two subcases to consider though, depending on
whether the source output dirt is empty or not:
\begin{mathpar}
\small
\inferrule*[right=]
{ \fullDirt{\Delta_1} \\
\coToNoEff{\Gamma}{\coercion_1}{\vty_2 \le \vty_1}{\mlCoercion'_1} \\
\coToNoEff{\Gamma}{\coercion_2}{(\vty'_1 \le \vty'_2)}{\mlCoercion'_2} \\
\tcTrgCo{\Gamma}{\coercion_3}{\emptyset \le \Delta_1} \\
\tcTrgCo{\Gamma}{\coercion_4}{\emptyset \le \Delta'_2}
}
{ \coToNoEff{\Gamma}
{(\coercion_1~!~\coercion_3 \Rrightarrow \coercion_2~!~\coercion_4)}
{((\vty_1~!~\Delta_1 \Rrightarrow \vty'_1~!~\emptyset) \le (\vty_2~!~\emptyset \Rrightarrow \vty'_2~!~\Delta'_2))}
{\handToFun{\mlCoercion'_1}{(\unsafe{\mlCoercion'_2})}}
}
\inferrule*[right=]
{ \fullDirt{\Delta_1} \\
\fullDirt{\Delta'_1} \\
\coToNoEff{\Gamma}{\coercion_1}{\vty_2 \le \vty_1}{\mlCoercion'_1} \\
\coToNoEff{\Gamma}{\coercion_2}{(\vty'_1 \le \vty'_2)}{\mlCoercion'_2} \\
\tcTrgCo{\Gamma}{\coercion_3}{\emptyset \le \Delta_1} \\
\tcTrgCo{\Gamma}{\coercion_4}{\Delta'_1 \le \Delta'_2}
}
{ \coToNoEff{\Gamma}
{(\coercion_1~!~\coercion_3 \Rrightarrow \coercion_2~!~\coercion_4)}
{((\vty_1~!~\Delta_1 \Rrightarrow \vty'_1~!~\Delta'_1) \le (\vty_2~!~\emptyset \Rrightarrow \vty'_2~!~\Delta'_2))}
{\handToFun{\mlCoercion'_1}{\mlCoercion'_2}}
}
\end{mathpar}
In the former case, \textsc{NoEff}\xspace does not respect the emptiness in the elaborated
handler type, but does respect it in the elaboration of $\coercion_2$. To
bridge the discrepancy that arises here, we insert an $\mathtt{unsafe}$ coercion. In
the latter case, no discrepancy arises, and no $\mathtt{unsafe}$ coercion is needed.
The second group of interest concerns the elaboration of computation type
coercions. Again we distinguish three different cases based on the source and
target dirt. If both are empty, the computation type coercion is elaborated
like the underlying value type coercion $\coercion_1$:
\begin{mathpar}
\small
\inferrule*[right=]
{ \coToNoEff{\Gamma}{\coercion_1}{\vty_1 \le \vty_2}{\mlCoercion'_1} \\
\tcTrgCo{\Gamma}{\coercion_2}{\emptyset \le \emptyset}
}
{ \coToNoEff{\Gamma}{(\coercion_1~!~\coercion_2)}{(\vty_1~!~\emptyset \le \vty_2~!~\emptyset)}{\mlCoercion'_1} }
\end{mathpar}
If the source dirt is empty but the target dirt is non-empty, the source is
pure while the target is impure; we bridge this mismatch with a $\mathtt{return}$ coercion:
\begin{mathpar}
\small
\inferrule*[right=]
{ \coToNoEff{\Gamma}{\coercion_1}{\vty_1 \le \vty_2}{\mlCoercion'_1} \\
\tcTrgCo{\Gamma}{\coercion_2}{\emptyset \le \Delta_2} \\
\fullDirt{\Delta_2}
}
{ \coToNoEff{\Gamma}{(\coercion_1~!~\coercion_2)}{(\vty_1~!~\emptyset \le \vty_2~!~\Delta_2)}{\mlReturn{\mlCoercion'_1}} }
\end{mathpar}
In the third case, both dirts are non-empty, and we elaborate to a \textsc{NoEff}\xspace
computation type coercion $\mkMlCompCo{\mlCoercion'_1}$:
\begin{mathpar}
\small
\inferrule*[right=]
{ \coToNoEff{\Gamma}{\coercion_1}{\vty_1 \le \vty_2}{\mlCoercion'_1} \\
\tcTrgCo{\Gamma}{\coercion_2}{\Delta_1 \le \Delta_2} \\
\fullDirt{\Delta_1} \\
\fullDirt{\Delta_2}
}
{ \coToNoEff{\Gamma}{(\coercion_1~!~\coercion_2)}{(\vty_1~!~\Delta_1 \le \vty_2~!~\Delta_2)}{\mkMlCompCo{\mlCoercion'_1}} }
\end{mathpar}
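The three-way dispatch for computation type coercions can be summarised in a few lines; `elab_comp_coercion` is an illustrative helper of our own, with Booleans recording whether each dirt is provably empty:

```python
def elab_comp_coercion(d1_empty, d2_empty, co1):
    """Choose the NoEff form for an ExEff coercion
    co1 ! co2 : (vty1 ! D1) <= (vty2 ! D2), given the elaboration co1
    of the underlying value coercion."""
    if d1_empty and d2_empty:
        return co1                      # both pure: the value coercion itself
    if d1_empty:
        return ('return', co1)          # pure source, impure target
    if not d2_empty:
        return ('comp', co1)            # both impure: computation coercion
    # impure source with pure target is ruled out by subtyping
    raise ValueError("impossible case: D1 non-empty, D2 empty")
```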
\subsubsection{Value Elaboration}\label{sec:eff-to-ml-values}
Again, the elaboration of \textsc{ExEff}\xspace values into \textsc{NoEff}\xspace terms is mostly
straightforward, so we only discuss the interesting cases here; the complete
definition can be found in Appendix~\ref{appendix:elab-additional}. There are
two cases of interest: handlers and dirt applications.
\paragraph{\bf Handlers}
We have three rules describing different cases of elaborating handlers of type
$\withdirt{\vty_x}{\mathcal{O}} \Rrightarrow \withdirt{\vty}{\Delta}$. Recall from
Section~\ref{sec:eff-to-ml-types} that if $\mathcal{O} = \emptyset$, handlers need to
be elaborated into functions, which is described by the first of these three
rules:
\begin{mathpar}
\small
\inferrule*[right=]
{ \valTyToNoEff{\Gamma}{\vty}{\tau}{A} \\
\compToNoEff{\Gamma,x\!:\!\vty}{c_r}{\cty}{\mathit{t}}
}
{ \valToNoEff{\Gamma}{\{\return{(x : \vty)} \mapsto c_r\}}{\withdirt{\vty}{\emptyset} \Rrightarrow \cty}{\fun{(x:A)}{\mathit{t}}} }
\end{mathpar}
The second rule describes the case where $\mathcal{O}$ is non-empty, but $\Delta$ is
empty:
\begin{mathpar}
\small
\inferrule*[right=]
{ \fullDirt{\mathcal{O}} \\
\valTyToNoEff{\Gamma}{\vty_x}{\tau}{A} \\
\compToNoEff{\Gamma, x\!:\!\vty_x}{c_r}{\withdirt{\vty}{\emptyset}}{\mathit{t}_r} \\
\left[
(\mathtt{Op} : \vty_1^\mathtt{Op} \to \vty_2^\mathtt{Op}) \in \Sigma \quad
\valTyToNoEff{}{\vty_i^\mathtt{Op}}{\tau_i^\mathtt{Op}}{A_i^\mathtt{Op}} \qquad
\compToNoEff{\Gamma, x : \vty_1^\mathtt{Op}, k : \vty_2^\mathtt{Op} \to \vty\,!\,\emptyset}{c_\mathtt{Op}}{\vty\,!\,\emptyset}{\mathit{t}_\mathtt{Op}}
\right]_{\mathtt{Op} \in \mathcal{O}}
}
{ \Gamma \vdashNamedD{v} \trgShorthand : \withdirt{\vty_x}{\mathcal{O}} \Rrightarrow \withdirt{\vty}{\emptyset} \\
\highlight{\rightsquigarrow
{\handler{\return{(x : A)} \mapsto \return{\mathit{t}_r}
,\big[\call{\mathtt{Op}}{x}{k} \mapsto \return{\mathit{t}_\mathtt{Op}[\cast{k}{\refl{A_1^\mathtt{Op}} \to \unsafe{\refl{A_2^\mathtt{Op}}}}/k]}\big]_{\mathtt{Op} \in \mathcal{O}}}}
}
}
\end{mathpar}
Since $\mathcal{O}$ is non-empty, we do elaborate a handler into a handler, but there is an
important caveat. Recall from Section~\ref{sec:ml-term-typing} that, to ensure safe forwarding of
unhandled operations, handlers take computations to computations.
But as $\Delta$ is empty, handler clauses of type~$\withdirt{\vty}{\emptyset}$ are
elaborated to terms of type~$A$ (the elaboration of $\vty$), not $\mkMlCompTy{A}$ as expected.
We amend this by wrapping them with a $\mathtt{return}$. However, the handled continuations
now include an extraneous $\mathtt{return}$, which we remove with an $\mathtt{unsafe}$ coercion
before plugging them into the operation clause that expects $k$ to result in~$A$, not $\mkMlCompTy{A}$.
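This return-wrapping with an unsafe-cast continuation can be sketched as follows, modelling computations $\return{t}$ as tagged tuples; `wrap_op_clause` and `unsafe` are illustrative names for this sketch:

```python
def unsafe(term):
    # Strip a 'return'; stuck otherwise (which, by the elaboration
    # invariant, never happens for continuations of this handler).
    assert term[0] == 'return'
    return term[1]

def wrap_op_clause(clause):
    """Adapt an elaborated operation clause when the handler's output dirt
    is empty: the clause produces a plain A and expects a continuation k
    yielding A, but the handler's continuations yield Comp A.  We cast k
    (the analogue of refl -> unsafe refl) and wrap the result in return."""
    def adapted(k_comp):
        k_pure = lambda x: unsafe(k_comp(x))   # remove the extraneous return
        return ('return', clause(k_pure))      # re-wrap as a computation
    return adapted
```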
In the third rule, both $\mathcal{O}$ and $\Delta$ are non-empty, and the elaboration
is structural:
\begin{mathpar}
\small
\inferrule*[right=]
{ \fullDirt{\mathcal{O}} \\
\fullDirt{\Delta} \\
\valTyToNoEff{\Gamma}{\vty_x}{\tau}{A} \\
\compToNoEff{\Gamma, x\!:\!\vty_x}{c_r}{\withdirt{\vty}{\Delta}}{\mathit{t}_r} \\
\left[
(\mathtt{Op} : \vty_1^\mathtt{Op} \to \vty_2^\mathtt{Op}) \in \Sigma \qquad
\compToNoEff{\Gamma, x : \vty_1^\mathtt{Op}, k : \vty_2^\mathtt{Op} \to \vty\,!\,\Delta}{c_\mathtt{Op}}{\vty\,!\,\Delta}{\mathit{t}_\mathtt{Op}}
\right]_{\mathtt{Op} \in \mathcal{O}}
}
{ \Gamma \vdashNamedD{v} \trgShorthand : \withdirt{\vty_x}{\mathcal{O}} \Rrightarrow \withdirt{\vty}{\Delta} \\
\highlight{\rightsquigarrow
{\handler{\return{(x : A)} \mapsto {\mathit{t}_r}
,[\call{\mathtt{Op}}{x}{k} \mapsto {\mathit{t}_\mathtt{Op}}]_{\mathtt{Op} \in \mathcal{O}}}}
}
}
\end{mathpar}
\begin{myfigure}[t!]
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\vspace{-3mm}
$\ruleform{\fromImpureVal {\vty}{\Delta}{\mlCoercion}}$~\textbf{Value Type Coercion from Impure Dirt Instantiation}
\begin{mathpar}
\inferrule*[right=FiUnit]
{ }
{ \fromImpureVal{\mathtt{Unit}}{\Delta}{\langle \mathtt{Unit} \rangle} }
\inferrule*[right=FiArr]
{ \toImpureVal{\vty}{\Delta}{\mlCoercion_1} \\
\fromImpureComp{\cty}{\Delta}{\mlCoercion_2}
}
{ \fromImpureVal{\vty \to \cty}{\Delta}{\mlCoercion_1 \to \mlCoercion_2} }
\inferrule*[right=FiHand1]
{ \toImpureVal{\vty}{\Delta}{\mlCoercion_1} \\
\fromImpureComp{\cty}{\Delta}{\mlCoercion_2}
}
{ \fromImpureVal{\withdirt{\vty}{\emptyset} \Rrightarrow \cty}{\Delta}{\mlCoercion_1 \to \mlCoercion_2} }
\inferrule*[right=FiHand2]
{ \Delta_2[\emptyset/\delta] = \emptyset \\
\toImpureVal{\vty_1}{\emptyset}{\mlCoercion_1} \\
\fromImpureVal{\vty_2}{\emptyset}{\mlCoercion_2}
}
{ \fromImpureVal{\withdirt{\vty_1}{\delta} \Rrightarrow \withdirt{\vty_2}{\Delta_2}}{\emptyset}{\handToFun{\mlCoercion_1}{(\unsafe{\mlCoercion_2})}} }
\inferrule*[right=FiHand3]
{ \fullDirt{\Delta_2[\emptyset/\delta]} \\
\toImpureVal{\vty_1}{\emptyset}{\mlCoercion_1} \\
\fromImpureVal{\vty_2}{\emptyset}{\mlCoercion_2}
}
{ \fromImpureVal{\withdirt{\vty_1}{\delta} \Rrightarrow \withdirt{\vty_2}{\Delta_2}}{\emptyset}{\handToFun{\mlCoercion_1}{(\mkMlCompCo{\mlCoercion_2})}} }
\inferrule*[right=FiHand4]
{ \fullDirt{\Delta_1[\Delta/\delta]} \\
\toImpureComp{\withdirt{\vty_1}{\Delta_1}}{\Delta}{\mlCoercion_1} \\
\fromImpureComp[\Gamma, \delta']{\withdirt{\vty_2}{\delta'}}{\Delta}{\mlCoercion_2} \quad \text{fresh}\,\delta'
}
{ \fromImpureVal{\withdirt{\vty_1}{\Delta_1} \Rrightarrow \withdirt{\vty_2}{\Delta_2}}{\Delta}{\mlCoercion_1 \Rrightarrow \mlCoercion_2} }
\inferrule*[right=FiSkelAbs]
{ \fromImpureVal[\Gamma, \varsigma]{\vty}{\Delta}{\mlCoercion}
}
{ \fromImpureVal{\forall \varsigma. \vty}{\Delta}{\mlCoercion} }
\inferrule*[right=FiTyAbs]
{ \fromImpureVal[\Gamma, \alpha\!:\!\tau]{\vty}{\Delta}{\mlCoercion}
}
{ \fromImpureVal{\forall \alpha\!:\!\tau. \vty}{\Delta}{\forall \alpha. \mlCoercion} }
\inferrule*[right=FiDirtAbs]
{ \fromImpureVal[\Gamma, \delta']{\vty}{\Delta}{\mlCoercion}
}
{ \fromImpureVal{\forall \delta'.\vty}{\Delta}{\mlCoercion} }
\inferrule*[right=FiCoAbsTy]
{ \fromImpureVal{\vty}{\Delta}{\mlCoercion}
}
{ \fromImpureVal{\alpha_1 \le \alpha_2 \Rightarrow \vty}{\Delta}{\alpha_1 \le \alpha_2 \Rightarrow \mlCoercion} }
\inferrule*[right=FiCoAbsDirt]
{ \fromImpureVal{\vty}{\Delta}{\mlCoercion}
}
{ \fromImpureVal{\Delta_1 \le \Delta_2 \Rightarrow \vty}{\Delta}{\mlCoercion} }
\end{mathpar}
$\ruleform{\fromImpureComp {\cty}{\Delta}{\mlCoercion}}$~\textbf{Computation Type Coercion from Impure Dirt Instantiation}
\begin{mathpar}
\inferrule*[right=FiCmp1]
{ \fromImpureVal{\vty}{\Delta}{\mlCoercion} }
{ \fromImpureComp{\vty~!~\emptyset}{\Delta}{\mlCoercion} }
\inferrule*[right=FiCmp2]
{ \fromImpureVal{\vty}{\emptyset}{\mlCoercion} }
{ \fromImpureComp{\vty~!~\delta}{\emptyset}{\unsafe{\mlCoercion}} }
\inferrule*[right=FiCmp3]
{ \fullDirt{\Delta'[\Delta/\delta]} \\
\fromImpureVal{\vty}{\Delta}{\mlCoercion}
}
{ \fromImpureComp{\vty~!~\Delta'}{\Delta}{\mkMlCompCo{\mlCoercion}} }
\end{mathpar}
$\ruleform{\toImpureVal {\vty}{\Delta}{\mlCoercion}}$~\textbf{Value Type Coercion to Impure Dirt Instantiation}
\[
\text{defined dually to $\fromImpureVal {\vty}{\Delta}{\mlCoercion}$}
\]
$\ruleform{\toImpureComp {\cty}{\Delta}{\mlCoercion}}$~\textbf{Computation Type Coercion to Impure Dirt Instantiation}
\[
\text{defined dually to $\fromImpureComp{\vty}{\Delta}{\mlCoercion}$}
\]
\vspace{-5mm}
\end{minipage}\end{shaded}\end{center}
\vspace{-5mm}
\caption{Type Coercions from and to an Impure Dirt Instantiation}
\label{fig:eff-to-ml-impure}
\end{myfigure}
\paragraph{\bf Dirt applications}
The elaboration of dirt applications possibly needs to bridge between an impure
and a pure type. Consider for instance a \textsc{ExEff}\xspace value $v$ of type $\forall
\delta. \mathtt{Unit} \to \withdirt{\mathtt{Unit}}{\delta}$ which is applied to the
empty dirt; thus the type of the dirt application is $\mathtt{Unit} \to
\withdirt{\mathtt{Unit}}{\emptyset}$. The elaboration of the former type is $\mathtt{Unit}
\to \mkMlCompTy{\mathtt{Unit}}$, while the latter is $\mathtt{Unit} \to \mathtt{Unit}$.
Such elaborations are handled by the following rule:
\begin{mathpar}
\small
\inferrule*
{ \valToNoEff{\Gamma}{v}{\forall \delta. \vty}{\mathit{t}} \\
\fromImpureVal{\vty}{\Delta}{\mlCoercion}
}
{ \valToNoEff{\Gamma}{v~\Delta}{\vty[\Delta/\delta]}{\cast{\mathit{t}}{\mlCoercion}} }
\end{mathpar}
where for a given $v$ of type $\forall \delta. \vty$, we need a
coercion~$\mlCoercion$ from the elaboration of $\vty$ (recall this is done
under the assumption $\fullDirt{\delta}$) to the elaboration of $\vty[\Delta /
\delta]$. Such a coercion is produced by the judgement
$\fromImpureVal{\vty}{\Delta}{\mlCoercion}$, driven by the structure of $\vty$.
This judgement is defined in Figure~\ref{fig:eff-to-ml-impure} alongside
the judgement $\fromImpureComp{\cty}{\Delta}{\mlCoercion}$ for computation types.
In addition, there are two dual judgements
$\toImpureVal{\vty}{\Delta}{\mlCoercion}$ and
$\toImpureComp{\cty}{\Delta}{\mlCoercion}$ for the opposite coercions, which are
used on types in contravariant positions. We have omitted their definitions
because they are obtained by flipping the sides of all
$\mapsto$ arrows, and replacing $\mathtt{unsafe}$ with $\mathtt{return}$ and $\mathtt{handToFun}$
with $\mathtt{funToHand}$.
Most rules of these judgements are straightforward congruences.
The main rule of interest is the one that produces an $\mathtt{unsafe}$ coercion
where the dirt variable $\delta$ in a computation type
$\withdirt{\vty}{\delta}$ is instantiated to the empty dirt $\emptyset$
(Rule~\textsc{FiCmp2}). In that case, the elaboration of the polymorphic
abstraction conservatively assumes the computation is impure, while the
elaboration of its instantiation accurately knows it is pure.
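To build operational intuition for the $\mathtt{unsafe}$ coercion, the following Python sketch models computations as a tiny free-monad-like datatype. All names here (`Return`, `Op`, `unsafe_value`) are our own illustrative choices, not part of \textsc{NoEff}\xspace:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    """A pure computation carrying its result value."""
    value: Any

@dataclass
class Op:
    """An unhandled operation call with its pending continuation."""
    name: str
    cont: Callable[[Any], Any]

def unsafe_value(comp):
    # The unsafe coercion extracts the result from a computation that
    # the type system promises is pure; the run-time check exists only
    # to make this sketch fail loudly if the promise is violated.
    if isinstance(comp, Return):
        return comp.value
    raise RuntimeError(f"unsafe: unhandled operation {comp.name}")
```

In this reading, Rule~\textsc{FiCmp2} inserts exactly this kind of extraction at the point where a dirt variable is instantiated to $\emptyset$.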
A further case that deserves attention is that of the handler type, where four
different rules (Rules~\textsc{FiHand1},~\textsc{FiHand2},~\textsc{FiHand3},
and~\textsc{FiHand4}) cover the possible scenarios related to elaboration into
handler and function types.
Note that in Rule~\textsc{FiCoAbsTy} we have restricted the case of $\vty_1
\le \vty_2 \Rightarrow \vty$ to situations where $\vty_1$ and $\vty_2$ are both
type variables. This is not a severe restriction, because subtyping constraints
can be simplified to this form; this simplification is precisely what our type
inference algorithm does. Moreover, there is a good reason to impose the
syntactic restriction. Consider the trivial reflexive subtyping constraint
$(\mathtt{Unit} \to \withdirt{\mathtt{Unit}}{\delta}) \le (\mathtt{Unit} \to
\withdirt{\mathtt{Unit}}{\delta})$. If we conservatively assume that $\delta$ is
non-empty, the constraint is elaborated to $(\mathtt{Unit} \to \mkMlCompTy{\mathtt{Unit}})
\le (\mathtt{Unit} \to \mkMlCompTy{\mathtt{Unit}})$, whereas, if $\delta$ is instantiated
to $\emptyset$, the constraint is elaborated to $(\mathtt{Unit} \to \mathtt{Unit}) \le
(\mathtt{Unit} \to \mathtt{Unit})$. Hence, we would need to be able to coerce a coercion
for the former constraint to a coercion for the latter, and vice versa. This
would require a complication of the \textsc{NoEff}\xspace language with additional coercion
forms to accomplish this coercion of coercions, which, happily, the above
syntactic restriction allows us to avoid.
\subsubsection{Computation Elaboration}\label{sec:eff-to-ml-computations}
\begin{myfigure}[t]
\input{Figures/eff-to-ml-computations}
\vspace{-5mm}
\caption{Elaboration of \textsc{ExEff}\xspace Computations to \textsc{NoEff}\xspace Terms}
\label{fig:eff-to-ml-computations}
\end{myfigure}
Finally, Figure~\ref{fig:eff-to-ml-computations} defines how \textsc{ExEff}\xspace computations
are elaborated into \textsc{NoEff}\xspace terms. There are a number of interesting cases.
Firstly, because $(\return{v})$ has an empty dirt, its elaborated form drops
the $\mathtt{return}$ (Rule~\textsc{CRet}).
Secondly, $\mathtt{do}$-computations are translated to either $\mathtt{let}$- or
$\mathtt{do}$-terms, depending on whether the dirt is empty or not
(Rules~\textsc{CDo1} and~\textsc{CDo2}, respectively).
Thirdly, handler applications are elaborated in three possible ways. If the
input dirt of the handler is empty, it is elaborated as a function and thus the
handler application too should be elaborated as function application
(Rule~\textsc{CHandle1}).
Otherwise, a handler application is indeed elaborated as a handler application.
If the output dirt is non-empty, the translation is straightforward
(Rule~\textsc{CHandle3}).
However, if the output dirt is empty, then the elaboration of the handler still
produces a computation where none is expected. Hence, we insert an
$\mathtt{unsafe}$ coercion to bridge the gap (Rule~\textsc{CHandle2}).
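The three-way case split on dirts can be summarised in a small Python sketch; the function name and the returned labels are illustrative, not part of the formal rules:

```python
def elaborate_handler_application(input_dirt: frozenset, output_dirt: frozenset) -> str:
    # Empty input dirt: the handler itself was elaborated to a function,
    # so the application becomes a function application (CHandle1).
    if not input_dirt:
        return "function application"
    # Non-empty output dirt: the application stays a handler application
    # and the types already line up (CHandle3).
    if output_dirt:
        return "handler application"
    # Non-empty input but empty output dirt: a pure result is expected,
    # so an unsafe coercion bridges the gap (CHandle2).
    return "handler application cast by unsafe"
```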
\begin{example}
\label{exa:running-noeff}
Elaboration of terms to \textsc{NoEff}\xspace again depends on the type of an \textsc{ExEff}\xspace term. A monomorphic function
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & (\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset) \to \mathtt{Unit}~!~\emptyset \\
& = & \fun{(g : \mathtt{Unit} \to \mathtt{Unit}~!~\emptyset)}{g~\mathtt{unit}} \\
\mathtt{in}~\ldots \\
\end{array}
\]
is erased to
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & (\mathtt{Unit} \to \mathtt{Unit}) \to \mathtt{Unit} \\
& = & \fun{(g : \mathtt{Unit} \to \mathtt{Unit})}{g~\mathtt{unit}} \\
\mathtt{in}~\ldots \\
\end{array}
\]
as before, while its polymorphic variant
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & \forall \varsigma. \forall \alpha : \varsigma. \forall \alpha' : \varsigma. \forall \delta. \forall \delta'.
(\alpha \le \alpha') \Rightarrow (\delta \le \delta') \Rightarrow (\mathtt{Unit} \to \alpha~!~\delta) \to \alpha' ~!~ \delta' \\
& = & \Lambda \varsigma. \Lambda (\alpha : \varsigma). \Lambda (\alpha' : \varsigma). \Lambda \delta. \Lambda \delta'. \Lambda (\omega : \alpha \le \alpha'). \Lambda (\omega' : \delta \le \delta'). \\
& & \quad \fun{(g : \mathtt{Unit} \to \alpha \,!\, \delta)}{(\cast{(g~\mathtt{unit})}{(\omega \,!\, \omega')})} \\
\mathtt{in}~\ldots \\
\end{array}
\]
is conservatively elaborated to an impure
\[
\begin{array}{l@{\hspace{1mm}}c@{\hspace{1mm}}l}
\mathtt{let}~~\mathit{f} & : & \forall \alpha. \forall \alpha'.
(\alpha \le \alpha') \Rightarrow (\mathtt{Unit} \to \mkMlCompTy{\alpha}) \to \mkMlCompTy{\alpha'} \\
& = & \Lambda \alpha. \Lambda \alpha'. \Lambda (\omega : \alpha \le \alpha'). \\
& & \quad \fun{(g : \mathtt{Unit} \to \mkMlCompTy{\alpha})}{(\cast{(g~\mathtt{unit})}{\omega})} \\
\mathtt{in}~\ldots \\
\end{array}
\]
Note that in contrast to the erasure to \textsc{SkelEff}\xspace, we keep type variables $\alpha$ and $\alpha'$, while removing any mention of their skeleton $\varsigma$.
As before, we remove any effect annotations, conservatively assuming that dirt variables are impure,
but keep an explicit coercion $\omega$ between types.
Recall that in \textsc{ExEff}\xspace, the application $f~\mathit{id}$ was pure, and so must be its elaboration. However, since $f$ itself was conservatively assumed to be impure,
the application must be suitably coerced. In particular, the elaboration of
\[
f~\mathtt{Unit}~\mathtt{Unit}~\mathtt{Unit}~\emptyset~\emptyset~\langle \tyUnit \rangle~\emptyset_\emptyset~(\fun{(x:\mathtt{Unit})}{\return{x}})
\]
is
\[
(\cast{(\cast{(f~\mathtt{Unit}~\mathtt{Unit})}{\mlCoercion_1})}{\mlCoercion_2})~\langle \mathtt{Unit} \rangle~(\fun{(x:\mathtt{Unit})}{x})
\]
where for $\mlCoTy = \mathtt{Unit} \le \mathtt{Unit}$, the coercion $\mlCoercion_1$, which lifts a pure function into one that
returns a computation, is given by
\begin{align*}
&\mlCoTy \Rightarrow (\langle \mathtt{Unit} \rangle \to \mlReturn{\langle \mathtt{Unit} \rangle}) \to \mkMlCompCo{\langle \mathtt{Unit} \rangle} \\
&: (\mlCoTy \Rightarrow (\mathtt{Unit} \to \mkMlCompTy{\mathtt{Unit}}) \to \mkMlCompTy{\mathtt{Unit}})
\le (\mlCoTy \Rightarrow (\mathtt{Unit} \to \mathtt{Unit}) \to \mkMlCompTy{\mathtt{Unit}})
\end{align*}
while $\mlCoercion_2$, which extracts the value back out of a pure computation, is:
\begin{align*}
&\mlCoTy \Rightarrow (\langle \mathtt{Unit} \rangle \to \langle \mathtt{Unit} \rangle) \to \unsafe{\langle \mathtt{Unit} \rangle} \\
&: (\mlCoTy \Rightarrow (\mathtt{Unit} \to \mathtt{Unit}) \to \mkMlCompTy{\mathtt{Unit}})
\le (\mlCoTy \Rightarrow (\mathtt{Unit} \to \mathtt{Unit}) \to \mathtt{Unit})
\end{align*}
On a side note, observe the removal of $\mathtt{return}$ in the identity as its elaboration is a pure function.
For an impure function
\begin{align*}
&f~\mathtt{Unit}~\mathtt{Unit}~\mathtt{Unit}~\{\texttt{Tick}\}~\{\texttt{Tick}, \texttt{Tock}\}~\langle \tyUnit \rangle~(\{\texttt{Tick}\} \cup \emptyset_{\{\texttt{Tock}\}}) \\
&\quad (\fun{x : \mathtt{Unit}}{\operation[\mathtt{Tick}]{x}{y : \mathtt{Unit}}{(\cast{(\return{y})}{\langle \tyUnit \rangle~!~\emptyset_{\{\texttt{Tick}\}}})}})
\end{align*}
the elaboration
\begin{align*}
&(\cast{(\cast{(f~\mathtt{Unit}~\mathtt{Unit})}{\mlCoercion_1'})}{\mlCoercion_2'})~\langle \mathtt{Unit} \rangle \\
&\quad (\fun{x : \mathtt{Unit}}{\operation[\mathtt{Tick}]{x}{y : \mathtt{Unit}}{(\cast{y}{\mlReturn{\langle \tyUnit \rangle}})}})
\end{align*}
is similar, except that the coercions $\mlCoercion_1' = \mlCoercion_2'$ are both trivial:
\begin{align*}
&\mlCoTy \Rightarrow (\langle \mathtt{Unit} \rangle \to \mkMlCompCo{\langle \mathtt{Unit} \rangle}) \to \mkMlCompCo{\langle \mathtt{Unit} \rangle} \\
&: (\mlCoTy \Rightarrow (\mathtt{Unit} \to \mkMlCompTy{\mathtt{Unit}}) \to \mkMlCompTy{\mathtt{Unit}})
\le (\mlCoTy \Rightarrow (\mathtt{Unit} \to \mkMlCompTy{\mathtt{Unit}}) \to \mkMlCompTy{\mathtt{Unit}})
\end{align*}
and may be removed by an optimizer.
Also note that just as in $\mathit{id}$, the $\mathtt{return}$ vanishes in the elaboration, though in this case it is reintroduced by the elaboration of the coercion~$\emptyset_{\{\texttt{Tick}\}}$.
\end{example}
\subsubsection{Metatheory of Elaboration}
We have proven in Abella that the elaboration of \textsc{ExEff}\xspace values and
computations into \textsc{NoEff}\xspace terms preserves typing.
\begin{theorem}[Type Preservation]
\label{thm:eff-to-ml-type-preservation}
\begin{itemize}
\item
If $\valToNoEff{\Gamma}{v}{\vty}{\mathit{t}}$
and $\tyEnvToNoEff{\Gamma}{\Gamma'}$
then $\valTyToNoEff{\Gamma}{\vty}{\tau}{A}$
and $\tcNoEffTm{\Gamma'}{\mathit{t}}{A}$.
\item
If $\compToNoEff{\Gamma}{c}{\cty}{\mathit{t}}$
and $\tyEnvToNoEff{\Gamma}{\Gamma'}$
then $\compTyToNoEff{\Gamma}{\cty}{\tau}{B}$
and $\tcNoEffTm{\Gamma'}{\mathit{t}}{B}$.
\end{itemize}
\end{theorem}
A key lemma in the theorem's proof establishes the appropriate typing of the coercion produced by the
$\fromImpureVal {\vty}{\Delta}{\mlCoercion}$ judgement.
\begin{lemma}[From Impure Coercion Typing]
If
$\fromImpureVal {\vty}{\Delta}{\mlCoercion}$
and
$\valTyToNoEff{\Gamma,\delta}{\vty}{\tau}{A}$
then there exists a $B$ such that
$\valTyToNoEff{\Gamma}{\vty[\Delta/\delta]}{\tau}{B}$
and
$\tcNoEffCoercion{\Gamma}{\mlCoercion}{A \le B}$.
\end{lemma}
Semantic preservation for the elaboration from \textsc{ExEff}\xspace to \textsc{NoEff}\xspace turns out to
be much more complicated than for the elaboration to \textsc{SkelEff}\xspace. Indeed, the
congruence closure of the step relation is not sufficient in
the case of \textsc{NoEff}\xspace.
For instance, consider the following \textsc{ExEff}\xspace evaluation step:
\[\smallStepVal{(\Lambda \delta. \fun{x:\mathtt{Unit}}{v})\,\emptyset}{\fun{x:\mathtt{Unit}}{v[\emptyset/\delta]}}\]
where the dirt abstraction $(\Lambda \delta. \fun{x:\mathtt{Unit}}{v})$ has type $\forall \delta. \mathtt{Unit}
\to \withdirt{\mathtt{Unit}}{\set{\mathtt{Op}} \cup \delta}$ and its application to $\emptyset$
has type $\mathtt{Unit} \to \withdirt{\mathtt{Unit}}{\set{\mathtt{Op}} \cup \emptyset}$. Suppose that
the right-hand side elaborates to the \textsc{NoEff}\xspace term $\fun{x:\mathtt{Unit}}{v'}$. Then the left-hand
side elaborates to $\cast{(\fun{(x:\mathtt{Unit})}{v'})}{(\langle \mathtt{Unit} \rangle \to \mkMlCompCo{\langle \mathtt{Unit} \rangle})}$;
observe that the function coercion is nothing more than a reflexivity coercion.
Neither of these two elaborated \textsc{NoEff}\xspace terms is reducible.
In particular, we cannot eliminate the reflexivity coercion by reduction and
thus the two terms are not related by a congruence closure of the step
relation.
Instead, we believe that a semantic notion of equivalence is needed: contextual
equivalence~\cite{morris1969lambda}. Informally, this notion expresses that two terms are
equivalent iff, when placed in any ``appropriate'' program context, the
resulting programs reduce to normal forms that are equivalent under some other,
simpler notion of equivalence such as syntactic equality.
The precise formal definition depends on the particular setting it is used in.
In our setting there are a number of complicating factors that need to be taken
into account.
\begin{itemize}
\item
Firstly, we are dealing with two mutually recursive syntactic
sorts of terms: values and computations. This calls for four different
mutually recursive sorts of program contexts: ones that take a
value/computation and yield a value/computation.
\item
Secondly, we need to consider
what simpler notion of equivalence to use and how to restrict program contexts
so that we can use it. A common approach is to consider only contexts that have
some atomic type as a result, such as naturals or integers, where syntactic
equality is appropriate. We believe that approach works here too. Indeed, we can expect that
an appropriate computation context handles all
operations and yields a pure program.
\item
Thirdly, we do not want to admit all possible \textsc{NoEff}\xspace contexts. In particular,
we do not want to admit those that get stuck because of an inappropriate use of
an $\mathtt{unsafe}$ coercion. Hence, we want to restrict ourselves to those that are
the image of an \textsc{ExEff}\xspace program context.
\end{itemize}
We leave working out the precise formal definition of contextual equivalence
and proving semantic preservation on top of it as a substantial open challenge.
Yet, we point to the work of Bi et al.~\shortcite{ecoop2018:it} as an important source of
inspiration. They also deal with an elaboration-based setting, for disjoint intersection
types, and use logical relations as the basis of their proofs.
\subsection{Algebraic Effect Handlers}
The main premise of algebraic effects is that impure behaviour arises from a set
of \emph{operations} such as $\mathtt{Get}$ and $\mathtt{Set}$ for mutable
store, $\mathtt{Read}$ and $\mathtt{Print}$ for interactive input and output, or
$\mathtt{Raise}$ for exceptions~\citep{DBLP:journals/acs/PlotkinP03}. This
allows generalizing exception handlers to other effects,
to express backtracking, co-operative multithreading and other examples in a
natural way~\citep{DBLP:journals/corr/PlotkinP13,bauer15}.
Assume operations $\mathtt{Tick} : \mathtt{Unit} \rightarrow \mathtt{Unit}$ and $\mathtt{Tock} : \mathtt{Unit} \rightarrow \mathtt{Unit}$ that take a unit
value as a parameter and yield a unit value as a result. Unlike special built-in
operations, these operations have no intrinsic effectful behaviour, though we
can give them one through handlers. For example, the handler
\begin{align*}
\handler{
&\call{\mathtt{Tick}}{x}{k} \mapsto (\mathtt{Print} \text{``tick''}; k~\mathtt{unit}), \\
&\call{\mathtt{Tock}}{x}{k} \mapsto \mathtt{Print} \text{``tock''}
}
\end{align*}
replaces all calls of $\mathtt{Tick}$ by printing out ``tick'' and similarly for $\mathtt{Tock}$.
But there is one significant difference between the two cases. Unlike exceptions, which
always abort the evaluation, operations have a continuation waiting for their result.
It is this continuation that the handler captures in the variable~$k$ and potentially uses in
the handling clause. In the clause for $\mathtt{Tick}$, the continuation is resumed by passing it
the expected unit value, whereas in the clause for $\mathtt{Tock}$, the continuation is discarded. Thus, if we handle a computation emitting the two operations, it will print out ``tick''
until a first ``tock'' is printed, after which the evaluation stops.
For a more thorough explanation of algebraic effect handlers, we refer the reader to Pretnar's tutorial~\citep{pretnar:tutorial},
which is conveniently based on a calculus with essentially the same term-level syntax and
operational semantics (but a far less involved type system).
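The behaviour of this handler can be mimicked in Python with a free-monad-style encoding; the datatype and the `handle` function below are our own sketch, not Eff syntax:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Call:
    op: str                     # "Tick" or "Tock"
    arg: Any
    cont: Callable[[Any], Any]  # continuation awaiting the operation's result

def handle(comp, log):
    # Tick: record "tick" and resume the continuation with unit (None).
    # Tock: record "tock" and discard the continuation, stopping evaluation.
    while isinstance(comp, Call):
        if comp.op == "Tick":
            log.append("tick")
            comp = comp.cont(None)
        else:
            log.append("tock")
            return Return(None)
    return comp
```

Handling a computation that calls $\mathtt{Tick}$, $\mathtt{Tick}$, $\mathtt{Tock}$, $\mathtt{Tick}$ in sequence records tick, tick, tock and never reaches the final $\mathtt{Tick}$, matching the informal description above.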
\subsection{Elaborating Subtyping}
Consider the computation
$\doin{x}{\mathtt{Tick}~{\mathtt{unit}}}{f~x}$
and assume that $f$ has
the function type $\mathtt{Unit} \to \mathtt{Unit}~!~\{\mathtt{Tock}\}$, taking unit values
to unit values and perhaps calling $\mathtt{Tock}$ operations in the process.
The whole computation then has the type
$\mathtt{Unit}~!~\{\mathtt{Tick},\mathtt{Tock}\}$ as it returns the unit value and may
call $\mathtt{Tick}$ and $\mathtt{Tock}$.
The above typing implicitly appeals to subtyping in several places. For instance,
$\mathtt{Tick}~{\mathtt{unit}}$ has type $\mathtt{Unit}~!~\{\mathtt{Tick}\}$
and $f~x$ type $\mathtt{Unit}~!~\{\mathtt{Tock}\}$. Yet, because they are sequenced with
$\mathtt{do}$, the type system expects them to have the same set of effects. The discrepancies
are implicitly reconciled by the subtyping which admits both $\{\mathtt{Tick}\} \le \{\mathtt{Tick},\mathtt{Tock}\}$
and $\{\mathtt{Tock}\} \le \{\mathtt{Tick},\mathtt{Tock}\}$.
We elaborate the \textsc{ImpEff}\xspace term into the explicitly-typed core language \textsc{ExEff}\xspace,
where such implicit appeals to subtyping turn into explicit casts using \emph{coercions}:
\[\doin{x}{(\cast{(\mathtt{Tick}~{\mathtt{unit}})}{\coercion_1})}{\cast{(f~x)}{\coercion_2}}\]
A coercion $\coercion$ is a witness for a subtyping $A~!~\Delta \le A'~!~\Delta'$ and can be used
to cast a term $c$ of type $A~!~\Delta$ to a term $\cast{c}{\coercion}$ of type $A'~!~\Delta'$.
In the above term, $\coercion_1$ and $\coercion_2$ respectively witness
$\mathtt{Unit}~!~\{\mathtt{Tick}\} \le \mathtt{Unit}~!~\{\mathtt{Tick},\mathtt{Tock}\}$ and
$\mathtt{Unit}~!~\{\mathtt{Tock}\} \le
\mathtt{Unit}~!~\{\mathtt{Tick},\mathtt{Tock}\}$.
At this point, the reader
might wonder why coercions can influence value types, and not just effect sets.
This design allows us to flexibly cast types of higher-order functions and
handlers which would otherwise not be possible. For example, we can use a
coercion for $\delta_3 \le \delta_1$ to construct value type coercions that witness
\[
((\alpha \to \alpha'~!~\delta_1) \to \alpha''~!~\delta_2)
\le
((\alpha \to \alpha'~!~\delta_3) \to \alpha''~!~\delta_2)
\]
or
\[
(\alpha'~!~\delta_1 \Rrightarrow \alpha''~!~\delta_2)
\le
(\alpha'~!~\delta_3 \Rrightarrow \alpha''~!~\delta_2)
\]
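Operationally, one can read a coercion witnessing $A \le A'$ as a conversion function from $A$-values to $A'$-values; the contravariant flip in the argument position is then directly visible. The following Python sketch (the name `fun_co` is ours) builds a function-type coercion from an argument coercion and a result coercion:

```python
def fun_co(co_arg, co_res):
    # Builds a coercion (A1 -> B1) <= (A2 -> B2) out of
    #   co_arg : A2 <= A1   (argument position: contravariant, flipped)
    #   co_res : B1 <= B2   (result position: covariant)
    return lambda f: (lambda x: co_res(f(co_arg(x))))
```

For instance, letting the built-ins `int` and `str` stand in for coercions, `fun_co(int, str)` coerces a function on numbers into one on strings.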
\subsection{Polymorphic Subtyping for Types and Effects}
\label{sec:overview:polymorphism}
The above basic example only features monomorphic types and effects. Yet, our
calculus also supports polymorphism, which makes it considerably more expressive.
For instance, the type of $\mathit{f}$ in $\letval{\mathit{f}}{(\fun{g}{g~\mathtt{unit}})}{\ldots}$
is generalised to:
\[\forall \alpha, \alpha'.\forall \delta,\delta'.\alpha \le \alpha'\Rightarrow \delta \le \delta' \Rightarrow (\mathtt{Unit} \to \alpha~!~\delta) \to \alpha' ~!~ \delta'\]
This polymorphic type scheme follows the qualified types convention~\citep{markjones} where the
type $(\mathtt{Unit} \to \alpha~!~\delta) \to \alpha' ~!~ \delta'$ is subjected to
several qualifiers, in this case $\alpha \le \alpha'$ and $\delta \le \delta'$. The universal
quantifiers on the outside bind the type variables $\alpha$ and $\alpha'$, and the
effect set variables $\delta$ and $\delta'$.
The elaboration of $f$ into \textsc{ExEff}\xspace introduces explicit binders for both the
quantifiers and the qualifiers, as well as the explicit casts where subtyping
is used.
\[
\begin{array}{l}
\Lambda \alpha.\Lambda \alpha'\!\!.\Lambda \delta.\Lambda \delta'\!\!.\Lambda (\omega\!:\!\alpha \le \alpha'). \Lambda (\omega'\!:\!\delta \le \delta'). \\
\qquad\fun{(g\!:\!\mathtt{Unit} \to \alpha \,!\, \delta)\!}{\!(\cast{(g\,\mathtt{unit})\!}{\!(\omega\,!\,\omega')})} \\
\end{array}
\]
Here the binders for qualifiers introduce \emph{coercion variables}
$\omega$ between pure types and $\omega'$ between operation sets, which are then
combined into a \emph{computation coercion} $\omega~!~\omega'$ and used for casting the function application
$(g\,\mathtt{unit})$ to the expected type.
Suppose that $h$ has type $\mathtt{Unit} \to \mathtt{Unit}\,!\,\{\mathtt{Tick}\}$ and $f\,h$
type $\mathtt{Unit}\,!\,\{\mathtt{Tick},\mathtt{Tock}\}$. In the \textsc{ExEff}\xspace calculus the
corresponding instantiation of $f$ is made explicit through type and coercion
applications
\[ f\,\mathtt{Unit}\,\mathtt{Unit}\,\{\mathtt{Tick}\}\,\{\mathtt{Tick},\mathtt{Tock}\}\,\coercion_1\,\coercion_2\,h\]
where $\coercion_1$ needs to be a witness for $\mathtt{Unit} \le \mathtt{Unit}$ and
$\coercion_2$ for $\{\mathtt{Tick}\} \le \{\mathtt{Tick},\mathtt{Tock}\}$.
\subsection{Guaranteed Erasure with Skeletons}
One of our main requirements for \textsc{ExEff}\xspace is that its effect information and
subtyping can be easily erased. The reason is twofold. Firstly, we want to
show that neither plays a role in the runtime behaviour of \textsc{ExEff}\xspace programs.
Secondly and more importantly, we want to use a conventionally typed (System
F-like) functional language as a backend for the Eff compiler.
At first, erasure of both effect information and subtyping seems easy: simply
drop that information from types and terms. But by dropping the effect variables
and subtyping constraints from the type of $\mathit{f}$, we get
$\forall \alpha, \alpha'. (\mathtt{Unit} \to \alpha) \to \alpha'$
instead of the expected type
$\forall \alpha. (\mathtt{Unit} \to \alpha) \to \alpha$.
In our naive erasure attempt we have carelessly discarded the connection between
$\alpha$ and $\alpha'$. A more appropriate approach to erasure would be to unify
the types in dropped subtyping constraints. However, unifying types may reduce
the number of type variables when they become instantiated, so corresponding
binders need to be dropped, greatly complicating the erasure procedure and its
meta-theory.
Fortunately, there is an easier way by tagging all bound type
variables with \emph{skeletons}, which are bare-bones types
without effect information. For example, the skeleton of a function type
$A \to B~!~\Delta$ is $\tau_1 \to \tau_2$, where $\tau_1$ is the
skeleton of $A$ and $\tau_2$ the skeleton of $B$. In \textsc{ExEff}\xspace every well-formed type
has an associated skeleton, and any two types
$A_1 \le A_2$ share the same skeleton. In particular, binders for type variables
are explicitly annotated with skeleton variables $\varsigma$. For instance, the actual
type of $\mathit{f}$ is:
\[\forall\varsigma.\forall (\alpha:\varsigma), (\alpha':\varsigma).\forall \delta,\delta'.\alpha \le
\alpha'\Rightarrow \delta \le \delta' \Rightarrow (\mathtt{Unit} \to
\alpha~!~\delta) \to \alpha' ~!~ \delta'\]
The skeleton quantifications
and annotations also appear at the term-level:
\begin{equation*}
\Lambda \varsigma. \Lambda (\alpha:\varsigma).\Lambda (\alpha':\varsigma).\Lambda \delta.\Lambda \delta'. \Lambda (\omega : \alpha \le \alpha'). \Lambda (\omega': \delta \le \delta').
\ldots
\end{equation*}
Now erasure is really easy: we drop not only effect and subtyping-related term
formers, but also type binders and applications.
We do retain skeleton binders and applications, which take over the role of (plain) types in the backend
language. In terms, we replace types by their skeletons.
For instance, for $\mathit{f}$ we get:
\begin{equation*}
\Lambda \varsigma. \fun{(g : \mathtt{Unit} \to \varsigma)}{g\,\mathtt{unit}} ~~:~~ \forall\varsigma. (\mathtt{Unit} \to \varsigma) \to \varsigma
\end{equation*}
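In a typed backend, the erased $\mathit{f}$ is just an ordinary ML-polymorphic function. A Python rendering, with a type variable playing the role of the skeleton variable $\varsigma$ (our own sketch), is:

```python
from typing import Callable, TypeVar

S = TypeVar("S")  # plays the role of the skeleton variable

def f(g: Callable[[None], S]) -> S:
    # All effect and subtyping information is gone after erasure;
    # None stands in for the unit value.
    return g(None)
```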
\subsection{Elaboration into a Pure Language}
We can drop the effect information only if the target language natively supports algebraic effects at any type. In a pure functional language, effectful computations that yield a result of type $A$ are instead represented with a user-defined type $\mkMlCompTy{A}$, which typically uses one of the known encodings, such as free monads~\cite{kammar,optimization}, delimited control~\cite{eff2ocaml}, or continuation-passing style~\cite{koka2017}.
Targeting such a language requires a more careful elaboration. For example, \textsc{ExEff}\xspace types $\tyInt~!~\{\mathtt{Tick}\}$ and $\tyInt~!~\{\mathtt{Tock}\}$ are both mapped to a type $\mkMlCompTy{\tyInt}$. The same could be done for the type $\tyInt~!~\emptyset$, but computations of that type are pure and do not require any encoding, so it is more efficient to avoid the library overhead and map the type to the pure type $\tyInt$ directly~\cite{koka2017, optimization}. This difference is the main complicating factor in the elaboration.
Since the computation $\return{5} : \tyInt~!~\emptyset$ is pure, it should be elaborated to $5$ of type $\tyInt$. But if we take a witness $\coercion$ for $\tyInt \le \tyInt$ and $\coercion_1$ for $\emptyset \le \{\mathtt{Tick}\}$, the coerced computation $\cast{(\return{5})}{(\coercion~!~\coercion_1)} : \tyInt~!~\{\mathtt{Tick}\}$ should be elaborated to the lifted value $\return{5} : \mkMlCompTy{\tyInt}$.
However, it is not simply a matter of replacing each cast with a $\mathtt{return}$. If we further take a witness $\coercion_2$ of $\{\mathtt{Tick}\} \le \{\mathtt{Tick},\mathtt{Tock}\}$, the computation
\[
\cast{(\cast{(\return{5})}{(\coercion~!~\coercion_1)})}{(\coercion~!~\coercion_2)} : \tyInt~!~\{\mathtt{Tick}, \mathtt{Tock}\}
\]
also has to be elaborated to $\return{5} : \mkMlCompTy{\tyInt}$.
We will see that this is just one of the (smaller) issues that stem from the different treatment of pure and impure computation types, and show how to construct an appropriate elaboration (Section~\ref{sec:eff-to-ml}).
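The point that a tower of casts still introduces a single lifting can be made concrete with a minimal sketch, again using an assumed `Return` constructor for the $\mkMlCompTy{A}$ encoding:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Return:
    value: Any  # lifts a pure value into the computation encoding

# Elaboration of `return 5` at empty dirt: just the bare value.
pure_five = 5

# Casting into dirt {Tick} lifts the value exactly once ...
lifted = Return(pure_five)

# ... and a further cast into {Tick, Tock} only widens the dirt:
# the elaborated term stays the same, no second Return appears.
widened = lifted
```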
\subsection{Outline}
The remainder of this article formalizes essentially a compiler pipeline for
Eff. Figure~\ref{fig:outline} depicts this pipeline and annotates the different
parts with the sections they are covered in.
\begin{figure}[th!]
\pgfimage[width=.9\textwidth]{outline.pdf}
\caption{Compiler and section structure}\label{fig:outline}
\end{figure}
\begin{description}
\item[Section~\ref{sec:source}:]
The starting point of the pipeline is \textsc{ImpEff}\xspace, an implicitly-typed calculus for
algebraic effects and handlers with a subtyping-based type-and-effect system.
It is the core of the desugared source language as it appears in the compiler
frontend of Eff. We present its syntax and type system.
\item[Section~\ref{sec:target}:]
The heart of the compiler is \textsc{ExEff}\xspace, an intermediate language that
is explicitly annotated with type and effect information. Its main novelty is that
it also makes appeals to subtyping explicit by means of coercions.
We present its syntax, type system and operational semantics.
\item[Section~\ref{sec:inference}:]
We explain how to elaborate \textsc{ImpEff}\xspace into \textsc{ExEff}\xspace, and provide a type inference algorithm
for \textsc{ImpEff}\xspace that performs this elaboration. The algorithm is constraint-based, i.e., it
consists of two interleaved phases: constraint generation and constraint solving.
\item[Section~\ref{sec:erasure}:] Towards the end, the compiler forks to support two
different compilation targets. The first compilation target is \textsc{SkelEff}\xspace. This
language is modelled after Multicore OCaml. In particular, it is a statically
typed language with built-in support for algebraic effects, but its type system
does not track effects. We provide its syntax and, in the appendix, also its type system
and operational semantics. Also, we explain how to elaborate the intermediate \textsc{ExEff}\xspace
into the \textsc{SkelEff}\xspace target language. Thanks to the skeleton-based setup of \textsc{ExEff}\xspace, this
elaboration is a fairly straightforward erasure procedure.
\item[Section~\ref{sec:elaboration-to-ocaml}:] The second compilation target is
\textsc{NoEff}\xspace, a statically typed calculus that distinguishes in its types between
pure and impure computations, but does not track which operations can happen in
impure computations. This models encodings of algebraic effects in languages
without native support. We present its syntax, type system and operational semantics.
Finally, we show how to elaborate \textsc{ExEff}\xspace into \textsc{NoEff}\xspace. This is much more involved than
the straightforward erasure procedure into \textsc{SkelEff}\xspace. Instead of
just throwing away all effect information and coercions, we have to
abstract it to the presence (impure) or absence (pure) of effects.
Unfortunately, polymorphism does not interact well with this abstraction
process. We show how to address this problem by conservatively assuming that polymorphic code
is impure and by adding unsafe coercions to obtain pure instantiations.
\end{description}
\subsection{Syntax}
Figure~\ref{fig:source-syntax} presents the syntax of the source language.
There are two main kinds of terms: (pure) values $v$ and (dirty) computations
$c$, which may call effectful operations. Handlers $h$ are a subsidiary sort of
values. We assume a given set of \emph{operations}~$\mathtt{Op}$, such as $\kord{Get}$
and $\kord{Put}$.
We abbreviate $\call{\op_1}{x}{k} \mapsto c_{\op_1}, \ldots, \call{\op_n}{x}{k} \mapsto c_{\op_n}$ as $[\call{\op}{x}{k} \mapsto c_\op]_{\op \in \ops}$, and write $\mathcal{O}$ to denote the set
$\set{\mathtt{Op}_1, \dots, \mathtt{Op}_n}$.
Similarly, we distinguish between two basic sorts of types: the value types $A,
B$ and the computation types $\underline{C}, \underline{D}$. There are four forms of value types:
type variables $\alpha$, function types $A \to \underline{C}$, handler types $\underline{C} \Rrightarrow \underline{D}$
and the $\mathtt{Unit}$ type. Skeletons $\tau$ capture the shape of types, so, by
design, their forms are identical.
The computation type $A \mathrel{!} \Delta$ is assigned to a
computation returning values of type $A$ and potentially calling operations
from the \emph{dirt} set $\Delta$. A dirt set contains zero or more operations
$\mathtt{Op}$ and is terminated either by an empty set or a dirt variable $\delta$.
Though we use \texttt{cons}-list syntax, the intended semantics
of dirt sets $\Delta$ is that the order of operations $\mathtt{Op}$ is irrelevant.
That is, $(\{ \mathtt{Op}_1 \} \cup (\{ \mathtt{Op}_2 \} \cup \Delta))$ denotes the same dirt as
$(\{ \mathtt{Op}_2 \} \cup (\{ \mathtt{Op}_1 \} \cup \Delta))$.
Similarly to all HM-based systems, we discriminate between value types (or
monotypes) $A$, qualified types $\mathit{K}$ and polytypes (or type schemes)
$\mathit{S}$.
(Simple) subtyping constraints $\pi$ denote inequalities between
either value types or dirts. We also present the more general form of
constraints $\rho$ that includes inequalities between computation types (as we
illustrate in Section~\ref{subsec:source-typing} below, this allows for a single, uniform constraint entailment
relation).
Finally, polytypes consist of zero or more skeleton, type or dirt
abstractions followed by a qualified type.
\begin{myfigure}
\input{Figures/source-typing}
\vspace{-5mm}
\caption{\textsc{ImpEff}\xspace Typing \& Elaboration}
\label{fig:source-typing}
\end{myfigure}
\subsection{Typing}\label{subsec:source-typing}
Figure~\ref{fig:source-typing} presents the typing rules for values and
computations, along with a typing-directed elaboration into our target
language~\textsc{ExEff}\xspace. In order to simplify the presentation, in this section we
focus exclusively on typing. The parts of the rules that concern elaboration
are highlighted in gray and are discussed in Section~\ref{sec:inference}.
In all the rules, we assume a global signature $\Sigma$ that captures
all defined operations along with their (well-formed) types.
\paragraph{\bf Values}
Typing for values takes the form $\tcVal{\Gamma}{v}{\mathit{A}}$, and, given
a typing environment $\Gamma$, checks a value $v$ against a value type $A$.
Rule~\textsc{TmVar} handles term variables. Given that $x$ has type $(\forall
\overline{\varsigma}. \overline{\alpha:\tau}. \forall \overline{\delta}.
\overline{\pi} \Rightarrow \mathit{A})$, we {\em appropriately}
instantiate the skeleton ($\overline{\varsigma}$), type ($\overline{\alpha}$), and
dirt ($\overline{\delta}$) variables, and ensure that the instantiated wanted
constraints $\overline{\sigma(\pi)}$ are satisfied, via side
condition $\overline{\tcProveCt{\Gamma}{\sigma(\pi)}}$.
Rule~\textsc{TmCastV} allows casting the type of a value $v$ from $\mathit{A}$ to
$\mathit{B}$, if $\mathit{A}$ is a subtype of $\mathit{B}$
(upcasting).
As illustrated
by Rule~\textsc{TmTmAbs}, we omit freshness conditions by adopting the
Barendregt convention~\citep{barendregt}.
Finally, Rule~\textsc{TmHand} gives typing for handlers. It requires that
the right-hand sides of the return clause and all operation clauses have the same computation type
($\mathit{B}\,!\,\Delta$), and that all operations mentioned are part of
the top-level signature $\Sigma$. The result type takes the form $\mathit{A}~!~\Delta \cup \mathcal{O} \Rrightarrow
\mathit{B}~!~\Delta$, capturing the intended handler semantics: given a computation
of type $\mathit{A}~!~\Delta \cup \mathcal{O}$, the handler
\begin{inparaenum}[(a)]
\item produces a result of type $\mathit{B}$,
\item handles operations $\mathcal{O}$, and
\item propagates unhandled operations $\Delta$ to the output.
\end{inparaenum}
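The shape of the \textsc{TmHand} result type can be rendered as a small computation over type syntax. The following is a purely illustrative Python encoding (types as plain tuples, dirts as \texttt{frozenset}s); it is not part of the formalism:

```python
def handler_type(a, handled_ops: frozenset, b, delta: frozenset):
    """Build the TmHand result type  A ! (Delta u O)  =>  B ! Delta.

    `a` and `b` name the value types A and B, `handled_ops` is the set O
    of operations with clauses, and `delta` is the dirt shared by all
    clause right-hand sides.
    """
    input_comp = (a, delta | handled_ops)   # incoming computation: A ! (Delta u O)
    output_comp = (b, delta)                # result computation:   B ! Delta
    return (input_comp, output_comp)

# A handler for {Tick} whose clause bodies may themselves call {Print}:
assert handler_type("Unit", frozenset({"Tick"}), "Int", frozenset({"Print"})) == \
    (("Unit", frozenset({"Tick", "Print"})), ("Int", frozenset({"Print"})))
```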
\paragraph{\bf Computations}
Typing for computations takes the form $\tcComp{\Gamma}{c}{\underline{\mathit{C}}}$,
and, given a typing environment $\Gamma$, checks a computation $c$ against a
type $\underline{C}$.
Rule~\textsc{TmCastC} behaves like Rule~\textsc{TmCastV}, but
for computation types.
Rule~\textsc{TmLet} handles polymorphic, non-recursive let-bindings.
Rule~\textsc{TmReturn} handles $\return{v}$ computations. Keyword
$\kpre{return}$ effectively lifts a value $v$ of type $\mathit{A}$ into a computation of type
$\mathit{A}~!~\emptyset$.
Rule~\textsc{TmOp} checks operation calls. First, we ensure that $v$ has the
appropriate type, as specified by the signature of $\mathtt{Op}$. Then, the
continuation $(y. c)$ is checked. The side condition $\mathtt{Op} \in \Delta$ ensures
that the called operation $\mathtt{Op}$ is captured in the result type.
Rule~\textsc{TmDo} handles sequencing. Given that $c_1$ has type
$\mathit{A}\,!\,\Delta$, the pure part of the result of type $\mathit{A}$ is bound to term
variable $x$, which is brought in scope for checking $c_2$. As we mentioned
in Section~\ref{sec:overview}, all computations in a
$\texttt{do}$-construct should have the same effect set, $\Delta$.
Rule~\textsc{TmHandle} eliminates handler types, just as Rule~\textsc{TmTmApp}
eliminates arrow types.
\begin{myfigure}[t!]
\input{Figures/source-coercion-typing}
\vspace{-5mm}
\caption{\textsc{ImpEff}\xspace Constraint Entailment}
\label{fig:source-coercion-typing}
\end{myfigure}
\paragraph{\bf Constraint Entailment}
The specification of constraint entailment takes the form
$\tcProveCt{\Gamma}{\rho}$ and is presented in
Figure~\ref{fig:source-coercion-typing}. Notice that we use $\rho$
instead of $\pi$, which allows us to capture subtyping between two value
types, computation types or dirts, within the same relation. Subtyping can be
established in several ways:
Rule~\textsc{CoVar} handles assumptions.
Rules \textsc{UCoRefl}, \textsc{ACoRefl}, and \textsc{DCoRefl} express that
subtyping is reflexive, for the unit type, type variables, and dirts,
respectively. Notice that we do not have dedicated rules for reflexivity of
arbitrary computation or value types; as we illustrate below
(Section~\ref{subsec:target-syntax}), they can both be established using the
reflexivity of their subparts.
Rule~\textsc{VCoArr} establishes subtyping between arrow types. As usual, the arrow
type constructor is contravariant in the argument type.
Rule~\textsc{VCoHand} is similar, but for handler types.
Rule~\textsc{CCoComp} captures the covariance of the computation type constructor ($!$),
establishing subtyping between two computation types if subtyping is
established for their respective subparts.
Finally, Rules~\textsc{DCoNil} and~\textsc{DCoOp} establish subtyping between
dirts. Rule \textsc{DCoNil} captures that the empty dirt
set $\emptyset$ is a subdirt of any dirt $\Delta$, and Rule~\textsc{DCoOp} expresses that
dirt subtyping is preserved under extension with the same operation
$\mathtt{Op}$.
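The dirt fragment of this relation admits a direct syntactic reading. The sketch below is a hypothetical Python encoding (not part of the formalism): a dirt is a finite set of operations plus an optional tail variable, and $\Delta_1 \le \Delta_2$ is decided from Rules~\textsc{DCoNil}, \textsc{DCoOp}, and \textsc{DCoRefl} alone; assumptions discharged via \textsc{CoVar} are deliberately not modeled.

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class Dirt:
    ops: FrozenSet[str]          # finite part; order of operations is irrelevant
    tail: Optional[str] = None   # dirt variable delta, or None for a closed dirt

def subdirt(d1: Dirt, d2: Dirt) -> bool:
    """Decide whether Delta1 <= Delta2 is derivable from DCoNil, DCoOp,
    and DCoRefl alone (CoVar assumptions are not modeled here).

    DCoNil puts the empty dirt below any dirt, and DCoOp extends both
    sides with the same operation, so a closed Delta1 is below any Delta2
    containing its operations.  With a variable tail, only reflexivity
    (same tail, same operations) applies.
    """
    if d1.tail is None:
        return d1.ops <= d2.ops
    return d1.tail == d2.tail and d1.ops == d2.ops

# {Tick} <= {Tick, Tock}: the instance used when applying f to tick below
assert subdirt(Dirt(frozenset({"Tick"})), Dirt(frozenset({"Tick", "Tock"})))
```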
\paragraph{\bf Well-formedness of Types, Constraints, Dirts, and Skeletons}
The relations $\tcPolyTy{\Gamma}{\mathit{A}}{\tau}$ and
$\tcCompTy{\Gamma}{\underline{\mathit{C}}}{\tau}$ check the well-formedness of value
and computation types respectively. Similarly, relations
$\tcConstraint{\Gamma}{\rho}$ and
$\wfDirt{\Gamma}{\Delta}$ check the well-formedness of constraints and dirts,
respectively. They are all
defined in
Appendix~\ref{appendix:source-additional}.
\begin{example}
\label{exa:running-source}
Recall the definition $\letval{\mathit{f}}{(\fun{g}{g~\mathtt{unit}})}{\ldots}$
of a polymorphic $f$ from Section~\ref{sec:overview:polymorphism}. Under different rule applications,
$f$ can be given different typings, including simple $(\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset) \to \mathtt{Unit}~!~\emptyset$
under the typing
\[
\inferrule*[Right=TmTmAbs]{
\inferrule*[Right=TmTmApp]{
\inferrule*[right=TmVar]{ }{
\tcVal{g : (\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset)}{g}{\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset}
}
\and
\inferrule*[Right=TmUnit]{ }{
\tcVal{\Gamma}{\mathtt{unit}}{\mathtt{Unit}}
}
}{
\tcComp{g : (\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset)}{g~\mathtt{unit}}{\mathtt{Unit}~!~\emptyset}
}
}{
\tcVal{\epsilon}{(\fun{g}{g~\mathtt{unit}})}{(\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset) \to \mathtt{Unit}~!~\emptyset}
}
\]%
and the more involved polytype
\[
\mathit{S} = \forall \varsigma. \forall \alpha:\varsigma, \alpha':\varsigma.\forall \delta,\delta'.\alpha \le \alpha'\Rightarrow \delta \le \delta' \Rightarrow (\mathtt{Unit} \to \alpha~!~\delta) \to \alpha' ~!~ \delta'
\]
obtained by generalizing
\[
\inferrule*[Right=TmTmAbs]{
\inferrule*[Right=TmCastC]{
\inferrule*[Left=TmTmApp]{
\cdots
}{
\tcComp{\Gamma, g : (\mathtt{Unit} \to \alpha~!~\delta)}{g~\mathtt{unit}}{\alpha ~!~ \delta}
}
\and
\inferrule*[Right=CCoComp]{
\inferrule*[Left=CoVar]{ }{
\tcProveCt{\Gamma}{\alpha \le \alpha'}
}
\and
\inferrule*[Right=CoVar]{ }{
\tcProveCt{\Gamma}{\delta \le \delta'}
}
}{
\tcProveCt{\Gamma}{\alpha~!~\delta \le \alpha'~!~\delta'}
}
}{
\tcComp{\Gamma, g : (\mathtt{Unit} \to \alpha~!~\delta)}{g~\mathtt{unit}}{\alpha' ~!~ \delta'}
}
}{
\tcVal{\underbrace{\varsigma, \alpha : \varsigma, \alpha' : \varsigma, \delta, \delta', \alpha \le \alpha', \delta \le \delta'}_{\Gamma}}{(\fun{g}{g~\mathtt{unit}})}{(\mathtt{Unit} \to \alpha~!~\delta) \to \alpha' ~!~ \delta'}
}
\]
Using the latter typing, $f$ may be applied to a pure $\mathit{id} = \fun{x}{\return{x}}$ as
\[
\inferrule*[Right=TmTmApp]{
\inferrule*[Left=TmVar]{
\tcPolyTy{\Gamma}{\mathtt{Unit}}{\mathtt{Unit}} \\\\
\tcProveCt{\Gamma}{\emptyset \le \emptyset} \\\\
\sigma = [\mathtt{Unit} / \varsigma, \mathtt{Unit} / \alpha, \mathtt{Unit} / \alpha', \emptyset / \delta, \emptyset / \delta']
}{
\tcVal{\Gamma}{f}{(\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset) \to \mathtt{Unit}~!~\emptyset}
}
\and
\inferrule*[Right=TmTmAbs]{
\cdots
}{
\tcVal{\Gamma}{\mathit{id}}{\mathtt{Unit} \to \mathtt{Unit}~!~\emptyset}
}
}{
\tcComp{\underbrace{f : \mathit{S}}_{\Gamma}}{f~\mathit{id}}{\mathtt{Unit} ~!~ \emptyset}
}
\]
We can also apply $f$ to an impure $\mathit{tick} = \fun{x}{\operation[\mathtt{Tick}]{x}{y}{\return{y}}}$,
and even enlarge the final dirt as
\[
\inferrule*[Right=TmTmApp]{
\inferrule*[Left=TmVar]{
\tcPolyTy{\Gamma}{\mathtt{Unit}}{\mathtt{Unit}} \\\\
\tcProveCt{\Gamma}{\{\mathtt{Tick}\} \le \{\mathtt{Tick}, \mathtt{Tock}\}} \\\\
\sigma = [\mathtt{Unit} / \varsigma, \mathtt{Unit} / \alpha, \mathtt{Unit} / \alpha', \{\mathtt{Tick}\} / \delta, \{\mathtt{Tick}, \mathtt{Tock}\} / \delta']
}{
\tcVal{\Gamma}{f}{(\mathtt{Unit} \to \mathtt{Unit}~!~\{\mathtt{Tick}\}) \to \mathtt{Unit}~!~\{\mathtt{Tick}, \mathtt{Tock}\}}
}
\and
\inferrule*[Right=TmTmAbs,vdots=4em,leftskip=6em]{
\cdots
}{
\tcVal{\Gamma}{\mathit{tick}}{\mathtt{Unit} \to \mathtt{Unit}~!~\{\mathtt{Tick}\}}
}
}{
\tcComp{\Gamma}{f~\mathit{tick}}{\mathtt{Unit} ~!~ \{ \mathtt{Tick}, \mathtt{Tock} \}}
}
\]
\end{example}
\section{Introduction}
In addition to the standard submission of hardcopy from authors, the
journal now accepts machine-readable forms of papers
in \LaTeXe. The layout design for the \emph{Functional Programming} journal
has been implemented as a \LaTeXe\ class file, based on the \verb"article"
class as discussed in the \LaTeX\ manual (2nd edition) \cite{LaTeX}.
Commands which differ from the standard \LaTeXe\ interface, or which are
provided in addition to the standard interface, are explained in this
guide (which is \emph{not} a substitute for the \LaTeXe\ manual itself).
Note that the final printed version of papers will use the Monotype Times
typeface rather than the Computer Modern typeface available to authors. For
this reason line and page breaks will change and authors should not insert
hard breaks in their text.
Authors planning to submit their papers in \LaTeXe\ are advised to use
\verb"jfp1.cls" as early as possible in the creation of their files.
The older \LaTeX\ style file is no longer supported, but authors should
find it easy to convert to \LaTeXe\ (see Section~\ref{usingjfpclass}).
\subsection{Introduction to \LaTeX}
\LaTeX\ is constructed as a series of macros on top of the \TeX\ typesetting
program. \LaTeX\ adds to \TeX\ a collection of facilities which simplify
typesetting for authors by allowing them to concentrate on the logical
structure of the document rather than its visual layout. Careful use of the
\LaTeX\ mark-up philosophy results in uniform layout rather than the
\emph{ad hoc} results of some word-processing systems. Authors are advised to
let the defaults control font selection etc., rather than tinker themselves.
The \LaTeX\ system provides a consistent and comprehensive document preparation
interface. Among other things, \LaTeX\ can automatically number list
entries, equations, figures, tables and footnotes, as well as sections and
subsections. Using this numbering system, bibliographic citations, page
references and cross references to any other numbered entity (e.g. sections,
equations, figures) are straightforward.
\subsection{The JFP document class}
The use of document classes allows a simple change of style (or style option)
to transform the appearance of your document. The JFP class preserves the
standard \LaTeX\ interface such that any document which can be produced
using the standard \LaTeX\ \verb"article" class can also be produced with
the JFP class.
However, the measure (or width of text) is different from that
for ARTICLE; therefore line breaks will change and it is possible that
longer equations may need re-setting.
Authors are urged to use \verb"jfp1.cls" from the beginning of their document
preparation, otherwise longer lines may require re-formatting at a later stage.
\subsection{General style issues}
Use of \LaTeX\ defaults will result in a pleasing uniformity of layout
and font selection. Authors should resist the temptation to make
\emph{ad hoc} changes to these. Also avoid use of direct formatting unless
really necessary. Papers will be edited as usual, and this process may be
obstructed by the use of inserted line breaks, etc.
For general style issues, authors are referred to the `Preparation of
manuscripts' in the back cover of the journal. Authors who are interested in
the details of style are referred to \cite{Butcher} and \cite{Chicago}. The
language used in the journal is British English, and spelling should conform
to this.
Use should be made of symbolic references (\verb"\ref") in order to
protect against late changes of order, etc.
\subsection{Submission of \LaTeX\ articles}
Authors who intend to submit a \LaTeX\ article should obtain a copy of the
JFP class file. This is available by anonymous FTP from
\begin{verbatim}
ftp.cup.cam.ac.uk
\end{verbatim}
You will find the class file and instructions in the directory
\begin{verbatim}
pub/texarchive/journals/latex/jfp-cls
\end{verbatim}
The \verb"readme.txt" (which is in the same directory) gives more details.
When submitting the final article, ensure that the following are included and
are clearly labelled.
\begin{enumerate}
\item A hardcopy printout of the article.
\item The input file (exactly matching the hardcopy).
\item A copy of any user-defined macros.
\item If you have used \textsc{Bib}\TeX, the \verb".bib", \verb".bbl"
and \verb".bst" files that were used.
\item Any other files necessary to prepare the article for typesetting.
\end{enumerate}
The source files for the \emph{final} article should be text-only, with no
system-dependent control codes, and should be sent via email as an attachment,
along with a PDF file that matches the source files exactly.
\section{Using the JFP1 class file}
\label{usingjfpclass}
First, copy the file \verb"jfp1.cls" (and \verb"jfp.bst" if you use Bib\TeX)
into an appropriate subdirectory on your system. The JFP class is implemented
as a complete document class, and \emph{not} as a class option.
In order to use the JFP class, replace \verb"article" by \verb"jfp1" in the
\verb"\documentclass" command at the beginning of your document: that is,
\begin{verbatim}
\documentclass{article}
\end{verbatim}
is replaced by
\begin{verbatim}
\documentclass{jfp1}
\end{verbatim}
Author-defined macros should be inserted before \verb"\begin{document}",
or in a separate file and should be included with the submission.
Authors must not change any of the macro definitions or parameters
in \verb"jfp1.cls".
If you have a document prepared using an old \verb"jfp.sty" file,
just change the line
\begin{verbatim}
\documentstyle{jfp}
\end{verbatim}
to
\begin{verbatim}
\documentclass{jfp1}
\end{verbatim}
The latter form uses the \verb"jfp1.cls" file.
\subsection{Document class options}\label{sec:ClassOp}
In general, the following standard document class options should \emph{not} be
used with the JFP class file:
\begin{itemize}
\item \texttt{10pt}, \texttt{11pt} and \texttt{12pt} -- unavailable;
\item \texttt{twoside} is the default (\texttt{oneside} is disabled);
\item \texttt{onecolumn} is the default (\texttt{twocolumn} is disabled);
\item \texttt{titlepage} is not required and is disabled;
\item \texttt{fleqn} and \texttt{leqno} should not be used, and are disabled.
\end{itemize}
\ifprodtf
The following new class options are provided:
\begin{itemize}
\item \texttt{prodtf} -- tells the class file that we want to use the
production typeface. This automatically sets the odd, even and top
margins.
\end{itemize}
\fi
\section{Additional facilities}
In addition to all the standard \LaTeX\ design elements, the JFP class
includes the following features.
\begin{itemize}
\item Additional commands for typesetting the title page. Extended
commands for specifying a short version of the title and author(s)
for the running headlines.
\item New options to the \verb"\maketitle" command to create Functional and
Theoretical Pearl(s).
\item A \verb"proof" environment.
\item A \verb"capsule" environment for typesetting Capsule Reviews.
\item Control of enumerated lists.
\end{itemize}
Once you have used these additional facilities in your document,
it can be processed only with \verb"jfp1.cls".
\subsection{Titles, authors' names and running headlines}
At the beginning of your article, the title should be generated in the
usual way using the \verb"\maketitle" command. Immediately following the
title you may include an abstract and/or capsule review. For example, the titles
for this guide were produced by the following source.
\begin{verbatim}
\title[Journal of Functional Programming]
{\LaTeXe\ guide for the Journal of Functional Programming}
\author[M. A. Reed]
{MARK A. REED\\
Cambridge University Press, Cambridge CB2 2BS, UK\\
\email{texline@cup.cam.ac.uk}}
\end{verbatim}
\verb"\begin{document}"\\
\indent\verb"\maketitle"
\begin{verbatim}
\begin{abstract}
This guide is for authors who are preparing papers...
\end{abstract}
\end{verbatim}
In the JFP1 class, the title of the article and the author's name (or
authors' names) are used both at the beginning of the article for the main
title and throughout the article as running headlines at the top of every
page. The title is used on odd-numbered pages (rectos) and the author's name
appears on even-numbered pages (versos). The \verb"\pagestyle" and
\verb"\thispagestyle" commands should \emph{not} be used. Similarly, the
commands \verb"\markright" and \verb"\markboth" should not be necessary.
Although the article title can run to several lines of text, the running
headline must be a single line. Moreover, the title can incorporate new-line
commands (e.g. \verb"\\"), but these are not acceptable in a running headline.
To enable you to specify an alternative short title, and an alternative short
author's name, the standard \verb"\title" and \verb"\author" commands have
been extended to take an optional argument to be used as the running headline.
\begin{verbatim}
\title[Short title]
{Full title which can be as long as necessary}
\author[Author name]
{AUTHOR NAME \\ Affiliation}
\end{verbatim}
Notice that the author name in the argument for the running head should
be in mixed case, and the author name for the title should be in upper
case only. The author affiliation is set in the normal way, after a
\verb"\\" in the argument to the \verb"\author" command.
Any `work supported by' or `authors current address' information should
be inserted via \verb"\thanks" commands, which should be positioned
after the appropriate `AUTHOR NAME' in the \verb"\author" command.
If there are four (or more) authors for the article, the author running head
should contain the first author name followed by `et~al.' only, e.g.
\begin{verbatim}
\author[Author1 et al.]
{AUTHOR1...}
\end{verbatim}
The previous examples show an article with one author; the normal
\LaTeX\ conventions have been extended to allow the author names and
their affiliations to be typeset in the correct JFP style. The following
examples should cover most possibilities:
\textit{Case 1.} Two authors with the same affiliation:
\begin{verbatim}
\author[Author1 and Author2]
{AUTHOR1 and AUTHOR2\\
Affiliation for both authors}
\end{verbatim}
If the author names are too long to fit onto one line, they should be
broken into two or more lines using the \verb"\authorbreak" command.
Don't use \verb"\\" to line-break the author names, as this will not do what
you expect.
\textit{Case 2.} Two authors with different affiliations:
\begin{verbatim}
\author[Author1 and Author2]
{AUTHOR1\\
Affiliation for Author1
\and AUTHOR2\\
Affiliation for Author2}
\end{verbatim}
\textit{Case 3.} Three (or more) authors, two with the same affiliation:
\begin{verbatim}
\author[Author1, Author2 and Author3]
{AUTHOR1, AUTHOR2\\
Affiliation for Author1 and Author2
\and AUTHOR3\\
Affiliation for Author3}
\end{verbatim}
\subsection{Pearls}
To provide for the Theoretical Pearl and Functional Pearl styles,
\verb"\maketitle" can now take an optional argument, which can take the
values \verb"t" and \verb"f". Thus, \verb"\maketitle[t]" will make a
title page, and set up the running headline, for a \linebreak Theoretical
Pearls article; \verb"\maketitle[f]" will do the same for a Functional Pearls
article.
If you need to have a `Theoretical Pearl' or `Functional Pearl' (singular)
you can use the \verb"T" and \verb"F" options to obtain these.
\subsubsection{Other `Pearl' styles}
If you need to have a `Pearl' style which cannot be generated by the
standard \verb"t", \verb"f", \verb"T" or \verb"F" options, then the \verb"o"
option can be used.
The \verb"o" option allows you to define a custom `Pearl' style. \eg for
`CUSTOM PEARL' you would use:
\begin{verbatim}
\begin{document}
\renewcommand\otherpearl{C\ls U\ls S\ls T\ls O\ls M\ns
P\ls E\ls A\ls R\ls L}
\maketitle[o]
\shorttitle{Custom pearl}
\end{verbatim}
Notice the use of the \verb"\ls" and \verb"\ns" commands to letter-space the
words in the title.
You can use the \verb"\\" command to line-break the title as normal.
The \verb"\shorttitle" command should appear \emph{after} the
\verb"\maketitle" command (as above).
\subsection{Proofs}
A new environment exists for creating Proofs, e.g.
\begin{proof}
Use $K_\lambda$ and $S_\lambda$ to translate combinators
into $\lambda$-terms. For the converse, translate
$\lambda x$ \ldots by [$x$] \ldots and use induction
and the lemma.
\end{proof}
This was produced by the following code:
\begin{verbatim}
\begin{proof}
Use $K_\lambda$ and $S_\lambda$ to...
\end{proof}
\end{verbatim}
The end of proof marker \proofbox\ is produced automatically. If you wish
to omit this, use the \verb"proof*" environment instead.
If a proof ends with a display equation, then it is customary for the proofbox
to be positioned at the end of the equation finishing the proof. \eg
\begin{proof*}
Use $K_\lambda$ and $S_\lambda$ to translate combinators
into $\lambda$-terms. For the converse, translate
$\lambda x$ \ldots by [$x$] \ldots and use induction
and the lemma.
\[ a_1 \equiv (2\Omega M^2/x) \mathproofbox \]
\end{proof*}
This was produced with:
\begin{verbatim}
\begin{proof*}
Use $K_\lambda$ and $S_\lambda$ to...
\[ a_1 \equiv (2\Omega M^2/x) \mathproofbox \]
\end{proof*}
\end{verbatim}
Notice the use of \verb"proof*" to turn off the automatic proofbox.
The proof environment will also take an optional argument which allows you
to produce `special' proofs. e.g.
\begin{proof}[Proof of Theorem 27]
We define a linear isometry $A:H\rightarrow H$ by the nonlinear
Schr\"{o}dinger equation. It would not be hard to modify the
proof to obtain an analogous result for ellipsoids rather than
spheres.
\end{proof}
Which was produced like this:
\begin{verbatim}
\begin{proof}[Proof of Theorem 27]
We define a linear isometry...
\end{proof}
\end{verbatim}
Notice that once the optional argument is used, you have to type all of
the text which is to appear as the heading.
\subsection{Programs}\label{sec-programs}
JFP encourages authors to use one of two styles for typesetting programs,
\emph{mathematical} and \emph{verbatim}.
A program typeset in the mathematical style is shown in
Figure~\ref{mathfigure}, and the commands used to typeset this program
are shown in Figure~\ref{mathtypeset}. This uses the ordinary
mathematics mode of \LaTeX: displayed programs are surrounded by
\verb|\[| and \verb|\]|, and the \verb|array| command is used for
alignment. However, there are two important differences. First, the
\verb|\programmath| command appears before the program text; this
causes math mode to use ordinary spacing for italic identifiers,
rather than math spacing. The \verb|\unprogrammath| command returns
to normal math spacing. Second, in math mode spaces are ignored, so a
tilde \verb|~| is used instead. (In \LaTeX, a tilde generates a
``hard space'' that is never replaced by a line break.) To include
program text in mathematics style inline, surround it with dollar
\verb|$| signs. For example, the input
\begin{verbatim}
See how \programmath $differ~x$
differs from \unprogrammath $differ x$.
\end{verbatim}
produces the output
\begin{center}
See how \programmath $differ~x$
differs from \unprogrammath $differ x$.
\end{center}
A program typeset in the verbatim style is shown in
Figure~\ref{verbfigure}, and the commands used to typeset this program
are shown in Figure~\ref{verbtypeset}. This uses the ordinary
verbatim mode of \LaTeX: displayed programs are surrounded by
\verb|\begin{verbatim}| and \verb|\end{verbatim}|, and alignment is
indicated with spaces in the source file (don't use tabs, which may
not be processed properly). To include program
text in verbatim style inline, use the \verb|\verb| command.
For example, the input
\begin{center}
\begin{verbatim}
On a terminal, this looks like \verb"differ x".
\end{verbatim}
\end{center}
produces the output
\begin{center}
On a terminal, this looks like \verb"differ x".
\end{center}
It is recommended that programs in figures be offset from the text
using the \verb|\figrule| command, as shown in
Figures~\ref{mathfigure}--\ref{verbtypeset}.
\begin{figure}
\figrule
\programmath
\[
\begin{array}{lcl}
exp & :: & Exp \rightarrow Arr \rightarrow Val \\
exp~(Var~i)~a & = & index~i~a \\
exp~(Const~v)~a & = & v \\
exp~(Plus~e_1~e_2)~a & = & exp~e_1~a+exp~e_2~a \\
\\
com & :: & Com \rightarrow Arr \rightarrow Arr \\
com~(Asgn~i~e)~a & = & update~i~(exp~e~a)~a \\
com~(Seq~c_1~c_2)~a & = & com~c_2~(com~c_1~a) \\
com~(If~e~c_1~c_2)~a & = & \textrm{if } exp~e~a \dequals 0 \textrm{ then }
com~c_1~a \textrm{ else } com~c_2~a \\
\\
prog & :: & Prog \rightarrow Val\\
prog~(Prog~c~e) & = & exp~e~(com~c~(newarray~0))
\end{array}
\]
\unprogrammath
\caption{Example program in mathematical style.}\label{mathfigure}
\figrule
\begin{center}
\begin{verbatim}
\begin{figure}
\figrule
\programmath
\[
\begin{array}{lcl}
exp & :: & Exp \rightarrow Arr \rightarrow Val \\
exp~(Var~i)~a & = & index~i~a \\
exp~(Const~v)~a & = & v \\
exp~(Plus~e_1~e_2)~a & = & exp~e_1~a+exp~e_2~a \\
\\
com & :: & Com \rightarrow Arr \rightarrow Arr \\
com~(Asgn~i~e)~a & = & update~i~(exp~e~a)~a \\
com~(Seq~c_1~c_2)~a & = & com~c_2~(com~c_1~a) \\
com~(If~e~c_1~c_2)~a & = & \textrm{if } exp~e~a \dequals 0 \textrm{ then }
com~c_1~a \textrm{ else } com~c_2~a \\
\\
prog & :: & Prog \rightarrow Val\\
prog~(Prog~c~e) & = & exp~e~(com~c~(newarray~0))
\end{array}
\]
\unprogrammath
\caption{Example program in mathematical style.}\label{mathfigure}
\figrule
\end{verbatim}
\end{center}
\caption{Typesetting the example program in mathematical style.}\label{mathtypeset}
\figrule
\end{figure}
\begin{figure}
\figrule
\begin{verbatim}
exp :: Exp -> Arr -> Val
exp (Var i) a = index i a
exp (Const v) a = v
exp (Plus e1 e2) a = exp e1 a + exp e2 a
com :: Com -> Arr -> Arr
com (Asgn i e) a = update i (exp e a) a
com (Seq c1 c2) a = com c2 (com c1 a)
com (If e c1 c2) a = if exp e a == 0 then com c1 a else com c2 a
prog :: Prog -> Val
prog (Prog c e) = exp e (com c (newarray 0))
\end{verbatim}
\caption{Example program in verbatim style.}\label{verbfigure}
\figrule
\begin{tabular}{l}
\verb|\begin{figure}| \\
\verb|\figrule| \\
\verb|\begin{center}| \\
\verb|\begin{verbatim}| \\
\verb|exp :: Exp -> Arr -> Val| \\
\verb|exp (Var i) a = index i a| \\
\verb|exp (Const v) a = v| \\
\verb|exp (Plus e1 e2) a = exp e1 a + exp e2 a| \\
\\
\verb|com :: Com -> Arr -> Arr| \\
\verb|com (Asgn i e) a = update i (exp e a) a| \\
\verb|com (Seq c1 c2) a = com c2 (com c1 a)| \\
\verb|com (If e c1 c2) a = if exp e a == 0 then com c1 a else com c2 a| \\
\\
\verb|prog :: Prog -> Val| \\
\verb|prog (Prog c e) = exp e (com c (newarray 0))| \\
\verb|\end{verbatim}| \\
\verb|\end{center}| \\
\verb|\caption{Example program in verbatim style.}\label{verbfigure}| \\
\verb|\figrule| \\
\verb|\end{figure}|
\end{tabular}
\caption{Typesetting the example program in verbatim style.}\label{verbtypeset}
\figrule
\end{figure}
Some new macros have been provided for a few convenient symbols in
math mode. These are illustrated in Table~\ref{newcommands}.
\ifprodtf\newpage\fi
\begin{table}
\caption{New symbol macros}\label{newcommands}
\programmath
\begin{tabular}{lll}
\hline\hline
Symbol & Usage & Keyed as\\
\hline
\verb"\dplus" & $abc \dplus xyz$ & \verb"$abc \dplus xyz$"\\
\verb"\dequals" & $abc \dequals xyz$ & \verb"$abc \dequals xyz$"\\
\verb"\dcolon" & $abc \dcolon xyz$ & \verb"$abc \dcolon xyz$"\\
\verb"\dcolonequals" & $abc \dcolonequals xyz$ & \verb"$abc \dcolonequals xyz$"\\
\hline\hline
\end{tabular}
\unprogrammath
\end{table}
\subsection{Abstract and Capsule reviews}
The JFP class provides for an abstract and/or a capsule review; the
abstract is produced by the following commands:
\begin{verbatim}
\begin{abstract}
:
\end{abstract}
\end{verbatim}
whereas the capsule review is produced by:
\begin{verbatim}
\begin{capsule}
:
\end{capsule}
\end{verbatim}
Either or both of these may be used, in either order, but it is assumed
that, if both are used, there will be no other material between them.
In JFP the abstract should precede any capsule review.
\subsection{Lists}
The JFP class provides the three standard list environments.
\begin{itemize}
\item Numbered lists, created using the \verb"enumerate" environment;
\item Bulleted lists, created using the \verb"itemize" environment;
\item Labelled lists, created using the \verb"description" environment.
\end{itemize}
The \verb"enumerate" environment numbers each list item with an arabic numeral;
alternative styles can be achieved by inserting a redefinition of the
number labelling command after the \verb"\begin{enumerate}". For example, a
list numbered with roman numerals inside parentheses can be produced by the
following commands:
\begin{verbatim}
\begin{enumerate}[(iii).]
\renewcommand{\theenumi}{(\roman{enumi})}
\item first item
:
\end{enumerate}
\end{verbatim}
This produces the following list:
\begin{enumerate}[(iii).]
\renewcommand{\theenumi}{(\roman{enumi})}
\item first item
\item second item
\item \etc
\end{enumerate}
Notice that an optional argument ``\verb"(iii)."'' has been given to the
\verb"enumerate" environment, specifying the \emph{widest label} used in the
list. This is because roman numerals are wider than the arabic numerals
normally used by \verb"enumerate", and so the labels would otherwise have been
pushed out into the margin.
\section{User-defined macros}
If you define your own macros, you must ensure that their names do not
conflict with any existing macros in \LaTeX\ (or \AMSLaTeX\
if you are using this). You should also place them in the preamble of
your input file, between the \verb"\documentclass" (but after any
\verb"\usepackage" commands) and before the \verb"\begin{document}" command.
Apart from scanning the indexes of the relevant manuals, you can check
whether a macro name is already used by using \verb"\newcommand", which
will check for the existence of the macro you are trying to define.
If the macro exists \LaTeX\ will respond with:
\begin{verbatim}
! LaTeX Error: Command ... already defined.
\end{verbatim}
In this case you should choose another name, and try again.
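For example, attempting to redefine a command that \LaTeX\ already provides fails with the error above, while a fresh name is accepted (the macro names here are purely illustrative):

```latex
\newcommand{\vec}{\mathbf}      % fails: \vec is already defined by LaTeX
\newcommand{\opset}[1]{\{#1\}}  % succeeds, assuming \opset is not yet defined
```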
Such macros must be in a place where they can easily be found and
modified by the journal's editors or typesetter. They must be gathered
together in the preamble of your input file, or in a separate
\verb"macros.tex" file with the command \verb"\input{macros}" in the
preamble. Macro definitions must not be scattered about your document
where they are likely to be completely overlooked by the typesetter.
The same applies to font definitions that are based on Computer Modern
fonts. These must be changed by the typesetter to use the journal's
correct typeface. In this case, you should draw attention to these font
definitions, both on the hard copy that you submit for publication and by
placing a comment in your input file just before the relevant definitions.
\section{Some guidelines for using standard facilities}
The following notes may help you achieve the best effects with the JFP class file.
\subsection{Sections}
\LaTeX\ provides five levels of section headings and they are all
defined in the JFP class file:
\begin{itemize}
\item[] Heading A -- \verb"\section{...}"
\item[] Heading B -- \verb"\subsection{...}"
\item[] Heading C -- \verb"\subsubsection{...}"
\item[] Heading D -- \verb"\paragraph{...}"
\item[] Heading E -- \verb"\subparagraph{...}"
\end{itemize}
Section numbers are given for section, subsection, and subsubsection headings.
\subsection{Figures and tables}
The \texttt{figure} and \texttt{table} environments are implemented as described in
the \LaTeX\ Manual to
provide consecutively numbered floating inserts for illustrations and tables
respectively.
The standard inserts and their captions are formatted centred.
Line breaks in captions can be inserted as required using \verb"\\".
\subsubsection{Illustrations (or figures)}
The JFP class will cope with most positioning of your illustrations
and you should not normally use the optional positional qualifiers on
the \verb"figure" environment which would override these decisions.
Figure captions should be below the figure itself, therefore the
\verb"\caption" command should appear after the figure or space left
for an illustration.
Figures in JFP will frequently illustrate programs, as shown in
section~\ref{sec-programs} of this guide. Figure~\ref{sample-figure}
shows an example of space left above a caption for artwork to be pasted in.
This was produced with the following commands:
\begin{verbatim}
\begin{figure}
\vspace{5cm}
\caption{An example figure with space for artwork.}
\label{sample-figure}
\end{figure}
\end{verbatim}
\begin{figure}
\vspace{5cm}
\caption{An example figure with space for artwork.}
\label{sample-figure}
\end{figure}
The vertical depth should correspond roughly to the artwork you will submit;
it will be adjusted to fit the final artwork exactly.
If your illustration extends over two~pages, you can use the
\verb"\continuedfigure" facility. To use this, you key the figure
caption for the second figure as follows:
\begin{verbatim}
\begin{figure}
\continuedfigure
\vspace{80pt}
\caption{First figure, continued.}
\label{continued}
\end{figure}
\end{verbatim}
This ensures that the figure counter does not get incremented, and at the
same time adds the word (cont.) to the caption. You may still use labels
and references for this figure.
\subsubsection{Tables}
The JFP class file will cope with most positioning of your tables
and you should not normally use the optional positional qualifiers on the
\verb"table" environment which would override these decisions.
Normal journal style sets the table caption first, followed by a double
rule, the table body and a double rule at the bottom. Single rules and
spanner rules (\verb"\cline") can be used to separate headings from the
columns. For example, Table~\ref{sample-table} is produced using the
following commands:\par
{\small
\begin{verbatim}
\begin{table}
\caption{Results of Overloading for 3 Experimental Setups}
\label{sample-table}
\begin{minipage}{\textwidth}
\begin{tabular}{lcrrrrr}
\hline\hline
Program& Expt.&
CPU\footnote{Seconds of elapsed time on an unloaded Sun 3/50.}&
RelCPU\footnote{CPU Time relative to experiment (a).}& GC&
Mem\footnote{Bytes of heap used over the duration of the program.}&
RelMem\footnote{Memory usage relative to experiment (a).}\\
\hline
8 Queens& (a)& 2.88& 1.00& 6& 1.7M& 1.00\\
& (b)& 32.51& 11.29& 193& 48.9M& 28.76\\
& (c)& 7.90& 2.74& 42& 11.3M& 6.65\\
\noalign{\vspace {.5cm}}
Primes& (a)& 4.89& 1.00& 19& 5.3M& 1.00\\
& (b)& 47.54& 9.72& 204& 54.5M& 10.28\\
& (c)& 10.08& 2.06& 47& 13.0M& 2.45\\
\noalign{\vspace {.5cm}}
Nfib& (a)& 21.65& 1.00& 161& 40.4M& 1.00\\
& (b)& 221.65& 10.24& 1382& 349.0M& 8.64\\
& (c)& 21.30& 0.98& 161& 42.0M& 1.03\\
\noalign{\vspace {.5cm}}
KWIC& (a)& 7.07& 1.00& 15& 6.3M& 1.00\\
& (b)& 34.55& 4.89& 109& 47.8M& 7.59\\
& (c)& 31.62& 4.47& 53& 45.0M& 7.14\\
\hline\hline
\end{tabular}
\vspace{-2\baselineskip}
\end{minipage}
\end{table}
\end{verbatim}}
\begin{table}
\caption{Results of Overloading for 3 Experimental Setups}
\label{sample-table}
\begin{minipage}{\textwidth}
\begin{tabular}{lcrrrrr}
\hline\hline
Program& Expt.&
CPU\footnote{Seconds of elapsed time on an unloaded Sun 3/50.}&
RelCPU\footnote{CPU Time relative to experiment (a).}& GC&
Mem\footnote{Bytes of heap used over the duration of the program.}&
RelMem\footnote{Memory usage relative to experiment (a).}\\
\hline
8 Queens& (a)& 2.88& 1.00& 6& 1.7M& 1.00\\
& (b)& 32.51& 11.29& 193& 48.9M& 28.76\\
& (c)& 7.90& 2.74& 42& 11.3M& 6.65\\
\noalign{\vspace {.5cm}}
Primes& (a)& 4.89& 1.00& 19& 5.3M& 1.00\\
& (b)& 47.54& 9.72& 204& 54.5M& 10.28\\
& (c)& 10.08& 2.06& 47& 13.0M& 2.45\\
\noalign{\vspace {.5cm}}
Nfib& (a)& 21.65& 1.00& 161& 40.4M& 1.00\\
& (b)& 221.65& 10.24& 1382& 349.0M& 8.64\\
& (c)& 21.30& 0.98& 161& 42.0M& 1.03\\
\noalign{\vspace {.5cm}}
KWIC& (a)& 7.07& 1.00& 15& 6.3M& 1.00\\
& (b)& 34.55& 4.89& 109& 47.8M& 7.59\\
& (c)& 31.62& 4.47& 53& 45.0M& 7.14\\
\hline\hline
\end{tabular}
\vspace{-2\baselineskip}
\end{minipage}
\end{table}
Notice the use of the `\verb"\vspace{-2\baselineskip}"' command to remove the
unwanted vertical space from above the table footnotes in this example.
Captions for `continued' tables can be generated (in the same way as for figures)
using the \verb"\continuedtable" command. These should be positioned just before
the \verb"\caption" command in the appropriate table environment.
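By analogy with the continued figure shown earlier, a continued table might be keyed as follows (the caption text and label are illustrative):
\begin{verbatim}
\begin{table}
\continuedtable
\caption{Results of Overloading, continued.}
\label{continued-table}
...
\end{table}
\end{verbatim}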
The \verb"tabular" environment should be used to produce ruled tables;
it has been modified for the JFP class in the following ways:
\begin{enumerate}
\item Additional vertical space is inserted above and below a horizontal rule
(produced by \verb"\hline");
\item Tables are centred, and span the full width of the page; that is,
they are similar to the tables that would be produced by
\verb"\begin{minipage}{\textwidth}".
\end{enumerate}
Because of this reformatting, vertical rules should not be used;
furthermore, commands to
redefine quantities such as \verb"\arraystretch" should be omitted. If
the old tabular facilities are needed, there is a new environment,
\verb"oldtabular", which has none of the reformatting; it should be used
in exactly the same way.
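For example, a table that genuinely needs vertical rules or a custom \verb"\arraystretch" could be keyed with the compatibility environment (the contents are illustrative):
\begin{verbatim}
\begin{oldtabular}{|l|r|}
\hline
Program & CPU (s)\\
\hline
8 Queens & 2.88\\
\hline
\end{oldtabular}
\end{verbatim}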
\subsection{Appendices}
You should use the standard \LaTeX\ \verb"\appendix" command to introduce your appendices.
\subsection{Syntax}\label{subsec:target-syntax}
Figure~\ref{fig:target-syntax} presents \textsc{ExEff}\xspace's syntax. \textsc{ExEff}\xspace is a
type theory akin to System F\xspace~\citep{systemf}, where every term
encodes its own typing derivation. In essence, all abstractions and
applications that are implicit in \textsc{ImpEff}\xspace, are made explicit in \textsc{ExEff}\xspace via
new syntactic forms. Additionally, \textsc{ExEff}\xspace supports impredicative and higher-rank polymorphism, which is reflected
in the lack of discrimination between value types, qualified types and type
schemes; all non-computation types are denoted by $\vty$. While this design choice
is not strictly required for the purpose at hand, it makes for a cleaner system.
In short, \textsc{ExEff}\xspace relates to \textsc{ImpEff}\xspace the same way that System
F~\cite{girardthesis,reynolds-systemf-1,reynolds-systemf-2} relates to the
Hindley-Damas-Milner system~\cite{hindley,milner,DamasMilner}.
\paragraph{\bf Coercions}
Of particular interest is the use of explicit {\em subtyping coercions},
denoted by $\coercion$. \textsc{ExEff}\xspace uses these to replace the
implicit casts of \textsc{ImpEff}\xspace (Rules~\textsc{TmCastV} and~\textsc{TmCastC} in
Figure~\ref{fig:source-typing}) with explicit casts $(\cast{v}{\coercion})$ and
$(\cast{c}{\coercion})$. Essentially, coercions $\coercion$ are explicit
witnesses of subtyping derivations: each coercion form corresponds to a
subtyping rule.
The first coercion form, $\omega$, is a coercion variable, that is, a yet
unknown proof of subtyping. Forms $\langle \tyUnit \rangle$, $\refl{\alpha}$, and
$\dirtRefl{\Delta}$ witness reflexivity for the $\mathtt{Unit}$ type, type variables,
and dirts $\Delta$, respectively.
Most of the remaining coercion forms are simple congruences;
subtyping for skeleton abstraction, type abstraction, dirt abstraction, and
qualification is witnessed by forms $\forall \varsigma. \coercion$, $\forall
\alpha. \coercion$, $\forall \delta. \coercion$, and $\pi
\Rightarrow \coercion$, respectively;
similarly, syntactic forms $\coercion_1 \to \coercion_2$ and $\coercion_1 \Rrightarrow
\coercion_2$ capture injection for the arrow and the handler type constructor,
respectively.
Subtyping for computation types is witnessed by coercion form
$\coercion_1~!~\coercion_2$, which combines subtyping proofs of their
components.
Finally, coercion forms $\emptyset_\Delta$ and $\{\mathtt{Op}\} \cup \coercion$ are
concerned with dirt subtyping. Form $\emptyset_\Delta$ witnesses that the empty
dirt $\emptyset$ is a subdirt of any dirt $\Delta$. Lastly, coercion form
$\{\mathtt{Op}\} \cup \coercion$ witnesses that subtyping between dirts is preserved
under extension with a new operation.
\begin{myfigure}[t]
\input{Figures/target-typing-values}
\vspace{-5mm}
\caption{\textsc{ExEff}\xspace Value Typing}
\label{fig:target-typing-values}
\end{myfigure}
\begin{myfigure}[t]
\input{Figures/target-typing-computations}
\vspace{-5mm}
\caption{\textsc{ExEff}\xspace Computation Typing}
\label{fig:target-typing-computations}
\end{myfigure}
\paragraph{\bf A Note on Reflexivity of Arbitrary Types}
In contrast to our earlier work~\cite{esop2018}, \textsc{ExEff}\xspace (and the other calculi
we present in the remainder of this paper) does not syntactically allow for
reflexivity of arbitrary types. Nevertheless, we avoid notational burden and
throughout the paper write $\refl{\vty}$ to denote the coercion that witnesses
$\vty \le \vty$; such a coercion can be built by traversing the structure of
$\vty$ (see Appendix~\ref{appendix:target-additional}). A similar situation
arises when applying a type substitution on a coercion, but it can be remedied
in exactly the same way.
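To illustrate, $\refl{\vty}$ can be constructed by structural induction on $\vty$, using the reflexivity and congruence forms; a few representative (illustrative) cases are:
\[
\refl{\tyUnit} := \langle \tyUnit \rangle \qquad
\refl{\vty \to \cty} := \refl{\vty} \to \refl{\cty} \qquad
\refl{\vty\,!\,\Delta} := \refl{\vty}\,!\,\dirtRefl{\Delta}
\]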
One of the problems with reflexivity of arbitrary types is that it allows for
many trivially different proofs for the same constraint. The same is also true
for \emph{inversion coercions}, which are coercion formers that allow for
decomposition of coercion types. For example, our earlier
work~\cite{esop2018} included a coercion former $\leftinv{\coercion}$ which is
a proof of $\vty_2 \le \vty_1$, if $\coercion$ is a proof of $\vty_1 \to \cty_1
\le \vty_2 \to \cty_2$.
By removing both, we have managed to greatly simplify the proofs of
the metatheoretical properties of our calculi, since there are now far fewer
proofs for any given type inequality. Additionally, as we show in
Section~\ref{subsec:exeff-operational-semantics}, \textsc{ExEff}\xspace's operational
semantics inspect the coercions, so having uniqueness of proofs (coercions) is
essential.
The situation is quite different when it comes to dirts. Dirts take far
fewer forms than types do (and so do the coercions about them), and coercions
regarding dirts need never be inspected during evaluation. Hence, we do not
require unique coercion forms for dirt inequalities and can allow the simpler
and more conventional reflexivity coercions~$\refl{\Delta}$ for arbitrary dirts~$\Delta$.
\subsection{Typing}
\paragraph{\bf Value \& Computation Typing}
Typing for \textsc{ExEff}\xspace values and computations is presented in
Figures~\ref{fig:target-typing-values} and~\ref{fig:target-typing-computations}
and is given by two mutually recursive relations of the form
$\tcTrgVal{\Gamma}{v}{\vty}$ (values) and $\tcTrgComp{\Gamma}{c}{\cty}$
(computations). \textsc{ExEff}\xspace typing environments $\Gamma$ contain bindings for
variables of all sorts:
\[
\begin{array}{r@{~}c@{~}l}
\Gamma & ::= & \epsilon \mid
\Gamma, \varsigma \mid
\Gamma, \alpha : \tau \mid
\Gamma, \delta \mid
\Gamma, x : \vty \mid
\Gamma, \omega : \pi \\
\end{array}
\]
Typing is entirely syntax-directed. Apart from the typing rules for skeleton,
type, dirt, and coercion abstraction (and, subsequently, skeleton, type, dirt,
and coercion
application), the main difference between typing for \textsc{ImpEff}\xspace and \textsc{ExEff}\xspace lies
in the explicit cast forms, $(\cast{v}{\coercion})$ and
$(\cast{c}{\coercion})$.
Given that a value $v$ has type $\vty_1$ and that $\coercion$ is a proof that
$\vty_1$ is a subtype of $\vty_2$, we can upcast $v$ with an explicit
cast operation $(\cast{v}{\coercion})$. Upcasting for computations works
analogously.
\paragraph{\bf Well-formedness of Types, Constraints, Dirts \& Skeletons}
The definitions of the judgements that check the well-formedness of \textsc{ExEff}\xspace
value types ($\tcTrgVty{\Gamma}{\vty}{\tau}$), computation types
($\tcTrgCty{\Gamma}{\cty}{\tau}$), dirts ($\wfDirt{\Gamma}{\Delta}$), and
skeletons ($\tcSkeleton{\Gamma}{\tau}$) are equally straightforward as those
for \textsc{ImpEff}\xspace and can be found in Appendix~\ref{appendix:target-additional}.
\paragraph{\bf Coercion Typing}
Coercion typing formalizes the intuitive interpretation of coercions we gave in
Section~\ref{subsec:target-syntax} and takes the form
$\tcTrgCo{\Gamma}{\coercion}{\trgFullCt}$, defined in Appendix~\ref{appendix:target-additional}. It is essentially an
extension of the constraint entailment relation of
Figure~\ref{fig:source-coercion-typing}.
\subsection{Operational Semantics}
\label{subsec:exeff-operational-semantics}
\begin{myfigure}[t]
\input{Figures/target-opsem-partial}
\vspace{-7mm}
\caption{\textsc{ExEff}\xspace Operational Semantics (Selected Rules)}
\label{fig:target-opsem-partial}
\end{myfigure}
Figure~\ref{fig:target-opsem-partial} presents selected rules of \textsc{ExEff}\xspace's
small-step, call-by-value operational semantics. For lack of
space, we omit $\beta$-rules and other common rules and focus only on
cases of interest. The complete operational semantics can be found in
Appendix~\ref{appendix:target-additional}.
Firstly, one of the non-conventional features of our system lies in the
stratification of results in plain results and cast results:
\begin{center}\begin{shaded}\begin{minipage}{\columnwidth}
\small
\vspace{-1mm}
\[
\begin{array}{r@{~}c@{~}l}
\text{terminal value}~v^T & ::= &%
\mathtt{unit} \mid
\fun{x:\vty}{c} \mid
h \mid
\Lambda \varsigma. v \mid
\Lambda (\alpha : \tau). v \mid
\Lambda \delta. v \mid
\lambda (\omega : \pi). v \\
\text{value result}~v^R & ::= &%
v^T \mid
\cast{v^R}{(\coercion_1 \to \coercion_2)} \mid
\cast{v^R}{(\coercion_1 \Rrightarrow \coercion_2)} \mid
\cast{v^R}{(\forall \varsigma. \coercion)} \\
& \mid &%
\cast{v^R}{(\forall (\alpha : \tau). \coercion)} \mid
\cast{v^R}{(\forall \delta. \coercion)} \mid
\cast{v^R}{(\pi \Rightarrow \coercion)} \\
\text{terminal computation}~c^T & ::= &%
\return{v^R} \mid
\cast{c^T}{(\coercion_1~!~\coercion_2)} \\
\text{computation result}~c^R & ::= &%
c^T \mid
\operation{v^R}{y : \vty}{c} \\
\end{array}
\]
\end{minipage}\end{shaded}\end{center}
Terminal values $v^T$ represent conventional values, and value results
$v^R$ can either be plain terminal values $v^T$ or cast value results,
where we exclude reflexivity coercions, as those can be further reduced.
This stratification can also be found in Henglein's coercion
calculus~\citep{CoercionCalculus}, Crary's coercion calculus for inclusive
subtyping~\citep{crary}, and, more recently, in System F$_\text{C}$\xspace~\citep{systemfc}.
Computations evaluate either to a returned value or an operation call. Both can be further cast, though we are able to delegate any coercion on the operation call to its continuation, leading to a slightly different stratification than in values. The same is not true for returned values. Consider for example the expression
$(\cast{\return{5}}{\refl{\tyInt}\,!\,\emptyset_{\{\mathtt{Op}\}}})$, of type
$\tyInt\,!\,\{\mathtt{Op}\}$. We cannot reduce the expression further without
losing effect information; removing the cast would result in computation
$(\return{5})$, of type $\tyInt\,!\,\emptyset$. Even if we consider
type preservation only up to subtyping, the redex may still occur as a subterm
in a context that expects solely the larger type.
Secondly, we need to make sure that casts do not stand in the way of
evaluation. This is captured in the so-called ``push'' rules, all of which
appear in Figure~\ref{fig:target-opsem-partial}.
In relation $\smallStepVal{v}{v'}$, Rule~\textsc{VCast} evaluates under the coercion, while the
rest are push rules: whenever a redex is ``blocked'' due to a cast, we take
the coercion apart and redistribute it (in a type-preserving manner) over the
subterms, so that evaluation can progress.
\begin{example}
Consider the evaluation of $((\cast{(\Lambda \alpha. v)}{(\forall \alpha.
\coercion)})~\vty)$ (we elide skeleton annotations for clarity; they are
orthogonal to the task at hand). The evaluation is ``blocked'' because of the
type cast; in order to expose the redex $((\Lambda \alpha. v)~\vty)$ we need to
\emph{push} the coercion outside the redex, which we achieve using
Rule~\textsc{VPushTy}:
\[
(\cast{(\Lambda \alpha. v)}{(\forall \alpha. \coercion)})~\vty \leadsto_\mathrm{v} \cast{((\Lambda \alpha. v)~\vty)}{\coercion[\vty/\alpha]}
\]
Since the type cast now happens after the instantiation, we change the coercion
accordingly (to $\coercion[\vty/\alpha]$), to ensure that the type of the
expression remains the same as before (preservation).
Now using Rule~\textsc{CCast} we can continue with the evaluation of the redex
under the cast, thus obtaining:
\[
\cast{((\Lambda \alpha. v)~\vty)}{\coercion[\vty/\alpha]} \leadsto_\mathrm{v} \cast{v}{\coercion[\vty/\alpha]}
\]
The rest of the push rules behave similarly.
\end{example}
The situation in relation $\smallStepComp{c}{c'}$ is quite similar.
Rule~\textsc{CCast} continues evaluating the computation under the coercion.
Rule~\textsc{CPushApp} is a push rule for function application.
Rule~\textsc{CPushOp} pushes a coercion inside an operation-computation, illustrating
why the syntax for $c^R$ does not require casts on operation-computations;
we can always push the casts inside the continuation.
Rule~\textsc{CDoRet} is a $\beta$-reduction for sequencing and performs two
tasks at once. Since we know that the computation bound to $x$ calls no
operations, we
\begin{inparaenum}[(a)]
\item safely ``drop'' the impure part of coercions, and
\item substitute $x$ with $v^R$, cast with the pure part of coercions
(so that types are preserved).
\end{inparaenum}
Rule~\textsc{CDoOp} handles operation calls in sequencing computations. If an
operation is called in a sequencing computation, evaluation is suspended and
the rest of the computation is captured in the continuation.
The last four rules are concerned with effect handling.
Rule~\textsc{CPushHandle} pushes a coercion on the handler ``outwards'', such that the
handler can be exposed and evaluation is not stuck (similarly to the push rule
for term application).
Rule~\textsc{CHandleRet} behaves similarly to the push/beta rule for sequencing computations.
Finally, the last two rules are concerned with handling of operations. Rule~\textsc{CHandleOp1}
captures cases where the called operation is handled by the
handler, in which case the respective clause of the handler is called. As
illustrated by the rule, like~\citet{DBLP:journals/corr/Pretnar13}, \textsc{ExEff}\xspace features
{\em deep handlers}: the continuation is also wrapped within a
$\mathtt{with}$-$\mathtt{handle}$ construct.
Rule~\textsc{CHandleOp2} captures cases where the operation is not covered by the
handler and thus remains unhandled.
We have shown that \textsc{ExEff}\xspace is type safe:
\begin{theorem}[Type Safety]
\label{thm:safety}
\vspace{-1mm}
\begin{itemize}
\item
If $\tcTrgVal{\Gamma}{v}{\vty}$ then either $v$ is a value result or
$\smallStepVal{v}{v'}$ and $\tcTrgVal{\Gamma}{v'}{\vty}$.
\item
If $\tcTrgComp{\Gamma}{c}{\cty}$ then either $c$ is a computation result or
$\smallStepComp{c}{c'}$ and $\tcTrgComp{\Gamma}{c'}{\cty}$.
\end{itemize}
\end{theorem}
\section{Introduction}
Zuo (2019) (Z19) addressed the computation of projection regression depth (PRD) and its induced deepest estimator (aka regression median) $\bs{\beta}^*_{PRD}$, which were introduced in Zuo (2018) (Z18).
By modifying the definition of the univariate sample median, Z19 achieved the exact computation of the unfitness (UF) (defined in Section 2), or equivalently the PRD. The approach, however, sacrifices the regression, scale, and affine invariance of PRD and the regression, scale, and affine equivariance of $\bs{\beta}^*_{PRD}$ (for related definitions, see Z18).\vs
A natural question is: can one compute the UF exactly without modifying the definition of the univariate median, thereby keeping these very desirable properties? This article presents a positive answer to the question.\vs
Another major issue with Z19 is that its algorithm for the computation of $\bs{\beta}^*_{PRD}$ is very slow.
Can the speed of the algorithm be improved, so that it is more feasible and competitive in practice, while its accuracy is maintained or even improved?
\vs
The second major contribution of this article is a much faster algorithm for $\bs{\beta}^*_{PRD}$, which in some cases runs more than $100$ times faster than that of Z19 while always achieving better accuracy, or relative efficiency (i.e., smaller empirical mean squared error).\vs
The rest of the article is organized as follows. Section 2 presents the projection regression depth (PRD) and its induced median $\bs{\beta}^*_{PRD}$. Section 3 addresses the exact computation of PRD and approximate computation of $\bs{\beta}^*_{PRD}$.
Section 4 is devoted to the examples of the exact computation of PRD as well as approximate computation of $\bs{\beta}^*_{PRD}$.
Section 5 introduces three PRD-induced regression estimators that run much faster (up to $300$ times) than that of Z19 while maintaining small empirical mean squared errors; this constitutes the third major contribution of this article.\vs
Throughout, the linear regression model considered is:
\begin{eqnarray}
y&=&\mathbf{x}'\boldsymbol{\beta}+{{e}}, \label{eqn.model}
\end{eqnarray}
where $'$ denotes the transpose of a vector, and random vector $\mathbf{x}=(x_1,\cdots, x_p)'$ and parameter vector $\boldsymbol{\beta}$ are in $\R^p$ ($p\geq2$) and random variables $y$ and ${e}$ are in $\R^1$.
If $\bs{\beta}=(\beta_0, \bs{\beta}'_1)'$ and $x_1=1$, then one has $y=\beta_0+\mb{x}'_1\bs{\beta}_1+{e}$, where $\mb{x}_1=(x_2,\cdots, x_p)' \in \R^{p-1}$.
Let $\mb{w}=(1,\mb{x}'_1)'$. Then $y=\mb{w}'\bs{\beta}+{e}$. We use this model or (\ref{eqn.model}) interchangeably depending on the context.
\vs
\section{Projection regression depth and its induced median}
Z18 introduced the PRD. For a given candidate parameter $\bs{\beta}\in \R^p$, it is defined based on the UF (unfitness) as:
\be
\mbox{UF}(\bs{\beta};F_{(\mb{x}', y)})=\sup_{\mb{v}\in \mbs^{p-1}} \mbox{UF}_{\mb{v}}(\bs{\beta}; F_{(\mb{x}', y)}):=\sup_{\mb{v}\in \mbs^{p-1}}|{R}(F_{(\mb{w'}\mb{v},~ y-\mb{w'}\bs{\beta})})|\big/{S}(F_y), \label{UF.eqn}
\ee
\vs
\be
\mbox{PRD}(\bs{\beta};F_{(\mb{x}', y)})=1/(1+\mbox{UF}(\bs{\beta};F_{(\mb{x}', y)})),\label{PRD.eqn}
\ee
\vs
\noin
where $F_{\mb{Z}}$ stands for the distribution of the d-dimensional random vector $\mb{Z} \in\R^d$ for any $d$,
$\mb{w'}=(1,\mb{x}')\in\R^p$, $\mbs^{p-1}=\{\mb{u}\in\R^p:~\|\mb{u}\|=1\}$. Throughout, $R$ will be restricted to the univariate regression functional of the form $R(F_{(\mb{w'}\mb{v},~ y-\mb{w'}\bs{\beta})})=T\big(F_{(y-\mb{w}'\bs{\beta})/{\mb{w}'\mb{v}}}\big)$ and it is regression, scale, and affine equivariant (see page 116 of Rousseeuw and Leroy (1987)(RL87) for definitions). $T$ could be a univariate location functional that is location, scale and affine equivariant and $S$ is a scale functional that is translation invariant and scale equivariant (see pages 158-159 of RL87 for definitions), and $S(F_y)$ does not depend on $\mb{v}$ and $\bs{\beta}$.
\vs
For robustness considerations, in the sequel $(T,S)$ is the fixed pair $(\mbox{Med}, \mbox{MAD})$, that is, the median (Med) and median of absolute deviations (MAD) pair.
Hereafter, we write $\text{Med}(Z)$ rather than
$\text{Med}(F_Z)$.
This special choice of $T$ and $S$ gives
\bee
R(F_{(\mb{w'}\mb{v},~ y-\mb{w'}\bs{\beta})})&=&\text{Med}_{\mb{w'}\mb{v}\neq 0}\big(\frac{y-\mb{w}'\bs{\beta}}{\mb{w'}\mb{v}}\big),\\[1ex]\label{spesific-T.eqn}
S(F_y)&=& \text{MAD}(F_y). \label{S.eqn}
\ene
We have the \emph{unfitness} (UF) of $\bs{\beta}$ as
\be
\text{UF}(\bs{\beta}; F_{(\mb{x}', y)})=\sup_{\mb{v}\in \mbs^{p-1}}\Big|\text{Med}_{\mb{w'}\mb{v}\neq 0}\big(\frac{y-\mb{w}'\bs{\beta}}{\mb{w'}\mb{v}}\big)\Big|\bigg/ \text{MAD}(F_y),
\ee
and the \emph{projection regression depth} (PRD) of $\bs{\beta}$ as
\be
\text{PRD}\left(\bs{\beta}; F_{(\mb{x}', y)}\right)=\inf_{\mb{v}\in \mbs^{p-1},\mb{w'}\mb{v}\neq 0}
\frac{\text{MAD}(F_y)}{\text{MAD}(F_y)+\Big|\text{Med}\big(\frac{y-\mb{w}'\bs{\beta}}{\mb{w'}\mb{v}}\big)\Big|}. \label{special-PRD.eqn}
\ee
\vs
\noin
Applying the min-max (or max-min) scheme, we obtain the maximum (deepest) \emph{projection regression depth estimating functional} (median) $\bs{\beta}^*_{PRD}$ (also denoted by $T^*_{\text{PRD}}$)
w.r.t. the pair $(T,S)$
\begin{eqnarray}
\bs{\beta}^*_{PRD}(F_{(\mb{x}', y)})&=&\arg\!\min_{\boldsymbol{\beta}\in \R^p}\mbox{UF}(\boldsymbol{\beta}; ~F_{(\mb{x}', y)}) \label{eqn.T*}\\[1ex]
&=&\arg\!\max_{\bs{\beta}\in\R^p}\text{PRD}\left(\boldsymbol{\beta}; ~F_{(\mb{x}', y)}\right).\nonumber
\end{eqnarray}
PRD and $\bs{\beta}^*_{PRD}$ satisfy desirable properties, such as regression, scale and affine invariance and equivariance, respectively, see Z18
for definitions and more detailed discussions. These desirable properties are lost in the empirical case when the sample median is modified as in Z19.
\section{Computational problems}
\subsection{Exact computation of PRD} \label{EX-prd.section}
For a given $\bs{\beta}$ and sample $\mb{Z}^{(n)}=\{(\mb{x}'_i, y_i), i=1,\cdots, n\}$ in $\R^p$, the computation of PRD$(\bs{\beta}, F^n_{\mb {Z}})$, or equivalently of UF$(\bs{\beta}, F^n_{\mb{Z}})$, is to compute the quantity below:
\be
\mbox{UF}(\bs{\beta}; F^n_{\mb{Z}})=\sup_{\mb{v}\in \mbs^{p-1}}\Big|\text{Med}_{\mb{w}'_i\mb{v}\neq 0}\big\{\frac{y_i-\mb{w}'_i\bs{\beta}}{\mb{w}'_i\mb{v}}\big\}\Big|\bigg/S_y, \label{UF-1.eqn}
\ee
where $F^n_{\mb{Z}}$ is the empirical distribution based on $\mb{Z}^{(n)}$, $\mb{w}'_i=(1,\mb{x}'_i)$ and $S_y=\text{MAD}_i\{y_i\}$. Hereafter we assume that \tb{(A1)}: $P(\mb{w}'\mb{v}=0)=0$,~$\forall~ \mb{v}\in\mbs^{p-1}$;
and \tb{(A2)} $P(r(\bs{\beta})=0)=0$, where $r(\bs{\beta})=y-\mb{w}'\bs{\beta}$,
~$\forall ~\bs{\beta}\in \R^p$.
\tb{(A1)-(A2)} hold automatically if $(\mb{x}', y)'$ has a density, or if $\mb{x}$ does not concentrate on a single $(p-2)$ dimensional hyperplane in $\mb{x}$ space and any $(p-1)$ dimensional hyperplane determined by $r(\bs{\beta})=0$ in $(\mb{x}', y)'$ space does not contain any probability mass.
\vs
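Before turning to the exact algorithms, it may help intuition to note that the empirical UF in (\ref{UF-1.eqn}) can be approximated by replacing the supremum over $\mbs^{p-1}$ with a maximum over randomly drawn unit directions. The following Python sketch (ours; the function names are illustrative, and the random-direction search yields only a lower bound on the exact supremum) implements this:

```python
import numpy as np

def unfitness(beta, X, y, n_dirs=1000, rng=None):
    """Approximate UF(beta; F^n) by maximizing over random unit directions v.

    X : (n, p-1) array of carriers; an intercept column is prepended
    to form w_i = (1, x_i')'.  The sup over the unit sphere is replaced
    by a maximum over n_dirs random draws, so the result is a lower
    bound on the exact UF.
    """
    rng = np.random.default_rng(rng)
    n = len(y)
    W = np.column_stack([np.ones(n), X])           # w_i = (1, x_i')'
    r = y - W @ beta                               # residuals y_i - w_i' beta
    S = np.median(np.abs(y - np.median(y)))        # MAD(F_y), no constant
    V = rng.standard_normal((n_dirs, W.shape[1]))
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit directions
    best = 0.0
    for v in V:
        denom = W @ v
        keep = denom != 0                          # drop w_i'v = 0 terms
        best = max(best, abs(np.median(r[keep] / denom[keep])))
    return best / S

def prd(beta, X, y, **kw):
    """PRD(beta) = 1 / (1 + UF(beta)), per (PRD.eqn)."""
    return 1.0 / (1.0 + unfitness(beta, X, y, **kw))
```

Note that a deeper $\bs{\beta}$ (larger PRD) corresponds to a smaller UF; the true parameter of a noise-free sample attains UF $=0$, i.e. PRD $=1$.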
For simplicity of description, we write $\mb{t}'_i=\mb{w}_i'/r_i(\bs{\beta})$, where
$r_i(\bs{\beta})=y_i-\mb{w}'_i\bs{\beta}$. Now the computation of $\mbox{UF}(\bs{\beta}; F^n_{\mb{Z}})$ in (\ref{UF-1.eqn}) is equivalent to
the computation of
\be
\mbox{UF}(\bs{\beta}; F^n_{\mb{Z}})=\sup_{\mb{v}\in \mbs^{p-1}}\bigg|\text{Med}_{\mb{t}'_i\mb{v}\neq 0 }\big\{\frac{1}{\mb{t}'_i\mb{v}}\big\}\bigg|\bigg/S_y .\label{UF-2.eqn}
\ee
Again for simplicity of description, we write $k^{\mb{v}}_i=1/\mb{t}_i'\mb{v}$ and $u^{\mb{v}}_i=\mb{t}_i'\mb{v}$; both are well defined almost surely (a.s.) under \tb{(A1)-(A2)}. Without loss of generality,
we hereafter assume that $S_y=1$ (since it does not depend on $\mb{v}$ or $\bs{\beta}$). The UF$(\bs{\beta};F^n_{\mb{Z}})$ in (\ref{UF-2.eqn}) is then
\be
\mbox{UF}(\bs{\beta}; F^n_{\mb{Z}})=\sup_{\mb{v}\in\mbs^{p-1}}\bigg|\text{Med}_i\big\{k^{\mb{v}}_i\big\} \bigg| :=\sup_{\mb{v}\in\mbs^{p-1}}\bigg|g(\mb{v}) \bigg|.
\label{UF-3.eqn}
\ee
\vs
The exact computation of (\ref{UF-3.eqn}) above is still very challenging, if not impossible.
Let $k^{\mb{v}}_{(1)}\leq k^{\mb{v}}_{(2)}\leq\cdots\leq k^{\mb{v}}_{(n)}$ be the ordered values of the $k^{\mb{v}}_i$.
Partition $\mbs^{p-1}$ into two disjoint parts
\be
\mc{S}_1=\{\mb{v}\in \mbs^{p-1}:~ k^{\mb{v}}_{(1)}<0 ~\mbox{and}~ k^{\mb{v}}_{(n)}>0\};~~~\mc{S}_2=\{\mb{v}\in\mbs^{p-1}:~ k^{\mb{v}}_{(1)}> 0 ~\mbox{or}~ k^{\mb{v}}_{(n)}< 0\}. \label{Si.eqn}
\ee
It is readily seen that both $\mc{S}_1$ and $\mc{S}_2$ are symmetric about the origin; that is, if $\mb{v} \in \mc{S}_i$ then $-\mb{v} \in \mc{S}_i$.
Now the UF$(\bs{\beta};F^n_{\mb{Z}})$ in (\ref{UF-3.eqn}) can be expressed as follows:
\be
\mbox{UF}(\bs{\beta}; F^n_{\mb{Z}})=\max\big\{\sup_{\mb{v}\in \mc{S}_1}|g(\mb{v})|, ~\sup_{\mb{v}\in \mc{S}_2}|g(\mb{v})|\big\}. \label{UF-4.eqn}
\ee
\vs
For a given sample $\mb{Z}^{(n)}:=\{(\mb{x}'_i, y_i), ~ i=1,\cdots, n\}$, $\bs{\beta}$ in $\R^p$ and $\mb{v}\in\mbs^{p-1}$, since $k^{\mb{v}}_{(1)}\leq k^{\mb{v}}_{(2)}\leq \cdots \leq k^{\mb{v}}_{(n)}$ are the ordered values of $k^{\mb{v}}_{i}= 1/\mb{t}'_i\mb{v}$, we have $1/\mb{t}'_{i_1}\mb{v}\leq 1/\mb{t}'_{i_2}\mb{v}\leq \cdots \leq 1/\mb{t}'_{i_n}\mb{v}$ for some permutation $\{i_1,\cdots, i_n\}$ of $\{1, 2, \cdots, n\}$.
Similarly, $u^{\mb{v}}_{(1)}\leq u^{\mb{v}}_{(2)}\leq \cdots \leq u^{\mb{v}}_{(n)}$ corresponds to a permutation $\{j_1,\cdots, j_n\}$ such that
$u^{\mb{v}}_{j_1}\leq u^{\mb{v}}_{j_2}\leq \cdots \leq u^{\mb{v}}_{j_n}$ for $u^{\mb{v}}_i=\mb{t}'_i\mb{v}$.
\vs
\noin
\tb{Proposition 3.1}: Assume \tb{(A1)-(A2)} hold. Let $N^{-}_{\mb{v}}:=\sum_{i=1}^n\mb{I}(k_i^{\mb{v}}<0)$. The unfitness of $\bs{\beta}$ in (\ref{UF-1.eqn}) can be computed equivalently via
(\ref{UF-4.eqn}). The latter can be computed
as follows.
\vs
\noin
Denote $n1:=\lfloor(n+1)/2\rfloor$ and $n2:=\lfloor(n+2)/2\rfloor$, where $\lfloor \cdot\rfloor$ is the floor function. \vs
\noin
(i) For $\mb{v} \in \mc{S}_2$,
\[
\sup_{\mb{v}\in \mc{S}_2}|g(\mb{v})|= \left\{
\begin{array}{ll}
\max_{\mb{v}\in \mc{S}_2} \frac{(\mb{t}'_{i_{n1}}+\mb{t}'_{i_{n2}})\mb{v}\big/2} {\mb{v}'\mb{t}_{i_{n1}}\mb{t}'_{i_{n2}}\mb{v}}
&\mbox{~if~} N^{-}_{\mb{v}}=0,
\\[3ex]
-\min_{\mb{v}\in \mc{S}_2} \frac{(\mb{t}'_{i_{n1}}+\mb{t}'_{i_{n2}})\mb{v}\big/2} {\mb{v}'\mb{t}_{i_{n1}}\mb{t}'_{i_{n2}}\mb{v}} &\mbox{~if~} N^{-}_{\mb{v}}=n.
\end{array}
\right.
\label{UF-34.eqn}
\]
\vs
\noin
(ii) For $\mb{v}\in \mc{S}_1$, let $m$ be a non-negative integer.
\begin{itemize}
\item[]
if $n=2m+1$,
\[
\sup_{\mb{v}\in \mc{S}_1}|g(\mb{v})|=\left\{
\begin{array}{ll}
-1\Big/\max_{\mb{v}\in \mc{S}_1} \mb{t}'_{i_{n1}}\mb{v}
& \mbox{if $k^{\mb{v}}_{(n1)}<0$},\\[2ex]
1\Big/\min_{\mb{v}\in \mc{S}_1}\mb{t}'_{i_{n1}}\mb{v}
& \mbox{if $k^{\mb{v}}_{(n1)}>0$},
\end{array}
\right.
\]
\item[] if $n=2m+2$,
\[
\sup_{\mb{v}\in \mc{S}_1}|g(\mb{v})|=\left\{
\begin{array}{ll}
\bigg|\max_{\mb{v}\in \mc{S}_1} \frac{(\mb{t}'_{i_{n1}}+\mb{t}'_{i_{n2}})\mb{v}\big/2}{ \mb{v}'\mb{t}_{i_{n1}}\mb{t}'_{i_{n2}}\mb{v} } \bigg| &
~\mbox{if}~~ k^{\mb{v}}_{(n1)}<0 ~\mbox{and}~ k^{\mb{v}}_{(n2)}>0, \\[2.5ex]
\max_{\mb{v} \in \mc{S}_1} \frac{\big(\mb{t}'_{i_{n1}}+\mb{t}'_{i_{n2}}\big)\mb{v}\big/2}
{\mb{v}'\mb{t}_{i_{n1}}\mb{t}'_{i_{n2}} \mb{v}}
& ~\mbox{if}~~k^{\mb{v}}_{(n1)}>0,\\[3.5ex]
-\min_{\mb{v}\in \mc{S}_1}
\frac{\big(\mb{t}'_{i_{n1}}+\mb{t}'_{i_{n2}}\big)\mb{v}\big/2}
{\mb{v}'\mb{t}_{i_{n1}}\mb{t}'_{i_{n2}} \mb{v}}
& ~\mbox{if}~~ k^{\mb{v}}_{(n2)}<0.
\end{array}
\right.
\]
\end{itemize}
\vs
\noindent
\tb{Proof:} Note that under \tb{(A1)-(A2)}, the $\mc{S}_i$ ($i=1,2$) are closed sets a.s. In light of Proposition 2.1 and Corollary 2.1 of Z19 and the proofs there, the proof here follows immediately. The details are straightforward to verify and thus omitted. \hfill \pend
\vs
\noin
\tb{Remarks 3.1}:
The proposition lays a clear foundation for the exact computation of UF$(\bs{\beta}, F^n_{\mb{Z}})$, or equivalently of PRD$(\bs{\beta}, F^n_{\mb{Z}})=\big(1+\mbox{UF}(\bs{\beta}, F^n_{\mb{Z}})\big)^{-1}$.
\bi
\item[(I)] UF$(\bs{\beta}, F^n_{\mb{Z}})$ can be computed exactly via optimization over the closed sets $\mc{S}_i$. There are unified formulas over $\mc{S}_i$ for the distinct cases of permutations, and two types of optimization problems appear in the proposition:
\bi
\item[(i)]\tb{Type I}: $\min$ (or $\max$) of $\mb{c}'\mb{v}$ for $\mb{v}$ over a closed subset of $\mbs^{p-1}$, with $\mb{c}\in\R^p$.
\item[(ii)] \tb{Type II}: $\min$ (or $\max$) of $\frac{\mb{b}'\mb{v}}{\mb{v}'\mb{A}\mb{v}}$ for $\mb{v}$ over a closed subset of $\mbs^{p-1}$, with $\mb{b}\in\R^p$ and $\mb{A}\in \R^{p\times p}$ ($\mb{A}$ is symmetric and positive definite over the subset).
\ei
\item[(II)]
$\mb{b}$, $\mb{c}$, and $\mb{A}$ above are determined by $\{\mb{t}_i\}$ and depend on $\mb{v}$ only through the permutation $i_1,\cdots, i_n$ induced by the projection of $\{\mb{t}_i\}$ onto $\mb{v}$. That is, for a given sample and $\bs{\beta}\in \R^p$, and for a $\mb{v}\in \mbs^{p-1}$ (or, more generally, over the set of $\mb{v}$ sharing a fixed permutation $i_1,\cdots, i_n$ of $\{1, 2,\cdots, n\}$), $\mb{b}$, $\mb{c}$, and $\mb{A}$ are constant vectors and a constant matrix.\vs
Hence, with the constraints discussed in the sequel, \tb{Type I} optimization can be solved by linear programming, and \tb{Type II} optimization can be solved by gradient-type, Newton-type, or interior-point methods (see, e.g., Numerical Recipes (2007) Chapter 10, Freund (2004), and Boyd and Vandenberghe (2004)), among others.
\item[(III)] When $n$ is odd, only one type of optimization problem, \tb{Type I}, arises, and the exact computation is much easier. To deal with the even-$n$ case, Z19 modified the definition of the regular sample median (adopting the lower median) so as to reduce the exact computation to just a \tb{Type I} optimization problem.
\hfill \pend
\ei
\noin
To obtain the exact value of UF$(\bs{\beta}, F^n_{\mb{Z}})$ via the proposition,
it seems that one first has to know the sets $\mc{S}_i$, $i=1,2$ (or, more accurately, their boundaries). $\mc{S}_2$ can be empty. In fact, when the convex hull formed by all the $\mb{t}_i$'s contains the origin, $\mc{S}_1=\mbs^{p-1}$. Fortunately, we do not have to identify $\mc{S}_i$, $i=1,2$. \v
Since there is no single formula over $\mc{S}_i$ in the proposition, for the exact computation we would have to further partition $\mc{S}_i$ into disjoint pieces: for example, $\mc{S}_1$ into five pieces and $\mc{S}_2$ into two, according to the cases listed in Proposition 3.1. The latter task is no easier than identifying $\mc{S}_i$ itself.
For example, identifying all $\mb{v} \in\mc{S}_1$ such that $k^{\mb{v}}_{(n1)}>0$ in the even-$n$ case is not straightforward at all. We therefore seek other approaches below.\vs
For a given sample $\mb{Z}^{(n)}$, $\bs{\beta} \in \R^p$, and $\mb{v} \in \mbs^{p-1}$, there is a unique permutation $i_1,\cdots, i_n$ of $\{1,2,\cdots, n\}$ induced by $k^{\mb{v}}_i=1/ \mb{t}'_i \mb{v}$. The $k^{\mb{v}}_{i_j}$ ($j=1,\cdots, n$) are all we need for the calculation in (\ref{UF-3.eqn}) or Proposition 3.1. However, a permutation $i_1,\cdots, i_n$ corresponds to a set of $\mb{v}\in\mbs^{p-1}$, each member of which produces the same permutation via $\{k^{\mb{v}}_i\}$.
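\vs
\noin
As an illustration only (with a hypothetical sample $\{\mb{t}_i\}$; the helper below is ours, not from Z19), the permutation induced by a direction $\mb{v}$ is obtained by sorting the $k^{\mb{v}}_i$:

```python
import numpy as np

def induced_permutation(T, v):
    """Return the permutation i_1, ..., i_n (0-indexed) sorting
    k_i^v = 1 / (t_i' v) in nondecreasing order; assumes t_i' v != 0."""
    k = 1.0 / (T @ v)      # k_i^v for each row t_i of T
    return np.argsort(k)

# hypothetical sample of n = 4 points t_i in R^2
T = np.array([[1.0, 0.5], [2.0, -1.0], [0.5, 3.0], [-1.0, 3.0]])
v = np.array([1.0, 1.0]) / np.sqrt(2.0)
perm = induced_permutation(T, v)   # the ordering used in Proposition 3.1
```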
\vs
That is, a fixed permutation corresponds to a unique piece of $\mbs^{p-1}$ (the surface of the unit sphere). There are
at most $n!$ possible permutations, hence at most $n!$ disjoint pieces partitioning $\mbs^{p-1}$.
By Proposition 2.2 of Z19, each piece belongs to either $\mc{S}_1$ or $\mc{S}_2$.
Selecting one $\mb{v}$ from each piece suffices for the exact computation of UF$(\bs{\beta}, F^n_Z)$ via Proposition 3.1.
The cost, however, is approximately of order $O(n^{n+1/2})$ (by Stirling's formula), not counting the optimization cost, an unaffordable magnitude. We therefore seek to merge some pieces.
\vs
\begin{comment}
For each permutation, or equivalently ordered values of $\{k^{\mb{v}}_i=1/\mb{t}'_i\mb{v}\}$, $k^{\mb{v}}_{i_1}\leq k^{\mb{v}}_{i_2}\leq \cdots \leq k^{\mb{v}}_{i_n}$, what matters most is the middle two indices (or just one in odd $n$ case) of original points for the exact calculation via formula given in (\ref{UF-3.eqn}).
Obviously, there are at most $O(n^2)$ possible distinct middle pairs of indices. For $p=2$ or for odd $n$, it actually is $O(n)$).
\vs
The latter holds true obviously for odd $n$ while Z19 proves it implicitly for any $n$ in $p=2$ case.
Partitioning the entire $\mbs^{p-1}$ (or surface of the unit sphere) into $O(n^2)$ disjoint pieces (or just $n$ pieces in odd $n$ case)
is enough for the exact computation. So we now just need one $\mb{v}$ from each of $O(n^2)$ pieces, in contrast to $O(n^{n+1/2})$ above for the original approach. That is, we can merge all $O(n^{n+1/2})$
disjoint pieces into just $O(n^2)$ pieces for the exact computation.
\vs
The only problem left is how to identify those $O(n^2)$ pieces of the surface of the unit sphere, the cost of that will be ironically much more than $O(n^2)$ (see Lemma 2.1 of Z19 for the median sequence in the $p=2$ case). We have to consider other approach.\vs
\end{comment}
In light of Observation 3 (O3) in Z19 on $\mb{v}$-induced permutations (the circular or spherical sequence), when $\mb{v}$ moves on the surface of the unit sphere, its induced permutation changes only when it crosses a hyperplane ($H_0$) that goes through the origin and is perpendicular to another hyperplane ($H_1$) formed by
sample points from $\{\mb{t}_i\}$.
\vs
The former hyperplanes (the $H_0$'s, each containing the origin) cut $\mbs^{p-1}$ into $N(n,p)$ disjoint pieces $P_k$ ($k=1,\cdots, N(n, p)$), where $N(n, p):= 2\sum_{i=0}^{p-1}{q-1\choose i}$ (see Winder (1966)) and $q:=N_n^p(\{\mb{t}_i\})$ is the total number of distinct $(p-1)$-dimensional hyperplanes formed by
points from $\{\mb{t}_i\}$; $q \leq {n \choose p}$. Assume $q> 1$. When $\mb{t}_1,\cdots, \mb{t}_n$ are in general position (see Z19 for the definition), $q={n \choose p}$. In the latter case, $N(n, p)=O(n^{p(p-1)})$, smaller than the $O(n^{n+1/2})$ above if $n\geq p (p-1)$.
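\vs
\noin
Under general position ($q={n\choose p}$), the count $N(n,p)$ can be evaluated directly; a small sketch (our illustration of the formula quoted above):

```python
from math import comb

def n_pieces(n, p):
    """N(n, p) = 2 * sum_{i=0}^{p-1} C(q-1, i) with q = C(n, p):
    the number of pieces into which the q hyperplanes H_0 cut S^{p-1},
    assuming the t_i are in general position (Winder's formula)."""
    q = comb(n, p)
    return 2 * sum(comb(q - 1, i) for i in range(p))
```

For instance, $n=4$, $p=2$ gives $q=6$ and $N(4,2)=12$, already well below the crude $n!=24$ bound.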
\vs
Each $P_k$ ($k=1,\cdots, N(n,p)$) corresponds to a unique permutation $\{i_1,\cdots, i_n\}$, that is, $1/\mb{t}'_{i_1}{\mb{v}_0}\leq 1/\mb{t}'_{i_2}{\mb{v}_0}\leq \cdots \leq 1/\mb{t}'_{i_n}{\mb{v}_0}$, $\forall~ \mb{v}_0\in P_k$. The latter in turn corresponds to a polyhedral cone (see Z19) which is determined by
\be
\mb{B}'\mb{v}\leq \mb{0}_{(n-1)\times 1}, \label{B.eqn}
\ee
where $\mb{v}\in \mbs^{p-1}$ and $\mb{B}=(B_1,\cdots, B_{n-1})_{p \times (n-1)}$ with $B_j:= \mb{t}_{i_j}-\mb{t}_{i_{j+1}}$ for $j=1,\cdots, N^{-}_{\mb{v}_0}$ and $B_j:= -(\mb{t}_{i_j}-\mb{t}_{i_{j+1}})$ for $j=N^{-}_{\mb{v}_0}+1,\cdots, (n-1)$, and the vector inequality is in the coordinate-wise sense.
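\vs
\noin
Checking whether a candidate direction lies in a given polyhedral cone (\ref{B.eqn}) is then a coordinate-wise test; a sketch with a hypothetical matrix $\mb{B}$ (ours, for illustration only):

```python
import numpy as np

def in_cone(B, v, tol=1e-12):
    """Coordinate-wise check of the constraint B'v <= 0 in (B.eqn)."""
    return bool(np.all(B.T @ v <= tol))

# hypothetical cone {v : v_1 - v_2 <= 0 and -v_1 <= 0}, i.e. 0 <= v_1 <= v_2
B = np.array([[1.0, -1.0],
              [-1.0, 0.0]])           # columns are B_1, B_2
v = np.array([1.0, 2.0]) / np.sqrt(5.0)
```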
\vs
By Proposition 2.2 of Z19, the entire $P_k$ belongs to exactly one of the $\mc{S}_i$. So as long as we have one $\mb{v}_0$ from each $P_k$, we can easily produce the permutation associated with $P_k$ and the induced $k^{\mb{v}_0}_i$, and determine which $\mc{S}_i$ and which formula of Proposition 3.1 to use. Coupled with the constraints $\mb{B}'\mb{v}\leq \mb{0}_{(n-1)\times 1}$ above, both Type I and Type II optimization problems in the proposition can be solved in linear time (note that $\mb{b},\mb{c}$, and $\mb{A}$ are constant over the entire piece $P_k$). The exact computation of UF$(\bs{\beta},F^{n}_{\mb{Z}})$ can thus be achieved with a worst-case time complexity of order $TC(n,p,N_{iter}):=O( N(n,p)(p^{2.5}+n\log n+np^{1.5}+npN_{iter}))$, where $N_{iter}$ is the number of iterations needed when solving the Type II optimization problem.
\vs
\noin
\tb{Theorem 3.1} Under \tb{(A1)-(A2)}, for a given sample $\mb{Z}^{(n)}$ and a $\bs{\beta}$ in $\R^p$, UF$(\bs{\beta},F^n_{\mb{Z}})$ (or PRD$(\bs{\beta}, F^n_{\mb{Z}})$)
can be computed exactly with the worst-case complexity of $TC(n,p,N_{iter})$.\vs
\noin
\tb{Proof}: ~
Obviously, exact computation is achieved if we can obtain the RHS of display (\ref{UF-4.eqn}). For the latter, we appeal to Proposition 3.1.
To implement the proposition, essentially, we need to solve the two types of optimization problems in the proposition.\vs
By the discussion immediately preceding the theorem, we know the key to the optimization problems is to identify all pieces $P_k$ ($k=1,\cdots, N(n,p)$) of $\mbs^{p-1}$, equivalently, all $N(n,p)$ distinct permutations of $\{1,2,\cdots, n\}$. The latter is equivalent to finding, for each $P_k$, a unit vector $\mb{u}\in P_k$ that produces the unique fixed permutation over $P_k$.
\vs
Each $P_k$ is the intersection of $\mbs^{p-1}$ and the
polyhedral cone formed by the constraint $\mb{B}'\mb{v}\leq \mb{0}_{(n-1)\times 1}$. An edge (or ridge) of the cone, which is shared with an adjacent cone, can be used to find the $\mb{u}$ above. In other words, it is the intersection of (at least) two hyperplanes $H_0$'s which go through the origin and are perpendicular to two hyperplanes $H_1$'s, each of which is formed by points from $\{\mb{t}_i\}$, respectively.
\vs
The direction from the origin to any other point on the intersection hyperline of two hyperplanes $H_0$'s is the vector sought.
Denote the direction by $\mb{u}$ ($\mb{u}$ could also be obtained, at a greater cost, via the origin and any vertex of the cone through vertex enumeration
(see Bremner et al. (1998), Paindaveine and \v{S}iman (2012), and Liu and Zuo (2014))).\vs
Each $\mb{u}$ above lies on the boundary of $P_k$. It lies not only in a facet of one cone but also
in a facet of an adjacent cone sharing the common intersection hyperline (edge or ridge) with the former. A tiny perturbation
of $\mb{u}$ in opposite directions moves $\mb{u}$ into the interiors of the two adjacent cones. There might be more than two adjacent cones. Thus, every $\mb{u}$ might yield two or more new permutations (the scheme in the algorithm yields up to $8 \times(p-2)$ distinct ones for $p> 2$). \vs
Update the total number $N_{permu}$ of distinct permutations.
With respect to each distinct permutation, or equivalently over each $P_k$, update $\sup_{\mb{v}\in\mbs^{p-1}}|g(\mb{v})|$ according to Proposition 3.1 and carry out one of the two types of optimization.\vs
Repeat the above steps until $N_{permu}=N(n,p)$ or until UF cannot be improved after trying $\kappa p$ more distinct permutations ($\kappa$ is a positive integer; it could be, say, $10$, $20$, or even $50$).
\vs
The computational cost of each element of the description above is as follows.
\bi
\item[(a)] obtaining all $\{\mb{t}_i\}$ costs $O(np)$,
\item[(b)] calculating normal vectors $\mb{v}_i$ of $H^i_1$ and normal vectors $\mb{u}_i$ of $H^i_0$ ($i=1, 2$), and $\mb{u}$ that is perpendicular to $\mb{u}_i$, the total cost is $O(p^3)$,
\item[(c)] producing each permutation costs $O(n(p+\log n))$,
\item[(d)] updating $\sup_{\mb{v}\in\mbs^{p-1}}|g(\mb{v})|$ according to Proposition 3.1 costs $O(n\log n)$,
\item[(e)] linear programming costs $O(p^{1.5}n+p^{2.5})$ (see Lee and Sidford (2015), further improved by Cohen, Lee, and Song (2019)),
\item[(f)] for the Type II non-convex and nonlinear optimization problem, one can use the conjugate gradient method or, even better, the primal-dual interior-point method (Wright (1997), Morales et al. (2003)) combined with sequential quadratic programming (SQP) (Nocedal and Wright (2006)), e.g., the package LOQO (Vanderbei and Shanno (1999), Vanderbei (1999)), with cost
$O(npN_{iter})$, where $N_{iter}$ is the number of iterations needed in LOQO.
\ei
\vs
Keeping only the dominating terms, we thus have the overall worst-case time complexity $TC(n,p, N_{iter})=O(N(n,p)(n\log n+np^{1.5}+p^{2.5}+npN_{iter}))$.
\hfill \pend
\vs
\noin
\tb{Pseudocode} (Exact computation of UF$(\bs{\beta},F^n_{\mb{Z}})$, or equivalently of PRD$(\bs{\beta}, F^n_{\mb{Z}})$)
\bi
\item Calculate $\{\mb{t}_i\}$ and $N(n,p)$ (assume that $\{\mb{t}_i\}$ are in general position); set $N_{permu}=\mbox{UF}=0$.
\item While ($N_{permu}<N(n,p)$)
\bi
\item[1] Obtain $\mb{u}$ and its induced permutations; store the distinct permutations (and update their total number $N_{permu}$).
\item[2] Update UF$=\sup_{\mb{v}\in\mbs^{p-1}}|g(\mb{v})|$ via Proposition 3.1 and carry out the corresponding optimization.
\item[3]
If UF cannot be improved after trying $\kappa p$ more distinct permutations, break the loop.
\ei
\item Output UF (or $1/(1+\mbox{UF})$). \hfill \pend
\ei
\vs
\noin
\tb{Remarks 3.2}\vs
\tb{(I)} In the best scenario, $N(n,p)$ could be replaced by $O(n^2)$ (e.g., when $p=2$). Even in this case, the cost of exact computation is $O(n^2(n\log n+np^{1.5}+p^{2.5}+npN_{iter}))$, still unaffordable for large $n$ and $p$. An approximate algorithm, such as \tb{AA-UF-3} of Z19 with cost of order $O(N(np+p^3) + np)$, where the tuning parameter $N$ is the total number of normal directions of the hyperplanes formed by $p$ points from $\{\mb{t}_i\}$, is more feasible in practice. \vs
\tb{(II)} By altering the definition of the traditional sample median (using the ``low median''), Z19 also achieved the exact computation of UF$(\bs{\beta},F^n_{\mb{Z}})$ and proposed an algorithm with a slightly lower cost (no $npN_{iter}$ term). For this advantage, however, it pays the price of losing the affine invariance of the resulting PRD and the affine equivariance of the induced $\bs{\beta}^*_{PRD}$. \vs Furthermore, the $\bs{\beta}^*_{PRD}$ in Z19 can no longer recover the traditional univariate median when $p=1$. That is, the
maximum regression depth estimator in Z19 is not a generalization of the univariate median to regression in a multi-dimensional setting. We show in the next section that
our current version of $\bs{\beta}^*_{PRD}$ does recover the univariate median when $p=1$.
\hfill \pend
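\vs
\noin
As a point of reference for the approximate route, the objective UF$(\bs{\beta}, F^n_{\mb{Z}})=\sup_{\mb{v}}|\mbox{Med}_i\{(y_i-\mb{w}'_i\bs{\beta})/\mb{w}'_i\mb{v}\}|$ can be crudely approximated by sampling unit directions. The sketch below is ours and uses random directions rather than the structured normal directions of \tb{AA-UF-3}:

```python
import numpy as np

def uf_approx(y, W, beta, n_dir=1000, seed=0):
    """Approximate UF(beta, F^n_Z) = sup_v |Med_i (y_i - w_i'beta)/(w_i'v)|
    over random unit directions v (a crude stand-in for AA-UF-3 of Z19,
    which uses normals of hyperplanes through p sample points)."""
    rng = np.random.default_rng(seed)
    r = y - W @ beta                       # residuals
    best = 0.0
    for _ in range(n_dir):
        v = rng.standard_normal(W.shape[1])
        v /= np.linalg.norm(v)
        den = W @ v
        if np.any(den == 0.0):             # skip degenerate directions
            continue
        best = max(best, abs(np.median(r / den)))
    return best
```

For $p=1$ (a column of ones), every sampled direction yields the same value, so the approximation equals $|\mbox{Med}_i\, y_i-\beta|$ exactly.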
\subsection{Approximate computation of PRD induced median}
Before addressing the approximate computation of maximum projection depth estimator (or median), we first show that it indeed deserves to be called a median since it recovers the univariate sample median when $p=1$.
Recall that (assume again, without loss of generality (w.l.o.g.), $S_y=1$)
\be
\bs{\beta}^*_{PRD}=\arg\min_{\bs{\beta}\in \R^p}\sup_{\mb{v}\in\mbs^{p-1}}\Big|\mbox{Med}_i\{\frac{y_i-\mb{w}'_i\bs{\beta}}{\mb{w}'_i\mb{v}}\} \Big|.
\label{beta*.eqn}
\ee
When $p=1$, it reduces to the following
\be
{\beta}^*_{PRD}=\arg\min_{{\beta}\in \R}\sup_{{v}=\pm 1}\Big|\mbox{Med}_i\{\frac{y_i-{\beta}}{{v}}\} \Big|. \label{beta-cover-median.eqn}
\ee
We have\vs
\noin
\tb{Proposition 3.2}
When $p=1$,
${\beta}^*_{PRD}$ recovers the regular sample median of $\{y_i\}$.\vs
\noin
\tb{Proof}:
~
Let $y_{(1)}\leq y_{(2)}\leq \cdots \leq y_{(n)}$ be the ordered values of $y_i$ and $\mu=(y_{(n1)}+y_{(n2)})/2$, where $n1$ and $n2$ are defined in Proposition 3.1; i.e., $\mu$ is the regular sample median of $\{y_i\}$. We show that $\beta^*_{PRD}=\mu$.
It is readily seen that
\be
{\beta}^*_{PRD}=\arg\min_{{\beta}\in \R}\big|\mbox{Med}_i\{{y_i-{\beta}}\}\big|=
\arg\min_{{\beta}\in \R}\big|\mbox{Med}_i\{y_i\}-{\beta}\big|= \arg\min_{{\beta}\in \R}\big|\mu-{\beta}\big|,
\label{proof-cover-medain.eqn}
\ee
where the first equality follows from (\ref{beta-cover-median.eqn}) and the oddness of the median operator, the second from the translation equivariance (see page 249 of RL87 for the definition) of the median as a location estimator,
and the third from the definition of $\mu$.\vs
The RHS of (\ref{proof-cover-medain.eqn}) above indicates that $\mu$ is the only solution for $\bs{\beta}^*_{PRD}$.
\hfill \pend
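\vs
\noin
A brute-force numerical check of Proposition 3.2 (our illustration, with an arbitrary small sample) confirms that the minimizer in (\ref{beta-cover-median.eqn}) lands on the regular sample median:

```python
import numpy as np

# For p = 1, sup_{v = +-1} |Med_i (y_i - beta)/v| = |Med_i y_i - beta|,
# so a grid search for the minimizing beta should land on the sample median.
y = np.array([3.0, -1.0, 7.0, 2.0, 5.0])
grid = np.linspace(-5.0, 10.0, 3001)
uf = np.array([max(abs(np.median((y - b) / v)) for v in (1.0, -1.0))
               for b in grid])
beta_star = grid[np.argmin(uf)]
```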
\vs
\noin
\tb{Remarks 3.3}\vs
\tb{(I)} The proposition also holds for the univariate population median. That is, $\bs{\beta}^*_{PRD}$ recovers the univariate median in the population case as well.\vs
\tb{(II)} If one modifies the definition of Med in the UF as done in Z19, then the proposition no longer holds in either the sample or the population case.
\hfill \pend
\vs \vs
Now we turn to the approximate computation of $\bs{\beta}^*_{PRD}$ in (\ref{beta*.eqn}). First, we notice that $\bs{\beta}^*_{PRD}$ must be bounded; equivalently, the search for the optimal $\bs{\beta}$ on the RHS of (\ref{beta*.eqn}) can be limited to a bounded set (a hypersphere). To see this, note that for a given $\bs{\beta}\neq \mb{0}$, with $\mb{v}_0=\bs{\beta}/\|\bs{\beta}\|$,
\bee
\mbox{UF}(\bs{\beta}, F^n_Z)&=& \sup_{\mb{v} \in\mbs^{p-1}}\Big| \mbox{Med}_i\{(y_i-\mb{w}_i'\bs{\beta})/\mb{w}'_i\mb{v}\}\Big|
\geq \Big| \mbox{Med}_i\{(y_i-\mb{w}_i'\bs{\beta})/\mb{w}'_i\mb{v}_0\}\Big|\\[1ex]
&=& \Big| \mbox{Med}_i\{y_i/\mb{w}'_i\mb{v}_0\}- \|\bs{\beta}\|\Big|
\longrightarrow \infty ~(a.s.), \mbox{~~~~as $\|\bs{\beta}\|\to \infty$,}
\ene
where the last step follows from the fact that $\mbox{Med}_i\{y_i/\mb{w}'_i\bs{\beta}\}=1$ cannot hold for any $\bs{\beta}$ with $\|\bs{\beta}\|\to \infty$. Let $\delta=\mbox{UF}(\mb{0},F^n_Z)$ and $c^*=\sup\{\|\bs{\beta}\|: \mbox{UF}(\bs{\beta}, F^n_Z)\leq \delta\}$;
then we have
$$
\bs{\beta}^*_{PRD}=\arg\min_{\|\bs{\beta}\|\leq c^*}\sup_{\mb{v}\in\mbs^{p-1}}\Big|\mbox{Med}_i\Big\{\frac{y_i-\mb{w}'_i\bs{\beta}}{\mb{w}'_i\mb{v}}\Big\}\Big|.$$
In light of the RHS of the display above, the generic steps to compute $\bs{\beta}^*_{PRD}$ are as follows:
\bi
\item[(A)] Randomly select $p$ points from $\mb{Z}^{(n)}=\{(\mb{x}'_i, y_i)\} \in \R^p$, which determine a $\bs{\beta}$ through the hyperplane $y=\mb{w}'\bs{\beta}$. Produce a set of $N_{\bs{\beta}}$ candidate $\bs{\beta}$'s in this way: $S_{\bs{\beta}}=\{\bs{\beta}_1,\cdots, \bs{\beta}_{N_{\bs{\beta}}}\}$, where $N_{\bs{\beta}}$
is a tuning parameter (could be, e.g., $N_{\bs{\beta}}=\min\{1000, {n \choose p}\}$).
\item[(B)] Let $S^1_{\mb{v}}=\{\mb{v}_i \in \mbs^{p-1}, i=1,\cdots, N_{\mb{v}}\}$, where $\mb{v}_i$ are normal vectors to the hyperplanes in (A).
Let $S^2_{\mb{v}}=\{\mb{v}_i \in \mbs^{p-1}, i=1,\cdots, N_{\mb{v}}\}$, where $\mb{v}_i$ is the normal vector to the hyperplane formed by $p$ points from $\{\mb{w}_i/r_i(\bs{\beta})\}$, where $N_{\mb{v}}$ is another tuning parameter.
Let $S^3_{\mb{v}}=\{\mb{v}^j_i:=\frac{\bs{\beta}_j-\bs{\beta}_i}{\|\bs{\beta}_j-\bs{\beta}_i\|}, ~\forall ~ \bs{\beta}_i(\neq \bs{\beta}_j)\in S_{\bs{\beta}}\}$ for some $\bs{\beta}_j\in S_{\bs{\beta}}$. Set $S_{\mb{v}}=S^1_{\mb{v}}\cup S^2_{\mb{v}} $ plus some $\mb{v}$'s from $S^3_{\mb{v}}$.
\item[(C)]
Search over
the convex hull formed by all $\bs{\beta}\in S_{\bs{\beta}}$ (or the ball $\|\bs{\beta}\|\leq c^*$) for the $\bs{\beta}$ with the minimum approximate unfitness (using all $\mb{v} \in S_{\mb{v}}$ in the calculation of the unfitness). This $\bs{\beta}$ serves as an approximate $\bs{\beta}^*_{PRD}$.
\item[(D)] To mitigate the effect of randomness, repeat steps (A)-(C) above many times to get the final overall best approximate $\bs{\beta}^*_{PRD}$ with the minimum overall unfitness.
\ei
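\noin
Steps (A)-(C) above can be sketched as follows (a simplified illustration of ours, not the full algorithm: random directions replace the structured sets $S^1_{\mb{v}}$-$S^3_{\mb{v}}$, and the convex-hull search in (C) is reduced to picking the best candidate):

```python
import numpy as np

def prd_median_approx(X, y, n_beta=100, n_dir=200, seed=0):
    """(A) candidate betas from hyperplanes through p sample points;
    (B) a pool of unit directions (random stand-in for S_v);
    (C) return the candidate with minimum approximate unfitness."""
    rng = np.random.default_rng(seed)
    n, p = X.shape                      # rows of X are w_i' (with intercept)
    betas = []
    while len(betas) < n_beta:          # step (A)
        idx = rng.choice(n, size=p, replace=False)
        A, b = X[idx], y[idx]
        if abs(np.linalg.det(A)) > 1e-10:
            betas.append(np.linalg.solve(A, b))
    V = rng.standard_normal((n_dir, p)) # step (B)
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    def uf(beta):                       # approximate UF over the pool
        r = y - X @ beta
        with np.errstate(divide="ignore", invalid="ignore"):
            vals = np.abs(np.median(r[:, None] / (X @ V.T), axis=0))
        return np.max(vals[np.isfinite(vals)])
    return min(betas, key=uf)           # step (C)
```

On clean data generated exactly from a linear model, every candidate hyperplane recovers the true $\bs{\beta}$, which provides a simple sanity check of the sketch.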
\noin
\tb{Remarks 3.4}
\bi
\item[\tb{(I)}] Note that in (C) above, since the objective function in (\ref{beta*.eqn}) is not differentiable w.r.t. $\bs{\beta}$, many gradient-type optimization methods are not applicable. However, the downhill simplex method (Nelder-Mead) and other non-linear, non-convex optimization algorithms (such as MCMC and simulated annealing) can be used.
\vs
\item[\tb{(II)}] The algorithm above is essentially a modification of the one given in Z19, which first searches for the $(p+1)$ deepest sample points and then
searches over the convex hull formed by these $(p+1)$ points for the final $\bs{\beta}$. A drawback of the latter algorithm is that the convex hull might be too small and miss the true deepest point $\bs{\beta}^*(F^n_{\mb{Z}})$.
\hfill \pend
\ei
\section{Examples}
Examples are presented below
to illustrate the algorithms proposed in this article for the exact computation of PRD and the approximate computation of its induced median $\bs{\beta}^*$.
\subsection{On the computation of PRD}
Here we compare the exact computation algorithms in Z19 with the one in this article. For the latter, we now explain in detail the implementation of the two types of optimization.\vs
Given a direction $\mb{v}\in P_k$, a permutation, say, $i_1,\cdots, i_n$ is obtained. That is, for all the values from $\{k^{\mb{v}}_i=1/\mb{t}'_i\mb{v}\}$,
we have $k^{\mb{v}}_{i_1}\leq k^{\mb{v}}_{i_2}\leq \cdots \leq k^{\mb{v}}_{i_n}$. The \tb{Type I} optimization problem can be described as
\bi
\item[]\tb{minimize}: $
\mb{c}'\mb{v}$,
\item[]\tb{subject to}: (i) $\mb{B}'\mb{v}\leq \mb{0}_{(n-1)\times 1}$; ~~ (ii) $\mb{v}'\mb{v}=1$,
\ei
where $\mb{c}$ and $\mb{B}$ are a constant vector and a constant matrix, respectively (see Remarks 3.1 and (\ref{B.eqn})), and $\min$ could also be $\max$. That is, we have a linear objective function with a linear inequality constraint and a quadratic equality constraint.\vs
When $p=2$, each $P_k$ becomes an arc of the unit circle and the cones formed by the linear constraints are angular regions with two radii as their boundaries. The optimization problem becomes
linear programming over the arc. By the fundamental theory of linear programming, the minimum
or maximum occurs only at the boundary, so only the evaluation of $\mb{c}'\mb{v}$ at the two boundary directions is needed. There are at most $O(n^2)$ pieces $P_k$. \vs Generally,
the \tb{Type I} optimization problem can be solved by augmented
Lagrangian minimization using the R package `alabama', or by sequential quadratic programming using the
R solver `slsqp'. Alternatively, it can be
transformed into a semidefinite programming
problem and solved using the R solver `csdp'. The R packages ``optisolve'' and ``nlopt'' are also applicable. \vs
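\noin
A hedged sketch of the \tb{Type I} problem (with hypothetical $\mb{c}$ and $\mb{B}$ of ours), using SciPy's SLSQP in the same spirit as the R solver `slsqp' mentioned above:

```python
import numpy as np
from scipy.optimize import minimize

# Type I: minimize c'v subject to B'v <= 0 (coordinate-wise) and v'v = 1.
c = np.array([1.0, -2.0, 0.5])
B = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, -1.0]])           # p x (n-1); columns encode v1<=v2<=v3
cons = [{"type": "ineq", "fun": lambda v: -(B.T @ v)},   # B'v <= 0
        {"type": "eq",   "fun": lambda v: v @ v - 1.0}]  # unit length
res = minimize(lambda v: c @ v, np.array([0.0, 0.0, 1.0]),
               method="SLSQP", constraints=cons)
```

SLSQP is a local method; in practice one would restart it from several feasible directions within the piece $P_k$.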
Now we turn to the \tb{Type II} optimization problem. It can be described as
\bi
\item[]\tb{minimize}: $
\frac{\mb{b}'\mb{v}}{\mb{v}'\mb{A}\mb{v}}$,
\item[]\tb{subject to}: (i) $\mb{B}'\mb{v}\leq \mb{0}_{(n-1)\times 1}$; ~~ (ii) $\mb{v}'\mb{v}=1$,
\ei
where $\mb{b}$, $\mb{A}_{p\times p}$, and $\mb{B}$ are a constant vector and constant matrices, respectively (see Remarks 3.1 and (\ref{B.eqn})); $\mb{A}$ can be treated as symmetric and positive definite, and $\min$ could also be $\max$.\vs That is, we have a non-linear, non-convex, but differentiable (rational) objective function, with a linear inequality constraint and a quadratic equality constraint.
The problem again can be solved using the R packages `alabama', ``optisolve'', and ``nlopt''.
\vs
\vs
In the following example, we examine the performance of exact (Z19 and Section \ref{EX-prd.section})
and approximate (AA-UF-3 of Z19)
computation of UF, equivalently PRD, for a real data set.\vs
\noin
\tb{Example 4.1.} Average of brain and body weight data (source: Table 7, page 58 of RL87).\vs
The average brain weight (in grams) and body weight (in kilograms) of 28 animals are investigated to determine whether a larger
brain is required to govern a heavier body. A plot of the original measurements is not very informative, so a logarithmic transformation was necessary. The plot of the transformed data exhibits an overall linear relationship (see the left panel of Fig. \ref{fig:example-4-1}). It is clear that three outliers (dinosaurs) form the lower-right cluster.
\vs
We regress the transformed data with four methods:
LS (least squares); ltsReg (least trimmed squares, Rousseeuw (1984)); $T^*_{RD}$ (the maximum regression depth (RD) estimator of Rousseeuw and Hubert (1999) (RH99); see Section \ref{prd.section} for the definition and Rousseeuw and Struyf (1998) and Liu and Zuo (2014) for the computation); and $T^*_{PRD}$ (or $\bs{\beta}^*_{PRD}$; see Section \ref{prd.section} for the computation).\vs The last two represent maximum depth induced median-type regression estimators, whereas the first (LS) is the traditional estimator, notorious for its non-robustness, and the second (ltsReg) represents the most robust and prevailing regression estimator.
\vs
Four lines (or four $\bs{\beta}$'s, $\bs{\beta}'=(\beta_0, \beta_1)=\mbox{(intercept, slope)}$) from the four methods are (2.55490, 0.49599), (2.00135, 0.75087), (2.258175, 0.7028644), and (2.45098, 0.64920), respectively. The first (LS) line (slightly different from the one given in RL87) is apparently attracted downwards by the outlier cluster.
The other three robust alternatives indeed resist the outliers, while the two depth-induced medians are almost identical (see the right panel of
Fig. \ref{fig:example-4-1}).
\bec
\begin{figure}[h!]
\vspace*{-5mm}
\includegraphics[width=\textwidth]{4-lines-5.eps}
\vspace*{-5mm}
\caption{{Four regression lines based on the data of brain and body weight. Solid black for LS line; dashed red for ltsReg line, dotted green for $T^*_{RD}$; dot-dash blue for $T^*_{PRD}$.
}}
\label{fig:example-4-1}
\vspace*{-8mm}
\end{figure}
\enc
Note that there actually exist three deepest regression depth lines: (2.258175, 0.7028644),
(2.445328, 0.6677692), and (2.466361, 0.6501526), each possessing RD (of RH99) $12/28$. The non-uniqueness issue of the maximum regression depth estimator has been addressed in Zuo (2020).
\vs Note that the average of the three deepest RD lines is (2.38995, 0.67360). This is the line recommended in RH99. However, its regression depth is $11/28$, no longer the maximum regression depth (i.e., the line no longer fits ``the deepest regression method''). This phenomenon has been observed in Mizera and Volauf (2002) and Van Aelst et al. (2002).\vs
For the four $\bs{\beta}$'s, we calculate their UF's with the exact algorithm of Z19 (EA-Z19), the one in Section
\ref{EX-prd.section} (denoted by EA-Z20), and the approximate algorithm AA-UF-3 of Z19 (AA-Z19). The obtained UF's and the consumed times are reported in the table below.\vs
The UF-induced rank (in ascending order) of each line is also reported, along with the regression depth (RD) of RH99 (see Section \ref{prd.section}) of each line and its induced rank (in descending order). \vs
\vspace*{-5mm}
\bec
\begin{table}[h!]
\centering
Table entries (a,b,c) are a:= UF (or RD), b:=time consumed (in seconds), c:=induced rank.\\[2ex]
\begin{tabular}{c c c c c c }
method & LS & ltsReg & $T^*_{RD}$ & $T^*_{PRD}$\\[1ex]
\hline
UF(EA-Z19) &(1.365, 0.017, 4)&(0.637, 0.023, 3) &(0.407, 0.015, 2)&(0.347, 0.015, 1)\\[1ex]
UF(EA-Z20)& (1.286, 2.803, 4)& (0.569, 2.799, 3) & (0.350, 2.779, 2) &(0.290, 2.776, 1)\\ [1ex]
UF(AA-Z19)&(1.285, 0.030, 4)& (0.569, 0.030, 3)& (0.332 ,0.030, 2)& (0.290, 0.031, 1)\\[1ex]
\hline
RD(RH99)&(4/28, 0.002, 4)&(10/28, 0.001, 3)& (12/28, 0.001, 1)& (11/28, 0.001, 2)\\[1ex]
\hline
\end{tabular}
\caption{~Performance of exact and approximate algorithms w.r.t. different $\bs{\beta}$'s (lines). The four lines are ranked by different criteria.}
\label{ex-aa.tab}
\end{table}
\vspace*{-10mm}
\enc
Table \ref{ex-aa.tab} consists of two parts. One part reports the unfitness (or equivalently, the projection depth), its induced rank, and the computation time of each method for the four lines. The other part reports the same quantities based on the regression depths of the four lines, obtained using the R package ``mrfDepth'' (which utilizes the R package Rcpp).
\vs
\noin
\tb{Remarks 4.1} The table reveals that
\bi
\item[(I)]
The three methods EA-Z19, EA-Z20, and AA-Z19 yield the same induced ranks of the four lines. Based on their UF's, from worst to best, the order is
LS, ltsReg, T$^*_{RD}$, and T$^*_{PRD}$.
\item[ (II)] EA-Z19 produces the largest UF in all four cases, while AA-Z19 yields UF's very close to those of EA-Z20 (the results from AA-Z19 are very stable for the different numbers of directions used, $10^3$, $10^4$, or $10^5$; the table employs $10^3$) but never greater than the latter. Generally speaking, since UF is a supremum, the larger the UF obtained, the more accurate the result. This general principle indicates that EA-Z20 does its job,
whereas EA-Z19, although it gives the largest UF's, is not the most accurate.\vs How can that be? The largest UF's are due to the modification of the regular sample median in EA-Z19, which is replaced by the ``low median'' in Z19. The low median is always no greater than the regular median of the projected values. However, its absolute value might be greater than that of the regular median if both the regular median and the low median of the projected values are negative in some direction. Consequently, those are the most inaccurate results. This is indeed the price EA-Z19 has to pay for its speed (see (III) below).\vs
\item[(III)] In terms of computation time for UF, EA-Z19 is surprisingly the fastest (even faster than AA-Z19), and EA-Z20 is the slowest. This is due to the modification of the median in EA-Z19, which reduces the optimization problem to the evaluation of UF along $O(n)$ directions (see Z19, the proof of Theorem 2.1).
\item[(IV)] In terms of regression depth ranking, LS and ltsReg are still the worst and second-worst choices, whereas the ranks of $T^*_{RD}$ and $T^*_{PRD}$ are switched: $T^*_{RD}$ becomes the single best choice, as is expected. This is no longer true if $T^*_{RD}$ is the average of the three deepest lines. (The comparisons here are somewhat unfair since, if we look at the sum of squared residuals, LS becomes the best choice. Likewise, ltsReg could also become the best if the comparison criterion is changed.) \hfill \pend \vs
\ei
All results above (and below) were obtained on a desktop with an Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz. AA-Z19 employed Matlab code.
R code and matlab code are downloadable via https://www.stt.msu.edu/users/zuo/Codes/2020/readme-Z20.txt.
\vs
\subsection{On the computation of PRD induced median}\label{prd.section}
The most famous notion of depth in regression and its induced median are the regression depth of Rousseeuw and Hubert (1999) (RH99) and its induced median, respectively.
\vs
For any $\bs{\beta}\in \R^p$ and the joint distribution $P$ of $(\mb{x}', y)'$ in (\ref{eqn.model}), RH99 defined the regression depth of $\bs{\beta}$, denoted hence by $\mbox{RD}_{RH} (\bs{\beta};P)$,
to be the minimum probability mass that needs to be passed when tilting (the hyperplane induced from) $\bs{\beta}$ in any way until it is vertical. The maximum regression depth functional $\bs{\beta}^*_{{RD}_{RH}}$ (also denoted by $T^*_{RD}$ or $\bs{\beta}^*_{RD}$) (aka regression median) is defined as
\be \bs{\beta}^*_{{RD}_{RH}}(P)=\arg\!\max_{\bs{\beta}\in\R^p}\mbox{RD}_{RH}(\bs{\beta};P) \label{T-RD.eqn}
\ee
Many characterizations of $\mbox{RD}_{RH}(\bs{\beta};P)$, or equivalent definitions, have been given in the literature, see, e.g., Z18 and references cited therein.\vs
\vs
\vspace*{-0mm}
\begin{table}[h!]
\centering
~~ Table entries: (empirical mean squared error, average time per sample (seconds))
\bec
\begin{tabular}{c c c c c c}
n& method & $p=2$~~~~ & $p=3$~~~~ &$p=4$~~~~& $p=6$~~~~ \\
\hline\\[0.ex]
$40$& $\bs{\beta}^*_{PRD} (Z19)$ &(0.244, 7.424)&(0.488, 18.69)& (0.737, 13.21) & (1.505, 12.01)\\[.5ex]
& $\bs{\beta}^*_{PRD} (Z20)$& (0.232, 0.060) &(0.468, 0.261)&(0.723, 0.304)&(1.429, 0.354)\\[.5ex]
&$\bs{\beta}^*_{RD}$ &(0.243, 0.038)&(0.492, 0.124)& (2.7e+04, 6.542)& (1.717, 9.619)\\[.5ex]
&ltsReg &(0.380, 0.007)&(0.579, 0.011)& (0.781, 0.010)& (1.434, 0.018)\\[.5ex]\\
$60$& $\bs{\beta}^*_{PRD}(Z19)$ &(0.172, 9.076)&(0.339, 22.04)& (0.543, 19.70)& (0.986, 22.82)\\[.5ex]
& $\bs{\beta}^*_{PRD} (Z20)$& (0.160, 0.080) &(0.323, 0.310)&(0.510, 0.445)&(0.894, 0.532)\\[.5ex]
&$\bs{\beta}^*_{RD}$ &(0.172, 0.043)&(0.366, 0.286)& (2565.1, 23.14)& (1.206, 11.82)\\[.5ex]
&ltsReg &(0.326, 0.007)&(0.475, 0.013)& (0.599, 0.015)& (0.894, 0.024)\\[.5ex]\\
$80$& $\bs{\beta}^*_{PRD} (Z19) $& (0.131, 10.29) &(0.273, 26.82)&(0.428, 25.00)&(0.821, 26.17)\\[.5ex]
& $\bs{\beta}^*_{PRD} (Z20)$& (0.124, 0.100) &(0.260, 0.436)&(0.413, 0.613)&(0.691, 0.634)\\[.5ex]
&$\bs{\beta}^*_{RD}$&(0.130, 0.047) &(0.291, 0.569)&(2012.6, 58.42)& (1.111, 14.08)\\[.5ex]
&ltsReg&(0.290, 0.009) &(0.416, 0.018)&(0.506, 0.020)& (0.703, 0.029)\\[.5ex]\\
$100$& $\bs{\beta}^*_{PRD}(Z19)$ &(0.108, 10.22)&(0.233, 28.90)& (0.370, 28.63)& (0.655, 31.40)\\[.5ex]
& $\bs{\beta}^*_{PRD} (Z20)$& (0.100, 0.123) &(0.221, 0.528)&(0.346, 0.687)&(0.555, 0.763)\\[.5ex]
&$\bs{\beta}^*_{RD}$ &(0.109, 0.048)&(0.252, 0.950)& (5.5e+06, 101.8)& (0.963, 16.37)\\[.5ex]
&ltsReg &(0.252, 0.010)&(0.418, 0.021)& (0.455, 0.024)& (0.578, 0.035)\\[1ex]
\hline
\end{tabular}
\enc
\caption{Performance of different regression methods for various $n$ and $p$.}
\label{table-comp-time}
\end{table}
\vspace*{-5mm}
As a median in regression, $\bs{\beta}^*_{{RD}_{RH}}(P)$ is a promising robust alternative to the classic least squares (LS) regression estimator. In fact,
in terms of asymptotic breakdown point (ABP) robustness, the former possesses a $33\%$ ABP (Van Aelst and Rousseeuw (2000) (VAR00)), in contrast to the $0\%$ ABP of the latter. \vs
Zuo (2019b) (Z19b) has investigated the ABP of $\bs{\beta}^*_{PRD}$; it turns out to possess the highest possible ABP, $50\%$. For this advantage over
$\bs{\beta}^*_{{RD}_{RH}}$ (see the illustrative examples in Z19b), it has to pay a price in computation: the cost of computing $\bs{\beta}^*_{{RD}_{RH}}$ is generally lower than that of $\bs{\beta}^*_{PRD}$.
\vs
To see the difference in computation cost, we list below the computation time consumed by both medians for different sample sizes $n$ and dimensions $p$. For benchmarking and comparison purposes, we also list the times consumed by the famous least trimmed squares
(Rousseeuw (1984)) regression estimator (ltsReg) and the times consumed by $\bs{\beta}^*_{PRD}$ in Z19 (denoted by $\bs{\beta}^*_{PRD}(Z19)$).
The latter is the deepest of the hyperplanes obtained by searching the convex hull formed
by the $(p+1)$ deepest candidate $\bs{\beta}$'s (see Section 5). The function rdepth in the R package ``mrfDepth'' was used to calculate the RD of each candidate hyperplane. The performance of the four algorithms for $\bs{\beta}^*_{RD_{RH}}$, $\bs{\beta}^*_{PRD}(Z19)$, $\bs{\beta}^*_{PRD}$ in Section 3.2 (denoted by $\bs{\beta}^*_{PRD}(Z20)$), and ltsReg,
respectively, is demonstrated in Table \ref{table-comp-time}. \vs
We generate $1000$ samples $\mb{Z}^{(n)}=\{(\mb{x}'_i, y_i), i=1,\cdots, n, \mb{x}_i \in \R^{p-1}\}$ from the Gaussian distribution with zero mean vector and diagonal covariance matrix with diagonal entries $1$ to $p$, for various $n$ and $p$. The samples are contaminated by $5\%$ i.i.d. normal $p$-dimensional points with individual mean $10$ and variance $0.1$. Thus, we no longer have a model with symmetric errors and homoscedastic variance
(skewness and heteroscedasticity are allowed for the RD of RH99).\vs
For a general estimator $\mb{T}$, if it
is regression equivariant, then we can assume (w.l.o.g.) that the true parameter $\bs{\beta}_0=\mb{0}\in \R^p$. We calculate
$\mbox{EMSE}:=\frac{1}{R}\sum_{i=1}^R \|\mb{T}_i - \bs{\beta}_0\|^2$, the empirical mean squared error (EMSE) for $\mb{T}$, where
$R = 1000$, $\bs{\beta}_0 = (0, \cdots, 0)'\in \R^{p}$,
and $\mb{T}_i$ is the realization of $\mb{T}$ obtained from the $i$th sample of size $n$. The EMSE and the average computation time (in seconds) per sample for the different estimators are listed in Table \ref{table-comp-time}.\vs
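The EMSE formula above amounts to averaging squared Euclidean distances over the replications; a minimal Python sketch (the paper's own code is in R/Rcpp, and the function name and list-of-lists input here are illustrative assumptions):

```python
def emse(estimates, beta0):
    """Empirical mean squared error: the average of ||T_i - beta_0||^2
    over the R replicated estimates T_i (plain Python lists)."""
    sq_err = [
        sum((t - b) ** 2 for t, b in zip(T_i, beta0))
        for T_i in estimates
    ]
    return sum(sq_err) / len(sq_err)
```

With $\bs{\beta}_0=\mb{0}$ (justified by regression equivariance), this reduces to the average squared norm of the estimates.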
\noindent
\tb{Remarks 4.2} Table \ref{table-comp-time} reveals that
\bi
\item[(I)] In terms of the average time consumed per sample, or computation speed, (i) ltsReg is the fastest in all cases, whereas $\bs{\beta}^*_{RD}$ is the second fastest method when $p$ is $2$ or $3$ (and $n\leq 60$).
(ii) $\bs{\beta}^*_{PRD}$(Z19) is the slowest in almost all cases, with exceptions in the $p=4~(n>40)$ cases, where $\bs{\beta}^*_{RD}$ unexpectedly becomes the slowest. (iii) $\bs{\beta}^*_{PRD}$(Z20) is at least 30 times faster than $\bs{\beta}^*_{PRD}$(Z19) in all cases; sometimes ($p=2$) it is more than 100 times faster. It is also at least 20 times faster than $\bs{\beta}^*_{RD}$ when $p>3$.\vs
Note that the comparison here is somewhat unfair to $\bs{\beta}^*_{PRD}$(Z19), since it is the only one that uses pure R programming for the entire calculation, whereas ltsReg uses Fortran, and
$\bs{\beta}^*_{RD}$ and $\bs{\beta}^*_{PRD}$(Z20) employ Rcpp in the background computation. This example also confirms that venerable Fortran is still an
excellent programming language for scientific computation.
\item[(II)] Computation speed is just one of the important performance criteria. Accuracy, or efficiency, is another, if not more important, one.
In terms of EMSE, there is an across-the-board winner: $\bs{\beta}^*_{PRD}$(Z20) has the smallest EMSE in all cases
considered.
\item[(III)] In terms of speed and EMSE, $\bs{\beta}^*_{PRD}$(Z20) outperforms $\bs{\beta}^*_{PRD}$(Z19) in all cases. Furthermore, the former consumes less than one second in all cases considered.
\hfill \pend
\ei
\vs The ltsReg has a fairly good finite-sample relative efficiency, but it is also notorious
for its inefficiency in the asymptotic sense, with an asymptotic efficiency of just $7\%$ (see Stromberg et al. (2000)). It benefits from Fortran for its speed.
In the sequel, ltsReg will be excluded from our discussion for a pure apples-to-apples (depth median vs depth median) fair comparison.
\vs
\begin{table}[b!]
\centering
Replication $1000$ times, $n=65$ \\[1ex]
\bec
\begin{tabular}{c c c c c }
Performance criteria~~~ &$\bs{\beta}^*_{PRD} (Z19)$~~~ & $\bs{\beta}^*_{PRD} (Z20)$~~~~&$\bs{\beta}^*_{RD}$
\\[.5ex]
\hline\\[0.ex]
&{\bf{Case I}}& $p=3$ & &\\[1.5ex]
EMSE&0.10434764 &0.09433006 &0.11191986
\\[.5ex]
Time consumed per sample &21.14003496 &0.36948846&0.34871839
\\[.5ex]
\hline\\[.ex]
&{\bf{Case II}}& $p=4$ & &\\[1.5ex]
EMSE&0.1652269&0.1516346&5657894
\\[.5ex]
Time consumed per sample &12.75727514 &0.26809841 &26.00944714
\\[.5ex]
\hline\\[.ex]
&{\bf{Case III}}& $p=5$ & &\\[1.5ex]
EMSE&0.2622625 &0.2372195 &0.251908
\\[1.5ex]
Time consumed per sample &13.41192399 &0.22595816 &6.72852676
\\[.5ex]
\hline\\[0.ex]
\end{tabular}
\enc
\vspace*{-9mm}
\caption{Performance of different regression depth medians for three true $\bs{\beta}_0$'s.}
\label{table-3-betas}
\vspace*{-0mm}
\end{table}
\noin
\tb{Example 4.2} Now we investigate the performance of the three regression depth medians ($\bs{\beta}^*_{PRD}(Z19)$, $\bs{\beta}^*_{PRD}(Z20)$, and $\bs{\beta}^*_{RD}$) in a slightly different setting. We generate $1000$ samples $\{(\mb{x}'_i, y_i) \in \R^p\}$ with a fixed sample size $65$ from an assumed model: $y=\bs{\beta_0}'\mb{x}
+e$, where $\mb{x}=(1,x_1,\cdots, x_{p-1})'$ and $\bs{\beta_0}=(\beta_0,\cdots, \beta_{p-1})'$ are in $\R^p$, and $x_i$ and $e$ are from either the Cauchy or the standard Gaussian distribution.\vs We list the average time consumed (in seconds) per sample and the EMSE (computed with the same formula as before) for the three methods
with respect to different $\bs{\beta_0}$'s in Table \ref{table-3-betas}. {\bf{Case I}} $\bs{\beta_0}=(-2, 0.1,1)'$, all $x_i$ and $e$ are from the $N(0,1)$ distribution.
{\bf{Case II}} $\bs{\beta_0}=(-2, 0.1,1, 5)'$, $x_1$ is from $N(0,1)$ and all other $x_i$ and $e$ are from the Cauchy distribution. {\bf{Case III}} $\bs{\beta_0}=(50, 0.1, -2, 15, 100)'$, all $x_i$ and $e$ are from the $N(0,1)$ distribution.
\vs
Inspecting Table \ref{table-3-betas} reveals that (i) $\bs{\beta}^*_{PRD}(Z20)$ is much faster (by a factor ranging from $47$ to $59$) than $\bs{\beta}^*_{PRD}(Z19)$ in all cases; it is also $97$ and $29.78$ times faster than $\bs{\beta}^*_{RD}$ in the cases $p=4$ and $p=5$, respectively; (ii) $\bs{\beta}^*_{PRD}(Z20)$ has the smallest EMSE as well in all cases; (iii) the sample variance (or, more precisely, the EMSE) of both PRD-induced medians increases when $p$ increases, whereas the time consumed per sample (for the fixed sample size) by $\bs{\beta}^*_{PRD}(Z20)$ decreases in this case.
\vs
All results above and below were obtained on a desktop with an Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz.
To download the R code for this and the next section, use the link: https://www.stt.msu.edu/users/zuo/Codes/2020/readme-Z20.txt.\vs
\section{Other estimators induced from PRD}
Before introducing other estimators, we would first like to explain why $\bs{\beta}^*_{PRD}(Z20)$ runs faster than $\bs{\beta}^*_{PRD}(Z19)$.
First, we briefly review the main computation steps of $\bs{\beta}^*_{PRD}(Z19)$ (cf. Section 3.2 (A)-(D))
\bi
\item[(i)] Generating $N_{\bs{\beta}}$ $\bs{\beta}$'s via the hyperplane $y=\mb{x}'\bs{\beta}$ based on $p$ points sampled from $\mb{Z}^{(n)}:=\{(\mb{x}'_i, y_i), i=1,\cdots, n\}$,
where $N_{\bs{\beta}}$ is a tuning parameter and never greater than $n\choose p$.
\item[(ii)] Computing the unfitness (UF) for each $\bs{\beta}$ using special directions, including those perpendicular to $\mb{t}_i=\mb{x}_i/r_i(\bs{\beta})$, where $r_i(\bs{\beta})=y_i-\mb{x}'_i\bs{\beta}$; the $p$ axis directions; and the $N_{\mb{v}}$ normal directions of hyperplanes formed, as in (i), by $p$ sample points from $\mb{Z}^{(n)}$, where $N_{\mb{v}}$ is another tuning parameter which increases when $p$ increases.
\item[(iii)] After the computation of the UF for the first $(p+1)$ $\bs{\beta}$'s in step (ii) above, calculating their minimum UF (UF-min), and updating this UF-min after each subsequent computation of the UF of a candidate $\bs{\beta}$. The UF-min is used to skip the computation for some candidate $\bs{\beta}$'s: if along some direction the one-dimensional unfitness of a $\bs{\beta}$ (see the RHS of (\ref{UF.eqn}) or (13) of Z18) is greater than the UF-min, then this $\bs{\beta}$ can never be the final solution, which must attain the global minimum UF. This UF-min device cuts a tremendous amount of unnecessary computation cost.
\item[(iv)] Selecting Nbet (another tuning parameter) $\bs{\beta}$'s from
the convex hull formed by the $(p+1)$ deepest (or equivalently, minimum-UF) $\bs{\beta}$'s. The deepest $\bs{\beta}$ among the Nbet $\bs{\beta}$'s is
treated as the final $\bs{\beta}^*_{PRD}(Z19)$.
\ei
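The UF-min device in step (iii) is essentially a branch-and-bound pruning: since the UF of a candidate is a maximum of one-dimensional unfitness values over directions, a candidate can be abandoned as soon as its running maximum reaches the current best. A minimal Python sketch (the paper's code is in R/Rcpp; `uf_1d` is a hypothetical stand-in for the one-dimensional unfitness on the RHS of (13) of Z18):

```python
def min_uf_candidate(betas, directions, uf_1d):
    """Branch-and-bound search for the candidate beta with minimum
    unfitness UF(beta) = max over directions of uf_1d(beta, v).
    A candidate is abandoned as soon as its running maximum can no
    longer beat the current best (the UF-min device)."""
    best_uf, best_beta = float("inf"), None
    for beta in betas:
        uf = 0.0
        pruned = False
        for v in directions:
            uf = max(uf, uf_1d(beta, v))
            if uf >= best_uf:   # cannot beat UF-min any more:
                pruned = True   # skip the remaining directions
                break
        if not pruned:          # uf is the exact UF and beats UF-min
            best_uf, best_beta = uf, beta
    return best_beta, best_uf
```

The early break is safe because the running maximum only grows as more directions are scanned.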
\vs
$\bs{\beta}^*_{PRD}(Z20)$ follows almost the same steps but differs in the details. For example, in (iii) above, Z19 computes the UF for $N$ ($\leq N_{\bs{\beta}}$) $\bs{\beta}$'s; each time it samples a $\bs{\beta}$ from the candidate matrix $B$ ($N_{\bs{\beta}}$ by $p$) constructed in (i), and after finishing
the first $(p+1)$ computations, it calculates the minimum unfitness (UF-min) of all $(p+1)$ UF's, then updates the UF-min after each computation of a UF using a nested if-else statement.
\vs
$\bs{\beta}^*_{PRD}(Z20)$ first skips the sampling step and directly takes the $\bs{\beta}$'s from matrix $B$, and it replaces the nested if-else statement by a simple if statement. $\bs{\beta}^*_{PRD}(Z20)$ also uses the min function in place of the sort function in the search over the convex hull for the final solution. These simple steps boost the computation speed five-fold. Furthermore, $\bs{\beta}^*_{PRD}(Z20)$ employs the Rcpp package,
which eventually makes it at least 30 times faster than $\bs{\beta}^*_{PRD}(Z19)$.\vs
Computational speed is not the only improvement of $\bs{\beta}^*_{PRD}(Z20)$; it also cuts the EMSE of $\bs{\beta}^*_{PRD}(Z19)$. To achieve this goal, $\bs{\beta}^*_{PRD}(Z20)$ takes advantage of the solution from ltsReg and the deepest $\bs{\beta}$'s with maximum RD (which might not be unique, but all are also from $B$, which is shared with $\bs{\beta}^*_{RD}$) and adds them (as a sub-matrix $B_1$) to the matrix $B$. It not only searches over the convex hull formed by the $(p+1)$ deepest $\bs{\beta}$'s with minimum UF from $B$ but also considers combinations of the members of $B_1$. The final $\bs{\beta}$ with minimum UF is the solution $\bs{\beta}^*_{PRD}(Z20)$. For more details, see the code posted at the link mentioned before.
\vs
$\bs{\beta}^*_{PRD}(Z20)$ is much faster than $\bs{\beta}^*_{PRD}(Z19)$; are there any depth-induced estimators that run even faster than $\bs{\beta}^*_{PRD}(Z20)$?
From the discussion above, there are obviously other projection regression depth (PRD) induced estimators that can be computed even faster.
\vs
The first one adds no extra computation cost to the already obtained candidate matrix $B$: it is simply the deepest $\bs{\beta}$, i.e., the one with minimum UF, in matrix $B$, denoted by $\bs{\beta}^*_{PRD1}$. The second one is the plain average of the $(p+1)$ deepest $\bs{\beta}$'s from $B$, denoted by $\bs{\beta}^*_{PRD2}$.
The third one is a UF-weighted estimator defined below, denoted by $\bs{\beta}^*_{PRD3}$:
\be
\bs{\beta}^*_{PRD3}=\frac{\sum_{i=1}^{(p+1)}w(\rho_i)\bs{\beta}_{(i)}}{\sum_{i=1}^{(p+1)}w(\rho_i)}, \label{wprd.eqn}
\ee
where $\rho_i=\mbox{UF}(\bs{\beta}_{(i)})$, $\bs{\beta}_{(1)},\cdots,\bs{\beta}_{(p+1)}$ are the $(p+1)$ deepest $\bs{\beta}$'s (those with the least UF) in $B$, and the weight function $w$ is defined as follows:
\be
w(r)={\bf{I}}(r\leq r_0)+{\bf{I}}(r>r_0)\frac{\exp\Big(k\big(2r_0/r-(r_0/r)^2\big)\Big)-1}{\exp(k)-1}, \label{weight.eqn}
\ee
with two tuning parameters $k$ and $r_0$; we set $k=3$ and $r_0=\rho_{(p-1)}$, the $(p-1)$th smallest UF among the $(p+1)$ minimum UF's.
For more discussion of this weight function and the tuning parameters, refer to Zuo (2003) and Z19b.
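To fix ideas, here is a minimal Python sketch of the weight function (\ref{weight.eqn}) and the weighted estimator (\ref{wprd.eqn}); the paper's own code is in R/Rcpp, and the plain-list representation of the $(p+1)$ deepest candidates is an illustrative assumption:

```python
import math

def w(r, r0, k=3.0):
    """Weight function of (weight.eqn): 1 for r <= r0, then smoothly
    decaying; continuous at r = r0 (the ratio equals 1 there)."""
    if r <= r0:
        return 1.0
    t = r0 / r
    return (math.exp(k * (2.0 * t - t * t)) - 1.0) / (math.exp(k) - 1.0)

def prd3(betas, ufs, k=3.0):
    """UF-weighted average (wprd.eqn) of the (p+1) deepest betas;
    r0 is set to the (p-1)th smallest UF among them, as in the text."""
    p = len(betas) - 1                  # betas holds (p+1) candidates
    r0 = sorted(ufs)[p - 2]             # the (p-1)th smallest UF
    ws = [w(r, r0, k) for r in ufs]
    total = sum(ws)
    dim = len(betas[0])
    return [sum(ws[i] * betas[i][j] for i in range(len(betas))) / total
            for j in range(dim)]
```

When all candidates have equal UF, every weight is 1 and $\bs{\beta}^*_{PRD3}$ reduces to the plain average $\bs{\beta}^*_{PRD2}$.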
\vs
These estimators can obviously run faster than $\bs{\beta}^*_{PRD}(Z20)$ since they skip the time-consuming step of searching over the convex hull. One naturally wonders what their EMSE's are.\vs
Next, we investigate the performance of $\bs{\beta}^*_{PRD}(Z19)$, $\bs{\beta}^*_{PRD}(Z20)$, $\bs{\beta}^*_{PRD1}$, $\bs{\beta}^*_{PRD2}$, and $\bs{\beta}^*_{PRD3}$. For benchmarking purposes, the famous depth median $\bs{\beta}^*_{RD}$ of RH99 is included in the comparison. $1000$ samples are generated with the same scheme as that for Table \ref{table-comp-time}.
\vs
\vspace*{-0mm}
\begin{table}[h!]
\centering
~~ Table entries: (empirical mean squared error, average time per sample (seconds))
\bec
\begin{tabular}{c c c c c c}
n& method & $p=2$~~~~ & $p=3$~~~~ &$p=4$~~~~& $p=6$~~~~ \\
\hline\\[0.ex]
$40$& $\bs{\beta}^*_{PRD}(Z19) $ &(0.249, 7.289)&(0.465, 9.083)& (0.743, 8.144) & (1.493, 12.06)\\[.5ex]
& $\bs{\beta}^*_{PRD}(Z20) $& (0.237, 0.062) &(0.448, 0.142)&(0.736, 0.208)&(1.373, 0.343)\\[.5ex]
& $\bs{\beta}^*_{PRD1} $& (0.244, 0.023) &(0.481, 0.040)&(0.831, 0.068)&(1.646, 0.142)\\[.5ex]
& $\bs{\beta}^*_{PRD2} $& (0.268, 0.023) &(0.489, 0.040)&(0.882, 0.068)&(1.431, 0.142)\\[.5ex]
& $\bs{\beta}^*_{PRD3} $& (0.258, 0.023) &(0.476, 0.040)&(0.771, 0.068)&(1.375, 0.142)\\[.5ex]
&$\bs{\beta}^*_{RD}$ &(0.240, 0.040)&(0.466, 0.124)& (3195.3, 6.507)& (1.678, 9.382)\\[.5ex]\\
$60$& $\bs{\beta}^*_{PRD}(Z19)$ &(0.164, 9.140)&(0.346, 11.19)& (0.552, 9.564)& (1.050, 7.139)\\[.5ex]
& $\bs{\beta}^*_{PRD}(Z20) $& (0.157, 0.082) &(0.329, 0.187)&(0.519, 0.268)&(0.923, 0.193)\\[.5ex]
& $\bs{\beta}^*_{PRD1} $& (0.167, 0.031) &(0.363, 0.051)&(0.613, 0.090)&(1.139, 0.088)\\[.5ex]
& $\bs{\beta}^*_{PRD2} $& (0.188, 0.031) &(0.484, 0.051)&(0.603, 0.090)&(1.131, 0.088)\\[.5ex]
& $\bs{\beta}^*_{PRD3} $& (0.175, 0.031) &(0.446, 0.051)&(0.568, 0.090)&(1.075, 0.088)\\[.5ex]
&$\bs{\beta}^*_{RD}$ &(0.165, 0.043)&(0.350, 0.300)& (4703.0, 21.18)& (1.337, 8.585)\\[.5ex]\\
$80$& $\bs{\beta}^*_{PRD}(Z19) $& (0.135, 9.371) &(0.284, 27.79)&(0.446, 25.66)&(0.795, 9.229)\\[.5ex]
& $\bs{\beta}^*_{PRD}(Z20) $& (0.128, 0.101) &(0.261, 0.441)&(0.412, 0.611)&(0.666, 0.288)\\[.5ex]
& $\bs{\beta}^*_{PRD1} $& (0.134, 0.040) &(0.297, 0.095)&(0.492, 0.165)&(0.832, 0.129)\\[.5ex]
& $\bs{\beta}^*_{PRD2} $& (0.165, 0.040) &(0.315, 0.095)&(0.509, 0.165)&(0.872, 0.129)\\[.5ex]
& $\bs{\beta}^*_{PRD3} $& (0.147, 0.040) &(0.302, 0.095)&(0.481, 0.165)&(0.830, 0.129)\\[.5ex]
&$\bs{\beta}^*_{RD}$&(0.132, 0.047) &(0.291, 0.583)&(4446.2, 58.50)& (1.050, 10.64)\\[.5ex]\\
$100$& $\bs{\beta}^*_{PRD}(Z19)$ &(0.121, 10.24)&(0.237, 14.73)& (0.387, 27.63)& (0.698, 11.79)\\[.5ex]
& $\bs{\beta}^*_{PRD}(Z20)$& (0.109, 0.121) &(0.218, 0.301)&(0.361, 0.719)&(0.551, 0.338)\\[.5ex]
& $\bs{\beta}^*_{PRD1} $& (0.117, 0.048) &(0.247, 0.086)&(0.439, 0.202)&(0.682, 0.148)\\[.5ex]
& $\bs{\beta}^*_{PRD2}$& (0.153, 0.048) &(0.275, 0.086)&(0.467, 0.202)&(0.851, 0.148)\\[.5ex]
& $\bs{\beta}^*_{PRD3}$& (0.142, 0.048) &(0.263, 0.086)&(0.437, 0.202)&(0.771, 0.148)\\[.5ex]
&$\bs{\beta}^*_{RD}$ &(0.115, 0.050)&(0.240, 0.960)& (2427164, 113.4)& (0.970, 12.24)\\[.5ex]
\hline
\end{tabular}
\enc
\caption{Performance of regression depth induced estimators for various $n$ and $p$.}
\label{table-comp-time-2}
\end{table}
\vspace*{-3mm}
\vs
Inspecting Table \ref{table-comp-time-2} immediately reveals that (i) $\bs{\beta}^*_{PRD}(Z20)$ has the smallest EMSE in all cases
and is at least $34$ (sometimes more than $100$) times faster than $\bs{\beta}^*_{PRD}(Z19)$; (ii) $\bs{\beta}^*_{PRD}(Z19)$ is the slowest (with the exceptions of the $p=4$ and $p=6$, $n>40$ cases, where $\bs{\beta}^*_{RD}$ becomes the slowest); (iii) $\bs{\beta}^*_{PRD1}$, $\bs{\beta}^*_{PRD2}$, and $\bs{\beta}^*_{PRD3}$ are the fastest ($86$ to $300$ times faster than $\bs{\beta}^*_{PRD}(Z19)$) and are regarded here as having the same speed (all depend on the given matrix $B$ of candidate $\bs{\beta}$'s, their unfitness, and the sorted values of that unfitness).
Among the three, the deepest of all $\bs{\beta}$'s in $B$, $\bs{\beta}^*_{PRD1}$, and the depth-weighted average of the deepest $(p+1)$ $\bs{\beta}$'s, $\bs{\beta}^*_{PRD3}$, seemingly perform better, while the plain average of them, $\bs{\beta}^*_{PRD2}$, seemingly performs worst in most cases.
Furthermore, our empirical evidence indicates that $\bs{\beta}^*_{PRD3}$ performs even better when $p$ increases (say $p\geq 8$). (iv) Overall, $\bs{\beta}^*_{PRD}(Z20)$ should be recommended among the six depth-induced regression estimators; it becomes empirically the same as $\bs{\beta}^*_{PRD1}$ for large $p$ (e.g., $p=20$ and $n=40, 60, 80$). The second recommendation is
$\bs{\beta}^*_{PRD3}$ (or $\bs{\beta}^*_{PRD}(Z19)$), and $\bs{\beta}^*_{PRD2}$ could be abandoned.
\vs
\section{Concluding comments}
Unlike Z19, this article presents the exact algorithm for the computation of the UF (or, equivalently, the PRD)
without modifying the original definition of the univariate median, and thus without sacrificing the invariance of the projection regression depth and the equivariance of the depth-induced median. The second major contribution is to boost the speed of the computation of $\bs{\beta}^*_{PRD}(Z19)$ by at least $30$ times and,
more importantly, to reduce the empirical mean squared error of the depth-induced regression median at the same time.\vs
The article also introduces three regression depth induced estimators that can run even faster, $86$ to $300$ times faster than $\bs{\beta}^*_{PRD}(Z19)$. These estimators satisfy regression, scale, and affine equivariance (see Z18 for definitions) and, more importantly, have roughly the same level of empirical mean squared error as the latter. \vs
The major motivation for introducing depth-induced regression estimators is to provide robust alternatives to the traditional least squares estimator and to overcome the fatal non-robustness drawback of the latter.
The three depth-induced regression estimators are expected to be highly robust, just like the $\bs{\beta}^*_{PRD}$ in Z19b with its high finite-sample breakdown point. A detailed investigation of the robustness and other properties of the three deserves to be pursued thoroughly and independently elsewhere.\vs
Finally, in light of the five PRD-induced estimators in Table \ref{table-comp-time-2}, one can even introduce another estimator: the one among three of the five (or all five) with minimum unfitness. Call this estimator $\bs{\beta}^*_{PRD4}$. Its performance and properties are worthy of a thorough examination.
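The selection rule defining such an estimator is simply a minimum-unfitness pick over a small pool of candidate estimates; a one-line Python sketch (`uf` is a hypothetical stand-in for the UF functional):

```python
def prd4(candidates, uf):
    """Minimum-unfitness selection over a small pool of candidate
    estimates: return the candidate with the smallest uf value."""
    return min(candidates, key=uf)
```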
\vs
\begin{center}
{\textbf{\large Acknowledgments}}
\end{center}
The author thanks Hanshi Zuo, Yan-Han Chen, and Dr. Wei Shao for their proofreading of the manuscript and useful discussions on C++, R, and Rcpp programming, all of which have led to improvements in the manuscript.
\vs
\section{INTRODUCTION}
GRAVITY is a high-precision narrow-angle astrometry and interferometric imaging instrument~\cite{Eisenhauer2011}. It has been built for the Very Large Telescope Interferometer (VLTI) of the European Southern Observatory with the goal of monitoring the stellar sources in the vicinity of the Galactic Center supermassive black hole and the emission from the black hole's close environment. It coherently combines four beams (in the K-band) of either the Unit Telescopes (UTs) or the Auxiliary Telescopes (ATs), delivering an astrometric precision of $\sim 10$ micro-arcseconds ($\mu$as) and an angular resolution of $\sim 4$ milli-arcseconds (mas).
The main error sources for the GRAVITY astrometric measurements are the atmospheric turbulence and the error in the baseline length ($B$) between any two telescopes. The GRAVITY-Coud\'e Infrared Adaptive Optics (CIAO) measures and corrects wavefront aberrations induced by the atmospheric turbulence in the light path from the sky to the Coud\'e laboratory~\cite{Kendrew2012}. However, the CIAO corrections do not include the fast-moving tip-tilts induced by the VLTI tunnel seeing. These are tracked by launching external lasers at the telescopes (at the star separator) and imaging them in the GRAVITY beam combiner~\cite{Pfuhl2014}. The measured tip-tilts are corrected by dedicated tip-tilt-piston mirrors. Even after these corrections, tip-tilt residuals of more than 10\,mas remain. These tip-tilt errors limit the performance of GRAVITY in two ways~\cite{Lacour2014a}: a) they reduce the efficiency of star-light injection into the single-mode fibers which transport the light to the integrated optics~\cite{Jocou2010} for the beam combination; b) the unwanted tip-tilt error causes additional astrometric error.
The factor limiting the precision of the baseline length measurement between two telescopes (an RMS error of a few mm for a $\sim 100$\,m baseline) is pupil position errors. These errors are mainly due to misalignments and temperature gradients. Between the telescope pupil and the fiber-fed beam combiner, there exist several optical pupils that are in motion due to optical-train vibrations while tracking the object of interest. The typical lateral and longitudinal pupil position errors experienced by the VLTI are around 5\% of the pupil diameter and 1\,m, respectively, for the 80\,mm beam. Eq.~\ref{Chap2:EqAstrometricError} gives the astrometric error~\cite{Lacour2015c} associated with given tip-tilt ($\Delta \alpha$) and lateral pupil position ($\Delta L$) errors. For example, for a 10\,mas tip-tilt error and a 0.4\,m (5\% of the 8\,m pupil) lateral pupil position error, the astrometric error becomes 40\,$\mu$as for a single beam combiner:
\begin{equation}\label{Chap2:EqAstrometricError}
\sigma = \dfrac{\Delta \alpha \times \Delta L}{ B} .
\end{equation}
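The $40\,\mu$as figure quoted above can be checked numerically from Eq.~(\ref{Chap2:EqAstrometricError}); a short Python sketch of the unit conversion and the worked example:

```python
import math

MAS = math.radians(1.0 / 3.6e6)   # one milli-arcsecond in radians

# Worked example from the text: 10 mas tip-tilt error, 0.4 m lateral
# pupil error (5% of the 8 m pupil), ~100 m baseline.
delta_alpha = 10.0 * MAS          # tip-tilt error (rad)
delta_L = 0.4                     # lateral pupil position error (m)
B = 100.0                         # baseline length (m)

sigma = delta_alpha * delta_L / B         # astrometric error (rad)
sigma_uas = sigma / (MAS / 1000.0)        # in micro-arcseconds -> 40
```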
The CIAO measures and corrects the incoming distorted wavefronts with respect to a flat wavefront generated in the Coud\'e laboratory. However, there are many optics, as well as tunnel seeing, between the CIAO and the GRAVITY beam combiner, which introduce additional wavefront aberrations into the incoming beam. The quasi-static wavefront aberrations and the non-common-path errors that exist between the CIAO and the GRAVITY beam combiner also contribute to the astrometric error~\cite{Lacour2014a}.
Therefore a beam stabilization system is required to address the above issues. The GRAVITY acquisition camera has been built to image and analyze the VLTI beams and to: a) enable the stabilization of star light injection into the single mode fibers; b) minimize the astrometric errors induced by the field and the pupil errors.
\section{Acquisition camera}
\begin{figure}[h!]
\centering{
\def\textwidth{\textwidth}
\input{figs/fig2_ACQ_optical_layoutV3.pdf_tex}
\vspace{11pt}
\caption{\label{Chap3:Fig1} The acquisition camera working principle and optical layout (for clarity only one telescope beam case is shown). The astrophysical target's K-band beam is used for science measurements and the H-band beam is used for beam stabilization. The H-band beam feeds the three imaging modes of the acquisition camera: aberration sensor, pupil imager, field imager. The pupil tracker images external pupil reference laser light (\SI{1.2}{\micro m}). The box in red color encloses the acquisition camera optics.}
}
\end{figure}
Figure~\ref{Chap3:Fig1} presents the optical layout of the acquisition camera for one telescope beam. The acquisition camera provides four optical imaging modes: a) field imager -- images and tracks the field (tip-tilt) of the input beams; b) pupil tracker -- tracks the telescope pupil motions by imaging four pupil beacons installed on the spiders of the secondary mirror (M2) of the telescope; c) aberration sensor -- implements four $9\times9$ lenslet arrays to sense the incoming distorted wavefronts; d) pupil imager -- images the telescope pupil, which is used for monitoring the pupil visually during the observations. The pupil tracker images four pupil-guiding laser beacons of \SI{1.2}{\micro m} wavelength with a $2\times2$ lenslet array. The other three imaging modes image astrophysical targets in the H-band. The acquisition camera uses a $2048 \times 2048$ pixel Hawaii-2RG detector to image all four input telescope beams. The optical design of the camera~\cite{Amorim2012} and its optical alignment and integration~\cite{Gordo2014} were presented in previous SPIE proceedings.
To extract the beam stabilization parameters from the detector images, dedicated software has been developed based on the image analysis methods presented in Anugu et al. (2014)~\cite{Anugu2014S}. The software was written in C and C++ using the CLIP library~\cite{BallesterP.2008} and integrated into the GRAVITY observational software~\cite{burtscher2014gravity}. The software runs on the instrument workstation and performs the image analysis in two steps. First, it reads the acquisition camera detector image from the instrument shared memory every 0.7\,s (the detector frame rate). Second, it evaluates the beam's tip-tilts, pupil shifts and wavefront aberrations by analyzing the input detector image. The evaluated parameters are written to the instrument database. They feed the beam stabilization, which uses: a) the tip-tilt and piston (TTP) actuators; b) the pupil motion controller (PMC)~\cite{Pfuhl2014}; and c) the Variable Curvature Mirror (VCM)~\cite{Ferrari2003}. By stabilizing the field, it enables the injection of the K-band light of the astrophysical targets into the single-mode fibers which transport the beams towards the interferometric beam combiner. The image acquisition, analysis and beam correction are carried out on-line on the instrument workstation, in parallel to the observations, for all four telescopes.
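The two-step analysis cycle described above can be sketched as a simple periodic loop; all callables here are hypothetical stand-ins, not the actual instrument software API:

```python
import time

def analysis_cycle(read_frame, analyze, publish, period=0.7, n_frames=3):
    """Sketch of the acquisition-camera analysis cycle: read the detector
    frame from shared memory every detector period (0.7 s), derive the
    tip-tilt / pupil-shift / aberration parameters, and publish them to
    the instrument database for the downstream controllers."""
    results = []
    for _ in range(n_frames):
        frame = read_frame()     # step 1: detector image from shared memory
        params = analyze(frame)  # step 2: tip-tilts, pupil shifts, aberrations
        publish(params)          # write to the instrument database
        results.append(params)
        time.sleep(period)       # wait for the next detector frame
    return results
```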
\section{Characterization results}
GRAVITY passed its Preliminary Acceptance in Europe (PAE) tests in early 2015 and was moved in August 2015 to the Paranal Observatory, where it had first light in November 2015. Fig.~\ref{fig9} presents the acquisition camera detector image acquired on-sky with the ATs.
\begin{figure}[h!]
\parbox[t]{11pt}{\rotatebox{90}{meter}}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{AT_acq_image_Pupil.pdf} meter
\end{subfigure}
\end{tabular} \hspace{5pt} \rotatebox{90}{arc-sec}
\begin{tabular}{
@{}c@{}}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{AT_acq_image_Field.pdf} arc-sec
\end{subfigure}
\end{tabular}
\vspace{11pt}
\caption{\label{fig9} On-sky measurements of the acquisition camera imaging modes (left to right): pupil tracker, pupil imager, field tracker and aberration sensor. The astrophysical target is $\theta^1$ Ori A (North up and East on right). The aberration sensor image is rotated counterclockwise relative to the field image due to the mirror reflection.}
\end{figure}
\subsection{\label{LAB}Characterization for PAE}
During the PAE, the acquisition camera was characterized using the GRAVITY calibration unit~\cite{Blind2014}. The calibration unit is a GRAVITY subsystem which provides laboratory beams and artificial stars to test GRAVITY functionalities. It generates two artificial stars and four pupil guiding reference beams to test the acquisition camera imaging modes.
\subsubsection{Field tracking}\label{FI}
Characterization of the field tracker is carried out on three fronts: a) RMS and absolute tracking accuracy; b) RMS accuracy as a function of target magnitude; c) target flux injection into the fibers. To characterize the absolute accuracy of the field tracker, known tip-tilts ($\theta_{\rm i}$) are applied to the incoming beams by manipulating the TTP device and measured back ($\theta_{\rm o}$) using the field tracker function.
Figure~\ref{Chap7:fig5} (left panel) presents the absolute tracking accuracy, which is around $\sim 2$\,mas, with an RMS error of $\sim 1$\,mas. These measurements were carried out with a ``star'' of H-band magnitude 15. Figure~\ref{Chap7:fig5} (middle panel) presents the RMS error of the field tracker as a function of ``star'' magnitude. These ``stars'' are realized by the calibration unit by varying the voltage of the lamp. Figure~\ref{Chap7:fig5} (right panel) presents how the flux injection into the fiber is reduced as the star shifts from the fiber center. With the field stabilization enabled, the coupling efficiency of the fiber is maintained around 80\%.
\begin{figure}[h!]
\parbox[t]{18pt}{\rotatebox{90}{ \hspace{-0.8cm}$|\theta_{\rm i}$ - $\theta_{\rm o}|$ (mas) }}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig10_FIPerformance_1.pdf} $\theta_{\rm i}$ (arcsec)
\end{subfigure}
\end{tabular} \hspace{5pt} \rotatebox{90}{\hspace{-0.5cm}RMS error (mas)}
\begin{tabular}{
@{}c@{}}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig10_FIPerformance_2.pdf} Target magnitude
\end{subfigure}
\parbox[t]{11pt}{\rotatebox{90}{\hspace{-1.5cm} Normalized flux in the fiber}}
\begin{subfigure}{0.27\textwidth}
\centering
\includegraphics[width=\textwidth]{fig10_FIPerformance_3.pdf} Star shift from the fiber center (arcsec)
\end{subfigure}
\end{tabular}
\vspace{11pt}
\caption{\label{Chap7:fig5} Performance of the field tracker in the laboratory. Left: the absolute field tracking error $|\theta_{\rm i}-\theta_{\rm o}|$ as a function of true object position $\theta_{\rm i}$. Middle: the RMS error of the field tracking as a function of target magnitude. Right: Normalized star's flux injection into the fiber as function of tip-tilt/star shift from the fiber center. }
\end{figure}
\subsubsection{Lateral pupil tracking}
The lateral pupil tracking accuracy tests are implemented in two steps by manipulating the PMC actuators. First, known lateral pupil shifts ($L_{\rm {x_0}}$) are applied to the incoming beams by actuating the PMCs. Second, the input pupil shifts are sensed (say, $L_{\rm x}$) using the pupil tracker function. Figure~\ref{Chap7:fig4} presents the lateral pupil tracking accuracy as a function of the input pupil shifts. The lateral pupil tracking absolute accuracy is better than \SI{4}{\milli \meter} at the UT beam magnification (0.05\% of the UT diameter) and the RMS error is \SI{2}{\milli \meter}.
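The characterization procedure used throughout this section (apply known offsets, measure them back, report the worst absolute residual and the RMS residual) can be summarized in a few lines of Python; this is an illustrative sketch, not the instrument test code:

```python
import math

def tracking_errors(applied, measured):
    """Given known applied offsets and the values measured back by the
    tracker, return (worst absolute error, RMS error) of the residuals."""
    resid = [m - a for a, m in zip(applied, measured)]
    abs_err = max(abs(r) for r in resid)
    rms = math.sqrt(sum(r * r for r in resid) / len(resid))
    return abs_err, rms
```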
\begin{figure}[h!]
\centering
\parbox[t]{11pt}{\rotatebox{90}{\hspace{-0.8cm} $|L_{\rm {x_0}} - L_{\rm x}|$ (mm) }}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig9_PTPerformance_1.pdf} $L_{\rm {x_0}}$ (m)
\end{subfigure}
\end{tabular}
\caption{\label{Chap7:fig4} The absolute error in the lateral pupil position measurement $|L_{\rm x}-L_{\rm {x_0}}|$ as a function of the input lateral pupil shift $L_{x_0}$. The measurements are in the UT scale and were obtained in the laboratory.}
\end{figure}
\subsubsection{Longitudinal pupil tracking}
Longitudinal pupil tracking precision is characterized by manipulating the VCM positions. In this experiment, known longitudinal pupil shifts are applied to the beam by moving the VCM, and those are measured back by the pupil tracker. Fig.~\ref{fig10} (left panel) presents the longitudinal pupil characterization results; the accuracy is better than 40\,mm for the 80\,mm beam. The middle and right panels present, respectively, the UT pupil before and after closing the pupil guiding loop.
\begin{figure}[h!]
\centering
\rotatebox{90}{\hspace{-20pt}$|L_{\rm z} - L_{\rm {z_0}}|$ (mm)}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{fig9_PTPerformance_2.pdf} $L_{\rm {z_0}}$ (meter)
\end{subfigure}
\hspace{11pt }\rotatebox{90}{meter}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{Pupil_long_corr_OFF_ON1} meter
\end{subfigure}
\end{tabular} \hspace{5pt} \rotatebox{90}{meter}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{Pupil_long_corr_OFF_ON2} meter
\end{subfigure}
\end{tabular}
\end{tabular}
\caption{Left: On-sky absolute longitudinal pupil position error measurement ($|L_{\rm z}-L_{\rm {z_0}}|$) at 80\,mm as a function of input longitudinal pupil position $L_{z_0}$. Center, right: On-sky pupil images obtained before and after closing the pupil guiding closed loop. }
\label{fig10}
\end{figure}
\subsection{On-sky beam guiding performance}
The acquisition camera beam guiding has been tested for the ATs and for the UTs. The guiding difference between the ATs and the UTs is that the ATs are not corrected by CIAO. For this reason, the field imager is used to measure slow (tunnel) atmospheric tip-tilts. Whereas for the UTs, the field imager is used to measure the residual tip-tilts of the CIAO. The ATs telescope focus correction is implemented by tracking the focus using the aberration sensor and correcting it with their M2 mirrors.
As stated previously, the GRAVITY field and pupil offset corrections are achieved using two types of actuators: a) the VLTI star separator actuators~\cite{Nijenhuis2008} and the delay line VCM; b) GRAVITY internal field (TTP) and pupil (PMC) actuators. The VLTI star separator has a field selector mirror and a pupil position mirror (M14, laterally and longitudinally movable). Small field and pupil corrections occurring during interferometric observations are corrected in closed loop using GRAVITY internal actuators for the speed and accuracy. Large field and pupil offsets (which usually take place during the initial alignment of GRAVITY with the VLTI) are corrected using star separator actuators. While operating with GRAVITY internal actuators in the closed loop, when the offsets are larger than GRAVITY internal actuators range, they are offloaded to the VLTI actuators.
During the installation, verification and commissioning phases, the guiding loops of the field, the pupil were extensively tested. Characterization of the field guiding is realized by observing several dual-field stars and checking the stability of injection of star's light into single mode fibers. Characterization of the pupil guiding is implemented by applying known pupil shifts with pupil correcting actuators and checking their stability over an hour. The pupil tracker experiences high backgrounds from bright astrophysical targets due to the closeness of: a) the operating wavelengths of the pupil tracker (operating at 1.2\,$\mu$m) and the field tracker (operates at $H$-band -- 1.65\,$\mu$m) and; b) no adequate dichroic filter for the pupil tracker. The operation of the pupil guiding for an interferometric observation depends on two factors: a) the flux of the pupil guiding laser beacons; b) the magnitude of the astrophysical target. To remove the astrophysical target background, a new software mode, BLINK, is implemented. The basic idea of this mode is to remove the background by switching OFF and ON the pupil beacons. During the turn OFF period, the background is stored and the is subtracted from the subsequent frame. The performance of the pupil guiding is improved by 2 magnitudes when using this BLINK mode. Currently it can operate up to magnitude 1 for the ATs.
The non-common path errors between the CIAO and GRAVITY are calibrated by inputting known wavefronts to the GRAVITY beams that are generated by manipulating the MACAO deformable mirror~\cite{Arsenault2003} and measuring the input wavefront aberrations using the aberration sensor. The offsets are then taken into account by CIAO to compensate the non-common path aberrations.
The following summarizes the current on-sky performance of the acquisition camera:
\begin{itemize}
\item \textbf{Pupil guiding residuals:} The standard deviation of the lateral and the longitudinal pupil guiding residuals are remain smaller than $\pm~0.2$ pixels RMS on the detector (i.e., 12\,mm at the UTs) and the 50\,mm for 80 beam size respectively in the presence of faint astrophysical target.
\item \textbf{Field guiding residuals:} The standard deviation of the field guiding residuals are smaller than $\pm~0.36$ pixels (6.4\,mas). When the tests of the field guiding for the ATs (not equipped with the CIAO) took place, the laser based tunnel seeing tip-tilt tracking system was not installed. That is why the residuals are large.
\item \textbf{Focus guiding} The standard deviation of the focus guiding residuals for the ATs are within $\pm~\lambda/8$ RMS.
\end{itemize}
\section{Discussion and conclusions}
The GRAVITY acquisition camera was installed at the Very Large Telescope Interferometer during the last months of 2015. Since then it has been successfully working in the closed loop to stabilize the slow (tunnel) atmospheric turbulence field motions and the pupil motions.
The camera is characterized using laboratory generated telescope beams at MPE and on-sky at the Paranal Observatory. The characterization results revealed that it is able to analyze the telescope beams with the accuracy required by $10\,\mu$as astrometry, namely: a) field tracking with 2\,mas RMS; b) lateral pupil tracking with 4\,mm RMS (at the UT scale); c) longitudinal pupil tracking with 50\,mm RMS (at 80 mm beam scale); and d) quasi-static higher order wavefront aberration measurements with 80\,nm RMS.
The acquisition camera measured beam parameters are used in the stabilization of the field, the pupil and the focus correction. With the stabilization of the field, the star's light injection into single mode fibers is improved. The quasi-static higher order aberrations measurements are used for the non-common path errors corrections between the CIAO and GRAVITY. The pupil imager is used for visual monitoring. By stabilizing the telescope beams, the astrometric error induced by the field (accuracy of 2\,mas RMS, cf. Section \ref{FI}) and the pupil errors (accuracy of 12\,mm RMS) is minimized to $0.34\,\mu$as).
\section*{Acknowledgments}
Anugu acknowledges FCT-Portugal grant SFRH/BD/52066/2012. The research leading to these results was partially supported by FCT-Portugal grant PTDC/CTE-AST/116561/2010 and the European Community's Seventh Framework Programme under Grant Agreement 312430.
\section{Introduction}
GRAVITY is a high precision narrow angle astrometry and interferometric imaging instrument~\cite{Eisenhauer2011}. It has been built for the Very Large Telescope Interferometer (VLTI) of the European Southern Observatory with the goal of monitoring the stellar sources in the vicinity of the Galactic Center supermassive black hole and the emission from the close environment of the black hole. It coherently combines four beams (in the K band) of either the Unit Telescopes (UTs) or the Auxiliary Telescopes (ATs), delivering an astrometric precision of $\sim 10$ micro-arcseconds ($\mu$as) and an angular resolution of $\sim 4$ milli-arcseconds (mas).
The main error sources for the GRAVITY astrometric measurements are the atmospheric turbulence and the error in the baseline length ($B$) between any two telescopes. The GRAVITY-Coud\'e Infrared Adaptive Optics (CIAO) measures and corrects wavefront aberrations induced by the atmospheric turbulence in the light path from the sky to the Coud\'e laboratory~\cite{Kendrew2012}. However, the CIAO corrections do not include the fast tip-tilt motions induced by the VLTI tunnel seeing. These are tracked by launching external lasers at the telescopes (at the star separator) and imaging them in the GRAVITY beam combiner~\cite{Pfuhl2014}; the measured tip-tilts are corrected by dedicated tip-tilt-piston mirrors. These corrections still leave tip-tilt residuals of more than 10\,mas, which limit the performance of GRAVITY in two ways~\cite{Lacour2014a}: a) they reduce the efficiency of starlight injection into the single-mode fibers which transport the light to the integrated optics~\cite{Jocou2010} for the beam combination; b) they cause additional astrometric error.
The factor limiting the accuracy of the baseline length measurement between two telescopes (RMS error of a few mm for a $\sim 100$\,m baseline) is pupil position errors. These errors are mainly due to misalignments and temperature gradients. Between the telescope pupil and the fiber-fed beam combiner, there exist several optical pupils that are in motion, due to vibrations of the optical train, while tracking the object of interest. The typical lateral and longitudinal pupil position errors experienced by the VLTI are around 5\% of the pupil diameter and 1\,m, respectively, for the 80\,mm beam. Eq.~\ref{Chap2:EqAstrometricError} gives the astrometric error~\cite{Lacour2015c} associated with given tip-tilt ($\Delta \alpha$) and lateral pupil position ($\Delta L$) errors. For example, for a 10\,mas tip-tilt error and a 0.4\,m (5\% of an 8\,m pupil) lateral pupil position error, the astrometric error becomes 40\,$\mu$as for a single beam combiner
\begin{equation}\label{Chap2:EqAstrometricError}
\sigma = \dfrac{\Delta \alpha \times \Delta L}{ B} .
\end{equation}
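As a check of the numbers quoted above, Eq.~\ref{Chap2:EqAstrometricError} can be evaluated directly (a minimal sketch; the helper name and unit handling are ours, not part of the GRAVITY software):

```python
def astrometric_error_uas(tip_tilt_mas, pupil_shift_m, baseline_m):
    """Eq. (1): sigma = delta_alpha * delta_L / B.

    The angle unit carries through the product, so converting mas -> uas
    (factor 1000) and taking the dimensionless ratio delta_L / B suffices.
    """
    return tip_tilt_mas * 1e3 * pupil_shift_m / baseline_m

# 10 mas tip-tilt error, 0.4 m lateral pupil error, ~100 m baseline
print(astrometric_error_uas(10.0, 0.4, 100.0))  # -> 40.0 (uas)
```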
The CIAO measures and corrects the incoming distorted wavefronts with respect to a flat wavefront generated in the Coud\'e laboratory. However, there are many optical elements, as well as tunnel seeing, between the CIAO and the GRAVITY beam combiner, which introduce additional wavefront aberrations to the incoming beam. The quasi-static wavefront aberrations and the non-common path errors between the CIAO and the GRAVITY beam combiner therefore also contribute to the astrometric error~\cite{Lacour2014a}.
Therefore a beam stabilization system is required to address the above issues. The GRAVITY acquisition camera has been built to image and analyze the VLTI beams in order to: a) enable the stabilization of the starlight injection into the single-mode fibers; b) minimize the astrometric errors induced by the field and pupil errors.
\section{Acquisition camera}
\begin{figure}[h!]
\centering{
\def\svgwidth{\textwidth}
\input{figs/fig2_ACQ_optical_layoutV3.pdf_tex}
\vspace{11pt}
\caption{\label{Chap3:Fig1} The acquisition camera working principle and optical layout (for clarity, only one telescope beam is shown). The astrophysical target's K-band beam is used for the science measurements and the H-band beam is used for the beam stabilization. The H-band beam feeds three imaging modes of the acquisition camera: aberration sensor, pupil imager and field imager. The pupil tracker images the external pupil reference laser light (\SI{1.2}{\micro m}). The red box encloses the acquisition camera optics.}
}
\end{figure}
Figure~\ref{Chap3:Fig1} presents the optical layout of the acquisition camera for one telescope beam. The acquisition camera provides four optical imaging modes: a) field imager -- images and tracks the field (tip-tilt) of the input beams; b) pupil tracker -- tracks the telescope pupil motions by imaging four pupil beacons installed on the spiders of the telescope secondary mirror (M2); c) aberration sensor -- uses four $9\times9$ lenslet arrays to sense the incoming distorted wavefronts; d) pupil imager -- images the telescope pupil for visual monitoring during the observations. The pupil tracker images the four pupil guiding laser beacons (\SI{1.2}{\micro m} wavelength) with a $2\times2$ lenslet array. The other three imaging modes image astrophysical targets in the H band. The acquisition camera uses a $2048 \times 2048$ pixel Hawaii-2RG detector to image all four input telescope beams. The optical design of the camera~\cite{Amorim2012} and its optical alignment and integration~\cite{Gordo2014} were presented in previous SPIE proceedings.
To extract the beam stabilization parameters from the detector images, dedicated software has been developed based on the image analysis methods presented in Anugu et al. (2014)~\cite{Anugu2014S}. The software was written in C and C++ using the CLIP library~\cite{BallesterP.2008} and integrated into the GRAVITY observation software~\cite{burtscher2014gravity}. It runs on the instrument workstation and performs the image analysis in two steps. First, it reads the acquisition camera detector image from the instrument shared memory every 0.7\,s (the detector frame rate). Second, it evaluates the beam tip-tilts, pupil shifts and wavefront aberrations by analyzing the input detector image. The evaluated parameters are written to the instrument database and feed the beam stabilization, which uses: a) the tip-tilt and piston (TTP) mirrors; b) the pupil motion controller (PMC)~\cite{Pfuhl2014}; and c) the Variable Curvature Mirror (VCM)~\cite{Ferrari2003}. By stabilizing the field, it enables the injection of the K-band light of the astrophysical targets into the single-mode fibers which transport the beams to the interferometric beam combiner. The image acquisition, analysis and beam correction are carried out on-line in the instrument workstation, in parallel with the observations, for all four telescopes.
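As an illustration of the per-frame analysis, the field tracker's tip-tilt measurement reduces to a centroid computation on a subwindow around each star image. A minimal sketch, assuming a background-subtracted subwindow (the function name and plate scale are illustrative; the real software is written in C/C++):

```python
import numpy as np

def tip_tilt_mas(subwindow, plate_scale_mas=18.0):
    """Intensity-weighted centroid offset from the subwindow center, in mas.

    plate_scale_mas is a placeholder value, not the actual camera plate scale.
    """
    img = np.clip(np.asarray(subwindow, dtype=float), 0.0, None)  # drop negative noise
    total = img.sum()
    if total == 0.0:
        raise ValueError("no flux in subwindow")
    ys, xs = np.indices(img.shape)
    cy = (ys * img).sum() / total
    cx = (xs * img).sum() / total
    # tip-tilt = centroid offset from the geometric center, scaled to sky angle
    oy = (cy - (img.shape[0] - 1) / 2.0) * plate_scale_mas
    ox = (cx - (img.shape[1] - 1) / 2.0) * plate_scale_mas
    return float(ox), float(oy)

# synthetic star displaced by +1 pixel in x from the center of an 11x11 window
frame = np.zeros((11, 11))
frame[5, 6] = 100.0
print(tip_tilt_mas(frame))  # -> (18.0, 0.0)
```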
\section{Characterization results}
GRAVITY passed the Preliminary Acceptance Europe (PAE) tests in early 2015 and was moved in August 2015 to the Paranal Observatory, where it achieved first light in November 2015. Figure~\ref{fig9} presents an acquisition camera detector image acquired on-sky with the ATs.
\begin{figure}[h!]
\parbox[t]{11pt}{\rotatebox{90}{meter}}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{AT_acq_image_Pupil.pdf} meter
\end{subfigure}
\end{tabular} \hspace{5pt} \rotatebox{90}{arc-sec}
\begin{tabular}{
@{}c@{}}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{AT_acq_image_Field.pdf} arc-sec
\end{subfigure}
\end{tabular}
\vspace{11pt}
\caption{\label{fig9} On-sky measurements of the acquisition camera imaging modes (left to right): pupil tracker, pupil imager, field tracker and aberration sensor. The astrophysical target is $\theta^1$ Ori A (North up, East to the right). The aberration sensor image is rotated counterclockwise with respect to the field image due to a mirror reflection.}
\end{figure}
\subsection{\label{LAB}Characterization for PAE}
During the PAE, the acquisition camera was characterized using the GRAVITY calibration unit~\cite{Blind2014}. The calibration unit is a GRAVITY subsystem which provides laboratory beams and artificial stars to test GRAVITY functionalities. It generates two artificial stars and four pupil guiding reference beams to test the acquisition camera imaging modes.
\subsubsection{Field tracking}\label{FI}
Characterization of the field tracker is accomplished on three fronts: a) RMS and absolute tracking accuracy; b) RMS accuracy as a function of target magnitude; c) target flux injection into the fibers. To characterize the absolute accuracy of the field tracker, known tip-tilts ($\theta_{\rm i}$) are applied to the incoming beams by manipulating the TTP device and measured back ($\theta_{\rm o}$) using the field tracker function.
Figure~\ref{Chap7:fig5} (left panel) presents the absolute tracking accuracy, which is around $\sim 2$\,mas, while the RMS error is $\sim 1$\,mas. These measurements were carried out with a ``star'' of H-band magnitude 15. Figure~\ref{Chap7:fig5} (middle panel) presents the RMS error of the field tracker as a function of ``star'' magnitude; these ``stars'' are realized with the calibration unit by varying the voltage of its lamp. Figure~\ref{Chap7:fig5} (right panel) presents how the flux injection into the fiber decreases as the star is shifted from the fiber center. With the field stabilization enabled, the coupling efficiency of the fiber is maintained around 80\%.
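The absolute and RMS accuracy figures quoted above are plain statistics over the applied/measured tip-tilt pairs; a minimal sketch (the sample arrays are illustrative stand-ins for the recorded $\theta_{\rm i}$, $\theta_{\rm o}$ values):

```python
import numpy as np

def tracking_errors(theta_in_mas, theta_out_mas):
    """Per-point absolute tracking error and overall RMS error (both in mas)."""
    theta_in = np.asarray(theta_in_mas, dtype=float)
    theta_out = np.asarray(theta_out_mas, dtype=float)
    abs_err = np.abs(theta_in - theta_out)                 # |theta_i - theta_o|
    rms_err = np.sqrt(np.mean((theta_in - theta_out) ** 2))
    return abs_err, rms_err

# illustrative values: measured positions scatter ~1 mas around the applied ones
applied = [0.0, 100.0, 200.0, 300.0]
measured = [1.0, 99.0, 201.0, 299.0]
abs_err, rms = tracking_errors(applied, measured)
print(abs_err.max(), rms)  # -> 1.0 1.0
```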
\begin{figure}[h!]
\parbox[t]{18pt}{\rotatebox{90}{ \hspace{-0.8cm}$|\theta_{\rm i}$ - $\theta_{\rm o}|$ (mas) }}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig10_FIPerformance_1.pdf} $\theta_{\rm i}$ (arcsec)
\end{subfigure}
\end{tabular} \hspace{5pt} \rotatebox{90}{\hspace{-0.5cm}RMS error (mas)}
\begin{tabular}{
@{}c@{}}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig10_FIPerformance_2.pdf} Target magnitude
\end{subfigure}
\parbox[t]{11pt}{\rotatebox{90}{\hspace{-1.5cm} Normalized flux in the fiber}}
\begin{subfigure}{0.27\textwidth}
\centering
\includegraphics[width=\textwidth]{fig10_FIPerformance_3.pdf} Star shift from the fiber center (arcsec)
\end{subfigure}
\end{tabular}
\vspace{11pt}
\caption{\label{Chap7:fig5} Performance of the field tracker in the laboratory. Left: the absolute field tracking error $|\theta_{\rm i}-\theta_{\rm o}|$ as a function of the true object position $\theta_{\rm i}$. Middle: the RMS error of the field tracking as a function of target magnitude. Right: normalized stellar flux injected into the fiber as a function of the tip-tilt (star shift from the fiber center). }
\end{figure}
\subsubsection{Lateral pupil tracking}
The lateral pupil tracking accuracy tests are implemented in two steps by manipulating the PMC actuators. First, known lateral pupil shifts ($L_{\rm {x_0}}$) are applied to the incoming beams by actuating the PMCs. Second, the input pupil shifts are sensed (say, $L_{\rm x}$) using the pupil tracker function. Figure~\ref{Chap7:fig4} presents the lateral pupil tracking accuracy as a function of the input pupil shifts. The lateral pupil tracking absolute accuracy is better than \SI{4}{\milli \meter} at the UT beam magnification (0.05\% of the UT diameter) and the RMS error is \SI{2}{\milli \meter}.
\begin{figure}[h!]
\centering
\parbox[t]{11pt}{\rotatebox{90}{\hspace{-0.8cm} $|L_{\rm {x_0}} - L_{\rm x}|$ (mm) }}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{fig9_PTPerformance_1.pdf} $L_{\rm {x_0}}$ (m)
\end{subfigure}
\end{tabular}
\caption{\label{Chap7:fig4} The absolute error in the lateral pupil position measurement $|L_{\rm x}-L_{\rm {x_0}}|$ as a function of the input lateral pupil shift $L_{\rm {x_0}}$. The measurements are at the UT scale and were obtained in the laboratory.}
\end{figure}
\subsubsection{Longitudinal pupil tracking}
Longitudinal pupil tracking precision is characterized by manipulating the VCM positions. In this experiment, known longitudinal pupil shifts are applied to the beam by moving the VCM and are measured back by the pupil tracker. Figure~\ref{fig10} (left panel) presents the longitudinal pupil characterization results; the accuracy is better than 40\,mm for the 80\,mm beam. The middle and right panels present, respectively, the UT pupil before and after closing the pupil guiding loop.
\begin{figure}[h!]
\centering
\rotatebox{90}{\hspace{-20pt}$|L_{\rm z} - L_{\rm {z_0}}|$ (mm)}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{fig9_PTPerformance_2.pdf} $L_{\rm {z_0}}$ (meter)
\end{subfigure}
\hspace{11pt }\rotatebox{90}{meter}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{Pupil_long_corr_OFF_ON1} meter
\end{subfigure}
\end{tabular} \hspace{5pt} \rotatebox{90}{meter}
\begin{tabular}
{@{}c@{}}
\begin{subfigure}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{Pupil_long_corr_OFF_ON2} meter
\end{subfigure}
\end{tabular}
\end{tabular}
\caption{Left: On-sky absolute longitudinal pupil position error ($|L_{\rm z}-L_{\rm {z_0}}|$) at the 80\,mm beam scale as a function of the input longitudinal pupil position $L_{\rm {z_0}}$. Center, right: on-sky pupil images obtained before and after closing the pupil guiding loop. }
\label{fig10}
\end{figure}
\subsection{On-sky beam guiding performance}
The acquisition camera beam guiding has been tested for the ATs and for the UTs. The guiding difference between the ATs and the UTs is that the ATs are not corrected by CIAO: for the ATs, the field imager is used to measure the slow (tunnel) atmospheric tip-tilts, whereas for the UTs it is used to measure the residual tip-tilts of the CIAO correction. The AT telescope focus correction is implemented by tracking the focus with the aberration sensor and correcting it with the telescope M2 mirrors.
As stated previously, the GRAVITY field and pupil offset corrections are achieved using two types of actuators: a) the VLTI star separator actuators~\cite{Nijenhuis2008} and the delay line VCM; b) the GRAVITY internal field (TTP) and pupil (PMC) actuators. The VLTI star separator has a field selector mirror and a pupil positioning mirror (M14, laterally and longitudinally movable). Small field and pupil offsets occurring during interferometric observations are corrected in closed loop using the GRAVITY internal actuators, for speed and accuracy. Large field and pupil offsets (which usually occur during the initial alignment of GRAVITY with the VLTI) are corrected using the star separator actuators. When, during closed-loop operation, the offsets exceed the range of the GRAVITY internal actuators, they are offloaded to the VLTI actuators.
During the installation, verification and commissioning phases, the field and pupil guiding loops were extensively tested. Characterization of the field guiding is realized by observing several dual-field stars and checking the stability of the starlight injection into the single-mode fibers. Characterization of the pupil guiding is implemented by applying known pupil shifts with the pupil correcting actuators and checking their stability over an hour. The pupil tracker experiences a high background from bright astrophysical targets because of: a) the closeness of the operating wavelengths of the pupil tracker (1.2\,$\mu$m) and the field tracker ($H$-band, 1.65\,$\mu$m); and b) the lack of an adequate dichroic filter for the pupil tracker. The operation of the pupil guiding for an interferometric observation therefore depends on two factors: a) the flux of the pupil guiding laser beacons; b) the magnitude of the astrophysical target. To remove the astrophysical target background, a new software mode, BLINK, has been implemented. The basic idea of this mode is to remove the background by switching the pupil beacons OFF and ON. During the OFF period, the background is stored and then subtracted from the subsequent frames. The performance of the pupil guiding is improved by 2 magnitudes when using this BLINK mode; currently it can operate up to magnitude 1 for the ATs.
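The BLINK scheme described above amounts to frame differencing keyed to the beacon state. A minimal sketch (function and variable names are illustrative; the real implementation runs inside the instrument software):

```python
import numpy as np

def blink_subtract(frames, beacon_on_flags):
    """For each beacon-ON frame, subtract the most recent beacon-OFF frame,
    which contains only the astrophysical-target background."""
    background = None
    cleaned = []
    for frame, beacon_on in zip(frames, beacon_on_flags):
        if not beacon_on:
            background = frame                  # store background during the OFF period
        elif background is not None:
            cleaned.append(frame - background)  # beacon signal, background removed
    return cleaned

# beacons contribute 10 counts on top of a 5-count target background
off = np.full((4, 4), 5.0)
on = off + 10.0
result = blink_subtract([off, on, on], [False, True, True])
print(result[0].mean())  # -> 10.0
```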
The non-common path errors between the CIAO and GRAVITY are calibrated by applying known wavefronts, generated by manipulating the MACAO deformable mirror~\cite{Arsenault2003}, to the GRAVITY beams and measuring the resulting wavefront aberrations with the aberration sensor. The offsets are then taken into account by CIAO to compensate the non-common path aberrations.
The following summarizes the current on-sky performance of the acquisition camera:
\begin{itemize}
\item \textbf{Pupil guiding residuals:} The standard deviations of the lateral and longitudinal pupil guiding residuals remain smaller than $\pm~0.2$ pixels RMS on the detector (i.e., 12\,mm at the UTs) and 50\,mm for the 80\,mm beam size, respectively, in the presence of a faint astrophysical target.
\item \textbf{Field guiding residuals:} The standard deviation of the field guiding residuals is smaller than $\pm~0.36$ pixels (6.4\,mas). When the field guiding tests for the ATs (not equipped with the CIAO) took place, the laser-based tunnel seeing tip-tilt tracking system was not yet installed, which is why the residuals are large.
\item \textbf{Focus guiding:} The standard deviation of the focus guiding residuals for the ATs is within $\pm~\lambda/8$ RMS.
\end{itemize}
\section{Discussion and conclusions}
The GRAVITY acquisition camera was installed at the Very Large Telescope Interferometer during the last months of 2015. Since then it has been successfully working in closed loop to stabilize the slow (tunnel) atmospheric field motions and the pupil motions.
The camera was characterized using laboratory-generated telescope beams at MPE and on-sky at the Paranal Observatory. The characterization results show that it is able to analyze the telescope beams with the accuracy required for $10\,\mu$as astrometry, namely: a) field tracking with 2\,mas RMS; b) lateral pupil tracking with 4\,mm RMS (at the UT scale); c) longitudinal pupil tracking with 50\,mm RMS (at the 80\,mm beam scale); and d) quasi-static higher-order wavefront aberration measurements with 80\,nm RMS.
The beam parameters measured by the acquisition camera are used in the stabilization of the field and the pupil and in the focus correction. With the stabilization of the field, the starlight injection into the single-mode fibers is improved. The quasi-static higher-order aberration measurements are used to correct the non-common path errors between the CIAO and GRAVITY. The pupil imager is used for visual monitoring. By stabilizing the telescope beams, the astrometric error induced by the field errors (accuracy of 2\,mas RMS, cf. Section~\ref{FI}) and the pupil errors (accuracy of 12\,mm RMS) is minimized to $0.34\,\mu$as.
\section*{Acknowledgments}
Anugu acknowledges FCT-Portugal grant SFRH/BD/52066/2012. The research leading to these results was partially supported by FCT-Portugal grant PTDC/CTE-AST/116561/2010 and the European Community's Seventh Framework Programme under Grant Agreement 312430.
cond-mat/0606049
\section{Introduction}
Despite increasing interest in recent years in geometrically frustrated systems \cite{Moessner06},
there still exists considerable controversy over the nature of the phase transition in the
prototypical magnetic system characterized by near-neighbor antiferromagnetic exchange interactions
on a stacked triangular lattice \cite{Villian77, Toulouse77, Collins97}. The prediction made twenty
years ago that helical degeneracy associated with the 120$^\circ$ magnetic order leads to new
Heisenberg and XY chiral universality classes \cite{Kawamura87} found support over the following
ten years or so from numerous renormalization-group studies, Monte-Carlo simulations and
experimental data \cite{Collins97, Kawamura87, Kawamura98, aa, Weber95}. Results in favor of this
scenario continue to appear \cite{Plakhty00, DeFotis02, Peles03}. An alternative proposal of a very
weak fluctuation-induced first order transition made soon after the original suggestion
\cite{Azaria90} has also been strengthened by further theoretical studies and numerical simulations
\cite{Plumer97, Loison98, bb, Delamotte04, Peles04, Kanki06}. Experimental data on both rare-earth
helimagnets as well as ABX$_3$ compounds, such as CsMnBr$_3$, have been used extensively to support
each scenario. Evidence for a weak first-order transition was found in the thermal expansion data
on the helimagnet Ho long before this controversy surfaced \cite{Tindall77}. The results presented
in this Letter reveal the first experimental evidence that the 120$^\circ$ XY transition is weakly
first order. This is achieved also through magnetoelastic coupling effects, via ultrasonic sound
velocity measurements in the field-induced 120$^\circ$ phase of CsNiCl$_3$.
CsNiCl$_3$ is one of the more widely investigated of the large class of quasi-one-dimensional
hexagonal ABX$_3$ materials with strong antiferromagnetic $c$-axis exchange \cite{Collins97}. Along
with its sister compounds CsMnI$_3$, CsNiBr$_3$ and RbNiBr$_3$, it has weak $c$-axis anisotropy
giving rise to a Linear (L) Ising-like ordered state in zero field at $T_{N_1}=4.8~$K, with an
additional in-plane ordering at $T_{N_2}=4.4~$K, resulting in an elliptical (E) polarization of the
spin density ({\bf S}) at low temperatures \cite{Plumer88}. These phases are characterized by a
period-2 modulation along the $c$-axis and period-3 in the basal plane. A magnetic field applied
along the $c$-axis of only 2.3~T is sufficient to induce a spin-flop phase where {\bf S} now lies
in the basal plane and forms the familiar 120$^\circ$ spin structure of the frustrated triangular
antiferromagnet. These three phases meet at an unusual type of multicritical point
\cite{Kawamura90} (at $T_m = 4.53~$K, $H_m = 2.3~$T). The transition to the Ising-like state
involves a phase-factor degeneracy associated with the triangular geometry and has been predicted to
belong to the XY universality class. The nature of even this transition has a history of
controversy, with the most recent addition being an analysis of neutron diffraction data on a
similar transition in CsCoBr$_3$ suggesting tricritical behavior \cite{Mao02}. The transition at
$T_{N_2}$ is generally accepted to also be of XY universality. At the multicritical point, the
axial anisotropy is exactly canceled by the applied field and the system is effectively isotropic
(Heisenberg-like). At higher field strengths, there is XY symmetry along with chiral degeneracy.
Such systems provide a convenient platform for the investigation of a line of phase transitions (to
the paramagnetic state) which can be sampled by changing the field strength. In contrast with the
present data, previous experimental investigations of this transition boundary support the notion
of $n=2$ chiral universality \cite{Collins97, Beckmann93, Enderle97}.
Magnetoelastic coupling has been repeatedly demonstrated to be a useful mechanism to reveal the
nature of the magnetic ordering in CsNiCl$_3$ \cite{Almond75, Johnson79, Poirier90}. Landau theory
can be used to show how elastic constants scale with the various order parameters and also to yield
mean-field predictions of anomalies at the transition boundaries \cite{Quirion05}. In the present
work, results and analysis are presented of high-resolution measurements of various elastic
constants as a function of both temperature and magnetic field, with a focus on the
paramagnetic-to-120$^\circ$ spin phase boundary. Step-like discontinuities are found where Landau
theory predicts none. Attempts at curve fitting to extract a critical exponent $\beta$ show that
this leads to values which are field dependent. The strongest evidence that the paramagnetic to
spin-flop phase boundary is weakly first order is found in the hysteretic behavior of $C_{44}$ as a
function of temperature and field.
Contrary to previous investigations \cite{Almond75, Poirier90, Trudeau92}, where ultrasonic
techniques were mainly used in order to determine phase diagrams, the emphasis of the present work
is on the measurement of critical phenomena in CsNiCl$_3$. This was realized using a
high-resolution pulsed ultrasonic interferometer to measure the temperature and magnetic field
dependence of different acoustic modes propagating along or perpendicular to the hexagonal
$c$-axis. Measurements were carried out on a single crystal specimen of 8.9~mm in length along the
$c$-axis and approximately 2.5~mm along the perpendicular directions. The acoustic modes at 30~MHz
were generated using longitudinal and transverse lithium niobate transducers.
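For context, pulse-echo ultrasonic measurements of this kind convert a round-trip delay into a sound velocity and then an elastic constant via $C = \rho v^2$. A minimal sketch (the sample length is the one quoted above; the round-trip delay and density are illustrative numbers, not measured values):

```python
def elastic_constant(length_m, round_trip_s, density_kg_m3):
    """Pulse-echo conversion: the echo travels down the sample and back,
    so v = 2 L / dt, and the associated elastic constant is C = rho * v^2 (Pa)."""
    velocity = 2.0 * length_m / round_trip_s
    return density_kg_m3 * velocity ** 2

# 8.9 mm sample length (as quoted above); 5 us round trip and
# 3000 kg/m^3 density are illustrative placeholders
c = elastic_constant(8.9e-3, 5.0e-6, 3000.0)
print(c / 1e9)  # elastic constant in GPa
```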
The analysis of the temperature or field dependence of elastic properties provides a convenient way
to study magnetic critical behavior in many systems. For hexagonal CsNiCl$_3$, a Landau type
approach has been used to determine the relationship between the variations in the elastic
constants $C_{33}$ and $C_{11}$ and the various order parameters \cite{Quirion05}. This coupling
occurs due to magnetoelastic contributions to the free energy of the form $\sim g~ e_i~ S^2$, where
$e_i$ is an element of the strain tensor and $S$ is the order parameter. A similar approach can be
used to include coupling terms which account for the application of a magnetic field along the
$c$-axis. One result of this new model \cite{Quirion06} indicates that the relative variation
$\Delta C_{33}/C_{33}$ can be generalized as
\begin{equation}\label{eq:C33S}
\frac{\Delta C_{33}(T,H)}{C_{33}} = - \Delta + \gamma~ S(T,H)^2
\end{equation}
where $\Delta$ and $\gamma$ are constants specific to the magnetic transition of interest, while
the temperature and field dependencies are directly associated with those of the order parameter
$S$. According to (\ref{eq:C33S}), $C_{33}$ is expected to show a discontinuity even in the case of
a continuous phase transition. Thus, results on $C_{33}$ cannot be used to discriminate between a
continuous and weakly first order transition. This type of behavior is to be expected whenever a
linear-quadratic coupling, between the strain and the order parameter, is allowed by symmetry
\cite{Quirion05}. In order to clearly determine the character of the transition of the 120$^\circ$
phase, other ultrasonic modes need to be used, in particular those that depend exclusively on
quadratic-quadratic coupling terms ($e^2 S^2$). The allowed coupling terms, compatible with
hexagonal symmetry (not included in our previous analysis \cite{Quirion05}) are simply
\begin{eqnarray}
F_c(S,e_i) & = & \frac{g_{s_4}}{2}(e_4^2+e_5^2) S_z^2 + \frac{g_{s_6}}{2}e_6^2 S_z^2 \\
\nonumber & + & \frac{g_{\beta_4}}{2}(e_4^2+e_5^2) S_\perp^2 + \frac{g_{\beta_6}}{2}e_6^2
S_\perp^2
\end{eqnarray}
where the notation of Ref.~[28] has been used. As these coupling terms are quadratic in
strain, the variation in the elastic constants can be written as
\begin{eqnarray}
\frac{\Delta C_{44}}{C_{44}} = \frac{\Delta C_{55}}{C_{55}} & = &g_{s_4} S_z^2 +
g_{\beta_4} S_\perp^2 \label{eq:C44TQ}\\
\frac{\Delta C_{66}}{C_{66}} & = & g_{s_6} S_z^2+ g_{\beta_6} S_\perp^2~.
\label{eq:C66TQ}
\end{eqnarray}
In the case of a hexagonal structure, $C_{44}$ and $C_{66}$ can be obtained by measuring the
velocity of transverse waves propagating along and perpendicular to the $c$-axis, respectively.
Figure~\ref{C66} shows the most significant results of the temperature dependence of $\Delta
C_{66}/C_{66}$ with the magnetic field applied along the $c$-axis.
\begin{figure}[b]
\includegraphics{C66HvsT}
\caption{\label{C66} Relative variation of the elastic constant $C_{66}$
as a function of temperature. The broken and continuous lines represent results obtained below and
above $H_m = 2.3~$T, respectively.}
\end{figure}
At $H = 0~$T, the onset of the L - E phase transition is clearly visible ($T_{N_2} = 4.33~$K) while
the variation at $T_{N_1}$ is barely noticeable. The results show two distinct behaviors depending
on whether the value of the field is lower or higher than $H_m$. In the elliptical phase, $C_{66}$
softens as the temperature decreases while the opposite trend is observed in the 120$^\circ$ spin
phase above $H_m$. The observed temperature dependencies of $\Delta C_{66}/C_{66}$ are perfectly
consistent with the Landau predictions (\ref{eq:C66TQ}) and these data can be used to estimate the
temperature dependence of the order parameter $S_\bot$. The results of this analysis, obtained at
different fields, are presented in Fig.~\ref{QC66} as a function of the reduced temperature $\tau =
1 - T/T_{N_2}$ on a log-log plot. All curves show a well defined power law behavior over a minimum
of two decades in $\tau$. Clearly, for $H < 1.5~$T a unique scaling is observed, confirming that
$\beta_E = 0.35 \pm 0.02$ is field independent in the elliptical phase. As the value of the field
approaches $H_m$, the value of the critical exponent $\beta$ suddenly decreases and then gradually
increases at higher fields. The inset in Fig.~\ref{QC66} illustrates in detail how the value of
$\beta$ evolves as a function of magnetic field for CsNiCl$_3$.
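The power-law analysis described above amounts to a linear least-squares fit in log-log space. A minimal sketch (the function name and the synthetic data are ours, for illustration only):

```python
import numpy as np

def fit_critical_exponent(tau, s):
    """Fit the order parameter S ~ A * tau**beta by a linear least-squares
    fit in log-log space; the slope is the effective critical exponent."""
    beta, log_a = np.polyfit(np.log10(tau), np.log10(s), 1)
    return beta, log_a

# Synthetic check: data generated with beta = 0.35 over two decades in tau,
# mimicking the elliptical-phase exponent quoted in the text.
tau = np.logspace(-3.0, -1.0, 50)   # reduced temperature 1 - T/T_N2
s = 1.7 * tau**0.35                 # order parameter estimated from dC66/C66
beta, _ = fit_critical_exponent(tau, s)
```

In practice the fit window in $\tau$ must be chosen with care, since the effective exponent drifts once $\tau$ leaves the critical region.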
\begin{figure}[tb]
\includegraphics{QC66}
\caption{\label{QC66} Order parameter as a function of the reduced temperature $\tau = 1-
T/T_{N_2}$ calculated using (\ref{eq:C66TQ}) and the results presented in Fig.~\ref{C66}. The data
obtained at different fields are represented by symbols while lines represent fits at small
$\tau$ using a simple power law. The inset shows the field dependence of the critical exponent
$\beta$ obtained from the power law fits.}
\end{figure}
Very close to the multicritical point, $\beta$ reaches a minimum of $\beta = 0.25 \pm 0.02$ which
corresponds to the predicted value for $n=2$ chirality \cite{Kawamura87} but is also close to that
of tricriticality, $\beta = 1/4$. At higher fields, $\beta$ increases significantly.
\begin{figure}[b]
\includegraphics{C44HvsT}
\caption{\label{C44} Relative variation of the elastic constant $C_{44}$
as a function of temperature. The broken and continuous lines represent results obtained below and
above $H_m = 2.3~$T, respectively.}
\end{figure}
As noted previously \cite{Delamotte04, Peles04}, variation in the value of effective critical
exponents as a function of an irrelevant parameter (in this case, $H$) may indicate a weak
fluctuation-induced first order transition.
Close inspection of the upper curves presented in Fig.~\ref{C66} shows an unexpected dip right at
the para-$120^\circ$ phase boundary. This very small anomaly persists at all fields above $H_m$ and
could be interpreted as the effect of a weakly first order transition. Motivated by these results,
additional measurements were made using transverse waves propagating along the $c$-axis and
polarized in the basal plane to obtain $C_{44}$. This series of results was obtained with an
exceptionally high resolution of 0.1~ppm. The data are presented in Fig.~\ref{C44} as a function of
temperature. At $H = 0$, two distinct features are noticeable and correspond to the onset of the
phase transitions at $T_{N_1}$ and $T_{N_2}$. As the field is increased, both anomalies merge at
$H_m = 2.3~$T. Above the multicritical point (see continuous lines), the observed temperature
dependence changes within the 120$^\circ$ phase as the field increases. Moreover, no power law
relationship, as predicted by the Landau model (\ref{eq:C44TQ}), could be identified. More
significantly, a step-like variation is noticeable at the critical temperature. Taken together,
these observations cannot be reconciled with a continuous phase transition.
\begin{figure}[b]
\includegraphics{QC44b}
\caption{\label{QC44} Thermal cycle analysis realized at different fields using the relative
variation of the elastic constant $C_{44}$. The data collected during the cooling process are all
represented in red color for clarity. Curve (a) corresponds to the zero field transition at
$T_{N_2}$. All other curves are associated with the transition to the 120$^\circ$ phase.}
\end{figure}
As a test of the first order character of the para-$120^\circ$ phase transition, the possibility of
thermal hysteresis was investigated. A collection of thermal cycles, realized using a
cooling/heating rate of 0.1~K/min, is presented in Fig.~\ref{QC44}. The results obtained for the
$120^\circ$ phase are compared to the data collected at $H = 0~$T. At zero field, where all
experimental evidence presented in this paper clearly indicates that the phase transition is
continuous, no significant hysteresis is observed. However, the transition to the $120^\circ$ spin
phase shows a difference between the data collected during the heating and cooling processes. These
differences are small but are systematically observed at all field values. The hysteresis is
maximum just below the critical temperature and persists over a temperature range of about 0.5~K. An
additional confirmation of first-order character is obtained from the field dependence of $C_{44}$.
The data collected at $T = 5.0~$K, presented in Fig.~\ref{C44H} as a function of $H^2$, clearly
show the quadratic field dependence of $C_{44}$ in the paramagnetic phase.
\begin{figure}[t]
\includegraphics{C445vsH}
\caption{\label{C44H} Relative variation of the elastic constant $C_{44}$
as a function of $H^2$ measured at $T=5~$K. The inset shows the relative variation of $C_{44}$
after subtracting the magnetoelastic contribution observed in the paramagnetic phase.}
\end{figure}
This field dependence is associated with magnetostriction effects previously observed in CsNiCl$_3$
\cite{Rayne84}. The field dependence of $C_{44}$ near the 120$^\circ$ phase transition shown in the
inset of Fig.~\ref{C44H} has been isolated by subtracting this magnetostriction. It is clear that
the step-like variation of $C_{44}$ at the $120^\circ$ phase boundary, along with the observed
hysteresis, cannot be accounted for in the context of a continuous phase transition.
The data and analysis presented in this work serve to fill a long-standing gap in the experimental
support for the growing body of theoretical and numerical work indicating that the phase
transition in the prototypical geometrically frustrated system, the stacked triangular
antiferromagnet, is fluctuation-induced first order in nature. CsNiCl$_3$ provides a convenient
field-temperature phase diagram for this purpose with a phase boundary line to the 120$^\circ$ spin
structure which may be sampled at various points. Sound velocity measurements have proven to be an
accurate tool for obtaining high resolution data on the various order parameters. The possibility
to extract effective (field dependent) critical exponents, the weak nature of the discontinuities
and small hysteresis, taken together, provide evidence for the weakness of the first-order
character of this transition. It is thus not surprising that conventional renormalization-group
techniques and Monte-Carlo simulations had previously been supportive of the notion of new chiral
universality classes associated with such frustrated systems.
We thank B. Southern for enlightening comments. This work was supported by grants from the Natural
Sciences and Engineering Research Council of Canada (NSERC) and the Canada Foundation for
Innovation (CFI).
astro-ph/0606699
\section{Introduction}
The observed Universe contains order on all scales we can probe.
It is generally believed that the largest structures arose via the
amplification of primordial (quantum) fluctuations during a period of
accelerated expansion, processed by the subsequent $13\,$Gyr of
gravitational instability.
The pattern of clustering of objects on large scales is a calculable
prediction of cosmological models and thus comprises one of the
fundamental cosmological statistics.
Within modern theories of structure formation, the clustering of rare,
massive dark matter halos is enhanced relative to that of the general mass
distribution \citep{Kai84,Efs88,ColKai89,MoWhi96,SheTor99},
an effect known as bias. The more massive the halo, the larger the bias.
As a result, the mass of halos hosting a given population of objects is
sometimes inferred by measuring their degree of clustering -- allowing a
statistical route to the notoriously difficult problem of measuring masses
of cosmological objects (e.g.~\citet{CooShe}).
Since halos of a given mass can differ in their formation history and
large-scale environment\footnote{
The large-scale environment of a halo refers to the density, smoothed
on some scale larger than the halos, e.g.~$5-10\,h^{-1}$Mpc.},
a natural question arises: do these details affect halo clustering?
In currently viable scenarios for structure formation, objects grow
either by accretion of smaller units or by major mergers with
comparable-sized objects.
The formation history of a halo can thus be characterized by its mass
accumulation over time, such as when it reached half of its mass,
had a mass jump in a short time, or last underwent a (major) merger.
Theoretically, the simplest descriptions of halo growth and clustering
\citep{bondetal,bower,lc93,lc94,kit-sut,kit-sut2}
do not give a dependence upon halo formation history
\citep{Whi93,SheTor04b,FurKam05,Har06}.
To reprise these arguments: pick a random point in the universe and imagine
filtering the density field around it on a sequence of successively smaller
scales. The enclosed density executes a random walk, which in the usual
prescription is taken to be uncorrelated from scale to scale. The
formation of a halo of a given mass corresponds to the path passing a certain
critical value of the density, $\delta_c$, at a given scale.
The bias of the halo is set by the `past' of its random walk and its history by the `future' of
the walk.
All halos of the same mass at that time correspond to random walks crossing
the same point, and thus have the same bias.
(Note that the derivation, using sharp $k$-space filtering, does not match
the way the prescription is usually applied, and this has been suggested by some
of the above authors as a way to obtain history dependence. Introducing an
environmental dependence through e.g.~elliptical collapse will also give a
history dependence.)
The lack of dependence on halo history in the simplest descriptions does
not close the discussion theoretically or otherwise.
While these analytic methods work much better than might be expected given their
starting assumptions, the Press-Schechter based approaches still suffer many
known difficulties (e.g.~\citet{ShePit97}, \citet{BenKamHas05}).
Other analytical ways of estimating the clustering of mergers have
been explored. For example, \citet{FurKam05} defined a merger kernel
(not calculable from first principles) and assumed that all peaks within
a certain volume eventually merged. Such an ansatz implies that recently
merged halos are more clustered for $M>M_*$ and less clustered for $M<M_*$,
with some dependence upon predecessor mass ratios and redshifts.
(Here $M_*(z)$ is the mass at which $\sigma(M)$, the variance of the linear
power spectrum smoothed on scale $M$, equals the threshold for linear density
collapse $\delta_c(z)$, see e.g.~\citet{Peacock}.)
Using close pairs as a proxy for recently merged halos, they found a similar
enhancement of clustering for $M>M_*$ and reduction for $M<M_*$ in several
(analytic) clustering models.
To foreshadow our results: the signals we see are consistent
with this trend.
Simple analytic models cannot be expected to capture all of the complexities
of halo formation in hierarchical models, and full numerical simulations are
required to validate and calibrate the fits.
Fortunately, numerical simulations are now able to produce samples with
sufficient statistics to test for the dependence of clustering on formation
history.
Early work by \citet{LemKau99} showed that the properties of dark matter
halos, in particular formation times, are little affected by their
large-scale environment if the entire population of objects is averaged over.
They interpreted this as evidence against formation history and environment
affecting clustering. As emphasized by \citet{SheTor04b}, however, this
finding --- plus the well known fact that the typical mass of halos depends
on local density --- implies that the clustering of halos of the same mass
must also depend on formation time.
Using a marked correlation function, \citet{SheTor04b} found that close pairs
tend to have earlier formation times than more distant pairs, work which was
extended and confirmed by \citet{Har06}.
\citet{GaoSprWhi05} found that later forming, low-mass halos are less
clustered than typical halos of the same mass at the present; a possible
explanation of this result was given by \citet{WanMoJin06}.
\citet{Wec06} found a similar dependence upon halo formation time, showing
that the trend reversed for more massive halos and that the clustering
depended on halo concentration. However, in order to probe to higher masses
these authors assumed that the mass dependence was purely a function of the
mass in units of the non-linear mass, then used earlier outputs to probe to
higher values of this ratio.
It should be noted that scaling quantities by $M/M_*$
gives a direct equality only if clustering is self-similar. Since $P(k)$ is not
a power-law and $\Omega_{\rm mat}\ne 1$, a check of this approximation,
as is done here, is crucial.
These formation time dependencies are based on (usually smooth) fits to the
accretion history of the halo.
However, halo assembly histories are often punctuated by large jumps from major
mergers that have dramatic effects on the halos.
Major mergers can be associated with a wide variety of phenomena, ranging from
quasar activity \citep{KauHae00} and starbursts in galaxies \citep{MihHer96}
to radio halos and relics in galaxy clusters (see e.g.~\citet{Sar04} for
phenomena associated with galaxy cluster mergers). Major mergers of galaxy
clusters are the most energetic events in the universe.
It follows that major merger phenomena can either provide signals of interest or
can cause noise in selection functions that depend upon a merger-affected
observable. If recently merged halos cluster differently from the general
population (merger bias), and this is unaccounted for, conclusions drawn about
halos on the basis of their clustering would be suspect. The question of
whether such merger bias exists remains unresolved, as previous work to
identify a merger bias through N-body simulations and analytic methods yields
mixed results \citep{Got02,Per03,ScaTha03,FurKam05}.
In this paper we consider the clustering of the most massive dark matter halos,
measured from two large volume $(1.1\,h^{-1}\mathrm{Gpc})^3$ N-body simulations
described in \S\ref{sec:sims}. We concentrate on massive halos, as most
previous simulations did not have the volume to effectively probe this end of
the mass function, and furthermore, for the largest mass halos the correspondence
between theory and observation is particularly clean.
We first examine the long-term growth history of halos, calculating the
``assembly bias'' as a function of growth history in \S\ref{sec:history},
extending previous results mentioned above to higher masses.
We then look to short-term history effects (i.e. events), measuring the
``merger bias'' as a function of recent major merger activity or large mass
gain in \S\ref{sec:merger}, where we find a weak, but statistically significant,
signal for both cases. We conclude in \S\ref{sec:conclusions}.
\section{Simulations} \label{sec:sims}
To investigate the effects of formation history on clustering statistics
we use two high resolution N-body simulations performed with independent
codes: the HOT code \citep{HOT} and the TreePM code \citep{TreePM}.
Both simulations evolved randomly generated, Gaussian initial conditions
for $1024^3$ particles of mass $10^{11}\,h^{-1}M_\odot$ from $z=34$ to
the present, using the same $\Lambda$CDM cosmology
($\Omega_M=0.3=1-\Omega_\Lambda$, $\Omega_B=0.046$, $h=0.7$, $n=1$ and
$\sigma_8=0.9$) in a periodic, cubical box of side $1.1\,h^{-1}$Gpc.
For the HOT simulation a Plummer law with softening
$35\,h^{-1}$kpc (comoving) was used. The TreePM code used a spline
softened force with the same Plummer equivalent softening.
The TreePM data were dumped in steps of light crossings of $136\,h^{-1}$Mpc
(comoving), producing 30 outputs from $z\approx 3$ to $z=0$.
The HOT data were dumped from $z\approx 1$ (lookback time of
$5.3\,h^{-1}$Gyr) to $z=0$ in intervals of $0.7\,h^{-1}$Gyr,
with the last interval at $z=0$ reduced to $0.4\,h^{-1}$Gyr.
The outputs before $z\approx 1$ had so few high mass halos that the statistics
were not useful for the merger event calculations.
For comparisons of how using light crossings vs. fixed time steps in Gyrs
changes merger ratios, see \citet{CohWhi05}.
The TreePM simulations were used for the assembly histories and the HOT
simulations for the merger bias calculations -- though the results from the
two simulations were consistent so either could have been used in principle.
For each output we generate two catalogs of halos via the Friends-of-Friends
(FoF) algorithm \citep{DEFW}, using linking lengths $b=0.2$ and $0.15$
in units of the mean interparticle spacing. These groups correspond roughly
to all particles above a density threshold $3/(2\pi b^3)$, thus both linking
lengths enclose primarily virialized material.
Henceforth halo masses are quoted as the sum of the particle masses within FoF
halos, thus a given halo's $b=0.15$ mass will be smaller than its $b=0.2$ mass
(see \citet{Whi01} for more discussion).
We consider halos with mass $M>5\times 10^{13}\,h^{-1}M_{\odot}$
(more than 500 particles);
at $z=0$ there are approximately $96,000$ such halos in each box for the
$b=0.15$ catalog and $120,000$ for the $b=0.2$ catalog.
The mass functions and merger statistics from the two
simulations are consistent within Poisson scatter.
Given a child-parent relationship between halos at neighboring output times,
construction of the merger tree is straightforward since we are tracking
massive halos rather than e.g. subhalos.
Progenitors are defined as those halos at an earlier time which
contributed at least half of their mass to a later (child) halo.
Of the approximately $10^5$ halos at $z=0$ we find only 14 for
which our simple method fails. In these cases a ``fly-by'' collision of two
halos gives rise to a halo at $z=0$ with no apparent progenitors.
Excluding these halos does not change our results.
For the TreePM run, we use all 30 outputs to construct the merger tree,
which stored all of the halo information (mass, velocity dispersion, position,
etc.) for each halo at each output. Each node of the tree pointed to a linked
list of its progenitors at the earlier time, enabling a traversal of the tree
to find mass accretion histories and mergers.
The HOT run produced outputs for each time interval of child and
parent halos.
\section{Measuring clustering}
A basic measure of clustering is the two-point function, which in
configuration space is the correlation function, $\xi(r)$.
To compute $\xi(r)$ we use the method of \citet{LanSza92}:
\begin{equation}
\xi(r) = \frac{\langle DD\rangle-2\langle DR\rangle+\langle RR\rangle}
{\langle RR\rangle} \; ,
\end{equation}
where $D$ and $R$ are data and random catalogs, respectively, and the angle
brackets refer to counts within a shell of small width having radius $r$.
In computing $\langle DR\rangle$ and $\langle RR\rangle$ we use
$10\times$ as many random as data points.
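The estimator can be sketched with brute-force pair counts; this is an $O(N^2)$ illustration with our own function names, and it ignores the periodic boundaries of the simulation box:

```python
import numpy as np

def pair_counts(a, b, edges, same=False):
    """Histogram of pair separations between point sets a and b, shape (N, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if same:
        # count each unordered pair once when correlating a catalog with itself
        d = d[np.triu_indices(len(a), k=1)]
    counts, _ = np.histogram(d, bins=edges)
    return counts.astype(float)

def landy_szalay(data, rand, edges):
    """Landy-Szalay estimator xi = (DD - 2 DR + RR) / RR with normalized counts."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges, same=True) / (nd * (nd - 1) / 2)
    dr = pair_counts(data, rand, edges) / (nd * nr)
    rr = pair_counts(rand, rand, edges, same=True) / (nr * (nr - 1) / 2)
    return (dd - 2.0 * dr + rr) / rr
```

For catalogs of the size used here one would replace the brute-force counts with a tree or grid method, but the normalizations and the combination of counts are the same.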
To compute errors, we divide the simulation volume into 8 octants and compute
$\xi(r)$ within each octant. Since we probe scales much smaller than the
octants, we treat them as uncorrelated volumes, and we quote the mean $\xi(r)$
and error on the mean under this assumption.
These errors tend to be $\sim1.4$--$2$ times larger than the more approximate
$\sqrt{N_{\rm pair}}$ error estimates used in some previous work.
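The octant error estimate described above can be sketched as follows (a minimal illustration; the function name is ours, and `statistic` stands in for the $\xi(r)$ measurement):

```python
import numpy as np

def octant_mean_and_error(positions, box, statistic):
    """Evaluate a clustering statistic in each octant of a cubical box and
    return the mean and the error on the mean, treating the 8 octants as
    uncorrelated volumes (valid when the scales probed are much smaller
    than half the box side)."""
    half = box / 2.0
    # integer octant label 0..7 from which half of the box each coordinate falls in
    labels = ((positions[:, 0] >= half) * 4
              + (positions[:, 1] >= half) * 2
              + (positions[:, 2] >= half))
    vals = np.array([statistic(positions[labels == k]) for k in range(8)])
    return vals.mean(), vals.std(ddof=1) / np.sqrt(len(vals))
```

The octant scatter automatically includes sample variance between sub-volumes, which is why it exceeds the simple $\sqrt{N_{\rm pair}}$ estimate.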
Our goal is to test the dependence of the clustering of objects
associated with some history dependent property.
A relevant quantity for comparison is the (mass dependent) bias of the halos
relative to the underlying dark matter, which we define as:
\begin{equation}
\xi(r) = b^2 \xi_{\rm dm}(r).
\end{equation}
Analytically, the large-scale bias is related to a derivative of the halo mass
function \citep{Efs88,ColKai89,MoWhi96,SheTor99}.
For the Sheth-Tormen form of the mass function one finds
\begin{equation}
b_{ST}(M_{180 \rho_b}) = 1 + \frac{\nu'^2-1}{\delta_c}
+ \frac{0.6}{\delta_c\left(1+\nu'^{0.6}\right)},
\end{equation}
where $\nu' =0.841\delta_c/\sigma(M_{180 \rho_b})$ and $\delta_c = 1.686$.
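The fitting formula above translates directly into code; a sketch (function name ours, taking $\sigma(M)$ as input rather than computing it from a power spectrum):

```python
import numpy as np

def bias_st(sigma, delta_c=1.686):
    """Sheth-Tormen large-scale halo bias,
    b = 1 + (nu'^2 - 1)/delta_c + 0.6 / (delta_c * (1 + nu'^0.6)),
    with nu' = 0.841 * delta_c / sigma(M)."""
    nu_p = 0.841 * delta_c / sigma
    return (1.0
            + (nu_p**2 - 1.0) / delta_c
            + 0.6 / (delta_c * (1.0 + nu_p**0.6)))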
This has been improved upon using the Hubble volume simulations
\citep{Col00,Ham01} --- see also \citet{SelWar04}
for discussion of the bias defined through $P(k)$ on similar scales.
\citet{Ham01} used FoF halos with $b=0.164$ and found
\begin{eqnarray}
b(M, R, z) &=& b_{ST}(M_{108},z) \nonumber \\
&\times& \left[1.0 + b_{ST}(M_{108}, z)\sigma_R(R, z)\right]^{0.15} .
\end{eqnarray}
The subscripts on the mass $M$ indicate which overdensity threshold is being
used to define the halo mass.
We took $M=0.93\,M_{108}$ and $M=1.07\,M_{180b}$, calculating the
conversion using the profile of \citet{NFW} assuming a concentration $c=5$.
The change in conversion factor was less than a percent for the range of
concentrations of interest.
See \citet{Whi01} for more details, discussion and definitions.
\begin{figure}
\begin{center}
\resizebox{3.1in}{!}{\includegraphics{f1.eps}}
\end{center}
\caption{The bias $b(r)=\sqrt{\xi(r)/\xi_{\rm dm}(r)}$ at
$r=18\,h^{-1}$Mpc for two different binnings in mass. The horizontal
error bars on each point show the range of masses used. The bias was
approximately scale-invariant in this mass regime from $15-30\,h^{-1}$Mpc.
We show 2 fits to $b(M)$ proposed in the literature: that of
\protect\citet{Ham01} (dashed) and
\protect\citet{SheTor99} (dotted),
each plotted for both mass binnings.}
\label{fig:hbias}
\end{figure}
We show the bias $b=\sqrt{\xi(r)/\xi_{\rm dm}(r)}$ at $r=18\,h^{-1}$Mpc
as a function of mass in Fig.~\ref{fig:hbias}.
The bias for halos with $M > 5\times 10^{13}\,h^{-1} M_\odot$ changed
less than 5\% on scales $r\ge 15\,h^{-1}$Mpc.
We include the two bias fits given above for $r=18\,h^{-1}$Mpc at $z=0$.
The \citet{Ham01} fit was derived from a larger simulation volume;
Fig.~\ref{fig:hbias} is included to illustrate the mass dependence of
the global bias, to provide a comparison context for the sizes of the
additional biases of concern in this paper.
We now turn to estimates of bias effects due to the history of the halos.
\begin{figure*}
\begin{center}
\resizebox{6.2in}{!}{\includegraphics{f2.eps}}
\end{center}
\caption{Correlation function of the lowest (filled triangles) and
highest (open squares) quartiles of (reduced) concentration,
$c$ (left),
half mass scale factor, $a_{1/2}$ (center) and formation scale factor,
$a_f$ (right). The solid line is $\xi(r)$ for the full halo
sample. Top panels:
$10^{14}\,h^{-1}M_\odot\leq M\leq 3\times 10^{14}\,h^{-1}M_\odot$
(31551 halos). Bottom panels: $5\times 10^{13}\,h^{-1}M_\odot\leq
M\leq 8\times10^{13}\,h^{-1}M_\odot$ (43638 halos).
A clear signal is seen for concentration and formation scale factor
for the more massive halos.}
\label{fig:histcorr}
\end{figure*}
\section{Assembly bias} \label{sec:history}
We begin by considering parameterizations of the formation history of halos
which emphasize the global properties, i.e.~those related to the halo mass
growth over a long period of time.
We consider three parameterizations of halo histories which have previously
been used with lower mass halos: $c$, $a_{1/2}$, and $a_{f}$
\citep{Wec06,GaoSprWhi05,SheTor04b}.
Using these parameterizations
\citet{SheTor04b}, \citet{Har06}, \citet{GaoSprWhi05},
\citet{Wec06}, and \citet{CroGaoWhi06}
have shown that the clustering of halos of fixed mass is correlated with
``formation time'', a result which has come to be termed assembly bias.
The effect is strongest for smaller halos, and this has been the focus of
earlier work.
For the extremely massive halos that we consider, halo identification
is simpler, as none of our halos are subhalos.
However, since massive halos are rarer, the statistics are poor even for a
simulation volume as large as ours.
The concentration, $c$, is a parameter in an NFW fit to a halo density
profile \citep{NFW}\footnote{
We follow NFW and take $c=r_{200}/r_s$; note that \citet{Wec06} use
$c_{\rm vir}=r_{\rm vir}/r_s$ where $r_{\rm vir}\simeq r_{100}$ for our
cosmology. At $z=0$, $c_{\rm vir}\simeq 1.25\,c$.}.
We perform a least squares fit of the NFW functional form to the radial mass
distribution of all the particles in the FoF group, allowing $c$ and $M_{200}$
to vary simultaneously.
This follows the procedure of \citet{Bul01}, allowing ready comparison.
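A simplified version of such a fit, reduced to one parameter, adjusts the NFW cumulative mass fraction to the measured particle radii (a sketch under our own conventions, with $r_{200}$ taken as known and the fitted mass normalization omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

def nfw_mass_fraction(r, c, r200=1.0):
    """Fraction of M200 enclosed within r for an NFW halo of concentration c,
    M(<r)/M200 = m(c r / r200) / m(c) with m(x) = ln(1+x) - x/(1+x)."""
    m = lambda x: np.log(1.0 + x) - x / (1.0 + x)
    return m(c * r / r200) / m(c)

def fit_concentration(radii, r200=1.0):
    """Least-squares fit of c to the empirical cumulative mass fraction."""
    r = np.sort(radii)
    frac = np.arange(1, len(r) + 1) / len(r)   # empirical M(<r)/M200
    (c,), _ = curve_fit(lambda x, c: nfw_mass_fraction(x, c, r200),
                        r, frac, p0=[5.0])
    return c
```

The actual procedure fits $c$ and $M_{200}$ simultaneously; fixing $r_{200}$ here just keeps the sketch to one free parameter.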
The concentration is expected to correlate with the time by which most of the
halo formed (earlier forming halos are more concentrated, see
\citet{NFW96,Wec02,Gao04}).
There is also a weak dependence of concentration on halo mass.
We have tried to minimize this effect by dividing out the average concentration
for each mass (calculated from the data) to get a ``reduced'' concentration,
which is essentially uncorrelated with mass (correlation is less than 0.2\%).
The second parameter encapsulating the formation history is $a_{1/2}$,
the scale factor at which a halo accumulates half of its final mass.
We find $a_{1/2}$ by linearly interpolating between the two bracketing times.
Analytic properties of this definition have been studied in \citet{SheTor04b},
and $a_{1/2}$ is often used as a proxy for formation epoch.
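The interpolation step can be sketched as follows (illustrative only; real accretion histories are not strictly monotonic, which this minimal version assumes):

```python
import numpy as np

def half_mass_scale_factor(a_steps, masses):
    """Scale factor at which the halo first reaches half of its final mass,
    linearly interpolated between the two bracketing outputs.

    a_steps : output scale factors, increasing
    masses  : main-progenitor mass at each output (assumed monotonic here)
    """
    m_half = 0.5 * masses[-1]
    i = np.searchsorted(masses, m_half)
    if i == 0:
        return a_steps[0]
    # linear interpolation in (mass, a) between outputs i-1 and i
    f = (m_half - masses[i - 1]) / (masses[i] - masses[i - 1])
    return a_steps[i - 1] + f * (a_steps[i] - a_steps[i - 1])
```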
The third parameter, $a_f$, the formation scale factor, is also a formation
time proxy. It is defined through a fit to the halo mass accretion history
\citep{Wec02}\footnote{\citet{Mil06} present an analytic justification for
this form based on extended Press-Schechter theory.}:
\begin{equation}
M(z)=M_0 \exp\left[ -2a_f z\right],
\end{equation}
where $M_0$ is the mass of the halo at $z=0$.
We calculate this from the history by doing a least squares fit
of $\ln(M_i/M_0)$ against $z_i$ for all the $z_i$ steps.
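Since $\ln(M_i/M_0)$ vanishes at $z=0$ by construction, the fit reduces to a through-origin least squares for the slope $-2a_f$; a sketch (function name ours):

```python
import numpy as np

def formation_scale_factor(z, mass):
    """Fit M(z) = M0 * exp(-2 a_f z) by least squares on y = ln(M_i/M0)
    against z_i, with the z=0 output fixing M0.

    z    : output redshifts, with z[0] == 0 assumed for this sketch
    mass : main-progenitor mass at each redshift
    Returns a_f; the formation redshift is z_f = 1/a_f - 1.
    """
    y = np.log(mass / mass[0])
    # slope of the through-origin least-squares line y = -2 a_f z
    slope = np.sum(z * y) / np.sum(z * z)
    return -0.5 * slope
```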
Although this form does not fit the mass accretion history of massive halos
particularly well due to their frequent mergers, the fit is well defined
and, as will be shown below, $a_f$ nonetheless appears to be correlated
with clustering.
The correlations\footnote{Defined as
$(\langle a b \rangle -\langle a\rangle\langle b\rangle)/
\sqrt{\langle(a-\langle a\rangle)^2\rangle
\langle(b-\langle b\rangle)^2\rangle}$,
see e.g.~\citet{Lup93}.}
for many of the above parameters were presented in \citet{CohWhi05}.
Some of these correlations have been compared in different combinations in
\citet{Wec02}, \citet{Zha03a}, \citet{Zha03b},
\citet{Wec06}, and \citet{CroGaoWhi06}.
Except for \citet{Zha03a,Zha03b}, these were for galaxy scale halos rather
than galaxy cluster scale halos.
The formation histories for low mass halos tend to be smoother and better
fit to the form of \citet{Wec02}, since they undergo fewer mergers
than high mass halos at late times.
Wechsler and Zhao give a formula for the concentration in terms of the formation time of
\citet{Wec02}; our correlation coefficient characterizes the scatter around any such relation.
For the current sample the strongest correlation ($0.69$) is between the
formation redshift, $z_f = 1/a_f -1$, and the half-mass redshift,
$z_{1/2}=1/a_{1/2}-1$, consistent with the 0.70 found by \citet{CohWhi05}
with a sample about 1/7 the size.
The formation redshift, $z_f$, and reduced concentration have a correlation
of $0.53$.
The full concentration and $z_{1/2}~(z_f)$ have a correlation of $0.56~(0.54)$.
These correlations increase as the lower mass limit is decreased from
$10^{14}\,h^{-1}M_\odot$ to $5 \times 10^{13}\,h^{-1}M_\odot$.
To highlight any effects of assembly bias we take the highest and lowest
quartiles of the distribution of each of these three parameterization values and
compare the resulting $\xi(r)$ to that of the full sample
(similar to \citet{Wec06}).
We show examples for
$10^{14}\,h^{-1}M_\odot<M<3\times 10^{14}\,h^{-1}M_\odot$ and
$5\times 10^{13}\,h^{-1}M_\odot<M<8\times 10^{13}\,h^{-1}M_\odot$
in Fig.~\ref{fig:histcorr}.
For the higher mass halos we see a strong dependence of clustering on
concentration.
We see a similar, but noticeably smaller, dependence on $a_f$, indicating that
more recently formed objects cluster more strongly.
As all of the objects we consider have $M>M_*$, our results are in line with the
expectation of \citet{Wec06} and the theoretical model of
\citet{FurKam05}.
Specifically, this confirms the result found by \citet{Wec06} at $z=0$,
without needing to make the approximation that $b(c,M,z)=b(c,M/M_*)$.
The ratio of the correlation function of their top $c$ quartile to that of the total
sample for halos of $\sim 10\,M_*$ was $\sim 1.25$.
This is larger than our ratio, which does not reach 1.2 for any of the radii
considered in Fig.~\ref{fig:histcorr}, though the difference is well within our (and their) errors.
This is mirrored for the lowest $c$ quartile where our effect is similarly
reduced but within the errors.
We are using reduced concentration, while they divide each halo's concentration
by the average concentration in its mass bin, $\tilde{c}_{\rm vir}$.
For the lower mass sample a much weaker trend is seen (e.g.~the ratios for
the quartiles selected on concentration barely reach 10\%), agreeing
with the expectation that the signal decreases as $M\to M_*$.
At fixed mass, the trend of $b$ with $c$ is consistent with the fit of
\citet{Wec06}, but the trend is so weak relative to the noise
that the result is of marginal significance.
\citet{GaoSprWhi05} and \citet{Har06} found bias for $M>M_*$ based
on $z_{1/2}$, where both the lowest and highest quartiles of $z_{1/2}$ tended to
be more clustered than the full sample.
We see a hint of this as well, but the fluctuations are large.
\citet{CroGaoWhi06} also found more dependence of clustering
on $z_{1/2}$ (their formation time) than on concentration,
once luminosity dependent bias was taken out. Note that their luminosity
dependence might include some of the history measured by concentration or
$z_{1/2}$ and their focus was on galaxies populating the halos rather than
the halos themselves.
Note also that even though $z_f$ and $z_{1/2}$ are correlated, the correlation
is not strong enough that bias in one implies bias in the other.
The overlap of the upper and lower quartiles for these quantities for $M>
10^{14} \, h^{-1}M_\odot$ is 62\% and 54\%, respectively.
As the rest of the clusters differ, the overall biases can be quite different,
as seen in Fig.~\ref{fig:histcorr}.
Another formation time related quantity, the redshift of last mass jump by
20\% or more in a time step corresponding to the light crossing
time of $136\,h^{-1}$Mpc comoving, had correlations with $z_{1/2}~(0.70)$,
$z_f~(0.61)$, and $c~(0.40)$.
We found a small sign of bias in the correlation functions of its highest and
lowest quartiles as well, leading us to expect a merger bias signal, as will be
examined in \S\ref{sec:merger}.
In summary, we confirm and extend previous results to lower redshift
and higher mass for concentration dependent bias.
We see a smaller signal for formation time bias, and we see very little (if any)
signal for bias based on when halos reach half of their mass.
Bias in concentration and half-mass redshift have been seen in previous
work for smaller masses at higher redshift; our results show a smaller bias,
but well within errors, at least for the concentration dependent bias.
\section{Merger Bias} \label{sec:merger}
In the previous section we demonstrated the dependence of $\xi(r)$ upon
halo formation history, characterized by an average property such as the
``formation time''. As halo assembly histories are punctuated
by large jumps from major mergers, we can also ask whether the clustering
of recently merged halos differs from that of the general population.
Although the concept of a major merger is intuitively easy to understand,
there is no standard definition in the literature of ``merger'' or
``major merger'' (these terms will be used interchangeably henceforth).
In simulations, where the progenitors can be tracked and masses measured,
major mergers can be defined in terms of masses of the progenitors and the
final halo. We define progenitors as those halos at an earlier time which
contributed at least half of their mass to a later halo at the time of interest.
The three most common ways to define a halo merger are: (1) the mass
ratio of the two largest progenitors, $M_2/M_1<1$; (2) the same ratio, but
using the contributing mass of the two most mass-contributing progenitors;
and (3) $M_f/M_i$, the ratio of the current halo mass to the total mass of its
largest progenitor at an earlier time.
We also consider (4) $M_{f}/M_1$, the ratio of the current halo
mass to the largest contributed mass.
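These four diagnostics can be sketched as follows; this is an illustrative helper, not code from the analysis pipeline, and the \texttt{(total\_mass, contributed\_mass)} progenitor layout is a hypothetical convention:

```python
# Sketch: the four merger-ratio diagnostics described in the text.
# Each progenitor is a (total_mass, contributed_mass) pair; this data
# layout is a hypothetical convention, not the paper's actual format.

def merger_ratios(final_mass, progenitors):
    by_total = sorted(progenitors, key=lambda p: p[0], reverse=True)
    by_contrib = sorted(progenitors, key=lambda p: p[1], reverse=True)
    # (1) M2/M1 using total progenitor masses
    r1 = by_total[1][0] / by_total[0][0] if len(by_total) > 1 else 0.0
    # (2) M2/M1 using contributed masses
    r2 = by_contrib[1][1] / by_contrib[0][1] if len(by_contrib) > 1 else 0.0
    # (3) Mf/Mi: final mass over total mass of the largest progenitor
    r3 = final_mass / by_total[0][0]
    # (4) Mf/M1: final mass over the largest contributed mass
    r4 = final_mass / by_contrib[0][1]
    return r1, r2, r3, r4
```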
In our simulations the merger fraction per $0.7\,h^{-1}$Gyr with
$M_2/M_1>0.3$ increases by more than a factor of 3 from $z=0$ to 1.
\begin{figure}
\begin{center}
\resizebox{3.1in}{!}{\includegraphics{f3.eps}}
\end{center}
\caption{The cumulative distribution of $(M_1+M_2)/M_f$ for different
subsamples of our $b=0.15$ halos at $z=0$. Looking back $0.4\,h^{-1}$Gyr
the subsamples are defined by $M_f/M_i>1.5$, $1.2$ or $M_2/M_1>0.3$, $0.1$.
The lines are in the same order, top to bottom, with the lowest line being
the full sample.
At left $M_1$, $M_2$ are the full masses of the two largest progenitors,
at right $M_1$, $M_2$ refer to the contributing mass.}
\label{fig:mergedefs}
\end{figure}
One way to quantify how well the two-body criteria ($M_2,M_1$ and $M_f,M_i$)
describe the halo growth is to consider the ratio $(M_1+M_2)/M_f$.
This ratio is 1 for a halo formed only from its two largest
predecessors: a two body merger with no other accretion.
It is lowered by accretion or multi-body mergers.
Fig.~\ref{fig:mergedefs} shows the cumulative distribution of
$(M_1+M_2)/M_f$ for halos with $M> 10^{14} h^{-1} M_\odot$ satisfying a
variety of merger criteria.
We considered both cases where $M_1$ and $M_2$ are the full and contributing
progenitor masses.
As can be seen on the right, for all halos with $M>10^{14}\,h^{-1}M_\odot$ at
$z=0$, considering mass gains within the last $0.4\,h^{-1}$Gyr,
at least 5\% of the final halo mass is not from the two largest contributors.
As the merger criterion is hardened (i.e.~the merger is more ``major''), the
two largest progenitors contribute less and less of the final mass.
As can be seen on the left, the same amount of mass as found in the two largest
progenitors makes up the entire mass of the final halo in $\sim$25\% of the full
sample of halos.
Lengthening the time step or looking to higher redshift also increases the
fraction of halos getting their mass from halos other than the two largest
progenitors.
For simplicity, our subsequent analysis uses only the two-body criteria to
define mergers, so the accuracy of this assumption as examined above should be
kept in mind.
Previous work to identify a merger bias through N-body simulations and
analytic methods gives a mixed picture.
\citet{Got02} found a clustering bias for recently ($\Delta t = 0.5\,$Gyr)
merged objects with $M_f/M_i>1.25$ and $M\leq M_*$ at $z=0$.
These authors, however, did not try to match the mass distribution of the
comparison sample to that of the merged halos --- a problem since mergers
occur more often for more massive halos, and the bias is known to increase
with halo mass.
To isolate the effects due to merging, the comparison sample needs to have
the same mass distribution as the merged sample, and most subsequent work
has ensured this.
\citet{Per03} found no bias between the correlation functions of recently
merged ($\Delta t=10^8\,$yr, $M_2/M_1>0.3$) and general samples at $z=2$
for halos with $M\sim M_*$, $25M_*$, and $150M_*$.
\citet{ScaTha03} confirmed Percival et al.'s results for major mergers in a
$z=3$ sample for a smaller range of masses, but surprisingly found an
enhancement of clustering for halos with recent
($\Delta t=5\times 10^7$, $10^8\,$yr) large total mass gain, $M_f/M_i>1.20$.
That is, they find a bias when selecting halos with recent large mass gains,
but not when selecting on recently merged halos' parent masses.
Their signal was weak due to limited statistics.
That the previous literature is inconclusive is to be expected, given that the
effects of merger history upon clustering are small, and extremely difficult to
measure numerically. We expect the largest signal when $M\gg M_*$, but this
is where the number density of objects is smallest. In addition, the most
extreme mergers are the rarest, increasing the shot-noise in the measurement of
$\xi(r)$.
If we include more common events, the ``merged'' and ``comparison'' samples
become more similar, washing out the signal of interest.
At higher redshift, the merger rate increases, thus the merged and comparison
samples have more overlap unless the merger ratio is increased, leading to
worse statistics.
To try to overcome these statistical effects, we use our very large samples of
simulated halos to search for a merger, or temporal, bias.
To define a ``recent major merger'' requires both a choice of threshold for
one of the merger ratios and a choice of time interval.
As the halo crossing time is expected to be $\sim0.7\,h^{-1}$Gyr
(e.g.~\citealt{TKGK}; \citealt{GotKlyKra01}; \citealt{RowThoKay04}),
we expect that outputs at this separation or shorter are
small enough to catch recently merged halos while they are still ``unrelaxed''.
That is, a ``recent merger'' might be expected to correspond to a dynamically
disturbed halo.
We consider the four merger criteria mentioned above, as well
as a wide range of samples and merger definitions.
We used 9 different time intervals from $z\approx1$ to $z=0$ as given in
\S\ref{sec:sims}.
We considered 4 different thresholds for both $M_2/M_1$ and $M_f/M_i$
using both total and contributing mass of the progenitors:
$M_2/M_1>0.1,0.2,0.3,0.5$ and $M_f/M_i>1.2,1.3,1.5,2.0$.
Furthermore, we used two minimum masses,
$5\times 10^{13}\,h^{-1}M_\odot$,
$10^{14}\,h^{-1}M_\odot$, and two FoF linking lengths, $b=0.15,0.2$.
Combinations of each of these criteria resulted in over 700 different pairs of
``merged'' and ``comparison'' samples.
Although this data set is very rich, systematic trends are difficult to
identify. This is in part because increasing the merger ``strength''
simultaneously increases the noise (due to lower numbers of events).
Evidence of bias is very slight in the binned $\xi(r)$.
We used three methods to try to isolate the signal:
the marked correlation function, the integrated correlation function, and a
likelihood fit to a power law for the correlation function.
The clustering and merger criteria influence these three quantities in distinct
ways.
We now describe each method, and our corresponding results, in turn.
\begin{figure}
\begin{center}
\resizebox{3.1in}{!}{\includegraphics{f4.eps}}
\end{center}
\caption{The marked correlation function for halos in the range
$5$--$7\times 10^{13}\,h^{-1}M_\odot$ at $z=0$. The mark is the maximum
progenitor mass ratio, $M_2/M_1$, within the last $1~h^{-1}$Gyr.
The error bars come from dividing the sample into 8 octants.}
\label{fig:mark2}
\end{figure}
\subsection{Marked correlation function}
One problem with computing merger effects in terms of $\xi(r)$ is that, to
compute the difference in clustering of merged and random samples, one must
define a Boolean merger criterion --- a halo is either in the merged sample or
not. As halo histories are complex, a more nuanced measure of merger
clustering is useful, and this can be provided by using the marked correlation
function \citep{BeiKer00,BeiKerMec02,Got02,SheTor04b,Har06,SheConSki05}.
Each of $N$ objects gets assigned a mark, $m_i$, for $i=1,\dots,N$.
Denoting the separation of the pair $(i,j)$ by $r_{i,j}$, the marked
correlation function, $M(r)$, is defined by
\begin{equation}
M(r) = \sum_{ij} \frac{m_i m_j }{ n(r)\bar{m}^2}, \;
\label{eqn:marked}
\end{equation}
where the sum is over all pairs of objects $(i,j)$ with separation $r_{ij}=r$,
$n(r)$ is the number of pairs, and the mean mark, $\bar{m}$, is calculated
over all objects in the sample.
The marked correlation function ``divides'' out the clustering of the average
sample, and thus a difference in clustering is detected for $M(r)\ne 1$.
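As a concrete illustration of Eq.~(\ref{eqn:marked}), a naive $O(N^2)$ estimator can be written as below; the positions, marks, and bins are placeholders, and periodic boundaries are ignored:

```python
import numpy as np

# Sketch of the marked correlation function of Eq. (marked):
# M(r) = sum over pairs at separation r of m_i * m_j / (n(r) * mbar^2).
# Naive O(N^2) pair count; no periodic boundaries. Inputs are illustrative.

def marked_correlation(pos, marks, bins):
    mbar = marks.mean()
    # all pairwise separations, then keep unique pairs i < j
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    sep = d[iu]
    mm = (marks[:, None] * marks[None, :])[iu]
    M = np.empty(len(bins) - 1)
    for k in range(len(bins) - 1):
        sel = (sep >= bins[k]) & (sep < bins[k + 1])
        n = sel.sum()
        M[k] = mm[sel].sum() / (n * mbar**2) if n else np.nan
    return M
```

For uniform marks the estimator returns unity in every populated bin, consistent with $M(r)\ne 1$ signalling a mark-dependent clustering difference.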
\begin{figure*}
\begin{center}
\resizebox{6.2in}{!}{\includegraphics{f5.eps}}
\end{center}
\caption{The integrated correlation function, $\bar{\xi}(r)$, of recently
(within $0.4\,h^{-1} $Gyr) merged halos (triangles) and a comparison sample
of the same mass (squares) for
$M_2/M_1>0.1$ (left), $M_2/M_1>0.2$ (middle), and $M_f/M_i>1.2$ (right),
where $M_1,M_2$ are the full masses of the progenitor halos, for
halos in our $b=0.15$ catalog at $z=0$.
The number of halos that merged out of the 96319 total halos with
$M> 5\times10^{13}\,h^{-1}M_\odot$ is shown at upper right for each case.
For these three examples, the differences between the two samples are largest
at $30\,h^{-1}$Mpc, with significance $3.1\sigma$ (left), $2.7\sigma$ (middle),
and $2.5\sigma$ (right).}
\label{fig:xibars}
\end{figure*}
We consider five marks:
$M_2/M_1$ (for both total and contributed masses), $M_f/M_i$, $M_f/M_1$
(where $M_1$ is contributed mass) and $\frac{1}{2}(1 + M_2/M_1)$.
The last case had a smaller range of marks, and thus tests sensitivity to
extreme events. The results for this mark were similar to the
others, suggesting that we are not dominated by outliers.
Halos are chosen with mass in a narrow range,
$M_{\rm min}<M<\sqrt{2}M_{\rm min}$, to minimize the previously
mentioned bias due to merged halos being more massive.
The global bias changes less than a percent over the mass ranges we consider.
In our combined sample of several output times and mass ranges,
the largest signal comes from using as mark the maximum value
of $M_2/M_1$ within $\Delta t$ of the present, as shown in Fig.~\ref{fig:mark2}.
As $\Delta t$ was increased the signal went smoothly to zero.
We find similar behavior for $M_2/(M_1+M_2)$, which suggests that any bias
is contributed by the systems where $M_2\ll M_1$.
The signal is extremely weak for the other marks we considered.
By stacking the signal across multiple output times (see
\S\ref{sec:lik} for details) we are able to find small, but
statistically significant detections of excess power for the
marks $M_2/M_1$, $M_2/(M_1+M_2)$, and $M_f/M_i$, for halos near
$5\times10^{13}\,h^{-1}M_\odot$.
At higher masses there is weak evidence for an effect, but the large error bars
weaken the statistical significance.
As the marked correlation function approach finds only a weak signal,
typically an enhanced clustering of order 5--10\%,
we also explore two indicators which characterize the correlations by fewer
parameters: the integrated correlation function observed at a single scale,
and a likelihood fit to a power law correlation function.
\subsection{Integrated correlation function} \label{sec:xibar}
Given an object at some position, the integrated correlation function
\begin{equation}
\bar{\xi}(r) \equiv \frac{3}{r^3}\int_0^r x^2\xi(x)\, dx \;
\end{equation}
is the probability, above random, that a second object will be within a sphere
of radius $r$.
This quantity enhances any increased clustering at short distances, but gives
error bars that are even more highly correlated than those of the correlation
function, $\xi(r)$, itself. A typical result is shown in Fig.~\ref{fig:xibars},
where a significant signal can be seen.
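For a pure power law $\xi(x)=(x/r_0)^{-\gamma}$ the integral has the closed form $\bar{\xi}(r)=[3/(3-\gamma)](r/r_0)^{-\gamma}$, which provides a check on a numerical sketch (the $r_0$ and $\gamma$ values below are illustrative, not measurements from this paper):

```python
import numpy as np

# Sketch: xibar(r) = (3/r^3) * int_0^r x^2 xi(x) dx by trapezoidal
# quadrature. r0 and gamma are illustrative placeholder values.

def xibar(r, xi, npts=20001):
    x = np.linspace(1e-6, r, npts)   # avoid x = 0 exactly
    f = x**2 * xi(x)
    return 3.0 / r**3 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

r0, gamma = 10.0, 1.9
xi_pl = lambda x: (x / r0) ** (-gamma)
```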
As in the previous section, we find a weak signal regardless of merger
definition in our 700 plus samples.
Considering all the samples and all the separations $r$,
more than $2/3$ of the time the difference
$\bar{\xi}_{\rm merge}(r)-\bar{\xi}_{\rm all}(r)$ was positive.
This method separates the data into radial bins, requiring us to
estimate the clustering at many locations.
Since the errors on the binned correlation points are highly correlated, we
reduced $\bar{\xi}(r)$ to a single measurement by fixing a preferred scale.
The signal tends to be largest near $r=20\,h^{-1}$Mpc (though the signal is
largest at $r=30\,h^{-1}$Mpc in the examples in Fig.~\ref{fig:xibars}),
and so we compare $\bar{\xi}(r)$ of the merged and general samples at this
radius.
On average, when a $2\sigma$ signal is seen (5--15\% of the time, depending
on mass ratio, etc.), $\bar{\xi}(r)$ for the mergers is $\sim$20\% higher
than for the general sample, although in extreme cases the difference can be
as large as a factor of 2 or 3.
Due to the noisy statistics it was hard to identify any clear trends.
\subsection{Likelihood fit to $r_0$} \label{sec:lik}
\begin{figure*}
\begin{center}
\resizebox{6.2in}{!}{\includegraphics{f6.eps}}
\end{center}
\caption{(Left) The correlation function for a recently merged sample
(triangles) and a comparison sample (squares) of the same mass. The lines
indicate the best-fit $\gamma=1.9$ power law model, fit directly to the
cluster positions (not the binned $\xi(r)$).
(Right) The likelihood for the clustering amplitude, $r_0$, assuming a slope
$\gamma=1.9$ for the same samples at left.
The sample is at $z=0$, with a minimum mass of
$5\times 10^{13}\,h^{-1} M_\odot$ ($b=0.15$),
looking back $0.4\,h^{-1}$Gyr. Mergers are tagged as having
$M_2/M_1 \geq 0.2$, $M_1,M_2$ full progenitor masses.}
\label{fig:maxl}
\end{figure*}
The integrated correlation function sums all pairs within a spherical region.
As an alternate approach, we approximate the correlation function as a power law
over some range of radii, and we perform a likelihood fit to this power law
correlation function:
\begin{equation}
\xi(r) = \left(\frac{r}{r_0}\right)^{-\gamma}
\label{eqn:xipl}
\end{equation}
over the range of scales $(r_{\rm min},r_{\rm max})$.
This method incorporates information from many scales, similar to the
integrated correlation function. However, it is combined with the expectation
that the correlation function should be a power law, and it excises the central
region. Because the halo positions enter the likelihood fit directly, the
errors also differ from those of the integrated
correlation function.
Assuming that the pair counts form a Poisson sample with mean proportional to
$1+\xi(r)$, the likelihood $L$ is \citep{Cro97,Ste97}
\begin{eqnarray}
\ln L(r_0) &=&
-2\pi\,\bar{n}^2 \int_{r_{\rm min}}^{r_{\rm max}} r^2
\ \left[1 + \xi(r)\right]\,dr\nonumber \\
&+& \sum_{i<j} \ln\left(\bar{n}^2r_{i,j}^2\left[1+\xi(r_{i,j})\right]\right)
+ {\rm const}\mbox{ ,}
\label{eqn:like}
\end{eqnarray}
where the sum is over measured pairs $i,j$ with separation $r_{i,j}$,
$\bar{n}$ is the measured average density\footnote{
We find that marginalizing or maximizing over $\bar{n}$ as a free
parameter results in biased fits for several samples.},
and $\xi(r)$ is given by Eq.~(\ref{eqn:xipl}).
We fit over the range $5$--$25\,h^{-1}$Mpc, where the correlation function
exhibits an approximately power law behavior.
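A minimal sketch of evaluating Eq.~(\ref{eqn:like}) up to its $r_0$-independent constant is given below; the separations and $\bar{n}$ used in practice come from the halo catalogs, so the inputs here are illustrative placeholders:

```python
import numpy as np

# Sketch: log-likelihood of Eq. (like) for xi(r) = (r/r0)^(-gamma),
# up to an r0-independent constant. seps are pair separations, nbar the
# mean density; fit range and gamma follow the text. Inputs illustrative.

def ln_like(r0, seps, nbar, rmin=5.0, rmax=25.0, gamma=1.9):
    xi = lambda r: (r / r0) ** (-gamma)
    # volume term: -2*pi*nbar^2 * int r^2 [1 + xi(r)] dr (trapezoid rule)
    r = np.linspace(rmin, rmax, 4001)
    f = r**2 * (1.0 + xi(r))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
    # pair term: sum over measured pairs inside the fit range
    inside = seps[(seps > rmin) & (seps < rmax)]
    return (-2.0 * np.pi * nbar**2 * integral
            + np.sum(np.log(nbar**2 * inside**2 * (1.0 + xi(inside)))))
```

Maximizing \texttt{ln\_like} over a grid of $r_0$ values, with $\gamma$ held fixed as described below, then yields the clustering amplitude.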
For the comparison sample we multiply the likelihoods for several different
realizations, to reduce the noise, and then renormalize to unit area.
A typical result, where a significant signal can be seen, is shown in
Fig.~\ref{fig:maxl}, demonstrating both the power law fit and the maximum
likelihood distribution. For the fits, $r_0$ was usually $\sim 10\,h^{-1}$Mpc,
within the range where the power law fit was being applied.
Across all of our samples, we find $\gamma\simeq 1.9\pm 0.1$.
To allow us to compare different samples more easily, we reduce the number
of free parameters to one by holding $\gamma\equiv 1.9$.
A typical example, demonstrating the ratio of the power law fit correlation
functions of the merged and general sample, is shown in
Fig.~\ref{fig:biasevol} as a function of lookback time/redshift.
Since we fix $\gamma=1.9$ for both the merged and general sample,
the ratio $\xi_{\rm merge}/\xi_{\rm all}$ using Eq.~(\ref{eqn:xipl}) is
scale-invariant within our fit range.
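Explicitly, with the slope held fixed, the ratio of the two power-law fits reduces to a pure amplitude ratio,
\begin{equation}
\frac{\xi_{\rm merge}(r)}{\xi_{\rm all}(r)}
 = \frac{\left(r/r_{0,{\rm merge}}\right)^{-\gamma}}
        {\left(r/r_{0,{\rm all}}\right)^{-\gamma}}
 = \left(\frac{r_{0,{\rm merge}}}{r_{0,{\rm all}}}\right)^{\gamma},
\end{equation}
independent of $r$.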
While the enhanced clustering of the recently merged sample is small, it
remains statistically significant.
Typically, the merged sample shows an enhanced clustering of
5--10\% in the correlation function for the $0.7\,h^{-1}$Gyr spacings, though
we find no strong evidence of systematic bias evolution with redshift.
Moreover, at $z=0$, where the spacing is smaller ($0.4\,h^{-1}$Gyr), we find a
significantly enhanced $\xi(r)$ for the mergers, often 10--20\%.
Presumably, this increased clustering signal is caused by the smaller time interval.
Larger intervals encompass more mergers, leading to smaller errors, but also
leading to a smaller signal, since mergers now encompass a more
significant fraction of the comparison population. As mentioned above,
looking at earlier times also makes the merged and comparison population overlap
increase dramatically.
\begin{figure}
\resizebox{3.1in}{!}{\includegraphics{f7.eps}}
\caption{The (scale-independent) ratio of the power law fit correlation
functions for the merged and comparison samples, as a function of lookback
time/redshift; $\Delta t = 0.7\,h^{-1}$Gyr (triangles), $0.4\,h^{-1}$Gyr
(square). The mergers satisfy the criterion $M_2/M_1>0.2$, with
$M_2,M_1$ total progenitor mass, for the
$M>5\times 10^{13}\,h^{-1}M_\odot$ halos in our $b=0.15$ catalog.
No evidence of systematic bias evolution with redshift is found.
The enhanced clustering at $z=0$ arises presumably from the shorter
time interval used.}
\label{fig:biasevol}
\end{figure}
\begin{figure}
\resizebox{3.1in}{!}{\includegraphics{f8.eps}}
\caption{The (scale-independent) ratio of the power law fit correlation
functions for the merged and comparison samples as a function of merger ratio,
$M_2/M_1$ (left points; full progenitor mass) and $M_f/M_i$ (right points),
for halos above $5\times 10^{13}\,h^{-1}M_\odot$ in our $b=0.15$ catalog.
Mergers are counted within $0.4\,h^{-1}$Gyr of $z=0$ (squares), and an
average across all $0.7\,h^{-1}$Gyr spacings from $z\approx 1$ to
$z=0.04$ (triangles).
In both cases, clear trends can be seen.}
\label{fig:mratio}
\end{figure}
By averaging $\xi_{\rm merge}/\xi_{\rm all}$ across all of the $0.7\,h^{-1}$Gyr
spacings from $z\approx 1$ to $z=0.04$, we are able to study the size of the
merger bias simply as a function of merger ratio.
Figure \ref{fig:mratio} shows the increase of $\xi_{\rm merge}/\xi_{\rm all}$
with $M_2/M_1$ (full mass) and $M_f/M_i$ both for mergers within
$0.4\,h^{-1}$Gyr of the present and for the redshift-averaged
$0.7\,h^{-1}$Gyr spacings.
The merger bias clearly increases with increasing merger ratio, with the
smaller time step yielding stronger clustering as described above.
In summary, we find a weak bias in many cases (but not all---the signals are
very noisy) for recent major mergers and recent large mass gains.
While \citet{Per03} found no such merger bias, our signal is consistent
with their upper limit of $20\%$ on the bias effects of recent mergers.
The work of \citet{ScaTha03} saw a small bias for large mass gains but noted
that their statistics limited their ability to determine the significance.
Our larger box allowed us to incorporate the effects of cosmic variance, which
had been neglected in previous work.
Cosmic variance increased the errors by 40\% or more, which limited the
significance of the signal.
Nonetheless, we still found a small bias for \textit{both} mergers and large
mass gains.
\section{Conclusions} \label{sec:conclusions}
The large-scale structure of the Universe is built upon a skeleton of clustered
dark matter halos. For the past two decades we have known that rarer, more
massive dark matter halos cluster more strongly than their lower mass
counterparts.
Halos of a fixed mass, however, can differ in their formation
history and large-scale environment, and recent work on halos
smaller than galaxy clusters has shown that this can lead to
further changes in their clustering.
In this paper we have used two large-volume, high resolution N-body simulations
to study the clustering of massive halos as a function of formation history.
We confirmed earlier results that lower-concentration massive halos are
more clustered than the population as a whole, extending these results
to higher masses (and thus lower redshifts)
than had been probed previously \citep{Wec06}.
(Previous work had looked at similar regimes of $M/M_*$ but for smaller $M$
and thus higher redshift; note again that exact scaling with $M/M_*$ is
not expected for non-power-law $P(k)$ and $\Omega_m \ne 1$.)
Similarly, we confirmed the enhanced clustering of halos with later formation
times, though the signal was not as strong as for concentration.
The signal for bias based on a halo reaching half of its mass is weaker than
that seen in \citet{GaoSprWhi05} (again for higher $z$), and not statistically
significant in our case.
We also investigated whether recent merger activity affected the clustering
of massive halos --- a topic with a muddied history in the literature.
While we found statistically significant ($>2\sigma$) merger effects on
clustering in many of the cases we considered, both for recent major mergers and
for large mass gains, in most cases this signal was weak: a 5--10\% increase in bias.
Our strongest signal came from using a likelihood fit of the correlation
function to a power law, particularly for major mergers within $0.4\,h^{-1}$Gyr
of the present, where we saw a typical merger bias of up to $20\%$.
This bias signal is not necessarily at odds with the lack of signal in previous
work, which looked for larger bias than that seen on average here.
Even with a $(1.1\,h^{-1}{\rm Gpc})^3$ volume, massive halos remain very
rare objects and small changes in their correlations are difficult to detect.
We were plagued by the competing effects that increasing the severity of the
merger (and hence underlying signal) decreases the number of pairs, worsening
the statistics.
General trends remain elusive, since changing various criteria (e.g.~merger
definition, minimum mass, time step) generally changed the number of halos
involved, thus changing the errors.
However, we did find that the strength of the merger bias typically increased
with increasing merger ratio, i.e.~more major mergers are more strongly biased.
Finally, we note that the correlations found between the last large (20\%)
mass gain and the different definitions of formation redshifts provide a
connection between the assembly bias studied in \S\ref{sec:history} and
the merger bias in \S\ref{sec:merger}.
This bias is not expected from direct application of extended Press-Schechter
theory, and it provides a phenomenon that a more precise analytic model of
mergers should reproduce.
We thank J. Bullock, R. Croft, G. Jungman, P. Norberg, G. Rockefeller,
R. Sheth, E. Scannapieco, R. Wechsler and A. Zabludoff for enlightening
conversations and especially R. Sheth and R. Wechsler, who also provided
useful comments on the draft.
J.D.C., D.E.H. and M.W. thank the staff of the Aspen Center for Physics for
their hospitality while this work was being completed.
The simulations and analysis in this paper were carried out on supercomputers
at Los Alamos National Laboratory and NERSC.
J.D.C. was supported in part by the NSF.
M.W. was supported in part by NASA. D.E.H. gratefully acknowledges a
Feynman Fellowship from LANL.
cond-mat/0606734
\section{Introduction}
With a thrust from applications in quantum computing, the
manipulation of quantum states in superconducting nanocircuits has
made tremendous progress over the last decade
\cite{Bertet05b,Wallraff04,Vion02,Duty04,Makhlin01,Pashkin03,McDermott05,Oliver05,Plourde05}. A
crucial step for these successes is the understanding of decoherence
and the design of good measurement schemes. The latter is a particular
challenge as the detector is made using the same technology as the
system being detected, i.e.~the qubit. Moreover, the measurement timescale cannot be
considered infinitesimally short compared to the intrinsic scales of the qubit
evolution. Thus, understanding the measurement process is crucial both
fundamentally and for improving experiments.
A particularly attractive development is the emergence of circuit
quantum electrodynamics (cQED)
\cite{Buisson01,You03,Goorden04,PRBR033,Kleff04,Yang04,Sarovar05,Mariantoni05},
where effective Hamiltonians, similar to those of the coherent
light-matter interaction of quantum optics and in particular of
cavity QED, can be realized in the microwave frequency domain. There
are many approaches to realizing the qubit, including flux and charge qubits,
and the cavity, including a superconducting quantum interference device (SQUID) or a coplanar waveguide.
In this context, measurement protocols making use of dispersive
qubit-oscillator interactions~\cite{Bertet05b,Wallraff04} are useful for reducing the backaction
on the qubit \cite{Lupascu04}. For example, in the flux qubit--SQUID
combination, as in the Delft setup of Refs.~\cite{Bertet05b, Bertet05c}, the
SQUID behaves like a harmonic oscillator. Its inductive coupling to
the flux qubit leads to a frequency shift depending on the qubit
state $\Omega_{\uparrow,\downarrow}=\sqrt{\Omega^2\pm\Delta^2}$.
Here, $\Omega$ is the bare oscillator frequency and $\Delta$ is the
quadratic frequency shift. A measurement of the SQUID
resonance frequency thus provides information about the qubit state. While the manipulation of the qubit is usually performed at
the optimum working point \cite{Vion02}, the readout can and should
be performed as a quantum nondemolition measurement, i.e.~in the pure
dephasing limit.
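As a numerical illustration of the state-dependent shift $\Omega_{\uparrow,\downarrow}=\sqrt{\Omega^2\pm\Delta^2}$ (the values used below are arbitrary, not the Delft parameters):

```python
import math

# Sketch: qubit-state-dependent oscillator resonance frequencies,
# Omega_{up,down} = sqrt(Omega^2 +/- Delta^2). Units and values are
# illustrative placeholders only.

def shifted_frequencies(omega, delta):
    up = math.sqrt(omega**2 + delta**2)    # qubit up: frequency pulled up
    down = math.sqrt(omega**2 - delta**2)  # qubit down: frequency pulled down
    return up, down
```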
In this letter we study the decoherence of a qubit
due to its dispersive coupling to a damped harmonic oscillator, taking the Delft setup as an example, though our results may be adapted to several physical systems. In the Purcell effect, a narrow oscillator linewidth enhances the absorption of the resonant photon emitted by the two-level atom and thus the energy relaxation of the latter. In the weak
qubit-oscillator coupling regime (WQOC), we explain the behavior of dephasing in terms of a similar process, the phase Purcell effect.
This regime is characterized, as we will show later, by
$\Delta/\Omega<\sqrt{\kappa/\Omega}/
(1+n(\Omega))^{1/4}$, where $n(\Omega)$ is the Bose function at the frequency $\Omega$ and environment temperature $T$. The main result of this work lies beyond the WQOC, in a regime where fast qubit-oscillator entanglement plays the dominant role. There we find a qualitatively different behavior of the dephasing rate:
the divergence of the qubit dephasing rate $1/\tau_\phi\propto 1/\kappa$ as the oscillator decay rate $\kappa \rightarrow 0$ is lifted by the onset of the strong coupling regime.
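The WQOC condition above can be checked numerically; this is a hedged sketch, and the parameter values in it are illustrative rather than taken from any experiment:

```python
import math

# Sketch: the WQOC criterion Delta/Omega < sqrt(kappa/Omega)/(1+n(Omega))^{1/4},
# with n(Omega) the Bose occupation at environment temperature T (SI units).

HBAR, KB = 1.0545718e-34, 1.380649e-23

def bose(omega, T):
    # expm1 keeps the low-frequency / high-temperature limit accurate
    return 1.0 / math.expm1(HBAR * omega / (KB * T))

def in_wqoc(delta, omega, kappa, T):
    return delta / omega < math.sqrt(kappa / omega) / (1.0 + bose(omega, T)) ** 0.25
```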
The Hamiltonian describing the Delft setup~\cite{Bertet05c}
can be written as
\begin{eqnarray}
\!\!\!\!\!\hat{H}\!\!&=&\!\!\underbrace{\frac{E}{2}\hat{\sigma}_z+\hbar\Omega
\left(\hat{a}^\dagger \hat{a}+\frac{1}{2}\right)+\frac{\hbar
\Delta^2}{4\Omega}(\hat{a}+\hat{a}^{\dagger})^2\hat{\sigma}_z}_{\hat{H}_S}+\hat{H}_{D}
\label{eq:Hamiltonian} . \nonumber \\
\end{eqnarray}
Here, $\hat{a}$ and $\hat{a}^\dagger$ are the annihilation and
creation operators of the harmonic oscillator, $\hat{\sigma}_z$ acts
in the Hilbert space of the qubit and $\hat{H}_D$ describes the
damping of the oscillator. A full-length derivation of Hamiltonian (\ref{eq:Hamiltonian}) and a discussion of the approximations used are given in Ref.~\cite{Serban06}, which derives the Hamiltonian from the equations of motion of the Josephson phases across the junctions and truncates the SQUID potential at second order.
We will show that key experiments~\cite{Bertet05b, Wallraff04} are
performed outside the WQOC. Moreover, a very recent
experiment~\cite{Schuster06} explicitly relies on the use of a
strong dispersive coupling regime. We
demonstrate that the dephasing rate $1/\tau_\phi\propto 1/\kappa$
for WQOC, and $1/\tau_\phi\propto\kappa$ at
strong coupling. We discuss the crossover between these regimes and
its dependence on $\kappa$ and temperature $T$. We provide physical
interpretations of both regimes, the former as a phase Purcell
effect and the latter as the onset of qubit-oscillator
entanglement. The results of the present study may be extended
straightforwardly to any system with similar dispersive
qubit-oscillator coupling: the charge-qubit--coplanar wave guide system (see Yale
setup~\cite{Wallraff04}), trapped ions~\cite{Leibfried03} and 3D
microwave cavity QED~\cite{Raimond01}, quantum
dots~\cite{Balodato05}, among others.
\section{Method}
In studying the qubit dephasing we face the challenge of a complex non-Markovian environment consisting of the main oscillator (i.e.~the SQUID) and the ohmic bath. Moreover, the qubit couples to a non-Gaussian variable of its environment. Therefore the tools developed for Gaussian baths \cite{Weiss99} cannot be applied to this system for arbitrarily strong coupling between the qubit and the oscillator.
We study the qubit dynamics under the Hamiltonian (\ref{eq:Hamiltonian}) for arbitrary $\Delta/\Omega$, assuming essentially
the dimensionless oscillator decay rate $\kappa/\Omega$ as the {\em
only} small parameter. In this regime we avoid over-damping of the oscillator and the strong backaction on the system which this would cause. We give in the following a brief description of the crucial steps
and approximations of the calculation. We model the damping, associated with the
oscillator decay rate $\kappa$, in the Caldeira-Leggett way by a
bath of harmonic oscillators
\begin{eqnarray}
\!\!\!\hat{H}_D\!\!\!&=&\!\!\!\!\underbrace{\sum_j\hbar\omega_j
\left(\hat{b}_j^{\dagger}\hat{b}_j+\frac{1}{2}\right)}_{\hat{H}_B}
\!+\!\underbrace{\sum_j\frac{\hbar(\hat{a}+\hat{a}^{\dagger})}
{2\sqrt{m\:\Omega}}\frac{\lambda_j(\hat{b}_j^{\dagger}+\hat{b}_j)}
{\sqrt{m_j\:\omega_j}}}_{\hat{H}_{I}}+\hat{H}_c, \nonumber \\\label{eq:bath}
\end{eqnarray}
with $J(\omega)=\sum_j\lambda_j^2\hbar/(2m_j\omega_j)
\delta(\omega-\omega_j)=m\hbar\kappa\omega\Theta(\omega_c-\omega)/
\pi$ and $\hat{H}_c$ the counter term \cite{Ingold98,Cohen92,Nato06II}, where $\Theta$ is the Heaviside step
function and $\omega_c$ an intrinsic high-frequency cut-off.
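As a concreteness check (not part of the original derivation), the spectral density above can be represented by a finite set of bath modes. The sketch below, in units $\hbar=m=m_j=1$ and with purely illustrative values of $\kappa$ and $\omega_c$, verifies that discrete couplings chosen as $\lambda_j^2 = 2 m_j \omega_j J(\omega_j)\,\Delta\omega/\hbar$ reproduce the integrated spectral density:

```python
import numpy as np

# Units: hbar = m = m_j = 1. kappa and omega_c are illustrative values only.
kappa, omega_c = 0.05, 50.0

def J(omega):
    # Ohmic spectral density with a sharp high-frequency cutoff at omega_c.
    return kappa * omega * (omega < omega_c) / np.pi

# J(w) = sum_j lambda_j^2 / (2 w_j) * delta(w - w_j) on a uniform grid
# implies lambda_j^2 = 2 w_j J(w_j) dw.
n_modes = 2000
w = np.linspace(omega_c / n_modes, omega_c, n_modes)
dw = w[1] - w[0]
lam2 = 2.0 * w * J(w) * dw

# Sanity check: the discrete modes reproduce the integral of J(w).
integral_discrete = np.sum(lam2 / (2.0 * w))
integral_exact = kappa * omega_c**2 / (2.0 * np.pi)
print(integral_discrete, integral_exact)
```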
Our starting point is the Born-Markov master equation in the weak coupling
to the bath limit for the reduced density matrix $\hat{\rho}_S$ in the qubit-oscillator Hilbert space
\begin{eqnarray*}
\dot{\hat{\rho}}_S(t)&=&
\frac{1}{i \hbar}\left[\hat{H}_S,\hat{\rho}_S(t)\right]\label{mastereq}\\
&+&\int_0^t\!\!\! \frac{dt'}{(i \hbar)^2}\:{\rm Tr}_B \!
\left[\hat{H}_{I},[\hat{H}_{I}(t,t'),\hat{\rho}_S(t)\!\otimes\!\hat{\rho}_{B0}]\right]\!.\nonumber
\end{eqnarray*}
This approach is valid at finite temperatures $k_BT \gg \hbar \kappa$, for times $t\gg1/\omega_c$ \cite{Alicki06,Nato06II}, which is
the limit we will discuss henceforth. We start from a standard factorized initial state for all
subsystems. We express $\hat{\rho}_S(t)$
in the qubit basis and represent its elements, which are still oscillator operators, in phase-space as
\begin{equation}
\hat{\rho}_S = \left( \begin{matrix}
\hat{\rho}_{\uparrow\uparrow}&\hat{\rho}_{\uparrow\downarrow} \\
\hat{\rho}_{\downarrow\uparrow}&\hat{\rho}_{\downarrow\downarrow}\end{matrix}\right)
,\:\: \hat{\rho}_{\sigma\sigma'}=\int\!
\frac{d^2\alpha}{\pi}\:\:\chi_{\sigma\sigma'}(\alpha,\alpha^*,t)\hat{D}(-\alpha),\nonumber
\end{equation}
where $\chi_{\sigma\sigma'}$ is the characteristic Wigner function
and $\hat{D}=\exp(\alpha\hat{a}^{\dagger}-\alpha^*\hat{a})$ the displacement operator \cite{Cahill69}. Independent of our work,
Ref.~\cite{Gambetta06} has used a different phase-space
representation to calculate the qubit dephasing rate. We characterize the qubit coherence by
$C(t)=\left\langle\hat{\sigma}_x\otimes
\hat{\mathbbm{1}}\right\rangle=2{\rm
Re\:\:Tr}\hat{\rho}_{\uparrow\downarrow}(t)$ which can be easily
shown to be $C(t)=8\pi{\rm Re}\chi_{\uparrow\downarrow}(0,0,t)$.
After a rather long but essentially straightforward calculation,
one obtains for $\chi_{\uparrow\downarrow}$ a \emph{generalized}
Fokker-Planck equation
\begin{eqnarray}
\!\dot{\chi}_{\uparrow\downarrow}(\alpha,\alpha^*,t)&=&
\Big(\!\!\left(\alpha
(k_1+i \Omega)+\alpha^*k_1\right)\partial_{\alpha}\nonumber \\
&+&
\left(\alpha^*(k_2-i \Omega)+\alpha
k_2\right)\partial_{\alpha^*}\nonumber\\
-\frac{i \Delta^2}{2\Omega}(\partial_\alpha-\partial_{\alpha^*})^2 &+&p(\alpha+
\alpha^*)^2\!\Big)\chi_{\uparrow\downarrow}(\alpha,\alpha^*,t)\label{offdiag},
\end{eqnarray}
where
\begin{eqnarray}
\!\!\!\!\!\!\! k_{1,2} & = & -\frac{\kappa}{4}\left(2\mp\frac{\Omega_{\uparrow}}{\Omega}(1+2n_{\uparrow})\pm\frac{\Omega_{\downarrow}}{\Omega}(1+2n_{\downarrow})\right) , \\
p &= & -\frac{\kappa}{8\Omega}\left(\Omega_{\uparrow}(1+2n_{\uparrow})+\Omega_{\downarrow}(1+2n_{\downarrow})\right)-\frac{i \Delta^2}{8\Omega}
\end{eqnarray}
and $n_{\sigma}=n(\Omega_{\sigma})$ is the Bose function. To solve
Eq.~(\ref{offdiag}) we make a Gaussian ansatz for
$\chi_{\uparrow\downarrow}$.
\begin{eqnarray}
\chi_{\uparrow\downarrow}&=&A(t)\exp(-M(t)\alpha^2-N(t)\alpha^{*2}-Q(t)\alpha\alpha^*).
\end{eqnarray}
This ansatz includes coherent and thermal states. In the following we assume the oscillator to be initially in a thermal state, in equilibrium with its environment. This implies $Q(0)=1/2+n(\Omega)$ and $M(0)=N(0)=0$. Due to the quadratic (pure dephasing) form of
the Hamiltonian (\ref{eq:Hamiltonian}), we obtain a closed system of
ordinary differential equations for the parameters of the Gaussian ansatz, see also Ref.~\cite{Serban06}. This system can be easily solved perturbatively in $\Delta$ in the weak coupling regime, or numerically for arbitrarily strong coupling, and we can extract the
dephasing time $\tau_\phi$ from the strictly exponential
long-time tail of $C(t)=8\pi{\rm Re}A(t)$.
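The extraction of $\tau_\phi$ from the exponential long-time tail can be sketched as follows, on synthetic data rather than on the actual solution of the ODE system (the decay rate and oscillation frequency below are hypothetical illustration values):

```python
import numpy as np

# Synthetic qubit coherence with an exponential long-time tail; tau_phi_true
# and the oscillation frequency are hypothetical illustration values.
tau_phi_true = 7.5
t = np.linspace(0.0, 60.0, 6001)                  # time grid, step 0.01
C = np.exp(-t / tau_phi_true) * np.cos(2.0 * np.pi * t)

# Sample the envelope at the oscillation maxima (t = 30, 31, ..., 59, where
# cos(2*pi*t) = 1) and fit log C linearly in t; the slope is -1/tau_phi.
idx = np.rint(np.arange(30.0, 60.0) / 0.01).astype(int)
slope, _ = np.polyfit(t[idx], np.log(C[idx]), 1)
tau_phi_est = -1.0 / slope
print(tau_phi_est)  # ~7.5
```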
\section{Weak qubit-oscillator coupling}
Before solving Eq.~(\ref{offdiag}) in a general manner, we revisit the case of small $\Delta$. Up to the lowest non-vanishing order $\Delta^4$, the
analytically calculated WQOC dephasing rate is
\begin{eqnarray}
\frac{1}{\tau_\phi}=\Delta^4
\frac{n\!\left(\Omega\right)\left(n\!\left(\Omega\right)+1\right)}{\Omega^2}
\left(\frac{\kappa}{\kappa_m^2}+\frac{1}{\kappa}\right),\label{dephasingrate}
\end{eqnarray}
where $\kappa_m= \sqrt{2k_BT\Omega/(\hbar(1+2\:n(\Omega)))}$.
The term $1/\kappa$ exactly reproduces the Golden Rule dephasing
rate of Ref.~\cite{Bertet05c}, and is similar to the result of Ref.~\cite{Blais04}. These previous results were obtained considering only the two-point correlator of the fluctuating observable $(a+a^{\dagger})^2$, i.e.~assuming a Gaussian environment. The crossover point $\kappa_m$ from $1/\kappa$ to $\kappa$ in
Eq.~(\ref{dephasingrate}) is, at the Delft parameters \cite{Bertet05b}, comparable to
$\Omega$, i.e., $\kappa$ would dominate over $1/\kappa$ only in a
regime where the Born approximation fails. Nevertheless, since the golden rule limit $\lim_{\kappa\to\infty}1/\tau_{\phi}=0$ is unphysical, such a term was to be expected.
In the WQOC regime, the enhancement of dephasing by weak coupling to the environment
is analogous to the enhancement of spontaneous emission by the narrow cavity lines in the
{\em resonant} Purcell effect, see Refs.~\cite{Purcell46, Haroche89}.
In the pure dephasing case there is no energy exchange between the qubit and the oscillator. Qubit decoherence is caused by fluctuations of $(\hat{a}+\hat{a}^{\dagger})^2$. Since we are in the WQOC regime, the stronger coupling between the oscillator and the environment establishes equilibrium between the oscillator and the bath on a shorter time scale than the qubit dephasing. In equilibrium, the main contribution to the fluctuations of $(\hat{a}+\hat{a}^{\dagger})^2$ is the exchange of photons between oscillator and bath. The process is analogous to equilibrium fluctuations in canonical thermodynamics. A virtual photon returning from the environment is at resonance with the oscillator. The absorption of this photon, as in the resonant Purcell effect, is enhanced by narrow oscillator lines. Therefore, the entire dephasing process is enhanced when the coupling to the environment is weak, and this mechanism can be viewed as a phase Purcell effect. We give a more detailed discussion of this effect in the Appendix.
\section{Strong qubit-oscillator coupling}
The dephasing rate
(\ref{dephasingrate}) obtained in the small $\kappa$ and WQOC limit
diverges for $\kappa\to0$, i.e., in the absence of an environment. The
solution to this apparent contradiction lies beyond the WQOC, therefore we solve Eq.~(\ref{offdiag}) numerically using again the Gaussian ansatz for $\chi_{\uparrow\downarrow}$.
\begin{figure}[h]
\includegraphics[width=.45\textwidth]{delta.eps}
\caption{Dephasing rate $1/\tau_\phi$ as function of $\Delta$ for
different values of $\kappa$. Power-law $\Delta^4$ growth at low
$\Delta$ crosses over to $\Delta$-independence at strong
coupling. Inset: Dephasing rate as a function of $\kappa$ in the
weak coupling regime ($\Delta/\Omega=10^{-3}$) showing
$1/\tau_\phi\propto\Delta^4/\kappa$ and the strong coupling
regime ($\Delta/\Omega=10^{-1}$) with $1/\tau_\phi\propto\kappa$
dependence. Here $\hbar\Omega/k_BT=2$ similar to
experiments.}\label{gamma}
\end{figure}
Figure~\ref{gamma} shows the dependence of the dephasing rate
on $\Delta$ for various values of $\kappa$. The
dimensionless parameter $\hbar\Omega/k_BT$ is $2$, similar to the Delft and Yale setups.
As predicted by eq.~(\ref{dephasingrate}) for $\kappa\ll\kappa_m$ and
small $\Delta$, the dephasing rate is proportional to
$\Delta^4/\kappa$. Further increasing $\Delta$, we observe a saturation of the dephasing rate which marks the onset of the strong coupling regime. This regime is analogous to the strong coupling in linear cavity QED. Here $1/\tau_\phi$ is proportional to $\kappa$.
At strong qubit-oscillator coupling the oscillator couples to
the qubit stronger than it couples to the heat bath, such that one
cannot use the effective bath concept of WQOC. As the
qubit-oscillator system becomes entangled, a fundamentally different
dephasing mechanism emerges. The eigenstates of the Hamiltonian
(\ref{eq:Hamiltonian}) at $\kappa=0$ are the dressed states $\{|\sigma,m_\sigma\rangle\}$, where $|m_\sigma\rangle$ are the number states of the oscillator with frequency $\Omega_\sigma$. As opposed to WQOC, in the strong coupling regime these dressed states are built up on a shorter time scale than the re-thermalization of the oscillator. In the evolution from the thermal state of the oscillator with frequency $\Omega$ to an equilibrium between the new oscillator with $\Omega_\sigma$ and the bath, the state in the narrower potential tends to absorb photons from, and the one in the wider potential to emit photons to, the bath in an incoherent manner, causing fluctuations of $(\hat{a}+\hat{a}^\dagger)^2$ and thus qubit decoherence. Thus we expect $1/\tau_\phi\propto \kappa n(\Omega)$. This simple picture is confirmed by
numerical results in Fig.~\ref{knt}, for a wide range of values of
$\kappa$.
\begin{figure}[h]
\includegraphics[width=.45\textwidth]{knt.eps}
\caption{Scaling plot of the dephasing rate $1/\tau_\phi$ as
function of temperature. For $\Delta/\Omega=0.3$, i.e. in the strong
coupling regime (see Fig.~\ref{gamma}), for a wide range of
$\kappa$'s we show that $1/(\tau_\phi\kappa)$ is proportional to the
Bose function $n(\Omega)$. Inset: dephasing rate $1/\tau_\phi$ as
function of $\kappa$ for different values of $\Delta$ and
$\hbar\Omega/k_BT=2$. Continuous lines correspond to $\kappa
n(\Omega)$, dashed lines correspond to
$n(\Omega)(n(\Omega)+1)\Delta^4/(\Omega^2\kappa)$.}\label{knt}
\end{figure}
The inset of Fig.~\ref{knt} shows the crossover from strong coupling
rate $\kappa$ to WQOC rate $1/\kappa$. This indicates that, for fixed
$\Delta$, as $\kappa$ decreases, $\Delta$ stops being ``small'' and
the WQOC limit breaks down. Thus, approaching $\kappa=0$ for any given $\Delta$ we
eventually leave the domain of validity for eq.
(\ref{dephasingrate}) avoiding the divergence at $\kappa\to0$. As
expected, dephasing will vanish as we go to a finite quantum system
(qubit $\otimes$ single oscillator) at $\kappa=0$. We observe that
the criterion of ``small'' $\Delta$ in WQOC is valid only relative to
$\kappa$. Using $1/\tau_{\phi}=\kappa n\!\left(\Omega\right)$
in the strong coupling regime and the $1/\kappa$ term of
$1/\tau_{\phi}$ in eq.~(\ref{dephasingrate}) in the weak coupling
regime, we determine the position of the crossover $\Delta_c$
between the two regimes
\begin{eqnarray}
\frac{\Delta_c}{\Omega}&=&\sqrt{\frac{\kappa}{\Omega}}\frac{1}
{(1+n(\Omega))^{1/4}} \,\, .
\end{eqnarray}
The position of the cross-over is controlled by the ratio of the
coupling strengths between the three subsystems i.e.~$\Delta^2/\Omega$ and $\kappa$. Note that, with the
in-situ tuning of the qubit-SQUID coupling, available in the Delft
experiment, the position of the cross-over could be tested
experimentally. Using the parameters from Ref.~\cite{Wallraff04,
Bertet05b} one finds $(\Delta/\Delta_c)_{\rm
Yale}\approx1.4$ and $(\Delta/\Delta_c)_{\rm
Delft}\approx1.3$, i.e., the strong coupling regime applies to both setups.
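By construction, $\Delta_c$ is the point where the two limiting rate expressions coincide. A quick numerical consistency check (units $\hbar=k_B=1$, hypothetical parameter values):

```python
import numpy as np

# Check that Delta_c from the equation above equates the strong-coupling rate
# kappa*n(Omega) with the 1/kappa term of Eq. (dephasingrate),
# Delta^4 n(n+1)/(Omega^2 kappa). Units hbar = k_B = 1; values hypothetical.
Omega, T, kappa = 1.0, 0.5, 1e-3

n = 1.0 / np.expm1(Omega / T)                        # Bose function
Delta_c = Omega * np.sqrt(kappa / Omega) / (1.0 + n) ** 0.25

strong = kappa * n
weak = Delta_c**4 * n * (n + 1.0) / (Omega**2 * kappa)
print(strong, weak)  # equal by construction
```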
If the oscillator is weakly driven off-resonance, as is the case in the dispersive
measurement, the qualitative behavior remains the same as in
Fig.~\ref{gamma}, as shown in Ref. \cite{Serban06}. In
general a tunneling $\hat{\sigma}_x$ term may occur in
Eq.~(\ref{eq:Hamiltonian}) and lead to energy relaxation as well as further
reducing the matrix elements containing the dephasing rate. We
expect that, as long as the energy splitting $E$ of the qubit is
off-resonance with the oscillator, which in our case means
$|E-2\Omega|\gg \kappa$, the effect of the relaxation is rather weak
and dephasing still dominates. On resonance, we expect a similar
Purcell to strong coupling crossover as for the dephasing channel.
Our results have applications in other systems with similar dispersive qubit-oscillator coupling, e.g., the Yale setup \cite{Wallraff04}, in the off-resonant dispersive regime.
There, the system is described by a similar (Eq.~(12) in Ref.~\cite{Blais04})
quadratic coupling $\hat{a}^{\dagger}\hat{a}$ between qubit and
cavity and a pure dephasing Hamiltonian. In particular, a strong
dispersive regime of this system has been utilized to resolve number
states of the electromagnetic field in Ref.~\cite{Schuster06}. The
terms $\hat{a}^2$ and $\hat{a}^{\dagger2}$ in Eq.~(1) do not play a
central role for our physical predictions, as confirmed by the numerical simulations. We expect our results, with minor adaptations, to
be applicable to various cavity systems, e.g. quantum dot or atom-based
quantum optical schemes \cite{Raimond01,Balodato05}. The dispersive coupling of Hamiltonian (\ref{eq:Hamiltonian}) could have implications for the generation of squeezed states, quantum memory in the frame of quantum information processing, measurement and post-selection of the number states of the cavity.
\section{Conclusion}
We have presented a concise theory of the dephasing
of a qubit coupled to a dispersive detector, spanning both the strong and weak coupling regimes. The phase-space method applied is based on treating the oscillator on the same level of accuracy as the qubit. We have discussed the dominant decoherence mechanism at weak qubit-oscillator coupling, where the linewidth of the damped oscillator plays the main role, analogous to the Purcell effect. At strong qubit-oscillator coupling we have identified a qualitatively different behavior of the qubit dephasing and discussed it in terms of the onset of qubit-oscillator entanglement. We have provided a criterion delimiting the parameter range in which each of these processes dominates the qubit dephasing.
\acknowledgements
We are especially grateful to F.~Marquardt, I.~Chiorescu and A.~Blais for very
helpful discussions. We thank J.v.~Delft, A.G.~Fowler and
C.~Couteau for many useful suggestions. This work is supported by
NSERC and the DFG through SFB 631 and by EU through EuroSQIP and
RESQ projects. I.S.~acknowledges support through the Elitenetzwerk
Bayern.
\section{Appendix}
Assuming the WQOC limit, we use Fermi's golden rule in an otherwise exact manner to prove the analogy between the weak qubit-oscillator coupling regime and the Purcell effect. One can map the damped oscillator by an exact normal mode transformation
\cite{Ambegaokar05} onto an \emph{effective} heat bath of
\emph{decoupled} oscillators denoted by
$\hat{c}_j,\hat{c}_j^\dagger$ and with a spectral density
\begin{equation} J_{\rm
eff}(\omega)=\frac{2\kappa\omega}{(\omega^2-\Omega^2)^2+\kappa^2\omega^2}.
\label{eq:jeff}
\end{equation}
$J_{\rm eff}$ corresponds to the effective density of electromagnetic modes in the cavity introduced
in regular linear cavity QED for describing the Purcell effect.
The WQOC decoherence rate is proportional to the two-point correlation function of the
environmental operator coupling to the qubit \cite{Slichter96,Nato06II}, in our case
\begin{equation} S_2(\omega)=\left\langle
\hat{X}^2(t)\hat{X}^2(0)\right\rangle_\omega-\left(\langle
\hat{X}^2\rangle\right)^2,\label{corr}
\end{equation} where $\hat{X}$ is the sum of the \emph{effective}
bath coordinates $\hat{X}=\sum_j
\sqrt{\hbar/(2m_j\omega_j)}(\hat{c}_j+\hat{c}_j^\dagger)$. For
the pure dephasing situation described by the Hamiltonian
(\ref{eq:Hamiltonian}) we only need to study $1/\tau_{\phi}\propto S_2(\omega\to0^+)$ because the qubit energy conservation implies energy conservation within its effective environment. The last term
of Eq.~(\ref{corr}) removes the noise bias. This is important since
dephasing is caused only by processes that leave a trace in the bath
\cite{Leggett02}, i.e.~the exchanged boson spends a finite time in the environment.
Terms of the structure
$\left\langle\hat{c}^\dagger_i(t)\hat{c}_j^\dagger(t)\hat{c}_k \hat{c}_l\right\rangle$, $\left\langle\hat{c}_i(t)\hat{c}_j(t)\hat{c}_k^\dagger \hat{c}_l^\dagger\right\rangle$ contribute to $S_2(0)$ only when $\omega_i=\omega_j=0$, i.e.~for modes whose density $J_{\rm eff}\simeq2\kappa\omega/\Omega^4$ carries a factor $\kappa$ each, leading to terms of order $\kappa^2$.
Up to linear order in $\kappa$, the only terms in $S_2(\omega\to0^+)$ that fulfill the energy conservation and leave a trace in the bath are
of the structure $\left\langle
\hat{c}^\dagger_l(t)\hat{c}_j(t)\hat{c}_j^{\dagger} \hat{c}_l
\right\rangle$, including the permutations among the operators taken at
time $t$ and those taken at time 0. The terms contributing to $S_2(\omega\to0^+)$
satisfy the condition $|\omega_l-\omega_j|\to 0^+$. Physically this corresponds to infinitesimal energy fluctuations which leave a trace in the bath. In other words, the photon absorbed at $t=0$ by $\hat{c}_l$ should spend a finite time in the bath and be emitted back only at the later time $t$, but at the same time the energy change in the environment, e.g.~caused by $\hat{c}_j^{\dagger}\hat{c}_l$, should remain undetectable within the energy-time uncertainty at every time; therefore in the Golden Rule (long time) limit $\omega_l\approx\omega_j$. Taking the continuum limit
we thus have
\begin{equation} 1/\tau_{\phi}\propto\int_0^{\infty}\! \!d\omega\: J_{\rm
eff}(\omega)(1+n(\omega))J_{\rm
eff}(\omega)n(\omega)\label{continuum}.
\end{equation}
The integral in eq.~(\ref{continuum}) can be rewritten as the convolution
$K(\omega')=\int d\omega J_{\rm
eff}(\omega)n(\omega)J_{\rm eff}(\omega'-\omega)n(\omega'-\omega)$
for $\omega'\to0$. Using eq.~(\ref{eq:jeff}), $K(\omega')$ becomes a
function with resonances at $\omega'=0$ and $\omega'=2\Omega$. The
height of these resonances, and consequently $1/\tau_{\phi} \propto S_2(0)$, increases with decreasing $\kappa$, thus matching the behavior of the dephasing rate (\ref{dephasingrate}).
At the same time, the tail of the peak at $2\Omega$ enhances $S_2(0)$ when $\kappa$ increases. This corresponds to the $\kappa$ term in eq. (\ref{dephasingrate}).
Analogous to $1/\tau_\phi$ in Eq.~(\ref{dephasingrate}), $S_2(\omega\to0^+)$ vanishes for $T\to0$.
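The $1/\kappa$ behavior of the dominant resonance can also be checked by direct numerical quadrature of Eq.~(\ref{continuum}). A minimal sketch, with hypothetical parameter values in units $\hbar=k_B=1$ and the overall prefactor dropped:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the integral in Eq. (continuum): for small kappa the
# resonance of J_eff at omega = Omega dominates and the rate scales as
# 1/kappa. Units hbar = k_B = 1; Omega and T are hypothetical values.
Omega, T = 1.0, 0.5

def n(w):
    return 1.0 / np.expm1(w / T)

def J_eff(w, kappa):
    return 2.0 * kappa * w / ((w**2 - Omega**2) ** 2 + kappa**2 * w**2)

def rate(kappa):  # proportional to 1/tau_phi, overall prefactor dropped
    f = lambda w: J_eff(w, kappa) ** 2 * n(w) * (1.0 + n(w))
    val, _ = quad(f, 1e-6, 10.0 * Omega, points=[Omega], limit=400)
    return val

r1, r2 = rate(1e-3), rate(2e-3)
print(r1 / r2)  # ~2: halving kappa doubles the rate
```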
% arXiv:2210.00799
\section{Introduction}
The extragalactic $\gamma$-ray sky is dominated by the blazar category of active galactic nuclei (AGN; \citealt{2020ApJS..247...33A}). Blazars, comprising flat-spectrum radio quasars (FSRQs) and BL Lac objects (BL Lacs), are radio-loud AGN that have their relativistic jets aligned close to the line of sight to the observer \citep{1978bllo.conf..328B,1995PASP..107..803U}. They emit most of their energy in high energy $\gamma$-rays, with the most powerful sources, during strong flares, reaching $\gamma$-ray luminosities (in the isotropic emission scenario) as high as $L_{\gamma} \sim 10^{49-50}$ erg s$^{-1}$ \citep{2017ApJ...841...61A}. The broadband spectral energy distributions (SEDs) of blazars have double-hump structures \citep{1998MNRAS.299..433F}. The low-energy component of the SED is well explained by synchrotron emission from the relativistic electrons within the jet, while the origin of the high-energy component is still unclear \citep[e.g.][]{2007Ap&SS.307...69B}. One of the mechanisms responsible for the high energy $\gamma$-ray radiation is the synchrotron self-Compton (SSC) process, by which the synchrotron photons emitted by the relativistic electrons in the jet are Compton up-scattered by the same population of electrons in the jet \citep{1992ApJ...397L...5M}. The other model posits the production of $\gamma$-rays by inverse Compton scattering of seed photons external
to the jet, by electrons in the emitting jet. The seed photons could be the ultra-violet photons from the accretion disk \citep{1993ApJ...416..458D}, the photons from emission lines from the broad-line region \citep{1994ApJ...421..153S} and the infrared emission from the dusty torus \citep{2000ApJ...545..107B}. Alternatively, hadronic processes could also produce $\gamma$-rays \citep{2013ApJ...768...54B}.
Based on the synchrotron peak frequency ($\nu_{peak}$) of their SEDs, BL Lacs are further divided into low-frequency peaked BL Lacs (LBLs; $\nu_{peak} \leq 10^{14}$ Hz), intermediate-frequency peaked BL Lacs (IBLs; $10^{14}$ Hz $< \nu_{peak} < 10^{15}$ Hz), and high-frequency peaked BL Lacs (HBLs; $\nu_{peak} \geq 10^{15}$ Hz) \citep{1995ApJ...444..567P,2010ApJ...716...30A}.
The $\gamma-$rays from blazars are known to be highly variable \citep{2010ApJ...722..520A,2020A&A...634A..80R} which indicates the emission region is highly compact \citep{1994ApJS...94..551F} and the $\gamma$-ray radiation is highly beamed \citep{1995MNRAS.273..583D}. However, the exact physical processes that are responsible for the generation of the observed $\gamma$-ray emission, as well as their production site, are uncertain and highly debated. Using powerful observational facilities in the $\gamma-$ray domain, rapid $\gamma-$ray variations have been detected in
seven AGN so far, including three BL Lacs (PKS 2155-304 \citep{2007ApJ...664L..71A}, Mrk 501 \citep{2007ApJ...669..862A}, and BL Lac \citep{2013ApJ...762...92A,2019A&A...623A.175M}), three FSRQs (PKS 1222$+$21 \citep{2011ApJ...730L...8A}, 3C 279 \citep{2016ApJ...824L..20A}, and CTA 102 \citep{2018ApJ...854L..26S}), and one radio galaxy (IC 310; \citealt{2014Sci...346.1080A}). Minute-timescale $\gamma-$ray variations have also been detected multiple times in the same source. For example, \cite{2013ApJ...762...92A} observed a rapid TeV $\gamma-$ray flare from BL Lac with an exponential decay time of 13$\pm$4 min and recently, \cite{2019A&A...623A.175M} reported the detection of a VHE $\gamma-$ray flare with a halving time of 26$\pm$8 min from the same source. Moreover, the majority of the findings of rapid $\gamma$-ray variations come from VHE observations by ground-based Cherenkov telescopes, except for two sources, namely 3C 279 \citep{2016ApJ...824L..20A} and CTA 102 \citep{2018ApJ...854L..26S}, where the short-timescale variations are from the GeV observations by the {\it Fermi}-LAT. Both of these sources belong to the FSRQ subclass of blazars. To date, no rapid GeV $\gamma-$ray variability has been detected in any BL Lac by the {\it Fermi}-LAT.
\begin{figure*}
\centering
\includegraphics[width=15.5cm, height=13cm]{lc_lat}
\caption{\label{fig:lc_lat} Panel (a): The one-day binned light curve of BL Lac in the $\gamma-$ray band (0.1$-$500 GeV) covering a period of $\sim$ 35 days. Panel (b): The variation of $\gamma-$ray photon index with time; the blue and red horizontal lines indicate the average power-law index during the period considered and the power-law index mentioned in the 4FGL catalog, respectively. Panel (c): The 3-hr binned $\gamma-$ray light curve of BL Lac. Panel (d): The arrival time distribution of photons with energies greater than 10 GeV. The vertical red lines indicate the period of the major $\gamma-$ray flare.}
\end{figure*}
The availability of data from the {\it Fermi}-LAT \citep{2009ApJ...697.1071A} observations provides ample opportunities to find evidence of short-timescale variations in more blazars. Our motivation here is to find rapid $\gamma-$ray variations in the BL Lac subclass of blazars using the {\it Fermi}-LAT observations. BL Lac, at a redshift of $z$=0.069 \citep{1978ApJ...219L..85M}, is the eponym of the BL Lac category of blazars. It is usually categorized as an LBL \citep{2018A&A...620A.185N}, but sometimes it is also classified as an IBL \citep{2011ApJ...743..171A}. During April 2021, the {\it Fermi}-LAT observed the largest $\gamma-$ray flare ever recorded from BL Lac, which allowed us to search for rapid $\gamma-$ray variations in the source. In this work, we report the first-ever detection of minute-timescale $\gamma-$ray variability in any BL Lac object by the {\it Fermi}-LAT.
This paper is organized as follows. In Section~\ref{sec:data} we give an overview of the observations and the data analysis of the {\it Fermi}-LAT. The results of our rapid $\gamma-$ray variability study are given in Section~\ref{sec:res}; Section~\ref{sec:diss} presents the discussion, and the summary of our work is given in Section~\ref{sec:summary}.
\begin{figure}
\centering
\includegraphics[width=9cm, height=8cm]{lc_multi_bands}
\caption{\label{fig:lc_multi_bands} Daily-averaged $\gamma-$ray light curves of BL Lac in different energy bands. The energy range used to generate the light curve is mentioned in each panel.}
\end{figure}
\section{Observations and Data Analysis} \label{sec:data}
\begin{figure*}
\centering
\includegraphics[width=8.5cm, height=6cm]{orbit_lc2} \includegraphics[width=8.5cm, height=6cm]{orbit_lc1}
\caption{\label{fig:lc_lat_A}The top panels are the orbit-binned $\gamma-$ray light curves of BL Lac for 2021 April 23 (left panel) and 2021 April 27 (right panel). The TS values for each time bin in both the light curves are shown as blue bars. The power-law index for each bin is plotted in the bottom panels. The red and blue horizontal lines in the bottom panels represent the 4FGL power-law index (2.2025) and the average photon index on that day, respectively.}
\end{figure*}
\subsection{Gamma-ray observations}
We used the Pass 8 (P8R3) {\it Fermi}-LAT $\gamma-$ray (0.1$-$500 GeV) data of BL Lac from MJD 59305 (2021 April 1) to MJD 59340 (2021 May 6). We analyzed the data following the standard LAT data analysis procedures\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/}} and using the {\fontfamily{qcr}\selectfont Fermitools} \ software package version 2.0.8 with the {\fontfamily{qcr}\selectfont P8R3\_SOURCE\_V3} \ instrument response functions. For our analysis, we chose all the SOURCE class events (evclass=128 and evtype=3) within a circular region of 10 degrees (region of interest; ROI) around the blazar BL Lac. To select the good time intervals, we used the filter ``(DATA\_QUAL$>$0)\&\&(LAT\_CONFIG==1)'' and applied a maximum zenith-angle cut of 90 degrees to avoid the background $\gamma-$rays from the Earth's limb. We employed the unbinned maximum likelihood optimization technique for flux determination and spectral modelling \citep{2009ApJS..183...46A}. Our model file includes all the sources from the {\it Fermi}-LAT Fourth Source Catalog (4FGL; \cite{2020ApJS..247...33A}) within 20 degrees of the source as well as the Galactic and extragalactic isotropic diffuse emission components\footnote{{\fontfamily{qcr}\selectfont gll\_iem\_v07} \ and {\fontfamily{qcr}\selectfont iso\_P8R3\_SOURCE\_V3\_v1} \ , respectively}. During the initial likelihood fit, all the parameters of the sources lying outside the ROI were kept fixed to their values in 4FGL, while the normalization and the spectral parameters of the sources within the ROI were left to vary freely. The normalizations of the diffuse emission components were also left free.
\subsection{X-ray observations}
We generated the X-ray spectrum of BL Lac using the online {\it Swift}-XRT data products generator tool\footnote{\url{https://www.swift.ac.uk/user_objects/}}(for details, see \cite{2009MNRAS.397.1177E}). This tool produces the pile-up corrected source spectrum, the background spectrum and the response files using the HEASOFT version 6.29. The X-ray spectrum of BL Lac was fitted with an absorbed power-law model in the XSPEC version 12.12.0 to obtain the 0.3-10 keV X-ray flux and the photon index. For the fitting, we assumed a fixed Galactic hydrogen column density of n$_H$ = 3.03 $\times$ 10$^{21}$ cm$^{-2}$ \citep{2013MNRAS.431..394W}.
\begin{table*}
\centering
\caption{\label{tab:ffit}Results of the sum of exponential fit to the orbit-binned light curves. }
\begin{tabular}{cccccc}
\hline\hline
Flare & Peak Time (MJD) & Peak Flux ($\times$ 10$^{-6}$ ph cm$^{-2}$ s$^{-1}$) & Rise time (T$_r$) & Decay time (T$_d$) & $\xi$\\\hline
Flare 1 & 59327.19$\pm$0.02 & 13.61$\pm$1.84 & 1.42$\pm$0.32 hr & 0.79$\pm$0.27 hr & -0.28\\
Flare 2 & 59331.18$\pm$0.02 & 19.47$\pm$1.67 & 1.49$\pm$0.29 hr & 2.63$\pm$0.47 hr & 0.28 \\\hline
\end{tabular}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=8.5cm, height=6cm]{fig3a} \includegraphics[width=8.5cm, height=6cm]{fig3b}
\caption{\label{fig:min_scale_var}The top panels are the 2-min (left) and 5-min (right) binned light curves for the orbit E. The TS values for each time bin in both the light curves are shown as blue bars. The power-law index for each bin is plotted in the bottom panels. The red and blue horizontal lines in the bottom panels represent the 4FGL power-law index (2.2025) and the average photon index on that day, respectively.}
\end{figure*}
\section{Results}\label{sec:res}
\subsection{$\gamma-$ray light curve}\label{subsec:lc}
Figure \ref{fig:lc_lat} presents the $\gamma-$ray light curves of BL Lac for a period of about one month that includes the brightest outburst, observed on 2021 April 27. Although the spectral shape of BL Lac is defined as a log-parabola (LP) in the 4FGL catalog \citep{2020ApJS..247...33A}, we generated the light curves by modeling the spectra in each time bin as a simple power-law (PL), since the PL indices have smaller statistical uncertainties than those obtained from the more complex LP model \citep[e.g.][]{Abdo_2011}. Choosing a simple PL model is also more appropriate for this work, as we are probing the shortest-timescale $\gamma-$ray variations and are therefore dealing with lower photon statistics. The arrival time and the energy of the highest energy photon (bottom panel) coming from the source were derived using the tool {\fontfamily{qcr}\selectfont gtsrcprob} \ on the ULTRACLEAN event class (evclass=512).
The maximum 1-day averaged flux (above 100 MeV) was observed on MJD 59331 (2021 April 27) reaching (6.8$\pm$0.3) $\times$ 10$^{-6}$ photons cm$^{-2}$ s$^{-1}$, which is the highest daily binned flux ever observed from this source. The photon index corresponding to this highest flux is 1.90$\pm$0.03, which is slightly harder than the value (2.2025) mentioned in the 4FGL catalog. The 3-hr binned $\gamma-$ray light curve of BL Lac, shown in panel (c) of Figure \ref{fig:lc_lat}, indicates two sharp $\gamma-$ray flares.
To investigate any instrumental uncertainties in the analysis, we also generated the daily-binned light curves of BL Lac in the 0.1-1 GeV, 1-50 GeV, and 50-500 GeV energy bands for the period considered. As seen in Figure \ref{fig:lc_multi_bands}, the light curves in those three energy bands follow a pattern similar to the total 0.1-500 GeV energy band light curve. There are only six data points in the 50-500 GeV light curve, as the source was not significantly (TS $>$ 9) detected in this energy range in the other time bins.
The highest-energy photon, 128 GeV, was recorded with $>$99.99\% probability on MJD 59333 (2021 April 29), which is at the decline phase of the main flare. A similar trend was also seen in 3C 279 where the highest energy photon was observed at the end of the outburst phase \citep{2016ApJ...824L..20A}.
\subsection{Sub-orbital timescale variability}\label{subsec:var}
As shown in panel (c) of Figure \ref{fig:lc_lat}, the source flux exceeded the value of 10$^{-5}$ photons cm$^{-2}$ s$^{-1}$ on two days, namely 2021 April 23 (MJD 59327) and 2021 April 27 (MJD 59331), on three occasions with high photon statistics: MJD 59327.18756 (TS = 798.985), MJD 59331.18756 (TS = 1968.951), and MJD 59331.31256 (TS = 776.058). This allowed us to further resolve the light curve with shorter timescales on these two days. We first generated the light curves with a bin size equal to the orbital period ($\sim$95.4 minutes) of the {\it Fermi}-LAT. The orbit-binned light curves of BL Lac on 2021 April 23 (left) and April 27 (right) are shown in the top panels of Figure \ref{fig:lc_lat_A}. We estimated the shortest flux doubling/halving timescales on these two epochs as follows:
\begin{equation}\label{eq:doubling_time}
F(t_{2}) = F(t_{1})\times2^{\Delta t/\tau}
\end{equation}
Here, $F(t_{1})$ and $F(t_{2})$ are the flux values at times $t_{1}$ and $t_{2}$, respectively, $\Delta t = t_{2}-t_{1}$, and $\tau$ denotes the flux doubling/halving timescale. We observed a flux halving timescale of $\sim$(0.68$\pm$0.22) hr with a significance of $\sim$4$\sigma$ during the decay of the flare on MJD 59327. We also detected a flux doubling timescale of $\sim$(1.14$\pm$0.21) hr with $\sim$6$\sigma$ significance during the rise of the flare on MJD 59331.
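Equation \ref{eq:doubling_time} can be inverted to obtain $\tau$ from two flux measurements. A minimal sketch in Python (the flux values below are illustrative, not the measured ones):

```python
import math

def flux_timescale(f1, f2, dt_hr):
    """Solve F(t2) = F(t1) * 2**(dt/tau) for tau (in hours).

    Positive tau corresponds to flux doubling, negative tau to halving.
    """
    return dt_hr / math.log2(f2 / f1)

# illustrative: flux doubles over 1.14 hr -> tau = 1.14 hr
tau_rise = flux_timescale(1.0e-6, 2.0e-6, 1.14)
# illustrative: flux halves over 0.68 hr -> tau = -0.68 hr (halving)
tau_decay = flux_timescale(2.0e-6, 1.0e-6, 0.68)
```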
To understand the temporal evolution of the flux, we fitted the peaks of the orbit-binned light curves with a sum-of-exponentials function defined as \citep{2010ApJ...722..520A}
\begin{equation}
F(t) = 2 F_0 \left(e^{\frac{t_0-t}{T_r}} + e^{\frac{t-t_0}{T_d}} \right)^{-1}
\end{equation}
Here, $F_0$ is the flux value at time $t_0$ denoting the flare amplitude, $T_r$ is the rise time, and $T_d$ is the decay time of the flare.
The results of the fit are given in Table \ref{tab:ffit}. We also estimated a parameter $\xi = (T_d - T_r)/(T_d + T_r)$ that describes the symmetry of the flares \citep{2010ApJ...722..520A}. For both the flares, we found $-0.3 < \xi < 0.3$ implying that these flares are symmetric.
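The symmetry parameter defined above is a one-line computation; the rise and decay times below are illustrative placeholders, not the fitted values from Table \ref{tab:ffit}:

```python
def flare_symmetry(T_r, T_d):
    # xi = (T_d - T_r) / (T_d + T_r); |xi| < 0.3 indicates a symmetric flare
    return (T_d - T_r) / (T_d + T_r)

# illustrative rise/decay times in hours
xi = flare_symmetry(1.2, 1.8)   # -> 0.2, within the symmetric range
```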
Rapid flux variations with high photon statistics observed on MJD 59327 and MJD 59331 provide us with an opportunity to examine ultra-fast flux variations on the timescale of a few minutes. To detect such rapid flux variations, we generated 2-min, 3-min and 5-min binned light curves for each orbit on these two days. Similar to \cite{2016ApJ...824L..20A} and \cite{2018ApJ...854L..26S}, we searched for minute-scale variability on these two days by fitting a constant flux to each orbit for all three time bins and subsequently computing the probability ($p$-value) of the flux being constant. Since detections of minute-timescale $\gamma-$ray variations are very rare, we conservatively chose the 95\% confidence level ($p$-value smaller than 0.05) as the detection limit for sub-orbital variability.
On MJD 59327, the $p$-values for all the orbits and all the time bins are consistent with a constant flux. Similarly, on MJD 59331, for all the orbits except orbit E, we obtained $p$-values indicating no flux variations for all the time bins.
For orbit E, we detected minute-scale variability in the 2-min binned ($p$=0.0065, $\chi^2$/dof = 27.52/12) and 5-min binned ($p$=0.0202, $\chi^2$/dof = 13.36/5) light curves. However, in the 3-min binned light curve of orbit E the variations were not significant ($p$=0.6339, $\chi^2$/dof = 6.12/8). Our $p$-values are comparable to those found by \cite{2016ApJ...824L..20A} in a single orbit, during a giant flare of blazar 3C 279. The 2-min binned and 5-min binned light curves for orbit E are shown in the top panels in Figure \ref{fig:min_scale_var}. We also searched for the flux doubling/halving timescales in these light curves using Equation \ref{eq:doubling_time}. We found a halving timescale of $\sim$(1$\pm$0.3) minute with $\sim$3.2$\sigma$ significance in the 2-min binned light curve of orbit E.
\subsection{Gamma-ray spectral variability}\label{subsec:spec_var}
We investigated the $\gamma-$ray spectral variability of BL Lac on different time bins, namely daily, orbital and shortest time bins. For all the time bins, the average PL photon indices are somewhat harder than its value (2.2025) in the 4FGL catalog. The average PL photon indices for these time bins are given in Table \ref{tab:indx_var}. We also searched for any correlation between the observed $\gamma-$ray flux and the PL index on these time bins using the Pearson correlation. The results of the correlation study are given in Table \ref{tab:indx_var}. As seen from Table \ref{tab:indx_var}, no significant correlation was found between the $\gamma-$ray flux and PL index on any of the time bins except on the 2-min time bin. We found a significant ($p<$0.01) positive correlation between $\gamma-$ray flux and PL index in the 2-min binned light curve indicating a softer-when-brighter trend. The variation of PL photon index with $\gamma-$ray flux for 2-min binned light curve of orbit E is shown in Figure \ref{fig:flx_indx_2min}.
\begin{table}
\centering
\caption{\label{tab:indx_var}Results of spectral variability of BL Lac on different time bins. Here, $r$ and $p$ represent the Pearson correlation coefficient and the null hypothesis probability, respectively.}
\begin{tabular}{lccc}
\hline\hline
Bin size & Average PL index & \multicolumn{2}{c}{Flux vs PL index} \\\hline
& & r & p \\\hline
1-day & 1.95$\pm$0.02 & -0.16 & 0.349 \\
1-Orbit (flare 1) & 2.11$\pm$0.08 & -0.38 & 0.161 \\
1-Orbit (flare 2) & 1.93$\pm$0.04 & -0.14 & 0.622 \\
2-min (orbit E) & 2.02$\pm$0.11 & 0.76 & 0.003 \\
5-min (orbit E) & 2.06$\pm$0.14 & 0.59 & 0.209 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=8cm, height=7cm]{flx_indx_2min.pdf}
\caption{\label{fig:flx_indx_2min}Variation of PL photon index with $\gamma-$ray flux for the 2-min binned light curve of orbit E. A softer-when-brighter trend is clearly visible. }
\end{figure}
\section{Discussion} \label{sec:diss}
For the first time, rapid (minute-timescale) $\gamma-$ray variability has been detected by {\it Fermi}-LAT in a blazar of the BL Lac category. We observed significant variations in the minute-scale binned $\gamma-$ray light curves, with a halving timescale of $\sim$1 minute, during the historically bright $\gamma-$ray flare from BL Lac on MJD 59331. The detection of such a rapid variability timescale challenges the existing $\gamma-$ray emission models.
The black hole mass for BL Lacertae is 1.7 $\times$ 10$^8$ M$_{\odot}$ \citep{2014Natur.510..126Z} and the corresponding event horizon light-crossing time is $\sim$14 minutes. The detected variability timescale ($\sim$ 1 minute) is much shorter than the event horizon light-crossing time indicating that the enhanced $\gamma-$ray emission is coming from a very compact region within the jet. Such rapid variations could be triggered either by dissipation in a small fraction of the black hole magnetosphere at the base of the jet or by small scale instabilities within the jet \citep{2008MNRAS.384L..19B}. Alternatively, the jet could be much more extended and the emission can come from a localized region in the
jet much smaller than the width of the jet itself. In this scenario too, the innermost region of the jet at the sub-parsec level is responsible for the observed GeV emission.
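The quoted light-crossing time follows from the gravitational radius $GM/c^2$ of the black hole; a back-of-the-envelope check in Python with standard CGS constants:

```python
G     = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
c     = 2.998e10    # speed of light, cm s^-1
M_sun = 1.989e33    # solar mass, g

M_BH = 1.7e8 * M_sun                 # black hole mass of BL Lacertae
t_g_min = G * M_BH / c**3 / 60.0     # gravitational-radius light-crossing time, minutes
# t_g_min ~ 14 minutes, much longer than the ~1 minute variability timescale
```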
The total jet power required to produce the observed $\gamma-$ray luminosity $L_{\gamma} \sim$ 5 $\times$ 10$^{47}$ erg s$^{-1}$ is $L_j \simeq L_{\gamma}/(\eta_j\Gamma^2)$, where $\eta_j \sim 0.1$ is the radiative jet efficiency \citep{2016ApJ...824L..20A}. The jet power should be less than the Eddington luminosity $L_{Edd} \sim$ 2 $\times$ 10$^{46}$ erg s$^{-1}$, which implies that $\Gamma > 16$.
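The lower limit on $\Gamma$ follows from $L_{\gamma}/(\eta_j\Gamma^2) < L_{Edd}$, i.e. $\Gamma > \sqrt{L_{\gamma}/(\eta_j L_{Edd})}$; a quick numerical check using the values quoted above:

```python
L_gamma = 5e47    # observed gamma-ray luminosity, erg/s
L_Edd   = 2e46    # Eddington luminosity, erg/s
eta_j   = 0.1     # radiative jet efficiency

# requiring L_j = L_gamma / (eta_j * Gamma**2) < L_Edd gives:
Gamma_min = (L_gamma / (eta_j * L_Edd)) ** 0.5   # ~15.8, i.e. Gamma > 16
```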
The minimum Doppler factor, $\delta_{min}$, of the $\gamma-$ray emission region can also be estimated numerically using a $\delta-$function approximation for the $\gamma\gamma$ opacity constraint and the detected high energy $\gamma-$ray photons \citep{10.1093/mnras/273.3.583,2010ApJ...716.1178A}. Assuming that the $\gamma-$rays are produced via the SSC scattering process and the target photons for SSC are X-ray photons, a lower limit of the Doppler factor can be calculated as
\begin{equation}
\delta_{min} = \left[\frac{\sigma_T d^2_L (1+z)^2 f_{\epsilon} E_1}{4 \tau m_e c^4}\right]^{1/6}
\end{equation}
where $\sigma_T$ is the Thomson scattering cross section, d$_L$ = 307 Mpc is the luminosity distance\footnote{Assuming $H_0$=71 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M$=0.27, $\Omega_{\Lambda}$=0.73 \citep{2011ApJS..192...16L}} for BL Lac, $f_{\epsilon}$ = 7 $\times$ 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ is the X-ray flux obtained from the {\it Swift}-XRT observation (obsid 00034748061; X-ray photon index = 2.28$\pm$0.08), and $E_1$ = 26 GeV/$m_ec^2$ is the highest energy photon. We estimated $\delta_{min}$ to be $\sim$ 15, which is consistent with the $\delta_{min}$ = 13-17 obtained by \cite{2013ApJ...762...92A}.
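The $\delta_{min}$ estimate can be reproduced by direct substitution. A sketch in CGS units; the redshift $z = 0.069$ and the variability timescale $\tau \sim 1$ minute are assumptions of this check rather than values stated in the equation itself:

```python
sigma_T = 6.652e-25            # Thomson cross section, cm^2
d_L     = 307 * 3.086e24       # luminosity distance, cm
z       = 0.069                # assumed redshift of BL Lacertae
f_eps   = 7e-11                # X-ray flux, erg cm^-2 s^-1
m_e_c2  = 8.187e-7             # electron rest energy, erg
c       = 2.998e10             # speed of light, cm/s
E1      = 26e9 * 1.602e-12 / m_e_c2   # 26 GeV in units of m_e c^2
tau     = 60.0                 # assumed variability timescale, s (~1 minute)

delta_min = (sigma_T * d_L**2 * (1 + z)**2 * f_eps * E1
             / (4 * tau * m_e_c2 * c**2)) ** (1.0 / 6.0)   # ~15
```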
Taking $\delta$=16, the detected size of the $\gamma$-ray emitting region $R \leq c \tau \left(\delta/(1+z)\right)$ $\leq 2.69 \times 10^{13}$ cm. For a conical geometry of the jet, the distance of the $\gamma-$ray emission region from the central super-massive black hole is $D \leq (2 c \Gamma^2 \tau)/(1+z)$ $\approx$ 8.62 $\times$ 10$^{14}$ cm, assuming $\Gamma = \delta_{min}$ \citep{Abdo_2011}.
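The size and distance limits above are simple causality estimates; the following sketch reproduces them (again assuming $z = 0.069$ and $\tau \sim 1$ minute):

```python
c, z, tau, delta = 2.998e10, 0.069, 60.0, 16.0

R = c * tau * delta / (1 + z)           # emitting-region size, ~2.7e13 cm
D = 2 * c * delta**2 * tau / (1 + z)    # distance from the black hole, ~8.6e14 cm
```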
Flares seen on the orbit-binned light curves have a symmetric profile and can thus be associated with the crossing time of radiation through the emitting region or can be explained by the superposition of several short-duration flares \citep{2010ApJ...722..520A}.
\section{Summary} \label{sec:summary}
In this work, we report the first detection of minute-timescale GeV $\gamma-$ray variability in the BL Lac category of blazars by {\it Fermi}-LAT. We detected a flux halving timescale of $\sim$ 1 minute from BL Lac on 2021 April 27. This observed short variability timescale requires a minimum bulk Lorentz factor of 16 for the jet power to remain below the Eddington value. Also, the $\gamma\gamma$ transparency argument for that epoch of short-timescale variability requires a minimum Doppler factor of $\sim$ 15. We found a softer-when-brighter trend in the 2-min binned light curve of orbit E.
\begin{acknowledgements}
We thank the referee for his/her critical comments that helped in the improvement of the manuscript. We thank Dr Vaidehi S. Paliya for a fruitful scientific discussion. This work has made use of Fermi data collected from the Fermi Science Support Center (FSSC) supported by the NASA Goddard Space Flight Center. This research has made use of the High-Performance Computing (HPC) resources (\url{https://www.iiap.res.in/?q=facilities/computing/nova}) made available by the Computer Center of the Indian Institute of Astrophysics, Bangalore. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
\end{acknowledgements}
\bibliographystyle{aa}
2210.00670
\section{Introduction}\label{intro}
Szemer\'edi's famous theorem \cite{S} states that any set $S$ of integers with positive
(upper) density must necessarily contain arbitrarily long arithmetic
progressions. Quantitative versions have been obtained by several authors, first by
Roth~\cite{Roth} for three-term arithmetic progressions and by Gowers~\cite{G} in general,
with the current best bounds due to Bloom and Sisask~\cite{BloomSisask}, Green and
Tao~\cite{GT}, and Gowers~\cite{G}. More generally, one can consider polynomial
progressions $x, x+P_1(y), \ldots, x + P_m(y)$ for $x,y \in {\mathbb Z}$ with $y\not= 0$
where $P_j \in {\mathbb Z}[{\rm y}]$ is a sequence of polynomials with integer
coefficients and no constant terms (the case of arithmetic progressions corresponding to
linear polynomials). Bergelson and Leibman \cite{BL}, extending earlier work of
Bergelson, Furstenberg and Weiss \cite{BFW}, generalised Szemer\'edi's theorem to
polynomial progressions. Obtaining quantitative versions of Bergelson and Leibman's result
has been a challenging problem and no progress (outside a few results on 2-term
progressions) has been made until very recently.
Inspired by the earlier work of Bergelson, Furstenberg and Weiss, Bourgain obtained a
quantitative lower bound on the count of 3-term polynomial progressions in the setting of
the real field ${\mathbb R}$. He accomplished this by coupling a technique he developed in
his work on arithmetic progressions \cite{BO} with Fourier-analytic methods.
\begin{theorem}[Bourgain \cite{B}]\label{bourgain} Given $\varepsilon>0$, there exists a $\delta(\varepsilon)>0$ such
that for any $N\ge 1$ and measurable set $S \subseteq [0,N]$ satisfying $|S\cap [0,N]|\ge \varepsilon N$, we have
\begin{align}\label{bourgain-3-term}
\bigl|\{(x,y) \in [0,N]\times[0,N^{1/d}] : x, x+y, x + y^d \in S \}\bigr| \ \ge \ \delta N^{1+1/d}.
\end{align}
In particular we have the existence of a triple $x, x+y$ and $x+y^d$ belonging to $S$ with $y$ satisfying
the gap condition $y \ge \delta N^{1/d}$.
\end{theorem}
The bound \eqref{bourgain-3-term} implies a quantitative multiple
recurrence result. Only recently have there been extensions to more
general 3-term progressions $x, x +P_1(y), x + P_2(y)$; see the work
of Durcik, Guo, and Roos \cite{D+} when $P_1(y) = y$ and general $P_2$
and of Chen, Guo, and Li \cite{CGL} for general
$P_1, P_2 \in {\mathbb R}[{\rm y}]$ with distinct degrees. We also refer to \cite{CDR} where the square-corner configurations of the form $(x_1, x_2), (x_1 + y, x_2), (x_1, x_2 + y^2)$ were studied for subsets in $\RR^2$. The
methods in these papers, using delicate oscillatory integral operator
bounds, seem limited to 3-term progressions. Finally, we mention a recent expository paper \cite{DR}, which also recovers Theorem \ref{bourgain}.
In another direction, Bourgain and Chang \cite{BC} gave quantitative
bounds for 3-term progressions of the form $x, x+y, x+y^2$ in the
setting of finite fields ${\mathbb F}_q$. This result was extended to
more general 3-term polynomial progressions by Peluse \cite{PEL-field}
and Dong, Li, and Sawin \cite{DLS}. The techniques in these papers,
using a Fourier-analytic approach which relies on sophisticated
exponential sum bounds over finite fields, also seem limited to 3-term
progressions.
By using new ideas in additive combinatorics, by-passing the need of inverse theorems for Gowers' uniformity norms
of degree greater than 2,
Peluse \cite{PEL0} recently made a significant advance, giving quantitative bounds for general polynomial progressions
$x, x+P_1(y), \ldots, x+P_m(y)$ in ${\mathbb F}_q$
where $\{P_j\} \subseteq {\mathbb Z}[{\rm y}]$ are linearly independent over ${\mathbb Q}$.
Inspired by this work, Peluse and Prendiville \cite{PP} obtained the first quantitative bounds for 3-term polynomial progressions
in the setting of the integers ${\mathbb Z}$. This has been extended recently to general polynomial progressions
$x, x+P_1(y), \ldots, x + P_m(y)$ with $P_j \in {\mathbb Z}[{\rm y}] $ having distinct
degrees by Peluse \cite{PEL}. So although
the first quantitative bounds for polynomial progressions were made in the setting of the real field ${\mathbb R}$,
we have seen major advances in both the finite field ${\mathbb F}_q$ and integer ${\mathbb Z}$ settings by employing
new ideas in additive combinatorics.
One purpose of this paper is to rectify this situation for the continuous setting by establishing quantitative bounds
for general polynomial progressions in the real field ${\mathbb R}$, bringing it in line with the recent advances
in the finite field and integer settings.
Another purpose is to illustrate how one can marry these new ideas in additive combinatorics with other ideas,
notably from the work of Krause, Mirek and Tao \cite{K+}, to obtain compactness results for general multilinear polynomial
averaging operators which have implications for problems in euclidean harmonic analysis. These ideas and arguments
are robust enough to allow us to obtain quantitative bounds for polynomial progressions in a general locally compact topological field.
\begin{theorem}\label{thm:main} Let $\KK$ be a locally compact topological field with Haar measure $\mu$.
Let ${\mathcal P} = \{P_1, \ldots, P_m\}$ be a sequence of polynomials in $\KK[{\rm y}]$ with distinct degrees and
no constant terms and let $d$ denote the largest degree among the polynomials in ${\mathcal P}$. When $\KK$
has positive characteristic, we assume the characteristic is larger than $d$.
For any $\varepsilon>0$, there exists a $\delta(\varepsilon, {\mathcal P}) > 0$ and $N(\varepsilon, {\mathcal P}) \ge 1$ such that
for any $N \ge N(\varepsilon, {\mathcal P})$ and measurable set $S \subseteq \KK$ satisfying $\mu(S \cap B_N) \ge \varepsilon N$, we have
\begin{align}\label{mult-recurrence}
\mu\bigl(\{(x,y) \in B_N \times B_{N^{1/d}} : x, x+P_1(y),\ldots, x + P_m(y) \in S \}\bigr) \ \ge \ \delta N^{1+1/d}.
\end{align}
In particular we have the existence of a progression $x, x+P_1(y), \ldots, x+ P_m(y)$ belonging to $S$ with $y$ satisfying
the gap condition $|y| \ge \delta N^{1/d}$. The proof will show that we can take $\delta = \varepsilon^{C\varepsilon^{-2m-2}}$
for some $C = C_{\mathcal P} > 0$ and $N(\varepsilon, {\mathcal P}) = \varepsilon^{-C' \varepsilon^{-2m-2}}$ for a slightly
larger $C' > C_{\mathcal P}$.
\end{theorem}
When $\KK = {\mathbb R}$ is the real field, Theorem \ref{thm:main} extends the work in \cite{B}, \cite{D+} and \cite{CGL}
from 3-term polynomial progressions to general polynomial progressions albeit for large $N$, depending on
$\varepsilon$.
When $\KK={\mathbb C}$, Theorem \ref{thm:main}
represents the first known results for complex polynomial progressions. The absolute value $|\cdot|$ used in
the statement of Theorem \ref{thm:main} is normalised so that we can express the result in this generality
(see Section \ref{local-fields}). For any sequence of complex polynomials
$\{P_j\} \subseteq {\mathbb C}[{\rm z}]$ with distinct degrees and $P_j(0) = 0$, Theorem \ref{thm:main}
has the following consequence: given $\varepsilon>0$, there is a $\delta >0$ such that for sufficiently large $N$
and any set $S$ in the complex plane satisfying $|S\cap {\mathbb D}_N| \ge \varepsilon N^2$, we can find a
progression of the form $w, w+ P_1(z), \ldots, w + P_m(z)$ lying in $S$ such that $|z| \ge \delta N^{2/d}$.
Important in our analysis are certain properties for $m+1$ linear forms formed from our collection
${\mathcal P} = \{P_1, \ldots, P_m\} \subseteq \KK[{\rm y}]$ of $m$ polynomials with distinct degrees,
say $1\le \deg(P_1) < \ldots < \deg(P_m) =: d$. Let $N \ge 1$ and consider the form
$$
\Lambda_{\mathcal P; N}(f_0,\ldots, f_m):=
\frac{1}{N^d} \int_{\KK^2}f_0(x)\prod_{i=1}^mf_{i}(x-P_i(y))d\mu_{[N]}(y)d\mu(x).
$$
Here $d\mu_{[N]}(y) = N^{-1} \ind{B_N(0)}(y) d\mu(y)$ is the normalised measure on the ball $B_N(0)$
(we will describe notation used in the paper in Section \ref{sec:2}).
The key result in the proof of Theorem \ref{thm:main} is the following $L^{\infty}$ inverse theorem
for $\Lambda_{\mathcal P; N}$ which is of independent interest.
\begin{theorem}[Inverse theorem for $(m+1)$-linear forms]
\label{thm:inverse-informal}
With the set-up above, let
$f_0, f_1,\ldots, f_m$ be $1$-bounded functions supported
on a ball $B\subset \KK$ of measure $N^d$.
Suppose that
\begin{align*}
|\Lambda_{\mathcal P; N}(f_0,\ldots, f_m)|\ge\delta.
\end{align*}
Then there exists $N_1\simeq \delta^{O_{\mathcal P}(1)}N^{\deg(P_1)}$ such that
\begin{align*}
N^{-d}\big\| \mu_{[N_1]}*f_1\big\|_{L^1(\KK)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)}.
\end{align*}
\end{theorem}
The main application of Theorem \ref{thm:inverse-informal} for us will be to prove a precise structural
result for multilinear polynomial operators of the form
$$
A_N^{\mathcal P}(f_1,\ldots, f_m)(x) \ = \ \int_\KK f_1(x + P_1(y)) \cdots f_m(x + P_m(y)) \, d\mu_{[N]}(y).
$$
We will use ideas in the recent work of Krause, Mirek and Tao \cite{K+}
to accomplish this and consequently, we will be able to establish the following important Sobolev estimate.
\begin{theorem}[A Sobolev inequality for $A_N^{\mathcal P}$]
\label{sobolev-informal}
Let $1<p_1,\ldots, p_m<\infty$ satisfying
$\frac{1}{p_1}+\ldots+\frac{1}{p_m}=1$ be given. Then
for $N_j\simeq \delta^{O_{\mathcal P}(1)}N^{\deg(P_j)}$, we have
\begin{align*}
\|A_N^{\mathcal P}(f_1,\ldots,f_{j-1}, (\delta_0-\varphi_{N_j})*f_j,f_{j+1}\ldots, f_m)\|_{L^1(\KK)}
\lesssim \delta^{1/8}
\prod_{i=1}^{m}
\|f_i\|_{L^{p_i}(\KK)},
\end{align*}
provided $N\gtrsim \delta^{-O_{\mathcal P}(1)}$.
Here $\varphi_{N_j}$ is a smooth cut-off function such that ${\widehat{\varphi_{N_j}}}(\xi) \equiv 1$ for $\xi \in B_{{N_j}^{-1}}(0)$.
\end{theorem}
Following an argument of Bourgain in \cite{B} we will show how
Theorem \ref{sobolev-informal} implies Theorem \ref{thm:main}. Versions of Theorem \ref{sobolev-informal} for two real
polynomials $\{P_1, P_2\} \subseteq {\mathbb R}[{\rm y}]$ were established in \cite{B}, \cite{D+} and \cite{CGL} using
delicate oscillatory integral operator bounds. Our arguments are much more elementary in nature and do not
require deep oscillatory integral/exponential sum/character sum bounds outside a standard application of van der Corput
bounds (see \cite{FatS}) when $\KK = {\mathbb R}, {\mathbb C}$ or Hua's exponential sum bound \cite{H} (which extends Mordell's classical bound
from the finite field setting to complete exponential sums over ${\mathbb Z}/p^m{\mathbb Z}$). Furthermore
the Sobolev inequalities in \cite{D+} and \cite{CGL} were only established for certain sparse sequences of scales $N$. The
bound in Theorem \ref{sobolev-informal} holds for all sufficiently large scales $N$.
The Sobolev bound in Theorem \ref{sobolev-informal} potentially has many other applications.
See \cite{B} for a discussion on the implications of Theorem \ref{sobolev-informal} to compactness properties of
the multilinear operator $A_N^{\mathcal P}$.
Pointwise convergence results for multilinear polynomial averages
are common applications of such Sobolev bounds. See \cite{CGL} where the Sobolev inequality is
used to prove the existence of polynomial progressions in sets of sufficiently large Hausdorff dimension.
See also \cite{K}, \cite{LP}, \cite{Kel1}, \cite{Kel2} and \cite{CLP}.
Our results require the scales $N$ to be large. It would be interesting, for various applications,
to establish these results for small scales as well.
\section{Structure of the paper}\label{paper} After a review of analysis in the setting of locally
compact topological fields, including some essential but basic oscillatory integral bounds, we set up some
notation and detail some
tools involving the Gowers uniformity norms. In Section \ref{preliminaries} we give some preliminary results
necessary to carry out the core arguments. In Section \ref{sec:inverse} we give the proof of Theorem \ref{thm:inverse-informal}
which is based on a PET (polynomial exhaustion technique) induction scheme and a degree lowering argument
developed by the third author in earlier work.
In Section \ref{sec:sobolev} we will prove Theorem \ref{sobolev-informal}. Finally in Section \ref{appendix},
we show how Theorem \ref{thm:main} follows as a consequence of Theorem \ref{sobolev-informal}.
\section{Review of basic analysis on locally compact topological fields}\label{local-fields}
Let $\KK$ be a locally compact topological field with a nondiscrete topology. Such fields have
a unique (up to a positive multiple) Haar measure $\mu$. They also carry a nontrivial absolute
value $|\cdot|$ such that the corresponding balls $B_r(x) = \{y \in \KK: |y-x|\le r\}$ generate the topology.
Recall that an absolute value on a field $\KK$ is a map $|\cdot| : \KK \to {\mathbb R}^{+}$ satisfying
$$
(a) \ |x| = 0 \ \Leftrightarrow \ x = 0, \ \ \ (b) \ |xy| = |x||y| \ \ \ {\rm and} \ \ (c) \ |x+y| \le C (|x| + |y|)
$$
for some $C\ge 1$. It is {\it nontrivial} if there is an $x\not = 0$ such that $|x| \not = 1$.
Two absolute values $|\cdot|_1$ and $|\cdot|_2$ are said to be {\it equivalent} if there is a $\theta> 0$
such that $|x|_2 = |x|_1^{\theta}$ for all $x\in \KK$. Equivalent absolute values give the same topology. There
is always an equivalent absolute value such that the triangle inequality $(c)$ holds with $C = 1$. If $|\cdot|$ satisfies
the stronger triangle inequality $(c') \, |x+y| \le \max(|x|,|y|)$, we say that $|\cdot|$ is non-archimedean. Note
that if $|\cdot|$ is non-archimedean, then all equivalent absolute values are non-archimedean. The field $\KK$ is
said to be {\it non-archimedean} if the underlying absolute value (and hence all equivalent ones) is non-archimedean.
Otherwise we say $\KK$ is archimedean.
When $\KK$ is archimedean, then it is isomorphic to the real ${\mathbb R}$ or complex ${\mathbb C}$ field with the
usual topology. In this case Haar measure is a multiple of Lebesgue measure. When $\KK$ is non-archimedean,
then the {\it ring of integers} $o_\KK := \{x \in \KK: |x|\le 1\}$
and the unique maximal ideal $m_\KK := \{x \in \KK : |x| < 1\}$ do not depend on the choice of absolute value
(it is invariant when we pass to an equivalent absolute value).
For any $\KK$, we normalise Haar measure
so that $\mu(B_1(0)) = 1$.
When $\KK$ is non-archimedean, the unique maximal ideal $m_\KK = (\pi)$ is principal and we call any generating
element $\pi$ a {\it uniformizer}. Furthermore the residue field $k := o_\KK/m_\KK$ is finite, say with $q$ elements.
For $x\in \KK$, there is a unique $n\in {\mathbb Z}$ such that $x = \pi^n u$ where $u$ is a unit.
We can go further and expand any
$x\in \KK$ as a Laurent series in $\pi$; $x = \sum_{j\ge -L} x_j \pi^j$ where each $x_j$ belongs to the residue field $k$.
If $x_{-L} \not=0$, then $x = \pi^{-L} u$ where $u = \sum_{j\ge -L} x_j \pi^{j+L}$ is a unit.
There is a choice of (equivalent) absolute value $|\cdot|$ such that $\mu(B_r(x)) \simeq r$ for all $r>0$ and $x\in \KK$.
When $\KK= {\mathbb R}$, we have $|x| = x \, {\rm sgn}(x)$ and when $\KK = {\mathbb C}$, we have $|z| = z{\overline{z}}$.
When $\KK$ is non-archimedean, then the absolute value $|x| := q^{-m}$ where $x = \pi^m u$ and $u$ a unit
has the property that its balls satisfy
$\mu(B_r(x)) = q^n$ where $q^n \le r < q^{n+1}$ and so $\mu(B_r(x)) \simeq r$. We choose the absolute value with
this normalisation.
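For $\KK = {\mathbb Q}_p$, this normalised absolute value is $|x| = p^{-v_p(x)}$, with $v_p$ the $p$-adic valuation. A minimal illustrative sketch of the valuation, the multiplicativity $(b)$, and the non-archimedean inequality $(c')$:

```python
from fractions import Fraction

def v_p(x, p):
    """p-adic valuation of a nonzero rational x = p^v * (a/b), p dividing neither a nor b."""
    x = Fraction(x)
    a, b, v = x.numerator, x.denominator, 0
    while a % p == 0:
        a //= p
        v += 1
    while b % p == 0:
        b //= p
        v -= 1
    return v

def abs_p(x, p):
    # normalised p-adic absolute value: |p|_p = 1/p, |u|_p = 1 for units u
    return 0.0 if Fraction(x) == 0 else float(p) ** (-v_p(x, p))
```

For instance, $|12|_2 = 2^{-2} = 1/4$ and $|1/3|_3 = 3$.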
We will need a couple of simple change-of-variables formulae which we will use again and again:
$$
\int_\KK f(x + y) \, d\mu(x) \ = \ \int_\KK f(x) \, d\mu(x) \ \ {\rm and} \ \
\int_\KK f(y^{-1} x) \, d\mu(x) \ = \ |y| \, \int_\KK f(x) \, d\mu(x).
$$
The first follows from the translation invariance of the Haar measure $\mu$. For the second formula, the
measure $E \to \mu(yE)$ defined by an element $y\in \KK$ is translation-invariant and so by the uniqueness
of Haar measure, we have $\mu(yE) = {\rm mod}_{\mu}(y) \mu(E)$ for some nonnegative number ${\rm mod}_{\mu}(y)$,
the so-called {\it modulus} of the measure $\mu$. In fact $|y| := {\rm mod}_{\mu}(y)$ defines the absolute value
with the desired normalisation whose balls $B_r(x)$ satisfy $\mu(B_r(x)) \simeq r$. This proves the second change
of variables formula. There is one additional, more sophisticated, nonlinear change of variable formula which we will
need at one point but we will justify this change of variables at the time.
The (additive) character group of $\KK$ is isomorphic to itself. Starting with any non-principal character ${\rm e}$
on $\KK$, all other characters $\chi$ can be identified with an element $y\in \KK$ via $\chi(x) = {\rm e}(yx)$. We fix
a convenient choice for ${\rm e}$; when $\KK={\mathbb R}$, we take ${\rm e}(x) = e^{2\pi i x}$. When $\KK$
is non-archimedean, we choose ${\rm e}$ so that ${\rm e} \equiv 1$ on $o_\KK$ and nontrivial on $B_q(0)$; that is,
there is an $x_0$ with $|x_0| = q$ such that ${\rm e}(x_0) \not= 1$. The choice of ${\rm e}$ on ${\mathbb C}$
does not really matter but a convenient choice is ${\rm e}(z) = e^{2\pi i \Rea{z}}$.
We define the Fourier transform
$$
{\widehat{f}}(\xi) \ = \ \int_{\KK} f(x) {\rm e}(-\xi x) \, d\mu(x).
$$
Plancherel's theorem and
the Fourier inversion formula hold as in the real setting.
\subsection{An oscillatory integral estimate} For $P(x) = a_d x^d + \cdots + a_1 x \in \KK[{\rm x}]$, we will use the following oscillatory integral bound:
\begin{align}\label{osc-int-est}
|I(P)| \ \le \ C_d \, [\max_j |a_j| ]^{-1/d} \ \ \ {\rm where} \ \ \ I(P) \ = \ \int_{B_1(0)} {\rm e}(P(x)) \, d\mu(x).
\end{align}
When $\KK = {\mathbb R}$, it is a simple matter to deduce the bound \eqref{osc-int-est} from general oscillatory bounds due to van der Corput (see \cite{FatS}).
When $\KK = {\mathbb Q}_p$ is the $p$-adic field, then
$$
I(P) \ = \ p^{-s} \sum_{x=0}^{p^s -1} e^{2\pi i Q(x)/p^s} \ \ \ {\rm where} \ \ \ p^s = \max_j |a_j| \ \ {\rm and} \ \
Q(x) = b_d x^d + \cdots + b_1 x \in {\mathbb Z}[{\rm x}]
$$
satisfies ${\rm gcd}(b_d, \ldots, b_1, p) = 1$; hence a classical result of Hua~\cite{H} implies $|I(P)| \le C_d p^{-s/d}$
which is \eqref{osc-int-est} in this case. It is natural to extend Hua's bound to other non-archimedean fields; see
for example \cite{W-jga} where character sums are treated over general Dedekind domains
which in particular establishes \eqref{osc-int-est} for any non-archimedean field $\KK$ when the characteristic of $\KK$ (if positive) is larger than $d$.
It is not straightforward to apply van der Corput bounds when $\KK = {\mathbb C}$. However we can see
the bound \eqref{osc-int-est} for both $\KK = {\mathbb R}$ and $\KK = {\mathbb C}$ as a consequence of the following general bound due to
Arkhipov, Chubarikov and Karatsuba \cite{ACK}: let
$P \in {\mathbb R}[X_1, \ldots, X_n]$ be a real polynomial of degree $d$ in $n$ variables. If ${\mathbb B}^n$ denotes the unit ball
in ${\mathbb R}^n$, then
\begin{align}\label{ACK}
\Bigl| \int_{{\mathbb B}^n} e^{2\pi i P({\underline{x}})} \, d{\underline{x}} \Bigr| \ \le \ C_{d,n} \, H(P)^{-1} \ \ \ {\rm where} \ \ \
H(P) = \min_{{\underline{x}}\in {\mathbb B}^n} \max_{\alpha} |\partial^{\alpha} P({\underline x})|^{1/|\alpha|}.
\end{align}
A simple equivalence of norms argument shows that $H(P) \ge c_d [\max_{\alpha} |a_{\alpha}|]^{1/d}$ where
$P({\underline{x}}) = \sum_{\alpha} a_{\alpha} {\underline{x}}^{\alpha}$ and $d$ is the degree of $P$.
Hence \eqref{ACK} implies \eqref{osc-int-est} when $\KK = {\mathbb R}$. When $\KK = {\mathbb C}$ and
$f(z) = a_d z^d + \cdots a_1 z \in {\mathbb C}[X]$, write $f(x+iy) = P(x,y) + i Q(x,y)$ and note that
$$
\int_{B_1(0)} {\rm e}(f(z)) \, dz \ = \ \int_{{\mathbb B}^2} e^{2\pi i P(x,y)} \, dx dy
$$
for the choice of character ${\rm e}(z) = e^{2\pi i \Rea{z}}$.
From the Cauchy--Riemann equations, we have $H(P) \simeq_d \min_{|z|\le 1} \max_k |f^{(k)}(z)|^{1/2k}
\ge c_d [\max_j |a_j|]^{1/2d}$ (recall we are using the absolute value $|z| = z{\overline{z}}$ on ${\mathbb C}$)
and so \eqref{ACK} implies \eqref{osc-int-est} with exponent $1/2d$ in this case.
There is an alternative
argument which establishes \eqref{osc-int-est} with the exponent $1/d$ when $\KK = {\mathbb C}$ but this is unimportant
for our purposes.
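The decay rate in \eqref{osc-int-est} can be checked numerically when $\KK = {\mathbb R}$: for $P(x) = \lambda x^d$ the bound predicts $|I(P)| \simeq \lambda^{-1/d}$, so with $d = 3$ multiplying $\lambda$ by $8$ should roughly halve $|I(P)|$. An illustrative sketch (trapezoidal quadrature over $[0,1]$; the specific $\lambda$ values and grid size are arbitrary):

```python
import numpy as np

def osc_integral(lam, d=3, n=2_000_001):
    # |integral over [0,1] of e(lam * x**d)| with e(t) = exp(2*pi*i*t),
    # computed by the trapezoidal rule on a uniform grid
    x = np.linspace(0.0, 1.0, n)
    f = np.exp(2j * np.pi * lam * x**d)
    dx = x[1] - x[0]
    return abs((f.sum() - 0.5 * (f[0] + f[-1])) * dx)

# predicted ratio ~ 8**(-1/3) = 0.5
ratio = osc_integral(8000.0) / osc_integral(1000.0)
```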
\section{Some notation and basic tools}
\label{sec:2}
By a {\it scale} $N$, we mean a positive number when $\KK$ is archimedean; when $\KK$ is non-archimedean,
it denotes a discrete value $N = q^k, \, k\in {\mathbb Z}$, a power of the cardinality $q$ of the residue field $k$.
When $N$ is a scale, we denote by
$[N] := B_N(0)$ the ball with
centre $0$ and radius $N$. In this case, we have $\mu([N]) \simeq N$ (with equality in the non-archimedean case)
by our normalisations of the
absolute value $|\cdot|$ and Haar measure $\mu$.
An {\it interval} $I$ is a ball $I = B_{r_I}(x_I)$ with some centre $x_I \in \KK$
and radius $r_I>0$.
For an interval $I$, we associate the measure
$$
d\mu_I(x) \ = \ \frac{1}{\mu(I)} \mathbbm{1}_I(x) \, d\mu(x).
$$
For an interval $I$, we define the Fej\'er kernel $\kappa_I(x) = \mu(I)^{-2} \mathbbm{1}_I * \mathbbm{1}_{-I} (x)$ and
the corresponding measure $d\nu_I(x) = \kappa_I(x) d\mu(x)$. When $I = [N]$ for some scale $N$, we have
$-I = I$ and so $\kappa_{[N]}(x) = N^{-2} \mathbbm{1}_{[N]}*\mathbbm{1}_{[N]}(x)$. Furthermore when $\KK$ is non-archimedean,
we have $\kappa_{[N]}(x) = N^{-1} \mathbbm{1}_{[N]}(x)$ and so $d\nu_I = d\mu_I$ in this
case. When $\KK = {\mathbb R}$ and $I = [0,N]$, we have $\kappa_I(x) = N^{-1}(1 - |x|/N)$ when $|x|\le N$ and zero otherwise.
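The closed form for $\kappa_I$ on ${\mathbb R}$ can be verified by direct discrete convolution; the following sketch (with an arbitrary choice of $N$ and grid spacing) compares $N^{-2}\mathbbm{1}_I * \mathbbm{1}_{-I}$ against the tent function $N^{-1}(1 - |x|/N)$:

```python
import numpy as np

N, dx = 2.0, 1e-3
n = int(round(N / dx)) + 1
x = np.linspace(0.0, N, n)
ind = np.ones_like(x)                 # samples of the indicator of I = [0, N]
conv = np.convolve(ind, ind) * dx     # (1_I * 1_{-I}) sampled on [-N, N]
kappa = conv / N**2                   # Fejer kernel kappa_I

grid = np.linspace(-N, N, conv.size)
tent = np.maximum(0.0, 1.0 - np.abs(grid) / N) / N   # N^{-1}(1 - |x|/N)_+
max_err = float(np.max(np.abs(kappa - tent)))        # O(dx) discretisation error
```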
We now give precise notation which we will use throughout the paper.
\subsection{Basic notation}
As usual $\Z$ will denote the ring of rational integers.
\begin{enumerate}[label*={\arabic*}.]
\item We use $\Z_+:=\{1, 2,\ldots\}$ and $\N := \Z_+\cup\{0\}$ to
denote the sets of positive integers and non-negative integers,
respectively.
\item For any $L\in\RR_+$ we will use the notation
\[
\bra{L}_0 := \{ \ell \in \N : \ell \le L \} \ \ \ {\rm and} \ \ \ \bra{L} := \{\ell \in \Z_+ : \ell \le L\}.
\]
\item We use $\ind{A}$ to denote the indicator function of a set $A$. If $S$
is a statement we write $\ind{S}$ to denote its indicator, equal to $1$
if $S$ is true and $0$ if $S$ is false. For instance $\ind{A}(x)=\ind{x\in A}$.
\end{enumerate}
\subsection{Asymptotic notation and magnitudes}
The letters $C,c, C_0, C_1, \ldots>0$ will always denote
absolute constants, however their values may vary from occurrence to
occurrence.
\begin{enumerate}[label*={\arabic*}.]
\item For two nonnegative quantities
$A, B$ we write $A \lesssim_{\delta} B$ ($A \gtrsim_{\delta} B$) if
there is an absolute constant $C_{\delta}>0$ (which possibly depends
on $\delta>0$) such that $A\le C_{\delta}B$ ($A\ge C_{\delta}B$). We
will write $A \simeq_{\delta} B$ when $A \lesssim_{\delta} B$ and
$A\gtrsim_{\delta} B$ hold simultaneously. We will omit the subscript
$\delta$ if irrelevant.
\item For a function $f:X\to \C$ and positive-valued
function $g:X\to (0, \infty)$, write $f = O(g)$ if there exists a
constant $C>0$ such that $|f(x)| \le C g(x)$ for all $x\in X$. We will
also write $f = O_{\delta}(g)$ if the implicit constant depends on
$\delta$. For two functions $f, g:X\to \C$ such that $g(x)\neq0$ for
all $x\in X$ we write $f = o(g)$ if $\lim_{x\to\infty}f(x)/g(x)=0$.
\end{enumerate}
\subsection{Polynomials} Let $\KK[{\rm t}]$ denote the space of all
polynomials in one indeterminate $\rm t$ with
coefficients in $\KK$. Every polynomial $P\in \KK[{\rm t}]$ can be written as a formal power series
\begin{align}
\label{eq:29}
P(t)=\sum_{j=0}^{\infty}c_jt^j,
\end{align}
where all but finitely many coefficients $c_j\in \KK$ vanish.
\begin{enumerate}[label*={\arabic*}.]
\item We
define the degree of $P\in \KK[{\rm t}]$ by
\begin{align*}
\deg(P):=&\max\{j\in \N: c_j\neq0\}.
\end{align*}
\item A finite collection
$\mathcal P\subset \KK[{\rm t}]$ has degree $d\in\N$, if $d=\max\{\deg(P): P\in\mathcal P\}$.
\item For a polynomial $P\in \KK[{\rm t}]$ and $j\in \N$ let
$\coe_j(P)$ denote the $j$-th coefficient of $P$. We also let $\ell(P)$
denote the leading coefficient of
$P$; that is, for $P$ as in \eqref{eq:29} we have $\coe_j(P)=c_j$ for
$j\in \N$ and
$\ell(P)=c_{d}$ where $d = \deg{P}$.
\end{enumerate}
\subsection{$L^p$ spaces}
$(X, \mathcal B(X), \lambda)$ denotes a measure space $X$ with
$\sigma$-algebra $\mathcal B(X)$ and $\sigma$-finite measure $\lambda$.
\begin{enumerate}[label*={\arabic*}.]
\item
The set of $\lambda$-measurable
complex-valued functions defined on $X$ will be denoted by $L^0(X)$.
\item The set of functions in $L^0(X)$ whose modulus is integrable
with $p$-th power is denoted by $L^p(X)$ for $p\in(0, \infty)$,
whereas $L^{\infty}(X)$ denotes the space of all essentially bounded
functions in $L^0(X)$.
\item We will say that a function $f\in L^0(X)$ is $1$-bounded if $f\in L^{\infty}(X)$ and $\|f\|_{L^{\infty}(X)}\le 1$.
\item For any $n\in\Z_+$ the measure $\lambda^{\otimes n}$ will denote the product measure $\lambda\otimes\ldots\otimes\lambda$ on the product space $X^n$ with the product $\sigma$-algebra $\mathcal B(X)\otimes\ldots \otimes\mathcal B(X)$.
\end{enumerate}
\subsection{Gowers box and uniformity norms}
We will use the Gowers uniformity norm and the Gowers box norm of a function $f$, which are defined in terms
of the multiplicative discrete derivatives $\Delta_{h_1,\ldots, h_s} f(x)$: for $x, h \in \KK$, we set
$\Delta_h f(x) = f(x){\overline{f(x+h)}}$ and iteratively, we define
$$
\Delta_{h_1, \ldots, h_s} f(x) \ = \ \Delta_{h_1}(\Delta_{h_2}( \cdots (\Delta_{h_s} f(x))\cdots))
\ \ \ {\rm where} \ \ x, h_1, \ldots, h_s \in \KK.
$$
When $h = (h_1, \ldots, h_s) \in \KK^s$, we often write $\Delta_{h_1, \ldots, h_s}f(x)$ as $\Delta_h f(x)$ or $\Delta_h^s f(x)$.
For $\omega = (\omega_1, \ldots, \omega_s) \in \{0,1\}^s$, we write $\omega \cdot h := \sum_{i=1}^s \omega_i h_i$
and $|\omega|:= \omega_1 + \cdots + \omega_s$.
If ${\mathcal C} z = {\overline{z}}$ denotes the conjugation operator, we observe that
\begin{align}
\label{product}
\Delta_h f(x) \ = \ \prod_{\omega \in \{0,1\}^s} {\mathcal C}^{|\omega|} f(x + \omega \cdot h).
\end{align}
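For example, when $s=2$ the formula \eqref{product} reads
$$
\Delta_{h_1, h_2} f(x) \ = \ f(x) \, {\overline{f(x+h_1)}} \, {\overline{f(x+h_2)}} \, f(x+h_1+h_2),
$$
a product over the four points $x + \omega_1 h_1 + \omega_2 h_2$, $\omega \in \{0,1\}^2$, with conjugates placed according to the parity of $|\omega|$.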
For any integer $s\ge 1$, we define the Gowers $U^s$ norm of $f$ by
$$
\|f\|_{U^s}^{2^s} \ = \ \int_{\KK^{s+1} }\Delta_{h_1, \ldots, h_s} f(x) \
d\mu(h_1) \cdots d\mu(h_s) d\mu(x).
$$
We note that $\|f\|_{U^2} = \|{\widehat{f}}\|_{L^4}$.
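To see this, let $F(h) := \int_{\KK} f(x){\overline{f(x+h)}} \, d\mu(x)$ denote the autocorrelation of $f$. Integrating first in $x$ and $h_2$ (after the change of variables $y = x+h_2$) yields
$$
\|f\|_{U^2}^4 \ = \ \int_{\KK} |F(h_1)|^2 \, d\mu(h_1),
$$
and since $|{\widehat{F}}(\xi)| = |{\widehat{f}}(\xi)|^2$ (up to the reflection $\xi \mapsto -\xi$, depending on the Fourier convention), Plancherel's theorem gives $\int_{\KK} |F|^2 \, d\mu = \|{\widehat{f}}\|_{L^4}^4$.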
For intervals $I, I_1, \ldots, I_s$, we define the Gowers box norm as
$$
\|f \|_{\square^s_{I_1, \ldots, I_s}(I)}^{2^s} \ = \ \frac{1}{\mu(I)} \int_{\KK^{s+1} }\Delta_{h_1, \ldots, h_s} f(x) \
d\nu_{I_1}(h_1) \cdots d\nu_{I_s}(h_s) d\mu(x) .
$$
From \eqref{product}, we see that
\begin{align}
\label{s+1-s}
\|f\|_{\square_{I_1, \ldots, I_{s+1}}^{s+1}(I)}^{2^{s+1}} \ = \
\int_{\KK}\|\Delta_h f\|_{\square_{I_1, \ldots, I_s}^s(I)}^{2^s}d\nu_{I_{s+1}}(h).
\end{align}
A similar formula relates the Gowers $U^{s+1}$ norm to the Gowers $U^s$ norm.
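Explicitly, since $\Delta_{h_1, \ldots, h_{s+1}} f = \Delta_{h_1, \ldots, h_s}(\Delta_{h_{s+1}} f)$, integrating out the variables $h_1, \ldots, h_s, x$ first gives
$$
\|f\|_{U^{s+1}}^{2^{s+1}} \ = \ \int_{\KK} \|\Delta_h f\|_{U^s}^{2^s} \, d\mu(h).
$$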
\subsection{The Gowers--Cauchy--Schwarz inequality}
When $s\ge 2$, both the Gowers uniformity norm and the Gowers box norm are in fact norms. In particular the triangle
inequality holds. The triangle inequality also holds when $s=1$ and so we have that
\begin{align}\label{triangle-ineq}
\|f + g\|_{U^s} \le \|f\|_{U^s} + \|g\|_{U^s} \ \ \ {\rm and} \ \ \ \|f + g\|_{\square_{I_1, \ldots, I_{s}}^{s}(I)} \le \|f\|_{\square_{I_1, \ldots, I_{s}}^{s}(I)} +
\|g\|_{\square_{I_1, \ldots, I_{s}}^{s}(I)}
\end{align}
holds for every $s\ge 1$. These inequalities follow from a more general inequality which we will find useful.
Let $A$ be a finite set and for each $\alpha \in A$,
let $(X_{\alpha}, du_{\alpha})$ be a probability space. Set $X = \prod_{\alpha\in A} X_{\alpha}$ and let $f : X \to {\mathbb C}$
be a complex-valued function. For any $x^{(0)} = (x_{\alpha}^{(0)})_{\alpha \in A}$ and $x^{(1)} = (x_{\alpha}^{(1)})_{\alpha \in A}$ in $X$ and
$\omega = (\omega_{\alpha})_{\alpha\in A} \in \{0,1\}^A$, we write $x^{(\omega)} = (x_{\alpha}^{(\omega_{\alpha})})_{\alpha\in A}$. We
define the {\it generalised Gowers box norm of $f$ on $X$} as
$$
\|f\|_{\square (X)}^{2^{|A|}} \ = \ \iint_{X^2} \prod_{\omega \in \{0,1\}^A} {\mathcal C}^{|\omega|} f(x^{(\omega)}) \ du(x^{(0)}) \, du(x^{(1)})
$$
where $du$ denotes the product measure $\otimes_{\alpha \in A} du_{\alpha}$. The following lemma is established in
\cite{GT-primes}.
\begin{lemma} [Gowers--Cauchy--Schwarz inequality]
\label{GCS}
With the set-up above, let $f_{\omega} : X \to {\mathbb C}$ for every $\omega \in \{0,1\}^A$. We have
\begin{align}
\label{GCS-ineq}
\Bigl| \iint_{X^2} \prod_{\omega \in \{0,1\}^A} {\mathcal C}^{|\omega|} f_{\omega} (x^{(\omega)}) \ du(x^{(0)}) \, du(x^{(1)}) \Bigr| \ \le \
\prod_{\omega \in \{0,1\}^A} \|f_{\omega}\|_{{\square (X)}}.
\end{align}
\end{lemma}
We will need the following consequence.
\begin{cor}\label{GCS-indep} Let $f : X \to {\mathbb C}$ and for each $\alpha \in A$, suppose
$g_{\alpha}: X \to {\mathbb C}$ is a 1-bounded function that is independent of the $x_{\alpha}$ variable. Then
\begin{align}\label{GCS-ineq-2}
\Bigl| \int_X f(x) \prod_{\alpha \in A} g_{\alpha}(x) du(x) \Bigr|^{2^{|A|}} \ \le \ \int_{X^2} \prod_{\omega\in \{0,1\}^A}
{\mathcal C}^{|\omega|} f(x^{(\omega)}) \, du(x^{(0)}) du(x^{(1)}).
\end{align}
\end{cor}
\begin{proof} For $\omega^0 = (0, \ldots, 0)$, set $f_{\omega^0} = f$ and for $\omega^{\beta} = (\omega_{\alpha})_{\alpha\in A}$ with
$\omega_{\alpha} = 0$ when $\alpha \not= \beta$ and $\omega_{\beta} = 1$, set
$f_{\omega^{\beta}} = {\overline{g_{\beta}}}$. For
all other choices of $\omega \in \{0,1\}^A$, set $f_{\omega} = 1$. Hence
$$
\prod_{\omega\in \{0,1\}^A}{\mathcal C}^{|\omega|} f_{\omega}(x^{(\omega)}) = f(x^{(0)}) \prod_{\alpha \in A} g_{\alpha}(x^{(0)})
$$
since each $g_{\beta}$ is independent of the $x_{\beta}$ variable. Therefore the inequality \eqref{GCS-ineq} implies
$$
\Bigl| \int_X f(x) \prod_{\alpha \in A} g_{\alpha}(x) du(x) \Bigr| \le \prod_{\omega\in \{0,1\}^A}
\|f_{\omega}\|_{{\square (X)}} \le \|f\|_{{\square (X)}}
$$
by the 1-boundedness of each $g_{\alpha}$. This proves \eqref{GCS-ineq-2}.
\end{proof}
\section{Some preliminaries}\label{preliminaries}
In this section, we establish a few useful results which we will need in our arguments.
\subsection{$U^2$-inverse theorem}
We will use the following inverse theorem for the Gowers box norms.
\begin{lemma}[$U^2$-inverse theorem]
\label{2.10}
Let $H_1$ and $H_2$ be two scales and
let $f$ be a 1-bounded function supported in an interval $I$. Then
\begin{equation}\label{U2-FT}
\|f\|_{\square^2_{[H_1],[H_2]}(I)}^4 \ \le \ (H_1 H_2)^{-1} \, \|{\widehat{f}}\|_{L^{\infty}(\KK)}^2.
\end{equation}
\end{lemma}
\begin{proof} We have
$$
\|f\|_{\square^2_{[H_1],[H_2]}(I)}^4 \ = \ \frac{1}{\mu(I)}
{\mathop{\iiint}_{\KK^3}} \Delta_{h_1, h_2} f(x) d\nu_{[H_1]}(h_1) d\nu_{[H_2]}(h_2)
d\mu(x)
$$
$$
= \ {\mathop{\iint}_{\KK^2}} g(h_1, h_2) \, d\nu_{[H_1]}(h_1) d\nu_{[H_2]}(h_2) \ = \
{\mathop{\iint}_{\KK^2}} {\widehat{g}}(\xi_1, \xi_2) \,
{\overline{{\widehat{\nu_{[H_1]}}}(\xi_1) {\widehat{\nu_{[H_2]}}}(\xi_2)}} \, d\mu(\xi_1) d\mu(\xi_2)
$$
where
$$
g(h_1, h_2) \ = \ \frac{1}{\mu(I)} \int_{\KK} \Delta_{h_1, h_2} f(x) \, d\mu(x).
$$
Hence
$$
\|f\|_{\square^2_{[H_1],[H_2]}(I)}^4 \ \le \ \|{\widehat{\nu_{[H_1]}}}\|_{L^1}
\|{\widehat{\nu_{[H_2]}}}\|_{L^1} \ \sup_{{\underline{\xi}}\in \KK^2}
|{\widehat{g}}(\xi_1, \xi_2)|
$$
$$
= \ \frac{H_1^{-1}H_2^{-1}}{\mu(I)} \ \sup_{{\underline{\xi}}\in \KK^2}
\Bigl| {\mathop{\iiint}_{\KK^3}} f_{00}(x) {\overline{f_{10}(x+h_1)}}
{\overline{f_{01}(x+h_2)}} f_{11}(x+h_1+h_2) \, d\mu(x) d\mu(h_1) d\mu(h_2) \Bigr|
$$
where $f_{00}(x) = f(x) {\rm e}(-\xi_1 x - \xi_2 x),$
$$
\ f_{10}(x) \ = \ f(x) {\rm e}(-\xi_1 x), \
f_{01}(x) = f(x) {\rm e}(-\xi_2 x) \ \ {\rm and}
\ \ f_{11}(x) \ = \ f(x).
$$
The final equality follows since $|\widehat{\nu}_{[H_j]}(\xi)| = |\widehat{\mu}_{[H_j]}(\xi)|^2$ and so
\begin{align*}
\|\widehat{\nu}_{[H_j]}\|_{L^1(\KK)} = \| \widehat{\mu}_{[H_j]} \|_{L^2(\KK)}^2 = \| H_j^{-1}\ind{[H_j]} \|_2^2 = H_j^{-1}
\qquad {\rm for} \ j \in \{1,2\}
\end{align*}
by Plancherel's theorem.
Furthermore
$$
{\widehat{g}}(\xi_1, \xi_2) \ = \ \frac{1}{\mu(I)} {\mathop{\iiint}_{\KK^3}} \Delta_{h_1, h_2} f(x) \,
{\rm e}(\xi_1 h_1 + \xi_2 h_2) \, d\mu(h_1) d\mu(h_2) d\mu(x).
$$
Appealing to the Gowers--Cauchy--Schwarz inequality \eqref{GCS-ineq},
we see that
$$
\|f\|_{\square^2_{[H_1],[H_2]}(I)}^4 \ \le \ (\mu(I) H_1 H_2)^{-1} \|f\|_{U^2}^4 \ = \
(\mu(I) H_1 H_2)^{-1}\|{\widehat{f}}\|_{L^4}^4 \ \le \
(H_1 H_2)^{-1} \|{\widehat{f}}\|_{L^{\infty}}^2
$$
as desired.
The last inequality follows from Plancherel's theorem, the 1-boundedness of $f$, and ${\rm supp}(f) \subset I$, which together imply
$$
\|{\widehat{f}}\|_{L^4}^4 \le \|{\widehat{f}}\|_{L^{\infty}}^2 \|{\widehat{f}}\|_{L^2}^2 =
\|{\widehat{f}}\|_{L^{\infty}}^2 \|f\|_{L^2}^2 \le \mu(I) \|{\widehat{f}}\|_{L^{\infty}}^2 .
$$
\end{proof}
\subsection{van der Corput's inequality}
We will need the following useful inequality.
\begin{lemma}[van der Corput's inequality]
\label{lem:vdC}
Let $\mathfrak g \in L^1(\KK)$ and let $J = B_{r_J}(x_J)$ be an interval. Then for any scale $H$, $0<H\le \mu(J)$, we have
\begin{equation}\label{vc-ineq-1}
\bigg|\int_{\KK} \mathfrak g(y)d\mu_J(y)\bigg|^2\leq \frac{C}{\mu(J)}\int_{\KK}\int_{J\cap(J-h)}\Delta_h\mathfrak g(y)
d\mu(y) d\nu_{[H]}(h).
\end{equation}
We can take $C = 4$ when $\KK$ is archimedean.
When $\KK$ is non-archimedean, we can take $C = 1$ and furthermore,
$\ind{J}(y) \ind{J}(y+h) = \mathbbm{1}_{J\cap(J-h)}(y) = \mathbbm{1}_J(y)$ for any $h \in [H]$ so that the above inequality can be expressed as
\begin{equation}\label{vc-ineq-2}
\bigg|\int_{\KK} \mathfrak g(y)d\mu_J(y)\bigg|^2\leq \iint_{\KK^2}\Delta_h\mathfrak g(y) d\mu_{[H]}(h) d\mu_J(y)
\end{equation}
since $d\nu_{[H]} = d\mu_{[H]}$ in this case.
\end{lemma}
\begin{proof}
We define $\mathfrak g_J(y):=\mathfrak g(y)\ind{J}(y)$. By a change of variables and Fubini's theorem we note
\[
\int_{\KK}\mathfrak g(y) d\mu_J(y) = \frac{1}{\mu(J)}\iint_{\KK^2} \mathfrak g_J(y+h) d\mu_{[H]}(h) d\mu(y).
\]
The function $y\mapsto \int_{\KK}\mathfrak g_J(y+h) d\mu_{[H]}(h)$ is supported on the set $J - [H]$ which in turn
lies in $B_{2(r_J + H)}(x_J)$ (in the non-archimedean case, $J-[H] = J$). Hence
by the Cauchy--Schwarz inequality and a change of variables, we conclude that
\begin{align*}
\bigg|\int_{\KK}\mathfrak g(y) d\mu_J(y)\bigg|^2 &=
\frac{1}{\mu(J)^2}\bigg|\iint_{\KK^2} \mathfrak g_J(y+h) d\mu_{[H]}(h) d\mu(y)\bigg|^2\\
&\le 2 \frac{\mu(J)+H}{\mu(J)^2}\iiint_{\KK^3} \mathfrak g_J(y+h_1)\overline{\mathfrak g_J(y+h_2)} d\mu_{[H]}(h_1)
d\mu_{[H]}(h_2) d\mu(y)\\
&= 2 \frac{\mu(J)+H}{\mu(J)^2}\iint_{\KK^2}\kappa_{[H]}(h)\mathfrak g_J(y)\overline{\mathfrak g_J(y+h)} d\mu(h)d\mu(y)\\
&\le 4 \mu(J)^{-1} \int_{\KK}\int_{J\cap(J-h)}\mathfrak g(y)\overline{\mathfrak g(y+h)} d\mu(y) d\nu_{[H]}(h),
\end{align*}
since $\kappa_{[H]}(h)=H^{-2}\int_{\KK}\ind{[H]}(h_1)\ind{[H]}(h+h_1)d\mu(h_1)$.
This gives the desired conclusion.
\end{proof}
\subsection{Preparation for the PET induction scheme}
We now give a simple application of van der Corput's inequality
which will be repeatedly applied in the PET induction scheme.
\begin{lemma}
\label{lem:pet}
Let $c\ge1$ and let $I, J\subset \KK$ be two intervals with $\mu(I) = N_0$. Assume that
$\mathfrak g_1\in L^\infty(\KK)$ and $\mathfrak g_2\in L^\infty(\KK^2)$ are $1$-bounded
functions such that
\begin{align}
\label{eq:65}
\|\mathfrak g_1\|_{L^1(\KK)}\le N_0,
\qquad \text{ and } \qquad
\sup_{y\in \KK}\|\mathfrak g_2(\cdot, y)\|_{L^1(\KK)}\le c N_0.
\end{align}
Suppose $H$ is a scale such that $0<H\le \mu(J)$. When $\KK$ is archimedean, we have
\begin{gather*}
\bigg|\frac{1}{N_0}\iint_{\KK^2} \mathfrak g_1(x)\mathfrak g_2(x, y)d\mu_J(y)d\mu(x)\bigg|^2\\
\le
4 \bigg|\frac{1}{N_0}\iiint_{\KK^3}\mathfrak g_2(x, y)\overline{\mathfrak g_2(x, y+h)}d\mu_J(y)d\nu_{[H]}(h)d\mu(x)\bigg|
+ 8 c \bigg[\frac{\mu([H])}{\mu(J)}\bigg]^{\theta}
\end{gather*}
where $\theta = 1$ when $\KK = {\mathbb R}$ and $\theta = 1/2$ when $\KK = {\mathbb C}$. When
$\KK$ is non-archimedean, this improves to
\begin{gather*}
\bigg|\frac{1}{N_0}\iint_{\KK^2} \mathfrak g_1(x)\mathfrak g_2(x, y)d\mu_J(y)d\mu(x)\bigg|^2\\
\le
\frac{1}{N_0} \iiint_{\KK^3}\mathfrak g_2(x, y)\overline{\mathfrak g_2(x, y+h)}d\mu_J(y)d\mu_{[H]}(h)d\mu(x) .
\end{gather*}
\end{lemma}
\begin{proof}
Applying the Cauchy--Schwarz inequality in the $x$ variable it follows that
$$
\bigg|\frac{1}{N_0} \iint_{\KK^2}\mathfrak g_1(x)\mathfrak g_2(x, y)d\mu_J(y)d\mu(x)\bigg|^2\leq \frac{1}{N_0}
\int_{\KK} \bigg|\int_{\KK} \mathfrak g_2(x, y)d\mu_J(y)\bigg|^2 d\mu(x),
$$
since by \eqref{eq:65} and the $1$-boundedness of $\mathfrak g_1$, we have $\|\mathfrak g_1 \|_{L^2}^2 \le N_0$.
By van der Corput's inequality in Lemma \ref{lem:vdC}, we obtain
\begin{gather*}
\int_{\KK} \bigg|\int_{\KK} \mathfrak g_2(x, y)d\mu_J(y)\bigg|^2d\mu(x)\\
\leq
4 \int_{\KK}\int_{\KK}\kappa_{[H]}(h)\frac{1}{\mu(J)}\int_{J\cap(J-h)} \mathfrak g_2(x, y)\overline{\mathfrak g_2(x, y+h)}d\mu(y)d\mu(h)d\mu(x)
\end{gather*}
when $\KK$ is archimedean. In this case, we have $\mu(J\setminus{[J\cap(J-h)]}) \le 2 \mu([H])$ when $\KK = {\mathbb R}$
and $\mu(J\setminus{[J\cap(J-h)]}) \le 2 \sqrt{\mu([H]) \mu(J)}$ when $\KK = {\mathbb C}$. Hence
\begin{align*}
\frac{4}{N_0} \int_{\KK}\kappa_{[H]}(h)\frac{1}{\mu(J)}\int_{J\setminus (J\cap(J-h))}\int_{\KK}|\mathfrak g_2(x, y)| d\mu(x)d\mu(y)d\mu(h)\le 8 c \bigg[\frac{\mu([H])}{\mu(J)}\bigg]^{\theta}.
\end{align*}
In the last line we used Fubini's theorem and \eqref{eq:65} for $\mathfrak g_2$.
This gives the desired bound when $\KK$ is archimedean.
When $\KK$ is non-archimedean, the bound \eqref{vc-ineq-2} in Lemma \ref{lem:vdC} gives
\begin{gather*}
\frac{1}{N_0} \int_{\KK} \bigg|\int_{\KK} \mathfrak g_2(x, y)d\mu_J(y)\bigg|^2d\mu(x)\\
\leq
\frac{1}{N_0} \iiint_{\KK^3}\mathfrak g_2(x, y)\overline{\mathfrak g_2(x, y+h)}d\mu_J(y)d\mu_{[H]}(h)d\mu(x)
\end{gather*}
which is the desired bound in this case.
\end{proof}
The next result is an essential building block of
the PET induction scheme, which will be employed in Section \ref{sec:inverse}.
\begin{proposition}
\label{prop:pet}
Let $N, N_0 >0$ be two scales, $I$ an interval such that $\mu(I) = N_0$, $m\in\Z_+$, $i_0\in \bra{m}$ and let $\mathcal P:=\{P_1,\ldots, P_m\}$ be a collection of polynomials. Suppose that
$\mathfrak f_0, \mathfrak f_1,\ldots, \mathfrak f_m\in L^0(\KK)$ are
$1$-bounded functions such that $\|\mathfrak f_i\|_{L^1(\KK)}\le N_0$ for every $i\in\bra{m}_0$.
Let $0<\delta\le 1$ and suppose that
\begin{align}
\bigg|\frac{1}{N_0} \iint_{\KK^2}\mathfrak f_0(x)\prod_{i=1}^m\mathfrak f_{i}(x-P_i(y))d\mu_{[N]}(y)d\mu(x)\bigg|\ge\delta.
\end{align}
Then there exists a constant $C\gtrsim_{\mathcal P}1$ such that for all $\delta'\le\delta^4/C$ we have
\begin{align}
\bigg|\frac{1}{N_0} \iint_{\KK^2}\mathfrak f'_0(x)\prod_{i=1}^{m'}\mathfrak f'_{i}(x-P'_i(y))d\mu_{[N]}(y)d\mu(x)\bigg|\gtrsim_C\delta^2,
\end{align}
where $m'< 2m$ and $\mathcal P':=\{P_1',\ldots, P_{m'}'\}$ is a new collection of polynomials such that
\[
\mathcal P'=\{P_1(y)-P_{i_0}(y), P_1(y+h)-P_{i_0}(y),\ldots, P_m(y)-P_{i_0}(y), P_m(y+h)-P_{i_0}(y)\},
\]
for some $\delta'\delta^2N/C^2\le |h|\le \delta'N\le \delta^4N/C$,
where $P_{m'}'(y):=P_m(y)-P_{i_0}(y)$, and $\{\mathfrak f_0',\ldots, \mathfrak f_{m'}'\}:=\{\mathfrak f_1, \overline{\mathfrak f_1},\ldots, \mathfrak f_m, \overline{\mathfrak f_m}\}$ with $\mathfrak f_{m'}':=\mathfrak f_{m}$.
\end{proposition}
\begin{proof}
Let $\I:=\bra{m}$ and let $C\ge 1$ be a large constant to be
determined later. We shall apply Lemma \ref{lem:pet} with
$J=[N]$, the functions $\mathfrak g_1(x)=\mathfrak f_0(x)$ and
$\mathfrak g_2(x, y)=\prod_{i\in\I}\mathfrak f_{i}(x-P_i(y))$, and the
parameter $H=\delta'N$. Note that
$\|\mathfrak g_1\|_{L^{\infty}(\KK)}\le 1$ and
$\|\mathfrak g_2\|_{L^{\infty}(\KK^2)}\le 1$, since
$\|\mathfrak f_i\|_{L^{\infty}(\KK)}\le 1$ for all $i\in\I$. Moreover,
$\mathfrak g_1$ and $\mathfrak g_2$ satisfy \eqref{eq:65}. If
$\delta'\le \delta^4/C$ and $C\ge1$ is sufficiently large, using Lemma
\ref{lem:pet}, we conclude
\begin{align*}
\bigg|\frac{1}{N_0}\iiint_{\KK^3}\mathfrak g_2(x, y)\overline{\mathfrak g_2(x, y+h)}d\mu_{[N]}(y)d\nu_{[H]}(h) d\mu(x)\bigg|
\gtrsim \delta^2.
\end{align*}
By the pigeonhole principle, there exists $h$ with $|h|\ge \delta^2 H/C^2$ such that
\begin{align*}
\bigg|\frac{1}{N_0} \iint_{\KK^2}\mathfrak g_2(x, y)\overline{\mathfrak g_2(x, y+h)}d\mu_{[N]}(y)d\mu(x)\bigg|
\gtrsim \delta^2.
\end{align*}
We make the change of variables $x\mapsto x+P_{i_0}(y)$ to conclude
\begin{align*}
\bigg|\frac{1}{N_0}\iint_{\KK^2}
\prod_{i\in\I}\mathfrak f_{i}(x-P_i(y)+P_{i_0}(y))\overline{\mathfrak f_{i}(x-P_i(y+h)+P_{i_0}(y))}
d\mu_{[N]}(y)d\mu(x)\bigg|\gtrsim\delta^2.
\end{align*}
This completes the proof.
\end{proof}
\section{The $L^\infty$-inverse theorem}\label{sec:inverse}
The goal of this section is to present the proof of Theorem \ref{thm:inverse-informal}, the key $L^\infty$-inverse
theorem for general polynomials with distinct degrees, which we now restate in a more formal, precise way.
\begin{theorem}[Inverse theorem for $(m+1)$-linear forms]
\label{thm:inverse}
Let $N \ge 1$ be a scale, $m\in\Z_+$ and $0<\delta\le 1$ be given. Let $\mathcal P:=\{P_1,\ldots, P_m\}$
be a collection of polynomials such that
$1\le \deg{P_1}<\ldots<\deg{P_m}$. Set $N_0 = N^{\deg(P_m)}$ and let
$f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions supported
on an interval $I\subset \KK$ of measure $N_0$. Define an $(m+1)$-linear form
corresponding to the pair $(\mathcal P; N)$ by
\begin{align}
\label{eq:6}
\Lambda_{\mathcal P; N}(f_0,\ldots, f_m):=
\frac{1}{N_0} \int_{\KK^2}f_0(x)\prod_{i=1}^mf_{i}(x-P_i(y))d\mu_{[N]}(y)d\mu(x).
\end{align}
Suppose that
\begin{align}
\label{eq:2}
|\Lambda_{\mathcal P; N}(f_0,\ldots, f_m)|\ge\delta.
\end{align}
Then there exists $N_1\simeq \delta^{O_{\mathcal P}(1)}N^{\deg(P_1)}$ so that
\begin{align}
\label{eq:19}
N_0^{-1}\big\| \mu_{[N_1]}*f_1\big\|_{L^1(\KK)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)}.
\end{align}
\end{theorem}
If necessary we will also write
$\Lambda_{\mathcal P; N}(f_0,\ldots, f_m)=\Lambda_{\mathcal P; N, I}(f_0,\ldots, f_m)$
in order to emphasize that the functions $f_0, f_1,\ldots, f_m$ are
supported on $I$.
\paragraph{\bf Remark} When $\KK = {\mathbb C}$ is the complex field, the proof of Theorem \ref{thm:inverse} will
also hold if the form $\Lambda_{\mathcal P; N}$ is defined with the disc $[N] = {\mathbb D}_{\sqrt{N}}$ replaced
by the square
$$
[N]_{sq} \ := \ \{x + i y \in {\mathbb C} : |x|\le \sqrt{N}, |y| \le \sqrt{N}\}.
$$
In this case, the conclusion is $N_0^{-1} \| \mu_{[N_1]_{sq}} * f_1\|_{L^1({\mathbb C})} \gtrsim \delta^{O_{\mathcal P}(1)}$.
This observation will be needed at one point in the proof of Theorem \ref{sobolev-informal}.
The proof of Theorem \ref{thm:inverse} breaks into two main steps:
first, an application of PET induction to show that whenever
\[ |\Lambda_{\mathcal{P};N}(f_0,f_1,\dots,f_m) | \geq \delta, \]
the function $f_m$ necessarily has a fairly large $U^s$ norm for an
appropriately large $s = s_{\mathcal{P}}$. Second, an inductive
``degree-lowering'' step reduces $U^s$ control to $U^2$ control. We
accordingly subdivide the argument into two subsections.
\subsection{PET induction}
Our first goal is to show that whenever the multi-linear form
$\Lambda_{\mathcal P; N}$ is large, necessarily $f_m$ has some fairly
large (sufficiently high degree) Gowers box norm. We begin with
the definition of $(d,j)$-admissible polynomials. Recall that for a polynomial $P \in \KK[{\rm t}]$,
the leading coefficient is denoted by $\ell(P)$.
\begin{definition}[The class of $(d,j)$-admissible polynomials]
\label{def:1}
Let $N\ge 1$ be a scale, $0<\delta\le 1$, $d\in\Z_+$, $j\in\bra{d}$ and parameters $A_0\ge1$ and $A\ge0$ be
given. Assume that a finite collection of polynomials $\mathcal P$ has degree $j$ and
define $\mathcal{P}_j := \{ P\in\mathcal P : \deg(P) = j\}$.
We will say that $\mathcal P$ is $(d, j)$-admissible with tolerance $(A_0, A)$
if the following properties are satisfied:
\begin{enumerate}[label*={\arabic*}.]
\item For every $P\in \mathcal P_j$ we have
\begin{align}
\label{eq:24}
A_0^{-1}\delta^{A} N^{d-j} \leq |\ell(P)| \leq A_0\delta^{-A}N^{d-j}.
\end{align}
\item Whenever $P, Q\in\mathcal P_j$ and $\ell(P) \neq \ell(Q)$ we have
\begin{align}
\label{eq:25}
A_0^{-1}\delta^{A} N^{d-j} \leq |\ell(P) - \ell(Q)| \leq A_0\delta^{-A} N^{d-j}.
\end{align}
\item Whenever $P, Q\in\mathcal P_j$ and $P\neq Q$ and $\ell(P) = \ell(Q)$ we have
\begin{align}
\label{eq:26}
A_0^{-1}\delta^{A} N^{d-j+1} \leq |\ell(P - Q)| \leq A_0\delta^{-A} N^{d-j+1},
\end{align}
and $\deg(P-Q) = j-1$.
\end{enumerate}
In the special case where the polynomials in $\mathcal{P}$ are linear, we require that
$\ell(P) \neq \ell(Q)$ for all distinct $P, Q\in\mathcal P$. The constants
$A_0, A$ will be always independent of $\delta$ and $N$, but may
depend on $\mathcal P$. In our applications the exact values of
$A_0, A$ will be unimportant and then we will simply say that the
collection $\mathcal{P}$ is $(d, j)$-admissible.
\end{definition}
\begin{remark}
\label{rem:3}
Under the hypotheses of Theorem \ref{thm:inverse} it is not difficult
to see that the collection of polynomials
$\mathcal P=\{P_1,\ldots, P_m\}$ such that
$1\le \deg{P_1}<\ldots<\deg{P_m}=d$ is $(d,d)$-admissible with the
tolerance $(\max\{|\ell(P_m)|^{-1}, |\ell(P_m)|\}, 0)$. Indeed, condition
\eqref{eq:24} can be easily verified and conditions \eqref{eq:25} and
\eqref{eq:26} are vacuous as $\mathcal P_d=\{P_m\}$.
\end{remark}
The main result of this subsection is the following theorem.
\begin{theorem}[Gowers box norms control $(m+1)$-linear forms]
\label{thm:Us}
Let $\mathcal P:=\{P_1,\ldots, P_m\}$ be a collection of
$(d, d)$-admissible polynomials such that
$1\le \deg{P_1}\le\ldots\le\deg{P_m}=d$.
Let $N, N_0\ge 1$ be two scales, $I$ an interval with measure $N_0$ and $0<\delta\le 1$ be given and let
$f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions
such that $\|f_i\|_{L^1(\KK)}\le N_0$ for all $i\in\bra{m}_0$.
If \eqref{eq:2} is
satisfied, then there exists $s:=s_{\mathcal P}\in\Z_+$ such that
\begin{align}
\label{eq:32}
\|f_m\|_{\square_{[H_1], \ldots, [H_s]}^s(I)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)},
\end{align}
where
$H_i\simeq \delta^{O_{\mathcal P}(1)}N^{\deg(P_m)}$ for $i\in\bra{s}$.
\end{theorem}
The proof of Theorem \ref{thm:Us} requires a subtle downwards
induction based on a repetitive application of Proposition
\ref{prop:pet} on the class of $(d, j)$-admissible polynomials. To
make our induction rigorous, we will assign a weight vector to each collection
$\mathcal{P}\subset \KK[{\rm t}]$ of polynomials.
\begin{definition}[Weight vector]
For any finite $\mathcal{P}\subset \KK[{\rm t}]$ define the weight vector
\[
v(\mathcal{P}) := (v_1, v_2,\dots) \in \mathbb{N}^{\Z_+},
\]
where
\[
v_j :=v_j(\mathcal P):= \#\{ \ell(P) : P \in \mathcal{P} \text{ and } \deg(P) = j\},
\]
is the number of distinct leading coefficients of $\mathcal{P}$ of degree $j\in\Z_+$.
\end{definition}
For example, the weight vector for the family
$\mathcal P=\{{\rm t}, 5{\rm t}, {\rm t}^2, {\rm t}^2+{\rm t}, {\rm t}^4\}$ is
$v(\mathcal P)=(2, 1, 0, 1, 0, 0, \ldots)$. There is a natural
ordering on the set of weight vectors.
\begin{definition}[Well-ordering on the set of weight vectors]
For any two weight vectors $v(\mathcal P)=(v_1(\mathcal P),v_2(\mathcal P),\dots)$ and $v(\mathcal Q)=(v_1(\mathcal Q),v_2(\mathcal Q),\dots)$ corresponding to finite collections $\mathcal{P}, \mathcal Q\subset \KK[{\rm t}]$, we define an ordering $\prec$ on the set of weight vectors by declaring that
\[
v(\mathcal P)\prec v(\mathcal Q)
\]
if there is a degree $j\in\Z_+$ such that $v_j(\mathcal P)<v_j(\mathcal Q)$ and $v_k(\mathcal P)=v_k(\mathcal Q)$ for all $k>j$.
\end{definition}
It is a standard fact that $\prec$ is a well-ordering; we omit the details.
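For instance,
$$
(2, 1, 0, 1, 0, \ldots) \ \prec \ (0, 0, 0, 2, 0, \ldots),
$$
since the two vectors agree (both vanish) in every coordinate beyond the fourth and $1<2$ in the fourth; note that $\prec$ compares coordinates from the highest degree downwards, rather than lexicographically from the left. This ordering is tailored to the PET scheme: each application of Proposition \ref{prop:PETiterate} below produces a collection $\mathcal P'$ with $v(\mathcal P')\prec v(\mathcal P)$, so the well-ordering guarantees that the iteration terminates after finitely many steps, depending only on $v(\mathcal P)$.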
\begin{proof}[Proof of Theorem \ref{thm:Us}]
We begin by stating the following claim:
\begin{claim}
\label{claim:1}
Let $N, N_0\ge 1$ be two scales, $0<\delta\le 1$, $d, m\in\Z_+$ and
$j\in\bra{d}$ be given and let $\mathcal P:=\{P_1,\ldots, P_m\}$
be a collection of $(d, j)$-admissible polynomials with tolerance $(A_0, A)$
such that $\deg{P_1}\le\ldots\le\deg{P_m} =j$. Let $I$ be an interval with $\mu(I) = N_0$ and let
$f_0, f_1,\ldots, f_m \in L^0(\KK)$ be $1$-bounded functions
such that $\|f_i\|_{L^1(\KK)}\le N_0$ for all $i\in\bra{m}_0$.
Suppose that
\begin{align}
\label{eq:49}
|\Lambda_{\mathcal P; N}(f_0,\ldots, f_m)|\ge\delta.
\end{align}
Then there exists a collection $\mathcal P':=\{P_1',\ldots, P_{m'}'\}$ of $(d, j-1)$-admissible polynomials with tolerance $(A_0', A')$
and $m':=\#\mathcal P'$ so
that $\deg(P_1')\le\ldots\le \deg(P_{m'}')=j-1$, and $1$-bounded functions
$f_0', f_1',\ldots, f_{m'}'\in L^0(\KK)$ such that $\|f_i'\|_{L^1(\KK)}\le N_0$ for all $i\in\bra{m'}_0$ with $f_{m'}':=f_m$ and satisfying
\begin{align}
\label{eq:50}
|\Lambda_{\mathcal P'; N}(f_0',\ldots, f_{m'}')|\gtrsim_{\mathcal P}\delta^{O_{\mathcal P}(1)}.
\end{align}
\end{claim}
The proof of Claim \ref{claim:1} will use the polynomial exhaustion
technique based on an iterative application of the PET induction
scheme from Proposition \ref{prop:pet}. The key steps of this method
are gathered in Proposition \ref{prop:PETiterate}. Assuming
momentarily that Claim \ref{claim:1} is true we can easily close the
argument to prove Theorem \ref{thm:Us}. We begin with a collection of $(d, d)$-admissible
polynomials such that $\deg{P_1}\le\ldots\le\deg{P_m}=d$ and apply
our claim $d-1$ times until we reach a collection of
$(d,1)$-admissible linear polynomials $\mathcal L$ with distinct
leading terms, which satisfies \eqref{eq:50} with
$\mathcal P'=\mathcal L$. In the special case where all polynomials
are linear matters simplify and can be handled using the next result, Proposition
\ref{prop:linear}, which in turn implies \eqref{eq:32} from Theorem
\ref{thm:Us} as desired.
\end{proof}
\begin{proposition}
\label{prop:linear}
Let $N, N_0 \ge 1$ be two scales, $I$ an interval with $\mu(I) = N_0$, $0<\delta\le 1$, $d, m\in\Z_+$ be given and let
$\mathcal L:=\{L_1,\ldots, L_m\}$ be a collection of
$(d,1)$-admissible linear polynomials. Let
$f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions
such that $\|f_i\|_{L^1(\KK)}\le N_0$ for all $i\in\bra{m}_0$.
Suppose that
\begin{align}
\label{eq:18}
|\Lambda_{\mathcal L; N}(f_0,\ldots, f_m)|\ge\delta.
\end{align}
Then we have
\begin{align}
\label{eq:20}
\|f_m\|_{\square_{[H_1], \ldots, [H_m]}^m(I)} \gtrsim_{\mathcal L} \delta^{2^{m-1}},
\end{align}
where
$H_i\simeq \delta^{O_{\mathcal L}(1)}N^{d}$ for $i\in\bra{m}$.
\end{proposition}
In fact Proposition \ref{prop:linear} is a special case of Theorem
\ref{thm:Us} with the collection of linear polynomials $\mathcal L$ in
place of $\mathcal P$.
\begin{proof}[Proof of Proposition \ref{prop:linear}]
Defining $\mathcal{L}'=\{L_i':=L_i-L_i(0): i\in\bra{m}\}$ we see that each $L'\in \mathcal L'$ is linear
with vanishing constant term and
\begin{align*}
\Lambda_{\mathcal L; N}(f_0,\ldots, f_m)=\Lambda_{\mathcal L'; N}(g_0,\ldots, g_m),
\end{align*}
where $g_0:=f_0$ and $g_i(x)=\Tra{L_i(0)}f_i(x)=f_i(x-L_i(0))$ for each $i\in\bra{m}$.
We now apply Lemma \ref{lem:pet} with functions
$\mathfrak g_1(x)=g_0(x)$ and
$\mathfrak g_2(x, y)=\prod_{i=1}^mg_i(x-L_i'(y))$, the interval
$J=[N]$, and a parameter $H=\delta^{M}N/M$ for some
large absolute constant $M\ge1$, which will be specified later.
Using Lemma \ref{lem:pet} and changing the variables $x\mapsto x-L_1(y)$ we obtain
\begin{align*}
\bigg|\frac{1}{N_0} \iiint_{\KK^3}\Delta_{\ell(L_1)h} g_1(x)\prod_{i=2}^m\Delta_{\ell(L_i)h}g_{i}(x-(L_i-L_1)(y))d\mu_{[N]}(y)d\mu(x)d\nu_{[H]}(h)\bigg|\gtrsim_M \delta^2.
\end{align*}
Applying Lemma \ref{lem:pet} $m-2$ more times and changing the variables $x\mapsto x-L_m(0)$ we obtain
\begin{align*}
\bigg|\frac{1}{N_0} \int_{\KK^{m+1}}\Delta_{u_1h_1}\cdots\Delta_{u_{m-1}h_{m-1}}\Delta_{\ell(L_m)h_m}f_{m}(x)
d\nu_{[H]}^{\otimes m}(h_1,\ldots, h_m)d\mu(x)\bigg|\gtrsim_M \delta^{2^{m-1}},
\end{align*}
where $u_i:=\ell(L_m)-\ell(L_{i})$ for $i\in\bra{m-1}$. By another change of variables we obtain \eqref{eq:20} with
\begin{align*}
H_m= |\ell(L_m)|\delta^M N/M,
\quad \text{ and } \quad
H_i=|\ell(L_m)-\ell(L_{i})|\delta^M N/M
\end{align*}
for $i\in\bra{m-1}$. Using \eqref{eq:24} with $P=L_m$, and
\eqref{eq:25} with $P=L_m$ and $Q=L_i$ we obtain that
$H_i\simeq \delta^{O_{\mathcal L}(1)}N^{d}$ for $i\in\bra{m}$
provided that $M\ge1$ is sufficiently large. This completes the proof
of Proposition \ref{prop:linear}.
\end{proof}
\begin{proposition}
\label{prop:PETiterate}
Let $N, N_0>0$ be two scales, $0<\delta\le 1$, $d, m\in\Z_+$ and $i, j\in\bra{d}$ be given and
let $\mathcal P:=\{P_1,\ldots, P_m\}$ be a collection of
$(d, j)$-admissible polynomials with tolerance $(A_0, A)$ such that
$i =\deg{P_1}\le\ldots\le\deg{P_m} =j$. Let $I$ be an interval with $\mu(I) = N_0$ and let
$f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions
such that $\|f_i\|_{L^1(\KK)}\le N_0$ for all $i\in\bra{m}_0$.
Suppose that
\begin{align}
\label{eq:21}
|\Lambda_{\mathcal P; N}(f_0,\ldots, f_m)|\ge\delta.
\end{align}
Then there exists a
collection of polynomials $\mathcal P':=\{P_1',\ldots, P_{m'}'\}$ with
$m':=\#\mathcal P'<2\#\mathcal P$ satisfying $P_{m'}':=P_m-P_1$ and
$\deg(P_1')\le\ldots\le \deg(P_{m'}')$, and $1$-bounded functions
$f_0', f_1',\ldots, f_{m'}'\in L^0(\KK)$ such that
$\|f_i'\|_{L^1(\KK)}\le N_0$ for all $i\in\bra{m'}_0$ and satisfying
\begin{align}
\label{eq:34}
|\Lambda_{\mathcal P'; N}(f_0',\ldots, f_{m'}')|\gtrsim_{\mathcal P}\delta^2.
\end{align}
We also know that
$\{f_0', f_1',\ldots, f_{m'}'\}=\{f_1,\overline{f_1},\ldots, f_m,\overline{f_m}\}$
with $f_{m'}' =f_m$.
Moreover, $v(\mathcal P')\prec v(\mathcal P)$, and one of the following three scenarios occurs.
\begin{enumerate}[label*=({\roman*})]
\item\label{item:1} The collection $\mathcal P$ is of type I; that is,
$\mathcal P\neq\mathcal P_j$. In this case, $\mathcal P'$ is a
$(d, j)$-admissible collection of polynomials with tolerance $(A_0', A')$ and for some $1\le i \le j-1$,
\begin{align}
\label{eq:23}
\qquad v(\mathcal P')=(v_1(\mathcal P'),\ldots, v_{i-1}(\mathcal P'), v_i(\mathcal P)-1, v_{i+1}(\mathcal P), \ldots, v_{j}(\mathcal P),0, 0, \ldots).
\end{align}
\item\label{item:2} The collection $\mathcal P$ is of type II; that is,
$\mathcal P=\mathcal P_j$ and $v_j(\mathcal P)>1$. In this case, $\mathcal P'$
is a $(d, j)$-admissible collection of polynomials with tolerance $(A_0', A')$ and
\begin{align}
\label{eq:27}
v(\mathcal P')=(v_1(\mathcal P'),\ldots, v_{j-1}(\mathcal P'), v_j(\mathcal P)-1, 0,0, \ldots).
\end{align}
\item\label{item:3} The collection $\mathcal P$ is of type III; that is,
$\mathcal P=\mathcal P_j$ and $v_j(\mathcal P)=1$. In this case, $\mathcal P'$
is a $(d, j-1)$-admissible collection of polynomials with tolerance $(A_0', A')$ and
\begin{align}
\label{eq:33}
v(\mathcal P')=(0,\ldots,0, v_{j-1}(\mathcal P'), 0,0, \ldots).
\end{align}
Moreover, the leading coefficients of the polynomials in $\mathcal P'$ are pairwise distinct.
\end{enumerate}
The tolerance $(A_0', A')$ of the collection $\mathcal P'$ only
depends on the tolerance $(A_0, A)$ of the collection $\mathcal P$, and
is independent of $\delta$ and $N$.
\end{proposition}
Using Proposition \ref{prop:PETiterate} we now prove Claim \ref{claim:1}.
\begin{proof}[Proof of Claim \ref{claim:1}]
We may assume, without loss of generality, that the collection
$\mathcal P$ from Claim \ref{claim:1} is of type I or type II. Then we apply
Proposition \ref{prop:PETiterate} until we reach a collection of
polynomials of type III with weight vector
$v(\mathcal P)=(0,\ldots,0, v_{j}(\mathcal P), 0,0, \ldots)$ where $v_j(\mathcal P) = 1$
and such that \eqref{eq:50} holds.
We apply Proposition \ref{prop:PETiterate} once more to reach a
collection of $(d, j-1)$-admissible polynomials satisfying \eqref{eq:50}.
This completes the proof of the claim.
\end{proof}
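The descent driving this argument can be illustrated computationally. The toy model below is a deliberate simplification and not the actual PET scheme: only the type II/III rules are simulated, and the value placed at degree $j-1$ in a type III step is an arbitrary bounded stand-in for the unspecified coordinate $v_{j-1}(\mathcal P')$. It checks that every step strictly decreases the weight vector in the order $\prec$ comparing coordinates from the top degree down; since that order is a well-order on such tuples, the iteration must terminate.

```python
# Toy model of the PET weight-vector descent (illustrative only; the
# value placed at lower degrees is a hypothetical stand-in for the
# unspecified v_{j-1}(P') >= 0 in the proposition).

def pet_step(v, new_low=3):
    """Apply one simplified type II/III step to a weight vector v.

    j is the top degree present.  Type II decrements v[j]; type III
    zeroes v[j] and places an arbitrary bounded value at degree j-1.
    """
    v = list(v)
    j = max(k for k, x in enumerate(v) if x > 0)   # top degree present
    if v[j] > 1:                                   # type II
        v[j] -= 1
    else:                                          # type III
        v[j] = 0
        if j > 0:
            v[j - 1] = new_low                     # hypothetical v_{j-1}(P')
    return tuple(v)

def prec(u, v):
    """u < v in the order comparing coordinates from the top degree down."""
    return list(reversed(u)) < list(reversed(v))

v = (0, 0, 2, 3)           # e.g. weights at degrees 1..4
steps = 0
while any(v):
    w = pet_step(v)
    assert prec(w, v)      # every step strictly decreases the weight vector
    v, steps = w, steps + 1

print(steps)               # the descent halts after finitely many steps
```

Any bounded choice of `new_low` yields the same conclusion; only the order in which coordinates are compared matters for termination.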
\begin{proof}[Proof of Proposition \ref{prop:PETiterate}]
Appealing to Proposition \ref{prop:pet} with $i_0=1$ we may conclude that
there exists a collection of polynomials $\mathcal P':=\{P_1',\ldots, P_{m'}'\}$ with
$m'=\#\mathcal P'<2\#\mathcal P$ and $P_{m'}'=P_m-P_1$ such that
\[
\mathcal P'=\{P_1(y)-P_{1}(y), P_1(y+h)-P_{1}(y),\ldots, P_m(y)-P_{1}(y), P_m(y+h)-P_{1}(y)\},
\]
for some $\delta'\delta^2N/C^2\le |h| \le \delta'N\le \delta^4N/C$.
Proposition \ref{prop:pet} also ensures that bound \eqref{eq:34} holds
for certain $1$-bounded functions
$f_0', f_1',\ldots, f_{m'}'\in L^0(\KK)$ such that
$\|f_i'\|_{L^1(\KK)}\le N_0$ for all $i\in\bra{m'}_0$ and satisfying
$\{f_0', f_1',\ldots, f_{m'}'\}=\{f_1,\overline{f_1},\ldots, f_m,\overline{f_m}\}$
with $f_{m'}'=f_m$.
It remains to verify the conclusions of
\ref{item:1}, \ref{item:2} and \ref{item:3}. For this purpose we will
adjust the parameter $\delta'\le \delta^4/C$, which can be made as small as necessary.
\medskip
\paragraph{\textit{Proof of the conclusion from \ref{item:1}}} Suppose
that the collection $\mathcal P$ is of type I. Then
$i=\deg(P_1)<\deg(P_m)=j$ and
$v(\mathcal P)=(0,\ldots, 0, v_i(\mathcal P),\ldots, v_j(\mathcal P), 0, 0, \ldots)$. To
establish \eqref{eq:23} we consider three cases. Let $P\in\mathcal P$. If $\deg(P)>i$,
then
\begin{align}
\label{eq:35}
\begin{gathered}
\deg(P-P_1)=\deg(P(\cdot+h)-P_1)=\deg(P), \\
\ell(P-P_1)=\ell(P(\cdot+h)-P_1)=\ell(P),
\end{gathered}
\end{align}
which yields that $v_k(\mathcal P')=v_k(\mathcal P)$ for all $k>i$.
If $\deg(P)=i$ and $\ell(P)\neq\ell(P_1)$, then
\begin{align}
\label{eq:36}
\begin{gathered}
\deg(P-P_1)=\deg(P(\cdot+h)-P_1)=i, \\
\ell(P-P_1)=\ell(P(\cdot+h)-P_1)=\ell(P)- \ell(P_1).
\end{gathered}
\end{align}
If $\deg(P)=i$ and $\ell(P)=\ell(P_1)$, then
\begin{align*}
\deg(P-P_1)<i,
\quad\text{ and } \quad
\deg(P(\cdot+h)-P_1)<i.
\end{align*}
The latter two cases show that $v_{k}(\mathcal P')\ge0$ for all
$k\in \bra{i-1}$ and $v_i(\mathcal P')=v_i(\mathcal P)-1$. Hence
\eqref{eq:23} holds. We now show that $\mathcal P'$ is
$(d, j)$-admissible.
We begin by verifying \eqref{eq:24} for $P'\in\mathcal P_j'$. We
may write $P'=P(\cdot+\varepsilon h)-P_1$ for some $P\in\mathcal P_j$
and $\varepsilon\in\{0, 1\}$. By \eqref{eq:35} and \eqref{eq:24} for
$P\in\mathcal P_j$ we obtain
\begin{align}
\label{eq:41}
A_0^{-1}\delta^{A} N^{d-j} \leq |\ell(P')| \leq A_0\delta^{-A}N^{d-j}.
\end{align}
We now verify \eqref{eq:25} for $Q_1', Q_2'\in\mathcal P'_j$ with $\ell(Q_1')\neq\ell(Q_2')$. We may
write
\begin{align}
\label{eq:40}
Q_1'=Q_1(\cdot+\varepsilon_1 h)-P_1,
\qquad \text{ and } \qquad
Q_2'=Q_2(\cdot+\varepsilon_2 h)-P_1
\end{align}
for some
$Q_1, Q_2\in\mathcal P_j$ and
$\varepsilon_1, \varepsilon_2\in\{0, 1\}$. By \eqref{eq:35} we have
$\ell(Q_1')=\ell(Q_1)$ and $\ell(Q_2')=\ell(Q_2)$.
Then $\ell(Q_1)\neq\ell(Q_2)$ and
by \eqref{eq:25} for $Q_1, Q_2\in\mathcal P_j$ we deduce
\begin{align}
\label{eq:42}
A_0^{-1}\delta^{A} N^{d-j} \leq |\ell(Q_1') - \ell(Q_2')| \leq A_0\delta^{-A} N^{d-j}.
\end{align}
We finally verify \eqref{eq:26} for $Q_1', Q_2'\in\mathcal P'_j$ as in \eqref{eq:40} such
that $Q_1'\neq Q_2'$ and $\ell(Q_1')=\ell(Q_2')=\ell$. By
\eqref{eq:35} we see that $\ell(Q_1)=\ell(Q_2)=\ell$.
Since $\mathcal P$ is $(d, j)$-admissible,
using \eqref{eq:24}, we also have
\begin{align}
\label{eq:37}
A_0^{-1}\delta^{A} N^{d-j} \leq |\ell| \leq A_0\delta^{-A}N^{d-j}.
\end{align}
Recall that
$\delta'\delta^2N/C^2\le |h| \le \delta'N$, where $\delta'>0$ is an
arbitrarily small number such that $\delta'\le \delta^4/C$. Set
$\delta':=\delta^{M}(CM)^{-1}$ for a large number $M\ge1$, which will be
chosen later.
First suppose $Q_1 = Q_2$. Then $\varepsilon_1 \not= \varepsilon_2$ and
$\deg(Q_1'-Q_2')=j-1$. Furthermore $\ell(Q_1'-Q_2')= j\ell h(\varepsilon_1-\varepsilon_2)$
implying $|\ell(Q_1' - Q_2')| = |j \ell h|$ and so by \eqref{eq:37},
\begin{align}
\label{eq:38}
|j|(A_0C^3M)^{-1}\delta^{A+M+2} N^{d-j+1}\le |j\ell h|
\le |j| A_0(CM)^{-1}\delta^{M-A} N^{d-j+1}
\end{align}
and this verifies \eqref{eq:26} in the case $Q_1 = Q_2$.
Now suppose $Q_1 \not= Q_2$ so that $\deg(Q_1 - Q_2) = j-1$ and \eqref{eq:26} holds for $\ell(Q_1 - Q_2)$; that is,
\begin{align}
\label{eq:38a}
A_0^{-1}\delta^{A} N^{d-j+1} \leq |\ell(Q_1-Q_2)| \leq A_0\delta^{-A} N^{d-j+1}.
\end{align}
Taking $M:=\max\{2A, 2|j|A_0^2\}$ in \eqref{eq:38}, we see that $|j\ell h|
\le \frac12 A_0^{-1}\delta^{A} N^{d-j+1}$ if $C > 1$ is large enough.
In this case, $\ell(Q_1' - Q_2') = \ell(Q_1 - Q_2) + j h\ell (\varepsilon_1-\varepsilon_2)$ and so
\begin{align*}
|\ell(Q_1-Q_2)|-|j\ell h|\le |\ell(Q_1'-Q_2')|
\le |\ell(Q_1-Q_2)|+|j\ell h|.
\end{align*}
From \eqref{eq:38a} and $|j\ell h| \le \frac12 A_0^{-1} \delta^A N^{d-j+1}$, we conclude
\begin{align}
\label{eq:39}
\frac{1}{2}A_0^{-1}\delta^{A} N^{d-j+1} \leq |\ell(Q_1'-Q_2')| \leq \frac{3}{2}A_0\delta^{-A} N^{d-j+1}.
\end{align}
This verifies \eqref{eq:26} in the case $Q_1 \not= Q_2$.
In either case, we see that $\deg(Q_1' - Q_2') = j-1$ and (see \eqref{eq:38} and \eqref{eq:39}) we can
find a tolerance pair $(A_0', A')$ for $\mathcal P'$ depending on the tolerance
$(A_0, A)$ of $\mathcal P$ and the constants $C$ and $M$ such that
\begin{align}
\label{eq:43}
(A_0')^{-1}\delta^{A'} N^{d-j+1} \leq |\ell(Q_1'-Q_2')| \leq A_0'\delta^{-A'} N^{d-j+1}
\end{align}
holds, establishing \eqref{eq:26}.
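As a sanity check on the numerology above, the chain of inequalities leading to \eqref{eq:38} can be verified numerically. The parameters below are hypothetical sample values; the point is only that with $\delta'=\delta^{M}(CM)^{-1}$ and $M=\max\{2A, 2|j|A_0^2\}$ the product $|j\ell h|$ indeed lands in the window \eqref{eq:38} and sits below $\tfrac12 A_0^{-1}\delta^{A}N^{d-j+1}$ once $C$ is large.

```python
# Numerical check of the window (eq:38); all parameter values are
# hypothetical sample choices, not taken from the paper.
delta, A, A0, j, d, N, C = 0.5, 2, 2.0, 3, 4, 1e6, 100.0
M = max(2 * A, 2 * abs(j) * A0**2)          # M = max{2A, 2|j|A0^2}
dp = delta**M / (C * M)                     # delta' = delta^M (CM)^{-1}

# admissible ranges of |h| and |ell|
h_lo, h_hi = dp * delta**2 * N / C**2, dp * N
l_lo = (1 / A0) * delta**A * N**(d - j)
l_hi = A0 * delta**(-A) * N**(d - j)

lower = abs(j) / (A0 * C**3 * M) * delta**(A + M + 2) * N**(d - j + 1)
upper = abs(j) * A0 / (C * M) * delta**(M - A) * N**(d - j + 1)

# the chain (eq:38): lower <= |j ell h| <= upper on the whole window
for h in (h_lo, h_hi):
    for l in (l_lo, l_hi):
        assert lower * (1 - 1e-9) <= abs(j) * l * h <= upper * (1 + 1e-9)

# with this M and large C, the upper bound sits below half the lower
# bound in (eq:38a), which is what makes the case Q1 != Q2 work
assert upper <= 0.5 * (1 / A0) * delta**A * N**(d - j + 1)
```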
\medskip
\paragraph{\textit{Proof of the conclusion from \ref{item:2}}}
Suppose that the collection $\mathcal P$ is of type II. Then
$\deg(P_1)=\ldots=\deg(P_m)=j$ and
$v(\mathcal P)=(0,\ldots, 0, v_j(\mathcal P), 0, 0, \ldots)$ with
$v_j(\mathcal P)>1$.
To establish \eqref{eq:27} we will proceed
in a similar way as in \ref{item:1}.
If $P\in\mathcal P=\mathcal P_j$ and $\ell(P)\neq\ell(P_1)$, then
\begin{align}
\label{eq:44}
\begin{gathered}
\deg(P-P_1)=\deg(P(\cdot+h)-P_1)=j, \\
\ell(P-P_1)=\ell(P(\cdot+h)-P_1)=\ell(P)- \ell(P_1).
\end{gathered}
\end{align}
If $P\in\mathcal P=\mathcal P_j$ and $\ell(P)=\ell(P_1)$, then by the
fact that $\mathcal P$ is $(d, j)$-admissible and by \eqref{eq:26} we
see that
\begin{align}
\label{eq:45}
\deg(P-P_1)<j,
\quad\text{ and } \quad
\deg(P(\cdot+h)-P_1)<j.
\end{align}
This shows that $v_{k}(\mathcal P')\ge0$ for all $k\in \bra{j-1}$
and $v_j(\mathcal P')=v_j(\mathcal P)-1$. Hence \eqref{eq:27}
holds. We now show that $\mathcal P'$ is $(d, j)$-admissible.
We begin by verifying \eqref{eq:24} for $P'\in\mathcal P_j'$. We
may write $P'=P(\cdot+\varepsilon h)-P_1$ for some $P\in\mathcal P_j$
such that $\ell(P)\neq\ell(P_1)$ and $\varepsilon\in\{0, 1\}$. Since
$\mathcal P$ is $(d, j)$-admissible, using \eqref{eq:44} and
\eqref{eq:25} (with $\ell(P)- \ell(P_1)$ in place of
$\ell(P)- \ell(Q)$) we obtain \eqref{eq:41} which is \eqref{eq:24} for $P' \in \mathcal P_j'$.
We now verify \eqref{eq:25} for $Q_1', Q_2'\in\mathcal P'_j$ with
$\ell(Q_1')\neq\ell(Q_2')$. As in \eqref{eq:40} we may write
$Q_1'=Q_1(\cdot+\varepsilon_1 h)-P_1$, and
$Q_2'=Q_2(\cdot+\varepsilon_2 h)-P_1$ for some
$Q_1, Q_2\in\mathcal P_j$ and
$\varepsilon_1, \varepsilon_2\in\{0, 1\}$ such that
$\ell(Q_1)\neq\ell(P_1)$ and $\ell(Q_2)\neq\ell(P_1)$. By
\eqref{eq:44} we have $\ell(Q_1')=\ell(Q_1)-\ell(P_1)$ and
$\ell(Q_2')=\ell(Q_2)-\ell(P_1)$. Then $\ell(Q_1)\neq\ell(Q_2)$ and
\eqref{eq:42} is verified by appealing to \eqref{eq:25} (with
$\ell(Q_1) - \ell(Q_2)$ in place of $\ell(P) - \ell(Q)$).
We finally verify \eqref{eq:26} for $Q_1', Q_2'\in\mathcal P'_j$ as in
\eqref{eq:40} such that $Q_1'\neq Q_2'$ and
$\ell(Q_1')=\ell(Q_2')=\ell$. By \eqref{eq:44},
$\ell(Q_1)-\ell(P_1)=\ell(Q_2)-\ell(P_1)=\ell$ and since $\mathcal P$ is $(d,j)$-admissible,
we see that $\ell$ satisfies \eqref{eq:37}. Now by following the last part of the proof
from \ref{item:1}, we conclude that \eqref{eq:43} holds.
\medskip
\paragraph{\textit{Proof of the conclusion from \ref{item:3}}}
Suppose that the collection $\mathcal P$ is of type III. Then
$\deg(P_1)=\ldots=\deg(P_m)=j$ and
$v(\mathcal P)=(0,\ldots, 0, v_j(\mathcal P), 0, 0, \ldots)$ with
$v_j(\mathcal P)=1$; thus $\ell(P_1)=\ldots=\ell(P_m)=:\ell$. To
establish \eqref{eq:33} we will proceed in a similar way as in
\ref{item:1} and \ref{item:2}. If $P\in\mathcal P_j$ and
$\ell(P)=\ell$, then \eqref{eq:37} holds for $\ell$ and once again \eqref{eq:45}
holds. This in turn implies that $v_{j-1}(\mathcal P')>0$ and $v_{k}(\mathcal P')=0$ for all
$k\neq j-1$. Hence \eqref{eq:33} holds. We
now show that $\mathcal P'$ is $(d, j-1)$-admissible.
We begin by verifying \eqref{eq:24} (or equivalently \eqref{eq:41} with $j$ replaced
by $j-1$) for $P'\in\mathcal P_{j-1}'$. We
may write $P'=P(\cdot+\varepsilon h)-P_1$ for some $P\in\mathcal P_j$
such that $\ell(P)=\ell(P_1)$ and $\varepsilon\in\{0, 1\}$.
Then
\begin{align}
\label{eq:46}
\ell(P')=\ell(P(\cdot+\varepsilon h)-P_1)= \ell(P-P_1) + j h \ell \varepsilon.
\end{align}
As in \ref{item:1} we have $\delta'\delta^2N/C^2\le |h| \le \delta'N$, where
$\delta':=\delta^{M}(CM)^{-1}$ for a large number $M\ge1$, which will be
chosen later. Furthermore if $P \not= P_1$, then $A_0^{-1} \delta^A N^{d-j+1} \le |\ell(P-P_1)| \le A_0 \delta^{-A} N^{d-j+1}$
since $\mathcal P$ is $(d,j)$-admissible and so \eqref{eq:26} holds with $Q = P_1$.
This takes care of the case $\varepsilon = 0$.
If $\varepsilon=1$ and $P = P_1$, then \eqref{eq:38} gives the desired bound for $|\ell(P')|$. When $P \not= P_1$, we use
the upper bound from \eqref{eq:38}
\begin{align}
\label{eq:47}
|jh\ell| \leq |j|A_0(CM)^{-1}\delta^{M-A}N^{d-j+1} \le \frac12 A_0^{-1} \delta^{A} N^{d-j+1}
\end{align}
when $M = \max\{2A, 2|j|A_0^2\}$ and $C>1$ is chosen large enough. Thus, as before,
condition \eqref{eq:24} holds for $P'$ with some tolerance pair $(A_0', A')$ as desired.
For $Q_1' \not= Q_2'\in\mathcal P'_{j-1}$, we may write
$Q_1'=Q_1(\cdot+\varepsilon_1 h)-P_1$, and
$Q_2'=Q_2(\cdot+\varepsilon_2 h)-P_1$ for some
$Q_1, Q_2\in\mathcal P_j$ and $\varepsilon_1, \varepsilon_2\in\{0, 1\}$ such that
$\ell(Q_1)=\ell(Q_2) = \ell(P_1) = \ell$. We have $\ell(Q_1 - P_1) - \ell(Q_2 - P_1) = \ell(Q_1 - Q_2)$
and so by \eqref{eq:46},
\begin{align}
\label{eq:48}
\ell(Q_1')-\ell(Q_2')= \ell(Q_1 - Q_2) + j h \ell (\varepsilon_1 - \varepsilon_2).
\end{align}
We consider two cases.
If $Q_1 = Q_2$, then necessarily $|\varepsilon_1 - \varepsilon_2| = 1$ and so $\ell(Q_1') \not= \ell(Q_2')$,
$\deg(Q_1' - Q_2') = j-1$ and
\eqref{eq:38} shows that \eqref{eq:25} holds for $Q_1', Q_2' \in\mathcal P'_{j-1}$.
If $Q_1 \not= Q_2$, then $A_0^{-1} \delta^A N^{d-j+1} \le |\ell(Q_1-Q_2)| \le A_0 \delta^{-A} N^{d-j+1}$
since $\mathcal P$ is $(d,j)$-admissible and so \eqref{eq:26} holds with $P=Q_1 $ and $Q=Q_2$.
From \eqref{eq:47}, we see that $\ell(Q_1') \not= \ell(Q_2')$ and
\eqref{eq:48} implies that \eqref{eq:25} holds for $Q_1', Q_2'\in\mathcal P'_{j-1}$.
In either case, we see that \eqref{eq:26} is vacuously satisfied by
$\mathcal P'$ and \eqref{eq:25} holds for
$Q_1', Q_2' \in \mathcal P'_{j-1}$ with (necessarily)
$\ell(Q_1')\neq\ell(Q_2')$.
In conclusion, we can find a
tolerance pair $(A_0', A')$ for $\mathcal P'$, depending on the
tolerance $(A_0, A)$ of $\mathcal P$ and the constants $C$ and $M$,
such that the required estimates for \eqref{eq:46} and \eqref{eq:48}
hold. This completes the proof of Proposition \ref{prop:PETiterate}.
\end{proof}
\subsection{Degree-lowering}
Here, we establish a modulated version of the inverse theorem, which will imply Theorem \ref{thm:inverse}.
\begin{theorem}[Inverse theorem for modulated $(m+1)$-linear forms]
\label{thm:inversem}
Let $N\ge 1$ be a scale, and let $0<\delta\le 1$, $m\in\Z_+$ and $n\in\N$ be given. Let $\mathcal P:=\{P_1,\ldots, P_{m}\}$ and
$\mathcal Q:=\{Q_1,\ldots, Q_n\}$ be
collections of polynomials such that
\begin{gather*}
1\le \deg{P_1}<\ldots<\deg{P_{m}}<\deg{Q_1}<\ldots<\deg{Q_n}.
\end{gather*}
Let $f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions
supported on an interval $I\subset \KK$ of measure $N_0 := N^{\deg{P_m}}$.
For $n\in\Z_+$ we define an
$(m+1)$-linear form corresponding to the triple
$(\mathcal P, \mathcal Q; N)$ and a frequency vector
$\xi=(\xi_1,\ldots, \xi_n)\in \KK^n$ by
\begin{align}
\label{eq:51}
\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}(f_0,\ldots, f_m):=\frac{1}{N_0}
\int_{\KK^2}f_0(x)\prod_{i=1}^mf_{i}(x-P_i(y)){\rm e}\Big(\sum_{j=1}^n\xi_jQ_j(y)\Big)d\mu_{[N]}(y)d\mu(x).
\end{align}
For $n=0$ we set $\mathcal Q=\emptyset$ and we simply write
$\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}(f_0,\ldots, f_m):=\Lambda_{\mathcal P; N}(f_0,\ldots, f_m)$ as in \eqref{eq:6}.
Suppose that
\begin{align}
\label{eq:52}
|\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}(f_0,\ldots, f_m)|\ge\delta.
\end{align}
Then there exists a $C_1 = C_1(\mathcal P) \gg 1$ such that
\begin{align}
\label{eq:53}
N_0^{-1}\big\| \mu_{[N_1]}*f_1\big\|_{L^1(\KK)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)},
\end{align}
for any $N_1 = \delta^C N^{\deg{P_1}}$ with $C\ge C_1$.
\end{theorem}
If necessary we will also write
$\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}(f_0,\ldots, f_m)=\Lambda_{\mathcal P; N, I}^{\mathcal Q; \xi}(f_0,\ldots, f_m)$
in order to emphasise that the functions $f_0, f_1,\ldots, f_m$ are
supported on $I$.
We first show how the Gowers box norms control the dual functions.
The dual function, or more precisely the $m$-th dual function,
corresponding to \eqref{eq:51} is defined as
\begin{align}
\label{eq:55}
F_m^{\xi}(x):=\int_{\KK} F_{m; y}^{\xi}(x) d\mu_{[N]}(y), \qquad x\in \KK,
\end{align}
where
\begin{align}
\label{eq:56}
F_{m; y}^{\xi}(x):=f_0(x+P_m(y))\prod_{i=1}^{m-1}f_{i}(x-P_i(y)+P_m(y)){\rm e}\Big(\sum_{j=1}^n\xi_jQ_j(y)\Big).
\end{align}
\begin{proposition}[Gowers box norms control the dual functions]
\label{prop:dual}
Let $N\ge 1$ be a scale, and let $0<\delta\le 1$, $d, m\in\Z_+$ with $m\ge 2$ and $n\in\N$ be given. Let
$\mathcal P:=\{P_1,\ldots, P_{m}\}$ and
$\mathcal Q:=\{Q_1,\ldots, Q_n\}$ be collections of polynomials such
that $\mathcal P$ is $(d, d)$-admissible and
$$1\le \deg{P_1}\le\ldots\le\deg{P_{m}}\le\deg{Q_1}\le\ldots\le\deg{Q_n}.$$
Let $f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions
supported on an interval $I\subset \KK$ of measure $N_0 := N^{\deg{P_m}}$.
For $\xi\in \KK^n$, let $F_m^{\xi}$ be the dual function defined in
\eqref{eq:55}. Suppose that \eqref{eq:52} is satisfied. Then for the exponent
$s\in\Z_+$ which appears in the conclusion of Theorem \ref{thm:Us}, we have
\begin{align}
\label{eq:66}
\|F_m^{\xi}\|_{\square_{[H_1], \ldots, [H_{s+1}]}^{s+1}(I)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)},
\end{align}
where $H_i\simeq \delta^{O_{\mathcal P}(1)}N^{\deg(P_m)}$ for
$i\in\bra{s+1}$.
\end{proposition}
\begin{proof}
By changing the variables $x\mapsto x+P_m(y)$ in \eqref{eq:51} we may write
\begin{align*}
\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}(f_0,\ldots, f_m)=
\frac{1}{N_0}\int_{\KK}\Big(\int_{\KK} F_{m; y}^{\xi}(x) d\mu_{[N]}(y)\Big)f_m(x) d\mu(x).
\end{align*}
By the Cauchy--Schwarz inequality (observing once again that $\|f_m\|_{L^2(\KK)}^2 \le N_0$), we have
\begin{align*}
\delta^2 &\le \frac{1}{N_0} \int_{\KK} \Big|\int_{\KK} F_{m; y}^{\xi}(x)d\mu_{[N]}(y)\Big|^2 d\mu(x) \\
& = \frac{1}{N_0}\bigg|\int_{\KK^3}F_{m; y_1}^{\xi}(x)\overline{F_{m; y_2}^{\xi}(x)}d\mu_{[N]}^{\otimes2}(y_1, y_2)d\mu(x)\bigg| \\
&= |\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}(f_0, f_1,\ldots, f_{m-1}, \overline{F_m^{\xi}})|,
\end{align*}
where in the last step we changed variables $x\mapsto x-P_m(y_1)$. Denote $g_m:=\overline{F_m^{\xi}}$, and $g_j:=f_j$ for $j\in\bra{m-1}_0$.
Our strategy will be to reduce the matter to Theorem \ref{thm:Us} with the family $\mathcal P$. Observe that $g_j$ is a $1$-bounded function and $\|g_j\|_{L^1(\KK)}\lesssim N_0$ for all $j\in\bra{m}_0$.
Changing the variables $x \mapsto x+h$ in the definition of
$\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}$ and averaging over
$h\in [H_{s+1}]$ where $H_{s+1} =\delta^{O(1)}N^{\deg{P_m}}$, we have
\begin{align*}
\delta^4&\le |\Lambda_{\mathcal P; N}^{\mathcal Q; \xi}(g_0,\ldots, g_m)|^2\\
&\lesssim \frac{1}{N_0}\int_{\KK^2}\Big|\int_{\KK}g_0(x+h)\prod_{i=1}^mg_{i}(x+h-P_i(y)) d\mu_{[H_{s+1}]}(h)\Big|^2
d\mu_{[N]}(y)d\mu(x),
\end{align*}
where in the last line we have used the Cauchy--Schwarz inequality in the $x$ and $y$ variables, noting that $x \mapsto g_0(x+h)$ is supported
on a fixed dilate of $I$ for every $h\in [H_{s+1}]$.
By another change of variables we obtain
\begin{align*}
\int_{\KK}\Lambda_{\mathcal P; N}(\Delta_hg_0,\ldots, \Delta_hg_m)d\nu_{[H_{s+1}]}(h)\gtrsim \delta^4.
\end{align*}
Now we may find a measurable set $X \subseteq [H_{s+1}]$ such that
\begin{align*}
|\Lambda_{\mathcal P; N}(\Delta_hg_0,\ldots, \Delta_hg_m)|\gtrsim \delta^4
\end{align*}
for all $h\in X$ and $\nu_{[H_{s+1}]}(X)\gtrsim \delta^4$. Since $\Delta_hg_j$ is a $1$-bounded function and
$\|\Delta_hg_j\|_{L^1(\KK)}\lesssim N_0$ for all $j\in\bra{m}_0$, we
may invoke Theorem \ref{thm:Us} and conclude that
\begin{align*}
\|\Delta_hF_m^{\xi}\|_{\square_{[H_1], \ldots, [H_s]}^s(I)}=\|\Delta_hg_m\|_{\square_{[H_1], \ldots, [H_s]}^s(I)}
\gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)}
\end{align*}
for all $h\in X$, where
$H_i\simeq \delta^{O_{\mathcal P}(1)}N^{\deg(P_m)}$ for $i\in\bra{s}$. Averaging over $h\in X$ and using $\nu_{[H_{s+1}]}(X)\gtrsim \delta^4$, we obtain
\begin{align*}
\|F_m^{\xi}\|_{\square_{[H_1], \ldots, [H_{s+1}]}^{s+1}(I)}^{2^{s+1}} \ = \
\int_{\KK}\|\Delta_hF_m^{\xi}\|_{\square_{[H_1], \ldots, [H_s]}^s(I)}^{2^s}d\nu_{[H_{s+1}]}(h) \
\gtrsim_{\mathcal P} \ \delta^{O_{\mathcal P}(1)},
\end{align*}
which is \eqref{eq:66} as desired.
\end{proof}
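The first move in the proof above, passing from the pairing with $f_m$ to a second moment, is an instance of the Cauchy--Schwarz inequality together with $\|f_m\|_{L^2(\KK)}^2\le N_0$. The discrete sketch below (with made-up arrays standing in for $f_m$ and for the inner average $\int F_{m;y}^{\xi}\,d\mu_{[N]}(y)$) verifies exactly this inequality.

```python
import random, cmath

random.seed(0)
N0 = 500
# 1-bounded stand-ins: f for f_m (unimodular), G for the inner average over y
f = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N0)]
G = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N0)]

pairing = abs(sum(Gx * fx for Gx, fx in zip(G, f)) / N0) ** 2
second_moment = sum(abs(Gx) ** 2 for Gx in G) / N0

# Cauchy-Schwarz plus ||f||_2^2 <= N0 gives |N0^{-1}<G, f>|^2 <= N0^{-1}||G||_2^2
assert pairing <= second_moment + 1e-12
```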
Next, we establish a simple consequence of the oscillatory integral bound \eqref{osc-int-est}
which will be important later.
\begin{lemma}
\label{lem:vdc-osc}
Let $N>1$ be a scale, $m\in\Z_+$ and $n\in\N$ be given. Let $\mathcal P:=\{P_1,\ldots, P_{m}\}$ and
$\mathcal Q:=\{Q_1,\ldots, Q_n\}$ be
collections of polynomials such that
\begin{gather}
\label{eq:70}
1\le \deg{P_1}<\ldots<\deg{P_{m}}<\deg{Q_1}<\ldots<\deg{Q_n}.
\end{gather}
Define the multiplier corresponding to the families $\mathcal P$ and $\mathcal Q$ as follows:
\begin{align*}
m_N^{\mathcal P, \mathcal Q}(\zeta, \xi):=\int_\KK e\Big(\sum_{i=1}^m\zeta_iP_i(y)+\sum_{j=1}^n\xi_jQ_j(y)\Big)d\mu_{[N]}(y),
\end{align*}
where $\zeta=(\zeta_1,\ldots, \zeta_m)\in \KK^m$ and $\xi=(\xi_1,\ldots, \xi_n)\in \KK^n$.
Let $0<\delta\le 1$ and suppose that
\begin{align}
\label{eq:54}
|m_N^{\mathcal P, \mathcal Q}(\zeta, \xi)|\ge \delta.
\end{align}
Then there exists a large constant $A\gtrsim_{\mathcal P, \mathcal Q} 1$ such that
\begin{align}
\label{eq:78}
\begin{split}
N^{\deg(Q_j)}|\xi_{j}|&\lesssim \delta^{-A}, \quad \text{ for } \quad j\in\bra{n},\\
N^{\deg(P_j)}|\zeta_{j}|&\lesssim \delta^{-A}, \quad \text{ for } \quad j\in\bra{m}.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
Fix an element $\alpha\in \KK$ such that $|\alpha| = N$ and make the change of variables $y \to \alpha y$ to write
$$
m_N^{\mathcal P, \mathcal Q}(\zeta, \xi) \ = \ \int_{B_1(0)} {\rm e} \Big(\sum_{i=1}^m\zeta_iP_i(\alpha y)+\sum_{j=1}^n\xi_jQ_j(\alpha y)\Big)
d\mu(y).
$$
Define $R(y):=\sum_{i=1}^m\zeta_iP_i(y)+\sum_{j=1}^n\xi_jQ_j(y)$. Then $R(y)$ may be rewritten as
\begin{align*}
R(y)=\sum_{l=1}^{\deg{Q_n}}\coe_l(R)y^l.
\end{align*}
The oscillatory integral bound \eqref{osc-int-est} implies
\begin{align}
\label{eq:72}
|m_N^{\mathcal P, \mathcal Q}(\zeta, \xi)|\lesssim \bigg(1+\sum_{l=1}^{\deg{Q_n}}|\coe_l(R)|N^l\bigg)^{-1/\deg{Q_n}}.
\end{align}
Hence \eqref{eq:54} implies $\max_l |\coe_l(R)| N^{l} \lesssim \delta^{-d_{*}}$ where $d_{*} = \deg{Q_n}$ and the maximum is taken
over all $l \in \bra{\deg(Q_n)}$. From this, we see that for any
sufficiently large $A\ge d_{*}$,
\begin{align}
\label{eq:77}
|\coe_l(R)|N^l\le \delta^{-A}/A
\end{align}
for all $l\in \bra{\deg(Q_n)}$.
Using \eqref{eq:70} we observe that
\begin{gather}
\label{eq:73}
\coe_{\deg{Q_{j}}}(R)=\sum_{k=j}^n\coe_{\deg{Q_{j}}}(Q_{k})\xi_{k}, \quad \text{ for } \quad j\in\bra{n},\\
\label{eq:74}\coe_{\deg{P_j}}(R)=\sum_{k=1}^n\coe_{\deg{P_j}}(Q_{k})\xi_{k}+\sum_{k=j}^m\coe_{\deg{P_j}}(P_k)\zeta_k
, \quad \text{ for } \quad j\in\bra{m}.
\end{gather}
Using \eqref{eq:73} for $j=n$, we see that \eqref{eq:77} implies \eqref{eq:78} for $N^{\deg{Q_n}} |\xi_n|$. Inductively we
now deduce, using \eqref{eq:73}, that \eqref{eq:77} implies that \eqref{eq:78} holds for all $N^{\deg{Q_j}} |\xi_j|$, $j\in\bra{n}$.
Similarly, using \eqref{eq:74} and \eqref{eq:77}, we see that the second bound in \eqref{eq:78} holds.
\end{proof}
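The inductive step at the end of this proof is back-substitution in a triangular system: by \eqref{eq:73} the coefficient $\coe_{\deg{Q_j}}(R)$ involves only $\xi_j,\ldots,\xi_n$, so one solves for $\xi_n$ first and works downwards. The toy computation below (with hypothetical numerical coefficients in place of $\coe_{\deg{Q_j}}(Q_k)$) carries out exactly this recursion.

```python
# Toy back-substitution mirroring (eq:73); all coefficients are hypothetical.
n = 4
# U[j][k] stands in for coe_{deg Q_j}(Q_k), k >= j (triangular, unit diagonal)
U = [[0.0] * n for _ in range(n)]
for jj in range(n):
    for k in range(jj, n):
        U[jj][k] = 1.0 if k == jj else 0.3

xi_true = [0.7, -0.2, 0.5, -0.9]
# c[j] stands in for coe_{deg Q_j}(R) = sum_{k >= j} U[j][k] * xi_k
c = [sum(U[jj][k] * xi_true[k] for k in range(jj, n)) for jj in range(n)]

# back-substitution: recover xi_n first, then xi_{n-1}, and so on down
xi = [0.0] * n
for jj in range(n - 1, -1, -1):
    xi[jj] = (c[jj] - sum(U[jj][k] * xi[k] for k in range(jj + 1, n))) / U[jj][jj]

assert all(abs(a - b) < 1e-12 for a, b in zip(xi, xi_true))
```

In the proof the same triangular structure turns the bounds \eqref{eq:77} on the $\coe_l(R)$ into the bounds \eqref{eq:78} on the frequencies, one index at a time.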
The key ingredient in the proof of Theorem \ref{thm:inversem} will be a degree-lowering
argument, which reads as follows.
\begin{theorem}[Degree-lowering argument]
\label{thm:deg-low}
Let $N\ge 1$ be a scale and let $0<\delta\le 1$, $m\in\Z_+$ and $n\in\N$ be given. Let
$\mathcal P:=\{P_1,\ldots, P_{m}\}$ and
$\mathcal Q:=\{Q_1,\ldots, Q_n\}$ be collections of polynomials such
that
\begin{gather*}
1\le \deg{P_1}<\ldots<\deg{P_{m}}<\deg{Q_1}<\ldots<\deg{Q_n}.
\end{gather*}
For $\xi\in \KK^n$, let $F_m^{\xi}$ be the dual function from \eqref{eq:55} corresponding to the form \eqref{eq:51} and
$1$-bounded functions $f_0, f_1,\ldots, f_{m-1}\in L^0(\KK)$ supported
on an interval $I\subset \KK$ of measure $N_0 := N^{\deg{P_m}}$.
Suppose that for some integer $s\in\Z_+$ one has
\begin{align}
\label{eq:67}
\|F_m^{\xi}\|_{\square_{[H_1], \ldots, [H_s]}^s(I)} \ge \delta,
\end{align}
where $H_i\simeq \delta^{O_{\mathcal P}(1)}N^{\deg(P_m)}$ for
$i\in\bra{s}$. Then
\begin{align}
\label{eq:68}
\|F_m^{\xi}\|_{\square_{[H_1], \ldots, [H_{s-1}]}^{s-1}(I)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)}.
\end{align}
\end{theorem}
Assuming momentarily Theorem \ref{thm:deg-low} we prove Theorem \ref{thm:inversem}.
\begin{proof}[Proof of Theorem \ref{thm:inversem}]
Our goal is to prove \eqref{eq:53} when
$$
\delta \ \le \ |\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,\ldots, f_m)|.
$$
The proof is by induction on $m\in \Z_+$. We divide the proof into two
steps. In the first step we establish the base case for $m=1$. In
the second step we will use Theorem \ref{thm:deg-low} to establish the inductive step.
\paragraph{\bf Step 1.}
Assume that $m=1$ so that $N_0 = N^{\deg{P_1}}$.
For $\zeta\in \KK$ and $\xi=(\xi_1,\ldots, \xi_n)\in \KK^n$ we define the multiplier
\begin{align*}
m_N(\zeta, \xi):=\int_\KK e\Big(-\zeta P_1(y) + \sum_{j=1}^n \xi_j Q_j(y)\Big)d\mu_{[N]}(y).
\end{align*}
For $1$-bounded functions $g_0, g_1\in L^0(\KK)$ we may express
\begin{align*}
\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(g_0,g_1)
=N_0^{-1}\int_{\KK} \widehat{g_0}(-\zeta) \widehat{g_1}(\zeta) m_N(\zeta, \xi)d\mu(\zeta).
\end{align*}
Using the Cauchy--Schwarz inequality and Plancherel's theorem we see
\begin{align}
\label{eq:61}
|\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(g_0,g_1)|\leq N_0^{-1}\| g_0 \|_{L^2(\KK)} \|g_1\|_{L^2(\KK)}
\sup_{\zeta \in \supp{(\widehat{g_0}\widehat{g_1})}} |m_N(\zeta, \xi)|.
\end{align}
When $\KK$ is non-archimedean, let $\varphi(x) = \ind{[1]}(x) = \ind{B_1(0)}(x)$ so that
$\widehat{\varphi}(\zeta) = \ind{[1]}(\zeta)$. When $\KK$ is archimedean,
choose a Schwartz function $\varphi: \KK \to \KK$ such that
\begin{align*}
\ind{[1]}(\zeta)\le \widehat{\varphi}(\zeta)\le \ind{[2]}(\zeta), \qquad \zeta\in \KK.
\end{align*}
For a scale $M$, we set $\varphi_M(x) = M^{-1} \varphi(M^{-1} x)$ when $\KK = {\mathbb R}$ and
when $\KK = {\mathbb C}$, we set
$\varphi_{M}(z) = M^{-1} \varphi(M^{-1/2} z)$. When
$\KK$ is non-archimedean,
we set $\varphi_{M}(x) = M^{-1} \ind{[M]}(x)$.
Consider two scales $M_1\simeq\delta^{C}N^{\deg{P_1}}$ and $N_1\simeq\delta^{2C}N^{\deg{P_1}}/C$.
Then we obtain
\begin{align*}
\delta\le |\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,f_1)|\le
|\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,\varphi_{M_1}*f_1)|+
|\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,f_1-\varphi_{M_1}*f_1)|.
\end{align*}
Note that
\begin{align*}
|\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,\varphi_{M_1}*f_1)|
\le N_0^{-1}\|f_0\|_{L^{\infty}(\KK)}\|\varphi_{M_1}*f_1\|_{L^1(\KK)} \le N_0^{-1} \|\varphi_{M_1}*f_1\|_{L^1(\KK)},
\end{align*}
and
\begin{align*}
\|\varphi_{M_1}*f_1\|_{L^1(\KK)}&\le \|\varphi_{M_1}*\mu_{[N_1]}*f_1\|_{L^1(\KK)}+
\|(\varphi_{M_1}-\varphi_{M_1}*\mu_{[N_1]})*f_1\|_{L^1(\KK)}\\
&\lesssim \|\mu_{[N_1]}*f_1\|_{L^1(\KK)} +C^{-1}\delta^C N_0,
\end{align*}
since $\varphi_{M_1}-\varphi_{M_1}*\mu_{[N_1]} = 0$ when $\KK$ is non-archimedean and when $\KK$
is archimedean, we have the pointwise bound
\begin{align*}
|\varphi_{M_1}(x)-\varphi_{M_1}*\mu_{[N_1]}(x)|\lesssim C^{-1}\delta^C \, M_1^{-1}\big(1+M_1^{-1}|x|\big)^{-10}.
\end{align*}
If $C\ge1$ is sufficiently large then we may write
\begin{align}
\label{eq:61a}
\delta\lesssim |\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,f_1)|\le
N_0^{-1}\|\mu_{[N_1]}*f_1\|_{L^1(\KK)}+
|\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,f_1-\varphi_{M_1}*f_1)|.
\end{align}
By \eqref{eq:61} we have that
\begin{align}
\label{eq:61b}
|\Lambda_{\mathcal{P};N}^{\mathcal{Q}; \xi}(f_0,f_1-\varphi_{M_1}*f_1)|\lesssim
\sup_{\zeta\in \KK: |\zeta|\ge M_1^{-1} } |m_N(\zeta, \xi)|,
\end{align}
since $\| f_0 \|_{L^2(\KK)}\le N_0^{1/2}$ and $\|f_1\|_{L^2(\KK)}\le N_0^{1/2}$.
We now prove that
\begin{align}
\label{eq:62}
\sup_{\zeta\in \KK: |\zeta|\ge M_1^{-1} } |m_N(\zeta, \xi)|\lesssim \delta^{2}.
\end{align}
Suppose that inequality \eqref{eq:62} does not hold. Then one has
\begin{align*}
|m_N(\zeta, \xi)|\gtrsim \delta^{2}
\end{align*}
for some $\zeta\in \KK$ with $|\zeta|\ge M_1^{-1}$. Then Lemma \ref{lem:vdc-osc}, via \eqref{eq:78}, implies
$N^{\deg{P_1}} |\zeta| \lesssim \delta^{-A}$ for some large, fixed $A\gtrsim 1$.
Since $M_1 \simeq \delta^C N^{\deg{P_1}}$, we have $\delta^{-C} \lesssim \delta^{-A}$ which is
a contradiction if $C \gg A$. Thus \eqref{eq:62} holds.
Hence by \eqref{eq:62}, \eqref{eq:61b} and \eqref{eq:61a}, we see that
$$
\delta \lesssim N_0^{-1}\|\mu_{[N_1]}*f_1\|_{L^1(\KK)}
$$
which establishes Theorem \ref{thm:inversem} when $m=1$.
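Step 1 rests on splitting $f_1$ into a mollified piece $\varphi_{M_1}*f_1$ and a high-frequency remainder, with the mollified piece absorbed into the $L^1$ average in \eqref{eq:53}. The discrete sketch below (a made-up signal and a normalised box kernel standing in for $\varphi_{M_1}$) verifies the instance of Young's convolution inequality, $\|\varphi*f\|_{1}\le\|\varphi\|_{1}\|f\|_{1}$, that this absorption uses.

```python
import random

random.seed(1)
n, w = 256, 16
f = [random.uniform(-1, 1) for _ in range(n)]   # made-up signal
phi = [1.0 / w] * w                             # normalised box kernel

def conv(a, b):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for k, bk in enumerate(b):
            out[i + k] += ai * bk
    return out

def l1(a):
    return sum(abs(x) for x in a)

low = conv(f, phi)                              # low-frequency piece phi * f

# Young's inequality: ||phi * f||_1 <= ||phi||_1 ||f||_1 = ||f||_1
assert l1(low) <= l1(phi) * l1(f) + 1e-9
```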
\paragraph{\bf Step 2.} We now assume that Theorem \ref{thm:inversem}
is true for $m-1$ in place of $m$ for some integer $m\ge2$. Using
Theorem \ref{thm:deg-low} we show that this implies Theorem
\ref{thm:inversem} for $m$. Note that bound \eqref{eq:52} implies
inequality \eqref{eq:66} from Proposition \ref{prop:dual}. Now by
Theorem \ref{thm:deg-low} applied $s-1$ times we may conclude that
\begin{align*}
\|F_m^{\xi}\|_{\square_{[H_1], [H_2]}^2(I)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)},
\end{align*}
where $H_1, H_2\simeq \delta^{O_{\mathcal P}(1)}N^{\deg{P_m}}$. By Lemma \ref{2.10} we can find a $\xi_0\in \KK$ such that
\begin{align}
\label{eq:69}
N_0^{-1}\big|\widehat{F_m^{\xi}}(\xi_0)\big| \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)},
\end{align}
since $N_0 = N^{\deg{P_m}}$.
By definitions \eqref{eq:55} and \eqref{eq:56} and making the change of variables $x\mapsto x-P_m(y)$ we may write
\begin{align*}
N_0^{-1}\widehat{F_m^{\xi}}(\xi_0)&=N_0^{-1}\iint_{\KK^2}F_{m;y}^{\xi}(x)e(-\xi_0 x)d\mu_{[N]}(y)d\mu(x)\\
&=N_0^{-1}\iint_{\KK^2}{\rm M}_{\xi_0}f_0(x)\prod_{i=1}^{m-1}f_{i}(x-P_i(y))e\Big(\xi_0 P_m(y)+\sum_{j=1}^n\xi_jQ_j(y)\Big)d\mu_{[N]}(y)d\mu(x)\\
&=M^{-1} \Lambda_{\mathcal P'; N}^{\mathcal Q', \xi'}({\rm M}_{\xi_0}f_0, f_1,\ldots, f_{m-1}),
\end{align*}
where ${\rm M}_{\xi_0}f_0(x):={\rm e}(-\xi_0 x)f_0(x), \, \mathcal P':=\mathcal P\setminus\{P_m\}, \, \mathcal Q':=\mathcal Q\cup\{P_m\}, \, \xi':=(\xi_0, \xi_1,\ldots, \xi_n)\in \KK^{n+1}$ and
$M:=N^{\deg(P_m)-\deg(P_{m-1})}$. The parameter $N_0' := N_0 M^{-1}$ is what appears in the $m$-linear form
$\Lambda_{\mathcal P';N}^{\mathcal Q', \xi'}$. We note that $N_0' = N^{\deg{P_{m-1}}}$.
Thus \eqref{eq:69} implies
\begin{align*}
M^{-1} |\Lambda_{\mathcal P'; N, I}^{\mathcal Q', \xi'}({\rm M}_{\xi_0}f_0, f_1,\ldots, f_{m-1})|\gtrsim \delta^{O(1)}.
\end{align*}
By translation invariance we may assume that all functions $f_0, f_1,\ldots, f_{m-1}$ are supported in $[N_0]$.
We can partition $[N_0] = \bigcup_{k\in \bra{L}} E_k$ into $L \simeq M$ sets, each with measure $\simeq N_0'$
contained in an interval $I_k$ lying in an $O(N_0')$ neighbourhood of $E_k$. Furthermore
$E_k$ is an $O(N_1)$ neighbourhood of a set $F_k$ such that $\mu(E_k\setminus F_k) \lesssim N_1$ and
${\rm supp}(\ind{F_k}*\mu_{[N_1]}) \subseteq E_k$. Here $N_1 \simeq \delta^{O_{\mathcal P}(1)} N^{\deg(P_1)}$.
In the non-archimedean setting, this decomposition is straightforward; in this case, we can take $F_k = E_k = I_k$.
In fact, if $N_0 = q^{n_0}$ and $N_0' = q^{n_0 - \ell}$ so that $M = q^{\ell}$, then
\begin{align*}
[N_0] \ = \ B_{q^{n_0}}(0) \ = \ \bigcup_{x \in {\mathcal F}} B_{q^{n_0 -\ell}}(x)
\end{align*}
gives our partition of $[N_0]$ where ${\mathcal F} = \{ x = \sum_{j=0}^{\ell-1} x_j \pi^{-n_0 + j} : x_j \in o_\KK/m_\KK \}$. Note $\# {\mathcal F} = q^{\ell} = M$. When $\KK = {\mathbb R}$, one simply decomposes the interval $[N_0] = [-N_0, N_0]$ into
$M$ subintervals $(E_k)_{k\in \bra{L}}$ of equal length, and then extends and shrinks these to obtain intervals $I_k$ and sets $F_k$ with the desired properties.
When $\KK = {\mathbb C}$, the set $[N_0]$ is a disc and the decomposition is not as straightforward but not difficult to construct
by starting with a mesh of squares of side length $\sqrt{N_0'}$ which cover $[N_0]$. It is important in this case
(when $\KK = {\mathbb C}$) that we allow the sets $E_k$ and $F_k$ to be general sets (not necessarily intervals)
with the above properties. The picture should be clear.
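For $\KK={\mathbb R}$ the construction just described can be made completely explicit. The sketch below builds such a partition of $[0, N_0)$ with illustrative numerical parameters: each cell $E_k$ is shrunk by $N_1$ to a core $F_k$, so that $\mu(E_k\setminus F_k)\lesssim N_1$ and the $N_1$-neighbourhood of $F_k$ stays inside $E_k$.

```python
# Toy real-line version of the partition; the numbers are illustrative.
N0, M, N1 = 1_000_000, 100, 50
N0p = N0 // M                                   # N0' = N0 / M

E = [(k * N0p, (k + 1) * N0p) for k in range(M)]
F = [(a + N1, b - N1) for a, b in E]            # cores, shrunk by N1

for (a, b), (c, d) in zip(E, F):
    assert a <= c and d <= b                    # F_k is a subset of E_k
    assert c - N1 >= a and d + N1 <= b          # N1-neighbourhood of F_k in E_k
    assert (b - a) - (d - c) == 2 * N1          # mu(E_k \ F_k) = 2 N1

# the cells are pairwise disjoint and tile [0, N0)
assert all(E[k][1] == E[k + 1][0] for k in range(M - 1))
assert E[0][0] == 0 and E[-1][1] == N0
```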
Hence by changing variables $x \to x+P_1(y)$ and then back again,
\begin{align*}
\ \ \ \ \ \ &M^{-1} \Lambda_{\mathcal P'; N, I}^{\mathcal Q', \xi'}({\rm M}_{\xi_0}f_0, f_1,\ldots, f_{m-1}) \\
= N_0^{-1} &\sum_{k\in \bra{L}} \iint_{E_k\times \KK}{\rm M}_{\xi_0}f_0(x)\prod_{i=1}^{m-1}f_{i}(x-P_i(y))e\Big(\xi_0 P_m(y)+\sum_{j=1}^n\xi_jQ_j(y)\Big)d\mu_{[N]}(y)d\mu(x)\\
= N_0^{-1}&\sum_{k \in \bra{L}}
\iint_{\KK^2}f_0^k(x)g^k(x-P_1(y))\prod_{i=2}^{m-1}f_{i}^k(x-P_i(y))e\Big(\xi_0 P_m(y)+\sum_{j=1}^n\xi_jQ_j(y)\Big)d\mu_{[N]}(y)d\mu(x)\\
&=M^{-1} \sum_{k\in \bra{L}} \Lambda_{\mathcal P'; N, I_k}^{\mathcal Q', \xi'}(f_0^k, g^k,f_2^k, \ldots, f_{m-1}^k),
\end{align*}
where
$f_0^k:={\rm M}_{\xi_0}f_0\ind{I_k}, f_2^k:=f_2\ind{I_k}, \ldots, f_{m-1}^k:=f_{m-1}\ind{I_k}$ and $g^k = f_1 \ind{E_k}$.
By the pigeonhole principle there exists $L_0\subseteq \bra{L}$ such that $\# L_0\gtrsim \delta^{O_{\mathcal P'}(1)}M$ and
for every $k\in L_0$ we have
\begin{align*}
|\Lambda_{\mathcal P'; N, I_k}^{\mathcal Q', \xi'}(f_0^k, g^k, f_2^k,\ldots, f_{m-1}^k)|\gtrsim \delta^{O_{\mathcal P'}(1)}.
\end{align*}
By the inductive hypothesis, we have
\begin{align*}
(N_0')^{-1}\big\|\mu_{[N_1]}*(f_1\ind{E_k})\big\|_{L^1(\KK)}\gtrsim \delta^{O_{\mathcal P'}(1)}
\end{align*}
for every $k\in L_0$ and for every $N_1 = \delta^C N^{\deg{P_1}}$ with $C \ge C_1(\mathcal P')$.
Note that
\begin{align}
\label{eq:1}
(N_0')^{-1} \|\mu_{[N_1]}*(f_1(\ind{E_k} - \ind{F_k}))\|_{L^1(\KK)} \lesssim N_1 (N_0')^{-1} \lesssim \delta^{C} \ \ \
\end{align}
and hence for $C\gg 1$ large enough,
\begin{align*}
(N_0')^{-1}\big\|\mu_{[N_1]}*(f_1\ind{F_k})\big\|_{L^1(\KK)}\gtrsim \delta^{O_{\mathcal P}(1)} \ \ \ {\rm for \ every}
\ \ k \in L_0.
\end{align*}
Summing over $k\in L_0$, and using the bound $\# L_0\gtrsim \delta^{O(1)}M$ together with the pairwise disjoint supports of
$(\mu_{[N_1]}*(f_1 \ind{F_k}))_{k\in \bra{L}}$,
we obtain
\begin{align*}
N_0^{-1}\Big\|\mu_{[N_1]}*\Big(\sum_{k\in \bra{L}}f_1\ind{F_k}\Big)\Big\|_{L^1(\KK)} &\ge N_0^{-1} \sum_{k\in \bra{L}} \big\|\mu_{[N_1]}*(f_1 \ind{F_k})\big\|_{L^1(\KK)} \\
&\ge M^{-1} \sum_{k\in L_0} (N_0')^{-1} \big\|\mu_{[N_1]}*(f_1 \ind{F_k})\big\|_{L^1(\KK)} \
\gtrsim \ \delta^{O_{\mathcal P}(1)},
\end{align*}
which by \eqref{eq:1} yields
\begin{align*}
N_0^{-1}\big\|\mu_{[N_1]}*f_1\big\|_{L^1(\KK)}
\gtrsim \ \delta^{O_{\mathcal P}(1)},
\end{align*}
as desired.
\end{proof}
We now state two auxiliary technical lemmas which will be needed in the
proof of Theorem \ref{thm:deg-low}. For $\omega = (\omega_1, \ldots, \omega_n) \in \{0,1\}^n$
and $h = (h_1, \ldots, h_n) \in \KK^n$, we write $\omega \cdot h = \sum_{i=1}^n \omega_i h_i$
and $1-\omega = (1-\omega_1, \ldots, 1-\omega_n)$.
\begin{lemma}
\label{lem:1}
Let $N\ge 1$ be a scale and let $0<\delta\le 1$, $m\in\Z_+$ with $m\ge 2$, $n\in\N$ and scales
$H_1,\ldots, H_n$ with each $H_i\le N$ be given. Assume that $\phi:X\to\RR$
is a measurable function defined on a measurable set
$X\subseteq H:=\prod_{i=1}^n [H_i]$. Let
$\mathcal P:=\{P_1,\ldots, P_{m}\}$ and
$\mathcal Q:=\{Q_1,\ldots, Q_n\}$ be collections of polynomials. For $\xi\in \KK^n$, let
$F_m^{\xi}$ be the dual function defined in \eqref{eq:55} that
corresponds to the form \eqref{eq:51} and $1$-bounded functions
$f_0, f_1,\ldots, f_{m-1}\in L^0(\KK)$ supported on an interval $I\subset \KK$ of
measure $N_0 := N^{\deg{P_m}}$. Suppose that
\begin{align}
\label{eq:57}
\int_X\big|N^{-1}_0\reallywidehat{{\Delta}_h^n F_m^{\xi}}(\phi(h))\big|^2
d\Big(\bigotimes_{i=1}^{n}\nu_{[H_i]}\Big)(h)\ge \delta.
\end{align}
Then
\begin{align}
\label{eq:81}
\int_{\square_n(X)}\bigg|N_0^{-1}\int_{\KK} F_m(x; h, h')e(-\psi(h, h')x)d\mu(x)\bigg|^2
d\Big(\bigotimes_{i=1}^{n}\nu_{[H_i]}\Big)^{\otimes2}(h, h')
\gtrsim \delta^{O(1)},
\end{align}
where
\begin{align*}
\square_n(X):=\Big\{(h, h')\in H^2: \omega \cdot h + (1-\omega)\cdot h' \in X \text{ for every } \omega\in\{0, 1\}^n\Big\},
\end{align*}
and
\begin{gather*}
F_m(x; h, h'):=\int_{\KK}{\Delta}_{h'-h}^nf_0(x+P_m(y))\prod_{i=1}^{m-1}
{\Delta}_{h'-h}^nf_{i}(x-P_i(y)+P_m(y))d\mu_{[N]}(y),\\
\psi(h, h'):=\sum_{\omega\in\{0, 1\}^n}(-1)^{|\omega|}\phi\Big(\omega \cdot h + (1-\omega)\cdot h'\Big).
\end{gather*}
\end{lemma}
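To fix ideas (this is only an illustration, not needed in the sequel), in the simplest case $n=1$ the quantities appearing in Lemma \ref{lem:1} reduce to
\begin{align*}
\square_1(X)=\big\{(h, h')\in [H_1]^2: h\in X \text{ and } h'\in X\big\},
\qquad
\psi(h, h')=\phi(h')-\phi(h),
\end{align*}
so that \eqref{eq:81} asserts that, on average over $(h, h')$, the differenced pattern $F_m(\,\cdot\,; h, h')$ has a large Fourier coefficient at the differenced frequency $\phi(h')-\phi(h)$.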
\begin{proof}
We shall write ${\bm \nu}_n:=\bigotimes_{i=1}^{n}\nu_{[H_i]}$.
Using \eqref{product}, \eqref{eq:55} and \eqref{eq:56} we see that the left-hand side of
\eqref{eq:57} can be written as
\begin{align*}
\frac{1}{N_0^2} \int_{\KK^{2^{n+1}}}\int_{\KK^2}\int_{\KK^n}G_0(x, z, h;{\bm y})
d{\bm \nu}_n(h)d\mu(x)d\mu(z)d\mu^{\otimes 2^{n+1}}_{[N]}({\bm y})
\end{align*}
where, for ${\bm y}=(y_{(\omega, 0)}, y_{(\omega, 1)})_{\omega\in \{0, 1\}^n}\in \KK^{2^{n+1}}$, $x, z\in \KK$ and $h\in \KK^n$, we have set
\begin{gather*}
G_0(x, z, h;{\bm y})
:=\ind{X}(h) {\rm e} (-\phi(h)(x-z))\prod_{\omega\in \{0, 1\}^n} {\mathcal C}^{|\omega|} F_{m; y_{(\omega, 0)}}^{\xi}(x+ h\cdot\omega)\overline{F_{m; y_{(\omega, 1)}}^{\xi}(z+ h\cdot\omega)} .
\end{gather*}
Write elements in $X$ as $(h_1, h)$ with $h_1 \in \KK$ and apply the Cauchy--Schwarz inequality in all but the $h_1$ variable
(noting that $(x,z) \to G_0(x, z, h;{\bm y})$ is supported in a product of intervals of measure $\simeq N_0^2$) to conclude
\begin{align}
\label{eq:58}
\frac{1}{N_0^2} \int_{\KK^{2^{n}}}\int_{\KK^2}\int_{\KK^{n-1}} H_0(x, z, h;{\bm y})
d{\bm \nu}_{n-1}(h)d\mu(x)d\mu(z)d\mu^{\otimes 2^{n}}_{[N]}({\bm y})\gtrsim \delta^{O(1)},
\end{align}
where
\begin{align*}
H_0(x, z, h;{\bm y}):=\bigg|\int_{\KK}G_0^1(x, z, (h_1, h);{\bm y})d\nu_{[H_1]}(h_1)\bigg|^2,
\end{align*}
and
\begin{gather*}
G_0^1(x, z, (h_1, h);{\bm y})
:=\ind{X}(h_1, h){\rm e}(-\phi(h_1, h)(x-z))\\
\times\prod_{\omega\in \{0, 1\}^{n-1}} {\mathcal C}^{|\omega|} F_{m; y_{(1, \omega, 0)}}^{\xi}(x+ (h_1, h)\cdot(1, \omega))
\overline{F_{m; y_{(1, \omega, 1)}}^{\xi}(z+ (h_1, h)\cdot(1, \omega))}
\end{gather*}
for ${\bm y}=(y_{(1, \omega, 0)}, y_{(1, \omega, 1)})_{\omega\in \{0, 1\}^{n-1}}\in \KK^{2^{n}}$ and $x, z, h_1\in \KK$, $h\in \KK^{n-1}$.
Expanding the square and
changing variables $x\mapsto x-h_1$ and $z\mapsto z-h_1$ we may rewrite \eqref{eq:58} as
\begin{gather}
\label{eq:59}
\nonumber \frac{1}{N_0^2} \int_{\KK^{2^{n}}}\int_{\KK^2}\int_{\KK^{n+1}} G_1(x, z, h_1, h_1', h;{\bm y})d\nu_{[H_1]}^{\otimes2}(h_1, h_1')
d{\bm \nu}_{n-1}(h)d\mu(x)d\mu(z)d\mu^{\otimes 2^{n}}_{[N]}({\bm y})\\
\gtrsim \delta^{O(1)},
\end{gather}
where
\begin{gather*}
G_1(x, z, h_1, h_1', h;{\bm y})
:=\ind{X}(h_1, h)\ind{X}(h_1', h) {\rm e}
(-(\phi(h_1, h)-\phi(h_1', h))(x-z))\\
\times\prod_{\omega\in \{0, 1\}^{n-1}} {\mathcal C}^{|\omega|}
\Delta_{h_1'-h_1}F_{m; y_{(1, \omega, 0)}}^{\xi}(x+ h\cdot\omega)
\overline{\Delta_{h_1'-h_1}F_{m; y_{(1, \omega, 1)}}^{\xi}(z+ h\cdot\omega)}.
\end{gather*}
Iteratively, for each $i\in\{2,\ldots, n\}$, we apply the Cauchy--Schwarz inequality in all but the $h_i$
variable to conclude that
\begin{align*}
\frac{1}{N_0^2}\int_{\KK^2}\int_{\KK^2}\int_{\square_n(X)} G_n(x, z, h, h';y, y')
d{\bm \nu}_n^{\otimes2}(h, h')d\mu(x)d\mu(z)d\mu^{\otimes 2}_{[N]}(y, y')
\gtrsim \delta^{O(1)},
\end{align*}
where
\begin{align*}
G_n(x, z, h, h';y, y'):={\Delta}_{h'-h}^nF_{m; y}^{\xi}(x)\overline{{\Delta}_{h'-h}^nF_{m; y'}^{\xi}(z)}
{\rm e}(-\psi((h, h'))(x-z)).
\end{align*}
We have arrived at \eqref{eq:81}, completing the proof of the lemma.
\end{proof}
The following lemma is a slight variant of a result found in \cite{PEL}.
\begin{lemma}
\label{lem:3}
Let a scale $N\ge 1$, $0<\delta\le 1$, $m\in\Z_+$ with $m\ge 2$, $n\in\N$ and scales
$H_1,\ldots, H_{n+1}$ with each $H_i \le N$ be given. We assume for every $i\in\bra{n}$ that $\varphi_i:\KK^n\to \KK$
is a measurable function independent of the variable $h_i$ in a vector $h=(h_1, \ldots, h_n)\in \KK^n$. Let
$\mathcal P:=\{P_1,\ldots, P_{m}\}$ and
$\mathcal Q:=\{Q_1,\ldots, Q_n\}$ be collections of polynomials. For $\xi\in \KK^n$ let
$F_m^{\xi}$ be the dual function defined in \eqref{eq:55} that
corresponds to the form \eqref{eq:51} and $1$-bounded functions
$f_0, f_1,\ldots, f_{m-1}\in L^0(\KK)$ supported on an interval $I\subset \KK$ of
measure $N_0 = N^{\deg{P_m}}$. Suppose that
\begin{align}
\label{eq:82}
\int_{\KK^n}\Big|N^{-1}_0\reallywidehat{{\Delta}_h^n F_m^{\xi}}\Big(\sum_{i=1}^n\varphi_i(h)\Big)\Big|^2
d\Big(\bigotimes_{i=1}^{n}\nu_{[H_i]}\Big)(h)\ge \delta.
\end{align}
Then
\begin{align*}
\|F_m^{\xi}\|_{\square_{[H_1],\ldots, [H_{n+1}]}^{n+1}(I)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)}.
\end{align*}
\end{lemma}
\begin{proof}
We shall write as before ${\bm \nu}_n:=\bigotimes_{i=1}^{n}\nu_{[H_i]}$
and also let ${\bm \mu}_n:=\bigotimes_{i=1}^n\mu_{[H_i]}$. Expanding the Fej{\'e}r kernel we may write
the left-hand side of \eqref{eq:82} as
\begin{gather*}
{\mathcal I}:=\int_{\KK^n}\Big|N_0^{-1}\reallywidehat{{\Delta}^n_hF_m^{\xi}}\Big(\sum_{i=1}^n\varphi_i(h)\Big)\Big|^2
d{\bm\nu}_n(h)\\
=
\int_{\KK^{2n}}\Big|N_0^{-1}\reallywidehat{{\Delta}_{h-h'}^nF_m^{\xi}}\Big(\sum_{i=1}^n\varphi_i(h-h')\Big)\Big|^2
d{\bm\mu}_n^{\otimes2}(h, h')\\
=\frac{1}{N^{2}_0}\int_{\KK^{2n+2}}{\Delta}_{h-h'}^nF_m^{\xi}(x)\overline{{\Delta}_{h-h'}^nF_m^{\xi}(z)}e\Big(-\sum_{i=1}^n\varphi_i(h-h')(x-z)\Big)d{\bm\mu}_n^{\otimes2}(h, h')d\mu(x)d\mu(z).
\end{gather*}
We apply the Cauchy--Schwarz inequality in the $x, z$ and $h'$
variables and Corollary \ref{GCS-indep} to deduce that
\begin{align*}
{\mathcal I}^{2^n}&\le
\frac{1}{N_0^{2}}\int_{\KK^{3n+2}} \prod_{\omega\in\{0, 1\}^n}\mathcal C^{|\omega|}\big({\Delta}_{h^{(\omega)}-h}^nF_m^{\xi}(x)
\overline{{\Delta}_{h^{(\omega)}-h}^nF_m^{\xi}(z)}\big)d{\bm\mu}_n^{\otimes3}(h^{(0)}, h^{(1)}, h)d\mu(x)d\mu(z)\\
=& \frac{1}{N_0^2} \int_{\KK^{3n}}
{\mathcal A}(x,z, h_n', h^{(0)}, h^{(1)}, h'){\mathcal B}(x,z,h^{(0)}, h^{(1)}, h') d\mu(x) d\mu(z) d{\bm\mu}_{n-1}^{\otimes3}(h^{(0)}, h^{(1)}, h')d\mu_{[H_n]}(h_n'),
\end{align*}
where
\begin{align*}
{\mathcal A}(x,z,h_n', h^{(0)}, h^{(1)}, h') :=&
\int_{\KK^2} \prod_{\omega'\in\{0, 1\}^{n-1}}{\mathcal C}^{|\omega'|}
\Bigl[{\Delta}_{h^{(\omega')}-h'}^{n-1} \Bigl( F_m^{\xi}(x + h_n^0 - h_n')\\
&\times{\overline{F_m^{\xi}(x+h_n^1 - h_n') F_m^{\xi}(z+ h_n^0 - h_n')}} F_m^{\xi}(z + h_n^1 - h_n') \Bigr)\Bigr]
d\mu_{[H_n]}^{\otimes2}(h_n^0, h_n^1),
\end{align*}
\begin{align*}
{\mathcal B}(x,z,h^{(0)}, h^{(1)}, h'):=\prod_{\omega'\in\{0, 1\}^{n-1}} {\mathcal C}^{|\omega'|}
\Bigl[{\Delta}_{h^{(\omega')}-h'}^{n-1} \Bigl( |F_m^{\xi}(x)|^2 |F_m^{\xi}(z)|^2 \Bigr)\Bigr].
\end{align*}
Since ${\mathcal A}\ge 0$, we see that
\begin{align*}
{\mathcal I}^{2^n} \le &N_0^{-2} \int_{\KK^{3n}} {\mathcal A}(x,z, h_n', h^{(0)}, h^{(1)}, h') d\mu(x) d\mu(z) d{\bm\mu}_{n-1}^{\otimes3}(h^{(0)}, h^{(1)}, h')d\mu_{[H_n]}(h_n')\\
&=
\frac{1}{N_0^2} \int_{\KK^{3n+1}}
\prod_{\omega'\in\{0, 1\}^{n-1}}{\mathcal C}^{|\omega'|}
\Bigl[{\Delta}_{h^{(\omega')}-h'}^{n-1} \Bigl( F_m^{\xi}(x + h_n^0)\overline{F_m^{\xi}(x+h_n^1)}\\
&\hspace{3cm} \times
\overline{F_m^{\xi}(z+ h_n^0)} F_m^{\xi}(z + h_n^1) \Bigr)\Bigr] d\mu(x) d\mu(z)
d{\bm\mu}_{n}^{\otimes2}(h^{(0)}, h^{(1)}) d{\bm\mu}_{n-1}(h')\\
&= \frac{1}{N_0^2} \int_{\KK^{3n+1}}
\mathcal C(x,z,h^{(0)}, h^{(1)}, h') d\mu(x) d\mu(z)
d{\bm\mu}_{n}^{\otimes2}(h^{(0)}, h^{(1)}) d{\bm\mu}_{n-1}(h'),
\end{align*}
where
\[
\mathcal C(x,z,h^{(0)}, h^{(1)}, h'):=\prod_{\omega'\in\{0, 1\}^{n-1}}{\mathcal C}^{|\omega'|}
\Bigl[{\Delta}_{h^{(\omega')}-h'}^{n-1} {\Delta}_{h_n^1 - h_n^0}\Bigl( F_m^{\xi}(x)
{\overline{F_m^{\xi}(z)}}\Bigr)\Bigr].
\]
In the penultimate equality we made the change of variables $x \to x -h_n^0+h_n'$ and $z \to z - h_n^0+ h_n'$.
Now proceeding inductively we see that
$$
{\mathcal I}^{2^n} \le \frac{1}{N_0^2}\int_{\KK^{2n+2}} {\Delta}_{h-h'}^n F_m^{\xi}(x) {\overline{{\Delta}^n_{h-h'} F_m^{\xi}(z)}}
d\mu(x) d\mu(z)
d{\bm\mu}_{n}^{\otimes2}(h, h').
$$
Inserting an extra average in the $x$ variable and using the pigeonhole principle to fix $z$, it follows that
\begin{align*}
{\mathcal I}^{2^n}\le \frac{1}{N_0}\int_{\KK^{2n+1}}
\overline{{\Delta}_{h-h'}^nF_m^{\xi}(z)}\int_{\KK}{\Delta}_{h-h'}^nF_m^{\xi}(x+w)d\mu_{[H_{n+1}]}(w)d{\bm\mu}_n^{\otimes2}(h,h')d\mu(x).
\end{align*}
To conclude we apply the Cauchy--Schwarz inequality to double the $w$ variable and so
$$
\delta^{2^{n+1}} \le {\mathcal I}^{2^{n+1}} \le \frac{1}{N_0} \int_{\KK^{2n+3}} {\Delta}_{h-h'}^{n+1} F_m^{\xi}(x) d{\bm\mu}_{n+1}^{\otimes2}(h,h')d\mu(x)
= \|F_m^{\xi}\|^{2^{n+1}}_{\square_{[H_1],\ldots, [H_{n+1}]}^{n+1}(I)}.
$$
This completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:deg-low}]
The proof is by induction on $m\in \Z_+$ and will consist of
several steps. We begin by establishing the following claim.
\begin{claim}
\label{claim:3}
Let $N\ge 1$ be a scale, $0<\delta\le 1$, $m\in\Z_+$ with $m\ge 2$ and $n\in\N$ be given. Let
$\mathcal P:=\{P_1,\ldots, P_{m}\}$ and
$\mathcal Q:=\{Q_1,\ldots, Q_n\}$ be collections of polynomials such
that
$$
1\le \deg{P_1}<\ldots<\deg{P_{m}}<\deg{Q_1}<\ldots<\deg{Q_n}.
$$
For $\xi\in \KK^n$ let $F_m^{\xi}$ be the dual function defined in \eqref{eq:55} that corresponds to the form \eqref{eq:51} and
$1$-bounded functions $f_0, f_1,\ldots, f_{m-1}\in L^0(\KK)$ supported
on an interval $I\subset \KK$ of measure $N_0 := N^{\deg{P_m}}$.
Suppose that
\begin{align}
\label{eq:60}
N^{-1}_0\big|\widehat{F_m^{\xi}}(\zeta)\big| \ge \delta.
\end{align}
Then for any sufficiently large constant $C\gtrsim_{\mathcal P, \mathcal Q}1$ one has
\begin{align}
\label{eq:80}
|\zeta|\lesssim \delta^{-C} N^{-\deg(P_m)},
\quad \text{ and } \quad
|\xi_j|\lesssim \delta^{-C} N^{-\deg(Q_j)}
\quad \text{ for all } \quad j\in\bra{n}.
\end{align}
\end{claim}
The proof of Claim \ref{claim:3} for each integer $m\ge2$ is itself
part of the inductive proof of Theorem \ref{thm:deg-low}. In the
first step we prove Claim \ref{claim:3} for $m=2$. In the second step
we show that Claim \ref{claim:3} for all integers $m\ge2$ implies
Theorem \ref{thm:deg-low}, this in particular will establish Theorem
\ref{thm:deg-low} for $m=2$. In the third step we finally show that
Claim \ref{claim:3} for all integers $m\ge3$ follows from Claim
\ref{claim:3} and Theorem \ref{thm:deg-low} for $m-1$. Taken together,
this shows that Claim \ref{claim:3} and Theorem \ref{thm:deg-low} hold
for each integer $m\ge2$, completing the proof of Theorem \ref{thm:deg-low}.
\paragraph{\bf Step 1.}
We now prove Claim \ref{claim:3} for $m=2$. Here $N_0 = N^{\deg{P_2}}$.
For $\zeta_1, \zeta_2\in \KK$ and $\xi\in \KK^n$ we define the multiplier
\begin{align*}
m_N(\zeta_1, \zeta_2, \xi):=\int_{B_1(0)} {\rm e}\Big(-\zeta_1P_1(\alpha y)+\zeta_2 P_2(\alpha y)+\sum_{j=1}^n\xi_jQ_j(\alpha y)\Big)
d\mu(y)
\end{align*}
where $\alpha \in \KK$ satisfies $|\alpha| = N$.
By definitions \eqref{eq:55} and \eqref{eq:56} and making the change of variables $x\mapsto x-P_2(y)$ we may write
\begin{align*}
N_0^{-1}\widehat{F_2^{\xi}}(\zeta_2)&=N_0^{-1}\int_{\KK}\int_{\KK}F_{2;y}^{\xi}(x){\rm e}(-\zeta_2 x)d\mu_{[N]}(y)d\mu(x)\\
&=N_0^{-1}\int_{\KK}\widehat{f}_0(\zeta_2-\zeta_1)\widehat{f}_1(\zeta_1)m_N(\zeta_1, \zeta_2, \xi)d\zeta_1.
\end{align*}
By the Cauchy--Schwarz inequality and Plancherel's theorem we obtain
\begin{align*}
\delta\le N_0^{-1}|\widehat{F_2^{\xi}}(\zeta_2)|\lesssim N_0^{-1}\|f_0\|_{L^2(\KK)}
\|f_1\|_{L^2(\KK)}\sup_{\zeta_1\in \KK}|m_N(\zeta_1, \zeta_2, \xi)|,
\end{align*}
which gives for some $\zeta_1\in \KK$ that
\begin{align*}
\delta\lesssim |m_N(\zeta_1, \zeta_2, \xi)|,
\end{align*}
since $\|f_0\|_{L^2(\KK)}, \|f_1\|_{L^2(\KK)}\lesssim N_0^{1/2}$. Applying Lemma \ref{lem:vdc-osc} with $\mathcal P=\{-P_1, P_2\}$ and $\mathcal Q=\{Q_1,\ldots, Q_n\}$ we deduce that for every sufficiently large $C\gtrsim 1$ one has
\begin{align*}
|\zeta_j|\lesssim \delta^{-C}N^{-\deg(P_j)}
\quad \text{ for all } \quad j\in\bra{2}, \quad\text{ and } \quad
|\xi_j|\lesssim \delta^{-C} N^{-\deg(Q_j)}
\quad \text{ for all } \quad j\in\bra{n}.
\end{align*}
This completes the proof of Claim \ref{claim:3} for $m=2$.
\paragraph{\bf Step 2.}
In this step we show that Claim \ref{claim:3} for all integers $m\ge2$
implies Theorem \ref{thm:deg-low}. In view of Step 1. this will in
particular establish Theorem \ref{thm:deg-low} for $m=2$, which is the
base case of our double induction. As before we shall write
${\bm \nu}_{j}:=\bigotimes_{i=1}^{j}\nu_{[H_i]}$ for any
$j\in\Z_+$. Recall that
$N_0 = N^{\deg(P_m)}$
and note
\begin{align*}
\|F_m^{\xi}\|_{\square_{[H_1], \ldots, [H_s]}^s(I)}^{2^s}=
\int_{\KK^{s-2}}\|{\Delta}_h^{s-2}F_m^{\xi}\|_{\square_{[H_{s-1}], [H_s]}^2(I)}^4
d{\bm \nu}_{s-2}(h).
\end{align*}
By \eqref{eq:67} and the pigeonhole principle
there exists a measurable set
$X\subseteq \prod_{i=1}^{s-2}[H_i]$ so that ${\bm\nu}_{s-2}(X)\gtrsim \delta^{O(1)}$, and
for all $h \in X$ one has
\begin{align*}
\|{\Delta}_h^{s-2}F_m^{\xi}\|_{\square_{[H_{s-1}], [H_s]}^2(I)}
\gtrsim\delta^{O(1)}.
\end{align*}
Here we used that $\supp F_m^{\xi}$ is a subset of an
interval whose measure is at most
$O(N_0)$.
By Lemma \ref{2.10} we have
\begin{align*}
N_0^{-1}\bigl\|\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}\bigr\|_{L^{\infty}(\KK)} \gtrsim \delta^{O(1)}.
\end{align*}
Next we claim that there is a countable set ${\mathcal F} \subset \KK$, depending on $N$ and $\delta$, such that
\begin{align}\label{sup-F}
\sup_{\phi\in {\mathcal F}}N_0^{-1}\big|\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}(\phi)\big| \gtrsim \delta^{C_0}
\end{align}
for some absolute constant $C_0\in\Z_+$ and for all $h\in X$. When $\KK$ is non-archimedean, we take
$$
{\mathcal F} \ = \ \bigcup_{M\ge 1} \, \Bigl\{ z = \sum_{j= -M}^{L-1} z_j \pi^j \in \KK : z_j \in o_\KK/m_\KK\Bigr\}
$$
where $N_0 = q^L$. Let $x \in I = B_{N_0}(x_0)$. For any $\zeta \in \KK$, we have $\zeta \in B_{N_0^{-1}}(\zeta_0)$
for some $\zeta_0 \in {\mathcal F}$. Note that
$$
{\rm e}(-\zeta x) \ = \ {\rm e}(-x \zeta_0) \, {\rm e}(-(x-x_0)(\zeta - \zeta_0)) \, {\rm e}(-x_0(\zeta - \zeta_0)) \ = \
{\rm e}(-x \zeta_0) \, {\rm e}(-x_0(\zeta - \zeta_0))
$$
since $|(x-x_0)(\zeta - \zeta_0)| \le N_0 N_0^{-1} = 1$ and ${\rm e} = 1$ on $o_\KK$. Therefore
$|\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}(\zeta)| = |\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}(\zeta_0)|$
since ${\Delta}_h^{s-2}F_m^{\xi}$ is supported in $I$ whenever $h \in X$. This shows that \eqref{sup-F} holds for
non-archimedean fields.
When $\KK= {\mathbb R}$, we take ${\mathcal F} := T_0 {\mathbb Z}$, where
\begin{align*}
T_0:=\delta^{C_0}\big(CN_0\big)^{-1}
\end{align*}
for a sufficiently large constant $C\gtrsim1$. When $\KK = {\mathbb C}$, we take ${\mathcal F} := T_1 {\mathbb Z}^2$
where $T_1 := \delta^{C_0} (C \sqrt{N_0})^{-1}$. By the Lipschitz nature of characters on ${\mathbb R}$ or ${\mathbb C}$,
we again see that \eqref{sup-F} holds in the archimedean cases.
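To spell out the Lipschitz argument in a minimal sketch when $\KK = {\mathbb R}$ (the case $\KK = {\mathbb C}$ is analogous): after a translation we may assume that $I$ is centred at the origin. Given $\zeta\in\RR$, pick a nearest point $\zeta_0\in T_0{\mathbb Z}$, so that $|\zeta-\zeta_0|\le T_0$; then for $x\in I$,
\begin{align*}
|{\rm e}(-\zeta x)-{\rm e}(-\zeta_0 x)|\lesssim |x||\zeta-\zeta_0|\lesssim N_0 T_0 \simeq \delta^{C_0}C^{-1},
\end{align*}
and since the differenced dual function is $O(1)$-bounded and supported on a set of measure $O(N_0)$,
\begin{align*}
N_0^{-1}\big|\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}(\zeta)-\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}(\zeta_0)\big|\lesssim \delta^{C_0}C^{-1},
\end{align*}
which for $C\gtrsim1$ sufficiently large is negligible compared with the lower bound in \eqref{sup-F}.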
In particular, there exists a measurable function $\phi: X \to {\mathcal F}$ so
that
\begin{align}
\label{eq:64}
N^{-1}_0\big|\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}(\phi(h))\big|\gtrsim \delta^{C_0}
\end{align}
for all $h\in X$. If necessary, we may additionally assume that the range of $\phi$ is finite.
By Lemma \ref{lem:1} it follows that
\begin{align*}
\int_{\square_{s-2}(X)}\bigg|N^{-1}_0\int_{\KK} F_m(x; h, h')e(-\psi((h, h'))x)d\mu(x)\bigg|^2
d{\bm \nu}_{s-2}^{\otimes2}(h, h')
\gtrsim \delta^{O(1)},
\end{align*}
where
\begin{gather*}
F_m(x; h, h'):=\int_{\KK}{\Delta}_{h'-h}^{s-2}f_0(x+P_m(y))\prod_{i=1}^{m-1}{\Delta}_{h'-h}^{s-2}f_{i}(x-P_i(y)+P_m(y))d\mu_{[N]}(y),\\
\psi(h, h'):=\sum_{\omega\in\{0, 1\}^{s-2}}(-1)^{|\omega|}\phi\big(\omega\cdot h+(1-\omega)\cdot h'\big).
\end{gather*}
Thus by the pigeonhole principle, there exists a measurable set $X_0\subseteq \square_{s-2}(X)$ with
${\bm \nu}_{s-2}^{\otimes2}(X_0)\gtrsim\delta^{O(1)}$ such that for every $(h, h')\in X_0$ one has
\begin{align*}
\bigg|N^{-\deg(P_m)}\int_{\KK} F_m(x; h, h')e(-\psi((h, h'))x)d\mu(x)\bigg|
\gtrsim \delta^{O(1)}.
\end{align*}
By Claim \ref{claim:3} there is a $c:=c_{m, s}\ge1$ such that for each $(h, h')\in X_0$, one has
\begin{align*}
|\psi((h, h'))|\lesssim_{m, s} \delta^{-c} N^{-\deg(P_m)}.
\end{align*}
By the pigeonhole principle there exists $h'\in \prod_{i=1}^{s-2}[H_i]$ and a measurable set
\[
X_0(h'):=\big\{h\in X: (h, h')\in X_0\ \text{ and } \ |\psi((h, h'))|\lesssim \delta^{-c} N^{-\deg(P_m)}\big\}
\]
satisfying ${\bm \nu}_{s-2}(X_0(h'))\gtrsim \delta^{O(1)}$.
Since $\psi((h, h'))\in {\mathcal F}$ we see that
\begin{align*}
X_0(h')\subseteq\bigcup_{k\in {\mathcal K}}X_{0}^k(h')
\end{align*}
where ${\mathcal K} = [{O(\delta^{-O(1)})}]\cap{\mathbb Z}$ when $\KK = {\mathbb R}$. In this case,
$X_{0}^k(h'):=\{h\in X: \psi(h, h')=T_0k\}$. When $\KK = {\mathbb C}$, we have
${\mathcal K} = [{O(\delta^{-O(1)})}]\cap{\mathbb Z}^2$ and
$X_{0}^k(h'):=\{h\in X: \psi(h, h')=T_1 k\}$. Finally, when $\KK$ is non-archimedean,
$$
{\mathcal K} = \bigl[{O(\delta^{-O(1)})}\bigr] \cap \bigl\{ k = \sum_{j= -M}^{-1} k_j \pi^j \in \KK : k_j \in o_\KK/m_\KK\bigr\}
$$
and $X_{0}^k(h') := \{h\in X: \psi(h,h') = \pi^L k \}$.
Thus by the pigeonhole principle there is $k_0\in {\mathcal K}$ such that
${\bm \nu}_{s-2}(X_0^{k_0}(h'))\gtrsim \delta^{O(1)}$.
When $\KK = {\mathbb R}$, this shows that
$\psi(h, h')=T_0 k_0 =: \phi_m$ for all $h\in X_0^{k_0}(h')$. When $\KK = {\mathbb C}$, we have
$\psi(h, h') = T_1 k_0$ for all $h \in X_0^{k_0}(h')$, and when $\KK$ is non-archimedean, $\psi(h, h') = \pi^L k_0$ for all
$h \in X_0^{k_0}(h')$. We will denote these values by $\phi_m$ in all cases.
Set
\begin{align*}
\psi_1(h, h'):=(-1)^{s+1}\sum_{\substack{\omega\in\{0,1\}^{s-2} \\ \omega_1=0}}(-1)^{|\omega|}\phi\big(\omega\cdot h+(1-\omega)\cdot h'\big) + (-1)^s \phi_m
\end{align*}
and, for $i\in\bra{s-2}\setminus\{1\}$, set
\begin{align*}
\psi_i(h, h'):=(-1)^{s+1}\sum_{\substack{\omega\in\{0,1\}^{s-2} \\ \omega_1=\ldots=\omega_{i-1}=1 \\ \omega_{i}=0}}(-1)^{|\omega|}\phi\big(\omega\cdot h+(1-\omega)\cdot h'\big).
\end{align*}
Note that $\psi_i$ does not depend on $h_{i}$ and we can write
\begin{align*}
\phi(h)=\sum_{i=1}^{s-2}\psi_i(h, h').
\end{align*}
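To verify this identity for $h\in X_0^{k_0}(h')$, note that the term $\omega=(1,\ldots, 1)$ in the definition of $\psi$ contributes $(-1)^{s-2}\phi(h)$, so the relation $\psi(h, h')=\phi_m$ can be solved for $\phi(h)$:
\begin{align*}
\phi(h)=(-1)^{s}\phi_m+(-1)^{s+1}\sum_{\substack{\omega\in\{0,1\}^{s-2}\\ \omega\neq(1,\ldots,1)}}(-1)^{|\omega|}\phi\big(\omega\cdot h+(1-\omega)\cdot h'\big).
\end{align*}
Grouping the terms of this sum according to the first coordinate $i$ at which $\omega_i=0$ recovers exactly the functions $\psi_1,\ldots, \psi_{s-2}$ above.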
Averaging \eqref{eq:64} over $\mathbb{X}:=X_0^{k_0}(h')$ and using positivity, we obtain
\begin{gather*}
\int_{\KK^{s-2}}\Big|N_0^{-1}\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}\Big(\sum_{i=1}^{s-2}\psi_i(h, h')\Big)\Big|^2
d{\bm\nu}_{s-2}(h)\\
\ge \int_{\mathbb{X}}\Big|N_0^{-1}\reallywidehat{{\Delta}_h^{s-2}F_m^{\xi}}(\phi(h))\Big|^2
d{\bm\nu}_{s-2}(h)\gtrsim \delta^{O(1)}.
\end{gather*}
Invoking Lemma \ref{lem:3} we conclude that
\begin{align*}
\|F_m^{\xi}\|_{\square_{[H_1], \ldots, [H_{s-1}]}^{s-1}(I)}\gtrsim \delta^{O(1)}.
\end{align*}
\paragraph{\bf Step 3.}
Gathering together the conclusions of Step 1. and Step 2. (for $m=2$), we see
that the base step of a double induction has been established. In this step we
shall establish the inductive step. We assume that
Claim \ref{claim:3} and Theorem \ref{thm:deg-low} hold with $m-1$ in
place of $m$ for some integer $m\ge3$. Then we will prove that Claim
\ref{claim:3} holds for $m$, which in view of Step 2 will allow
us to deduce that Theorem \ref{thm:deg-low} also holds for $m$.
This will complete the proof of Theorem \ref{thm:deg-low}.
Recall that $N_0 = N^{\deg(P_m)}$. By definitions \eqref{eq:55} and
\eqref{eq:56} and making the change of variables $x\mapsto x-P_m(y)$
we may write
\begin{align*}
N_0^{-1}\widehat{F_m^{\xi}}(\zeta_m)&=N_0^{-1}\int_{\KK^2}F_{m;y}^{\xi}(x){\rm e}(-\zeta_m x)d\mu(x)d\mu_{[N]}(y)\\
=N_0^{-1}&\int_{\KK^2}{\rm M}_{\zeta_m}f_0(x)\prod_{i=1}^{m-1}f_{i}(x-P_i(y)){\rm e}\Big(\zeta_m P_m(y)+\sum_{j=1}^n\xi_jQ_j(y)\Big)d\mu_{[N]}(y)d\mu(x)\\
&=: M^{-1} \Lambda_{\mathcal P'; N}^{\mathcal Q', \xi'}({\rm M}_{\zeta_m}f_0, f_1,\ldots, f_{m-1}),
\end{align*}
where ${\rm M}_{\zeta_m}f_0(x):={\rm e}(-\zeta_m x)f_0(x)$, $\mathcal P':=\mathcal P\setminus\{P_m\}$, $\mathcal Q':=\mathcal Q\cup\{P_m\}$, $\xi':=(\zeta_m, \xi_1,\ldots, \xi_n)\in \KK^{n+1}$ and $M = N_0 {N_0'}^{-1}$ where
$N_0'$ is the scale $N^{\deg(P_{m-1})}$.
Thus \eqref{eq:60} implies
\begin{align*}
M^{-1}|\Lambda_{\mathcal P'; N}^{\mathcal Q', \xi'}({\rm M}_{\zeta_m}f_0, f_1,\ldots, f_{m-1})|\gtrsim \delta^{O(1)}.
\end{align*}
As in the proof of Theorem \ref{thm:inverse},
by the pigeonhole principle, we can find an interval
$I'\subset \KK$ of measure about $ N_0'$
such that
\begin{align*}
|\Lambda_{\mathcal P'; N, I'}^{\mathcal Q', \xi'}(f_0', f_1',\ldots, f_{m-1}')|\gtrsim \delta^{O(1)},
\end{align*}
where
$f_0':={\rm M}_{\zeta_m}f_0\ind{I'}, f_1':=f_1\ind{I'},\ldots, f_{m-1}':=f_{m-1}\ind{I'}$.
Consequently, by Proposition \ref{prop:dual}, there exists an $s\in \Z_+$ such that
\begin{align*}
\|F_{m-1}^{\xi'}\|_{\square_{[H_1], \ldots, [H_s]}^s(N_0')}
\gtrsim \delta^{O(1)},
\end{align*}
where $F_{m-1}^{\xi'}$ is the dual function corresponding to the form
$\Lambda_{\mathcal P'; N, I'}^{\mathcal Q', \xi'}(f_0', f_1',\ldots, f_{m-1}')$ and
$H_i\simeq \delta^{O_{\mathcal P'}(1)}N^{\deg(P_{m-1})}$ for $i\in\bra{s}$. By the induction hypothesis (for Theorem \ref{thm:deg-low})
we deduce that
\begin{align*}
\|F_{m-1}^{\xi'}\|_{\square_{[H_1], [H_2]}^2(N_0')}
\gtrsim \delta^{O(1)},
\end{align*}
which in turn by Lemma \ref{2.10} implies
\begin{align*}
(N'_0)^{-1}\big|\widehat{F_{m-1}^{\xi'}}(\zeta_{m-1})\big| \gtrsim \delta^{O(1)}
\end{align*}
for some $\zeta_{m-1}\in \KK$. By the induction hypothesis (for Claim \ref{claim:3}) we deduce that
\begin{align*}
&|\zeta_j|\lesssim \delta^{-C} N^{-\deg(P_j)}
\qquad \text{ for all } \quad j\in \bra{m}\setminus\bra{m-2}, \quad\text{ and }\\
&|\xi_j|\lesssim \delta^{-C} N^{-\deg(Q_j)}
\qquad \text{ for all } \quad j\in\bra{n},
\end{align*}
which in particular implies \eqref{eq:80} and we are done.
\end{proof}
\section{Sobolev estimates}
\label{sec:sobolev}
As a consequence of the $L^\infty$-inverse theorem from the previous
section we establish some Sobolev estimates, which will be critical in
the proof of Theorem \ref{thm:main}.
We begin with a smooth variant of Theorem \ref{thm:inverse}. When $\KK$ is archimedean,
we fix a Schwartz function $\varphi$ on $\KK$ so that
\begin{align*}
\ind{[1]}(\xi)\le \widehat{\varphi}(\xi)\le \ind{[2]}(\xi), \qquad \xi\in \KK.
\end{align*}
When $\KK={\mathbb R}$, we set $\varphi_N(x) = N^{-1} \varphi(N^{-1} x)$ for any
$N>0$ and when $\KK = {\mathbb C}$, we set
$\varphi_{N}(z) = N^{-1} \varphi(N^{-1/2} z)$ for any $N > 0$. When
$\KK$ is non-archimedean, we set $\varphi(x) = \ind{B_1(0)}(x)$ so that $\widehat{\varphi}(\xi) = \ind{B_1(0)}(\xi)$ and
we set $\varphi_{N}(x) = N^{-1} \ind{[N]}(x)$ for any scale $N$.
\begin{theorem}[A smooth variant of the inverse theorem]
\label{thm:inverse-s}
Let $N\ge 1$ be a scale, $0<\delta\le 1$, $m\in\Z_+$ be given. Let $\mathcal P:=\{P_1,\ldots, P_m\}$
be a collection of polynomials such that
$1\le \deg{P_1}<\ldots<\deg{P_m}$.
Let $f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions
supported on an interval $I\subset \KK$ of measure $N_0 = N^{\deg{P_m}}$.
Suppose that the $(m+1)$-linear form defined in
\eqref{eq:6} satisfies
\begin{align}
\label{eq:63}
|\Lambda_{\mathcal P; N}(f_0,\ldots, f_m)|\ge\delta.
\end{align}
Then for any $j\in\bra{m}$ there exists an absolute constant $C_j\gtrsim_{\mathcal P}1$ so that
\begin{align}
\label{eq:19-s}
N_0^{-1}\big\| \varphi_{N_j}*f_j\big\|_{L^1(\KK)} \gtrsim_{\mathcal P} \delta^{O_{\mathcal P}(1)},
\end{align}
where $N_j\simeq \delta^{C_j}N^{\deg(P_j)}$, provided $N \gtrsim \delta^{-O_{\mathcal P}(1)}$.
\end{theorem}
\begin{proof}
By translation invariance we can assume that $f_j$ is supported on $[N_0]$ for every $j\in\bra{m}$.
The proof will consist of two steps. In the
first step we will invoke Theorem \ref{thm:inverse} to prove
\eqref{eq:19-s} for $j=1$. In the second step we will use \eqref{eq:19-s}
for $j=1$ to establish \eqref{eq:19-s} for $j=2$, and continuing
inductively we will obtain \eqref{eq:19-s} for all $j\in\bra{m}$.
\paragraph{{\bf Step 1.}} We first establish \eqref{eq:19-s} for $j=1$. When $\KK$ is non-archimedean,
this is an immediate consequence of Theorem \ref{thm:inverse} since $\varphi_{N_1} = \mu_{[N_1]}$
in this case. Nevertheless we make the observation that
\begin{align}\label{var-form}
|\Lambda_{\mathcal P; N}(f_0,\varphi_{N_1}*f_1, \ldots, f_m)|\gtrsim\delta
\end{align}
holds. In fact we will see that \eqref{var-form} holds for any $\KK$, non-archimedean or archimedean. First let us
see \eqref{var-form} when $\KK$ is non-archimedean. Suppose that $|\Lambda_{\mathcal P; N}(f_0,\varphi_{N_1}*f_1, \ldots, f_m)| \le c \, \delta$
for some small $c>0$. Then, since
$$
\delta \le |\Lambda_{\mathcal P; N}(f_0,f_1, \ldots, f_m)| \le
|\Lambda_{\mathcal P; N}(f_0,\varphi_{N_1}*f_1, \ldots, f_m)| +
|\Lambda_{\mathcal P; N}(f_0,f_1 -\varphi_{N_1}*f_1, \ldots, f_m)|,
$$
we conclude that $|\Lambda_{\mathcal P; N}(f_0,f_1 - \varphi_{N_1}*f_1, \ldots, f_m)|\gtrsim\delta$. Therefore Theorem \ref{thm:inverse}
implies that $N_0^{-1} \|\varphi_{N_1} *(f_1 - \varphi_{N_1} * f_1)\|_{L^1(\KK)} \gtrsim \delta^{O(1)}$, but this is a
contradiction since $\varphi_{N_1} * \varphi_{N_1} = \varphi_{N_1}$ when $\KK$ is non-archimedean (in which case
$\varphi_{N_1} = N_1^{-1} \ind{[N_1]}$) and so $\varphi_{N_1} * (f_1 - \varphi_{N_1}* f_1) \equiv 0$.
We now turn to establish \eqref{eq:19-s} for $j=1$ when $\KK$ is archimedean (when $\KK = {\mathbb R}$ or $\KK = {\mathbb C}$).
Let $\eta:\KK \to [0, \infty)$ be a Schwartz function so that $\int_{\KK}\eta=1$, ${\widehat{\eta}}\equiv 1$ near 0 and
${\rm supp}\:\widehat{\eta}\subseteq [2]$.
For $t>0$, we write $\eta_{t}(x):=t^{-1}\eta(t^{-1}x)$ when $\KK={\mathbb R}$ and $\eta_t(x):= t^{-2}\eta(t^{-1}x)$
when $\KK = {\mathbb C}$. We will also need a Schwartz function
$\rho:\KK\to [0, \infty)$ such that
\begin{align*}
\ind{[1]\setminus [1-\delta^M]}(x)\le \rho(x)\le \ind{[1]}(x), \qquad x\in \KK
\end{align*}
for some large absolute constant $M\ge1$, which will be specified
later. We shall also write $\rho_{(t)}(x):=\rho(t^{-1}x)$ for $t>0$ and $x\in \KK$.
Let $N_0'\simeq N_0$ when $\KK = {\mathbb R}$ and $N_0' \simeq \sqrt{N_0}$ when $\KK = {\mathbb C}$.
Observe that \eqref{eq:63} implies that at least one of the following lower bounds holds:
\begin{gather}
\label{eq:86}
|\Lambda_{\mathcal P; N}(f_0,\varphi_{N_1}*f_1, \ldots, f_m)|\gtrsim\delta,\\
\label{eq:83}
|\Lambda_{\mathcal P; N}(f_0,\rho_{(N_0')}(f_1-\varphi_{N_1}*f_1), \ldots, f_m)|\gtrsim\delta,\\
\label{eq:84}
|\Lambda_{\mathcal P; N}(f_0,(1-\rho_{(N_0')})(f_1-\varphi_{N_1}*f_1), \ldots, f_m)|\gtrsim\delta.
\end{gather}
By Theorem \ref{thm:inverse} it is easy to see that \eqref{eq:86} yields
that
\begin{align*}
N_0^{-1}\big\| \varphi_{N_1}*f_1\big\|_{L^1(\KK)} \gtrsim \delta,
\end{align*}
which in turn will imply \eqref{eq:19-s} for $j=1$ provided that the remaining two alternatives \eqref{eq:83} and \eqref{eq:84} do not hold.
If this is the case, then \eqref{var-form} also holds when $\KK$ is archimedean.
If the second alternative holds we let $f_1':= \rho_{(N_0')}(f_1-\varphi_{N_1}*f_1)$ and then Theorem \ref{thm:inverse} implies that
\begin{align*}
N_0^{-1}\big\| \mu_{[N_1']}*f_1'\big\|_{L^1(\KK)} \gtrsim_{\mathcal P} \delta^{C_0'},
\end{align*}
with $N_1'\simeq \delta^{C_1'}N^{\deg(P_1)}$. By the Cauchy--Schwarz inequality (the support of $\mu_{[N_1']} *f_1'$ is contained
in a fixed dilate of $[N_0]$), we have
\begin{align*}
N_0^{-1}\big\| \mu_{[N_1']}*f_1'\big\|_{L^2(\KK)}^2 \gtrsim_{\mathcal P} \delta^{2C_0'}.
\end{align*}
Let $N_1'':= \delta^{A+C_1'}N^{\deg(P_1)}/A$ for some $A\ge1$ to be determined later. We now show that
\begin{align}\label{L1-norm}
\big\| \mu_{[N_1']}-\mu_{[N_1']}*\eta_{N_1''}\big\|_{L^1(\KK)}^2
\lesssim \sqrt{N_1'' /N_1'}
\lesssim \sqrt{\delta^{A}/A}.
\end{align}
We note that for $|x| \ge C N_1'$ we have $\ind{[N_1']}(x)=0$, so that
\begin{align*}
|\ind{[N_1']}(x)-\ind{[N_1']}*\eta_{N_1''}(x)|=\Big|\int_{\KK}\ind{[N_1']}(x-y)\eta_{N_1''}(y)d\mu(y)\Big|,
\end{align*}
and so, by the rapid decay of $\eta$,
\begin{align*}
\int_{|x|\ge CN_1'} |\ind{[N_1']}(x)-\ind{[N_1']}*\eta_{N_1''}(x)| d\mu(x) \lesssim N_1''.
\end{align*}
When $|x| \le C N_1'$, we use the Cauchy--Schwarz inequality
\begin{align*}
\int_{|x|\le CN_1'} |\ind{[N_1']}(x)-\ind{[N_1']}*\eta_{N_1''}(x)| d\mu(x) \lesssim \sqrt{N_1'} \,
\|\ind{[N_1']} * (\delta_0 - \eta_{N_1''})\|_{L^2(\KK)}
\end{align*}
and then Plancherel's theorem,
\begin{align*}
\|\ind{[N_1']} * (\delta_0 - \eta_{N_1''})\|_{L^2(\KK)}^2 = \int_\KK |1 - {\widehat{\eta_{N_1''}}}(\xi)|^2 |\widehat{\ind{[N_1']}}(\xi)|^2 d\mu(\xi)
\lesssim \sqrt{N_1' N_1''}.
\end{align*}
Here we use the fact that $\widehat{\eta} \equiv 1$ near $0$, together with the Fourier decay bounds for euclidean balls,
$$
|\widehat{\ind{[N_1']}}(\xi)|^2 \lesssim |\xi|^{-2} \ {\rm when} \ \KK = {\mathbb R} \ \ {\rm and} \ \
|\widehat{\ind{[N_1']}}(\xi)|^2 \lesssim \sqrt{N_1'} |\xi|^{-3} \ {\rm when} \ \KK = {\mathbb C}.
$$
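To illustrate the last display when $\KK = {\mathbb R}$: the factor $1-\widehat{\eta}(N_1''\xi)$ vanishes for $|\xi|\le c/N_1''$ (for a constant $c>0$ depending only on $\eta$), and since $N_1''\le N_1'$ the decay bound above gives
\begin{align*}
\int_{|\xi|\ge c/N_1''} |\widehat{\ind{[N_1']}}(\xi)|^2\, d\mu(\xi)
\lesssim \int_{|\xi|\ge c/N_1''} |\xi|^{-2}\, d\mu(\xi)
\lesssim N_1'' \le \sqrt{N_1' N_1''}.
\end{align*}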
This establishes \eqref{L1-norm} and so
\begin{align*}
N_0^{-1}\big\| (\mu_{[N_1']}-\mu_{[N_1']}*\eta_{N_1''})*f_1'\big\|_{L^2(\KK)}^2&\lesssim
\big\| \mu_{[N_1']}-\mu_{[N_1']}*\eta_{N_1''}\big\|_{L^1(\KK)}^2\\
&\lesssim \sqrt{N_1'' /N_1'}
\lesssim \sqrt{\delta^{A}/A}.
\end{align*}
Consequently
\begin{align*}
\delta^{2C_0'}\lesssim_{\mathcal P}N_0^{-1}\big\| \mu_{[N_1']}*f_1'\big\|_{L^2(\KK)}^2\lesssim
N_0^{-1}\big\| \mu_{[N_1']}*\eta_{N_1''}*f_1'\big\|_{L^2(\KK)}^2+ \sqrt{\delta^{A}/A},
\end{align*}
which for sufficiently large $A\ge C_0'$ yields
\begin{align*}
N_0^{-1}\big\| \eta_{N_1''}*f_1'\big\|_{L^2(\KK)}^2\gtrsim_{\mathcal P} \delta^{2C_0'}.
\end{align*}
Taking $N_1:=\frac{1}{2}N_1''$ and using support properties of
$\widehat{\varphi}$ and $\widehat{\eta}$, by the Plancherel theorem we
may write (when $\KK = {\mathbb R}$)
\begin{gather*}
N_0^{-1}\big\| \eta_{N_1''}*f_1'\big\|_{L^2({\mathbb R})}^2
=N_0^{-1}\big\| \widehat{\eta}_{N_1''}\big(\widehat{\rho_{(N_0')}}*((1-\widehat{\varphi}_{N_1})\widehat{f}_1)\big)\big\|_{L^2(\RR)}^2\\
\lesssim N_0^{-1}\int_{\RR}\bigg(\int_{\RR}\frac{N_0'}{(1+N_0'|\xi-\zeta|)^{200}}|\widehat{f}_1(\zeta)(1-\widehat{\varphi}(N_1\zeta))||\widehat{\eta}(N_1''\xi)|d\mu(\zeta)\bigg)^2d\mu(\xi)\\
\lesssim N_0^{-1}\delta^{100(A+C_1')}\|f_1\|_{L^2(\RR)}^2.
\end{gather*}
A similar bound holds when $\KK = {\mathbb C}$. Therefore
\begin{align*}
\delta^{2C_0'}\lesssim_{\mathcal P} N_0^{-1}\big\| \eta_{N_1''}*f_1'\big\|_{L^2(\KK)}^2\lesssim \delta^{100(A+C_1')},
\end{align*}
which is impossible if $A\ge1$ is large enough. Thus the second
alternative \eqref{eq:83} is impossible. To see that the third
alternative \eqref{eq:84} is also impossible observe that
\begin{align*}
\delta\lesssim
|\Lambda_{\mathcal P; N}(f_0,(1-\rho_{(N_0')})(f_1-\varphi_{N_1}*f_1), \ldots, f_m)|\lesssim
N_0^{-1}\int_{[N_0']}(1-\rho_{(N_0')})(x)d\mu(x)\lesssim \delta^M,
\end{align*}
which is also impossible if $M\ge1$ is sufficiently large. Hence \eqref{eq:86} must hold and
we are done.
\paragraph{{\bf Step 2.}}
Let $M\ge1$ be a large constant to be determined later, and define
$N'\simeq \delta^MN$ and $N_0' \simeq \delta^M N_0$. The main idea is to partition the intervals $[N]$ and $[N_0]$ into
$K\simeq\delta^{-M}$ disjoint intervals of measure $\simeq N'$ and $\simeq N_0'$, respectively. Such partitions are straightforward
when $\KK = {\mathbb R}$. When $\KK$ is non-archimedean, we only need to partition $[N]$ and not $[N_0]$. Finally when
$\KK = {\mathbb C}$, intervals are discs and it is not possible to partition a disc into subdiscs and so we will need to be careful
with this technical issue.
We first concentrate on the case when $\KK$ is non-archimedean; the required partition of $[N]$ was given in the proof of Theorem \ref{thm:inversem}.
In fact, choosing $\ell \gg 1$ such that $q^{-\ell} \simeq \delta^M$ and setting $N = q^n$ so that $N' = q^{n-\ell}$, we have
\begin{align*}
[N] \ = \ B_{q^{n}}(0) \ = \ \bigcup_{y \in {\mathcal F}} B_{q^{n -\ell}}(y),
\end{align*}
which gives a partition of $[N]$ where ${\mathcal F} = \{ y = \sum_{j=0}^{\ell-1} y_j \pi^{-n + j} : y_j \in o_\KK/m_\KK \}$.
Note $\# {\mathcal F} = q^{\ell}$ so that $\# {\mathcal F} \simeq \delta^{-M}$.
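The digit bookkeeping behind this partition can be checked with a short script (a schematic model with hypothetical parameters $q=3$, $\ell=2$; elements of $B_{q^n}(0)$ are represented by truncated digit strings, so membership in a sub-ball is determined by the leading $\ell$ digits):

```python
from itertools import product

# Schematic model of the partition B_{q^n}(0) = union of balls B_{q^{n-l}}(y),
# y in F, for a non-archimedean field with residue field of size q.
# An element y = sum_j y_j pi^{-n+j} lies in the sub-ball determined by its
# leading l digits (y_0, ..., y_{l-1}).  Hypothetical parameters: q = 3, l = 2.

q, l = 3, 2

# The set F of sub-ball centres: one per choice of leading digits.
F = list(product(range(q), repeat=l))

def centre_of(y_digits):
    """Return the centre (leading l digits) of the sub-ball containing an
    element with the given digit expansion."""
    return tuple(y_digits[:l])

# Truncate expansions to 5 digits; each element lies in exactly one sub-ball.
elements = list(product(range(q), repeat=5))
assert len(F) == q ** l                      # |F| = q^l sub-balls
counts = {c: 0 for c in F}
for y in elements:
    counts[centre_of(y)] += 1
# Disjoint cover: each sub-ball receives the same number of elements.
assert all(v == q ** (5 - l) for v in counts.values())
print("partition of B_{q^n}(0) into", q ** l, "disjoint sub-balls verified")
```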
Hence $\Lambda_{\mathcal P; N}(f_0,\varphi_{N_1}*f_1, \ldots, f_m) =$
$$
\frac{1}{N_0 N} \sum_{y_0 \in {\mathcal F}} \int_{\KK} \int_{B_{q^{n-\ell}}(y_0)}
f_0(x) \varphi_{N_1} * f_1 (x - P_1(y)) \prod_{i=2}^m f_i(x - P_i(y)) d\mu(y) d\mu(x).
$$
We observe that $\varphi_{N_1} * f_1 (x - P_1(y)) = \varphi_{N_1}* f_1(x - P_1(y_0))$ for any $y \in B_{q^{n-\ell}}(y_0)$
by the non-archimedean nature of $\KK$, if $M$ is chosen large enough depending on $P_1$. Hence, by the pigeonhole principle, we can find a
$y_0 \in {\mathcal F}$ such that
$$
\Bigl| \frac{1}{N_0 N'} \int_{\KK} \int_{B_{q^{n-\ell}}(y_0)}
f_0(x) \varphi_{N_1} * f_1 (x - P_1(y_0)) \prod_{i=2}^m f_i(x - P_i(y)) d\mu(y) d\mu(x) \Bigr| \gtrsim \delta.
$$
Changing variables $y \to y_0 + y$ allows us to write the above as
$$
|\Lambda_{{\mathcal P}', N'}(f_0', f_2',\ldots, f_m')| \ \gtrsim \ \delta \ \ \ {\rm where}
$$
$$
\Lambda_{{\mathcal P}', N'}(f_0', f_2',\ldots, f_m') \ = \frac{1}{N_0} \iint_{\KK^2} f_0'(x) \prod_{j=2}^m f_j'(x - P_j'(y)) d\mu_{[N']}(y) d\mu(x),
$$
with $P_j'(y) = \ P_j(y_0 + y) - P_j(y_0)$, $f_0'(x) \ = \ f_0(x) \varphi_{N_1}*f_1(x - P_1(y_0))$ and
$f_j'(x) = f_j (x + P_j(y_0))$.
Note that each $f_j'$ is supported in a fixed dilate of $I$. In order to apply Theorem \ref{thm:inverse}, we require
$N' \simeq \delta^M N \ge 1$ and here is where the condition $N \gtrsim \delta^{-O_{\mathcal P}(1)}$ is needed.
Therefore Theorem \ref{thm:inverse} implies that
$$
N_0^{-1} \|\mu_{[N_2]} * f_2 \|_{L^1(\KK)} \ = \ N_0^{-1} \|\mu_{[N_2]} * f_2' \|_{L^1(\KK)} \ \gtrsim \ \delta^{O(1)}.
$$
The equality of $L^1$ norms follows from the change of variables $x \to x + P_2(y_0)$.
This completes the proof of \eqref{eq:19} for $j=2$ when $\KK$ is non-archimedean since $\mu_{[N_2]} = \varphi_{N_2}$.
We now turn to the archimedean case, when $\KK = {\mathbb R}$ or $\KK={\mathbb C}$. Here we argue
as in Step 1. and establish the version of \eqref{var-form} for the function $f_2$. More precisely, writing
$$
\Lambda_{\mathcal P; N}(f_0, \ldots, f_m) =
\Lambda_{\mathcal P;N}(f_0, f_1, \varphi_{N_2} * f_2, \ldots, f_m) + \Lambda_{\mathcal P;N}(f_0, f_1, f_2 - \varphi_{N_2}*f_2, \ldots, f_m),
$$
the argument in Step 1. shows that \eqref{eq:63} implies
\begin{align}\label{var-form-2}
|\Lambda_{\mathcal P;N}(f_0, f_1, \varphi_{N_2} * f_2, \ldots, f_m)| \ \gtrsim \ \delta.
\end{align}
This inequality allows us to reduce matters to showing that \eqref{eq:63} implies
$N_0^{-1}\|\mu_{[N_2]}*f_2\|_{L^1(\KK)} \gtrsim \delta^{O(1)}$ since then \eqref{var-form-2} would
imply
$$
\delta^{O(1)} \lesssim N_0^{-1} \|\mu_{[N_2]}*\varphi_{N_2}*f_2\|_{L^1(\KK)} \le N_0^{-1} \|\varphi_{N_2}*f_2\|_{L^1(\KK)},
$$
establishing \eqref{eq:19-s} for $j=2$.
We give the details when $\KK = {\mathbb C}$ since there are additional
technical difficulties alluded to above. The case ${\mathbb R}$ is easier. Given a large, general interval ${\mathcal I}$ in
${\mathbb C}$ (that is, ${\mathcal I}$ is a disc with large radius $R$), we can clearly find a mesh of $K\simeq \delta^{-M}$ disjoint squares
$(S_k)_{k\in\bra{K}}$ of side length $\delta^{M/2} R$ which sit inside ${\mathcal I}$ such that
$\mu({\mathcal I}\setminus \bigcup_{k\in\bra{K}} S_k) \lesssim \delta^2 R^2$. We fix such a mesh of squares $(S_k)_{k\in\bra{K}}$ for $[N]$
and a mesh of squares $(T_j)_{j\in\bra{J}}$ for $[N_0]$ so that
$$
\Lambda_{\mathcal P, N}(f_0, \varphi_{N_1} * f_1, \ldots, f_m) \ =
$$
$$
\frac{1}{N_0 N}
\sum_{j\in\bra{J}} \sum_{k\in\bra{K}} \int_{T_j} \int_{S_k} f_0(x) \varphi_{N_1}*f_1(x - P_1(y)) \prod_{i=2}^m f_i(x - P_i(y)) d\mu(y) d\mu(x)
\ + \ O(\delta^2).
$$
Since $|\Lambda_{\mathcal P; N}(f_0, \varphi_{N_1}*f_1, \ldots, f_m)| \gtrsim \delta$ by \eqref{var-form} and since
the number of terms in each sum above is about $\delta^{-M}$, the pigeonhole principle gives us a square $T_0$ in $[N_0]$
and a square $S_0$ in $[N]$ such that
$$
\Bigl| \frac{1}{N_0' N'} \int_{T_0} \int_{S_0} f_0(x) \varphi_{N_1}*f_1(x - P_1(y)) \prod_{i=2}^m f_i(x - P_i(y)) d\mu(y) d\mu(x) \Bigr|
\gtrsim \delta.
$$
Write $[N']_{sq} = \{ z \in {\mathbb C}: |z |_{\infty} \le \sqrt{N'}\}$ where $|z|_{\infty} = \max(|x|, |y|)$
for $z = x+iy$. Hence $S_0 = y_0 + [N']_{sq}$ for some $y_0 \in [N]$. For $z \in S_0$, we have $z = y_0 + y$ for some
$y\in [N']_{sq}$ and so by the
mean value theorem and the 1-boundedness of $f_1$,
\begin{align*}
&|\varphi_{N_1}* f_1 (x - P_1(y_0 + y)) - \varphi_{N_1}*f_1(x - P_1(y_0))| \\
&\le \ \sqrt{\frac{(N')^{\deg{P_1}}}{N_1}}
\int_{{\mathbb C}} \|(\nabla \varphi)_{N_1}(z)\| d\mu(z) \ \lesssim_{\varphi} \ \delta^{(M\deg{P_1} - C_1)/2}
\end{align*}
where $N_1 = \delta^{C_1} N^{\deg{P_1}}$. Ensuring that $M \deg{P_1} - C_1 \ge 4$, we see that
$$
\Bigl| \frac{1}{N_0' N'} \int_{T_0} \int_{[N']_{sq}} f_0(x) \varphi_{N_1}*f_1(x - P_1(y_0)) \prod_{i=2}^m f_i^t(x - P_i'(y)) d\mu(y) d\mu(x) \Bigr|
\gtrsim \delta,
$$
where $P_i'(y) = P_i(y_0 + y) - P_i(y_0)$ and $f_i^t(x) = f_i(x + P_i(y_0))$. For an appropriate interval
$I'$ containing $T_0$ with measure $\simeq N_0'$, we can write the above inequality as
$|\Lambda_{{\mathcal P}'; N'}(f_0', f_2', \ldots, f_m')| \gtrsim \delta$ where ${\mathcal P}' = \{P_2',\ldots, P_m'\}$,
$f_0'(x) = f_0(x) \varphi_{N_1}*f_1(x-P_1(y_0)) \ind{T_0}(x)$ and $f_i'(x) = f_i^t(x) \ind{I'}(x)$ for $i\in\bra{m}\setminus\bra{1}$.
Here
$$
\Lambda_{{\mathcal P}'; N'}(f_0',\ldots, f_m') = \frac{1}{N_0'} \iint_{{\mathbb C}^2} f_0'(x) \prod_{i=2}^m f_i'(x-P_i'(y)) d\mu_{[N']_{sq}}(y)
d\mu(x).
$$
Again, in order to apply Theorem \ref{thm:inverse}, we need $N' = \delta^M N \ge 1$ which holds
provided $N \gtrsim \delta^{-O_{\mathcal P}(1)}$. Therefore
by Theorem \ref{thm:inverse} (see the remark following the statement of Theorem \ref{thm:inverse}), we conclude that
\begin{align*}
(N_0')^{-1}\big\| \mu_{[N_2]_{sq}}*f_2'\big\|_{L^1({\mathbb C})} \gtrsim_{\mathcal P} \delta^{O(1)}
\end{align*}
for some $N_2\simeq \delta^{C_2+M\deg(P_2)}N^{\deg(P_2)}$.
The function $\mu_{[N_2]_{sq}}*f_2'$ is supported on an interval
$I''\supseteq I'$ such that $\mu(I''\setminus I') \lesssim N_2$. Furthermore we can find an
interval $I'''\subseteq I'$ so that $\mu(I'\setminus I''') \lesssim N_2$ and for
$x\in I'''$, we have $\ind{I'}(x-u) = 1$ for all $u \in [N_2]_{sq}$. Hence
$$
\delta^{O(1)} \lesssim \frac{1}{N_0'} \int_{I'''} \Bigl| \int_{\mathbb C} f_2(x+P_2(y_0) - u) d\mu_{[N_2]_{sq}}(u) \Bigr| d\mu(x)
\ + \ O(N_2 (N_0')^{-1})
$$
where $N_2/N_0' \lesssim \delta^{M(\deg{P_2} - 1)}$ and $\deg{P_2} -1 \ge 1$. Hence, for $M\gg 1$ sufficiently
large, we conclude that
\begin{align}\label{sq-ineq}
\delta^{O(1)} \lesssim \frac{1}{N_0'} \int_{I'''} \Bigl| \int_{\mathbb C} f_2(x+P_2(y_0) - u) d\mu_{[N_2]_{sq}}(u) \Bigr| d\mu(x)
\lesssim N_0^{-1} \|\mu_{[N_2]_{sq}} * f_2 \|_{L^1({\mathbb C})}.
\end{align}
In the final inequality, we promoted the integration in $x$ to all of ${\mathbb C}$ and changed variables $x \to x + P_2(y_0)$. Hence we have shown that \eqref{eq:63} implies $N_0^{-1}\|\mu_{[N_2]_{sq}}*f_2\|_{L^1({\mathbb C})} \gtrsim \delta^{O(1)}$.
Since \eqref{eq:63} holds with $f_2$ replaced by $\varphi_{N_2}*f_2$ (this is \eqref{var-form-2}), we see that
$$
\delta^{O(1)} \lesssim N_0^{-1} \|\mu_{[N_2]_{sq}}*\varphi_{N_2}*f_2\|_{L^1({\mathbb C})} \le N_0^{-1}
\|\varphi_{N_2}*f_2\|_{L^1({\mathbb C})},
$$
establishing \eqref{eq:19-s} for $j=2$. Now we can proceed inductively and obtain \eqref{eq:19-s} for all $j\in\bra{m}$.
\end{proof}
\subsection{Multilinear functions and their duals} Recall the multilinear form
$$
\Lambda_{\mathcal P; N}(f_0, f_1, \ldots, f_m) = \frac{1}{N_0} \iint_{\KK^2} f_0(x) \prod_{i=1}^m f_i(x - P_i(y)) d\mu_{[N]}(y) d\mu(x).
$$
We define the multilinear function
\begin{align*}
A_N^{\mathcal P}(f_1,\ldots, f_m)(x):=\int_{\KK}\prod_{i=1}^mf_{i}(x-P_i(y))d\mu_{[N]}(y)
\end{align*}
so that $\Lambda_{\mathcal P; N}$
can be written as a pairing of $A_N^{\mathcal P}$ with $f_0$,
\begin{align*}
\langle A_N^{\mathcal P}(f_1,\ldots, f_m), f_0\rangle = N_0 \, \Lambda_{\mathcal P; N}(f_0, f_1,\ldots, f_m)
\end{align*}
where $\langle f, g\rangle = \int_\KK f(x) g(x) d\mu(x)$.
By duality we have
\begin{align*}
\langle A_N^{\mathcal P}(f_1,\ldots, f_m), f_0\rangle=
\langle (A_N^{\mathcal P})^{*j}(f_1,\ldots, f_{j-1}, f_0, f_{j+1}, \ldots, f_m), f_j\rangle,
\end{align*}
where
\begin{align*}
(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m) (x) :=
\int_{\KK}\prod_{\substack{i=1\\i\neq j}}^mf_{i}(x-P_i(y)+P_j(y))f_0(x+P_j(y))d\mu_{[N]}(y).
\end{align*}
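Concretely, the duality identity follows by substituting $x\mapsto x+P_j(y)$ for each fixed $y$:
\begin{align*}
\langle A_N^{\mathcal P}(f_1,\ldots, f_m), f_0\rangle
&=\int_{\KK}\int_{\KK} f_0(x)\prod_{i=1}^m f_i(x-P_i(y))\,d\mu_{[N]}(y)\,d\mu(x)\\
&=\int_{\KK}\int_{\KK} f_j(x)\,f_0(x+P_j(y))\prod_{\substack{i=1\\ i\neq j}}^m f_i(x-P_i(y)+P_j(y))\,d\mu_{[N]}(y)\,d\mu(x),
\end{align*}
and performing the $y$-integration first produces the pairing of $(A_N^{\mathcal P})^{*j}$ with $f_j$.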
\begin{lemma}[Application of Hahn--Banach]\label{hahn} Let $A,B > 0$, let $I\subset \KK$ be an interval
and let $G$ be an element of $L^2(I)$. Let $\Phi$ be a family of
vectors in $L^2(I)$, and assume the following inverse theorem:
whenever $f \in L^2(I)$ is such that $\|f\|_{L^\infty(I)} \leq 1$ and
$|\langle f, G \rangle| > A$, then $|\langle f, \phi \rangle| > B$ for
some $\phi \in \Phi$. Then $G$ lies in the closed convex hull of
\begin{equation}\label{dip}
V= \{ \lambda\phi\in L^2(I): \phi \in \Phi, \ |\lambda| \leq A/B\}
\cup \{ h \in L^2(I): \|h\|_{L^1(I)} \leq A \}.
\end{equation}
\end{lemma}
\begin{proof} By way of contradiction, suppose that $G$ does not lie in
$W = \overline{{\rm conv} V}^{\|\cdot\|_{L^2(I)}}$. From the Hahn--Banach theorem, we can
find a continuous linear functional $\Lambda$ on $L^2(I)$ which separates $G$ from $W$; that is,
there is a $C \in {\mathbb R}$ such that $\Rea\Lambda(h) \le C < \Rea \Lambda(G)$ for all $h \in W$.
Rescaling $\Lambda$ allows us to change the constant $C$, so we may choose $\Lambda$ such that $C=A$, where $A$ is the constant in the statement of the lemma.
Since $W$ is balanced, we see that $|\Lambda(h)| \le A < \Rea \Lambda(G)$ for all $h \in W$.
By the Riesz representation theorem, there is an $f \in L^2(I)$ which represents $\Lambda$ so that
$|\langle f, h\rangle|\le A < \Rea \langle f, G \rangle$ for all $h\in V$. This implies that
\[
|\langle f, \phi\rangle| \leq B
\]
for all $\phi \in \Phi$, and that
\[
\|f\|_{L^\infty(I)}=\sup_{\|h\|_{L^1(I)}\le1} |\langle f, h\rangle|\le 1,
\]
contradicting the hypothesis of the lemma. This completes the
proof of the lemma.
\end{proof}
\begin{cor}[Structure of dual functions]\label{struct}
Let $N\ge 1$ be a scale, $m\in\Z_+$ and $0<\delta\le 1$ be given. Let $\mathcal P:=\{P_1,\ldots, P_m\}$
be a collection of polynomials such that
$1\le \deg{P_1}<\ldots<\deg{P_m}$. Let
$f_0, f_1,\ldots, f_m\in L^0(\KK)$ be $1$-bounded functions supported
on an interval of measure $N_0 = N^{\deg(P_m)}$. Then for every $j\in\bra{m}$, provided $N\gtrsim \delta^{-O_{\mathcal P}(1)}$, there exists a decomposition
\begin{equation}\label{decomp}
(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)(x) = H_j(x) + E_j(x)
\end{equation}
where $H_j \in L^2(\KK)$ has Fourier transform supported in $[(N_j)^{-1}]$
where $N_j \simeq \delta^{C_j} N^{\deg{P_j}}$ and $C_j$ is as in Theorem \ref{thm:inverse-s},
and obeys the bounds
\begin{equation}\label{faq-bound}
\|H_j\|_{L^\infty(\KK)} \lesssim_m 1,
\quad\text{ and }\quad
\|H_j\|_{L^1(\KK)} \lesssim_m N_0.
\end{equation}
The error term $E_j\in L^1(\KK)$ obeys the bound
\begin{equation}\label{e-bound}
\|E_j\|_{L^1(\KK)} \leq \delta N_0.
\end{equation}
\end{cor}
\begin{proof}
Fix $j\in\bra{m}$, let
$I_0:=\supp{(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)}$,
and recall that $N_0= N^{\deg(P_m)}$. By translation invariance
we may assume $\supp{f_i}\subseteq [N_0]$ for all $i\in\{0\}\cup\bra{m}$, and that $I_0=[O(N_0)]$. If there exists
$f\in L^{\infty}(I_0)$ with $\|f\|_{L^{\infty}(I_0)}\le 1$ such that
\begin{align}
\label{eq:85}
|\langle f, (A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m) \rangle|> \delta N_0,
\end{align}
then proceeding as in the proof of Theorem \ref{thm:inverse-s} we may conclude that
\begin{align*}
|\langle \varphi_{N_j}*f, (A_N^{\mathcal P})^{*j}(\varphi_{N_1}*f_1,\ldots, \varphi_{N_{j-1}}* f_{j-1}, f_0, f_{j+1}, \ldots, f_m) \rangle|\ge c_m \, \delta N_0,
\end{align*}
where $N_i\simeq \delta^{C_i} N^{\deg(P_i)}$ for $i\in\bra{j}$.
This implies that there exists a 1-bounded $F\in L^2(\KK)$ with $\|F\|_{L^1(\KK)} \le N_0$
such that $\supp{\widehat{F}}\subseteq [N_j^{-1}]$ and
\begin{align}
\label{eq:87}
|\langle f, F \rangle|\ge c_m \, \delta N_0.
\end{align}
In fact, we can take
$$
F(x) = {\tilde{\varphi}}_{N_j}*(A_N^{\mathcal P})^{*j}(\varphi_{N_1}*f_1,\ldots, \varphi_{N_{j-1}}* f_{j-1}, f_0, f_{j+1}, \ldots, f_m)(x)
$$
where ${\tilde{\varphi}}(x) = \varphi(-x)$.
Let $\Psi$ denote the collection of all 1-bounded $F\in L^2(\KK)$ with $\supp{\widehat{F}}\subseteq [N_j^{-1}]$
and $\|F\|_{L^1(\KK)}\le N_0$.
Invoking Lemma \ref{hahn} with $A=\delta N_0/4$ and $B = c_m \delta N_0$ and the set
$\Phi = \{ F \ind{I_0} : F \in \Psi\}$,
we obtain a decomposition
\begin{align}
\label{eq:88}
(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_{j-1}, f_0, f_{j+1}, \ldots, f_m) = \sum_{l=1}^{\infty}c_l\phi_l+E(1)+E(2),
\end{align}
with the following properties:
\begin{itemize}
\item[(i)] for each $l\in\Z_+$ we have that $\phi_l=\lambda_lF_l \ind{I_0}$, $F_l \in \Psi$ and $\lambda_l\in\C$ such that $|\lambda_l|\lesssim_m 1$;
\item[(ii)] the coefficients $c_l$ are non-negative with
$\sum_{l=1}^{\infty}c_l\le 1$, and all but finitely many $c_l$ vanish;
\item[(iii)] the error term $E(1)\in L^1(I_0)$ satisfies $\|E(1)\|_{L^1(I_0)} \leq \delta N_0/2$;
\item[(iv)] the error term $E(2)\in L^2(I_0)$ satisfies $\|E(2)\|_{L^2(I_0)} \leq \delta$.
\end{itemize}
The latter error term arises as a consequence of the fact that
one is working with the closed convex hull instead of the convex hull. In fact, its $L^2(I_0)$ norm can be made arbitrarily small, but
$\delta$ will suffice for our purposes.
Grouping together terms in the decomposition \eqref{eq:88}, we have
\[
(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_{j-1}, f_0, f_{j+1}, \ldots, f_m)
= H_j' + E_j'
\]
where
\begin{align*}
H_j' = \bigl[\sum_{l=1}^{\infty} c_l \lambda_l F_l\bigr] \ind{I_0} \quad \text{satisfies} \quad
\|H_j'\|_{L^1(\KK)} \le \sum_{l=1}^{\infty} c_l |\lambda_l| \|F_l\|_{L^1(\KK)} \lesssim_m N_0 \ \ \text{and}
\end{align*}
\begin{align*}
\|H_j'\|_{L^{\infty}(\KK)} \le \sup_{l\in\NN} \|F_l\|_{L^{\infty}(\KK)} \sum_{l=1}^{\infty} c_l |\lambda_l| \lesssim_m 1.
\end{align*}
Also $E_j' = E(1) + E(2)$ satisfies $\|E_j'\|_{L^1(I_0)} \le \delta N_0$ by (iii) and (iv) above since by the Cauchy--Schwarz inequality,
we have $\|E(2)\|_{L^1(I_0)} \leq \delta N_0^{1/2}$.
We note that the function $F(x) = \sum_{l=1}^{\infty} c_l \lambda_l F_l (x)$ is Fourier supported in the interval $[N_j^{-1}]$.
When $\KK$ is non-archimedean, ${\rm supp}({\widehat{\ind{I_0}}}) \subseteq [N_0^{-1}]$ and so the Fourier transform
of $H_j'$ is supported in $[N_j^{-1}]$.
This verifies \eqref{faq-bound} in this case and completes the proof when $\KK$ is non-archimedean since
the decomposition $H_j' + E_j'$ of $(A_N^{\mathcal P})^{*j}$ satisfies \eqref{faq-bound} and \eqref{e-bound}.
Now suppose $\KK$ is archimedean. Let $\psi$ be a Schwartz function such that $\int_{\KK}\psi(x)d\mu(x)=1$ and
$\supp{\widehat{\psi}}\subseteq [2]$.
Let $M\simeq \delta^{O(1)} N_0$ and as usual, set $\psi_M(x) = M^{-1} \psi(M^{-1}x)$ when $\KK = {\mathbb R}$
and $\psi_M(x) = M^{-1}\psi(M^{-1/2} x)$ when $\KK = {\mathbb C}$. From the proof of \eqref{L1-norm}, we have
\begin{align}\label{L1-ind}
\|\ind{I_0}-\ind{I_0}*\psi_M\|_{L^1(\KK)} \lesssim M^{1/4} {N_0}^{3/4}.
\end{align}
We set $H_j(x) = F(x) \ind{I_0} * \psi_M (x)$ and $E_j = E(1) + E(2) + (\ind{I_0} - \ind{I_0}*\psi_M) F$ so that
$$
(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_{j-1}, f_0, f_{j+1}, \ldots, f_m) (x)
= H_j(x) + E_j (x).
$$
From \eqref{L1-ind}, we see that $E_j$ satisfies \eqref{e-bound}.
The properties $\|H_j\|_{L^\infty(\KK)} \lesssim_m 1$ and $\|H_j\|_{L^1(\KK)} \lesssim_m N_0$ are
still preserved.
Moreover, $\supp{\widehat{H}_j}\subseteq [O(N_j^{-1})]$, since
\[
\widehat{H}_j=(\widehat{\ind{I_0}}\,\widehat{\psi}_M)*\widehat{F}.
\]
This shows that \eqref{faq-bound} holds for $H_j$, completing the proof of the corollary.
\end{proof}
We will combine Corollary \ref{struct} and the following $L^p$ improving bound for polynomial averages
to establish the key Sobolev inequality.
\begin{lemma}[$L^p$-improving for polynomial averages]\label{poly-improving} Let
$Q \in \KK[{\rm y}]$ with $\deg(Q) = d$ and let $N\gg_Q 1$ be a large scale. Consider the averaging operator
\[
M_N^{Q}g(x):=\int_\KK g(x-Q(y))d\mu_{[N]}(y).
\]
For any parameters
$1 < r < s< \infty$ satisfying $1/s = 1/r - 1/d$, the following inequality holds:
\begin{align}
\label{eq:90}
\|M_N^{Q}g\|_{L^s(\KK)}\lesssim_Q N^{d(\frac{1}{s}-\frac{1}{r})}\|g\|_{L^r(\KK)}\quad \text{ for } \quad g\in L^r(\KK).
\end{align}
\end{lemma}
\begin{proof} As our bounds are allowed to depend on $Q$, we may assume that $Q$ is monic.
Let $\alpha\in \KK$ be such that $|\alpha| = N$ and change variables $y \to \alpha y$ to write
$$
M_N^{Q} g(x) \ = \int_{B_1(0)} g(x - Q(\alpha y)) \, d\mu(y) = \int_{B_1(0)} g_{\alpha}(\alpha^{-d} x - Q_{\alpha}(y)) \, d\mu(y)
$$
where $g_{\alpha}(x) = g(\alpha^d x)$ and
$Q_{\alpha}(y) = \alpha^{-d} Q(\alpha y) = y^d + \alpha^{-1} a_{d-1} y^{d-1} + \ldots+\alpha^{-d}a_0$. Hence the right-hand side
above can be written as $M^1_{Q_{\alpha}} g_{\alpha}( \alpha^{-d} x)$. Since $\|g_{\alpha}\|_{L^r(\KK)} = N^{-d/r} \|g\|_{L^r(\KK)}$,
we see that matters are reduced to proving \eqref{eq:90} for $N=1$ and $Q = Q_{\alpha}$ with uniform bounds in $\alpha$.
The mapping $y \to Q_{\alpha}(y)$ is $d$-to-$1$ and we can use a
generalised change of variables formula to see that
\begin{align*}
|M^1_{Q_{\alpha}} g(x)| \lesssim \int_{|s|\le 2} |g(x -s)| |s|^{-(d-1)/d} d\mu(s)
\end{align*}
when $N\gg_{Q} 1$. Hence $M^1_{Q_{\alpha}}$ is controlled by fractional integration, uniformly in $\alpha$.
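Heuristically, the fractional-integration kernel arises because $s=Q_{\alpha}(y)$ behaves like $y^d$ on the unit ball: away from the (finitely many) critical points one has
\begin{align*}
d\mu(s)\simeq |Q_{\alpha}'(y)|\,d\mu(y) \qquad \text{with} \qquad |Q_{\alpha}'(y)|\simeq |y|^{d-1}\simeq |s|^{(d-1)/d},
\end{align*}
so each of the at most $d$ branches of $y\mapsto Q_{\alpha}(y)$ contributes the weight $|s|^{-(d-1)/d}$ to the integral in $s$.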
When $\KK$ is archimedean, such a change of variables formula is well-known. Recall that when $\KK = {\mathbb C}$, $|s| = s {\overline{s}}$ is the square of the usual absolute value.
When $\KK = {\mathbb Q}_p$ is the $p$-adic field, such a formula is given
in \cite{Evans}. The argument in \cite{Evans} generalises to arbitrary non-archimedean fields (when the characteristic, if positive,
is larger than $d$). Alternatively one can use a construction in \cite{W-igusa}, valid in any local field
for any polynomial $Q$ such that $Q'(x)$ does not vanish mod $m_\KK$ for any nonzero $x$ (we need the condition on the
characteristic of the field for this), in which the unit group $U = \bigcup_{j\in\bra{J}} U_j$ is partitioned into $J={\rm gcd}(d, q-1)$ open sets and analytic isomorphisms $\phi_j :D_j \to \phi_j(D_j)$ are constructed such that $y = \phi_j(x)$
precisely when $Q(y) = x$. For us, $Q_{\alpha}'(x) \not= 0$ mod $m_\KK$ for any nonzero $x$
if $|\alpha| = N \gg_Q 1$ is sufficiently large.
By the Hardy--Littlewood--Sobolev
inequality (easily seen to be valid over general locally compact topological fields), we have
$$
\|M^1_{Q_{\alpha}} g \|_{L^s(\KK)} \lesssim \|g\|_{L^r(\KK)},
$$
uniformly in $\alpha$ whenever $1/s = 1/r - 1/d$, completing the proof of the lemma.
\end{proof}
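As a quick numerical sanity check of the exponent bookkeeping in \eqref{eq:90} (a schematic verification; for instance, the pair $(r,s)=(2d/(d+2),\,2)$ is admissible whenever $d\ge 3$):

```python
from fractions import Fraction

# Sanity check of the exponent relations in the L^p-improving lemma:
# with 1/s = 1/r - 1/d, the scaling factor N^{d(1/s - 1/r)} equals N^{-1}.
# The pair (r, s) = (2d/(d+2), 2) is one admissible choice when d >= 3.
for d in range(3, 10):                       # illustrative range of degrees
    r = Fraction(2 * d, d + 2)
    s = Fraction(2)
    assert 1 / s == 1 / r - Fraction(1, d)   # the Sobolev relation
    assert 1 < r < s                         # required ordering 1 < r < s
    assert d * (1 / s - 1 / r) == -1         # scaling exponent is -1
print("exponent relations verified for d = 3, ..., 9")
```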
We now come to the proof of Theorem \ref{sobolev-informal}.
As in the set up for Theorem \ref{thm:inverse-s}, we fix a smooth function $\varphi$ with
compact Fourier support. When $\KK$ is archimedean, let $\varphi$ be a
Schwartz function on $\KK$ so that
\begin{align*}
\ind{[1]}(\xi)\le \widehat{\varphi}(\xi)\le \ind{[2]}(\xi), \qquad \xi\in \KK.
\end{align*}
When $\KK={\mathbb R}$, we set $\varphi_N(x) = N^{-1} \varphi(N^{-1} x)$ for any
$N>0$ and when $\KK = {\mathbb C}$, we set
$\varphi_{N}(z) = N^{-1} \varphi(N^{-1/2} z)$ for any $N > 0$. When
$\KK$ is non-archimedean, we set $\varphi(x) = \ind{B_1(0)}(x)$ so that $\widehat{\varphi}(\xi) = \ind{B_1(0)}(\xi)$ and
we set $\varphi_{N}(x) = N^{-1} \ind{[N]}(x)$ for any scale $N$.
We restate Theorem \ref{sobolev-informal} in a more formal, precise way.
\begin{theorem}[A Sobolev inequality for $A_N^{\mathcal P}$]
\label{sobolev}
Let
$\mathcal P:=\{P_1,\ldots, P_m\}$ be a collection of polynomials such
that $1\le \deg{P_1}<\ldots<\deg{P_m}$. Let $N \gg_{\mathcal P} 1$ be a scale, $m\in\Z_+$ and $0<\delta\le 1$ be given.
Let $1<p_1,\ldots, p_m<\infty$ satisfying
$\frac{1}{p_1}+\ldots+\frac{1}{p_m}=1$ be given. Suppose $N\gtrsim \delta^{-O_{\mathcal P}(1)}$. Then
for all $f_1\in L^{p_1}(\KK),\ldots, f_m\in L^{p_m}(\KK)$ we have
\begin{align}
\label{eq:97}
\|A_N^{\mathcal P}(f_1,\ldots,f_{j-1}, (\delta_0-\varphi_{N_j})*f_j,f_{j+1},\ldots, f_m)\|_{L^1(\KK)}
\lesssim \delta^{1/8}
\prod_{i=1}^{m}
\|f_i\|_{L^{p_i}(\KK)},
\end{align}
where $N_j \simeq \delta^{C_j} N^{\deg{P_j}}$ and $C_j$ is the parameter from Theorem \ref{thm:inverse-s}. Here
$\widehat{\delta_0} \equiv 1$.
\end{theorem}
\paragraph{\bf Remark} The proof of Theorem \ref{sobolev} (and its statement) implicitly assumes that $m\ge 2$ but there is a version when $m=1$, which
will be given in Section \ref{appendix} where it is needed.
\begin{proof}
We fix $j\in\bra{m-1}$ and recall $N_j\simeq \delta^{O(1)}N^{\deg(P_j)}$. We first prove that for all functions $f_1,\ldots, f_{j-1}, f_{j+1},\ldots, f_{m-1}\in L^{\infty}(\KK)$ and $f_j, f_m\in L^2(\KK)$, we have
\begin{gather}
\label{eq:94}
\begin{split}
\|A_N^{\mathcal P}(f_1,\ldots,f_{j-1}, (\delta_0-\varphi_{N_j})*f_j,f_{j+1},\ldots, f_m)\|_{L^1(\KK)}\\
\lesssim \delta^{1/8}
\bigg(\prod_{\substack{i=1\\i\neq j}}^{m-1}
\|f_i\|_{L^\infty(\KK)}\bigg)\|f_j\|_{L^2(\KK)}\|f_m\|_{L^2(\KK)}.\qquad
\end{split}
\end{gather}
Choose $f_0\in L^{\infty}(\KK)$ so that $\|f_0\|_{L^{\infty}(\KK)}=1$ and
\begin{align*}
\|A_N^{\mathcal P}(f_1,&\ldots,f_{j-1}, (\delta_0-\varphi_{N_j})*f_j,f_{j+1},\ldots, f_m)\|_{L^1(\KK)}\\
&\simeq |\langle A_N^{\mathcal P}(f_1,\ldots,f_{j-1}, (\delta_0-\varphi_{N_j})*f_j,f_{j+1},\ldots, f_m), f_0\rangle|\\
&=|\langle (\delta_0-\varphi_{N_j})*(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m), f_j\rangle|.
\end{align*}
By the Cauchy--Schwarz inequality it will suffice to prove
\begin{gather}
\label{eq:96}
\begin{split}
\|(\delta_0-\varphi_{N_j})*(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)\|_{L^2(\KK)}\\
\lesssim \delta^{1/8}
\|f_0\|_{L^\infty(\KK)}\bigg(\prod_{\substack{i=1\\i\neq j}}^{m-1}
\|f_i\|_{L^\infty(\KK)}\bigg)\|f_m\|_{L^2(\KK)}.
\end{split}
\end{gather}
By multilinear interpolation, the bounds \eqref{eq:94} imply \eqref{eq:97} and so the proof of
Theorem \ref{sobolev} is reduced to establishing \eqref{eq:96}
which will be divided into three steps. In the first two steps, we will assume that $f_m$ is supported
in some interval of measure $N_0$ where $N_0 \simeq N^{\deg(P_m)}$.
\paragraph{\bf Step 1.} In this step, we will establish the bound
\begin{gather}
\label{eq:91}
\begin{split}
\|(\delta_0-\varphi_{N_j})*(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)\|_{L^2(\KK)}\quad \\
\lesssim \delta^{1/2}N_0^{1/2}
\|f_0\|_{L^\infty(\KK)}\bigg(\prod_{\substack{i=1\\i\neq j}}^{m-1}
\|f_i\|_{L^\infty(\KK)}\bigg)\|f_m\|_{L^\infty(\KK)}
\end{split}
\end{gather}
under the assumption that $f_m$ is supported in an interval of measure $N_0$ (when $\KK = {\mathbb C}$, this implies
in particular that $f_m$ is supported in a square with measure about $N_0$, which in Step 3. will be a helpful observation).
When $f_m$ has this support condition,
$$
(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m) = (A_N^{\mathcal P})^{*j}(f_1',\ldots, f_0', \ldots, f_m')
$$
where $f_i'(x) = f_i(x) \ind{I_0}(x)$ for some interval $I_0$ of measure $O(N_0)$. To prove \eqref{eq:91}, it suffices
to assume $\|f_i\|_{L^{\infty}(\KK)} = 1$ for $i=0,1,\ldots, j-1, j+1,\ldots, m$ and so \eqref{eq:91} takes the form
\begin{align}
\label{eq:89}
\|(\delta_0-\varphi_{N_j})*(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)\|_{L^2(\KK)}\lesssim \delta^{1/2}N_0^{1/2}.
\end{align}
We apply the decomposition \eqref{decomp} to $(A_N^{\mathcal P})^{*j}(f_1',\ldots, f_0', \ldots, f_m')$ to write
$$
(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m) (x) = H_j(x) + E_j(x)
$$
where $H_j$ satisfies \eqref{faq-bound} and $E_j$ satisfies \eqref{e-bound}.
Using the fact that $\supp{\widehat{H}_j}\subseteq [(N_j)^{-1}]$, we conclude
that $(\delta_0-\varphi_{N_j})*H_j=0$. Thus
\[
(\delta_0-\varphi_{N_j})*(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)=(\delta_0-\varphi_{N_j})*E_j.
\]
From \eqref{e-bound} and the 1-boundedness of $(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)$, we have
\begin{align*}
\|(\delta_0 -\varphi_{N_j})*E_j\|_{L^1(\KK)}\lesssim \delta N_0,
\quad \text{ and } \quad
\|(\delta_0 -\varphi_{N_j})*E_j\|_{L^\infty(\KK)}\lesssim 1,
\end{align*}
respectively. Since $\|h\|_{L^2(\KK)}^2\le \|h\|_{L^1(\KK)}\|h\|_{L^{\infty}(\KK)}$, it follows that
\begin{align*}
\|(\delta_0 -\varphi_{N_j})*E_j\|_{L^2(\KK)}\lesssim \delta^{1/2} N_0^{1/2},
\end{align*}
establishing \eqref{eq:89} and hence \eqref{eq:91}. This completes Step 1.
\paragraph{\bf Step 2.}
We continue with our assumption that $f_m$ is supported in an interval of measure $N_0$ but
now we relax the $L^\infty(\KK)$ control on $f_m$ to $L^2(\KK)$
control and show that
\begin{gather}
\label{eq:92}
\begin{split}
\|(\delta_0-\varphi_{N_j})*(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)\|_{L^2(\KK)}\\
\lesssim \delta^{1/4}\|f_0\|_{L^\infty(\KK)}\bigg(\prod_{\substack{i=1\\i\neq j}}^{m-1}
\|f_i\|_{L^\infty(\KK)}\bigg)\|f_m\|_{L^2(\KK)}.
\end{split}
\end{gather}
The main tool for this will be the $L^p$-improving
estimate \eqref{eq:90} for the polynomial average $M_N^Q$.
We have a pointwise bound
\begin{align*}
|(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)(x)|\le M_N^{P_m-P_j}|f_m|(x),
\end{align*}
which combined with \eqref{eq:90} (for $Q=P_m-P_j$, $d = \deg(P_m)$, $s=2$ and $r=2d/(d+2)$) yields
\begin{gather}
\label{eq:93}
\begin{split}
\|(\delta_0-\varphi_{N_j})*(A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m)\|_{L^2(\KK)}\quad\\
\lesssim N_0^{- 1/d}\|f_0\|_{L^\infty(\KK)}\bigg(\prod_{\substack{i=1\\i\neq j}}^{m-1}
\|f_i\|_{L^\infty(\KK)}\bigg)\|f_m\|_{L^r(\KK)}.
\end{split}
\end{gather}
Interpolating \eqref{eq:91} and \eqref{eq:93} we obtain \eqref{eq:92} as desired.
\paragraph{\bf Step 3.} In this final step, we remove the support condition on $f_m$ and establish
\eqref{eq:96}. To prove \eqref{eq:96}, we may assume that $\|f_i\|_{L^{\infty}(\KK)} = 1$ for $i=0, 1, \ldots, j-1, j+1, \ldots, m-1$.
We split $f_m = \sum_{I\in\mathcal I} f_m \ind{I}$ where $I$ ranges over a partition $\mathcal I$ of $\KK$ into intervals $I$ of
measure $N_0$. We have seen this is possible when $\KK$ is non-archimedean or when $\KK={\mathbb R}$. This is not possible
when $\KK = {\mathbb C}$ but in this case, we can find a partition $\mathcal I$ of squares.
By Step 1. and Step 2., the local dual function
$D_I \coloneqq (A_N^{\mathcal P})^{*j}(f_1,\ldots, f_0, \ldots, f_m \ind{I})$ obeys the bound
\begin{equation}\label{anghi}
\| (\delta_0 - \varphi_{N_j})* D_I \|_{L^2(\KK)} \lesssim \delta^{1/4} \| f_m \|_{L^2(I)}
\end{equation}
for each interval $I$, and we wish to establish
\[
\Big\| \sum_{I\in\mathcal I} (\delta_0 - \varphi_{N_j})* D_I\Big\|_{L^2(\KK)} \lesssim\delta^{1/8} \| f_m \|_{L^2(\KK)}.
\]
We will square out the sum. To handle the off-diagonal terms, we observe that for finite intervals
$I, J\subset \KK$ (squares when $\KK = {\mathbb C}$) of measure $N_0$ and $M>0$ and $1\le p<\infty$, we have
\begin{align}
\label{eq:95}
\|\varphi_{N_j}*(f\ind{I})\|_{L^p(J)}\lesssim_{M, p} \big(1+N_0^{-1}{\rm dist}(I, J)\big)^{-M} \|f\|_{L^p(I)}.
\end{align}
By squaring and applying Schur's test, it suffices to obtain the decay bound
\[
\bigl| \langle (\delta_0 - \varphi_{N_j})* D_I, (\delta_0 - \varphi_{N_j})* D_J \rangle \bigr| \lesssim
\delta^{1/4} \big(1+N_0^{-1}\mathrm{dist}(I,J) \big)^{-2} \| f_m \|_{L^2(I)} \| f_m \|_{L^2(J)}
\]
for all intervals $I,J$ of measure $N_0$. By Cauchy--Schwarz and \eqref{anghi} we know
\[
\langle (\delta_0 - \varphi_{N_j})* D_I, (\delta_0 - \varphi_{N_j})* D_J \rangle \lesssim
\delta^{1/2} \| f_m \|_{L^2(I)} \| f_m \|_{L^2(J)}.
\]
On the other hand, $D_I$ is supported in an $O(N_0)$-neighborhood of $I$, and similarly for $D_J$. From \eqref{eq:95} and Cauchy--Schwarz, we thus have
\begin{align*}
\langle (\delta_0 - \varphi_{N_j})* D_I, (\delta_0 - \varphi_{N_j})* D_J \rangle &\lesssim
\big(1+N_0^{-1}\mathrm{dist}(I,J) \big)^{-10} \| D_I \|_{L^2(\KK)} \| D_J \|_{L^2(\KK)}\\
&\lesssim \big(1+N_0^{-1}\mathrm{dist}(I,J) \big)^{-10} \| f_m \|_{L^2(I)} \| f_m \|_{L^2(J)}.
\end{align*}
Taking the geometric mean of the two estimates, we obtain the claim in \eqref{eq:96}.
This completes the proof of Theorem \ref{sobolev}.
\end{proof}
\section{The implication Theorem \ref{sobolev-informal} $\Longrightarrow$ Theorem \ref{thm:main}}\label{appendix}
Here we give the details of Bourgain's argument in \cite{B} which allow us to pass from Theorem \ref{sobolev-informal}
to Theorem \ref{thm:main} on polynomial progressions. Let ${\mathcal P} = \{P_1, \ldots, P_m\}$ be a sequence of polynomials in $\KK[{\rm y}]$ with distinct degrees and
no constant terms. Without loss of generality, we may assume
$$
\deg{P_1} < \deg{P_2} < \cdots < \deg{P_m}
$$
and we set $d_{m-j} := \deg{P_j}$ and $d := d_0 = \deg{P_m}$ so that $d_{m-1} < \cdots < d_1 < d$.
Since the argument showing how Theorem \ref{sobolev-informal} implies Theorem \ref{thm:main} has been given
in \cite{B}, \cite{D+}, and \cite{CGL} in the euclidean setting (albeit for shorter polynomial progressions), we will only give the details
for non-archimedean fields $\KK$ where uniform notation can be employed.
We will proceed in several steps.
\paragraph{\bf Step 1} When $\KK$ is non-archimedean, the family $(Q_t)_{t>0}$ of convolution operators defined by
\[
Q_t f (x) \ = \ f * \mu_{[t]}(x) \ = \ \frac{1}{t} \int_{|u|\le t} f(x - u) d\mu(u) \quad \text{for scales} \quad t>0
\]
gives us a natural approximation of the identity and
forms the analogue of the Poisson semigroup in the non-archimedean setting. They also
give us Fourier localization since
\begin{align}\label{poisson-fourier}
{\widehat{Q_t f}}(\xi) = {\widehat{Q_t}}(\xi) {\widehat{f}}(\xi) = \ind{[t^{-1}]}(\xi) {\widehat{f}}(\xi).
\end{align}
We will need the
following bound for $(Q_t)_{t>0}$ (see Lemma 6 in \cite{B} or Lemma 2.1 in \cite{D+}): for $f \ge 0$ and
scales $0 < t_1, \ldots, t_m \le 1$,
\begin{align}\label{poisson}
\int_{B_1(0)} f(x) Q_{t_1} f(x) \cdots Q_{t_m} f(x) d\mu(x) \ \ge \ \Bigl( \int_{B_1(0)} f(x) d\mu(x) \Bigr)^{m+1}.
\end{align}
The proof in the euclidean setting given in \cite{D+} established \eqref{poisson} for general approximations of the identity;
the first step there is to show \eqref{poisson} for martingales $(E_k)_{k\in\N}$ defined with respect to dyadic intervals.
However, a small scale $t$ in a non-archimedean field $\KK$ is of the form $t = q^{-k}$ and
\[
Q_t f(x) = q^k \int_{|y|\le q^{-k}} f(x - y) d\mu(y) = \sum_{{\underline{x}}\in {\mathcal C}_k} A_{k,{\underline{x}}} f
\ \ind{B_{q^{-k}}}({\underline{x}}), \quad \text{ where }
\]
\[
{\mathcal C}_k \ = \ \{ {\underline{x}}=x_0 + x_1 \pi + \cdots + x_{k-1}\pi^{k-1} : x_j \in o_\KK/m_\KK \}
\quad \text{ and } \quad
A_{k,\underline{x}}f = q^{k} \int_{B_{q^{-k}}({\underline{x}})} f(u) d\mu(u).
\]
Hence $(Q_t)_{t>0}$
is a martingale with respect to the dyadic structure of non-archimedean fields and so the argument in \cite{D+}
extends without change to establish \eqref{poisson}.
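Although not needed for the proof, \eqref{poisson} is easy to test numerically in a finite model: on $q^K$ points representing the unit ball with the uniform probability measure, the operators $Q_{q^{-k}}$ are exactly the martingale averages over blocks of $q^{K-k}$ consecutive points. A minimal sketch (all parameters are illustrative):

```python
# Numerical sanity check of the inequality (poisson) in a toy q-adic model:
# the ball of radius q**-k around x is the block of q**(K-k) consecutive
# points sharing its first k digits, so Q_{q**-k} is a martingale average.
import numpy as np

q, K = 3, 4                       # residue field size and depth (illustrative)
n = q ** K                        # points in the unit ball, probability measure
rng = np.random.default_rng(0)

def Q(f, k):
    """Average f over balls of radius q**-k (blocks of length q**(K-k))."""
    block = q ** (K - k)
    g = f.reshape(-1, block).mean(axis=1)
    return np.repeat(g, block)

f = rng.random(n)                 # arbitrary f >= 0 on the ball
scales = [1, 3, 2]                # k-values for t_i = q**-k, in any order
lhs = np.mean(f * np.prod([Q(f, k) for k in scales], axis=0))
rhs = np.mean(f) ** (len(scales) + 1)
assert lhs >= rhs                 # the multilinear recurrence inequality
```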
\paragraph{\bf Step 2} Fix $\varepsilon>0$. Our goal is to find a $\delta(\varepsilon, {\mathcal P}) > 0$ and $N(\varepsilon, {\mathcal P}) \ge 1$ such
that for any scale $N \ge N(\varepsilon, {\mathcal P})$ and $f \in L^0(\KK)$ with $0\le f \le 1$ satisfying $\int_\KK f d\mu \ge \varepsilon N^d$,
we have
\begin{align}\label{mult-recurrence-function}
I := \frac{1}{N^d} \iint_{\KK^2} f(x) f(x+P_1(y)) \cdots f(x+P_m(y)) d\mu_{[N]}(y) d\mu(x) \ \ge \ \delta.
\end{align}
Taking $f = \ind{S}$ with $S\subseteq \KK$ in \eqref{mult-recurrence-function} implies \eqref{mult-recurrence}, the desired conclusion of Theorem \ref{thm:main}.
We may assume that $f$ is supported in the interval $[N^d]$.
Let $\alpha, \beta \in \KK$ satisfy $|\alpha| = N^d$ and $|\beta| = N$ and write
$$
I = \iint_{\KK^2} g(x) g(x + R_1(y)) \cdots g(x +R_m(y)) d\mu_{[1]}(y) d\mu(x),
$$
where $g(x) = f(\alpha x)$ and $R_j(y) = \alpha^{-1} P_j(\beta y)$. In particular, we have
$\int_\KK g \ge \varepsilon$. We note that $g$ is supported in $[1] = B_1(0)$. Fix three small scales $0<t_0 \ll t_1 \ll t \ll 1$
and decompose
\begin{align}\label{I-decomp}
t_1^{-1} I \ge \iint_{\KK^2} g(x) g(x + R_1(y)) \cdots g(x + R_m(y)) d\mu_{[t_1]}(y) d\mu(x) \ =: \ I_1 + I_2 + I_3,
\end{align}
where
$$
I_1 = \iint_{\KK^2} g(x) \prod_{j=1}^{m-1} g(x + R_j(y))\ Q_t g(x +R_m(y)) d\mu_{[t_1]}(y) d\mu(x),
$$
$$
I_2 = \iint_{\KK^2} g(x) \prod_{j=1}^{m-1} g(x + R_j(y))\ [Q_{t_0} - Q_t ] g(x +R_m(y)) d\mu_{[t_1]}(y) d\mu(x) \ \ {\rm and}
$$
$$
I_3 = \iint_{\KK^2} g(x) \prod_{j=1}^{m-1} g(x + R_j(y))\ [ {\rm Id} - Q_{t_0}] g(x +R_m(y)) d\mu_{[t_1]}(y) d\mu(x).
$$
For $I_1$, we note that for $t_1 \ll_{P_m} t$,
$$
Q_t g(x + R_m(y)) = \frac{1}{t} \int_{|u|\le t} g(x + R_m(y) - u) d\mu(u) = \frac{1}{t} \int_{|u|\le t} g(x-u) d\mu(u) =
Q_t g(x)
$$
whenever $|y| \le t_1$. For the final equality we made the change of variables $u \to u - R_m(y)$, noting that when $|y| \le t_1$, then
$|R_m(y)| \le C_{P_m} t_1 \le t$. Hence
\begin{align*}
I_1 = \iint_{\KK^2} g(x) \prod_{j=1}^{m-1} g(x + R_j(y))\ Q_t g(x)\, d\mu_{[t_1]}(y) d\mu(x).
\end{align*}
For $I_2$ we use the Cauchy--Schwarz inequality to see that
\begin{align}\label{I_2}
I_2 \ \le \ \| Q_{t_0} g - Q_t g \|_{L^2(\KK)}.
\end{align}
For $I_3$, we will use the more precise formulation of Theorem \ref{sobolev-informal} given in Theorem \ref{sobolev}.
We rescale $I_3$, moving from $g, R_j$ back to $f, P_j$ and write
\[
I_3 = \frac{1}{N^d} \iint_{\KK^2} f(x) \prod_{j=1}^{m-1} f(x + P_j(y))\ [{\rm Id} - Q_{t_0 N^d}] f (x + P_m(y)) d\mu_{[t_1N]}(y) d\mu(x),
\]
where the function $h(x) = [{\rm Id} - Q_{t_0N^d}] f(x)$ has the property that ${\widehat{h}}(\xi) = 0$
whenever $|\xi| \le (t_0 N^d)^{-1}$, see \eqref{poisson-fourier}. Hence
$$
I_3 \le N^{-d} \|A^{\mathcal P}_{t_1N}(f,f, \ldots, f, [{\rm Id} - Q_{t_0N^d}] f) \|_{L^1(\KK)}
$$
and we will want to apply Theorem \ref{sobolev} to the expression on the right
with $N$ replaced by $t_1N$ and $0 < \delta \le 1$ defined by $\delta^{C_m} (N t_1)^d = N^d t_0$
or $\delta = (t_0/ t_1^d)^{1/C_m}$. In order to apply Theorem \ref{sobolev}, we will need to ensure
\begin{align}\label{N-delta}
N \ \ge \ t_1^{-1} (t_1^{d_{m-1}} / t_0)^{C'} \ge\ldots \ge \ t_1^{-1} (t_1^{d} / t_0)^{C'}
\end{align}
for some appropriate large $C' = C'_{\mathcal P}$. If \eqref{N-delta} holds, then Theorem \ref{sobolev} implies
there exists a constant $b = b_{\mathcal P} > 0$ such that
$$
\|A^{\mathcal P}_{t_1N}(f,f, \ldots, f, h)\|_{L^1(\KK)} \lesssim_{\mathcal P} \bigl(t_0/t_1^d\bigr)^b \prod_{i=1}^m \|f\|_{L^{p_i}(\KK)}
\le \bigl(t_0/t_1^d\bigr)^b N^d
$$
since $1/p_1 + \cdots + 1/p_m = 1$ and $\|f\|_{L^{p_i}(\KK)} \le N^{d/p_i}$ for $i\in\bra{m}$ (which follows since $f$ is 1-bounded and supported in $[N^d]$).
Hence
\begin{align*}
I_3 \lesssim_{\mathcal P} \bigl(t_0/t_1^d\bigr)^b \quad \text{ if } \quad\eqref{N-delta}\quad \text{holds}.
\end{align*}
\paragraph{\bf Step 3} Next we decompose $I_1 = I_1^1 + I_{2}^1 + I_3^1$, where
$$
I_1^1 \ = \ \iint_{\KK^2} g(x) \prod_{j=1}^{m-2} g(x + R_j(y))\ Q_{t/N^{d-d_1}} g(x + R_{m-1}(y)) Q_t g(x) \, d\mu_{[t_1]}(y) d\mu(x),
$$
$$
I_{2}^1 = \iint_{\KK^2} g(x) \prod_{j=1}^{m-2} g(x + R_j(y))\ [Q_{t_0/N^{d-d_1}} - Q_{t/N^{d-d_1}} ] g(x +R_{m-1}(y))
Q_t g(x) d\mu_{[t_1]}(y) d\mu(x) \ \ {\rm and}
$$
$$
I_{3}^1 = \iint_{\KK^2} g(x) \prod_{j=1}^{m-2} g(x + R_j(y))\ [ {\rm Id} - Q_{t_0/N^{d-d_1}}] g(x +R_{m-1}(y)) Q_t g(x) d\mu_{[t_1]}(y) d\mu(x).
$$
For $I_1^1$, we set $s = t/N^{d-d_1}$ and note that for $t_1 \ll_{\mathcal P} t$,
$$
Q_{s} g(x + R_{m-1}(y))
= \frac{1}{s} \int_{|u|\le s} g(x + R_{m-1}(y) - u) d\mu(u)
= \frac{1}{s} \int_{|u|\le s} g(x-u) d\mu(u) =
Q_s g(x)
$$
whenever $|y| \le t_1$. For the final equality we made the change of variables $u \to u - R_{m-1}(y)$, noting that when $|y| \le t_1$, then
$|R_{m-1}(y)| \le C_{P_{m-1}} N^{-(d-d_1)}t_1 \le s$ since $ t_1 \ll_{\mathcal P} t$. Hence
\begin{align*}
I_1^1 = \iint_{\KK^2} g(x) \prod_{j=1}^{m-2} g(x + R_j(y))\ Q_{t/N^{d-d_1}} g(x) Q_t g(x) \, d\mu_{[t_1]}(y) d\mu(x).
\end{align*}
As in \eqref{I_2}, we have
$$
I_{2}^1 \le \|Q_{t_0/N^{d-d_1}} g - Q_{t/N^{d-d_1}} g \|_{L^2(\KK)}.
$$
For $I_{3}^1$, we will use Theorem \ref{sobolev}.
We rescale $I_{3}^1$, moving from $g, R_j$ back to $f, P_j$ and write
$$
I_{3}^1 = \frac{1}{N^d} \iint_{\KK^2} f(x) \prod_{j=1}^{m-2} f(x + P_j(y))\ [{\rm Id} - Q_{t_0 N^{d_1}}] f (x + P_{m-1}(y))
Q_{t N^d} f(x) d\mu_{[t_1N]}(y) d\mu(x)
$$
where the function $h'(x) = [{\rm Id} - Q_{t_0 N^{d_1}}] f(x)$ has the property that ${\widehat{h'}}(\xi) = 0$
whenever $|\xi| \le (t_0 N^{d_1})^{-1}$. Hence for ${\mathcal P'} = \{P_1, \ldots, P_{m-1}\}$,
\begin{align*}
I_{3}^1 \le N^{-d} \|A^{\mathcal P'}_{ t_1 N}(fQ_{t N^d} f,f, \ldots, f, [{\rm Id} - Q_{t_0 N^{d_1}}] f) \|_{L^1(\KK)}
\end{align*}
and so, as long as \eqref{N-delta} holds, Theorem \ref{sobolev} implies there exists a constant
$b' = b_{\mathcal P'} > 0$ such that
$$
\|A^{\mathcal P'}_{t_1N}(fQ_{t N^d} f,f, \ldots, f, h')\|_{L^1(\KK)} \lesssim_{\mathcal P'} \bigl(t_0/t_1^d\bigr)^{b'} \prod_{i=1}^{m-1} \|f\|_{L^{p_i}(\KK)}
\le \bigl(t_0/t_1^d\bigr)^{b'} N^d
$$
since $1/p_1 + \cdots + 1/p_{m-1} = 1$ and $\|f\|_{L^{p_i}(\KK)} \le N^{d/p_i}$ for $i\in\bra{m-1}$ (which follows since $f$ is 1-bounded and supported in $[N^d]$).
Hence
\begin{align*}
I_{3}^1 \lesssim_{\mathcal P'} \bigl(t_0/t_1^d\bigr)^{b'} \quad \text{ if } \quad \eqref{N-delta} \quad \text{holds}.
\end{align*}
\paragraph{\bf Step 4} We iterate, decomposing $I_1^1 = I_{1}^2 + I_{2}^2 + I_{3}^2$, followed by decomposing
$I_1^2 = I_1^3 + I_2^3 + I_3^3$ and so on. For each $0\le j \le m-1$, we have
\begin{align}
\label{I_1^j}
&I_1^j = \iint_{\KK^2} g(x) \Big(\prod_{i=1}^{m-j-1} g(x+ R_i(y))\Big) \, \Big(\prod_{i=0}^jQ_{t/N^{d-d_i}} g(x)\Big)\, d\mu_{[t_1]}(y) d\mu(x),\\
\label{I_3^j}
&I_{2}^j \le \|Q_{t_0/N^{d-d_j}} g - Q_{t/N^{d-d_j}} g \|_{L^2(\KK)} \quad \text{and} \quad
I_{3}^j \lesssim_{\mathcal P} \bigl(t_0/t_1^d\bigr)^{b} \quad \text{for some} \quad b = b_{\mathcal P} > 0,
\end{align}
again if \eqref{N-delta} holds.
Strictly speaking, the estimate \eqref{I_3^j} for $I_3^j$ does not follow from Theorem \ref{sobolev} when $j=m-1$
since the proof of Theorem \ref{sobolev} assumed that the collection ${\mathcal P}$ of polynomials consisted
of at least two polynomials. Nevertheless the bound \eqref{I_3^j} holds when $j=m-1$. To see this, we apply
the Cauchy--Schwarz inequality and Plancherel's theorem to see that
\begin{gather*}
| I_3^{m-1} |^2 \ \le \ \frac{1}{N^{d}} \int_\KK \bigl| \int_\KK [{\rm Id} - Q_{t_0N^{d_{m-1}}}] f (x + P_1(y))
\, d\mu_{[t_1N ]}(y)\bigr|^2\, d\mu(x)\\
= \frac{1}{N^{d}} \int_{|\xi|\ge (N^{d_{m-1}} t_0)^{-1}} |{\widehat{f}}(\xi)|^2 |m_{N,t_1}(\xi)|^2\,
d\mu(\xi), \quad \text{ where } \quad m_{N, t_1}(\xi) := \int_{B_1(0)} {\rm e}(P_1(t_1N y)\xi)\, d\mu(y).
\end{gather*}
The oscillatory integral bound \eqref{osc-int-est} implies that $|m_{N,t_1}(\xi)| \lesssim_{\mathcal P} (t_0/t_1)^b$
whenever $|\xi| \ge (N^{d_{m-1}} t_0)^{-1}$ and so \eqref{I_3^j} for $I_3^j$ follows when $j=m-1$ since
$\|f\|_{L^2(\KK)}^2 \le N^d$.
\paragraph{\bf Step 5} From \eqref{I-decomp} and the iterated decomposition of $I_1$, we see that $t_1^{-1} I \ge A + B + C$, where
$$
A = \int_{\KK} g(x) \prod_{j=0}^{m-1}Q_{t/N^{d-d_j}} g(x) \, d\mu(x) \ \ge \ \varepsilon^{m+1}
$$
by \eqref{poisson}, and for some $C_{\mathcal P}>0$ we have
\[
|B| \le C_{\mathcal P} \sum_{j=0}^{m-1} \|Q_{t_0/N^{d-d_j}} g - Q_{t/N^{d-d_j}} g \|_{L^2(\KK)}
\quad \text{and} \quad |C| \ \le C_{\mathcal P} \
\bigl(t_0/t_1^d\bigr)^{b} \le \varepsilon^{m+1}/4
\]
if $t_0 \le c_0 \, \varepsilon^{(m+1)/b} \, t_1^d$ and $c_0^b C_{\mathcal P}<1/4$ and \eqref{N-delta} holds.
Finally we claim that we can find a triple $t_0 \ll t_1 \ll t$ of small scales such that $|B| \le \varepsilon^{m+1}/4$.
If we are able to do this, then $I \ge \varepsilon^{m+1} t_1/2$ and the proof is complete.
Define $v:=-C_0\log_q( c_0 \varepsilon^{(m+1)/b})$ for some large constant $C_0\gg d$.
Choose a sequence of small scales $t_0 = q^{-\ell_j}$ and $t_1 = q^{-k_j}$ and $t=q^{-u_j}$ satisfying
\begin{align}
\label{eq:4}
\begin{gathered}
0\le u_1< dk_1+v < \ell_1 <u_2< dk_2+v < \ell_2 < \ldots < u_n<dk_n+v<\ell_n<\ldots \\
\text{ and } \qquad\ell_{n+1}\le \ell_n-C_0\log_q( c_0 \varepsilon^{(m+1)/b}).
\end{gathered}
\end{align}
Taking $L\in\NN$ such that $L=\lfloor 16C_{\mathcal P}^2m^2\varepsilon^{-2(m+1)}\rfloor +1$ we claim that there exists $j\in\bra{L}$ such that
\begin{align}
\label{eq:3}
C_{\mathcal P}\sum_{n=0}^{m-1} \|Q_{q^{-\ell_j} N^{-(d-d_n)}} g - Q_{q^{-u_j }N^{-(d-d_n)}} g \|_{L^2(\KK)} <\varepsilon^{m+1}/4.
\end{align}
Indeed, suppose for a contradiction that \eqref{eq:3} does not hold. Then for all $j\in\bra{L}$ by the Cauchy--Schwarz inequality we have
$$
\varepsilon^{2(m+1)} \le 16C_{\mathcal P}^2m
\sum_{n=0}^{m-1} \|Q_{q^{-\ell_j} N^{-(d-d_n)}} g - Q_{q^{-u_j }N^{-(d-d_n)}} g \|_{L^2(\KK)}^2.
$$
Then
\begin{gather*}
L \varepsilon^{2(m+1)} \le 16C_{\mathcal P}^2m\sum_{j=1}^L
\sum_{n=0}^{m-1} \|Q_{q^{-\ell_j} N^{-(d-d_n)}} g - Q_{q^{-u_j} N^{-(d-d_n)}}
g \|_{L^2(\KK)}^2 \\
= 16C_{\mathcal P}^2m \sum_{n=0}^{m-1} \int_\KK |{\widehat{g}}(\xi)|^2 \sum_{j=1}^L \bigl| \ind{[q^{\ell_j} N^{d-d_n}]}(\xi) -
\ind{[q^{u_j} N^{d-d_n}]}(\xi) \bigr|^2 d\mu(\xi) \le 16C_{\mathcal P}^2m^2 \|g\|_{L^2(\KK)}^2
\end{gather*}
and this implies $L \le 16C_{\mathcal P}^2m^2\varepsilon^{-2(m+1)}$ since $\|g\|_{L^2(\KK)} \le 1$, which is impossible by our choice of $L$.
Therefore there exists $j\in\bra{L}$ and a corresponding triple of scales $t_0 = q^{-\ell_j}\ll t_1 = q^{-k_j}\ll t=q^{-u_j}$ satisfying the desired properties for which \eqref{eq:3} is true. In particular,
$|B| \le \varepsilon^{m+1}/4$ holds.
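The contradiction above uses only Plancherel and the fact that, thanks to \eqref{eq:4}, the annuli supporting the multiplier differences are pairwise disjoint in $j$, so for each $n$ their squared $L^2$ masses sum to at most $\|g\|_{L^2(\KK)}^2$; the minimum over $j$ is then at most the average. A toy \texttt{numpy} sketch of this orthogonality-plus-pigeonhole step (the signal and frequency bands are invented for illustration):

```python
# Toy version of the scale-selection pigeonhole in Step 5: the multipliers
# of Q_{t0} - Q_t live on disjoint frequency annuli as j varies, so by
# Plancherel the squared L2 norms summed over j stay within ||g||_2^2,
# and at least one j must give a small difference.
import numpy as np

rng = np.random.default_rng(1)
n, L = 1024, 32
g = rng.standard_normal(n)
ghat2 = np.abs(np.fft.fft(g, norm="ortho")) ** 2   # sums to ||g||_2^2

# L disjoint frequency bands standing in for the annuli indexed by j
bands = np.array_split(np.arange(n), L)
diffs = np.array([ghat2[b].sum() for b in bands])  # ||(Q_{t0}-Q_t)g||^2 analogue

total = diffs.sum()
assert total <= np.sum(g ** 2) + 1e-9              # orthogonality budget
assert diffs.min() <= total / L                    # pigeonhole: some j is small
```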
\paragraph{\bf Step 6} Furthermore, by \eqref{eq:4}, these scales satisfy $t_0= q^{-\ell_j} \gtrsim (c_0 \varepsilon^{m+1})^{O_{\mathcal P}(m^2 \varepsilon^{-2(m+1)})}$.
In order to ensure that \eqref{N-delta} holds for every iteration in the decomposition, we set
$$
N(\varepsilon, {\mathcal P}) \ := \ (c_0 \varepsilon^{m+1})^{- O_{\mathcal P}( m^2 \varepsilon^{-2(m+1)})}
$$
so that for every $N\ge N(\varepsilon, {\mathcal P})$ condition \eqref{N-delta} holds.
Hence
$$
I \gtrsim \varepsilon^{m+1} t_1 \gtrsim \varepsilon^{m+1} t_0 \gtrsim \varepsilon^{m+1}
(c_0\varepsilon^{m+1})^{O_{\mathcal P}(m^2 \varepsilon^{-2(m+1)})},
$$
establishing the desired bound \eqref{mult-recurrence-function} with $\delta = \varepsilon^{C_1 \varepsilon^{-2m-2}}$
for some $C_1>0$ depending only on ${\mathcal P}$.
This completes the proof of Theorem \ref{thm:main}.
\section{Introduction}
The advent of gravitational wave (GW) astronomy has brought a powerful tool to probe
new physics in regimes that have been inaccessible to previous experiments. Elusive,
weakly-coupled ultralight bosons beyond the Standard Model of particle physics
have been conjectured to solve various problems in high energy physics and
cosmology. However, terrestrial experiments require sufficiently strong coupling
to the Standard Model for a direct detection. Therefore, gravitational
signatures, which assume only that these ultralight particles gravitate, are
ideal for efficiently probing the weak-coupling parameter space inaccessible
to other observational efforts. Namely, black hole (BH) superradiance provides
a purely gravitational mechanism through which ultralight bosonic particles extract rotational
energy from spinning BHs with observable consequences.
Bosonic waves whose frequencies satisfy the superradiance condition are amplified
when scattering off a rotating BH
\cite{Starobinsky:1973aij,1971JETPL..14..180Z}, extracting rotational energy in
a type of Penrose process \cite{Penrose:1971uk}. If the underlying bosonic
particle is massive, there is an instability associated with superradiance, leading
to the formation of exponentially growing, oscillating bound
states---superradiant clouds---around the BH. Assuming self-interactions and couplings to other
matter are sufficiently weak, the instability saturates gravitationally as the
BH loses energy and angular momentum, and is spun down. At this point, the
system transitions from an exponentially growing phase to a phase characterized
by quasi-monochromatic GW emission which causes the cloud to
slowly dissipate. Therefore, the presence of the superradiant cloud leaves
observational signatures in the BH spin distribution and the GW emission.
This observational window allows us to probe various well-motivated extensions
to the Standard Model \cite{Arvanitaki:2009fg}. For scalar bosons, the QCD
axion (solving the strong CP-problem), axion dark matter (solving the dark
matter problem), and various quantum gravity motivated axion-like particles,
are ultralight candidates capable of forming superradiant clouds
\cite{Peccei:1977hh,Weinberg:1977ma,Wilczek:1977pj,Arvanitaki:2009fg,Essig:2013lka,Hui:2016ltb,Marsh:2015xka}.
The dark photon is a viable candidate to make up a significant fraction of the
dark matter, or could emerge in the low-energy limits of quantum gravity
\cite{Goodsell:2009xc,Jaeckel:2010ni}. Ultralight spin-2 fields are a possible
modification of general relativity \cite{Clifton:2011jh}. Hence, a wide variety of
models could be constrained or discovered via this observational window, addressing fundamental questions in particle physics, cosmology, and high
energy physics.
To leverage the observational potential of ground- and space-based GW
detectors, accurate predictions for the spin-down timescales involved, as well
as the GW frequencies and amplitudes, are required. Much effort has gone into
determining these for scalar bosons
\cite{Ternov:1978gq,Zouros:1979iw,Detweiler:1980uk,Dolan:2007mj,Arvanitaki_precision,Yoshino:2013ofa,Arvanitaki1,Arvanitaki2,Brito:2014wla,Yoshino:2015nsa},
vector bosons
\cite{Rosa:2011my,Pani:2012vp,Pani:2012bp,Cardoso:2018tly,Baryakhtar:2017ngi,East:2017mrj,East:2018glu,Baumann:2019eav,Siemonsen:2019ebd},
and spin-2 fields \cite{Brito:2013wya,Brito:2020lup} (see
Ref.~\cite{brito_review} for a review). Scalar bosons exhibit the longest
spin-down timescales, as well as weakest and longest GW signal after cloud
formation. Vector bosons, on the other hand, are amplified more efficiently,
leading to faster cloud growth rates and stronger, but correspondingly shorter, GW
emission. In modified theories of gravity, massive spin-2 fields grow the
fastest around BHs.
Using these results, various search strategies have been employed to constrain
parts of the ultralight boson parameter space. Electromagnetic spin
measurements of stellar mass and supermassive BHs
\cite{McClintock:2013vwa,Miller:2014aaa,Reynolds:2013qqa} have been used to
disfavor ultralight scalars
\cite{Arvanitaki1,Cardoso:2018tly,brito,Stott:2020gjj} and vectors
\cite{Baryakhtar:2017ngi} in certain mass ranges.
Similarly, measurements of
spins of the constituents of inspiraling binary BHs
\cite{Venumadhav:2019lyq,LIGOScientific:2018mvr,LIGOScientific:2018jsj} and BH population properties were
used in Refs.~\cite{Ng:2020ruv,Ng:2019jsx,Payne:2021ahy} to exclude a small scalar mass range. Stochastic
GW searches from a population of BH-cloud systems were used to
constrain the scalar \cite{brito,Tsukada:2019,brito_short} and vector
\cite{Tsukada:2020lgt} masses, while various directed and blind continuous GW
search techniques have led to constraints
\cite{Sun:2019mqb,allsky_sr_search,allsky_sr_method,Isi:2018pzk,CW_galactic,KAGRA:2022osp,KAGRA:2021tse,Dergachev:2019wqa}. The
presence of a cloud around a constituent BH within a binary could also affect the
inspiral dynamics, leaving observable signatures in the emitted GW
waveform~\cite{Hannuksela:2018izj,Baumann:2019eav,Baumann:2021fkf,Choudhary:2020pxy}.
The methods used to determine the observable consequences of superradiance for a
given BH of mass $M$ are classified by
their regime of validity for the dimensionless gravitational fine structure constant
\begin{align}
\alpha\approx 0.075\left(\frac{M}{10\ M_\odot}\right)\left(\frac{\mathcal{M}}{10^{-12}\ \text{eV}}\right),
\label{eq:alphadefinition}
\end{align}
where $\mathcal{M}$ is the mass of the ultralight particle. Analytic
techniques are most accurate for $\alpha\ll 1$, the regime where the boson
cloud is farther away from the BH and can be treated non-relativistically.
However, numerical approaches are required for systems with
$\alpha\sim\mathcal{O}(1)$, where the cloud sits close to the BH, and relativistic
effects are important. Analytic estimates have been pushed to high orders in an
expansion around small $\alpha$, while numerical techniques have been refined
to include large parts of the parameter space. Despite this progress, and the
significant impact of gravitational probes, most gravitational and
electromagnetic wave search campaigns for signatures of BH superradiance have
employed lower-order, potentially inaccurate, predictions, leaving the most
favorable parts of the parameter space unexplored.
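As a consistency check on the numerical coefficient in \eqref{eq:alphadefinition}, one can restore units and evaluate $\alpha = (GM/c^3)\,(\mathcal{M}/\hbar)$ directly; a short sketch with SI constants (values rounded, for illustration only):

```python
# Cross-check of the coefficient in the definition of alpha: in physical
# units alpha = (G M / c^3) * (boson rest energy / hbar).
G    = 6.674e-11          # m^3 kg^-1 s^-2
c    = 2.998e8            # m/s
hbar = 1.0546e-34         # J s
eV   = 1.602e-19          # J
Msun = 1.989e30           # kg

def alpha(M_in_Msun, boson_mass_eV):
    """Gravitational fine structure constant for a BH / boson mass pair."""
    M = M_in_Msun * Msun
    return (G * M / c**3) * (boson_mass_eV * eV / hbar)

a = alpha(10.0, 1e-12)    # reference point of the formula in the text
assert abs(a - 0.075) < 0.001
```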
Here, we introduce \texttt{SuperRad}, an open source BH superradiance waveform
model incorporating state-of-the-art theoretical predictions for BH spin-down
and GW observables across the entire relevant parameter space
in a simple, ready-to-use \texttt{python} package\footnote{Available at \url{www.bitbucket.org/weast/superrad} .}. A primary
goal is to provide a tool to efficiently and accurately interpret GW search
results of current and future ground- and space-based GW
observatories. As part of this work, we present new calculations of the
frequency evolution of the boson cloud oscillations and attendant GWs
due to the changing mass of the boson cloud. We compare the analytic
frequency evolution in the non-relativistic limit to both approximate
quasi-relativistic calculations, as well as fully general-relativistic ones, to
determine their accuracy in the relativistic regime.
As a first application, we use \texttt{SuperRad} to show that the Laser
Interferometer Space Antenna (LISA) should in principle be able to probe
ultralight boson masses from $1\times10^{-16}$ eV to $6\times 10^{-16}$ eV by
performing follow-up searches for GWs from boson clouds arising
around the remnants of massive BH binary mergers.
Such follow-up searches have
been previously discussed in the context of stellar mass BH
mergers~\cite{Ghosh:2018gaw,Isi:2018pzk,Chan:2022dkt}, and are especially promising because
the observation of the binary BH merger waveform gives definitive
information on the properties of the remnant BH, allowing one to place
constraints in the absence of a signal without further assumptions.
We begin in Sec.~\ref{sec:example} by providing a broad overview over the expected GW signals from BH superradiance of scalar and vector clouds. In Sec.~\ref{sec:cloudproperties}, we discuss in detail how \texttt{SuperRad} determines the cloud's oscillation frequency and the superradiance instability timescales. Furthermore, we analyze the frequency shift due to the finite self-gravity of the cloud around the BH in Sec.~\ref{sec:freqshift} using Newtonian, quasi-relativistic, and fully relativistic approaches. The GW amplitude and waveform is discussed in Sec.~\ref{sec:gw}. Following this, we outline the linear evolution of the cloud as well as the accompanying GW signature in Sec.~\ref{sec:cloudevolution}, and close with the application of \texttt{SuperRad} to analyze the prospects of follow-up searches with LISA in Sec.~\ref{sec:lisafollowup}. We use $G=c=1$ units throughout.
\section{Overview and Example} \label{sec:example}
We begin with an example to illustrate the expected GW signal
from superradiant clouds, and give an overview of the different effects that go into calculating it.
We consider parameters consistent with
the remnant from a GW150914-like binary BH merger. In particular, we
consider a BH with mass $M=62\ M_{\odot}$ and dimensionless spin\footnote{Defined by the ratio of angular momentum $J$ to the mass squared of the BH: $a_*=J/M^2$.} $a_*=0.67$ at a distance of 410
Mpc~\cite{LIGOScientific:2016vlm} and determine the resulting GW signal if
there were an ultralight boson---scalar or vector---with mass
$\mathcal{M}=3.6\times10^{-13}$ eV (hence $\alpha\approx0.17$). For
simplicity, here we assume the angular momentum points in the direction of the
observer---hence both GW polarizations are equal---and ignore redshift
effects. The GW strain and frequency calculated with \texttt{SuperRad} for
both the scalar boson case and the vector case are shown in
Fig.~\ref{fig:strain_freq}.
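At leading order in $\alpha$, the cloud oscillates at $\omega_R\approx\mu$ and the GWs at twice that frequency, so the signal frequency in this example can be estimated as $f_{\rm GW}\approx \mathcal{M}/(\pi\hbar)$; a rough sketch of this estimate (relativistic and self-gravity corrections computed by \texttt{SuperRad} shift this value slightly):

```python
# Leading-order estimate of the GW frequency for the example above:
# omega_R ~ mu, and the GWs come out at twice the cloud frequency.
import math

hbar = 1.0546e-34         # J s
eV   = 1.602e-19          # J

boson_mass_eV = 3.6e-13
omega = boson_mass_eV * eV / hbar   # cloud oscillation frequency, rad/s
f_gw = omega / math.pi              # f_GW = 2 * omega / (2 * pi)

assert 150 < f_gw < 200             # ~174 Hz, within the LIGO band
```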
\begin{figure}
\includegraphics[width=0.49\textwidth]{./strain_freq_sca.pdf}
\includegraphics[width=0.49\textwidth]{./strain_freq_vec.pdf}
\caption{
The GW strain $h$ and frequency $f_{\rm GW}$ as a function of time for a
BH with $M=62\ M_{\odot}$ and $a_*=0.67$ at a distance of
410 Mpc subject to the superradiant instability of a boson with
mass $3.6\times10^{-13}$ eV.
The top set of panels shows the scalar boson case,
while the bottom set shows the vector case. Note the difference
in timescales shown, since in the scalar (vector) case the cloud grows
on timescales of $\sim 5$ years (9 hours) and decays through GW radiation on timescales of
$\sim 9000$ years (1 day).
Time is measured since the BH was formed, assuming the cloud started as a single boson.
}
\label{fig:strain_freq}
\end{figure}
There are a number of different parts that go into these calculations. First,
one determines the superradiant instability timescale by solving for the
fastest growing mode of the massive scalar or vector equations of motion on the
BH spacetime as described in Sec.~\ref{sec:cloudproperties}. This gives the
timescale over which the boson cloud mass, and hence the GW
amplitude, grows exponentially in time. From Fig.~\ref{fig:strain_freq}, it
can be seen that the $e$-folding time of the cloud mass (half the
$e$-folding time of the field $\tau_I$) is much longer for the scalar case
($\tau_I/2\sim 10$ days) compared to the vector case ($\tau_I/2\sim 3$
minutes). Taking into account the resulting decrease in the mass and spin of
the BH as the boson cloud grows, as described in Sec.~\ref{sec:cloudevolution},
the instability timescale becomes longer and longer as the horizon frequency of
the BH approaches the oscillation frequency of the cloud. As the instability
saturates, and the cloud mass reaches its maximum value, the dissipation of the
cloud through gravitational radiation becomes dominant, leading to a slow
decrease in cloud mass. The rate at which energy is lost through gravitational
radiation $P_{\rm GW}$, as well as the two strain polarizations $h_+$ and
$h_\times$, are calculated by solving for linearized metric perturbations on
the BH spacetime, sourced by the oscillating cloud solution, as described in
Sec.~\ref{sec:gw}. As can be seen in Fig.~\ref{fig:strain_freq}, in
the scalar case the decay of GW amplitude is negligible on any reasonable
observing timescale, taking on the order of $10^4$ years, while in the vector
case, the cloud mass and GW amplitude decrease on timescales of
days.
The GW frequency shown in Fig.~\ref{fig:strain_freq} exhibits
an increase or ``chirp" in frequency, first as the BH loses mass and
the cloud grows exponentially, and then more slowly as the boson cloud dissipates.
Calculating this frequency shift
requires accounting for the self-gravity of the boson cloud, which slightly
red-shifts the oscillation frequency of the cloud, and hence the gravitational
waves (which have twice the frequency of the cloud oscillations), as described in
Sec.~\ref{sec:freqshift}. Though the change in frequency is small, because the
GW signal persists for many cycles, this is still an important
effect.
\section{Cloud properties} \label{sec:cloudproperties}
In this section, we outline the superradiant cloud properties relevant for
observational signatures such as BH spin-down or GW emission. This includes a
brief discussion of how estimates for the superradiant instability timescale
$\tau_I$ and the emitted GW frequency $f_{\rm GW}$ are obtained for different values of the BH mass,
spin, and the gravitational fine structure constant $\alpha$. We
defer the analysis of the dependency of the cloud frequency on cloud mass,
and the cloud dynamics to
Secs.~\ref{sec:freqshift} and \ref{sec:cloudevolution}, respectively.
\texttt{SuperRad} combines analytic and numerical predictions, valid for
$\alpha\ll 1$ and $\alpha\sim\mathcal{O}(1)$, and utilizes numerically
calibrated higher-order expansions to interpolate between the two regimes.
In most of the following calculations, we assume a fixed Kerr BH spacetime
$g_{\mu\nu}$, and consider scalar and vector bosonic fields, as well as linear
metric (GW) perturbations on this background. The exception to
this is the calculation of the frequency shift due to the self-gravity of the boson cloud.
We will discuss the validity of this
assumption further in Secs.~\ref{sec:gwpowerstrain} and \ref{sec:freqshift}.
Furthermore, we neglect field self-interactions and non-minimal couplings to
the Standard Model throughout, which have been investigated in
Refs.~\cite{Yoshino:2012kn,Fukuda:2019ewf,Baryakhtar:2020gao,East:2022ppo,East:2022rsi,Omiya:2022gwu}.
Depending on the coupling strength, these can alter the superradiance dynamics.
However, here we assume that we are in the weak coupling limit, which reduces
to the purely gravitational case. Therefore, the relevant field equations to solve in
order to obtain the desired observables are the scalar and vector massive wave
equations on the spacetime
$g_{\mu\nu}$, which are given by
\begin{align}
(\square_g -\mu_S^2)\Phi=0, & & \nabla_\mu F^{\mu\nu}=\mu^2_VA^\nu,
\label{eq:fieldeq}
\end{align}
where $\mathcal{M}_S=\hbar\mu_S$ and $\mathcal{M}_V=\hbar \mu_V$
are the scalar and vector boson masses, respectively. Due to various symmetries of the Kerr spacetime, solutions to the field equations \eqref{eq:fieldeq} can be written in the form
\begin{align}
A_\mu,\Phi\sim e^{-i(\omega t-m\varphi)},
\label{eq:fieldansatz}
\end{align}
where we introduced the azimuthal mode number $m$ and complex frequency $\omega$. Here, and in the following,
we refer to the Boyer-Lindquist time, radius, polar and azimuthal coordinate as
$t$, $r$, $\theta$, and $\varphi$. Without loss of generality, we assume the
azimuthal index to satisfy $m\geq 0$ throughout. Lastly, we label all
quantities defined both for scalar and vector fields with $\sigma\in\{S,V\}$.
The fields $A_\mu$ and $\Phi$ are susceptible to the superradiance instability,
if the superradiance condition
\begin{align}
0<\omega_R <m\Omega_H,
\end{align}
is satisfied, where $\Omega_H$ is the horizon frequency of the BH. In the
ansatz \eqref{eq:fieldansatz}, the frequency $\omega=\omega_R+i\omega_I$
encodes both the oscillation frequency of the cloud,
which is half of the characteristic GW frequency
$f_{\rm GW}=2\omega_R/(2\pi)$ (up to self-gravity corrections), and the instability growth timescale
$\tau_I=1/\omega_I$. For fixed mode number $m_\sigma$, these observables (in units
of $M$) depend
only on $\alpha$ and spin $a_*$, i.e., $\omega(\alpha,a_*)$.
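For the GW150914-like example of Sec.~\ref{sec:example}, the superradiance condition can be verified directly from the Kerr horizon frequency $\Omega_H = a_*/\bigl(2M(1+\sqrt{1-a_*^2})\bigr)$ together with the leading-order approximation $\omega_R\approx\mu$; a sketch in SI units (numbers are illustrative, not \texttt{SuperRad} output):

```python
# Check 0 < omega_R < m * Omega_H for M = 62 Msun, a_* = 0.67, and a
# 3.6e-13 eV boson, with the standard Kerr horizon frequency restored
# to SI units: Omega_H = a_* c^3 / (2 G M (1 + sqrt(1 - a_*^2))).
import math

G, c, hbar, eV, Msun = 6.674e-11, 2.998e8, 1.0546e-34, 1.602e-19, 1.989e30

M, a_star, m_mode = 62 * Msun, 0.67, 1
omega_R = 3.6e-13 * eV / hbar     # leading order: omega_R ~ mu, rad/s
Omega_H = a_star * c**3 / (2 * G * M * (1 + math.sqrt(1 - a_star**2)))

assert 0 < omega_R < m_mode * Omega_H   # the m = 1 mode is superradiant
```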
In what follows, we begin by outlining \texttt{SuperRad's} coverage of
the $(\alpha,a_*)$ parameter
space in Sec.~\ref{sec:parameterspace}, and then
we illustrate
how analytic and numerical results are used to
calibrate \texttt{SuperRad} in the intermediate regime
in Secs.~\ref{sec:frequency} and \ref{sec:timescale}.
\subsection{Parameter space} \label{sec:parameterspace}
In Fig.~\ref{fig:pspace},
we show the parameter space for the $m_V=1$ massive vector as an illustrative
example. For a given quantity $q(\alpha,a_*)\in \{\omega_R,\omega_I,\partial_t
\omega_R\}$ (in units of $M$), we numerically calculate its value in the
relativistic regime, but for computational reasons, do not extend our calculations
deep into the small-$\alpha$ regime. We want to match on to
analytic results $q_N$ that are valid only in the Newtonian limit, when $\alpha \ll 1$.
We do this by dividing the parameter space in $(\alpha,a_*)$ into two regions.
In the relativistic regime, labelled $\mathcal{D}_{\rm int}$,
we merely interpolate between the numerically computed points using the
interpolation polynomial $I_R(\alpha,a_*)$.
In the regime where $\alpha$ is smaller, labelled $\mathcal{D}_{\rm fit}$,
we use a subset of the numerical results in $\mathcal{D}_{\rm int}$ (corresponding to the red points in Fig.~\ref{fig:pspace})
and fit the difference between these results and the analytic ones in a way that is guaranteed to recover the latter
at sufficiently small $\alpha$. That is, we let
\begin{align}
q(\alpha,a_*)=\begin{cases}
q_N(\alpha,a_*)+g(\alpha,a_*), & (\alpha,a_*)\in \mathcal{D}_{\rm fit}, \\
I_R(\alpha,a_*), & (\alpha,a_*)\in \mathcal{D}_{\rm int}, \end{cases}
\label{eq:quantity}
\end{align}
where $g$ is a fitting function chosen such that $g(\alpha,a_*)\to 0$ as $\alpha \to 0$, so that the Newtonian result $q_N(\alpha,a_*)$ is recovered in this limit.
The specific choices of $\mathcal{D}_{\rm fit}$ and $\mathcal{D}_{\rm int}$ depend on the field
and azimuthal mode in question, and are determined by the accuracy of the
underlying methods (these are defined in App.~\ref{app:uncertainties}).
Note also that we are only interested in the part of the parameter
space where $\omega_R\leq m_{\sigma}\Omega_H$, since outside this range the
cloud will be exponentially decaying through absorption by the BH.
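The piecewise construction in \eqref{eq:quantity} can be sketched in a few lines of Python. The three ingredients below are toy stand-ins for $q_N$, $g$, and $I_R$, as is the boundary value \texttt{alpha\_split}; the actual choices in \texttt{SuperRad} are mode-dependent:

```python
# Toy stand-ins for the Newtonian expression q_N, the calibrated
# correction g (vanishing faster as alpha -> 0), and the interpolant I_R.
q_newtonian = lambda alpha, a_star: 1.0 - alpha**2 / 8.0
g_fit = lambda alpha, a_star: 0.1 * alpha**6
interp_relativistic = lambda alpha, a_star: 0.95

def q(alpha, a_star, alpha_split=0.2):
    """Piecewise evaluation of a cloud quantity q(alpha, a_star)."""
    if alpha < alpha_split:                    # (alpha, a_star) in D_fit
        return q_newtonian(alpha, a_star) + g_fit(alpha, a_star)
    return interp_relativistic(alpha, a_star)  # (alpha, a_star) in D_int
```

Because $g$ falls off faster than $q_N$ varies, the Newtonian result is recovered automatically as $\alpha\to 0$.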
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{./pspace.pdf}
\caption{The parameter space of the superradiant $m_V=1$ vector mode. It is
made up of the relativistic regime $\mathcal{D}_{\rm int}$, where \texttt{SuperRad} employs
interpolation functions based on the numerical data (labelled ND) to determine
a given quantity $q(\alpha,a_*)$, and a lower-$\alpha$ region $\mathcal{D}_{\rm fit}$,
where numerical calibration is necessary to augment the expressions valid in
the Newtonian limit $\alpha \to 0$ (indicated by a red line). For illustration
purposes, we show only $40^2$ of the $320^2$ data points used in
\texttt{SuperRad}. The gray dashed line marks the saturation point of the
superradiance instability, i.e., $\omega_R=\Omega_H(a_*)$. In this case, the
red data points are used for calibration in $\mathcal{D}_{\rm fit}$.}
\label{fig:pspace}
\end{figure}
In the relativistic part of the parameter space $\mathcal{D}_{\rm int}$, a set of $320^2$ waveforms is
generated for the azimuthal modes $m_\sigma=1$ and $2$, and for both the scalar
and the vector case. The grid of waveforms is uniformly spaced in the
coordinates $(y,a_*)$, with $y\in[0,1]$, defined by
\begin{align}
y=\frac{\alpha-\alpha_0}{\alpha_1-\alpha_0},
\end{align}
where $\alpha_0^{m_\sigma=1}=0.05$ and $\alpha_0^{m_\sigma=2}=0.25$,
while $\alpha_1$ is the solution to
\begin{align}
\beta \alpha_1\left[1-\frac{\alpha_1^2}{2n_\sigma^2}\right]=m_\sigma M\Omega_H(a_*),
\end{align}
with $\beta=0.9$, and $n_\sigma$ is the cloud's principal quantum number defined below in
\eqref{eq:principlequantumnumbers}.
This choice of $\alpha_1$ is made so as to guarantee that $y=1$ lies outside the superradiant
regime, and thus that the saturated state $\omega_R=m_{\sigma} \Omega_H$ lies within the grid. The boundary $y=1$ corresponds to the large-$\alpha$ boundary of the numerical data in \figurename{ \ref{fig:pspace}}, beyond the superradiant saturation.
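For a given spin, the endpoint $\alpha_1$ can be found by simple root bracketing, using $M\Omega_H=a_*/[2(1+\sqrt{1-a_*^2})]$ for Kerr. The sketch below is illustrative rather than \texttt{SuperRad}'s implementation, and the bracket $[10^{-6},1]$ is an assumption that holds for the superradiant modes considered here:

```python
import math

def alpha_1(a_star, m_sigma, n_sigma, beta=0.9, tol=1e-12):
    """Solve beta*a*(1 - a^2/(2 n^2)) = m * M*Omega_H(a_*) by bisection."""
    M_Omega_H = a_star / (2.0 * (1.0 + math.sqrt(1.0 - a_star**2)))
    f = lambda a: beta * a * (1.0 - a**2 / (2.0 * n_sigma**2)) - m_sigma * M_Omega_H
    lo, hi = 1e-6, 1.0   # f(lo) < 0 < f(hi) for the modes considered
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For example, $a_*=0.7$, $m_\sigma=1$, $n_\sigma=2$ gives $\alpha_1\approx 0.23$.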
\subsection{Oscillation frequencies} \label{sec:frequency}
The real part of the superradiantly unstable field's frequency
determines the cloud's oscillation about the BH
\begin{align}
A_\mu,\Phi\sim \cos( \omega_R t),
\end{align}
and also sets the characteristic GW frequency $f_{\rm GW}=\omega_R/\pi$ (up to self-gravity corrections).
Because of the BH's gravitational potential, a bound massive
particle has a frequency $\omega_R<\mu$. Expanding
\eqref{eq:fieldeq} to leading order in $\alpha$ yields a Schrödinger-type
equation with potential $U\sim \alpha/r$, at a radius $r$ away from the BH. In
this regime, the solutions are simple hydrogen-like bound states for scalar and
vector fields \cite{Detweiler:1980uk,Pani:2012bp}. The scalar states
are characterized by their angular momentum quantum number $\ell_S$, as well as
azimuthal mode number $-\ell_S\leq m_S\leq \ell_S$ and radial node number
$\hat{n}_S\geq 0$, while the vector states are identified by an analogous
definition of radial node number $\hat{n}_V\geq 0$, angular momentum number
$\ell_V$ and azimuthal index $m_V$, in addition to the polarization state
$\hat{S}\in\{-1,0,1\}$. With this, the oscillation frequencies of the scalar
and vector clouds are, in the non-relativistic limit,
\begin{align}
\omega_R = \mu\left(1-\frac{\alpha^2}{2n^2_\sigma}+C_\sigma[\alpha]\right),
\label{eq:freqnonrel}
\end{align}
where $C_\sigma[\alpha]$ includes higher order corrections.
In particular, we include terms of up to $\mathcal{O}(\alpha^5)$,
obtained by keeping sub-leading contributions in $\alpha$ when solving
\eqref{eq:fieldeq} \cite{Baumann:2019eav}, with the full expressions
for $C_\sigma$ given in appendix~\ref{app:fieldsolutions} [in particular
\eqref{eq:freqrelcorrectionsscalar} and \eqref{eq:freqrelcorrectionsvector}].
The principal number $n_\sigma$ depends on the intrinsic spin of the field and is given by
\begin{align}
n_S=\ell_S +1 +\hat{n}_S, & & n_V=m_V+\hat{n}_V+\hat{S}+1.
\label{eq:principlequantumnumbers}
\end{align}
Notice that, in the case of the scalar field, we follow the conventions of
Ref.~\cite{Baumann:2019eav}, while in the vector case, we follow
Ref.~\cite{Dolan:2018dqv}. In the language of the previous section, the
expressions \eqref{eq:freqnonrel} are the Newtonian estimates
$q_N(\alpha,a_*)$.
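To leading order, i.e., dropping $C_\sigma[\alpha]$, the Newtonian estimate \eqref{eq:freqnonrel} together with \eqref{eq:principlequantumnumbers} amounts to a few lines; this truncation is only for illustration:

```python
def n_principal(kind, ell=0, n_hat=0, m=0, S_hat=0):
    """Principal number n_sigma of Eq. (principlequantumnumbers)."""
    if kind == "S":                  # scalar: n_S = ell_S + 1 + n_hat_S
        return ell + 1 + n_hat
    return m + n_hat + S_hat + 1     # vector: n_V = m_V + n_hat_V + S_hat + 1

def omega_R_newtonian(mu, alpha, n_sigma):
    """Hydrogenic frequency, Eq. (freqnonrel) with C_sigma dropped."""
    return mu * (1.0 - alpha**2 / (2.0 * n_sigma**2))
```

For instance, the $\ell_S=m_S=1$, $\hat n_S=0$ scalar state has $n_S=2$, while the $m_V=1$, $\hat S=-1$, $\hat n_V=0$ vector state has $n_V=1$.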
We numerically estimate $\omega_R$ using the methods discussed in
appendix~\ref{app:fieldsolutions}, without assuming an expansion in small
$\alpha$. These estimates are calculated for $m_\sigma\in\{1,2\}$ for both
scalar and vector fields. Here, we simply summarize that our numerical methods
are more accurate and precise than the analytic estimates everywhere in
$\mathcal{D}_{\rm int}$. The waveform model provides accurate values for
$\omega_R$ in $\mathcal{D}_{\rm fit}$ using a fit to the numerical results. We
perform this fit using the ansatz
\begin{align}
\frac{\omega_R}{\mu}-1+\frac{\alpha^2}{2n^2_\sigma}-C_\sigma[\alpha]=\sum_{q,p}\alpha^p\hat{a}_{p,q}(1-a_*^2)^{q/2},
\label{eq:wrfit}
\end{align}
with appropriately chosen ranges for $p$ and $q$, to the numerical data in
a subset of $\mathcal{D}_{\rm int}$ (see appendix~\ref{app:uncertainties} for details). The right-hand side of \eqref{eq:wrfit}
corresponds to $g(\alpha,a_*)$, defined in the previous section. This ansatz
is constructed to recover the analytic estimates in the $\alpha\ll 1$ regime. Within
\texttt{SuperRad}, we combine all three of these ingredients as described in
\eqref{eq:quantity} to determine $\omega_R$ in the parameter space. Therefore,
we ensure that \texttt{SuperRad} provides the most accurate and precise
estimate for frequencies of a given superradiant bosonic field around a fixed
Kerr BH background across the entire parameter space. The correction of these
frequency estimates due to the self-gravity of the superradiant cloud is
discussed in Sec.~\ref{sec:freqshift}.
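Since \eqref{eq:wrfit} is linear in the coefficients $\hat a_{p,q}$, the calibration reduces to a linear least-squares problem. The sketch below uses illustrative ranges $p\in\{6,7\}$, $q\in\{0,1\}$; the ranges actually used are mode-dependent:

```python
import numpy as np

def fit_residual(alphas, spins, residuals, ps=(6, 7), qs=(0, 1)):
    """Least-squares fit of residuals to sum_{p,q} a_{p,q} alpha^p (1 - a_*^2)^(q/2)."""
    cols = [alphas**p * (1.0 - spins**2)**(0.5 * q) for p in ps for q in qs]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return coeffs  # hat{a}_{p,q}, ordered as the (p, q) loop above
```

Fitting synthetic residuals generated from known coefficients recovers those coefficients, which is a useful sanity check on the basis ordering.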
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{./wr_model.pdf}
\caption{The relative difference $D_R$, between the prediction for $\omega_R$
provided by \texttt{SuperRad}, and purely analytical non-relativistic estimates
given in \eqref{eq:freqnonrel} together with
\eqref{eq:freqrelcorrectionsscalar} and \eqref{eq:freqrelcorrectionsvector}.
Dotted lines indicate the $\mathcal{D}_{\rm int}$ region in \texttt{SuperRad}. We focus
on a few representative cases.}
\label{fig:frequenciesmodelerror}
\end{figure}
In \figurename{ \ref{fig:frequenciesmodelerror}}, we compare the available
analytic estimates, given in \eqref{eq:freqnonrel} [together with
\eqref{eq:freqrelcorrectionsscalar} and \eqref{eq:freqrelcorrectionsvector}],
with those provided by \texttt{SuperRad}. As expected, the relative difference
between the analytic estimates and \texttt{SuperRad}'s decays as $\sim\alpha^6$
[the order of the leading-in-$\alpha$ unknown coefficient in the expansion of
\eqref{eq:freqnonrel}] in the Newtonian regime. For large spins $a_*$ and large
$\alpha$, i.e., in the relativistic regime, the analytic estimates have
relative errors of up to $D_R(\omega_R)\sim 10^{-2}$. In comparison to the
vector results, the analytic estimates for $\omega_R^S$ are
more accurate in the most relativistic regime.
\subsection{Instability timescales} \label{sec:timescale}
The imaginary part of the frequency $\omega_I$ sets the superradiant instability
timescale $\tau_I=1/\omega_I$ of the bosonic cloud,
\begin{align}
A_\mu, \Phi\sim e^{\omega_I t}.
\end{align}
In the non-relativistic limit $\alpha \to 0$, the cloud sits far away from the
BH and the flux across the horizon, and hence the instability growth rate,
tends towards zero. For small, but non-zero $\alpha$, the rates scale with a
characteristic power $\kappa$, i.e., $\omega_I M\sim\alpha^{\kappa}$. This
scaling depends on the type of field (scalar or vector) and the mode
considered. Furthermore, at saturation, i.e., when
$\omega_R=m_\sigma\Omega_H$, the ultralight particles cease extracting
rotational energy from the BH, such that the growth rate vanishes.
Combining these two limits, the general behavior of the instability growth rates
for both the scalar and the vector cases is
\begin{align}
\omega_I M=\alpha^{\kappa}(\omega_R-m_\sigma\Omega_H)2r_+G_\sigma(a_*,\alpha).
\label{eq:ratenonrel}
\end{align}
Here, $G_\sigma(a_*,\alpha)$ is a function of the BH spin, as well as $\alpha$,
and determines the leading order and sub-dominant-in-$\alpha$ contributions to
$\omega_I$. The scaling powers $\kappa$ for scalar and vector fields are
\cite{Detweiler:1980uk,Pani:2012vp}
\begin{align}
\kappa_S=4m_S+5, & & \kappa_V=4m_V+2\hat{S}+5,
\end{align}
for the fastest growing configurations\footnote{Notice, in the relativistic
regime, it is non-trivial to identify the most unstable mode.}, and depend on
the azimuthal index $m_\sigma$ and the vector polarization state $\hat{S}$. The
leading order contributions in the scalar case~\cite{Detweiler:1980uk}
and the vector case \cite{Rosa:2011my,Pani:2012bp,Baryakhtar:2017ngi,Baumann:2019eav}
to $G_\sigma(a_*,\alpha)$ that we use
are given in Appendix~\ref{app:fieldsolutions} [in particular \eqref{eq:raterelcorrectionsscalar} and
\eqref{eq:raterelcorrectionsvector}, respectively].
These are Newtonian estimates that we use [$q_N(\alpha,a_*)$ in the language
of Sec.~\ref{sec:parameterspace}] for the imaginary frequency.
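A minimal sketch of \eqref{eq:ratenonrel} follows. Here $G_\sigma$ is a positive placeholder (set to unity), so only the structure, i.e., the $\alpha^\kappa$ prefactor and the zero crossing at the saturation point $\omega_R=m_\sigma\Omega_H$, is meaningful; the actual sign and normalization are carried by the $G_\sigma$ expressions in the appendix:

```python
import math

def kappa(kind, m, S_hat=0):
    """Scaling powers: kappa_S = 4m + 5, kappa_V = 4m + 2*S_hat + 5."""
    return 4 * m + 5 if kind == "S" else 4 * m + 2 * S_hat + 5

def omega_I_M(alpha, a_star, m_sigma, omega_R_M, kind="S", S_hat=0,
              G_sigma=lambda a_star, alpha: 1.0):
    """Eq. (ratenonrel) with a placeholder G_sigma."""
    r_plus_over_M = 1.0 + math.sqrt(1.0 - a_star**2)
    M_Omega_H = a_star / (2.0 * r_plus_over_M)
    return (alpha**kappa(kind, m_sigma, S_hat)
            * (omega_R_M - m_sigma * M_Omega_H)
            * 2.0 * r_plus_over_M * G_sigma(a_star, alpha))
```

With this structure, the growth rate vanishes exactly at saturation and changes sign across it, as described in the text.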
Similarly to the previous section, we utilize numerical techniques to obtain
accurate predictions for $\omega_I$ in the relativistic regime $\mathcal{D}_{\rm int}$
of the parameter space. The methods and their accuracy are outlined in
Appendix~\ref{app:fieldsolutions}. Here, we simply note again that the numerical
predictions are more accurate than the analytic Newtonian expressions everywhere in $\mathcal{D}_{\rm int}$.
Similar to the
real part of the cloud's frequency, the analytic results obtained in the
Newtonian limit are connected with the numerical estimates in the $\alpha\sim
1$ regime by fitting\footnote{Notice a typo in eq. (A.2) of
\cite{Siemonsen:2019ebd}; it is fixed by $C_m\rightarrow 2C_m r_+$.} the ansatz
\begin{align}
\begin{aligned}
& \frac{\omega_IM \alpha^{-\kappa}G^{-1}_\sigma(a_*,\alpha)}{2r_+(\omega_R-m_\sigma\Omega_H)}-1\\
& \qquad =\sum_{p,q}\alpha^p\left[\hat{b}_{p,q}a_*^{q+1}+\hat{c}_{p,q}(1-a_*^2)^{q/2}\right],
\label{eq:wifit}
\end{aligned}
\end{align}
with appropriately chosen ranges for $p$ and $q$, to the numerical data obtained in $\mathcal{D}_{\rm int}$ (see Appendix~\ref{app:uncertainties} for details).
The right-hand side of \eqref{eq:wifit} serves as
$g(\alpha,a_*)$ in the construction \eqref{eq:quantity} for $\omega_I$.
Analogously to the oscillation frequency, with this construction we ensure
\texttt{SuperRad} provides the most accurate and precise estimates for the
superradiance growth rate $\omega_I$ everywhere in the cloud's parameter space.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{./wi_model.pdf}
\caption{The relative difference $D_R$ between the prediction for $\omega_I$
provided by \texttt{SuperRad}, and purely analytical non-relativistic estimates
given in \eqref{eq:ratenonrel} together with
\eqref{eq:raterelcorrectionsscalar} and \eqref{eq:raterelcorrectionsvector}.
Dashed lines indicate the $\mathcal{D}_{\rm int}$ region in \texttt{SuperRad}. We show the
same representative cases as in \figurename{
\ref{fig:frequenciesmodelerror}}.
} \label{fig:growthratesmodelerror}
\end{figure}
In \figurename{ \ref{fig:growthratesmodelerror}}, we illustrate the relative
differences between the analytic estimates using only \eqref{eq:ratenonrel}
together with \eqref{eq:raterelcorrectionsscalar} and
\eqref{eq:raterelcorrectionsvector}, and the estimates provided by
\texttt{SuperRad}. In the Newtonian regime, the relative difference approaches
zero, while in the relativistic regime, the
relative error in the analytic estimates becomes $D_R(\omega_I)\sim\mathcal{O}(1)$
in both the scalar and the vector cases. Hence, using non-relativistic analytic
estimates in the relativistic regime can lead to large systematic
uncertainties in the instability rate. We indicate the leading-in-$\alpha$ scaling of the difference for each $m_\sigma$. An $\sim\alpha^1$-scaling is expected in principle for both $m_\sigma=1$ and $m_\sigma=2$, however, due to our choices of $p$ and $q$ in \eqref{eq:wifit} (see also Appendix~\ref{app:uncertainties}), the leading power is $>1$ for $\alpha\ll 1$ in the $m_\sigma=2$ case. For $\alpha\gtrsim 0.1$, the scaling decreases to the expected $\sim\alpha^1$.
\section{Frequency shift} \label{sec:freqshift}
So far, we have considered calculations that assume the bosonic field
can be treated as a test field on a Kerr background. Even for cases where the
boson cloud mass reaches $M_c\sim 0.1M$, treating the spacetime as Kerr, with
quasi-adiabatically changing parameters, gives a good
approximation to the nonlinear treatment~\cite{East:2017ovw,East:2018glu}. However, in this section, we
address the effect of the self-gravity of the cloud, focusing in particular on
how it causes the characteristic increase in frequency of the cloud oscillation, and hence the frequency
of the emitted GW radiation. Though the cloud-mass induced
shift in the frequency is small, it will change as the cloud slowly dissipates
through gravitational radiation, affecting how long the GW
signal can be coherently integrated without taking this effect into account.
Quantitatively estimating the contribution to the frequency from the finite cloud mass
\begin{equation}
\Delta\omega(M_c) = \omega(M_c)-\omega(M_c=0)
\end{equation}
(which we will assume to be real) is the subject of this section. We employ a
Newtonian approach, recovering and extending known results in the literature.
We then compare these results in the scalar case to a fully nonlinear approach
using synchronized complex fields around BHs.
\subsection{Newtonian approach}
\label{ssec:freq_nr}
The Newtonian approach, utilized to estimate the cloud mass correction to the
frequency in Refs.~\cite{Baryakhtar:2017ngi,Isi:2018pzk,Baryakhtar:2020gao}, exploits the fact that in the
non-relativistic limit,
the energy density\footnote{In a spacetime, like Kerr, with asymptotically timelike Killing field $\xi^\mu$ and time-slice normal vector $n^\mu$, the energy density is defined as $\rho=n_\alpha\xi_\beta T^{\alpha\beta}$ through the scalar or vector field's energy-momentum tensor $T^{\alpha\beta}$.} $\rho$ is
spread out over large scales away from the BH, minimizing curvature effects. In
this limit, the cloud itself sources a Newtonian gravitational potential
$\Psi$, which follows the Poisson equation:
\begin{align}
\Delta_\text{flat} \Psi=4\pi \rho, & & \Psi(\textbf{r})=-\int d^3\textbf{r}' \frac{\rho(\textbf{r}')}{|\textbf{r}-\textbf{r}'|}.
\end{align}
Here, the coordinates $\textbf{r}$ can be
identified with spatial slices of Kerr, where gauge ambiguities disappear in the
$\alpha\ll 1$ limit. Furthermore, while one might choose
$d^3\textbf{r}'=\sqrt{\gamma}d^3x'$, with $\gamma$ the determinant of the
metric on a spatial slice of Kerr, a priori this is not more
consistent than simply setting $\gamma\rightarrow\gamma_\text{flat}$,
which is our choice.
In this weak-field limit, the scalar wave equation \eqref{eq:fieldeq} is given by
\begin{equation}
\label{eom-eq}
(\omega - \mu_S) \Phi(\textbf{r}) \approx \left(-\frac{\nabla^2}{2\mu_S}-\frac{\mu_S M}{r}+\mu_S \Psi \right)\Phi(\textbf{r}) \ ,
\end{equation}
with $r=|\textbf{r}|$.
Taking the usual approximation that the shift in frequency at leading order in $\alpha$ is
given by evaluating the perturbed operator on the unperturbed eigenfunction,
the self-gravity of a cloud with mass $M_c = \int d^3\textbf{r}\rho(\textbf{r})$ contributes a shift in frequency of
\begin{align}
\Delta \omega \frac{M_c}{\mu} &\approx \int d^3\textbf{r} \rho \Psi \nonumber \\
&= -2\int d^3 \textbf{r} \int_{|\textbf{r}'|<|\textbf{r}|} d^3 \textbf{r}' \frac{\rho(\textbf{r})\rho(\textbf{r}')}{|\textbf{r}-\textbf{r}'|}
= 2 W.
\label{eqn:delta_omega}
\end{align}
We used the non-relativistic approximation that $\rho \approx \mu^2 |\Phi|^2$, and in the last line introduced
the total potential energy $W$ in the cloud.
We note that this factor of $2$ (from restricting the inner integral) is missing from some references~\cite{Baryakhtar:2017ngi,Isi:2018pzk}, but included in
Ref.~\cite{Baryakhtar:2020gao}.
An equivalent derivation gives the same expression~\eqref{eqn:delta_omega} in the vector case as well.
We can further simplify the frequency shift calculation by considering a low multipole
approximation. The denominator of~\eqref{eqn:delta_omega} can be expanded in
terms of spherical harmonics $Y_{\ell m}(\Omega)$, where $(\Omega) = (\theta,
\varphi)$ describes the angular dependence, as
\begin{align}
\label{sph-harm-expansion-eq}
\frac{1}{|\textbf{r}-\textbf{r}'|}=\sum_{\ell=0}^\infty \sum_{m=-\ell}^\ell\frac{r'^\ell}{r^{\ell+1}} \frac{4\pi}{2\ell+1} Y_{\ell m}(\Omega)\bar{Y}_{\ell m}(\Omega'),
\end{align}
assuming $|\textbf{r}'|<|\textbf{r}|$.
If we keep only the monopolar, i.e., the $\ell=0$, component of the density, which we
can write in terms of the radial mass function $m_c(r)=\int d\Omega' \int_0^r dr' r'^2\rho(\textbf{r}')$,
then~\eqref{eqn:delta_omega} simplifies to
\begin{align}
\Delta\omega=- \frac{2\mu}{M_c}\int d^3\textbf{r}\frac{m_c(|\textbf{r}|)\rho(\textbf{r})}{|\textbf{r}|}.
\label{eqn:delta_omega_ss}
\end{align}
In general, there are non-vanishing higher order multipoles due to the non-trivial
azimuthal and polar dependencies of the cloud's energy densities that are neglected above.
However, for the $m_S=1$ state, the error in the calculation of $W$ associated with making this
monopole approximation, as opposed to considering higher multipole corrections, is $\approx2\%$ at leading order in $\alpha$ \footnote{At leading order in $\alpha$, only the quadrupole $\ell=2$ contributes non-trivially.}.
In the $m_V=1$ case, all higher-order multipolar contributions are sub-leading in $\alpha$, since the Newtonian energy density is spherically symmetric. While the cloud states with larger azimuthal number have strong polar dependencies, the corrections from higher-order multipoles are moderate. For the $m_V=2$ state, the quadrupolar contribution is $\approx2\%$ of the monopolar piece, at leading order in $\alpha$.
The frequency shift $\Delta \omega$ is then calculated for different modes
of the non-relativistic solutions to \eqref{eom-eq} for scalar clouds, and for
corresponding non-relativistic vector cloud solutions.
Expressions for $\Delta\omega$, valid for any azimuthal index $m_\sigma$, as well as
a table listing the first few values, are given in \eqref{eq:deltaomegaexpressions} and in Table~\ref{tab:nonrel_shifts}, respectively.
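As a concrete check of the monopole formula \eqref{eqn:delta_omega_ss}, consider the hydrogenic $|n_S{=}2,\ell_S{=}m_S{=}1\rangle$ scalar state. With $x=r/a_0$ and $a_0=M/\alpha^2$, the angle-integrated radial mass profile is $P(x)=x^4e^{-x}/24$, and \eqref{eqn:delta_omega_ss} reduces to $M\Delta\omega=-2\alpha^3(M_c/M)\,I$, where $I=\int_0^\infty F(x)P(x)\,dx/x$ and $F$ is the enclosed-mass fraction. This toy evaluation, which gives $I\approx 0.091$, reproduces the $\alpha^3 M_c$ scaling that \eqref{eq:deltaomegafit} factors out, but is not \texttt{SuperRad}'s calibrated expression:

```python
import math

def shift_integral(x_max=60.0, n_steps=60000):
    """I = int_0^inf F(x) P(x) dx / x for P(x) = x^4 exp(-x) / 24."""
    dx, I, F = x_max / n_steps, 0.0, 0.0
    for i in range(1, n_steps + 1):
        x = i * dx
        P = x**4 * math.exp(-x) / 24.0   # normalized: int_0^inf P dx = 1
        F += P * dx                      # enclosed-mass fraction m_c / M_c
        I += F * P / x * dx
    return I

def delta_omega_M(alpha, Mc_over_M):
    """Monopole shift M*Delta(omega) = -2 alpha^3 (M_c/M) I for |211>."""
    return -2.0 * alpha**3 * Mc_over_M * shift_integral()
```

The shift is negative, i.e., the binding energy of the cloud lowers the oscillation frequency, consistent with the fully relativistic results discussed below.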
\subsection{Quasi-relativistic approach}
\label{ssec:freq_qr}
These analytic expressions \eqref{eq:deltaomegaexpressions} are accurate in the Newtonian limit, i.e., $\alpha\ll 1$. Here, we extend the validity to the $\alpha\sim\mathcal{O}(1)$ regime, with the caveat that a more accurate nonlinear treatment, discussed in the next section, is ultimately necessary. Within \texttt{SuperRad}, we compute the frequency shift in the relativistic regime $\mathcal{D}_{\rm int}$ in a quasi-relativistic approximation, as in Ref.~\cite{Siemonsen:2019ebd}.
We take the relativistic field configurations (derived in Appendix~\ref{app:fieldsolutions}) in Boyer-Lindquist
coordinates and use them to compute the energy density $\rho$, which we then use
to compute the frequency shift $\Delta\omega$ using the monopolar Newtonian expression~\eqref{eqn:delta_omega_ss}. This approach explicitly assumes a linear dependence of the frequency shift on the cloud mass: $\Delta\omega\sim M_c$. Given these quasi-relativistic results in the relativistic regime of the parameter space, we follow the approach taken in Secs.~\ref{sec:frequency} and \ref{sec:timescale}, to calibrate a fit that assumes the analytic expressions \eqref{eq:deltaomegaexpressions} against the quasi-relativistic results in $\mathcal{D}_{\rm int}$. The fit ansatz is
\begin{align}
\frac{M^2 \Delta\omega}{-\alpha^3 M_c}+F_\sigma=\sum_{p\geq 1} \alpha^p \hat{d}^\sigma_{p},
\label{eq:deltaomegafit}
\end{align}
where $F_\sigma$ contains the leading-in-$\alpha$ contribution, computed above and explicitly given in Appendix~\ref{app:frequencyshift}.
As a figure of merit for comparing how relevant this will
be in GW observations of boson clouds, we can calculate the extra
accumulated phase shift due to the frequency drift,
using that $\omega_{\rm GW}=2\omega_R$,
\begin{equation}
\Delta \phi_{\rm GW} = 2 \int_{t_{\rm max}}^{t_{\rm max}+\tau}\left[ \omega_R(t)-\omega_R(t_{\rm max})\right]dt,
\label{eqn:deltaphi}
\end{equation}
where $t_{\rm max}$ is the time the cloud mass is at its maximum.
We show this for the scalar and vector case in Fig.~\ref{fig:delta_phi}, taking
the total time $\tau=\min(\tau_{\rm GW},1 \text{yr})$ to be either the characteristic time over which the GW signal decays $\tau_{\rm GW}$,
or one year, when $\tau_{\rm GW}> 1$ yr (assuming a $50 \ M_{\odot}$ BH).
From the figure, we can see that $\Delta \phi_{\rm GW} \gg 1$ across the parameter
space, except
for the scalar case when $\alpha \lesssim 0.1$.
Thus, properly accounting for this frequency shift is important to be
able to coherently integrate the GW signal. The diverging behavior of the $\tau=\tau_{\rm GW}$ curves in Fig.~\ref{fig:delta_phi} at low $\alpha$ is due to the steeper $\alpha$-scaling of the GW timescales compared with the frequency shift's scaling.
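To see how such phases accumulate, one can evaluate \eqref{eqn:deltaphi} under two simplifying assumptions that are not taken from the waveform model itself: a shift linear in the cloud mass, $\omega_R(t)=\omega_R^{(0)}-C\,M_c(t)$ with $C>0$, and GW-driven depletion $M_c(t)=M_{c,\rm max}/(1+t/\tau_{\rm GW})$:

```python
import math

def delta_phi_gw(C_Mc0, tau_gw, tau, n_steps=200000):
    """Midpoint-rule evaluation of Eq. (deltaphi) with t_max = 0."""
    dt, acc = tau / n_steps, 0.0
    for i in range(n_steps):
        t = (i + 0.5) * dt
        acc += C_Mc0 * (1.0 - 1.0 / (1.0 + t / tau_gw)) * dt
    return 2.0 * acc

def delta_phi_closed(C_Mc0, tau_gw, tau):
    """Closed form under the same assumptions."""
    return 2.0 * C_Mc0 * (tau - tau_gw * math.log(1.0 + tau / tau_gw))
```

Here \texttt{C\_Mc0} stands for the product $C\,M_{c,\rm max}$, i.e., the magnitude of the initial frequency shift; the two evaluations agree to numerical precision.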
\begin{figure}
\includegraphics[width=0.49\textwidth]{./deltaphi.pdf}
\caption{
The additional accumulated GW phase $\Delta \phi_{\rm GW}$ due to the
increase in frequency as the boson cloud mass decreases [defined in
\eqref{eqn:deltaphi}] for scalar (blue curves) and vector (orange curves) bosons. This phase is
calculated beginning from when the cloud mass is maximum for a duration of
$\tau_{\rm GW}$ (solid curves) and for one year (when $\tau_{\rm GW}>1$ yr;
dotted curves). We assume a BH with $M=50 \ M_{\odot}$ and $a_*=0.99$.
}
\label{fig:delta_phi}
\end{figure}
\subsection{Comparison to fully relativistic approach}
\label{ssec:freq_r}
To gauge the error in the quasi-relativistic frequency shifts described above,
we compare them to numerically constructed, fully relativistic solutions.
Following Herdeiro \& Radu~\cite{Herdeiro:2015hr}, we construct stationary and
axisymmetric spacetime solutions to the full Einstein-Klein-Gordon field
equations consisting of a massive complex scalar field cloud with $\Phi\sim
e^{i m_S \varphi-i\omega_R t}$ around a BH, satisfying the synchronization
condition $\omega_R = m_S \Omega_H$. These can be thought of as oscillation
(or, equivalently, azimuthal angle) averaged versions of the scalar cloud
solutions. By calculating how the frequency of the solution changes with $M_c$
at fixed $M$ and $\alpha$, we can obtain a fully-relativistic estimate for the
frequency shift $\Delta \omega$. The frequency shift is the part of the real frequency
that is dependent on the boson cloud mass, $ \omega(M_c) = \omega(M_c = 0) + \Delta \omega(M_c)$.
For the values of cloud mass relevant to
superradiance, $\Delta \omega$ is, to a good approximation,
linear in $M_c$, as expected from the non-relativistic
results above. Therefore, here we compute a numerical estimate of
$\partial \omega /\partial M_c$ at $M_c=0$ and fixed $\alpha$
(which is $\approx \Delta \omega /M_c$, to within $\sim 1\%$ for $M_c<0.04M$).
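Schematically, this amounts to fitting a straight line through sampled $(M_c,\omega)$ pairs at fixed $\alpha$ and reading off the slope; the data below are synthetic placeholders, not the relativistic solutions themselves:

```python
def slope_at_zero(mcs, omegas):
    """Least-squares slope d(omega)/d(M_c) from sampled pairs."""
    xbar = sum(mcs) / len(mcs)
    ybar = sum(omegas) / len(omegas)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(mcs, omegas))
    return num / sum((x - xbar)**2 for x in mcs)

# synthetic, linear-in-M_c samples (placeholder values)
mcs = [0.01, 0.02, 0.03, 0.04]
omegas = [0.40 - 0.18 * m for m in mcs]
```

For data that are close to linear in $M_c$, as found here for $M_c<0.04M$, the fitted slope approximates $\Delta\omega/M_c$.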
In Fig.~\ref{error-shifts}, we show how this compares, for $m_S=1$, to the
non-relativistic and quasi-relativistic results for the frequency shift. From
there it can be seen that the quasi-relativistic estimate used by
\texttt{SuperRad} is slightly more accurate than the non-relativistic
expressions, but still noticeably underestimates the frequency decrease, by
$\approx 32\%$, for $\alpha = 0.4$. For small $\alpha$, all three calculations
give similar results, as expected. In particular, for $\alpha<0.15$, the difference
in the quasi- versus the fully relativistic calculation is $<7\%$.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{./shifts_vs_alpha.pdf}
\caption{A comparison of different approximations of the frequency shift due to
the boson cloud's self-gravity for a scalar field with $m_S=1$. We compare the
non-relativistic (see Sec.~\ref{ssec:freq_nr}) and quasi-relativistic
(see Sec.~\ref{ssec:freq_qr}) approximations to the leading-order-in-$M_c$ part of the
fully relativistic (labelled ``relativistic") relative frequency shift. In
particular, we show, for fixed $\alpha$, $(\partial \omega/\partial
M_c)(M_c=0)\approx \Delta \omega/M_c$, where the equality is exact for the
non-relativistic and quasi-relativistic approximations.
\label{error-shifts}
}
\end{figure}
We plan to include the fully relativistic frequency corrections in a future
version of \texttt{SuperRad}, and we defer details on constructing the
BH-complex boson cloud solutions, as well as the massive vector case (where
we expect comparable, if somewhat larger, relativistic corrections) to upcoming
work \cite{future_paper}.
We note that with such relativistic solutions, there is still theoretical error associated with taking a
complex instead of a real field (and hence an axisymmetric spacetime).
However, we can estimate this by comparing $\Delta \omega$ calculated from~\eqref{eqn:delta_omega}
using the axisymmetric energy density calculated from the complex scalar field
solution, to the same quantity calculated from just taking the real part, scaled to give the same energy
$\Phi \to \sqrt{2} \text{Re}[\Phi]$.
We find the relative difference to be $5\times 10^{-5}$, indicating the theoretical error in
the frequency shift should be $<0.01\%$ for these relativistic
results.
\section{Gravitational waves}
\label{sec:gw}
In the previous sections, we focused primarily on the conservative sector,
neglecting GW dissipation from the system. In what follows, we outline the
computation of the GW strain from the oscillating boson cloud in the source
frame. The general procedure is to consider superradiant solutions to the
field equations \eqref{eq:fieldeq} as sources for the linearized Einstein
equations. These source linear metric perturbations around the BH, which then
propagate on an (approximately) fixed Kerr spacetime towards the observer.
Analogous to the approach outlined in Sec.~\ref{sec:cloudproperties}, we use
numerical calculations of the emitted GWs that are valid in the
relativistic regime, and combine those with input from analytic calculations
that are valid in the Newtonian regime, $\alpha\ll 1$, to cover the entire
parameter space. In contrast, however, to the quantities calculated in
Sec.~\ref{sec:cloudproperties}, in several cases only the leading order scaling
of the GW power and strain with $\alpha$ is known, while the coefficient can be
fixed accurately only with numerical methods.
In the following, we begin by outlining the conventions used in the literature
and in \texttt{SuperRad} in Sec.~\ref{sec:conventions}. We then discuss the
emitted GW energy flux and the polarization waveform, as well as the GW modes
in the source frame in Sec.~\ref{sec:gwpowerstrain}.
\subsection{Conventions} \label{sec:conventions}
At a large distance $r$ away from the source, the
GWs in the source frame are captured by the polarization waveform
\begin{align}
h=h_+-ih_\times= \frac{\mathcal{A}}{r} e^{-i\phi_\text{GW}(t)}\psi(\theta)e^{im_{\rm GW} \varphi}.
\end{align}
The GW frequency is just twice the cloud oscillation frequency, hence
\begin{align}
\phi_\text{GW}(t) = 2\int \omega_R(t) dt.
\end{align}
As discussed in Sec.~\ref{sec:freqshift}, the frequency will change over
time as the cloud first grows exponentially, and then decays through GW
dissipation. The azimuthal dependence is fixed exactly by that of the cloud in
question: $|m_{\rm GW}|=2m_\sigma$, whereas the polar contribution
$\psi(\theta)$ is dominated by the $\ell_{\rm GW}=m_{\rm GW}$
spin-($-2$)-weighted spherical harmonic mode, except in the relativistic regime
of the parameter space. The overall amplitude $\mathcal{A}$ of the signal
scales with a leading power in the gravitational fine structure constant of the
system: $\mathcal{A}\sim \alpha^q$. This amplitude is approximately independent
of BH spin and is proportional to the cloud's mass: $\mathcal{A}(t) \propto
M_c(t)$.
We decompose the polarization waveform $h$ into GW modes $h^{\ell m}$ with
${}_{-2}Y_{\ell m}(\theta,\varphi)={}_{-2}S_{\ell m}(\theta)e^{im\varphi}$, the
$-2$-weighted spherical harmonics\footnote{Normalized as $\int d\cos\theta \
{}_{-2}\bar{S}_{\ell m}(\theta) \ {}_{-2}S_{\ell m}(\theta) =
1$.}, leading to:
\begin{align}
h^{\ell m}=\int_{S^2}d\Omega \, h \ {}_{-2}\bar{Y}_{\ell m}(\theta, \varphi).
\label{eq:gwmodes}
\end{align}
Here, and in the following, we drop the subscripts ``GW" on the GW mode labels
$(\ell,m)$ for brevity, and distinguish these from the corresponding cloud
labels by referring to the latter with $(\ell_\sigma,m_\sigma)$. The
polarization waveform can be reconstructed as
\begin{align}
\begin{aligned}
h_+= & \ \frac{1}{r}\sum_{\ell\geq m}|h^{\ell m}|\left[ {}_{-2}S_{\ell m}+(-1)^\ell {}_{-2}S_{\ell -m} \right] \\
& \qquad\times\cos(\phi_{\rm GW}+m\varphi+\tilde{\phi}_{\ell m}), \\
h_\times= & \ -\frac{1}{r}\sum_{\ell\geq m}|h^{\ell m}|\left[ {}_{-2}S_{\ell m}-(-1)^\ell {}_{-2}S_{\ell -m} \right] \\
& \qquad\times\sin(\phi_{\rm GW}+m\varphi+\tilde{\phi}_{\ell m}),
\label{eq:polarizationwaveform}
\end{aligned}
\end{align}
where we used $h^{\ell, -m}=(-1)^\ell \bar{h}^{\ell m}$, and defined
$\tilde{\phi}_{\ell m}$ as the complex phase-offsets between different $h^{\ell
m}$. Finally, the total GW energy flux is
\begin{align}
P_{\rm GW}=\int d\Omega \frac{r^2(2\omega_R)^2|h|^2}{16\pi},
\end{align}
and can be decomposed into the power emitted in each polar GW $\ell$-mode as
\begin{align}
P_{\rm GW}=P_{\rm GW}^{\ell=m}+P_{\rm GW}^{\ell=m+1}+P_{\rm GW}^{\ell=m+2}+\dots.
\label{eq:gwpower}
\end{align}
Due to the amplitude scaling $\mathcal{A}\propto M_c$, it is convenient to
factor out the dependence on the cloud's mass, and quote results only for the
rescaled GW power:
\begin{align}
\tilde{P}_{\rm GW}=P_{\rm GW} M^2/M_c^2.
\label{eq:rescaledgwpower}
\end{align}
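For a signal dominated by the single $\ell=m=2$ mode, \eqref{eq:polarizationwaveform} takes a simple closed form, since in the normalization used here ($\int d\cos\theta\,|{}_{-2}S_{\ell m}|^2=1$) the spin-($-2$)-weighted spherical harmonics are ${}_{-2}S_{2,\pm 2}(\theta)=\sqrt{5/32}\,(1\pm\cos\theta)^2$. The sketch below returns only the amplitude factors multiplying the cosine and sine carriers, dropping $1/r$ and the phases:

```python
import math

def plus_cross_amplitudes(theta, h22_abs=1.0):
    """Amplitude factors of h_+ and h_x for a pure l = m = 2 signal."""
    A = math.sqrt(5.0 / 32.0)
    S_p2 = A * (1.0 + math.cos(theta))**2   # -2_S_{2, 2}(theta)
    S_m2 = A * (1.0 - math.cos(theta))**2   # -2_S_{2,-2}(theta)
    h_plus = h22_abs * (S_p2 + S_m2)        # (-1)^l = +1 for l = 2
    h_cross = h22_abs * (S_p2 - S_m2)
    return h_plus, h_cross
```

Face-on ($\theta=0$) the two amplitudes coincide, i.e., the signal is circularly polarized, while edge-on ($\theta=\pi/2$) the cross amplitude vanishes and the signal is linearly polarized.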
\subsection{Gravitational wave power and strain} \label{sec:gwpowerstrain}
There are two main avenues to determine the strain $h$ in the context of BH superradiance.
On the one hand, there are frequency-domain approaches, solving a type of
differential eigenvalue problem that assumes a BH background with linear
perturbations, while on the other hand, there are time-domain numerical
methods, which solve the full nonlinear Einstein equations. The former are readily
extended across the entire relevant parameter space, but do not capture nonlinear effects,
while the latter make no approximations, but carry relatively large
numerical uncertainties, and are not easily extended to cover large parts of the parameter
space. In this work, we mainly leverage frequency-domain methods, and validate
these against time-domain estimates, where applicable. These frequency-domain
methods can be classified into the ``flat" and the ``Schwarzschild"
approximations, as well as what we call the ``Teukolsky" approximation. The former
two are analytic estimates, valid only in the non-relativistic regime,
$\alpha\ll 1$, while the last named is a numerical approach,
which is computationally efficient only when $\alpha$ is not too small.
The details of these approximations are given in Appendix~\ref{app:gws}.
Ultimately, as done above, \texttt{SuperRad} combines the best of both worlds
and provides the most accurate estimates across the entire parameter space.
In the non-relativistic limit, the currently available results are of the form
\begin{align}
\tilde{P}_{\rm GW}=H \alpha^{\eta}.
\label{eq:nonrelPgw}
\end{align}
The respective $\alpha$ scalings for the GW power from scalar and vector superradiant clouds are \cite{Arvanitaki_precision,Yoshino:2013ofa,Brito:2014wla,Baryakhtar:2017ngi}
\begin{align}
\eta_S=4m_S+10, & & \eta_V=4m_V+6,
\end{align}
while the numerical coefficient $H$ depends on the type of approximation
employed. We quote all available results in Appendix~\ref{app:gws}, and focus
here solely on those associated with $m_\sigma=1$ cloud states. The
Schwarzschild approximation has been studied only in the $m_\sigma=1$ case,
resulting in \cite{Brito:2014wla,Baryakhtar:2017ngi}
\begin{align}
(H_S)_{\text{Schw.}}^{m_S=1}=\frac{484+9\pi^2}{23040}, & & (H_V)_{\text{Schw.}}^{m_V=1}=60.
\end{align}
These overestimate the true emitted GW power, while the ``flat" approximation \cite{Yoshino:2013ofa,Baryakhtar:2017ngi}
\begin{align}
(H_S)_{\text{flat}}^{m_S=1}=\frac{1}{640}, & & (H_V)_{\text{flat}}^{m_V=1}=\frac{32}{5},
\end{align}
is expected to underestimate the total energy flux. From comparing the Schwarzschild with the
flat approximation, it is clear that the non-relativistic approximations have
systematic uncertainties of roughly one order of magnitude. Hence, even for
$\alpha\ll 1$, numerical techniques are required to reduce the uncertainty in
the coefficient $H$.
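For concreteness, the order-of-magnitude spread between the two analytic approximations can be checked directly from the quoted $m_\sigma=1$ coefficients (a quick numerical sketch; the variable names are ours):

```python
import math

# Non-relativistic coefficients H for m_sigma = 1 clouds, where
# P_tilde = H * alpha^eta with eta_S = 14 and eta_V = 10.
H_scalar_schw = (484 + 9 * math.pi**2) / 23040   # overestimates the flux
H_scalar_flat = 1 / 640                          # underestimates the flux
H_vector_schw = 60.0
H_vector_flat = 32 / 5

# The spread between the two approximations bounds the systematic
# uncertainty of the purely non-relativistic estimates.
spread_scalar = H_scalar_schw / H_scalar_flat    # ~16
spread_vector = H_vector_schw / H_vector_flat    # ~9.4
```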
For this reason, and to extend the validity of the GW power and strain
predictions of \texttt{SuperRad} to the part of the parameter space with the
loudest signals, we utilize frequency-domain numerical techniques in the
Teukolsky approximation. We outline the methods we use in
Appendix~\ref{app:gws}. Here, we simply state that our numerical results are
more accurate than either of the analytic approximation techniques, even for
moderately small $\alpha$.
As evident from \eqref{eq:nonrelPgw}, the GW emission is independent of the BH
spin $a_*$ in the Newtonian regime, while in the relativistic regime, the GWs
exhibit mild spin-dependence \cite{Yoshino:2013ofa,Siemonsen:2019ebd}. To simplify the
parameter space, we restrict to clouds in the saturated state;
that is, we assume $\omega_R=m_\sigma \Omega_H$
\footnote{The
validity of this last condition is discussed below in
Sec.~\ref{sec:cloudevolution}.}, removing
the spin-dependence from the parameter space. As in the discussion in
Sec.~\ref{sec:cloudproperties}, there exists a relativistic regime,
$\tilde{\mathcal{D}}_{\rm int}$, in which accurate numerical predictions can be
obtained. For $\alpha\ll 1$, the function
\begin{align}
\tilde{P}_{\rm GW}=b\alpha^{\eta}+c\alpha^{\eta+1}+\dots,
\label{eq:nonrelPgwfit}
\end{align}
is used to fit against the numerical results. In general, $b\neq H_\sigma$;
that is, we fit even the leading order coefficient from the numerically
obtained Teukolsky estimates. However, we check explicitly that $(H_\sigma)_{\rm
flat}<b<(H_\sigma)_{\rm Schw.}$ for both the scalar and vector $m_\sigma=1$
cloud states in the $\alpha\ll 1$ regime. \texttt{SuperRad} employs cubic-order interpolation in
$\tilde{\mathcal{D}}_{\rm int}$, and uses fits of the type \eqref{eq:nonrelPgwfit} for
$\alpha\in\tilde{\mathcal{D}}_{\rm fit}$.
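Since \eqref{eq:nonrelPgwfit} is linear in $(b,c)$ once $\eta$ is fixed, the fit reduces to ordinary linear least squares. A minimal sketch with synthetic data (the actual fit uses the numerical Teukolsky results, which are not reproduced here):

```python
import numpy as np

def fit_power_series(alpha, P, eta):
    """Least-squares fit of P = b*alpha^eta + c*alpha^(eta+1),
    which is linear in the unknown coefficients (b, c)."""
    A = np.column_stack([alpha**eta, alpha**(eta + 1)])
    (b, c), *_ = np.linalg.lstsq(A, P, rcond=None)
    return b, c

# Synthetic consistency check with known coefficients (illustrative
# values only, not real Teukolsky data):
eta = 10                              # vector m_V = 1 scaling
alpha = np.linspace(0.02, 0.10, 20)
P = 6.0 * alpha**eta + 3.0 * alpha**(eta + 1)
b, c = fit_power_series(alpha, P, eta)
```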
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{./power_comp.pdf}
\caption{
We show the mass-rescaled GW power $\tilde{P}_{\rm GW}$, defined in
\eqref{eq:rescaledgwpower}, emitted by the scalar and vector clouds with azimuthal number
$m_\sigma=1$ and 2 at the saturation point, $\omega_R=m_\sigma\Omega_H$,
comparing the Schwarzschild ``Schw." and the flat approximations to
\texttt{SuperRad} \textit{(colored lines)}, and time-domain estimates obtained
in \cite{East:2017mrj,East:2018glu}. Dash-dotted colored lines indicate where
\texttt{SuperRad} uses interpolation of numerical results over fits of the type \eqref{eq:nonrelPgwfit}.}
\label{fig:GWpower}
\end{figure}
In \figurename{ \ref{fig:GWpower}}, we compare the various calculations of
the GW power to the predictions by \texttt{SuperRad}. In the Newtonian limit,
\texttt{SuperRad} differs from \eqref{eq:nonrelPgw} due to the fit
\eqref{eq:nonrelPgwfit}, allowing different leading-$\alpha$ coefficients. The
underlying numerical results are more accurate (see Appendix~\ref{app:gws} for
details), allowing us to conclude that the estimates provided by
\texttt{SuperRad} are more accurate than the Schwarzschild or flat
approximations. The analytic estimates for $\tilde{P}_{\rm GW}$ are worse for
$m_\sigma=2$; we use those results only to inform the leading-$\alpha$ scaling behavior. We also show time-domain results from evolving the full
nonlinear Einstein-Proca equations~\cite{East:2017mrj,East:2018glu} for a few
points. These agree with the Teukolsky calculations to within the numerical
error of the simulations.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{./hlm_comp.pdf}
\caption{
We show the magnitudes of the GW modes $h_{\ell m}$, defined in
\eqref{eq:gwmodes}, which are sourced by $m_\sigma=1$ and $2$ scalar and vector
boson clouds at saturation ($\omega_R=m_\sigma\Omega_H$) as functions of
$\alpha$. Notice that $\ell\geq 2m_\sigma$.}
\label{fig:GWstraim}
\end{figure}
In \figurename{ \ref{fig:GWstraim}}, we show the GW modes provided by
\texttt{SuperRad}, as defined in \eqref{eq:gwmodes}, over the entire parameter
space, assuming the saturation condition. As expected from the non-relativistic
results, the quadrupolar contribution $h_{22}$ dominates throughout most of
the parameter space, except in the most relativistic regime, where $h_{32}$
increases in importance (and equivalently for $h_{44}$ and $h_{54}$). This
behavior implies a constant phase shift between the two involved multipolar
components. Hence, there is an $\alpha$-range where $|h_{22}|\sim|h_{32}|$
(and $|h_{44}|\sim|h_{54}|$), which means that the phase difference
$\tilde{\phi}_{22}$ (and $\tilde{\phi}_{44}$), defined in
\eqref{eq:polarizationwaveform}, introduces a non-trivial phase-offset between
the two involved polar modes.
\section{Growth and decay of boson cloud} \label{sec:cloudevolution}
In this section, we address how the superradiant instability and GW
calculations can be combined to calculate the evolution of the boson cloud,
which determines the evolution of the amplitude and frequency of the GW signal.
A boson cloud around a spinning BH evolves as
the cloud extracts energy and angular momentum from the BH through the
superradiant instability. During this process, the cloud also loses energy and
angular momentum to gravitational radiation. In a quasi-adiabatic
approximation, the evolution of this system is given by
\begin{align}
\begin{aligned}
\dot{M}_c & = 2\omega_I M_c-P_{\rm GW}, \\
\dot{M} & =-2\omega_I M_c, \\
\dot{J} & =-\frac{2 m_{\sigma} \omega_I}{\omega_R}M_c,
\label{eqn:cloud_evo}
\end{aligned}
\end{align}
where $\omega_R$, $\omega_I$, and $P_{\rm GW}$ are functions of the cloud mass
and BH mass and spin. The evolution of the boson cloud can be roughly divided
into two phases. In the first phase, the cloud grows exponentially, with the
mass going like $M_c\sim\exp(2\omega_I t)$, with the growth eventually saturating as
the BH is spun down and $\omega_I$ becomes small as
$m_{\sigma}\Omega_H$ decreases towards $\omega_R$. This is followed by the
gradual dissipation of the boson cloud through gravitational radiation.
Since during this time $-\dot{M}_c\approx P_{\rm GW} \propto M_c^2$,
\begin{align}
M_c(t) \approx \frac{\bar{M}_c}{1+(t-t_{\rm max})/\tau_{\rm GW}}
\end{align}
where the cloud mass reaches a maximum $\bar{M}_c$ at $t=t_{\rm max}$
and $\tau_{\rm GW}:=\bar{M}_c/P_{\rm GW}$.
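The hyperbolic decay above follows directly from $-\dot{M}_c\approx P_{\rm GW}\propto M_c^2$. A minimal numerical consistency check (forward-Euler, arbitrary units, only the GW-dominated phase rather than the full system \eqref{eqn:cloud_evo}) is:

```python
# Integrate -dMc/dt = P_GW with P_GW proportional to Mc^2 in the
# GW-dominated phase (omega_I ~ 0) and compare with the closed-form
# solution Mc(t) = Mc_bar / (1 + t / tau_GW). Arbitrary units.
Mc_bar = 1.0
tau_gw = 1.0                 # tau_GW = Mc_bar / P_GW(Mc_bar)
dt = 1e-4
Mc, t = Mc_bar, 0.0
while t < 3.0:
    P_gw = Mc**2 / (Mc_bar * tau_gw)    # P_GW scales as Mc^2
    Mc -= P_gw * dt
    t += dt
analytic = Mc_bar / (1.0 + t / tau_gw)  # should closely track Mc
```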
In \figurename{ \ref{fig:cloud_evo}}, we plot an example of the evolution of the cloud
mass for both scalar and vector bosons. In both cases, $\tau_I \ll \tau_{\rm
GW}$, so that the exponential growth phase takes place on a much shorter time
scale than GW dissipation. However, the ratio
$\tau_I/\tau_{\rm GW}$ is markedly smaller in the scalar case compared to the
vector one. In addition to the full evolution of the cloud as determined by
\eqref{eqn:cloud_evo}, in \figurename{ \ref{fig:cloud_evo}} we also plot a simple
approximation where the maximum cloud mass is determined by solving for the BH
parameters where $\omega_R=m_\sigma \Omega_H$, and the evolution of $M_c$ after the
maximum is given solely by gravitational radiation, and the evolution of $M_c$
before the maximum is given by exponential growth with a fixed value $\omega_I$
given by the initial parameters.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{./evo_cmp.pdf}
\caption{
An example evolution of the boson cloud mass as a function of time for scalars
($s=0$) and vectors ($s=1$) with $\alpha=0.15$ and $a_{*}=0.7$. The plot
compares the evolution determined by evolving the full
equations~\eqref{eqn:cloud_evo} (solid lines, labelled ``full"), to an
approximation that matches together constant exponential growth to GW-dominated
decay (dotted and dashed lines, labelled ``matched"). Time is normalized by the
gravitational dissipation timescale in either case, and the offset adjusted so
that the maximum value of $M_c$ occurs at zero for the full evolution cases,
and the matching value of $M_c$ is obtained for the corresponding matched
evolution cases. The inset shows a zoom-in of the end of the exponential growth phase
for the scalar case (in particular the full evolution).
}
\label{fig:cloud_evo}
\end{figure}
The \texttt{SuperRad} waveform model implements options for both the full cloud
evolution and the matched approximation. While the latter approximation is less
computationally expensive, as can be seen in \figurename{ \ref{fig:cloud_evo}},
it slightly overestimates the maximum cloud mass (by $\approx 0.04\%$ and
$0.8\%$, respectively, for the scalar and vector cases shown in the figure),
and underestimates the time for the cloud to reach its maximum. Thus, the more
accurate full cloud evolution is appropriate for scenarios in which the signal
emitted before the cloud reaches saturation makes a non-negligible
contribution. However, as noted above, our calculation of $\tilde{P}_{\rm GW}$
assumes $m_\sigma \Omega_H=\omega_R$, which is not strictly valid before the
saturation of the instability. Hence, there will be a discrepancy in the BH
spin used for the computation of the GW power. This discrepancy is negligible
(i.e., below the numerical error of the methods, discussed in
Appendix~\ref{app:gws}) for $m_\sigma=2$, and for $m_\sigma=1$ assuming
$a_*<0.9$. It should be noted that this affects the GW emission before
saturation only, and also only systems with initial spin $a_*\gtrsim 0.9$. In
the vector $m_V=1$ case, the largest discrepancy occurs for $\alpha\approx
0.46$ and extremal spins, where the relative error from assuming the saturation
condition in the mass-rescaled GW power $\tilde{P}_{\rm GW}$ is $\approx 55\%$
(see Fig. 7 in \cite{Siemonsen:2019ebd}). For the scalar $m_S=1$ case this
discrepancy is at most $\approx 24\%$ around $\alpha\approx 0.36$.
\section{LISA follow-up searches} \label{sec:lisafollowup}
The two main observational signatures of superradiant clouds, BH spin down and
GW emission, are sensitive to various systematic and statistical
uncertainties. Spin measurements have been used to exclude scalar and vector
mass ranges. Most of these constraints, however, rely on BH-spin estimates from
electromagnetic observations with significant systematic uncertainties. Spin
measurements of BHs in inspiraling binaries using GWs exhibit
large statistical uncertainties and make assumptions about the preceding
history of the binary. Constraints from the stochastic GW background, assuming a population
of BH-cloud systems, rely on assumptions regarding the BH mass and spin population,
in addition to position and distance uncertainties.
Lastly, searches for GWs from existing BHs observed in the electromagnetic
channel make assumptions about the past history of the observed BH, introducing
large systematic uncertainties. Clearly all of these methods rely on modeling
or assumptions with potentially substantial systematic uncertainties.
One search strategy for GWs from superradiant clouds, however, evades these
assumptions: BH merger follow-up searches. These searches target BH remnants of
previously detected compact binary coalescences. The key advantages are the
knowledge of the complete past history of the targeted BH, as well as
measurements of sky-position, spin, mass, and distance. Given these quantities,
accurate predictions of the subsequent superradiance instability and GW
emission are possible, enabling a targeted search for the latter in the
days/weeks/years following the merger. This removes the assumptions affecting
other search strategies, reduces the uncertainties to those coming from
the merger GW signal measurement of the remnant, and those of the waveform model
(discussed in the case of \texttt{SuperRad} below), and enables one to put
confident constraints on relevant parts of the ultralight boson parameter space,
or potentially to make a confident discovery.
In the context of the current generation of ground-based GW detectors,
follow-up searches for GWs from scalar superradiant clouds are likely
infeasible due to the small strain amplitudes \cite{Isi:2018pzk}.
On the other hand, because of their faster growth rates and orders of magnitude stronger signals,
vector boson clouds are ideal candidates for these types of searches
\cite{Jonesinprep}. At design sensitivity, the advanced LIGO~\cite{TheLIGOScientific:2014jea}, advanced
Virgo~\cite{TheVirgo:2014hva}, and KAGRA~\cite{kagra} observatories will in
principle be sensitive to systems out to $\sim 1$ Gpc at a typical remnant BH
spin of $a_*=0.7$ and masses of $M\sim 100M_\odot$
\cite{Chan:2022dkt,Jonesinprep}. Undertaking follow-up searches targeting BHs
falling into this parameter range could target vector boson
masses roughly in the range of $\mathcal{M}_V\in (1\times 10^{-13},1\times
10^{-11})$ eV [see eq.~\eqref{eq:alphadefinition}]. In a similar fashion, LISA
could be sensitive to GWs from vector boson clouds with boson masses in the
$\mathcal{M}_V< 10^{-15}$ eV regime, inaccessible by ground-based detectors.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{./lisa_followup_horizon.pdf}
\caption{We show the SNR \textit{(contour lines and color)} of GWs from vector
superradiant clouds around a fiducial BH of initial remnant source frame mass
of $M_i$ and spin $a_{*,i}=0.8$ as a function of luminosity distance $d_L$ and
redshift $z$, assuming a standard $\Lambda$CDM cosmology and $\alpha=0.2$. For
comparison, we also consider an initial spin of $a_{*,i}=0.7$ showing the $\rho_{\rm SNR}=10$ contour \textit{(dashed black line)}, assuming $\alpha=0.15$.}
\label{fig:lisafollowup}
\end{figure}
In the following, we analyze the prospects of follow-up searches for GWs from
vector superradiant clouds around supermassive binary BH merger remnants with
LISA. The fundamental assumption of follow-up searches is that a \textit{new}
superradiant cloud forms around the remnant after merger. If either of the
constituents already possesses a superradiant cloud, it is expected to be depleted
before or during merger for nearly equal mass-ratio ($q\sim 1$) systems
\cite{Baumann:2018vus}. Even for $q>1$, depending on $\alpha$, clouds around
the constituents of the binary are efficiently removed before merger
\cite{Baumann:2018vus,Berti:2019wnn,Takahashi:2021yhy,Takahashi:2021eso}.
LISA is expected to see at
least a handful of such mergers over the mission lifetime of four years
\cite{Berti:2006ew,Micic:2007vd}. Therefore, to estimate the detection horizon,
we assume a fiducial supermassive binary BH merger remnant detection that occurs one year
into the mission. After merger at redshift $z$, residual ultralight vector densities around the
remnant, or quantum fluctuations, trigger the superradiance
instability\footnote{Notice that an equal-mass, non-spinning binary BH merger results in
a remnant BH with $a_*\approx 0.7$.} leading to the complete cloud formation,
and hence the peak of the GW signal, on
timescales of at most $t_c\approx \tau_I(1+z)\log(M_c/\mathcal{M}_V)$ in the detector frame.
Over most of the
parameter space, these signals will last for longer than the remaining three
years of the LISA mission, leaving an observing time of $T_{\rm obs}=3-t_c$
years. We determine the maximum detection horizon of GWs from vector superradiant
clouds by considering the optimal signal-to-noise ratio (SNR) $\rho_{\rm SNR}$ with the
LISA sensitivity curve (details can be found in Appendix~\ref{app:snr}). Making
these assumptions, we illustrate the detection horizon of LISA for such events
in \figurename{ \ref{fig:lisafollowup}}.
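The timeline bookkeeping described above can be sketched as follows; the numerical values of $\tau_I$ and $M_c/\mathcal{M}_V$ below are purely illustrative placeholders, not values quoted in the text:

```python
import math

# Detector-frame cloud formation time t_c ~ tau_I (1 + z) ln(Mc / M_V),
# leaving T_obs = 3 yr - t_c of the assumed LISA window after a merger
# detected one year into the four-year mission.
def formation_time(tau_I_yr, z, Mc_over_MV):
    return tau_I_yr * (1.0 + z) * math.log(Mc_over_MV)

tau_I_yr = 1e-3        # hypothetical instability e-folding time (yr)
z = 2.0                # merger redshift
Mc_over_MV = 1e85      # cloud mass over boson mass (dimensionless ratio)
t_c = formation_time(tau_I_yr, z, Mc_over_MV)
T_obs = 3.0 - t_c      # years of observation left after cloud formation
```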
From \figurename{ \ref{fig:lisafollowup}}, we conclude that parts of the vector
boson mass parameter space can be probed with idealized follow-up GW searches from
supermassive binary BH remnants. Even for moderate initial spins of
$a_{*,i}=0.7$, GWs can be detected up to $z\lesssim 0.8$, while for slightly
more favorable initial spins of $a_{*,i}=0.8$, the GW emission is observable out to
$z\lesssim 8$. The merger rate of massive BH binaries is expected to peak around $M\sim 10^6
M_\odot$ for equal mass ratio systems, $q\lesssim 1$, and at $z\approx 2$
\cite{Mazzolari:2022cho,Henriques:2014sga,2020MNRAS4954681I}. For initial BH
masses $M_i>10^6\ M_\odot$, the cloud formation timescales are larger than the
mission duration, $t_c>3$ years, leading to a drop in SNR. At high redshifts,
the sensitivity of LISA is primarily limited by the short effective observation
times in the detector frame. Larger BH masses (lower boson masses) can be
accessed only with larger initial spins, or significantly longer mission
durations. Consulting \eqref{eq:alphadefinition}, vector boson masses roughly
around $\mathcal{M}_V\in (1\times 10^{-16},6\times 10^{-16})$ eV are within reach of
these follow-up search strategies with LISA.
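As a rough cross-check of the quoted mass ranges, one can convert between $\alpha$, the BH mass, and the boson mass. We assume here the standard definition $\alpha = G M \mathcal{M}_V/(\hbar c^3)$ with $\mathcal{M}_V$ in eV, since eq.~\eqref{eq:alphadefinition} is not reproduced in this section:

```python
# Convert between the gravitational coupling and the boson mass,
# assuming alpha = G * M * m_b / (hbar * c^3) with m_b expressed in eV.
G_MSUN_OVER_C3 = 4.92549e-6    # G * Msun / c^3 in seconds
EV_OVER_HBAR = 1.51927e15      # (1 eV) / hbar in 1/s

def alpha_of(M_msun, m_b_eV):
    return G_MSUN_OVER_C3 * EV_OVER_HBAR * M_msun * m_b_eV

# A 1e5 Msun remnant with a 1e-16 eV vector boson sits at alpha ~ 0.07,
# comfortably inside the coupling range discussed above.
```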
These prospects are subject to a few caveats. First, we determined the detection horizon and sensitivity of LISA to GWs from vector clouds around remnant supermassive BHs using the optimal matched-filter SNR. What fraction of this total available SNR could be recovered from the data by a realistic search algorithm is an open question, even for ground-based detectors \cite{Jonesinprep}. Second, the merger rate of massive BH binaries has large uncertainties. If the true merger rate were peaked at redshifts of $z>5$, a realistic follow-up search would require a very favorable initial BH spin $a_{*,i}>0.8$ to access a meaningful part of the vector boson parameter space directly, or an outlier event much closer.
\section{Discussion}
We have introduced a new BH superradiance gravitational waveform model
called \texttt{SuperRad}. This provides the superradiance instability growth
timescale $\tau_I$, the cloud oscillation frequency $\omega_R$,
the GW frequency $f_{\rm GW}(t)$ and strain $h_{\times/+}$ in
the source frame as a function of time, the GW power $P_{\rm
GW}$, and the evolution of the boson cloud. The \texttt{SuperRad} model makes
use of all available analytic and numerical estimates for these observables,
and calibrates analytic fits against the numerical data to extend the
applicability across the \textit{entire} parameter space of the $m=1$
and $2$ scalar and vector superradiant clouds. The waveform model
\texttt{SuperRad} can be used to inform and interpret the results of
GW searches for ultralight scalar and vector BH
superradiance.
This includes both blind and targeted searches for resolved continuous wave signals,
as well as searches for a stochastic GW background from BH-boson cloud systems.
It can also be used when interpreting BH spin measurements using
GW or electromagnetic observations.
Importantly, \texttt{SuperRad} is accurate in the relativistic regime where the
observable signals will be the strongest.
As the ultralight boson cloud dissipates through gravitational radiation, there
is a small increase in the frequency of the GWs due to the
changing self-gravity contribution of the cloud. As illustrated above, even
though this frequency drift is small, because of the large number of
GW cycles that make up a typical superradiance signal, not
properly accounting for it can lead to the signal model going out of phase in a
fraction of the observing time. Fully including this second-order effect
within BH perturbation theory is challenging, and the results in
\texttt{SuperRad} for the frequency evolution of the GW signal
use non-relativistic approximations. By comparing these to fully-relativistic
numerical calculations for the scalar boson case, we found that the former
underestimates the value of $\dot{f}_{\rm GW}$ by $\sim 30\%$ for the most
relativistic (i.e. $\alpha \sim O(1)$) cases, though the differences are
smaller for more typical parameters. In future work, we plan to include the
fully-relativistic results for the cloud-mass contribution to the frequency for
both scalar and vector bosons in \texttt{SuperRad}. Though it is likely that,
given the stringent accuracy requirements imposed by the typical signal
timescales (see Fig.~\ref{fig:delta_phi}), fully-coherent signal analysis
techniques (e.g., matched filtering) will still not be feasible in much of the
parameter space, better predictions for the GW frequency evolution are
nevertheless important in guiding the application of semi-coherent techniques.
Furthermore, we investigated the viability of follow-up searches for GWs from
ultralight vector superradiant clouds with LISA targeting remnants of observed
massive binary BH mergers. We found that these searches are confident probes
of the ultralight vector boson parameter space around $\mathcal{M}\in (1\times
10^{-16},6\times 10^{-16})$ eV. With current estimates of the merger rate of
massive BH binaries, LISA will be sensitive to GWs from vector boson clouds
around remnants of these mergers out to redshift $z\lesssim 8$ at mass-ratio
$q\lesssim 1$ and remnant black hole masses of roughly $M\in(6\times
10^4,2\times 10^5)M_\odot$. Our basic analysis leaves various questions
unanswered. We assumed the total available signal-to-noise ratio can be
recovered by a realistic search algorithm, which is an overestimate even in the
case of ground-based detectors \cite{Jonesinprep}. In addition, a more detailed
study folding in massive black hole binary merger rates with superradiant cloud
growth timescales and emitted GW luminosities could provide an estimate for the
expected number and mass ranges of merger events where LISA would be sensitive
to the GW signal from an ultralight vector boson.
\begin{acknowledgments}
We would like to thank Dana Jones, Andrew Miller, and Ling Sun for insightful discussions and comments on this draft. The authors acknowledge financial support by the Natural Sciences and Engineering Research Council of Canada (NSERC). Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund through the Arthur B. McDonald Canadian Astroparticle Physics Research Institute.
\end{acknowledgments}
2211.03824
\section{Introduction}
\label{sect:intro}
The origin of elements heavier than iron is still debated \citep[e.g.][]{arnould20}.
The slow (s-) and rapid (r-) neutron capture processes are thought to be responsible for the synthesis of most of these elements. It is classically accepted that the s-process ends in the Pb-Bi region with a series of fast $\alpha$ decays \citep{clayton67}, while the r-process can produce the heaviest nuclei, including the actinides. Up to now, the r-process alone has been held responsible for the nucleosynthesis of Th and U in the Universe.
Additionally, the intermediate neutron capture process, or i-process, with neutron densities between those of the s- and r-processes ($N_n \simeq 10^{13} - 10^{15}$~cm$^{-3}$), was first proposed by Cowan \& Rose (1977) but has only recently been revived.
Different possible observational signatures of i-process nucleosynthesis were reported in carbon-enhanced metal-poor (CEMP) r/s stars \citep{jonsell06, lugaro12,dardelet14,roederer16,karinkuzhi21}, in barium anomalies in open clusters \citep{mishenina15}, and in the isotopic composition of pre-solar grains \citep{jadhav13,fujiya13,liu14}.
The i-process nucleosynthesis arises when protons are mixed into a convective helium-burning zone.
The astrophysical site(s) hosting proton ingestion events (PIEs), and hence the i-process, is (are) still debated \citep[see e.g.][for a detailed list]{choplin21}.
One possibility is the early thermally pulsing (TP) phase of low-mass low-metallicity asymptotic giant branch (AGB) stars \citep[e.g.][]{iwamoto04,cristallo09b,stancliffe11,choplin21,goriely21}.
In this Letter we show that the i-process, expected to take place in low-metallicity AGB stars, can pass the Pb-Bi region and also synthesize actinides, including Th and U.
Section~\ref{sect:ing} presents the physical ingredients and models.
Section~\ref{sect:nuc} focuses on the nucleosynthesis of actinides.
In Sect.~\ref{sect:comp} we compare our results with the abundances of \object{RAVE J094921.8-161722}, a CEMP r/s star that shows Th lines. Conclusions are given in Sect.~\ref{sect:concl}.
\begin{table*}[h!]
\scriptsize{
\caption{Main characteristics of the five models considered in this work. Given are the model label, the nuclear dataset used, the spatial resolution parameter $\epsilon_{\rm max}$, the temporal resolution parameter $\alpha$ (see text for details of each), the maximum neutron density, and the surface $\log\epsilon$(Pb), $\log\epsilon$(Th), and $\log\epsilon$(U)
after the PIE (all unstable actinides were decayed except $^{232}$Th, $^{235}$U, and $^{238}$U).
\label{table:1}
}
\begin{center}
\resizebox{17cm}{!} {
\begin{tabular}{lccccccc}
\hline
Model & Nuclear & $\epsilon_{\rm max}$ & $\alpha$ & $\log(N_{\rm n,max}$) & $\log\epsilon$(Pb) & $\log\epsilon$(Th) & $\log\epsilon$(U) \\
label & dataset & & & & \\
\hline
MA$\epsilon 8 \alpha 08$ & A & 0.08 & 0.008 & 15.33 & 3.78 & $0.74$ & $0.50$ \\
MA$\epsilon 4 \alpha 08$ & A & 0.04 & 0.008 & 15.31 & 3.88 & $0.90$ & $0.76$ \\
MA$\epsilon 4 \alpha 04$ & A & 0.04 & 0.004 & 15.34 & 3.83 & $0.78$ & $0.46$ \\
MB$\epsilon 4 \alpha 04$ & B & 0.04 & 0.004 & 15.34 & 3.84 & $0.16$ & $-0.26$ \\
MC$\epsilon 4 \alpha 04$ & C & 0.04 & 0.004 & 15.17 & 3.83 & $0.89$ & $0.82$ \\
\hline
\end{tabular}
}
\end{center}
}
\end{table*}
\section{The i-process model: Physical ingredients}
\label{sect:ing}
We considered the i-process taking place in a low-metallicity AGB star at the time of the proton ingestion, as described in detail in \cite{choplin21} and \cite{goriely21} and using the same physical ingredients. We briefly recall some important aspects.
The models are computed with the stellar evolution code {\sf STAREVOL} \citep[][and references therein]{siess00, siess06, goriely18c}.
We used the mass-loss rate from \cite{reimers75} from the main sequence up to the beginning of the AGB phase and then switched to the \cite{vassiliadis93} rate.
When the star becomes carbon rich, the opacity change due to the formation of molecules is included \citep{marigo02}.
We used a mixing length parameter $\alpha = 1.75$, and no extra mixing (e.g. overshoot or thermohaline) was included.
In our models, during a PIE, the transport and burning of chemicals are coupled. This means that the nucleosynthesis and transport equations are solved simultaneously once the structure has converged.
For the relatively low neutron densities that characterize s-process nucleosynthesis (i.e. $N_n \,\,\raise0.14em\hbox{$<$}\kern-0.76em\lower0.28em\hbox{$\sim$}\,\, 10^{13}$~cm$^{-3}$), a network of 411 isotopes is traditionally used in {\sf STAREVOL}.
However, to reliably follow the i-process flow resulting from a PIE with $N_n > 10^{13}$~cm$^{-3}$, a larger network needs to be considered. We used a reaction network composed of 1160 nuclei from neutrons up to $^{253}$Cf and linked through 2123 nuclear reactions and decays.
Nuclear reaction rates were taken from the Nuclear Astrophysics Library of the Universit\'e Libre de Bruxelles\footnote{Available at http://www.astro.ulb.ac.be/bruslib} \citep{arnould06} and the updated experimental and theoretical rates from the NETGEN interface \citep{Xu13}.
The latest decay rates were taken from the recent NUBASE2020 release\footnote{https://www-nds.iaea.org/amdc/} \citep{kondev21}.
It should be noted that fission processes are not included in the reaction network.
Although the i-process takes place on timescales of the order of days, the neutron-rich nuclei produced with $Z < 96$ have significantly longer half-lives against spontaneous fission, so spontaneous fission is not expected to affect the present results.
For some isotopes with $Z \geq 94$, the timescale for neutron-induced fission can become comparable to that of the radiative neutron capture $(n,\gamma)$, hindering the potential production of heavier actinides. However, on the basis of experimentally known cross-sections, along the i-process path, only $^{241}$Pu appears to have an ($n,f$) cross-section larger than its $(n,\gamma)$ cross-section (by a factor of about 4 at the thermal neutron energy of about 20~keV). Taking into account neutron-induced fission may affect the quantitative abundance estimates, but it will not change the fact that actinides are produced. For this reason, future simulations may need to include neutron-induced fission processes in the network.
\begin{figure*}[t]
\includegraphics[scale=0.7, trim = 0cm 0cm 0cm 0.cm]{path_m1z25std6.pdf}
\caption{
Main (blue) and secondary (black) i-process paths (starting from $^{56}$Fe) in the MA$\epsilon 4 \alpha 04$ model, at the bottom of the convective thermal pulse, at the time of maximum neutron density ($N_n = 2.17 \times 10^{15}$~cm$^{-3}$).
A secondary path is considered as such if at least 30~\% of the total flux goes through it.
The size of the arrows scales with the flux.
The black squares highlight the stable and long-lived isotopes \iso{232}Th, \iso{234}U, \iso{235}U, \iso{236}U, \iso{238}U, \iso{237}Np, \iso{244}Pu, and \iso{247}Cm.
The colour of the different nuclei corresponds to their mass fraction at that time.
}
\label{fig:path}
\end{figure*}
In this Letter we focus on an AGB model of 1~$M_{\odot}$\ with [Fe/H]~$=-2.5$. As extensively discussed in \cite{choplin21}, this model experiences a PIE during the early TP-AGB phase.
During the PIE, five different models were computed with different spatial discretization, temporal resolution, and nuclear datasets. The spatial and temporal resolutions are controlled by the $\epsilon_{\rm max}$ and $\alpha$ parameters, respectively.
More specifically, a new mesh point was added if the relative variation in a structural variable between two adjacent shells exceeded the threshold value, $\epsilon_\mathrm{max}$.
The parameter $\alpha$ controls by how much the timestep is allowed to change based on the relative variations in the structure variables. More details are given in Sects.~4.1 and 4.2 of \cite{choplin21}.
The lower these parameters, the higher the resolution. In addition to such astrophysical uncertainties, the abundance calculation is also affected by nuclear uncertainties, in particular due to the large fraction (about 70\%) of unmeasured neutron capture rates involved during i-process nucleosynthesis. To study nuclear uncertainties, as in \citet{goriely21}, different sets of theoretical $(n,\gamma)$ rates were considered, and their impact on the abundances was obtained by consistently recalculating the full PIE event with {\sf STAREVOL} and the updated nuclear network. More specifically, we considered three sets of neutron capture rates. In addition to the fiducial rates (model A), we also performed the calculations with model B -- which uses the photon strength functions from \citet{Goriely04} instead of \citet{Goriely19}, as included in model A -- and model C, which adds to model A the contribution from the direct capture component \citep{Sieja21}. Models B and C were chosen from among the various sets included in \citet{goriely21} because they give rise to lower and upper limits on actinide production, as discussed below.
Table~\ref{table:1} reports the characteristics of these five models.
\begin{figure*}[t]
\includegraphics[scale=0.55, trim = 2cm 0cm 3cm 0cm]{flux_m1z25std7.pdf}
\caption{Flow chart in the Pb-Rn region in the MA$\epsilon 4 \alpha 04$ model, at maximum neutron density ($N_n = 2.17 \times 10^{15}$~cm$^{-3}$).
The coloured arrows from a given isotope, $i$, represent the flux ratio $F/F_{\rm tot}$, where $F$ corresponds to the flux of the $(n,\gamma)$ reaction $F_{\rm (n,\gamma)} = N_{\rm av} \rho Y_n Y_i \langle \sigma v \rangle$, $\beta$ decay $F_{\rm \beta} = \lambda_\beta Y_i$, or $\alpha$ decay $F_{\rm \alpha} = \lambda_\alpha Y_i$, and $F_{\rm tot} = F_{\rm (n,\gamma)} + F_{\rm \beta} + F_{\rm \alpha}$ ($Y_n$ and $Y_i$ being the molar mass fraction of the neutrons and target, respectively, $\langle \sigma v \rangle$ the nuclear reaction rate, and $\lambda_\beta$ and $\lambda_\alpha$ the decay rates).
The yellow area shows the nuclei experiencing $\alpha$ decays with $t_{1/2} < 1$~s.
}
\label{fig:flux}
\end{figure*}
\section{Synthesis of actinides}
\label{sect:nuc}
Proton ingestion events that lead to i-process nucleosynthesis can have a profound impact on AGB structure and evolution.
This was extensively discussed in \cite{choplin21,choplin22}.
In this Letter we focus on the possible synthesis of the heaviest elements beyond Pb through i-process nucleosynthesis.
In the fiducial MA$\epsilon 4 \alpha 04$ model, at the neutron density peak, the i-process mainly follows the blue path shown in Fig.~\ref{fig:path}. From $^{209}$Pb to $^{215}$Pb, the timescale against neutron capture, $\tau_n$, is significantly smaller than the $\beta$-decay timescale, $\tau_{\beta}$, such that the ($n,\gamma$) channel dominates.
At $^{216}$Pb, $\beta^-$ decay slightly dominates and the main path follows the chain $^{216}$Pb($\beta^-$)$^{216}$Bi($n,\gamma$)$^{217}$Bi($\beta^-$)$^{217}$Po($\alpha$)$^{213}$Pb, which eventually forms a loop.
However, at $^{216}$Pb, $^{217}$Bi, and $^{217}$Po, the ($n,\gamma$) reactions compete with the $\beta$ decays, and a significant fraction of the flux (at least 30~\%) goes into a secondary path (black paths in Fig.~\ref{fig:path}).
These branching points are important since they determine whether the flux cycles in the Pb-Bi-Po region or continues up to heavier elements.
At $^{216}$Pb, the timescales against $\beta^-$ decay and neutron capture (at $T = 250$~MK and maximum $N_n$) are $\tau_{\beta} = 2.5$~min and $\tau_n = 2.7$~min, respectively. Although the $\beta^{-}$ channel dominates, more than 30~\% of the flux continues to $^{217}$Pb.
This is similar for $^{217}$Bi ($\tau_{\beta} = 2.3$~min and $\tau_n = 3.4$~min) and $^{217}$Po ($\tau_{\beta} = 2.2$~s and $\tau_n = 3.4$~s).
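These branching fractions follow directly from the quoted timescales: when neutron capture (timescale $\tau_n$) competes with $\beta$ decay (timescale $\tau_\beta$), the fraction of the flux continuing through the $(n,\gamma)$ channel is $(1/\tau_n)/(1/\tau_n + 1/\tau_\beta)$. A minimal check with the values above:

```python
# Fraction of the flux entering the (n,gamma) channel when it competes with
# beta decay: f = (1/tau_n) / (1/tau_n + 1/tau_beta).
def ngamma_fraction(tau_n, tau_beta):
    return (1.0 / tau_n) / (1.0 / tau_n + 1.0 / tau_beta)

# (tau_n, tau_beta) from the text; the units cancel in the ratio
branchings = {
    "216Pb": ngamma_fraction(2.7, 2.5),  # minutes
    "217Bi": ngamma_fraction(3.4, 2.3),  # minutes
    "217Po": ngamma_fraction(3.4, 2.2),  # seconds
}
for iso, f in branchings.items():
    print(f"{iso}: {100 * f:.0f}% of the flux continues via (n,gamma)")
# All three fractions exceed 30%, consistent with the secondary paths of Fig. 1.
```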
A crucial condition for synthesizing actinides is a neutron density high enough to cross the region of extremely fast $\alpha$ decays at $Z\ge 84$ and $126 \le N \le 132$ (yellow area in Fig.~\ref{fig:flux}), which otherwise brings the nuclear flow back to the Pb region.
If the neutron flux is too low, isotopes with $N > 134$ will not form and the nucleosynthesis flow will cycle in the Pb-Bi-Po region with $N<134$.
The minimum neutron density needed to overcome this fast $\alpha$-decay zone and build up actinides is about $10^{15}$~cm$^{-3}$, though the determination of its exact value is still hindered by the unknown reaction rates of the neutron-rich Pb-Po isotopes involved. As reported in Table~\ref{table:1}, all our i-process simulations in the 1~$M_{\odot}$\ [Fe/H]~$=-2.5$ model star lead to a significant enrichment of the stellar surface in Th and U. Our calculations also indicate a strong correlation between Pb and actinide abundances. We thus expect a CEMP r/s star enriched in Th and U to also have a high Pb abundance.
When inspecting the results for the different nuclear datasets, a few key reactions show important differences between the nuclear models.
At $^{217}$Po, for instance (which is on the main path in Fig.~\ref{fig:path}), 29, 3, and 26~\% of the flux follows the $(n,\gamma)$ channel for models A, B, and C, respectively.
Almost all the remaining flux follows the $\alpha$-decay channel back towards $^{213}$Pb.
This contributes to lowering the production of actinides in model B and reduces the Th and U surface abundances after the PIE (Table~\ref{table:1}).
We report in Table~\ref{table:2} the most uncertain, and hence critical, $(n,\gamma)$ reactions of relevance according to nuclear models A, B, and C.
Only the relevant reactions for i-process nucleosynthesis starting from Pb are mentioned.
An accurate determination of these reaction rates is needed to reduce the uncertainties regarding the production of actinides in low-metallicity AGB stars.
The production of actinides is mostly unchanged under different spatial and temporal discretizations during the PIE. After the PIE, the surface\footnote{$\log\epsilon(X) = \log_{10} (N_{\rm X} / N_{\rm H}) + 12$, where $N_{\rm X}$ and $N_{\rm H}$ refer
to the numbers of atoms of elements X and hydrogen, respectively.} $\log\epsilon$(Th) shows a scatter of 0.16 dex and the surface $\log\epsilon$(U) a scatter of 0.3 dex (cf. the first three models in Table~\ref{table:1}).
We note that a higher spatial resolution slightly increases the production of Th and U, while a higher temporal resolution slightly reduces it.
\section{Comparison to the CEMP r/s star RAVE J094921.8-161722}
\label{sect:comp}
\cite{gull18} reported the discovery of \object{RAVE J094921.8-161722} (hereafter J0949-1617), a red giant CEMP star with ${\rm [Fe/H]}=-2.2$.
This star has a chemical composition intermediate between those characteristic of the s- and r-processes and was interpreted as having been polluted by both.
The only argument for justifying pollution by the r-process is based on the presence of Th in the spectrum, with an abundance estimated at $\log \epsilon (\mathrm{Th}) = -1.70 \pm 0.20$.
The scenario proposed by \cite{gull18} corresponds to a star formed in a zone enriched by a prior r-process event that later accreted some s-rich material from an AGB companion. Its chemical abundances would then reflect a combination of s- and r-processes.
In this section we discuss the alternative possibility of pollution by a low-metallicity AGB companion that experienced i-process nucleosynthesis.
To explain the present composition of J0949-1617, we freely varied the dilution factor, $f$, for each of the five models so as to minimize the $\chi_{\nu}^2$ between theory and observation
\citep[cf.][for details]{choplin21,choplin21cor}.
The $\chi_{\nu}^2$ was estimated based on the $\log \epsilon$ abundance of all available elements with $Z>30,$ except Th, because its abundance decreased between the time of pollution and the observation.
The five best fits and residuals are shown in Fig.~\ref{fig:res}.
The dilution factors are $0.98<f<0.99$ for all models, meaning that $1-2$~\% of AGB material is mixed with $98-99$~\% of interstellar medium material.
The agreement between J0949-1617 and the models is reasonably good for elements with $55<Z<80$ (residuals are less than $0.5$~dex), as well as for Sr, Y, Zr, and Pd (although these elements are underproduced by 0.5 dex).
The elements Ru and Rh are systematically underproduced by 1~dex, and Pb is overproduced by $0.5-1$~dex.
More importantly, the predictions for the s elements (e.g. La and Ba) and r elements (e.g. Eu) in the $N\simeq 82$ region are in excellent agreement with observations.
In principle, the time since the i-process event can be estimated from the Th/Eu cosmo\-chronometry through the expression
$\Delta t = 46.67 \, [ \, \log (\mathrm{Th/Eu})_{\rm AGB} - \log (\mathrm{Th/Eu})_{\rm now} \, ],$ where\footnote{We note that $\log (\mathrm{X/Y}) = \log \epsilon (\mathrm{X}) - \log \epsilon (\mathrm{Y})$.} $\log (\mathrm{Th/Eu})_{\rm now} = -0.61$ \citep{Fowler60,cayrel01}.
This gives a lower limit on the age of J0949-1617.
The age estimate for each of our five models is reported in Table~\ref{table:3}.
The scatter of 0.92~dex in $\log (\mathrm{Th/Eu})_{\rm AGB}$ leads to an uncertainty of 43~Gyr on the `age' of J0949-1617. This confirms that an age estimate based on the Th/Eu ratio is not realistic. Even with an extreme precision of 0.1~dex on $\log (\mathrm{Th/Eu})_{\rm AGB}$, the age uncertainty would be about 5~Gyr.
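The numbers in this paragraph follow from the chronometric relation above combined with the $\log (\mathrm{Th/Eu})_{\rm AGB}$ values of Table~\ref{table:3} (rounded there to two decimals, so the reconstructed ages agree with the tabulated ones only to $\sim$0.2~Gyr); a minimal check:

```python
# Th/Eu chronometry: Delta_t = 46.67 * [log(Th/Eu)_AGB - log(Th/Eu)_now], in Gyr.
LOG_NOW = -0.61
log_agb = {"MAe8a08": -0.29, "MAe4a08": 0.07, "MAe4a04": -0.05,
           "MBe4a04": -0.69, "MCe4a04": 0.23}   # from Table 3 (rounded)
ages = {m: 46.67 * (x - LOG_NOW) for m, x in log_agb.items()}
for m, dt in ages.items():
    print(f"{m}: Delta_t = {dt:5.1f} Gyr")
# The 0.92 dex scatter in log(Th/Eu)_AGB maps directly onto the age uncertainty:
scatter = max(log_agb.values()) - min(log_agb.values())
print(f"scatter {scatter:.2f} dex -> {46.67 * scatter:.0f} Gyr")   # ~43 Gyr
print(f"0.10 dex -> {46.67 * 0.1:.1f} Gyr")                        # ~4.7 Gyr
```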
A better age estimate can be obtained using the Th/U ratio instead.
In this case, the age is derived through the expression $\Delta t' = 21.76 \, [ \, \log (\mathrm{Th/U})_{\rm AGB} - \log (\mathrm{Th/U})_{\rm now} \, ]$.
Since U has not been determined at the surface of J0949-1617, $\log (\mathrm{Th/U})_{\rm now}$ is not available. However, the age uncertainty based on the Th/U ratio can be estimated by comparing the values of $\Delta t'$ between two models.
The age difference, $\delta(\Delta t')$, between the considered model and the reference model, MA$\epsilon 4 \alpha 04$, is reported in the last column of Table~\ref{table:3}. It is given by
$\delta(\Delta t') = 21.76 \, [ \, \log (\mathrm{Th/U})_{\rm AGB} - \log (\mathrm{Th/U})_{\rm AGB\_REF} \, ]$.
Different spatial and temporal discretizations lead to an age uncertainty, $\delta(\Delta t')$, of $\sim 6$~Gyr and different nuclear physics to an uncertainty of $\sim 9$~Gyr.
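These two uncertainty estimates can be reproduced from the $\log (\mathrm{Th/U})_{\rm AGB}$ column of Table~\ref{table:3} (the values there are rounded to two decimals, so the reconstruction matches the tabulated $\delta(\Delta t')$ only to $\sim$0.2~Gyr); a minimal check:

```python
# delta(Delta_t') = 21.76 * [log(Th/U)_AGB - log(Th/U)_AGB_REF], in Gyr,
# with MAe4a04 as the reference model.
log_thu = {"MAe8a08": 0.40, "MAe4a08": 0.24, "MAe4a04": 0.51,
           "MBe4a04": 0.59, "MCe4a04": 0.18}   # from Table 3 (rounded)
ref = log_thu["MAe4a04"]
d_age = {m: 21.76 * (x - ref) for m, x in log_thu.items()}
# Discretization uncertainty: largest offset among the model-A variants
disc = max(abs(d_age["MAe8a08"]), abs(d_age["MAe4a08"]))
# Nuclear uncertainty: spread between the model-B and model-C offsets
nuc = abs(d_age["MBe4a04"] - d_age["MCe4a04"])
print(f"discretization: ~{disc:.1f} Gyr, nuclear: ~{nuc:.1f} Gyr")  # ~5.9 and ~8.9
```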
Although such an uncertainty remains large, an improved determination of the theoretical reaction rates above Pb along the i-process path (especially for the reactions reported in Table~\ref{table:2}) would lower the impact of the nuclear uncertainty on the predicted Th and U abundances.
\begin{figure*}[t]
\includegraphics[scale=0.7, trim = 2cm 0cm 0cm 0.cm]{fit_eps_new_discret_nuc.pdf}
\caption{ Best fits to the abundances of the CEMP r/s star \object{RAVE J094921.8-161722} \citep{gull18} using the five AGB models listed in Table~\ref{table:1}.
The abundance of uranium is based on the \iso{238}U isotope only; the \iso{235}U isotope, with a half-life of $t_{1/2}=0.7$~Gyr, was assumed to have fully decayed. }
\label{fig:res}
\end{figure*}
\begin{table}[t]
\scriptsize{
\caption{
Most uncertain $(n,\gamma)$ reactions (above Pb) relevant to i-process nucleosynthesis, according to the three different nuclear models, A, B, and C.
The numbers give the ratios of the Maxwellian-averaged cross-section at $T=250$~MK, which corresponds to the temperature at the bottom of the thermal pulse during the PIE.
This table includes all $(n,\gamma)$ reactions that differ by at least a factor of 10 between two different nuclear models.
\label{table:2}
}
\begin{center}
\resizebox{8cm}{!} {
\begin{tabular}{lcc}
\hline
& $\langle \sigma \rangle_{\rm B} / \langle \sigma \rangle_{\rm A}$ & $\langle \sigma \rangle_{\rm C} / \langle \sigma \rangle_{\rm A}$ \\
\hline
$^{216}$Pb$(n,\gamma)$ & 1.57e-2 & 9.71e-1 \\
$^{217}$Po$(n,\gamma)$ & 8.72e-2 & 8.45e-2 \\
$^{229}$Rn$(n,\gamma)$ & 9.85e+0 & 1.21e+1 \\
$^{224}$Fr$(n,\gamma)$ & 9.63e+0 & 1.23e+1 \\
$^{230}$Fr$(n,\gamma)$ & 5.68e+0 & 3.79e+1 \\
$^{228}$Ra$(n,\gamma)$ & 8.93e-1 & 8.37e-2 \\
$^{232}$Ra$(n,\gamma)$ & 1.52e+1 & 7.64e+0 \\
$^{233}$Ra$(n,\gamma)$ & 1.29e+1 & 6.80e+0 \\
$^{228}$Ac$(n,\gamma)$ & 5.69e-1 & 8.96e+1 \\
$^{236}$Ac$(n,\gamma)$ & 3.39e+0 & 1.38e+1 \\
$^{232}$Pa$(n,\gamma)$ & 3.61e+0 & 1.13e+1 \\
$^{234}$Pa$(n,\gamma)$ & 3.62e+1 & 2.56e+1 \\
$^{238}$Pa$(n,\gamma)$ & 1.33e+1 & 3.08e+1 \\
\hline
\end{tabular}
}
\end{center}
}
\end{table}
\begin{table*}[t]
\caption{Abundances of the best-fit models to J0949-1617. We also give the `age' estimate ($\Delta t$, i.e. the time since the i-process took place, a lower limit to the stellar age; see text for details) of J0949-1617 based on the Th/Eu abundance ratio and the difference for the age estimate, $\delta (\Delta t')$, with respect to the MA$\epsilon 4 \alpha 04$ model, if relying on the Th/U abundance ratio (see text for details). Note that the age estimate for the MB$\epsilon 4 \alpha 04$ model is negative because the Th/Eu ratio in this model is lower than the presently observed ratio. The abundance of U is based solely on the \iso{238}U isotope since \iso{235}U, with a half-life of $t_{1/2}=0.7$~Gyr, was assumed to have fully decayed.
}
\label{table:3}
\begin{center}
\resizebox{17.5cm}{!} {
\begin{tabular}{lcccccccccccc}
\hline
Model & $\log\epsilon$(Eu) & $\log\epsilon$(Th) & $\log\epsilon$(U) & $\log(\mathrm{Th/Eu})_{\rm AGB}$ & $\log(\mathrm{Th/U})_{\rm AGB}$ & $\Delta t$ & $\delta (\Delta t')$\\
label & & & & & & [Gyr] & [Gyr] \\
\hline
MA$\epsilon 8 \alpha 08$ & $-0.98$ & $-1.27$ & $-1.68$ & $-0.29$ & $0.40$ & $14.95$ & $-2.25$ \\
MA$\epsilon 4 \alpha 08$ & $-0.99$ & $-0.92$ & $-1.16$ & $0.07$ & $0.24$ & $31.80$ & $-5.89$ \\
MA$\epsilon 4 \alpha 04$ & $-1.08$ & $-1.12$ & $-1.63$ & $-0.05$ & $0.51$ & $26.16$ & $0$ \\
MB$\epsilon 4 \alpha 04$ & $-1.00$ & $-1.69$ & $-2.28$ & $-0.69$ & $0.59$ & $-3.89$ & $+1.81$ \\
MC$\epsilon 4 \alpha 04$ & $-1.09$ & $-0.86$ & $-1.04$ & $0.23$ & $0.18$ & $39.01$ & $-7.21$ \\
\hline
\end{tabular}
}
\end{center}
\end{table*}
\section{Conclusions}
\label{sect:concl}
In this Letter we have shown that a low-metallicity ([Fe/H]~$=-2.5$) low-mass (1~$M_{\odot}$) AGB star experiencing a successful i-process (through PIE) can synthesize actinides, including a significant amount of Th and U. This production of actinides is also expected to lead to a significant overabundance of Pb.
While the main i-process path cycles in the neutron-rich Pb-Bi-Po region, a potentially non-negligible fraction of the flux can leak towards actinides.
A neutron density of about $10^{15}$~cm$^{-3}$ is necessary to bypass the fast $\alpha$-decay region and build up actinides.
The isotopes $^{216}$Pb, $^{217}$Bi, and $^{217}$Po are shown to be important branching points where neutron capture competes with beta decay.
Varying the spatial and temporal resolution during the PIE leads to an uncertainty of 0.16~dex on Th and 0.30~dex on U.
The nuclear uncertainties have a larger impact, with about 0.7~dex for Th and 1.1~dex for U.
The different nuclear models allow us to highlight the most uncertain rates above Pb that affect the actinide nucleosynthesis (Table~\ref{table:2}).
We compared our models to the abundances of the CEMP r/s star \object{RAVE J094921.8-161722}, in which Th was detected. We find a rather good agreement for $55<Z<80$ elements, showing that the surface composition is compatible with an i-process origin. Most importantly, the Th enrichment can now be explained by the i-process and does not require additional pollution by the r-process.
Such a finding also opens the way to a possible estimate of the time since the i-process event, through actinide-based cosmochronometry, which would provide a lower limit on the age of the CEMP r/s star.
An estimate based on the Th/Eu ratio suffers from very large uncertainties, and a more accurate (although still uncertain) age indicator could be obtained if the U abundance can be estimated in CEMP r/s stars.
An accurate spectroscopic determination of Th (and hopefully U) is needed, especially in view of the difficulty of estimating the surface Th (and U) abundances in carbon-rich stars, whose spectra are blended by CH molecular lines.
However, for a reliable age estimate, astrophysical and nuclear uncertainties need to be reduced first.
Despite the remaining uncertainties affecting the models, it seems clear that the high neutron densities encountered in low-metallicity low-mass AGB stars through the i-process can be a source of actinides, including Th and U. We consequently now know that Th and U are not exclusively produced by r-process nucleosynthesis.
The AGB mass and metallicity ranges within which actinides can be formed remain to be explored.
Another open question is whether or not other astrophysical sites that host the i-process can synthesize actinides.
\section*{Acknowledgments}
This work was supported by the Fonds de la Recherche Scientifique-FNRS under Grant No IISN 4.4502.19.
L.S. and S.G. are senior F.R.S.-FNRS research associates.
\bibliographystyle{aa}
2211.03834
\section{Introduction}
\label{sec:intro}
When a molecule is formed in a chemical reaction there are often thousands of quantum states it
can end up in, due to the various electronic, vibrational, rotational, and spin degrees of freedom.
Generally, the product population is not uniformly distributed over these possible product states, but
rather follows characteristic propensity rules. Finding and identifying propensity rules can provide deep insights into the basic principles that drive and govern specific reactions.
Furthermore, the propensity rules can be used to develop predictions and approximations, especially when
full detailed calculations are highly complex, as, e.g., for reactions involving more than two atoms.
Propensity rules can be extracted experimentally from state-to-state measurements where the reactants are prepared in well defined quantum states and product states are detected in a quantum state resolved way.
In recent years, there has been rapid progress in the methodology of state-to-state chemistry using atomic and molecular beams \cite{Yang2007, Pan2017, Jankunas2015, Meerakker2012} or ultracold samples \cite{Liu2020, Haerter2013b}. Individual partial waves of product states have been resolved (see, e.g., \cite{Paliwal2021, deJongh2020, Beyer2018}) and spin conservation propensity rules have been observed with hyperfine and rotational states \cite{Haze2022, Wolf2017, Wolf2019, Hu2021, Liu2021}.
Three-body recombination is one of the
most fundamental and ubiquitous chemical reactions. In a collision of three atoms, two combine to form a molecule and the third atom enables the dissipation of the released energy. The released
energy consists of the initial collision energy plus the molecular binding energy $E_b$ and is converted into relative motion between the molecule and the third atom.
Experiments have shown that three-body recombination at ultracold temperatures generally produces the
most weakly-bound molecular state (see, e.g. \cite{Weber2003, Jochim2003, Wolf2017, WangJChemPhys}).
Semi-classical and fully quantum mechanical treatments have been carried out. They generally indicate that there is a propensity towards weakly-bound
molecular product states \cite{wang2011PRA, Perez2014, Wolf2017, Yuen2020}. However, precisely how the molecular production rate decreases with binding energy has not yet been clarified.
For example, Ref.~\cite{Nesbitt2012} suggested that the suppression is exponential in triatomic reactions.
A recent calculation of three-body recombination of
hydrogen atoms at room temperature predicted a molecular production rate $\propto E_b^{-1.5} $
for recombination towards deeply-bound molecules, which was enhanced by the Jahn-Teller effect \cite{Yuen2020}.
Here, we investigate the rate decrease both experimentally and theoretically
by studying three-body recombination of $^{87}$Rb atoms at ultralow collision energies.
We find an $E_b^{-\alpha}$ power law for the molecular production rate, where the exponent $\alpha$ is close to 1.
This result differs from a previous scaling estimate of $\alpha = 1/2$, which was based on studying bound states in a limited range of binding energies \cite{Wolf2017}. We have now extended this range by roughly a factor of ten, on both the experimental and theoretical sides.
Our experimental data comprise thirty final quantum channels of detected molecules with binding energies of up to $E_b = 77\:\textrm{GHz} \times h$.
Our numerical calculations for the three-body recombination rates $L_3 (E_b)$ are in remarkable agreement with our measurements, especially for those product channels where molecules with low rotational angular momentum $L_R$ are formed.
Besides the general $E_b^{-\alpha}$ trend of $L_3 (E_b)$ the calculations also reproduce prominent deviations from this trend at particular binding energies $E_b$. These deviations might be interpreted as interference effects of various kinds.
\begin{figure*}[t]
\includegraphics[width=2\columnwidth]{fig1.pdf}
\caption{
(a) Two-color REMPI scheme for state-selective detection of molecules. The probe and ionization lasers have wavelengths of $1065\:\textrm{nm}$ and $544\:\textrm{nm}$, respectively. (b) Rotational level structure of the relevant molecular states with negative parity. (c) Various segments of REMPI detection signals of ($\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}, L_R=0,2$) product molecules as a function of the probe laser frequency $\nu$. Here, $\nu_0=281445.045\:\textrm{GHz}$, corresponding to the transition from the $5s+5s$ asymptote to $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}' = 66, J' = 1$. The vibrational quantum numbers $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}$ are given at the top of the figure and the rotational quantum numbers $L_R$ are indicated by the color coding of the dashed vertical lines ($L_R=0$ and $2$ in blue and red, respectively). These dashed lines mark the expected frequency positions of the given states, obtained from coupled-channel calculations. For smaller signals ($\text{\usefont{OML}{lmr}{m}{it}\symbol{118}} = -6,-7, -8$) magnifications by a factor of 10 are also shown. The data for the most weakly bound state $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}=-1,L_R=0$ are not presented because the corresponding line is largely obscured by a neighboring photoassociation line \cite{Wolf2017}. }
\label{fig2}
\end{figure*}
Our perturbative model indicates that for each angular momentum $L_R > 0$ there is a critical binding energy $E_c(L_R)$ such that for $E_b > E_c(L_R)$ the partial recombination rate follows the trend $L_3(E_b,L_R) = c\, E_b^{-\alpha}$, where $c$ is a constant. We find that the factor $c$ is roughly independent of $L_R$.
For
$ E_b < E_c (L_R)$ there is a suppression of $L_3(E_b,L_R)$ which can be explained
as the effect of an angular momentum barrier in the exit channel.
This suggests that only molecular states with small $L_R$ contribute significantly to molecular production at low binding energies $E_b$.
Finally, we show that the $E_b^{-\alpha}$ scaling law can also be derived theoretically in an analytic, perturbative approach. We find that within this approach the $E_b^{-\alpha}$ scaling is quite insensitive to the long-range behavior of the interaction potential between two atoms. Specifically, potentials with a power-law tail $-C_n/r^n$ for $n=3$, 4, or 6, the Morse potential, and the contact potential all yield $\alpha$-values in the range $[0.91, 1]$.
Within the framework of the perturbative calculations
the scaling of the rate constant $L_3(E_b)$ is largely determined by $|\phi_d(\sqrt{E_b\,m/3})|^2$,
where $\phi_d$ is the diatomic molecular wave function in momentum representation and $m$ is the atomic mass. It turns out that this part of the momentum wave function is linked to the molecular wave function in real space in the vicinity of the classical outer turning point of the molecular potential (at energy $-E_b$).
This indicates that the outer classical turning point marks
a typical distance for the recombination to occur.
\section{Experiment}
\label{sec:experiment}
\begin{figure}[t]
\includegraphics[width=0.8\columnwidth]{fig2.pdf}
\caption{Measured (scaled) and calculated rate constants $L_3(\text{\usefont{OML}{lmr}{m}{it}\symbol{118}},L_R)$ for product molecules $(\text{\usefont{OML}{lmr}{m}{it}\symbol{118}},L_R)$ as a function of their binding energy $E_b$. The rotational quantum numbers $L_R$ are indicated by different plot symbols (black: calculations; colors: experiment). For each sequence of states with the same $L_R$ the vibrational quantum number $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}$ is given below the data points.
For better visibility the data for each $L_R$ are shifted vertically by multiplying them by $10^{-L_R}$. The gray solid lines show the energy scaling $[E_b/(\textrm{GHz}\times h)]^{-0.8} \times 0.2\times 10^{-29}\:\textrm{cm}^6/\textrm{s}$.
}
\label{fig3}
\end{figure}
In our experiments we prepare an ultracold cloud of $5\times10^6$ $^{87}$Rb atoms in a far-detuned 1D optical lattice trap ($\lambda = 1064\,$nm, trap depth $ \approx 10\,\mu$K$\times k_B$) combined with an optical dipole trap so that we obtain a trap frequency of $2\pi \times 23 $Hz in the transverse direction. The atoms are spin-polarized in the hyperfine state $f=1,m_f=-1$ of the electronic ground state and have a temperature of about $750\:\text{nK}$. Our measurements are carried out at a low external magnetic field of about $4\:\textrm{G}$. We hold the atom cloud in the trap for a duration of 500$\:$ms during which Rb$_2$ molecules are spontaneously produced via three-body recombination in the coupled $X^1\Sigma_g^+-a^3\Sigma_u^+$ molecular complex, below the $5S_{1/2}+5S_{1/2}$ atomic asymptote. The molecules are state-selectively ionized via resonance-enhanced multiphoton ionization [REMPI] (see Fig.$\:$\ref{fig2}(a) and Appendix \ref{appendix:REMPI} for details), and then trapped and detected as ions in a Paul trap at a distance of 50 $\mu$m (see Appendix \ref{appendix:counting} for details). In brief, a first REMPI laser (the probe laser) resonantly excites such a molecule to the intermediate level $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}} ' =66, \, J'$ of the state $A^1\Sigma_u^+$ using a wavelength of about 1065$\:$nm \cite{Deiss2015,Drozdova2013}. Here, $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}} ' $ is the vibrational quantum number and $J'$ is the total angular momentum quantum number excluding nuclear spin. From the intermediate level a second laser (the ionization laser) at a wavelength of about 544$\:$nm resonantly excites the molecule to a state above the Rb$_2^+$ ionization threshold, so that the molecule can autoionize. In one experimental run we can detect and count up to $\approx$ 70 ions in the Paul trap. The ion number scales linearly with the molecule number.
The corresponding scaling factor $\eta$ is the detection efficiency of a molecule.
As discussed in Appendices \ref{appendix:REMPI} and \ref{appendix:Conversion}, $\eta $ is roughly constant over the range of bound states investigated in this work, and its value is $\eta \approx 4.8\times10^{-3}$.
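Given the linear ion-to-molecule scaling, a detected ion number $N_{\rm ion}$ translates into roughly $N_{\rm ion}/\eta$ produced molecules in the probed quantum state. A minimal sketch (the ion count of 50 is an illustrative, hypothetical value, not a measured one; $\eta$ is taken from the text):

```python
# Convert a detected ion count to a produced-molecule number: N_mol = N_ion / eta.
eta = 4.8e-3    # molecule detection efficiency (from the text)
n_ion = 50      # illustrative single-run ion count (hypothetical value)
n_mol = n_ion / eta
print(f"{n_ion} detected ions -> about {n_mol:.0f} produced molecules")
```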
A REMPI spectrum of a particular product state is obtained by scanning the probe laser frequency in steps of typically 5$\:$MHz.
Our setup features an improvement of the product state signals and the sensitivity by a factor of $\approx 25$ as compared to previous work \cite{Wolf2017}, extending our detection range of binding energies to about $80\:\textrm{GHz}\times h$, which was instrumental for the present work (for details, see Appendix \ref{appendix:boost}).
In the following we specify molecular states by their vibrational quantum number $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}$ and their rotational quantum number $L_R$ only, which is sufficient due to the conservation of the hyperfine spin state in the reaction process \cite{Noteconservation}. Figure \ref{fig2}(c) shows product state spectra for molecules with $L_R = 0$ or $2$, and with $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}$ ranging from $ -2$ to $-8$.
We note that whenever $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}$ is negative,
it is counted downwards from the atomic $f_a = f_b = 1$, $m_{fa} =m_{fb} = -1$ asymptote, starting with $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}} = -1$ for the most weakly bound vibrational level.
The most deeply bound state, ($\text{\usefont{OML}{lmr}{m}{it}\symbol{118}} = -8, L_R=0$), has a binding energy of $77\:\textrm{GHz} \times h$. Here, all signals are obtained using the same intermediate state $J'=1$ for REMPI. The frequency reference $\nu_0$ corresponds to the photoassociation transition towards this intermediate state such that, at a resonance position, $(\nu-\nu_0)\times h$ directly represents the binding energy of the initially produced molecular state. Our data clearly show that the production rate of molecules for a given rotational level $L_R$ generally drops with the binding energy $E_b$. The drop is significant over the investigated range of $E_b$.
The relative strength of $L_R = 0$ and $2$ signals, however, can vary for different
vibrational levels $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}$.
Large molecular signals as, e.g., obtained for $\text{\usefont{OML}{lmr}{m}{it}\symbol{118}}=-2$ correspond to 63(3) produced ions per run whereas typical background signals are 0.69(0.15) ions per run (see also Fig.$\:$\ref{fig:App6} in Appendix \ref{appendix:boost}). In the measurements of Fig.$\:$\ref{fig2}(c) the number of repetitions of the experiment per data point was gradually increased from 5 to 40 for increasing binding energy, in order to improve the visibility of smaller signals. We assign the signals in our REMPI spectra by comparing their frequency positions to those obtained from
coupled-channel calculations (see, e.g., \cite{Wolf2017,Haze2022}) and by observing
characteristic rotational ladders, since any molecular state with $L_R>0$ can be detected via two different rotational states $J'=|L_R\pm 1|$ [see Appendix \ref{appendix:ConsistencyCheck}, not shown in Fig.$\:$\ref{fig2}(c)].
The deviations between calculated [dashed vertical lines in Fig.$\:$\ref{fig2}(c)] and measured resonance frequency positions are typically smaller than $\sim 30\:\textrm{MHz}$ and arise mainly from daily drifts of our wavelength meter.
\section{Quantitative analysis}
\label{sec:numerical model}
We now carry out a quantitative analysis of the observed population distribution. Figure \ref{fig3} shows measured (scaled) and calculated partial rate constants $L_3(E_b) \equiv L_3(v,L_R)$ for the production of $(v,\, L_R)$ molecules at a temperature of 0.8$\:\mu$K. The experimental values
for $L_3(v,L_R)$ were obtained
by multiplying the measured
ion numbers with a single calibration factor for all detected ($v,L_R$) states. This factor was chosen to
optimize the agreement between experiment and theory (Appendix \ref{appendix:Conversion}).
As can be seen from Fig.$\:$\ref{fig3}, the theoretical predictions reproduce the relative strengths of the state-dependent rates $L_3(v, L_R)$ obtained from the experiments remarkably well. The calculations are based on solving the three-body Schr\"{o}dinger equation in an adiabatic hyperspherical representation \cite{dincao2018JPB,wang2011PRA}, using a single-spin model as described in Appendix \ref{appendix:TBmodelRb87} (see also Ref.$\:$\cite{Wolf2017}).
Within our model the $^{87}$Rb atoms interact via pairwise additive long-range van der Waals potentials with a scattering length of $100.36\,a_0$ \cite{Strauss2010}. The potentials are truncated and support $15$ $L_R=0$ molecular bound states, and a total of 240 bound states. We calculated the theory data point for $L_{R}=8$, $v=-3$ using a model potential with 12 $s$-wave bound states because for 15 $s$-wave bound states a numerical instability occurs specifically for this level.
Figure$\:$\ref{fig3} reveals that the rate $L_3(v,L_R)$ roughly follows an overall $E_b^{-\alpha}$ scaling for all rotational states. A fit to the experimental data yields a scaling exponent $\alpha={0.80(\pm 0.14)}$ (see gray solid lines), while the fit to the theoretical data yields $\alpha={0.77(\pm 0.10)}$. We point out that all gray solid lines
in Fig.$\:$\ref{fig3} correspond to exactly the same function,
$(E_b/\textrm{GHz}\times h)^{-0.8} \times 0.2\times 10^{-29}\:\textrm{cm}^6/\textrm{s}$. For better visibility these lines along with the respective data points have been shifted in vertical direction by multiplying them
with $10^{-L_R}$. We notice that the measured data for $L_R = 2$ are all located above the gray line while
the data for $L_R = 4$ or $10$ are all below the gray line. This indicates a systematic dependence of $L_3(E_b)$ on $L_R$, as already discussed in
\cite{Wolf2017}. Nevertheless, considering the overall range of $L_3$ and $L_R$ in our data, this variation of
$L_3(E_b)$ with $L_R$ is still comparatively small, typically within a factor of 2. Therefore, to a first approximation, the production rate does seem
to be quite independent of the molecular rotation $L_R$. This fact might be somewhat counterintuitive given that the atoms initially collide with vanishing angular momenta, so that products with small angular momenta would seem to be naturally preferred.
We note that there are considerable variations around the general $E_b^{-\alpha}$ scaling trend. For example, the rate for the state $v=-3, L_R=0$ is significantly lower than the rate for the more deeply bound state $v=-4,L_R=0$. Remarkably, even such individual variations are largely reproduced by our numerical calculations.
In general, the theoretical and experimental data curves are very similar, especially for the low rotational states $L_R=0$ and $L_R=2$. This suggests that our three-body model is quite accurate
and that it should in principle be capable of tracking down how the deviations from the general scaling come about in individual cases.
For example, we point to the experimental and theoretical data points for $v = -2$ and $L_R=4$, which are located
below the $E_b^{-\alpha}$ scaling trend. This suppression may be due to an angular momentum barrier effect which we will discuss
in Section \ref{sec:hiPartialWave}.
\section{Perturbative approach}
\label{sec:perturbative}
In order to gain a deeper insight into the observed energy scaling we discuss in the following
a perturbative model for the partial three-body recombination rates $L_3(v, L_R)$.
Generally, the rates $L_3(v, L_R)$ towards each specific molecular product $d = (v, L_R)$ are given by \cite{Li:2022} (see also Appendix \ref{appendix:AGS})
\begin{equation} \label{eq:L3a}
L_3(v, L_R)= \frac{12 \pi m}{ \hbar} (2 \pi \hbar)^6 q_{d} |\langle\psi_{f} | U_{0} (E) | \psi_{\rm{in}} \rangle |^2,
\end{equation}
where $m$ represents the mass of an atom. $| \psi_{\rm{in}}\rangle$ is the initial state, consisting of three free atoms each propagating as a plane wave with essentially vanishing momentum.
$|\psi_{f}\rangle$ is the final state of a free atom and a free molecule. Atom and molecule are asymptotically propagating as plane waves with relative
momentum $q_{d}$ which is fixed by the molecular binding energy $E_b$ and the total energy $E$ of the three-body system
via $\frac{3q_{d}^2}{4m}-E_b=E$, where we use the center of mass system as a reference.
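As a quick numerical sketch of this kinematic relation (our own illustration, not part of the analysis above; the $^{87}$Rb mass and the $1\:\textrm{GHz}\times h$ binding energy are assumed example values), the momentum $q_d$ follows from $3q_d^2/(4m)-E_b=E$ with $E \approx 0$:

```python
import math

hbar = 1.054571817e-34          # J s
h = 2 * math.pi * hbar          # J s
m = 86.909 * 1.66053907e-27     # kg, mass of one 87Rb atom (assumed value)

# Example binding energy E_b = 1 GHz x h (hypothetical illustration value)
E_b = 1.0e9 * h

# Ultracold collision: total energy E ~ 0, so 3 q_d^2 / (4 m) = E_b
q_d = 2.0 * math.sqrt(m * E_b / 3.0)

# The atom-molecule relative momentum is q_d; the relative momentum
# between the two atoms bound in the dimer is q_d / 2.
p_dimer = q_d / 2.0
print(q_d, p_dimer)
```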
In Eq.$\:$(\ref{eq:L3a}), $ U_{0} (E)$ represents a three-body transition operator which describes the transition process between the states. It can be approximated
by a perturbative expansion (Appendix \ref{appendix:AGS}) derived from the Alt-Grassberger-Sandhas (AGS) equation \cite{Alt:1967,Secker:2021,Li:2022}.
To the leading order of the expansion, we have
a process where atoms $(a,b)$ of the three free atoms $(a,b,c)$ collide
and exchange the momentum $\mathbf{q}_d$. During this collision, atom $(b)$ is scattered into a molecular bound state with atom $(c)$.
This is shown schematically in the inset of Fig.$\:$\ref{fig:phi_decay}. The initial momenta of the atoms are zero.
After the collision, atom $(a)$ remains free and carries away the momentum $\mathbf{q}_d$ and a corresponding part of the released binding energy.
The formed molecular bound state $\phi$ has a total momentum $-\mathbf{q}_d$ and the relative momentum
between its atomic constituents is $(-\mathbf{q}_d/2)$.
Apart from constants, the result of the calculation is
\begin{equation} \label{eq:L3b}
L_3(v, L_R) \propto q_d \, \, \, \left| \phi_d \left(\frac{1}{2}q_d \right) \right|^2 \, \, \, |t_{\rm{h}}(q_d)|^2.
\end{equation}
Here, $\phi_d(p=\frac{1}{2}q_d)$ corresponds to the radial part of the molecular wave function in momentum space. It is normalized according to $\int |\phi_d(p)|^2 p^2 dp=1$. The factor $t_{\rm{h}}(p'=q_d) \equiv$ $\langle p'= q_d|t^{s}(0)|p=0\rangle$
is the matrix element of the $s$-wave component $t^s$ of the two-body transition operator $t$ for the two-body collision
and we have set $E = 0$.
Here, $p$ ($p'$) denotes the relative momentum of the two colliding atoms before (after) the collision.
Within the perturbative approximation, the $E_b$ scaling of $L_3(v, L_R)$ can only result from two-body quantities, i.e., $E_b$, $\phi_d$ and $t_{\rm{h}}$. Since $E \approx 0$, one obtains $q_d \approx 2 \sqrt{E_b \, m / 3 }$.
In order to analyze the scaling of $L_3(v, L_R)$ with the molecular binding energy $E_b$,
we discuss $\phi_d(\sqrt{E_b\,m/3})$ and $t_{\rm{h}}(2\sqrt{E_b\,m/3})$ separately. We find that $t_{\rm{h}}(p)$ oscillates but its amplitude varies only gently with $p$
until the deeply-bound states are reached (see
Fig.$\:$\ref{fig:th}(a) in
Appendix \ref{appendix:AGS}). Therefore, $t_{\rm{h}}(2\sqrt{E_b\,m/3})$ cannot strongly contribute to an overall scaling with $E_b$ for the three-body recombination rate.
In contrast, $\phi_d(\sqrt{E_b\,m/3})$, which is obtained from two-body bound-state calculations, vanishes quickly with increasing $E_b$.
This is shown in Fig.$\:$\ref{fig:phi_decay} (yellow data points) for atoms interacting via the van der Waals potential.
Besides the overall decrease of $|\phi_d(\sqrt{E_b\,m/3})|^2$ for growing $E_b$, there are also oscillations. The sharp drops in these oscillations correspond to the nodes of the various momentum wave functions $\phi_d$ for the bound states with energy $E_b$.
While the oscillations lead to some scatter of the data, the upper envelope of the data points indicates an overall power law scaling of the
amplitude.
A fit to this envelope (dotted line in Fig.$\:$\ref{fig:phi_decay}) gives $|\phi_d(\sqrt{E_b\,m/3})|^2\propto E_b^{-1.44\pm 0.03}$ which, together with $q_d \propto \sqrt{E_b}$ in Eq.$\:$(\ref{eq:L3b}), yields $L_3\propto E_b^{-0.94 \pm 0.03}$. This result agrees quite well with our full calculations from Fig.$\:$\ref{fig3}.
We note that in the shown energy range there are only 13 bound states in the van der Waals potential, resulting in 13 data points.
In order to map out in more detail the functional form of $|\phi_d(\sqrt{E_b\,m/3})|^2$ in Fig.$\:$\ref{fig:phi_decay}
we have slightly varied $\lambda_6$ over four different values (while keeping the number of bound states in the potential constant).
The variation in $\lambda_6$ leads to variations of $E_b$ and therefore also of $\phi_d$ and the scattering length $a$.
When we present all these data points together, a quasi-continuous curve is obtained. We note that the oscillation amplitude of $t_{\rm{h}}(p)$, which is nearly constant for a fixed scattering length $a$, can depend on $a$.
\begin{figure}[t]
\centering
\resizebox{0.53\textwidth}{!}{\includegraphics{fig3.pdf} }
\caption{
Plot of $|\phi_d(\sqrt{E_b\,m/3})|^2 \propto L_3(E_b)/\sqrt{E_b}$ versus $E_b$ for various two-body potentials. These are potentials with a power-law tail $\sim -C_n/r^n$ for $n=3,4$ or 6, and the Morse potential $D_e[e^{-2a(r-r_e)}-2e^{-a(r-r_e)}]$ (see legend). We use $\bar{E}=\hbar^2/(m \bar{r}^2)$, with $\bar{r}=\frac{1}{2}(m C_n/\hbar^2)^{1/(n-2)}$ for the power-law potentials and $\bar{r}=1/a$ for the Morse potential, respectively. The values for $\lambda_n$ and $D_e$ are chosen such that the potentials have 14 or 15 $s$-wave bound states. The shown
points correspond to bound states with $v \geq -13$.
In order to map out the function $|\phi_d(\sqrt{E_b\,m/3})|^2$ better we show data points for four different values of $\lambda_n$ (for each $n$) or of $D_e$ (see text). The dotted line is a power-law fit to the upper envelope of the data points for the $-C_6/r^6$ potential as well as for the other potentials.
The dashed line represents the scaling for contact interactions. For $n = 6$ the energy range considered in this figure corresponds to [0.01 \dots 100] GHz$\times h$ for $^{87}$Rb atoms. The inset describes the scattering process in the perturbative approximation (see Appendix \ref{appendix:AGS}). Horizontal lines represent atoms and the numbers above these lines denote single atom momenta. The relative momentum between two atoms is indicated by a number that connects to the corresponding horizontal lines by arrows.
}\label{fig:phi_decay}
\end{figure}
\section{ Energy scaling for general long-range potentials}
\label{sec:analytical derivation}
Remarkably, we find that the scaling law is similar for a range of different two-body interaction potentials, such as
the Morse potential and potentials of the
form $V(r) = -(C_{n}/r^n)\,(1- \lambda_{n}^{n}/r^{n})$. Here, the parameter $n$ is typically $n = 3, 4$ or 6, and
$\lambda_n$ is a short-range parameter which defines the inner barrier. The case $n = 6$ corresponds to the Lennard-Jones potential which was already discussed in the previous section. The corresponding functions $|\phi_d(\sqrt{E_b\,m/3})|^2$ are shown
in Fig.$\:$\ref{fig:phi_decay}. Clearly, their envelopes all decrease in a similar manner, i.e., approximately as
$E_b^{-1.44\pm 0.03}$.
Furthermore,
we also consider contact interactions between the atoms. For these, we use
$\phi_d(p)=\frac{2}{\sqrt{\pi}}\frac{(m \, E_b)^{1/4}}{p^2+m \, E_b}$ and analytically obtain from Eq.$\:$(\ref{eq:L3b}) the scaling to be exactly
$|\phi_d(\sqrt{E_b\,m/3})|^2\propto1/E_b^{3/2}$ (see black dashed line in Fig. \ref{fig:phi_decay}), corresponding to $L_3(E_b)\propto 1/E_b$.
Therefore, even the results for contact interaction are in relatively good agreement with our other
numerical and the experimental results.
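The contact-interaction result can be checked numerically. The sketch below (our own, stdlib-only; units with $\hbar = m = 1$) verifies the normalization $\int |\phi_d(p)|^2 p^2\,dp = 1$ by Simpson quadrature and confirms the exact $E_b^{-3/2}$ scaling of $|\phi_d(\sqrt{E_b\,m/3})|^2$:

```python
import math

def phi(p, E_b):
    """Dimer momentum-space wave function for contact interactions (hbar = m = 1)."""
    return (2.0 / math.sqrt(math.pi)) * E_b ** 0.25 / (p * p + E_b)

def norm_integral(E_b, pmax_factor=2000.0, n=200000):
    """Composite Simpson quadrature of int_0^pmax |phi(p)|^2 p^2 dp."""
    pmax = pmax_factor * math.sqrt(E_b)
    h = pmax / n
    s = 0.0
    for i in range(n + 1):
        p = i * h
        f = (phi(p, E_b) * p) ** 2
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * f
    return s * h / 3.0

E_b = 2.0
print(norm_integral(E_b))   # close to 1 (small truncation error from the finite pmax)

# |phi(sqrt(E_b/3))|^2 between two binding energies: exact factor 10**-1.5
ratio = phi(math.sqrt(10 * E_b / 3), 10 * E_b) ** 2 / phi(math.sqrt(E_b / 3), E_b) ** 2
print(ratio)
```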
\begin{figure}[t]
\centering
\resizebox{0.53\textwidth}{!}{\includegraphics{fig4.pdf}}
\caption{(a) Typical example for a molecular wave function (red solid line)
located in the $-C_6/r^6$ van der Waals potential (black solid line). The blue horizontal dotted line and the blue circle indicate the molecular level position $-E_b$ and the corresponding classical outer turning point, respectively. The vertical dashed line shows the starting position of the last lobe of the molecular wave function. (b) shows the accumulated Fourier integral $\phi^{\rm{acc}}_d(\sqrt{mE_b/3};r)$, see text.
$ E_\mathrm{vdW} =\hbar^2/(m r_\mathrm{vdW}^2)$ is the van der Waals energy and $r_\mathrm{vdW} =\frac{1}{2}(m C_6/\hbar^2)^{1/4}$ is the van der Waals length.}
\label{fig:lastlobe}
\end{figure}
In the following we discuss how this similar scaling for the different long-range potentials can be explained. We make use of approximate analytical wave functions for the molecular bound state.
Let $\psi(r) = u(r)/r $ be the radial part of the molecular wave function with rotational angular momentum $L_R$. Here, $r$ is the internuclear distance between the two atoms. A typical example of the reduced radial wave function $u(r)$ is shown in Fig. \ref{fig:lastlobe}(a).
The Fourier transform of $u(r)/r $ generates the molecular wave function in momentum space
\begin{eqnarray}
\phi_d (p)&=&\sqrt{\frac{2}{\pi}}\int_0^{\infty} r j_{L_R}(pr/\hbar) {u}(r)dr ,
\end{eqnarray}
where $j_{L_R}$ is the spherical Bessel function of the first kind of order $L_R$.
A numerical analysis shows
that the dominant contribution to $\phi_d(\sqrt{ E_b m/ 3})$ comes from the last lobe of
$u(r)$,
which is located around the classical outer turning point $r_0$.
In Fig. \ref{fig:lastlobe}(b) we show the Fourier integral
for the case $L_R = 0$
in an
accumulated fashion
$\phi^{\rm{acc}}(p;r)=\int_{0}^{r}\sin(pr'/ \hbar)u(r')dr'/\int_{0}^{\infty}\sin(pr'/ \hbar )u(r')dr'$,
which verifies the
dominant contribution of the last lobe.
The turning point of the level [blue circle in Fig.
\ref{fig:lastlobe}(a)] is determined by
$\tilde V(r_0) +E_b=0$, where $ \tilde V(r) = V(r) + \hbar^2 L_R (L_R + 1)/ (m r^2)$. The reduced radial wave function $u(r) = r \psi(r)$ in this region is approximated by
$\tilde u(r) = \mathcal{N}^{1/2}\textrm{Ai} [s(r-r_0)]$, where Ai($x$) is the Airy function, $s=(mD/\hbar^2)^{1/3}$,
$D= d\tilde{V}(r)/ dr|_{r=r_0}$, and $\mathcal{N}$ is a normalization factor that ensures that $\tilde u(r)$ best matches
$u(r)$ in the region. It turns out that to a good approximation
$\mathcal{N} = s N$, where $N$ is a constant independent of $E_b$ \cite{LastLobe}.
We Fourier transform $\tilde{u}(r)$ and obtain
\begin{eqnarray}
\tilde \phi_d (p)
&=&\sqrt{\frac{2N\hbar^2}{\pi s p^2}} g_d(p)\,,
\end{eqnarray}
where
\begin{equation}
g_d(p)=\int_0^{\infty}\frac{p\tilde{r}}{s\hbar} j_{L_R}(p\tilde{r}/s\hbar)\textrm{Ai} (\tilde{r}-\tilde{r}_0)d\tilde{r}.
\end{equation}
Here, $\tilde{r}=sr$, $\tilde{r}_0=sr_0$. At $p=\sqrt{ E_b m/ 3}$, we get
\begin{align}
|\tilde \phi_d (\sqrt{ E_b m/ 3})|^2 &=\frac{6N\hbar^2}{\pi s mE_b}g_d^2(\sqrt{ E_b m/ 3}),
\label{eq:aphi}
\end{align}
which approximates $|\phi_d (\sqrt{ E_b m/ 3})|^2$.
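The dimensionless integral $g_d(p)$ can be evaluated numerically. For $L_R=0$, $j_0(x)=\sin(x)/x$, so $g_d(p)=\int_0^{\infty}\sin(p\tilde{r}/s\hbar)\,\textrm{Ai}(\tilde{r}-\tilde{r}_0)\,d\tilde{r}$. The sketch below is our own stdlib-only illustration (in practice one would use scipy.special.airy); it tabulates Ai by RK4 integration of $y''=xy$, started from the exact values at $x=0$, and then applies the trapezoid rule:

```python
import math

AI0, AIP0 = 0.3550280538878172, -0.25881940379280678  # Ai(0), Ai'(0)

def airy_grid(xmin, xmax, h=1e-3):
    """Tabulate Airy Ai on [xmin, xmax] via RK4 on y'' = x*y, started at x = 0.
    Rough stand-in for scipy.special.airy; adequate for |x| <~ 8 (demo only)."""
    def march(x_end, step):
        xs, ys = [0.0], [AI0]
        y, yp, t = AI0, AIP0, 0.0
        for _ in range(int(round(abs(x_end) / abs(step)))):
            k1 = (yp, t * y)
            k2 = (yp + step / 2 * k1[1], (t + step / 2) * (y + step / 2 * k1[0]))
            k3 = (yp + step / 2 * k2[1], (t + step / 2) * (y + step / 2 * k2[0]))
            k4 = (yp + step * k3[1], (t + step) * (y + step * k3[0]))
            y += step / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
            yp += step / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
            t += step
            xs.append(t)
            ys.append(y)
        return xs, ys

    xs_dn, ys_dn = march(xmin, -h)
    xs_up, ys_up = march(xmax, +h)
    return xs_dn[::-1] + xs_up[1:], ys_dn[::-1] + ys_up[1:]

def g_d_L0(x, r0t, tail=8.0, h=1e-3):
    """g_d for L_R = 0: trapezoid rule for int_0^inf sin(x r) Ai(r - r0t) dr,
    truncated at r = r0t + tail (Ai decays fast for positive argument)."""
    xs, ai = airy_grid(-r0t, tail, h)   # Ai(xi) with xi = r - r0t in [-r0t, tail]
    last = len(xs) - 1
    s = 0.0
    for i, xi in enumerate(xs):
        w = 0.5 if i in (0, last) else 1.0
        s += w * math.sin(x * (xi + r0t)) * ai[i]
    return s * h

print(g_d_L0(1.5, 6.0))   # an order-1 value that oscillates with x, as in Fig. 5(a)
```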
\begin{figure}[t]
\centering
\resizebox{0.52\textwidth}{!}{\includegraphics{fig5.pdf} }
\caption{\label{fig:phai}
(a) Plot of $g_d(\sqrt{ E_b m/ 3}) $ as a function of $E_b$ for the partial waves $L_R \leq 10$ (see legend in (b)). The gray area highlights the range of the oscillation for $L_R=0$.
(b) Plot of $1/(s E_b)$.
For both (a) and (b) we use the $-C_6/r^6$ potential. }
\end{figure}
After this general discussion we now turn to the cases $L_R = 0$ and
$L_R > 0$.
\subsection{Case: $L_R=0$}
For an $L_R=0$ molecular state and $\tilde V(r)=-C_n/r^n$ the classical outer turning point is given by $r_0 = (C_n/ E_b)^{1/n}$. From the derivative of the
potential $\tilde V (r)$ we obtain for the parameter $s$
(which we defined earlier in connection with the Airy function),
\begin{align}
s &=(mn/\hbar^2)^{1/3}E_b^{(n+1)/3n}C_n^{-1/3n} \label{eq:sLR0}.
\end{align}
Thus, the factor $6N\hbar^2/(\pi s mE_b)$ in Eq.~(\ref{eq:aphi}) scales as $E_b^{-(4n+1)/3n}$.
The other factor in Eq.~(\ref{eq:aphi}), $g_d(\sqrt{ E_b m/ 3})$, is plotted
in Fig.~\ref{fig:phai}(a) for $n = 6$, (blue line).
$g_d(\sqrt{ E_b m/ 3})$ oscillates between $-1$ and $1$ with a constant
amplitude, as indicated by the gray area. Therefore, $g_d(\sqrt{ E_b m/ 3})$ does not contribute to an overall scaling of $|\phi_d (\sqrt{ E_b m/ 3})|^2$ with $E_b$.
This is similar to the behavior of $t_{\rm{h}}(q_d)$ discussed in Sec. \ref{sec:perturbative}.
The variation of $g_d(\sqrt{ E_b m/ 3})$ merely leads to some scatter of $L_3$. Therefore, we ignore $g_d(\sqrt{ E_b m/ 3})$ in the following discussion of the scaling and
obtain,
\begin{equation}
|\phi_d (\sqrt{ E_b m/ 3})|^2 \propto E_b^{-(4n+1)/3n} = E_b^{-\beta} . \label{eq:scale}
\end{equation}
For $n=3,4$ and 6, the exponent $\beta = (4n+1)/3n$ takes the
values of 1.44, 1.42 and 1.39, which agree very well with our numerical results in the perturbative approach.
For the scaling of $L_3$ we have $L_3\propto E_b^{-\alpha}$, where $\alpha = \beta - 0.5$.
It is remarkable that for any positive integer $n$ the exponents are constrained to a narrow range, i.e. $\beta \in$ [1.33, 1.67] and $\alpha \in$ [0.83, 1.17].
Considering that a real interaction potential can typically be expanded in terms of the $-C_n/r^n$ functions,
these ranges should be valid quite generally.
In fact, the range of $\alpha \in$ [0.83, 1.17] agrees with the exponent $\alpha=0.8 \pm 0.14$ extracted from our experimental measurements
within the range of uncertainty.
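The exponent arithmetic above is easy to tabulate (a trivial sketch of our own):

```python
# beta = (4n+1)/(3n) and alpha = beta - 1/2 for a -C_n/r^n potential tail
for n in (1, 3, 4, 6, 12, 1000):
    beta = (4 * n + 1) / (3 * n)
    alpha = beta - 0.5
    print(f"n={n}: beta={beta:.3f}, alpha={alpha:.3f}")
# n = 3, 4, 6 give beta ~ 1.444, 1.417, 1.389; n = 1 gives the upper
# bound beta = 5/3, and beta -> 4/3 for n -> infinity.
```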
\subsection{Case: $L_R>0$}
\label{sec:hiPartialWave}
We now consider the case of rotational angular momentum $L_R>0$. Because for this case we do not obtain a simple analytical expression for
$s$ as in Eq.~(\ref{eq:sLR0}), we only present numerical results.
Figure \ref{fig:phai}(b) shows a plot of $1/(s E_b)$ for various $L_R$ (and using the $-C_6/r^6$ potential as a
typical example).
Clearly,
all curves are quite similar, especially for large $E_b$.
The functions $g_d(\sqrt{ E_b m/ 3})$ are shown in Fig.~\ref{fig:phai}(a), as discussed before. They oscillate with a constant amplitude and
therefore do not contribute to the energy scaling for large energies $E_b$.
As a consequence, the energy scaling is quite independent of the rotational state $L_R$ of the molecule. Figure \ref{fig:phL} shows calculations of $|\phi_d(\sqrt{E_b\,m/3})|^2$ for various rotational angular momenta of the molecule. As a typical example we use the Lennard-Jones potential.
As for Fig.$\:$\ref{fig:phi_decay}, we have slightly varied $\lambda_6$ over four different values
in order to increase the number of data points and to better map out $|\phi_d(\sqrt{E_b\,m/3})|^2$.
The sudden drops in $|\phi_d(\sqrt{E_b\,m/3})|^2$ reflect nodes of the molecular wave function.
For large enough binding energy $E_b$ all curves for the different values of $L_R$ follow the same power law $E_b^{-1.45}$ corresponding to an energy scaling of the partial rate constants (for fixed $L_R$) of $L_3(E_b)\propto E_b^{-0.95}$.
We note, however, that for $L_R>0$ and small enough binding energies, our calculations in Fig. \ref{fig:phL} reveal a strong suppression of $|\phi_d|^2$ and therefore of $L_3(E_b)$.
This effect is due to the function $g_d(\sqrt{ E_b m/ 3})$.
As shown in Fig. \ref{fig:phai} (a), for $L_R>0$, $g_d(\sqrt{ E_b m/ 3})$ increases gradually with $E_b$ starting from 0.
When it reaches its first maximum, it goes over to the previously discussed oscillatory behavior in the gray area, similar to the case of $L_R=0$.
As a consequence, $|\phi_d (\sqrt{ E_b m/ 3})|^2$ is increasingly suppressed for $E_b \rightarrow 0 $, as observed in our numerical results.
This suppression can be understood as an effect of the angular momentum barrier.
In a simple picture, in order to create a molecule rotating with angular momentum $L_R$ at the interparticle distance $r_0$ of the outer turning point, a minimal momentum
of the order of $p_c \approx \hbar L_R / r_0$ needs to be supplied. The minimal momentum $p_c$ translates into a minimal
binding energy $E_c =3 p_c^2 / m \approx \hbar^2 L_R(L_R +1) / (m r_0^2)$. At the same time we have $E_c \approx C_6 / r_0^6-\hbar^2 L_R(L_R +1) / (m r_0^2)$. Combining these two equations to eliminate $r_0$ one can estimate
the critical energy $E_c$
to be
\begin{equation}
E_c/E_{\rm{vdW}} \approx c_c \left[ L_R(L_R +1)\right]^{3/2} ,
\label{eq:escal2}
\end{equation}
where $c_c = 1.5$ and
$E_{\rm{vdW}} = 4 \hbar^3/ (m^{3/2} C_6^{1/2}) $ is the van der Waals energy.
Reading off $E_c (L_R)$ from our numerical results in Fig. \ref{fig:phL} as the first maximum of $|\phi_d (\sqrt{ E_b m/ 3})|^2$ we find
that the data points are well described by Eq.$\:$(\ref{eq:escal2}) when we use
$c_c = 2.12$, see inset of Fig. \ref{fig:phL}. This validates our simple interpretation of the
angular momentum suppression.
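A short numerical sketch of this estimate (our own illustration of the critical-energy scaling; both the derived prefactor $c_c = 1.5$ and the fitted $c_c = 2.12$ are the values quoted in the text):

```python
# Critical binding energy E_c below which formation of a molecule with rotation
# L_R is suppressed by the angular momentum barrier:
#   E_c / E_vdW = c_c * [L_R (L_R + 1)]^(3/2)
def e_crit(L_R, c_c):
    return c_c * (L_R * (L_R + 1)) ** 1.5

for L_R in (2, 4, 6, 8, 10):
    print(L_R, e_crit(L_R, 1.5), e_crit(L_R, 2.12))  # derived vs. fitted prefactor
# The suppression threshold grows roughly as L_R^3 for large L_R.
```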
\section{Discussion}
\label{sec:discussion}
We now compare and discuss the results of our theoretical and experimental approaches.
The suppression effect for large $L_R$ and small $E_b$, which is so clearly visible in Fig.$\:$\ref{fig:phL}
is not so obvious in Fig.$\:$\ref{fig3} where we present our experimental data and our full coupled channel calculations.
A small suppression effect might only be recognizable for the state $v = -2, L_R=4$ in Fig.$\:$\ref{fig3}.
In practice, the observation of suppressed low-energy high-$L_R$ molecular signals can be hampered by various issues.
By accident it can occur that no weakly-bound molecular level with $E_b < E_c$
exists for a given rotational angular momentum $L_R$.
In fact, quantum defect theory predicts
for a van der Waals potential
that if the most weakly-bound state for a partial wave $L_R$ is not close to threshold, then
the most weakly-bound state for the partial wave
$L_R + 4$ will not be either \cite{Chin2010}.
Alternatively, the level can be overlooked experimentally, if its signal is too weak.
It will be overlooked theoretically if the level has a vibrational quantum number beyond the limits of the model potential.
It should, however, be clear that the suppression mechanism must exist. Indeed, in a recent experiment on
diatomic molecular reactions, a similar suppression mechanism has been identified \cite{Liu2021}.
When comparing Fig.$\:$\ref{fig:phL} with Fig.$\:$\ref{fig3} it is evident
that the distinct drops of $|\phi_d (\sqrt{ E_b m/ 3})|^2$ in Fig.$\:$\ref{fig:phL} do not clearly
appear in Fig.$\:$\ref{fig3}. There are several possible explanations for this. First, the data sampling density
in Fig.$\:$\ref{fig3} is about seven times lower than in Fig.$\:$\ref{fig:phL}. Therefore, it is likely that
a narrow drop is simply not encountered in Fig.$\:$\ref{fig3}. Second, the calculations in Fig.$\:$\ref{fig:phL} correspond to the leading order of an expansion. Including higher orders might wash out the sudden drops, as other pathways for the molecular formation can be taken.
Concerning the full model and the experiment, we expect that the scaling
exponent $\alpha$ of the $ E_b^{-\alpha}$ scaling law is prone to changes for deeper binding energies than the ones considered here.
As recently discussed in \cite{Haze2022}, for $^{87}$Rb and $E_b>150\:\textrm{GHz}\times h$ the spin-conservation propensity rule, which allows for working with a single spin channel, should break down, affecting the scaling law.
In addition, the short-range three-body interaction, which is ignored so far in our treatment,
should play an increasingly important role when forming more tightly bound molecular states.
Recent work on three-body recombination of hydrogen \cite{Yuen2020} has already found evidence for this,
as the Jahn-Teller effect substantially enhances recombination rates into tightly bound molecular states.
\begin{figure}[t]
\centering
\resizebox{0.45\textwidth}{!}{\includegraphics{fig6.pdf} }
\caption{\label{fig:phL}
Power-law scaling of $|\phi_d(\sqrt{E_b\,m/3})|^2 \propto L_3(E_b)/\sqrt{E_b}$ for various rotational angular momenta $L_R$. The data points are a full calculation for the Lennard-Jones potential $(-C_6/r^6)(1-\lambda_6^6/r^6)$.
As in Fig.\ref{fig:phi_decay} and Section \ref{sec:perturbative},
we have slightly varied $\lambda_6$ over seven different values while keeping the number of bound states in the potential constant.
This helps to map out $|\phi_d(\sqrt{E_b\,m/3}) |^2$ in detail. Therefore, although there are only 8 bound states in the shown energy range we obtain 55 data points.
The solid lines are calculations
for the Lennard-Jones potential using Eq. (\ref{eq:aphi}).
The dashed lines show the $E_b^{-1.45}$ energy scaling for $|\phi_d|^2$, as determined by a fit to the upper envelope of the data set.
For better visibility the data for $L_R>0$ are shifted in vertical direction by multiplying them with $10^{-L_R}$.
We define $E_c$ to be the energy at which the curve $|\phi_d|^2$ (for a given $L_R$) has its first maximum (coming from low energy).
The inset shows $E_c$ as a function of $L_R$ (diamonds). The data are well described by Eq. (\ref{eq:escal2}) (solid line).
}
\end{figure}
\section{Summary and Outlook}
\label{sec:outlook}
To summarize, we have experimentally and theoretically investigated how the three-body recombination
of an ultracold gas scales with the molecular binding energy $E_b$,
detecting bound levels from 0.02 to 77 GHz $\times h$, thus
spanning an energy range of more than three orders of magnitude.
This became possible by applying improved experimental schemes for the state-to-state detection of molecules and
by carrying out large scale numerical calculations.
Besides these numerical calculations an analytical perturbative model was developed which gives deep physical insights into the recombination process and can explain the observed scaling law. In particular, the perturbative model shows that to a large part the scaling law can be extracted from two-body quantities such as the molecular wave function.
Our experimental and theoretical approaches show that the three-body recombination
exhibits a propensity towards weakly bound product molecules. The recombination rate
follows a $E_b^{-\alpha}$ scaling law
where $\alpha$ is in the vicinity of 1.
Remarkably, we find that this scaling law is quite universal as it should hold for a range of different potentials such as the
Morse potential, potentials of type $-C_n/r^n$ with $n = 3, 4, 6$, as well as the contact potential.
In addition, apart from a centrifugal barrier suppression at low enough binding energies, our results indicate that
the three-body recombination populates molecular quantum states with different rotational angular momenta quite evenly, within about a factor of two.
In the future it will be interesting to explore how the scaling law evolves for
deeper binding energies and what physical mechanisms will lead to its breakdown.
On the experimental side, the detection sensitivity will need to be enhanced and the spectroscopic data expanded for reliable quantum state identification. On the theory side, short-range three-body interactions, which have been ignored so far in our treatment, will be taken into account.
Moreover, it will be insightful to explore how deviations of individual reaction channels from the $E_b^{-\alpha}$ scaling law can be explained on a microscopic level, e.g. as interference effects of collision pathways.
In fact, our perturbative calculations already produce
tell-tale oscillations,
and it will be interesting to see whether these oscillations can be matched with the ones from the hyperspherical approach. This might give deeper insights into the reaction process.
We expect that our results on the scaling of the reaction rate with energy are not restricted to the recombination process of neutral atoms alone, but they can also be applied to other systems and processes. For example, these systems could involve molecules or ions as collision partners and they might also comprise a range of collisional relaxation processes.
We expect these process rates to be governed by a $E_b^{-\alpha}$ scaling law, where $\alpha$ should always be in the vicinity of unity.
\section*{Acknowledgments}
This work was financed by the Baden-W\"{u}rttemberg Stiftung through the Internationale Spitzenforschung program (Contract No. BWST ISF2017-061) and by
the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) within Contract No. 399903135. We acknowledge support from bwForCluster JUSTUS 2 for
high performance computing. J. P. D. also acknowledges partial support from the U.S. National Science Foundation
(PHY-2012125) and NASA/JPL (1502690).
1708.06959
\section{Introduction}
Linear programming (LP) decoding was introduced by Feldman \emph{et al.} in 2005 \cite{fel05} as an efficient, but (compared to maximum-likelihood (ML) decoding) suboptimal decoding approach for binary linear codes. Since then, LP decoding of low-density parity-check (LDPC) codes has been extensively studied by various authors, and, in particular, several low-complexity approaches have been proposed. See, for instance, \cite{von06,tag07,bur09,tag11,bar13}.
The approach was later extended to nonbinary linear codes by Flanagan \emph{et al.} \cite{fla09}, and several low-complexity approaches were proposed in
\cite{gol13,pun13,liu14_1,liu14}.
Nonbinary LDPC codes are especially appealing because, in the important finite-length regime, they generally exhibit better performance than binary codes. %
As recent results have shown \cite{bar13,was16}, LP decoding based on the alternating direction method of multipliers (ADMM) for convex optimization problems \cite{boy10} is able to outperform (in terms of decoding complexity) other LP decoding approaches. Its efficiency hinges on a fast method for Euclidean projection onto a polytope. In the binary case, the so-called ``two-slice'' lemma is the main result that enables efficient Euclidean projections in time $O(d \log d)$ for a binary single parity-check (SPC) code of length $d$. More recently, more efficient projection algorithms have been proposed in \cite{zha13_1} and \cite{zha13}. While initial work has been done to apply ADMM to the nonbinary case \cite{liu14}, it is currently not known how this framework can be applied to codes over nonbinary fields with characteristic greater than $2$. %
In this work, however, we take a different approach and propose an efficient adaptive LP (ALP) decoder for linear codes over prime fields, extending the well-known ALP decoder for binary linear codes by Taghavi and Siegel \cite{tag07} to the general nonbinary case. %
The underlying structures of LP decoding are the codeword polytopes (or convex hulls) whose vertices correspond to the codewords of a binary image of a (nonbinary) SPC code.
By intersecting all those polytopes defined by the rows of a specific parity-check matrix of a linear code, one obtains the so-called fundamental polytope, the domain of optimization of an LP decoder. In order to perform ALP decoding, one hence needs an explicit description of the polytopes (without using auxiliary variables) of embedded nonbinary SPC codes in terms of linear (in)equalities. While an explicit description for binary codes is well-known (second formulation in \cite{fel05}), all LP formulations for nonbinary codes known so far generalize the first formulation in \cite{fel05} and thus depend on auxiliary variables (one for each feasible configuration) \cite{fla09}. In this paper, we provide such an explicit construction for valid (facet-defining) inequalities for the so-called constant-weight embedding of a nonbinary SPC code over \emph{any} prime field. The construction is based on classes of \emph{building blocks} that are assembled to form the left-hand side of an inequality according to several rules. In the
case of \emph{almost doubly-symmetric} valid classes we prove that the resulting inequalities are all \emph{facet-defining}, while we conjecture this to be true if and only if the class is valid and \emph{symmetric}. Such sets of inequalities have, to the best of our knowledge, not appeared in the literature before; they are of strong theoretical interest, and their explicit form (constructed from finite sets of building blocks) provides efficient separation, and thus efficient ALP decoding of \emph{general} (non-SPC) linear codes over any prime field, %
providing an extension to any prime field of the well-known binary case \cite{fel05}. For the ternary case, we prove that the constructed facet-defining inequalities together with the so-called \emph{simplex} constraints give a complete and irredundant description of the embedded SPC codeword polytope. It also extends the explicit formulation of the fundamental cone by Skachek \cite{Skachek10F3} in the sense that the latter describes the convex hull at one single (namely, the all-zero) vertex. For the quinary case, we conjecture that the constructed inequalities together with the simplex constraints indeed give a complete and irredundant description of the embedded codeword polytope, while for larger $q$ we show that this is not the case. Besides its computational gain, ALP decoding is also a key component of methods for improving error-correction performance \cite{zha12,Helmling+14MLDecoding}.
Linear codes (or, in particular, linear LDPC codes) over prime fields have several application areas. For instance, such codes are a key ingredient in the construction of low-density integer lattices using the so-called Construction~A \cite{pie12}. Such lattices are referred to as LDA lattices and they perform close to capacity on the Gaussian channel, in addition to being conceptually simpler than previously proposed lattices based on multiple nested binary codes. In particular, in \cite{pie12}, several integer LDA lattices, all based on a particular $(2,5)$-regular LDPC code over the prime field of size $11$, were proposed. For dimension $5000$, the lattice attains a symbol error rate (under low-complexity iterative decoding) of less than $10^{-6}$ at $1$ dB from capacity. Also, ternary linear codes have recently attracted some attention in the context of polar codes \cite{god15_1} and array-based spatially-coupled LDPC codes \cite{ami15}.
The remainder of this paper is organized as follows. In \cref{sec:basics}, we establish notation and give a short overview of some background material. The relationship between Flanagan's embedding and the constant-weight embedding is characterized in \cref{sec:embeddings}. \Cref{sec:general_results} establishes general polyhedral properties (dimension, affine hull, and box inequalities) of the polytope mentioned above and studies its symmetries by introducing \emph{rotation}. Then, in \cref{sec:bb}, we present a construction method for valid inequalities for the convex hull of the constant-weight embedding of an SPC code defined over the prime field $\mathbb F_p$ for general $p$. In \cref{sec:explicit_3_5_7}, we tailor the general framework developed before for $p \in \{3,5,7\}$. In particular, we prove that for $p=3$, the framework provides a complete and irredundant description of the embedded codeword polytope of a ternary SPC code under the constant-weight embedding.
A separation algorithm based on the principle of dynamic programming (DP) for efficient (relaxed) ALP decoding of general (non-SPC) nonbinary codes over \emph{any} prime field is presented in \cref{sec:ALP_q}; \cref{sec:ALP_q3} describes an efficient implementation of this algorithm for the special case of $p=3$. In \cref{sec:ACG}, we outline an efficient method to search for cut-inducing redundant parity-check equations using Gaussian elimination, generalizing the \emph{Adaptive Cut Generation (ACG)} algorithm from \cite{zha12}. In \cref{sec:ptom}, we briefly consider the case of nonbinary codes over the general field $\mathbb{F}_q$ of size $q = p^m$, where %
$m>1$ is an integer. In particular, we adapt the relaxation method proposed in \cite{liu14,hon12} for fields of characteristic $p=2$ to any characteristic $p > 2$.
Numerical results for both LDPC and high-density parity-check (HDPC) codes for various field sizes and block lengths are presented in \cref{sec:numerical_results}. The results show that our proposed ALP decoder outperforms (in terms of decoding complexity) the decoder from \cite{fla09} (using both the plain and the cascaded LP formulation). Also, using an appropriately generalized ACG-ALP decoding algorithm, as described in \cref{sec:ACG}, near-ML decoding performance can be achieved for short block lengths. %
Finally, we draw some conclusions and give an outline of some future work in \cref{sec:conclu}.
\end{notms}
\section{Notation and Background} \label{sec:basics}
This section establishes some basic definitions and results needed for the rest of the paper.
\subsection{General Notation}
If $x \in S$ and $A \subset S$, where $S$ is a set, we denote
\begin{equation} \notag %
x+A \coloneqq \{x + a\colon a \in A\}
\end{equation}
(and analogously $x\cdot A$, $x-A$, etc.). For a map $f\colon A \rightarrow B$ and a set $S \subseteq A$, $f(S) = \{f(s)\colon s\in S\}$ is the set of images of $S$ under $f$. The set of integers is denoted by $\mathbb Z$. For a positive integer $L \in \mathbb Z$, $\range L = \{1,2,\dotsc,L\}$.
A multiset $S$ is a set in which an item can occur repeatedly. The size of a multiset, denoted by $\|S\|_1$, is the number of items counted with multiplicity. For example, $S= \{1,2,2,3,6,6\}\subset \mathbb Z$ is a multiset with $\|S\|_1 = 6$.
\subsection{Finite Fields and Integers}
For any prime $p$ and integer $m \geq 1$, let $\mathbb F_q$ with $q=p^m$ denote the finite field with $q$ elements. If $m=1$ (which is assumed for most of this work), the set $\mathbb F_p=\mathbb F_q$ consists of the $p$ \emph{congruence classes} in $\mathbb Z/p\mathbb Z = \{ [a]_p\colon a \in \mathbb Z\}$, where
\begin{equation}
\begin{aligned}
[\cdot]_p\colon \mathbb Z &\rightarrow \mathbb F_p\\
a &\mapsto a + p \mathbb Z = \{a+kp\colon k \in \mathbb Z\}
\label{eq:congruence}
\end{aligned}
\end{equation}
maps an integer to its congruence class modulo $p$. The \enquote{reverse} version of \cref{eq:congruence} that maps a congruence class $\zeta \in \mathbb F_p$ to its unique integer representative from $\{0,1,\dotsc,p-1\}$ is denoted by $[\cdot]_\mathbb Z$:
\begin{equation} \notag
\begin{aligned}
[\cdot]_\mathbb Z\colon \mathbb F_p &\rightarrow \mathbb Z\\
\zeta = [a]_p &\mapsto a \bmod p.
\end{aligned}
\end{equation}
In general, \emph{literal} numbers are used to denote elements of both $\mathbb F_p$ and $\mathbb Z$, depending on the context, but nonetheless designate different items; e.g., $\mathbb Z \ni 3 \neq 3 \in \mathbb F_7$ (we may use the explicit form $[3]_7$ if there is a risk of ambiguity or confusion). Note also that operators like \enquote{$+$} are defined for both $\mathbb Z$ and $\mathbb F_p$, such that $3+5=8$ in $\mathbb Z$, but $3+5=1$ in $\mathbb F_7$. If $a, b\in \mathbb Z$, the expression $a \equiv b \pmod p$ is an alternative notation for $[a]_p = [b]_p$, which we use especially if $a$ and $b$ are compound expressions. For $a \in \mathbb Z$, $[[a]_p]_\mathbb Z = a \bmod p$.
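The interplay between the maps $[\cdot]_p$ and $[\cdot]_\mathbb Z$ and the two meanings of \enquote{$+$} can be sketched in a few lines of Python; representing a congruence class by its canonical representative in $\{0,\dotsc,p-1\}$ is an assumption of this sketch, not part of the formal definition.

```python
# Sketch of [.]_p and [.]_Z for p = 7; a congruence class is represented
# by its canonical representative in {0, ..., p-1} (assumption of this sketch).
p = 7

def to_class(a):
    """[a]_p: the congruence class of the integer a, as its representative."""
    return a % p

def to_int(zeta):
    """[zeta]_Z: the unique integer representative in {0, ..., p-1}."""
    return zeta  # already canonical in this representation

assert 3 + 5 == 8                    # addition in Z
assert to_class(3 + 5) == 1          # [3]_7 + [5]_7 = [1]_7 in F_7
assert to_int(to_class(-3)) == 4     # [[a]_p]_Z = a mod p, also for a < 0
```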
In the general case $m \geq 1$, each element $\zeta \in \mathbb F_q = \mathbb F_{p^m}$ can be represented by a polynomial $\zeta(x) = \sum_{i=1}^m p_i x^{i-1}$, where $p_i \in \mathbb F_p$, and we will use the integer representation $\zeta(p) = \sum_{i=1}^m [p_i]_\mathbb Z p^{i-1}$ for representing $\zeta$. Furthermore, let $\mathsf p(\zeta) = (p_1,\dotsc,p_m)$ be the $p$-ary vector representation of $\zeta$.
For any finite set $\mathcal{A} = \{\zeta_1,\dotsc,\zeta_{|\mathcal{A}|}\}$ of field elements $\zeta_i \in \mathbb F_q$, $i \in \range{|\mathcal{A}|}$,
we use the short-hand notation $\sum \mathcal{A} = \sum_{a \in \mathcal{A}} a$ for the sum (in $\mathbb F_q$) of the elements in $\mathcal A$.
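For $m > 1$, the integer representation $\zeta(p)$ can be computed directly from the $p$-ary vector $\mathsf p(\zeta)$; the following Python sketch (with coefficients given as integers in $\{0,\dotsc,p-1\}$, an assumption of the sketch) illustrates that this identifies $\mathbb F_{p^m}$ with $\{0,\dotsc,q-1\}$:

```python
# Integer representation zeta(p) = sum_i [p_i]_Z * p^(i-1) of an element of
# F_{p^m}, computed from its p-ary coefficient vector (p_1, ..., p_m).
# Coefficients are given as integers in {0, ..., p-1} (sketch assumption).
from itertools import product

def int_rep(coeffs, p):
    """Map p(zeta) = (p_1, ..., p_m) to the integer zeta(p)."""
    return sum(c * p**i for i, c in enumerate(coeffs))

# For F_9 = F_{3^2}: the element with polynomial 2 + 1*x has
# integer representation 2 + 1*3 = 5.
assert int_rep([2, 1], 3) == 5

# All q = p^m elements receive distinct integers in {0, ..., q-1}:
reps = {int_rep(c, 3) for c in product(range(3), repeat=2)}
assert reps == set(range(9))
```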
\subsection{Linear Codes over Finite Fields}
Let $\mathcal{C}$ denote a linear code of length $n$ and dimension $k$ over the finite field $\mathbb{F}_q$ with $q$ elements. The code $\mathcal{C}$ can be defined by an $r \times n$ parity-check matrix $\boldH = (h_{j,i})$, where $r \geq n-k$ and each matrix entry $h_{j,i} \in \mathbb F_q$, $i \in \mathcal{I}$ and $j \in \mathcal{J}$, where $\mathcal I = \range n$ and $\mathcal J = \range r$ are the column and row index sets, respectively, of $\bm H$. Then, $\mathcal{C} = \mathcal C(\bm H) = \{\bm{c}=(c_1,\dotsc,c_n)^T \in \mathbb{F}_q^n\colon \boldH \bm{c} = \bm{0} \}$, where $(\cdot)^T$ denotes the transpose of its vector argument. When represented by a factor graph, $\mathcal{I}$ is also the variable node index set and $\mathcal{J}$ is the check node index set. In the following, let $\Nv(i)$ (resp.\ $\Nc(j)$) denote the set of neighboring nodes of variable node $i$ (resp.\ check node $j$). Finally, call $\mathcal C$ an $(n, k, d)$ code if $d$ denotes the minimum Hamming weight of its nonzero codewords.
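For illustration, the codewords of a small code $\mathcal C(\bm H)$ over a prime field can be enumerated by brute force, since for prime $q$ the arithmetic of $\mathbb F_q$ is arithmetic modulo $q$ (this sketch is for exposition only, not a practical representation):

```python
# Brute-force enumeration of C(H) = {c in F_q^n : Hc = 0} for a prime
# field F_q, representing field elements by integers mod q (sketch only).
from itertools import product

def codewords(H, q):
    n = len(H[0])
    return [c for c in product(range(q), repeat=n)
            if all(sum(h * x for h, x in zip(row, c)) % q == 0 for row in H)]

# Length-3 "all-ones" SPC code over F_3: c_1 + c_2 + c_3 = 0 in F_3.
H = [[1, 1, 1]]
C = codewords(H, 3)
assert len(C) == 9        # q^(n-1) = 3^2 codewords
assert (1, 2, 0) in C     # 1 + 2 + 0 = 0 (mod 3)
```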
In the original work by Feldman \emph{et al.} \cite{fel05}, the ML decoding problem was stated as an integer program (IP) in the real space by using the above-defined $[\cdot]_\mathbb Z$ as the embedding of $\mathbb F_2$ into $\mathbb R$, where $\mathbb R$ denotes the real numbers, and then relaxed into a linear program using vectors that live in $[0,1]^n$. In the nonbinary case, the obvious generalization that maps $\zeta \in \mathbb F_q$ to the reals via its integer representation $\zeta(p) \in \mathbb R$ does not work out for several reasons. Instead, the following mapping $\mathsf f(\cdot)$ (see \cite{kautz1964nonrandom,hon12,liu14}) embeds elements of $\mathbb F_q$ into the Euclidean space of dimension $q$.
\begin{definition}\label{def:Constant}
We define the \emph{constant-weight embedding} of elements of $\mathbb F_q$ by
\begin{align*}
\mathsf f\colon \mathbb{F}_q &\rightarrow \{0,1\}^q \subseteq \mathbb R^q\\
\zeta &\mapsto \bm{x} = (x_0,\dotsc,x_{q-1})
\end{align*}
where $x_\delta = 1$ if $\delta = \zeta(p)$ is the integer representation of $\zeta$ and $x_\delta=0$ otherwise, and further the constant-weight embedding of \emph{column vectors} from $\mathbb F_q^n$ as
\begin{align*}
\mathsf F_{\mathrm v}\colon \mathbb F_q^n &\rightarrow \{0,1\}^{nq} \\
\bm\zeta = (\zeta_1,\dotsc,\zeta_n)^T &\mapsto \left( \mathsf f(\zeta_1) \mid \dotsc \mid \mathsf{f}(\zeta_n) \right)^T
\end{align*}
where $(\bm v_1\mid\dotsc\mid\bm v_n)$ denotes the concatenation of row vectors $\bm v_1,\dotsc,\bm v_n$.
\end{definition}
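A minimal Python sketch of $\mathsf f$ and $\mathsf F_{\mathrm v}$ for a prime field, indexing $\mathbb F_q$ by the integer representations $\{0,\dotsc,q-1\}$:

```python
# Constant-weight embedding f of a single field element and its vector
# version F_v; field elements are given as integers in {0, ..., q-1}.

def f(zeta, q):
    """Unit vector with a 1 at the integer representation of zeta."""
    x = [0] * q
    x[zeta] = 1
    return x

def F_v(cvec, q):
    """Concatenate the per-symbol embeddings of a codeword."""
    return [b for zeta in cvec for b in f(zeta, q)]

assert f(2, 3) == [0, 0, 1]
assert F_v([1, 2, 0], 3) == [0, 1, 0, 0, 0, 1, 1, 0, 0]
# Every embedded symbol has weight exactly 1 (hence "constant-weight"):
assert all(sum(f(z, 5)) == 1 for z in range(5))
```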
\begin{remark}\label{rem:zeroIndexing}
Motivated by the definition of $\mathsf f$, we identify, for any ground set $A$ (above, $A=\mathbb R$), $A^q$ with $A^{\mathbb F_q}$, i.e.,\ we use elements from $\mathbb F_q$ and their integer representations interchangeably for indexing such vectors. As a consequence, the index starts at $0$ when the integer representation is used, as opposed to normal vectors, which we index starting from $1$.
More generally, a space $A^{nq}$ which is related to $n$ embedded elements of $\mathbb F_q$ (such as $\mathbb R^{nq}$ in the above definition of $\mathsf F_{\mathrm v}$) is identified with $\left(A^{\mathbb F_q}\right)^n$, which is why we usually employ double-indexing to emphasize the $q$-blocks $\bm v_i$ of a vector $\bm v \in A^{nq}$, as in
\begin{equation} \notag
\bm v = (\bm v_1,\dotsc, \bm v_n) = (v_{1,0},\dotsc,v_{1,q-1},\dotsc,v_{n,0},\dotsc,v_{n,q-1})
\end{equation}
where $\bm v_i \in A^{q} = A^{\mathbb F_q}$.
\end{remark}
Observe that $\mathsf f$ defined in \cref{def:Constant} maps the elements of $\mathbb F_q$ to the vertices of the standard $(q-1)$-simplex embedded in $\mathbb R^q$, $S_{q-1} \coloneqq \conv( \{\bolde^i\}_{i=1}^q)$, where $\bolde^i$ is the $i$-th unit vector in $\mathbb R^q$. Hence, $\mathsf F_{\mathrm v}$ maps $\mathbb F_q^n$ onto the vertices of $S_{q-1}^n = S_{q-1} \times \dotsm \times S_{q-1}$ ($n$ times).
\begin{remark}\label{rem:Flanagan}
Flanagan \emph{et al.} \cite{fla09} have proposed the slightly more compact embedding (called \emph{Flanagan's embedding} in the sequel)
\begin{align*}
\mathsf f'\colon \mathbb F_q &\rightarrow \{0,1\}^{q-1} \subseteq \mathbb{R}^{q-1}\\
\zeta &\mapsto \bm{x} = (x_1,\dotsc,x_{q-1} )
\end{align*}
where $x_\delta = 1$ if $\delta=\zeta(p)$ and $x_\delta=0$ otherwise, with the analogous vector embedding $\mathsf F_{\mathrm v}'$. It has the advantage of using one less dimension per entry of $\mathbb F_q$, while being less \enquote{symmetric}, as the case $\zeta=0$ is structurally distinguished. Because of the latter, the constant-weight embedding turned out to be better suited for presenting the results of this paper. In \cref{sec:embeddings}, we establish a close relationship between $\mathsf f$ and $\mathsf f'$ and, in particular, show how to transform all of the results in this paper into their respective form under Flanagan's embedding.
\end{remark}
\subsection{LP Decoding of Nonbinary Codes}\label{sec:lpdecoding}
In this subsection, we review the LP decoding formulation proposed by Flanagan \emph{et al.} in \cite{fla09}, where, in contrast to \cite{fla09}, we use the constant-weight embedding. Let $\mathbb F_q$ and $\Sigma$, respectively, denote the input and output alphabets of a memoryless channel with input $X$ and output $Y$, and define for each $y \in \Sigma$ and $\delta \in \mathbb F_q$ the value $\gamma_{\delta} = \log \left( \frac{ {\rm Pr} \left(Y = y \mid X= 0 \right)}{{\rm Pr} \left(Y = y \mid X= \delta \right)} \right)$. Then, the function $\lambda\colon \Sigma \rightarrow \left( \mathbb{R} \cup \left\{ \pm \infty \right\} \right)^q$ is defined as
\begin{displaymath}
\lambda(y) = \boldgamma = \left(\gamma_0,\dotsc,\gamma_{q-1} \right).
\end{displaymath}
Furthermore, we define $\mathsf\Lambda_{\mathrm v}(\bm{y}) = \left( \lambda(y_1) \mid \dotsc \mid \lambda(y_n) \right)^T$ for $\bm{y}=(y_1,\dotsc,y_n)^T$. Now, the ML decoding problem can be written as \cite{fla09}
\begin{equation} \label{eq:MLformulation}
\begin{split}
\hat\boldc_\ML &= \argmin_{\bm{c} \in \mathcal{C}} \sum_{i=1}^n \log \left( \frac{ {\rm Pr} \left(Y=y_i \mid X=0 \right)\hfill}{{\rm Pr} \left( Y=y_i \mid X=c_i \right)} \right) \\
&= \argmin_{\bm{c} \in \mathcal{C}} \sum_{i=1}^n \lambda(y_i) \mathsf{f}(c_i)^T \\
&= \argmin_{\bm{c} \in \mathcal{C}} \mathsf\Lambda_{\mathrm v}(\bm{y})^T \mathsf{F}_{\rm v}(\bm{c})
\end{split}
\end{equation}
\begin{comment}
Define
\begin{equation} \notag
\mathcal{C}_j = \{ \bm{c} \in \mathbb{F}_q^n\,|\, \bm{h}_j \cdot \bm{c}^T = 0 \}
\end{equation}
where $\bm{h}_j = (h_{j,0},\dotsc,h_{j,n-1})$ is the $j$-th row of the parity-check matrix $\boldH$ and $0 \leq j \leq m-1$. Furthermore, let $\conv(\mathsf{F}_{\rm v}(\mathcal{C}_j))$ denote the convex hull of $\mathsf{F}_{\rm v}(\mathcal{C}_j)$ in $\mathbb{R}^n$. The (binary) \emph{fundamental polytope} $\mathcal{P}(\boldH)$ of the nonbinary parity-check matrix $\boldH$ is defined as %
\begin{equation} \label{eq:fundapoly}
\mathcal{P}(\boldH) = \bigcap\nolimits_{j=0}^{m-1} \conv(\mathsf{F}_{\rm v}(\mathcal{C}_j)).
\end{equation}
\end{comment}
where $y_1,\dotsc,y_n$ are the channel outputs. The problem in \cref{eq:MLformulation} can be relaxed into a linear program using the embedding from \cref{def:Constant} as follows \cite{fla09}:
\begin{equation} \label{eq:LPformulation}
\begin{split}
\hat\boldx_{\LP} = \argmin\quad & \mathsf\Lambda_{\mathrm v}(\boldy)^T \boldx \\
\text{s.\,t.}\quad & \boldx^{(j)} = \boldP_j \boldx \in \conv(\mathsf F_{\mathrm v}(\C_j)),\, \forall j \in \mathcal{J}
\end{split}
\end{equation}
where $ \boldx^{(j)}=(\boldx^{(j)}_{1},\dotsc,\boldx^{(j)}_{|\Nc(j)|})^T$, $\boldx^{(j)}_i = (x^{(j)}_{i,0},\dotsc,x^{(j)}_{i,q-1})$ for all $i \in \range{|\Nc(j)|}$, and $\bm P_j$ is a binary %
\emph{indicator} matrix that selects the variables from $\bm{x}$ that participate in the $j$-th check node. %
In (\ref{eq:LPformulation}), $\mathcal{C}_j$ represents the SPC code defined by the $j$-th check node, and $\conv(\mathsf{F}_{\rm v}(\mathcal{C}_j))$ is the convex hull of $\mathsf{F}_{\rm v}(\mathcal{C}_j)$ in $\mathbb{R}^{\abs{\mathcal{N}_{\rm c}(j)}\cdot q}$.
LP decoding, i.e.,\ using the LP relaxation \cref{eq:LPformulation} as a decoder (which is defined to output a decoding failure if the optimal solution $\hat\boldx_{\LP}$ does not happen to be integral), has several desirable properties. Most importantly, the so-called \emph{ML certificate} property \cite{fel05,fla09} assures that, if $\hat\boldx_\LP$ is a codeword, then $\hat\boldx_\LP = \hat\boldc_{\ML}$, where $\hat\boldc_{\ML}$ is the ML decoded codeword.
Note that the ML certificate property continues to hold if $\conv(\mathsf F_{\mathrm v}(\C_j))$ is replaced by a relaxation $\mathcal Q_j \supseteq \conv(\mathsf F_{\mathrm v}(\C_j))$. We will use the term \emph{LP decoding} also when such a further relaxation is used, which is the case in this paper because (except for $q \in \{2,3\}$) the constructed inequalities may describe only a proper relaxation of $\conv(\mathsf F_{\mathrm v}(\C_j))$.
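As a toy illustration of the objective in \cref{eq:MLformulation}, ML decoding of a short SPC code can be carried out by brute force; note that $\lambda(y_i)\mathsf f(c_i)^T$ simply selects the entry $\gamma_{c_i}$. The LLR vectors below are made up for the example (recall that $\gamma_0 = 0$ by definition):

```python
# Brute-force ML decoding via the embedded objective: minimize
# sum_i lambda(y_i) f(c_i)^T = sum_i gamma_{c_i} over all codewords.
# The code and the LLR vectors are toy examples (sketch assumptions).
from itertools import product

def ml_decode(Lam, C):
    """argmin over codewords c of sum_i Lam[i][c_i]."""
    return min(C, key=lambda c: sum(g[ci] for g, ci in zip(Lam, c)))

# Codewords of the length-3 all-ones SPC code over F_3:
C = [c for c in product(range(3), repeat=3) if sum(c) % 3 == 0]
# Made-up per-symbol LLR vectors (gamma_0, gamma_1, gamma_2), gamma_0 = 0:
Lam = [[0.0, 2.0, 3.0], [0.0, -1.5, 2.5], [0.0, 0.5, -1.0]]
assert ml_decode(Lam, C) == (0, 1, 2)   # cost -2.5, smallest among C
```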
\subsection{Background on Polyhedra}
The convex hull of a finite number of points in $\mathbb R^n$ is called a \emph{polytope}. It can be alternatively characterized as the (bounded) intersection of a finite number of halfspaces, i.e.,\ the solution space of a finite number of linear inequalities.
Let $\P\subseteq \mathbb R^n$ be a polytope. An inequality $\boldtheta^T \boldx \leq \kappa$ with $\boldtheta \in \mathbb R^n$ and $\kappa \in \mathbb R$ is \emph{valid} if it holds for any $\boldx \in \P$. Every valid inequality defines a \emph{face} $F = \{\boldx \in \P\colon \boldtheta^T \boldx = \kappa\}$ of $\P$, which is itself a polytope. For notational convenience, we will identify a face $F$ with its defining inequality $\boldtheta^T \boldx \leq \kappa$ as long as there is no risk of ambiguity.
The \emph{dimension} of a face (or polytope) $F$ is defined as the dimension of its affine hull $\aff(F)$, which is calculated as one less than the maximum number of affinely independent vectors in $F$. Recall that a set of $k$ vectors $\{\bm v^1,\dotsc, \bm v^k\} \subseteq \mathbb R^n$ is affinely independent if and only if the vectors $\{\bm v^2 - \bm v^1, \dotsc, \bm v^k - \bm v^1\}$ are linearly independent.
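The affine-independence criterion above is easy to check computationally by a rank computation on the difference vectors; the following sketch uses exact rational arithmetic to avoid floating-point issues:

```python
# Affine independence test: v^1, ..., v^k are affinely independent iff
# v^2 - v^1, ..., v^k - v^1 are linearly independent. Rank is computed
# exactly over the rationals by Gaussian elimination (a sketch).
from fractions import Fraction

def rank(rows):
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            factor = M[i][col] / M[r][col]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def affinely_independent(vs):
    diffs = [[a - b for a, b in zip(v, vs[0])] for v in vs[1:]]
    return rank(diffs) == len(diffs)

# The three vertices of the simplex S_2 are affinely independent ...
assert affinely_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
# ... but the midpoint of the first two lies in their affine hull:
assert not affinely_independent(
    [[1, 0, 0], [0, 1, 0], [0, 0, 1], [Fraction(1, 2), Fraction(1, 2), 0]])
```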
A face $F$ with $\dim(F) = \dim(\P) - 1$ is called a \emph{facet}, while a zero-dimensional face is a \emph{vertex} of $\P$. It is a basic result of polyhedral theory that a face $F$ actually contains $\dim(F)+1$ affinely independent \emph{vertices} of $\P$. Conversely, a face $F$ is uniquely determined by $\dim(F)+1$ affinely independent vertices of $\P$ that are contained in $F$.
Facets are important because every \enquote{minimal} representation of a polytope $\P$ is of the form
\[ \P = \left\{ \boldx \in \mathbb R^n\colon \bm A \bm x = \bm b, \bm C\bm x \leq \bm d\right\}\]
where $\bm A$ is an $r \times n$ matrix of rank $r = n -\dim(\P)$ such that $\aff(\P) = \{\boldx\colon \bm A \boldx = \boldb\}$, and $\bm C$ is an $s\times n$ matrix such that the rows of $\bm C \bm x \leq \bm d$ are in one-to-one correspondence with the $s$ facets of $\P$. For a more rigorous treatment of this topic, see, e.g., \cite{NemhauserWolsey88}.
\section{Comparison of Embeddings of $\mathbb F_q$} \label{sec:embeddings}
In this section, we establish a close relationship between the convex hull of a nonbinary code under the embeddings $\mathsf F_{\mathrm v}$ (\cref{def:Constant}) and $\mathsf F_{\mathrm v}'$ (\cref{rem:Flanagan}), respectively, of $\mathbb F_q^n$ into the Euclidean space. Note that, while $\mathsf F_{\mathrm v}$ maps $\mathbb F_q^n$ to the vertices of $S_{q-1}^n \subseteq \mathbb R^{nq}$, the embedding $\mathsf F_{\mathrm v}'$ maps to the vertices of $\hat S_{q-1}^n \subseteq \mathbb R^{(q-1)n}$, where
\[ \hat S_{q-1} = \conv( \{\bm 0\} \cup \{\bm e^i\}_{i=1}^{q-1}) \subset \mathbb R^{q-1} \]
is the full-dimensional embedding of the $(q-1)$-simplex. Geometrically, $S_{q-1}^n$ exhibits a higher symmetry (cf.\ \cref{fig:fvsfprime}). For this reason, we found the constant-weight embedding more helpful for grasping the geometry of nonbinary linear codes. In addition, several formulas turned out more compact under this embedding. On the other hand, the following results show that the choice of the embedding does not affect the key polyhedral properties. Note that these results hold for arbitrary finite fields $\mathbb F_q = \mathbb F_{p^m}$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5,z=-5mm]
\filldraw[shade,fill=orange,fill opacity=.5] (1,0,0) -- (0,1,0) -- (0,0,1) -- cycle;
\draw[axis] (-.3, 0, 0) -- (1.3, 0, 0);
\draw[axis] (0, -.3, 0) -- (0, 1.3, 0);
\draw[axis] (0, 0, -.3) -- (0, 0, 1.3);
\node[dot,label=below:$\mathsf f(1)$] at (1,0,0) {};
\node[dot,label=above right:$\mathsf f(2)$] at (0,1,0) {};
\node[dot,label=left:$\mathsf f(0)$] at (0,0,1) {};
\node at (.3, .3) {$S_2$};
\begin{scope}[xshift=3cm]
\filldraw[fill=orange!80!black,fill opacity=.5] (0,0) -- (1,0) -- (0,1) -- cycle;
\draw[axis] (-.3, 0) -- (1.3, 0);
\draw[axis] (0, -.3) -- (0, 1.3);
\node at (.3, .3) {$\hat S_2$};
\node[dot,label=below left:$\mathsf f'(0)$] at (0,0) {};
\node[dot,label=above right:$\mathsf f'(1)$] at (1, 0) {};
\node[dot,label=left:$\mathsf f'(2)$] at (0, 1) {};
\end{scope}
\end{tikzpicture}
\vspace{-2ex}
\caption{Constant-weight embedding $\mathsf f$ (left) and Flanagan's embedding $\mathsf f'$ (right) of $\mathbb F_3$ into $\mathbb R^3$ and $\mathbb R^2$, respectively.
Note that $S_2$ is an equilateral triangle, while $\hat S_2$ is not.}
\label{fig:fvsfprime}
\vskip -3ex
\end{figure}
\begin{lemma}\label{lem:embeddings}
Let
\begin{align*}
&\mathsf P_{\mathrm v}\colon S_{q-1}^n \rightarrow \hat S_{q-1}^n\\
&(\mathsf P_{\mathrm v}(\boldx))_{i,j} = x_{i,j}&\text{for }i\in \range n, j\in \mathbb F_q \setminus \{0\}
\end{align*}
be the map that \enquote{projects out} the entries $x_{i,0}$, and let
\begin{align*}
&\mathsf L_{\mathrm v}\colon \hat S_{q-1}^n \rightarrow S_{q-1}^n \\
&(\mathsf L_{\mathrm v}(\boldx'))_{i,j} = \begin{cases}
1 - \sum_{k=1}^{q-1} x'_{i,k}&\text{if }j=0,\\
x'_{i,j} &\text{otherwise}
\end{cases} \quad \text{(for }i\in\range n\text{)}
\end{align*}
\enquote{lift} $\hat S_{q-1}^n$ onto $S_{q-1}^n$. Then, $\mathsf P_{\mathrm v} = \mathsf L_{\mathrm v}^{-1}$ and $\mathsf L_{\mathrm v} = \mathsf P_{\mathrm v}^{-1}$. In particular, both maps are bijective. Furthermore, $\mathsf P_{\mathrm v}(\mathsf F_{\mathrm v}(\bm\xi)) = \mathsf F_{\mathrm v}'(\bm\xi)$ and $\mathsf L_{\mathrm v}(\mathsf F_{\mathrm v}'(\bm\xi)) = \mathsf F_{\mathrm v}(\bm\xi)$ for any $\bm\xi \in \mathbb F_q^n$.
\end{lemma}
\begin{IEEEproof}
The statements can be easily verified by running through the cases. For example, $\mathsf L_{\mathrm v}(\mathsf P_{\mathrm v}(\bm x))_{i,0} = 1 - \sum_{k \neq 0} (\mathsf P_{\mathrm v}(\bm x))_{i,k} = 1 - \sum_{k \neq 0} x_{i,k} = x_{i,0}$, where the last step holds because $\bm x_i \in S_{q-1}$ and hence $\sum_{k \in \mathbb F_q} x_{i,k} = 1$.
\end{IEEEproof}
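For a single $q$-block ($n=1$, $q=3$), the maps $\mathsf P_{\mathrm v}$ and $\mathsf L_{\mathrm v}$ of \cref{lem:embeddings} reduce to dropping and restoring the $0$-th coordinate; a small sketch (using dyadic rationals so that floating-point arithmetic is exact):

```python
# P_v ("project out" x_0) and L_v ("lift" by restoring x_0) for one
# q-block with q = 3; a sketch of the single-block (n = 1) case.

def P(x):
    """Drop the 0-th entry of the q-block."""
    return x[1:]

def L(xp):
    """Restore x_0 = 1 - sum of the remaining entries."""
    return [1 - sum(xp)] + list(xp)

# On the simplex the maps are mutually inverse (dyadic values are exact):
x = [0.25, 0.5, 0.25]        # a point of S_2 (entries sum to 1)
assert L(P(x)) == x
xp = [0.25, 0.5]             # a point of the projected simplex
assert P(L(xp)) == xp
# Embedded field elements map between f and f' (Flanagan's embedding):
assert P([0, 0, 1]) == [0, 1]      # f(2) -> f'(2) over F_3
assert L([0, 0]) == [1, 0, 0]      # f'(0) -> f(0)
```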
Let $\mathcal C$ be a linear code of length $n$ defined over the finite field $\mathbb F_q$, $q=p^m$, with $p$ prime and $m \geq 1$. Let $\P= \conv(\mathsf F_{\mathrm v}(\mathcal C))$ and $\P' = \conv(\mathsf F_{\mathrm v}'(\mathcal C))$. From the above lemma, it follows immediately that $\P' = \mathsf P_{\mathrm v}(\P)$ and $\P= \mathsf L_{\mathrm v}(\P')$. Because $\mathsf P_{\mathrm v}$ and $\mathsf L_{\mathrm v}$ are affine linear and bijective, we also get the following.
\begin{lemma} \label{lem:affinely}
The vectors $\boldx^1,\dotsc, \boldx^k \in S_{q-1}^n$ are affinely independent if and only if $\mathsf P_{\mathrm v}(\boldx^1),\dotsc, \mathsf P_{\mathrm v}(\boldx^k) \in \hat S_{q-1}^n$ are affinely independent.
\end{lemma}
\begin{corollary}\label{cor:facetEquiv}
$F$ is a face of $\P$ with $\dim(F)=\delta$ if and only if $\mathsf P_{\mathrm v}(F)$ is a face of $\P'$ with $\dim(\mathsf P_{\mathrm v}(F))=\delta$. In particular, $\dim(\P) = \dim(\P')$, and the facets of both polytopes are in one-to-one correspondence.
\end{corollary}
The particularly simple form of $\mathsf P_{\mathrm v}$ even allows us to immediately read off a description of $\P'$ by means of linear (in)equalities from such a description of $\P$; see Appendix~\ref{app:embeddings} for details.
In addition to \cref{lem:affinely}, the following lemma will become important in several proofs later on.
\begin{lemma}\label{lem:affInd}
Let $q = p^m \geq 3$, $m \geq 1$, and assume $\boldc$ and $\boldc^0, \boldc^1,\dotsc,\boldc^k \in \mathbb F_q^n$. The following are equivalent:
\begin{enumerate}
\item $\mathsf F_{\mathrm v}(\boldc^0), \dotsc, \mathsf F_{\mathrm v}(\boldc^k)$ are affinely independent.
\item $\mathsf F_{\mathrm v}(\boldc + \boldc^0), \dotsc, \mathsf F_{\mathrm v}(\boldc + \boldc^k)$ are affinely independent.
\item $\mathsf F_{\mathrm v}'(\boldc^1-\boldc^0),\dotsc,\mathsf F_{\mathrm v}'(\boldc^k-\boldc^0)$ are \emph{linearly} independent.
\end{enumerate}
\end{lemma}
\begin{IEEEproof}
By definition of $\mathsf F_{\mathrm v}$ and $\mathsf f$, adding a \emph{fixed} vector $\bm c$ to each $\boldc^j$ results in a \emph{fixed} permutation of the entries in each $q$-block of $\mathsf F_{\mathrm v}(\boldc^j)$. As this permutation has no effect on the affine independence, the equivalence of 1) and 2) follows.
Assume 1) holds. Then, by 2) with $\boldc = -\boldc^0$, the vectors $\mathsf F_{\mathrm v}(\bm 0) = \mathsf F_{\mathrm v}(\boldc^0-\boldc^0), \mathsf F_{\mathrm v}(\boldc^1-\boldc^0),\dotsc, \mathsf F_{\mathrm v}(\boldc^k-\boldc^0)$ are affinely independent, hence by \cref{lem:affinely} the vectors \[\mathsf F_{\mathrm v}'(\bm 0),\mathsf F_{\mathrm v}'(\bm c^1-\bm c^0),\dotsc, \mathsf F_{\mathrm v}'(\boldc^k-\boldc^0)\] are affinely independent, which by definition of affine independence is equivalent to \[\mathsf F_{\mathrm v}'(\bm c^1-\bm c^0) - \mathsf F_{\mathrm v}'(\bm 0),\dotsc, \mathsf F_{\mathrm v}'(\bm c^k-\bm c^0) - \mathsf F_{\mathrm v}'(\bm 0)\] being linearly independent. But $\mathsf F_{\mathrm v}'(\bm 0) = \bm 0$, which concludes the proof.
\end{IEEEproof}
\section{Dimension of $\P$ and Rotational Symmetry}
\label{sec:general_results}
\subsection{Simplex Constraints and Dimension of $\P$}
In this subsection, we determine the dimension of $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$, where $\mathcal C$ is an \enquote{all-ones} SPC code (i.e.,\ its parity-check matrix contains only ones) of length $d$ defined over the field $\mathbb F_p$, $p$ prime, and show that the linear equations and inequalities describing $S_{p-1}^d$ define the affine hull and are facets, respectively, of $\P$.
\begin{definition}
\label{def:spx}
For the finite field $\mathbb F_p$ and $d \geq 1$, let $\Delta_p^d$ denote the set of $p \cdot d$ inequalities and $d$ equations in $\mathbb R^{dp}$ that define $S_{p-1}^d$, i.e.,\ the inequalities
\begin{subequations}
\label{eq:spx}
\begin{align}
&x_{i,j} \geq 0 &&\text{for }i \in \range d\text{ and } j \in \mathbb F_p \label{eq:spx-geq}\\
\text{and}\quad&\sum_{j \in \mathbb F_p} x_{i,j} = 1 &&\text{for }i \in \range d \label{eq:spx-eq};
\end{align}
\end{subequations}
we call the (in)equalities in $\Delta_p^d$ \emph{simplex constraints}.
\end{definition}
\begin{proposition}\label{prop:Pjdim}
Let $\mathcal C$ be a length-$d$ \enquote{all-ones} SPC code over the finite field $\mathbb F_p$ and $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$. For $d \geq 3$ (if $p=2$, for $d \geq 4$),
\begin{enumerate}
\item $\dim(\P) = d(p-1)$, \label{prop:Pjdim-1}
\item the affine hull of $\P$ is $\aff(\P) = \{\boldx \colon \cref{eq:spx-eq} \text{ holds for }i \in \range d\}$, and
\item \cref{eq:spx-geq} defines a facet of $\P$ for $i \in \range d$ and $j \in \mathbb F_p$.
\end{enumerate}
\end{proposition}
\begin{IEEEproof}
The results for $p=2$ are already known; see, e.g., \cite[Thm.~III.2]{hel12}. The proof for $p \neq 2$ is given in Appendix~\ref{proofPjdim}.
\end{IEEEproof}
The simplex constraints $\Delta_p^d$ can be interpreted as generalized box constraints that restrict, for $i \in \range d$, the $p$ variables representing $\mathsf f(c_i)$ to the simplex $S_{p-1}$, where $(c_1,\dotsc,c_d)^T$ denotes a codeword of the SPC code. As they are independent of $\boldH$, an arbitrary code $\mathcal C$ of length $n$ thus has only $n(p+1)$ simplex constraints ($n$ equations and $pn$ inequalities) in total. These will be denoted by $\Delta_p^\mathcal C$.
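As a sanity check, one can verify by brute force that every embedded codeword of an \enquote{all-ones} SPC code satisfies the simplex constraints $\Delta_p^d$, and that the code has $p^{d-1}$ codewords (here $p=5$, $d=4$):

```python
# Every embedded codeword of the all-ones SPC code over F_p satisfies
# the simplex constraints: x_{i,j} >= 0 and sum_j x_{i,j} = 1 for each i.
# Checked exhaustively for p = 5, d = 4 (a sketch, prime field assumed).
from itertools import product

p, d = 5, 4

def f(zeta):
    x = [0] * p
    x[zeta] = 1
    return x

spc = [c for c in product(range(p), repeat=d) if sum(c) % p == 0]
for c in spc:
    blocks = [f(ci) for ci in c]
    assert all(v >= 0 for b in blocks for v in b)   # inequalities (nonneg.)
    assert all(sum(b) == 1 for b in blocks)         # equations (sum to 1)

assert len(spc) == p ** (d - 1)   # the SPC code has p^(d-1) codewords
```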
\subsection{Symmetries of $\P$ and General SPC Codes}
\label{sec:rotation}
In this subsection, we develop a notion of \emph{rotating} the simplex $S_{q-1}$ (which corresponds to the embedding of $\mathbb F_q$) according to a permutation of the elements of $\mathbb F_q$. This allows both to reduce the study of general SPC codes to \enquote{all-ones} SPC codes and derive many valid inequalities from a single one by using automorphisms of the code.
Rotation is based on a work by Liu and Draper (see \cite[Sec.~IV.C]{liu14}). The results in this section hold for an arbitrary finite field $\mathbb F_q = \mathbb F_{p^m}$.
\begin{definition}\label{def:rotation1}
Let $\S_q$ denote the group of permutations of the numbers $\{0,\dotsc,q-1\}$, which we will identify, via the integer representation, with the permutations of $\mathbb F_q$.
For each $\pi \in \mathbb S_q$, the \emph{rotation operation} $\rot_\pi$ on the vector space $\mathbb R^q$ is defined by
\[ \rot_\pi(\bm a = (a_0,\dotsc, a_{q-1})) = (a_{\pi(0)}, \dotsc, a_{\pi(q-1)}).\]
\end{definition}
Note that $\rot_\pi$ is a simple coordinate permutation and hence a vector-space automorphism of $\mathbb R^q$ that, in particular, maps the simplex $S_{q-1} \subset \mathbb R^q$ onto itself.
We now extend the above definitions to vectors in $\mathbb F_q^l$ and $\mathbb R^{lq}$, respectively.
\begin{definition}\label{def:rotation2}
Let $\bm\pi = (\pi_1,\dotsc, \pi_l) \in \S_q^l$ be a vector of $l$ permutations and $\bm\zeta=(\zeta_1,\dotsc,\zeta_l) \in \mathbb F_q^l$. Define
\[ \bm\pi(\bm\zeta) = (\pi_1(\zeta_1),\dotsc,\pi_l(\zeta_l))\]
and the corresponding rotation of $\mathbb R^{lq}$ as
\begin{align*}
\rot_{\bm\pi}\colon &\mathbb R^{lq} \rightarrow \mathbb R^{lq}\\
&(\bm a_1,\dotsc, \bm a_l) \mapsto (\rot_{\pi_1}(\bm a_1),\dotsc,\rot_{\pi_l}(\bm a_l))
\end{align*}
by applying the $l$ individual rotations of $S_{q-1}$ component-wise to the $\bm a_i$.
\end{definition}
The following lemma links the two operations defined above.
\begin{lemma}\label{lem:rotation-embedding-swap}
Let $\bm\pi =(\pi_1,\dotsc,\pi_l) \in \mathbb S_q^l$ and $\bm\zeta = (\zeta_1,\dotsc, \zeta_l) \in \mathbb F_q^l$. Then,
$\rot_{\bm\pi}(\mathsf F_{\mathrm v}(\bm \zeta)) = \mathsf F_{\mathrm v}(\bm\pi^{-1}(\bm\zeta))$,
where $\bm\pi^{-1} = (\pi_1^{-1},\dotsc,\pi_l^{-1})$.
\end{lemma}
\begin{IEEEproof}
We show for $i \in \range l$ that $\rot_{\pi_i}(\mathsf f(\zeta_i)) = \mathsf f(\pi_i^{-1}(\zeta_i))$. Choose $i \in \range l$ with $\pi_i \in \S_q$ and $\zeta_i \in \mathbb F_q$, and denote $\bm x_i = \mathsf f(\zeta_i) \in \mathbb R^q$. By \cref{def:Constant}, $x_{i,\pi_i(j)} = 1 \Leftrightarrow \zeta_i(p) = \pi_i(j) \Leftrightarrow \pi_i^{-1}(\zeta_i(p)) = j \Leftrightarrow (\pi_i^{-1}(\zeta_i))(p) = j$. Hence, the $j$-th component of $\mathsf f(\pi_i^{-1}(\zeta_i))$ is equal to $1$ if and only if $x_{i,\pi_i(j)} = 1$, i.e.,\ if and only if the $j$-th component of $\rot_{\pi_i}(\mathsf f(\zeta_i))$ is equal to $1$ (by \cref{def:rotation1}, the $j$-th component of $\rot_{\pi_i}(\bm x_i)$ is $x_{i,\pi_i(j)}$), which shows the claim and hence (by component-wise application) immediately proves the vector case of the lemma.
\end{IEEEproof}
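\Cref{lem:rotation-embedding-swap} (in the single-symbol case) can be verified exhaustively for small $q$; the sketch below checks $\rot_\pi(\mathsf f(\zeta)) = \mathsf f(\pi^{-1}(\zeta))$ for all $\pi \in \S_5$ and all $\zeta \in \mathbb F_5$:

```python
# Exhaustive check of rot_pi(f(zeta)) = f(pi^{-1}(zeta)) for q = 5,
# with rot_pi(a)_j = a_{pi(j)} as in the rotation definition (a sketch).
from itertools import permutations

q = 5

def f(zeta):
    x = [0] * q
    x[zeta] = 1
    return x

def rot(pi, a):
    """Coordinate permutation: j-th output entry is a_{pi(j)}."""
    return [a[pi[j]] for j in range(q)]

def inv(pi):
    """Inverse permutation."""
    out = [0] * q
    for j, pj in enumerate(pi):
        out[pj] = j
    return out

for pi in permutations(range(q)):
    for zeta in range(q):
        assert rot(pi, f(zeta)) == f(inv(pi)[zeta])
```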
\begin{theorem}\label{thm:rotation-general}
Let $\mathcal S \subseteq \mathbb F_q^l$ be a set and $\bm\pi \in \S_q^l$ a permutation vector. Further, let $\P = \conv(\mathsf F_{\mathrm v}(\mathcal S))$ and $\tilde\P = \conv(\mathsf F_{\mathrm v}(\bm\pi(\mathcal S)))$. Then,
\begin{enumerate}
\item For $\bm x\in \mathbb R^{lq}$, $\bm x \in \P$ if and only if $\rot_{\bm\pi}(\bm x) \in \tilde\P$.
\label{part:rotation-polytope}
\item The inequality $\bm a^T \bm x \leq b$ with $\bm a \in \mathbb R^{lq}$, $b \in \mathbb R$ is valid for $\P$ and defines the face $F$ of $\P$, if and only if $\rot_{\bm\pi}(\bm a)^T\bm x \leq b$ is valid for $\tilde\P$ and defines the face $\rot_{\bm\pi}(F)$ of $\tilde\P$. In particular, both $F$ and $\rot_{\bm\pi}(F)$ have the same dimension.
\label{part:rotation-inequality}
\end{enumerate}
\end{theorem}
To prove \cref{thm:rotation-general}, we need an auxiliary result.
\begin{lemma}\label{lem:permScalProd}
If $\bm a,\bm b \in \mathbb R^{lq}$ and $\bm\pi \in \S_q^l$, then \begin{enumerate}
\item $\rot_{\bm\pi}(\bm a)^T \rot_{\bm\pi}(\bm b) = \bm a^T\bm b$, and
\item $\rot_{\bm\pi}(\bm a)^T \bm b = \bm a^T \rot_{\bm\pi}^{-1}(\bm b)$.
\end{enumerate}
\end{lemma}
\begin{IEEEproof}
The first claim is obvious because both sums contain the same elements, only in a different order. To show 2), apply 1) with $\rot_{\bm\pi}^{-1}$:
$\rot_{\bm\pi}(\bm a)^T \bm b = \rot_{\bm\pi}^{-1}(\rot_{\bm\pi}(\bm a))^T \rot_{\bm\pi}^{-1}(\bm b) = \bm a^T\rot_{\bm\pi}^{-1}(\bm b)$.
\end{IEEEproof}
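Both identities of \cref{lem:permScalProd} can be confirmed numerically. The Python sketch below assumes the blockwise-permutation convention $\rot_{\bm\pi}(\bm a)_{(i,j)} = a_{(i,\pi_i(j))}$ (an assumption on our part; the two identities hold for either index convention, since they only require that $\rot_{\bm\pi}$ permutes coordinates). All function names are ours.

```python
import random

def rot(pi_vec, a, q):
    # Blockwise coordinate permutation (assumed convention): entry j of
    # block i of the result is entry pi_i(j) of block i of a.
    return [a[i * q + perm[j]] for i, perm in enumerate(pi_vec) for j in range(q)]

def rot_inv(pi_vec, a, q):
    # Apply the inverse permutation in each block.
    inv = [[perm.index(j) for j in range(q)] for perm in pi_vec]
    return rot(inv, a, q)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(1)
q, l = 5, 3
pi_vec = [random.sample(range(q), q) for _ in range(l)]  # random permutations
a = [random.randint(-9, 9) for _ in range(l * q)]
b = [random.randint(-9, 9) for _ in range(l * q)]

# 1) rot preserves inner products
assert dot(rot(pi_vec, a, q), rot(pi_vec, b, q)) == dot(a, b)
# 2) rot can be moved to the other factor as its inverse
assert dot(rot(pi_vec, a, q), b) == dot(a, rot_inv(pi_vec, b, q))
```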
\begin{IEEEproof}[Proof of \cref{thm:rotation-general}]\\
Part~\ref{part:rotation-polytope}): For $\bm\zeta = (\zeta_1,\dotsc,\zeta_l) \in \mathcal S$,
\[
\mathsf F_{\mathrm v}(\bm \zeta) \in \P
\Leftrightarrow \mathsf F_{\mathrm v}(\bm\pi(\bm \zeta)) = \rot_{\bm\pi}^{-1}(\mathsf F_{\mathrm v}(\bm\zeta)) \in \tilde\P \]
(where the equality is by \cref{lem:rotation-embedding-swap}), which shows the claim for all vertices of $\P$ and $\tilde\P$. But since $\rot_{\bm\pi}$ is linear, the result extends to convex combinations, which proves the first part.\\
Part~\ref{part:rotation-inequality}): It holds that
\begin{align*}
&\bm a^T \bm x \leq b&&\text{for all }\bm x \in \P\\
\Leftrightarrow\;&\bm a^T \rot_{\bm\pi}^{-1}(\tilde{\bm x}) \leq b&&\text{for all } \tilde{\bm x} \in \tilde\P
\intertext{because, by Part~\ref{part:rotation-polytope}, $\bm x$ equals $\rot_{\bm \pi}^{-1}(\tilde{\bm x})$ for some $\tilde{\bm x}\in \tilde\P$,}
\Leftrightarrow\;&\rot_{\bm \pi}(\bm a)^T \tilde{\bm x} \leq b &&\text{for all }\tilde{\bm x} \in \tilde\P\text{ (by \cref{lem:permScalProd})}
\end{align*}
which shows the first claim of Part~\ref{part:rotation-inequality}). If now $F = \{\bm x \colon \bm a^T \bm x = b\text{ and }\bm x \in \P\}$, then
\ifonecolumn
\begin{align*}
\rot_{\bm\pi}(F) &= \{\rot_{\bm\pi}(\bm x)\colon \bm a^T \bm x = b\text{ and }\bm x \in \P\} \\
&= \{\rot_{\bm \pi}(\bm x)\colon \bm a^T \bm x = b \text{ and }\rot_{\bm \pi}(\bm x) \in \tilde\P\}\\
&= \{\rot_{\bm \pi}(\bm x)\colon \rot_{\bm\pi}(\bm a)^T \rot_{\bm\pi}(\bm x) = b \text{ and }
\rot_{\bm\pi}(\bm x) \in \tilde\P\}\\
&=\{\tilde{\bm x}\colon \rot_{\bm\pi}(\bm a)^T \tilde{\bm x} = b\text{ and }\tilde{\bm x} \in \tilde\P\}
\end{align*}
\else
\begin{align*}
\rot_{\bm\pi}(F) &= \{\rot_{\bm\pi}(\bm x)\colon \bm a^T \bm x = b\text{ and }\bm x \in \P\} \\
&= \{\rot_{\bm \pi}(\bm x)\colon \bm a^T \bm x = b \text{ and }\rot_{\bm \pi}(\bm x) \in \tilde\P\}\\
&= \{\rot_{\bm \pi}(\bm x)\colon \rot_{\bm\pi}(\bm a)^T \rot_{\bm\pi}(\bm x) = b \text{ and }\\
&\qquad\qquad\rot_{\bm\pi}(\bm x) \in \tilde\P\}\\
&=\{\tilde{\bm x}\colon \rot_{\bm\pi}(\bm a)^T \tilde{\bm x} = b\text{ and }\tilde{\bm x} \in \tilde\P\}
\end{align*}
\fi
where we have again used Part~\ref{part:rotation-polytope} and \cref{lem:permScalProd}. The last line is the definition of the face of $\tilde\P$ induced by $\rot_{\bm\pi}(\bm a)^T \bm x \leq b$. Because $\rot_{\bm\pi}$ preserves affine independence, both $F$ and $\rot_{\bm\pi}(F)$ have the same dimension.
\end{IEEEproof}
In the following, \cref{thm:rotation-general} is applied to two special cases, which yields the key symmetry results for SPC codeword polytopes.
\begin{definition}\label{def:rotMultByConstant}
For $0 \neq h \in \mathbb F_q$, define $\varphi_h \in \S_q$ by $\varphi_h(\zeta) = h\cdot\zeta$ for $\zeta \in \mathbb F_q$ (note again the identification of $\S_q$ and the bijections on $\mathbb F_q$), and denote by
$\GL(\mathbb F_q) = \{ \varphi_h\colon 0 \neq h \in \mathbb F_q\}$
the general linear group of $\mathbb F_q$ (as a 1-dimensional vector space over $\mathbb F_q$).
For $\bm h = (h_1,\dotsc, h_d) \in (\mathbb F_q \setminus \{0\})^d$, the corresponding map from $\GL(\mathbb F_q)^d$ is named
$\bm\varphi_{\bm h} = (\varphi_{h_1},\dotsc,\varphi_{h_d})$. For convenience, we will abbreviate $\rot_{\bm \varphi_{\bm h}}$ as $\rot_{\bm h}$. In the event that $h_1 = \dotsm = h_d = h$, we abbreviate $\bm\varphi_{\bm h} = \bm\varphi_h$ and $\rot_{\bm h} = \rot_h$.
\end{definition}
\begin{corollary}\label{cor:rotation-generalSPC}
Let $\mathcal C$ be an \enquote{all-ones} SPC code of length $d$, and $\mathcal C(\bm h)$ an arbitrary SPC code defined by the parity-check vector $\bm h = (h_1,\dotsc, h_d)$ with $h_i \neq0$ for $i \in \range d$.
Further, let $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$ and $\P(\bm h) = \conv(\mathsf F_{\mathrm v}(\mathcal C(\bm h)))$. Then,
\begin{enumerate}
\item $\P(\bm h) = \rot_{\bm h}(\P)$ and $\P = \rot_{\bm h}^{-1}(\P(\bm h))$, and
\item $\bm a^T \bm x \leq b$ is valid for $\P$ if and only if $\rot_{\bm h}(\bm a)^T \bm x \leq b$ is valid for $\P(\bm h)$.
\end{enumerate}
\end{corollary}
\begin{IEEEproof}
Follows from \cref{thm:rotation-general} with $\mathcal S = \mathcal C$ and $\bm\pi = \bm\varphi_{\bm h}$ because $\mathcal C(\bm h) = \bm\varphi_{\bm h}(\mathcal C)$ by definition.
\end{IEEEproof}
The corollary shows that $\P$ and $\P(\bm h)$ are equivalent up to an index permutation; in particular, they coincide in most interesting structural properties such as dimension, number of facets, volume, etc. By the second part, a description (or a relaxation) of $\P$ by means of linear (in)equalities immediately leads to a description (or an equally tight relaxation) of $\P(\bm h)$.
Another special case of \cref{thm:rotation-general} reveals symmetries \emph{within} the SPC codeword polytope $\P(\bm h)$. For this, we need another subclass of the permutations of $\S_q$.
\begin{definition}\label{def:glq}
By $\Aut{(\F_q, +)}$ we denote the set of automorphisms of the additive group $(\mathbb F_q,+)$, i.e.,\ bijections $\varphi$ on $\mathbb F_q$ that satisfy $\varphi(\zeta+\eta) = \varphi(\zeta) + \varphi(\eta)$ for all $\zeta, \eta \in \mathbb F_q$ (note that this implies that $\varphi(0) = 0$).
\end{definition}
\begin{corollary}\label{cor:rotation-autfq}
Let $\mathcal C$, $\P$, $\mathcal C(\bm h)$, and $\P(\bm h)$ be as above, let $\varphi \in \Aut{(\F_q, +)}$, and let $\bm\varphi = (\varphi,\dotsc,\varphi)$ ($d$ times). Then,
\begin{enumerate}
\item $\bm a^T \bm x \leq b$ valid for $\P$ $\Leftrightarrow$ $\rot_{\bm\varphi}(\bm a)^T \bm x \leq b$ valid for $\P$, and
\item $\bm a^T \bm x \leq b$ valid for $\P(\bm h)$ $\Leftrightarrow$ $\rot_{\bm\varphi_{\bm h}\circ \bm\varphi \circ \bm\varphi_{\bm h}^{-1}}(\bm a)^T \bm x \leq b$ valid for $\P(\bm h)$.
\end{enumerate}
\end{corollary}
\begin{IEEEproof}
For the first statement, we can apply \cref{thm:rotation-general} with $\mathcal S=\mathcal C$ and $\bm\pi = \bm\varphi$ because $\mathcal C = \bm\varphi(\mathcal C)$: $\bm c \in \mathcal C \Leftrightarrow \sum_{i=1}^d c_i = 0 \Leftrightarrow \varphi(\sum_i c_i) = \varphi(0) = 0 \Leftrightarrow \sum_i \varphi(c_i) = 0 \Leftrightarrow \bm\varphi(\bm c) \in \mathcal C$. The second statement then follows by applying \cref{cor:rotation-generalSPC} twice and the first statement in between:
\begin{align*}
&\bm a ^T \bm x \leq b&&\text{valid for $\P(\bm h)$}\\
\Leftrightarrow\;
&\rot_{\bm h}^{-1}(\bm a)^T \bm x \leq b&&\text{valid for $\P$}\\
\Leftrightarrow\;
&\rot_{\bm\varphi}(\rot_{\bm h}^{-1}(\bm a))^T\bm x \leq b&&\text{valid for $\P$}\\
\Leftrightarrow\;
&\rot_{\bm h}(\rot_{\bm\varphi}(\rot_{\bm h}^{-1}(\bm a)))^T\bm x \leq b&&\text{valid for $\P(\bm h)$}.
\end{align*}
\end{IEEEproof}
\begin{remark}\label{rem:rotation-autfq-allones}
For $q=p^1=p$, $\Aut{(\F_p, +)} = \GL(\mathbb F_p)$ equals the set of multiplications with a nonzero constant as defined in \cref{def:rotMultByConstant}. By the distributive law, this in particular implies that $\Aut{(\F_p, +)}$ is commutative, such that $\bm\varphi_{\bm h}\circ \bm\varphi \circ \bm\varphi_{\bm h}^{-1}$ in the above corollary reduces to $\bm\varphi$.
In general, it can be shown that $\Aut{(\F_q, +)} = \GL(\mathbb F_p^m)$ (where the $m$-dimensional $\mathbb F_p$-space
$\mathbb F_p^m$ is, as a vector space, isomorphic to $\mathbb F_q$), which for $m>1$ is a strict superset of $\GL(\mathbb F_q)$ and not commutative.
\end{remark}
\section{Construction of Valid Inequalities From Building Blocks}\label{sec:bb}
In this section, we establish a construction of valid inequalities for the polytope $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$, where $\mathcal C$ is an \enquote{all-ones} SPC code of length $d$ over the finite field $\mathbb F_p$ with $p$ prime; the symbols $\P$, $\mathcal C$, $d$, and $\mathbb F_p$ will be used, with the above meaning, throughout the entire section. The construction is based on classes of \emph{building blocks} that are assembled to form the left-hand side of an inequality according to several rules developed in the following.
First, a set of building block classes is defined in \cref{sec:bb-const}. The method for constructing inequalities from a building block class is described in \cref{sec:hilo}. Based on these inequalities, we derive necessary and sufficient conditions for a building block class to induce only valid inequalities in \cref{sec:validInvalid}, and some necessary conditions for inequalities to define \emph{facets} in \cref{sec:facet-defining}. \Cref{sec:automorph} discusses the application of \cref{cor:rotation-autfq} in order to obtain a set of additional inequalities from each of the previously constructed ones. Finally, the issue of redundancy within the set of valid inequalities constructed by the method of this section is addressed in \cref{sec:redundant}.
\subsection{Building Block Construction}\label{sec:bb-const}
\begin{definition}[Basic Building Block Class]\label{def:bb}
For any $\bm m = (m_0, m_1, \dotsc,m_{p-1}) \in \{0,1\}^{p}$ with $m_0 = 0$, define $p$ vectors $\{\boldt_k^{\bm m}\}_{k \in \mathbb F_p} \subset \mathbb R^p$ %
by
\begin{enumerate}
\item $t^{\bm m}_{0,j} = [j]_\mathbb Z + m_j p$ for $j\in \mathbb F_p$, and
\item $t^{\bm m}_{k,j} = t^{\bm m}_{0, j + k} - t^{\bm m}_{0,k}$ for $k \in \mathbb F_p \setminus \{0\}$ and $j\in \mathbb F_p$.
\end{enumerate}
Each $\bm t_k^{\bm m}$ is called a \emph{basic building block}, and the set $\mathcal T^{\bm m} = \{ \bm t_k^{\bm m} \}_{k \in \mathbb{F}_p}$ %
of building blocks constructed in this way is called a \emph{basic building block class}.
\end{definition}
In the sequel, we will sometimes omit the prefix \enquote{basic} when it is clear from the context that we are talking about a basic building block class.
Note that \cref{rem:zeroIndexing} applies to the above definition, such that, e.g., $t^{\bm m}_{0,j+k} = t^{\bm m}_{0,[j+k]_\mathbb Z}$, i.e.,\ there is no need to take the index modulo $p$ because it is an element of $\mathbb F_p$.
\begin{example}\label{ex:bb-p3}
Let $p=3$ and $\bm m = (0,1,1)$. Then, $\boldt^{\bm m}_0 = (0, 4, 5)$, $\boldt^{\bm m}_1 = (0, 1, -4)$, and $\boldt^{\bm m}_2 = (0, -5, -1)$.
\end{example}
\begin{example}\label{ex:bb-p7}
Let $p=7$ and $\bm m = (0,1,1,0,0,1,0)$. Then,
\ifonecolumn
\begin{align}
\boldt^{\bm m}_0 &= (0, 8, 9,3,4,12,6),\;
\boldt^{\bm m}_1 = (0, 1, -5,-4,4,-2,-8),\;
\boldt^{\bm m}_2 = (0, -6, -5,3,-3,-9,-1), \notag \\
\boldt^{\bm m}_3 &= (0, 1, 9,3,-3,5,6),\;
\boldt^{\bm m}_4 = (0, 8, 2,-4,4,5,-1),\;
\boldt^{\bm m}_5 = (0, -6, -12,-4,-3,-9,-8), \notag \\
\boldt^{\bm m}_6 &= (0, -6, 2,3,-3,-2,6). \notag
\end{align}
\else
\begin{align}
\boldt^{\bm m}_0 &= (0, 8, 9,3,4,12,6), \notag \\
\boldt^{\bm m}_1 &= (0, 1, -5,-4,4,-2,-8), \notag \\
\boldt^{\bm m}_2 &= (0, -6, -5,3,-3,-9,-1), \notag \\
\boldt^{\bm m}_3 &= (0, 1, 9,3,-3,5,6), \notag \\
\boldt^{\bm m}_4 &= (0, 8, 2,-4,4,5,-1), \notag \\
\boldt^{\bm m}_5 &= (0, -6, -12,-4,-3,-9,-8), \notag \\
\boldt^{\bm m}_6 &= (0, -6, 2,3,-3,-2,6). \notag
\end{align}
\fi
\end{example}
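The construction of \cref{def:bb} is easily implemented; the following Python sketch (the helper name \texttt{build\_class} is ours, and elements of $\mathbb F_p$ are represented by their integer representatives $0,\dotsc,p-1$) reproduces \cref{ex:bb-p3,ex:bb-p7}.

```python
def build_class(p, m):
    """Basic building block class T^m per Definition (def:bb).

    m is a 0/1 vector of length p with m[0] = 0; block k, entry j is
    t_{k,j} = t_{0,j+k} - t_{0,k} with t_{0,j} = [j]_Z + m_j * p.
    """
    assert len(m) == p and m[0] == 0 and all(x in (0, 1) for x in m)
    t0 = [j + m[j] * p for j in range(p)]
    # The defining formula also covers k = 0 because t_{0,0} = 0.
    return [[t0[(j + k) % p] - t0[k] for j in range(p)] for k in range(p)]

# Example (ex:bb-p3): p = 3, m = (0, 1, 1)
assert build_class(3, (0, 1, 1)) == [[0, 4, 5], [0, 1, -4], [0, -5, -1]]

# Example (ex:bb-p7): p = 7, m = (0, 1, 1, 0, 0, 1, 0) -- first three blocks
T7 = build_class(7, (0, 1, 1, 0, 0, 1, 0))
assert T7[0] == [0, 8, 9, 3, 4, 12, 6]
assert T7[1] == [0, 1, -5, -4, 4, -2, -8]
assert T7[2] == [0, -6, -5, 3, -3, -9, -1]
```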
When $\bm m$ is fixed in the respective context, we will in the sequel frequently omit the superscript, i.e., write $\boldt_k$ instead of $\boldt^{\bm m}_k$.
\begin{lemma}\label{lem:bb-properties}
A class $\mathcal T^{\bm m}$ as defined above has the following properties:
\begin{enumerate}
\item For any $k,j \in \mathbb F_p$, $[t_{k,j}]_p = j$ (i.e.,\ $t_{k,j} \equiv [j]_\mathbb Z \pmod p$).
\label{lem:bb-property1}
\item $t_{k,0} = 0$ for all $k \in \mathbb{F}_p$.
\label{lem:bb-property2}
\item For $k \in \mathbb F_p$, let
$\set(\bm t_k) = \{t_{k,j}\colon j \in \mathbb F_p\}$ be the unordered set of entries of $\boldt_k$. Then,
\begin{equation}
\set(\bm t_k) = \set(\bm t_0) - t_{0,k} = \{ i - t_{0,k}\colon i \in \set(\bm t_0)\}. \label{eq:property-3}
\end{equation}
Hence, the entries (regardless of order) of different building blocks within each basic class differ only by a constant.
\label{lem:bb-property3}
\item For all $k, i, j, l \in \mathbb F_p$ holds
\begin{equation} \notag
t_{k,i} - t_{k,j} = t_{k+l, i-l} - t_{k+l, j-l}
%
\end{equation}
i.e.,\ the relative placement of entries $i$ and $j$ of the building block $\boldt_k$ is the same as that of entries $i-l$ and $j-l$ of $\boldt_{k+l}$. \label{lem:bb-property4}
\end{enumerate}
\end{lemma}
\begin{IEEEproof}
For $k=0$, the first three statements are immediate for all $j \in \mathbb F_p$ by definition. For general $k \in \mathbb F_p$:
\begin{enumerate}
\item We have
$[t_{k,j}]_p = [t_{0,j+k} - t_{0, k}]_p = j + k - k = j$.
\item $t_{k,0} = t_{0, k} - t_{0,k} = 0$.
\item By definition of $t_{k,j}$,
\ifonecolumn
\begin{displaymath} \set(\bm t_k) = \{t_{k,j}\}_{j \in \mathbb F_p} = \{t_{0,j+k}\}_{j \in \mathbb F_p} - t_{0,k} = \{t_{0,j}\}_{j \in \mathbb F_p} - t_{0,k} = \set(\bm t_0) - t_{0,k}\end{displaymath}
\else
\begin{multline*} \set(\bm t_k) = \{t_{k,j}\}_{j \in \mathbb F_p} = \{t_{0,j+k}\}_{j \in \mathbb F_p} - t_{0,k} \\= \{t_{0,j}\}_{j \in \mathbb F_p} - t_{0,k} = \set(\bm t_0) - t_{0,k}\end{multline*}
\fi
where we have used that $\{j+k\}_{j \in \mathbb F_p} = \{j\}_{j \in \mathbb F_p}$.
\end{enumerate}
Finally, Property~\ref{lem:bb-property4} holds because
\begin{align*}
t_{k+l, i-l} - t_{k+l,j-l} &= t_{0,k+i} - t_{0,k+l} - (t_{0,k+j} - t_{0,k+l}) \\
&=t_{0,k+i} - t_{0,k+j} \\
&=t_{0,k+i} - t_{0,k} - (t_{0,k+j} - t_{0,k}) \\
&=t_{k,i} - t_{k,j}
\end{align*}
where both the first and last step are by definition of $t_{k,j}$.
\end{IEEEproof}
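All four properties of \cref{lem:bb-properties} are easy to verify numerically. The sketch below (entirely our own code; $\mathbb F_p$ elements are represented by integers $0,\dotsc,p-1$) checks them exhaustively for the class of \cref{fig:bb-properties}, i.e., $p=5$ and $\bm m = (0,1,0,0,0)$.

```python
p, m = 5, (0, 1, 0, 0, 0)
t0 = [j + m[j] * p for j in range(p)]                 # t_{0,j} per Definition (def:bb)
T = [[t0[(j + k) % p] - t0[k] for j in range(p)] for k in range(p)]

for k in range(p):
    # Property 1: [t_{k,j}]_p = j (Python's % yields the nonnegative residue)
    assert all(T[k][j] % p == j for j in range(p))
    # Property 2: t_{k,0} = 0
    assert T[k][0] == 0
    # Property 3: set(t_k) = set(t_0) - t_{0,k}
    assert set(T[k]) == {v - t0[k] for v in t0}
    # Property 4: t_{k,i} - t_{k,j} = t_{k+l,i-l} - t_{k+l,j-l}
    for l in range(p):
        for i in range(p):
            for j in range(p):
                assert (T[k][i] - T[k][j]
                        == T[(k + l) % p][(i - l) % p] - T[(k + l) % p][(j - l) % p])
```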
\tikzset{dot/.style={fill,inner sep=0mm,minimum size=1.5mm,circle}}
\tikzset{rbrace/.style={decorate,decoration={brace,mirror}}}
\tikzset{>=stealth}
\begin{figure*}
\centering
\begin{tikzpicture}[xscale=.7,yscale=1.2]
\draw[black!10,very thin] (-6.2, .2) grid (6.2, -4.2);
\begin{scope}[xshift=-11cm]
\matrix (T) [matrix of math nodes,row sep={1.2cm,between origins},anchor=center,every node/.append style={anchor=east}] at (0, -2)
{ \boldt_0=( &0 & 6 & 2 & 3 & 4 &)\\
\boldt_1=( &0 & -4 & -3 & -2 & -6&) \\
\boldt_2=( &0 & 1 & 2 & -2 & 4 &)\\
\boldt_3=( &0 & 1 & -3 & 3 & -1 &)\\
\boldt_4=( &0& -4 & 2 & -2 & -1 &) \\
};
\end{scope}
\foreach \i in {-6,...,6}
\node at (\i,1) {\i};
%
\foreach \i [count=\j from 0] in {0,6,2,3,4}
\node[dot,label=$t_{0,\j}$] (t0\j) at (\i, 0) {};
%
\foreach \i [count=\j from 0] in {0,-4,-3,-2,-6}
\node[dot,label=$t_{1,\j}$] (t1\j) at (\i, -1) {};
%
\foreach \i [count=\j from 0] in {0,1,2,-2,4}
\node[dot,label=$t_{2,\j}$] (t2\j) at (\i, -2) {};
%
\foreach \i [count=\j from 0] in {0,1,-3,3,-1}
\node[dot,label=$t_{3,\j}$] (t3\j) at (\i, -3) {};
%
\foreach \i [count=\j from 0] in {0,-4,2,-2,-1}
\node[dot,label=$t_{4,\j}$] (t4\j) at (\i, -4) {};
\begin{scope}[font=\scriptsize]
%
\draw[red,->] (6,-1) -- node[below] {$-t_{0,1}=-6$} (t10);
\draw[red,->] (6,-2) -- node[below] {$-t_{0,2}=-2$} (t24);
\draw[red,->] (6,-3) -- node[below] {$-t_{0,3}=-3$} (t33);
\draw[red,->] (6,-4) -- node[below] {$-t_{0,4}=-4$} (t42);
%
\begin{scope}[DarkGreen,->]
\draw (t04) to[bend left=10] node[below] {$t_{0,4} - t_{0,2}=2$} (t02);
\draw (T-1-6) to[bend right=15] node[above] {$t_{0,4} - t_{0,2}=2$} (T-1-4);
\draw (t13) to[bend left=10] node[below] {$t_{1,4-1} - t_{1,2-1}=2$} (t11);
\draw (T-2-5) to[bend right=15] node[above] {$t_{1,4-1} - t_{1,2-1}=2$} (T-2-3);
\draw (t22) to[bend left=10] node[below] {$t_{2,4-2} - t_{2,2-2}=2$} (t20);
\draw (T-3-4) to[bend right=15] node[above] {$t_{2,4-2} - t_{2,2-2} = 2$} (T-3-2);
\draw (t31) to[bend left=10] node[below] {$t_{3,4-3} - t_{3,2-3}=2$} (t34);
\draw (T-4-3) to[bend right=15] node[below] {$t_{3,4-3} - t_{3,2-3}=2$} (T-4-6);
\end{scope}
\begin{pgfonlayer}{background}
\fill[thick,DeepSkyBlue!50,rounded corners] (-.4,.6) rectangle (.4,-4.3) node[below,DeepSkyBlue] {$t_{k,0} = 0$};
\fill[thick,DeepSkyBlue!50,rounded corners] (T-1-2.north west) rectangle (T-5-2.south east) node[below,DeepSkyBlue] {$t_{k,0} =0$};
\fill[thick,SlateGray!50,rounded corners] ($ (T-1-5.north west) + (-.2, 0) $) rectangle (T-5-5.south east) node[below,SlateGray] {$\left[t_{k,3}\right]_5 = 3$};
\end{pgfonlayer}
\end{scope}
\end{tikzpicture}
%
\vspace{-2ex}
\caption{Example of the statements of \cref{lem:bb-properties} for $p=5$ and $\bm m = (0,1,0,0,0)$. The five building blocks in $\mathcal T^{\bm m}$ are shown on the left. On the right, the entries of each block are placed according to their respective value. \textcolor{SlateGray}{Property~\ref{lem:bb-property1}} is shown on the left for $j=3$. \textcolor{DeepSkyBlue}{Property~\ref{lem:bb-property2}} is apparent in both figures. The statement of \textcolor{red}{Property~\ref{lem:bb-property3}} becomes obvious on the right, while \textcolor{DarkGreen}{Property~\ref{lem:bb-property4}} is again shown in both figures (for $k=0, i=4$, and $j=2$).}
\label{fig:bb-properties}
\vskip -3ex
\end{figure*}
\begin{example}\label{ex:bb-properties}\begin{notms}
For $p=5$ and $\bm m = (0,1,0,0,0)$, the statements of \cref{lem:bb-properties} are shown in \cref{fig:bb-properties} for some example values.\end{notms}
\end{example}
\begin{definition}\label{def:hilo-notation}
For any basic building block class $\mathcal T^{\bm m}$ and $k \in \mathbb F_p$, define
\ifonecolumn
\begin{displaymath}
t_{k,\uparrow} = \argmax_{\zeta \in \mathbb F_p} t_{k,\zeta} \in \mathbb F_p
\text{ and } t_{k,\downarrow} = \argmin_{\zeta \in \mathbb F_p} t_{k,\zeta} \in \mathbb F_p
\end{displaymath}
\else
\begin{align*}
t_{k,\uparrow} &= \argmax_{\zeta \in \mathbb F_p} t_{k,\zeta} \in \mathbb F_p \\
\text{and}\quad t_{k,\downarrow} &= \argmin_{\zeta \in \mathbb F_p} t_{k,\zeta} \in \mathbb F_p
\end{align*}
\fi
which is the (congruence class of the) \emph{index} of the largest and smallest entry, respectively, of $\bm t_k$. Further define, for given $\zeta \in \mathbb F_p$, the inverses of the above expressions as
\ifonecolumn
\begin{displaymath}
t_{\uparrow, \zeta} \in \mathbb F_p \text{ (with } t_{\uparrow, \zeta} = k \Leftrightarrow\zeta = t_{k,\uparrow}\text{)}
\text{ and } t_{\downarrow, \zeta} \in \mathbb F_p \text{ (with }t_{\downarrow,\zeta} = k \Leftrightarrow\zeta = t_{k,\downarrow}\text{)}
\end{displaymath}
\else
\begin{align*}
&t_{\uparrow, \zeta} \in \mathbb F_p \quad\text{(with } t_{\uparrow, \zeta} = k \Leftrightarrow\zeta = t_{k,\uparrow}\text{)}\\
\text{and}\quad &t_{\downarrow, \zeta} \in \mathbb F_p \quad\text{(with }t_{\downarrow,\zeta} = k \Leftrightarrow\zeta = t_{k,\downarrow}\text{)}
\end{align*}
\fi
that tell, for given $\zeta$, in which block the $\zeta$-th entry is the maximizer (resp. minimizer) of $\set(\bm t_k)$. Finally,
\begin{equation} \notag
\sigma = \sigma^{\bm m} = t_{0,\uparrow} = \argmax_{j \in \mathbb F_p} t_{0,j} = \argmax_{j \in \mathbb F_p}([j]_\mathbb Z+m_j p) \neq 0
\label{eq:sigma}
\end{equation}
which is a constant in $\mathbb F_p$ for $\bm m$ fixed.
\end{definition}
\begin{lemma}\label{lem:hilo-formulas}
The quantities in \cref{def:hilo-notation} are indeed well-defined: every building block has a unique largest and a unique smallest entry, and these occur at different positions for the different building blocks within a class. Furthermore, they admit the following explicit formulas for $k,\zeta \in \mathbb F_p$ (hence subtraction is in $\mathbb F_p$):
\begin{align}
t_{k,\uparrow} &= \sigma - k,
\label{eq:zeta-hi}\\
t_{k,\downarrow} &= -k,
\label{eq:zeta-lo}\\
t_{\uparrow,\zeta} &=\sigma - \zeta,
\label{eq:k-hi}\\
t_{\downarrow,\zeta} &=-\zeta.
\label{eq:k-lo}
\end{align}
In addition, the largest and smallest value, respectively, within the building block $\bm t_k$ are explicitly given by
\begin{align}
&\max(\bm t_k) = \max (\set(\bm t_k)) = t_{0,\sigma} - t_{0, k}\label{eq:val-k-hi}\\
\text{and}\quad&\min(\bm t_k) = \min(\set(\bm t_k)) = - t_{0,k}\label{eq:val-k-lo}
\end{align}
where $t_{0,\sigma} = \max(\bm t_0)$ by definition of $\sigma$.
\end{lemma}
\begin{IEEEproof}
We consider the \enquote{$\max$}-cases (\cref{eq:zeta-hi,eq:k-hi,eq:val-k-hi}) only.
%
By \cref{eq:property-3},
\[\max(\bm t_k) = \max (\bm t_0) - t_{0,k} = t_{0,\sigma} - t_{0,k}\]
which shows \cref{eq:val-k-hi}.
Applying Property~\ref{lem:bb-property1} of \cref{lem:bb-properties} to the above equation shows that $[\max(\bm t_k)]_p = \sigma - k$, hence (by another application of that property) the maximizer is unique and must equal $\sigma - k$, i.e., \cref{eq:zeta-hi} is correct. As this value is different for distinct values of $k\in\mathbb F_p$, the map $\mathbb F_p \ni k \mapsto t_{k,\uparrow} \in \mathbb F_p$ is bijective and hence admits an inverse. By resolving the expression for $k$, that inverse is seen to be $t_{\uparrow, \zeta} = \sigma - \zeta$, which proves \cref{eq:k-hi}.
The proofs for \cref{eq:zeta-lo,eq:k-lo,eq:val-k-lo} are completely analogous; note that the constant $t_{0,\downarrow}$, which is the analog of $\sigma$, equals zero and hence does not appear in the formulas.
\end{IEEEproof}
\begin{example}
Let $p=5$ and $\bm m = (0,1,0,0,0)$ as before; then $\sigma=1$ and $t_{0,\sigma} = 6$ is the largest entry of $\bm t_0$. For example, for $k=3$ we have $t_{k,\uparrow} = [1]_5 - [3]_5 = [3]_5$ by \cref{eq:zeta-hi} and $\max(\bm t_3) = t_{3,(t_{3,\uparrow})} = 6 - 3 = 3$ by \cref{eq:val-k-hi}, while $t_{k,\downarrow} = -[3]_5 = [2]_5$ by \cref{eq:zeta-lo} and $\min(\bm t_3) = t_{3,(t_{3,\downarrow})} = -3$ by \cref{eq:val-k-lo}. These statements can be easily verified using the left-hand side of \cref{fig:bb-properties}.
\end{example}
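The explicit formulas of \cref{lem:hilo-formulas} can be brute-force checked for the running example ($p=5$, $\bm m=(0,1,0,0,0)$); in the sketch below (our own code), $\mathbb F_p$ indices are represented by integers, so subtraction in $\mathbb F_p$ becomes reduction modulo $p$.

```python
p, m = 5, (0, 1, 0, 0, 0)
t0 = [j + m[j] * p for j in range(p)]                 # Definition (def:bb)
T = [[t0[(j + k) % p] - t0[k] for j in range(p)] for k in range(p)]

sigma = max(range(p), key=lambda j: t0[j])            # sigma = t_{0,up}
assert sigma == 1 and t0[sigma] == 6                  # as in the example

for k in range(p):
    up = max(range(p), key=lambda j: T[k][j])         # index of largest entry
    lo = min(range(p), key=lambda j: T[k][j])         # index of smallest entry
    assert up == (sigma - k) % p                      # (eq:zeta-hi)
    assert lo == (-k) % p                             # (eq:zeta-lo)
    assert max(T[k]) == t0[sigma] - t0[k]             # (eq:val-k-hi)
    assert min(T[k]) == -t0[k]                        # (eq:val-k-lo)
```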
The following result will be used in various proofs.
\begin{lemma}\label{lem:bb-hilo-plusi}
For a building block class $\mathcal{T}^{\bm m}$ and any $k, i \in \mathbb F_p$,
\ifonecolumn
\begin{align}
&t_{k,(t_{k,\uparrow}+i)} - t_{k,t_{k,\uparrow}} = t_{k,\sigma-k+i} - t_{k,\sigma - k}= t_{\sigma,i} \label{eq:bb-hi-plusi}\\
\text{and}\;&t_{k,(t_{k,\downarrow}+i)} - t_{k,t_{k,\downarrow}} = t_{k,-k+i} - t_{k,-k} = t_{0,i}. \label{eq:bb-lo-plusi}
\end{align}
\else
\begin{align}
&t_{k,(t_{k,\uparrow}+i)} - t_{k,t_{k,\uparrow}} = t_{k,\sigma-k+i} - t_{k,\sigma - k}&&= t_{\sigma,i} \label{eq:bb-hi-plusi}\\
\text{and}\;&t_{k,(t_{k,\downarrow}+i)} - t_{k,t_{k,\downarrow}} = t_{k,-k+i} - t_{k,-k} &&= t_{0,i}. \label{eq:bb-lo-plusi}
\end{align}
\fi
In particular, both expressions are independent of $k$.
\end{lemma}
\begin{IEEEproof}
The left-hand equations are by \cref{eq:zeta-hi,eq:zeta-lo}, respectively, while the right-hand equations follow by Property~\ref{lem:bb-property4} of \cref{lem:bb-properties} with $l=\sigma - k$ and $l=-k$ for \cref{eq:bb-hi-plusi} and \cref{eq:bb-lo-plusi}, respectively.
\end{IEEEproof}
\begin{definition} \label{def:symmetric_class}
A basic building block class $\mathcal T^{\bm m}$ is called \emph{symmetric} if $\set(\bm t_0)= t_{0,\sigma} - \set(\bm t_0)$, i.e., if
\begin{equation} i \in \set(\bm t_0) \Leftrightarrow t_{0,\sigma} - i \in \set(\bm t_0).\label{eq:symmetry} \end{equation}
\end{definition}
The building block classes from \cref{ex:bb-p7,ex:bb-properties} are symmetric (the latter becomes obvious on the right of \cref{fig:bb-properties}), while the one from \cref{ex:bb-p3} is not: here, $\set(\bm t_0) = \{0,4,5\}$ and $4 \in \set(\bm t_0)$, but $t_{0,\sigma} - 4 = 5-4 = 1 \notin \set(\bm t_0)$.
\begin{lemma} \label{lem:bb-symmetric_properties}
Let $\mathcal{T}^{\bm m}$ be a basic building block class for a prime $p$. The following are equivalent:
\begin{enumerate}
\item $\mathcal T^{\bm m}$ is symmetric. \label{lem:bb-symmetric-propertybase}
\item $t_{\sigma, j} = -t_{0,-j}$ for all $j \in \mathbb F_p$.
\label{lem:bb-symmetric-property1}
\item If $p>2$, either $\bm m = \bm 0 = (0,\dotsc, 0)$, or $[\sigma]_\mathbb Z$ is odd and $m_i + m_{\sigma - i} = 1$ for $[i]_\mathbb Z \leq [\sigma]_\mathbb Z$.
\label{lem:bb-symmetric-property3}
\end{enumerate}
\end{lemma}
\begin{IEEEproof}
By Property~\ref{lem:bb-property1} of \cref{lem:bb-properties}, \cref{eq:symmetry} is equivalent to the condition that, for $i \in \mathbb F_p$,
\begin{equation}
t_{0,\sigma} - t_{0,i} = t_{0,\sigma - i}.
\label{eq:symmetry-alt}
\end{equation}
By \cref{def:bb}, the left-hand side equals $-t_{\sigma,i-\sigma}$, which shows the equivalence of \ref{lem:bb-symmetric-propertybase}) and \ref{lem:bb-symmetric-property1}) (using $j=i-\sigma$).
To show that \ref{lem:bb-symmetric-propertybase}) implies \ref{lem:bb-symmetric-property3}), let $p\geq 3$ and $\bm m \neq \bm 0$, hence $m_\sigma = 1$. Assume first that $[\sigma]_\mathbb Z = 2s$ is even, hence $t_{0,\sigma} = [\sigma]_\mathbb Z + pm_\sigma = [\sigma]_\mathbb Z + p = 2s + p$. Because this number is odd, $a \neq t_{0,\sigma} - a$ for all $a \in \set(\bm t_0)$, so that \cref{eq:symmetry} partitions $\set(\bm t_0)$ into disjoint pairs, which contradicts $\abs{\set(\bm t_0)} = p$ being odd.
Now, expand \cref{eq:symmetry-alt} by \cref{def:bb} to obtain
\begin{align}
&[\sigma]_\mathbb Z + p m_\sigma - [i]_\mathbb Z - p m_i = [\sigma - i]_\mathbb Z + p m_{\sigma - i} \notag \\
\Leftrightarrow\quad
&[\sigma]_\mathbb Z - [i]_\mathbb Z - [\sigma - i]_\mathbb Z + p(m_\sigma - m_i - m_{\sigma -i}) = 0 \notag \\
\Leftrightarrow\quad
&m_\sigma - m_i - m_{\sigma - i} - \left\{\begin{matrix}1 &\text{if }[\sigma]_\mathbb Z < [i]_\mathbb Z\\
0 &\text{otherwise}
\end{matrix}\right\} = 0,\label{eq:symmetry-alt2}
\end{align}
hence $m_i + m_{\sigma-i} = m_\sigma$ for $[i]_\mathbb Z \leq [\sigma]_\mathbb Z$, which shows \ref{lem:bb-symmetric-property3}).
\ref{lem:bb-symmetric-property3})$\Rightarrow$\ref{lem:bb-symmetric-propertybase}): For $\bm m =\bm 0$, $[\sigma]_\mathbb Z = p-1$, such that the braced expression in \cref{eq:symmetry-alt2} is $0$ for all $i \in \mathbb F_p$, hence \cref{eq:symmetry-alt2} holds for all $i$ and the class is symmetric. For $p=2$, there is only one additional basic class defined by $\bm m=(0,1)$, which implies $\set(\bm t_0) = \{0, 2\}$ and hence fulfills \cref{def:symmetric_class}. Thus assume that $[\sigma]_\mathbb Z$ is odd and $m_i + m_{\sigma - i} = 1$ for $[i]_\mathbb Z \leq [\sigma]_\mathbb Z$.
For $[i]_\mathbb Z \leq [\sigma]_\mathbb Z$, then, \cref{eq:symmetry-alt2} holds by assumption. On the other hand, $[i]_\mathbb Z > [\sigma]_\mathbb Z$ implies $[\sigma - i]_\mathbb Z > [\sigma]_\mathbb Z$, such that $m_i = m_{\sigma - i} = 0$ by definition of $\sigma$, and \cref{eq:symmetry-alt2} holds as well, which concludes the proof.
\end{IEEEproof}
\begin{corollary}
For $p \geq 3$, there are $2^{(p-1)/2}$ symmetric building block classes.
\end{corollary}
\begin{IEEEproof}
Besides $\mathcal T^{\bm 0}$, which is always symmetric, the first part of Statement~\ref{lem:bb-symmetric-property3} of the above lemma implies that symmetric classes exist only for $[\sigma]_\mathbb Z = 2s+1$ with $0 \leq s \leq (p-3)/2$. By the second part, only half of the $m_i$ with $0 < [i]_\mathbb Z < [\sigma]_\mathbb Z$ are free to choose, which gives $2^s = 2^{([\sigma]_\mathbb Z - 1)/2}$ symmetric classes for a single $\sigma$. In total (and including $\mathcal T^{\bm 0}$) this results in
$1 + \sum_{s=0}^{(p-3)/2} 2^s = 2^{(p-1)/2}$
classes.
\end{IEEEproof}
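The count can also be confirmed by exhaustively testing \cref{def:symmetric_class} over all admissible $\bm m$; the helper name \texttt{is\_symmetric} below is ours.

```python
from itertools import product

def is_symmetric(p, m):
    # Definition (def:symmetric_class): set(t_0) = t_{0,sigma} - set(t_0),
    # where t_{0,sigma} = max(t_0).
    t0 = [j + m[j] * p for j in range(p)]
    return set(t0) == {max(t0) - v for v in t0}

for p in (3, 5, 7, 11):
    # Enumerate all m in {0,1}^p with m_0 = 0 and count the symmetric classes.
    count = sum(is_symmetric(p, (0,) + m) for m in product((0, 1), repeat=p - 1))
    assert count == 2 ** ((p - 1) // 2)
```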
We now define \emph{almost doubly-symmetric classes} which will become important in \cref{sec:facet-defining}.
\begin{definition}
A symmetric basic building block class $\mathcal{T}^{\bm m}$ for $p \geq 3$ is called \emph{almost doubly-symmetric} if its \enquote{projection} onto the \enquote{interior} entries, i.e.,\ the building block class obtained by removing both the smallest and largest entries from each building block of the class, has the property that there exists a subset $\tilde{T}_0^{\text{proj}} \subset T_0^{\text{proj}}$ such that
\begin{subequations}
\begin{align}
&\abs{\tilde{T}_0^{\text{proj}}} \geq (p-3)/2, \label{eq:condition_dsymmetrix_1} \\
&\max \left( \tilde{T}_0^{\text{proj}}\right) \leq \left\lfloor \smax(\bm t_0)/2 \right\rfloor, \label{eq:condition_dsymmetrix_2} \\
\text{and}\quad&i \in \tilde{T}_0^{\text{proj}} \Rightarrow \smax(\bm t_0) - i \in T_0^{\text{proj}} \label{eq:condition_dsymmetrix_3}
\end{align} \label{eq:condition_dsymmetrix}%
\end{subequations}%
where $T_0^{\text{proj}} = \set(\bm t_0) \setminus \{ 0, t_{0,\sigma}\}$ (i.e.,\ the projection of $\set(\bm t_0)$ onto the interior entries) and $\smax(\bm t_k)$ is the second-largest entry of $\boldt_k$, i.e.,\
\begin{displaymath}
\smax(\bm t_k)= \max ( \set(\bm t_k) \setminus \{ \max(\boldt_k) \}).
%
%
\end{displaymath}
\end{definition}
\begin{example} \label{ex:doubly-symmetric}
The building block class from \cref{ex:bb-properties} is almost doubly-symmetric: here, $\set(\bm t_0) = \{0,2,3,4,6\}$, $T_0^{\text{proj}} = \{2,3,4\}$, and $\smax(\bm t_0) = 4$. It is easy to check that $\tilde T_0^{\text{proj}} = \{2\}$ fulfills \cref{eq:condition_dsymmetrix}. In contrast, the class from \cref{ex:bb-p7} is not almost doubly-symmetric: here, $\set(\bm t_0) = \{0,3,4,6,8,9,12\}$, $T_0^{\text{proj}} = \{3,4,6,8,9\}$, and the largest subset $\tilde{T}_0^{\text{proj}} \subset T_0^{\text{proj}}$ that satisfies \cref{eq:condition_dsymmetrix_2} and \cref{eq:condition_dsymmetrix_3} is $\tilde{T}_0^{\text{proj}} = \{3\}$ which does not satisfy \cref{eq:condition_dsymmetrix_1}.
\end{example}
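The checks in this example can be automated. In the sketch below (helper name ours), condition \cref{eq:condition_dsymmetrix_1} is tested against the maximal subset satisfying \cref{eq:condition_dsymmetrix_2,eq:condition_dsymmetrix_3}; this suffices because both of the latter conditions are element-wise, so every subset of the maximal candidate set satisfies them as well.

```python
def is_almost_doubly_symmetric(p, m):
    # Tests conditions (eq:condition_dsymmetrix) for a symmetric class T^m.
    t0 = [j + m[j] * p for j in range(p)]
    s0 = set(t0)
    assert s0 == {max(t0) - v for v in t0}       # class must be symmetric
    proj = s0 - {0, max(t0)}                     # T_0^proj (interior entries)
    smax = max(s0 - {max(t0)})                   # second-largest entry of t_0
    candidates = [i for i in proj if i <= smax // 2 and smax - i in proj]
    return len(candidates) >= (p - 3) // 2       # condition (a)

# Example (ex:bb-properties) is almost doubly-symmetric ...
assert is_almost_doubly_symmetric(5, (0, 1, 0, 0, 0))
# ... while the class of Example (ex:bb-p7) is not.
assert not is_almost_doubly_symmetric(7, (0, 1, 1, 0, 0, 1, 0))
```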
\subsection{Deriving Inequalities From Building Blocks}\label{sec:hilo}
We now describe a set of linear inequalities derived from a given basic class $\mathcal T^{\bm m}$. For each such inequality $\boldtheta^T \boldx \leq \kappa$, $\boldtheta \in \mathbb R^{dp}$ is of the form $\boldtheta = (\boldt_{k_1} \mid \dotsc \mid \boldt_{k_d} )^T$, where each $\boldt_{k_i} \in \mathcal T^{\bm m}$ for some fixed $\bm m$.
For any codeword $\boldc =(c_1,\dotsc, c_d) \in \mathcal C$, the left-hand side of $\boldtheta^T \mathsf F_{\mathrm v}(\boldc) \leq \kappa$ is then
\begin{equation}
\sum_{i=1}^d \boldt_{k_i} \mathsf f(c_i)^T = \sum_{i=1}^d t_{k_i,c_i}
\label{eq:ineqVals}
\end{equation}
because $\mathsf f(c_i)$ is the $c_i$-th unit vector by \cref{def:Constant}. Hence, for $k \in \mathbb F_p$, the entries of $\boldt_k = (t_{k,0}, \dotsc, t_{k,p-1})$ immediately specify the values of the corresponding terms in $\boldtheta^T \mathsf F_{\mathrm v}(\boldc)$.
\begin{construction}[Hi-Lo-Construction] \label{constr:hilo}
Choose and fix $k_1,\dotsc,k_{d-1}$ arbitrarily from $\mathbb F_p$, obtaining $\hat{\boldtheta} = (\bm t_{k_1} \mid \dotsc \mid \bm t_{k_{d-1}})^T$. Then, construct a \emph{canonical codeword} $\boldc$ for $\hat\boldtheta$ in the following way. The first $d-1$ entries of $\boldc$ are chosen to maximize $t_{k_i,c_i}$, i.e.,\
\begin{subequations}
\begin{align}
&c_i = t_{k_i,\uparrow} = \sigma - k_i \label{eq:hilocons-ci}\\
\text{(and hence}\;&k_i = t_{\uparrow, c_i} = \sigma - c_i \text{)}\label{eq:hilocons-ki}
\end{align}
\end{subequations}
for $i \in \range{d-1}$. The condition $\boldc \in \mathcal C$ then uniquely specifies the last entry $c_d$ of the codeword. Now, $k_d$ is chosen such that $t_{k_d, c_d}$ is minimized, i.e.,\
\begin{subequations}
\begin{align}
&k_d = t_{\downarrow, c_d} = - c_d \label{eq:hilocons-kd}\\
\text{(and hence}\;&c_d = t_{k_d,\downarrow} = -k_d\text{).} \label{eq:hilocons-cd}
\end{align}
\end{subequations}
Finally, the right-hand side $\kappa$ is defined as $\boldtheta^T \mathsf F_{\mathrm v}(\boldc)= \sum_{i=1}^d t_{k_i,c_i}$ which ensures that $\mathsf F_{\mathrm v}(\boldc)$ is tight for the resulting inequality $\boldtheta^T \boldx \leq \kappa$.
\end{construction}
\begin{remark}\label{rem:hilo-arbitrary}
The choice of position $d$ in the above construction is arbitrary; using any other position $i \in \range d$ to determine the remaining entry $c_i$ of $\bm c$ and the minimizing building block $\bm t_{k_i}$ leads to a different mapping of inequalities to canonical codewords, but the set of inequalities constructed in total remains the same.
\end{remark}
\begin{corollary}\label{cor:kappaModp}
For any inequality $\boldtheta^T \boldx \leq \kappa$ obtained from \cref{constr:hilo}, $[\kappa]_p = 0$.
\end{corollary}
\begin{IEEEproof}
By construction, $[\kappa]_p = \sum_{i=1}^d [t_{k_i, c_i}]_p = \sum_{i=1}^d c_i$ by Property~\ref{lem:bb-property1} of \cref{lem:bb-properties}, and $\sum c_i = 0$ because $\boldc \in \mathcal C$.
\end{IEEEproof}
\begin{example}
Let $p=5$ and $\bm m = (0,1,0,0,0)$ as before, $d=6$, and choose $\hat\boldtheta = (\bm t^{\bm m}_0 \mid \bm t^{\bm m}_1 \mid \bm t^{\bm m}_2 \mid \bm t^{\bm m}_3 \mid \bm t^{\bm m}_4)^T$. Then, the first $5$ entries of the canonical codeword $\boldc$ are $(t_{0,\uparrow}, \dotsc, t_{4,\uparrow}) = (1, 0, 4, 3, 2)$ by \cref{eq:zeta-hi}. As $\sum_{i=1}^{d-1} t_{k_i,\uparrow} = 0$, this implies $c_6 = 0$, such that $\boldc=(1,0,4,3,2,0)^T \in \mathcal C$, and we set $k_6 = t_{\downarrow, c_6} = 0$ by \cref{eq:zeta-lo}, hence $\boldtheta = (\bm t_0^{\bm m} \mid \bm t_1^{\bm m} \mid \bm t_2^{\bm m} \mid \bm t_3^{\bm m} \mid \bm t_4^{\bm m} \mid \bm t_0^{\bm m})^T$. Finally, we can compute $\kappa = \boldtheta^T \mathsf F_{\mathrm v}(\boldc) = t_{0,1} + t_{1,0} + t_{2,4} + t_{3,3} + t_{4,2} + t_{0,0} = 6 + 0 + 4 + 3 + 2 + 0 = 15$ and obtain the inequality $(\boldt^{\bm m}_0 \mid \boldt^{\bm m}_1 \mid \boldt^{\bm m}_2 \mid \boldt^{\bm m}_3 \mid\boldt^{\bm m}_4 \mid \boldt^{\bm m}_0) \boldx \leq 15$.
\end{example}
Note that by \cref{constr:hilo}, a total of $p^{d-1}$ inequalities can be constructed per class $\mathcal T^{\bm m}$. We denote the set of these inequalities by $\Theta^{\bm m}$. The following lemma states two alternative characterizations of the elements of $\Theta^{\bm m}$.
\begin{lemma}\label{lem:conditions}
An inequality
$\boldtheta^T \boldx \leq \kappa$ with $\boldtheta=(\bm t_{k_1} \mid \dotsc \mid \bm t_{k_d})^T$ is in $\Theta^{\bm m}$ if and only if
\begin{subequations} \label{eq:hilo-conditions}
\begin{align}
&\sum_{i=1}^d k_i = [d-1]_p \cdot \sigma\label{eq:Thetacondition}\\
\text{and}\quad&\kappa = (d-1) t_{0,\sigma} - \sum_{i=1}^d t_{0,k_i} \label{eq:kappacondition}
\end{align}
\end{subequations}
which is in turn equivalent to the condition that
\begin{subequations} \label{eq:hilo-conditions'}
\begin{align}
&\sum_{k\in \mathbb F_p} \left[\abs{V^\boldtheta_k}\right]_p (\sigma - k) = \sigma \label{eq:Thetacondition'}\\
\text{and}\quad&\kappa = \sum_{k\in \mathbb F_p} \abs{V^\boldtheta_k} \max(\bm t_k) - \max(\bm t_0)\label{eq:kappacondition'}
\end{align}
\end{subequations}
where, for $k \in \mathbb F_p$, $V^\boldtheta_k = \{i \in \range d\colon k_i = k\}$ denotes the index set of entries in $\boldtheta$ that equal $\bm t_k$.
\end{lemma}
\begin{IEEEproof}
See Appendix~\ref{app:proofConditions}.
\end{IEEEproof}
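Both closed-form conditions can also be checked numerically. The Python sketch below (with the $p=3$, $\bm m=(0,0,0)$ building-block values hard-coded from \cref{tab:values_tk}; the function name is illustrative) runs \cref{constr:hilo} over all $p^{d-1}$ starting choices and tests \cref{eq:Thetacondition,eq:kappacondition} for each resulting inequality.

```python
from itertools import product

p, sigma = 3, 2
T = {0: (0, 1, 2), 1: (0, 1, -1), 2: (0, -2, -1)}   # t_k from tab:values_tk

def conditions_hold(d):
    """Test (eq:Thetacondition) and (eq:kappacondition) for all of Theta^m."""
    for ks in product(range(p), repeat=d - 1):
        c = [(sigma - k) % p for k in ks]            # Construction constr:hilo
        c.append(-sum(c) % p)
        k = list(ks) + [(-c[-1]) % p]
        kappa = sum(T[ki][ci] for ki, ci in zip(k, c))
        if sum(k) % p != ((d - 1) * sigma) % p:      # (eq:Thetacondition)
            return False
        if kappa != (d - 1) * T[0][sigma] - sum(T[0][ki] for ki in k):
            return False                             # (eq:kappacondition)
    return True
```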
\begin{remark} \label{rem:Thetamunique}
Note that no two inequalities constructed by \cref{constr:hilo} from distinct $\hat \boldtheta^1 \neq \hat \boldtheta^2$ for the same class $\mathcal T^{\bm m}$ are equivalent, in the sense that none is a positive scalar multiple of another. Assume to the contrary that $(\boldtheta^1 = (\bm t_{k^1_1} \mid \dotsc \mid \bm t_{k^1_d})^T, \kappa^1)$ and $(\boldtheta^2=(\bm t_{k^2_1} \mid \dotsc \mid \bm t_{k^2_d})^T,\kappa^2)$ are constructed from \cref{constr:hilo} with $\boldtheta^1 = a \boldtheta^2$, $\kappa^1 = a \kappa^2$, and $a \geq 0$. If $a=1$, then $k^1_i = k^2_i$ for $i \in \range{d-1}$, i.e.,\ $\hat\boldtheta^1 = \hat\boldtheta^2$. Otherwise, $\boldt_{k^1_1} = a \boldt_{k^2_1}$ with $a \neq 1$, which is a contradiction: since by Property~\ref{lem:bb-property3} of \cref{lem:bb-properties}, the difference between the largest and smallest element of a building block is constant among a fixed class, no building block can be a proper scalar multiple of another.
\end{remark}
\subsection{Valid and Invalid Building Block Classes}\label{sec:validInvalid}
In this subsection, we show that in a class $\Theta^{\bm m}$ of inequalities, either \emph{all} inequalities are valid or \emph{all} are invalid for $\P$, and the decision is independent of the code's length $d$. The key argument is given by the following lemma. The proofs of all results of this subsection are given in Appendix~\ref{app:validInvalid}.
\begin{lemma}\label{lem:validIndependent}
Let $\boldtheta^T \boldx \leq \kappa$ with $\boldtheta = (\bm t_{k_1} \mid \dotsc \mid \bm t_{k_d})^T$ be an inequality in $\Theta^{\bm m}$, with $\boldc \in \mathcal C$ being the corresponding canonical codeword. Then, for any $\bm\xi \in \mathbb F_p^d$,
\begin{equation} \label{eq:tightness}
\boldtheta^T \mathsf F_{\mathrm v}(\bm c + \bm\xi) - \kappa = \sum_{i=1}^{d-1} t_{\sigma, \xi_i} + t_{0, \xi_d}.
\end{equation}
In particular, the change to the left-hand side of the inequality $\boldtheta^T\mathsf F_{\mathrm v}(\boldc) \leq \kappa$ induced by adding $\bm\xi$ to the canonical codeword only depends on $\bm\xi$; it is independent of $\boldtheta$, $\kappa$, and $\boldc$, i.e.,\ independent of which inequality was chosen.
\end{lemma}
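\cref{eq:tightness} can be observed numerically. For the $p=3$ class with $\bm m=(0,0,0)$ (building blocks hard-coded from \cref{tab:values_tk}), the following Python sketch collects, for a fixed shift $\bm\xi$, the slack $\boldtheta^T \mathsf F_{\mathrm v}(\boldc + \bm\xi) - \kappa$ over all inequalities of $\Theta^{\bm m}$; a single common value is found each time, as the lemma asserts. Function names are illustrative.

```python
from itertools import product

p, sigma, d = 3, 2, 3
T = {0: (0, 1, 2), 1: (0, 1, -1), 2: (0, -2, -1)}   # t_k from tab:values_tk

def inequality(ks):
    """Construction constr:hilo from the free choices k_1, ..., k_{d-1}."""
    c = [(sigma - k) % p for k in ks]
    c.append(-sum(c) % p)
    k = list(ks) + [(-c[-1]) % p]
    return k, c, sum(T[ki][ci] for ki, ci in zip(k, c))

def slack_values(xi):
    """Collect theta^T F_v(c + xi) - kappa over all inequalities in Theta^m."""
    out = set()
    for ks in product(range(p), repeat=d - 1):
        k, c, kappa = inequality(ks)
        shifted = [(ci + x) % p for ci, x in zip(c, xi)]
        out.add(sum(T[ki][si] for ki, si in zip(k, shifted)) - kappa)
    return out
```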
\begin{corollary}\label{cor:allValidOrNot}
Let $\mathcal T^{\bm m}$ be a basic building block class.
\begin{enumerate}
\item If
\begin{equation}
\sum_{i=1}^{d-1} t_{\sigma, c_i} + t_{0,c_d} \leq 0
\label{eq:allValidCondition}
\end{equation}
for all $\boldc \in \mathcal C$, then all inequalities in $\Theta^{\bm m}$ are valid for $\P$. \label{cor:allValidOrNot-1}
\item Conversely, if there is a codeword $\boldc \in \mathcal C$ such that $\sum_{i=1}^{d-1} t_{\sigma, c_i} + t_{0, c_d} > 0$, then no inequality in $\Theta^{\bm m}$ is valid for $\P$.
\end{enumerate}
\end{corollary}
\begin{definition}\label{def:valid-class}
The basic building block class $\mathcal T^{\bm m}$ is called \emph{valid} if the equation
\begin{equation}
\sum_{i\in I} n_i t_{\sigma,i} + [r]_\mathbb Z = 0
\label{eq:valid-class-condition}
\end{equation}
with $I = \{i \in \mathbb F_p\colon 0 > t_{\sigma,i} \geq -[\sigma]_\mathbb Z\}$, nonnegative integer variables $n_i$, and
$ r = - \sum_{i\in I} [n_i]_p\cdot i$
has no solution for which $m_r = 1$.
\end{definition}
\begin{theorem}\label{theorem:newValidProgram}
If the class $\mathcal T^{\bm m}$ is valid, then all inequalities in $\Theta^{\bm m}$ are valid for $\P$ (independently of $d$). If $\mathcal T^{\bm m}$ is not valid, there is a $d_0 \leq [\sigma]_\mathbb Z+1$ such that all inequalities in $\Theta^{\bm m}$ are invalid for $\P$ if $d \geq d_0$.
\end{theorem}
\begin{example}
Let $p=3$ and $\bm m = (0,1,1)$. Then, $\boldt^{\bm m}_0 = (0, 4, 5)$, $\boldt^{\bm m}_1 = (0, 1, -4)$, and $\boldt^{\bm m}_2 = (0, -5, -1)$. Thus, $\sigma = t_{0,\uparrow} = 2$ and $\boldt^{\bm m}_{\sigma} = (0, -5, -1)$. Now, $I = \{2\}$, and with $n_2=2$, we obtain $r=-[2]_p\cdot [2]_p = [2]_p$, which satisfies both $m_r = m_2 = 1$ and \cref{eq:valid-class-condition} because $n_2\cdot t_{2,2} + [r]_\mathbb Z = 2 \cdot (-1) + 2 = 0$. Hence, \cref{def:valid-class,theorem:newValidProgram} tell us that this class is invalid for $d \geq 3$. Indeed, for $d=3$ and the canonical codeword $\bm 0 \in \mathcal C$, we obtain $\boldtheta = (0, -5, -1, 0, -5, -1, 0, 4, 5)^T$ and $\kappa = 0$. However, the resulting inequality is violated by the codeword $\boldc = (2,2,2)^T \in \mathcal C$, as $\boldtheta^T \mathsf F_{\mathrm v}(\boldc) = -1 + (-1) + 5 = 3 > 0 = \kappa$.
\end{example}
\begin{remark}
The condition that $m_r=1$ in \cref{def:valid-class} implies that $[r]_\mathbb Z \leq [\sigma]_\mathbb Z$ in \cref{eq:valid-class-condition}. Since also $t_{\sigma,i} < 0$ for $i \in I$, the number of potential solutions to \cref{eq:valid-class-condition} is relatively small. We have verified that a simple enumeration runs in negligible time for all $p$ of reasonable size. The first row of \cref{table:classNumbers} lists the number of classes that pass the test for all primes $p \leq 19$.
\end{remark}
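Such an enumeration might look as follows (a Python sketch; the class data is passed in explicitly, with the $p=3$, $\bm m=(0,1,1)$ values from the example above serving as the invalid test case, and parameter names are illustrative). Since $t_{\sigma,i} \leq -1$ for $i \in I$ and $[r]_\mathbb Z \leq p-1$, bounding each $n_i$ by $p-1$ suffices.

```python
from itertools import product

def is_valid_class(p, m, t_sigma, sigma_int):
    """Validity test of Definition def:valid-class by bounded enumeration;
    t_sigma is the building block t_sigma, sigma_int is [sigma]_Z."""
    I = [i for i in range(p) if -sigma_int <= t_sigma[i] < 0]
    # any solution has sum_i n_i * |t_{sigma,i}| = [r]_Z <= p - 1, so n_i < p
    for ns in product(range(p), repeat=len(I)):
        r = -sum(n * i for n, i in zip(ns, I)) % p   # r = -sum [n_i]_p * i
        if m[r] == 1 and sum(n * t_sigma[i] for n, i in zip(ns, I)) + r == 0:
            return False                             # solution with m_r = 1
    return True
```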
\begin{lemma}\label{lem:valid-symmetric-condition}
If $\mathcal T^{\bm m}$ is symmetric, then $\mathcal T^{\bm m}$ is valid if and only if the equation
\begin{equation} \notag
\sum_{j \in J} \nu_j\cdot [j]_\mathbb Z = [\rho]_\mathbb Z
%
\end{equation}
with $J = \{j \in \mathbb F_p\colon m_j = 0\text{ and }0 < [j]_\mathbb Z < [\sigma]_\mathbb Z\}$ has no solution with $m_\rho = 1$.
\end{lemma}
\begin{remark}
The conditions of \cref{lem:valid-symmetric-condition} depend on $\sigma$ and $m_0,\dotsc,m_\sigma$ only; in particular, they are independent of $p$. Hence, once such an $\bm m$-vector prefix has been determined to be valid, a valid symmetric class is obtained for \emph{any} prime $p > \sigma$ by appending an appropriate number of zeros. \Cref{table:facetClasses} lists the valid prefixes for $\sigma \leq 17$.
\end{remark}
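The prefix test of \cref{lem:valid-symmetric-condition} amounts to a small reachability computation, sketched below in Python (the function name is illustrative): a prefix is rejected exactly when some $[\rho]_\mathbb Z$ with $m_\rho = 1$ is a nonnegative integer combination of the elements of $J$.

```python
def prefix_is_valid(m):
    """Check the condition of Lemma lem:valid-symmetric-condition for the
    prefix m = (m_0, ..., m_sigma), given as a 0/1 list."""
    sigma = len(m) - 1
    J = [j for j in range(1, sigma) if m[j] == 0]
    reachable = {0}          # nonnegative integer combinations of J, up to sigma
    for _ in range(sigma):   # each element of J is >= 1, so sigma rounds suffice
        reachable |= {s + j for s in reachable for j in J if s + j <= sigma}
    return not any(m[rho] and rho in reachable for rho in range(sigma + 1))
```

Applied to the $\sigma=5$ and $\sigma=7$ rows of \cref{table:facetClasses}, the test accepts exactly the listed prefixes (among prefixes of symmetric classes).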
\begin{proposition}\label{prop:specialMvalid}
For any $p$, let $\bm m$ be of the form $(0,1,\dotsc,0,1,0,\dotsc,0)$, i.e.,\ consisting of $s$ copies of $(0,1)$, for an arbitrary $0 \leq s \leq \lceil(p-1)/2\rceil$, followed by zeros (note that this includes the all-zero $\bm m$-vector). Then, $\mathcal T^{\bm m}$ is a valid symmetric class, which additionally is almost doubly-symmetric for $p\geq 3$.\footnote{Note that a similar result with an upper bound of $(p-1)/2$ on $s$ (without the ceiling operator) was stated in \cite[Prop.~2]{ros16_2}. That result is slightly weaker since it does not include $s=1$ for $p=2$. However, the resulting class turns out to be redundant (see \cref{sec:redundant} below).}
\end{proposition}
\begin{IEEEproof}
Every class of the above form is symmetric by \cref{lem:bb-symmetric-property3} of \cref{lem:bb-symmetric_properties}, such that \cref{lem:valid-symmetric-condition} can be applied. Assume $\mathcal T^{\bm m}$ is invalid, i.e.,\ the equation has a solution with $m_\rho = 1$. The condition $m_\rho = 1$ implies that $s \neq 0$ and, since $\bm m$ has ones at odd positions only, that $[\rho]_\mathbb Z$ is odd. On the other hand, $J$ contains entries with even integer representations only; hence $\sum_{j \in J} \nu_j \cdot [j]_\mathbb Z$ is even, so the condition in \cref{lem:valid-symmetric-condition} is not satisfiable, a contradiction.
For $p \geq 3$ and $\bm 0 \neq \bm m$ of the above form, let $\tilde T_0^{\text{proj}}$ consist of the smallest $(p-3)/2$ elements of $\set(\bm t_0) \setminus \{0\}$. By the form of $\bm m$, this implies $\tilde T_0^{\text{proj}} \subseteq \{t_{0,i}\colon m_i=0\}$, and one can show (details omitted) that this set fulfills \cref{eq:condition_dsymmetrix_2,eq:condition_dsymmetrix_3}, hence $\mathcal T^{\bm m}$ is almost doubly-symmetric, which concludes the proof.
\end{IEEEproof}
\ifonecolumn
\begin{table}
\caption{Number of valid basic building block classes for different values of $p$. Also, the numbers of unique (cf. \cref{sec:redundant}), unique symmetric, almost doubly-symmetric, and facet-defining valid classes are given.}
\label{table:classNumbers}
\centering
\vskip -2.0ex
\begin{tabular}{lcccccccc}
\toprule
$p$: &2 & 3 & 5 & 7 & 11 & 13 & 17 & 19 \\ \midrule
valid classes: & 2 & 3 & 7 & 17 & 109 & 261 & 1621 & 4085 \\
unique valid: & 1 & 2 & 6 & 16 & 108 & 260 & 1620 & 4084\\
of which are \dots \\
- symmetric: & 1 & 1 & 2 & 4 & 10 & 16 & 31 & 46\\
- almost doubly-symmetric: & 0 & 1 & 2 & 3 & 5 & 6 & 8 & 9
\\
- facet-defining: & 1 & 1 & 2 & 4 & 10 & 16 & 31 & 46\\
\bottomrule
\end{tabular}
\end{table}
\else
\begin{table}
\caption{Number of valid basic building block classes for different values of $p$. Also, the numbers of unique (cf. \cref{sec:redundant}), unique symmetric, almost doubly-symmetric, and facet-defining valid classes are given.}
\label{table:classNumbers}
\centering
\vskip -2.0ex
\begin{tabular}{lcccccccc}
\toprule
$p$: &2 & 3 & 5 & 7 & 11 & 13 & 17 & 19 \\ \midrule
valid classes: & 2 & 3 & 7 & 17 & 109 & 261 & 1621 & 4085 \\
unique valid: & 1 & 2 & 6 & 16 & 108 & 260 & 1620 & 4084\\
of which are \dots \\
- symmetric: & 1 & 1 & 2 & 4 & 10 & 16 & 31 & 46\\
- almost doubly-\\
\phantom{- }symmetric: & 0 & 1 & 2 & 3 & 5 & 6 & 8 & 9
\\
- facet-defining: & 1 & 1 & 2 & 4 & 10 & 16 & 31 & 46\\
\bottomrule
\end{tabular}
\end{table}
\fi
\ifonecolumn
\begin{table}
\caption{Prefixes of $\bm m$-vectors for which the corresponding building block class $\mathcal T^{\bm m}$ is symmetric and valid for any prime $p$. For $p \leq 19$, these are exactly the prefixes that lead to valid facet-defining classes. The entries corresponding to \cref{prop:specialMvalid} are printed in bold.}
\label{table:facetClasses}
\vskip -2.0ex
\begin{tabular}{rlrl}
\toprule
$\sigma$ &valid $\bm m$ prefixes &$\sigma$&valid $\bm m$ prefixes\\ \midrule
1 &$\mathbf{01}$ & 3 & $\mathbf{0101}$\\
5 & $\mathbf{010101}$, $011001$ & 7 & $\mathbf{01010101}$, $01101001$, $01110001$\\
9 & \multicolumn{3}{l}{
$\mathbf{0101010101}$,
$0111010001$,
$0111100001$}\\
11 & \multicolumn{3}{l}{
$\mathbf{010101010101}$,
$011011001001$,
$011100110001$,
$011101010001$,
$011110100001$,
$011111000001$}\\
13 & \multicolumn{3}{l}{
$\mathbf{01010101010101}$,
$01101101001001$,
$01110101010001$,
$01110110010001$,
$01111001100001$,
$01111010100001$,}\\
&\multicolumn{3}{l}{
$01111101000001$,
$01111110000001$}\\
15&\multicolumn{3}{l}{
$\mathbf{0101010101010101}$,
$0111010101010001$,
$0111011100010001$,
$0111110011000001$,
$0111110101000001$,}\\
&\multicolumn{3}{l}{
$0111111010000001$,
$0111111100000001$}\\
17&\multicolumn{3}{l}{
$\mathbf{010101010101010101}$,
$011011011001001001$,
$011101010101010001$,
$011101100110010001$,$011101110100010001$}\\
&\multicolumn{3}{l}{
$011110110100100001$,
$011110111000100001$,
$011111000111000001$,$011111001011000001$,
$011111010101000001$,}\\
&\multicolumn{3}{l}{
$011111011001000001$,
$011111100110000001$,
$011111101010000001$,
$011111110100000001$,$011111111000000001$}\\
$p-1$&$\bm 0$ (all-zero $\bm m$-vector) \\
\bottomrule
\end{tabular}
\end{table}
\else
\begin{table}
\caption{Prefixes of $\bm m$-vectors for which the corresponding building block class $\mathcal T^{\bm m}$ is symmetric and valid for any prime $p$. For $p \leq 19$, these are exactly the prefixes that lead to valid facet-defining classes. The entries corresponding to \cref{prop:specialMvalid} are printed in bold.}
\label{table:facetClasses}
\vskip -2.0ex
\begin{tabular}{rlrl}
\toprule
$\sigma$ &valid $\bm m$ prefixes &$\sigma$&valid $\bm m$ prefixes\\ \midrule
1 &$\mathbf{01}$ & 3 & $\mathbf{0101}$\\
5 & $\mathbf{010101}$, $011001$ & 7 & $\mathbf{01010101}$, $01101001$, \\
& & & $01110001$\\
9 & \multicolumn{3}{l}{
$\mathbf{0101010101}$,
$0111010001$,
$0111100001$}\\
11 & \multicolumn{3}{l}{
$\mathbf{010101010101}$,
$011011001001$,
$011100110001$,}\\
& \multicolumn{3}{l}{
$011101010001$,
$011110100001$,
$011111000001$}\\
13 & \multicolumn{3}{l}{
$\mathbf{01010101010101}$,
$01101101001001$,
$01110101010001$,}\\
&\multicolumn{3}{l}{
$01110110010001$,
$01111001100001$,
$01111010100001$,}\\
&\multicolumn{3}{l}{
$01111101000001$,
$01111110000001$}\\
15&\multicolumn{3}{l}{
$\mathbf{0101010101010101}$,
$0111010101010001$,}\\
&\multicolumn{3}{l}{
$0111011100010001$,
$0111110011000001$,}\\
&\multicolumn{3}{l}{
$0111110101000001$,
$0111111010000001$,}\\
&\multicolumn{3}{l}{$0111111100000001$}\\
17&\multicolumn{3}{l}{
$\mathbf{010101010101010101}$,
$011011011001001001$,}\\
&\multicolumn{3}{l}{
$011101010101010001$,
$011101100110010001$,}\\
&\multicolumn{3}{l}{
$011101110100010001$,
$011110110100100001$,}\\
&\multicolumn{3}{l}{
$011110111000100001$,
$011111000111000001$,}\\
&\multicolumn{3}{l}{
$011111001011000001$,
$011111010101000001$,}\\
&\multicolumn{3}{l}{
$011111011001000001$,
$011111100110000001$,}\\
&\multicolumn{3}{l}{
$011111101010000001$,
$011111110100000001$,}\\
&\multicolumn{3}{l}{$011111111000000001$}\\
$p-1$&$\bm 0$ (all-zero $\bm m$-vector) \\
\bottomrule
\end{tabular}
\end{table}
\fi
\begin{lemma} \label{lem:counting_formulas}
Let $\mathcal C$ be an \enquote{all-ones} SPC code over $\mathbb{F}_p$ of length $d \geq 2$, and let $\mathcal{T}^{\bm m}$ be a basic valid building block class. Define
\begin{displaymath}
I^>_{c,J} = \begin{cases}
1 & \text{if $t_{0,c} + \sum_{j \in J} t_{\sigma,j} > 0$},\\
0 & \text{otherwise}
\end{cases}
\end{displaymath}
where $c \in \mathbb F_p$ and $J$ is a multiset over $\mathbb F_p \setminus \{0\}$ (including the empty multiset; since $t_{\sigma,0} = 0$ by Property~\ref{lem:bb-property2} of \cref{lem:bb-properties}, zero entries would contribute nothing to the sums but would lead to repeated counting). Furthermore, let
\ifonecolumn
\begin{displaymath}
I^=_{c,J} = \begin{cases}
1 & \text{if $c + \| J \|_1 = 0$ and $t_{0,c} + \sum_{j \in J} t_{\sigma,j} = 0$},\\
0 & \text{otherwise}.
\end{cases}
\end{displaymath}
\else
\begin{displaymath}
I^=_{c,J} = \begin{cases}
1 & \text{if $c + \| J \|_1 = 0$} \\
& \text{and
$t_{0,c} + \sum_{j \in J} t_{\sigma,j} = 0$},\\
0 & \text{otherwise}.
\end{cases}
\end{displaymath}
\fi
Now, any inequality $\boldtheta^T \boldx \leq \kappa$ from $\Theta^{\bm m}$
\begin{enumerate}
\item cuts the embeddings of
\begin{equation} \notag %
\sum_{c \in \mathbb F_p,\,J \text{ multiset of }\mathbb F_p} I^>_{c,J} {{d-1} \choose {|J|}} \frac{|J|!}{n^J_1! \cdots n^J_{k^J}!}
\end{equation}
elements $\boldzeta \in \mathbb{F}_p^d \setminus \mathcal C$, where $k^J$ is the number of \emph{types} (distinct elements) of the multiset $J$ with repetition numbers $n_1^J,\dotsc, n^J_{k^J}$, and %
\label{part:valid2_q}
\item is tight for the embeddings of exactly
\begin{equation} \notag %
\sum_{c \in \mathbb F_p,\,J \text{ multiset of }\mathbb F_p} I^=_{c,J} {{d-1} \choose {|J|}} \frac{|J|!}{n^J_1! \cdots n^J_{k^J}!}
\end{equation}
codewords of $\mathcal C$. %
\label{part:valid3_q}
\end{enumerate}
\end{lemma}
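For $p=3$ and $\bm m = (0,0,0)$ (with $\bm t_0 = (0,1,2)$ and $\bm t_\sigma = \bm t_2 = (0,-2,-1)$; see \cref{tab:values_tk}), the formulas can be evaluated with a few lines of Python; the enumeration runs over multisets of nonzero symbols, since entries $j=0$ satisfy $t_{\sigma,0}=0$ and each $\boldzeta$ should be counted once. For $d=3$ the results match the counts $d+1 = 4$ and $\frac12 d(d+1) = 6$ stated in \cref{prop:Theta1Facets} below.

```python
from itertools import combinations_with_replacement
from collections import Counter
from math import comb, factorial

p, sigma, d = 3, 2, 3
t0, ts = (0, 1, 2), (0, -2, -1)          # t_0 and t_sigma from tab:values_tk

def count(indicator):
    """Sum the multinomial weights over all (c, J) accepted by `indicator`."""
    total = 0
    for c in range(p):
        for size in range(d):            # |J| = 0, ..., d-1
            for J in combinations_with_replacement(range(1, p), size):
                if indicator(c, J):
                    arrangements = factorial(size)
                    for n in Counter(J).values():
                        arrangements //= factorial(n)
                    total += comb(d - 1, size) * arrangements
    return total

cut = count(lambda c, J: t0[c] + sum(ts[j] for j in J) > 0)        # I^>
tight = count(lambda c, J: (c + sum(J)) % p == 0                   # I^=
              and t0[c] + sum(ts[j] for j in J) == 0)
```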
\subsection{Facet-Defining Valid Building Block Classes}\label{sec:facet-defining}
In this subsection, we state several results regarding the dimension of the face defined by an inequality from $\Theta^{\bm m}$ when $\mathcal{T}^{\bm m}$ is a valid \emph{symmetric} building block class.
\begin{lemma}\label{lem:facets}
Let $\mathcal{T}^{\bm m}$ denote a valid \emph{symmetric} basic building block class. Then, any inequality from $\Theta^{\bm m}$ defines a face of $\mathcal{P}$ of dimension at least $d(p-1)-1 - \frac12(p-3)$, when $d \geq 3$ and $p \geq 3$.
\end{lemma}
\begin{IEEEproof}
See Appendix~\ref{prooffacets}.
\end{IEEEproof}
\begin{conjecture}\label{conj:facetsymmetric}
A valid building block class $\mathcal T^{\bm m}$ is facet-defining if and only if it is symmetric.
\end{conjecture}
The \enquote{if}-part of the conjecture was verified numerically (cf. \cref{rem:numerical-check-facet} in Appendix~\ref{prooffacets}) for all primes $p \leq 19$; see \cref{table:classNumbers} and also \cref{table:facetClasses}, from which the corresponding $\bm m$-vectors can be derived. The \enquote{only if}-part is supported by complementary numerical experiments for all primes $p \leq 19$; see Appendix~\ref{app:only-if-part}.
While we are not able to prove \cref{conj:facetsymmetric} for general $p$, the following stronger result (compared to \cref{lem:facets}) holds for valid almost doubly-symmetric building block classes.
\begin{proposition}\label{lem:facets_modified}
Let $\mathcal{T}^{\bm m}$ denote a valid \emph{almost doubly-symmetric} basic building block class. Then, any inequality from $\Theta^{\bm m}$ defines a facet of $\mathcal{P}$ for $d \geq 3$ and for any prime $p \geq 3$. In particular, all inequalities derived from the building block classes in \cref{prop:specialMvalid} define facets of $\P$ for any prime $p \geq 2$.
\end{proposition}
\begin{IEEEproof}
For $p\geq 3$, see Appendix~\ref{prooffacets_modified}. The statement for $p=2$ (which is included by the \enquote{in particular}-part of the proposition) has been shown previously in LP decoding literature, as pointed out in \cref{sec:q2}.
\end{IEEEproof}
\subsection{More Inequalities by Rotation}\label{sec:automorph}
For a set $\Theta^{\bm m}$ of inequalities and $0 \neq h \in \mathbb F_p$, denote by $\varphi_h(\Theta^{\bm m})$ the set of all inequalities derived by replacing each $\boldtheta^T \boldx \leq \kappa$ in $\Theta^{\bm m}$ by $\rot_{\bm h}(\boldtheta)^T \boldx \leq \kappa$, where $\bm h = (h,\dotsc,h)$. By \cref{cor:rotation-autfq}, the inequalities in $\varphi_h(\Theta^{\bm m})$ are valid for $\P$ if and only if those in $\Theta^{\bm m}$ are.
In summary, the set of all inequalities obtained from a class $\mathcal T^{\bm m}$ is denoted by
\[ \Phi(\Theta^{\bm m}) = \bigcup_{0 \neq h \in \mathbb F_p} \varphi_h(\Theta^{\bm m}).\]
\begin{remark}\label{rem:phiThetaunique}
The uniqueness statement of \cref{rem:Thetamunique} remains valid among $\Phi(\Theta^{\bm m})$: if $\boldtheta^{1T}\boldx \leq \kappa^1$ and $\boldtheta^{2T} \boldx \leq \kappa^2$ are from $\varphi_h(\Theta^{\bm m})$ and $\varphi_{h'}(\Theta^{\bm m})$, respectively, where $\boldtheta^1 = a \boldtheta^2$ and $\kappa^1 = a \kappa^2$ and $h,h'\in \mathbb F_p \setminus\{0\}$, then the same argument as in \cref{rem:Thetamunique} shows that $a \neq 1$ is impossible. For $a=1$, Property~\ref{lem:bb-property1} of \cref{lem:bb-properties} implies that $h=h'$, hence this case reduces to \cref{rem:Thetamunique}.
\end{remark}
\begin{remark}\label{rem:inequalityCount}
Since $\abs{\GL(\mathbb F_p)} = \abs{\mathbb F_p \setminus \{0\}} = p-1$, $\Phi(\Theta^{\bm m})$ contains $(p-1) p^{d-1}$ unique inequalities.
\end{remark}
\subsection{Redundant Inequalities}\label{sec:redundant}
The procedure described so far leads to the set
\begin{equation} \notag
\Theta(d) = \Delta_p^d \cup \bigcup_{\bm m \text{ valid}} \Phi(\Theta^{\bm m})
%
\end{equation}
of (in)equalities that are valid for $\P$, where $\Delta_p^d$ is as defined in \cref{def:spx}; i.e.,\ they describe a relaxation of $\P$ and can thus be used for (relaxed) LP decoding as described in \cref{sec:lpdecoding}.
In general, however, some entries of $\Theta(d)$ can be \emph{redundant}; an inequality $\boldtheta^T \boldx \leq \kappa$ in $\Theta(d)$ is redundant if it is representable as a linear combination of the other (in)equalities in $\Theta(d)$, where the coefficients of all \emph{in}equalities must be nonnegative:
\begin{equation}
\label{eq:redundancy}
\boldtheta = \sum_{j=1}^N \lambda_j \boldtheta^j + \sum_{i=1}^d \mu_i \bm s^i \quad\text{and}\quad \kappa = \sum_{j=1}^N \lambda_j \kappa^j + \sum_{i=1}^d \mu_i
\end{equation}
where $\{\boldtheta^{jT} \boldx \leq \kappa^j\}_{j=1}^N$ are the remaining \emph{in}equalities in $\Theta(d)$, $\lambda_j \geq 0$ for $j \in \range N$, and $\{\bm s^{iT} \boldx = 1\}_{i=1}^d$ are the $d$ equations from \cref{eq:spx-eq} that describe $\aff(\P)$ (which are the only equations in $\Theta(d)$ by construction). Geometrically, an inequality is redundant if the face it induces is a subset of a face induced by some other inequality.
\begin{observation}
If an inequality $\boldtheta^T \boldx \leq \kappa$ of $\Theta(d)$ is redundant for $d \geq 3$, then all $\mu_i = 0$ in \cref{eq:redundancy}, i.e.,\ the equations \cref{eq:spx-eq} are not necessary in the above representation of the inequality.
\end{observation}
\begin{IEEEproof}
Assume $\boldtheta^T \boldx \leq \kappa$ in $\Theta(d)$ is redundant and satisfies \cref{eq:redundancy} for $\{\lambda_j\}_{j=1}^N$ and $\{\mu_i\}_{i=1}^d$, where $\mu_{i^*} \neq 0$ for some $i^* \in \range d$. By Property~\ref{lem:bb-property2} of \cref{lem:bb-properties}, $\theta^j_{i,0}=0$ for $i \in \range d$. On the other hand, $s^{i}_{i,0}=1$ by \cref{eq:spx-eq}, while $s^{i}_{k,0}=0$ for $i \neq k$, from which it follows that $\theta_{i^*,0}=\mu_{i^*} \neq 0$. Again by Property~\ref{lem:bb-property2} of \cref{lem:bb-properties}, this means that the redundant inequality cannot be contained in $\Phi(\Theta^{\bm m})$.
But then it must be in $\Delta_p^d$, i.e.,\ one of \cref{eq:spx-geq}, say $-x_{k,l} \leq 0$ for some $k \in \range d, l \in \mathbb F_p$. By \cref{prop:Pjdim}, it is a facet, which by basic polyhedral theory implies $\rank(\{\boldtheta^i\colon \lambda_i \neq 0\}) = 1$. Hence, we can assume wlog.\ that $\lambda_i \neq 0$ for only one $i=i^*$, i.e.,\ $\boldtheta = \lambda_{i^*} \boldtheta^{i^*} + \sum_i \mu_i \bm s^i$. This is impossible both when the corresponding inequality $\boldtheta^{i^*T}\boldx \leq \kappa^{i^*}$ is itself of type \cref{eq:spx-geq} and when it is obtained from \cref{constr:hilo}.
\end{IEEEproof}
The observation allows us to ignore the equations \cref{eq:spx-eq} in the study of redundancy within $\Theta(d)$. The following proposition considers the possibility that two inequalities in $\Theta(d)$ induce the same face of $\P$, i.e.,\ that exactly one $\lambda_i \neq 0$ in \cref{eq:redundancy}, so that one of the inequalities is a (positive) scalar multiple of the other. By \cref{rem:phiThetaunique}, this implies that the inequalities originate from different classes $\mathcal T^{\bm m}\neq \mathcal T^{\bm m'}$.
\begin{proposition}[Equivalent Inequalities]\label{prop:equivalent}
Let the inequalities $\boldtheta^T \boldx \leq \kappa$ and $\boldtheta'^T \boldx \leq \kappa'$ be contained in $\varphi(\Theta^{\bm m})$ and $\varphi'(\Theta^{\bm m'})$, respectively, where $\mathcal T^{\bm m} \neq \mathcal T^{\bm m'}$ and $\varphi,\varphi' \in \GL(\mathbb F_p)$. Assume that $\boldtheta = a \boldtheta'$ and $\kappa = a \kappa'$ for some $a \geq 0$, where wlog.\ $a \leq 1$ (swap $\bm m$ and $\bm m'$ otherwise). Then, $\bm m = (0,\dotsc,0)$, $\bm m' = (0,1,0,1,\dotsc)$, and $a = 1/3$ (for $p=2$) or $a=1/2$ (for $p > 2$).
Conversely, $\Phi(\Theta^{(0,\dotsc,0)})$ and $\Phi(\Theta^{(0,1,0,1,\dotsc)})$ are equivalent:
\[ \boldtheta^T \boldx \leq \kappa \in \Phi(\Theta^{(0,1,0,1,\dotsc)}) \Leftrightarrow a\boldtheta^T \boldx \leq a\kappa \in \Phi(\Theta^{(0,\dotsc,0)})\]
with $a$ as above.
\end{proposition}
\begin{IEEEproof}
See Appendix~\ref{app:equivalent}.
\end{IEEEproof}
\begin{corollary}
Except for $\mathcal T^{\bm m}$ with $\bm m = (0,1,\dotsc)$, the inequalities derived from building block classes that were shown to induce facets in \cref{sec:facet-defining} are irredundant, i.e.,\ necessary in the description of $\P$. In particular, if \cref{sec:facet-defining} leads to $N$ facet-defining valid building block classes, then, by \cref{rem:inequalityCount} and \cref{prop:Pjdim}, $\P$ has at least $(N-1)(p-1)p^{d-1} + dp$ facets.
As \cref{prop:specialMvalid} already leads to $\lceil(p+1)/2\rceil$ classes,
\[ \left\lceil(p-1)^2/2\right\rceil p^{d-1} + dp\]
is a lower bound on the number of facets of $\P$.
\end{corollary}
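The displayed bound matches the general formula because $(N-1)(p-1) = \lceil(p-1)/2\rceil(p-1) = \lceil(p-1)^2/2\rceil$ for $p=2$ and all odd $p$; as a quick Python aside (function names are illustrative):

```python
def classes_from_prop(p):
    """N = ceil((p+1)/2), the class count from Proposition prop:specialMvalid."""
    return (p + 2) // 2

def facet_bound_coeff(p):
    """ceil((p-1)^2 / 2), the coefficient in the displayed lower bound."""
    return ((p - 1) ** 2 + 1) // 2
```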
\subsection{Valid Inequalities for General SPC Codes } \label{sec:generalSPC}
This section has so far focused on \enquote{all-ones} SPC codes. As noted in \cref{cor:rotation-generalSPC}, however, each of the facets constructed in this section can be translated into a facet of the codeword polytope of a general SPC code by applying the appropriate rotation operation to the respective $\boldtheta$-vector. For an SPC code $\mathcal C(\bm h)$ with parity-check matrix $\bm h$ and $\varphi_r \in \GL(\mathbb F_p)$, we denote the respective equivalent of $\Theta^{\bm m}$ by \[\varphi_{\bm h,r}(\Theta^{\bm m}) = \{ (\rot_{\bm h}\circ \rot_r)(\boldtheta)^T \boldx \leq \kappa\colon (\boldtheta^T \boldx \leq \kappa) \in \Theta^{\bm m}\}.\]
\section{Explicit Building Block Classes for Small Values of $p$} \label{sec:explicit_3_5_7}
In this section, we present explicit building block classes for $p = 3$, $5$, and $7$ obtained from the general construction of the previous section. For $p=3$ (resp.\ $5$) we prove (resp.\ conjecture) that these classes together with $\Delta_p^d$ give a complete and irredundant set of linear (in)equalities describing the convex hull of an embedded SPC code of length $d$. However, for $p=7$, this is not the case, and we present two additional nonbasic classes (up to additive automorphisms). Based on numerical experiments, we conjecture that these classes (basic and nonbasic) together with $\Delta_7^d$ give a complete and irredundant set of linear (in)equalities describing the convex hull of an embedded length-$d$ SPC code over $\mathbb F_7$. For completeness, we also consider the binary case $p=2$.
\subsection{The Case $p=2$} \label{sec:q2}
For $p=2$, there is only a single vector $\bm m=(0,0)$ that gives a valid basic building block class (the alternative, $\bm m'=(0,1)$, is redundant by \cref{prop:equivalent}). In particular, $\bm t^{\bm m}_0=(0,1)$ and $\bm t^{\bm m}_1=(0,-1)$, with $[\sigma]_\mathbb Z = t_{0,\sigma} = 1$. %
Thus, the conditions from \cref{eq:hilo-conditions'} state that the inequalities obtained from \cref{constr:hilo} are exactly those for which $\left[\abs{V_0^\boldtheta}\right]_2 = [1]_2$, i.e.,\ $\abs{V_0^\boldtheta}$ is odd, and $\kappa = \abs{V_0^\boldtheta} - 1$.
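This characterization is easy to test exhaustively. The following Python sketch (using $\bm t_0 = (0,1)$, $\bm t_1 = (0,-1)$ and the constant-weight embedding, so that $\boldtheta^T \mathsf F_{\mathrm v}(\boldc) = \sum_i t_{k_i,c_i}$; the function name is illustrative) verifies that every inequality with $\abs{V_0^\boldtheta}$ odd and $\kappa = \abs{V_0^\boldtheta} - 1$ holds for all even-weight words of length $d$.

```python
from itertools import product

T = {0: (0, 1), 1: (0, -1)}              # t_0 and t_1 for p = 2

def feldman_valid(d):
    """Check all inequalities with |V_0| odd and kappa = |V_0| - 1 against
    the even-weight (SPC) code of length d, constant-weight embedding."""
    code = [w for w in product((0, 1), repeat=d) if sum(w) % 2 == 0]
    for ks in product((0, 1), repeat=d):
        V0 = ks.count(0)
        if V0 % 2 == 1:                  # |V_0| odd
            kappa = V0 - 1
            if any(sum(T[k][c] for k, c in zip(ks, w)) > kappa for w in code):
                return False
    return True
```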
As $\GL(\mathbb F_2) = \{ \varphi_1\}$ consists of only the identity, there is a single class of facets plus $\Delta_2^d$ which describes $S_1^d$, where $S_1 = \conv\{ (1,0), (0,1) \}$.
Note that the existing literature on binary LP decoding uses Flanagan's embedding instead of constant-weight embedding. Using the procedure from Appendix~\ref{app:embeddings}, we obtain corresponding building blocks $\boldt'_0 = (1)$ and $\boldt'_1= (-1)$, which exactly leads to the so-called \enquote{forbidden-set} or \enquote{Feldman} inequalities \cite{fel05} which date back to an earlier work of Jeroslow \cite{jer75}. Furthermore, the procedure replaces the simplex constraints $\Delta_2^d$ by the usual box constraints $x_{i,1} \geq 0$ (by \cref{eq:hsimplex-0}) and $x_{i,1} \leq 1$ (by \cref{eq:hsimplex-1}), which shows that the inequalities constructed in this paper are indeed a generalization of the well-known binary case.
\subsection{The Case $p=3$} \label{sec:q3}
\begin{figure*}[!t]
\begin{displaymath}
\begin{array}{llclcl}
& -x_1 -2x_2 +x_3 +2x_4 -2x_5 -x_6 & \leq 0, & -2x_1 -x_2 +2x_3 +x_4 -x_5 -2x_6 &\leq 0, \notag \\
& -x_1 -2x_2 +x_3 - x_4 +x_5 - x_6 &\leq 0, & -2x_1 -x_2 -x_3 +x_4 -x_5 +x_6 &\leq 0, \notag \\
& -x_1 -2x_2 -2x_3 -x_4 +x_5 +2x_6 &\leq 0, & -2x_1 -x_2 -x_3 -2x_4 +2x_5 +x_6 &\leq 0, \notag \\
& -x_1 +x_2 +x_3 -x_4 -2x_5 -x_6 &\leq 0, & x_1 -x_2 -x_3 +x_4 -x_5 -2x_6 &\leq 0, \notag \\
& -x_1 +x_2 -2x_3 -x_4 +x_5 -x_6 &\leq 0, & x_1 -x_2 -x_3 -2x_4 -x_5 +x_6 &\leq 0, \notag \\
& 2x_1 +x_2 -2x_3 -x_4 -2x_5 -x_6 &\leq 0, & x_1 + 2x_2 -x_3 -2x_4 -x_5 -2x_6 &\leq 0, \notag \\
& -x_1 +x_2 +x_3 +2x_4 +x_5 +2x_6 &\leq 3, &x_1 -x_2 +2x_3 +x_4 +2x_5 +x_6 &\leq 3, & \notag \\
& 2x_1 +x_2 +x_3 -x_4 +x_5 +2x_6 &\leq 3, & x_1 +2x_2 -x_3 +x_4 +2x_5 +x_6 &\leq 3, & \notag \\
& 2x_1 +x_2 +x_3 +2x_4 +x_5 -x_6 &\leq 3, & x_1 +2x_2 +2x_3 +x_4 -x_5 +x_6 &\leq 3. & \notag
\end{array}
\end{displaymath}
\hrulefill
\vspace*{-2mm}
\end{figure*}
For $p=3$, there is also only a single vector $\bm m=(0,0,0)$ that gives a valid, irredundant facet-defining basic building block class (see \cref{table:facetClasses}). In particular, $\bm t^{\bm m}_0=(0,1,2)$, $\boldt^{\bm m}_1 = (0,1, -1)$, and $\boldt^{\bm m}_2 = (0,-2,-1)$. %
In the following, the explicit dependence of $\bm m =(0,0,0)$ will be omitted to simplify notation.
The building blocks $\boldt_{k}$ and values from \cref{def:hilo-notation} and \cref{lem:hilo-formulas} are presented in \cref{tab:values_tk}.
\begin{table}
\centering
\caption{Structural properties of the building blocks $\boldt^{\bm m}_0$, $\boldt^{\bm m}_1$, and $\boldt^{\bm m}_2$ for $p=3$ and $\bm m=(0,0,0)$. For this $\bm m$, $\sigma = 2$ (cf.\ \cref{def:hilo-notation}).} %
\vskip -2.0ex
\label{tab:values_tk}
\begin{tabular}{*5{>{$}c<{$}}}
\toprule
k & 0 & 1 & 2 & \text{expression}\\
\midrule
\boldt_k & (0, 1, 2) & (0, 1,-1) & (0, -2, -1) \\
t_{k,\uparrow} & 2 & 1 & 0 & 2-k\\
t_{k,\downarrow} & 0 & 2 & 1 & -k\\
\max\left(\bm t_k\right) & 2 & 1 & 0 & 2-[k]_\mathbb Z\\
\min\left(\bm t_k\right) & 0 & -1 & -2 &-[k]_\mathbb Z\\
\bottomrule
\end{tabular}
\end{table}
\begin{proposition}\label{prop:Theta1Facets}
Every inequality $\boldtheta^T \boldx \leq \kappa$ in $\Theta^{\bm m}$ defines a facet of $\P$ for $d \geq 3$. Moreover,
\begin{enumerate}
\item there are $d+1$ elements $\boldzeta \in \mathbb{F}_3^d \setminus \mathcal C$ whose embeddings are cut by the inequality, i.\,e., $\boldtheta^T\mathsf F_{\mathrm v}(\boldzeta) > \kappa$.\label{part:valid1_q3}
\item The inequality is tight for the embeddings of exactly $\frac12 d(d+1)$ codewords of $\mathcal C$.\label{part:valid2_q3}
\end{enumerate}
\end{proposition}
\begin{IEEEproof}
%
The class $\mathcal{T}^{\bm m}$ is almost doubly-symmetric (see \cref{prop:specialMvalid}), and hence it follows from \cref{lem:facets_modified} that any inequality from $\Theta^{\bm m}$ defines a facet of $\mathcal{P}$ for $d \geq 3$.
The remaining counting formulas are special cases of the general counting formulas of \cref{lem:counting_formulas} (details omitted for brevity).
\end{IEEEproof}
\begin{theorem} \label{thm:facetComplete}
Let $\mathcal C$ be the ternary \enquote{all-ones} SPC code of length $d \geq 3$ and $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$. Then, $\Theta(d) = \Theta^{\bm m} \cup \varphi_2(\Theta^{\bm m}) \cup \Delta_3^d = \Delta_3^d \cup \Phi(\Theta^{\bm m})$ with $\abs{\Theta(d)} = 2\cdot 3^{d-1} + 4d$ gives a complete and irredundant description of $\P$ (note that $\varphi_2$ is the only element of $\GL(\mathbb F_3)$ besides the identity).
\end{theorem}
\begin{IEEEproof}
See Appendix~\ref{proofFacetComplete3}.
\end{IEEEproof}
\begin{example}
Consider a ternary SPC code $\mathcal{C}$ of length $d=3$, defined by the parity-check matrix $\boldH = (1,2,2)$. In the ternary case, the constant-weight embedding (from \cref{def:Constant}) is as follows: $0 \mapsto (1,0,0)$, $1 \mapsto (0,1,0)$, and $2 \mapsto (0,0,1)$. %
The image $\mathsf{F}_{\rm v}(\mathcal{C})$ (which is a nonlinear binary code) has length $3 d = 9$ and contains nine codewords as follows:
\ifonecolumn
\begin{align}
\{ &(1 0 0 1 0 0 1 0 0), (1 0 0 0 1 0 0 0 1), (1 0 0 0 0 1 0 1 0),
(0 1 0 1 0 0 0 1 0), (0 1 0 0 1 0 1 0 0), (0 1 0 0 0 1 0 0 1),
(0 0 1 1 0 0 0 0 1), (0 0 1 0 1 0 0 1 0), \notag \\
&(0 0 1 0 0 1 1 0 0) \}. \notag
\end{align}
\else
\begin{align}
\{ &(1 0 0 1 0 0 1 0 0), (1 0 0 0 1 0 0 0 1), (1 0 0 0 0 1 0 1 0), \notag \\
&(0 1 0 1 0 0 0 1 0), (0 1 0 0 1 0 1 0 0), (0 1 0 0 0 1 0 0 1), \notag \\
&(0 0 1 1 0 0 0 0 1), (0 0 1 0 1 0 0 1 0), (0 0 1 0 0 1 1 0 0) \}. \notag
\end{align}
\fi
By \cref{thm:facetComplete}, the linear (in)equalities obtained from \cref{sec:generalSPC} and $\Delta_3^d$ give a complete and irredundant description %
of the convex hull $\P = \conv(\mathsf{F}_{\rm v}(\mathcal{C}))$. These inequalities (except for the ones from $\Delta_3^d$) %
are shown at the top of
\ifonecolumn
the previous page.
\else
the page.
\fi
\end{example}
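The claims of \cref{prop:Theta1Facets} can be replayed numerically for this example. The sketch below enumerates the nine codewords, embeds them, and checks the first displayed inequality; reading its variables $x_1,\dotsc,x_6$ as the embedding coordinates $x_{1,1},x_{1,2},\dotsc,x_{3,2}$ (with zero coefficients on the $x_{i,0}$ coordinates) is an assumption on the displayed variable ordering, but one that is consistent with the counts of the proposition.

```python
from itertools import product

p, d = 3, 3
H = (1, 2, 2)
codewords = [c for c in product(range(p), repeat=d)
             if sum(h * z for h, z in zip(H, c)) % p == 0]
assert len(codewords) == 9  # the nine codewords listed above

def embed(c):
    # constant-weight embedding: symbol z -> unit vector e_z of length p
    return [1 if z == j else 0 for z in c for j in range(p)]

# First displayed inequality, -x1 -2x2 -2x3 -x4 +x5 +2x6 <= 0, read in the
# reduced coordinates (x_{1,1}, x_{1,2}, ..., x_{3,2}); coefficient 0 is
# placed on every x_{i,0} coordinate (an assumption about the ordering).
theta = [0, -1, -2, 0, -2, -1, 0, 1, 2]
kappa = 0

values = [sum(a * b for a, b in zip(theta, embed(c))) for c in codewords]
assert all(v <= kappa for v in values)       # the inequality is valid
assert sum(v == kappa for v in values) == 6  # tight for d(d+1)/2 = 6 codewords

noncode = [z for z in product(range(p), repeat=d) if z not in codewords]
cut = [z for z in noncode
       if sum(a * b for a, b in zip(theta, embed(z))) > kappa]
assert len(cut) == 4                         # cuts d + 1 = 4 embeddings
```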
\subsection{The Case $p=5$} \label{sec:q5}
\begin{table*}
\centering
\caption{Structural properties for $p=5$ of the basic building block classes $\mathcal T^{\bm m = (0,0,0,0,0)}$ and $\mathcal T^{\bm m' = (0,1,0,0,0)}$ with $\sigma^{\bm m} = 4$ and $\sigma^{\bm m'} = 1$.}
\vskip -2.0ex
\label{tab:values_tk_q5_1}
\begin{tabular}{*7{>{$}c<{$}}}
\toprule
k & 0 & 1 & 2 & 3 & 4\\
\midrule
\boldt^{\bm m}_k & (0,1,2,3,4) & (0,1,2,3,-1) & (0,1,2,-2,-1) & (0,1,-3,-2,-1) & (0,-4,-3,-2,-1) \\
\midrule
t^{\bm m}_{k,\uparrow} & 4 & 3 & 2 & 1 & 0 \\ %
t^{\bm m}_{k,\downarrow} & 0 & 4 & 3 & 2 & 1 \\ %
\max\left(\bm t^{\bm m}_k\right) & 4 & 3 & 2 & 1 & 0 \\
\min\left(\bm t^{\bm m}_k\right) & 0 & -1 & -2 & -3 & -4 \\
\midrule
\boldt^{\bm m'}_k & (0,6,2,3,4) & (0,-4,-3,-2,-6) & (0,1,2,-2,4) & (0,1,-3,3,-1) & (0,-4,2,-2,-1) \\
\midrule
t^{\bm m'}_{k,\uparrow} & 1 & 0 & 4 & 3 & 2 \\ %
t^{\bm m'}_{k,\downarrow} & 0 & 4 & 3 & 2 & 1 \\ %
\max\left(\bm t^{\bm m'}_k\right) & 6 & 0 & 4 & 3 & 2 \\
\min\left(\bm t^{\bm m'}_k\right) & 0 & -6 & -2 & -3 & -4 \\
\bottomrule
\end{tabular}
\end{table*}
For $p = 5$, there are two vectors $\bm m = (0, 0, 0,0,0)$ and $\bm m' = (0,1,0,0,0)$ that give valid, irredundant facet-defining basic building block classes (see \cref{table:facetClasses}). The five building blocks and some properties of each class are tabulated in \cref{tab:values_tk_q5_1}.
\begin{example} \label{lem:conditions_q5}
Using \cref{lem:conditions}, we find that an inequality $\boldtheta^T \boldx \leq \kappa$ with $\boldtheta = (\bm t^{\bm m}_{k_1} \mid \dotsc \mid \bm t^{\bm m}_{k_d})^T$ and $\kappa \in \mathbb R$ is in $\Theta^{\bm m}$ if and only if
%
\ifonecolumn
\begin{displaymath}
\left[4\abs{V^\boldtheta_0}+ 3 \abs{V^\boldtheta_1} +2 \abs{V^\boldtheta_2} + \abs{V^\boldtheta_3}\right]_5 = [4]_5 %
\text{ and } \kappa =4\abs{V^\boldtheta_0}+3\abs{V^\boldtheta_1}+2\abs{V^\boldtheta_2}+\abs{V^\boldtheta_3}-4 \notag %
\end{displaymath}
\else
\begin{align}
& \left[4\abs{V^\boldtheta_0}+ 3 \abs{V^\boldtheta_1} +2 \abs{V^\boldtheta_2} + \abs{V^\boldtheta_3}\right]_5 = [4]_5 \notag \\%\label{eq:kdCondition_q5}\\
\text{and\quad}&\kappa =4\abs{V^\boldtheta_0}+3\abs{V^\boldtheta_1}+2\abs{V^\boldtheta_2}+\abs{V^\boldtheta_3}-4 \notag %
\end{align}
\fi
%
%
while an inequality $\boldtheta'^T \boldx \leq \kappa'$ with $\boldtheta' = (\bm t^{\bm m'}_{k_1} \mid \dotsc \mid \bm t^{\bm m'}_{k_d})^T$ and $\kappa' \in \mathbb R$ is in $\Theta^{\bm m'}$ if and only if
%
\ifonecolumn
\begin{displaymath}
\left[\abs{V^{\boldtheta'}_0} + 4 \abs{V^{\boldtheta'}_2} +3 \abs{V^{\boldtheta'}_3}+ 2\abs{V^{\boldtheta'}_4}\right]_5 = [1]_5 %
\text{ and } \kappa' =6\abs{V^{\boldtheta'}_0}+4\abs{V^{\boldtheta'}_2}+3\abs{V^{\boldtheta'}_3}+2\abs{V^{\boldtheta'}_4}-6. \notag %
\end{displaymath}
\else
\begin{align}
& \left[\abs{V^{\boldtheta'}_0} + 4 \abs{V^{\boldtheta'}_2} +3 \abs{V^{\boldtheta'}_3}+ 2\abs{V^{\boldtheta'}_4}\right]_5 = [1]_5 \notag \\%\label{eq:kdCondition_2_q5}\\
\text{and}\quad&\kappa' =6\abs{V^{\boldtheta'}_0}+4\abs{V^{\boldtheta'}_2}+3\abs{V^{\boldtheta'}_3}+2\abs{V^{\boldtheta'}_4}-6. \notag %
\end{align}
\fi
%
%
\end{example}
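The two membership conditions can be cross-checked against the general conditions behind the hi-lo construction. The following sketch enumerates all $(k_1,\dotsc,k_d) \in \mathbb F_5^d$ for $d=4$ and confirms that the $\abs{V^{\boldtheta}_j}$-based conditions above are equivalent to the $k$-sum condition $\sum_i k_i = [d-1]_5\,\sigma$ and that the $\kappa$-formulas agree with $\kappa = (d-1)\, t_{0,\sigma} - \sum_i t_{0,k_i}$; the latter two expressions are our reading of \cref{lem:conditions}, so this is a consistency check rather than part of the construction.

```python
from itertools import product

p, d = 5, 4
# leading building blocks t_0 and shifts sigma of the two classes (from the table)
classes = {
    "m":  {"t0": (0, 1, 2, 3, 4), "sigma": 4,
           # [4|V0| + 3|V1| + 2|V2| + |V3|]_5 = [4]_5
           "cond": lambda V: (4*V[0] + 3*V[1] + 2*V[2] + V[3]) % 5 == 4,
           "kappa": lambda V: 4*V[0] + 3*V[1] + 2*V[2] + V[3] - 4},
    "m'": {"t0": (0, 6, 2, 3, 4), "sigma": 1,
           # [|V0| + 4|V2| + 3|V3| + 2|V4|]_5 = [1]_5
           "cond": lambda V: (V[0] + 4*V[2] + 3*V[3] + 2*V[4]) % 5 == 1,
           "kappa": lambda V: 6*V[0] + 4*V[2] + 3*V[3] + 2*V[4] - 6},
}

for name, cl in classes.items():
    t0, sigma = cl["t0"], cl["sigma"]
    for ks in product(range(p), repeat=d):
        V = [ks.count(j) for j in range(p)]              # V_j = {i : k_i = j}
        in_theta = sum(ks) % p == ((d - 1) * sigma) % p  # general k-condition
        assert in_theta == cl["cond"](V)
        if in_theta:
            # kappa from the general formula (d-1) t_{0,sigma} - sum_i t_{0,k_i}
            kappa = (d - 1) * t0[sigma] - sum(t0[k] for k in ks)
            assert kappa == cl["kappa"](V)
```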
The following proposition is an adapted version of \cref{prop:Theta1Facets} for the case $p=5$.
\begin{proposition}\label{prop:validIneq_q5}
Every inequality $\boldtheta^T \boldx \leq \kappa$ in $\Theta^{\bm m} \cup \Theta^{\bm m'}$ defines a facet of $\P$, $d \geq 3$. Moreover, %
\begin{enumerate}
%
\item there are
\begin{displaymath}
\begin{cases}
4 + 6(d-1) + 4 \binom{d-1}{2} + \binom{d-1}{3} & \text{for $\Theta^{\bm m}$},\\
4 + 6(d-1) + 3 \binom{d-1}{2} & \text{for $\Theta^{\bm m'}$} \end{cases}
\end{displaymath}
elements $\boldzeta \in \mathbb{F}_5^d \setminus \mathcal C$ whose embeddings are cut by the inequality, i.\,e., $\boldtheta^T\mathsf F_{\mathrm v}(\boldzeta) > \kappa$.\label{part:valid2_q5_theta}
\item The inequality is tight for the embeddings of exactly
\begin{displaymath}
\begin{cases}
1+4(d-1) + 6 \binom{d-1}{2} + 4\binom{d-1}{3} + \binom{d-1}{4} & \text{for $\Theta^{\bm m}$},\\
1+4(d-1) + 4 \binom{d-1}{2} + \binom{d-1}{3} & \text{for $\Theta^{\bm m'}$} \end{cases}
\end{displaymath}
codewords of $\mathcal C$.\label{part:valid3_q5_theta}
\end{enumerate}
\end{proposition}
\begin{IEEEproof}
Both classes $\mathcal{T}^{\bm m}$ and $\mathcal{T}^{\bm m'}$ are almost doubly-symmetric (see \cref{prop:specialMvalid}), and hence it follows from \cref{lem:facets_modified} that any inequality from $\Theta^{\bm m}$ or $\Theta^{\bm m'}$ defines a facet of $\mathcal{P}$ for $d \geq 3$. The remaining counting formulas are special cases of the general counting formulas of \cref{lem:counting_formulas} (details omitted for brevity).
\end{IEEEproof}
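For $d=3$, the statement can be checked exhaustively. The following sketch generates all $25$ inequalities of $\Theta^{\bm m}$ via the hi-lo construction (assuming $t^{\bm m}_{k,\uparrow} = [\sigma - k]_5$ and $t^{\bm m}_{k,\downarrow} = [-k]_5$, as tabulated in \cref{tab:values_tk_q5_1}) and verifies validity as well as the claimed tightness and cut counts.

```python
from itertools import product

p, d, sigma = 5, 3, 4
t = {0: (0, 1, 2, 3, 4), 1: (0, 1, 2, 3, -1), 2: (0, 1, 2, -2, -1),
     3: (0, 1, -3, -2, -1), 4: (0, -4, -3, -2, -1)}  # class T^m from the table

def embed(c):
    # constant-weight embedding of a length-d word over F_5
    return [1 if z == j else 0 for z in c for j in range(p)]

code = [c for c in product(range(p), repeat=d) if sum(c) % p == 0]

tights, cuts = set(), set()
for k1, k2 in product(range(p), repeat=d - 1):
    # hi-lo construction: c_i = t_{k_i,up} = [sigma - k_i]_5, then k_3 = [-c_3]_5
    c1, c2 = (sigma - k1) % p, (sigma - k2) % p
    c3 = (-c1 - c2) % p
    ks, c = (k1, k2, (-c3) % p), (c1, c2, c3)
    assert sum(ks) % p == ((d - 1) * sigma) % p       # k-condition of the class
    theta = [x for k in ks for x in t[k]]
    val = lambda z: sum(a * b for a, b in zip(theta, embed(z)))
    kappa = val(c)
    assert all(val(cw) <= kappa for cw in code)       # validity
    tights.add(sum(val(cw) == kappa for cw in code))
    cuts.add(sum(val(z) > kappa
                 for z in product(range(p), repeat=d) if z not in code))
# counts from the proposition (for d = 3 the higher binomial terms vanish):
assert tights == {1 + 4 * (d - 1) + 6}  # = 15
assert cuts == {4 + 6 * (d - 1) + 4}    # = 20
```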
Now, for each of the three nontrivial automorphisms $\varphi_2, \varphi_3, \varphi_4 \in \GL(\mathbb F_5)$ we obtain additional inequalities (as described in \cref{sec:automorph}) if we apply the corresponding permutation to each building block $\boldt_k$ in an inequality. %
\begin{example}
By applying the automorphism $\varphi_4$ to the building blocks of $\mathcal T^{\bm m'}$ (see \cref{tab:values_tk_q5_1}), we get the building blocks
\ifonecolumn
\begin{align*}
\varphi_4(\bm t^{\bm m'}_0) &= (0,4,3,2,6),\;
\varphi_4(\bm t^{\bm m'}_1) = (0, -6, -2, -3, -4),\;
\varphi_4(\bm t^{\bm m'}_2) = (0,4,-2,2,1),\\
\varphi_4(\bm t^{\bm m'}_3) &= (0,-1,3,-3,1),\;
\varphi_4(\bm t^{\bm m'}_4) = (0,-1,-2,2,-4).
\end{align*}
\else
\begin{align*}
\varphi_4(\bm t^{\bm m'}_0) &= (0,4,3,2,6),\\
\varphi_4(\bm t^{\bm m'}_1) &= (0, -6, -2, -3, -4),\\
\varphi_4(\bm t^{\bm m'}_2) &= (0,4,-2,2,1),\\
\varphi_4(\bm t^{\bm m'}_3) &= (0,-1,3,-3,1),\\
\varphi_4(\bm t^{\bm m'}_4) &= (0,-1,-2,2,-4).
\end{align*}
\fi
\begin{comment}
By applying the automorphism $\varphi_2$ to the building blocks from (\ref{eq:build_block_q5_2}), we get the building blocks
\begin{equation} \label{eq:build_block_q5_2_inverted}
\begin{split}
\bar{\tilde\boldt}_{0} &= -\tilde\boldt_{-4} = (-3,-1,1,3),\quad \bar{\tilde\boldt}_{-1} = -\tilde\boldt_{-3} = (2, 4,6,3),\\
\bar{\tilde\boldt}_{-2} &= -\tilde\boldt_{-2} = (2,4,1,-2),\\
\bar{\tilde\boldt}_{-3} &= -\tilde\boldt_{-1} = (2,-1,-4,-2),\\
\bar{\tilde\boldt}_{-4} &= -\tilde\boldt_{0} = (-3,-6,-4,-2).
\end{split}
\end{equation}
\end{comment}
\end{example}
\begin{comment}
Valid inequalities can also be constructed using either the inverted building blocks from (\ref{eq:build_block_q5_1}) or the inverted building blocks from (\ref{eq:build_block_q5_2}), i.\,e., either using the building blocks
\begin{equation} \label{eq:build_block_q5_1_inverted}
\begin{split}
\bar\boldt_{0} &= -\boldt_{-4} = (1, 2,3,4),\quad \bar\boldt_{-1} = -\boldt_{-3} = (1,2,3,-1),\\
\bar\boldt_{-2} &= -\boldt_{-2} = (1,2,-2,-1),\\
\bar\boldt_{-3} &= -\boldt_{-1} = (1,-3,-2,-1),\\
\bar\boldt_{-4} &= -\boldt_0 = (-4,-3,-2,-1)
\end{split}
\end{equation}
or the building blocks
\begin{equation} \label{eq:build_block_q5_2_inverted}
\begin{split}
\bar{\tilde\boldt}_{0} &= -\tilde\boldt_{-4} = (-3,-1,1,3),\quad \bar{\tilde\boldt}_{-1} = -\tilde\boldt_{-3} = (2, 4,6,3),\\
\bar{\tilde\boldt}_{-2} &= -\tilde\boldt_{-2} = (2,4,1,-2),\\
\bar{\tilde\boldt}_{-3} &= -\tilde\boldt_{-1} = (2,-1,-4,-2),\\
\bar{\tilde\boldt}_{-4} &= -\tilde\boldt_{0} = (-3,-6,-4,-2).
\end{split}
\end{equation}
Note that, as explained above, either we pick building blocks entirely from the first set of building blocks in (\ref{eq:build_block_q5_1_inverted}), or entirely from the second set of building blocks in (\ref{eq:build_block_q5_2_inverted}). Again, the construction as described in Section~\ref{sec:q3} (just before Definition~\ref{def:theta1}) works also with these sets of building blocks. This will give an additional $2 \cdot 5^{d-1}$ valid inequalities. Furthermore, by applying the index permutation $\pi = (1,3,4,2)$ to each building block in an inequality, we get another valid inequality, resulting in an additional set of $2 \cdot 5^{d-1}$ valid inequalities.
\end{comment}
\begin{conjecture} \label{thm:facetComplete_q5}
Let $\mathcal C$ be the quinary \enquote{all-ones} SPC code of length $d \geq 3$ and $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$. Then,
\[
\Theta(d) = \Phi(\Theta^{\bm m}) \cup \Phi(\Theta^{\bm m'}) \cup \Delta_5^d
\]
with $\abs{\Theta(d)} = 8\cdot 5^{d-1} + 6d$ gives a complete and irredundant description of $\P$. %
\end{conjecture}
We have verified numerically that the conjecture is true for $d=3$, $4$, and $5$.
\subsection{The Case $p=7$} \label{sec:q7}
For $p=7$, there are four vectors $\bm m = (0,0,0,0,0,0,0)$, $\bm m = (0,1,0,1,0,0,0)$, $\bm m = (0,1,0,0,0,0,0)$, and $\bm m = (0,1,1,0,0,1,0)$ that give valid, irredundant facet-defining basic building block classes (see \cref{table:facetClasses}). The elements of the resulting basic building block classes, denoted by $\mathcal{T}_1$ through $\mathcal{T}_4$, are summarized in the first four rows of \cref{tab:q7classes}. The corresponding sets of inequalities are denoted by $\Theta_1$ through $\Theta_4$. %
Furthermore, there is a set $\Theta_5$ of \enquote{hybrid} facet-defining inequalities built from \emph{two} basic building block classes, namely the (valid) class $\mathcal T^{(0010000)}$ and the (invalid) basic class $\mathcal T^{(0110000)}$, both of which are not symmetric; we will describe below how to construct inequalities in $\Theta_5$ using a modification of \cref{constr:hilo}.
Finally, there is a sixth structurally distinct set $\Theta_6$ of facet-defining inequalities that are built from \emph{three} different classes of building blocks, which however are not conforming to \cref{def:bb} but instead are of a completely different form. Still, the construction of $\Theta_6$, which is outlined below, shares several aspects with the one developed in \cref{sec:bb}.
\begin{table*}
\centering
\caption{Collection of classes (up to rotational symmetry) of building blocks for the case $p=7$. $\mathcal T_1$ to $\mathcal T_4$ are basic classes in the sense of \cref{def:bb}. $\mathcal T_5^{\rm b}$ and $\mathcal T_5^{\rm nb}$ are used to construct $\Theta_5$, while $\mathcal T_6^{\rm b}$, $\mathcal T_6^{\rm lo}$, and $\mathcal T_6^{\rm hi}$ are the building blocks of inequalities in $\Theta_6$.}
\label{tab:q7classes}
\vskip -2.0ex
\ifonecolumn
{\scriptsize{
\else
\fi
\begin{tabular}{*6{>{$}c<{$}}}
\toprule
\mathcal{T}_1 & (0, 1, 2, 3, 4, 5, 6) & (0,1, 2, 3, 4, 5, -1) & (0,1, 2, 3, 4, -2, -1) & (0,1, 2, 3, -3, -2, -1) \\ & (0,1, 2, -4, -3, -2, -1)
& (0,1, -5, -4, -3, -2, -1) & (0,-6, -5, -4, -3, -2, -1) \\
\midrule
\mathcal{T}_2 & (0,8, 2, 10, 4, 5, 6) & (0,-6, 2, -4, -3, -2, -8) & (0,8, 2, 3, 4, -2, 6) & (0,-6, -5, -4, -10, -2, -8) \\ & (0,1, 2, -4, 4, -2, 6)
& (0,1,-5, 3, -3, 5, -1) & (0,-6, 2, -4, 4, -2, -1) \\
\midrule
\mathcal{T}_3 & (0,8, 2, 3, 4, 5, 6) & (0,-6, -5, -4, -3, -2, -8) & (0,1, 2, 3, 4, -2, 6) & (0,1, 2, 3, -3, 5, -1) \\ & (0,1, 2, -4, 4, -2, -1)
& (0,1, -5, 3, -3, -2, -1) & (0,-6, 2, -4, -3, -2, -1) \\
\midrule
\mathcal{T}_4 & (0, 8, 9, 3, 4, 12,6) & (0,1, -5, -4, 4, -2, -8) & (0,-6, -5, 3, -3, -9, -1) & (0, 1, 9, 3, -3, 5,6) \\ & (0, 8, 2, -4, 4, 5,-1)
& (0, -6, -12, -4, -3, -9,-8) & (0,-6, 2, 3, -3, -2, 6) \\
\midrule
\mathcal{T}_5^{\rm b} & \bm{(0,1, 9, 3, 4, 5, 6)} & \bm{(0,8, 2, 3, 4, 5, -1)} & \bm{(0,-6, -5, -4, -3, -9, -8)} & \bm{(0,1, 2, 3, -3, -2, 6)} \\
& \bm{(0,1, 2, -4, -3, 5, -1)}& \bm{(0,1,-5, -4, 4, -2, -1)}& \bm{(0,-6, -5, 3, -3, -2, -1)}& \\
\mathcal T_5^{\rm nb} & (0,8, 9, 3, 4, 5, 6) & (0,1, -5, -4, -3, -2, -8) & (0,-6, -5, -4, -3, -9, -1) & (0,1, 2, 3, -3, 5, 6) \\
& (0,1, 2, -4, 4, 5, -1) & (0,1, -5, 3, 4, -2, -1) &(0,-6, 2, 3, -3, -2, -1) \\
\midrule
\mathcal{T}_6^{\rm b} & \bm{(0,-1, -1, -1, -1, -1, -1)} & \bm{(0, 0, 0, 0, 0, 0, 1)} & \bm{(0, 0, 0, 0, 0, 1, 0)} & \bm{(0,0, 0, 0, 1, 0, 0)} \\
& \bm{(0, 0, 0, 1, 0, 0, 0)} & \bm{(0, 0, 1, 0, 0, 0, 0)} & \bm{(0, 1, 0, 0, 0, 0, 0)} \\
\mathcal T_6^{\rm hi} & (0, 0, 0, -1, 0, -1, -1) & (0, 0, -1, 0, -1, -1, 0) & (0, -1, 0, -1, -1, 0, 0) & (0, 1, 0, 0, 1, 1, 1) \\
& (0, -1, -1, 0, 0, 0, -1) & (0, 0, 1, 1, 1, 0, 1) & (0, 1, 1, 1, 0, 1, 0) \\
\mathcal T_6^{\rm lo}& \it{(0, 1, 1, 0, 1, 0, 0)} & \it{(0, 0, -1, 0, -1, -1, -1)} & \it{(0, -1, 0, -1, -1, -1, 0)} & \it{(0, 1, 0, 0, 0, 1, 1)} \\
& \it{(0, -1, -1, -1, 0, 0, -1)} & \it{(0, 0, 0, 1, 1, 0, 1)} &\it{(0, 0, 1, 1, 0, 1, 0)} \\
\bottomrule
\end{tabular}
\ifonecolumn
}}
\else
\fi
\end{table*}
The following proposition is an adapted version of \cref{prop:Theta1Facets} for the case $p=7$.
\begin{proposition}\label{prop:validIneq_q7_1to4}
Every inequality $\boldtheta^T \boldx \leq \kappa$ in $\Theta_1 \cup \cdots \cup \Theta_4$ defines a facet of $\P$, $d \geq 3$. Moreover, %
\begin{enumerate}
\item there are
\ifonecolumn
\begin{displaymath}
\begin{cases}
6 + 15(d-1) + 20 \binom{d-1}{2} + 15\binom{d-1}{3} +
6\binom{d-1}{4}+
\binom{d-1}{5} & \text{for $\Theta_1$},\\
6 + 15(d-1) + 17 \binom{d-1}{2} + 8\binom{d-1}{3} +\binom{d-1}{4} & \text{for $\Theta_2$},\\
6 + 15(d-1) + 14 \binom{d-1}{2} + 4\binom{d-1}{3} & \text{for $\Theta_3$},\\
6 + 15(d-1) + 17 \binom{d-1}{2} + 7\binom{d-1}{3} & \text{for $\Theta_4$} \end{cases}
\end{displaymath}
\else
\begin{displaymath}
\begin{cases}
6 + 15(d-1) + 20 \binom{d-1}{2} + 15\binom{d-1}{3} + \\
6\binom{d-1}{4}+
\binom{d-1}{5} & \text{for $\Theta_1$},\\
6 + 15(d-1) + 17 \binom{d-1}{2} + 8\binom{d-1}{3} +\binom{d-1}{4} & \text{for $\Theta_2$},\\
6 + 15(d-1) + 14 \binom{d-1}{2} + 4\binom{d-1}{3} & \text{for $\Theta_3$},\\
6 + 15(d-1) + 17 \binom{d-1}{2} + 7\binom{d-1}{3} & \text{for $\Theta_4$} \end{cases}
\end{displaymath}
\fi
elements $\boldzeta \in \mathbb{F}_7^d \setminus \mathcal C$ whose embeddings are cut by the inequality, i.\,e., $\boldtheta^T\mathsf F_{\mathrm v}(\boldzeta) > \kappa$.\label{part:valid2_q5_1to4}
\item The inequality is tight for the embeddings of exactly
\ifonecolumn
\begin{displaymath}
\begin{cases}
1+6(d-1) + 15 \binom{d-1}{2} + 20\binom{d-1}{3} +
15\binom{d-1}{4} +
6\binom{d-1}{5} + \binom{d-1}{6}& \text{for $\Theta_1$},\\
1+6(d-1) + 11 \binom{d-1}{2} + 10\binom{d-1}{3} +
5\binom{d-1}{4} + \binom{d-1}{5} & \text{for $\Theta_2$},\\
1+6(d-1) + 11 \binom{d-1}{2} + 7\binom{d-1}{3} + \binom{d-1}{4} & \text{for $\Theta_3$},\\
1+6(d-1) + 9 \binom{d-1}{2} + 4\binom{d-1}{3} + \binom{d-1}{4} & \text{for $\Theta_4$} \end{cases}
\end{displaymath}
\else
\begin{displaymath}
\begin{cases}
1+6(d-1) + 15 \binom{d-1}{2} + 20\binom{d-1}{3} + \\
15\binom{d-1}{4} +
6\binom{d-1}{5} + \binom{d-1}{6}& \text{for $\Theta_1$},\\
1+6(d-1) + 11 \binom{d-1}{2} + 10\binom{d-1}{3} + \\
5\binom{d-1}{4} + \binom{d-1}{5} & \text{for $\Theta_2$},\\
1+6(d-1) + 11 \binom{d-1}{2} + 7\binom{d-1}{3} + \binom{d-1}{4} & \text{for $\Theta_3$},\\
1+6(d-1) + 9 \binom{d-1}{2} + 4\binom{d-1}{3} + \binom{d-1}{4} & \text{for $\Theta_4$} \end{cases}
\end{displaymath}
\fi
codewords of $\mathcal C$.\label{part:valid3_q7_1to4}
\end{enumerate}
\end{proposition}
\begin{IEEEproof}
By \cref{prop:specialMvalid}, $\mathcal{T}_1$ through $\mathcal{T}_3$ are almost doubly-symmetric and hence facet-defining by \cref{lem:facets_modified}. The class $\mathcal{T}_4$ is not almost doubly-symmetric (see \cref{ex:doubly-symmetric}), but it is symmetric, and it can be easily verified that the $\frac12(p-3)=2$ length-$3$ vectors $(1,4,2)$ and $(3,3,1) \in \mathbb F_7^3$ satisfy the conditions in \cref{rem:numerical-check-facet}, which proves that $\mathcal T_4$ is facet-defining.
The remaining counting formulas are special cases of the general counting formulas of \cref{lem:counting_formulas} (details omitted for brevity).
\end{IEEEproof}
We now construct the inequalities in $\Theta_5$. To that end, let $\mathcal{T}_5^{\rm b} = \mathcal T^{(0010000)}$ and $\mathcal T_5^{\rm nb} = \mathcal T^{(0110000)}$ which correspond to the bold (resp. nonbold) entries in \cref{tab:q7classes}.
\begin{construction}[Hi-Lo-Construction for $\Theta_5$]\label{constr:hilo7}
Choose an index $i^{\rm nb} \in \range d$. For $i \in \range d \setminus \{i^{\rm nb}\}$, choose $k_i \in \mathbb F_7$ arbitrarily.
This choice results in a \emph{canonical codeword} $\bm c \in \mathcal C$ by defining, for $i \neq i^{\rm nb}$, $c_i = t^{\rm b}_{k_i, \uparrow}$, and the condition $\bm c \in \mathcal C$ specifies the remaining entry $c_{i^{\rm nb}}$.
Now, set $k_{i^{\rm nb}} = t^{\rm nb}_{\downarrow, c_{i^{\rm nb}}}$.
The resulting inequality is $\boldtheta^T \bm x \leq \kappa$ with $\boldtheta_i = \bm t^{\rm b}_{k_i}$ for $i \neq i^{\rm nb}$, $\boldtheta_{i^{\rm nb}} = \bm t^{\rm nb}_{k_{i^{\rm nb}}}$, and $\kappa = \boldtheta^T \mathsf F_{\mathrm v}(\bm c)$, and the set of all inequalities obtained by this construction is denoted by $\Theta_5$.
\end{construction}
\begin{remark}
The above construction differs from \cref{constr:hilo} in two points. First, here one building block is chosen from $\mathcal T_5^{\rm nb}$, while all others are from $\mathcal T_5^{\rm b}$. Secondly, in \cref{constr:hilo7}, $i^{\rm nb}$ is the index that is not free to choose (instead of $d$ in \cref{constr:hilo}), i.e.,\ \cref{rem:hilo-arbitrary} has been incorporated. The latter is necessary because each choice of $i^{\rm nb}$ leads to a \emph{different} set of inequalities; consequently, $\abs{\Theta_5} = d \cdot 7^{d-1} = d \cdot \abs{\Theta_r}$ for $r\in\{1,2,3,4\}$.
\end{remark}
\Cref{lem:conditions} (and its proof) can straightforwardly be adapted to \cref{constr:hilo7}, which leads to:
\begin{lemma}\label{lem:conditions_theta5}
An inequality $\boldtheta^T \boldx \leq \kappa$ with $\boldtheta_{i^{\rm nb}} = \bm t^{\rm nb}_{k_{i^{\rm nb}}}$ for some $i^{\rm nb} \in \range d$ and $\boldtheta_i = \bm t^{\rm b}_{k_i}$ for $i \neq i^{\rm nb}$ is in $\Theta_5$, if and only if
\begin{subequations}
\begin{align}
&\sum_{i=1}^d k_i = [d-1]_7 \cdot \sigma^{\rm b} \label{eq:thetaconditions-Theta5}\\
\text{and}\quad&\kappa = (d-1) t^{\rm b}_{0,\sigma^{\rm b}} - \sum_{i\neq i^{\rm nb}} t^{\rm b}_{0,k_i} - t^{\rm nb}_{0,k_{i^{\rm nb}}}
\end{align}
\end{subequations}
where $\sigma^{\rm b} = 2$ and $t_{0,\sigma}^{\rm b} = 9$ by definition.
\end{lemma}
\begin{proposition} \label{lem:Theta5_p7}
All inequalities from $\Theta_5$ are valid and facet-defining for $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$, where $\mathcal C$ is an \enquote{all-ones} septenary SPC code of length $d \geq 3$.
\end{proposition}
\begin{IEEEproof}
The result follows because the relevant results from \cref{sec:validInvalid,sec:facet-defining} can be easily generalized to the situation of \cref{constr:hilo7}, i.e.,\ that one building block within $\boldtheta$ is chosen from a different class. See Appendix~\ref{app:proofTheta5} for an outline.
\end{IEEEproof}
We remark that, because the class $\mathcal T_5^{\rm nb}$ is invalid, an inequality containing more than one building block from $\mathcal{T}_5^{\rm nb}$ is in general not valid. In other words, it is crucial for $\Theta_5$ that \emph{exactly} one entry of $\bm\theta$ is picked from $\mathcal T_5^{\rm nb}$.
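\cref{constr:hilo7} can be made concrete for $d=3$. The sketch below transcribes $\mathcal T_5^{\rm b}$ and $\mathcal T_5^{\rm nb}$ from \cref{tab:q7classes}, assumes $t^{\rm b}_{k,\uparrow} = [\sigma^{\rm b} - k]_7$ and $t^{\rm nb}_{k,\downarrow} = [-k]_7$ (an assumption consistent with \cref{lem:conditions_theta5} and with the maximizer/minimizer positions of the tabulated blocks, which the code checks first), generates all $d \cdot 7^{d-1} = 147$ inequalities of $\Theta_5$, and confirms the $k$-condition of the lemma together with validity on all $49$ codewords.

```python
from itertools import product

p, d, sigma_b = 7, 3, 2
# bold class T_5^b and nonbold class T_5^nb, transcribed from the table (k = 0..6)
Tb = {0: (0, 1, 9, 3, 4, 5, 6),       1: (0, 8, 2, 3, 4, 5, -1),
      2: (0, -6, -5, -4, -3, -9, -8), 3: (0, 1, 2, 3, -3, -2, 6),
      4: (0, 1, 2, -4, -3, 5, -1),    5: (0, 1, -5, -4, 4, -2, -1),
      6: (0, -6, -5, 3, -3, -2, -1)}
Tnb = {0: (0, 8, 9, 3, 4, 5, 6),       1: (0, 1, -5, -4, -3, -2, -8),
       2: (0, -6, -5, -4, -3, -9, -1), 3: (0, 1, 2, 3, -3, 5, 6),
       4: (0, 1, 2, -4, 4, 5, -1),     5: (0, 1, -5, 3, 4, -2, -1),
       6: (0, -6, 2, 3, -3, -2, -1)}

# sanity: [sigma_b - k]_7 indexes a maximum of t^b_k, [-k]_7 a minimum of t^nb_k
for k in range(p):
    assert Tb[k][(sigma_b - k) % p] == max(Tb[k])
    assert Tnb[k][(-k) % p] == min(Tnb[k])

code = [c for c in product(range(p), repeat=d) if sum(c) % p == 0]
count = 0
for i_nb in range(d):                                  # block carrying T_5^nb
    free = [i for i in range(d) if i != i_nb]
    for ks_free in product(range(p), repeat=d - 1):
        k, c = [0] * d, [0] * d
        for i, ki in zip(free, ks_free):
            k[i], c[i] = ki, (sigma_b - ki) % p        # c_i = t^b_{k_i,up}
        c[i_nb] = (-sum(c[i] for i in free)) % p       # forces c into the code
        k[i_nb] = (-c[i_nb]) % p                       # k_{i_nb} = t^nb_{down, c}
        theta = [Tnb[k[i]] if i == i_nb else Tb[k[i]] for i in range(d)]
        kappa = sum(theta[i][c[i]] for i in range(d))  # kappa = theta^T F_v(c)
        assert sum(k) % p == ((d - 1) * sigma_b) % p   # Lemma's k-condition
        assert all(sum(theta[i][cw[i]] for i in range(d)) <= kappa
                   for cw in code)                     # validity on all codewords
        count += 1
assert count == d * p ** (d - 1)                       # |Theta_5| = d * 7^{d-1}
```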
For $\Theta_6$, there are three classes of building blocks named $\mathcal T_6^{\rm b}$, $\mathcal T_6^{\rm lo}$, and $\mathcal T_6^{\rm hi}$, which are listed in \cref{tab:q7classes}. To construct inequalities, \cref{constr:hilo7} is further extended, as shown shortly. Since the structure of the building blocks (which do not satisfy \cref{def:bb}) results in the \enquote{$\argmax$} expressions in \cref{def:hilo-notation} not being well-defined, we need the following definitions to choose specific maximizers.
\begin{definition}\label{def:hilo-theta6}
For $k \in \mathbb F_7$, let $t^{\rm b}_{k,\uparrow} = -k$, $t^{\rm hi}_{k,\uparrow} = -k$, and $t^{\rm lo}_{\downarrow, k} = -k$. Further, define $\sigma^{\rm b} = \sigma^{\mathrm{hi}} = 0$.
\end{definition}
\begin{construction}[Hi-Lo-Construction for $\Theta_6$]\label{constr:hilo7_Theta6}
Choose $i^{\rm hi},i^{\rm lo} \in \range d$ with $i^{\rm lo} \neq i^{\rm hi}$. For notational purposes, we introduce the label
\[ l_i = \begin{cases}{\mathrm{lo}}&\text{if }i = i^{\mathrm{lo}},\\
{\mathrm{hi}}&\text{if }i=i^{\mathrm{hi}},\\
\mathrm b&\text{otherwise}.\end{cases}\]
For $i \in \range d \setminus \{i^{\rm lo}\}$, choose $k_i \in \mathbb F_7$ arbitrarily and define the canonical codeword $\bm c \in \mathcal C$ by $c_i = t^{l_i}_{k_i,\uparrow} = -k_i$; this specifies $c_{i^{\rm lo}}$. Now, set $k_{i^{\rm lo}} = t^{\rm lo}_{\downarrow, c_{i^{\rm lo}}} = -c_{i^{\rm lo}}$.
The resulting inequality is $\boldtheta^T \boldx \leq \kappa$ with $\boldtheta_i = \boldt^{l_i}_{k_i}$ for $i \in \range d$, and $\kappa = \boldtheta^T \mathsf F_{\mathrm v}(\bm c)$, and the set of all inequalities obtained by this construction is denoted by $\Theta_6$.
\end{construction}
\begin{lemma} \label{lem:conditions_theta6}
For fixed $i^{\mathrm{hi}},i^{\mathrm{lo}} \in \range d$ with $i^{\mathrm{hi}} \neq i^{\mathrm{lo}}$, the inequality $\boldtheta^T \boldx \leq \kappa$ with $\boldtheta_i = \bm t^{l_i}_{k_i}$ is in $\Theta_6$ if and only if
\begin{subequations} \label{eq:hilo-conditions_3}
\begin{align}
&\sum_{i=1}^d k_i = 0 \label{eq:kcondition-theta6}\\
\text{and}\quad&\kappa = -\sum_{i=1}^{d} t^{l_i}_{0,k_i}.\label{eq:kappacondition-theta6}
\end{align}
\end{subequations}
\end{lemma}
\begin{IEEEproof}
We again argue similar to the proof of \cref{lem:conditions}. Let $\bm\theta^T \bm x \leq \kappa$ be constructed from \cref{constr:hilo7_Theta6} with canonical codeword $\bm c$. For \cref{eq:kcondition-theta6},
\[ \bm c \in \mathcal C \Leftrightarrow 0 = \sum_{i=1}^d c_i = \sum_{l_i \neq {\mathrm{lo}}} t^{l_i}_{k_i,\uparrow} - k_{i^{\mathrm{lo}}} = -\sum_{i=1}^d k_i \]
where the last step is due to \cref{def:hilo-theta6}. Now, \cref{eq:kappacondition-theta6} holds because
\[
\kappa = \sum_{i=1}^d t^{l_i}_{k_i,c_i} = \sum_{l_i \neq {\mathrm{lo}}} \max(t^{l_i}_{k_i}) + \min(t^{\mathrm{lo}}_{k_{i^{\mathrm{lo}}}})
%
=-\sum_{i=1}^d t^{l_i}_{0,k_i}
\]
where the last step can be verified by inspection of the building blocks of $\mathcal T_6^{l_i}$, $l_i = \mathrm b, {\mathrm{hi}}, {\mathrm{lo}}$. Finally, it is easily seen (as in the proof of \cref{lem:conditions}) that the above arguments work in both directions, which concludes the proof.
\end{IEEEproof}
\begin{example}
Let $d=3$ and choose $i^{\mathrm{hi}}=1$ and $i^{\mathrm{lo}}=3$. Then, choose $k_2=1$ (corresponding to the building block $\bm\theta_2 = \bm t^{\rm b}_1 = (\mathbf{0,0,0,0,0,0,1}) \in \mathcal T_6^{\rm b}$) and $k_1=5$ (corresponding to $\bm\theta_1 = \bm t^{\mathrm{hi}}_5 = (0,0,1,1,1,0,1) \in \mathcal T_6^{\mathrm{hi}}$). This results in $c_1=[-5]_7=[2]_7$, $c_2=[-1]_7=[6]_7$, hence $c_3=[6]_7$ and thus $k_3 = [-6]_7 = [1]_7$, such that $\bm\theta_3 = \bm t^{\mathrm{lo}}_1 = (0,0,-1,0,-1,-1,-1) \in \mathcal T_6^{\mathrm{lo}}$.
\end{example}
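The example can be replayed in code. The sketch below transcribes $\mathcal T_6^{\rm b}$, $\mathcal T_6^{\mathrm{hi}}$, and $\mathcal T_6^{\mathrm{lo}}$ from \cref{tab:q7classes}, reproduces the choices above via \cref{constr:hilo7_Theta6}, and verifies both the $\kappa$-formula of \cref{lem:conditions_theta6} and validity of the resulting inequality on all $49$ codewords.

```python
from itertools import product

p, d = 7, 3
# the three classes of building blocks for Theta_6 (from the table), k = 0..6
T6b = {0: (0, -1, -1, -1, -1, -1, -1), 1: (0, 0, 0, 0, 0, 0, 1),
       2: (0, 0, 0, 0, 0, 1, 0),       3: (0, 0, 0, 0, 1, 0, 0),
       4: (0, 0, 0, 1, 0, 0, 0),       5: (0, 0, 1, 0, 0, 0, 0),
       6: (0, 1, 0, 0, 0, 0, 0)}
T6hi = {0: (0, 0, 0, -1, 0, -1, -1),   1: (0, 0, -1, 0, -1, -1, 0),
        2: (0, -1, 0, -1, -1, 0, 0),   3: (0, 1, 0, 0, 1, 1, 1),
        4: (0, -1, -1, 0, 0, 0, -1),   5: (0, 0, 1, 1, 1, 0, 1),
        6: (0, 1, 1, 1, 0, 1, 0)}
T6lo = {0: (0, 1, 1, 0, 1, 0, 0),      1: (0, 0, -1, 0, -1, -1, -1),
        2: (0, -1, 0, -1, -1, -1, 0),  3: (0, 1, 0, 0, 0, 1, 1),
        4: (0, -1, -1, -1, 0, 0, -1),  5: (0, 0, 0, 1, 1, 0, 1),
        6: (0, 0, 1, 1, 0, 1, 0)}

# choices from the example: i_hi = 1, i_lo = 3, k_1 = 5, k_2 = 1
k1, k2 = 5, 1
c1, c2 = (-k1) % p, (-k2) % p          # c_i = t^{l_i}_{k_i,up} = [-k_i]_7
c3 = (-c1 - c2) % p                    # forced by c in C
k3 = (-c3) % p                         # k_{i_lo} = t^{lo}_{down, c_3} = [-c_3]_7
assert (c1, c2, c3, k3) == (2, 6, 6, 1)

theta = (T6hi[k1], T6b[k2], T6lo[k3])  # matches theta_1, theta_2, theta_3 above
assert theta == ((0, 0, 1, 1, 1, 0, 1), (0, 0, 0, 0, 0, 0, 1),
                 (0, 0, -1, 0, -1, -1, -1))

# kappa = theta^T F_v(c), which equals -sum_i t^{l_i}_{0,k_i} by the lemma
kappa = theta[0][c1] + theta[1][c2] + theta[2][c3]
assert kappa == -(T6hi[0][k1] + T6b[0][k2] + T6lo[0][k3]) == 1

# the inequality is valid for all 49 codewords (and tight at the canonical one)
vals = [sum(th[ci] for th, ci in zip(theta, c))
        for c in product(range(p), repeat=d) if sum(c) % p == 0]
assert max(vals) == kappa
```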
\begin{proposition} \label{lem:Theta6_p7}
All inequalities from $\Theta_6$ are valid and facet-defining for $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$, where $\mathcal C$ is an \enquote{all-ones} septenary SPC code of length $d \geq 3$.
\end{proposition}
\begin{IEEEproof}
See Appendix~\ref{app:proofTheta6}.
\end{IEEEproof}
\begin{conjecture}\label{conj:facetComplete_q7}
Let $\mathcal C$ be the septenary \enquote{all-ones} SPC code of length $d \geq 3$ and $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$. %
Then,
\begin{displaymath}
\Theta(d) = \left( \bigcup_{\varphi \in \GL(\mathbb F_7)} \varphi \left( \bigcup_{i=1}^6 \Theta_i \right) \right) \cup \Delta_7^d
\end{displaymath}
with $\abs{\Theta(d)} = \left(4 {d \choose 2} + 24 + 6d\right) \cdot 7^{d-1} + 8d$ gives a complete and irredundant description of $\P$.
\end{conjecture}
We remark that when applying the $6$ distinct elements of $\GL(\mathbb F_7)$ to $\Theta_6$, only two distinct classes of inequalities occur, i.e., $\abs{\cup_{\varphi \in \GL(\mathbb F_7)} \varphi\left(\Theta_6\right)}=2 \cdot \abs{\Theta_6}$, which explains the multiplication by $4$ in the expression for $\abs{\Theta(d)}$.
We have verified numerically that the conjecture is true for $d=3$ and $4$.
\begin{remark}
Note that for $p > 7$, several \enquote{hybrid} classes of inequalities of the same form as $\Theta_5$ will exist. To identify these classes, one can loop through all possible choices for a bold building block class (there are $2^{p-1}$ possible choices, not counting the all-zero $\bm m$-vector). The corresponding nonbold class can be identified by generalizing the results and arguments of Appendix~\ref{app:proofTheta5} for $p=7$ to the general case.
This procedure can identify $21$, $60$, $405$, and $967$ additional (some of which may be redundant) valid facet-defining classes for $p=11$, $13$, $17$, and $19$, respectively. Note that the procedure will also identify the valid facet-defining basic classes, which are considered degenerate cases here (the bold class and the nonbold class coincide). The numbers above refer to nondegenerate cases.
\end{remark}
\section{Adaptive Linear Programming Decoding} \label{sec:ALP_q}
In this section, we show how to overcome the exponential number of inequalities in $\Theta(d)$ by giving an efficient separation algorithm, which allows for efficient relaxed ALP decoding of general codes over $\mathbb F_p$. It thus generalizes the well-known \emph{Adaptive LP Decoder} for binary codes \cite{tag07}. Throughout the section, let $\mathcal C$ denote a general $p$-ary code defined by the parity-check matrix $\bm H$, the $j$-th row of which is denoted by $\bm h_j$.
The main loop of our ALP decoder (\cref{alg:ALP}) is similar to \cite[Alg.~2]{tag07}, except that the $n(p + 1)$ \emph{simplex constraints} $\Delta_p^\mathcal C$ are present from the start. Denote by $\mathcal{M}$ the set of vectors $\bm m$ corresponding to valid irredundant facet-defining classes $\Theta^{\bm m}$.
\begin{algorithm}
\caption{ALP Decoder for $p$-Ary Codes}\label{alg:ALP}
\textbf{Input:} $p$-ary code $\mathcal C$ of length $n$, channel output $\mathsf\Lambda_{\mathrm v}(\boldy)$\\
\textbf{Output:} Solution $\boldx$ of \cref{eq:LPformulation}
\begin{algorithmic}[1]
\State Initialize a linear program with variables $\boldx \in \mathbb R^{np}$, constraints from $\Delta_p^\mathcal C$, and objective function $\min\,\mathsf\Lambda_{\mathrm v}(\boldy)^T\boldx$\label{algALP:init}%
\While{\texttt{True}}\label{algALP:outerWhile}
\State $\boldx^{\mathrm{LP}} \leftarrow$ optimal LP solution
\ForAll{$j\in \mathcal J$, $\bm m \in \mathcal{M}$, and $r \in \mathbb F_p \setminus \{0\}$} \label{algALP:jandi}
\State $(\bm\theta,\kappa) \leftarrow$ \Call{Separate}{$\bm m, r, \bm h_j, P_j\boldx^\LP$} \label{algALP:separate}
\If{$(\boldtheta,\kappa) \neq \texttt{Null}$}
\State insert $\boldtheta^T \boldx \leq \kappa$ into the LP model
\EndIf
\EndFor
\If{no cut was added in the above loop}
\State \textbf{return} $\boldx^{\rm LP}$
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
\begin{notms}
The main issue that needs to be addressed is that of efficient separation of the sets of inequalities $\varphi_{\bm h, r}(\Theta^{\bm m})$, for some $\varphi_r \in \GL(\mathbb F_p)$, valid $\bm m \in \mathcal{M}$, and an SPC code with parity-check vector $\bm h = (h_1,\dotsc,h_d)$, i.e.,\ to develop an efficient cut-search algorithm that finds, for given $\bm x \in \mathbb R^{dp}$, an inequality in $\varphi_{\bm h, r}(\Theta^{\bm m})$ that is violated by $\bm x$, or concludes that no such inequality exists.
\end{notms}
As a first step, we reduce separation to the case of $\bm h = (1,\dotsc,1)$ (i.e.,\ $\C_j$ is an all-ones SPC code) and $r=1$ (i.e.,\ $\varphi_r$ is the identity). This reduction is based on the following corollary of \cref{thm:rotation-general}, and outlined in \cref{alg:separateGeneralSPC}.
\begin{corollary}
\label{cor:rotation-separation}
Let $\bm h$, $\P$, and $\P(\bm h)$ be defined as in \cref{cor:rotation-generalSPC} and $\bm y \in \mathbb R^{dp}$. Then, the inequality
$(\rot_{\bm h}\circ \rot_r)(\bm \theta)^T \bm x \leq \kappa$
from $\varphi_{\bm h, r}(\Theta^{\bm m})$
separates $\bm y$ from $\P(\bm h)$ if and only if the inequality
$\bm \theta^T \bm x \leq \kappa$ from $\Theta^{\bm m}$ separates $(\rot_r^{-1} \circ \rot^{-1}_{\bm h})(\bm y)$ from $\P$.
\end{corollary}
\begin{IEEEproof}
Starting from the second statement, it holds that
\ifonecolumn
\begin{align*}
&\bm \theta^T \bm x \leq \kappa \text{ separates } (\rot_r^{-1}\circ \rot_{\bm h}^{-1})(\bm y)\text{ from }\P\\
\Leftrightarrow\;&
\bm \theta^T \bm x \leq \kappa\text{ for }\bm x \in \P\text{ and }\bm \theta^T(\rot_r^{-1}\circ \rot_{\bm h}^{-1})(\bm y) > \kappa\\
\Leftrightarrow\;&
\rot_r(\bm\theta)^T\bm x \leq \kappa\text{ for }\bm x\in \P\text{ and }\rot_r(\bm \theta)^T \rot_{\bm h}^{-1}(\bm y) > \kappa\\
\Leftrightarrow\;&
(\rot_{\bm h}\circ \rot_r)(\bm \theta)^T\bm x \leq \kappa \text{ for }\bm x \in\P(\bm h)
\text{ and }(\rot_{\bm h}\circ\rot_r)(\bm \theta)^T \bm y > \kappa
\end{align*}
\else
\begin{align*}
&\bm \theta^T \bm x \leq \kappa \text{ separates } (\rot_r^{-1}\circ \rot_{\bm h}^{-1})(\bm y)\text{ from }\P\\
\Leftrightarrow\;&
\bm \theta^T \bm x \leq \kappa\text{ for }\bm x \in \P\text{ and }\bm \theta^T(\rot_r^{-1}\circ \rot_{\bm h}^{-1})(\bm y) > \kappa\\
\Leftrightarrow\;&
\rot_r(\bm\theta)^T\bm x \leq \kappa\text{ for }\bm x\in \P\text{ and }\rot_r(\bm \theta)^T \rot_{\bm h}^{-1}(\bm y) > \kappa\\
\Leftrightarrow\;&
(\rot_{\bm h}\circ \rot_r)(\bm \theta)^T\bm x \leq \kappa \text{ for }\bm x \in\P(\bm h)\\
&\quad\quad\text{ and }(\rot_{\bm h}\circ\rot_r)(\bm \theta)^T \bm y > \kappa
\end{align*}
\fi
where we have used \cref{cor:rotation-generalSPC,cor:rotation-autfq} on the left-hand sides and \cref{lem:permScalProd} on the right-hand sides. The last line states exactly that $(\rot_{\bm h}\circ \rot_r)(\bm \theta)^T\bm x \leq \kappa$ separates $\bm y$ from $\P(\bm h)$, which concludes the proof.
\end{IEEEproof}
\begin{algorithm}
\caption{\textsc{Separate}($\bm m$, $r$, $\bm h$, $\bm x$)}
\label{alg:separateGeneralSPC}
\textbf{Input:} $\bm m \in \mathcal M$, $r \in \mathbb F_p \setminus \{0\}$, $\bm h =(h_1,\dotsc,h_d)$ with nonzero entries, and current (projected) LP solution $\bm x$ \\
\textbf{Output:} Inequality in $\varphi_{\bm h, r}(\Theta^{\bm m})$ violated by $\bm x$, if such exists; \texttt{Null} otherwise
\begin{algorithmic}[1]
\State $\tilde \boldx \leftarrow (\rot_r^{-1}\circ \rot_{\bm h}^{-1})(\boldx)$
\State $\bm\theta \leftarrow$ \Call{Separate}{$\bm m$, $\tilde{\bm x}$}
\If{$\bm\theta \neq$ \texttt{Null}}
\State Compute $\kappa$ from \cref{eq:kappacondition}
\State \textbf{return} $((\rot_{\bm h}\circ \rot_r)(\bm \theta), \kappa)$
\Else
\State \textbf{return} \texttt{Null}
\EndIf
\end{algorithmic}
\end{algorithm}
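To make the reduction concrete, the following Python sketch shows one natural reading of the blockwise rotation used in \cref{alg:separateGeneralSPC}. This is an illustration, not the paper's implementation: it \emph{assumes} that $\rot_{\bm h}$ permutes each length-$p$ block of $\bm x$ by index multiplication in $\mathbb F_p$; the exact definition is the one given by \cref{thm:rotation-general}.

```python
def rot_blockwise(x, h, p):
    """Hypothetical blockwise rotation: entry (i, k) of x is moved to
    position (i, h_i * k mod p).  This is one plausible reading of rot_h;
    the paper's definition (thm:rotation-general) is authoritative."""
    y = [0.0] * len(x)
    for i, hi in enumerate(h):
        for k in range(p):
            y[i * p + (hi * k) % p] = x[i * p + k]
    return y
```

Since every nonzero $h_i$ is invertible in $\mathbb F_p$, applying the function with the entrywise inverses of $\bm h$ undoes the rotation, which is all that \cref{alg:separateGeneralSPC} requires of $\rot_{\bm h}^{-1}$.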
We now describe the separation of inequalities in $\Theta^{\bm m}$, for some $\bm m \in \mathcal{M}$. Any such inequality $\bm\theta^T \bm x \leq \kappa$ with $\boldtheta = (\bm t^{\bm m}_{k_1}\mid \dotsc \mid \bm t^{\bm m}_{k_d})^T$ can be rewritten (using \cref{eq:kappacondition}) as
\begin{equation}
\Psi(\boldtheta, \boldx)= \sum_{i=1}^d v^{k_i}(\boldx_i)
\geq t^{\bm m}_{0,\sigma} \label{eq:facetAltForm_p}
\end{equation}
where $\boldx = (\boldx_1,\dotsc,\boldx_d)^T$, $\boldx_i = (x_{i,0},\dotsc,x_{i,p-1})$ for all $i \in \range d$, and $v^k(\boldx_i) = t^{\bm m}_{0,\sigma} - t^{\bm m}_{0,k} -\boldt^{\bm m}_k \boldx_i^T$. %
Thus, $\Theta^{\bm m}$ contains a cut for $\boldx$ if and only if $\Psi(\bm\theta,\bm x) < t^{\bm m}_{0,\sigma}$ for some $\boldtheta$ from $\Theta^{\bm m}$, i.e.,\ if and only if the optimization problem (in the $d$ variables $k_1,\dotsc,k_d \in \mathbb F_p$)
\begin{subequations}
\begin{align}
\psi^*=\min\;& \Psi(\boldtheta,\boldx)\\
\text{s.t.}\;&\boldtheta = (\bm t^{\bm m}_{k_1} \mid \dotsc \mid \bm t^{\bm m}_{k_d})^T\\
&\sum_{i=1}^d k_i = [d-1]_p\sigma \label{eq:sepOptFacet_generic}
\end{align}\label{eq:sepOpt_generic}%
\end{subequations}
where the condition \cref{eq:sepOptFacet_generic} is due to \cref{eq:Thetacondition},
has an optimal solution that satisfies $\psi^* < t^{\bm m}_{0,\sigma}$.
We now describe how to solve \cref{eq:sepOpt_generic} using a DP approach with linear running time in $d$. For $s \in \range d$ and $\zeta \in \mathbb F_p$, define
\ifonecolumn
\begin{displaymath}\psi(\boldx,s,\zeta) = \min\left\{ \sum_{i=1}^s v^{k_i}(\boldx_i)\colon k_i \in \mathbb F_p\text{ for }i \in \range s
\text{ and } \sum_{i=1}^s k_i = \zeta \right\}.
\end{displaymath}
\else
\begin{multline*}\psi(\boldx,s,\zeta) =\\ \min\left\{ \sum_{i=1}^s v^{k_i}(\boldx_i)\colon k_i \in \mathbb F_p\text{ for }i \in \range s
\text{ and } \sum_{i=1}^s k_i = \zeta \right\}.
\end{multline*}
\fi
It holds that $\psi^* = \psi(\boldx,d,[d-1]_p\sigma)$ and the obvious recursion
\begin{equation}
\psi(\boldx, s, \zeta) = \min_{\beta \in \mathbb F_p} \left\{v^{\beta}(\boldx_s) + \psi(\boldx, s-1, \zeta - \beta)\right\}\label{eq:sepRecursion}
\end{equation}
for $s \geq 2$ and $\zeta \in \mathbb F_p$
allows us to compute a $d\times \mathbb F_p$ table $\texttt{T}$ with entries $\texttt{T}[s,\zeta] = \psi(\boldx, s, \zeta)$: initialize the first row of $\texttt T$ with
$\texttt T[1,\zeta] = \psi(\boldx, 1, \zeta) = v^{\zeta}(\boldx_1)$ for $\zeta \in \mathbb F_p$.
Then, use \cref{eq:sepRecursion} to proceed from top to bottom until reaching row $d$, where only the entry $\texttt T[d, [d-1]_p\sigma]$ is needed. Because the expression in \cref{eq:sepRecursion} can be calculated in time $\mathcal O(p)$, the overall time needed to obtain $\psi^*$ is $\mathcal O(dp^2)$, which is linear for fixed $p$. Observe that the actual solution $\boldtheta^*$ of \cref{eq:sepOpt_generic} can be obtained within the same asymptotic running time by storing the minimizing $\beta$'s from \cref{eq:sepRecursion} in a second $d \times \mathbb F_p$ table $\texttt S$. The complete algorithm is outlined in \cref{alg:SAgeneral} (note that, in an actual implementation, one has to use the integer representations of field elements everywhere, and insert \enquote{mod $p$} statements in the appropriate places).
\begin{algorithm}
\caption{\textsc{Separate}($\bm m, \bm x$)}\label{alg:SAgeneral}
\textbf{Input:} $\bm m \in \mathcal M$ and $\bm x = (\bm x_1,\dotsc,\bm x_d)^T \in \mathbb R^{dp}$ \\
\textbf{Output:} Solution $\boldtheta^*$ of \cref{eq:sepOpt_generic}, if $\psi^* < t^{\bm m}_{0,\sigma}$; \texttt{Null} otherwise
\begin{algorithmic}[1]
\State Let \texttt{T}, \texttt{v}, and \texttt S be $d \times \mathbb F_p$ arrays, and let \texttt k be a length-$d$ array
\For{$\zeta \in \mathbb F_p$}
\For{$i \in \range d$}
\State{$\texttt{v}[i,\zeta] \leftarrow t^{\bm m}_{0,\sigma} - t^{\bm m}_{0,\zeta} - \bm t_\zeta^{\bm m} \bm x_i^T$}\Comment{initialize \texttt{v}}
\EndFor
\State $\texttt{T}[1,\zeta] \leftarrow \texttt v[1,\zeta]$\Comment{initialize $\texttt{T}[1,:]$}
\State $\texttt{S}[1,\zeta] \leftarrow \zeta$\Comment{initialize $\texttt{S}[1,:]$}
\EndFor
\For{$i =2,\dotsc, d$}
\For {$\zeta \in \mathbb F_p$}
\State $\texttt S[i,\zeta] \leftarrow -1$
\State $\texttt T[i,\zeta] \leftarrow \infty$
\For {$\beta \in \mathbb F_p$}\Comment{find min.\ from \cref{eq:sepRecursion}}
\State $\texttt{val} \leftarrow \texttt v[i,\beta] + \texttt T[i-1,\zeta - \beta]$
\If{$\texttt{val} < \texttt T[i,\zeta]$}
\State $\texttt T[i,\zeta] \leftarrow \texttt{val}$
\State $\texttt S[i,\zeta] \leftarrow \beta$
\EndIf
\EndFor
\EndFor
\EndFor
\If{$\texttt T[d,[d-1]_p\sigma] < t_{0,\sigma}^{\bm m}$}
\State $\texttt k[d] \leftarrow \texttt S[d,[d-1]_p \sigma]$
\State $\texttt{next} \leftarrow [d-1]_p\sigma - \texttt k[d]$
\For{$i = d-1,\dotsc, 1$}
\State $\texttt k[i] \leftarrow \texttt S[i,\texttt{next}]$
\State $\texttt{next} \leftarrow \texttt{next} - \texttt k[i]$
\EndFor
\State \textbf{return} \texttt k
\EndIf
\State \textbf{return} \texttt{Null}
\end{algorithmic}
\end{algorithm}
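As an illustration, the DP of \cref{alg:SAgeneral} can be sketched in a few lines of Python. This is a sketch rather than the paper's implementation: field elements are represented by the integers $0,\dotsc,p-1$, and the costs $v^k(\bm x_i)$ are assumed to be precomputed in a $d \times p$ array \texttt{v}.

```python
def separate_dp(v, p, target):
    """DP of Algorithm SAgeneral: minimize sum_i v[i][k_i] over k_i in F_p
    subject to sum_i k_i = target (mod p).  Returns (psi_star, k)."""
    d = len(v)
    T = [[0.0] * p for _ in range(d)]   # T[i][z]: min cost of blocks 0..i with residue z
    S = [[0] * p for _ in range(d)]     # S[i][z]: minimizing choice for block i
    for z in range(p):                  # first row: T[1, zeta] = v^zeta(x_1)
        T[0][z] = v[0][z]
        S[0][z] = z
    for i in range(1, d):
        for z in range(p):
            best, arg = float("inf"), 0
            for b in range(p):          # the recursion (eq:sepRecursion)
                val = v[i][b] + T[i - 1][(z - b) % p]
                if val < best:
                    best, arg = val, b
            T[i][z], S[i][z] = best, arg
    k = [0] * d                         # backtrack via S
    z = target % p
    for i in range(d - 1, -1, -1):
        k[i] = S[i][z]
        z = (z - k[i]) % p
    return T[d - 1][target % p], k
```

Calling the function with the residue $[d-1]_p\sigma$ reproduces $\psi^*$; a cut exists if and only if the returned value is below $t^{\bm m}_{0,\sigma}$. The triple loop gives the $\mathcal O(dp^2)$ running time stated above.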
For the binary case, the well-known separation algorithm \cite[Alg.~1]{zha12} is more efficient than the above general approach: there, problem \cref{eq:sepOpt_generic} is first solved without the constraint \cref{eq:sepOptFacet_generic} by setting $k_i^* = 0 \Leftrightarrow x_{i,1} > 1/2$. If this solution does not happen to fulfill \cref{eq:sepOptFacet_generic}, the constraint is restored by altering a single $k_i^*$ with minimal corresponding $\abs{x_{i,1} - 1/2}$ (see \cite[Alg.~1]{zha12} for details).
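The binary strategy just described can be sketched as follows. This is a hedged illustration of the idea behind \cite[Alg.~1]{zha12}, not a transcription of it; the required parity of $\sum_i k_i$ is passed in as the hypothetical parameter \texttt{target}.

```python
def separate_binary(x1, target):
    """Binary separation heuristic described in the text: set k_i = 0 iff
    x_{i,1} > 1/2; if the resulting parity differs from `target`, flip the
    single position whose x_{i,1} is closest to 1/2."""
    k = [0 if xi > 0.5 else 1 for xi in x1]
    if sum(k) % 2 != target % 2:
        i = min(range(len(x1)), key=lambda j: abs(x1[j] - 0.5))
        k[i] ^= 1
    return k
```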
For $p=3$, a more efficient algorithm can be derived, as shown below in \cref{sec:ALP_q3}. However, as the number of possible combinations for restoring \cref{eq:sepOptFacet_generic} grows rapidly with increasing $p$, the DP approach is preferable in the general case.
\begin{remark}\label{rem:SAtweaks}
\Cref{alg:SAgeneral} can be tweaked in several ways:
\begin{itemize}
\item In the $d$-th row of $\texttt T$, only the single value $\texttt T[d,[d-1]_p\sigma]$ has to be computed.
\item If $d$ and/or $p$ are large, one could first minimize $\Psi(\boldtheta,\boldx)$ without the constraint \cref{eq:sepOptFacet_generic} (which is possible in time $\mathcal O(dp)$). If the result satisfies \cref{eq:sepOptFacet_generic} (optimum found) or yields $\psi^* \geq t^{\bm m}_{0,\sigma}$ (no cut exists), we are done.
\item Because $v^k(\boldx_i) \geq 0$ for all $k \in \mathbb F_p$ and $i \in \range{d}$, $\psi^* \geq \min_{\zeta \in \mathbb F_p} \psi(\boldx, i, \zeta)$ holds for \emph{any} $i \in \range d$. Hence, the search can be stopped as soon as all entries in a single row of $\texttt T$ are larger than or equal to $t^{\bm m}_{0,\sigma}$.
\end{itemize}
\end{remark}
\subsection{Efficient Implementation for $p=3$} \label{sec:ALP_q3}
In this subsection, we develop an optimized version of \cref{alg:SAgeneral} for the case $p=3$. Note that there is only one relevant class $\mathcal T^{\bm m = (0,0,0)}$ (cf.\ \cref{sec:q3}), so the parameter \enquote{$\bm m$} is omitted in the sequel.
The optimization problem in \cref{eq:sepOpt_generic} simplifies to the following form for $p=3$:
\begin{subequations}
\begin{align}
\psi^*=\min\;& \Psi(\boldtheta,\boldx) \label{eq:sepOptP3-obj}\\
\text{s.\,t.}\quad&\boldtheta = (\bm t_{k_1} \mid \dotsc \mid \bm t_{k_d})^T\\ &\sum_{i=1}^d k_i = [2(d-1)]_3.\label{eq:sepOptFacet}
\end{align}\label{eq:sepOpt}%
\end{subequations}
To solve \cref{eq:sepOpt}, we first ignore \cref{eq:sepOptFacet}, i.e.,\ find $\hat\boldtheta$ that unconditionally minimizes $\Psi(\boldtheta,\boldx)$ by computing, for $i \in \range d$,
\begin{equation} \notag
\hat k_i = \argmin_{k\in \mathbb F_3} v^k(\boldx_i),
%
\end{equation}
breaking ties arbitrarily.
If $\Psi(\hat\boldtheta,\boldx) \geq 2$, then $\Theta$ does not contain a cut. Otherwise, if $\hat\boldtheta$ happens to fulfill \cref{eq:sepOptFacet}, it is the optimal solution of \cref{eq:sepOpt} and hence leads to a cut; in both cases we are done.
In the remaining case, $\eta = \sum_{i=1}^d \hat k_i - [2(d-1)]_3 \neq [0]_3$. Let $I \subseteq \range d$ be the set of positions in which $(\hat k_1,\dotsc, \hat k_d)$ differs from the optimal solution $(k_1^*,\dotsc, k_d^*)$ of \cref{eq:sepOpt}, which we now need to find. By the definition of the $\hat k_i$, we can assume that no nonempty subset $I' \subseteq I$ satisfies $\sum_{i \in I'} (\hat k_i - k_i^*) = [0]_3$; otherwise, one could replace $k_i^*$ by $\hat k_i$ for $i \in I'$ while maintaining \cref{eq:sepOptFacet} without increasing the objective value \cref{eq:sepOptP3-obj}. In particular, this shows that $\abs I \leq 2$.
Let
\ifonecolumn
\begin{displaymath}
\psi_i^1 = v^{\hat k_i + [1]_3}(\bm x_i) - v^{\hat k_i}(\bm x_i)
\text{ and } \psi_i^2= v^{\hat k_i + [2]_3}(\bm x_i) - v^{\hat k_i}(\bm x_i)
\end{displaymath}
\else
\begin{align*}
&\psi_i^1 = v^{\hat k_i + [1]_3}(\bm x_i) - v^{\hat k_i}(\bm x_i)\\
\text{and }&\psi_i^2= v^{\hat k_i + [2]_3}(\bm x_i) - v^{\hat k_i}(\bm x_i)
\end{align*}
\fi
denote the increase of \cref{eq:sepOptP3-obj} incurred by replacing $\hat k_i$ with $\hat k_i + [1]_3$ and $\hat k_i + [2]_3$, respectively. Furthermore, define $i^1, j^1 \in \range d$ with $i^1 \neq j^1$ and $i^2, j^2 \in \range d$ with $i^2 \neq j^2$ such that
\begin{subequations}
\begin{align}
&\psi^1_{i^1} \leq \psi^1_{j^1} \leq \psi^1_l&&\text{for all }l \notin \{i^1,j^1\}
\\
\text{and }&\psi^{2}_{i^{2}} \leq \psi^{2}_{j^{2}} \leq \psi^{2}_l&&\text{for all }l \notin \{i^{2},j^{2}\}
\end{align}%
\label{eq:psiPlusMinus}%
\end{subequations}%
i.e.,\ $\psi^\zeta_{i^\zeta}$ and $\psi^\zeta_{j^\zeta}$ correspond to the two optimal positions in which to add $\zeta$ to $\hat k_i$.
\begin{lemma}\label{lem:psiPlusMinus_q3}
Let $\boldtheta^1 = (\bm t_{k^1_1} \mid \dotsc \mid \bm t_{k^1_d})$ and $\boldtheta^2 = (\bm t_{k^2_1} \mid \dotsc \mid \bm t_{k^2_d})$ be defined by
\begin{align*}
k^1_i &=
\begin{cases}
\hat k_i - \eta &\text{if }i = i^{-\eta},\\
\hat k_i&\text{otherwise}
\end{cases}\\
\text{and}\quad k^2_i &=
\begin{cases}
\hat k_i + \eta &\text{if }i \in \{i^{\eta},j^{\eta}\},\\
\hat k_i&\text{otherwise}.
\end{cases}
\end{align*}
If $\psi^{-\eta}_{i^{-\eta}} < \psi^{\eta}_{i^{\eta}} + \psi^{\eta}_{j^{\eta}}$, then $\boldtheta^1$ is the optimal solution of \cref{eq:sepOpt}, otherwise $\boldtheta^2$.
\end{lemma}
\begin{IEEEproof}
By the above discussion, $\boldtheta^1$ minimizes $\Psi(\boldtheta, \boldx)$ among all possibilities that differ from $(\hat k_1,\dotsc, \hat k_d)$ in exactly one entry, while $\boldtheta^2$ is optimal among those that differ in two entries. As $\abs I \leq 2$, one of the two is the optimal solution of \cref{eq:sepOpt}, which concludes the proof.
\end{IEEEproof}
This completes the description of \textsc{Separate}($\bm x$) for ternary codes; the pseudocode is shown in \cref{alg:SAternary}.
\begin{algorithm}
\caption{\textsc{Separate}($\bm x$) for $p=3$}\label{alg:SAternary}
\textbf{Input:} Current LP solution $\boldx \in \mathbb R^{3d}$ \\
\textbf{Output:} Solution $\boldtheta^*$ of \cref{eq:sepOpt}, if $\psi^\ast < t_{0,\sigma}= 2$; \texttt{Null} otherwise
\begin{algorithmic}[1]
\State Initialize arrays $\bm k$, $\bolda$, $\boldb$, $\boldsymbol{\psi}^1$, $\boldsymbol{\psi}^{2}$ of length $d$ each
\State $\Psi \leftarrow 0$, $\eta \leftarrow [-2(d-1)]_3$
\For{$i \in \range d$}
\State Compute $v^0(\bm x_i)$, $v^1(\bm x_i)$, and $v^2(\bm x_i)$
\If{$v^0(\bm x_i) \leq v^1(\bm x_i)$ and $v^0(\bm x_i) \leq v^2(\bm x_i)$}
\State $k_i \leftarrow 0$
\ElsIf{$v^1(\bm x_i) \leq v^0(\bm x_i)$ and $v^1(\bm x_i) \leq v^2(\bm x_i)$}
\State $k_i \leftarrow 1$
\Else
\State $k_i \leftarrow 2$
\EndIf
\State $\eta \leftarrow \eta + k_i$
\State $\Psi \leftarrow \Psi + v^{k_i}(\bm x_i)$
\EndFor
\If{$\Psi < 2$ and $\eta = [0]_3$}
\State \textbf{return} $(\bm t_{k_1} \mid \dotsc \mid \bm t_{k_d})^T$
\ElsIf{$\Psi < 2$}
\State compute $\psi^1_i$ and $\psi^2_i$ for $i \in \range d$, and
\State compute $i^1, j^1, i^2, j^2$ defined in \cref{eq:psiPlusMinus}
\If{$\psi^{-\eta}_{i^{-\eta}} < \psi^{\eta}_{i^{\eta}} + \psi^{\eta}_{j^{\eta}}$}
\State $k_{i^{-\eta}} \leftarrow k_{i^{-\eta}} - \eta$
\State $\Psi \leftarrow \Psi + \psi^{-\eta}_{i^{-\eta}}$
\Else
\State $k_{i^{\eta}} \leftarrow k_{i^{\eta}} + \eta$
\State $k_{j^{\eta}} \leftarrow k_{j^{\eta}} + \eta$
\State $\Psi \leftarrow \Psi + \psi^{\eta}_{i^{\eta}} + \psi^{\eta}_{j^{\eta}}$
\EndIf
\If{$\Psi < 2$}
\State \textbf{return} $(\bm t_{k_1} \mid \dotsc \mid \bm t_{k_d})^T$
\EndIf
\EndIf
\State \textbf{return} \texttt{Null}
\end{algorithmic}
\end{algorithm}
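The repair step of \cref{lem:psiPlusMinus_q3} admits a compact implementation. The following Python sketch mirrors \cref{alg:SAternary} under the same conventions as before (the costs $v^k(\bm x_i)$ are assumed precomputed in a $d \times 3$ array \texttt{v}; sorting is used for brevity, although finding the two smallest entries suffices for $\mathcal O(d)$ time).

```python
def separate_ternary(v, d):
    """Separation for p = 3 (cf. Lemma lem:psiPlusMinus_q3): unconstrained
    minimization followed by the cheapest single or double repair.
    v: d x 3 cost array; returns (Psi, k) with sum(k) = 2(d-1) mod 3."""
    k = [min(range(3), key=lambda c: v[i][c]) for i in range(d)]
    Psi = sum(v[i][k[i]] for i in range(d))
    eta = (sum(k) - 2 * (d - 1)) % 3
    if eta != 0:
        # psi[z][i]: cost increase of shifting position i by +z (mod 3)
        psi = {z: [v[i][(k[i] + z) % 3] - v[i][k[i]] for i in range(d)]
               for z in (1, 2)}
        order = {z: sorted(range(d), key=lambda i: psi[z][i]) for z in (1, 2)}
        minus = (-eta) % 3
        one = psi[minus][order[minus][0]]                        # best single shift by -eta
        two = psi[eta][order[eta][0]] + psi[eta][order[eta][1]]  # best double shift by +eta
        if one < two:
            i = order[minus][0]
            k[i] = (k[i] + minus) % 3
            Psi += one
        else:
            for i in order[eta][:2]:
                k[i] = (k[i] + eta) % 3
            Psi += two
    return Psi, k
```

A cut exists if and only if the returned objective value is below $t_{0,\sigma} = 2$, exactly as in \cref{alg:SAternary}.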
\subsection{Implementation for $p=7$} \label{sec:ALP_q7}
As shown in \cref{sec:q7} (the case $p=7$), there are two nonbasic classes $\Theta_5$ and $\Theta_6$ of inequalities. The similarity of \cref{lem:conditions_theta5,lem:conditions_theta6} to \cref{lem:conditions} allows us to use the same separation algorithm as above for these classes.
An inequality in $\Theta_5$ admits the form \cref{eq:facetAltForm_p} (with $t_{0,\sigma}^{\bm m}$ replaced by $t^{\rm b}_{0,\sigma^{\rm b}} = 9$) when one defines $v^k(\bm x_i) = 9 - t^{\rm b}_{0,k} - \bm t^{\rm b}_k \bm x_i^T$ for $i \neq i^{\rm nb}$ and $v^k(\bm x_{i^{\rm nb}}) = 9 - t^{\rm nb}_{0,k} - \bm t^{\rm nb}_k \bm x_{i^{\rm nb}}^T$, and hence the same optimization problem \cref{eq:sepOpt_generic} (with \cref{eq:sepOptFacet_generic} replaced by \cref{eq:thetaconditions-Theta5}) can be used for separation and solved with the DP approach described by \cref{alg:SAgeneral}.
Analogously, \cref{eq:facetAltForm_p} holds for $\Theta_6$ (with the right-hand side $t_{0,\sigma}^{\bm m}$ replaced by $0$) when we define $v^k(\bm x_i) = - t^{l_i}_{0,k} - \bm t^{l_i}_k \bm x_i^T$ (with $l_i \in \{\rm b, hi, lo\}$ as defined in \cref{sec:q7}).
In both cases, the \enquote{special} indices $i^{\rm nb}$ (for $\Theta_5$) and $i^{\mathrm{hi}}$ / $i^{\mathrm{lo}}$ (for $\Theta_6$) are assumed to be fixed in advance, which can be implemented by one ($\Theta_5$) or two ($\Theta_6$) extra \texttt{for}-loops around the actual separation algorithm.
\section{Searching for Redundant Parity-Check Equations} \label{sec:ACG}
In this section, we outline an efficient cut-searching algorithm for general $p$-ary codes, where $p$ is any prime.
Assume that ALP decoding (\cref{alg:ALP}) of a $p$-ary linear code $\mathcal C$ of length $n$ has returned a \emph{fractional} pseudocodeword $\bm p=(\bm p_1,\dotsc,\bm p_n)$, i.e.,\ $\bm p = \bm x^{\rm LP}$ has some fractional entries. Due to the generalized box constraints (from the individual constituent codes), $p_{i,0}+\cdots+p_{i,p-1} = 1$ for all $i \in \range n$. Now, let
\begin{displaymath}
\mathcal{F}_{\bm p} =\{ i \in \range n \colon p_{i,j} \text{ is fractional for some } j \in \{0,\dotsc,p-1\} \}
\end{displaymath}
denote the set of fractional \emph{positions} in $\bm p$. We can prove the following theorem which generalizes \cite[Thm.~3]{zha12} and \cite[Thm.~3.3]{tan10} to the $p$-ary case.
\begin{theorem} \label{thm:RPC}
Let $\bm h = (h_1,\dotsc,h_n)$ denote a valid (redundant) parity-check constraint for a $p$-ary linear code $\mathcal C$ of length $n$, let $I = \{i \in \range n\colon h_i \neq 0\}$, and assume that $\abs{I \cap \mathcal F_{\bm p}} = 1$ for a given pseudocodeword $\bm p$. Then, the inequalities constructed in \cref{sec:bb} contain a cut that separates $\bm p$ from $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$.
\end{theorem}
\begin{IEEEproof}
As the entries $h_j$ and $\bm p_j$ for $j \notin I$ are not relevant, we can assume that $I = \range n$. Furthermore, by \cref{cor:rotation-separation} and because rotating $\bm p$ does not change the size of $\mathcal F_{\bm p}$, we can assume without loss of generality that $\bm h = (1,\dotsc, 1)$.
Now, note that if $\bm p_i$ is not fractional, then $\bm p_i \in \{\bm e^1,\dotsc,\bm e^p\}$ (due to the generalized box constraints), where, for $j \in \range p$, $\bm e^j = (0,\dotsc,0,1,0,\dotsc,0)$ denotes the $j$-th unit vector.
We show that $\mathcal{T}^{\bm m = (0,\dotsc,0)}$, which is a valid class for any prime $p$ (see \cref{prop:specialMvalid}), leads to a cut for $\bm p$. The corresponding building blocks are
\begin{equation} \label{eq:Tm000}
\begin{split}
\bm t^{\bm m}_0 &= (0,1,\dotsc,p-3,p-2,p-1), \\
\bm t^{\bm m}_1 &= (0,1,\dotsc,p-3,p-2,-1), \\
\bm t^{\bm m}_2 &= (0,1,\dotsc,p-3,-2,-1), \\
\vdots\quad&\phantom{=}\quad \vdots \\
\bm t^{\bm m}_{p-2} &= (0,1,-p+2,\dotsc,-2,-1), \\
\bm t^{\bm m}_{p-1} &= (0,-p+1,-p+2,\dotsc,-2,-1).
\end{split}
\end{equation}
In the optimization problem in \cref{eq:sepOpt_generic}, $v^k(\bm p_i)=t^{\bm m}_{0,\sigma} - t^{\bm m}_{0,k}-\boldt_k^{\bm m} \boldp_i^T = p-1-\left[k\right]_\mathbb Z-\boldt_k^{\bm m} \boldp_i^T$ for $k \in \mathbb F_p$, where $\boldt_k^{\bm m}$ is one of the $p$ possible building blocks of \cref{eq:Tm000}. When $\bm p_i$ is not fractional,
\begin{equation} \notag %
\begin{split}
v^{0}(\bm p_i) &\in \{p-1,p-2,p-3,\dotsc,2,1,0\}, \\
v^{1}(\bm p_i) &\in \{p-2,p-3,p-4,\dotsc,1,0,p-1\}, \\
v^{2}(\bm p_i) &\in \{p-3,p-4,p-5,\dotsc,0,p-1,p-2\}, \\
\vdots\quad&\phantom{\in}\quad \vdots \\
v^{p-2}(\bm p_i) &\in \{1,0,p-1,\dotsc,4,3,2\}, \\
v^{p-1}(\bm p_i) &\in \{0,p-1,p-2,\dotsc,3,2,1\}
\end{split}
\end{equation}
where the ordering of elements is according to $\bm e^1,\dotsc,\bm e^p$. Furthermore, it is easily verified that when $\bm p_i$ is fractional, i.e.,\ $\bm p_{i} = (p_{i,0},\dotsc,p_{i,p-1})$ with $0 \leq p_{i,j} \leq 1$ for $j=0,\dotsc,p-1$, $\sum_{j=0}^{p-1} p_{i,j} = 1$, and $p_{i,0},\dotsc,p_{i,p-1}$ not all integers, then $v^{k}(\bm p_i)$ is \emph{strictly} smaller than $p-1$ for all $k \in \mathbb F_p$.

Now, we build a violated inequality from $\Theta^{\bm m = (0,\dotsc,0)}$ in the following way, assuming that $n$ is the fractional position. If $\bm p_i=\bm e^j$, $j \in \range{p}$, choose $k_i=-\left[j\right]_p$, where $i \in \range{n-1}$. These positions give an overall contribution of zero to the objective function $\Psi(\boldtheta,\bm x)$ in the optimization problem in \cref{eq:sepOpt_generic}. Finally, we choose $k_n$ such that the constraint in \cref{eq:sepOptFacet_generic} is fulfilled. Since $\bm p_n$ is fractional (by assumption), its contribution to $\Psi(\boldtheta,\bm x)$ is strictly less than $p-1$, independent of the choice of $k_n$. It follows that the optimal objective value of \cref{eq:sepOpt_generic} is strictly less than $p-1 = t^{\bm m}_{0,\sigma}$, and thus $\Theta^{\bm m = (0,\dotsc,0)}$ indeed contains a cut (the one just constructed) for the pseudocodeword $\bm p$.

More formally, each $v^k(\cdot)$ is a linear map of the probability simplex onto the interval $[0,p-1]$, and the images of the extreme points $\bm e^1,\dotsc,\bm e^p$ of the simplex under this map are $\{0,\dotsc,p-1\}$. By linearity, the image of a fractional $\bm p_i$, which is a nontrivial convex combination (at least two nonzero coefficients) of those extreme points, equals the corresponding convex combination (with at least two nonzero coefficients) of the images $\{0,\dotsc,p-1\}$, and is thus strictly less than $p-1$.
\begin{comment}
We will consider $\Theta_1$ with its corresponding building blocks shown in \cref{tab:values_tk}. In the optimization problem in \cref{eq:sepOpt}, $v^k(\bm p_i)=2+k-\boldt_k \boldp_i^T$, $k \in \{-2,-1,0\}$ and $\boldt_k$ is one of three possible building blocks. When $\bm p_i$ is not fractional, $v^{-2}(\bm p_i) \in \{0,2,1\}$, $v^{-1}(\bm p_i) \in \{1,0,2\}$, and $v^{0}(\bm p_i) \in \{2,1,0\}$ where the ordering of elements is according to $\{(0,0),(1,0),(0,1)\}$. Furthermore, it can easily be verified that when $\bm p_i$ is fractional, i.e.,\ $\bm p_i = (a,b)$ where $0 \leq a \leq 1$, $0 \leq b \leq 1$, $a+b \leq 1$, and $a$ and $b$ are not both integers, $v^{k}(\bm p_i) < 2$, for $k \in \{-2,-1,0\}$, i.e.,\ \emph{strictly} smaller than $2$. Now, we build a valid inequality from $\Theta_1$ in the following way, assuming that $d$ is the fractional position. If $\bm p_i=(0,0)$, choose $k_i=-2$; if $\bm p_i=(1,0)$, choose $k_i=-1$; and if $\bm p_i=(0,1)$, choose $k_i=0$, $i \in [d-1]$. This will give an overall contribution of zero to the objective function $\Psi(\boldtheta,\bm x)$ in the optimization problem in \cref{eq:sepOpt}. Finally, we need to choose $k_d$ such that the constraint in \cref{eq:sepOptFacet} is fulfilled. However, since $\bm p_d$ is fractional (by assumption), the contribution to $\Psi(\boldtheta,\bm x)$ is strictly less than $2$ (independent of the choice for $k_d$) and it follows that in the optimization problem in \cref{eq:sepOpt} the optimal objective value is indeed strictly less than $2$, and thus $\Theta_1$ indeed contains a cut (the one that we constructed) for the pseudocodeword $\bm p$.
For the quinary case, the argument is similar. Due to the generalized box constraints, when $\bm p_i$ is not fractional,
\begin{displaymath}
\bm p_i \in \{(0,0,0,0),(1,0,0,0),(0,1,0,0), (0,0,1,0), (0,0,0,1)\}.
\end{displaymath}
Again, we will consider $\Theta_1$ with its corresponding building blocks shown in \cref{tab:values_tk_q5_1}. In the optimization problem in \cref{eq:sepOpt_q5}, $v^k(\bm p_i)=4+k-\boldt_k \boldp_i^T$, $k \in \{-4,-3,-2,-1,0\}$ and $\boldt_k$ is one of five possible building blocks. When $\bm p_i$ is not fractional,
\begin{align*}
v^{-4}(\bm p_i) &\in \{0,1,2,3,4\}, v^{-3}(\bm p_i) \in \{1,2,3,4,0\}, \\
v^{-2}(\bm p_i) &\in \{2,3,4,0,1\}, v^{-1}(\bm p_i) \in \{3,4,0,1,2\}, \\
v^{0}(\bm p_i) &\in \{4,0,1,2,3\}
\end{align*}
where the ordering of elements is according to
\begin{displaymath}
\{(0,0,0,0),(1,0,0,0),(0,1,0,0), (0,0,1,0), (0,0,0,1)\}.
\end{displaymath}
Furthermore, it can easily be verified that when $\bm p_i$ is fractional, $v^{k}(\bm p_i) < 4$, for $k \in \{-4,-3,-2,-1,0\}$, i.e.,\ \emph{strictly} smaller than $4$, and the result follows in the same way as in the ternary case, since we can always choose $k_i$ (when constructing an invalid inequality for $\bm p$), when $i \in [d-1]$, such that the contribution to the objective function is zero. As an illustrative example, consider $k=-3$ and the corresponding building block $\boldt_{-3}=(-1,-2,-3,1)$. In this case, $v^{-3}(\bm p_i) = 4+ (-3) - (-p_{i,1} - 2p_{i,2}-3p_{i,3} + p_{i,4}) = 1 - 2p_{i,1} - p_{i,2}-4p_{i,4} + 3 (p_{i,1}+p_{i,2}+p_{i,3}+p_{i,4}) \leq 4 - 2p_{i,1} - p_{i,2}-4p_{i,4}$, which is strictly smaller than $4$ when $\bm p_i$ is fractional. Finally, we remark that we could have based the proof for the case $p=5$ also on the second building block class $\Theta_2$ with its corresponding building blocks shown in \cref{tab:values_tk_q5_2}. The same argument would hold.
\end{comment}
\end{IEEEproof}
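The two facts used in the proof, namely that each $v^k$ maps the unit vectors bijectively onto $\{0,\dotsc,p-1\}$ and is strictly below $p-1$ on fractional probability vectors, can be checked numerically. The following sketch (ours, for illustration) does so for $p=5$ using the building blocks of \cref{eq:Tm000}.

```python
p = 5
# building blocks from eq:Tm000: entry j of t_k is j for j < p - k, else j - p
t = [[j if j < p - k else j - p for j in range(p)] for k in range(p)]

def v(k, x):
    # v^k(x) = (p - 1) - [k]_Z - t_k . x, cf. the proof of Theorem thm:RPC
    return (p - 1) - k - sum(t[k][j] * x[j] for j in range(p))

units = [[1.0 if j == c else 0.0 for j in range(p)] for c in range(p)]
for k in range(p):
    vals = [v(k, e) for e in units]
    assert sorted(vals) == list(range(p))   # each value 0..p-1 is hit exactly once
frac = [0.5, 0.5, 0.0, 0.0, 0.0]            # a fractional probability vector
assert all(v(k, frac) < p - 1 for k in range(p))
```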
As in the binary case, cut-inducing redundant parity-check (RPC) equations can be found, for instance, by reducing the parity-check matrix $\bm H$ to \emph{reduced row echelon} form using Gaussian elimination, where columns are processed in the order of \enquote{fractionality} (or \emph{closeness} to $(\frac{1}{p},\dotsc,\frac{1}{p})$) of the corresponding coordinate of $\bm p$. For instance, since each $\bm p_i$, $i \in \range{n}$, is a probability vector, its entropy can be computed and compared to the entropy of the uniform distribution $(\frac{1}{p},\dotsc,\frac{1}{p})$. Let $\pi$ be a permutation (of length $n$) which reorders the columns of $\bm H$ in order of closeness (or entropy) to $(\frac{1}{p},\dotsc,\frac{1}{p})$, and let $\tilde{\bm H}$ denote the reduced row echelon form of $\pi(\bm H)$. \Cref{thm:RPC} guarantees a cut only for those rows $\tilde{\bm h}$ of $\pi^{-1}(\tilde{\bm H})$ for which $\tilde{\bm h}_{\mathcal{F}_{\bm p}}$ has weight one, but rows of larger weight may also provide a cut. Thus, in a practical decoding situation, all rows $\tilde{\bm h}$ of $\pi^{-1}(\tilde{\bm H})$ should be processed, using the separation algorithm described in \cref{sec:ACG}, in order to locate redundant cut-inducing parity-check equations. In \cite{zha12}, this is called \emph{adaptive cut generation} (ACG) and can be combined with ALP decoding as outlined in \cite[Alg.~2]{zha12}. In \cref{sec:numerical_results}, we provide simulation results for the decoding algorithm obtained by generalizing, as outlined above, the ACG-ALP procedure of \cite[Alg.~2]{zha12} (without considering removal of constraints) to nonbinary linear codes, showing that near-ML decoding performance can be obtained for a ternary Reed-Muller (RM) code. %
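The RPC search just described can be sketched as follows. This is an illustration only: the function name, the entropy-based column ordering, and the representation of $\bm H$ as lists of integers modulo $p$ are choices made here, not the paper's code.

```python
import math

def rpc_candidates(H, pvecs, p):
    """Row-reduce H over F_p with columns processed in order of decreasing
    entropy of the corresponding coordinate of the pseudocodeword, as in the
    text.  H: rows with entries in {0,...,p-1}; pvecs: probability vectors
    p_i.  Returns the reduced rows as candidate RPC equations."""
    def entropy(q):
        return -sum(x * math.log(x) for x in q if x > 0)
    n = len(pvecs)
    order = sorted(range(n), key=lambda i: -entropy(pvecs[i]))  # most fractional first
    H = [row[:] for row in H]
    r = 0
    for col in order:
        piv = next((i for i in range(r, len(H)) if H[i][col] % p != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        H[r], H[piv] = H[piv], H[r]
        inv = pow(H[r][col], p - 2, p)    # inverse in F_p via Fermat's little theorem
        H[r] = [(x * inv) % p for x in H[r]]
        for i in range(len(H)):           # eliminate the column in all other rows
            if i != r and H[i][col] % p != 0:
                c = H[i][col]
                H[i] = [(a - c * b) % p for a, b in zip(H[i], H[r])]
        r += 1
        if r == len(H):
            break
    return H
```

Every returned row lies in the row space of $\bm H$ and is therefore a valid parity check; each row can then be fed to the separation algorithm to test whether it induces a cut.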
\section{The Case $q=p^m$} \label{sec:ptom}
In this section, we consider the problem of efficient LP decoding of linear codes over the field $\mathbb F_q = \mathbb F_{p^m}$, where $p$ is any prime and $m > 1$ is an integer.
For any nonzero $h \in \mathbb{F}_{p^m}$, $\emptyset \neq \mathcal{K} \subseteq \range{m}$, and $\boldsymbol{\gamma} \in (\mathbb F_{p} \setminus \{0\})^{|\mathcal{K}|}$, let
\begin{displaymath}
\mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},h) = \left \{\zeta \in \mathbb{F}_{p^m}\colon \sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h \zeta)_k = \beta \right\}
\end{displaymath}
for $\beta \in \mathbb F_p$, where $(\cdot)_k$ denotes the $k$-th entry of its vector argument. Note that the summation and multiplications above are in $\mathbb F_p$, except for the product $h \zeta$, which is in $\mathbb F_{p^m}$.
Now, let $\mathcal{C}$ denote a nonbinary SPC code over $\mathbb{F}_{p^m}$ of length $d$ defined by a parity-check vector $\bm h = (h_1,\dotsc,h_d)$. Furthermore, for any vector
\[ \bm f = (\bm f_1,\dotsc, \bm f_d)^T \in \mathbb R^{dq}\]
define
\begin{displaymath}
g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} = \sum_{\beta \in \mathbb F_p \setminus \{0\}} \sum_{i \in \mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},h_j)} [\beta]_\mathbb Z \cdot f_{j,i}
\end{displaymath}
where $\emptyset \neq \mathcal{K} \subseteq \range{m}$, $\boldsymbol{\gamma} \in (\mathbb F_{p} \setminus \{0\})^{|\mathcal{K}|}$, and $j \in \range d$, and where the summation is in the real space.
We have the following proposition, which generalizes \cite[Lem.~12]{liu14} to any field (only $p=2$ was considered in \cite{liu14}) under the constant-weight embedding.\footnote{Note that a \emph{weaker} version of the proposition appeared in \cite{ros15} (proof omitted) in which $\boldsymbol{\gamma}$ was constrained to $(1,\dotsc,1)$ which results in a potentially weaker relaxation. Also, Flanagan's embedding from \cref{rem:Flanagan} was considered in \cite{ros15}.}
\begin{proposition} \label{prop:pm}
Let $\mathcal C$ be the SPC code over the field $\mathbb{F}_{p^m}$ defined by the parity-check vector $\bm h=(h_1,\dotsc,h_d)$. Then, $\mathsf F_{\mathrm v}(\mathcal C)$ is equal to the set of vectors $\bm f \in \mathbb R^{dq}$ that satisfy the following three conditions (this set will be denoted by $\mathcal E$ in the sequel):
\begin{enumerate}
\item $f_{j,i} \in \{0,1\}$ for $j \in \range d$ and $i \in \mathbb F_q$,%
\item $\sum_{i \in \mathbb F_q} f_{j,i}= 1$ for $j \in \range d$, and
\item for any $\emptyset \neq \mathcal{K} \subseteq \range{m}$ and $\boldsymbol{\gamma} \in (\mathbb F_{p} \setminus \{0\})^{|\mathcal{K}|}$, it holds that
$\sum_{j=1}^{d} \left[ g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} \right]_p = \left[ 0 \right]_p$.
\end{enumerate}
\end{proposition}
\begin{IEEEproof}
See Appendix~\ref{proofproppm}.
\end{IEEEproof}
\begin{comment}
Now, the set $\mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},h_j)$ can be written explicitly as
\begin{multline*}
\Bigl\{ \eta_1 \alpha^0 + \cdots + \eta_{m} \alpha^{m-1} %
\colon 0 \leq \eta_l \leq p-1\text{ for all } l \\
\;\;\;\; \text{and } h_j \sum_{k \in \mathcal{K}} \eta_{k} = \beta\Bigr\}
\end{multline*}
where $\alpha$ is a primitive element of $\mathbb{F}_{p^m}$,
from which it follows that
$|\mathcal{B}^{(1)}(\mathcal{K},h_j)| = \cdots = |\mathcal{B}^{(p-1)}(\mathcal{K},h_j)| = p^{m-1}$.
\end{comment}
Now, we can write $g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)}$ as
\ifonecolumn
\begin{displaymath}
g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} = 1 \cdot \sum_{i \in \mathcal{B}^{(1)}(\mathcal{K},\boldsymbol{\gamma},h_j)} f_{j,i} + \cdots + (p-1) \cdot \sum_{i \in \mathcal{B}^{(p-1)}(\mathcal{K},\boldsymbol{\gamma},h_j)} f_{j,i}
= \sum_{s=1}^{p^{m-1}} \sum_{\beta \in \mathbb F_p \setminus \{0\}} \left[\beta \right]_\mathbb Z \cdot f_{j,i_{s,\beta}}
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} &= 1 \cdot \sum_{i \in \mathcal{B}^{(1)}(\mathcal{K},\boldsymbol{\gamma},h_j)} f_{j,i} + \cdots + \\
&\;\;\;\;(p-1) \cdot \sum_{i \in \mathcal{B}^{(p-1)}(\mathcal{K},\boldsymbol{\gamma},h_j)} f_{j,i}\\
&= \sum_{s=1}^{p^{m-1}} \sum_{\beta \in \mathbb F_p \setminus \{0\}} \left[\beta \right]_\mathbb Z \cdot f_{j,i_{s,\beta}}
\end{split}
\end{displaymath}
\fi
where $i_{s,\beta}$ is the $s$-th element (under some arbitrary ordering) of $\mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},h_j)$. The last equality follows since $|\mathcal{B}^{(0)}(\mathcal{K},\boldsymbol{\gamma},h_j)| = \cdots = |\mathcal{B}^{(p-1)}(\mathcal{K},\boldsymbol{\gamma},h_j)| = p^{m-1}$ (details omitted for brevity). It follows (from the third condition of Proposition~\ref{prop:pm}) that
\begin{equation} \label{eq:gjk}
\sum_{j=1}^d \left[ g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} \right]_p = \sum_{j=1}^d \sum_{s=1}^{p^{m-1}} \sum_{\beta \in \mathbb F_p \setminus \{0\}} \beta \cdot \left[ f_{j,i_{s,\beta}} \right]_p = \left[0\right]_p.
\end{equation}
The constraint in (\ref{eq:gjk}) can be written as the $p$-ary parity-check constraint
\begin{equation} \label{eq:SPCp-ary}
\sum_{j=1}^d \sum_{s=1}^{p^{m-1}} \tilde{f}_{j,s} = \sum_{j=1}^d \tilde{f}_{j} = [0]_p
\end{equation}
where $\tilde{f}_{j,s} = \sum_{\beta \in \mathbb F_p \setminus \{0\}} \beta \cdot \left[ f_{j,i_{s,\beta}} \right]_p \in \mathbb{F}_p$
and $\tilde{f}_j = \sum_{s=1}^{p^{m-1}} \tilde{f}_{j,s} \in \mathbb F_p$.
Thus, in summary, a length-$d$ parity-check constraint over the finite field $\mathbb{F}_{p^m}$ can be written as a set of $p^{m}-1$ length-$d$ $p$-ary parity-check constraints ($p^m-1$ is the number of pairs of nonempty subsets $\mathcal{K} \subseteq \range{m}$ and vectors $\boldsymbol{\gamma}$). As pointed out in \cite{liu14} (for the case $p=2$), some of these constraints are redundant, and it is sufficient to consider $\mathcal{K} \in \{ \{1\},\dotsc,\{m\} \}$ and $\boldsymbol{\gamma}=(1,\dotsc,1)$. Now, each of these parity-check equations (including the redundant ones) can be considered separately in \cref{alg:ALP}, which results in an efficient relaxed ALP decoding algorithm for nonbinary codes over $\mathbb{F}_{p^m}$. %
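For concreteness, the sets $\mathcal B^{(\beta)}(\mathcal K, \boldsymbol\gamma, h)$ can be computed explicitly for a small field. The following sketch does so for $\mathbb F_4 = \mathbb F_{2^2}$ in the polynomial basis with $\alpha^2 = \alpha + 1$; since $p=2$, the only choice is $\boldsymbol\gamma = (1,\dotsc,1)$, so $\boldsymbol\gamma$ is omitted (the encoding and function names are ours, for illustration).

```python
def gf4_mul(a, b):
    """Multiply in GF(4) = GF(2)[x]/(x^2 + x + 1); elements are 2-bit
    integers b0 + 2*b1 encoding b0 + b1*alpha."""
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    if r & 4:                  # reduce alpha^2 -> alpha + 1
        r ^= 0b111
    return r & 3

def B_set(K, h):
    """B^(beta)(K, gamma=(1,...,1), h) for GF(4): partition the field by the
    GF(2)-sum of the selected coordinates of p(h*zeta) in the polynomial
    basis.  Returns a dict {beta: set of zeta}."""
    out = {0: set(), 1: set()}
    for zeta in range(4):
        coords = gf4_mul(h, zeta)      # p(h*zeta), coordinates as bits
        beta = 0
        for k in K:                    # k in {1, 2} indexes basis coordinates
            beta ^= (coords >> (k - 1)) & 1
        out[beta].add(zeta)
    return out
```

Each $\beta$-fiber has size $p^{m-1} = 2$ for nonzero $h$, matching the counting used to rewrite $g_j^{(\mathcal K,\boldsymbol\gamma,\bm f)}$ above.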
The following proposition is a key ingredient of the proposed relaxation.
\begin{proposition} \label{lem:relaxation}
Let $\boldtheta^T \boldx \leq \kappa$ be a valid facet-defining inequality for $\conv(\mathsf F_{\mathrm v}(\mathcal C))$, where $\mathcal C$ is a nonbinary SPC code of length $d > 0$ over $\mathbb F_p$ for any prime $p$, $ \boldtheta = (\boldt_{k_1} \mid \dotsc \mid \boldt_{k_d})^T$, $k_i \in \mathbb F_p$, and $\boldt_k = (t_{k,0},\dotsc,t_{k,p-1})$ for all $k \in \mathbb F_p$. Then, the inequality $\left(\tilde\boldtheta^{(\mathcal{K},\boldsymbol{\gamma})}\right)^T \boldx \leq \kappa$ where $\tilde\boldtheta^{(\mathcal{K},\boldsymbol{\gamma})} = \left(\tilde{\boldt}_{k_1}^{(\mathcal{K},\boldsymbol{\gamma})} \mid \dotsc \mid \tilde{\boldt}_{k_d}^{(\mathcal{K},\boldsymbol{\gamma})}\right)^T$ and %
$\tilde{t}_{k,\eta}^{(\mathcal{K},\boldsymbol{\gamma})} = t_{k,\beta}$ for all $\eta \in \mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},\bm h)$ and $\beta \in \mathbb F_p$ (each entry of $\boldt_k$ is repeated $p^{m-1}$ times in $\tilde{\boldt}_{k}^{(\mathcal{K},\boldsymbol{\gamma})}$) %
is valid %
for the convex hull $\mathcal{P}^{(\mathcal{K},\boldsymbol{\gamma})}$, described in \cref{eq:convex_hull} at the
\ifonecolumn
top of the page,
\else
top of the next page,
\fi
for all $\emptyset \neq \mathcal{K} \subseteq \range{m}$ and $\boldsymbol{\gamma} \in \left(\mathbb F_p \setminus \{0\} \right)^{|\mathcal{K}|}$.
\begin{figure*}[!t]
\normalsize
\begin{equation} \label{eq:convex_hull}
\mathcal{P}^{(\mathcal{K},\boldsymbol{\gamma})} = \conv\left( \left\{ \left( f_{1,0},\dotsc,f_{1,q-1},\dotsc,
f_{d,0},\dotsc,f_{d,q-1} \right)^T
\in \{0,1\}^{dq} \colon %
\sum_{j=1}^d \tilde{f}_{j} = [0]_p \text{ and } \sum_{i=0}^{q-1} f_{j,i} = 1,\; j \in \range{d} \right\} \right)
\end{equation}
\hrulefill
\vspace*{-2mm}
\end{figure*}
\end{proposition}
\begin{IEEEproof}
Because of the first two conditions of \cref{prop:pm}, and since $\mathcal{B}^{(\beta_1)}(\mathcal{K},\boldsymbol{\gamma},h_j) \cap \mathcal{B}^{(\beta_2)}(\mathcal{K},\boldsymbol{\gamma},h_j) = \emptyset$ for any $\beta_1 \neq \beta_2$, $\beta_1, \beta_2 \in \mathbb F_p \setminus \{0\}$, the vector $(f_{j,i_{s,1}},\dotsc,f_{j,i_{s,p-1}})$ can only take values in the set
\ifonecolumn
\begin{displaymath}
\{(0,0,0,\dotsc,0,0),
(1,0,0,\dotsc,0,0),(0,1,0,\dotsc,0,0),\dotsc,(0,0,0,\dotsc,0,1) \}.
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
\{&(0,0,0,\dotsc,0,0), \\
&(1,0,0,\dotsc,0,0),(0,1,0,\dotsc,0,0),\dotsc,(0,0,0,\dotsc,0,1) \}.
\end{split}
\end{displaymath}
\fi
Hence, $\tilde{f}_{j,s}$ is either equal to $[0]_p$ or to a single $f_{j,i_{s,\beta}}$ times $\beta$ (for some $\beta \in \mathbb F_p \setminus \{0\}$). Furthermore, since at most one $\tilde{f}_{j,s}$, $s \in \range{p^{m-1}}$, is nonzero, $\tilde{f}_{j}$ is either equal to $[0]_p$ or to a single $f_{j,i_{s,\beta}}$ times $\beta$ (for some $\beta \in \mathbb F_p \setminus \{0\}$ and $s \in \range{p^{m-1}}$).
From this observation and the fact that $\tilde{t}_{k,\eta}^{(\mathcal{K},\boldsymbol{\gamma})} = t_{k,\beta}$ for all $\eta \in \mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},\bm h)$ and $\beta \in \mathbb F_p$, it can readily be seen that for any binary vector $\bm{f}=(f_{1,0},\dotsc,f_{1,q-1},\dotsc,f_{d,0},\dotsc,f_{d,q-1})^T \in \mathcal{P}^{(\mathcal{K},\boldsymbol{\gamma})}$, $\left(\tilde\boldtheta^{(\mathcal{K},\boldsymbol{\gamma})}\right)^T \bm f = t_{k_1,\tilde{f}_1} + \cdots + t_{k_d,\tilde{f}_d}$. Since $\sum_{j=1}^d \tilde{f}_{j} = [0]_p$, $t_{k_1,\tilde{f}_1} + \cdots + t_{k_d,\tilde{f}_d} \leq \kappa$, and it follows that the inequality $\left(\tilde\boldtheta^{(\mathcal{K},\boldsymbol{\gamma})}\right)^T \bm x \leq \kappa$ is valid for any binary vector from $\mathcal{P}^{(\mathcal{K},\boldsymbol{\gamma})}$ and hence valid for $\mathcal{P}^{(\mathcal{K},\boldsymbol{\gamma})}$.
\end{IEEEproof}
\begin{comment}
Consider $\mathcal K = \{1\}$ and $\gamma = (1,\dotsc,1)$. Then,
\begin{displaymath}
\tilde{\bm t}_k^{(\mathcal K,\boldsymbol{\gamma})}=(\boldt_k | \boldt_k | \cdots | \boldt_k)
\end{displaymath}
\end{comment}
\begin{remark}
According to \cref{lem:relaxation}, for a given valid facet-defining inequality $\boldtheta^T \boldx \leq \kappa$, we can derive a valid inequality for $\mathcal{P}^{(\mathcal{K},\boldsymbol{\gamma})}$ using the interleaving scheme described there. Note that varying $\emptyset \neq \mathcal{K} \subseteq \range{m}$ and $\boldsymbol{\gamma} \in \left(\mathbb F_p \setminus \{0\} \right)^{|\mathcal{K}|}$ corresponds to permuting the entries of the lifted building blocks obtained for one fixed such pair $\mathcal{K}$ and $\boldsymbol{\gamma}$. This corresponds exactly to applying all permutations from $\GL(\mathbb F_{p^m})$ %
to the entries of the building blocks.
\end{remark}
From \cref{prop:pm,lem:relaxation}, and \cref{eq:gjk,eq:SPCp-ary}, it follows that
\begin{equation} \label{eq:relaxation}
\conv(\mathsf F_{\mathrm v}(\mathcal{C})) \subseteq \bigcap_{\emptyset \neq \mathcal{K} \subseteq \range{m},\, \boldsymbol{\gamma} \in \left(\mathbb F_p \setminus \{0\}\right)^{|\mathcal{K}|}} \mathcal{P}^{(\mathcal{K},\boldsymbol{\gamma})}
\end{equation}
where $\mathcal{C}$ is a nonbinary SPC code of length $d > 0$ over $\mathbb F_{p^m}$ where $p$ is any prime and $m$ is a positive integer. The relaxation from \cref{eq:relaxation} can be used for (relaxed) ALP decoding of general nonbinary codes over $\mathbb F_{p^m}$.
\begin{example} \label{ex:gf9}
Consider a nonbinary SPC code of length $d=3$ over
$\mathbb{F}_{3^2} = \{0,1,2,\alpha,1+\alpha,2+\alpha,2\alpha,1+2\alpha,2+2\alpha \}$,
where $\alpha$ is a primitive element in $\mathbb{F}_{3^2}$, defined by the parity-check vector $\bm h=(1,1,1)$. In this case, the constant-weight embedding (from \cref{def:Constant}) is as follows:
\ifonecolumn
\begin{align}
\mathsf{f}(0) &= (1,0,0,0,0,0,0,0,0),\;
\mathsf{f}(1) = (0,1,0,0,0,0,0,0,0), \;
\mathsf{f}(2) = (0,0,1,0,0,0,0,0,0), \notag \\
\mathsf{f}(\alpha) &= (0,0,0,1,0,0,0,0,0), \;
\mathsf{f}(1+\alpha) = (0,0,0,0,1,0,0,0,0), \;
\mathsf{f}(2+\alpha) = (0,0,0,0,0,1,0,0,0), \notag \\
\mathsf{f}(2\alpha) &= (0,0,0,0,0,0,1,0,0), \;
\mathsf{f}(1+2\alpha) = (0,0,0,0,0,0,0,1,0), \;
\mathsf{f}(2+2\alpha) = (0,0,0,0,0,0,0,0,1). \notag
\end{align}
\else
\begin{align}
\mathsf{f}(0) &= (1,0,0,0,0,0,0,0,0), \notag \\
\mathsf{f}(1) &= (0,1,0,0,0,0,0,0,0), \notag \\
\mathsf{f}(2) &= (0,0,1,0,0,0,0,0,0), \notag \\
\mathsf{f}(\alpha) &= (0,0,0,1,0,0,0,0,0), \notag \\
\mathsf{f}(1+\alpha) &= (0,0,0,0,1,0,0,0,0), \notag \\
\mathsf{f}(2+\alpha) &= (0,0,0,0,0,1,0,0,0), \notag \\
\mathsf{f}(2\alpha) &= (0,0,0,0,0,0,1,0,0), \notag \\
\mathsf{f}(1+2\alpha) &= (0,0,0,0,0,0,0,1,0), \notag \\
\mathsf{f}(2+2\alpha) &= (0,0,0,0,0,0,0,0,1). \notag
\end{align}
\fi
Furthermore, we have
\ifonecolumn
\begin{align}
&\mathcal{B}^{(1)}(\{1\},1,1) =\{1,1+\alpha,1+2\alpha \}, \;
\mathcal{B}^{(2)}(\{1\},1,1) =\{2,2+\alpha,2+2\alpha \}, \notag \\
&\mathcal{B}^{(1)}(\{1\},2,1) =\{2,2+\alpha,2+2\alpha \} = \mathcal{B}^{(2)}(\{1\},1,1), \;
\mathcal{B}^{(2)}(\{1\},2,1) =\{1,1+\alpha,1+2\alpha \} = \mathcal{B}^{(1)}(\{1\},1,1), \notag \\
&\mathcal{B}^{(1)}(\{2\},1,1) =\{\alpha,1+\alpha,2+\alpha \}, \;
\mathcal{B}^{(2)}(\{2\},1,1) =\{2\alpha,1+2\alpha,2+2\alpha \}, \notag\\
&\mathcal{B}^{(1)}(\{2\},2,1) =\{2\alpha,1+2\alpha,2+2\alpha \}=\mathcal{B}^{(2)}(\{2\},1,1), \;
\mathcal{B}^{(2)}(\{2\},2,1) =\{\alpha,1+\alpha,2+\alpha \} = \mathcal{B}^{(1)}(\{2\},1,1), \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(1,1),1) =\{1,\alpha,2+2\alpha \}, \;
\mathcal{B}^{(2)}(\{1,2\},(1,1),1) =\{2,1+\alpha,2\alpha \}, \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(1,2),1) =\{1,2+\alpha,2\alpha \}, \;
\mathcal{B}^{(2)}(\{1,2\},(1,2),1) =\{2,\alpha,1+2\alpha \}, \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(2,1),1) =\{2,\alpha,1+2\alpha \} = \mathcal{B}^{(2)}(\{1,2\},(1,2),1),\notag \\
&\mathcal{B}^{(2)}(\{1,2\},(2,1),1) =\{1,2+\alpha,2\alpha \}= \mathcal{B}^{(1)}(\{1,2\},(1,2),1), \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(2,2),1) =\{2,1+\alpha,2\alpha\}= \mathcal{B}^{(2)}(\{1,2\},(1,1),1),\notag \\
&\mathcal{B}^{(2)}(\{1,2\},(2,2),1) =\{1,\alpha,2+2\alpha \}= \mathcal{B}^{(1)}(\{1,2\},(1,1),1).\notag
\end{align}
\else
\begin{align}
&\mathcal{B}^{(1)}(\{1\},1,1) =\{1,1+\alpha,1+2\alpha \}, \notag \\
&\mathcal{B}^{(2)}(\{1\},1,1) =\{2,2+\alpha,2+2\alpha \}, \notag \\
&\mathcal{B}^{(1)}(\{1\},2,1) =\{2,2+\alpha,2+2\alpha \} = \mathcal{B}^{(2)}(\{1\},1,1), \notag \\
&\mathcal{B}^{(2)}(\{1\},2,1) =\{1,1+\alpha,1+2\alpha \} = \mathcal{B}^{(1)}(\{1\},1,1), \notag \\
&\mathcal{B}^{(1)}(\{2\},1,1) =\{\alpha,1+\alpha,2+\alpha \}, \notag \\
&\mathcal{B}^{(2)}(\{2\},1,1) =\{2\alpha,1+2\alpha,2+2\alpha \}, \notag\\
&\mathcal{B}^{(1)}(\{2\},2,1) =\{2\alpha,1+2\alpha,2+2\alpha \}=\mathcal{B}^{(2)}(\{2\},1,1), \notag \\
&\mathcal{B}^{(2)}(\{2\},2,1) =\{\alpha,1+\alpha,2+\alpha \} = \mathcal{B}^{(1)}(\{2\},1,1), \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(1,1),1) =\{1,\alpha,2+2\alpha \}, \notag \\
&\mathcal{B}^{(2)}(\{1,2\},(1,1),1) =\{2,1+\alpha,2\alpha \}, \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(1,2),1) =\{1,2+\alpha,2\alpha \}, \notag \\
&\mathcal{B}^{(2)}(\{1,2\},(1,2),1) =\{2,\alpha,1+2\alpha \}, \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(2,1),1) =\{2,\alpha,1+2\alpha \} \notag \\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; = \mathcal{B}^{(2)}(\{1,2\},(1,2),1),\notag \\
&\mathcal{B}^{(2)}(\{1,2\},(2,1),1) =\{1,2+\alpha,2\alpha \} \notag \\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \mathcal{B}^{(1)}(\{1,2\},(1,2),1), \notag\\
&\mathcal{B}^{(1)}(\{1,2\},(2,2),1) =\{2,1+\alpha,2\alpha\} \notag \\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \mathcal{B}^{(2)}(\{1,2\},(1,1),1),\notag \\
&\mathcal{B}^{(2)}(\{1,2\},(2,2),1) =\{1,\alpha,2+2\alpha \} \notag\\
&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;= \mathcal{B}^{(1)}(\{1,2\},(1,1),1).\notag
\end{align}
\fi
As an example, we can write out the constraint $\sum_{j=1}^3 \left[g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} \right]_3 = [0]_3$ for $\mathcal{K}=\{1,2\} = \range{2}$ and $\boldsymbol{\gamma}=(2,1)$ as follows:
\ifonecolumn
\begin{displaymath}
\sum_{j=1}^3 \left( \left[ f_{j,2} + f_{j,\alpha} + f_{j,1+2\alpha} \right]_3
+ 2 \left[ f_{j,1} + f_{j,2+\alpha} + f_{j,2\alpha} \right]_3 \right)
= \sum_{j=1}^3 \left( \tilde{f}_{j,1} + \tilde{f}_{j,2} + \tilde{f}_{j,3} \right)
= \tilde{f}_{1} + \tilde{f}_{2} + \tilde{f}_{3} = [0]_3
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
&\sum_{j=1}^3 \left( \left[ f_{j,2} + f_{j,\alpha} + f_{j,1+2\alpha} \right]_3
+ 2 \left[ f_{j,1} + f_{j,2+\alpha} + f_{j,2\alpha} \right]_3 \right) \\
&= \sum_{j=1}^3 \left( \tilde{f}_{j,1} + \tilde{f}_{j,2} + \tilde{f}_{j,3} \right)
= \tilde{f}_{1} + \tilde{f}_{2} + \tilde{f}_{3} = [0]_3
\end{split}
\end{displaymath}
\fi
where
\ifonecolumn
\begin{displaymath}
\tilde{f}_{j,1} = \left[ f_{j,2} + 2 f_{j,1} \right]_3, \;
\tilde{f}_{j,2} = \left[ f_{j,\alpha} + 2 f_{j,2+\alpha}\right]_3, \;
\tilde{f}_{j,3} = \left[ f_{j,1+2\alpha} + 2 f_{j,2\alpha} \right]_3
\end{displaymath}
\else
\begin{align*}
\tilde{f}_{j,1} &= \left[ f_{j,2} + 2 f_{j,1} \right]_3, \notag \\
\tilde{f}_{j,2} &= \left[ f_{j,\alpha} + 2 f_{j,2+\alpha}\right]_3, \notag \\
\tilde{f}_{j,3} &= \left[ f_{j,1+2\alpha} + 2 f_{j,2\alpha} \right]_3 \notag
\end{align*}
\fi
and where $\tilde{f}_{j} = \tilde{f}_{j,1} + \tilde{f}_{j,2} + \tilde{f}_{j,3} \in \mathbb F_3$.
\end{example}
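The sets listed in the example above can be reproduced with a short script. This is a hedged sketch: it writes the elements of $\mathbb F_9$ as coordinate pairs $x = x_1 + x_2\alpha$ and assumes $\mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},1) = \{x : \sum_{k \in \mathcal{K}} \gamma_k x_k = \beta \pmod 3\}$, a definition inferred here because it is consistent with every set listed in the example, not quoted from the paper.

```python
# Hedged sketch reproducing Example gf9: elements of F_9 as pairs (x1, x2),
# i.e. x = x1 + x2*alpha, ASSUMING the sets are given by
# B^(beta)(K, gamma, 1) = { x in F_9 : sum_{k in K} gamma_k * x_k = beta mod 3 },
# which matches all sets listed in the example.
from itertools import product

F9 = list(product(range(3), repeat=2))  # (x1, x2)  <->  x1 + x2*alpha

def B(beta, K, gamma):
    """Set B^(beta)(K, gamma, h_j = 1); K holds 1-based coordinate indices."""
    return {x for x in F9
            if sum(g * x[k - 1] for k, g in zip(K, gamma)) % 3 == beta}

# Spot-check two sets from the example:
assert B(1, (1,), (1,)) == {(1, 0), (1, 1), (1, 2)}      # {1, 1+alpha, 1+2alpha}
assert B(1, (1, 2), (1, 1)) == {(1, 0), (0, 1), (2, 2)}  # {1, alpha, 2+2alpha}
# B^(0), B^(1), B^(2) partition F_9 into classes of size p^(m-1) = 3:
for K, gamma in [((1,), (1,)), ((2,), (2,)), ((1, 2), (2, 1))]:
    classes = [B(beta, K, gamma) for beta in range(3)]
    assert all(len(c) == 3 for c in classes)
    assert set().union(*classes) == set(F9)
print("all checks passed")
```

The partition check mirrors the property $|\mathcal{B}^{(0)}| = \cdots = |\mathcal{B}^{(p-1)}| = p^{m-1}$ used in the derivation above.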
\begin{example}
For $p=3$, there is a single vector $\bm m = (0,0,0)$ that gives a valid, irredundant basic building block class (see \cref{sec:q3}). In particular, $\boldt_0^{\bm m} = (0,1,2)$, $\boldt_1^{\bm m} = (0,1,-1)$, and $\boldt_2^{\bm m} = (0,-2,-1)$. From the construction of \cref{lem:relaxation} (and \cref{ex:gf9}), $\tilde\boldt_k^{(\{1,2\},(2,1))} = \left(t_{k,0},t_{k,2},t_{k,1},t_{k,1},t_{k,0},t_{k,2},t_{k,2},t_{k,1},t_{k,0} \right)$ for all $k \in \mathbb F_3$. From \cref{constr:hilo} and \cref{prop:specialMvalid}, the inequality $(0,1,2,\,0,1,2,\,0,1,-1) \boldx \leq 3$ is valid and facet-defining
for an \enquote{all-ones} SPC code of length $3$ over $\mathbb F_3$. Thus, according to \cref{lem:relaxation}, the inequality
\ifonecolumn
\begin{displaymath}
\left(0,2,1,1,0,2,2,1,0,\, 0,2,1,1,0,2,2,1,0,\,
0,-1,1,1,0,-1,-1,1,0\right) \boldx \leq 3
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
&\left(0,2,1,1,0,2,2,1,0,\, 0,2,1,1,0,2,2,1,0,\, \right. \\
&\;\left. 0,-1,1,1,0,-1,-1,1,0\right) \boldx \leq 3
\end{split}
\end{displaymath}
\fi
is valid %
for $\mathcal{P}^{(\{1,2\},(2,1))}$.
\end{example}
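The validity of the lifted inequality can be verified by brute force. The sketch below assumes, as in the reconstruction of \cref{ex:gf9}, that $\tilde f_j = [2 x_{j,1} + x_{j,2}]_3$ for $\mathcal{K}=\{1,2\}$, $\boldsymbol{\gamma}=(2,1)$, and $h_j=1$ (an inferred coordinate form, consistent with the example); it enumerates all vertices of $\mathcal{P}^{(\{1,2\},(2,1))}$ and checks that the left-hand side never exceeds $3$.

```python
# Hedged numerical check of the lifted inequality above. It assumes that
# f~_j = [2*x_{j,1} + x_{j,2}]_3 for K = {1,2}, gamma = (2,1), h_j = 1,
# where x_j = x_{j,1} + x_{j,2}*alpha in F_9.
from itertools import product

F9 = list(product(range(3), repeat=2))  # (x1, x2)
ftilde = lambda x: (2 * x[0] + x[1]) % 3

t0 = [0, 1, 2]          # building block t_0 = (0, 1, 2)
t1 = [0, 1, -1]         # building block t_1 = (0, 1, -1)
blocks = [t0, t0, t1]   # matches the original inequality (0,1,2, 0,1,2, 0,1,-1)x <= 3

vals = []
for x in product(F9, repeat=3):
    if sum(ftilde(xj) for xj in x) % 3 == 0:  # vertex of P^({1,2},(2,1))
        # theta~^T f reduces to sum_j t_{k_j, f~_j} (cf. the proof above)
        vals.append(sum(t[ftilde(xj)] for t, xj in zip(blocks, x)))

assert all(v <= 3 for v in vals)  # the lifted inequality is valid ...
assert max(vals) == 3             # ... and attained with equality
```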
\begin{comment}
Note, however, that this is in general a relaxation since %
\begin{displaymath}
\begin{split}
&\conv(\mathsf{F}(\mathcal{C})) \subseteq \left\{\bm{F}\, |\, f_{j,i} \in [0,1] \text{ and } \sum_{i=1}^{q-1} f_{j,i} \leq 1,\, \forall j \right\} \bigcap \\
&\bigcup_{\mathcal{K} \in 2^{\range{m}} \setminus \emptyset}
\left\{ \bm{F}\, |\, (\mathsf{p}(\tilde{f}_1),\dotsc,\mathsf{p}(\tilde{f}_d)) \in \mathbb{P}_{2d} \right\}
\end{split}
\end{displaymath}
where $\mathcal{C}$ is a length-$d$ nonbinary SPC code over $\mathbb{F}_{3^m}$.
As a special case, consider $m=1$ and $p=3$. Then, $\mathcal{K}=\{1\}$ and
\begin{displaymath}
\begin{split}
&\mathcal{B}^{(1)}(1,1) = \{1\},\; \mathcal{B}^{(2)}(1,1) = \{2\},\\
&\mathcal{B}^{(1)}(1,2) = \{2\},\; \mathcal{B}^{(2)}(1,2) = \{1\}
\end{split}
\end{displaymath}
which means that
\begin{displaymath}
\mathsf{p}(\tilde{f}_j) = \mathsf{p}(\tilde{f}_{1,j}) = (f_{j,i_{1,1}},f_{j,i_{1,2}}) = \begin{cases}
(f_{j,1},f_{j,2}) & \text{if $h_j=1$}, \\
(f_{j,2},f_{j,1}) & \text{if $h_j=2$}
\end{cases}
\end{displaymath}
which again explains Remark~\ref{remark:2s}.
\end{comment}
\begin{example}
For the case $p=2$ and $m=2$, the number of facets (and the corresponding sets of inequalities) can be computed numerically (using, for instance, the software package \emph{Polymake} \cite{polymake}) for $d=3$, $4$, $5$, and $6$. The number of facets is $24$, $40$, $68$, and $120$, respectively, and the sets of inequalities match perfectly with the sets derived using the relaxation method presented above. Note that in \cite[Conj.~62]{liu14}, it was conjectured that the relaxation is indeed tight. Also, in \cite{hon12}, the same observation was made, but no proof was given. If we increase $m$ to $3$, the number of facets is $2740$ and $35928$ for $d=3$ and $4$, respectively, while the number of inequalities using the relaxation method presented above is only $7 \cdot 2^{d-1} + 8 \cdot d$, which is equal to $52$ and $88$ for $d=3$ and $4$, respectively, and the relaxation is not tight.
\end{example}
\begin{example}
For the case $p=3$ and $m=2$, the number of facets (and the corresponding sets of inequalities) can be computed numerically for $d=3$. The number of facets is $73323$, while the number of inequalities using the relaxation method presented above is only $8 \cdot 2 \cdot 3^{3-1} + 9 \cdot 3 = 144 + 27 = 171$, and the relaxation is not tight. Note that using the weaker Proposition~2 from \cite{ros15} (or, equivalently, \cref{prop:pm} constrained with $\boldsymbol{\gamma}=(1,\dotsc,1)$), we only get $3 \cdot 2 \cdot 3^{3-1} + 9 \cdot 3 = 54 + 27 = 81$ inequalities.
\end{example}
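The inequality counts quoted in the two examples above can be reproduced as follows; the per-check counts $2^{d-1}$ (for $p=2$) and $2 \cdot 3^{d-1}$ (for $p=3$) are taken from the text, while the number of $(\mathcal{K},\boldsymbol{\gamma})$ pairs follows from the construction.

```python
# Arithmetic behind the inequality counts quoted in the two examples above.
from math import comb

def n_pairs(p, m, all_gammas=True):
    """Pairs of a nonempty K subset of [m] and gamma in (F_p \\ {0})^|K|."""
    if all_gammas:
        return sum(comb(m, k) * (p - 1)**k for k in range(1, m + 1))  # = p**m - 1
    return 2**m - 1  # gamma restricted to (1, ..., 1)

# p = 2, m = 3: 7 * 2^(d-1) p-ary SPC inequalities plus 8*d further constraints
assert n_pairs(2, 3) == 7 == 2**3 - 1
assert [7 * 2**(d - 1) + 8 * d for d in (3, 4)] == [52, 88]

# p = 3, m = 2, d = 3: 8 * (2 * 3^2) + 9 * 3 = 171 inequalities; restricting
# gamma to (1, ..., 1) (the weaker variant) yields only 3 pairs, hence 81
assert n_pairs(3, 2) == 8 == 3**2 - 1
assert 8 * 2 * 3**2 + 9 * 3 == 171
assert n_pairs(3, 2, all_gammas=False) * 2 * 3**2 + 9 * 3 == 81
```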
\section{Numerical Results} \label{sec:numerical_results}
\pgfplotsset{grid style={gray!40,thin}}
\pgfplotsset{every axis/.append style={font=\scriptsize}}
\tikzset{
curveRPC/.style = {mark=triangle,Yellow!75!Black},
curveALP/.style = {mark=x,Red},
curvePLP/.style = {mark=diamond,Blue},
curveT1/.style = {mark=|,Violet},
curveML/.style = {mark=*,Green,mark options={scale=.6}},
}
\ifonecolumn
\newlength{\myheight}\setlength{\myheight}{.6\columnwidth}
\newlength{\mywidth}\setlength{\mywidth}{.75\columnwidth}
\else
\newlength{\myheight}\setlength{\myheight}{.7\columnwidth}
\newlength{\mywidth}\setlength{\mywidth}{\columnwidth}
\fi
\begin{figure}
\centering
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel=$E_{\mathrm s}/N_0 (\si{\decibel})$,
ylabel=FER,
grid=both,
ymax=1,
xmin=3, xmax=7.5,
height=\myheight,
width=\mywidth,
y tick label style={rotate=90},
legend style={legend pos=south west,font=\tiny}]
%
\addplot[curveALP] plot coordinates {
(3.00, 9.09e-01) %
(3.50, 8.77e-01) %
(4.00, 7.58e-01) %
(4.50, 6.62e-01) %
(5.00, 5.26e-01) %
(5.50, 3.99e-01) %
(6.00, 2.60e-01) %
(6.50, 1.67e-01) %
(7.00, 8.97e-02) %
(7.50, 4.21e-02) %
};
\addlegendentry{PLP / ALP}
\addplot[curveRPC] plot coordinates {
(3.00, 1.78e-01) %
(3.50, 9.90e-02) %
(4.00, 5.05e-02) %
(4.50, 2.52e-02) %
(5.00, 1.01e-02) %
(5.50, 3.38e-03) %
(6.00, 9.86e-04) %
(6.50, 2.00e-04) %
(7.00, 4.14e-05) %
(7.50, 5.60e-06) %
};
\addlegendentry{RPC}
\addplot[curveML] plot coordinates {
(3.00, 8.09e-02) %
(3.50, 4.81e-02) %
(4.00, 2.55e-02) %
(4.50, 1.13e-02) %
(5.00, 4.13e-03) %
(5.50, 1.46e-03) %
(6.00, 4.99e-04) %
(6.50, 1.21e-04) %
(7.00, 2.70e-05) %
(7.50, 5.09e-06) %
};
\addlegendentry{ML}
%
\addplot[curveALP,dashed] plot coordinates {
(4.00, 1.00e+00) %
(4.50, 1.00e+00) %
(5.00, 9.80e-01) %
(5.50, 9.62e-01) %
(6.00, 8.62e-01) %
(6.50, 7.46e-01) %
(7.00, 5.00e-01) %
(8.00, 1.57e-01) %
(10.50, 1.22e-04) %
};
\addplot[curveRPC,dashed] plot coordinates {
(4.00, 3.25e-01) %
(4.50, 1.56e-01) %
(5.00, 4.74e-02) %
(5.50, 8.78e-03) %
(6.00, 1.23e-03) %
(6.50, 7.25e-05) %
};
\end{semilogyaxis}
\end{tikzpicture}
\vspace{-2ex}
\caption{FER performance of the ternary RM codes $\mathcal C_{\mathrm{RM}(1)}^{(3)}$ (solid lines) and $\mathcal C_{\mathrm{RM}(2)}^{(3)}$ (dashed lines) as a function of $E_{\rm s}/N_0$.}
\label{fig:ternaryRM}
\vskip -3ex
\end{figure}
\ifonecolumn
\newlength{\myheightt}\setlength{\myheightt}{.6\textwidth}
\else
\newlength{\myheightt}\setlength{\myheightt}{.4\textwidth}
\fi
\begin{figure*}[t]
\centering
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel=$E_{\mathrm s}/N_0 (\si{\decibel})$,
ylabel=FER,
grid=both,
ymax=1,
ymin=1e-6,
xmin=4, xmax=12,
height=\myheightt,
width=\textwidth,
y tick label style={rotate=90},
legend columns=1,
legend style={legend pos=south east,font=\tiny}]
%
\addplot[curvePLP] plot coordinates {
(5.00, 9.26e-01) %
(5.50, 6.92e-01) %
(6.00, 3.07e-01) %
(6.50, 9.02e-02) %
(7.00, 1.22e-02) %
(7.50, 6.78e-04) %
(8.00, 1.31e-05) %
};
\addlegendentry{PLP / CLP}
\addplot[curveALP] plot coordinates {
(5.00, 9.26e-01) %
(5.50, 6.92e-01) %
(6.00, 3.07e-01) %
(6.50, 9.02e-02) %
(7.00, 1.22e-02) %
(7.50, 6.78e-04) %
(8.00, 1.31e-05) %
};
\addlegendentry{ALP}
\addplot[curveT1] plot coordinates {
(5.00, 9.26e-01) %
(5.50, 6.99e-01) %
(6.00, 3.11e-01) %
(6.50, 9.24e-02) %
(7.00, 1.26e-02) %
(7.50, 7.38e-04) %
(8.00, 1.46e-05) %
};
\addlegendentry{ALP$|\Phi(\Theta^{\bm 0})$}
\addplot[curveRPC] plot coordinates {
(5.00, 9.26e-01) %
(5.50, 6.92e-01) %
(6.00, 3.07e-01) %
(6.50, 9.02e-02) %
(7.00, 1.21e-02) %
(7.50, 6.76e-04) %
(8.00, 1.29e-05) %
};
\addlegendentry{RPC}
%
%
\addplot[curvePLP] plot coordinates {
(4.00, 1.04e-01) %
(4.50, 2.21e-02) %
(5.00, 2.61e-03) %
(5.50, 1.81e-04) %
(6.00, 5.10e-06) %
};
%
\addplot[curveALP] plot coordinates {
(4.00, 1.04e-01) %
(4.50, 2.21e-02) %
(5.00, 2.61e-03) %
(5.50, 1.81e-04) %
(6.00, 5.10e-06) %
};
%
\addplot[curveRPC] plot coordinates {
(4.00, 6.99e-02) %
(4.50, 1.09e-02) %
(5.00, 9.80e-04) %
(5.50, 3.82e-05) %
(6.00, 4.8e-07) %
};
%
%
%
%
\addplot[curvePLP] plot coordinates {
(8.00, 1.97e-01) %
(8.50, 4.13e-02) %
(9.00, 2.73e-03) %
(9.50, 7.64e-05) %
};
%
\addplot[curveALP] plot coordinates {
(8.00, 1.97e-01) %
(8.50, 4.13e-02) %
(9.00, 2.73e-03) %
(9.50, 7.64e-05) %
};
%
\addplot[curveT1] plot coordinates {
(8.00, 2.32e-01) %
(8.50, 5.22e-02) %
(9.00, 3.91e-03) %
(9.50, 1.37e-04) %
};
%
\addplot[curveRPC] plot coordinates {
(8.00, 1.97e-01) %
(8.50, 4.31e-02) %
(9.00, 2.73e-03) %
(9.50, 7.64e-05) %
};
%
%
%
%
%
\addplot[curvePLP] plot coordinates {
(10.00, 3.58e-01) %
(10.50, 1.04e-01) %
(11.00, 1.81e-02) %
(11.50, 1.81e-03) %
%
};
%
\addplot[curveALP] plot coordinates {
(8.00, 1.00e+00) %
(8.50, 9.95e-01) %
(9.00, 9.48e-01) %
(9.50, 7.60e-01) %
(10.00, 3.74e-01) %
(10.50, 1.20e-01) %
(11.00, 2.00e-02) %
(11.50, 1.95e-03) %
(12.00, 9.86e-05) %
};
%
\addplot[curveT1] plot coordinates {
(8.00, 1.00e+00) %
(8.50, 1.00e+00) %
(9.00, 9.76e-01) %
(9.50, 8.47e-01) %
(10.00, 5.54e-01) %
(10.50, 2.19e-01) %
(11.00, 4.47e-02) %
(11.50, 5.45e-03) %
(12.00, 3.05e-04) %
};
%
\addplot[curveRPC] plot coordinates {
(8.00, 1.00e+00) %
(8.50, 9.95e-01) %
(9.00, 9.48e-01) %
(9.50, 7.60e-01) %
(10.00, 3.74e-01) %
(10.50, 1.20e-01) %
(11.00, 2.00e-02) %
(11.50, 1.93e-03) %
(12.00, 1.05e-04) %
};
%
%
\end{semilogyaxis}
\end{tikzpicture}
\vspace{-2ex}
\caption{FER performance of $\mathcal{C}_{\rm Tan}^{(3)}$, $\mathcal{C}_{\rm Tan}^{(5)}$, $\mathcal{C}_{\rm Tan}^{(7)}$, and $\mathcal{C}_{\rm Tan}^{(11)}$ (left to right) as a function of $E_{\rm s}/N_0$. %
For $p=3$, $\Delta^d_p \cup \Phi(\Theta^{\bm 0})$ gives a complete and irredundant description of $\mathcal{P}$ (see \cref{thm:facetComplete}); thus only three curves are displayed.}
\label{fig:Tannercode}
\vskip -3ex
\end{figure*}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel=$E_{\mathrm s}/N_0 (\si{\decibel})$,
ylabel=FER,
grid=both,
ymax=1,
ymin=1e-5,
xmin=4.0, xmax=9.0,
height=\myheight,
width=\mywidth,
y tick label style={rotate=90},
legend columns=2,
transpose legend,
legend style={legend pos=south west,font=\tiny}]
%
\addplot[curveALP] plot coordinates {
(5.00, 7.84e-01) %
(5.25, 3.97e-01) %
(5.50, 1.70e-01) %
(5.75, 3.70e-02) %
(6.00, 5.50e-03) %
(6.25, 4.42e-04) %
};
\addlegendentry{ALP}
\addplot[curveRPC] plot coordinates {
(5.00, 7.46e-01) %
(5.25, 3.62e-01) %
(5.50, 1.28e-01) %
(5.75, 1.92e-02) %
(6.00, 2.59e-03) %
(6.25, 1.54e-04) %
};
\addlegendentry{RPC}
%
\addplot[curveT1] plot coordinates {
(7.50, 1.00e+00) %
(7.75, 9.26e-01) %
(8.00, 5.75e-01) %
(8.25, 2.26e-01) %
(8.50, 4.28e-02) %
(8.75, 3.31e-03) %
(9.00, 1.76e-04) %
};
\addlegendentry{ALP$|\Phi(\Theta^{\bm 0})$}
\addplot[curvePLP] plot coordinates {
(8.00, 5.75e-01) %
(8.25, 2.26e-01) %
(8.50, 4.28e-02) %
};
\addlegendentry{PLP}
\addplot[curveALP] plot coordinates {
%
(7.75, 9.26e-01) %
(8.00, 5.75e-01) %
(8.25, 2.26e-01) %
(8.50, 4.28e-02) %
(8.75, 3.13e-03) %
(9.0, 1.42e-4) %
};
%
\addplot[curveRPC] plot coordinates {
%
(7.75, 9.26e-01) %
(8.00, 5.75e-01) %
(8.25, 2.26e-01) %
(8.50, 4.28e-02) %
(8.75, 3.13e-03) %
(9.0, 1.30e-4) %
};
%
\end{semilogyaxis}
\end{tikzpicture}
\vspace{-2ex}
\caption{FER performance of the length-$999$ MacKay 999.111.3.5543 code $\mathcal{C}_{\rm MacKay}^{(3)}$ (left) and $\mathcal{C}_{\rm MacKay}^{(5)}$ (right).}
\label{fig:MacKay}
\vskip -3ex
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{semilogyaxis}[
xlabel=$E_{\mathrm s}/N_0 (\si{\decibel})$,
ylabel=FER,
grid=both,
ymax=1,
ymin=1e-4,
xmin=3.5, xmax=8.5,
height=\myheight,
width=\mywidth,
y tick label style={rotate=90},
legend columns=1,
legend style={legend pos=south east,font=\tiny}]
%
\addplot[curveALP] plot coordinates {
(6.00, 5.80e-01) %
(6.25, 1.89e-01) %
(6.50, 2.40e-02) %
(6.75, 7.47e-04) %
};
\addlegendentry{ALP}
\addplot[curveRPC] plot coordinates {
(6.00, 5.76e-01) %
(6.25, 1.86e-01) %
(6.50, 2.40e-02) %
(6.75, 7.47e-04) %
};
\addlegendentry{RPC}
\addplot[curveT1] plot coordinates {
(6.00, 5.83e-01) %
(6.25, 1.97e-01) %
(6.50, 2.44e-02) %
(6.75, 9.08e-04) %
};
\addlegendentry{ALP$|\Phi(\Theta^{\bm 0})$}
%
\addplot[curveALP] plot coordinates {
(3.50, 5.29e-01) %
(3.75, 1.74e-01) %
(4.00, 3.92e-02) %
(4.25, 3.75e-03) %
(4.50, 1.40e-04) %
};
%
\addplot[curveRPC] plot coordinates {
(3.50, 5.13e-01) %
(3.75, 1.74e-01) %
(4.00, 3.43e-02) %
(4.25, 2.87e-03) %
(4.50, 1.00e-04) %
};
%
%
\addplot[curveALP] plot coordinates {
(7.75, 8.77e-01) %
(8.00, 4.08e-01) %
(8.25, 7.99e-02) %
(8.50, 5.84e-03) %
};
%
\addplot[curveT1] plot coordinates {
(7.75, 8.77e-01) %
(8.00, 4.31e-01) %
(8.25, 7.99e-02) %
(8.50, 7.1e-03) %
};
%
\end{semilogyaxis}
\end{tikzpicture}
\vspace{-2ex}
\caption{FER performance of $\mathcal C_{(3,6)}^{(3)}$ (left), $\mathcal C_{(3,6)}^{(5)}$ (center), and $\mathcal C_{(3,6)}^{(7)}$ (right).}
\label{fig:RandomLDPC}
\vskip -3ex
\end{figure}
\begin{notms}
In this section, we compare the proposed ALP decoding algorithm from \cref{sec:ALP_q}, and its version augmented by RPC cuts (named RPC in the following) from \cref{sec:ACG}, with both the plain and cascaded \enquote{static} (nonadaptive) approaches from \cite{fla09} (named PLP and CLP, respectively). We present frame-error-rate (FER) performance results for various codes over the additive white Gaussian noise channel at different signal-to-noise ratios (SNRs), using $p$-ary phase-shift keying modulation (for codes over $\mathbb F_p$). The code symbols $\zeta \in \mathbb F_p$ are mapped to constellation points according to $\zeta \mapsto \exp(\sqrt{-1}(2[\zeta]_\mathbb Z+1)\pi / p)$. In addition to the decoding performance, we report several complexity figures (for selected combinations of code and SNR value) in \cref{tab:resultsNumbers}: the average CPU time, the average number of iterations of the simplex algorithm used to solve the LPs, and, for ALP and RPC, the average number of cuts added (i.e., the number of constraints in the final linear program) and the average number of LPs solved (i.e., the number of iterations of the main loop of \cref{alg:ALP}).
For $p=3$, simulations were performed using
a $(27, 10, 9)$ and an $(81, 31, 18)$ ternary RM code ($\mathcal C_{\mathrm{RM}(1)}^{(3)}$ and $\mathcal C_{\mathrm{RM}(2)}^{(3)}$, respectively).
Note that the respective parity-check matrices are rather dense, with fractions of nonzero entries of $0.23$ and $0.14$. %
In view of \cref{thm:facetComplete}, it is clear that ALP, CLP, and PLP show the same error-correction performance for $p=3$. As shown in \cref{fig:ternaryRM}, the RPC cut-search algorithm drastically improves decoding performance for these dense codes and, for $\mathcal C_{\mathrm{RM}(1)}^{(3)}$, nearly achieves ML performance. Note that this is in line with the well-known observation that, in the binary case, LP decoding without RPC search performs poorly for dense codes (see, e.g., \cite{tan10}). The ML curve in \cref{fig:ternaryRM} was computed using an integer programming formulation of the nonbinary ML decoding problem (solved with the commercial Gurobi solver \cite{Gurobi600}) that is based on the compact binary IPD formulation first presented in \cite{tan10}. Except for the small RM code, however, this approach is computationally intractable for all codes considered in this section.
\end{notms}
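The symbol-to-constellation mapping described above can be sketched as follows (for illustration only); the checks confirm that the $p$ points lie on the unit circle with uniform angular spacing of $2\pi/p$.

```python
# Sketch of the symbol-to-constellation mapping used in the simulations:
# zeta -> exp(sqrt(-1) * (2*zeta + 1) * pi / p).
import cmath
import math

def psk_point(zeta, p):
    """Map the code symbol zeta in {0, ..., p-1} to a p-PSK constellation point."""
    return cmath.exp(1j * (2 * zeta + 1) * math.pi / p)

p = 3
pts = [psk_point(z, p) for z in range(p)]

# All points have unit energy ...
assert all(abs(abs(pt) - 1.0) < 1e-12 for pt in pts)
# ... and are uniformly spaced by an angle of 2*pi/p on the unit circle.
angles = sorted(cmath.phase(pt) % (2 * math.pi) for pt in pts)
gaps = [(angles[(i + 1) % p] - angles[i]) % (2 * math.pi) for i in range(p)]
assert all(abs(g - 2 * math.pi / p) < 1e-9 for g in gaps)
```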
\begin{notms}
To study the effect of increasing $p$ on the ALP algorithm, we employ the $(3,5)$-regular $(155,64)$ Tanner code \cite{tan01} over the fields $\mathbb F_p$, $p \in \{3,5,7,11\}$ (denoted by $\mathcal C_{\mathrm{Tan}}^{(p)}$, respectively). To construct the codes, we have replaced the $5$ ones in each row of the binary $\bm H$ by the patterns $(1,2,2,1,1)$, $(1,2,4,3,1)$, $(1,2,4,6,1)$, and $(1,2,6,10,1)$, for $\mathcal C_{\mathrm{Tan}}^{(3)}$ to $\mathcal C_{\mathrm{Tan}}^{(11)}$, respectively. The results are shown in \cref{fig:Tannercode}. Because of \cref{thm:facetComplete} and the numerical verification of \cref{thm:facetComplete_q5} for $d \leq 5$, the ALP and PLP/CLP curves are identical for $p\in \{3,5\}$, and the fact that they also coincide for $p=7$ supports \cref{conj:facetComplete_q7}. Interestingly, the class $\Phi(\Theta^{\bm 0})$ (the curve labeled ALP$|\Phi(\Theta^{\bm 0})$) is sufficient for achieving close-to-exact LP decoding performance (especially for $p=5$; for $p=3$, this is the only class, as detailed in \cref{sec:q3}). This indicates that the facets induced by the other classes cut off only \enquote{smaller} parts of the polytope, which can be explained by the counting formulas in \cref{lem:counting_formulas}. For instance, for $p=7$, \cref{prop:validIneq_q7_1to4} shows that the number of codewords for which each inequality is tight decreases strictly from $\Theta_1$ to $\Theta_4$, proving that the $\Theta_1$-facets are \enquote{larger} than the others. For $p=11$, we observe an increasing gap between ALP and PLP/CLP, which shows that the inequalities proposed in this paper specify only a strict relaxation of the LP decoder in \cref{eq:LPformulation} for $p\geq 11$.
In contrast to the binary case (see \cite[Fig.~3]{zha12}) and the dense codes above, RPCs lead to a noteworthy improvement only for $p=3$ for the Tanner codes. In \cref{tab:resultsNumbers}, one can observe that again ALP decoding is much more efficient than both PLP and CLP.
As a remark, not a single cut from the special class $\Phi(\Theta_6)$ for $p=7$ (see \cref{sec:q7}) was found in any of our simulations; hence, these cuts appear not to influence decoding performance in practice. On the other hand, the complexity of the cut-search algorithm is $d^2$ times higher for $\Theta_6$ than for the basic classes, due to the additional loop that sets $i^{\mathrm{lo}}$ and $i^{\mathrm{hi}}$. Especially in conjunction with RPC search (where RPCs are generally dense even for LDPC codes), it may not be worthwhile to search for $\Theta_6$ inequalities at all.
In order to examine the scalability of the proposed algorithm, we present numerical results for two sets of larger LDPC codes. The first one is based on MacKay's random $(3,27)$-regular $(999, 888)$ code 999.111.3.5543 from \cite{mac-web}, from which we derive a ternary code $\mathcal C^{(3)}_{\text{MacKay}}$ and a quinary code $\mathcal C^{(5)}_{\text{MacKay}}$ %
by replacing the nonzeros in each row of the parity-check matrix, in order, by the repeating pattern $1,2,\dotsc, p-1,1,2,\dotsc$. The error-rate results are shown in \cref{fig:MacKay}. Because of the large row weight of this code, both PLP and CLP are intractable to run. As before, it can be observed that the RPC approach as stated in \cref{sec:ACG} is helpful only for $p=3$. For $p=5$, the decoding results for ALP and PLP coincide exactly, providing strong evidence that \cref{thm:facetComplete_q5} also holds for larger $d$, since here each row code has length $d=27$.
The second class is constructed with the same replacement pattern from a random $(3,6)$-regular $(1000,500)$ LDPC code; the resulting nonbinary codes are denoted $\mathcal C^{(3)}_{(3,6)}$ (for $p=3$), $\mathcal C^{(5)}_{(3,6)}$ (for $p=5$), and $\mathcal C^{(7)}_{(3,6)}$ (for $p=7$), with results shown in \cref{fig:RandomLDPC}. This code has a much smaller check-node degree, so PLP/CLP decoding is possible but extremely slow (see \cref{tab:resultsNumbers}). Interestingly, for this code the RPC algorithm does not significantly improve decoding performance even for $p=3$.
\Cref{tab:resultsNumbers} shows that the proposed algorithms are clearly favorable in terms of decoding complexity compared with both CLP and PLP; in particular, they scale well with both block length and check-node degree. The complexity of the static LP decoders, in contrast, explodes with increasing check-node degree and quickly becomes intractable.
Note that, for ALP and fixed $p$, the number of inequalities depends only on the check-node degrees, so it scales especially well for sparse codes.
Examining the different LP formulations and the results from \cref{tab:resultsNumbers}, several reasons for this performance gain can be identified. First, the number of variables in ALP, $np$ with the constant-weight embedding, is much smaller, even compared with PLP.
Second, the number of constraints in the final linear program is virtually negligible compared with both static LP formulations. These two observations imply that the cost of a single simplex iteration is much lower in ALP than in PLP or CLP; this saving is amplified by the much smaller number of simplex iterations observed for ALP. Finally, these advantages entirely outweigh the overhead introduced by solving several LP problems instead of one, since this number stays very small even for large codes.
\end{notms}
\section{Conclusion and Future Work} \label{sec:conclu}
In this work, we presented an explicit construction of valid inequalities (using no auxiliary variables) for the codeword polytope (i.e., the convex hull) of the so-called constant-weight embedding of an SPC code over any prime field. The inequalities are assembled from classes of building blocks and can be proved to be facet-defining under some conditions. We observed numerically, for all primes $p \leq 19$, that a valid class gives facet-defining inequalities if and only if it is symmetric, and we conjectured this to be true in general.
For ternary codes we proved that the inequalities from the construction together with the simplex constraints give a complete and irredundant description of the embedded codeword polytope. For quinary codes, based on extensive numerical evidence, we conjectured this to be the case as well. For $p>5$, there exist other types of facet-defining inequalities besides the ones that can be constructed from basic classes. A complete characterization of such inequalities is left as future work; the similarities of \cref{lem:conditions_theta5,lem:conditions_theta6} with \cref{lem:conditions} suggest that it might be possible to subsume all three types by a more general unifying form. Our initial numerical results however show that these are not required for achieving close-to-exact LP decoding performance, at least for small $p$.
Building on the explicit form of the inequalities, we presented an efficient (relaxed) ALP decoder for general linear codes over any prime field, in which efficient separation of the underlying inequalities describing the decoding polytope was done through DP. An explicit efficient implementation was also provided for ternary linear codes.
Next, an ACG-ALP decoder was presented, generalizing the corresponding decoding algorithm for binary codes, and we briefly showed how the results can be generalized to fields of size ${p^m}$ where $m > 1$ by introducing a relaxation. Numerical results for both LDPC and HDPC codes showed that our proposed ALP decoder outperforms (in terms of decoding complexity) a static decoder
(using both the plain and the cascaded LP formulation).
We believe that many of the results in this paper generalize to nonprime fields. In particular, the concept of building blocks and building block classes appears to be universal. For example, facet-defining inequalities for the codeword polytope of an embedded SPC code over $\mathbb F_8$ can be constructed using this principle as shown in \cite{ros16_ita}. In fact, the same \emph{structure} (or, equivalently, the same \emph{types} of building block classes) as for codes over $\mathbb F_7$ can be observed. %
An important open problem would be to completely understand how to construct such building blocks and classes in the general case, with the ultimate goal of constructing an efficient algorithm to perform Euclidean projections onto the codeword polytope of an embedded SPC code in order to construct an efficient decoder using the ADMM framework. We believe that the characterization of the facet-defining inequalities (using no auxiliary variables) of the codeword polytope of an embedded nonbinary SPC code is a first step towards this goal.
\begin{table*}
\caption{Numerical comparison of the proposed algorithms ALP and RPC (ALP with RPC search) with static LP decoding.
CPU times are specified in units of $10^{-2}$ seconds.
Missing entries have been skipped because of intractable complexity.
}
\vskip -2.0ex
\label{tab:resultsNumbers}\centering
$\begin{array}{c|cc|c|*4c|*4c|*2c|*2c}
\toprule
p & \text{code} & \text{dimensions}& \scriptsize \text{SNR} & \multicolumn{4}{c|}{\text{CPU time} (s\times 10^{-2})} & \multicolumn{4}{c|}{\text{simplex iterations}} & \multicolumn{2}{c|}{\text{\# cuts}} & \multicolumn{2}{c}{\text{\# LPs}} \\
&&\text{(N,K)}& \text{(dB)} & \text{ALP} & \text{RPC}& \text{CLP} & \text{PLP} & \text{ALP}&\text{RPC} & \text{CLP} & \text{PLP} & \text{ALP}&\text{RPC} & \text{ALP}&\text{RPC} \\ \midrule
3&\mathcal C_{\mathrm{RM}(1)}^{(3)} &(27,10) &3
&.19&1.3&2.3 &7.9 %
&34.7&142&447 &1043 %
&64&165 %
&5.3&12.3 %
\\
& &&7
&.11& .13&1.2 & 5.6 %
&8.6 &9.9&236 &694 %
&25&27.3 %
&2.9&2.98 %
\\
\midrule
3&\mathcal C_{\mathrm{RM}(2)}^{(3)} &(81,31) &8
&.37&.43& 39 &- %
&27.8&29.8&2009 &- %
&99.4&103 %
&3.8&3.8 %
\\
&&&10.5
&.1&.14& 11 &- %
&14.1&14.1& 1171 &- %
&12.9&12.9 %
&1.66&1.65 %
\\
\midrule
3&\mathcal C_{\mathrm{Tan}}^{(3)} &(155,64) &5
&.5&.8&16&14%
&66&70&1035&1962 %
&138&140 %
&3.5&3.6 %
\\
\midrule
5&\mathcal C_{\mathrm{Tan}}^{(5)} &(155,64) &7.5
&2.4&3.5&114&144%
&155&155&4178&7105%
&792&792 %
&3.3&3.3 %
\\
\midrule
7&\mathcal C_{\mathrm{Tan}}^{(7)} &(155,64) &9
&20&25&561&1726%
&377&377&11399&16037%
&3188&3188 %
&3.3&3.3 %
\\
\midrule
11&\mathcal C_{\mathrm{Tan}}^{(11)} &(155,64) &11.5
&115&169&3508&-%
&864&864&35299&-%
&17358&17363 %
&3.6&3.6 %
\\
\midrule
3&\mathcal C_{\mathrm{MacKay}}^{(3)} &(999,888) &6
&.14&.42&341&-%
&73&83&1207&-%
&133&135 %
&3.5&3.7 %
\\
\midrule
5&\mathcal C_{\mathrm{MacKay}}^{(5)} &(999,888) &8.5
&19&31&12509&-%
&391&392&53457&-%
&1226&1226 %
&4.3&4.4 %
\\
\midrule
3&\mathcal C_{(3,6)}^{(3)} &(1000,500) &4
&39&122& 2685 &- %
&1439&1581&14206 &- %
&1337&1372 %
&5.7&7.7 %
\\
\midrule
5&\mathcal C_{(3,6)}^{(5)} &(1000,500) &6.5
&655&744& 39155 &- %
&5279&5284&88432 &- %
&7668&7669 %
&4.7&4.7 %
\\
\midrule
7&\mathcal C_{(3,6)}^{(7)} &(1000,500) &8.5
&2382&-& - &- %
&8334&-&- &- %
&24077&- %
&4.1&- %
\\
\bottomrule
\end{array}$
\end{table*}
\appendices
\section{How to Switch Between Embeddings $\mathsf F_{\mathrm v}$ and $\mathsf F_{\mathrm v}'$}\label{app:embeddings}
Let $\mathcal C$ be a nonbinary code of length $d$ over the field $\mathbb F_q$, for some prime power $q$, and define $\P=\conv(\mathsf F_{\mathrm v}(\mathcal C))$. Assume
\begin{subequations}
\label{eq:pdesc}
\begin{align}
&\bm a^{\nu T}\bm x \leq \alpha^\nu&&\text{for }\nu \in \range N,
\label{eq:pdesc-ineq}\\
&\bm b^{\mu T}\bm x = \beta^\mu&&\text{for }\mu \in \range M, \label{eq:pdesc-eq}\\
&\sum_{j \in \mathbb F_q} x_{i,j} = 1&&\text{for }i\in \range d, \label{eq:simplex-eq}\\
&x_{i,j} \geq 0&&\text{for }i\in \range d, j \in \mathbb F_q \label{eq:simplex-ineq}
\end{align}
\end{subequations}
is a description of $\P$ by means of $N+ dq$ linear inequalities and $M+d$ linear equations (for some natural numbers $N$ and $M$). Note that the existence of \cref{eq:simplex-eq,eq:simplex-ineq} in this description can be assumed without loss of generality, because that part exactly specifies $S_{q-1}^d$, and $\P \subseteq S_{q-1}^d$ by definition of $\mathsf F_{\mathrm v}$.
By adding, for $i \in \range d$, $-a^\nu_{i,0}$ times \cref{eq:simplex-eq} to each inequality $\bm a^{\nu T} \boldx \leq \alpha^\nu$ of \cref{eq:pdesc-ineq}, we may further assume that $a^\nu_{i,0} = 0$ for $i \in \range d$, and likewise that $b^\mu_{i,0}=0$ for all equations of \cref{eq:pdesc-eq}. For $j=0$, the corresponding step turns \cref{eq:simplex-ineq} into
\[ -\sum_{j \neq 0} x_{i,j} \geq -1 \text{ or equivalently } \sum_{j \neq 0} x_{i,j} \leq 1.\]
Now, for each $\bm a^\nu$, $\nu \in \range N$ (and $\bm b^\mu$ analogously), we define $\bm a'^{\nu} \in \mathbb R^{d(q-1)}$ by removing all entries $a^\nu_{i,0}$, $i \in \range d$ (if we extend the definition of $\mathsf P_{\mathrm v}$ to points outside of $S_{q-1}^d$, this can be written as $\bm a'^\nu = \mathsf P_{\mathrm v}(\bm a^\nu)$), and define the polytope $\tilde \P \subseteq \mathbb R^{d(q-1)}$ by
\begin{subequations}
\label{eq:ppdesc}
\begin{align}
&\bm a'^{\nu T} \boldx' \leq \alpha^\nu &&\text{for }\nu \in \range N,\label{eq:ppdesc-ineq} \\
&\bm b'^{\mu T} \boldx' = \beta^\mu &&\text{for }\mu \in \range M,\label{eq:ppdesc-eq}\\
&\sum_{j \neq 0} x_{i,j} \leq 1 &&\text{for } i \in \range d, \label{eq:hsimplex-1}\\
&x_{i,j} \geq 0 &&\text{for }i \in \range d, j \in \mathbb F_q\setminus \{0\} \label{eq:hsimplex-0}
\end{align}
\end{subequations}
which is obtained from \cref{eq:pdesc} by removing the equations \cref{eq:simplex-eq} and all coefficients belonging to the field element $0$, after the latter have been set to $0$ as described above.
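The two operations can be illustrated numerically. The following is a minimal sketch (the function names \texttt{project} and \texttt{lift} are ours, not from any referenced implementation): \texttt{project} plays the role of $\mathsf P_{\mathrm v}$ by dropping the coordinates belonging to the field element $0$, and \texttt{lift} plays the role of $\mathsf L_{\mathrm v}$ by restoring them via the simplex equations.

```python
# Illustrative sketch (names ours): switching between the constant-weight
# embedding (points in R^{d x q} with rows on the simplex) and Flanagan's
# embedding, which drops the coordinates for the field element 0.
import numpy as np

def project(x):
    """P_v: drop the column belonging to the field element 0."""
    return x[:, 1:].copy()

def lift(xp):
    """L_v: restore x[i, 0] = 1 - sum_{j != 0} x[i, j]."""
    first = 1.0 - xp.sum(axis=1, keepdims=True)
    return np.hstack([first, xp])

d, q = 4, 3
rng = np.random.default_rng(0)
x = rng.dirichlet(np.ones(q), size=d)   # a random point of S_{q-1}^d
xp = project(x)
assert np.allclose(lift(xp), x)         # L_v inverts P_v on S_{q-1}^d
assert np.allclose(lift(xp).sum(axis=1), 1.0)
```

On the product of simplices the two maps are mutually inverse, which is exactly the content of \cref{lem:embeddings} used in the proof below.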
\begin{proposition}
$\tilde \P = \P'$, where $\P'= \conv(\mathsf F_{\mathrm v}'(\mathcal C))$.
\end{proposition}
\begin{IEEEproof}
Let $\boldx \in \P$. By construction of $\bolda'^\nu$ and $\boldb'^\mu$, $\bolda'^{\nu T} \mathsf P_{\mathrm v}(\boldx) = \bolda^{\nu T} \boldx \leq \alpha^\nu$ and $\boldb'^{\mu T} \mathsf P_{\mathrm v}(\boldx) = \boldb^{\mu T} \boldx = \beta^\mu$, thus $\mathsf P_{\mathrm v}(\boldx)$ fulfills \cref{eq:ppdesc-ineq,eq:ppdesc-eq}. Furthermore, \cref{eq:hsimplex-0,eq:hsimplex-1} are obviously satisfied by $\mathsf P_{\mathrm v}(\boldx)$, which shows that $\mathsf P_{\mathrm v}(\boldx) \in \tilde \P$. Since $\mathsf P_{\mathrm v}(\P) = \P'$ by \cref{lem:embeddings}, we have established $\P' \subseteq \tilde \P$.
Conversely, let $\boldx' \in \tilde\P \subseteq \hat S_{q-1}^d$. Again by construction of $\bolda'^{\nu}$ and $\boldb'^{\mu}$, $\bolda^{\nu T}\mathsf L_{\mathrm v}(\boldx') \leq \alpha^\nu$ and $\boldb^{\mu T}\mathsf L_{\mathrm v}(\boldx') = \beta^\mu$, and because further $\mathsf L_{\mathrm v}(\boldx') \in S_{q-1}^d$, we see that $\mathsf L_{\mathrm v}(\boldx')$ satisfies all constraints of \cref{eq:pdesc}, hence $\mathsf L_{\mathrm v}(\boldx') \in \P$. Using again \cref{lem:embeddings}, this implies that $\mathsf P_{\mathrm v}(\mathsf L_{\mathrm v}(\boldx')) = \boldx' \in \P'$, i.e., $\tilde\P \subseteq \P'$.
\end{IEEEproof}
The following explicit version of \cref{cor:facetEquiv} is a by-product of the above proof.
\begin{corollary}
If $\bolda^{\nu T} \boldx \leq \alpha^\nu$ induces the face $F$ of $\P$ with $\dim(F) = \delta$, then $\bolda'^{\nu T} \boldx' \leq \alpha^\nu$ induces $\mathsf P_{\mathrm v}(F)$ (which by \cref{cor:facetEquiv} also has dimension $\delta$). This holds in particular if $F$ is a facet.
\end{corollary}
\begin{remark}
\begin{enumerate}
\item If $\mathcal C$ is an SPC code with $d\geq 3$, \cref{eq:simplex-eq} already specifies the affine hull of $\P$ by \cref{prop:Pjdim}. Hence, no further equations are necessary, i.e., $M=0$.
\item By Property~\ref{lem:bb-property2} of \cref{lem:bb-properties}, $a^{\nu}_{i,0} = 0$ already holds for all inequalities constructed in \cref{sec:bb}. There is no need to add multiples of \cref{eq:simplex-eq} to those inequalities to establish this assumption.
\item While the above results show that $\P$ and $\P'$ are practically equivalent with respect to most \emph{polyhedral} properties, they are \emph{geometrically} different because the underlying embeddings $\mathsf f$ and $\mathsf f'$ are. For example, when $p=3$, $\lVert \mathsf f(1) - \mathsf f(0) \rVert_2 = \lVert \mathsf f(2) - \mathsf f(1) \rVert_2$, where $\Vert \cdot \Vert_2$ denotes the Euclidean norm of its argument, while the two distances are different when replacing $\mathsf f$ by $\mathsf f'$. This has consequences when using nonlinear solvers, such as the penalized ADMM decoder (cf.\ \cite[Sec.~VI]{liu14_1}, \cite{liu14}).
\item At first glance, using $\P'$ instead of $\P$ appears to be computationally preferable because \cref{eq:ppdesc} has fewer variables than \cref{eq:pdesc}. However, when solved with the simplex method, the inequalities \cref{eq:hsimplex-1} will be internally expanded to the form \cref{eq:simplex-eq} by introducing slack variables; hence, internally, the algorithm will perform \emph{exactly the same steps} no matter which embedding is used.
\end{enumerate}
\end{remark}
\section{Proof of \cref{prop:Pjdim}}\label{proofPjdim}
Assume $p \geq 3$ and let $\mathcal A = \{\boldx \in \mathbb R^{dp}\colon \cref{eq:spx-eq} \text{ holds for }i \in \range d\}$ be the affine hull of $S_{p-1}^d$. Because $\P \subseteq S_{p-1}^d \subseteq \mathcal A$ by definition and further $\dim(\mathcal A)=d(p-1)$ (as follows immediately from the structure of \cref{eq:spx-eq}), we obtain
\begin{gather}
\dim (\P) \leq \dim (S_{p-1}^d) = \dim(\mathcal A)= d(p-1) \label{eq:dimUb}\\
\text{and}\quad\aff(\P) \subseteq \mathcal A.\label{eq:dimAff}
\end{gather}
\emph{Part~1):}
By \cref{eq:dimUb} it suffices to show that $\dim(\P) \geq d(p-1)$, i.e., find $d(p-1) + 1$ affinely independent elements of $\P$.
Define the set $S = \{\boldc^0,\dotsc,\boldc^{d(p-1)}\} \subseteq \mathcal C$ of $d(p-1)+1$ codewords of $\mathcal C$ as follows. First, $\boldc^0 = (0,\dotsc,0)$. Then, for $ 0 \leq i < d-1$ and $1 \leq l < p$, $\boldc^{i(p-1) + l}$ is a codeword with two nonzeros only, defined by
\[c_j^{i(p-1)+l} = \begin{cases}
[l]_p&\text{if }j = i+1,\\
-[l]_p&\text{if }j = d,\\
0&\text{otherwise.} \end{cases}\]
Finally, there are $p-1$ codewords with three nonzeros, namely $\boldc^{(d-1)(p-1)+l} = (0,\dotsc,0,[l]_p,[l]_p,[-2l]_p)$, $1 \leq l < p$ (note that here we need $p \neq 2$, because $[-2]_2 =[0]_2$).
We now show that the $d(p-1)$ vectors
\[\mathsf F_{\mathrm v}'(\boldc^1),\dotsc, \mathsf F_{\mathrm v}'(\boldc^{d(p-1)})\]
are linearly independent, from which the claim follows by \cref{lem:affInd} since $\bm c^0 = \bm 0$. Let $\bm{M}$ be the real square $0/1$ matrix with rows $\mathsf F_{\mathrm v}'(\boldc^1)^T, \dotsc, \mathsf F_{\mathrm v}'(\boldc^{d(p-1)})^T$. Then, $\bm M$ can be written as a $d\times d$ block matrix with blocks of size $(p-1)\times (p-1)$ having the form
\[ \bm M = \begin{pmatrix}
\bm I_{p-1} & & &&\bar{\bm I}_{p-1} \\
& \ddots & &&\vdots \\
& & \ddots & &\vdots\\
& & & \bm I_{p-1} & \bar{\bm I}_{p-1} \\
& & \bm I_{p-1} & \bm I_{p-1} & \bm C
\end{pmatrix} \in \mathbb R^{d(p-1) \times d(p-1)}\]
where $\bm I_{p-1}\in \mathbb R^{(p-1)\times (p-1)}$ is the identity matrix,
\begin{equation} \notag %
\bar{\bm I}_{p-1} = \begin{pmatrix} && 1 \\ & \iddots \\ 1 \end{pmatrix}
\in \mathbb R^{(p-1)\times (p-1)}
\end{equation}
is the \enquote{reverse} identity,
and $\bm C$ is a permutation matrix with a single one at column index $[-2l]_p$ (because $p$ is prime, these values are distinct for all $l \in \range{p-1}$) for row index $l$, $1 \leq l < p$. Thus, $\bm{M}$ is almost upper triangular except for the lower right $3\times 3$ block-submatrix, which we now show to have full rank. By elementary row operations,
\[
\det\begin{pmatrix} \bm I_{p-1}& &\bar{\bm I}_{p-1} \\ & \bm I_{p-1} & \bar{\bm I}_{p-1} \\ \bm I_{p-1} & \bm I_{p-1} & \bm C \end{pmatrix}
%
= \det\begin{pmatrix} 2\bar{\bm I}_{p-1} - \bm C \end{pmatrix};\]
it hence suffices to show that the latter matrix is nonsingular. To that end, note that, for $j \in \range{p-1}$, the $j$-th row of $2\bar{\bm I}_{p-1}$ has an entry $2$ in column $[-j]_p$, while the $j$-th row of $\bm C$ has an entry $1$ in column $[-2j]_p$ (all other entries are $0$). Because $p$ is prime, $[-j]_p \neq [-2j]_p$ for $j \in \range{p-1}$. Hence, when reversing the rows of $2\bar{\bm I}_{p-1} - \bm C$, the result is a \emph{strictly diagonally dominant} matrix, which by the Levy-Desplanques theorem (see, e.g., \cite[Cor.~5.6.17]{HornJohnson12Matrix}) implies that it is nonsingular. This concludes the proof.\\
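The two key steps of the argument above, the full rank of $\bm M$ and the strict diagonal dominance of the row-reversed $2\bar{\bm I}_{p-1} - \bm C$, are easy to verify numerically for small primes. The following sketch is purely our own illustration (not part of any decoder implementation) and uses an arbitrary small length $d=4$:

```python
# Numerical sanity check for small primes p (illustration only):
# (i)  the row-reversed matrix 2*Ibar_{p-1} - C is strictly diagonally dominant,
# (ii) the matrix M built from the Flanagan embeddings of c^1, ..., c^{d(p-1)}
#      has full rank d(p-1).
import numpy as np

def reversed_block(p):
    """Build 2*Ibar_{p-1} - C with rows l = 1..p-1, then reverse the rows."""
    A = np.zeros((p - 1, p - 1))
    for l in range(1, p):
        A[l - 1, (-l) % p - 1] += 2.0      # 2*Ibar: entry 2 in column [-l]_p
        A[l - 1, (-2 * l) % p - 1] -= 1.0  # C: entry 1 in column [-2l]_p
    return A[::-1]

def flanagan(c, p):
    """Flanagan embedding F_v' of a vector over F_p (zeros map to zero blocks)."""
    v = np.zeros((len(c), p - 1))
    for i, ci in enumerate(c):
        if ci:
            v[i, ci - 1] = 1.0
    return v.ravel()

def rank_M(p, d):
    rows = []
    for i in range(d - 1):                 # codewords with two nonzeros
        for l in range(1, p):
            c = [0] * d
            c[i], c[-1] = l, (-l) % p
            rows.append(flanagan(c, p))
    for l in range(1, p):                  # codewords with three nonzeros
        c = [0] * d
        c[-3], c[-2], c[-1] = l, l, (-2 * l) % p
        rows.append(flanagan(c, p))
    return np.linalg.matrix_rank(np.array(rows))

for p in (3, 5, 7, 11):
    B = reversed_block(p)
    diag = np.abs(np.diag(B))
    off = np.abs(B).sum(axis=1) - diag
    assert np.all(diag > off)              # strict diagonal dominance
    assert rank_M(p, d=4) == 4 * (p - 1)   # M has full rank d(p-1)
```

After the row reversal, each row carries a $2$ on the diagonal and a single off-diagonal entry $-1$ (the two never collide since $p$ is prime), which is exactly the dominance condition invoked via the Levy-Desplanques theorem.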
\emph{Part~2):}
We have just shown that $\dim(\aff(\P)) =\dim(\P) = \dim(\mathcal A)$. Since both are affine spaces and by \cref{eq:dimAff} one is contained in the other, they must indeed be equal.\\
\emph{Part~3):}
Assume w.l.o.g.\ that $i=d$ (the rest follows by symmetry), and consider $j=0$ first. In the proof of part~\ref{prop:Pjdim-1}, we have shown that the $d(p-1)$ points $\mathsf F_{\mathrm v}'(\boldc^1), \dotsc, \mathsf F_{\mathrm v}'(\boldc^{d(p-1)})$ are linearly and hence also affinely independent, such that, by \cref{lem:affinely}, $\mathsf F_{\mathrm v}(\boldc^1),\dotsc,\mathsf F_{\mathrm v}(\boldc^{d(p-1)})$ are affinely independent, too. By construction, $c^i_d \neq 0$ and thus $(\mathsf F_{\mathrm v}(\boldc^i))_{d,0} = 0$ for all of them; hence \cref{eq:spx-geq} is satisfied with equality by $d(p-1)$ affinely independent elements of $\P$, such that \cref{eq:spx-geq} defines a facet of $\P$ for $j=0$.
For $j \neq 0$, let $\boldc = (0,\dotsc, 0, -j, j) \in \mathcal C$. By the above and \cref{lem:affInd}, $\mathsf F_{\mathrm v}(\boldc + \boldc^1),\dotsc, \mathsf F_{\mathrm v}(\boldc + \boldc^{d(p-1)})$ are affinely independent. By linearity, all $\boldc + \boldc^i \in \mathcal C$, and because $c_d = j$ and $c^i_d \neq 0$ for all $i \in \range{d(p-1)}$, it follows that $(\boldc + \boldc^i)_d \neq j$ for $i \in \range{d(p-1)}$. Hence, for $j \neq 0$, \cref{eq:spx-geq} is satisfied with equality by $\mathsf F_{\mathrm v}(\boldc + \boldc^1),\dotsc, \mathsf F_{\mathrm v}(\boldc + \boldc^{d(p-1)})$, i.e.,\ defines a facet of $\P$.
\section{Proof of \cref{lem:conditions}}\label{app:proofConditions}
Let $\boldtheta^T \boldx \leq \kappa$ with $\boldtheta=(\bm t_{k_1} \mid \dotsc \mid \bm t_{k_d})$ be contained in $\Theta^{\bm m}$, and denote by $\boldc \in \mathcal C$ the canonical codeword corresponding to $\boldtheta$. Then, \cref{eq:Thetacondition} follows from \cref{eq:hilocons-ci,eq:hilocons-cd}, since $\bm c \in \mathcal C$
\[
\Leftrightarrow\sum_{i=1}^d c_i =\sum_{i=1}^{d-1}(\sigma-k_i) - k_d = 0
\Leftrightarrow \sum_{i=1}^d k_i = [d-1]_p\sigma.
\]
Likewise, by construction
\begin{align*}
\kappa &= \boldtheta^T \mathsf F_{\mathrm v}(\boldc)
= \sum_{i=1}^d t_{k_i, c_i} &&\text{by \cref{eq:ineqVals}}\\
&= \sum_{i=1}^{d-1} \max(\bm t_{k_i}) + \min(\bm t_{k_d}) &&\text{by \cref{constr:hilo}}\\
&= \sum_{i=1}^{d-1} \left( t_{0,\sigma} - t_{0,k_i}\right) -t_{0,k_d}&&\text{using \cref{eq:val-k-hi,eq:val-k-lo}}\\
&= (d-1)t_{0,\sigma} - \sum_{i=1}^d t_{0,k_i}
\end{align*}
which is \cref{eq:kappacondition}. It is easy to see that the proof works in both directions, i.e.,\ if $\boldtheta$ and $\kappa$ fulfill \cref{eq:hilo-conditions}, then they are covered by \cref{constr:hilo}.
Finally, the equivalence of \cref{eq:hilo-conditions} and \cref{eq:hilo-conditions'} follows immediately by using the equations $\sum_{i=1}^d k_i = \sum_{k\in \mathbb F_p} k \left[\abs{V_k^\boldtheta}\right]_p$ and $d = \sum_{k \in \mathbb F_p} \abs{V_k^\boldtheta}$ in \cref{eq:Thetacondition,eq:kappacondition}, and by \cref{eq:property-3}.
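The first chain of equivalences in the proof above is elementary modular arithmetic and can be sanity-checked numerically; the following sketch (with arbitrary illustrative values for $p$, $d$, and $\sigma$) exercises it over random choices of $k_1,\dotsc,k_d$:

```python
# Sanity check of the equivalence: with c_i = sigma - k_i for i < d and
# c_d = -k_d, the sum of the c_i vanishes mod p iff sum k_i = (d-1)*sigma mod p.
# The values of p, d, sigma below are arbitrary illustrative choices.
import random

p, d, sigma = 7, 5, 3
random.seed(1)
for _ in range(1000):
    k = [random.randrange(p) for _ in range(d)]
    c = [(sigma - k[i]) % p for i in range(d - 1)] + [(-k[d - 1]) % p]
    lhs = sum(c) % p == 0
    rhs = sum(k) % p == ((d - 1) * sigma) % p
    assert lhs == rhs
```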
\section{Proofs for \cref{sec:validInvalid}}\label{app:validInvalid}
\begin{IEEEproof}[Proof of \cref{lem:validIndependent}]
By \cref{constr:hilo}, $\kappa = \boldtheta^T \mathsf F_{\mathrm v}(\boldc)$.
\ifonecolumn
Hence,
\begin{align*}
\boldtheta^T \mathsf F_{\mathrm v}(\bm c + \bm\xi) - \kappa &=\boldtheta^T \left(\mathsf F_{\mathrm v}(\bm c + \bm\xi) - \mathsf F_{\mathrm v}(\bm c)\right)\\
&=\sum_{i=1}^d (t_{k_i, c_i + \xi_i} - t_{k_i, c_i})&\text{by \cref{eq:ineqVals}}\\
&=\sum_{i=1}^{d-1} (t_{\sigma - c_i, c_i + \xi_i} - t_{\sigma-c_i, c_i})
+ t_{-c_d, c_d + \xi_d} - t_{-c_d, c_d}&\text{by \cref{eq:hilocons-ki,eq:hilocons-kd}}\\
&=\sum_{i=1}^{d-1} t_{\sigma,\xi_i} + t_{0,\xi_d}&\text{by \cref{lem:bb-hilo-plusi}.}
\end{align*}
\else
Hence, $\boldtheta^T \mathsf F_{\mathrm v}(\bm c + \bm\xi) - \kappa$
\begin{align*}
&=\boldtheta^T \left(\mathsf F_{\mathrm v}(\bm c + \bm\xi) - \mathsf F_{\mathrm v}(\bm c)\right)\\
&=\sum_{i=1}^d (t_{k_i, c_i + \xi_i} - t_{k_i, c_i})&\text{by \cref{eq:ineqVals}}\\
&=\sum_{i=1}^{d-1} (t_{\sigma - c_i, c_i + \xi_i} - t_{\sigma-c_i, c_i}) \\
&\phantom{=}+ t_{-c_d, c_d + \xi_d} - t_{-c_d, c_d}&\text{by \cref{eq:hilocons-ki,eq:hilocons-kd}}\\
&=\sum_{i=1}^{d-1} t_{\sigma,\xi_i} + t_{0,\xi_d}&\text{by \cref{lem:bb-hilo-plusi}.}
\end{align*}
\fi
\end{IEEEproof}
\begin{IEEEproof}[Proof of Corollary~\ref{cor:allValidOrNot}]
\begin{enumerate}
\item Let $(\boldtheta^T \boldx \leq \kappa) \in \Theta^{\bm m}$ with canonical codeword $\boldc^{\boldtheta}$. In order to show that the inequality is valid for $\P = \conv(\mathsf F_{\mathrm v}(\mathcal C))$, it suffices to prove it valid for all vertices of $\P$, which by definition of $\P$ are given by $\mathsf F_{\mathrm v}(\boldc)$ for $\boldc \in \mathcal C$. To that end, let $\boldc \in \mathcal C$ be chosen arbitrarily and define $\boldc' = \boldc - \boldc^{\boldtheta} \in \mathcal C$. Then,
\ifonecolumn
\begin{displaymath}
\boldtheta^T \mathsf F_{\mathrm v}(\boldc) - \kappa = \boldtheta^T \mathsf F_{\mathrm v}(\boldc' + \boldc^\boldtheta) - \kappa
= \sum_{i=1}^{d-1} t_{\sigma, c'_i} + t_{0,c'_d} \leq 0
\end{displaymath}
\else
\begin{align*}
\boldtheta^T \mathsf F_{\mathrm v}(\boldc) - \kappa &= \boldtheta^T \mathsf F_{\mathrm v}(\boldc' + \boldc^\boldtheta) - \kappa \\
&= \sum_{i=1}^{d-1} t_{\sigma, c'_i} + t_{0,c'_d} \leq 0
\end{align*}
\fi
where the last two equations hold because of \cref{lem:validIndependent} and the assumption applied to $\boldc'$, respectively.
\item Let $\boldtheta^T \boldx \leq \kappa$ and $\boldc^\boldtheta$ as above, and define $\boldc' = \boldc^\boldtheta + \boldc \in \mathcal C$. Then,
\ifonecolumn
\begin{displaymath}
\boldtheta^T \mathsf F_{\mathrm v}(\boldc') - \kappa = \boldtheta^T \mathsf F_{\mathrm v}(\boldc + \boldc^\boldtheta) - \kappa
= \sum_{i=1}^{d-1} t_{\sigma, c_i} + t_{0,c_d} > 0;
\end{displaymath}
\else
\begin{align*}
\boldtheta^T \mathsf F_{\mathrm v}(\boldc') - \kappa &= \boldtheta^T \mathsf F_{\mathrm v}(\boldc + \boldc^\boldtheta) - \kappa \\
&= \sum_{i=1}^{d-1} t_{\sigma, c_i} + t_{0,c_d} > 0;
\end{align*}
\fi
the inequality is violated by $\boldc' \in \mathcal C$ and hence invalid for $\P$.
\end{enumerate}
\end{IEEEproof}
\begin{IEEEproof}[Proof of \cref{theorem:newValidProgram}]
Let $\mathcal T^{\bm m}$ be a valid class and $d >0$. We show that \cref{eq:allValidCondition} holds for all $\bm c\in \mathcal C$, which implies the first statement by \cref{cor:allValidOrNot}. Let $\bm c \in \mathcal C$, and denote the left side of \cref{eq:allValidCondition} by $\gamma(\bm c) =\sum_{i=1}^{d-1} t_{\sigma, c_i} + t_{0,c_d}$. The following three observations will be used:
\begin{enumerate}
\item By Property~\ref{lem:bb-property1} of \cref{lem:bb-properties}, $[\gamma(\bm c)]_p = \sum_{i=1}^d c_i = 0$.
\item By Property~\ref{lem:bb-property3} of \cref{lem:bb-properties}, $t_{\sigma,i} \leq 0$ for all $i \in \mathbb F_p$, i.e.,\ the sum in \cref{eq:allValidCondition} contains nonpositive entries only.
\item By definition of $\sigma$ in \cref{eq:sigma}, $t_{0,c_d} \leq \max(\bm t_0) = t_{0,\sigma} \leq 2p-1$.
\end{enumerate}
Assume now $\gamma(\bm c) > 0$, i.e.,\ \cref{eq:allValidCondition} is violated. By 1), this implies $\gamma(\bm c) \geq p$, while 2) and 3) imply that $\gamma(\bm c) \leq t_{0,\sigma} < 2p$, hence $\gamma(\bm c) = p$. By 2), this shows that $t_{0,c_d} \geq p$, hence $m_{c_d} = 1$. Thus, the equation $\gamma(\bm c) = p$ can be written as
$\sum_{i=1}^{d-1} t_{\sigma,c_i} + p + [c_d]_\mathbb Z = p$, i.e.,\
\begin{equation}
\sum_{i=1}^{d-1} t_{\sigma,c_i} = -[c_d]_\mathbb Z.
\label{eq:thmValid-proof}
\end{equation}
Further, the result $m_{c_d}=1$ rules out the case $\bm m = (0,\dotsc,0)$, hence $m_{\sigma} = 1$ and thus $[\sigma]_\mathbb Z \geq [c_d]_\mathbb Z$ by definition of $\sigma$.
In \cref{eq:thmValid-proof}, all terms on the left are $\leq 0$, hence $t_{\sigma,c_i} \geq -[c_d]_\mathbb Z \geq -[\sigma]_\mathbb Z$ for $i \in \range{d-1}$.
But then with $n_j=\abs{\{i \in \range{d-1}\colon c_i = j\}}$ for $j \in J = \{j \in \mathbb F_p\colon 0 > t_{\sigma,j} \geq -[\sigma]_\mathbb Z\}$ (observe that $t_{\sigma,0} = 0$ by Property~\ref{lem:bb-property2} of \cref{lem:bb-properties}),
\[\sum_{i=1}^{d-1} t_{\sigma,c_i} + [c_d]_\mathbb Z = \sum_{j\in J} n_jt_{\sigma,j} + [c_d]_\mathbb Z = 0\]
with $m_{c_d} = 1$ and $c_d = -\sum_{j\in J} [n_j]_p\cdot j$, i.e.,\ $\mathcal T^{\bm m}$ is not valid, a contradiction.
For the second statement of the theorem, let $\{n_i\}_{i \in I}$ and $r=-\sum_{i\in I} [n_i]_p\cdot i$ be a solution to \cref{eq:valid-class-condition} with $m_r=1$. Choose $d \geq d_0 =\sum_{i \in I} n_i + 1$ and let $\bm c \in \mathcal C$ be any codeword that has, for each $i \in I$, exactly $n_i$ entries equal to $i$ among the first $d-1$ entries, $c_d= r$, and all other entries zero (any such vector is a codeword of $\mathcal C$ by the condition on $r$).
Analogously to above, we see that
\[\sum_{j=1}^{d-1} t_{\sigma,c_j} + t_{0,c_d} = \sum_{i \in I} n_i t_{\sigma,i} + [c_d]_\mathbb Z + p = p > 0\]
such that, by the second part of \cref{cor:allValidOrNot}, no inequality in $\Theta^{\bm m}$ is valid for $\P$. Finally, \cref{eq:valid-class-condition} gives $\sum_{i\in I} n_i \leq [r]_\mathbb Z$ because $t_{\sigma,i} \leq -1$ for all $i \in I$ by definition of $I$, such that $d_0 = \sum_{i \in I} n_i + 1\leq [r]_\mathbb Z + 1 \leq [\sigma]_\mathbb Z + 1$, which concludes the proof.
\end{IEEEproof}
\begin{IEEEproof}[Proof of \cref{lem:valid-symmetric-condition}]
We show that the system in \cref{def:valid-class} is solvable if and only if the one from \cref{lem:valid-symmetric-condition} is. For the \enquote{only if} part, let $\{n_i\}_{i \in I}$ and $r$ be a solution to \cref{eq:valid-class-condition} with $r=-\sum_{i\in I} [n_i]_p \cdot i$ and $m_r = 1$.
By \cref{lem:bb-symmetric-property1} of \cref{lem:bb-symmetric_properties}, $ I = \{i \in \mathbb F_p\colon 0 > t_{\sigma,i} \geq -[\sigma]_\mathbb Z\} = \{i \colon 0 > -t_{0,-i} \geq -[\sigma]_\mathbb Z\} = \{i\colon 0 < t_{0,-i} \leq [\sigma]_\mathbb Z\}$. Because $[\sigma]_\mathbb Z<p$, $t_{0,-i} \leq [\sigma]_\mathbb Z$ implies $m_{-i} = 0$ for $i \in I$. Furthermore, $m_r=1$ implies $m_\sigma = 1$, such that $t_{0,\sigma} = p + [\sigma]_\mathbb Z$ and by Property~\ref{lem:bb-property1} of \cref{lem:bb-properties} $t_{0,k} \neq [\sigma]_\mathbb Z$ for any $k \in \mathbb F_p$. Consequently,
\[ I = \{i\colon m_{-i} = 0\text{ and }0 < t_{0,-i} < [\sigma]_\mathbb Z\}\]
and thus $J = \{-i\colon i \in I\}$. For $j \in J$, define $\nu_j = n_{-j}$ and let $\rho = r$. Then,
\ifonecolumn
\begin{displaymath} \notag
0 = \sum_{i \in I} n_i \cdot t_{\sigma,i} + [r]_\mathbb Z
= \sum_{j \in J} \nu_j t_{\sigma,-j} + [\rho]_\mathbb Z = -\sum_{j \in J} \nu_j t_{0,j} + [\rho]_\mathbb Z = \sum_{j \in J} \nu_j \cdot j + [\rho]_\mathbb Z
\end{displaymath}
\else
\begin{multline} \notag
0 = \sum_{i \in I} n_i \cdot t_{\sigma,i} + [r]_\mathbb Z
= \sum_{j \in J} \nu_j t_{\sigma,-j} + [\rho]_\mathbb Z \\= -\sum_{j \in J} \nu_j t_{0,j} + [\rho]_\mathbb Z = \sum_{j \in J} \nu_j \cdot j + [\rho]_\mathbb Z \end{multline}
\fi
which completes the proof of the \enquote{only if} direction. For the \enquote{if} part, one can see analogously that $I = \{-j\colon j \in J\}$ and use $n_i=\nu_{-i}$ and $r = \rho$ to construct a solution for \cref{eq:valid-class-condition}, concluding the proof.
\end{IEEEproof}
\begin{IEEEproof}[Proof of \cref{lem:counting_formulas}]
By \cref{lem:validIndependent}, a vector $\bm\zeta = \bm c + \bm\xi$ satisfies $\boldtheta^T \mathsf F_{\mathrm v}(\boldzeta) > \kappa$ if and only if $I^>_{\xi_d,J^{\bm\xi}} = 1$, where the multiset $J^{\bm\xi}$ contains the nonzero entries of $\xi_1,\dotsc,\xi_{d-1}$ (with multiplicity). The first claim follows because, for $\xi_d$ fixed, there are $\binom{d-1}{|J^{\bm\xi}|} \frac{|J|!}{n^J_1! \cdots n^J_{k^J}!}$ different vectors $\bm\xi$ that result in the same multiset $J^{\bm\xi}$, where the multinomial coefficient accounts for the number of possible orderings of the multiset $J$.
The second part is analogous; note that $\bm\zeta$ is a codeword if and only if $\sum_{i=1}^d \xi_i = \xi_d + \|J^{\bm\xi}\|_1=0$, which accounts for the additional condition in the definition of $I^=_{c,J}$.
\end{IEEEproof}
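The combinatorial factor in the proof above can be checked by brute force for small parameters. The following sketch (with arbitrarily chosen illustrative values of $p$, $d$, and the multiset) enumerates all vectors over $\mathbb F_p$ and compares the count against the closed-form expression:

```python
# Brute-force check (illustrative values) of the counting factor: the number of
# vectors (xi_1, ..., xi_{d-1}) over F_p whose nonzero entries form a given
# multiset J equals binom(d-1, |J|) * |J|! / (n_1! * ... * n_k!).
from collections import Counter
from itertools import product
from math import comb, factorial

p, d = 5, 4
J = Counter({1: 2, 3: 1})                  # the multiset {1, 1, 3}
size = sum(J.values())
predicted = comb(d - 1, size) * factorial(size)
for n in J.values():
    predicted //= factorial(n)             # divide by the multiplicities
brute = sum(1 for xi in product(range(p), repeat=d - 1)
            if Counter(v for v in xi if v) == J)
assert brute == predicted == 3             # the 3 orderings of (1, 1, 3)
```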
\section{Proof of \cref{lem:facets}}\label{prooffacets}
Let $\boldtheta^T \bm x \leq \kappa$ be an inequality from $\Theta^{\bm m}$.
In order to prove the claim, we construct a set $S \subset \mathcal C$ of codewords such that $\boldtheta^T \mathsf F_{\mathrm v}(\bm c) = \kappa$ for any $\bm c \in S$, and at least $1+d(p-1)-1-(p-3)/2$ elements of $S$ have affinely independent embeddings. Let $\bm c^{\hat\boldtheta}$ be the canonical codeword from \cref{constr:hilo} that is used to generate the inequality and define, for $s \in \range{(p-1)(d-1)}$, the vector $\bm\xi^s = \bm\xi^{i(p-1)+l} \in \mathbb F_p^d$ by
\begin{equation}
\xi^{i(p-1)+l}_j = \begin{cases}
\phantom{-}\left[l\right]_p &\text{if $j=i$},\\
-\left[l\right]_p & \text{if $j=d$},\\
\phantom{-}0 &\text{otherwise} \end{cases}
\label{eq:faceproof-xi-type1}
\end{equation}
where $0 \leq i < d-1$ and $1 \leq l < p$, and let $\bm c^s = \bm c^{\hat\boldtheta} + \bm\xi^s$ for $s \in \range{(p-1)(d-1)}$. By \cref{lem:validIndependent}, $\boldtheta^T \mathsf F_{\mathrm v}(\bm c^{i(p-1)+l}) - \kappa = t_{\sigma,[l]_p} + t_{0,[-l]_p} = 0$,
where the second equation is due to Part~\ref{lem:bb-symmetric-property1} of \cref{lem:bb-symmetric_properties}, such that the inequality is tight for the embeddings of these $(d-1)(p-1)$ codewords.
Now, we construct $p-2$ additional codewords that differ from $\bm c^{\hat{\boldtheta}}$ in the last \emph{three} entries by adding, for each $i \in \mathbb F_p \setminus \{0,\sigma\}$, the vector $\bm\zeta^i \in \mathbb F_p^d$ defined by
\[ \zeta^i_j = \begin{cases}
-i&\text{if }j=d-2,\\
i-\sigma &\text{if }j=d-1,\\
\sigma&\text{if }j=d,\\
0&\text{otherwise}
\end{cases}
\]
to the canonical codeword $\bm c^{\hat\boldtheta}$. Using again \cref{lem:validIndependent} we find that, for $i \in \mathbb F_p\setminus\{0,\sigma\}$,
\ifonecolumn
\begin{align*}
\boldtheta^T\mathsf F_{\mathrm v}(\bm c^{\hat\boldtheta} + \bm\zeta^i) - \kappa
&=t_{\sigma,-i} + t_{\sigma,i-\sigma} + t_{0,\sigma} \\
&=-t_{0,i} - t_{0,\sigma-i} + t_{0,\sigma} \\
&= - [i]_\mathbb Z - p m_i - [\sigma - i]_\mathbb Z - p m_{\sigma - i} + [\sigma]_\mathbb Z + p m_\sigma.
\end{align*}
\else
\begin{align*}
&\boldtheta^T\mathsf F_{\mathrm v}(\bm c^{\hat\boldtheta} + \bm\zeta^i) - \kappa\\
&=t_{\sigma,-i} + t_{\sigma,i-\sigma} + t_{0,\sigma} \\
&=-t_{0,i} - t_{0,\sigma-i} + t_{0,\sigma} \\
&= - [i]_\mathbb Z - p m_i - [\sigma - i]_\mathbb Z - p m_{\sigma - i} + [\sigma]_\mathbb Z + p m_\sigma.
\end{align*}
\fi
For $[i]_\mathbb Z \leq [\sigma]_\mathbb Z$, the above is zero because then $-[i]_\mathbb Z - [\sigma-i]_\mathbb Z + [\sigma]_\mathbb Z = 0$ and $m_i + m_{\sigma - i} = m_\sigma$ by Part~\ref{lem:bb-symmetric-property3} of \cref{lem:bb-symmetric_properties}. If $[i]_\mathbb Z > [\sigma]_\mathbb Z$ then $m_i = 0$ by definition, thus also $m_{\sigma - i} = 0$. Also, this case implies $\sigma \neq p-1$, hence $m_\sigma = 1$, and furthermore $[\sigma - i]_\mathbb Z = [\sigma]_\mathbb Z - [i]_\mathbb Z + p$, such that $\boldtheta^T \mathsf F_{\mathrm v}(\bm c^{\hat\boldtheta} + \bm\zeta^i) = \kappa$ also in this case. In summary, we have constructed a set $S$ of $(p-1)(d-1) + p - 2 = (p-1)d - 1$ codewords of $\mathcal C$, the embeddings of which satisfy the inequality with equality.
We now switch to Flanagan's embedding from \cref{rem:Flanagan} and show that the span of $\mathsf F_{\mathrm v}'(\boldc^1-\boldc^{\hat{\boldtheta}}),\dotsc,\mathsf F_{\mathrm v}'(\boldc^{(p-1)d-1}-\boldc^{\hat{\boldtheta}})$ has dimension $d(p-1)-1-(p-3)/2$, from which the claim follows by \cref{lem:affInd}. To that end, consider the real $0/1$ matrix $\bm{M}_F$ whose rows
are $\mathsf F_{\mathrm v}'(\boldc^1-\boldc^{\hat{\boldtheta}})^T,\dotsc,\mathsf F_{\mathrm v}'(\boldc^{(p-1)d-1}-\boldc^{\hat{\boldtheta}})^T$. This matrix is of the form
\begin{equation} \label{eq:MF}
\bm{M}_F = \begin{pmatrix}
\bm{I}_{p-1} & \bm{0} & & & \bm{0} & \bar{\bm I}_{p-1}\\
\bm{0} & \bm{I}_{p-1} & & & \bm{0} & \bar{\bm I}_{p-1}\\
& &\ddots& &&\\
& & & \bm{I}_{p-1}&\bm{0}&\bar{\bm I}_{p-1}\\
& & & \bm{0}&\bm{I}_{p-1}&\bar{\bm I}_{p-1}\\
& & & \bar{\bm{D}}_{p-2}&\bm D_{p-2}&\bm{E}_{p-2}
\end{pmatrix}
\end{equation}
where $\bm{I}_{p-1}$ and $\bar{\bm I}_{p-1}$ are defined in Appendix~\ref{proofPjdim},
\ifonecolumn
\begin{displaymath}
\bar{\bm D}_{p-2} =
\begin{pmatrix}
&0& \bar{\bm I}_{[\sigma]_\mathbb Z-1}\\
\bar{\bm I}_{[-\sigma]_\mathbb Z-1} & 0
\end{pmatrix}, \;
\bm D_{p-2} =
\begin{pmatrix}
&0&\bm I_{[\sigma]_\mathbb Z-1}\\
\bm I_{[-\sigma]_\mathbb Z-1}&0
\end{pmatrix}
\end{displaymath}
\else
\begin{align*}
\bar{\bm D}_{p-2} &=
\begin{pmatrix}
&0& \bar{\bm I}_{[\sigma]_\mathbb Z-1}\\
\bar{\bm I}_{[-\sigma]_\mathbb Z-1} & 0
\end{pmatrix},\\
\bm D_{p-2} &=
\begin{pmatrix}
&0&\bm I_{[\sigma]_\mathbb Z-1}\\
\bm I_{[-\sigma]_\mathbb Z-1}&0
\end{pmatrix}
\end{align*}
\fi
(note that the $[-\sigma]_\mathbb Z$-th column (with $[-\sigma]_\mathbb Z = p - [\sigma]_\mathbb Z$) of both $\bm D_{p-2}$ and $\bar{\bm D}_{p-2}$ is all-zero), and $\bm E_{p-2}$ has $1$-entries in the $[\sigma]_\mathbb Z$-th column and zeros otherwise.
Now, note that the matrix $\bm{M}_F$ is (similar to the one in the proof of \cref{prop:Pjdim}) almost upper triangular except for the lower right $(3(p-1)-1) \times 3(p-1)$ block. After eliminating $\bar{\bm D}_{p-2}$ and $\bm D_{p-2}$ by Gaussian elimination, the lower right $\bm E_{p-2}$ turns into the matrix
\[
\tilde{\bm E}_{p-2} = \begin{pmatrix} -\bm X_{[-\sigma]_\mathbb Z-1} & 1 \\
&1 & \bm X_{[\sigma]_\mathbb Z -1}
\end{pmatrix}\]
where $\bm X_l$ is the $l\times l$ \enquote{X}-shaped 0/1 matrix (details omitted). Now, $\tilde{\bm E}_{p-2}$ is easily verified to have rank $\frac12(p-1)$, such that the total rank of $\bm M_F$ is $(d-1)(p-1) + \frac12(p-1) = d(p-1) - 1 - \frac12(p-3)$, which concludes the proof.
\begin{remark}\label{rem:numerical-check-facet}
The above proof implies a simple numerical procedure by which a specific valid symmetric basic building block class $\mathcal T^{\bm m}$ can be verified to be facet-defining: Find $\frac12(p-3)$ \emph{additional} nonzero vectors $\bm \xi \in \mathbb F_p^3$ with $\sum_{i=1}^3 \xi_i = 0$ that satisfy $t_{\sigma,\xi_1} + t_{\sigma,\xi_2} + t_{0,\xi_3} = 0$ such that their (Flanagan) embeddings, together with the lower right $(3(p-1)-1) \times 3(p-1)$ part of $\bm M_F$ above, are linearly independent and thus complete $\bm M_F$ to a matrix of rank $d(p-1)-1$.
While passing this test is only a sufficient condition for a valid class being facet-defining (in theory, there could be classes that are facet-defining only for some $d \geq d_0 > 3$), we conjecture it to be necessary as well, as we did not find any counter-example in numerical experiments.
\end{remark}
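To make the remark concrete, the following self-contained sketch (Python with exact rational arithmetic; all helper names are ours and merely illustrative, not notation from the paper) enumerates, for $d=3$, all nonzero $\bm\xi \in \mathbb F_p^3$ with zero coordinate sum satisfying the tightness condition $t_{\sigma,\xi_1} + t_{\sigma,\xi_2} + t_{0,\xi_3} = 0$, Flanagan-embeds them, and checks that their span has rank $d(p-1)-1$. For illustration we use the class $\bm m = (0,\dotsc,0)$ with $p=5$, for which $t_{0,j} = [j]_\mathbb Z$.

```python
from fractions import Fraction

def rank(rows):
    """Exact rank of a list of integer row vectors via Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

p, d = 5, 3
t0 = [j for j in range(p)]            # class m = (0,...,0): t_{0,j} = [j]_Z
sigma = max(range(p), key=lambda j: t0[j])
t_sigma = [t0[(sigma + j) % p] - t0[sigma] for j in range(p)]  # t_{sigma,j}

def flanagan(xi):                     # Flanagan embedding F_p^d -> R^{d(p-1)}
    v = [0] * (d * (p - 1))
    for i, x in enumerate(xi):
        if x:
            v[i * (p - 1) + x - 1] = 1
    return v

# All nonzero xi with sum 0 that are tight for the k-pattern (sigma, sigma, 0).
tight = [flanagan((a, b, (-a - b) % p))
         for a in range(p) for b in range(p)
         if (a, b) != (0, 0)
         and t_sigma[a] + t_sigma[b] + t0[(-a - b) % p] == 0]
print(len(tight), rank(tight))        # rank d*(p-1) - 1 = 11 for this class
```

For this class the check succeeds: the $14$ tight vectors span a subspace of rank $11 = d(p-1)-1$, matching the rank computed in the proof above.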
\section{Numerical Procedure to Verify the \enquote{Only-If} part of \cref{conj:facetsymmetric}} \label{app:only-if-part}
The procedure is based on the following key lemma.
\begin{lemma} \label{lem:d0}
Let $\mathcal{T}^{\bm m}$ be a valid basic building block class and $\boldtheta^T\boldx \leq \kappa$ any inequality from $\Theta^{\bm m}$, where $\bm c^{\hat\boldtheta}$ is the canonical codeword according to \cref{constr:hilo}, for a given length $d$ of the SPC code. Denote by $\bm M_F^d$ the matrix whose rows consist of $\mathsf F_{\mathrm v}(\bm c - \bm c^{\hat\boldtheta})$, where $\bm c$ runs over all codewords for which the inequality is tight (as outlined in the proof of \cref{lem:facets} in Appendix~\ref{prooffacets}; see \eqref{eq:MF}). Note that $\bm M_F^d$ is independent of the chosen inequality. Let $d_0$ be the smallest $d \geq 3$ such that the support of every codeword underlying $\bm M_F^{d_0}$ (before being embedded) has size at most $d_0-2$. Then,
\begin{displaymath}
\rank\left(\bm M_F^{d+1}\right) \leq \rank\left(\bm M_F^{d}\right) + p-1
\end{displaymath}
for all $d \geq d_0$.
\end{lemma}
\begin{IEEEproof}
First note that $d_0$ is finite for any valid basic building block class $\mathcal{T}^{\bm m}$: we have $t_{0,j} \geq 0$ (see \cref{def:bb}) for all $j \in \mathbb F_p$ and $t_{\sigma,j} < 0$ for $j \in \mathbb F_p \setminus \{0\}$ (by Property~\ref{lem:bb-property3} of \cref{lem:bb-properties}), and the claim then follows from \cref{lem:validIndependent} (see (\ref{eq:tightness})). Furthermore, for \cref{constr:hilo}, we choose the first coordinate as the constrained coordinate (it can be arbitrarily chosen).
Now, let $\bm M^{d_0}$ be the corresponding matrix before the embedding.
It follows that
\begin{displaymath}
\bm M^{d_0+1} = \begin{pmatrix} \bm M^{d_0} & \bm {0} \\
\bm A & \bm b
\end{pmatrix}
\end{displaymath}
where $\bm 0$ is an all-zero column vector over $\mathbb F_p$, $\bm b$ is a column vector in which all entries are different from $[0]_p$, and $\bm A$ is a matrix of partial codewords (one partial codeword per row) such that the concatenation $(\bm A \mid \bm b)$ contains all tight codewords for the inequalities in $\Theta^{\bm m}$ for $d = d_0+1$ with a nonzero final entry. Now, suppose that there exist two codewords (rows) in $(\bm A \mid \bm b)$ in which the final entry is the same, i.e., the two codewords are of the form $(\bm z_1,u,v,0,0,x)$ and $(\bm z_2,0,0,w,y,x)$, respectively, where $x \in \mathbb F_p \setminus \{0\}$, $u,v,w,y \in \mathbb F_p$, and $\bm z_1$ and $\bm z_2$ are vectors of length $d_0-4$ over $\mathbb F_p$. Here, without loss of generality, we have assumed that the support of the first codeword is contained within the first $d_0-2$ coordinates, while the support of the second codeword is contained within the first $d_0-4$ coordinates, as well as within the coordinates $d_0-1$ and $d_0$. This assumes $d_0 \geq 4$. If $d_0=3$, then the two codewords would overlap with a zero entry in at least one coordinate. However, the same argument below can be repeated to prove the lemma in this special case as well. Now, since by assumption both codewords are tight for the inequalities of $\Theta^{\bm m}$, so are $(\bm z_1,u,v,0,x,0)$ and $(\bm z_2,x,0,w,y,0)$, since the constrained coordinate of \cref{constr:hilo} cannot be in the position of a zero entry; this would entail that the codeword is equal to the all-zero codeword, violating the assumption that $x$ is nonzero. Note that permuting the other non-constrained entries of a codeword would not violate the tightness of the codeword with respect to the inequalities in $\Theta^{\bm m}$ (see (\ref{eq:tightness}) of \cref{lem:validIndependent}). Consider the codewords
$(\bm z_2,x,y,w,0,0)$ and $(\bm z_2,0,y,w,x,0)$. Again, they are permutations (not involving the constrained coordinate of \cref{constr:hilo}) of $(\bm z_2,x,0,w,y,0)$, and are thus tight for the inequalities in $\Theta^{\bm m}$ for $d=d_0+1$ (this follows from (\ref{eq:tightness}) of \cref{lem:validIndependent}). Taking the real linear combination
\ifonecolumn
\begin{align*}
&\mathsf F_{\mathrm v}'\left((\bm z_1,u,v,0,x,0)^T \right) - \mathsf F_{\mathrm v}'\left((\bm z_2,x,0,w,y,0)^T\right)
+ \mathsf F_{\mathrm v}'\left((\bm z_2,x,y,w,0,0)^T\right) - \mathsf F_{\mathrm v}'\left((\bm z_2,0,y,w,x,0)^T\right) \\
&= \mathsf F_{\mathrm v}'\left((\bm z_1,u,v,0,0,x)^T\right) - \mathsf F_{\mathrm v}'\left((\bm z_2,0,0,w,y,x)^T\right)
\end{align*}
\else
\begin{align*}
&\mathsf F_{\mathrm v}'\left((\bm z_1,u,v,0,x,0)^T \right) - \mathsf F_{\mathrm v}'\left((\bm z_2,x,0,w,y,0)^T\right) \\
&+ \mathsf F_{\mathrm v}'\left((\bm z_2,x,y,w,0,0)^T\right) - \mathsf F_{\mathrm v}'\left((\bm z_2,0,y,w,x,0)^T\right) \\
&= \mathsf F_{\mathrm v}'\left((\bm z_1,u,v,0,0,x)^T\right) - \mathsf F_{\mathrm v}'\left((\bm z_2,0,0,w,y,x)^T\right)
\end{align*}
\fi
shows that $\mathsf F_{\mathrm v}'((\bm z_1,u,v,0,0,x)^T)$ can be written as a real linear combination of four rows from $(\bm M_F^{d_0} \mid \bm 0)$ and the row $\mathsf F_{\mathrm v}'((\bm z_2,0,0,w,y,x)^T)$. Thus, the set of rows of $(\bm A \mid \bm b)$ can increase the rank by at most $p-1$, and the result of the lemma follows for $d=d_0$. The result for $d > d_0$ follows by induction.
\end{IEEEproof}
The procedure works in the following way and is repeated for each valid basic building block class $\mathcal{T}^{\bm m}$ (indexed by $\bm m$) for a given prime $p$. Choose $d=3$ and build the matrix $\bm M_F^d$. If its rank equals $d(p-1)$, the class is facet-defining for $d=3$ and the procedure stops. Otherwise, the rank is less than $d(p-1)$, so the class cannot be facet-defining for $d=3$; in this case, compute the reduced row echelon form of $\bm M_F^d$ and remove the all-zero rows. The resulting matrix is denoted by $\bm M_{F, \rm red}^d$. Now, construct the matrix
\begin{equation} \label{eq:MF_d}
\begin{pmatrix} \bm M_{F, \rm red}^{d} & \bm 0 \\
\mathsf F_{\mathrm v}'(\bm a_1^T)^T & \mathsf f'(1) \\
\vdots & \vdots \\
\mathsf F_{\mathrm v}'(\bm a_{p-1}^T)^T & \mathsf f'(p-1) \\
\mathsf F_{\mathrm m}'(\bm A^T)^T & \mathsf F_{\mathrm m}'(\bm b^T)^T
\end{pmatrix}
\end{equation}
where the matrix-embedding $\mathsf F_{\mathrm m}'$ is analogous to the vector-embedding $\mathsf F_{\mathrm v}'$, applying the vector-embedding to each column of the matrix individually. In (\ref{eq:MF_d}), $(\bm a_i \mid i)$, $i \in \mathbb F_p \setminus \{0\}$, is any (if any exist) tight codeword for length $d+1$ with at least two zero entries in $\bm a_i$, and each row of the matrix $(\bm A \mid \bm b)$ is a tight codeword for length $d+1$ with a nonzero final entry and at most one zero entry among the first $d$ coordinates. Then, it follows from the proof of \cref{lem:d0} that the rank of $\bm M_F^{d+1}$ is equal to the rank of the matrix in (\ref{eq:MF_d}), since any additional tight codeword for length $d+1$ with at least two zero entries in the coordinates $1$ through $d$ can be written as a real linear combination of the rows of $(\bm M_{F, \rm red}^{d} \mid \bm 0)$. The reduced row echelon form of $\bm M_F^{d+1}$ (after removing the all-zero rows) is equal to the reduced row echelon form (again after removing the all-zero rows) of the matrix in (\ref{eq:MF_d}), and the procedure can be repeated for the next value of $d$ if the rank of the matrix in (\ref{eq:MF_d}) is strictly smaller than $d(p-1)$. Otherwise, the class would be facet-defining for this particular value of $d$. The procedure is repeated until $d=d_0$, unless the rank at some point reaches $d(p-1)$. Now, if $\rank \left( \bm M_F^{d} \right) < d(p-1)$ for all $3 \leq d \leq d_0$ (i.e., the procedure does not stop until $d=d_0$), it follows from \cref{lem:d0} that the class cannot be facet-defining for any $d \geq 3$.
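The rank bookkeeping in this procedure rests on the fact that replacing a matrix by the nonzero rows of its reduced row echelon form preserves the row space, and hence the rank of any matrix obtained by appending further rows. A minimal sketch of this invariant with exact rational arithmetic; the matrices are toy stand-ins (our choice, not data from the paper) for $\bm M_{F}^{d}$ and the appended rows $(\bm A \mid \bm b)$:

```python
from fractions import Fraction

def rref_nonzero(rows):
    """Reduced row echelon form with all-zero rows removed (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        pv = m[r][c]
        m[r] = [x / pv for x in m[r]]            # normalize the pivot row
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m[:r]

# Toy stand-ins for M_F^d and for the new rows appended at length d+1.
M = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]
A = [[1, 0, 0], [3, 5, 7]]

# Reducing M first and then appending A yields the same canonical form
# (and thus the same rank) as reducing the full stacked matrix at once.
full = rref_nonzero(M + A)
incremental = rref_nonzero(rref_nonzero(M) + A)
assert full == incremental
```

Because the reduced form is canonical for the row space, the procedure may carry $\bm M_{F, \rm red}^{d}$ forward from one value of $d$ to the next instead of rebuilding the full tight-codeword matrix.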
The complexity of the approach depends on the value of $d_0$, which in turn depends on the building block class, and in particular on the number of rows of the matrix $(\mathsf F_{\mathrm m}'(\bm A^T)^T \mid \mathsf F_{\mathrm m}'(\bm b^T)^T)$ for each value of $d$. The size of this matrix (for a given $d$) can be determined from the principle behind the counting formula in \cref{lem:counting_formulas}. Note that for some classes, $d_0$ is as large as $p+2$. As an example of the resulting complexity, in the worst case for $p=19$ (when $d_0=p+2=21$), the number of rows of $(\mathsf F_{\mathrm m}'(\bm A^T)^T \mid \mathsf F_{\mathrm m}'(\bm b^T)^T)$ for $d=3,\dotsc,20$ is
\ifonecolumn
\begin{displaymath}
\begin{split}
(&908, 4464, 17400, 53886, 131826, 254652, 390302,
477511,468383, 368507, 231336, 114444, 43656, 12393,
2466, \\
&307, 18, 0).
\end{split}
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
(&908, 4464, 17400, 53886, 131826, 254652, 390302, \\
&477511,468383, 368507, 231336, 114444, 43656, 12393, \\
&2466, 307, 18, 0).
\end{split}
\end{displaymath}
\fi
\section{Proof of \cref{lem:facets_modified}}\label{prooffacets_modified}
We use \cref{rem:numerical-check-facet} to prove the proposition. The following lemma yields the required $\bm\xi$-vectors.
\begin{lemma} \label{lem:bb-doubly_symmetric_properties}
Let $\mathcal T^{\bm m}$ be almost doubly-symmetric and denote by $\sigma_1$ the index of the second-largest entry in $\bm t_0$, i.e.,\ $t_{0,\sigma_1} = \smax(\bm t_0)$. Then, there are $(p-3)/2$ different elements $c_1,\dotsc,c_{(p-3)/2} \in \mathbb F_p$ such that, for $k \in \range{(p-3)/2}$,
\begin{equation} \label{eq:double_sym_condition}
t_{\sigma, -c_k} + t_{\sigma, c_k-\sigma_1} + t_{0,\sigma_1} = 0.
\end{equation}
Furthermore, $c_k \neq \sigma_1-c_l$ for all $k,l \in \range{(p-3)/2}$ and all $c_k, \sigma_1-c_l \notin \{0, \sigma_1, \sigma\}$.
\end{lemma}
\begin{IEEEproof}
Choose $i \in \tilde T_0^\mathrm{proj}$ as in \cref{eq:condition_dsymmetrix_3}. Then there is a $c \in \mathbb F_p$ such that $i = t_{0,c} = t_{0,\sigma} - t_{0,\sigma - c}$ (cf.\ \cref{eq:symmetry-alt} for the second equality).
By \cref{eq:condition_dsymmetrix_3}, $t_{0,\sigma_1} - t_{0,c} \in T_0^\mathrm{proj}$, so that again by \cref{eq:symmetry-alt} we conclude that $t_{0,\sigma_1} - t_{0,c} = t_{0,\sigma} - t_{0,\sigma-\sigma_1+c}$.
Inserting the above expression for $t_{0,c}$ and using \cref{def:bb} results in
\[ t_{0,\sigma_1} + t_{\sigma,- c} = -t_{\sigma,c-\sigma_1}\]
which shows \cref{eq:double_sym_condition} for this particular $c$.
Now by \cref{eq:condition_dsymmetrix_2} all $t_{0,c_k} \in \tilde T_0^{\rm proj}$, while all $t_{0,\sigma_1-c_k} \in T_0^{\rm proj} \setminus \tilde T_0^{\rm proj}$, which shows the remaining claims.
\end{IEEEproof}
Let now $c_1,\dotsc, c_{(p-3)/2}$ denote the $(p-3)/2$ $c$-values obtained from the above lemma, and define, for $k \in \range{(p-3)/2}$,
\[\bm\xi_k = (0,\dotsc,0,-c_k, c_k - \sigma_1, \sigma_1)\]
which fulfills the conditions of \cref{rem:numerical-check-facet} by \cref{lem:bb-doubly_symmetric_properties}. The (Flanagan) embeddings of the $\bm \xi_k$ lead to a $(p-3)/2 \times 3(p-1)$ matrix of the form
\[\begin{pmatrix} \bm G &\bar{\bm G} &\bm E'_{(p-3)/2} \end{pmatrix}\]
where $\bm G$ contains a permutation matrix in the columns $[c_1]_\mathbb Z,\dotsc, [c_{(p-3)/2}]_\mathbb Z$ and zeros otherwise, $\bar{\bm G}$ is a permutation matrix in columns $[\sigma_1-c_1]_\mathbb Z, \dotsc, [\sigma_1-c_{(p-3)/2}]_\mathbb Z$ with zeros otherwise, and $\bm E'_{(p-3)/2}$ has $1$-entries in the $[\sigma_1]_\mathbb Z$-th column and zeros otherwise. Now, as in Appendix~E, both $\bm G$ and $\bar{\bm G}$ can trivially be eliminated, and one can readily show that this operation turns $\bm E'_{(p-3)/2}$ into a matrix $\tilde{\bm E}'_{(p-3)/2}$ that has zeros in the $[-\sigma]_\mathbb Z$-th column, ones in the $[-\sigma_1]_\mathbb Z$-th column, and two negated permutation matrices in the remaining columns, so that the conditions of \cref{rem:numerical-check-facet} are fulfilled, which completes the proof.
\begin{comment}
The proof of \cref{lem:facets_modified} is along the same lines as the proof of \cref{lem:facets}, except that additional codewords that are tight for any inequality are appended to the embedded codeword matrix in \cref{eq:MF} (cf.\ \cref{rem:numerical-check-facet}). These codewords can be identified due to the fact that the class, by assumption, is almost doubly-symmetric.
In particular, it follows from \cref{lem:bb-hilo-plusi} and \cref{lem:bb-doubly_symmetric_properties} that we can choose the coordinates of the codeword $\bm c$, initially set to the canonical codeword (see the proof of \cref{lem:facets_modified}) in positions $d-2$ and $d-1$ to decrease their contribution to the left-hand side of the inequality (with respect to the canonical codeword) by $t_{k,t_{k,\uparrow}} - t_{k,t_{k,\uparrow}+\left[c\right]_p}$ and $t_{k,t_{k,\uparrow}} - t_{k,t_{k,\uparrow}-\left[c\right]_p - t_{k,\uparrow'}}$, respectively, and set $c_d$ to its ``s-hi''-value (second highest value), increasing the contribution to the left-hand side by $t_{k,t_{k,\downarrow}+t_{k,\uparrow'}}- t_{k,t_{k,\downarrow}}$, where $c$ is taken from $\range{p-1}$ such that $t_{k,t_{k,\uparrow}} - t_{k,t_{k,\uparrow}+\left[c\right]_p} \leq \left\lfloor \smax(\boldt_0)/2 \right\rfloor$ (see \cref{lem:bb-doubly_symmetric_properties}). Note that there are $(p-3)/2$ such values. Thus, we get $(p-3)/2$ additional codewords, all of which are tight (due to \cref{lem:bb-doubly_symmetric_properties}) for the given inequality. As in the proof of \cref{lem:facets}, we can switch from the constant-weight embedding to Flanagan's embedding from \cref{rem:Flanagan} (due to \cref{lem:affInd}). The matrix whose rows are $\mathsf F_{\mathrm v}'(\boldc^1 - \boldc^{\hat{\boldsymbol{\theta}}})^T,\dotsc,\mathsf F_{\mathrm v}'(\boldc^{(p-1)d-1+ (p-3)/2}-\boldc^{\hat{\boldsymbol{\theta}}})^T$, where the additional codewords identified above are denoted as $\bm c^{(p-1)d-1+l}$, $l=1,\dotsc,(p-3)/2$, and where the first $(p-1)d-1$ codewords are identified in the proof of \cref{lem:facets}, have the same form as the matrix in \cref{eq:MF}, except that $\bm D_{p-2}$, $\bar{\bm D}_{p-2}$, and $\bm E_{p-2}$ are replaced by
\begin{displaymath}
\begin{pmatrix}
\bm D_{p-2} \\
\bm D'_{(p-3)/2} \end{pmatrix},\;
\begin{pmatrix}
\bar{\bm D}_{p-2} \\
\bar{\bm D}'_{(p-3)/2} \end{pmatrix},\; \text{and }
\begin{pmatrix}
\bm E_{p-2} \\
\bm E'_{(p-3)/2} \end{pmatrix}
\end{displaymath}
respectively, where
\begin{displaymath}
\bm{D}'_{(p-3)/2} = \begin{pmatrix}
\bm s'_1 & \overbrace{0 \cdots 0}^{(p-3)/2} & \overbrace{0\,0}^2\\
\vdots & \vdots & \vdots \\
\bm s'_{(p-3)/2} & \overbrace{0 \cdots 0}^{(p-3)/2} & \overbrace{0\,0}^2\\
\end{pmatrix}
\end{displaymath}
and
\begin{displaymath}
\bar{\bm{D}}'_{(p-3)/2} = \begin{pmatrix}
\overbrace{0 \cdots 0}^{(p-3)/2} & \bm t'_1 & \overbrace{0\,0}^2\\
\vdots & \vdots & \vdots \\
\overbrace{0 \cdots 0}^{(p-3)/2} & \bm t'_{(p-3)/2} & \overbrace{0\,0}^2\\
\end{pmatrix}
\end{displaymath}
and $\bm E'_{(p-3)/2} = (\bm 1 | \bm 0_{(p-3)/2-1} | \bm 0_{(p-3)/2} | \bm 0 | \bm 0)$ where $\bm s'_i$ and $\bm t'_i$, $i=1,\dotsc,(p-3)/2$, are weight-$1$ binary vectors of length $(p-3)/2$, where all $\bm s'_i$ are different and all $\bm t'_i$ are different. Note that all weight-$1$ binary vectors (of length $(p-3)/2$) occur within the sets $\{\bm s'_1,\dots,\bm s'_{(p-3)/2} \}$ and $\{\bm t'_1,\dots,\bm t'_{(p-3)/2}\}$. Again, as in the proof of \cref{lem:facets}, the column order of the matrices $\bm{D}'_{(p-3)/2}$, $\bar{\bm{D}}'_{(p-3)/2}$, and $\bm{E}'_{(p-3)/2}$ is not necessarily as indicated above; the order indicated is just to simplify notation. In particular, the last two all-zero columns have indices $t_{0,\downarrow} - t_{0,\uparrow} = -\sigma$ and $t_{0,\downarrow} - t_{0,\uparrow'}$, for all three matrices, while the position of the all-one vector in $\bm{E}'_{(p-3)/2}$ is $t_{0,\uparrow'} - t_{0,\downarrow}$. Finally, for the matrices $\bm{D}'_{(p-3)/2}$ and $\bar{\bm{D}}'_{(p-3)/2}$ the columns corresponding to the $\bm s'$-vectors and $\bm t'$-vectors are in positions $c$ and $t_{0,\downarrow} - t_{0,\uparrow'}-\left[c\right]_p$, respectively, where $c$ chosen according to \cref{lem:bb-doubly_symmetric_properties}. It can be readily shown that the lower right $3(p-1)-1 + (p-3)/2 \times 3(p-1)$ matrix has real rank $3(p-1)-1$. Thus, the overall matrix $\bm M_F$ has rank $(d-3)(p-1) + 3(p-1)-1 = d(p-1)-1$ which is equal to $\dim(\P)-1$ (see \cref{prop:Pjdim}). The important thing to note for this result to be true is that the position of the all-one column of $\bm E_{p-2}$ (which is $\sigma$) is different from the position of the all-one column of $\bm E'_{p-2}$ (which is $t_{0,\uparrow'} - t_{0,\downarrow}$). Thus, all inequalities are facet-defining.
\end{comment}
\section{Proof of \cref{prop:equivalent}}\label{app:equivalent}
For $p=2$, there are only two possible $\bm m$-vectors $(0,0)$ and $(0,1)$, so that the first part of the claim is trivial. Because $\mathcal T^{(0,0)} = \{ (0,1), (0,-1) \}$ and $\mathcal T^{(0,1)} = \{(0,3), (0, -3)\}$, it is obvious that $\Theta^{(0,1)}$ is obtained by multiplying each inequality in $\Theta^{(0,0)}$ by $3$, which concludes the proof for $p=2$. Thus, assume $p>2$ in the following.
By \cref{constr:hilo}, $\boldtheta_1 = \rot_\varphi(\boldt^{\bm m}_{k_1})$ and $\boldtheta'_1 = \rot_{\varphi'}(\boldt^{\bm m'}_{k'_1})$, i.e.,\ the first $p$-block of each inequality is a rotated building block of $\mathcal T^{\bm m}$ and $\mathcal T^{\bm m'}$, respectively. Hence the assumption implies
$\rot_\varphi(\boldt^{\bm m}_{k_1}) = a \rot_{\varphi'}(\boldt^{\bm m'}_{k'_1})$, and thus
\[ \set(\boldt^{\bm m}_{k_1}) = a\cdot\set(\boldt^{\bm m'}_{k'_1}) \Rightarrow \set(\boldt^{\bm m}_0) = a\cdot \set(\boldt^{\bm m'}_0) - at^{\bm m'}_{0,k'_1} + t^{\bm m}_{0,k_1}\]
by \cref{eq:property-3}. But $\min(\bm t_0^{\bm m}) = 0 = \min(\bm t_0^{\bm m'})$, so that $t_{0,k_1}^{\bm m} - a t_{0,k'_1}^{\bm m'}=0$ must hold, i.e.,\
\begin{equation}
\set(\bm t_0^{\bm m}) = a\cdot\set(\bm t_0^{\bm m'}).
\label{eq:W0aW0}
\end{equation}
As both sets in \cref{eq:W0aW0} contain integer entries only, $a$ must be rational, i.e., $a=r/s$ where $r,s>0$ ($a=0$ would imply $\set(\bm t_0^{\bm m}) = \{0\}$, contradicting \cref{def:bb}) with $\gcd(r,s) = 1$. Hence, $s$ divides all $p$ distinct nonnegative entries of $\bm t_0^{\bm m'}$, thus
%
$\max(\bm t^{\bm m'}_0) \geq s(p-1)$,
%
while by \cref{def:bb}, $\max(\bm t^{\bm m'}_0) \leq 2p-1$.
Together, this implies $s \leq 2 + \frac1{p-1}$. As $s$ is an integer and we assume $p>2$, this means $s \in \{1,2\}$.\\
\emph{Case $s=1$:} Because $0 < a = r/s \leq 1$ by assumption, $s=1$ implies $a=1$, which by \cref{eq:W0aW0} implies $\set(\bm t^{\bm m}_0) = \set(\bm t^{\bm m'}_0)$, contradicting the assumption that $\bm m \neq \bm m'$.\\
\emph{Case $s=2$:} Because $s$ divides all elements of $\set(\bm t^{\bm m'}_0)$, in this case $t^{\bm m'}_{0,\zeta}$ is even for all $\zeta \in \mathbb F_p$. But $t^{\bm m'}_{0,\zeta} = [\zeta]_\mathbb Z + m'_\zeta p$ by definition, which implies (because $p$ is odd) that $m'_\zeta = 1$ if and only if $[\zeta]_\mathbb Z$ is odd, i.e.,\ $\bm m'=(0,1,0,1,\dotsc)$ as claimed.
To see that $\bm m = (0,\dotsc,0)$, note that $s=2$ and $0 < r/s \leq 1$ implies $a = r/s = 1/2$. From \cref{eq:W0aW0} follows that
\[\max(\bm t_0^{\bm m}) = \frac12 \max(\bm t^{\bm m'}_0) = \frac12(2p-2) = p-1\]
by the structure of $\bm m'$, which implies that $\bm m = (0,\dotsc,0)$.
For the proof of the second claim, we use the following lemma.
\begin{lemma}\label{lem:2tkm}
For $k \in \{0,\dotsc,p-1\}$ and $\bm m, \bm m'$ as above, $2\boldt^{\bm m}_{k} = \rot_2(\boldt^{\bm m'}_{\varphi_2(k)})$.
\end{lemma}
\begin{IEEEproof}
By definition of $\bm m$, we have
\begin{equation}
(2\boldt^{\bm m}_k)_j = 2 t^{\bm m}_{k,j} = 2(t^{\bm m}_{0,k+j} - t^{\bm m}_{0,k}) = 2 ( [k+j]_\mathbb Z - [k]_\mathbb Z).
\label{eq:2tkm}
\end{equation}
On the other hand,
\[(\rot_2(\boldt^{\bm m'}_{\varphi_2(k)}))_j = t^{\bm m'}_{2k, 2j} = t^{\bm m'}_{0,2(k+j)} - t^{\bm m'}_{0,2k}.\]
Now, the structure of $\bm m'$ implies that, for $i\in \mathbb F_p$, $t^{\bm m'}_{0,2i} = 2[i]_\mathbb Z$. It follows that
\[t^{\bm m'}_{0,2(k+j)} - t^{\bm m'}_{0,2k} = 2 [k+j]_\mathbb Z - 2[k]_\mathbb Z\]
which equals \cref{eq:2tkm}.
\end{IEEEproof}
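The identity of \cref{lem:2tkm} is also easy to verify numerically. A minimal sketch (assuming only the definitions $t^{\bm m}_{0,j} = [j]_\mathbb Z + m_j p$ and $t_{k,j} = t_{0,k+j} - t_{0,k}$ from the text, and $\varphi_2(k) = 2k$), here for $p=7$:

```python
p = 7
m0 = [0] * p                       # class m = (0,...,0)
mp = [j % 2 for j in range(p)]     # class m' = (0,1,0,1,...)
t0 = [j + m0[j] * p for j in range(p)]    # t_{0,j} = [j]_Z + m_j p
t0p = [j + mp[j] * p for j in range(p)]

def t(t0vec, k, j):                # t_{k,j} = t_{0,k+j} - t_{0,k}
    return t0vec[(k + j) % p] - t0vec[k]

for k in range(p):
    for j in range(p):
        # 2 t^m_{k,j} = (rot_2 t^{m'}_{phi_2(k)})_j = t^{m'}_{2k,2j}
        assert 2 * t(t0, k, j) == t(t0p, (2 * k) % p, (2 * j) % p)
```

The key step of the proof is visible in the data: $t^{\bm m'}_{0,2i \bmod p} = 2[i]_\mathbb Z$ holds for all $i$, e.g., $t^{\bm m'}_{0,1} = 8 = 2 \cdot 4$ for $p = 7$.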
Now, let $\rot_\varphi(\boldtheta)^T \boldx \leq \kappa$ be in $\varphi(\Theta^{\bm m})$, then by \cref{cor:rotation-autfq} also $\boldtheta^T \boldx \leq \kappa$ is in $\Theta^{\bm m}$. Let $\boldtheta = (\boldt^{\bm m}_{k_1} \mid \dotsm \mid \boldt^{\bm m}_{k_d})$. We now show that $2\boldtheta^T \boldx \leq 2\kappa$ is in $\varphi_2(\Theta^{\bm m'})$.
By \cref{lem:2tkm},
\[2\boldtheta = (\rot_2(\boldt^{\bm m'}_{\varphi_2(k_1)}) \mid \dotsm \mid \rot_2(\boldt^{\bm m'}_{\varphi_2(k_d)})).\]
Using \cref{cor:rotation-autfq} again, the claim reduces to showing that $\boldtheta'^T\boldx \leq \kappa'$ with $\boldtheta' = (\boldt^{\bm m'}_{\varphi_2(k_1)} \mid \dotsm \mid \boldt^{\bm m'}_{\varphi_2(k_d)})$ and $\kappa' = 2\kappa$ is contained in $\Theta^{\bm m'}$. We show the latter by verifying \cref{eq:hilo-conditions} from \cref{lem:conditions}; namely, \cref{eq:Thetacondition} holds because
\begin{align*}
\sum_{i=1}^d \varphi_2(k_i) &= [2]_p \sum_{i=1}^d k_i &&\text{(by def.\ of }\varphi_2\text{)}\\
&= [2]_p [d-1]_p\sigma^{\bm m} &&\text{(\cref{eq:Thetacondition} for $\boldtheta$)}\\
&= [d-1]_p\sigma^{\bm m'}&&
\end{align*}
where the last step follows because $\sigma^{\bm m} = [p-1]_p = [-1]_p$ and $\sigma^{\bm m'} = [p-2]_p = [-2]_p$, whereas for \cref{eq:kappacondition} we compute
\begin{align*}
\kappa' = 2\kappa &= 2(d-1)t^{\bm m}_{0,\sigma^{\bm m}} - 2\sum_{i=1}^d t^{\bm m}_{0,k_i} &&\text{(\cref{eq:kappacondition} for $\kappa$)}\\
&= 2(d-1)(p-1) - \sum_{i=1}^d 2[k_i]_\mathbb Z&&\text{(as $\bm m = \bm 0$)} \\
&= (d-1)t^{\bm m'}_{0,\sigma^{\bm m'}} - \sum_{i=1}^d t^{\bm m'}_{0,\varphi_2(k_i)}&&\text{(by \cref{lem:2tkm})}
\end{align*}
which proves the claim. Hence, for every inequality in $\Phi(\Theta^{\bm m})$ there is an equivalent inequality in $\Phi(\Theta^{\bm m'})$. Furthermore, by \cref{rem:phiThetaunique} and because both sets are of the same size, the converse must also hold, which concludes the proof.
\section{Proof of \cref{thm:facetComplete}}\label{proofFacetComplete3}
We will need a technical lemma for this proof.
\begin{lemma}\label{lem:bracketBijection}
Let a finite field $\mathbb F_q$, $d\geq1$, and $\boldxi =( \xi_1,\dotsc,\xi_d) \in \mathbb F_q^d$ be given. Then, there is a permutation matrix $\bm P_\boldxi \in \{0,1\}^{dq \times dq}$ with the property that
\[ \bm P_\boldxi \mathsf F_{\mathrm v}(\bm \eta) = \mathsf F_{\mathrm v}(\bm \eta - \bm\xi)\]
for any $\bm\eta \in \mathbb F_q^d$. In particular, the linear map defined by $\bm P_\boldxi$ is bijective and maps $S^d_{q-1}$ onto itself.
\end{lemma}
\begin{IEEEproof}
Define $\bm P_\boldxi$ to be the block-diagonal matrix
\[\bm P_\boldxi = \begin{pmatrix}
\bm P_{\xi_1} \\
& \ddots &\\
&& \bm P_{\xi_d}\end{pmatrix}\]
where, for $\xi_i \in \mathbb F_q$, $\bm P_{\xi_i}$ is the $q\times q$ permutation matrix defined by
\[\bm P_{\xi_i} \mathsf f(\eta) = \mathsf f(\eta - \xi_i)\]
for $\eta \in \mathbb F_q$. Note that the above equation defines the image of $\bm P_{\xi_i}$ for all unit vectors in $\mathbb R^q$, and the right-hand side runs over all unit vectors as well, such that $\bm P_{\xi_i}$ and hence also $\bm P_\boldxi$ is a permutation matrix.
\end{IEEEproof}
\begin{example}
Let $d=1$, $q = 4$, and $\xi_1 = 2$. Then,
\[
\bm P_{\xi_1} =
\begin{pmatrix}
0&0&1&0\\
0&0&0&1\\
1&0&0&0\\
0&1&0&0
\end{pmatrix}
\]
and it is easily verified that $\bm P_{\xi_1}\mathsf f(\eta) = \mathsf f(\eta - 2)$ holds.
\end{example}
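The construction of $\bm P_{\xi_i}$ is mechanical: column $\eta$ is the unit vector $\mathsf f(\eta - \xi_i)$. The following sketch rebuilds the matrix of the example and checks the defining property; as a simplifying assumption it shifts element labels mod $q$, which for $q=4$ and $\xi_1=2$ coincides with subtraction in $\mathbb F_4$ under the usual binary labeling:

```python
q, xi = 4, 2

def f(eta):                        # unit vector indexed by eta
    return [1 if i == eta else 0 for i in range(q)]

# Column eta of P is f(eta - xi), so that P f(eta) = f(eta - xi).
P = [[1 if (col - xi) % q == row else 0 for col in range(q)]
     for row in range(q)]

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(q)) for r in range(q)]

for eta in range(q):
    assert matvec(P, f(eta)) == f((eta - xi) % q)
assert all(sum(row) == 1 for row in P)   # each row of P has exactly one 1
```

Since every column and every row contains exactly one $1$, the defining property on unit vectors indeed forces $\bm P_{\xi_1}$ to be a permutation matrix, as argued in the proof of \cref{lem:bracketBijection}.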
Let now $\mathcal Q$ denote the polytope defined by the inequalities in $\Theta$. From \cref{prop:Theta1Facets} follows that $\P \subseteq \mathcal Q$, so it remains to show that $\mathcal Q \subseteq \P$. Suppose not. Then there exists $\boldf \in \mathcal Q \setminus \P$. Since the facets in $\Delta_3^d$ by definition describe $S_2^d$ we have $\mathcal Q \subseteq S_2^d$ and hence $\boldf \in S_2^d$. Since $\boldf \notin \P$, there must be a facet $F$ of $\P$ that cuts $\boldf$ and hence also some vertex $\bm v$ of $S_2^d$. We will show that $F$ is contained in either $\Theta^{\bm m}$ or $\varphi_2(\Theta^{\bm m})$, which contradicts the assumption that $\boldf \in \mathcal Q$ and hence shows $\P = \mathcal Q$.
Note that on the vertices of $S_2^d$ the inverse of $\mathsf F_{\mathrm v}$, $\mathsf F_{\mathrm v}^{-1}\colon \mathbb R^{2d} \rightarrow \mathbb F_3^d$ is defined. Let $\boldxi = \mathsf F_{\mathrm v}^{-1}(\bm v) \in \mathbb F_3^d$. Then, $\boldxi \notin \mathcal C$. Assume $\sum_{i=1}^d \xi_i = [2]_3$; the case $\sum_{i=1}^d \xi_i = [1]_3$ is completely symmetric. We will prove that $F \in \Theta^{\bm m} \cup \varphi_2(\Theta^{\bm m})$ by going through several cases, distinguished by the Hamming distance between $\boldxi$ and the codewords for which $F$ is tight. For each case, we will derive a set of vertices of $\P$ for which $F$ could potentially be tight; then, we will show that there is an inequality in $\Theta^{\bm m} \cup \varphi_2(\Theta^{\bm m})$ that is tight for \emph{all} of those vertices, hence that inequality must define $F$.
As $F$ is a face of $\P$, there are $\bolda' \in \mathbb R^{2d}$ and $b \in \mathbb R$ such that
\begin{equation}
\bolda'^T \bm x \geq b
\label{eq:p3proof-facetInequality}
\end{equation}
is valid for $\P$ and induces $F$, i.e., $F = \{\bm x \in \P\colon \bolda'^T \bm x = b \}$.
We use the permutation matrix of \cref{lem:bracketBijection} to state \cref{eq:p3proof-facetInequality} in a more convenient form, where we use the shorthand notation $\langle\bm x \rangle_{\boldxi} = \bm P_\boldxi \bm x$ for any $\bm x \in\mathbb R^{dq}$.
Define $\bm a = \bm P_{\bm\xi}^{-T} \bm a'$. Then,
\ifonecolumn
\begin{equation}
\bm a^T \langle\bm x\rangle_\boldxi \geq b
\Leftrightarrow
\left(\bm P_{\bm\xi}^{-T} \bm a'\right)^T \bm P_\boldxi \bm x = \bm a'^T \bm P_\boldxi^{-1} \bm P_\boldxi \bm x = \bm a'^T \bm x \geq b
\label{eq:p3proof-ineq}
\end{equation}
\else
\begin{multline}
\bm a^T \langle\bm x\rangle_\boldxi \geq b
\Leftrightarrow \\
\left(\bm P_{\bm\xi}^{-T} \bm a'\right)^T \bm P_\boldxi \bm x = \bm a'^T \bm P_\boldxi^{-1} \bm P_\boldxi \bm x = \bm a'^T \bm x \geq b
\label{eq:p3proof-ineq}
\end{multline}
\fi
i.e.,\ \cref{eq:p3proof-facetInequality} can be restated in terms of $\langle \bm x\rangle_\boldxi$ instead of $\bm x$. The particular form of $\bm P_\boldxi$ further allows us to assume that $a_{i,0} = 0$ for $i \in \range d$, by adding appropriate multiples of \cref{eq:spx-eq} to the inequality (cf.\ Appendix~\ref{app:embeddings}).
\begin{remark}
The construction of $\langle\cdot\rangle_{\boldxi}$ and the way it is used above generalizes the definition and usage of the map $[\cdot]$ in the proof of \cite[Thm.~5.15]{fel03}.
\end{remark}
As $\bm v$ is cut by $F$, we have $b > \bolda^T\langle\bm v\rangle_\boldxi = \bolda^T\langle\mathsf F_{\mathrm v}(\boldxi)\rangle_\boldxi = \bolda^T\mathsf F_{\mathrm v}(\bm 0) = \sum_{i=1}^d a_{i,0} = 0$ by the above assumption. Let $\bm e^i$ denote the $i$-th unit vector in $\mathbb F_3^d$, i.e.,\ $\bm e^i_i = [1]_3$ and $\bm e^i_j = [0]_3$ for $j \neq i$. Since $\sum_{i=1}^d \xi_i = [2]_3$, $\boldxi + \bm e^i \in \mathcal C$ for $i\in \range d$. Because \cref{eq:p3proof-ineq} is valid for $\P$, $b \leq \bolda^T\langle\mathsf F_{\mathrm v}(\boldxi+\bm e^i)\rangle_\boldxi = \bolda^T \mathsf F_{\mathrm v}(\bm e^i) = a_{i,1}$. Likewise $\boldxi - \bm e^i - \bm e^j \in \mathcal C$ for $i \neq j$, hence
\[ b \leq \bolda^T\langle\mathsf F_{\mathrm v}(\boldxi-\bm e^i -\bm e^j)\rangle_\boldxi = \bolda^T\mathsf F_{\mathrm v}(2\bm e^i + 2 \bm e^j) = a_{i,2} + a_{j,2}\]
and finally from $\boldxi + \bm e^i + \bm e^j - \bm e^k \in \mathcal C$ for pairwise different $i,j,k \in \range d$ we conclude analogously that $a_{i,1} + a_{j,1} + a_{k,2} \geq b$. To sum up, for arbitrary but different $i,j,k \in \range d$ holds:
\begin{align}
b &> 0, \notag \\%\label{eq:bgr0}\\
a_{i,1} &\geq b, \label{eq:a1grb}\\
a_{i,2} + a_{j,2} &\geq b, \label{eq:a2grb}\\
a_{i,1} + a_{j,1} + a_{k,2} &\geq b. \label{eq:triplegrb}
\end{align}
Furthermore, \eqref{eq:a2grb} implies that $a_{i,2} < b/2$ can hold for at most one $i \in \range d$, which allows the stronger statement that, for $I \subseteq \range d$ with $|I| \geq 2$,
\begin{equation}
\sum_{i \in I} a_{i,2} \geq |I|\cdot b/2.\label{eq:aIgrb}
\end{equation}
\begin{lemma}\label{lem:p3proof-dh3}
$F$ contains $\mathsf F_{\mathrm v}(\boldc)$ for at least one codeword $\boldc$ with $d_{\rm H}(\boldc, \boldxi) \geq 3$.
\end{lemma}
\begin{IEEEproof}
Assume the contrary. As $F$ is a facet of the (by \cref{prop:Pjdim}) $2d$-dimensional polytope $\P$, there must be a set $\mathcal F$ of $2d$ codewords of $\mathcal C$ such that $\{\mathsf F_{\mathrm v}(\bm c)\colon \bm c \in \mathcal F\} \subseteq F$ is affinely independent and, by assumption, $d_{\rm H}(\boldxi, \boldc) \leq 2$ for all $\boldc \in \mathcal F$. We now show that $\Theta^{\bm m}$ contains an inequality $\bm \theta^T \bm x \leq \kappa$ that induces $F$.
Let $\boldc^{\boldxi} = \boldxi + \bm e^d \in \mathcal C$. Choose $\boldtheta = (\boldt^{\bm m}_{k_1} \mid \dotsc \mid \boldt^{\bm m}_{k_d})^T$ according to \cref{constr:hilo} using $\boldc^{\boldxi}$ as canonical codeword. Using $\sigma = 2$ and \cref{lem:hilo-formulas}, this implies $k_i = t_{\uparrow, c^{\bm\xi}_i} = 2 - c^{\bm\xi}_i =2- \xi_i$ for $i \in \range{d-1}$, while also $k_d = t_{\downarrow,c^{\bm\xi}_d} = -(\xi_d + 1) = 2 - \xi_d$. By \cref{lem:facets_modified}, $\boldtheta^T \bm x \leq \kappa$ with $\kappa = \boldtheta^T \mathsf F_{\mathrm v}(\boldc^\boldxi)$ defines a facet $G$ of $\P$. We now show that $\boldtheta^T \mathsf F_{\mathrm v}(\bm c) = \kappa$ for all $\bm c \in \mathcal F$.
If $d_{\mathrm H}(\boldc,\boldxi)=1$, then $\boldc = \boldxi + \bm e^i$ for some $i \in \range d$. By \cref{lem:validIndependent},
\ifonecolumn
\begin{displaymath}
\boldtheta^T\mathsf F_{\mathrm v}(\bm c) - \kappa
= \boldtheta^T\mathsf F_{\mathrm v}(\bm c^{\bm\xi} - \bm e^d + \bm e^i) - \kappa
= \begin{cases}
0&\text{if }i=d,\\
t_{\sigma,1}+t_{0,2}=-2+2=0&\text{otherwise}\end{cases}
\end{displaymath}
\else
%
%
%
%
%
%
%
\begin{align*}
\boldtheta^T\mathsf F_{\mathrm v}(\bm c) - \kappa
&= \boldtheta^T\mathsf F_{\mathrm v}(\bm c^{\bm\xi} - \bm e^d + \bm e^i) - \kappa\\
&= \begin{cases}
0&\text{if }i=d,\\
t_{\sigma,1}+t_{0,2}=-2+2=0&\text{otherwise}\end{cases}
\end{align*}
\fi
where the explicit values of $\bm t_k$ can be looked up, e.g., in \cref{tab:values_tk}. If on the other hand $d_{\mathrm H}(\bm c, \bm\xi) = 2$, then $\boldc = \boldxi + 2 \bm e^i + 2 \bm e^j$ for $i\neq j \in \range d$. Using \cref{lem:validIndependent} again,
\ifonecolumn
\begin{displaymath}
\boldtheta^T\mathsf F_{\mathrm v}(\bm c) - \kappa
= \boldtheta^T\mathsf F_{\mathrm v}(\bm c^{\bm\xi} - \bm e^d + 2\bm e^i + 2\bm e^j) - \kappa
=\begin{cases}
t_{\sigma,2}+t_{0,1} = 0&\text{if }d \in \{i,j\},\\
t_{\sigma,2}+t_{\sigma,2} + t_{0,2} =0&\text{otherwise}.\end{cases}
\end{displaymath}
\else
%
%
%
%
%
%
%
\begin{align*}
\boldtheta^T\mathsf F_{\mathrm v}(\bm c) - \kappa
&= \boldtheta^T\mathsf F_{\mathrm v}(\bm c^{\bm\xi} - \bm e^d + 2\bm e^i + 2\bm e^j) - \kappa \\
&= \begin{cases}
t_{\sigma,2}+t_{0,1} = 0&\text{if }d \in \{i,j\},\\
t_{\sigma,2}+t_{\sigma,2} + t_{0,2} =0&\text{otherwise}.\end{cases}
\end{align*}
\fi
In conclusion, $\boldtheta^T\boldx \leq \kappa$ is tight for the embeddings of \emph{all} codewords $\boldc$ with $d_{\mathrm H}(\boldc,\boldxi) \leq 2$, and in particular for all $\boldc \in \mathcal F$. Because the latter by assumption yield $2d$ affinely independent elements of $F$, which has dimension $2d-1$, they already uniquely determine the facet $F$, which hence must equal $G$, i.e.,\ $F \in \Theta^{\bm m}$.
\end{IEEEproof}
\begin{lemma}
Let $\bolddelta \in \mathbb F_3^d$. If $d_{\rm H}(\bolddelta, \boldxi) \geq 4$, $F$ does not contain $\mathsf F_{\mathrm v}(\bolddelta)$, i.\,e., $\bolda^T\langle\mathsf F_{\mathrm v}(\bolddelta)\rangle_{\boldxi} > b$. In particular, $F$ does not contain any codeword $\bm c \in \mathcal C$ with $d_{\mathrm H}(\bm c, \boldxi) > 3$.
\end{lemma}
\begin{IEEEproof}
Let $\bolddelta \in \mathbb F_3^d$ with $d_{\rm H}(\bolddelta, \boldxi) = w \geq 4$. Then there are disjoint index sets $I_1, I_2 \subseteq \range d$, $|I_1| + |I_2| = w$ such that $I_1 = \{i\colon \delta_i - \xi_i = [1]_3\}$ and $I_2 = \{i\colon \delta_i - \xi_i = [2]_3\}$.
Assume \cref{eq:p3proof-ineq} is not strictly satisfied by $\bolddelta$, i.e.,\
\[b \geq \bolda^T\langle\mathsf F_{\mathrm v}(\bolddelta)\rangle_{\boldxi} = \bolda^T\mathsf F_{\mathrm v}(\bolddelta-\boldxi) = \sum_{i \in I_1} a_{i,1} + \sum_{i \in I_2} a_{i,2}.\]
Now, \eqref{eq:a1grb} implies $|I_2| > 0$, while \eqref{eq:aIgrb} demands $|I_2| < 2$, but \eqref{eq:triplegrb} forbids $|I_2| = 1$. So the assumption must be false.
\end{IEEEproof}
From the above two lemmas we conclude that there exists a $\boldc^3 \in \mathcal C$ with $d_{\mathrm H}(\boldc^3,\boldxi) = 3$ and $\mathsf F_{\mathrm v}(\boldc^3) \in F$. As $\sum_{i=1}^d \xi_i = [2]_3$, this implies that $\boldc^3 = \boldxi + \bm e^i + \bm e^j + 2 \bm e^k$ for pairwise different $i,j,k$. Hence, $b=\bolda^T\langle\mathsf F_{\mathrm v}(\boldc^3)\rangle_{\boldxi} = a_{i,1} + a_{j,1} + a_{k,2}$, thus $a_{k,2} \leq -b$ by \eqref{eq:a1grb}, so by \eqref{eq:a2grb} $a_{l,2} \geq 2b$ for any $l\in\range{d}$ with $l \neq k$. This means that $a_{k,2}$ is the unique negative entry of $\bolda$, so that all $\boldc' \in \mathcal C$ with $d_{\rm H}(\boldc', \boldxi) \in \{2, 3\}$ and $\bolda^T\langle\mathsf F_{\mathrm v}(\boldc')\rangle_{\boldxi} = b$ must have $c'_k = c^3_k = \xi_k + 2$.
As in the proof of \cref{lem:p3proof-dh3}, there is a set $\mathcal F \subset \mathcal C$, $\abs{\mathcal F} = 2d$, such that $\bm a^T \mathsf F_{\mathrm v}(\bm c) = b$ for all $\bm c\in \mathcal F$ and the embeddings are affinely independent. From the above discussion, each $\boldc \in \mathcal F$ is either of the form $\boldc = \boldxi + \bm e^i$, $\boldc = \boldxi + 2 \bm e^i + 2 \bm e^k$, or $\boldc = \boldxi + \bm e^i + \bm e^j + 2 \bm e^k$ for $i,j \neq k$ and $i \neq j$.
We now assume wlog.\ (in view of \cref{rem:hilo-arbitrary}) that $k=d$ and use \cref{constr:hilo} with the canonical codeword $\boldc^{\boldxi} = \boldxi + \bm e^d$ to obtain an inequality $\boldtheta^T \boldx \leq \kappa$ from $\varphi_2(\Theta^{\bm m})$, where $\boldtheta = (\varphi_2(\bm t_{k_1}) \mid \dotsc \mid \varphi_2(\bm t_{k_d}))$ and $k_i = 2 - \xi_i$ as in the proof of \cref{lem:p3proof-dh3}. Analogously to above, one can check that this inequality is tight for
\begin{enumerate}
\item $\boldc^\xi$ itself,
\item $\boldc^\xi - \bm e^d + \bm e^i = \boldxi+\bm e^i$ for any $i \neq d$,
\item $\boldc^\xi + \bm e^d - \bm e^i = \boldxi + 2 \bm e^i + 2 \bm e^d$ for any $i \neq d$, and
\item $\boldc^\xi + \bm e^d + \bm e^i + \bm e^j = \boldxi + \bm e^i + \bm e^j + 2 \bm e^d$ for $i, j \neq d$ and $i \neq j$,
\end{enumerate}
i.e.,\ is tight for \emph{all} potential codewords for which \cref{eq:p3proof-ineq} is tight, which again shows that $F = \{\boldx \colon \boldtheta^T \boldx \leq \kappa\}$, contradicting the assumption that $F \notin \Theta^{\bm m} \cup \varphi_2(\Theta^{\bm m})$.
As we have gone through all cases, this concludes the proof that $\P = \Q$. Finally, the irredundancy statement follows from \cref{prop:equivalent}.
\section{Proof of \cref{lem:Theta5_p7} (Outline)} \label{app:proofTheta5}
The analog of \cref{lem:validIndependent} leads to the result that
\[ \boldtheta^T \mathsf F_{\mathrm v}(\bm c + \bm\xi) - \kappa = \sum_{i\neq i^{\rm nb}} t^{\rm b}_{\sigma^{\rm b}, \xi_i} + t^{\rm nb}_{0,\xi_{i^{\rm nb}}}\]
for any $\bm\xi \in \mathbb F_7^d$; the adaptation of \cref{cor:allValidOrNot} shows that the inequalities in $\Theta_5$ are valid for $\P$ if and only if the following holds for all $\bm c \in \mathcal C$ (in analogy to \cref{eq:allValidCondition}):
\[\sum_{i \neq i^{\rm nb}} t^{\rm b}_{\sigma^{\rm b}, c_i} + t^{\rm nb}_{0,c_{i^{\rm nb}}} \leq 0.\]
Then, the proper generalization of \cref{def:valid-class} and \cref{theorem:newValidProgram} leads to the equivalent condition that $\Theta_5$ is valid if and only if
\[ \sum_{i \in I} n_i t^{\rm b}_{\sigma^{\rm b},i} + [r]_\mathbb Z = 0\]
with $I= \{i \in \mathbb F_7\colon 0 > t^{\rm b}_{\sigma^{\rm b}, i} \geq -[\sigma^{\rm b}]_\mathbb Z\}$, variables $n_i \in \mathbb Z$, and $r = -\sum_{i \in I} [n_i]_7 \cdot i$ has no solution for which $m^{\rm nb}_r = 1$. From $\sigma^{\rm b} = 2$ it follows that $I = \emptyset$, so the system has no solution and hence all inequalities in $\Theta_5$ are valid for $\P$.
Let now $\boldtheta^T\bm x \leq \kappa \in \Theta_5$. In order to show that the inequality defines a facet, we can partially recycle the proof of \cref{lem:facets} in Appendix~\ref{prooffacets}. For the sake of consistency, assume that $i^{\rm nb} = d$, which is without loss of generality because the role of $d$ is arbitrary (cf.\ \cref{rem:hilo-arbitrary}).
For $s \in \range{(p-1)(d-1)}$, define $\bm \xi^s$ (and hence $\bm c^s$) as in \cref{eq:faceproof-xi-type1}. Note that $t_{\sigma^{\rm b}, [l]_7}^{\rm b} + t_{0, [-l]_7}^{\rm nb} = 0$ holds (even though Part~\ref{lem:bb-symmetric-property1} of \cref{lem:bb-symmetric_properties}, which is used to show the same result in the original proof, is not applicable here), such that $\boldtheta^T \mathsf F_{\mathrm v}(\bm c^s) = \kappa$ for $s \in \range{(p-1)(d-1)}$.
The construction of the next $p-2$ codewords cannot be copied from Appendix~\ref{prooffacets} because the condition $t^{\rm b}_{\sigma^{\rm b},-i} + t^{\rm b}_{\sigma^{\rm b},i-\sigma^{\rm b}} + t^{\rm nb}_{0,\sigma^{\rm b}} = 0$ does not hold. However, one can check that the following $p-2=5$ additional $\bm\xi$-vectors
\ifonecolumn
\begin{displaymath}
(0,\dotsc,0,3,2,2), (0,\dotsc,0,1,4,2), (0,\dotsc,0,4,2,1),
(0,\dotsc,0,4,4,6), (0,\dotsc,0,3,3,1)
\end{displaymath}
\else
\begin{gather*}
(0,\dotsc,0,3,2,2), (0,\dotsc,0,1,4,2), (0,\dotsc,0,4,2,1),\\
(0,\dotsc,0,4,4,6), (0,\dotsc,0,3,3,1)
\end{gather*}
\fi
satisfy $\sum_{i=1}^{d-1} t_{\sigma^{\rm b},\xi_i}^{\rm b} + t^{\rm nb}_{0,\xi_d} = 0$, such that the corresponding codewords are tight for the inequality. Furthermore, the corresponding embeddings are linearly independent, such that the counterpart of $\bm M_F$ as defined in \cref{eq:MF} has full rank $d(p-1) -1$, hence the inequality indeed defines a facet.
\section{Proof of \cref{lem:Theta6_p7}} \label{app:proofTheta6}
For the first statement, we again follow the arguments of \cref{sec:validInvalid}, but need to be careful not to rely on features of basic building block classes.
First, observe that \cref{lem:bb-hilo-plusi} generalizes to the current case; in particular, \cref{eq:bb-hi-plusi} holds for $\mathcal T_6^{\rm b}$ and $\mathcal T_6^{\mathrm{hi}}$, and \cref{eq:bb-lo-plusi} holds for $\mathcal T_6^{\mathrm{lo}}$. This allows us to generalize \cref{lem:validIndependent} (the proof of which relies on \cref{lem:bb-hilo-plusi}), resulting in
\begin{equation}
\boldtheta^T \mathsf F_{\mathrm v}(\bm c +\bm\xi)-\kappa = \sum_{i=1}^d t^{l_i}_{0,\xi_i}
\label{eq:cPlusXiTheta6}
\end{equation}
for $\bm\xi \in \mathbb F_7^d$. This immediately implies (cf.\ \cref{cor:allValidOrNot}) that the inequalities in $\Theta_6$ are valid if and only if all $\bm c \in \mathcal C$ satisfy the condition $\sum_{i=1}^d t^{l_i}_{0,c_i}\leq 0$ for all configurations of $i^{\mathrm{lo}} \neq i^{\mathrm{hi}}$. Since $\bm t^{\rm b}_0$ and $\bm t^{\mathrm{hi}}_0$ contain nonpositive entries only, this condition can be violated only if $t^{\mathrm{lo}}_{0,c_{i^{\mathrm{lo}}}} = 1$, i.e.,\ $c_{i^{\mathrm{lo}}} \in \{1,2,4\}$ and simultaneously $t^{\rm b}_{0,c_i} = 0$ (i.e.,\ $c_i = 0$) for $l_i = {\rm b}$, and also $t^{\mathrm{hi}}_{0, c_{i^{\mathrm{hi}}}} =0$, i.e.,\ $c_{i^{\mathrm{hi}}} \in \{0,1,2,4\}$. But then $\sum c_i = c_{i^{\mathrm{lo}}} + c_{i^{\mathrm{hi}}} \neq 0$, which contradicts $\bm c \in \mathcal C$ and hence concludes the proof of the first claim.
It remains to show that each inequality from $\Theta_6$ defines a facet of $\P$. To that end, let $\bm\theta^T \bm x \leq \kappa$ be such an inequality, where we assume, for the sake of notation and without loss of generality, that $i^{\mathrm{hi}} = d-1$ and $i^{\mathrm{lo}} = d$, and assume that $\bm c \in \mathcal C$ is the canonical codeword. Analogously to the proof of \cref{lem:facets}, we construct $d(p-1)-1 = 6d-1$ codewords $\bm\xi^s$, $s \in \range{6d-1}$, with the property that $\boldtheta^T\mathsf F_{\mathrm v}(\bm c + \bm \xi^s) - \kappa =0$ for $s \in \range{6d-1}$ and such that the (Flanagan) embeddings $\mathsf F_{\mathrm v}'(\bm\xi^s)$ are linearly independent.
For $s \in \range{6(d-2)}$, define vectors $\bm\xi^s = \bm\xi^{6i+l}$ (where $1 \leq l \leq 6$ and $0 \leq i \leq d-3$) by
\[
\xi^{6i+l}_j = \begin{cases}
[l]_7 &\text{if $j=i+1$},\\
[-l \bmod 3]_7 & \text{if $j=d-1$},\\
[4]_7&\text{if $j=d$ and $l \in \{1,2,3\}$},\\
[1]_7&\text{if $j=d$ and $l \in \{4,5,6\}$},\\
\phantom{-}0 &\text{otherwise} \end{cases}
\]
each of which is a codeword and satisfies, by \cref{eq:cPlusXiTheta6}, that $\boldtheta^T \mathsf F_{\mathrm v}(\bm c + \bm\xi^s) -\kappa = \sum_{i=1}^d t^{l_i}_{0,\xi^s_i} = 0$ because $t^{\rm b}_{0,\xi^s_i} = -1$ for the unique nonzero ${\rm b}$-entry, $t^{\mathrm{lo}}_{0,\xi^s_d} = 1$, and $t^{\mathrm{hi}}_{0,\xi^s_{d-1}} = 0$ by construction. The matrix whose rows are the embeddings $\mathsf F_{\mathrm v}'(\bm \xi^s)$, $s \in \range{6(d-2)}$, then has the form
\begin{equation} \label{eq:MF-Theta6}
\begin{pmatrix}
\bm{I}_6 & & & \bm A&\bm B\\
&\ddots& &\vdots&\vdots\\
& & \bm{I}_6&\bm A&\bm B\\
\end{pmatrix}
\end{equation}
with
\[\bm A = \left(\begin{smallmatrix}
0&1&0&0&0&0\\
1&0&0&0&0&0\\
0&0&0&0&0&0\\
0&1&0&0&0&0\\
1&0&0&0&0&0\\
0&0&0&0&0&0\\
\end{smallmatrix}\right)\text{ and } \bm B = \left(\begin{smallmatrix}
0&0&0&1&0&0\\
0&0&0&1&0&0\\
0&0&0&1&0&0\\
1&0&0&0&0&0\\
1&0&0&0&0&0\\
1&0&0&0&0&0
\end{smallmatrix}\right)
\]
and hence obviously has full row rank $6(d-2)$. The remaining $11$ codewords $\bm\xi^s$, $6(d-2)+1 \leq s \leq 6d-1$, are zero except for the last three entries, which are given by
\ifonecolumn
\begin{displaymath} (0,1,6), (0,2,5), (0,3,4), (0,4,3), (0,5,2), (0,6,1),
(1,4,2), (2,4,1), (3,2,2), (5,0,2), (6,4,4).
\end{displaymath}
\else
\begin{gather*} (0,1,6), (0,2,5), (0,3,4), (0,4,3), (0,5,2), (0,6,1),\\
(1,4,2), (2,4,1), (3,2,2), (5,0,2), (6,4,4).
\end{gather*}
\fi
It can be checked by hand that the condition $\bm\theta^T \mathsf F_{\mathrm v}(\bm c + \bm\xi^s) = \kappa$ holds for these codewords as well, and one can verify numerically (cf. \cref{rem:numerical-check-facet}) that their Flanagan embeddings, together with the last block-row of \cref{eq:MF-Theta6}, are linearly independent, such that they complete \cref{eq:MF-Theta6} to a matrix of rank $6d-1$, which concludes the proof.
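The rank claim can be reproduced numerically for small $d$. The sketch below assumes that the Flanagan embedding $\mathsf F_{\mathrm v}'$ maps a nonzero symbol $s \in \mathbb F_7$ to the length-$6$ unit vector with a one in position $s$ and the zero symbol to the zero vector, applied blockwise; the $11$ extra vectors are those listed above.

```python
# Numerical sketch of the rank claim: rebuild the matrix from
# eq. (MF-Theta6) together with the 11 extra embeddings for small d and
# check that the total rank is d(p-1) - 1 = 6d - 1.
import numpy as np

def emb(sym):                     # Flanagan embedding of one F_7 symbol
    v = np.zeros(6)
    if sym != 0:
        v[sym - 1] = 1.0
    return v

def emb_vec(xi):                  # blockwise embedding of a vector over F_7
    return np.concatenate([emb(s) for s in xi])

def xi_rows(d):
    rows = []
    for i in range(d - 2):        # the 6(d-2) vectors xi^{6i+l}
        for l in range(1, 7):
            xi = [0] * d
            xi[i] = l             # nonzero "b"-entry at position i+1
            xi[d - 2] = -l % 3    # entry at position d-1
            xi[d - 1] = 4 if l <= 3 else 1
            rows.append(emb_vec(xi))
    extra = [(0,1,6),(0,2,5),(0,3,4),(0,4,3),(0,5,2),(0,6,1),
             (1,4,2),(2,4,1),(3,2,2),(5,0,2),(6,4,4)]
    for t in extra:               # zero except for the last three entries
        xi = [0] * (d - 3) + list(t)
        rows.append(emb_vec(xi))
    return np.array(rows)

for d in (3, 4, 5):
    assert np.linalg.matrix_rank(xi_rows(d)) == 6 * d - 1
```

Interestingly, the rank is full over the rationals but would drop over $\mathbb F_7$, which is why the check is done in floating point.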
\section{Proof of \cref{prop:pm}} \label{proofproppm}
The proof is along the same lines as the proof of Lemmas~8 and 12 in \cite{liu14}, first showing that $\sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k = \beta$ if and only if
\[ \left[ g_j^{(\mathcal{K},\boldsymbol{\gamma}, \bm f)} \right]_p =\beta \]
for $\beta \in \mathbb F_p$. %
We first need a technical result.
\begin{lemma} \label{lem:intermediate_result_prop_pm}
For any vector $\bm c \in \mathbb F_q^d$ and its embedding $\bm f = \mathsf F_{\mathrm v}(\bm c)$,
$\left[ g_j^{(\mathcal{K},\boldsymbol{\gamma}, \bm f)} \right]_p= \beta$ if and only if $\sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k = \beta$ for all $\emptyset \neq \mathcal{K} \subset \range{m}$, $\boldsymbol{\gamma} \in (\mathbb F_p \setminus \{0\})^{|\mathcal{K}|}$, and $j \in \range d$, where $\beta \in \mathbb F_p$.
\end{lemma}
\begin{IEEEproof}
Assume that $\beta \in \mathbb F_p$ and consider a fixed $j \in \range d$. If $\sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k = \beta$, then $c_j \in \mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},h_j)$ by definition. Since $f_{j,c_j}=1$ and $f_{j,i} = 0$ for all $i \in \mathbb F_q$ with $i \neq c_j$,
\ifonecolumn
\begin{displaymath}
g_j^{(\mathcal{K},\boldsymbol{\gamma}, \bm f)} = \sum_{\eta \in \mathbb F_p \setminus \{0\}} \sum_{i \in \mathcal{B}^{(\eta)}(\mathcal{K},\boldsymbol{\gamma},h_j)} [\eta]_\mathbb Z \cdot f_{j,i}
= [\beta]_\mathbb Z \cdot f_{j,c_j} = [\beta]_\mathbb Z
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
g_j^{(\mathcal{K},\boldsymbol{\gamma}, \bm f)} &= \sum_{\eta \in \mathbb F_p \setminus \{0\}} \sum_{i \in \mathcal{B}^{(\eta)}(\mathcal{K},\boldsymbol{\gamma},h_j)} [\eta]_\mathbb Z \cdot f_{j,i} \\
&= [\beta]_\mathbb Z \cdot f_{j,c_j} = [\beta]_\mathbb Z
\end{split}
\end{displaymath}
\fi
since the sets $\mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},h_j)$ are disjoint. Thus, $\left[ g_j^{(\mathcal{K},\boldsymbol{\gamma}, \bm f)} \right]_p = [[\beta]_\mathbb Z]_p = \beta$.
For the converse, assume that
\[\left[ g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} \right]_p = \sum_{\eta \in \mathbb F_p \setminus \{0\}} \sum_{i \in \mathcal{B}^{(\eta)}(\mathcal{K},\boldsymbol{\gamma},h_j)} \eta \cdot \left[ f_{j,i} \right]_p= \beta\]
for $\beta \in \mathbb F_p$. This implies that $c_j \in \mathcal{B}^{(\beta)}(\mathcal{K},\boldsymbol{\gamma},h_j)$ and, by definition, $\sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k = \beta$. %
\end{IEEEproof}
Now, assume that $\bm f \in \mathsf F_{\mathrm v}(\mathcal C)$. We will show that this implies $\bm f \in \mathcal{E}$. By assumption there exists a codeword $\bm c \in \mathcal C$ such that $\bm f = \mathsf F_{\mathrm v}(\bm c)$. Obviously, the first two conditions of the proposition are satisfied due to the properties of the constant-weight embedding from \cref{def:Constant} (all symbols are embedded to weight-$1$ vectors of length $q$). Now, since $\bm c$ is a codeword, the syndrome $s = \sum_{j=1}^d h_j c_j = [0]_q$. Furthermore, $\mathsf{p}(s)=(0,\dotsc,0)$ (a vector of length $m$) since we are working in the field $\mathbb F_q$ where $q = p^m$. Hence, $\mathsf{p}(s)_k = \sum_{j=1}^d\mathsf{p}(h_j c_j)_k = [0]_p$ and, using \cref{lem:intermediate_result_prop_pm}, we get
\ifonecolumn
\begin{displaymath}
\sum_{j=1}^d \left[g_j^{(\mathcal{K},\boldsymbol{\gamma}, \bm f)} \right]_p = \sum_{j=1}^d \sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k
= \sum_{k \in \mathcal{K}} \gamma_k \sum_{j=1}^d \mathsf{p}(h_j c_j)_k = [0]_p
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
\sum_{j=1}^d \left[g_j^{(\mathcal{K},\boldsymbol{\gamma}, \bm f)} \right]_p &= \sum_{j=1}^d \sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k \\
&= \sum_{k \in \mathcal{K}} \gamma_k \sum_{j=1}^d \mathsf{p}(h_j c_j)_k = [0]_p
\end{split}
\end{displaymath}
\fi
which implies that the third condition of the proposition is indeed true.
Conversely, assume that $\bm f \in \mathcal{E}$ fulfills all three conditions of the proposition. From the first two conditions it follows that there exists a unique vector $\bm c \in \mathbb F_q^{d}$ such that $\bm f = \mathsf F_{\mathrm v}(\bm c)$. From the last condition of the proposition and \cref{lem:intermediate_result_prop_pm} we know that
\ifonecolumn
\begin{displaymath}
\sum_{j=1}^d \left[g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} \right]_p = \sum_{j=1}^d \sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k
= \sum_{k \in \mathcal{K}} \gamma_k \sum_{j=1}^d \mathsf{p}(h_j c_j)_k = [0]_p.
\end{displaymath}
\else
\begin{displaymath}
\begin{split}
\sum_{j=1}^d \left[g_j^{(\mathcal{K},\boldsymbol{\gamma},\bm f)} \right]_p &= \sum_{j=1}^d \sum_{k \in \mathcal{K}} \gamma_k \cdot \mathsf{p}(h_j c_j)_k \\
&= \sum_{k \in \mathcal{K}} \gamma_k \sum_{j=1}^d \mathsf{p}(h_j c_j)_k = [0]_p.
\end{split}
\end{displaymath}
\fi
Fixing $\mathcal{K}=\{k\}$ and $\boldsymbol{\gamma}=(1)$ for an arbitrary $k \in \range m$, we get
\begin{displaymath}
\sum_{j=1}^d \mathsf{p}(h_j c_j)_k = \mathsf{p}\left( \sum_{j=1}^d h_j c_j \right)_k = [0]_p
\end{displaymath}
for all $k \in \range m$, which implies that $\sum_{j=1}^d h_j c_j = [0]_q$, i.e.,\ $\bm c$ is indeed a valid codeword. Thus, $\bm f \in \mathsf F_{\mathrm v}(\mathcal C)$.
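The last step rests on the fact that the decomposition $\mathsf p(\cdot)$ into base-$p$ coordinates is additive, so the syndrome vanishes if and only if every $p$-ary coordinate sum vanishes. A toy sketch for $q = p^m = 4$ (so $p=2$, $m=2$), with a hypothetical parity-check vector $h$ chosen only for illustration:

```python
# Toy illustration for q = p^m = 4 (p = 2, m = 2): represent
# F_4 = F_2[x]/(x^2 + x + 1) as coefficient pairs (a0, a1), so that the
# decomposition map p(.) is simply the identity on these pairs.
from itertools import product

def f4_add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def f4_mul(a, b):                 # multiply modulo x^2 + x + 1
    c0 = a[0]*b[0] + a[1]*b[1]    # x^2 = x + 1 folds a1*b1 into both coords
    c1 = a[0]*b[1] + a[1]*b[0] + a[1]*b[1]
    return (c0 % 2, c1 % 2)

h = [(1, 0), (0, 1), (1, 1)]      # hypothetical single parity check over F_4

def syndrome(c):
    s = (0, 0)
    for hj, cj in zip(h, c):
        s = f4_add(s, f4_mul(hj, cj))
    return s

for c in product(product(range(2), repeat=2), repeat=3):
    # coordinate-wise sums of p(h_j c_j) over F_2, one per k in {0, 1}
    coord_sums = tuple(sum(f4_mul(hj, cj)[k] for hj, cj in zip(h, c)) % 2
                       for k in range(2))
    # the syndrome vanishes iff both p-ary coordinate sums vanish
    assert (syndrome(c) == (0, 0)) == (coord_sums == (0, 0))
```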
\balance
\section{Introduction}
One of the open problems in electrochemistry is the theoretical interpretation of Warburg's impedance. In his pioneering paper \cite{warburg} this impedance was interpreted as related to the ionic diffusion only. Analyses similar to that proposed by Warburg, following the same scheme, have been proposed more recently by several researchers. Among others, Warburg's impedance has been discussed in \cite{gerisher,buck0,buck1,buck2,bisquert,li,ju,moya,mac5,jrm1,jrm2,lai}. We have recently \cite{pccp,pre} criticized the analysis based only on the diffusion current \cite{gerisher,buck0,buck1,buck2,bisquert,li,ju,moya}, because in this case it is impossible to define the impedance of the cell in the standard manner, since the electric current is position dependent. The apparent inconsistency between the existence of a position-dependent current, vanishing at infinity in the case of a half space, and the definition of the impedance of the system as the ratio between the applied potential and the current entering the sample, has been a source of contention and puzzlement for physicists and chemists since the original paper of Warburg \cite{jrm1,jrm2,lai}.
In the present paper, we work out the details of the solution of the problem for the simplest case of two univalent ions, in the absence of generation-recombination, when the electrodes can be described by an Ohmic model. For this simple case the mathematics is not too complicated, and one can easily see through it to the underlying physical meaning of what is going on. In particular, it is possible to show that Warburg's impedance originates from the difference between the diffusion coefficients of the positive and negative ions, and from the non-blocking character of the electrodes.
\section{Model}
We consider an electrolytic cell containing ions. Their bulk density, in thermodynamical equilibrium, is $n_0$. When the thermodynamical equilibrium is perturbed, the actual bulk densities of the positive and negative ions are $n_p$ and $n_m$. The ionic current densities of positive and negative ions are ${\bf {J_p}}=-D_p\,\nabla n_p+\mu_p\,n_p {\bf E}$, and ${\bf {J_m}}=-D_m\,\nabla n_m-\mu_m\,n_m {\bf E}$,
where $D_p$, $D_m$, and $\mu_p$, $\mu_m$ are the diffusion and mobility coefficients of the positive and negative ions, respectively. The actual electric field in the medium is related to the net charge density by Poisson's equation $\nabla\cdot {\bf E}=(q/\varepsilon) (n_p-n_m)$,
where $q$ is the electric charge of the ions, assumed monovalent, and $\varepsilon$ the dielectric constant of the liquid, free of ions. For the frequency range considered by us the electric field can be considered conservative and related to the electric potential by ${\bf E}=-\nabla V$.
The conservation of particles is described by the equations of continuity for the two types of ions, which in the absence of generation-recombination are $n_{p,t}=-\nabla \cdot {\bf {J_p}}$, and $n_{m,t}=-\nabla \cdot {\bf{ J_m}}$, where we use the comma notation $f_{,x}=\partial f/\partial x$, $f_{,xx}=\partial^2 f/\partial x^2$ and so on. We limit our considerations to a sample in the shape of a slab of thickness $d$, and assume the validity of the Einstein--Smoluchowski relation $\mu_p/D_p=\mu_m/D_m=q/(K_BT)$. The cartesian reference frame used for the description has the $z$-axis normal to the limiting surfaces, coinciding with the electrodes, at $z=\pm d/2$. We indicate by
$u_p=(n_p-n_0)/n_0$, $u_m=(n_m-n_0)/n_0$, and $u_v=q V/(K_BT)$,
the relative variations of the ionic bulk densities of positive and negative ions, and the electric potential expressed in units of the thermal voltage $v_{th}=K_BT/q$, respectively. With these definitions, and with $\lambda=\sqrt{\varepsilon K_BT/(2 n_0 q^2)}$ denoting Debye's length, the fundamental equations of the model are
\begin{eqnarray}
\label{6} u_{p,t}&=&D_p(u_p+u_v)_{,zz},\\
\label{7}u_{m,t}&=&D_m(u_m-u_v)_{,zz},\\
\label{8}u_{v,zz}&=&-(u_p-u_m)/(2 \lambda^2).
\end{eqnarray}
Instead of $D_p$ and $D_m$ we use the quantities $D$ and $\Delta$ defined by $D_p=D/(1-\Delta)$ and $D_m=D/(1+\Delta)$
from which it follows that
$D=2 D_p D_m/(D_p+D_m)$ and $\Delta=(D_p-D_m)/(D_p+D_m)$.
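The two parametrizations can be checked against each other numerically (the diffusivity values below are arbitrary examples):

```python
# Consistency check of the reparametrization: with D_p = D/(1-Delta) and
# D_m = D/(1+Delta), one recovers D = 2*D_p*D_m/(D_p+D_m) (the ambipolar
# coefficient) and Delta = (D_p-D_m)/(D_p+D_m).
D_p, D_m = 2.0e-9, 0.5e-9         # example values, m^2/s

D = 2 * D_p * D_m / (D_p + D_m)
Delta = (D_p - D_m) / (D_p + D_m)

assert abs(D / (1 - Delta) - D_p) < 1e-15
assert abs(D / (1 + Delta) - D_m) < 1e-15
```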
The diffusion coefficient $D$ coincides with the ambipolar diffusion coefficient \cite{gb}. We define the unit of time as
$t_u=\lambda^2/D=1/\omega_D$, where $\omega_D=D/\lambda^2$ is Debye's circular frequency related to the ambipolar diffusion. In the following we use the dimensionless units $\zeta=z/\lambda$ and $\tau=t/t_u$.
Consequently $-M\leq \zeta \leq M$, where $M=d/(2 \lambda)$. In terms of the dimensionless parameters and coordinates, the fundamental equations of the problem can be rewritten as
\begin{eqnarray}
\label{12}(1-\Delta)u_{p,\tau}&=&(u_p+u_v)_{,\zeta \zeta},\\
\label{13}(1+\Delta)u_{m,\tau}&=&(u_m-u_v)_{,\zeta \zeta},\\
\label{14}u_{v,\zeta\zeta}&=&-(u_p-u_m)/2,
\end{eqnarray}
forming a linear system of partial differential equations. In the following we are interested in determining the electrical impedance of the cell under investigation and its dependence on the circular frequency of the externally applied potential difference. For this reason we limit the analysis to the case where the external power supply is such that
$V(\pm d/2,t)=\pm (V_0/2)\,\exp(i \omega t)=\pm(V_0/2)\,\exp(i \Omega \tau)$, where $\Omega=\omega/\omega_D$ is the dimensionless circular frequency expressed in units of the ambipolar Debye circular frequency. In terms of dimensionless quantities the boundary conditions on the reduced electric potential $u_v$ are
\begin{equation}
\label{15}u_v(\pm M,\tau)=\pm(u_0/2)\,\exp(i \Omega \tau).
\end{equation}
Since the system of Eq.s(\ref{12},\ref{13},\ref{14}) is linear, the steady-state solutions we are looking for have the functional form $[u_p,u_m,u_v](\zeta,\tau)=[\phi_p,\phi_m,\phi_v](\zeta)\,\exp(i \Omega \tau)$.
Substituting this ansatz into Eq.s(\ref{12},\ref{13},\ref{14}) we get the system of ordinary differential equations
\begin{eqnarray}
\label{17}i\Omega(1-\Delta)\phi_p&=&\phi_p''+\phi_v'',\\
\label{18}i\Omega(1+\Delta)\phi_m&=&\phi_m''-\phi_v'',\\
\label{19}\phi_v''&=&-(\phi_p-\phi_m)/2,
\end{eqnarray}
where the prime denotes differentiation with respect to $\zeta$. The boundary conditions of the problem are related to the presence of the external power supply, Eq.s(\ref{15}), and to the nature of the electrodes. In the following we assume that the electrodes are identical in all respects, and that the exchange of electric charge on them is described by the Ohmic model $J_p=\kappa_p E$, and $J_m=-\kappa_m E$,
for all $\tau$, at $\zeta=\pm M$. As we have shown elsewhere \cite{ioannis}, the Ohmic boundary conditions are equivalent to the Chang-Jaffe boundary conditions. In the following, instead of $\kappa_p$ and $\kappa_m$ we use the quantities $\kappa$ and $\delta$ defined by $\kappa_p=\kappa(1+\delta)$, and $\kappa_m=\kappa(1-\delta)$, from which it follows that
$\kappa=(\kappa_p+\kappa_m)/2$, and $\delta=(\kappa_p-\kappa_m)/(\kappa_p+\kappa_m)$.
In terms of the dimensionless quantities and coordinates, the Ohmic boundary conditions can be rewritten as
\begin{eqnarray}
\label{23}\phi_p'+[1-h(1+\delta)(1-\Delta)]\phi_v'&=&0,\\
\label{24}\phi_m'-[1-h(1-\delta)(1+\Delta)]\phi_v'&=&0,
\end{eqnarray}
where $h=\kappa/\kappa^*$, and $\kappa^*=q D n_0/(K_BT)$. The case of blocking electrodes is obtained when $h=0$. We observe that even if $\delta=0$, there is an anisotropy in (\ref{23},\ref{24}) when $\Delta\neq 0$.
From Eq.s(\ref{17},\ref{18},\ref{19}) we get
\begin{eqnarray}
\label{25}\phi_p''-\,\frac{1+i2 \Omega(1-\Delta)}{2}\,\phi_p+\frac{1}{2}\phi_m&=&0,\\
\label{26}\phi_m''-\,\frac{1+i2 \Omega(1+\Delta)}{2}\,\phi_m+\frac{1}{2}\phi_p&=&0,
\end{eqnarray}
whose solutions are
\begin{eqnarray}
\label{27}\phi_p(\zeta)&=&C_{pa}\sinh(\mu_a \zeta)+C_{pb}\sinh(\mu_b \zeta),\\
\label{28}\phi_m(\zeta)&=&k_a C_{pa}\sinh(\mu_a \zeta)+k_b C_{pb}\sinh(\mu_b \zeta),
\end{eqnarray}
where $C_{pa}$ and $C_{pb}$ are integration constants,
\begin{eqnarray}
\label{29}\mu_{a,b}=\sqrt{\frac{1+2 i \Omega\mp\sqrt{1-4 \Omega^2 \Delta^2}}{2}},
\end{eqnarray}
are the characteristic complex lengths, and
\begin{eqnarray}
\label{31}k_{a,b}=-2\left[\mu_{a,b}^2-\,\frac{1+i2 \Omega(1-\Delta)}{2}\right].
\end{eqnarray}
Consequently, the $\zeta$-part of the reduced electric potential is given by
\begin{eqnarray}
\label{33}\phi_v(\zeta)=&-&\left\{\frac{1-k_a}{2\mu_a^2}C_{pa}\sinh(\mu_a \zeta)+\frac{1-k_b}{2\mu_b^2}C_{pb}\sinh(\mu_b \zeta)\right\}\nonumber\\
&+&C_v \zeta,
\end{eqnarray}
where $C_v$ is another integration constant. The integration constants $C_{pa}$, $C_{pb}$ and $C_v$ have to be determined by means of the boundary conditions (\ref{15},\ref{23},\ref{24}).
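That $\mu_{a,b}$ and $k_{a,b}$ of Eq.s(\ref{29},\ref{31}) indeed solve the coupled system (\ref{25},\ref{26}) with the $\sinh$ ansatz can be verified numerically; the values of $\Omega$ and $\Delta$ below are arbitrary.

```python
# Numerical check: mu_a^2 and mu_b^2 as given above are the two roots of
# the characteristic condition obtained by inserting
# phi_p = C sinh(mu zeta), phi_m = k C sinh(mu zeta) into the coupled
# equations for phi_p and phi_m, and k_{a,b} then solves both equations.
import cmath

Omega, Delta = 0.7, 0.4
A_p = (1 + 2j * Omega * (1 - Delta)) / 2   # coefficient of phi_p
A_m = (1 + 2j * Omega * (1 + Delta)) / 2   # coefficient of phi_m

root = cmath.sqrt(1 - 4 * Omega**2 * Delta**2)
mu2_a = (1 + 2j * Omega - root) / 2
mu2_b = (1 + 2j * Omega + root) / 2

for m2 in (mu2_a, mu2_b):
    k = -2 * (m2 - A_p)                    # definition of k_{a,b}
    assert abs(m2 - A_p + k / 2) < 1e-12   # equation for phi_p
    assert abs(k * (m2 - A_m) + 0.5) < 1e-12   # equation for phi_m
```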
\section{Impedance of the cell}
The total electric current density is given by
\begin{equation}
\label{34}{\bf j}=q({\bf J_p}-{\bf J_{m}})+\varepsilon{\bf E}_{,t},
\end{equation}
where the first contribution represents the conduction current, and the second the displacement current. In the slab geometry ${\bf j}$ has only a $z$-component, which in terms of dimensionless quantities is
\begin{equation}
\label{35}j=j_0\,\left\{\frac{\phi_p'}{1-\Delta}-\frac{\phi_m'}{1+\Delta}+2\,\frac{1+i \Omega(1-\Delta^2)}{1-\Delta^2}\,\phi_v'\right\}e^{i \Omega \tau},
\end{equation}
where $j_0=q n_0 D/\lambda$. The current density $j(\zeta,\tau)$ is such that $\partial j/\partial z=0$,
as is easy to verify by means of Eq.s(\ref{17},\ref{18},\ref{19}). Substituting (\ref{27},\ref{28},\ref{33}) into (\ref{35}) we get
\begin{equation}
\label{37}j=2j_0\,C_v\,\,\frac{1+i \Omega (1-\Delta^2)}{1-\Delta^2}\,\,e^{i \Omega \tau},
\end{equation}
which is $\zeta$ independent, as expected.
The electric impedance of the cell, defined by $Z=\Delta V(t)/(j(t)\, S)$, where $S$ is the surface area of the electrodes and $\Delta V(t)=V(d/2,t)-V(-d/2,t)$ is the potential difference applied to the cell by means of the external power supply, is found to be
\begin{equation}
\label{38}Z=R_u\,\,\,\frac{u_0 (1-\Delta^2)}{C_v[1+i \Omega (1-\Delta^2)]},
\end{equation}
where $R_u=\lambda^3/(\varepsilon D S)$
is an intrinsic resistance defined in terms of the surface area of the cell and the physical parameters of the medium.
As stated above, the integration constants $C_{pa}$, $C_{pb}$ and $C_v$ have to be determined by means of the boundary conditions (\ref{15},\ref{23},\ref{24}). They will not be reported in the paper because their expressions are rather lengthy. We first discuss, in general terms, the predicted frequency dependencies of the real, $R$, and imaginary, $X$, parts of the impedance, and in particular the numerically obtained parametric plot of $-X$ versus $R$. After that, by means of a reasonable approximation, we will show where the Warburg dependence of $X$ versus $R$ comes from, which has not been correctly explained before \cite{warburg,gerisher,buck1,buck2,bisquert,li,ju,moya}.
The anisotropy in the surface conductivity, $\delta$, does not play an important role in the frequency dependence of $Z$, and in the following this parameter is set to $\delta=0$. The parameters playing a fundamental role in the existence of the Warburg dependence are the anisotropy in the diffusion coefficients, $\Delta =(D_p-D_m)/(D_p+D_m)$, and the surface conductivity $h=\kappa/\kappa^*$. Since $\kappa^*$ plays the role of a characteristic intrinsic surface conductivity, we limit our analysis to the case $h\sim 1$. As for $\Delta$, we assume $0\leq \Delta \leq 1$. Of course $\Delta=0$ corresponds to $D_p=D_m$, and $\Delta=1$ to $D_p\gg D_m$. The value of $M=d/(2\lambda)$ is usually very large, and in our numerical calculations it is set to $M=10^3$.
In Fig.~1 we show $r=R/R_u$ (a) and $x=X/R_u$ (b) versus $\Omega=\omega/\omega_D$, where $\omega_D=D/\lambda^2$ is Debye's circular frequency related to the ambipolar diffusion, for $M=10^3$, $h=1$ and $\Delta=0.2,0.4,0.6,0.8$. For $\Delta\neq 0$ and $h\sim 1$ the spectrum of $r$ versus $\Omega$ shows the existence of two plateaux: one related to the free diffusion, $r_f=2(1-\Delta^2)M$, and the other to the ambipolar diffusion, $r_a=2 M$. The calculation of the limit of $r$ for $\Omega\to 0$, by means of the full expression of $C_v$, gives, in the limit of large $M$,
\begin{equation}
\label{40}r_0= 2\,\frac{h (M-1)+1}{h},
\end{equation}
that for $M\gg 1$ and $h M\gg 1$ is independent of $h$ and equal to $r_0\sim 2 M=r_a$. In the same framework, the spectrum of $-x$ versus $\Omega$ presents two maxima, at the frequencies $\Omega_{\ell}=[\pi/(2 M)]^2$ and $\Omega_h=1/(1-\Delta^2)$, related to the ambipolar and free diffusion, respectively \cite{gb}. The parametric plot of $-x$ versus $r$ gives information on the Warburg-like impedance. It presents a circle in the high frequency region, whose radius is very close to $r_f$. Decreasing the circular frequency, it presents the typical Warburg dependence, and decreasing $\Omega$ further, the dependence is again of circular type, and the circle ends at $r_a$ \cite{jamnik}.
In Fig.~1c we show the parametric plot of $-x$ versus $r$. Increasing $\Delta$ decreases the radius of the circle in the high-frequency region, related to free diffusion, as expected since $r_f=2(1-\Delta^2)M$. A similar analysis for $h$ ranging from $0.1$ to $2$ shows that the parametric plot is independent of $h$ in this range. This parameter is important only when it is rather small, i.e. when $hM\sim1$, a limit that is not of interest in our analysis.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{figunite.eps}
\caption{Frequency dependence of $r=R/R_u$, a, and $x=X/R_u$, b, and parametric plot of $-x$ versus $r$, c, for $h=1$, $M=10^3$ and $\Delta$ equal to $0.2$ (dotted), $0.4$ (dashed), $0.6$ (dotted-dashed), and $0.8$ (continuous). From c it is evident that, as $\Delta$ increases, the radius of the circle in the high-frequency region decreases and the linear part extends.}
\label{R}
\end{figure}
The complete expressions of $C_v$ and $Z$, not reported here, contain terms linear in $\Omega$ and terms of the type $1-(\Delta \Omega)^2$. Since the region of interest for the Warburg behaviour corresponds to $\Omega<1$, and $\Delta<1$, we neglect $(\Delta \Omega)^2$ with respect to one. Within this approximation the impedance of the cell, in the case $\delta=0$, is given by
\begin{equation}
\label{41}Z=R_{\infty}\,\frac{(1-\Delta^2)+G+\Delta^2 \tanh(M\sqrt{i \Omega})/(M \sqrt{i \Omega})}{1+i(1-\Delta^2)\Omega},
\end{equation}
where $R_{\infty}=\lambda^2 d/(\varepsilon D S)$, and
\begin{equation}
\label{42}G=\frac{1-h-i\Delta^2}{M \sqrt{1+i \Omega}(h+i \Omega)}.
\end{equation}
The approximate formula (\ref{41}) practically coincides with the exact one over the whole frequency range.
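As a numerical sanity check of the plateau structure (a sketch, not the full calculation: we use Eq.~(\ref{41}) with the small $G$ term dropped, and we assume the normalization $R_u=R_\infty/(2M)$, which is consistent with the quoted plateau values $r_a=2M$ and $r_f=2(1-\Delta^2)M$):

```python
import numpy as np

def r_spectrum(Omega, Delta, M):
    # Real part of the normalized impedance r = Re(Z)/R_u, from Eq. (41)
    # with the small G term dropped and the assumed normalization
    # R_u = R_inf/(2M).
    s = M * np.sqrt(1j * Omega)
    Z = ((1 - Delta**2) + Delta**2 * np.tanh(s) / s) / (1 + 1j * (1 - Delta**2) * Omega)
    return 2 * M * Z.real

M, Delta = 1e3, 0.6
r_ambipolar = r_spectrum(1e-10, Delta, M)  # low-frequency plateau, expected ~ 2M
r_free = r_spectrum(1e-2, Delta, M)        # intermediate plateau (1/M^2 << Omega << 1),
                                           # expected ~ 2(1 - Delta^2) M
```

With these parameters the two plateaux come out close to $r_a=2000$ and $r_f=1280$, as in Fig.~1a.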
From Eqs.~(\ref{41}) and (\ref{42}) it is possible to derive some rather important conclusions on the existence of the Warburg behaviour.
First of all we observe that for $\Omega\to 0$
\begin{equation}
\label{42-1}\Delta^2\,\frac{\tanh(M\sqrt{i \Omega})}{M \sqrt{i \Omega}}\to \Delta^2 ,\quad\quad G\to \frac{1-h- i \Delta^2}{M h},
\end{equation}
whereas for $\Omega \to \infty$ the two terms tend to zero, but
\begin{equation}
\label{42-2}\frac{\tanh(M\sqrt{i \Omega})}{M \sqrt{i \Omega}}\to \frac{1}{M \sqrt{i \Omega}} ,\quad\quad G\to \frac{1-h-i \Delta^2}{M (i \Omega)^{3/2}}.
\end{equation}
Hence, for $h\sim 1$, in the two limits the $G$ term is negligible with respect to the term $\tanh(M\sqrt{i \Omega})/(M \sqrt{i \Omega})$. A simple numerical analysis allows one to verify that this holds over the whole frequency range when $h\sim 1$. Consequently, Eq.~(\ref{41}) is well approximated by
\begin{equation}
\label{41-1}Z=R_{\infty}\,\frac{(1-\Delta^2)+\Delta^2 \tanh(M\sqrt{i \Omega})/(M \sqrt{i \Omega})}{1+i(1-\Delta^2)\Omega}.
\end{equation}
Formula (\ref{41-1}), for $\Delta \to 1$, i.e. when one of the diffusion coefficients is much larger than the other, can be rewritten as
\begin{equation}
\label{42-3}Z= R_{\infty} \frac{\tanh(M\sqrt{i \Omega})}{M\sqrt{i \Omega}},
\end{equation}
which coincides with the expression reported in \cite{warburg,gerisher,buck1,buck2,bisquert,li,ju,moya}. From this result it follows that, although the analysis reported in \cite{warburg,gerisher,buck1,buck2,bisquert,li,ju,moya} is not correct, the result obtained there is sound. We stress that Eq.~(\ref{42-3}) is valid only in the limit $\Delta\to 1$, which means, for instance, $D_p\gg D_m$; in this case $D\sim 2 D_m$. Of course, for $\Delta=1$, i.e. $D_m=0$, $Z$ diverges and the Warburg impedance is absent. We have already underlined that when only one group of ions is mobile, the Warburg impedance is not predicted by the Poisson-Nernst-Planck model \cite{pccp,pre}.
In the pioneering paper of Warburg \cite{warburg}, where only the diffusion current was considered, the expression of the impedance is proportional to $\tanh(M\sqrt{i \Omega})/\sqrt{i \Omega}$.
This term is present in our analysis, and it is directly connected to $\Delta$. Hence, a necessary condition to observe the Warburg impedance is $D_p\neq D_m$. However, this condition is not sufficient. In fact, if the electrodes are blocking, and hence $h=0$, the $G$ term, defined by (\ref{42}), becomes
\begin{equation}
\label{44}G(h=0)=\frac{1-i\Delta^2}{i \Omega\,M\,\sqrt{1+i \Omega}},
\end{equation}
which, in the low-frequency region, dominates over $\Delta^2 \tanh(M\sqrt{i \Omega})/(M \sqrt{i \Omega})$, so that the linear term disappears.
Equation (\ref{41-1}) is more general than those reported in \cite{warburg,gerisher,buck1,buck2,bisquert,li,ju,moya}, and can give information in the high-frequency range. In fact, for $M\sqrt{\Omega}\gg 1$, i.e. $\Omega \gg 1/M^2$, we have $\tanh(M\sqrt{i \Omega})\simeq 1$, and Eq.~(\ref{41-1}) can be rewritten as
\begin{equation}
\label{45}Z=R_{\infty}\frac{(1-\Delta^2)+(1-i) \Delta^2/(M \sqrt{2 \Omega})}{1+i(1-\Delta^2)\Omega},
\end{equation}
from which it is possible to derive the effective resistance and reactance, in the series representation, of the impedance of the cell, along the lines suggested by \cite{jamnik,yeh}.
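A quick numerical comparison (a sketch, not taken from the paper) confirms that Eq.~(\ref{45}) reproduces Eq.~(\ref{41-1}) as soon as $M\sqrt{\Omega}\gg 1$:

```python
import numpy as np

def z_full(Omega, Delta, M):
    # Eq. (41-1): impedance in units of R_inf
    s = M * np.sqrt(1j * Omega)
    return ((1 - Delta**2) + Delta**2 * np.tanh(s) / s) / (1 + 1j * (1 - Delta**2) * Omega)

def z_highfreq(Omega, Delta, M):
    # Eq. (45): high-frequency form, valid once tanh(M sqrt(i Omega)) ~ 1,
    # using 1/sqrt(i) = (1 - i)/sqrt(2)
    return ((1 - Delta**2) + (1 - 1j) * Delta**2 / (M * np.sqrt(2 * Omega))) \
        / (1 + 1j * (1 - Delta**2) * Omega)

M, Delta = 1e3, 0.5
rel_err = max(abs(z_full(W, Delta, M) - z_highfreq(W, Delta, M)) / abs(z_full(W, Delta, M))
              for W in (1e-3, 1e-2, 1e-1))
# rel_err is at the level of machine precision for these Omega >> 1/M^2
```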
\section{Conclusions}
We have investigated the origin of the Warburg impedance in electrolytic cells. For simplicity we assumed that only one group of positive and negative ions is present in the liquid and that the recombination-generation of ions can be neglected. The theoretical analysis presented is based on the Poisson-Nernst-Planck model, where the dynamical evolution of the bulk density of ions and the actual electric potential in the cell are described by the continuity equations and by the Poisson equation. The non-blocking character of the electrodes is described by means of an Ohmic model. We have shown that, to observe a Warburg-like impedance, the diffusion coefficient of the positive ions has to differ from that of the negative ones, and furthermore that the electrodes must not be blocking. Our analysis can easily be generalized to take into account more groups of ions, or different boundary conditions for the charge exchange between the cell and the external circuit. The result of our analysis justifies the expression for the Warburg impedance proposed previously by several groups on the basis of incorrect assumptions.
{\bf Acknowledgment}
This work was supported by the MEPhI Academic Excellence Project (agreement with the Ministry of Education and Science of the Russian Federation of August 27, 2013, project no. 02.a03.21.0005). Many thanks are due to Antonio Scarfone for useful discussions.
\DeclarePairedDelimiter{\snorm}{\lVert}{\rVert}
\hypersetup{
colorlinks=true,
linkcolor=purple,
citecolor=red,
urlcolor=Blue
}
\begin{document}
\title{\sffamily On the origin of the quantum group symmetry \\ in 3d quantum gravity }
\author[1,2]{\sffamily Ma\"it\'e Dupuis\thanks{mdupuis@perimeterinstitute.ca}}
\author[1]{\sffamily Laurent Freidel\thanks{lfreidel@perimeterinstitute.ca}}
\author[2]{\sffamily Florian Girelli\thanks{florian.girelli@uwaterloo.ca}}
\author[2]{\sffamily Abdulmajid Osumanu \thanks{a3osumanu@uwaterloo.ca}}
\author[2]{\sffamily Julian Rennert \thanks{jrennert@uwaterloo.ca}}
\affil[1]{\small Perimeter Institute for Theoretical Physics, 31 Caroline St. N., Waterloo, ON N2L2Y5, Canada}
\affil[2]{\small Department of Applied Mathematics, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada, N2L 3G1}
\maketitle
\abstract{ It is well-known that quantum groups are relevant to describe the quantum regime of 3d gravity. They encode a deformation of the gauge symmetries parametrized by the value of the cosmological constant. They appear as a form of regularization either through the quantization of the Chern-Simons formulation or the state sum approach of Turaev-Viro.
Such deformations are perplexing from a continuum and classical picture, since the action is defined in terms of undeformed gauge invariance. We present here a novel way to derive such a quantum group deformation from first principles and from the classical action. The argument relies on two main steps. First, we perform a canonical transformation, which deforms the gauge invariance and the boundary symmetries and makes them depend on the cosmological constant. Second, we implement a discretization procedure relying on a truncation of the degrees of freedom from the continuum.
}
\tableofcontents
\section*{Introduction}
\label{sec:intro}
When constructing a quantum theory, it is essential to identify the system's relevant symmetries. Symmetries provide, thanks to Noether's theorem \cite{Olver}, a non-perturbative handle which enables us to limit the quantization ambiguities, for example by demanding that such symmetries are preserved upon quantization. They permit a powerful organization of the spectra by allowing the quantum states to form a representation of the symmetry group.
For gauge theories, such as gravity, it appears that this powerful tool is not available.
Indeed, it is often believed that there are no symmetries in gravity, only gauge invariances.
This leaves no means to use the power of having non-trivial conserved charges. Gauge invariances are conventionally understood \cite{Rozali} to be mere redundancies of the parametrization and, therefore, cannot help us to organize the quantum spectra. Physical states cannot be distinguished or labelled by the canonical generators associated with gauge invariance since by definition, these vanish on all physical states.
We have a state of complete degeneracy, which is another expression of the celebrated problem of time \cite{Isham:1992ms}, and this is the main reason behind the challenge of constructing a theory of quantum gravity.
Although there is no doubt that gauge invariance implies redundancy of the parametrization, there is a lingering sense that there is more to it \cite{Rovelli:2013fga}. After all, different formulations of gravity
such as the canonical formulation \cite{Arnowitt:1962hi}, the metric formulation \cite{Wald:1984rg}, the tetrad formulation \cite{Trautman:2006fp}, the teleparallel formulation \cite{tele} or shape-dynamics \cite{Barbour:2011dn}, possess different levels of redundancies and seem to present different advantages. Moreover, one seldom studies gravity in a fully gauge-fixed form, such as \cite{Grant:2009zz, Verlinde:1991iu}, which would be the most natural and beneficial option if redundancy were all there is to gauge invariance.
It is therefore natural to wonder whether there can be some other types of ``hidden'' symmetries that could be essential in the construction of the quantum theory, and whether such hidden symmetries could entertain a profitable relationship with the notion of gauge invariance.
Critical examples of hidden symmetries in field theories are dualities \cite{Seiberg}, which are not manifest in the bulk Lagrangian. Other examples of hidden symmetries are dynamical symmetries \cite{Faddeev} that arise in integrable systems.
One of the first and strongest indications that there are such ``hidden'' symmetries in gravity comes from the Turaev-Viro (TV) model \cite{TV}, which is an extension of the Ponzano-Regge model \cite{Ponzano-Regge}. Indeed, in the presence of a cosmological constant, the quantum gravity partition function can be constructed in terms of
spin network states satisfying the intertwining properties of quantum groups \cite{Chari:1994pz, Kassel}.
The TV model provides a discretization of the gravity path integral.
This discretization satisfies two fundamental properties: first, each building block, given by the quantum group 6j symbol, is related in the limit of small Planck constant to the exponential of the classical gravity action \cite{Taylor}; second, the partition function is invariant under refinement and hence defines a continuum theory.
The puzzle comes from the fact that there seems to be no sign of quantum groups in the continuum theory, so that they appear only after discretization and quantization.
Other mathematical justifications for quantum group symmetries in the context of 3d quantum gravity also originate from the fact that one can relate the TV model
to the quantization of Chern-Simons (CS)
\cite{ROBERTS, Freidel:2004nb, alex2010, turaev2010}, and then prove that quantum groups
appear in the definition of the quantum CS theory.
For instance, the conjecture that quantum groups enter the construction of the CS partition function was first made by Witten \cite{Witten:1988hc} and proven by Reshetikhin-Turaev \cite{RT}. Another important piece of evidence comes from the construction by Fock and Rosly \cite{Fock:1998nu} of a discrete version of the CS phase space, which includes from the get-go arbitrary sets of classical R-matrices.
The quantization of this discrete phase space, in terms of quantum groups, was achieved by Alekseev et al. \cite{Alekseev:1994au, Alekseev:1994pa}.
These approaches are top-down in the sense that quantum groups are postulated in the construction of partition functions or states or algebras and then justified by the consistency of their mathematical properties \emph{but not derived from first principles}.
In all these approaches, the R-matrix, which is the quantum group structure constant, is introduced by hand in the discretization and quantization processes.
There have also been many attempts to try to understand the appearance of quantum groups from a physical perspective. In \cite{Freidel:1998pt}, it was argued that quantum group deformation perturbatively appears in the limit of small cosmological constant.
The works \cite{Noui:2011aa, Pranzetti:2014xva} showed that the quantum group structure could appear in the regularization of the Hamiltonian constraint.
In \cite{Bonzom:2014bua} a deformation of the Hamiltonian constraint, such that its kernel contains the TV amplitude, was found.
We should also mention
the seminal works \cite{Gawedzki:1990jc, Falceto_1993}, where the quantum group symmetry is identified at the classical level for the Wess-Zumino model. While this is not the gravity context, the approach used there was an inspiration for our current work.
Despite all these attempts, no actual derivation of the TV model from a gravity action exists, at least not to the level of satisfaction achieved for the Ponzano-Regge model, where the undeformed symmetry appears \cite{Freidel2004, Barrett:2008wh, Freidel:2002dw, Bonzom:2012mb}.
All the justifications listed here point to the fact that the quantum group is the right symmetry to implement in the discrete and quantum regime,
and that this symmetry somehow respects the dynamics of the theory.
However, it is unclear what this symmetry exactly corresponds to. It cannot merely be gauge invariance since the Lorentz gauge group is independent of the cosmological constant. Also, it has to be appreciated that quantum groups introduce a preferred direction that selects a Cartan subalgebra from the onset. The source and nature of this preferred direction have been a long-standing puzzle.
\medskip
The question we would like to address here is \textbf{what is the classical origin of these quantum deformed symmetries, \textit{starting from the gravity action}?}
\medskip
Answering this question relies on three concepts.
The first key idea was first formulated in \cite{Freidel:2015gpa}, further formalized in \cite{Donnelly:2016auv} and developed in \cite{Freidel:2016bxd, Freidel:2019ees} at the quantum level.
Concretely, these works establish that there are actual symmetries in gravity, represented by non-trivial canonical generators. These symmetries reveal themselves once we
decompose a gravitational system into subsystems. Then the boundary of the subsystem decomposition supports the symmetry generators.
The point is that these boundary symmetry generators are the relevant symmetry generators that one needs to use in order to construct the quantum theory.
The quantum spacetime is then obtained as a fusion of quantum representations of the boundary symmetry group. This represents the quantum equivalent of the gluing of subregions.
This idea is built upon the works of many who have demonstrated the central importance of the boundary symmetry algebra in gravity \cite{Regge:1974zd, Iyer:1994ys, Balachandran:1994up, Balachandran:1995qa, Wald:1999wa, Carlip:1999cy, Szabados:2005wi} and developed the understanding of the nature of entanglement entropy in gauge theory
\cite{Balachandran:1995iq, Buividovich:2008yv, Donnelly:2011hn, Casini:2013rba, Radicevic:2014kqa,Donnelly:2015hxa,Lin:2018bud}.
The second and related idea, first proposed in \cite{Freidel:2011ue}, is that one can think of the process of discretizing a field theory, while respecting the
bulk gauge invariance \cite{Freidel:2002dw, Dittrich:2012qb} as a two-step process. The first step, which we just discussed, is the decomposition of the system into subregions; the second step is a coarse-graining operation where one replaces each cell of the decomposition by a
vacuum solution of the bulk constraints. Consequently,
the subregion boundaries, and their symmetry charges, encode all the relevant degrees of freedom of this coarse-grained data. This procedure leads to a discretization that respects, by construction, the fundamental invariance of the theory under study. It also leads to a new way to approach the continuum limit as a condensation of charge defects \cite{Delcamp:2016yix}. The choice of a solution on each cell corresponds to a vacuum choice at the quantum level \cite{Dittrich:2016typ}.
This strategy has been developed in the case of three-dimensional gravity in \cite{Dupuis:2017otn, Freidel:2018pbr, Shoshany:2019ymo}.
The third concept is illustrated in Section II for 3d gravity and in \cite{Edge-Mode-II} for 4d gravity. It uses the fact that it is possible to modify the expression of the boundary symmetries and their charges by the addition of boundary terms to the action. In the case of 3d gravity, the boundary symmetry is composed of
the internal Lorentz symmetry and the translation symmetry.
We show that it is necessary, in the presence of a non-vanishing cosmological constant, to add a boundary term to the action to ensure that the boundary translational symmetry is closed as an algebra. This boundary term, which implements a canonical transformation in the bulk, is the continuum analog of the classical R-matrix. It is given for 3d gravity by
\begin{equation}
\int_{\partial M} {\mathfrak{r}}_{ij}e^i\wedge e^j, \qquad {\mathfrak{r}}_{ij}\equiv \epsilon_{ijk}n^k,
\end{equation}
where $n^k$ is a fiducial vector that is shown to be the quantum group preferred direction
and whose norm square is proportional to the cosmological constant.
We show that the presence of this boundary term affects the bulk connection
and deforms the notion of gauge invariance, by replacing the usual gauge invariance by an equivalent one preserving the fiducial vector $n^i$.
The fact that it is possible to introduce a fiducial vector without breaking gauge invariance, only deforming it, is the central physical mechanism behind the appearance of quantum groups.
It happens because the vector labels a bulk canonical transformation whose rotation can be rectified by a canonical boundary transformation.
It is well-known that the charges of local rotations are given by the boundary coframe, that they form an algebra denoted $\mathfrak{su}$ and that the charges of local translations are given by the boundary connection
\cite{Geiller:2017whh}. After deformation we find that the translation generators form a subalgebra denoted $\mathfrak{an}$:
\begin{equation}
\{{P'}_\alpha,{P'}_\beta\} ={{P'}}_{ (\alpha\times \beta) \times n }, \qquad {P}'_\phi = \oint_{\partial\Sigma} \phi^I { \omega}_I,
\end{equation}
where $\times$ denotes the cross product, $\Sigma$ is a 2d subregion and $\omega$ is the (deformed) gravity connection.
We also find that the cosmological constant enters, through $n$, in a deformation of the Lorentzian Gauss law. This gives us our first hint of the presence of a quantum group in the continuum theory.
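That the deformed translations close into the $\mathfrak{an}$ algebra can be checked at the level of the bracket on parameters: the operation $(\alpha,\beta)\mapsto(\alpha\times\beta)\times n$ satisfies the Jacobi identity and hence defines a genuine Lie algebra. A minimal numerical sketch with random 3-vectors (an illustration, not part of the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.normal(size=3)  # fiducial vector; its squared norm is proportional to Lambda

def an_bracket(a, b):
    # Bracket of the an algebra on parameters: [a, b] = (a x b) x n
    return np.cross(np.cross(a, b), n)

a, b, c = rng.normal(size=(3, 3))
jacobiator = (an_bracket(a, an_bracket(b, c))
              + an_bracket(b, an_bracket(c, a))
              + an_bracket(c, an_bracket(a, b)))
# jacobiator vanishes identically: the bracket defines a Lie algebra
```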
\medskip
In section III, we study the process of subdivision and coarse-graining as described previously. We show that after a choice of vacuum solution on each cell, the symplectic form of the continuum theory becomes finite-dimensional. It decomposes as a sum over
the intersections of cells; these are the ``links'' of the decomposition.
For each link $\ell$ (and its dual $\ell^*$), we identify two holonomies $(H_\ell,\tilde{H}_{\ell})$ belonging to the rotation group $ \mathrm{SU}$ and two holonomies $(L_{\ell^*},\tilde{L}_{\ell^*})$ belonging to the group $\AN$ and we show that they form a \emph{ribbon structure}:
\begin{equation}
\tilde H_\ell \tilde L_{\ell^*} = {L}_{\ell^*} {H}_{\ell}.
\end{equation}
The crux of the paper consists in proving that the phase space attached to each link is in fact the
\emph{Heisenberg double} ${\mathfrak{D}}$.
As a manifold, the Heisenberg double is the cross-product group
$ {\mathfrak{D}}=\mathrm{SU} \bowtie \AN$ defined by the ribbon structure.
The Poisson bracket we derived is compatible with the action of a Poisson-Lie group, which is the classical analog of the quantum group.
The fact that the classical analog of quantum group symmetries appears naturally
when the phase space is a Heisenberg double has been established for a long time \cite{SemenovTianShansky:1985my, SemenovTianShansky:1993ws, Alekseev_1994}.
Note that in \cite{Bonzom:2014wva}, a discrete model based on Heisenberg doubles attached to links was proposed. It was also argued there that it provides a discretization of 3d gravity with a non-zero cosmological constant, and later on, it was shown to lead to the Turaev-Viro amplitude upon quantization \cite{Bonzom:2014bua}. The relation with the classical continuum variables was missing. The derivation of this structure from the continuum action constitutes the main result of our work.
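The ribbon relation can be made concrete in the Euclidean setting, where $\mathrm{SU}=\mathrm{SU}(2)$ and $\AN$ can be realized as $2\times 2$ upper-triangular complex matrices with positive real diagonal and unit determinant: given $L\in\AN$ and $H\in\mathrm{SU}(2)$, the product $LH$ refactorizes uniquely as $\tilde H\tilde L$ through an Iwasawa (QR-type) decomposition. A minimal numerical sketch of this group-theoretic fact (not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    # Random SU(2) matrix built from a normalized complex pair (a, b)
    x = rng.normal(size=4)
    a, b = x[0] + 1j * x[1], x[2] + 1j * x[3]
    norm = np.sqrt(abs(a)**2 + abs(b)**2)
    a, b = a / norm, b / norm
    return np.array([[a, b], [-b.conjugate(), a.conjugate()]])

def random_an():
    # Upper-triangular, positive real diagonal, determinant one
    lam = np.exp(rng.normal())
    return np.array([[lam, rng.normal() + 1j * rng.normal()], [0.0, 1.0 / lam]])

def iwasawa(D):
    # Factor D in SL(2,C) as D = H @ L, H in SU(2), L in AN, via QR;
    # the diagonal phases of R are absorbed into the unitary factor.
    Q, R = np.linalg.qr(D)
    phases = np.diag(np.diag(R) / np.abs(np.diag(R)))
    return Q @ phases, np.conj(phases) @ R  # conj(phases) = phases^{-1}

H, L = random_su2(), random_an()
H_t, L_t = iwasawa(L @ H)  # ribbon relation: H_t @ L_t = L @ H
```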
\medskip
The article is organized as follows.
Section I is essentially a review of existing material. We first recall the Hamiltonian analysis of 3d gravity with a non-zero cosmological constant. We emphasize that the rotational symmetry does not depend on the cosmological constant, so that it is not clear at first why a deformation of the symmetries should appear upon quantization.
In section II, we introduce the relevant boundary action which provides the right starting point for the discretization. We perform the Hamiltonian analysis of the action in these new variables. In particular, we obtain new rotational symmetries which do depend on the cosmological constant.
Section III provides the main result of the paper. We provide a detailed proof that the Heisenberg double phase space is obtained from our discretization. We highlight how the discretized variables we have obtained are related to the ones introduced in \cite{Bonzom:2014wva}. We show explicitly how the deformed symmetries of the Heisenberg double are recovered.
In Section IV, we recall how the quantum group structure appears from the quantization of the discrete variables we have constructed, following \cite{Bonzom:2014bua}.
\section{Canonical analysis of the 3d gravity action with a cosmological constant} \label{normal}
We first recall the standard canonical analysis of the first-order 3d gravity action with a non-zero cosmological constant. We consider a 3-dimensional manifold $M$ \cite{Carlip:1998uc}. The Greek indices $\alpha,\beta,..\in\{1,2,3\}$ are spacetime indices, while capital Latin letters $I,J,..\in\{1,2,3\}$ are internal indices.
\paragraph{From metric formulation to first order formulation.}
In the metric formulation the action is given by
\begin{align}
S_{EH}[g_{\mu \nu}]&=- \frac1{2\sigma \kappa} \int_M{d^3x \,\sqrt{\sigma\det(g_{\mu \nu})} \: \left(R[g_{\mu \nu}] - 2 \Lambda\right)}\label{eq:903},
\end{align}
where $\kappa={8\pi G}$ and $\sigma$ encodes the signature, $\sigma=-1$ for the Lorentzian case and $\sigma=+1$ for the Euclidean case. We introduce the frame field $e^I_{\mu}$, such that
\begin{equation}
g_{\mu \nu} = \eta_{IJ} e^I_{\mu} e^J_{\nu} \quad , \qquad e^I_{\mu} e^{\mu}_J = \delta^I_J , \quad e^I_{\mu} e^{\nu}_I = \delta^{\nu}_{\mu}\,.
\label{eq:13}
\end{equation}
The internal metric is then $\eta = (+,+, \sigma)$. We also introduce the spin connection $\tilde A^{IJ}$, an $\mathfrak{so}(\eta)$-valued connection satisfying $\tilde A^{IJ}=-\tilde A^{JI}$. The associated curvature is
\begin{equation}
R^{IJ}[\tilde{A}] = \text{d} \tilde{A}^{IJ} + \tilde{A}^{I}_{\: \: \, L} \wedge \tilde{A}^{LJ}.
\end{equation}
Replacing these quantities in the action \eqref{eq:903}, we recover
\begin{equation}
S_{GR}[\tilde{A},e] = -\frac1{2\sigma \kappa} \int_M{\varepsilon_{IJK} \left(e^I \wedge R[\tilde{A}]^{JK} - \frac{\Lambda}{3} \, e^I \wedge e^J \wedge e^K\right)}\,.
\label{eq:bf2}
\end{equation}
It is common to rewrite the connection with a single index, using the Levi-Civita tensor, which also depends on the signature. Fixing $\epsilon_{123}=1$, we have $\epsilon^{123}=\sigma$ and furthermore
\begin{align}
\epsilon^{\mu \nu \rho} \epsilon_{\mu \beta \gamma} &= \sigma (\delta^{\nu}_{\beta} \delta^{\rho}_{\gamma}-\delta^{\rho}_{\beta} \delta^{\nu}_{\gamma} ) \,.\label{eq:neu4}
\end{align}
We have then
\begin{eqnarray}
&& \tilde{A}^J= \frac{1}{2} \, \epsilon^J_{\: \: KL} \tilde{A}^{KL}, \quad \tilde{A}^{JK}=\sigma \epsilon^{JK}{}_I\tilde{A}^I \\
&&R^J = \frac{1}{2} \, \epsilon^J_{\: \: KL} \, R^{KL}, \quad R^{JK}=\sigma \epsilon^{JK}{}_IR^I, \quad R^I= \mathrm{d} \tilde{A}^I -\frac{\sigma}2\epsilon^I{}_{JK}\tilde{A}^J\wedge \tilde{A}^K.
\end{eqnarray}
In order to have a curvature formula that does not depend on the signature, we can rescale the connection $A=-\sigma \tilde{A}$, so that
\begin{equation}
R^I[\tilde{A}]=-\sigma F^I[A] = -\sigma (\mathrm{d} A^I + {\frac{1}{2}} \epsilon^{I}{}_{JK} A^J\wedge A^K).
\end{equation}
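Explicitly, using $\tilde{A}^I=-\sigma A^I$ and $\sigma^2=1$, the rescaling can be verified in one line:

```latex
R^I[\tilde{A}]
  = \mathrm{d}\tilde{A}^I - \frac{\sigma}{2}\,\epsilon^{I}{}_{JK}\,\tilde{A}^J\wedge\tilde{A}^K
  = -\sigma\,\mathrm{d}A^I - \frac{\sigma}{2}\,\epsilon^{I}{}_{JK}\,A^J\wedge A^K
  = -\sigma\Big(\mathrm{d}A^I + \tfrac{1}{2}\,\epsilon^{I}{}_{JK}\,A^J\wedge A^K\Big)
  = -\sigma F^I[A].
```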
$A$ is still an $\mathfrak{so}(\eta)$-valued spin connection. Replacing this expression in the action, we obtain
\begin{eqnarray}
S_{BF}[A,e] &=& \frac1\kappa \int_M{ \left(\eta_{IJ} e^I \wedge F^{J}[A] +\sigma \frac{\Lambda}{6}\epsilon_{IJK} \, e^I \wedge e^J \wedge e^K\right)} \label{action}\\
&=&
\frac1\kappa \int_M \, e^I \wedge \left(F_I[A] + \sigma\frac{\Lambda}{3} \,E_I \right)
\,,
\end{eqnarray}
where $E_I ={\frac{1}{2}} \, (e \times e)_I $ is the area flux, $F_I[A]
\equiv \text{d}A_I + {\frac{1}{2}} \, (A \times A)_I$ denotes the curvature of $A$ and $(A\times B)^I = \epsilon^{I}_{JK} A^J\wedge B^K$ denotes the cross-product of Lie algebra valued forms. In the following, we will work in units where $\kappa=1$, reestablishing the units when deemed useful.
\medskip
\paragraph{Equations of motion.}
One can couple this action to matter field via $S^{\mathrm{Mat}}(e,A;\phi)$ and we denote $\mathcal{P}_I \equiv-\frac{\delta S^{\mathrm{Mat}} }{\delta e^I}$ the energy momentum density and
$\mathcal{J}^I \equiv-\frac{\delta S^{\mathrm{Mat}} }{\delta A_I}$ the angular-momentum density of the matter fields. The equations of motion are given by
\begin{eqnarray}
F_I[A] + {\Lambda} \,\sigma E_I \approx \mathcal{P}_I \qquad \mathrm{d}_A e ^I \approx \mathcal{J}^I,
\end{eqnarray}
where $\mathrm{d}_A e^I \equiv \mathrm{d} e ^I + (A\times e)^I$ is the torsion of $A$.
In vacuum, when no matter is present, the first equation is the curvature constraint $F_\Lambda^I\equiv F^I[A] + \sigma{\Lambda} \, E^I \approx 0$ and the second equation is the torsion-free condition $T^I\equiv {\mathrm{d}_A} e^I\approx 0$.
We use the notation $\approx$ to stress that we have implemented the equations of motion.
\medskip
\paragraph{Action symmetries}
The action is invariant under a set of (gauge) symmetries. The first obvious symmetry is given by the $\mathfrak{so}(\eta)$ infinitesimal gauge transformations, parametrized by the scalar fields $\alpha^J$,
\begin{eqnarray} \label{SU2transfo}
\delta_{\alpha}e^I&= & (e\times \alpha)^I,
\qquad
\delta_{\alpha}A^I = {\mathrm{d}_A}\alpha^I,
\\ \delta_{\alpha}\mathcal{J}^I&=& (\mathcal{J} \times \alpha)^I,\qquad
\delta_{\alpha}\mathcal{P}^I= (\mathcal{P}\times \alpha)^I.\nonumber
\end{eqnarray}
They do not depend on the cosmological constant.
\\
The second one is the ``shift'' symmetry, parametrized by the scalar fields $\phi^J$,
\begin{eqnarray} \label{shift}
\delta_{\phi}e^I&= & {\mathrm{d}_A}\phi^I, \qquad \qquad
\delta_{\phi}A^I= \Lambda\, (e \times \phi)^I, \\
\delta_{\phi}\mathcal{J}^I &= & (\mathcal{P} \times \phi)^I, \qquad
\delta_{\phi}\mathcal{P}^I= \Lambda\, (\mathcal{J}\times \phi)^I.\nonumber
\label{transsym2}
\end{eqnarray}
These transformations are $\Lambda$ dependent.
The last identity means that, in the presence of a non-zero $\Lambda$, the notion of energy-momentum depends on the translational frame via the angular-momentum density, in the same way that the notion of angular momentum depends on the rotational frame via the energy-momentum density.
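Pointwise, the commutator of two shift transformations acting on $e$ involves $\Lambda[(e\times\phi)\times\psi-(e\times\psi)\times\phi]$, which the Jacobi identity of the cross product turns into a rotation with parameter proportional to $\phi\times\psi$, consistently with the bracket $\{P_\phi,P_\psi\}\propto\Lambda\,J_{(\phi\times\psi)}$ found below. A quick numerical check of the underlying vector identity, treating the fields as ordinary 3-vectors (a sketch, ignoring form degrees and signature factors):

```python
import numpy as np

rng = np.random.default_rng(2)
e, phi, psi = rng.normal(size=(3, 3))

# commutator of two shifts acting on e (up to an overall factor of Lambda)
lhs = np.cross(np.cross(e, phi), psi) - np.cross(np.cross(e, psi), phi)
# rotation of e with parameter alpha = phi x psi, acting as e -> e x alpha
rhs = np.cross(e, np.cross(phi, psi))
# lhs == rhs by the Jacobi identity of the cross product
```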
\\
Diffeomorphism symmetry can be written, on-shell of the equations of motion, as a combined action of gauge and shift symmetries with field dependent parameters \cite{Horowitz:1989ng}. Given an infinitesimal diffeomorphism $\xi$, we define the field dependent parameters
\begin{equation}
\alpha_\xi ^I = \iota_\xi A^I,\quad \phi_\xi^I = \iota_\xi e ^I,
\end{equation}
and we can express the action of an infinitesimal diffeomorphism as a gauge or shift symmetry on-shell.
\begin{eqnarray}
\pounds_\xi A^I & =& \mathrm{d} \iota_\xi A^I + \iota_\xi \mathrm{d} A^I = \iota_\xi F^I_\Lambda+\delta_{\alpha_\xi}A^I + \delta_{\phi_\xi}A^I \approx \delta_{\alpha_\xi}A ^I+ \delta_{\phi_\xi}A^I \\
\pounds_\xi e^I &=&\mathrm{d} \iota_\xi e^I + \iota_\xi \mathrm{d} e^I = \iota_\xi T^I +\delta_{\alpha_\xi}e^I + \delta_{\phi_\xi}e ^I\approx \delta_{\alpha_\xi}e^I + \delta_{\phi_\xi}e^I.
\end{eqnarray}
\medskip
\paragraph{Symplectic form and Poisson brackets.}
Let us now perform the Hamiltonian analysis of the action \eqref{action}. We consider $M = \mathbb{R} \times \Sigma$. The symplectic potential associated with $S^{\text{BF}}_M$ is identified from the boundary variation $\delta S^{\text{BF}}_M \approx \Theta^{\text{BF}}_{\partial M}$.
The symplectic form $\Omega^{\text{BF}}_\Sigma = \delta \Theta^{\text{BF}}_\Sigma$, associated with a Cauchy slice $\Sigma$ is
\begin{equation} \label{form0}
\Theta^{\text{BF}}_\Sigma = -\int_{\Sigma} \left\langle e\wedge \delta A \right\rangle, \quad \Omega^{\text{BF}}_\Sigma = -\int_{\Sigma} \left\langle \delta e\curlywedge \delta A \right\rangle ,
\end{equation}
where $\delta$ encodes the field variations,
$\curlywedge$ is the extension of the wedge product to variational forms\footnote{If $\alpha$ is a degree $a$ form and $\beta $ a degree $b$ form, we have
\begin{equation}
\alpha \curlywedge \beta =\alpha\wedge \beta,
\qquad
\alpha \curlywedge \delta \beta = \alpha \wedge \delta \beta,\qquad
\delta \alpha \curlywedge \delta \beta = - (-1)^{ab} \delta \beta \curlywedge \delta \alpha.
\end{equation} },
and the pairing is given by $
\left\langle \delta e \curlywedge \delta A \right\rangle=\eta_{IJ} \delta e^I \curlywedge \delta A^J$.
Accordingly, the canonical variables are the pairs
$(A^I_a(x), e_{b}^J(x))$ where $a,b$ are indices tangent to $\Sigma$, $a,b,..\in\{1,2\}$. The canonical Poisson bracket generated by \eqref{form0} is simply, $\forall x,y\in \Sigma$,
\begin{equation}\label{symplectic 0}
\poi{A^I_a(x), e_{b}^J(y)}= \kappa\, \epsilon_{ab}\, \eta^{IJ} \,\delta^2(x-y), \quad \poi{A^I_a(x), A_b^J(y)}=0=\poi{e_{a}^I(x), e_{b}^J(y)},
\end{equation}
where we reinstated $\kappa$ for completeness.
\medskip
\paragraph{Charges algebra}
It is well-known that the total Hamiltonian and the generators of rotational and translational symmetry are given by boundary terms and satisfy a closed algebra.
Let us recall that the Hamiltonian generator associated with a canonical field transformation
$\delta_\psi$ is $H_\psi$ provided we have
\begin{equation}\label{symp}
\delta_\psi \lrcorner\, \Omega = \int_{\Sigma}( \left\langle \delta_\psi e\wedge \delta A \right\rangle-
\left\langle \delta e\wedge \delta_\psi A \right\rangle) =-\delta H_\psi.
\end{equation}
The Poisson bracket of two generators is defined to be
\begin{equation}\label{Poisson}
\{H_\psi,H_{\psi'}\} = \Omega(\delta_\psi,\delta_{\psi'}) =\delta_\psi H_{\psi'}.
\end{equation}
In other words, the condition \eqref{symp} means that the
Hamiltonian generator $H_\psi$ generates the canonical transformation
\begin{equation}
\delta_\psi \cdot =\{H_\psi,\cdot\}.
\end{equation}
One denotes by $J_{\alpha}$ the generator of rotational symmetry
($\delta_\alpha =\{J_\alpha,\cdot\}$) and by $P_\phi$ the generator of translational symmetry.
They are given by
\begin{eqnarray}\label{generator 1}
J_\alpha &=& \int_{\Sigma} \alpha_I (\mathcal{J}^I - \mathrm{d}_Ae^I)+\oint_{\partial\Sigma } \alpha_I e^I,\cr
P_\phi &=&\int_\Sigma \phi^I \left(\mathcal{P}_I - F_I(A)- \sigma\tfrac{\Lambda}2 \, (e \times e)_I\right) +\oint_{\partial\Sigma} \phi^I A_I.
\end{eqnarray}
The transformations associated with a parameter vanishing on the boundary are gauge transformations; hence they have a vanishing charge, and their canonical generator vanishes on-shell since it is proportional to the constraints. On the other hand, transformations whose boundary parameters do not vanish have non-vanishing charges: they are the boundary symmetries. The corresponding boundary charges are given by
\begin{equation}
J_\alpha
\approx \oint_{\partial\Sigma } \alpha_I e^I,\qquad
P_\phi \approx \oint_{\partial\Sigma} \phi^I A_I.
\end{equation}
Using \eqref{Poisson} and the expressions (\ref{SU2transfo},\ref{shift}) for the transformations, one can evaluate the boundary charge algebra (reinstating $\kappa$)
\begin{eqnarray}
&&\{J_\alpha,J_\beta\}= {\kappa} \, J_{(\alpha\times \beta)},\quad
\{P_\phi,P_\psi\} = { \sigma}\, {\kappa}\, \Lambda \, J_{(\phi\times \psi)},\nonumber\\
&&\{J_\alpha, P_\phi \} ={\kappa} \, P_{(\alpha\times \phi)} + {\kappa} \oint_{\partial\Sigma} \phi^I \mathrm{d}\alpha_I. \label{boobost}
\end{eqnarray}
One sees that there is a central extension in the commutator between $J_\alpha$ and $P_\phi$. Therefore this algebra is first class only for the transformation parameters $\alpha,\phi$ that are constant on $\partial \Sigma$.
This set of constant parameters then generates global symmetry transformations, which form a finite-dimensional Poisson Lie algebra.
\medskip
\paragraph{Quantum algebra of observables.}
The corresponding quantum operators for the global charges are given by
\begin{equation}
{\widehat J}^I
\approx i\oint_{\partial\Sigma }\hat e^I,\qquad
{\widehat P}^I \approx i\oint_{\partial\Sigma} \hat A^I.
\end{equation}
We require them to be antihermitian, ${\widehat J}^\dagger=-{\widehat J}$, ${\widehat P}^\dagger=-{\widehat P}$. They satisfy the Lie algebra brackets
\begin{equation}\label{algebra}
[{\widehat J}_I,{\widehat J}_J]= l_P \,\epsilon_{IJK}\, {\widehat J}^K, \quad [{\widehat J}_I,{\widehat P}_J]=l_P\, \epsilon_{IJK}\,{\widehat P}^K, \quad [{\widehat P}_I,{\widehat P}_J]=\sigma \,l_P\, \Lambda \,\epsilon_{IJK}\,{\widehat J}^K,
\end{equation}
with $l_P=\hbar{\kappa}$ the Planck length. The indices are raised with the metric $\eta_{IJ}$.
Hence, according to the signature $\sigma$ and the sign $s$ of the cosmological constant $\Lambda$, the quantum algebra of charges is isomorphic to a well-known Lie algebra ${\mathfrak{d}}_{\sigma s}$. We have
${\mathfrak{d}}_{++}=\mathfrak{so}(4)$ when dealing with a spherical space-time $S_3$, ${\mathfrak{d}}_{+-}={\mathfrak{sl}}(2,\mathbb{C})= {\mathfrak{d}}_{-+}$ when dealing with a hyperbolic space-time $H_3$ or with a de Sitter space-time $dS_3$,
and finally ${\mathfrak{d}}_{--}=\mathfrak{so}(2,2)$ when dealing with an anti-de Sitter space-time $AdS_3$.
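As a consistency check (ours, not part of the original derivation), one can verify numerically that the brackets \eqref{algebra} satisfy the Jacobi identity for every sign choice of $\sigma$ and $\Lambda$. The short script below assumes $\eta_{IJ}=\mathrm{diag}(+,+,\sigma)$, sets $l_P=1$, and uses 0-based internal indices:

```python
from itertools import product

def eps(i, j, k):
    # Levi-Civita symbol with eps(0,1,2) = +1
    return ((j - i) * (k - i) * (k - j)) // 2 if {i, j, k} == {0, 1, 2} else 0

def structure_constants(sigma, Lam):
    """f[a][b][c] in the basis (J_0,J_1,J_2,P_0,P_1,P_2)."""
    eta = [1, 1, sigma]  # eta_{IJ} = diag(+,+,sigma); equal to its inverse
    f = [[[0.0] * 6 for _ in range(6)] for _ in range(6)]
    for I, J, L in product(range(3), repeat=3):
        e = eps(I, J, L) * eta[L]              # eps_{IJK} eta^{KL}
        f[I][J][L] += e                        # [J_I, J_J] = eps_{IJK} J^K
        f[I][J + 3][L + 3] += e                # [J_I, P_J] = eps_{IJK} P^K
        f[J + 3][I][L + 3] -= e
        f[I + 3][J + 3][L] += sigma * Lam * e  # [P_I, P_J] = sigma Lam eps_{IJK} J^K
    return f

def jacobi_defect(f):
    """Max violation of sum_d f_{ab}^d f_{dc}^g + cyclic(a,b,c) = 0."""
    return max(abs(sum(f[a][b][d] * f[d][c][g] + f[b][c][d] * f[d][a][g]
                       + f[c][a][d] * f[d][b][g] for d in range(6)))
               for a, b, c, g in product(range(6), repeat=4))

for sigma, Lam in product((1, -1), repeat=2):
    assert jacobi_defect(structure_constants(sigma, Lam)) == 0
```

The defect vanishes for all four sign combinations, as expected for $\mathfrak{so}(4)$, ${\mathfrak{sl}}(2,\mathbb{C})$ and $\mathfrak{so}(2,2)$.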
\medskip
\paragraph{Gauge theory for ${\mathfrak{d}}_{\sigma s}$.} Let us denote the generators of the Lie algebra ${\mathfrak{d}}_{\sigma s}$ by ${{\bf J}} _I$ and ${{\bf P}} _J$, respectively the Lorentz/rotation generators and the boosts.
To build the action,
we introduce a pairing between the generators, i.e. an invariant bilinear form over ${\mathfrak{d}}_{\sigma s}$. The relevant one is\footnote{See \cite{Meusburger:2007ad} for a discussion on the most general pairing one can consider.}
\begin{equation}\label{pairing}
\left\langle {{\bf J}} _I,{{\bf P}} _J\right\rangle = \eta_{IJ} = \left\langle {{\bf P}} _I,{{\bf J}} _J\right\rangle , \quad
\left\langle {{\bf J}} _I,{{\bf J}} _J\right\rangle = 0 = \left\langle {{\bf P}} _I,{{\bf P}} _J\right\rangle.
\end{equation}
The frame field takes values in the boosts, $e\equiv e^I{{\bf P}} _I$, whereas the connection takes values in $\mathfrak{so}(\eta)$, $A\equiv A^I {{\bf J}} _I$. Hence the curvature $F[A]$ takes values in $\mathfrak{so}(\eta)$, whereas the torsion $T[e,A]$ takes values in the boosts. In particular, the covariant derivatives can be expressed in terms of the structure constants of $\mathfrak{so}(\eta)$:
\begin{eqnarray}
{\mathrm{d}_A} \alpha = \mathrm{d} \alpha + [A,\alpha], \quad \textrm{ with } \alpha= \alpha^I{{\bf J}} _I, \nonumber \\
\quad {\mathrm{d}_A} \phi = \mathrm{d} \phi + [A,\phi], \quad \textrm{ with } \phi= \phi^I{{\bf P}} _I.
\end{eqnarray}
\medskip
We could now try to construct the LQG kinematical Hilbert space by imposing the Gauss constraint first, as is usually done.
Since the rotational charge does not depend on $\Lambda$, we expect to recover after discretization the standard spin networks based on $\mathrm{SU}(2)$, just as when $\Lambda=0$. Hence the kinematical states are not given in terms of a quantum group structure.
However we know that the quantum group structure needs to appear once we properly implement the dynamics. For example in the Turaev-Viro model \cite{TV}, which gives the proper quantization of 3d gravity, the boundary states are given in terms of quantum group spin networks.
This raises a fundamental puzzle and shows that the choice of discretization scheme could be at odds with the dynamics of the theory.
While both formulations (with group or quantum group spin networks) should agree in the continuum limit, it is not clear how to define the quantum theory with undeformed spin networks and then achieve a proper continuum limit. The Turaev-Viro model, in contrast, is well-defined and known to be invariant under refinement, therefore defining a continuum theory. Resolving this tension means that one needs to deal at the classical level with a different rotational charge, which should depend on $\Lambda$.
Note also that an essential step to construct the quantum states is to discretize the theory, and in particular the charge information. We note that the translational charge algebra \eqref{boobost} does not form a closed algebra, rendering its discretization more obscure.
As we will show, modifying the rotational charge in a $\Lambda$-dependent way allows one to perform the discretization without breaking the symmetry.
\section{New variables and new action} \label{sec:new action}
In order to change the rotational charge structure, which should also depend on $\Lambda$, it is natural to add a boundary term.
\subsection{Gravity Action and canonical transformation}
\paragraph{Boundary term and canonical transformation.} Let us consider a general vector $n^I$ parametrizing the boundary contribution.
We will see what further conditions $n$ is required to satisfy along the way.
We consider then the original action \eqref{action} modified by the boundary term\footnote{QG stands for quantum group.}
\begin{eqnarray} \label{rmatbdy}
{S}_{QG}[e,A]&\equiv
&
S_{BF}[e,A]+ \frac1{2{\kappa}} \int _{\partial M}\, {\mathfrak{r}}_{IJ}\, e^I\wedge e^J\\
&=& S_{BF}[e,A]+
\frac1{2{\kappa}} \int_{\partial M} (e\times e)_I n^I \nonumber\\
&=& \frac1{\kappa}\int_M \, e^I \wedge \left(F_I[A] + \sigma\frac{\Lambda}{6} \, \epsilon_{IJK} \, e^J \wedge e^K \right) + \frac1{2{\kappa}} \int_{ M} \mathrm{d} \left( (e\times e)_I n^I\right). \label{action1}
\end{eqnarray}
The boundary term does not modify the equations of motion. We note that while $n$ is first defined on the boundary $\partial M$, it can be naturally extended to the bulk $M$ using Stokes' theorem. As before, we will work with ${\kappa}=1$ until deemed necessary.
\medskip
To perform the Hamiltonian analysis of the new action, we assume as before that $ M=\mathbb{R}\times \Sigma$. The new symplectic potential is
\begin{equation}\label{new theta}
\Theta_{QG} = \int_{\Sigma} e _I\wedge \delta A^I
-\frac12 \delta \int_{\Sigma}(e\times e)_I n^I = \int_{\Sigma} e_I\wedge \delta \omega^I -{\frac{1}{2}} \int_{\Sigma} (e\times e)_I \cdot \delta n^I,
\end{equation}
where we have introduced a new connection
\begin{equation}
\omega^I\equiv A^I +(n\times e)^I.
\end{equation}
We see from \eqref{new theta} that we have an extra pair of conjugate variables $(n, E= {\frac{1}{2}} (e\times e))$, where the area flux $E$ is conjugate to $n$. We note that if $n$ is treated as a kinematical structure, it is required to be \textit{constant as a field}, $\delta n=0$, and the boundary term simply induces a canonical transformation (\textit{in the bulk}) that modifies the original symplectic potential \eqref{form0}.
Note that this condition forbids the vector $n^I$ from being related to the boundary normal\footnote{ If we denote by $ s_a $ the normal form to the boundary, we can construct, using the frame, the
internal normal $ s_I = e_I^a s_a$. This normal is field dependent, $\delta s_I = \delta e_I^a s_a\neq 0$, where we use that the boundary normal form is field independent: $\delta s_a=0$.
Therefore the vector $n^I$, being kinematical, cannot be related to the boundary normal.}.
This canonical transformation only modifies the connection. \textit{We will assume that $\delta n=0$ from now on}.
Hence $(e_a^I,\omega_b^J)$ is our new canonical pair, $\forall x,y\in \Sigma$,
\begin{equation}\label{new poisson}
\poi{{ \omega}^I_a(x), e_{b}^J(y)}= {\kappa} \, \epsilon_{ab}\, \eta^{IJ} \, \delta^2(x-y), \quad \poi{{ \omega}^I_a(x), { \omega}_b^J(y)}=0=\poi{e_{a}^I(x), e_{b}^J(y)}.
\end{equation}
With such a change of variables, we can express the curvature in terms of the new connection $\omega$
\begin{eqnarray} \label{F(A)}
&&F[A]= F[ \omega + e\times n] = F[ \omega]+ \mathrm{d}_ \omega (e\times n) + {\frac{1}{2}} (e\times n)\times (e\times n),
\end{eqnarray}
where $\mathrm{d}_\omega\alpha =\mathrm{d} \alpha +\omega\times \alpha$. To evaluate the action in terms of $\omega$, one establishes\footnote{This follows from
\begin{equation}
\frac{n^2}3 e\cdot (e\times e)= (e\cdot n) (n\cdot (e\times e)),
\end{equation}
and the cross-product identity
$
(\alpha\times \beta)\times \gamma = \sigma[ (\alpha\cdot \gamma)\beta -\alpha (\gamma\cdot \beta) ].$} that
\begin{eqnarray}
{\frac{1}{2}} e\cdot ((e\times n)\times (e\times n))=\frac{\sigma n^2}{6} e\cdot (e\times e).
\end{eqnarray}
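The metric-dependent cross-product identity quoted in the footnote can be checked numerically. The sketch below (our own, purely illustrative) defines $(\alpha\times\beta)^I=\eta^{IL}\epsilon_{LJK}\alpha^J\beta^K$ for $\eta_{IJ}=\mathrm{diag}(+,+,\sigma)$ and verifies $(\alpha\times \beta)\times \gamma = \sigma[ (\alpha\cdot \gamma)\beta -\alpha (\gamma\cdot \beta) ]$ on random integer vectors:

```python
from itertools import product
import random

def eps(i, j, k):
    # Levi-Civita symbol with eps(0,1,2) = +1
    return ((j - i) * (k - i) * (k - j)) // 2 if {i, j, k} == {0, 1, 2} else 0

def cross(a, b, eta):
    # (a x b)^I = eta^{IL} eps_{LJK} a^J b^K, for a diagonal eta with +-1 entries
    return [eta[L] * sum(eps(L, J, K) * a[J] * b[K]
                         for J, K in product(range(3), repeat=2))
            for L in range(3)]

def dot(a, b, eta):
    return sum(eta[I] * a[I] * b[I] for I in range(3))

random.seed(0)
for sigma in (1, -1):
    eta = [1, 1, sigma]
    for _ in range(50):
        a, b, c = ([random.randint(-5, 5) for _ in range(3)] for _ in range(3))
        lhs = cross(cross(a, b, eta), c, eta)
        rhs = [sigma * (dot(a, c, eta) * b[I] - dot(c, b, eta) * a[I])
               for I in range(3)]
        assert lhs == rhs
```

For $\sigma=+1$ this reduces to the familiar Euclidean double cross product; the $\sigma$ factor accounts for the Lorentzian signature.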
We choose the normalization $n^2=- \Lambda$,
as a new restriction on $n$, so that the last term of \eqref{F(A)} compensates the term proportional to $\Lambda$ in the action \eqref{action1}.
With the assumptions $\delta n=0$ and $n^2=- \Lambda$, the change of variables implies that the action \eqref{action1} becomes
\begin{eqnarray} \label{QGp}
{S}_{QG}= \int_M \, \left( e\cdot F [A] + \sigma\frac{\Lambda}{3} \,e\cdot E \right) + \int_{\partial M} E \cdot n
&=& \int_M\, \left( e \cdot F[ \omega]
- E \cdot \mathrm{d}_ \omega n
\right).
\end{eqnarray}
While the original action \eqref{action1} couples the frame $e$ and the flux $E =\tfrac12 (e\times e)$, the modified action achieves a ``separation of variables'' where $e$ and $E$ are decoupled. This will simplify the analysis of the theory and its symmetries.
\medskip
The equations of motion of the new action \eqref{QGp} are now
\begin{eqnarray} \label{motioneom}
F_I[ \omega] - (e\times \mathrm{d}_ \omega n )_I \approx \mathcal{P}'_I
\qquad \mathrm{and} \qquad \mathrm{d}_ \omega e^I + \frac12 [(e\times e )\times n]^I\approx \mathcal{J}'^I .
\end{eqnarray}
The matter spin density $\mathcal{J}'\equiv -\frac{\delta S_{\mathrm{QG}}}{\delta \omega}$ is unchanged while the energy-momentum density
$\mathcal{P}'\equiv -\frac{\delta S_{\mathrm{QG}}}{\delta e}$ is redefined\footnote{One uses that
$-\delta S_{QG}={\cal P}\, \delta e + {\cal J}\, \delta A = {\cal P}'\, \delta e + {\cal J}'\, \delta \omega $.}:
\begin{eqnarray}
{\mathcal{J}'}_I = \mathcal{J}_I,\qquad {\mathcal{P}'}_I = \mathcal{P}_I + (n\times \mathcal{J})_I.
\end{eqnarray}
\medskip
\paragraph{Nature of the vector $n$. } In the Euclidean case $\sigma=+1$, the normalization condition
$n^2=- \Lambda$ can be achieved by a real vector in the hyperbolic case ($\Lambda<0$) or by a purely imaginary vector in the spherical case ($\Lambda>0$). If $\Lambda=0$, then either $n=0$ or $n$ is specified by a Grassmannian number.
In the Lorentzian case, $n$ is time-like (or \textit{imaginary} space-like) in the de Sitter case and space-like (or \textit{imaginary} time-like) in the AdS case. When $\Lambda =0$ we have two options: $n$ is either a non-trivial null vector or it simply vanishes.
\begin{center}
\begin{tabular}{ c|c|c }
& Euclidean & Lorentzian \\\hline
Flat: $\Lambda=0$ & $n=0$ or $n$ is Grassmannian& $n=0$ or $n$ is light-like\\\hline
AdS: $\Lambda<0$ &$n$ is space-like& $n$ is space-like or \textit{imaginary} time-like \\\hline
dS: $\Lambda>0$ &$n$ is \textit{imaginary} & $n$ is time-like or \textit{imaginary} space-like\\ \hline
\end{tabular}
\end{center}
\medskip
\paragraph{Symmetries of the action. }
Since the action $S_{\textrm{QG}}$ depends explicitly
on a vector $n$, one might worry that this vector acts as a background structure and that the action explicitly breaks local rotational symmetry. It turns out, quite remarkably, that this is \textit{not} the case. The action is still invariant under gauge transformations generalizing the local $\mathrm{SO}(\eta)$ transformations \eqref{SU2transfo} and the shift transformations \eqref{shift}.
First, let us notice that since we required $n$ to be constant as a field ($\delta n=0$), it will not change under the symmetry transformations spanned by the Hamiltonian generators $H_\psi$ (with $\psi=\alpha, \phi$),
\begin{equation}
\delta_{\psi}n =\{ H_\psi,n\}=0.
\end{equation}
As a consequence, $n$ can be seen as a scalar under the different gauge transformations. In the following, we are going to determine the form of the gauge transformations of the fields $e$ and $\omega$ which are consistent with the constraint $\delta_\psi n=0$. In order to distinguish the new infinitesimal transformations from the previous ones, we will denote them $\delta'_\psi$. We demand therefore that $\delta'_\psi n=0$, for $\psi=\alpha, \phi$.
\smallskip
Let us study the set of transformations, generalizing the $\mathfrak{so}(\eta)$ infinitesimal transformations, that we parametrize by $\alpha^I$. We demand that
\begin{eqnarray}
\delta_\alpha' n^I&=& 0,
\end{eqnarray}
and that $e^I$ still transforms as a vector,
\begin{eqnarray}
\delta'_{\alpha}e^I&= & (e\times \alpha)^I =\delta_\alpha e^I.\label{transfosu1}
\end{eqnarray}
We can use the transformations of $A$ and the relation between $A$ and $\omega$ to infer the transformations of $\omega$:
\begin{eqnarray}
\delta'_\alpha A = \delta'_\alpha({ \omega} - n\times e)= \delta'_\alpha{ \omega} - n\times \delta'_\alpha e \Leftrightarrow
\delta'_{\alpha} \omega^I= \mathrm{d}_ \omega \alpha^I + (e\times (n\times \alpha))^I \equiv D \alpha^I.
\label{transfosu2}
\end{eqnarray}
The second set of transformations, parametrized by $\phi$, generalizes the shift symmetry. We still demand that $\delta'_\phi n^I=0$. We have
\begin{eqnarray}
\delta'_{\phi} \omega^I&= & (\phi\times \mathrm{d}_\omega n )^I
\label{transfoan2}\cr
\delta'_{\phi}e^I &= &\mathrm{d}_{ \omega} \phi^I + ( (e \times \phi)\times n )^I \equiv \tilde{D} \phi^I \label{transfoan1}.
\end{eqnarray}
These transformations satisfy $\delta'_\phi e^I = \delta_\phi e^I+ \delta_{\alpha=\phi\times n}e^I$.
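This last relation rests on a purely algebraic fact: assuming the original shift acts as $\delta_\phi e = \mathrm{d}_A\phi$ as in \eqref{shift}, substituting $\omega = A + n\times e$ into \eqref{transfoan1} gives $\delta'_\phi e = \mathrm{d}_A\phi + (n\times e)\times\phi + (e\times\phi)\times n$, so the statement reduces to the pointwise identity $(n\times e)\times\phi + (e\times\phi)\times n = e\times(\phi\times n)$, i.e. the Jacobi identity of the cross product. A small numerical check (our own sketch, with $\eta_{IJ}=\mathrm{diag}(+,+,\sigma)$):

```python
from itertools import product
import random

def eps(i, j, k):
    # Levi-Civita symbol with eps(0,1,2) = +1
    return ((j - i) * (k - i) * (k - j)) // 2 if {i, j, k} == {0, 1, 2} else 0

def cross(a, b, eta):
    # (a x b)^I = eta^{IL} eps_{LJK} a^J b^K (diagonal eta with +-1 entries)
    return [eta[L] * sum(eps(L, J, K) * a[J] * b[K]
                         for J, K in product(range(3), repeat=2))
            for L in range(3)]

random.seed(2)
for sigma in (1, -1):
    eta = [1, 1, sigma]
    for _ in range(50):
        e, phi, n = ([random.randint(-4, 4) for _ in range(3)] for _ in range(3))
        lhs = [cross(cross(n, e, eta), phi, eta)[I]
               + cross(cross(e, phi, eta), n, eta)[I] for I in range(3)]
        assert lhs == cross(e, cross(phi, n, eta), eta)
```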
\medskip
It is worth noticing that now \textit{both types of gauge transformations are dependent on the cosmological constant} through the vector $n$ and both leave the auxiliary vector $n$ invariant. We emphasize again that this implies that the vector $n$ is a \textit{scalar} for such gauge transformations.
\subsection{Deformed boundary symmetry algebra and Manin pairs}
\paragraph{New charges algebra. } One can wonder at this stage what we have gained by going to this more elaborate description of the same physical system.
The clear advantage of this description shows up when we look at the symmetry algebra and the transformations of the
spin and energy-momentum densities. These transformations can be deduced from (\ref{transfosu1},\ref{transfosu2},\ref{transfoan1}) by acting on the LHS of the constraints \eqref{motioneom}.
For instance one finds that
\begin{equation}
\delta'_\alpha \mathcal{J}' = \mathcal{J}' \times \alpha,\qquad
\delta'_\phi \mathcal{P}' = (\mathcal{P}' \times \phi)\times n ,
\end{equation}
which shows that the modified energy-momentum density transforms homogeneously under a local translation, unlike \eqref{transsym2}.
The charges associated with these transformations, defined by
$ {\delta}'_\alpha \lrcorner\, \Omega = -\delta {J}'_\alpha$ and $ \delta'_\phi \lrcorner\, \Omega =-\delta {P}'_\phi$, are given by
\begin{eqnarray}\label{new charges}
{J}'_\alpha &=& \int_{\Sigma} \alpha_I \left[\mathcal{J}' - \mathrm{d}_\omega e - \tfrac12 ((e\times e )\times n) \right]^I +\oint_{\partial\Sigma} \alpha_I e^I, \cr
{P}'_\phi &=& \int_{\Sigma} \phi^I \left[\mathcal{P}' - F[ \omega] + (e\times \mathrm{d}_ \omega n )\right]_I +\oint_{\partial\Sigma} \phi^I \omega_I.
\end{eqnarray}
On-shell, these charges are simply
\begin{equation}\label{newcharge}
{J}'_\alpha = \oint_{\partial\Sigma} \alpha_I e^I= J_\alpha , \qquad
{P}'_\phi = \oint_{\partial\Sigma} \phi^I { \omega}_I = P_{\phi}+ J_{\phi\times n}.
\end{equation}
The charge algebra is such that $J'_\alpha$ and $P'_\phi$ generate two subalgebras given by
\begin{eqnarray}\label{newcom1}
\{J'_\alpha, J'_\beta\} = {\kappa}\, J'_{\alpha\times \beta},& &\qquad
\{{P'}_\phi,{P'}_\psi\} ={\kappa} \, {{P'}}_{ (\phi\times \psi) \times n }+{\kappa}\, \oint_{\partial \Sigma}(\phi \times \psi) \cdot \mathrm{d} n,
\end{eqnarray}
while the cross-commutator is given by
\begin{eqnarray} \label{newcom2}
\{J'_\alpha,{P'}_\phi\}
&=& {\kappa}\, {P'}_{ \alpha\times \phi } + J'_{\phi\times (\alpha \times n ) }
+{\kappa}\, \oint_{\partial \Sigma} \phi\cdot \mathrm{d} \alpha.
\end{eqnarray}
The proof is detailed in Appendix \ref{proof1}. We emphasize that we use the plain derivative $\mathrm{d}$ since $n$ is a scalar under the gauge transformations.
\medskip
We see that the commutator of the energy-momentum charges possesses a central charge if $n$ is not constant. \textit{From now on}, we assume that
$\mathrm{d} n=0$.
In this case, we see that the modified energy-momentum charges ${P'}_\phi$ form \textit{a closed subalgebra},
and the central charge is concentrated in the bracket between the rotation generators $J'$ and the
translation/boost generators ${P'}$. This is in sharp contrast with the original description \eqref{boobost}, where the momentum generators do not form a closed subalgebra; this is the main reason behind the canonical transformation and the normalisation $n^2 = - \Lambda$.
\medskip
\paragraph{Another condition on $n$. }
{ Before discussing the form of the global symmetries, it will be useful to fix once and for all the vector $n$. Without loss of generality, we can always choose the vector $n$ as defining the direction $3$,
$n^I=(0,0,n^3)$. As we have seen earlier in \eqref{QGp}, according to the normalization condition $n^2=- \Lambda$, the vector $n$ can be space-like or time-like, or even imaginary.
Since we have fixed the direction of $n$, this means that the metric should also depend on $s$, the sign of $\Lambda$.
Let us review the different cases.
If we are in the Euclidean case with $\Lambda>0$, then $n^I=(0,0,i\sqrt\Lambda)$ and the Euclidean metric is consistent.
In the other cases where $\Lambda\neq0$, we will take $n^I=(0,0, -s\sigma \sqrt{|\Lambda|})$ and a metric $\eta^{\sigma s}$ such that
\begin{equation}
\eta^{\sigma s}_{IJ}=\mathrm{diag}(+,-s\sigma,-s), \quad n^I \eta_{IJ} n^J= -\, s \, |\Lambda|= -\, \Lambda.
\end{equation}
Finally, in the case where $\Lambda=0$, we stick to the usual metric $\eta_{IJ}=\mathrm{diag}(+,+,\sigma)$.
Fixing such a convention will allow us to connect more easily with the usual quantum group formalism, where it is \textit{always} the third direction that is singled out as preferred. Let us review the full set of constraints we have on $n$:
\begin{equation}\label{constraints on n}
\delta n =0 \,\, (\Rightarrow \delta'_\alpha n=\delta'_\phi n=0), \quad n^2=-\Lambda, \quad \mathrm{d} n=0, \quad n^I= (0,0,n^3).
\end{equation}
\medskip
While the symmetry structure of the metric is still isomorphic to $\mathfrak{so}(\eta)$, the time direction is not always the same in the Lorentzian case, to account for $n$ being space-like or time-like. Let us review the different explicit forms of $\mathfrak{so}(\eta)$. We denote their generators by ${{\bf J}} ^I$. The commutation relations are simply $[{{\bf J}} ^I,{{\bf J}} ^J] = \epsilon^{IJ}{}_K {{\bf J}} ^K$, where we use the metric $\eta^{\sigma s} $ to lower the index $K$.
This means that we have different algebras for different choices of $(\sigma,s)$.
We denote the different cases by $\mathfrak{su}_{\sigma s}$
\begin{eqnarray} \label{generalcom}
\mathfrak{su}_{+-}= \mathfrak{su}(2):&\quad &
[{{\bf J}} _1,{{\bf J}} _2]={{\bf J}} _3,\quad
[{{\bf J}} _2,{{\bf J}} _3]={{\bf J}} _1,\quad
[{{\bf J}} _3,{{\bf J}} _1]={{\bf J}} _2, \nonumber\\
\mathfrak{su}_{-+}=\mathfrak{su}(1,1):&\quad &
[{{\bf J}} _1,{{\bf J}} _2]=-{{\bf J}} _3,\quad
[{{\bf J}} _2,{{\bf J}} _3]={{\bf J}} _1,\quad
[{{\bf J}} _3,{{\bf J}} _1]={{\bf J}} _2, \nonumber\\
\mathfrak{su}_{--}= {\mathfrak{sl}}(2,\mathbb{R}):&\quad&
[{{\bf J}} _1,{{\bf J}} _2]={{\bf J}} _3,\quad
[{{\bf J}} _2,{{\bf J}} _3]={{\bf J}} _1,\quad
[{{\bf J}} _3,{{\bf J}} _1]=-{{\bf J}} _2.
\end{eqnarray}
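A short script (ours, purely a cross-check) reproduces the table \eqref{generalcom} directly from the metric $\eta^{\sigma s}=\mathrm{diag}(+,-s\sigma,-s)$, computing $[{{\bf J}}_I,{{\bf J}}_J]=\epsilon_{IJL}\,\eta^{LK}{{\bf J}}_K$ (equivalent to the convention above) with 0-based indices:

```python
from itertools import product

def eps(i, j, k):
    # Levi-Civita symbol with eps(0,1,2) = +1
    return ((j - i) * (k - i) * (k - j)) // 2 if {i, j, k} == {0, 1, 2} else 0

def brackets(sigma, s):
    """Components of [J_I, J_J] = eps_{IJL} eta^{LK} J_K in the J_K basis."""
    eta = [1, -s * sigma, -s]  # eta^{sigma s}, equal to its inverse
    return {(I, J): [eps(I, J, K) * eta[K] for K in range(3)]
            for I, J in product(range(3), repeat=2)}

b = brackets(+1, -1)   # su_{+-} = su(2)
assert b[(0, 1)] == [0, 0, 1] and b[(1, 2)] == [1, 0, 0] and b[(2, 0)] == [0, 1, 0]
b = brackets(-1, +1)   # su_{-+} = su(1,1)
assert b[(0, 1)] == [0, 0, -1] and b[(1, 2)] == [1, 0, 0] and b[(2, 0)] == [0, 1, 0]
b = brackets(-1, -1)   # su_{--} = sl(2,R)
assert b[(0, 1)] == [0, 0, 1] and b[(1, 2)] == [1, 0, 0] and b[(2, 0)] == [0, -1, 0]
```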
}
\medskip
\paragraph{Quantum algebra of observables.}
The algebra given in \eqref{newcom1} and \eqref{newcom2} is first class only for the transformation parameters that are constant on the boundary.
Such a set of constant parameters then generates global symmetry transformations, which form a finite-dimensional Poisson Lie algebra.
The associated quantum algebra is now generated by the quantisation of the global charges
\begin{equation}\label{chargedef}
{\widehat J}'^I= {\widehat J}^I= i \oint \hat{e}^I,\qquad {\widehat P}'_I = i \oint \hat{{ \omega}}_I.
\end{equation}
As we have seen in \eqref{newcharge}, we have just performed a linear change of basis, hence the global charges still form an algebra isomorphic to ${\mathfrak{d}}_{\sigma s}$, with ${\mathfrak{d}}_{++}=\mathfrak{so}(4)\sim\mathfrak{su}(2)\oplus \mathfrak{su}(2),\,
{\mathfrak{d}}_{+-} = {\mathfrak{sl}}(2,\mathbb{C}) = {\mathfrak{d}}_{-+},$ and $
{\mathfrak{d}}_{--}= \mathfrak{so}(2,2)\sim {\mathfrak{sl}}(2,\mathbb{R})\oplus {\mathfrak{sl}}(2,\mathbb{R}).$
The physical reality condition that arises from the quantisation of the
global algebra with $e,{ \omega}$ real \eqref{chargedef} demands
that all generators be antihermitian
and that the vector $n$ be real:
\begin{equation} \label{real}
{\widehat J}'^\dagger =-{\widehat J}', \quad {\widehat P}'^\dagger= -{\widehat P}',\quad \bar{n} =n.
\end{equation}
We note that in the Euclidean case with positive cosmological constant $n$ is not real, hence we will not discuss this case here; it requires a more careful analysis of the reality conditions.
\paragraph{${\mathfrak{d}}_{\sigma s}$ as a Manin pair.} The Lie algebra ${\mathfrak{d}}_{\sigma s}$ has the structure of a Manin triple, that is, it is a classical Drinfeld double that can be written as a matching pair
${\mathfrak{d}}_{\sigma s}={\mathfrak{g}}\bowtie {\mathfrak{g}}^*$ \cite{Chari:1994pz}.
By construction, ${\mathfrak{d}}_{\sigma s}$ possesses an invariant symmetric pairing denoted $\left\langle \cdot,\cdot\right\rangle$ of signature $(3,3)$ and it can be decomposed as a pair of isotropic algebras
\begin{equation}
{\mathfrak{d}}={\mathfrak{g}} \oplus {\mathfrak{g}}^*,\qquad
\left\langle \cdot,\cdot\right\rangle|_{{\mathfrak{g}}}=0=\left\langle \cdot,\cdot \right\rangle|_{{\mathfrak{g}}^*}.
\end{equation}
The symmetric pairing is simply the canonical pairing between ${\mathfrak{g}}$ and ${\mathfrak{g}}^*$.
Given ${\mathfrak{d}}_{\sigma s}$, the subalgebra ${\mathfrak{g}}$ is $\mathfrak{su}_{\sigma s} $
with generators ${{\bf J}} ^I$ satisfying the algebra\footnote{ Reinstating ${\kappa}$ would lead to $$[{{\bf J}} ^I,{{\bf J}} ^J] = {\kappa} \,\epsilon^{IJ}{}_K {{\bf J}} ^K. $$ } \eqref{generalcom}
\begin{equation}\label{Jcom}
[{{\bf J}} ^I,{{\bf J}} ^J] = \epsilon^{IJ}{}_K {{\bf J}} ^K.
\end{equation}
The dual algebra ${\mathfrak{g}}^*$ is the algebra with generators
\begin{equation}
\tau_I \equiv {{\bf P}} _I+ n^J \epsilon_{IJ K}{{\bf J}} ^K = {{\bf P}} _I + (n\times {{\bf J}} )_I
\end{equation}
which satisfy the $\mathfrak{an}$ algebra commutation relations
\begin{equation}\label{Pcom}
[\tau_I,\tau_J]= C_{IJ}{}^K \tau_K \quad \textrm{with } \quad
C_{IJ}{}^K = \sigma(n_I\delta_J^K - n_J\delta_I^K) .
\end{equation}
With our specific choice $n^I=(0,0,-s\sigma \sqrt{|\Lambda|})$ and $\eta_{IJ}=\mathrm{diag}(+,-s\sigma,-s)$, we get an algebra which is independent of $\sigma$ and $s$:
the Lie algebra $\mathfrak{an}$ given by
\begin{equation}
[\tau_1,\tau_2]=0,\qquad [\tau_3,\tau_1]= \sqrt{|\Lambda|} \tau_1,\qquad
[\tau_3,\tau_2]= \sqrt{|\Lambda|} \tau_2.
\end{equation}
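One can verify numerically (our own check) that with the convention $n^I=(0,0,-s\sigma\sqrt{|\Lambda|})$ and $\eta^{\sigma s}=\mathrm{diag}(+,-s\sigma,-s)$ introduced earlier, the structure constants $C_{IJ}{}^K=\sigma(n_I\delta_J^K-n_J\delta_I^K)$ reproduce exactly these $\mathfrak{an}$ brackets for every sign choice:

```python
from itertools import product

def an_structure(sigma, s, lam):
    """C_{IJ}^K = sigma (n_I delta_J^K - n_J delta_I^K), with lam = sqrt(|Lambda|)."""
    eta = [1, -s * sigma, -s]               # eta_{IJ}
    n_up = [0, 0, -s * sigma * lam]         # n^I
    n_dn = [eta[I] * n_up[I] for I in range(3)]
    # normalization n.n = -Lambda = -s |Lambda|
    assert sum(n_up[I] * n_dn[I] for I in range(3)) == -s * lam ** 2
    return [[[sigma * (n_dn[I] * (J == K) - n_dn[J] * (I == K))
              for K in range(3)] for J in range(3)] for I in range(3)]

lam = 1.5
for sigma, s in product((1, -1), repeat=2):
    C = an_structure(sigma, s, lam)
    assert C[0][1] == [0, 0, 0]        # [tau_1, tau_2] = 0
    assert C[2][0] == [lam, 0, 0]      # [tau_3, tau_1] = sqrt(|Lambda|) tau_1
    assert C[2][1] == [0, lam, 0]      # [tau_3, tau_2] = sqrt(|Lambda|) tau_2
```

The same brackets come out for all four $(\sigma,s)$ combinations, confirming the claimed independence.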
The symmetric pairing is simply
\begin{equation}
\left\langle \tau_J, {{\bf J}} ^I \right\rangle=\delta^I_J=\left\langle {{\bf J}} ^I,\tau_J\right\rangle , \qquad
\left\langle {{\bf J}} ^I, {{\bf J}} ^J\right\rangle =0=\left\langle \tau_I,\tau_J\right\rangle.
\end{equation}
We emphasize that, unlike $\epsilon^{IJK}$, the structure constant $C_{IJ}{}^K$ is not cyclic.
The last structure constant is the mixed one
\begin{equation} \label{crosscom}
[{{\bf J}} ^I, \tau_J] = C_{JK}{}^I {{\bf J}} ^K +\epsilon^{I}{}_{J}{}^K \tau_K,
\end{equation}
which is uniquely determined from (\ref{Jcom},\ref{Pcom}) by the defining property of the Killing form
$
\left\langle [X,Y],Z\right\rangle=\left\langle X,[Y,Z]\right\rangle.
$
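As a cross-check (ours, not part of the original text), this invariance property can be verified numerically for the full set of structure constants (\ref{Jcom},\ref{Pcom},\ref{crosscom}), with the pairing $\left\langle {{\bf J}}^I,\tau_J\right\rangle=\delta^I_J$ and assuming the sign convention $n^I=(0,0,-s\sigma\sqrt{|\Lambda|})$:

```python
from itertools import product

def eps(i, j, k):
    # Levi-Civita symbol with eps(0,1,2) = +1
    return ((j - i) * (k - i) * (k - j)) // 2 if {i, j, k} == {0, 1, 2} else 0

def double_structure(sigma, s, lam):
    """Structure constants in the basis (J^0, J^1, J^2, tau_0, tau_1, tau_2)."""
    eta = [1, -s * sigma, -s]          # eta^{sigma s}, equal to its inverse
    n_dn = [0, 0, sigma * lam]         # n_I for n^I = (0, 0, -s*sigma*lam)
    C = [[[sigma * (n_dn[I] * (J == K) - n_dn[J] * (I == K)) for K in range(3)]
          for J in range(3)] for I in range(3)]
    f = [[[0.0] * 6 for _ in range(6)] for _ in range(6)]
    for I, J, K in product(range(3), repeat=3):
        f[I][J][K] += eta[I] * eta[J] * eps(I, J, K)          # [J^I,J^J] = eps^{IJ}_K J^K
        f[I + 3][J + 3][K + 3] += C[I][J][K]                  # [tau_I,tau_J] = C_{IJ}^K tau_K
        f[I][J + 3][K] += C[J][K][I]                          # [J^I,tau_J]: J-part
        f[I][J + 3][K + 3] += eta[I] * eta[K] * eps(I, J, K)  # [J^I,tau_J]: tau-part
        f[J + 3][I][K] -= C[J][K][I]
        f[J + 3][I][K + 3] -= eta[I] * eta[K] * eps(I, J, K)
    return f

def pairing(a, b):
    # <J^I, tau_J> = delta^I_J, all other pairings vanish
    return 1.0 if abs(a - b) == 3 else 0.0

for sigma, s in ((1, -1), (-1, 1), (-1, -1)):
    f = double_structure(sigma, s, 1.5)
    for a, b, c in product(range(6), repeat=3):
        lhs = sum(f[a][b][d] * pairing(d, c) for d in range(6))  # <[X_a,X_b],X_c>
        rhs = sum(f[b][c][d] * pairing(a, d) for d in range(6))  # <X_a,[X_b,X_c]>
        assert lhs == rhs
```

The invariance holds exactly in all three cases admitting a real $n$.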
\medskip
The Drinfeld double decomposition of ${\mathfrak{d}}_{\sigma s}$ is given by the Iwasawa decomposition
\begin{equation}
{\mathfrak{d}}_{\sigma s}=\mathfrak{su}_{\sigma s} \bowtie \mathfrak{an}\sim \mathfrak{an} \bowtie \mathfrak{su}_{\sigma s}.
\end{equation}
Such an Iwasawa decomposition does not exist for ${\mathfrak{d}}_{++}\sim\mathfrak{so}(4)$, which is why we do not consider it. The cross commutator \eqref{crosscom} includes an action
of $\mathfrak{su}_{\sigma s}$ on $\mathfrak{an}$
and a retro-action of $\mathfrak{an}$ on $\mathfrak{su}_{\sigma s}$. We can isolate the different actions by considering the projections of the cross commutator \cite{Majid:1996kd}:
\begin{eqnarray}\label{not0}
{{\bf J}} ^I\rhd \tau_J \equiv [{{\bf J}} ^I, \tau_J]_{{\mathfrak{an}}}&= \epsilon^{I}{}_{J}{}^K \tau_K, \quad& {{\bf J}} ^I\lhd \tau_J \equiv [{{\bf J}} ^I, \tau_J]_{{\mathfrak{su}}}= C_{JK}{}^I {{\bf J}} ^K \\
\tau_J \lhd {{\bf J}} ^I \equiv [ \tau_J, {{\bf J}} ^I]_{{\mathfrak{an}}}&=- {{\bf J}} ^I\rhd \tau_J , \quad &\tau_J \rhd {{\bf J}} ^I \equiv [ \tau_J, {{\bf J}} ^I]_{{\mathfrak{su}}}= -{{\bf J}} ^I\lhd \tau_J . \label{not1}\end{eqnarray}
The relations \eqref{Jcom}, \eqref{Pcom} and \eqref{crosscom} are the counterparts of \eqref{algebra}. They are the defining relations of ${\mathfrak{d}}_{\sigma s}$ as a Drinfeld double of $\mathfrak{su}$ (with a non-trivial cocycle) \cite{Chari:1994pz}.
Again, we emphasize that with the convention we took, the $\mathfrak{an}$ sector always singles out the direction 3 and is independent of $(\sigma,s)$. As we will see later, the function algebra over the Lie group $\AN$ is isomorphic to the enveloping algebra ${\cU_{q}(\su(2))}$ which is always defined with the preferred direction 3.
\medskip
Given $\alpha,\beta \in \mathfrak{su}$ and $\phi,\psi \in \mathfrak{an}$ we can summarize the Drinfeld double algebra as \cite{Majid:1996kd}
\begin{eqnarray} \label{sub lie algebra}
&&[\alpha,\beta ] = (\alpha\times \beta)_I {{\bf J}} ^I,\qquad
[\phi,\psi]= ((\phi\times \psi)\times n)^I \tau_I \\
\label{cross terms} &&
\alpha \rhd \phi = (\alpha \times \phi)^I\tau_I= -\phi\lhd\alpha, \quad \alpha\lhd \phi = (\phi\times( \alpha \times n))_I {{\bf J}} ^I=-\phi\rhd\alpha.
\end{eqnarray}
and the cross-commutator is
\begin{equation}
[\alpha ,\phi]= \alpha \rhd \phi+ \alpha\lhd \phi = - \phi \lhd \alpha - \phi\rhd \alpha ,
\end{equation}
in accordance with \eqref{not0}, \eqref{not1} and \eqref{newcom2}.
\medskip
\paragraph{Role of the matrix ${\mathfrak{r}}$. } As we have seen, the source of the deformation of the boundary symmetry algebra
is contained\footnote{ This should not be confused with the r-matrix
$r$ of the double introduced later.} in the ``little'' r-matrix ${\mathfrak{r}}_{IJ} =\epsilon_{IJK}n^K$
that sources the canonical transformation.
Let us clarify the algebraic role of ${\mathfrak{r}}$. This r-matrix can be seen as building up the $\mathfrak{an}$ Lie algebra structure from the $\mathfrak{su}$ Lie algebra. First, let us define the two operators
${\mathfrak{r}}_\pm : \mathfrak{an} \to \mathfrak{su}$, given by
\begin{equation}
{\mathfrak{r}}_\pm( \tau_I) ={\mathfrak{r}}_{IJ} {{\bf J}} ^J \pm \sqrt{\sigma \Lambda}\, \eta_{IJ} {{\bf J}} ^J.
\end{equation}
They allow us to recover the $\mathfrak{an}$ Lie bracket from the $\mathfrak{su}$ bracket:
\begin{equation}
[\phi,\psi]_\mathfrak{an}= [{\mathfrak{r}}_+(\phi),{\mathfrak{r}}_+(\psi)]_\mathfrak{su}-[{\mathfrak{r}}_-(\phi),{\mathfrak{r}}_-(\psi)]_\mathfrak{su}.
\end{equation}
Moreover these operators are \emph{Lie algebra morphisms}.
Given two elements $\phi,\psi \in \mathfrak{an}$ we have
\begin{equation}
{\mathfrak{r}}_\pm([\phi,\psi]_\mathfrak{an})= [{\mathfrak{r}}_\pm(\phi) , {\mathfrak{r}}_\pm(\psi)]_\mathfrak{su}.
\end{equation}
This morphism property is equivalent to the identity $n^2=-\Lambda $ together with
\begin{eqnarray}
(\phi\times n) \times (\psi \times n) - \sigma n^2 (\phi\times \psi)
&=& ((\phi\times \psi) \times n)\times n, \cr
\phi\times (\psi \times n) - \psi\times (\phi \times n)
&=& (\phi\times \psi)\times n,
\end{eqnarray}
which are consequences of the cross-product identity
$
(\alpha\times \beta)\times \gamma = \sigma[ (\alpha\cdot \gamma)\beta -\alpha (\gamma\cdot \beta) ].$
This key property of the matrix ${\mathfrak{r}}$ goes back to the work of Semenov-Tian-Shansky
\cite{SemenovTianShansky:1985my,SemenovTianShansky:1993ws}.
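Both bilinear relations above can be confirmed numerically. The sketch below (our own, using the same metric-dependent cross product $(\alpha\times\beta)^I=\eta^{IL}\epsilon_{LJK}\alpha^J\beta^K$ with $\eta_{IJ}=\mathrm{diag}(+,+,\sigma)$) tests them on random integer vectors:

```python
from itertools import product
import random

def eps(i, j, k):
    # Levi-Civita symbol with eps(0,1,2) = +1
    return ((j - i) * (k - i) * (k - j)) // 2 if {i, j, k} == {0, 1, 2} else 0

def cross(a, b, eta):
    # (a x b)^I = eta^{IL} eps_{LJK} a^J b^K (diagonal eta with +-1 entries)
    return [eta[L] * sum(eps(L, J, K) * a[J] * b[K]
                         for J, K in product(range(3), repeat=2))
            for L in range(3)]

def dot(a, b, eta):
    return sum(eta[I] * a[I] * b[I] for I in range(3))

random.seed(1)
for sigma in (1, -1):
    eta = [1, 1, sigma]
    for _ in range(50):
        phi, psi, n = ([random.randint(-4, 4) for _ in range(3)] for _ in range(3))
        n2 = dot(n, n, eta)
        # (phi x n) x (psi x n) - sigma n^2 (phi x psi) = ((phi x psi) x n) x n
        lhs1 = [cross(cross(phi, n, eta), cross(psi, n, eta), eta)[I]
                - sigma * n2 * cross(phi, psi, eta)[I] for I in range(3)]
        assert lhs1 == cross(cross(cross(phi, psi, eta), n, eta), n, eta)
        # phi x (psi x n) - psi x (phi x n) = (phi x psi) x n
        lhs2 = [cross(phi, cross(psi, n, eta), eta)[I]
                - cross(psi, cross(phi, n, eta), eta)[I] for I in range(3)]
        assert lhs2 == cross(cross(phi, psi, eta), n, eta)
```

The second relation is just the Jacobi identity of the cross product; the first follows from the double-cross-product identity together with the cyclicity of the triple product.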
\medskip
\paragraph{Gauge theory for a Drinfeld double algebra. }
The frame field is now valued in $\mathfrak{an}$, $e\equiv e^I\tau_I$, whereas the connection $ \omega$ still takes values in $\mathfrak{su}$, $ \omega\equiv \omega^I {{\bf J}} _I$. We can rewrite the momentum and angular momentum densities,
respectively $\mathcal{P}'=\mathcal{P}'^I{{\bf J}} _I\in \mathfrak{su} $ and $ \mathcal{J}'=\mathcal{J}'^I\tau_I\in \mathfrak{an}$, as objects valued in the different subalgebras, in terms of their respective Lie brackets and actions:
{ \begin{eqnarray}
\mathcal{P}'&=& \mathrm{d} \omega +{\frac{1}{2}} [ \omega , \omega ] + \omega \lhd e,\label{gen curv} \\
\mathcal{J}'&=&\mathrm{d} e
+{\frac{1}{2}} [e , e] + \omega {\rhd} e . \label{gen tor}
\end{eqnarray}}
We can also rewrite the covariant derivatives \eqref{transfosu2} and \eqref{transfoan1} in the different directions. For scalar fields $\alpha=\alpha^I{{\bf J}} _I \in \mathfrak{su}$ and $\phi=\phi^I\tau_I\in \mathfrak{an}$, we have
{ \begin{eqnarray}
D \alpha &=& \mathrm{d} \alpha + [ \omega,\alpha ] + e\rhd \alpha, \nonumber \\
&=&\mathrm{d} \alpha + \omega \times \alpha + e \times( n\times \alpha)\\
\tilde{D} \phi &=& \mathrm{d} \phi+[ e ,\phi ] + \omega \rhd \phi \nonumber\\
&=& \mathrm{d} \phi+ (e\times \phi)\times n + \omega \times \phi.
\end{eqnarray}
Another way to recover these relations is to consider the total connection ${\cal A} = \omega + e$ with value in ${\mathfrak{d}}$, as we do in Appendix \ref{big-guy}.}
One can check that the covariant derivatives satisfy the metric compatibility condition
\begin{equation}
\mathrm{d} (\alpha \cdot \phi) = D\alpha\cdot\phi + \alpha \cdot\tilde{D}\phi.
\end{equation}
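Beyond the Leibniz rule for $\mathrm{d}$, this compatibility reduces to two pointwise cancellations among the cross-product terms of $D$ and $\tilde{D}$. Here is a minimal numerical check for the Euclidean case $\sigma=+1$, with random vectors standing in for the components of $n$, $\omega$, $e$, $\alpha$, $\phi$ (illustrative only, not part of the formalism):

```python
import numpy as np

rng = np.random.default_rng(1)
n, w, ee, alpha, phi = rng.standard_normal((5, 3))   # stand-ins for n, omega, e, alpha, phi

# rotation sector: (w x alpha).phi + alpha.(w x phi) = 0
rot = np.dot(np.cross(w, alpha), phi) + np.dot(alpha, np.cross(w, phi))
assert np.isclose(rot, 0)

# translation sector: (e x (n x alpha)).phi + alpha.((e x phi) x n) = 0
tra = (np.dot(np.cross(ee, np.cross(n, alpha)), phi)
       + np.dot(alpha, np.cross(np.cross(ee, phi), n)))
assert np.isclose(tra, 0)
```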
As anticipated in \eqref{transfosu2} and \eqref{transfoan2}, the symmetry transformations, parametrized by either $\alpha\in \mathfrak{su}$ or $\phi\in \mathfrak{an}$, can be specified in terms of these new covariant derivatives.
\begin{equation}
\begin{array}{lll}
\delta'_{\alpha} \omega = D \alpha , &\,\,&
\delta'_{\alpha} e = e \lhd \alpha,
\label{transf1} \\
\delta'_\phi \omega = \omega \lhd \phi ,
&\,\,& \delta'_\phi e = \tilde{D} \phi.
\end{array}
\end{equation}
These imply the following transformations for the momentum densities,
\begin{equation}\label{transf F}
\begin{array}{lll}
\delta_{\alpha} \mathcal{P}' = [\mathcal{P}', \alpha] + \mathcal{J}'\rhd \alpha , &\qquad& \delta_{\alpha} \mathcal{J}'= \mathcal{J}' \lhd \alpha \\
\delta_\phi \mathcal{P}'= \mathcal{P}'\lhd \phi , & \qquad& \delta_\phi \mathcal{J}'= [\mathcal{J}', \phi] + \mathcal{P}' \rhd \phi .
\end{array}
\end{equation}
It is worth noticing that these transformations now have a symmetric expression, since we have an action of $\mathfrak{su}$ on $\mathfrak{an}$ and a back-action
of $\mathfrak{an}$ on $\mathfrak{su}$.
\medskip
Finally, we use the Killing form and the fields with value in their respective algebra to define the symplectic form that we are going to discretize in the next section.
\begin{equation}
\Omega =\int_\Sigma \left\langle \delta e \curlywedge \delta A \right\rangle = \int_\Sigma \left\langle \delta e \curlywedge \delta \omega \right\rangle.
\end{equation}
\section{Recovering the deformed loop gravity phase space}\label{sec:main}
We now intend to use the recent understanding of the discretization of gauge theories \cite{Donnelly:2016auv}. Such a discretization consists of two steps: a \textit{subdivision} followed by a \textit{truncation} of the degrees of freedom. We will use this to derive the discretized symplectic form, which will allow us to identify the discretized phase space variables. The quantization of these variables will make obvious how the quantum group structure appears.
\subsection{Subdivision and truncation}
By subdivision, we mean that we decompose the (2d) Cauchy slice $\Sigma$ into a collection of subregions. This provides a \emph{cellular decomposition} of space in terms of cells of different dimensions. The cells of maximal dimension are denoted $c_i^*$, where $i$ labels the cell dual to the center $c_i$, see Fig. \ref{loop04}. In terms of this subdivision, the symplectic form becomes
\begin{equation}
\Omega = \int_\Sigma \left\langle \delta e \curlywedge \delta \omega \right\rangle = \sum_i\int_{c_i^*} \left\langle \delta e \curlywedge \delta \omega \right\rangle.
\end{equation}
To evaluate $\Omega$, we will perform a \textit{truncation} of the degrees of freedom, which is in a sense the core of the discretization process. We will assume that any matter degrees of freedom are localized on the vertices $v$ of the triangulation. A proper treatment of such defects could be done as in \cite{Freidel:2018pbr, Shoshany:2019ymo}; here, however, we will neglect them and leave their careful study for later.
Truncation refers to the fact that in each subregion one chooses a particular vacuum state, or a particular family of solutions of the constraints:
{ \begin{eqnarray}
0&=& \mathrm{d} \omega +{\frac{1}{2}} [ \omega , \omega ] + \omega \lhd e, \label{defCurv} \\
0&=&\mathrm{d} e +{\frac{1}{2}} [e , e] + \omega {\rhd} e \label{sols}.\label{defGauss}
\end{eqnarray}}
Once this is done, the systems attached to the subregions carry representations of the boundary symmetry group. The choice of discretization scheme is achieved once we choose a representation of the boundary symmetry.
Let us identify the solutions of \eqref{sols} in a subregion $c^*$. For this, it is convenient to consider a ${\mathfrak{d}}_{\sigma s}=\mathfrak{su}_{\sigma s}\bowtie \mathfrak{an}$ valued connection ${\cal A} = \omega + e$. The associated curvature tensor is given by (see Appendix \ref{big-guy})
{ \begin{equation}\label{r3}
{\cal F} = ( \mathrm{d} \omega +{\frac{1}{2}} [ \omega , \omega ] + \omega \lhd e)_I{{\bf J}} ^I + (\mathrm{d} e +{\frac{1}{2}} [e , e] + \omega {\rhd} e )_J\tau^J.
\end{equation}}
The gauge transformations for the connection ${\cal A}$ are given in terms of the group ${\mathfrak{D}}_{\sigma s}\sim\mathrm{SU}_{\sigma s}\bowtie\AN$. This splitting is in general only local, except for the cases ${\mathfrak{D}}_{+,-}=\SL(2,\mathbb{C})\sim\mathrm{SU}(2)\bowtie\AN$ (Euclidean case with $\Lambda<0$) and when $\Lambda=0$, where the splitting is global. For simplicity, we only focus on the component connected to the identity.
Demanding that \eqref{defCurv} and \eqref{sols} are satisfied is the same as demanding that the connection ${\cal A}$ is flat, hence it has to be pure gauge. Let us consider the ${\mathfrak{D}}_{\sigma s}$ holonomy $G_c(x)$ connecting a reference point $c$ in $c^*$ to a point $x$ still in $c^*$ (see Fig. \ref{loop04}).
In the component connected to the identity, we have a unique decomposition of $G_c(x)$ as
$G_c(x)=\ell_{c}(x) h_{c}(x)$ where $\ell_{c}(x)\in \AN$ and $h_{c}(x)\in \mathrm{SU}$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.5]{loop04}
\caption{The two subregions/triangles $c^*$ and $c'^*$ with their respective reference point/center $c$ and $c'$. $\Gamma$ is the dual 2-complex. The segment $[cc']$ forms a link, dual to the edge $[vv']$ shared by $c^*$ and $c'^*$. The $\AN$ and $\mathrm{SU}$ holonomies $\ell_{cx}$ and $h_{cx}$ are based at $c$ and go to a point $x$ in the cell $c^*$. These holonomies can be put together as a single ${\mathfrak{D}}_{\sigma s}$ holonomy $G_c(x)=\ell_{cx}h_{cx}$.}
\label{loop04}
\end{center}
\end{figure}
We will often omit the $x$ dependence in the notation. The solutions to the constraints are given by
\begin{equation} {\cal A}_{|_{c^*}} = \omega_{|_{c^*}}+e_{|_{c^*}}= G_{c}^{-1} \mathrm{d} G_{c} = (\ell_{c}h_{c})^{-1} \mathrm{d} (\ell_{c}h_{c}),\end{equation}
which, in terms of components, gives (we recall that the Lie algebra $\mathfrak{an}$ is not stable under the adjoint action of $\mathrm{SU}$)
\begin{eqnarray}
\omega_{|_{c^*}} & =& h_{c}^{-1} \text{d}h_{c} \: + \: \left(h_{c}^{-1} (\ell_c^{-1} \text{d}\ell_{c}) h_{c}\right)_{|_{\mathfrak{su}}}\label{con1}\\
e_{|_{c^*}} &=& \left(h_{c}^{-1} (\ell_{c}^{-1} \text{d} \ell_{c}) h_{c}\right)_{|_{\mathfrak{an}}} \,. \label{con2}
\end{eqnarray}
When considering an infinitesimal transformation, we recover the transformations \eqref{transf1} for $e=0$ and $ \omega=0$. These solutions are also the deformed version of the standard discrete picture with $\Lambda=0$ (and $n=0$),
\begin{equation}
\omega_{|_{c^*}} = h_{c}^{-1} \text{d}h_{c} , \quad e_{|_{c^*}} = h_{c}^{-1}\mathrm{d} X \, h_{c}, \quad \textrm{with } \ell \equiv X\in \mathbb{R}^3.
\label{standard 1}
\end{equation}
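For the global case $\SL(2,\mathbb{C})$, the unique splitting $G=\ell h$ can be computed numerically from a Cholesky factorization, since $GG^\dagger=\ell\ell^\dagger$. Below is a minimal numerical sketch, with $\AN$ realized as lower-triangular matrices with positive real diagonal and unit determinant (one common convention, which may differ from the conventions used here):

```python
import numpy as np

def random_sl2c(rng):
    """A generic SL(2,C) element, by normalizing a random complex matrix."""
    g = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return g / np.sqrt(np.linalg.det(g))

def iwasawa_left(g):
    """Split g = l h with l in AN (lower triangular, positive real diagonal,
    unit determinant) and h in SU(2), using g g^dag = l l^dag."""
    l = np.linalg.cholesky(g @ g.conj().T)
    return l, np.linalg.inv(l) @ g

rng = np.random.default_rng(2)
g = random_sl2c(rng)
l, h = iwasawa_left(g)

assert np.allclose(l @ h, g)                   # the split reproduces g
assert np.isclose(l[0, 1], 0)                  # l is triangular (AN)
assert np.isclose(np.linalg.det(l), 1)         # det l = 1
assert np.allclose(h @ h.conj().T, np.eye(2))  # h is unitary
assert np.isclose(np.linalg.det(h), 1)         # det h = 1: h in SU(2)
```

The right decomposition $G={\tilde h}{\tilde \ell}$ is obtained the same way by splitting $G^{-1}$ and inverting the factors.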
Before identifying the truncated symplectic form, it will be convenient to rewrite the restriction $\Omega_{|_c}$ of $\Omega$ to the cell $c^*$ as
\begin{equation}
\Omega_{|_c}= \int_{c^*} \left\langle \delta e \curlywedge \delta \omega \right\rangle = \tfrac12 \int_{c^*} \left\langle \delta {\cal A} \curlywedge \delta {\cal A} \right\rangle.
\end{equation}
The truncation then imposes ${\cal A}=G^{-1}_c\mathrm{d} G_c $, so that
\begin{equation}
\Omega_{|_c}\approx \Omega_c\equiv {\frac{1}{2}} \int_{c^*}\left\langle \delta (G_c^{-1} \mathrm{d} G_c) \curlywedge \delta (G_c^{-1} \mathrm{d} G_c) \right\rangle = \delta \Theta_c,
\end{equation}
where $\approx$ means we went on-shell, i.e., we truncated the number of degrees of freedom. $\Theta_c ={\frac{1}{2}} \int_{c^*} \left\langle G_c^{-1} \mathrm{d} G_c \curlywedge \delta (G_c^{-1} \mathrm{d} G_c) \right\rangle$ is the truncated symplectic potential.
\medskip
The next steps will consist in evaluating $\sum_i \Omega_{c_i}= \sum_i \delta \Theta_{c_i} $ in order to identify the discretized variables and their phase space structure.
\medskip
An important first step is to realize that the relevant data is actually the boundary data of the subregion (as we could already guess from the charge analysis in the continuum). From now on, we will make extensive use of the notation
\begin{equation}\label{defD}
{\stackrel[R]{}{\Delta} } u :=\delta uu ^{-1},\qquad {\overline \Delta} u :=u ^{-1} \delta u
\end{equation} for some group element $u$.
${\stackrel[R]{}{\Delta} } u$ is right invariant, ${\stackrel[R]{}{\Delta} }(ug)={\stackrel[R]{}{\Delta} } u$, while ${\overline \Delta} u $ is left invariant,
${\overline \Delta}(gu) ={\overline \Delta} u $, for any field-independent group element $g$ (i.e.\ $\delta g=0$).
\begin{proposition} \label{prop1}In the component connected to the identity, where ${\mathfrak{D}}_{\sigma s}=\mathrm{SU} \bowtie \AN\ni G =\ell_c h_c$, there exist a boundary symplectic potential $\vartheta$ and a boundary Lagrangian $L_\partial$ given by
\begin{equation}
\vartheta:= - \left\langle \ell_c^{-1} \mathrm{d} \ell_c , \Delta h_c \right\rangle ,
\qquad
L_\partial:= \tfrac12 \left\langle \mathrm{d} h_c h_c^{-1} {\wedge} \ell_c^{-1} \mathrm{d} \ell_c\right\rangle,
\end{equation}
such that $\Theta_c$ decomposes as a sum of a total derivative and a total variation
\begin{equation}
\Theta_c = \int_{c^*} \left(\mathrm{d} \vartheta + \delta L_\partial \right).
\end{equation}
As a corollary we have that $\Omega_c = \delta \Theta_c = \int_{c^*} \mathrm{d} \delta \vartheta = \int_{\partial c^*} \delta \vartheta $ is a pure boundary term.
\noindent $\blacksquare$
\end{proposition}
Let us prove this proposition. We will omit the index $c$ to simplify the notation. Some useful relations are given by
\begin{eqnarray}
G^{-1} \mathrm{d} G &=& h^{-1} \mathrm{d} h + h^{-1} (\ell^{-1} \mathrm{d} \ell) h \\
\delta (G^{-1} \mathrm{d} G) &=& h^{-1} \mathrm{d} \Delta h h+\delta ( h^{-1} (\ell^{-1} \mathrm{d} \ell)h) \cr
&=&
h^{-1} (\mathrm{d} \Delta h + [(\ell^{-1} \mathrm{d} \ell),\Delta h ]+ \delta (\ell^{-1} \mathrm{d} \ell) ) h
\end{eqnarray}
Using these, we directly get
\begin{eqnarray}
2\Theta &=& \int \left\langle G^{-1} \mathrm{d} G \wedge\delta (G^{-1} \mathrm{d} G) \right\rangle\cr
&=& \int \left\langle (\ell^{-1} \mathrm{d} \ell) \wedge (\mathrm{d} \Delta h + [(\ell^{-1} \mathrm{d} \ell),\Delta h ]) \right\rangle
+ \left\langle h^{-1} \mathrm{d} h , \delta ( h^{-1} (\ell^{-1} \mathrm{d} \ell)h)\right\rangle
\cr
&=&\int - \mathrm{d} \left\langle (\ell^{-1} \mathrm{d} \ell) \wedge \Delta h \right\rangle + \frac12 \left\langle [\ell^{-1} \mathrm{d} \ell, \ell^{-1} \mathrm{d} \ell] \wedge \Delta h \right\rangle \cr
&& + \int \delta \left\langle \mathrm{d} h h^{-1}, \ell^{-1} \mathrm{d} \ell\right\rangle -
\left\langle \mathrm{d} {\stackrel[R]{}{\Delta} } h, (\ell^{-1} \mathrm{d} \ell)\right\rangle
\cr&=&\int - 2\mathrm{d} \left\langle (\ell^{-1} \mathrm{d} \ell) \wedge \Delta h \right\rangle
+ \delta \left\langle \mathrm{d} h h^{-1}\wedge (\ell^{-1} \mathrm{d} \ell)\right\rangle.
\end{eqnarray}
This establishes the result.
\medskip
Therefore the symplectic form associated with a cell $c^*$ can be written as
a sum of boundary edge contributions
\begin{equation}
\Omega_{|_c}=\delta \Theta_{|_c}=\delta \int_{c^*} \left\langle e \wedge \delta \omega\right\rangle \approx \Omega_c
= \sum_{e \in \partial c^*} \Omega_{c}^{e},
\end{equation}
where each contribution in the sum is given by
\begin{equation} \label{Thetac}
\Omega_c^e =\delta \Theta_c^e,\quad \Theta_c^e := -\int_e \left\langle\ell^{-1}_c \mathrm{d} \ell_c , {\stackrel[R]{}{\Delta} } h_c \right\rangle.
\end{equation}
\subsection{From holonomy to ribbon and Heisenberg double}
\subsubsection{From holonomies to ribbons }
The different subregions $ c^*$ and $ c'^*$ share common boundaries; such a common boundary is referred to as an edge $e$. The variables evaluated on an edge can thus be related through the transformations relating the frames associated to the two triangles. As we will see, this generates simplifications in the total symplectic form $\sum_{i}\Omega_{c_i}$.
Let us now focus on two cells $c^*$ and $c'^*$, sharing the edge $e=[vv']$, where $v$ and $v'$ are vertices of the cellular decomposition.
As a set we have $c^*\cap c'^*=[vv']$; in addition, $[vv']$ possesses an orientation induced by the orientation of $c$, see Fig.~\ref{loop04}.
The edge $[vv']$ thus receives two contributions, one from each of the two cells sharing it,
\begin{equation} \label{split 2}
\Omega_{c c'} \equiv \Omega_{ c}^{[vv']} + \Omega_{ c'}^{[v'v]}=
\Omega_{ c}^{[vv']} - \Omega_{ c'}^{[vv']},
\end{equation}
where the sign changes because the edge $[vv']$ has a different orientation depending on whether it belongs to the boundary of
$ c^*$ or $ c'^*$.
On the boundary $[vv']$, the different fields can be combined into ${\mathfrak{D}}_{\sigma s}$ holonomies $G_{c_i} = \ell_{c_i} h_{c_i}$, with
$\ell_{c_i} \in \AN$ and $h_{c_i} \in \mathrm{SU}_{\sigma s}$, which are related by a ${\mathfrak{D}}_{\sigma s}$-transformation.
The continuity equation states that the connection evaluated on $[vv']$ can be expressed either from the perspective of the frame of $c^*$ or from that of $c'^*$:
\begin{equation}
{\cal A}(x) =(G_c^{-1}\mathrm{d} G_c)(x) = (G_{c'}^{-1}\mathrm{d} G_{c'})(x), \qquad x\in [vv'].
\end{equation}
This differential equation can be integrated. Indeed, the group elements $ G_{c}(x)\equiv G_{cx}$ and $ G_{c'}(x)\equiv G_{c'x}$ are evaluated at the same point $x\in [vv']$, and since the connection is flat, there exists a holonomy ${\cal G}_{c'c}=L_{c'c} H_{c'c}$ such that $G_{c'}(x)={\cal G}_{c'c} G_{c}(x)$.
Note that for any given holonomy $G_{xy}$ connecting $x$ to $y$, we take the convention $G_{yx}\equiv G_{xy}^{-1} $.
The differential continuity equation reads
\begin{equation}
\partial_x \left(G_{c x}G_{x c'}\right)=0, \qquad x\in [vv'].
\end{equation}
This implies the integrated continuity condition
\begin{equation}\label{intcont}
G_{c v}G_{v c'}= G_{c v'}G_{v' c'}.
\end{equation}
Using the left Iwasawa decomposition $G_{cx}=\ell_{cx} h_{cx}$ in the cell $c^*$ and the right one\footnote{{ We note that since the inverse is an antihomomorphism, if $G_{xy}=\ell_{xy} h_{xy}$ then $G_{yx}=G_{xy}^{-1}=h_{xy}^{-1} \ell_{xy}^{-1}= h_{yx}\ell_{yx}$: the right decomposition of the inverse is the analogue of the left decomposition.
}} $G_{xc'}=h_{xc'} \ell_{xc'}$ in $c'^*$, we can rewrite this condition as
\begin{equation}
\ell_{cv} h_{cv} h_{vc'} \ell_{vc'}= \ell_{cv'} h_{cv'} h_{v'c'} \ell_{v'c'}
\quad \Leftrightarrow \quad h_{cv} h_{vc'} \ell_{vc'}\ell_{c'v'}= \ell_{vc}\ell_{cv'} h_{cv'} h_{v'c'}.
\end{equation}
In other words, once we introduce the {\it triangular holonomies}
\begin{equation}
L_{vv'}^c\equiv \ell_{vc}\ell_{cv'}\in \AN, \qquad H_{cc'}^v \equiv h_{cv} h_{vc'}\in \mathrm{SU}_{\sigma s},
\end{equation}
we can express the integrated continuity equation (\ref{intcont}) as the
\emph{ribbon structure}, see Fig. \ref{loop03},
\begin{equation} \label{iwatop}
L_{vv'}^c H_{cc'}^{v'} = H_{cc'}^v L_{vv'}^{c'}.
\end{equation}
The triangular holonomies are the classical analogues of Kitaev's triangle operators \cite{Kitaev1997,triangle}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.30]{pic2}
\caption{The constraint \eqref{iwatop} provides the natural way to define a ribbon structure associated to each link $[cc']$. It encodes that the holonomy around the ribbon is trivial. }
\label{loop03}
\end{center}
\end{figure}
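The ribbon bookkeeping can be illustrated numerically: fix the holonomies on the $c$ side at random, solve the integrated continuity condition \eqref{intcont} for the $c'$ side using the Iwasawa splitting, and verify \eqref{iwatop}. The following is a hedged sketch for $\SL(2,\mathbb{C})$, with $\AN$ realized as lower-triangular matrices with positive real diagonal (an illustrative convention):

```python
import numpy as np

def iwasawa_left(g):
    l = np.linalg.cholesky(g @ g.conj().T)     # g g^dag = l l^dag
    return l, np.linalg.inv(l) @ g             # g = l h

def iwasawa_right(g):
    l, h = iwasawa_left(np.linalg.inv(g))
    return np.linalg.inv(h), np.linalg.inv(l)  # g = h l

def random_an(rng):
    lam = np.exp(rng.standard_normal())
    z = rng.standard_normal() + 1j * rng.standard_normal()
    return np.array([[lam, 0], [z, 1 / lam]])

def random_su2(rng):
    a = rng.standard_normal(4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

rng = np.random.default_rng(3)
# holonomies through v (the c side), chosen freely
l_cv, l_vcp = random_an(rng), random_an(rng)
h_cv, h_vcp = random_su2(rng), random_su2(rng)
D = l_cv @ h_cv @ h_vcp @ l_vcp                # G_{cv} G_{vc'}

# holonomies through v': choose l_cv' and h_v'c' freely and solve
# D = l_cv' h_cv' h_v'c' l_v'c'  using the right Iwasawa split
l_cvp, h_vpcp = random_an(rng), random_su2(rng)
h_t, l_vpcp = iwasawa_right(np.linalg.inv(l_cvp) @ D)
h_cvp = h_t @ np.linalg.inv(h_vpcp)

# triangular holonomies (recall the convention G_yx = G_xy^{-1})
L_c  = np.linalg.inv(l_cv) @ l_cvp      # L^c_{vv'}    = l_{vc} l_{cv'}
L_cp = l_vcp @ np.linalg.inv(l_vpcp)    # L^{c'}_{vv'} = l_{vc'} l_{c'v'}
H_v  = h_cv @ h_vcp                     # H^v_{cc'}    = h_{cv} h_{vc'}
H_vp = h_cvp @ h_vpcp                   # H^{v'}_{cc'} = h_{cv'} h_{v'c'}

# ribbon relation: L^c H^{v'} = H^v L^{c'}
assert np.allclose(L_c @ H_vp, H_v @ L_cp)
```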
\subsubsection{Heisenberg double/phase space associated to a link}
Having such a ribbon structure points to a natural symplectic form \cite{Alekseev_1994}. In fact, we are going to prove that the explicit evaluation of $\Omega_{c c'}$, defined in \eqref{split 2}, is the natural symplectic form making ${\mathfrak{D}}_{\sigma s}$ a Heisenberg double, the generalization of the notion of a cotangent bundle as a phase space \cite{Alekseev_1994}.
\medskip
\begin{theorem}\label{thm}
The symplectic form associated to a link $[cc']$ is given by
\begin{eqnarray}\label{main}
\Omega_{c c'}&=& \Omega_{c}^{{[vv']}} -
\Omega_{ c'}^{[vv']} = {\frac{1}{2}} \left(\left\langle {\stackrel[R]{}{\Delta} } H^v _{cc'}\, \wedge\, {\stackrel[R]{}{\Delta} } L^{c}_{vv'}\right\rangle + \left\langle{\overline \Delta} H^{v'}_{cc'}\,\wedge\, {\overline \Delta} L^{c'}_{vv'}\right\rangle \right).
\end{eqnarray}
$\blacksquare$
\end{theorem}
\medskip
The proof of this result is presented in section \ref{proof}. This theorem can be seen as the main result of the paper.
Before proving the theorem it can be instructive to check that $\Omega_{c c'}$ is indeed closed \cite{Alekseev_1994}.
For notational simplicity, let us omit the indices and assume that
$\ell, {\tilde \ell} \in\AN$ and $h,{\tilde h} \in\mathrm{SU}$ form a ribbon structure
\begin{equation} { G\equiv \ell\, h= {\tilde h} \,\,{\tilde \ell} }. \label{iwa0}
\end{equation}
The 2-form $\Omega_{c c'}=\Omega$ can then be written as
\begin{equation}\label{goal}
\Omega= \frac12 \Omega_L +\frac12 \Omega_R,\qquad
\Omega_L := \left\langle {\stackrel[R]{}{\Delta} } {\tilde h} \wedge {\stackrel[R]{}{\Delta} } \ell \right\rangle
,\qquad
\Omega_R := \left\langle {\overline \Delta} h \wedge {\overline \Delta} {\tilde \ell} \right\rangle
\end{equation}
The variation of this equation implies that $ \Delta G= {\stackrel[R]{}{\Delta} } \ell + G {\overline \Delta} h G^{-1} $ and
$ \Delta G= {\stackrel[R]{}{\Delta} } {\tilde h} + G {\overline \Delta} {\tilde \ell} G^{-1}$, whence the identity
\begin{eqnarray}
{\stackrel[R]{}{\Delta} } \ell- {\stackrel[R]{}{\Delta} }{\tilde h} &=& G ({\overline \Delta} {\tilde \ell} - {\overline \Delta} h )G^{-1}.
\end{eqnarray}
Since $\delta {\stackrel[R]{}{\Delta} } {\tilde h} = {\stackrel[R]{}{\Delta} } {\tilde h} \wedge {\stackrel[R]{}{\Delta} } {\tilde h} $, and
$\delta {\overline \Delta} h=- {\overline \Delta} h\wedge {\overline \Delta} h$, one finds that
\begin{eqnarray}
\delta \Omega_L &=&\left\langle ({\stackrel[R]{}{\Delta} } {\tilde h} -{\stackrel[R]{}{\Delta} } \ell) \stackrel{\wedge}{,} {\stackrel[R]{}{\Delta} } {\tilde h} \wedge {\stackrel[R]{}{\Delta} } \ell \right\rangle
\cr
&=& \frac13 \left\langle ({\stackrel[R]{}{\Delta} } {\tilde h} -{\stackrel[R]{}{\Delta} } \ell) \stackrel{\wedge}{,} ({\stackrel[R]{}{\Delta} } {\tilde h} -{\stackrel[R]{}{\Delta} } \ell)\wedge ({\stackrel[R]{}{\Delta} } {\tilde h} -{\stackrel[R]{}{\Delta} } \ell) \right\rangle
\end{eqnarray}
We used in the second equality the fact that $\AN$ and $\mathrm{SU}$ are isotropic.
We find a similar result for $\Omega_R$ with $({\stackrel[R]{}{\Delta} } {\tilde h} -{\stackrel[R]{}{\Delta} } \ell)$ replaced by
$-({\overline \Delta} {\tilde \ell} - {\overline \Delta} h )$ and therefore $\delta \Omega_R =-\delta \Omega_L$, and $\Omega$ is closed.
Hence the Poisson bracket associated to $\Omega_{c c'}$ satisfies the Jacobi identity.
This phase space structure generalizes the usual notion of cotangent bundle.
\subsection{Drinfeld double as symmetry of the Heisenberg double}\label{sec:sym}
\paragraph{Matched pair of groups.} We recall that the decompositions of ${\mathfrak{D}}_{\sigma s}$ into $\AN$ and $\mathrm{SU}_{\sigma s}$ provide the definitions of actions of $\AN$ on $\mathrm{SU}_{\sigma s}$ and vice versa. This allows us to see ${\mathfrak{D}}_{\sigma s}$ as a matched pair of groups \cite{Majid:1996kd}.
\begin{eqnarray}
\ell h = {\tilde h} {\tilde \ell} = (\ell \rhd h )(\ell\lhd h ) \Rightarrow \ell \rhd h \equiv {\tilde h} , \quad \ell\lhd h \equiv \tilde{\ell}\\
{\tilde h} {\tilde \ell} = \ell h = ({\tilde h} \rhd {\tilde \ell} ) ({\tilde h} \lhd {\tilde \ell} )\Rightarrow {\tilde h} \rhd {\tilde \ell} \equiv \ell, \quad {\tilde h} \lhd {\tilde \ell} \equiv h.
\end{eqnarray}
Some of the compatibility properties of the actions are as follows.
\begin{eqnarray}
&&1\lhd h =1, \,\, \ell \lhd(h_1h_2)= (\ell \lhd h_1)\lhd h_2, \,\, (\ell_1\ell_2)\lhd h = (\ell_1\lhd(\ell_2\rhd h))(\ell_2\lhd h)\nonumber\\
&&\ell\rhd 1 =1, \,\, \ell \rhd(h_1h_2)= (\ell \rhd h_1)((\ell\lhd h_1)\rhd h_2), \,\, (\ell_1\ell_2)\rhd h = \ell_1\rhd(\ell_2\rhd h)\nonumber\\
&& (h^{-1} \rhd\ell^{-1}) = {\tilde \ell} ^{-1} =( \ell \lhd h )^{-1}, \quad (h^{-1} \lhd\ell^{-1}) = {\tilde h} ^{-1} =( \ell \rhd h )^{-1} \label{crossaction4}
\end{eqnarray}
where we used in the last line the inverse of \eqref{iwa0}, namely $h^{-1} \ell^{-1} = {\tilde \ell} ^{-1} {\tilde h} ^{-1}$. We have similar properties for the other actions in terms of ${\tilde h} $ and ${\tilde \ell} $.
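These compatibility properties can be checked numerically for $\SL(2,\mathbb{C})=\mathrm{SU}(2)\bowtie\AN$, defining the actions through the two Iwasawa splittings. The sketch below is illustrative only; $\AN$ is realized as lower-triangular matrices with positive real diagonal, a convention that may differ from the one used here:

```python
import numpy as np

def iwasawa_left(g):
    """g = l h: l in AN (lower triangular, positive diagonal), h in SU(2)."""
    l = np.linalg.cholesky(g @ g.conj().T)
    return l, np.linalg.inv(l) @ g

def iwasawa_right(g):
    """g = h l: h in SU(2), l in AN."""
    l, h = iwasawa_left(np.linalg.inv(g))
    return np.linalg.inv(h), np.linalg.inv(l)

def rhd(l, h):  # l |> h : SU(2) part of the right split of l h
    return iwasawa_right(l @ h)[0]

def lhd(l, h):  # l <| h : AN part of the right split of l h
    return iwasawa_right(l @ h)[1]

def random_an(rng):
    lam = np.exp(rng.standard_normal())
    z = rng.standard_normal() + 1j * rng.standard_normal()
    return np.array([[lam, 0], [z, 1 / lam]])

def random_su2(rng):
    a = rng.standard_normal(4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

rng = np.random.default_rng(4)
l1, l2 = random_an(rng), random_an(rng)
h1, h2 = random_su2(rng), random_su2(rng)

# defining relation: l h = (l |> h)(l <| h)
assert np.allclose(l1 @ h1, rhd(l1, h1) @ lhd(l1, h1))
# l |> (h1 h2) = (l |> h1)((l <| h1) |> h2)
assert np.allclose(rhd(l1, h1 @ h2), rhd(l1, h1) @ rhd(lhd(l1, h1), h2))
# (l1 l2) |> h = l1 |> (l2 |> h)
assert np.allclose(rhd(l1 @ l2, h1), rhd(l1, rhd(l2, h1)))
# (l1 l2) <| h = (l1 <| (l2 |> h))(l2 <| h)
assert np.allclose(lhd(l1 @ l2, h1), lhd(l1, rhd(l2, h1)) @ lhd(l2, h1))
```

All four relations hold exactly (up to round-off) by uniqueness of the splitting.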
\paragraph{General action of ${\mathfrak{D}}_{\sigma s}$ on itself.}
The Heisenberg double is defined in terms of the group ${\mathfrak{D}}_{\sigma s}$. The group ${\mathfrak{D}}_{\sigma s}$ acts on the left (or on the right) on itself.
\begin{eqnarray}\label{symactions}
\begin{array}{ccc}{\mathfrak{D}}_{\sigma s}\times {\mathfrak{D}}_{\sigma s} &{\,\rightarrow\,}& {\mathfrak{D}}_{\sigma s}\nonumber\\
(G' ,G)&{\,\rightarrow\,}& G' G \end{array} \qquad \begin{array}{ccc}{\mathfrak{D}}_{\sigma s}\times {\mathfrak{D}}_{\sigma s} &{\,\rightarrow\,}& {\mathfrak{D}}_{\sigma s}\nonumber\\
(G',G)&{\,\rightarrow\,}& G G' \end{array}.
\end{eqnarray}
Using either of the left or right decompositions $G=\ell h = {\tilde h} \tilde{\ell}$, and the left decomposition for $G'=\ell' h'$, $\ell'\in\AN, \, h'\in \mathrm{SU}_{\sigma s}$, we have, for the left action,
\begin{eqnarray}
G'G= \ell' h' \, \ell h= [\ell' (h'\rhd \ell)] [(h' \lhd \ell)h]= \ell' h' {\tilde h} \tilde{\ell} = (\ell' \rhd (h' {\tilde h} ))
(\ell' \lhd (h' {\tilde h} )) \tilde{\ell}.
\end{eqnarray}
The left and right actions of ${\mathfrak{D}}_{\sigma s}$ on itself encode the natural phase space symmetry actions and provide a discretization of the symmetries generated by the charges \eqref{chargedef}.
\medskip
\paragraph{Rotations on the left.} Let us consider the infinitesimal transformations associated to left transformations (the right transformations are obtained in an analogous manner).
Let us first look at the infinitesimal (left) action $\delta^L_\alpha$ of the rotations $h'\sim 1 + \alpha$, $\alpha\in\mathfrak{su}$ on $G\in {\mathfrak{D}}_{\sigma s}$.
\begin{equation} \label{rotsymleft}
h'{\,\triangleright\,} G = h' G \sim (1+\alpha )G \textrm{ with } G= \ell h = {\tilde h} \tilde{\ell}
\end{equation}
We immediately deduce the transformations
\begin{eqnarray}\label{whererot}
\delta^L_\alpha G= \alpha G, \quad
\delta^L_\alpha {\tilde h} = \alpha {\tilde h} , \quad \delta^L_\alpha \tilde{\ell} = 0.
\end{eqnarray}
The other transformations, $\delta^L_\alpha h , \delta^L_\alpha \ell $, require a bit more work.
We have
\begin{eqnarray}
h' \,\ell\, h = (h' \rhd\ell) \, (h' \lhd\ell) \, h \rightarrow \left| \begin{array}{l} h' \rhd\ell = h' \,\ell \, (h' \lhd\ell)^{-1}
= h' \,\ell\, (\ell^{-1} \rhd (h')^{-1})
\\
h'\rhd h= (h' \lhd\ell) \, h
\end{array}\right. .
\end{eqnarray}
So at the infinitesimal level\footnote{Note that we have
\begin{equation}\label{v to alpha}
(h' \lhd\ell)^{-1}
= (\ell^{-1} \rhd h'{}^{-1}) \Rightarrow - (\alpha \lhd\ell)
= - (\ell^{-1} \rhd \alpha). \nonumber
\end{equation}}, we have
\begin{eqnarray}
\delta^L_\alpha \ell &= &
\alpha \ell - \ell (\alpha \lhd \ell) , \label{infi rot ell}\\
\delta^L_\alpha h &=& (\alpha \lhd\ell) \, h.\label{infi rot h}
\end{eqnarray}
Since we deal with a matched pair of groups, the action and back-action can give rise to a twisted compatibility relation with the product \cite{Majid:1996kd}. In particular, for the action on the $\AN$ sector we have,
\begin{eqnarray}\label{braiding}
h\rhd (\ell_1\ell_2)= (h\rhd\ell_1) ((h\lhd\ell_1)\rhd \ell_2){\,\rightarrow\,} \delta_{\alpha} (\ell_1\ell_2)= (\delta_{\alpha} \ell_1)\ell_2 + \ell_1 (\delta_{\alpha \lhd \ell_1}\ell_2).
\end{eqnarray}
The action \eqref{infi rot ell} satisfies this condition.
\begin{eqnarray}
\delta_{\alpha} (\ell_1\ell_2)&=&
\alpha \ell_1\ell_2 - \ell_1\ell_2 (\alpha \lhd (\ell_1\ell_2))\nonumber\\
&=&\{ ( \alpha \ell_1 ) - \ell_1 (\alpha \lhd\ell_1) \}\ell_2 + \ell_1\{ (\alpha \lhd\ell_1) \ell_2 - \ell_2 (( \alpha \lhd\ell_1)\lhd \ell_2)\} \nonumber\\
&=&(\delta_{\alpha} \ell_1)\ell_2 + \ell_1(\delta_{\alpha \lhd\ell_1}\ell_2).
\end{eqnarray}
\medskip
\paragraph{Charge for the rotations on the left.} { In the continuum picture we have identified the charges $J'$ generating the rotational symmetry. The following proposition determines the corresponding charge in the discrete picture.
\begin{proposition}\label{prop:sym1}
The triangular holonomy $\ell=L^{c}_{vv'}$ generates the infinitesimal left rotations.
\begin{eqnarray}
\delta^L_\alpha \lrcorner\, \Omega_{c c'}
&=& \left\langle \alpha \, , \, {\stackrel[R]{}{\Delta} } \ell \right\rangle .\label{gausscharge}
\end{eqnarray}
\end{proposition}
We provide the proof in Appendix \ref{proofprop:sym1}. Geometrically, this (infinitesimal) rotation is located at $c$, as can be read from \eqref{whererot}, remembering that $\tilde{\ell} = L^{c'}_{vv'}$ and ${\tilde h} = H^v_{cc'}$.
\medskip
\paragraph{Generating left rotations with Poisson brackets.}
The Poisson bracket associated to the symplectic form can be obtained by inverting the symplectic form \cite{Alekseev_1994}. We can also directly infer it from the infinitesimal transformations. Indeed, as discussed in \cite{Babelon_1992}, since $\ell$ is the charge of the left rotation $\delta^L_\alpha$ we can recover from the action of $\delta^L_\alpha$ on $(\ell,{\tilde \ell} ,h,{\tilde h} )$
the Poisson bracket of $\ell$ with all the other components, using the correspondence
\begin{equation}\label{babelon1}
\delta^L_\alpha \cdot = -\left\langle \alpha\,,\,\{\ell_{1}\,,\, \cdot \} \ell_{1}^{-1} \right\rangle_{1} ,
\end{equation}
where we are using here the notation $\ell_{1}:= \ell{\otimes} 1$, $\ell_{2}:= 1{\otimes} \ell$ and $\left\langle,\right\rangle_{1}$ means we are contracting the first sector of the tensor product.
\begin{proposition}\label{prop:sym2}
The Poisson brackets implementing the infinitesimal transformation \eqref{babelon1} can be conveniently written in terms of the $r$-matrix \cite{Semenov1992}
\begin{equation}
r_-\equiv - \tau_I {\otimes} {{\bf J}} ^I,
\end{equation}
and are given by
\begin{eqnarray}
\poi{\ell_{1},\ell_{2}}= \com{r_-, \ell_{1}\ell_{2}}, &\, &
\poi{\ell_{1},h_{2}}=\ell_{1} r_- h_{2}, \\
\poi{\ell_{1}, \tilde{\ell}_{2}}= 0, &\, &
\,\,
\poi{\ell_{1}, {\tilde h} _{2}}= r_- \, \ell_{1} {\tilde h} _{2}. \nonumber
\end{eqnarray}
\end{proposition}
We provide the proof in Appendix \ref{proofprop:sym2}.}
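The consistency of such $r$-matrix brackets traces back to the classical Yang--Baxter equation. As a hedged illustration, one can verify the equation numerically for the standard $\mathfrak{sl}(2)$ classical $r$-matrix $r=\tfrac14 h\otimes h + e\otimes f$ in the defining representation (an analogous textbook example, not the basis $\tau_I\otimes{{\bf J}}^I$ used here):

```python
import numpy as np

# sl(2) Chevalley basis in the defining representation
h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])

# classical r-matrix r = (1/4) h (x) h + e (x) f, stored as a sum of pairs
pairs = [(h / 2, h / 2), (e, f)]
I2 = np.eye(2)

def embed(a, b, slots):
    """Place a and b in two of the three tensor slots of C^2 x C^2 x C^2."""
    mats = [I2, I2, I2]
    mats[slots[0]], mats[slots[1]] = a, b
    return np.kron(mats[0], np.kron(mats[1], mats[2]))

r12 = sum(embed(a, b, (0, 1)) for a, b in pairs)
r13 = sum(embed(a, b, (0, 2)) for a, b in pairs)
r23 = sum(embed(a, b, (1, 2)) for a, b in pairs)

def comm(x, y):
    return x @ y - y @ x

# classical Yang-Baxter equation: [r12, r13] + [r12, r23] + [r13, r23] = 0
cybe = comm(r12, r13) + comm(r12, r23) + comm(r13, r23)
assert np.allclose(cybe, 0)
```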
\medskip
\paragraph{Translations on the left.}
A similar calculation can be performed for the infinitesimal (left) translations $\delta^L_\phi$,
$\AN\ni \ell'\sim 1 + \phi$, $\phi\in\mathfrak{an}$.
\begin{equation} \label{transsymleft}
\ell'{\,\triangleright\,} G = \ell' G \sim (1+\phi )G \textrm{ with } G= \ell h = {\tilde h} \tilde{\ell}
\end{equation}
We again immediately deduce the transformations,
\begin{eqnarray}\label{wheretrans}
\delta^L_\phi G= \phi G, \quad
\delta^L_\phi \ell = \phi \ell , \quad \delta^L_\phi h = 0 ,
\end{eqnarray}
and the other transformations, $\delta^L_\phi {\tilde h} , \delta^L_\phi \tilde{\ell} $, require a bit more work.
We have
\begin{eqnarray}
\ell'\,{\tilde h} \,\tilde{\ell}\, = (\ell' \rhd{\tilde h} ) \, (\ell' \lhd{\tilde h} ) \, \tilde{\ell} \rightarrow \left| \begin{array}{l} \ell' \rhd{\tilde h} = \ell' \, {\tilde h} \, (\ell' \lhd{\tilde h} )^{-1} = \ell' \, {\tilde h} \,({\tilde h} ^{-1} \rhd \ell'^{-1}) \\
\ell' \rhd \tilde{\ell}= (\ell' \lhd{\tilde h} ) \, \tilde{\ell}
\end{array}\right. .
\end{eqnarray}
We note that the formulae are very similar to those of the left rotations determined above. This is natural, since the construction is by essence symmetric between the $\mathfrak{su}$ and $\mathfrak{an}$ sectors.
At the infinitesimal level, we have
\begin{eqnarray}
\delta^L_\phi {\tilde h} &= &\phi {\tilde h} - {\tilde h} ({\tilde h} ^{-1} \rhd \phi) = \phi {\tilde h} - {\tilde h} ( \phi \lhd {\tilde h} ) , \label{infi trans h}\\
\delta^L_\phi \tilde{\ell} &=& (\phi \lhd{\tilde h} ) \, \tilde{\ell}.\label{infi trans ell}
\end{eqnarray}
It is clear that the action \eqref{infi trans h} satisfies a twisted compatibility condition with the product of $\mathrm{SU}$, since the formula is very similar to \eqref{infi rot ell}.
\medskip
\paragraph{Charge for the translations on the left.} { In the continuum picture we have identified the charges generating the translation symmetry $P'$. The following proposition determines the corresponding charge in the discrete picture. As one could expect, the charge generating the left translation is now given by the $\mathrm{SU}$ holonomy.
\begin{proposition}\label{prop3}
The triangular holonomy ${\tilde h} =H^{v}_{cc'}$ generates the infinitesimal left translations.
\begin{eqnarray}
\delta^L_\phi \lrcorner\, \Omega_{c c'}
&=&
- \left\langle \phi ,\, {\stackrel[R]{}{\Delta} } {\tilde h} \, \right\rangle \label{transcharge}.
\end{eqnarray}
\end{proposition}
The proof of this proposition is very close to that of Proposition \ref{prop:sym1}, thanks to the symmetric treatment of the variables $\ell \leftrightarrow {\tilde h} $, $\tilde{\ell} \leftrightarrow h$, and of the sectors $\mathfrak{an} \leftrightarrow \mathfrak{su}$. Geometrically, this (infinitesimal) translation is based at $v$, as can be read from \eqref{wheretrans}, remembering that $\ell= L^c_{vv'}$ and $h=H^{v'}_{cc'}$.
\medskip
\paragraph{Generating left translations with Poisson brackets.}
We can also derive the infinitesimal translations using the Poisson bracket.
\begin{equation}\label{babelon2}
\delta^L_\phi \cdot = \left\langle \phi\,,\,\{{\tilde h} _{1}\,,\, \cdot \} {\tilde h} _{1}^{-1} \right\rangle_{1}.
\end{equation}
The sign difference with respect to \eqref{babelon1} is due to the fact that the charges have opposite signs, as one can see by comparing \eqref{transcharge} and \eqref{gausscharge}.
\begin{proposition}\label{prop:sym4}
The Poisson brackets implementing the infinitesimal transformation \eqref{babelon2} can be conveniently written in terms of the $r$-matrix \cite{Semenov1992}
\begin{equation}
r_+ = {{\bf J}} _I{\otimes} \tau^I,
\end{equation}
and are given by
\begin{eqnarray}
\poi{{\tilde h} _{1},{\tilde h} _{2}}=\com{r_+,{\tilde h} _{1}{\tilde h} _{2}}, \quad
\poi{{\tilde h} _{1}, \tilde{\ell}_{2}}= {\tilde h} _1r_+ \tilde{\ell}_2, \quad
\poi{{\tilde h} _{1}\,,\,h_{2}}=0, \quad
\poi{{\tilde h} _{1}\,,\,\ell_{2}} = r_+ {\tilde h} _{1} \ell_{2}.
\end{eqnarray}
\end{proposition}
The proof is given in Appendix \ref{proofprop:sym4}. It is very similar to the earlier proof of Proposition \ref{prop:sym2}, due to the symmetry between the sectors $\mathrm{SU}$ and $\AN$ in the different decompositions.
}
\smallskip
A similar construction can be done for the infinitesimal right translations and rotations, which are respectively generated by $H_{cc'}^{v'}= h$ and $L^{c'}_{vv'}=\tilde{\ell}$ and act respectively at $v'$ and $c'$. { Determining these infinitesimal transformations allows us to find the missing Poisson brackets, such as in particular
\begin{equation}
\poi{h_{1},h_{2}}= -\com{r_+, h_{1}h_{2}}, \qquad
\poi{h_{1}, \tilde{\ell}_{2}}= -h_1\tilde{\ell}_2 r_+, \qquad \poi{\tilde{\ell}_{1},\tilde{\ell}_{2}}
= -\com{r_-, \tilde{\ell}_{1}\tilde{\ell}_{2}} .
\end{equation}
}
These can be obtained by the correspondence ${\tilde h} ^{-1} \to h$.
\smallskip
In summary, we find that the Heisenberg double Poisson brackets, when restricted to the variables
$(h,\ell)$, are\footnote{Note that since $r_+=r_-+C$, we have $-[r_+,h_{1}h_{2}]=-[r_-,h_{1}h_{2}] $.}
\begin{eqnarray}\label{Heisc}
\poi{\ell_{1},\ell_{2}}= \com{r_-, \ell_{1}\ell_{2}}, \qquad
\poi{\ell_{1},h_{2}}=\ell_{1} r_- h_{2}, \qquad
\poi{h_{1},h_{2}}= -\com{r_-, h_{1}h_{2}}.
\end{eqnarray}
\paragraph{Finite transformations. } We can also look at the finite version of the left or right transformations. These are obtained from the group ${\mathfrak{D}}_{\sigma s}$ acting on itself, as discussed earlier in \eqref{symactions}. We can prove that they are phase space symmetries if we equip the group ${\mathfrak{D}}_{\sigma s}$ with another Poisson structure, which this time is not invertible (it is however compatible with the group product of ${\mathfrak{D}}_{\sigma s}$). In this case, ${\mathfrak{D}}_{\sigma s}$ as a symmetry group is called the Drinfeld double.
In order to write these Poisson structures, we note that the $r$-matrices $(r_+,r_-)$ satisfy the relations
\begin{equation}
2 r:= r_++r_-, \qquad r_+-r_- = C,
\end{equation}
where $C$ is the quadratic Casimir of ${\mathfrak{d}}$ and we have introduced the antisymmetric $r$-matrix $r$. The two Poisson structures then read
\begin{eqnarray}
\textrm{\,\,\,\,Heisenberg double :} \{ G_1,G_2\} = [r,{G{\otimes} G}]_+ = {r G{\otimes} G} + { G{\otimes} G} r , \label{ps}\\
\textrm{Drinfeld double :} \{ G'_1,G'_2\} = [r , G'{\otimes} G']_-= {r G'{\otimes} G'} - { G'{\otimes} G'} r , \label{drin}
\end{eqnarray}
with $G,G'\in {\mathfrak{D}}_{\sigma s}$.
{
The set of Poisson brackets we just derived in \eqref{Heisc} is equivalent to the Poisson brackets \eqref{ps}. On the other hand, the Poisson brackets given in \eqref{drin} are simply \cite{Semenov1992},
\begin{eqnarray}
\poi{\ell'_{1},\ell'_{2}}= \com{r_-, \ell'_{1}\ell'_{2}}, \qquad
\poi{\ell'_{1},h'_{2}}=0= \poi{h'_{1},\ell'_{2}}, \qquad
\poi{h'_{1},h'_{2}}= \com{r_+, h'_{1}h'_{2}}.
\end{eqnarray}
The left or right action of ${\mathfrak{D}}_{\sigma s}$ as a Drinfeld double on ${\mathfrak{D}}_{\sigma s}$ as a Heisenberg double is a Poisson map \cite{Semenov1992}. This means in physical terms that our phase space structure is covariant under the action of the Drinfeld double, which encodes some symmetry transformations equipped with a (in general non-trivial) Poisson structure. Upon quantization, the non-trivial Poisson structure becomes the relevant non-commutative/quantum group structure. Our quantum mechanical states being built from representations of these symmetries will then be naturally defined in terms of quantum group representations. We will come back to this point in Section \ref{qgp}.}
\subsection{Proof of the main result} \label{proof}
Let us now prove the main result of the paper, given in Theorem \ref{thm}.
We start from the discretized symplectic form on the boundary of the cell $c$. Within any cell $c^*$ we have from Proposition \ref{prop1} that
\begin{eqnarray}
\Theta_{|_c}=\int_{c^*}\left\langle e \wedge \delta \omega \right\rangle \approx \Theta_{c}= \sum_{[vv']\in \partial c^*}\Theta_{c}^{[vv']},\quad
\Theta_{c}^{[vv']} = -\int_{[vv']}\delta \left\langle \ell_{c}^{-1} \mathrm{d} \ell_{c} \wedge {\stackrel[R]{}{\Delta} } h_{c}\right\rangle.
\end{eqnarray}
Given two cells $c^*,c'^*$, one defines
the holonomy $G_{cc'}= L_{cc'} H_{cc'}$ and denotes
$G_{cx}= \ell_{cx} h_{cx}$, $G_{xc}= G_{cx}^{-1}$.
We also denote, for any holonomy $u_{ab}$ from $a$ to $b$, $u^{-1}_{ab}=u_{ba}$.
Given $x\in[vv']$, one defines
\begin{equation} \label{defH}
H^x_{c'c}\equiv h_{c'x}h_{xc},\qquad {\tilde \ell} _{cx}\equiv
L_{cc'}\ell_{c'x}.
\end{equation}
Taking the variation of the first equation of \eqref{defH}, we get
\begin{eqnarray}\label{hshift}
{\stackrel[R]{}{\Delta} } h_{c'x}&=& {\stackrel[R]{}{\Delta} } H^x_{c'c} +H^x_{c'c} {\stackrel[R]{}{\Delta} } h_{cx} H^x_{cc'} = H^x_{c'c}({\stackrel[R]{}{\Delta} } h_{cx}-{\stackrel[R]{}{\Delta} } H^x_{cc'} ) (H^x_{c'c})^{-1},
\end{eqnarray}
where we have used that ${\stackrel[R]{}{\Delta} } H^{-1} = - H^{-1} {\stackrel[R]{}{\Delta} } H H$.
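For reference, this identity is immediate once one adopts the right-variation convention ${\stackrel[R]{}{\Delta} } X=\delta X\, X^{-1}$ (a one-line sketch, assuming this convention):
\begin{equation}
{\stackrel[R]{}{\Delta} } H^{-1} = \delta (H^{-1})\, H = -H^{-1}\,\delta H = -H^{-1}\, ({\stackrel[R]{}{\Delta} } H)\, H.
\end{equation}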
Taking the differential of the second relation in (\ref{defH}) gives
\begin{equation}\label{lshift}
\ell_{c'x}^{-1} \mathrm{d} \ell_{c'x}= {\tilde \ell} _{cx}^{-1} \mathrm{d} {\tilde \ell} _{cx}.
\end{equation}
The continuity equation across the edge $[vv']$ separating $c$ from $c'$ is equivalent to an exchange relation:
\begin{equation}\label{contui1}
G_{c'c} G_{cx}= G_{c' x},\quad \Leftrightarrow\quad
H_{c'c} \ell_{cx}
= {\tilde \ell} _{cx} H^x_{c'c}.
\end{equation}
Taking the differential of the continuity equation
\eqref{contui1}, we get
\begin{eqnarray}
\label{diffcont}
{\tilde \ell} _{cx}^{-1} \mathrm{d} {\tilde \ell} _{cx} &=&
\left(H^x_{c'c} (\ell_{cx}^{-1} \mathrm{d} \ell_{cx})H^x_{cc'} + H^x_{c'c} \mathrm{d} H^x_{cc'} \right).
\end{eqnarray}
This relation, together with (\ref{hshift},\ref{lshift}), allows us to relate the contribution of the cell $c'$ to that of the cell $c$. Denoting $\Theta_{cc'}= \Theta_{c}^{[vv']}-\Theta_{c'}^{[vv']}$, with $\Theta_c^e := -\int_e \left\langle\ell^{-1}_{cx} \mathrm{d} \ell_{cx}, {\stackrel[R]{}{\Delta} } h_{cx} \right\rangle$ as in \eqref{Thetac}, one finds that
\begin{equation}
\Theta_{cc'} = -\int_{[vv']} \left\langle \ell_{cx}^{-1} \mathrm{d} \ell_{cx} {,} \left( {\stackrel[R]{}{\Delta} } H ^x_{cc'} \right)\right\rangle
= \int_{[vv']} \left\langle {\tilde \ell} _{cx}^{-1} \mathrm{d} {\tilde \ell} _{cx} {,}\left( {\stackrel[R]{}{\Delta} } H ^x_{c'c} \right)\right\rangle.
\end{equation}
The second equality is due to the differential continuity equation \eqref{diffcont}
and the identity ${\overline \Delta} H = H^{-1} {\stackrel[R]{}{\Delta} } H H=- {\stackrel[R]{}{\Delta} } H^{-1}$.
The fact that there are two equivalent expressions for the symplectic potential simply follows from the exchange $c\leftrightarrow c'$. Under this exchange $\Theta_{cc'}$ is antisymmetric.
It is also clear from the continuity equation written as
${\tilde \ell} _{cx}^{-1} H_{c'c} \ell_{cx}
= H^x_{c'c} $ that under this exchange we have ${\tilde \ell} _c\leftrightarrow \ell_c$.
The variation of the differential continuity \eqref{diffcont} gives
\begin{eqnarray}
H^x_{cc'} \delta( {\tilde \ell} _{cx}^{-1} \mathrm{d} {\tilde \ell} _{cx})H^x_{c'c} - \delta (\ell_{cx}^{-1} \mathrm{d} \ell_{cx})&=&
\left( [ \ell_{cx}^{-1} \mathrm{d} \ell_{cx}, {\stackrel[R]{}{\Delta} } H^x_{cc'}] + \mathrm{d} {\stackrel[R]{}{\Delta} } H^x_{cc'} \right).
\end{eqnarray}
One can use this to establish that
\begin{eqnarray}
\delta \left\langle (\ell_{cx}^{-1} \mathrm{d} \ell_{cx}) {\stackrel[R]{}{\Delta} } H ^x_{cc'} \right\rangle
&=&
\left\langle \delta (\ell_{cx}^{-1} \mathrm{d} \ell_{cx}) \curlywedge {\stackrel[R]{}{\Delta} } H ^x_{cc'} \right\rangle +\left\langle [\ell_{cx}^{-1} \mathrm{d} \ell_{cx} {,} {\stackrel[R]{}{\Delta} } H ^x_{cc'}] \curlywedge {\stackrel[R]{}{\Delta} } H ^x_{cc'} \right\rangle\cr
&=&
\left\langle H^x_{cc'} \delta( {\tilde \ell} _{cx}^{-1} \mathrm{d} {\tilde \ell} _{cx})H^x_{c'c} \curlywedge {\stackrel[R]{}{\Delta} } H ^x_{cc'} \right\rangle
\cr
&=& - \left\langle \delta( {\tilde \ell} _{cx}^{-1} \mathrm{d} {\tilde \ell} _{cx}) \curlywedge {\stackrel[R]{}{\Delta} } H ^x_{c'c} \right\rangle,
\end{eqnarray}
where $\curlywedge$ denotes the variational wedge product.
This means that
\begin{eqnarray}\label{omprime}
\Omega_{cc'}
&=& \int_{[vv']} \left\langle \delta( {\tilde \ell} _{cx}^{-1} \mathrm{d} {\tilde \ell} _{cx}) \curlywedge {\stackrel[R]{}{\Delta} } H ^x_{c'c} \right\rangle = - \int_{[vv']} \left\langle\delta (\ell_{cx}^{-1} \mathrm{d} \ell_{cx}) \curlywedge {\stackrel[R]{}{\Delta} } H^x_{cc'}\right\rangle .
\end{eqnarray}
From the variation of the continuity equation \eqref{contui1}
one gets
\begin{eqnarray}\label{varcont}
{\stackrel[R]{}{\Delta} } H ^x_{cc'} &=& {\overline \Delta} \ell_{cx} +\ell_{xc}{\overline \Delta} H_{c'c} \ell_{cx}
+ \ell_{xc}H_{cc'} {\overline \Delta} {\tilde \ell} _{xc} H_{c'c}\ell_{cx}\cr
&=&\ell_{c}^{-1} \left\{ H_{cc'} ({\stackrel[R]{}{\Delta} } {\tilde \ell} _{c}) H_{c'c} + {\stackrel[R]{}{\Delta} } H_{cc'} -{\stackrel[R]{}{\Delta} } \ell_c \right\}\ell_{c},
\end{eqnarray}
where we have used that ${\overline \Delta} H^{-1}=-{\stackrel[R]{}{\Delta} } H$.
Similarly we have an equivalent variational continuity identity obtained by exchanging $c\leftrightarrow c'$ and $\ell_c\leftrightarrow{\tilde \ell} _c$
\begin{eqnarray}\label{varcont2}
{\stackrel[R]{}{\Delta} } H ^x_{c'c}
&=&{\tilde \ell} _{c}^{-1} \left\{ H_{c'c} ({\stackrel[R]{}{\Delta} } \ell_{c}) H_{cc'} + {\stackrel[R]{}{\Delta} } H_{c'c} - {\stackrel[R]{}{\Delta} } {\tilde \ell} _c \right\}{\tilde \ell} _{c} .
\end{eqnarray}
Using these relations and the fact that $\delta (\ell_c^{-1} \mathrm{d} \ell_c)= \ell_c^{-1}( \mathrm{d} {\stackrel[R]{}{\Delta} } \ell_c)\ell_c$, one can evaluate \eqref{omprime}:
\begin{eqnarray}
\Omega_{cc'} &=& -\int_{[vv']} \left\langle \mathrm{d} {\stackrel[R]{}{\Delta} } \ell_{c} \curlywedge
\left( H_{cc'} ({\stackrel[R]{}{\Delta} } {\tilde \ell} _{c}) H_{c'c} + {\stackrel[R]{}{\Delta} } H_{cc'} \right)\right\rangle, \label{om1}\\
&=& \int_{[vv']} \left\langle \mathrm{d} {\stackrel[R]{}{\Delta} } {\tilde \ell} _{c} \curlywedge
\left( H_{c'c} ({\stackrel[R]{}{\Delta} } \ell_{c}) H_{cc'} + {\stackrel[R]{}{\Delta} } H_{c'c} \right)\right\rangle. \label{om2}
\end{eqnarray}
Note that we repeatedly use the fact that the subalgebras $\mathfrak{su}$ and $\mathfrak{an}$ are isotropic with respect to our scalar product.
Quite remarkably, the integrand of $\Omega_{cc'}$ is a total differential.
This can be seen simply by taking the sum of \eqref{om1} and \eqref{om2}, which gives after integration
\begin{equation} \label{omdiscrete}
\Omega_{cc'} =
- \frac12
\left.\left(
\left\langle {\stackrel[R]{}{\Delta} } \ell_{c} \curlywedge {\stackrel[R]{}{\Delta} } H_{cc'} \right\rangle +
\left\langle {\stackrel[R]{}{\Delta} } \ell_{c} \curlywedge H_{cc'} ({\stackrel[R]{}{\Delta} } {\tilde \ell} _{c}) H_{c'c} \right\rangle
+ \left\langle {\stackrel[R]{}{\Delta} } H_{c'c} \curlywedge {\stackrel[R]{}{\Delta} } {\tilde \ell} _{c} \right\rangle
\right)\right|_{x=v}^{x=v'}.
\end{equation}
To evaluate this expression, let us recall the definition of the triangular holonomies
\begin{equation}
L_{vv'}^c = \ell_{vc}\ell_{cv'},\qquad L_{v'v}^{c'}= {\tilde \ell} _{v'c}{\tilde \ell} _{cv}.
\end{equation}
Taking their variation gives
\begin{equation}
({\stackrel[R]{}{\Delta} } \ell_{cv'}-{\stackrel[R]{}{\Delta} } \ell_{cv}) =\ell_{cv}\,({\stackrel[R]{}{\Delta} } L_{vv'}^{c})\, \ell_{cv}^{-1},
\qquad
({\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv'}-{\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv}) =-{\tilde \ell} _{cv'}\,({\stackrel[R]{}{\Delta} } L_{v'v}^{c'})\, {\tilde \ell} _{cv'}^{-1}.
\end{equation}
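The first of these relations follows in one line (a sketch, assuming the right-variation convention ${\stackrel[R]{}{\Delta} } X=\delta X\,X^{-1}$): writing $L^c_{vv'}=\ell_{cv}^{-1}\ell_{cv'}$, one has
\begin{equation}
{\stackrel[R]{}{\Delta} } L_{vv'}^{c} = \delta\left(\ell_{cv}^{-1}\ell_{cv'}\right)\ell_{cv'}^{-1}\ell_{cv} = \ell_{cv}^{-1}\left({\stackrel[R]{}{\Delta} } \ell_{cv'}-{\stackrel[R]{}{\Delta} } \ell_{cv}\right)\ell_{cv},
\end{equation}
and similarly for the second relation with $\ell\to{\tilde \ell} $ and the roles of $v$, $v'$ exchanged.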
By adding a vanishing contribution $\left\langle {\stackrel[R]{}{\Delta} } \ell_{cv'} \curlywedge H_{cc'} {\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv} H_{cc'} ^{-1}\right\rangle - \left\langle{\stackrel[R]{}{\Delta} } \ell_{cv'} \curlywedge H_{cc'} {\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv} H_{cc'} ^{-1}\right\rangle$ to \eqref{omdiscrete}, we obtain
\begin{eqnarray}
-2\Omega_{cc'}
&=&
\left\langle ({\stackrel[R]{}{\Delta} } \ell_{cv'}-{\stackrel[R]{}{\Delta} } \ell_{cv}) \curlywedge ({\stackrel[R]{}{\Delta} } H_{cc'} + H_{cc'} {\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv} H_{cc'} ^{-1})\right\rangle
\nonumber\\ &+&
\left\langle{\stackrel[R]{}{\Delta} } H_{c'c} \curlywedge ({\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv'}-{\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv}) \right\rangle
+\left\langle {\stackrel[R]{}{\Delta} } \ell_{cv'} \curlywedge H_{cc'} ({\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv'}-{\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv}) H_{cc'} ^{-1}\right\rangle \nonumber\\
&=&
\left\langle \,({\stackrel[R]{}{\Delta} } L^c_{vv'})\, \curlywedge \ell_{cv}^{-1}({\stackrel[R]{}{\Delta} } H_{cc'}+H_{cc'} {\stackrel[R]{}{\Delta} } {\tilde \ell} _{cv} H_{c'c})\ell_{cv} \right\rangle \nonumber\\ &-&
\left\langle {\tilde \ell} _{cv'}^{-1} ({\stackrel[R]{}{\Delta} } H_{c'c} + H_{c'c}{\stackrel[R]{}{\Delta} } \ell_{cv'} H_{cc'}){\tilde \ell} _{cv'}
\curlywedge \,({\stackrel[R]{}{\Delta} } L_{v'v}^{c'})\, \right\rangle .
\end{eqnarray}
We can now use the variational continuity equations \eqref{varcont} at $x=v'$ and \eqref{varcont2} at $x=v$, to get the simple expression
\begin{eqnarray}
2 \Omega_{cc'} &= & -
\left\langle \,{\stackrel[R]{}{\Delta} } L^c_{vv'}\, \curlywedge {\stackrel[R]{}{\Delta} } H_{cc'}^{v} \right\rangle +
\left\langle {\stackrel[R]{}{\Delta} } H_{c'c}^{v'} \curlywedge \,{\stackrel[R]{}{\Delta} } L_{v'v}^{c'}\, \right\rangle\cr
&=& \left\langle {\stackrel[R]{}{\Delta} } H_{cc'}^{v} \curlywedge {\stackrel[R]{}{\Delta} } L^c_{vv'} \right\rangle +
\left\langle {\overline \Delta} H_{cc'}^{v'} \curlywedge \,{\overline \Delta} L_{vv'}^{c'}\, \right\rangle,
\end{eqnarray}
which is the desired result.
\subsection{Ribbon network as the classical version of the quantum group spin network}
Let us recall that we consider a cellular decomposition $\Gamma^*$ of the 2d manifold $\Sigma$. We denote by $\Gamma$ the dual 2-complex, made of nodes, links and faces. Let us see how the model is built in terms of the discretized variables. We focus, in this section, on the Euclidean case with $\Lambda<0$, since the Iwasawa decomposition is global in this case.
\medskip
First, the links are glued to each other at a node. For each link we have a ribbon, hence we need to glue the ribbons together. By construction, the triangular holonomies in the $\AN$ sector going around a cell (e.g.\ a triangle) have a product which is the identity (as we assumed there is no torsion defect):
\begin{equation}
{\cal L}^c= L^c_{vv'}L^c_{v'v''}L^c_{v''v}= \ell_{vc}\ell_{cv'} \ell_{v'c}\ell_{cv''} \ell_{v''c}\ell_{cv}=1.
\end{equation}
This indicates that the three ribbon ends form a closed $\AN$ holonomy and tells us how the ribbons are glued together, see Fig. \ref{pic3}. This is the analogue of the Gauss constraint.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.35]{pic3}
\caption{The ribbon data encodes all the geometric data. In particular, when the ribbons meet at a node, the Gauss constraint $\ell_1\ell_2\ell_3=L^c_{v''v}L^c_{v'v''} L^c_{vv'}=1$ encodes the gauge invariance at the node and is the generalization of the flat case $X_1+X_2+X_3=0$. }
\label{pic3}
\end{center}
\end{figure}
\medskip
Once the ribbons are glued together, we can also look at the faces generated by the ``long sides'' of the ribbons. Provided there is no curvature excitation, we expect the product of $\mathrm{SU}$ holonomies associated to the links $l_i$ around $v$ (said otherwise, the links which form the boundary of the face $v^*$) to be equal to the identity:
\begin{equation}
{\cal G}^v= \prod_{l_i\in\partial v^*} \left(H^v_{l_i}\right)^{\pm1}=1,
\end{equation}
where $\pm1$ depends on the orientation of the link $l_i$.
\medskip
These two sets of constraints provide the discretization of the (global) charges \eqref{new charges}. As we have seen in Section \ref{sec:sym}, these two sets of holonomies generate the discrete analogue of the gauge transformations and the translations, as expected. Hence they should be seen as a discretization of the charges $J'$ and $P'$ given in \eqref{new charges}, for constant transformation parameters on the boundary.
Alternatively, one can check how the constraints ${\cal L}^c$, ${\cal G}^v$ can be viewed as a discretization of the generalized torsion and curvature constraints \eqref{gen tor}, \eqref{gen curv} (with no matter source).
{
\begin{proposition}\label{prop:sym5}
The holonomies $L^c_{vv'}$, $H^v_{cc'}$ are related to the (infinitesimal) continuum charges in the following way, with $h_{cx}$ a $\mathrm{SU}$ holonomy connecting $c$ to a point $x$ on the relevant path.
\begin{eqnarray}
&&L^c_{vv'}= {\cal P} \exp \left(\int_{[vv']} h_{cx} {\,\triangleright\,} e(x) \right) , \\
&&H^v_{cc'}= {\cal P} \exp \left(\int_{[vv']} \omega(x) - \left(h_{cx}^{-1}\, (h_{cx}\rhd e (x))\, h_{cx}\right)_{|_{\mathfrak{su}}} \right)
\end{eqnarray}
The discrete constraints ${\cal L}^c=L^c_{vv'} L^c_{v'v''} L^c_{v''v}=1$, ${\cal G}^v=H^v_{cc'} \cdots H^v_{c^{(n)}c}=1$ encode that the generalized torsion and curvature vanish.
\begin{eqnarray}
{\cal L} ^c = 1 &\Leftrightarrow & \mathrm{d} e + \omega\rhd e+{\frac{1}{2}} [e \wedge e]_{\mathfrak{an}}=0 \label{calc991}\\
{\cal G}^v =1 &\Leftrightarrow&
\mathrm{d} \omega + \omega \lhd e+{\frac{1}{2}} [ \omega \wedge \omega]_{\mathfrak{su}}=0.
\end{eqnarray}
\end{proposition}
The proof of the proposition is given in Appendix \ref{proofprop:sym5}. The expression of the discretized variables in terms of the continuum fields when $\Lambda\neq0$ is another aspect of the main result of this paper. }
\medskip
The (generalized) LQG phase space is given in terms of the product of the phase spaces ${\mathfrak{D}}_{\sigma s}^{l_i} $ associated to the links $l_i=[c_ic_{i+1}]$, quotiented by the action of the (Gauss) constraints $ {\cal L}^{c_i}\equiv \prod_{j} L^{c_i}_{v_jv_{j+1}}$ acting at the nodes $c_i$.
\begin{equation}
{\cal P}:= \times_i {\mathfrak{D}}_{\sigma s}^{l_i} /\!/ {\cal L}^{c_i}
\end{equation}
The dynamics is given in terms of the constraints ${\cal G}^{v}$ associated to the vertices $v_i$ of $\Gamma^*$, expressed in terms of the $H^{v_i}$.
\medskip
This model is exactly the model discussed in \cite{Bonzom:2014wva}, where the ribbon structure was proposed to define the classical phase space structure of 3d gravity in the presence of a cosmological constant. Here this model is derived rigorously from the continuum. Note that \cite{Dupuis:2019pi} analyzed how such a model can be related to the Fock-Rosly approach to the Chern-Simons formulation (in the case of the torus space).
\section{Recovering the quantum group structure}\label{qgp}
The quantum theory associated to the Heisenberg double phase space in the $\mathrm{SU}_{\sigma s}$ case is a standard construction leading to the appearance of quantum groups \cite{Chari:1994pz}. For completeness, let us recall the construction without going through all the technical details (see also \cite{Bonzom:2014bua} in the $\mathrm{SU}(2)$ case). Again, we focus on ${\mathfrak{D}}_{+-}=\SL(2,\mathbb{C})= \mathrm{SU}(2)\bowtie \AN$, the Euclidean case with $\Lambda<0$.
Constructing a quantum theory means that we use a representation of the relevant symmetries, which, as we saw in Sections \ref{sec:new action} and \ref{sec:sym}, are associated to charges. In the case of 3d gravity, we have two types of symmetries, the rotation symmetries and the translations. While in the full theory we need to implement both, the order in which we implement them at the quantum level matters. The different options are: first the rotations and then the translations, or vice versa, or both at the same time. The first approach corresponds to the LQG picture, the second one to the ``dual LQG'' picture \cite{Delcamp:2018sef}, and the third one to the Chern-Simons picture.
In the following we will focus on the LQG approach, meaning that we will implement the rotational symmetry first, encoded by the Gauss charges.
\subsection{Poisson-Lie symmetry}
Before proceeding to quantization, we need to tie up one loose end: the relationship between the $r$-matrix $r_-$ entering the Poisson brackets and the
$r$-matrix ${\mathfrak{r}}$ entering the deformation of the action.
These are given by
\begin{equation}
r_- = -\tau_I {\otimes} {{\bf J}} ^I\in \mathfrak{an} {\otimes} \mathfrak{su}, \qquad
{\mathfrak{r}}= {\mathfrak{r}}_{IJ} {{\bf J}} ^I{\otimes} {{\bf J}} ^J \in \mathfrak{su} {\otimes} \mathfrak{su}.
\end{equation}
We have seen in \eqref{Heisc} that the Poisson brackets of the rotational holonomies are given by
\begin{eqnarray}
\poi{h_1,h_2}= -[r_-,h_1h_2].
\end{eqnarray}
We expect, however, the symmetry charges acting on our phase space to belong to
the Poisson-Lie group $\mathrm{SU}$. This possesses the Poisson commutation relations
\begin{equation}
\poi{h_1,h_2}= [{\mathfrak{r}} ,h_1h_2].
\end{equation}
There seems to be a tension between these two results.
This tension is simply resolved by the fact that the two expressions are the same:
\begin{equation}\label{equivalence}
[{\mathfrak{r}} ,h_1h_2] = -[r_-,h_1h_2].
\end{equation}
Strikingly, this shows that the $r$-matrix we introduced at the very beginning as a boundary term \eqref{rmatbdy} enters as a structure constant deforming the symmetry group action. ${\mathfrak{r}}$ is the standard $r$-matrix encoding the deformation of the group $\mathrm{SU}(2)$ \cite{Semenov1992}, \cite{Chari:1994pz}. \textit{Our construction highlights that the notion of quantum group appears from the addition of the specific boundary term in \eqref{rmatbdy}.}
One first establishes the identity at the level of the Lie algebra:
Given $\alpha \in \mathfrak{su}$ we want to prove that
\begin{equation} \label{inftsmal}
[{\mathfrak{r}} ,\alpha_1+\alpha_2] = -[r_-,\alpha_1+\alpha_2].
\end{equation}
This can be established by a direct computation as shown in \cite{Chari:1994pz}.
For the reader's convenience we present it here explicitly.
Taking $\alpha={{\bf J}} ^I$, and using \eqref{crosscom} and \eqref{Pcom}, we have
\begin{eqnarray}
-[r_-,{{\bf J}} ^I{\otimes} 1 +1{\otimes} {{\bf J}} ^I]&=& [\tau_J,{{\bf J}} ^I]{\otimes} {{\bf J}} ^J - \tau_J{\otimes} [{{\bf J}} ^J,{{\bf J}} ^I] \cr
&=& -(C_{JK}{}^I {{\bf J}} ^K +\epsilon^{I}{}_{J}{}^K \tau_K){\otimes} {{\bf J}} ^J -
\tau_J{\otimes} ( \epsilon^{JI}{}_{K} {{\bf J}} ^K),\cr
&=& C_{JK}{}^I {{\bf J}} ^J \otimes {{\bf J}} ^K \cr
&=& \sigma [(n \cdot {{\bf J}} ) \otimes {{\bf J}} ^I-{{\bf J}} ^I{\otimes} (n \cdot {{\bf J}} ) ],
\end{eqnarray}
while on the other hand we have
\begin{eqnarray}
[{\mathfrak{r}} ,{{\bf J}} ^I{\otimes} 1 +1{\otimes} {{\bf J}} ^I] &=&{\mathfrak{r}}_{AB} \left( [{{\bf J}} ^A,{{\bf J}} ^I]{\otimes} {{\bf J}} ^B +
{{\bf J}} ^A{\otimes} [{{\bf J}} ^B,{{\bf J}} ^I]\right) \cr
&=&\left( {\mathfrak{r}}_{AB} \epsilon^{AIC} \right) ( {{\bf J}} _C {\otimes} {{\bf J}} ^B-{{\bf J}} ^B {\otimes} {{\bf J}} _C)\cr
&=& n^D \epsilon_{DAB}\epsilon^{AIC}( {{\bf J}} _C {\otimes} {{\bf J}} ^B-{{\bf J}} ^B {\otimes} {{\bf J}} _C)\cr
&=& \sigma n^D (\delta_{B}^I\delta_D^C-\delta_{B}^C\delta_D^I)( {{\bf J}} _C {\otimes} {{\bf J}} ^B-{{\bf J}} ^B {\otimes} {{\bf J}} _C)\cr
&=& \sigma [ (n\cdot {{\bf J}} ) {\otimes} {{\bf J}} ^I-{{\bf J}} ^I {\otimes} (n\cdot {{\bf J}} ) ].
\end{eqnarray}
This establishes \eqref{inftsmal}.
The identity \eqref{equivalence} follows by exponentiation.
\medskip
\subsection{Quantization}
\paragraph{Specific representation choice. } It is useful to choose a specific representation to make some explicit calculations.
An element $\ell$ in $\AN$ will be specified by a real number $\lambda$ and a complex number $z$.
\begin{equation}
\ell \equiv \begin{pmatrix} \lambda & 0 \\ z & \lambda^{-1} \end{pmatrix}, \quad \bar{\ell} \equiv \begin{pmatrix} \lambda^{-1} & -\overline{z} \\ 0 & \lambda \end{pmatrix}.
\end{equation}
Note however that this representation is not faithful for $\AN$, which is why we also need to consider $ \bar{\ell} \equiv \ell^{\dagger -1}$. (The map $G\to\bar{G}={G}^{\dagger -1}$ is a group morphism of $\AN$, which leaves the rotation subgroup invariant, as can be seen from the Iwasawa decomposition $\ell h{\,\rightarrow\,} h^\dagger \ell^\dagger {\,\rightarrow\,} \ell^{\dagger -1} h^{\dagger -1}=\bar{\ell} h$ \cite{Bonzom:2014bua}.) { It is convenient to consider dimensionless Lie algebra generators, $(\sigma^I, \xi_I = i \sigma_I + ( \sigma \times \hat n)_I)$, where $\hat n=(0,0,1)$ is the (dimensionless) normalized vector, and $\sigma_I$ are the (hermitian) Pauli matrices\footnote{Rescaled by a factor ${\frac{1}{2}}$.} with $[\sigma_I,\sigma_J]=i\,\epsilon_{IJ}{}^K\sigma_K$.
\begin{equation}
{{\bf J}} ^I = -i \kappa \, \sigma^I, \quad \tau_I = i \sqrt{|\Lambda|} \xi_I.
\end{equation}
This means in particular that the $r$-matrix parametrizing the Poisson brackets will have an explicit parameter dependence (no longer hidden in the Lie algebra generators as in Section \ref{sec:sym}), given by $\gamma= {\kappa} \sqrt{|\Lambda|}$.
This leads to the explicit expression
\begin{equation}
r_- \,=\,- \tau^I{\otimes} {{\bf J}} _I\, = -\, \gamma \, \xi ^I{\otimes} \sigma_I = i \frac{\gamma}{4} \begin{pmatrix}- 1 &0 &0 &0\\ 0 &1 &0&0 \\ 0 &-4 &1 &0 \\ 0 &0 &0 &-1 \end{pmatrix}.
\end{equation}
We recall that for a given link, we have the ribbon variables $\ell\in\AN$, with Poisson brackets
\begin{eqnarray}\label{poirel}
&\poi{\ell_1,\ell_2}=\com{r_-, \ell_1\ell_2}, \quad \poi{\bar\ell_1 ,\bar\ell_2 }=\com{r_-, \bar\ell_1 \bar\ell_2 }, \quad \poi{\ell_1 ,\bar\ell_2 }=\com{r_-, \ell_1 \bar\ell_2 } .
\end{eqnarray}
These are equivalent to the following Poisson commutation relations
\begin{equation}
\poi{\lambda, z} = i\frac{\gamma}2 {z}\lambda ,\qquad \poi{\lambda, \bar{z} } = -i\frac{\gamma}2 {\bar{z}}\lambda,\qquad
\poi{\bar{z},z} =- i\gamma( \lambda^2-\lambda^{-2}),
\end{equation}
while all the other brackets vanish.
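These component brackets can be cross-checked against the matrix relation $\poi{\ell_1,\ell_2}=\com{r_-,\ell_1\ell_2}$ by a direct symbolic computation. The following sketch is ours, not part of the construction; only the $(\lambda,z)$ sector enters $\poi{\ell_1,\ell_2}$, the bracket being extended from $\poi{\lambda,z}=\frac{i\gamma}{2}z\lambda$ by the Leibniz rule:

```python
import sympy as sp

lam, z = sp.symbols('lam z', nonzero=True)
gamma = sp.symbols('gamma', positive=True)

# AN holonomy in the 2x2 representation, and the classical r-matrix r_-
ell = sp.Matrix([[lam, 0], [z, 1/lam]])
r_minus = sp.I*gamma/4*sp.Matrix([[-1, 0, 0, 0],
                                  [0, 1, 0, 0],
                                  [0, -4, 1, 0],
                                  [0, 0, 0, -1]])

# Poisson bracket generated by {lam, z} = (i gamma/2) z lam, extended by Leibniz
def pb(f, g):
    w = sp.I*gamma/2*z*lam
    return w*(sp.diff(f, lam)*sp.diff(g, z) - sp.diff(f, z)*sp.diff(g, lam))

# {ell_1, ell_2} has entries {ell_ij, ell_kl}; ell_1 ell_2 = ell (x) ell
lhs = sp.Matrix(4, 4, lambda r, c: pb(ell[r//2, c//2], ell[r % 2, c % 2]))
A = sp.Matrix(4, 4, lambda r, c: ell[r//2, c//2]*ell[r % 2, c % 2])
rhs = r_minus*A - A*r_minus

assert sp.simplify(lhs - rhs) == sp.zeros(4, 4)
```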
\paragraph{Quantization. }
Let us quantize the matrix elements of $\ell$, so that they become operators \cite{STERN_1995, Bonzom:2014bua}.
We first introduce the parameter
\begin{equation}
q= e^{\hbar\gamma/2}, \textrm{ with } \hbar\gamma= \hbar \kappa \sqrt{|\Lambda|} = 8 \pi \frac{ l_P}{l_c},
\end{equation}
where $l_P=\hbar G$ is the Planck length and $l_c=|\Lambda|^{-{\frac{1}{2}}}$ is the cosmological scale.
We then define the deformed quantum monodromy matrix
\begin{equation}
\ell
{\,\rightarrow\,} \hat \ell =\begin{pmatrix} K & 0 \\ (q - q^{-1}) J_+ & K^{-1} \end{pmatrix}, \quad \bar\ell
{\,\rightarrow\,} \hat {\bar\ell} =\begin{pmatrix} K^{-1} & -(q - q^{-1}) J_- \\ 0 & K \end{pmatrix} ,
\end{equation}
where the correspondence is
\begin{equation}
K= \hat\lambda ,\quad
(q-q^{-1}) J_+ = \hat{z},\qquad
-(q-q^{-1}) J_- = \hat{\bar{z}}.
\end{equation}
The classical $r$-matrix becomes the quantum $R$-matrix
\begin{equation}\label{quR}
r_- {\,\rightarrow\,} R_-= q^{-\frac12} \begin{pmatrix} q & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & (q - q^{-1}) & 1 & 0\\ 0 & 0 & 0 & q \end{pmatrix}= {\bf 1} +i \hbar\,r_- + \mathcal{O}(\hbar^2).
\end{equation}
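As a numerical sanity check of this expansion (our sketch; the values of $\gamma$ and $\hbar$ are illustrative), the residual $R_- - ({\bf 1}+i\hbar\, r_-)$ should shrink quadratically with $\hbar$:

```python
import numpy as np

gamma, hbar = 1.0, 1e-4          # illustrative values; gamma = kappa*sqrt(|Lambda|)
q = np.exp(hbar*gamma/2)

# Quantum R-matrix R_- and classical r-matrix r_- in the conventions of the text
R = q**-0.5 * np.array([[q, 0, 0, 0],
                        [0, 1, 0, 0],
                        [0, q - 1/q, 1, 0],
                        [0, 0, 0, q]])
r = 1j*gamma/4 * np.array([[-1, 0, 0, 0],
                           [0, 1, 0, 0],
                           [0, -4, 1, 0],
                           [0, 0, 0, -1]])

# R_- = 1 + i*hbar*r_- + O(hbar^2): the residual is of order (hbar*gamma)^2
residual = np.abs(R - (np.eye(4) + 1j*hbar*r)).max()
assert residual < 10*(hbar*gamma)**2
```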
Finally, the Poisson brackets, which appear through the limit $[\hat{\ell}_1,\hat{\ell}_2] \to -i \hbar\poi{{\ell}_1,{\ell}_2}$,
are quantized through
\begin{eqnarray} \label{RLL=LLR}
R_-\,\hat{\ell}_1\,\hat{\ell}_2 = \hat{\ell}_2\,\hat{\ell}_1\,R_- &{\,\rightarrow\,}& \poi{\ell_1,\ell_2}=
\com{r_-, \ell_1\ell_2}, \nonumber\\
R_-\,\hat{\bar\ell}_1\, \hat{\bar\ell}_2 = \hat {\bar\ell}_2\,\hat{\bar\ell}_1\,R_- &{\,\rightarrow\,}&
\poi{{\bar\ell}_1 , {\bar\ell}_2 }=\com{r_-, {\bar\ell}_1 {\bar\ell}_2 }, \nonumber\\
R_-\,\hat{\ell}_1\,\hat{\bar\ell}_2 = \hat{\bar\ell}_2\,\hat{\ell}_1\,R_- &{\,\rightarrow\,}&
\poi{\ell_1 ,{\bar\ell}_2 }=\com{r_- , \ell_1 {\bar\ell}_2 }.
\end{eqnarray}
In components, the exchange relations on the left-hand side of \eqref{RLL=LLR} read}
\begin{equation}
K\,J_+\,K^{-1} = q\,J_+,\qquad K\,J_-\,K^{-1} = q^{-1}\,J_-,\qquad [J_+,J_-] = \frac{K^2 - K^{-2}}{q - q^{-1}}.
\end{equation}
These are the commutation relations of $\mathcal{U}_q(\mathrm{SU}(2))$. This encodes the well-known fact that the quantum algebra of functions on $\AN$ is isomorphic to the algebra $\mathcal{U}_q(\mathrm{SU}(2))$.
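The exchange relations \eqref{RLL=LLR} and the resulting commutation relations can be checked numerically in the spin-$\frac12$ representation of $\mathcal{U}_q(\mathrm{SU}(2))$, where $K=\mathrm{diag}(q^{1/2},q^{-1/2})$ and $J_+$ is the elementary raising matrix (a sketch on our part; the value of $q$ is arbitrary). Embedding $\hat\ell$ into the two auxiliary spaces turns the first relation into an $8\times8$ matrix identity:

```python
import numpy as np

q = 1.7                          # arbitrary deformation parameter
x = q - 1/q

# Spin-1/2 representation of U_q(su(2)); check K J+ K^{-1} = q J+
K = np.diag([np.sqrt(q), 1/np.sqrt(q)])
Ki = np.linalg.inv(K)
Jp = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(K @ Jp @ Ki, q*Jp)

# Quantum monodromy matrix l-hat as a 2x2 array of operators
lhat = [[K, np.zeros((2, 2))], [x*Jp, Ki]]

def E(i, j):
    m = np.zeros((2, 2))
    m[i, j] = 1.0
    return m

# Embed in aux1 (x) aux2 (x) V: L1 = sum_ij E_ij (x) 1 (x) lhat_ij, etc.
I2 = np.eye(2)
L1 = sum(np.kron(np.kron(E(i, j), I2), lhat[i][j]) for i in range(2) for j in range(2))
L2 = sum(np.kron(np.kron(I2, E(i, j)), lhat[i][j]) for i in range(2) for j in range(2))

Rm = q**-0.5 * np.array([[q, 0, 0, 0], [0, 1, 0, 0], [0, x, 1, 0], [0, 0, 0, q]])
R = np.kron(Rm, I2)              # R_- acts on the auxiliary spaces only

assert np.allclose(R @ L1 @ L2, L2 @ L1 @ R)
```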
\smallskip
The last element we need is the Hilbert space. Since we intend to implement the rotational symmetries first, we consider the natural Hilbert space associated to the $\hat \ell$, which actually span $\mathcal{U}_q(\mathrm{SU}(2))$. Hence we consider the Hilbert space given in terms of the irreducible representations of $\mathcal{U}_q(\mathrm{SU}(2))$. Strictly speaking, we should consider such a Hilbert space for a half link, and glue two such representations to build a full link, as recalled in \cite{Delcamp:2018sef}. We will skip these subtleties here.
\smallskip
Now that we have the quantum theory for a given link, we need to extend the structure to the full graph $\Gamma$. For simplicity we have taken $\Gamma^*$ to be a triangulation, so that the nodes of $\Gamma$ are trivalent. For each node, we have three $\AN$ holonomies, belonging to different phase spaces, whose product is $1$. This is the Gauss law. The product is given by the matrix product.
The quantum version of the Gauss law is direct. Since we have to consider three phase spaces, we have to deal with three Hilbert space copies, with each quantum $\AN$ holonomy acting on a given Hilbert space. The $\AN$ holonomies are multiplied using the matrix product, hence the natural quantization of the holonomy product is
\begin{equation}
(\ell\ell')_{ik} = \sum_j(\ell)_{ij}(\ell')_{jk} {\,\rightarrow\,} (\widehat{\ell\ell'})_{ik} \equiv \sum_j(\widehat\ell)_{ij}{\otimes} (\widehat{\ell'})_{jk}= {\Delta} \widehat \ell_{ik} .
\end{equation}
This is nothing else than the natural coproduct for the algebra of functions on $\AN$.
In components, this reads
\begin{equation}
\begin{aligned}
&{\Delta} \widehat{\ell}
= \begin{pmatrix} K\otimes K & 0\\ (q-q^{-1}) (J_{+} \otimes K + K^{-1}\otimes J_+) & K^{-1}\otimes K^{-1} \end{pmatrix},\\
\text{and}\qquad &{\Delta} \hat{ \bar \ell}
= \begin{pmatrix} K\otimes K & -(q-q^{-1}) (J_{-} \otimes K + K^{-1}\otimes J_-)\\ 0 & K^{-1}\otimes K^{-1} \end{pmatrix}.
\end{aligned}
\end{equation}
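These block entries can be cross-checked numerically, again in the spin-$\frac12$ representation (our sketch; the value of $q$ is arbitrary), by assembling $\sum_j \widehat\ell_{ij}{\otimes}\widehat\ell_{jk}$ directly:

```python
import numpy as np

q = 1.7
x = q - 1/q
K = np.diag([np.sqrt(q), 1/np.sqrt(q)])   # spin-1/2 representation of U_q(su(2))
Ki = np.linalg.inv(K)
Jp = np.array([[0.0, 1.0], [0.0, 0.0]])

lhat = [[K, np.zeros((2, 2))], [x*Jp, Ki]]

# Coproduct from the matrix product: (Delta lhat)_{ik} = sum_j lhat_ij (x) lhat_jk
D = [[sum(np.kron(lhat[i][j], lhat[j][k]) for j in range(2)) for k in range(2)]
     for i in range(2)]

assert np.allclose(D[0][0], np.kron(K, K))
assert np.allclose(D[1][0], x*(np.kron(Jp, K) + np.kron(Ki, Jp)))
assert np.allclose(D[1][1], np.kron(Ki, Ki))
```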
We recognize the coproduct of $\mathcal{U}_q(\mathrm{SU}(2))$. The Gauss constraint demanding that the product of the three $\AN$ holonomies is 1 is then quantized as
\begin{equation}
1_{ik}= (\ell\ell'\ell'')_{ik} = \sum_{jl}\ell_{ij}\ell'_{jl}\ell''_{lk} {\,\rightarrow\,} \sum_{jl}\widehat \ell_{ij} {\otimes}\widehat{\ell'}_{jl}{\otimes} \widehat{\ell''}_{lk} = (1{\otimes} {\Delta})\circ {\Delta} \widehat\ell_{ik}= \widehat1_{ik}.
\end{equation}
The elements of the Hilbert space solving such constraints are the $\mathcal{U}_q(\mathrm{SU}(2))$ intertwiners, generated by the deformed Clebsch-Gordan coefficients.
We recover in this way the $\mathcal{U}_q(\mathrm{SU}(2))$ spin networks. Then solving the last set of constraints, for the $\mathrm{SU}(2)$ holonomies, gives rise to the Turaev-Viro amplitude\footnote{The TV model is usually defined for ${\mathcal U}_q(\mathfrak{su}(2))$ with $q$ a root of unity, so as to have a finite model. The other signature and cosmological constant sign cases usually lead to a divergent model, just like the Ponzano-Regge model. These divergences can be understood as signaling the presence of a non-compact symmetry and can be gauged away \cite{Freidel:2002dw}. } \cite{Turaev:1992hq}.
\section*{Outlook}
In this work we investigated why, at the quantum level, a deformed gauge symmetry, parametrized by the cosmological constant $\Lambda$, appears whereas the original action for 3d gravity is a plain undeformed gauge theory.
\smallskip
The first key insight was to realize that we had to perform a change of variables at the continuum level, in order to have a Gauss constraint/rotational charge algebra depending upon the cosmological constant. The change of variables is a simple canonical transformation, parametrized by a vector $n$, which can equivalently be seen as induced by a boundary term. Such a vector $n$ is taken to be a scalar (i.e.\ an invariant) for the gauge symmetries and therefore leads to a modification of the realization of the symmetries. Since $n$ is constrained to depend on $\Lambda$, we do get symmetries that depend on $\Lambda$ at the level of the action.
This is yet another example that the choice of variables matters in the quest to define a proper quantum gravity theory. There is an obvious parallel between our work and the 4d LQG approach, where one performs a canonical transformation parameterized by a scalar, the Immirzi parameter, or equivalently adds a (topological) term not modifying the equations of motion, the Holst term, to define the Ashtekar-Barbero variables. This canonical transformation renders the theory more amenable to discretization, just like our term does for 3d gravity. The main difference however is that $n$ is parameterized by $\Lambda$, so it is not really adding an extra parameter to the theory, unlike the Immirzi parameter.
\smallskip
The second key insight is the discretization procedure. It is in fact a subtle procedure: we have decomposed the system into subsystems and managed to project all the degrees of freedom onto the boundary of the subsystems by imposing an appropriate truncation of the degrees of freedom. Such a truncation is obtained by going on-shell. In the 3d gravity context, this amounts to considering regions of homogeneous curvature and no torsion. This is essentially the same as dealing with the notion of ``geometric structures'' \cite{Carlip:1998uc}, or equivalently homogeneously curved polygons. A boundary shared by two polygons can be viewed from the perspective of each polygon, with an isometry relating the two descriptions, the so-called continuity equations. This allowed us to express the discrete variables solely in terms of ``corner'' terms (the classical version of the Kitaev triangle operators \cite{Kitaev1997} \cite{triangle}). { From this perspective, the quantum group symmetry appears, in a sense, as the ``corner term contributions''.
Note also that our work shares some similarities with the seminal works \cite{Gawedzki:1990jc}\cite{Falceto_1993}, where the quantum group symmetry is identified at the classical level for the Wess-Zumino model. }
\smallskip
The phase space associated to each link of the graph $\Gamma$ (dual to the triangulation $\Gamma^*$) now depends on the cosmological constant $\Lambda$. Importantly, we have derived this phase space (the Heisenberg double) starting from the continuum symplectic form. It was already known that such a Heisenberg double, equipped with the appropriate constraints, provides a discretization of 3d gravity with a non-vanishing cosmological constant \cite{Bonzom:2014wva} and also leads, upon quantization, to deformed spin networks and the TV amplitude \cite{Bonzom:2014bua}. We have thus found the missing link connecting the discretized model and the continuum model. This paper therefore provides a long sought-for and rigorous derivation of the quantum group structure -- as a kinematical symmetry -- in the 3d loop quantum gravity case. { Interestingly, it can also provide the link between the Fock-Rosly approach and the continuum gravity variables, since it was explicitly shown in \cite{Dupuis:2019pi} how that approach is related to the ribbon model \cite{Bonzom:2014wva}.}
\smallskip
This work opens many new avenues of investigation. Let us review some of them.
\paragraph{More general vector $n$.}
There is some room to go beyond the quantum group case by relaxing some of the conditions on the vector $n$,
\begin{equation}
\delta n =0, \quad n^2=-\Lambda^2, \quad \mathrm{d} n=0, \quad n^I= (0,0,n^3).
\end{equation}
We can consider for example a vector such that $\mathrm{d} n\neq0$, which would generate a new central extension \eqref{newcom1} that would be interesting to explore.
In our construction, the vector $n$ is a scalar for the symmetries, with its norm fixed by the cosmological constant. Hence in a sense, the only relevant information we keep about $n$ is its norm. It would be interesting to see how its direction could also be relevant.
For example, two vectors $n$ related by a rotation lead to isomorphic quantum group structures. At the classical level, a rotation of the vectors corresponds to a canonical transformation. It would be interesting to see whether this is also the case at the quantum group level:
that is, is it possible to explicitly relate two rotated quantum group structures by a unitary transformation?
\paragraph{Unexplored cases.} For the sake of simplicity, we focused on the simplest cases. Indeed, as we argued earlier, the Euclidean case with positive cosmological constant has to be treated separately due to the appearance of reality conditions, since we have to deal with a complex $n$.
In the Lorentzian cases, we focused on the component connected to the identity in order to use the Iwasawa decomposition, $\ell h = {\tilde h} {\tilde \ell} $, but one should deal with the general case, where there exist $d_i, d_j\in{\mathfrak{d}}_{-,s}$ such that { $\ell d_i h = {\tilde h} d_j^{-1}{\tilde \ell} $. } The Heisenberg double can be generalized accordingly \cite{Alekseev_1994}. This however amounts to decorating the ribbon with some curvature parametrized by $d_i$.
We have studied only one polarization choice in the discretization in section \ref{sec:main}. Namely, we looked at the case where $\AN$ holonomies are associated to the edges whereas the $\mathrm{SU}$ holonomies are associated to the links. Due to the symmetric treatment of the two groups, we can actually swap the locations of the holonomies.
{ In fact the continuity equation \eqref{intcont} also allows to identify the dual variables.
\begin{eqnarray}
&&G_{c v}G_{v c'}= G_{c v'}G_{v' c'} \quad
\Leftrightarrow \quad \tilde L^v_{cc'} \tilde H^{c'}_{vv'}= \tilde H^{c}_{vv'}\tilde L^{v'}_{cc'}.
\end{eqnarray}
Hence we have $\tilde L^x_{cc'}\in\AN$, with $x$ being $v$ or $v'$, associated to the links and $ \tilde H^{y}_{vv'}\in\mathrm{SU}$, with $y$ being $c$ or $c'$, associated to the edges.}
This provides a deformation of the dual loop formalism \cite{Dupuis:2017otn, Shoshany:2019ymo}, which should be the classical analogue of \cite{Dittrich:2016typ} (for the case $q$ real though).
We leave the study of this other polarization for later studies.
It is clear that our construction can be generalized to any factorizable group. Namely, considering a BF theory associated with a simple Lie group $G$, we expect the boundary deformation to be given in terms of the standard $r$-matrix and the main results and proofs to generalize seamlessly. We leave this for future work.
\paragraph{Adding matter.}
While we did not introduce matter, in the form of curvature or torsion excitations, the formalism can certainly be extended to this case. We expect that the edge mode (or corner term) perspective naturally provides the notion of particles in the curved case, just as it did in the flat case \cite{Freidel:2018pbr}.
We then expect to recover a version of the Kitaev model, defined for (deformations of) Lie groups. It would then be interesting to explore how many gravity questions could be asked in the Kitaev model context. This would develop a new interplay between models of (topological) quantum information theory and quantum gravity.
\paragraph{4d case.}
The case of real interest is certainly the 4d case, and one can expect that our approach is also relevant in this context. Indeed, preliminary calculations show that one can perform an analogous change of variables to remove the volume term in the action and to obtain some $\Lambda$-dependent gauge transformations. This would provide hints on the proper deformation one would expect in the 4d case. Depending on the signature and the sign of the cosmological constant, there might also be some non-trivial interplay with the time gauge. This question is currently being addressed.
\subsection*{Acknowledgements}
\noindent This research was supported in part by Perimeter Institute for Theoretical
Physics. Research at Perimeter Institute is supported by the Government
of Canada through the Department of Innovation, Science and Economic
Development Canada and by the Province of Ontario through the Ministry
of Research, Innovation and Science. A.\,O.\ is supported by the NSERC Discovery grants held by M.\,D.\ and F.\,G.
2004.05527
\section{Introduction} \label{sec:intro}
Among the basic problems in combinatorics are so-called {\em reconstruction
problems}, where we try to identify an object knowing only partial information
about it. A classic example is the famous Graph Reconstruction Conjecture of
Ulam, first posed in the thesis of Kelly in 1942:
\begin{conjecture}[The Reconstruction Conjecture; Kelly~\cite{kel1,kel2},
Ulam~\cite{U}]
For $n\geq 3$, every $n$-vertex graph is determined by the multiset of its
$(n-1)$-vertex induced subgraphs.
\end{conjecture}
The multiset of $(n-1)$-vertex induced subgraphs is called the {\it deck} of
the graph, with each such subgraph being a {\it card} in the deck. The vertices
in the cards are unlabeled, meaning that only the isomorphism class of each
card is given. The restriction $n\geq 3$ is needed because the two graphs with
two vertices have the same deck. A graph is {\it reconstructible} if it is
determined by its deck, meaning that no other graph has the same deck. In this
terminology, the Reconstruction Conjecture asserts that every graph with at
least three vertices is reconstructible.
The conjecture has attracted a lot of attention. Graphs in many families are
known to be reconstructible; these include disconnected graphs, trees, regular
graphs, and perfect graphs. Surveys on graph reconstruction
include~\cite{Bondy91,BH77,Lauri97,LS,Maccari02}.
Various parameters have been introduced to measure the difficulty of
reconstructing a graph. Harary and Plantholt~\cite{HP} defined the
{\it reconstruction number} of a graph to be the minimum number of cards from
its deck that suffice to determine it, meaning that no other graph has this
multiset of cards in its deck (surveyed in~\cite{AFLM}). All trees with at
least five vertices have reconstruction number $3$ (Myrvold~\cite{Myrvold90}),
and almost all graphs have reconstruction number $3$
(Bollob\'as~\cite{Bollobas}). No graphs have reconstruction number $2$, since
two cards cannot determine whether the two vertices deleted to form the cards
are adjacent.
Let $K_{\VEC t1r}$ denote the complete $r$-partite graph with part-sizes
$\VEC t1r$. Since $K_{t,t}$ and $K_{t+1,t-1}$ have $t+1$ common cards, the
reconstruction number of an $n$-vertex graph can be as large as $\FR n2+2$
(Myrvold~\cite{Myrvold89}). (Here $K_{s,t}$ is the complete bipartite graph
with parts of sizes $s$ and $t$.) Harary and Plantholt~\cite{HP} strengthened
the Reconstruction Conjecture by conjecturing that when $n\ge3$ every
$n$-vertex graph has reconstruction number at most $\FR n2+2$, with equality
only for $K_{n/2,n/2}$ and $2K_{n/2}$ in general, plus $P_4$.
Kocay and Kreher~\cite{KK} constructed $n$-vertex graphs with reconstruction
number $\FR n2+1$ when $n=4q-4$ and $q$ is a prime power congruent to $1$ modulo $4$.
We can also study the reconstruction number of graph properties.
Myrvold~\cite{Mthe} and Bowler et al.~\cite{BBFM} showed that any $\FL{n/2}+2$
cards determine whether an $n$-vertex graph is connected. Much effort went
into reducing the number of cards needed to determine $m$, the number of edges.
Myrvold~\cite{Myrvold92} showed that $m$ and in fact also the degree list are
determined by any $n-1$ cards when $n\ge7$ (this is sharp). Monikandan and
Balakumar~\cite{MB} showed that $m$ is determined within $1$ by any $n-2$ cards
(strengthening~\cite{RM}). Woodall~\cite{Woodall} proved for
$n\ge\max\{34,3p^2+1\}$ that $m$ is determined within $p-2$ by $n-p$ cards.
Brown and Fenner~\cite{BF} proved that $m$ is determined by any $n-2$ cards
when $n\ge29$, and they presented two $8$-vertex graphs with six common cards
whose numbers of edges differ by $1$. Groenland, Guggiari, and Scott~\cite{GGS}
proved that in fact $m$ is determined by any $n-\sqrt n\,/20$ cards when
$n$ is sufficiently large.
These results concern not so much the reconstruction number as the
{\it adversary reconstruction number}~\cite{Mthe} or {\it universal
reconstruction number}~\cite{BBF}, which is the minimum $t$ such that any $t$
cards determine the graph (or a particular property). Bowler, Brown, and
Fenner~\cite{BBF} presented infinite families of pairs of graphs sharing
$2\FL{(n-1)/3}$ common cards, improving Myrvold~\cite{Mthe,Myrvold90}.
They conjectured that when $n$ is sufficiently large, every $n$-vertex graph
is determined by any $2\FL{(n-1)/3}+1$ of its cards ($n>12$ is needed).
Kelly~\cite{kel2} took another direction, considering cards obtained by
deleting more vertices.
\begin{definition}
A {\it $k$-card} of a graph is an induced subgraph having $k$ vertices. The
\emph{$k$-deck} of $G$, denoted $\mathcal{D}_k(G)$, is the multiset of all $k$-cards
(given as isomorphism classes). A graph $G$ is {\it determined by its $k$-deck}
if $\mathcal{D}_k(H)=\mathcal{D}_k(G)$ implies $H\cong G$. A graph $G$ (or a graph invariant)
is {\it $\ell$-reconstructible} if it is determined by $\mathcal{D}_{|V(G)|-\ell}(G)$
(agreeing on all graphs having that deck). The
{\it maximum reconstructibility} of a graph $G$
is the maximum $\ell$ such that $G$ is $\ell$-reconstructible.
\end{definition}
Study of reconstruction from the $k$-deck was begun by Manvel~\cite{Manvel}.
There followed several papers by N\'ydl, including surveys (\cite{N84,N01}) of
the early results. N\'ydl studied the least $k$ (as a function of $n$) such
that every $n$-vertex graph (or every $n$-vertex graph in a restricted family
such as trees) is determined by its $k$-deck.
For an $n$-vertex graph, ``determined by its $k$-deck'' and
``$\ell$-reconstructible'' have the same meaning when $k+\ell=n$.
The motivation for defining the maximum reconstructibility as a measure
of the ease of reconstructing a graph is the following elementary observation.
\begin{observation}\label{k-1}
For any graph $G$, the $k$-deck $\mathcal{D}_k(G)$ determines the $(k-1)$-deck
$\mathcal{D}_{k-1}(G)$.
\end{observation}
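Observation~\ref{k-1} is also easy to verify computationally: summing the $(k-1)$-decks of all $k$-cards counts each $(k-1)$-card of $G$ exactly $\C{V(G)}-k+1$ times. The following Python sketch (our own illustration; the helper names \texttt{canon}, \texttt{deck}, and \texttt{lower\_deck} are ours, not from the literature) recovers the $3$-deck of a small graph from its $4$-deck.

```python
from itertools import combinations, permutations
from collections import Counter

def canon(k, edges):
    """Canonical form of a graph on vertices 0..k-1: minimum edge list over relabelings."""
    best = None
    for p in permutations(range(k)):
        form = tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        if best is None or form < best:
            best = form
    return best

def deck(n, edges, k):
    """The k-deck: multiset of isomorphism classes of k-vertex induced subgraphs."""
    d = Counter()
    for S in combinations(range(n), k):
        idx = {v: i for i, v in enumerate(S)}
        d[canon(k, [(idx[u], idx[v]) for u, v in edges
                    if u in idx and v in idx])] += 1
    return d

def lower_deck(n, kdeck, k):
    """Recover D_{k-1}(G) from D_k(G): each (k-1)-card of G lies in n-k+1 k-cards."""
    total = Counter()
    for card, mult in kdeck.items():
        for cls, m in deck(k, list(card), k - 1).items():
            total[cls] += m * mult
    return Counter({cls: m // (n - k + 1) for cls, m in total.items()})

# C_5 with a pendant edge: the 4-deck determines the 3-deck
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5)]
assert lower_deck(6, deck(6, edges, 4), 4) == deck(6, edges, 3)
```

The exhaustive canonical form is adequate here because the cards are tiny; nothing in this sketch depends on efficient isomorphism testing.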
The proof is that each card in $\mathcal{D}_{k-1}(G)$ appears in $\C{V(G)}-k+1$
cards in $\mathcal{D}_k(G)$. By Observation~\ref{k-1}, information that is determined by
the $k$-deck is also determined by the $j$-deck when $j>k$. This leads to a
stronger version of the Reconstruction Conjecture.
\begin{conjecture}[Kelly~\cite{kel2}]
For $\ell\in{\mathbb N}$, there is an integer $M_\ell$ such that every graph with at
least $M_\ell$ vertices is $\ell$-reconstructible.
\end{conjecture}
The original Reconstruction Conjecture is the claim $M_1=3$.
Having checked by computer that every graph with at least six and at most nine
vertices is $2$-reconstructible (there are $5$-vertex graphs that are not),
McMullen and Radziszowski~\cite{MR} asked whether $M_2=6$. With computations
up to nine vertices, Rivshin and Radziszowski~\cite{RR} conjectured
$M_\ell\le3\ell$. N\'ydl~\cite{N92} disproved this, showing that $M_\ell$
must grow superlinearly in $\ell$; that is, $M_\ell/\ell\to\infty$.
He proved that for any $n_0\in{\mathbb N}$ and $0<q<1$, there are nonisomorphic
$n$-vertex graphs for some $n$ larger than $n_0$ having the same
$\FL{qn}$-deck.
For more detailed understanding, it is natural to study the threshold number
of vertices for $\ell$-reconstructibility for graphs in a given family, which
may be smaller than for the family of all graphs. We may write this as
$M_\ell({\bf G})$ for a family ${\bf G}$. For example, Spinoza and West~\cite{SW} (see
Section~\ref{sec:maxdeg2}) proved that the path $P_{2\ell}$ and the graph
$C_{\ell+1}+P_{\ell-1}$ have the same $\ell$-deck (here ``$+$'' denotes
disjoint union of graphs, while $C_n$, $P_n$, $K_n$ respectively denote the
cycle, path, and complete graph on $n$ vertices).
Thus $M_\ell({\bf G}_2)\ge 2\ell+1$ for $\ell\ge3$, where ${\bf G}_d$ is the family
of graphs with maximum degree at most $d$. In fact, they proved
$M_\ell({\bf G}_2)=2\ell+1$.
For the family ${\bf T}$ of trees, the same lower bound is known and is sharp
for $\ell=2$ (Giles~\cite{Giles}), but equality is open for $\ell\ge3$. Let
$S_{a,b,c}$ be the subdivision of $K_{1,3}$ consisting of paths of lengths $a$,
$b$, and $c$ with one common endpoint (in general, a tree consisting of paths
with one common endpoint is called a ``spider'').
N\'ydl~\cite{N81} observed that $S_{k-1,k-1,1}$ and $S_{k,k-2,1}$ are spiders
with $2k$ vertices having the same $k$-deck. We will give a short proof of
this using the results on common $k$-decks for graphs in ${\bf G}_2$ that are
discussed in Section~\ref{sec:maxdeg2}. The result implies
$M_\ell({\bf T})\ge2\ell+1$, and N\'ydl~\cite{N81} conjectured that equality holds.
One can generalize this question to the family ${\bf T}_r$ of connected graphs $G$
such that $\C{E(G)}-\C{V(G)}+1\le r$; that is, ${\bf T}={\bf T}_0$. N\'ydl~\cite{N81}
constructed two graphs with $3k+9$ vertices and $3k+12$ edges having the same
$2k$-deck, thus yielding $M_\ell({\bf T}_4)\ge 3\ell-18$.
As with ordinary reconstruction, proving that the graphs in a family ${\bf G}$ are
$\ell$-reconstruct\-ible may involve two steps. One is to show that the
family is {\it $\ell$-recognizable}, meaning that whether $G\in{\bf G}$ holds is
determined by $\mathcal{D}_{|V(G)|-\ell}$. That is, every graph having the same
deck as a graph in ${\bf G}$ is also in ${\bf G}$. For example, Manvel~\cite{Manvel}
showed that when $|V(G)|=n\ge6$, the $(n-2)$-deck determines whether $G$ is
connected, acyclic, unicyclic, regular, or bipartite. That is, these
properties are $2$-reconstructible when $n\ge6$.
The pair $\{P_{2\ell},C_{\ell+1}+P_{\ell-1}\}$ mentioned earlier shows for
$n$-vertex graphs that guaranteeing $\ell$-reconstructibility of the property
of connectedness (or $\ell$-recognition of the family of connected graphs)
requires $n\ge2\ell+1$. The correct general threshold remains open.
On the other hand, the fraction of $n$-vertex graphs whose maximum
reconstructibility is at least $(1-o(1))n/2$ tends to $1$ (see
Section~\ref{sec:almostall}). This was observed originally by
M\"uller~\cite{Muller}. In particular, there is surprisingly small difference
between the maximum reconstructibility of almost all graphs and the failure of
reconstructibility of the property of connectedness.
Spinoza and West~\cite{SW} showed that in fact in this setting only
$\CH{\ell+2}2$ cards are needed, generalizing the concept of reconstruction
number to $\ell$-reconstruction number.
For some easily reconstructed families it is natural to fix the number of
vertices kept in each card. The $2$-deck of $G$ determines only $|E(G)|$ and
$|V(G)|$. The $3$-deck determines also the number of edge incidences, whether
$G$ is triangle-free, and whether $G$ belongs to the family of complete
multipartite graphs, since that is true if and only if $P_2+P_1$ is not an
induced subgraph.
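The last characterization is easy to test computationally. The Python sketch below (our own illustration; the helper names are ours) confirms that $K_{2,2,1}$ contains no induced $P_2+P_1$, while $C_5+P_1$ does.

```python
from itertools import combinations, permutations

def has_induced_P2_P1(n, edges):
    """Does G contain an edge plus a vertex nonadjacent to both its endpoints?"""
    adj = {frozenset(e) for e in edges}
    for T in combinations(range(n), 3):
        for a, b, c in permutations(T, 3):
            if (frozenset((a, b)) in adj
                    and frozenset((a, c)) not in adj
                    and frozenset((b, c)) not in adj):
                return True
    return False

def complete_multipartite(parts):
    """Complete multipartite graph with the given part sizes."""
    blocks, start = [], 0
    for p in parts:
        blocks.append(range(start, start + p))
        start += p
    edges = [(u, v) for i, B in enumerate(blocks)
             for C in blocks[i + 1:] for u in B for v in C]
    return start, edges

n, edges = complete_multipartite([2, 2, 1])    # K_{2,2,1}
assert not has_induced_P2_P1(n, edges)
assert has_induced_P2_P1(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])  # C_5 + P_1
```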
Results on $\ell$-reconstructibility are known for degree lists, connectedness,
trees, graphs with maximum degree $2$, random graphs, and graphs that are
disconnected, complete multipartite, or regular. We describe these results in
the subsequent sections, and we include a few new results about these classes.
In Section~\ref{sec:trees} we offer a new short proof of N\'ydl's result that
$M_\ell({\bf T})\ge 2\ell+1$.
In Section~\ref{sec:rpartite} we show that $n$-vertex graphs whose components
have at most $n-\ell$ vertices are $\ell$-reconstructible, while graphs with
components having more vertices are guaranteed to be $\ell$-reconstructible
only if the original Reconstruction Conjecture is true.
In Section~\ref{sec:regular} we show that $r$-regular graphs with connectivity
$1$ are $(r+1)$-reconstructible.
We mention two other models of reconstruction. Levenshtein et al.~\cite{LKKM}
considered a local version of reconstruction in which the vertices of a graph
are labeled and for an $n$-vertex graph $G$ we have only $n$ cards: for each
vertex $v$, we are given the set $B_2(v)$ of the vertices at distance at most
$2$ from $v$ (but do not know which of them are adjacent to $v$). It was proved
in~\cite{LKKM} that every connected graph whose girth is at least $7$ and whose
diameter and minimum degree are at least $2$ is reconstructible in this model.
The authors also provided a graph with girth $6$ that is not reconstructible.
Levenshtein~\cite{Lev,Le} posed a more general problem where we know the sets
$B_t(v)$ instead of $B_2(v)$ and presented partial results on it.
In Section~\ref{sec:relation}, we describe the main results on a model
that uses the term ``$k$-reconstructible'' with a different meaning.
In that model, due to Fra\"iss\'e~\cite{Fr}, we are reconstructing digraphs
(viewed as general binary relations), and we are told the identities of the
deleted vertices. Our aim in describing these alternative models is to avoid
future confusion.
\section{Degree Lists}\label{sec:deglist}
The {\it degree list} (also called {\it degree sequence}) of a graph is the
multiset of degrees of the vertices in the graph. When studying the
Reconstruction Problem, the first thing one learns is that the degree list of
a graph is $1$-reconstructible. It suffices to find the number $m$ of edges,
because the degree of each vertex $v$ is the difference between $m$ and the
number of edges in the card $G-v$. The number $m$ is the information provided
by the $2$-deck (known by Observation~\ref{k-1}); one can also compute $m$
directly from the $(n-1)$-deck by $m=\sum_v |E(G-v)|/(n-2)$.
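As a small illustration, the following Python sketch (ours, not from any of the cited papers) recovers $m$ and the degree list of a $6$-vertex graph from the multiset of edge counts of its $(n-1)$-cards, which is the only information about the cards that the computation needs.

```python
def card_edge_counts(n, edges):
    """|E(G - v)| for each vertex v; as a multiset, this is exactly the
    edge-count information carried by the unlabeled (n-1)-deck."""
    return [sum(1 for e in edges if v not in e) for v in range(n)]

def degree_list_from_deck(n, edge_counts):
    m = sum(edge_counts) // (n - 2)            # each edge survives in n-2 cards
    return sorted(m - e for e in edge_counts)  # deg(v) = m - |E(G - v)|

# C_5 with a pendant edge: degrees 1,2,2,2,2,3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5)]
assert degree_list_from_deck(6, card_edge_counts(6, edges)) == [1, 2, 2, 2, 2, 3]
```

Note that the computation never needs to know which card came from which vertex: the unordered multiset of edge counts suffices.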
For decks with smaller cards, reconstruction of anything becomes more
difficult. The pairs of graphs with the same $k$-decks mentioned in the
introduction have the same degree list, but it is easy to construct examples
with different lists.
\begin{example}\label{list}
{\it For any positive $a,b,c$ with $a+b+c=t\ge4$, the graphs $C_t+P_1$ and
$S_{a,b,c}$ with $t+1$ vertices all have the same $3$-deck.} Note that
$\Delta(C_t+P_1)=2$ and $\Delta(S_{a,b,c})=3$. All these graphs have $t$
copies of $P_3$ and $t(t-3)$ copies of $P_2+P_1$ as induced subgraphs (this
involves a few cases for $S_{a,b,c}$), and the remaining $3$-vertex induced
subgraphs all have no edges. Hence $\mathcal{D}_3(C_t+P_1)=\mathcal{D}_3(S_{a,b,c})$
for all such $(a,b,c)$.
\end{example}
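For small $t$, the deck equality in Example~\ref{list} can be confirmed by brute force. The following Python sketch (our own check; the helper names \texttt{deck} and \texttt{spider} are ours) verifies the cases $t=4$ and $t=5$.

```python
from itertools import combinations, permutations
from collections import Counter

def canon(k, edges):
    """Canonical form of a k-vertex graph: minimum edge list over relabelings."""
    best = None
    for p in permutations(range(k)):
        form = tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        if best is None or form < best:
            best = form
    return best

def deck(n, edges, k):
    """Multiset of isomorphism classes of k-vertex induced subgraphs."""
    d = Counter()
    for S in combinations(range(n), k):
        idx = {v: i for i, v in enumerate(S)}
        d[canon(k, [(idx[u], idx[v]) for u, v in edges
                    if u in idx and v in idx])] += 1
    return d

def cycle(t):
    return [(i, (i + 1) % t) for i in range(t)]

def spider(a, b, c):
    """S_{a,b,c}: legs of lengths a, b, c attached to the center 0."""
    edges, v = [], 1
    for leg in (a, b, c):
        prev = 0
        for _ in range(leg):
            edges.append((prev, v))
            prev, v = v, v + 1
    return edges

# t = 4: C_4 + P_1 (vertex 4 isolated) vs S_{2,1,1}
assert deck(5, cycle(4), 3) == deck(5, spider(2, 1, 1), 3)
# t = 5: all three graphs share the same 3-deck
assert deck(6, cycle(5), 3) == deck(6, spider(2, 2, 1), 3) == deck(6, spider(3, 1, 1), 3)
```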
Concerning thresholds, this easy example shows that guaranteeing
$\ell$-reconstructibility of the degree list or the maximum degree
requires $n>\ell+3$. In this example, we considered $\mathcal{D}_k(G)$ with
$k\in\{\Delta(G),\Delta(G)+1\}$. Manvel observed that having one more vertex
in the cards prevents such examples.
\begin{theorem}[Manvel~\cite{Manvel}]\label{deltadeg}\label{tlist}
The degree list of a graph $G$ is determined by $\mathcal{D}_{\Delta(G)+2}(G)$.
\end{theorem}
The result for $\Delta(G)$ is easy since all induced subgraphs with at most
$k$ vertices are visible in the $k$-deck. Hence $\mathcal{D}_{\Delta(G)+2}(G)$ shows
that $G$ has a vertex of degree $\Delta(G)$ and none larger. To determine the
degree list, one can then proceed by induction on $r$ to count the vertices of
degree $\Delta(G)-r$ using the following tool observed originally by
Manvel~\cite{Manvel}.
\begin{lemma}\label{count}
Let $G$ be an $n$-vertex graph. The sum, over all cards in
the $k$-deck $\mathcal{D}_k(G)$, of the number of vertices of degree $j$ in the card,
equals $\SE ij{j+n-k} a_i\CH ij\CH{n-1-i}{k-1-j}$, where $a_i$ is the number of
vertices having degree $i$ in $G$.
\end{lemma}
The lemma holds because a vertex $v$ appears and has degree $j$ in a card
in the $k$-deck if and only if the card is formed by choosing $v$ along with
$j$ of its neighbors and $k-1-j$ of its nonneighbors. The lemma yields the
following corollary by solving for $a_i$ through successively smaller $i$.
\begin{corollary}[Manvel~\cite{Manvel}]\label{klist}
The degree list of a graph $G$ is determined when both $\mathcal{D}_k(G)$ and the
numbers of vertices with degree $i$ for all $i$ at least $k$ are known.
\end{corollary}
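Lemma~\ref{count} can be confirmed numerically on small graphs. The Python sketch below (an illustrative check of ours) compares the brute-force left side with the formula for every valid pair $(k,j)$ on the graph $C_5+P_1$ from Example~\ref{list}.

```python
from itertools import combinations
from math import comb
from collections import Counter

def lemma_check(n, edges, k, j):
    """Compare a brute-force count with Manvel's formula for one pair (k, j)."""
    adj = {v: set() for v in range(n)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    # left side: degree-j vertices summed over all k-cards
    lhs = sum(1 for S in combinations(range(n), k)
              for v in S if len(adj[v] & set(S)) == j)
    # right side: the formula, using the degree counts a_i of G
    a = Counter(len(adj[v]) for v in range(n))
    rhs = sum(a[i] * comb(i, j) * comb(n - 1 - i, k - 1 - j)
              for i in range(j, j + n - k + 1))
    return lhs == rhs

# the 6-vertex graph C_5 + P_1, all valid pairs (k, j)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert all(lemma_check(6, edges, k, j) for k in range(2, 6) for j in range(k))
```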
\begin{example}
For sharpness of Theorem~\ref{deltadeg}, Manvel~\cite{Manvel} showed that the
maximum degree itself is not always determined by $\mathcal{D}_{\Delta(G)+1}(G)$.
He constructed graphs $G$ and $H$ such that $\Delta(G)=k$, $\Delta(H)=k-1$, and
$\mathcal{D}_k(G)=\mathcal{D}_k(H)$. Both graphs are forests of stars. However, in
this construction the number of vertices is $(k+2)2^{k-2}$, exponential in $k$.
In particular, $G=\sum_i\CH{k}{2i}K_{1,k-2i}$ and
$H=\sum_i\CH{k}{2i+1}K_{1,k-1-2i}$. From this Manvel concluded that
for all $k$ there exist nonisomorphic graphs with the same $k$-deck.
\end{example}
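For $k=3$, Manvel's construction is small enough to check directly: $G=K_{1,3}+3K_{1,1}$ and $H=3K_{1,2}+K_1$ each have $10$ vertices. The following Python sketch (our own brute-force verification; the helper names are ours) confirms that they have the same $3$-deck.

```python
from itertools import combinations, permutations
from collections import Counter

def canon(k, edges):
    """Canonical form of a k-vertex graph: minimum edge list over relabelings."""
    best = None
    for p in permutations(range(k)):
        form = tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        if best is None or form < best:
            best = form
    return best

def deck(n, edges, k):
    """Multiset of isomorphism classes of k-vertex induced subgraphs."""
    d = Counter()
    for S in combinations(range(n), k):
        idx = {v: i for i, v in enumerate(S)}
        d[canon(k, [(idx[u], idx[v]) for u, v in edges
                    if u in idx and v in idx])] += 1
    return d

def star(leaves, offset=0):
    """K_{1,leaves} with its center at vertex `offset`."""
    return [(offset, offset + i) for i in range(1, leaves + 1)]

# k = 3: G = K_{1,3} + 3 K_{1,1} and H = 3 K_{1,2} + K_1, both on 10 vertices
G = star(3) + star(1, 4) + star(1, 6) + star(1, 8)
H = star(2) + star(2, 3) + star(2, 6)        # vertex 9 stays isolated
assert deck(10, G, 3) == deck(10, H, 3)      # same 3-deck, yet max degrees 3 vs 2
```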
\begin{question}\label{deltaalt}
What is the smallest value of $n$ such that $n$-vertex graphs $G$ and $H$
with maximum vertex degrees $k$ and $k-1$ exist having the same $k$-deck?
\end{question}
Lemma~\ref{count} and Corollary~\ref{klist} are used as tools in
reconstruction of degree lists.
\begin{theorem}[Chernyak~\cite{Che}]\label{Chernyak}
When $n\ge6$, the degree list of every $n$-vertex graph is $2$-reconstructible.
\end{theorem}
This result is sharp, because the $5$-vertex graphs $C_4+P_1$ and $S_{2,1,1}$
from Example~\ref{list} have the same $3$-deck but different degree lists.
Exact results were pushed one step further.
\begin{theorem}[Kostochka--Nahvi--West--Zirlin~\cite{KNWZ}]\label{deg3rec}
For $n\ge7$, the degree list of every $n$-vertex graph is $3$-reconstructible.
\end{theorem}
Theorem~\ref{deg3rec} is sharp: the $6$-vertex graphs $C_5+P_1$ and
$S_{2,2,1}$ and $S_{3,1,1}$ from Example~\ref{list} all have the same $3$-deck.
Note that since the $(n-2)$-deck determines the $(n-3)$-deck,
Theorem~\ref{deg3rec} combined with an analysis of $6$-vertex graphs implies
Theorem~\ref{Chernyak}.
By making more thorough use of Lemma~\ref{count}, Taylor~\cite{Taylor} obtained
a surprisingly small general threshold on the number of vertices for
$\ell$-reconstructibility of the degree list.
\begin{theorem}[Taylor~\cite{Taylor}]\label{Taylor}
For $n\ge g(\ell)$, the degree list of every $n$-vertex graph is
$\ell$-reconstructible, where
$$
g(\ell) = (\ell-\log{\ell}+1)
\left( {\rm e} + \frac{{\rm e} \log{\ell} + {\rm e} +1}{(\ell-1) \log{\ell}-1} \right) +1.
$$
Here ${\rm e}$ denotes the base of the natural logarithm.
\end{theorem}
This result also shows that the degree list of an $n$-vertex graph is
reconstructible from the $k$-deck when $n$ is not too much larger than $k$,
regardless of the value of the maximum degree. In particular,
$n\ge\frac{1+o(1)}{1-1/{\rm e}}k$ suffices. This theorem is a rather strong
statement about reconstructibility, but it perhaps does not say much about
Question~\ref{deltaalt}, giving only a linear lower bound.
Theorem~\ref{Taylor} is strong but likely not sharp; answering the next
question would improve it and generalize Theorem~\ref{deg3rec}.
By Theorem~\ref{Taylor} the threshold is asymptotically at most ${\rm e}\ell$.
One could begin by seeking the largest graphs whose degree lists are not
$4$-reconstructible.
\begin{question}
For fixed $\ell\in{\mathbb N}$, what is the least threshold $n_\ell$ such that the
degree list of every graph with at least $n_\ell$ vertices is
$\ell$-reconstructible?
\end{question}
\section{Connectedness}\label{sec:connect}
For graphs with at least three vertices, connectedness is $1$-reconstructible,
because an $n$-vertex connected graph has at least two connected $(n-1)$-cards,
while a disconnected graph has at most one connected $(n-1)$-card (when
$n\ge3$). Manvel~\cite{Manvel} strengthened this result.
\begin{theorem}[Manvel~\cite{Manvel}]\label{thm: two deck}\label{manvel}
For $n \ge 6$, the connectedness of an $n$-vertex graph is $2$-reconstructible,
and the threshold for $n$ is sharp.
\end{theorem}
The threshold is sharp because the $5$-vertex graphs $C_4+P_1$ and $S_{2,1,1}$
have the same $3$-deck (Example~\ref{list}). In fact, these graphs and their
complements are the only $5$-vertex graphs that are not
$2$-reconstructible~\cite{MR}.
All results that obtain a function $f(\ell)$ such that some property or class
of graphs is $\ell$-reconstructible for graphs with at least $f(\ell)$ vertices
provide support for Kelly's Conjecture. For general $\ell$, this is known for
connectedness.
\begin{theorem}[Spinoza--West~\cite{SW}]\label{conn}
For $\ell\in{\mathbb N}$, the connectedness of every $n$-vertex graph is
$\ell$-reconstructible when $n>2\ell^{(\ell+1)^2}$.
\end{theorem}
The threshold in Theorem~\ref{conn} is not sharp.
\begin{conjecture}[Spinoza--West~\cite{SW}]\label{connconj}
For $n\ge 2\ell+2$, the connectedness of an $n$-vertex graph is
$\ell$-reconstructible, and the threshold for $n$ is sharp.
\end{conjecture}
For $\ell=2$, the $5$-vertex graphs $C_4+P_1$ and $S_{2,1,1}$ again show that
the conjecture is sharp. For larger $\ell$ the right answer may be $2\ell+1$,
which is needed due to the example $\{P_{2\ell},C_{\ell+1}+P_{\ell-1}\}$
(see Section~\ref{sec:maxdeg2}). There are three sets of two $7$-vertex graphs
that have the same $4$-deck; none consists of a connected and a disconnected
graph. N\'ydl's graphs showing that $M_\ell$ grows superlinearly are all
connected. We believe that connectedness of an $n$-vertex graph is
$\ell$-reconstructible whenever $\ell<\CL{n/2}$ (except for
$\{C_4+P_1,S_{2,1,1}\}$).
For $\ell=3$, Spinoza and West~\cite{SW} improved the threshold in
Theorem~\ref{conn} to $n\ge25$. The exact answer was found later.
\begin{theorem}[Kostochka--Nahvi--West--Zirlin~\cite{KNWZ}]\label{conn3}
For every graph with at least seven vertices, connectedness is
$3$-reconstructible.
\end{theorem}
Theorem~\ref{conn3} is sharp due to $\{C_5+P_1,S_{2,2,1},S_{3,1,1}\}$
(Example~\ref{list}). When combined with a short analysis of $6$-vertex
graphs, Theorem~\ref{conn3} implies Theorem~\ref{manvel}. The proof of
Theorem~\ref{conn3} uses Theorem~\ref{deg3rec} to reduce the problem to graphs
with exactly two vertices of degree $1$ and none of degree $0$.
Toward Conjecture~\ref{connconj}, it would be interesting to find a substantial
improvement of the threshold in Theorem~\ref{conn} or to find the largest two
graphs, one connected and one disconnected, that have the same deck of
subgraphs with four vertices deleted.
\section{Graphs with Maximum Degree $2$}\label{sec:maxdeg2}
In Problem 11898 of the American Mathematical Monthly, Stanley posed a
question related to reconstructing $2$-regular graphs from their $k$-decks.
\begin{problem}[Stanley \cite{Stanley}]\label{stanley}
Let $n$ and $k$ be integers, with $n\ge k \ge 2$. Let $G$ be a graph with $n$
vertices whose components are cycles of length greater than $k$. Let $i_k(G)$
be the number of $k$-element independent sets of vertices of $G$. Show that
$i_k(G)$ depends only on $k$ and $n$.
\end{problem}
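As an instance, for $n=12$ and $k=3$ the graphs $C_{12}$ and $C_5+C_7$ (whose cycle lengths all exceed $3$) have the same number, $112$, of $3$-element independent sets. The Python sketch below (our own brute-force count, not Stanley's generating-function solution) verifies this.

```python
from itertools import combinations

def independent_sets(n, edges, k):
    """Count the k-element independent sets of vertices by brute force."""
    adj = {frozenset(e) for e in edges}
    return sum(1 for S in combinations(range(n), k)
               if not any(frozenset(p) in adj for p in combinations(S, 2)))

def cycle(t, offset=0):
    return [(i + offset, (i + 1) % t + offset) for i in range(t)]

g1 = cycle(12)                      # one 12-cycle
g2 = cycle(5) + cycle(7, offset=5)  # C_5 + C_7: 12 vertices, all cycles longer than 3
assert independent_sets(12, g1, 3) == independent_sets(12, g2, 3) == 112
```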
Let $s(G,H)$ denote the number of induced subgraphs of $G$ isomorphic to $H$.
Stanley's problem asserts $s(G,\overline{K}_k)=s(G',\overline{K}_k)$ for
$n$-vertex $2$-regular graphs $G$ and $G'$ whose components have length greater
than $k$ (here $\overline{H}$ denotes the complement of $H$). Stanley's proposed
solution of Problem~\ref{stanley} used generating functions.
Independent sets are just one type of $k$-vertex induced subgraph. In a graph
with maximum degree $2$ whose cycles have more than $k$ vertices, all
$k$-vertex induced subgraphs are {\it linear forests}, meaning disjoint unions
of paths. By looking at a larger class of graphs, Spinoza and West gave
a bijective proof, by induction on $k$, that yields the same conclusion for
the number of induced subgraphs isomorphic to any $k$-vertex linear forest and
thereby shows that the graphs with the stated property all have the same $k$-deck.
\begin{theorem}[Spinoza--West~\cite{SW}]\label{main}
Let $G$ and $G'$ be graphs with maximum degree $2$ having the same number
of vertices and the same number of edges. If every component in each graph is
a cycle with at least $k+1$ vertices or a path with at least $k-1$ vertices,
then $\mathcal{D}_k(G)=\mathcal{D}_k(G')$.
\end{theorem}
Important cases of the theorem, and indeed its proof, are captured by the
following three statements, among which the third is the key, proved
inductively.
\begin{claim}\label{twocomp}
$\mathcal{D}_k(C_{q+r})=\mathcal{D}_k(C_q+C_r)$ if $q,r\ge k+1$,
$\mathcal{D}_k(P_{q+r})=\mathcal{D}_k(C_q+P_r)$ if $q\ge k+1$ and $r\ge k-1$, and
$\mathcal{D}_k(P_{q-1}+P_r)=\mathcal{D}_k(P_q+P_{r-1})$ if $q,r\ge k$.
\end{claim}
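Each statement of Claim~\ref{twocomp} can be verified by brute force for small parameters; the following Python sketch (our own illustration, with helper names of our choosing) checks one instance of each with $k=3$.

```python
from itertools import combinations, permutations
from collections import Counter

def canon(k, edges):
    """Canonical form of a k-vertex graph: minimum edge list over relabelings."""
    best = None
    for p in permutations(range(k)):
        form = tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
        if best is None or form < best:
            best = form
    return best

def deck(n, edges, k):
    """Multiset of isomorphism classes of k-vertex induced subgraphs."""
    d = Counter()
    for S in combinations(range(n), k):
        idx = {v: i for i, v in enumerate(S)}
        d[canon(k, [(idx[u], idx[v]) for u, v in edges
                    if u in idx and v in idx])] += 1
    return d

def cycle(t, offset=0):
    return [(i + offset, (i + 1) % t + offset) for i in range(t)]

def path(n, offset=0):
    return [(i + offset, i + 1 + offset) for i in range(n - 1)]

k = 3
# D_k(C_{q+r}) = D_k(C_q + C_r) with q = r = 4 >= k+1
assert deck(8, cycle(8), k) == deck(8, cycle(4) + cycle(4, offset=4), k)
# D_k(P_{q+r}) = D_k(C_q + P_r) with q = 4 >= k+1 and r = 2 >= k-1
assert deck(6, path(6), k) == deck(6, cycle(4) + path(2, offset=4), k)
# D_k(P_{q-1} + P_r) = D_k(P_q + P_{r-1}) with q = 3 and r = 4, both >= k
assert deck(6, path(2) + path(4, offset=2), k) == deck(6, path(3) + path(3, offset=3), k)
```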
The proof of Theorem~\ref{main} reduces to Claim~\ref{twocomp}
by the following natural lemma.
\begin{lemma}\label{samedeck}
If $G$, $G'$, and $H$ are graphs, then $\mathcal{D}_k(G)=\mathcal{D}_k(G')$ if and only if
$\mathcal{D}_k(G+H)=\mathcal{D}_k(G'+H)$.
\end{lemma}
For every graph $G$ with maximum degree $2$, Theorem~\ref{main} provides
a lower bound for the value $k$ such that $\mathcal{D}_k(G)$ determines $G$ and
hence an upper bound on the maximum reconstructibility. Except for some
small instances, these bounds turn out to be sharp. Without giving the
complete details of the statement, the result is the following.
\begin{theorem}\label{rho2}
Let $G$ be a graph with maximum degree $2$. If $m$ is the maximum number of
vertices in a component, $F$ is a component with $m$ vertices,
and $m'$ is the maximum number of vertices in a component other than $F$,
then $G$ is $k$-deck reconstructible if and only if
$k\ge\max\{\FL{m/2}+\epsilon,m'+\epsilon'\}$, where $\epsilon=1$ if $F$ is
a path (otherwise $\epsilon=0$), and $\epsilon'\in\{0,1,2\}$.
\end{theorem}
We omit the technical definition of $\epsilon'$ that incorporates the
small exceptions to the general formula. In particular, for a $2$-regular
$n$-vertex graph $G$, the formula for $k$ simplifies to $\max\{\FL{m/2},m'\}$.
Thus the maximum reconstructibility of the cycle $C_n$ is $\CL{n/2}$, and no
graph in this class has smaller reconstructibility.
That is, every $2$-regular $n$-vertex graph is $\CL{n/2}$-reconstructible,
and in fact graphs with maximum degree $2$ are $\FL{n/2}$-reconstructible.
For maximum degree $3$ or $3$-regular graphs, discussed in
Section~\ref{sec:regular}, much less is known. It is only known that
$3$-regular graphs are $2$-reconstructible, with no nontrivial upper bounds
known on the maximum reconstructibility.
\section{Trees}\label{sec:trees}
Trees have played a prominent role in the study of reconstruction. The
original 1957 paper of Kelly~\cite{kel2} showed that trees are
$1$-reconstructible. Giles~\cite{Giles} showed in 1976 that trees with
at least five vertices are $2$-reconstructible ($P_4$ and $K_{1,3}$ have the
same $2$-deck). According to N\'ydl~\cite{N01}, the survey of Bondy and
Hemminger~\cite{BH77} reported the existence of a preprint by Giles
proving that sufficiently large trees are $\ell$-reconstructible, but this
was apparently never published and seems to remain open.
N\'ydl gave a lower bound for the threshold $M'_\ell$ such that for
$n\ge M'_\ell$, no two $n$-vertex trees have the same $(n-\ell)$-deck.
Since that paper is somewhat inaccessible (and does not present a full proof),
we give a new short proof here using the third statement of Claim~\ref{twocomp}.
Recall that $S_{a,b,c}$ is the spider with $a+b+c+1$ vertices consisting of
paths of lengths $a$, $b$, and $c$ with a common endpoint.
\begin{theorem}[N\'ydl~\cite{N90}]\label{trees}
The two trees $S_{k-1,k-1,1}$ and $S_{k,k-2,1}$ have the same $k$-deck.
\end{theorem}
\begin{proof}
Let $G=S_{k-1,k-1,1}$ and $H=S_{k,k-2,1}$; both $G$ and $H$ have $2k$ vertices.
We partition the $k$-decks according to the usage of the non-peripheral leaf,
which we call $v$ in each graph. The portions of the $k$-deck in which $v$
does not appear are the same, since both equal $\mathcal{D}_k(P_{2k-1})$. The portions
in which $v$ appears and its neighbor does not are also the same, since they
are the $(k-1)$-decks (plus an isolated vertex) of $P_{k-1}+P_{k-1}$ and
$P_k+P_{k-2}$, which by Claim~\ref{twocomp} are the same.
In the remainder of the cards in the decks, $v$ appears in a nontrivial
component that is a spider $S_{a,b,1}$. Consider those cards where this
spider takes $a$ vertices from the first leg and $b$ vertices from the second
leg in the original specification of the host trees. Since $a,b\le k-2$,
such a spider exists in $H$ if and only if it exists in $G$.
For each choice of $(a,b)$, the cards in this portion of the $k$-deck of $G$
consist of the disjoint union of $S_{a,b,1}$ with cards in the
$(k-a-b-2)$-deck of $P_{k-1-a}+P_{k-1-b}$. Similarly, in the $k$-deck of $H$
we have the disjoint union of $S_{a,b,1}$ with cards in the $(k-a-b-2)$-deck
of $P_{k-a}+P_{k-2-b}$. Since $k-a\ge k-a-b-2$ and $k-b-1\ge k-a-b-2$,
Claim~\ref{twocomp} applies to guarantee that these portions of the two
$k$-decks are the same.
\end{proof}
Thus $M'_\ell\ge2\ell+1$.
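Theorem~\ref{trees} can also be confirmed by direct computation for small $k$. The following brute-force Python sketch (ours, for illustration) checks that the spiders $S_{k-1,k-1,1}$ and $S_{k,k-2,1}$ have equal $k$-decks for $k\in\{3,4\}$ while not being isomorphic; cards are compared via a canonical form obtained by minimizing the edge list over all relabelings.

```python
from itertools import combinations, permutations
from collections import Counter

def spider(a, b, c):
    # S_{a,b,c}: three paths of lengths a, b, c sharing the endpoint 0
    edges, nxt = set(), 1
    for leg in (a, b, c):
        prev = 0
        for _ in range(leg):
            edges.add(frozenset({prev, nxt}))
            prev, nxt = nxt, nxt + 1
    return nxt, edges  # (number of vertices, edge set)

def canon(vertices, edges):
    # canonical form: lexicographically least relabeled edge list
    # (fine for the small graphs considered here)
    vs = sorted(vertices)
    best = None
    for perm in permutations(range(len(vs))):
        relabel = dict(zip(vs, perm))
        form = tuple(sorted(tuple(sorted(relabel[x] for x in e)) for e in edges))
        best = form if best is None or form < best else best
    return (len(vs), best)

def deck(n, edges, k):
    return Counter(canon(s, {e for e in edges if e <= set(s)})
                   for s in combinations(range(n), k))

for k in (3, 4):
    g = spider(k - 1, k - 1, 1)
    h = spider(k, k - 2, 1)
    assert deck(*g, k) == deck(*h, k)                           # equal k-decks
    assert canon(range(g[0]), g[1]) != canon(range(h[0]), h[1]) # yet nonisomorphic
```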
\begin{conjecture}[N\'ydl~\cite{N90}]\label{Ntree}
$M'_\ell=2\ell+1$.
\end{conjecture}
Note that Conjecture~\ref{Ntree} is not quite the same as $M_\ell({\bf T})=2\ell+1$.
N\'ydl required only that no two $n$-vertex trees have the same $(n-\ell)$-deck,
but for $\ell$-reconstructibility there is also the matter of showing that
all possible reconstructions from the deck are trees; that is, showing that
the family of trees is $\ell$-recognizable when $n\ge 2\ell+1$.
An $n$-vertex graph is a tree if and only if it has $n-1$ edges and is
connected (or has $n-1$ edges and no cycles). From the $2$-deck, we know the
number of edges. If Conjecture~\ref{connconj} is true in the stronger form
replacing $2\ell+2$ with $2\ell+1$ for $\ell\ge3$, which Theorem~\ref{conn3}
proves for $\ell=3$, then combined with Conjecture~\ref{Ntree} it would
imply $M_\ell({\bf T})=2\ell+1$. Indeed, the full strength of
Conjecture~\ref{connconj} probably is not needed; we only need to know from
the $\FL{n/2}$-deck whether an $n$-vertex graph with $n-1$ edges has a cycle
of length at least $n/2$.
\section{Disconnected and Complete Multipartite Graphs}\label{sec:rpartite}
One of the earliest results on reconstruction, by Kelly~\cite{kel2},
is that disconnected graphs are $1$-reconstructible. Manvel~\cite{Manvel}
discussed the $\ell$-reconstructibility of disconnected graphs.
We expand on this discussion to obtain a sharp threshold on the size
of components that makes $\ell$-reconstructibility easy.
We first prove what might be called the ``negative'' result.
\begin{proposition}
If graphs with at least $\ell+2$ vertices consisting of a connected graph
and $\ell-1$ isolated vertices are $\ell$-reconstructible, then the original
Reconstruction Conjecture holds.
\end{proposition}
\begin{proof}
We need to prove $1$-reconstructibility when $n\ge3$. Kelly~\cite{kel2} proved
this for disconnected graphs, so consider a connected $n$-vertex graph $G$.
We are given $\mathcal{D}_{n-1}(G)$. By Observation~\ref{k-1}, we also know
$\mathcal{D}_{n-i}(G)$ for $2\le i\le \ell$. Let $G'=G+(\ell-1)K_1$; note that
$G'$ has at least $\ell+2$ vertices. For $2\le i\le \ell$, let $\mathcal{D}'_i$
consist of $\CH{\ell-1}{\ell-i}$ copies of $C+(i-1)K_1$ for each occurrence of
$C$ in $\mathcal{D}_{n-i}(G)$. Note that
$$
\mathcal{D}_{n-1}(G')=\mathcal{D}_{n-1}(G)\cup (\mathcal{D}'_2\cup\cdots\cup\mathcal{D}'_\ell).
$$
Thus if $G+(\ell-1)K_1$ is $\ell$-reconstructible, then we have determined
$G$ from $\mathcal{D}_{n-1}(G)$.
\end{proof}
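The displayed deck identity can be verified on a small instance. The Python sketch below (our check, not part of the original proof) takes $G=P_4$ and $\ell=2$; since all cards involved have at most $3$ vertices, a card is determined up to isomorphism by its vertex and edge counts.

```python
from itertools import combinations
from math import comb
from collections import Counter

def deck(n, edges, k):
    # k-deck of a graph on vertices 0..n-1; cards with at most 3 vertices
    # are determined up to isomorphism by (vertex count, edge count)
    return Counter((k, sum(1 for e in edges if e <= set(s)))
                   for s in combinations(range(n), k))

n, ell = 4, 2
P4 = {frozenset({i, i + 1}) for i in range(3)}

# left side: the (n-1)-deck of G' = G + (ell-1)K_1
lhs = deck(n + ell - 1, P4, n - 1)

# right side: D_{n-1}(G) together with D'_i for 2 <= i <= ell, where D'_i has
# binom(ell-1, ell-i) copies of C + (i-1)K_1 for each C in D_{n-i}(G)
rhs = Counter(deck(n, P4, n - 1))
for i in range(2, ell + 1):
    for (size, m), cnt in deck(n, P4, n - i).items():
        rhs[(size + i - 1, m)] += comb(ell - 1, ell - i) * cnt

assert lhs == rhs
```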
Now consider $n$-vertex graphs whose components all have at most $n-\ell$
vertices. Manvel~\cite{Manvel} observed that \emph{if} it is known that $G$ is
such a graph, then $G$ is $\ell$-reconstructible. In fact, we show that such
graphs can be recognized from the $(n-\ell)$-deck. With Manvel's observation,
this implies that these graphs are in fact $\ell$-reconstructible.
The argument generalizes a proof of the $1$-reconstructibility of disconnected
graphs, involving a counting argument for ordinary graph reconstruction
that was applied by Bondy and Hemminger~\cite{BH78} and originated with
Greenwell and Hemminger~\cite{GH}. A similar argument to that given here
appears in the ``Main Lemma'' of N\'ydl~\cite{N01}.
We need the basic idea of Kelly's Lemma~\cite{kel2}, which counts the copies of
a given graph $F$ that appear in $G$ by dividing the total number of
appearances of $F$ in the deck by the number of times each copy appears. We
use an analogue for induced subgraphs and generalize to the $(n-\ell)$-deck.
\begin{lemma}\label{kelly}
If $G$ is an $n$-vertex graph, and $F$ is a graph with at most $n-\ell$
vertices, then the number $s_F(G)$ of occurrences of $F$ as an induced subgraph
of $G$ is $\ell$-reconstructible.
\end{lemma}
\begin{proof}
Let $p=|V(F)|$. Each induced copy of $F$ appears in $\CH{n-p}\ell$ cards
in $\mathcal{D}_{n-\ell}(G)$. Letting $t$ be the total count of all appearances of $F$
as an induced subgraph in all cards in $\mathcal{D}_{n-\ell}(G)$, we have
$s_F(G)=t\big/\CH{n-p}\ell$.
\end{proof}
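The counting argument of the lemma is easy to exercise numerically. In the sketch below (ours, for illustration), the host graph is $P_6$ with $\ell=2$ and the pattern $F$ is $P_3$: summing the induced copies of $F$ over all cards and dividing by $\binom{n-p}{\ell}$ recovers $s_F(G)$.

```python
from itertools import combinations, permutations
from math import comb

def induced_count(vertices, edges, F_vs, F_es):
    # number of vertex subsets inducing a graph isomorphic to F
    def canon(vs, es):
        vs = sorted(vs)
        best = None
        for perm in permutations(range(len(vs))):
            rl = dict(zip(vs, perm))
            form = tuple(sorted(tuple(sorted(rl[x] for x in e)) for e in es))
            best = form if best is None or form < best else best
        return best
    target = canon(F_vs, F_es)
    return sum(canon(s, {e for e in edges if e <= set(s)}) == target
               for s in combinations(sorted(vertices), len(F_vs)))

# host: the path P_6; pattern F: the path P_3; delete ell = 2 vertices
n, ell = 6, 2
E = {frozenset({i, i + 1}) for i in range(n - 1)}
F_vs, F_es = range(3), [(0, 1), (1, 2)]
p = len(F_vs)

# total count t of appearances of F over all cards of the (n-ell)-deck
t = sum(induced_count(s, {e for e in E if e <= set(s)}, F_vs, F_es)
        for s in combinations(range(n), n - ell))

# Kelly-style division: each induced copy of F lies in binom(n-p, ell) cards
assert t % comb(n - p, ell) == 0
assert t // comb(n - p, ell) == induced_count(range(n), E, F_vs, F_es)
```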
\begin{theorem}\label{disconn}
If every connected subgraph of $G$ has at most $n-\ell$ vertices, then
$G$ is $\ell$-reconstructible.
\end{theorem}
\begin{proof}
It suffices to determine, for every connected graph $F$, the number $c_F(G)$ of
components of such a graph $G$ that are isomorphic to $F$.
Let an {\it induced chain} of length $r$ be a list $\VEC F0r$ of connected
induced subgraphs of $G$ such that $F_i$ is an induced subgraph of $F_{i+1}$ for
$0\le i<r$. For any connected induced subgraph $F$ of $G$, let the {\it depth}
of $F$ be the maximum $r$ such that $F$ is the first subgraph in an induced
chain of length $r$.
Since every connected subgraph of $G$ has at most $n-\ell$ vertices,
all connected induced subgraphs of $G$ appear in the deck. Since we know
all these subgraphs, we can determine all the induced chains, and hence we
know the depth of each connected induced subgraph.
If $F$ has depth $0$, then every induced copy of $F$ in $G$ is a component,
so $c_F(G)=s_F(G)$. For larger depth, group the induced copies of $F$ by the
unique component of $G$ containing that copy. Summing over all components of
$G$, we obtain
\begin{equation*}
s_F(G)=\sum_{H} s_F(H)c_H(G).\tag{1}
\end{equation*}
When $s_F(H)\ne0$ and $F\ne H$, every induced chain starting at $H$ can be
augmented by adding $F$ at the beginning, so $H$ has smaller depth than $F$.
Using Lemma~\ref{kelly} to compute $s_F(G)$ and applying the induction
hypothesis to compute values of $c_H(G)$, we now know every quantity in
(1) other than $c_F(G)$ and can solve for $c_F(G)$.
\end{proof}
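To illustrate the recursion behind equation (1), the toy Python sketch below (our specialization, not the general algorithm of the proof) restricts to graphs whose components are paths, so that a connected induced subgraph is just a path $P_m$ and $s_{P_m}$ can be written in closed form. Solving from the largest path downward recovers the component counts.

```python
# components are paths; everything is encoded by path order (vertex count)
def s_path(m, comps):
    # induced copies of P_m (m >= 2) in a disjoint union of paths: a path
    # P_t contains max(t - m + 1, 0) induced copies (consecutive runs)
    return sum(max(t - m + 1, 0) for t in comps)

def recover_components(s, max_order):
    # s[m] = number of induced copies of P_m; solve equation (1) from the
    # largest path downward (larger paths have smaller depth)
    c = {}
    for m in range(max_order, 1, -1):
        inside_larger = sum((t - m + 1) * c[t] for t in c)
        c[m] = s[m] - inside_larger
    return c

G = [3, 3, 2]                           # two copies of P_3 and one P_2
s = {m: s_path(m, G) for m in (2, 3)}   # would come from Lemma 4 in practice
assert recover_components(s, 3) == {3: 2, 2: 1}
```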
Here we used the property that for the family ${\mathcal F}$ of connected graphs, every
member of ${\mathcal F}$ contained in $G$ has at most $n-\ell$ vertices and belongs to a
unique maximal member of ${\mathcal F}$ contained in $G$, namely a component. The
argument applies also to other families ${\mathcal F}$ that have this property.
For a very special class of disconnected graphs, much stronger results about
reconstructibility are known. Note first a simple observation, using the
fact that the cards in the $k$-deck determine their complements.
\begin{observation}
A graph $G$ is determined by its $k$-deck if and only if its complement
$\overline{G}$ is determined by its $k$-deck.
\end{observation}
Hence when we discuss $\ell$-reconstructibility of graphs whose components are
complete graphs, we are also discussing $\ell$-reconstructibility of complete
multipartite graphs.
Let $G$ be a disjoint union of complete graphs. Membership in this family is
determined by the $3$-deck, since a graph is a disjoint union of complete
graphs if and only if it does not have $P_3$ as an induced subgraph. When the
largest component has at most $k$ vertices, Theorem~\ref{disconn} implies that
the graph is determined by its $k$-deck (Spinoza and West~\cite{SW} had
observed that the $(k+1)$-deck suffices).
More interesting is the situation when we bound the number of parts rather
than the size of the parts.
\begin{theorem}[Spinoza--West~\cite{SW}]\label{Trpart}
Every complete $r$-partite graph $G$ is determined by its $(r+1)$-deck
(as are disjoint unions of $r$ complete graphs).
\end{theorem}
The proof of Theorem~\ref{Trpart} is actually algebraic. For $r\ge2$,
the $3$-deck tells us that $G$ is complete multipartite, and the absence
of $K_{r+1}$ in the deck makes it $r$-partite. Letting the part-sizes be
$\VEC q1r$, form the polynomial $\PE i1r (x-q_i)$. The coefficient of
$(-1)^jx^{r-j}$ in the expansion is the number of complete cards in $\mathcal{D}_j(G)$.
Since each $\mathcal{D}_j(G)$ with $j\le r$ is determined by $\mathcal{D}_{r+1}(G)$, we know the
polynomial and can find the roots $\VEC q1r$.
Theorem~\ref{Trpart} is sharp for $r\le2$ and all $n$. It is immediate that
complete bipartite graphs are not $2$-deck reconstructible, since they are not
determined by their numbers of edges and vertices (the $3$-vertex cards are
not given). For complete tripartite graphs, the $3$-deck determines that the
graph is a complete multipartite graph, but the following example shows that
this is not sufficient.
\begin{example}[Spinoza--West~\cite{SW}]\label{Erpart}
The complete multipartite graphs $K_{7,4,3}$ and $K_{6,6,1,1}$ have the
same $3$-deck. It consists of $84$ copies of $K_3$, $240$ copies of
$P_3$, and $40$ copies of $\overline{K}_3$.
\end{example}
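The counts in the example follow from the part sizes alone: three vertices in distinct parts induce $K_3$, two in one part and one elsewhere induce $P_3$, and three in one part induce $\overline{K}_3$ (no card of the form $K_2+K_1$ occurs). The following Python sketch (our check) confirms the numbers.

```python
from itertools import combinations
from math import comb, prod

def three_deck_profile(parts):
    # (K_3 count, P_3 count, empty-card count) of the 3-deck of the
    # complete multipartite graph with the given part sizes
    n = sum(parts)
    k3 = sum(prod(c) for c in combinations(parts, 3))
    empty = sum(comb(q, 3) for q in parts)
    p3 = comb(n, 3) - k3 - empty
    return k3, p3, empty

assert three_deck_profile((7, 4, 3)) == (84, 240, 40)
assert three_deck_profile((6, 6, 1, 1)) == (84, 240, 40)
```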
We expect that diligence will yield more general examples.
\begin{question}
Is it true for all $r\in{\mathbb N}$ with $r\ge3$ that there are a complete $r$-partite
graph and a complete $(r+1)$-partite graph having the same $r$-deck?
\end{question}
Finally, N\'ydl considered the reconstructibility of disjoint unions of
complete graphs where neither the number of components nor the sizes of the
components are restricted. We can still recognize from the $3$-deck that our
graph is in this class.
\begin{theorem}[N\'ydl~\cite{N85}]
Let $G$ be an $n$-vertex graph that is a disjoint union of complete graphs.
If $n<k\ln(k/2)$, then $G$ is determined by its $k$-deck. If
$n=(k+1)2^{k-1}$, then there is such a graph $G$ that is not determined
by its $k$-deck.
\end{theorem}
These bounds are quite far apart, and neither says much about the threshold
number of vertices for $\ell$-reconstructibility of disjoint unions of
complete graphs (or, equivalently, complete multipartite graphs).
The extremal problem is the following.
\begin{problem}
Determine the maximum $n$ such that every $n$-vertex complete multipartite
graph is determined by its $k$-deck.
\end{problem}
\section{Regular graphs}\label{sec:regular}
As noted above, $1$-reconstructibility of disconnected graphs is easy, but
$2$-reconstructibility of all disconnected graphs implies the Reconstruction
Conjecture.
Similarly, $1$-reconstructibility of regular graphs is easy using the
$1$-reconstructibility of the degree list. Motivated by this, at a meeting in
Sanya in 2019 Bojan Mohar asked whether regular graphs are $2$-reconstructible.
Since $1$-regular graphs are determined by their degree lists, they are
determined by their $3$-decks and hence are $(n-3)$-reconstructible. The
results of Spinoza and West~\cite{SW} described in Section~\ref{sec:maxdeg2}
imply that $2$-regular graphs are $\FL{n/2}$-reconstructible. Both
thresholds are sharp.
For $r\ge 3$, $2$-reconstructibility of $r$-regular graphs is not immediate,
even though the degree list is $2$-reconstructible, because we must determine
which of the deficient vertices in a card is adjacent to which of the two
missing vertices. Nevertheless, the question has been answered for $r=3$.
\begin{theorem}[Kostochka--Nahvi--West--Zirlin~\cite{KNWZ2}]\label{3reg2rec}
Every $3$-regular graph is $2$-reconstructible.
\end{theorem}
Although this result takes considerable effort, it is (we hope) just the
beginning of study in this area. It would be interesting both to answer
Mohar's question and to determine the maximum reconstructibility for
$3$-regular graphs or for graphs with maximum degree $3$, extending the
results discussed earlier.
\begin{problem}
For each $r\in{\mathbb N}$ with $r\ge2$, prove that every $r$-regular graph
is $2$-reconstructible.
\end{problem}
\begin{problem}
Show that for each $\ell\geq 1$ there is a threshold $n_\ell$ such that every
$3$-regular graph with at least $n_\ell$ vertices is
$\ell$-reconstructible.
\end{problem}
Although we do not know whether all $r$-regular graphs are $2$-reconstructible,
for those that are not $2$-connected we can say something much stronger.
Note that we are {\it deleting} $r+1$ vertices; the cards have $n-r-1$ vertices.
\begin{theorem}
Every $r$-regular graph $G$ that is not $2$-connected is $(r+1)$-reconstructible.
\end{theorem}
\begin{proof}
If $G$ is disconnected, then every component has at least $r+1$ and hence at
most $n-(r+1)$ vertices. Thus Theorem~\ref{disconn} applies to make it
$(r+1)$-reconstructible.
Now suppose that $G$ has a cut-vertex. A subgraph of an $r$-regular graph is
{\em near $r$-regular} if it has exactly one vertex with degree less than $r$.
In every leaf block of $G$, only the cut-vertex of $G$ has degree less than
$r$. Hence every leaf block is near $r$-regular; furthermore, every near
$r$-regular subgraph of $G$ having no cut-vertex is a leaf block.
Besides the cut-vertex, a leaf block must have at least $r+1$ other vertices;
if only $r$, then $G$ would be $K_{r+1}$. Since $G$ has at least two leaf
blocks, $G$ has at least $2r+3$ vertices. Hence the cards in the
$(n-r-1)$-deck have at least $r+2$ vertices, so by
Manvel's result (Theorem~\ref{tlist}) we can reconstruct the degree list.
A $2$-connected $r$-regular graph cannot have a near $r$-regular subgraph $H$
with more than one vertex. If such $H$ exists, let $x$ be a vertex having
degree $r$ in $H$, and let $y$ be a vertex of $G$ not in $H$. Since $G$ is
$2$-connected, by Menger's Theorem it has two internally disjoint paths from $x$
to $y$. Such paths must leave $H$ at distinct vertices having degree less than
$r$ in $H$, contradicting that $H$ is near $r$-regular.
Hence we have shown that the class of $r$-regular graphs with connectivity $1$
is $(r+1)$-recognizable.
Since every leaf block omits at least the $r+1$ non-cut-vertices of some other
leaf block, every leaf block has at most $n-(r+1)$ vertices.
By Observation~\ref{k-1}, we know all the subgraphs of $G$ having at most
$n-(r+1)$ vertices, with their multiplicities. The near $r$-regular ones
without cut-vertices are the leaf blocks. Hence we know all the leaf blocks,
with their multiplicities.
Let $B$ be a leaf block with fewest vertices, and let $s=\C{V(B)}$.
In $\mathcal{D}_{n-s+1}(G)$ there is a card that has as (leaf) blocks all the leaf
blocks of $G$ other than $B$, and one less leaf block isomorphic to $B$ than
$G$ has. This card $H$ is near $r$-regular. Reconstruct $G$ by attaching $B$
at the vertex of $H$ with degree less than $r$.
\end{proof}
\section{Almost All Graphs} \label{sec:almostall}
Using cards not much larger than those that fail to determine connectedness, we
can almost always reconstruct a graph. Chinn~\cite{Chinn} and
Bollob\'as~\cite{Bollobas} proved that almost all graphs are
$1$-reconstructible. In fact, this holds also for $\ell$-reconstructibility,
as observed earlier by M\"uller~\cite{Muller}. The needed tool is that for
almost all graphs, the induced subgraphs with many vertices are pairwise
nonisomorphic and have no nontrivial automorphisms (precise statement below).
We say that a property {\it holds for almost all graphs} if the fraction of
graphs with vertex set $\{1,\ldots,n\}$ for which the property holds tends to
$1$ as $n$ tends to $\infty$.
For $1$-reconstructibility, Chinn proved the following (in a stronger form):
\begin{theorem}[Chinn~\cite{Chinn}]\label{chinn}
If the subgraphs of a graph $G$ obtained by deleting two vertices are pairwise
nonisomorphic, then $G$ is reconstructible.
\end{theorem}
When the subgraphs satisfy this hypothesis, vertex $u$ is identifiable in $G-w$
because it is the only vertex in $G-w$ whose deletion yields a subgraph
obtainable from $G-u$ by deleting one vertex. From $G-v$ and $G-w$, one can
similarly identify $v$ in $G-w$. Now one can check whether $u$ and $v$ are
adjacent in $G$ by checking whether $u$ and $v$ are adjacent in $G-w$.
However, since we used both $G-u$ and $G-v$ to determine whether $u$ and $v$
are adjacent, we used all the cards.
\begin{theorem}[Bollob\'as~\cite{Bollobas}]\label{bollob}
For almost every graph, any three cards determine $G$.
\end{theorem}
Under the same hypothesis as in Theorem~\ref{chinn}, Bollob\'as gave a more
careful argument to reconstruct all of $G$ from $G-u$ and $G-v$ except for
determining whether $u$ and $v$ are adjacent. For that he consulted a third
card, invoking the uniqueness of the graphs in $\mathcal{D}_{n-3}(G)$ to identify $u$
and $v$ in $G-w$. However, it seems that uniqueness in $\mathcal{D}_{n-2}(G)$ suffices
to identify $u$ and $v$ in $G-w$ as discussed above.
Theorem~\ref{bollob} is stronger than saying that {\it some} three cards
determine $G$, which is the meaning of reconstruction number $3$ (two cards can
never determine whether the two deleted vertices are adjacent). The needed
tool is the next lemma.
\begin{lemma}[M\"uller~\cite{Muller}]\label{lem: almost all}\label{nonisom}
Let $\epsilon$ be a small positive real number. For almost every graph $G$,
the induced subgraphs with at least $k$ vertices have no nontrivial
automorphisms and are pairwise nonisomorphic, where
$k=(1+\epsilon)\frac{|V(G)|}{2}$.
\end{lemma}
Via counting arguments, M\"uller showed that graphs with this property are
reconstructible from smaller cards. Spinoza and West more directly generalized
the combinatorial argument of Bollob\'as, thereby reconstructing the graph from
a small set of cards in the $(k+1)$-deck. However, one step in their
construction does not work when $\ell=1$.
\begin{theorem}[Spinoza--West~\cite{SW}]\label{lrecon}
For $\ell>1$, if the subgraphs of $G$ obtained by deleting $\ell+1$ vertices
have no nontrivial automorphisms and are pairwise nonisomorphic, then $G$ is
$\ell$-reconstructible, using just $\binom{\ell+2}{2}$ cards from the
$(|V(G)|-\ell)$-deck.
\end{theorem}
This not only shows $\ell$-reconstructibility; it also places a bound on the
natural generalization of reconstruction number to the $(n-\ell)$-deck.
Furthermore, asymptotically at least $\CH{n}{\ell+1}(n-\ell-1)^{\CH{\ell+1}2}$
sets of $\CH{\ell+2}2$ cards from $\mathcal{D}_{n-\ell}(G)$ determine $G$.
The cards are chosen by specifying a fixed set $S$ of $\ell+1$ vertices in $G$
and taking all cards that delete $\ell$ of them, plus for each pair $u,v\in S$
one card obtained by deleting $S-\{u,v\}$ and one vertex outside $S$.
\section{Another Model of Reconstruction}\label{sec:relation}
As mentioned in the introduction, the term ``$k$-reconstructible'' is also used
in another model of reconstruction with different definitions. Here we explain
the difference in order to reduce confusion.
We use ``digraph'' to mean a general binary relation (no repeated edges).
Two digraphs $D$ and $D'$ on an $n$-element vertex set $V$ are
{\it $k$-isomorphic} if for every $k$-element subset $X\subseteq V$, the
subdigraphs of $D$ and $D'$ induced by $X$ are isomorphic. They are
{\it $(\le k)$-isomorphic} if they are $k'$-isomorphic for all $k'$ with
$1\le k'\le k$. They are {\it $(-k)$-isomorphic} if they are
$(n-k)$-isomorphic. A digraph $D$ is {\it $\alpha$-reconstructible}, where
$\alpha\in\{k,\le k,-k\}$, if every digraph $\alpha$-isomorphic to $D$ is
isomorphic to $D$.
These notions were introduced by Fra\"iss\'e~\cite{Fr}, who conjectured
that for sufficiently large $k$ every digraph is $(\le k)$-reconstructible
(and analogously for $m$-ary relations, for each $m$).
The difference between Fra\"iss\'e's model and that of Kelly and Ulam is that
in Fra\"iss\'e's problem we are told the identities of the missing vertices,
but in the problem of Kelly and Ulam we are given only the multiset of
isomorphism types. The notions coincide for the original conjecture: a graph
(that is, a symmetric digraph) is reconstructible (in the Kelly--Ulam sense) if
and only if it is $(-1)$-reconstructible (in the Fra\"iss\'e sense).
Stockmeyer~\cite{St} showed that general digraphs (in fact, orientations of
complete graphs) are not $(-1)$-reconstructible.
The difference is clear when $k=2$. Only graphs with at most one edge (and
their complements) are reconstructible from their $2$-decks, but every
symmetric digraph is $2$-reconstructible, since we are told which pairs are
adjacent. This does not hold for general digraphs; any two orientations of a
complete graph are $2$-isomorphic.
Fra\"iss\'e's conjecture was proved for digraphs (that is, binary relations) by
Lopez~\cite{Lo1,Lo2}, who proved that every digraph is $(\le6)$-reconstructible
(this is sharp). The theorem was proved independently by Reid and
Thomassen~\cite{RT}, and it also follows from the later characterization of the
non-$(\le k)$-reconstructible digraphs by Boudabbous and Lopez~\cite{BL}.
A history of the topic appears in \cite{BHZ}.
Analogously to Observation~\ref{k-1}, Pouzet showed that if two $n$-vertex
digraphs are $p$-isomorphic, then they are also $q$-isomorphic whenever
$1\le q\le \min\{p,n-p\}$. With Lopez's Theorem, this implies that every
digraph with at least $11$ vertices is $6$-reconstructible, and every
digraph with at least $12$ vertices is $(-6)$-reconstructible.
\section{Introduction}
\subsection*{Background and main results}
An \emph{arrow configuration} on a $4$-regular graph is an assignment of an arrow to every edge such that exactly two arrows point towards each vertex.
The \emph{six-vertex model} (or more precisely the \emph{F model}) with parameter $c>0$ on a finite 4-regular graph embedded in a surface is
the probability measure on all arrow configurations that is proportional to $c^{N}$, where $N$ is the number of vertices of type $3a$ or $3b$ in the configuration (see Fig.~\ref{fig:6v}).
These are the vertices for which the arrows alternate between incoming and outgoing as one goes around the vertex.
The three-dimensional prototype of the model with $c=1$ (the uniform measure on arrow configurations) was introduced by Pauling~\cite{Pau} in 1935 to study the residual entropy of ice arising from the phenomenon of hydrogen bonding.
The square lattice version discussed here, called the F model, first appeared in the work of
Rys on antiferroelectricity~\cite{Rys}. The exact value of the free energy per site on the square lattice
was given by Lieb~\cite{Lieb,Lieb1} using the transfer matrix method. Since then the six-vertex model has been a prominent example of an integrable lattice model
of equilibrium statistical mechanics.
For a detailed account of the model and its history we refer the reader to~\cite{LiebWu,BaxterBook,Reshetikhin}.
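As a quick combinatorial check (ours, not from the original text): the ice rule "exactly two arrows point towards each vertex" allows $\binom{4}{2}=6$ local configurations, of which exactly two have incoming and outgoing arrows alternating around the vertex.

```python
from itertools import product

# edges around a vertex in cyclic order (N, E, S, W); True = arrow points in
configs = [c for c in product([True, False], repeat=4) if sum(c) == 2]
assert len(configs) == 6   # the six vertex types of the ice rule

# the weighted types: incoming and outgoing arrows alternate around the vertex
alternating = [c for c in configs
               if all(c[i] != c[(i + 1) % 4] for i in range(4))]
assert len(alternating) == 2
```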
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{sixvertex3.pdf}
\caption{Six possible arrow configurations around a vertex of~$\mathbb T_{n}$. To each such configuration there correspond two spin configurations which differ by a global sign change.
The configuration with the upper left spin fixed to $-i$ is depicted here.
The solid (resp.\ dotted) black line represents an edge of $\mathbb T^{\bullet}_{n}$ (resp.\ $\mathbb T^{\circ}_{n}$).
}
\label{fig:6v}
\end{center}
\end{figure}
In this article we consider the six-vertex model on a toroidal piece of the square lattice
\[
\mathbb T_{n}=(\mathbb Z/2nj_1 \mathbb Z)\times (\mathbb Z/2nj_2 \mathbb Z)
\]
of size $2nj_1 \times 2nj_2$, where $j_1,j_2 \in \mathbb N$ are fixed and $n$ increases to infinity. We denote the corresponding probability measure by $\mu_n=\mu_n^c$.
In the first main result we establish convergence to an infinite-volume measure for the model with $c\in[\sqrt{3},2]$, and derive a spatial mixing property of the limit.
\begin{theorem}\label{thm:existence}
Let $c\in[\sqrt{3},2]$.
\begin{itemize}
\item[$(i)$] There exists a translation invariant probability measure $\mu=\mu^c$ (independent of $j_1$ and $j_2$) on arrow configurations on $\mathbb Z^2$,
such that
\[
\mu_{n} \to \mu \text{ weakly} \quad \textnormal{ as } n \to \infty.
\]
\item[$(ii)$] There exists $\kappa=\kappa(c)>0$ such that for any two local events $A$ and $B$ depending on the state (orientation) of edges in finite boxes
$\Lambda, \Lambda' \subset E(\mathbb Z^2)$ respectively, we have
\begin{align}\label{eq:mixing}
|\mu(A\cap B) -\mu(A)\mu(B)| \leq K_{|\Lambda|,|\Lambda'|}d(\Lambda,\Lambda')^{-\kappa},
\end{align}
where $d(\Lambda,\Lambda')$ is the graph distance between $\Lambda$ and~$\Lambda'$, and where $K_{|\Lambda|,|\Lambda'|}$ depends only on the size of the boxes.
\item[$(iii)$] In particular, $\mu$ is ergodic with respect to any nontrivial translation.
\end{itemize}
\end{theorem}
An observable of interest in the six-vertex model is its \emph{height function} $h$. For now we consider it directly in the infinite volume limit as an integer-valued function defined on the faces of $\mathbb Z^2$.
We first choose a checkerboard black-and-white coloring of the faces of $\mathbb Z^2$.
For reasons that will become clear later, we set $h$ to be $\pm 1$ with probability $1/2$ on a chosen white face $u_0$ next to the origin of $\mathbb Z^2$. For any other face $u$
and a dual oriented path $\gamma$ connecting $u_0$ with $u$,
we denote by $h^{\gamma}_{\leftarrow}(u)$ and $h^{\gamma}_{\rightarrow}(u)$ the numbers of arrows in the
underlying six-vertex configuration that cross $\gamma$ from right to left, and from left to right respectively.
The height at $u$ is then defined by
\begin{align} \label{eq:hf}
h(u)-h(u_0) = h^{\gamma}_{\leftarrow}(u) - h^{\gamma}_{\rightarrow}(u).
\end{align}
That the right-hand side is independent of $\gamma$ follows from the fact that $\mathbb Z^2$ is simply connected and from the property that six-vertex configurations form conservative flows.
It is predicted that the model should undergo a phase transition at $c=2$ in the sense that the variance of the height-function should be uniformly bounded
(over all faces of $\mathbb Z^2$) for $c>2$ (the \emph{localized} regime), and should be unbounded for $c\leq 2$ (the \emph{delocalized} regime).
So far this has been rigorously confirmed
for $c>2$~\cite{Disc,GlaPel}, $c=2$~\cite{DCST,GlaPel}, $c=1$~\cite{She,CPST,LogVar}, the \emph{free fermion} point $c=\sqrt{2}$~\cite{Ken01} corresponding to the dimer model,
and a small neighborhood of $c=\sqrt{2}$~\cite{GMT}.
Moreover, logarithmic (in the distance to the origin) divergence of the variance was established in~\cite{DCST,GlaPel,LogVar}.
We note that a closely related result was recently proved also in the model of uniform Lipschitz functions on the triangular lattice~\cite{GlaMan}.
Finally, a much stronger property was obtained in~\cite{Ken01,GMT}, namely that the
fluctuations of the height function in the scaling limit are described by the Gaussian free field.
The following result adds to this list by identifying delocalization in the weak sense for all $c\in[\sqrt{2+\sqrt{2}},2]$.
\begin{theorem}[Delocalization of the height function] \label{thm:delocalization}
Let $c\in[\sqrt{2+\sqrt{2}},2]$. Then under the infinite volume measure~$\mu$ we have
\begin{align} \label{eq:delocalization}
\mathbf{Var}_{\mu} [h(u)] \to \infty \quad \text{ as } \quad |u| \to \infty,
\end{align}
where $u$ is a face of $\mathbb Z^2$.
\end{theorem}
We note that Theorem~\ref{thm:existence} and a stronger version of Theorem~\ref{thm:delocalization} yielding logarithmic
divergence of the variance have been independently proved for all $c\in[1,2]$ in the work
of Duminil-Copin et al.~\cite{DKMO}. However, the methods that we use are different from those of~\cite{DKMO} and arguably more elementary.
We also believe that the ideas presented in this paper will be useful in further analysis of
the six-vertex model, in particular in questions regarding its scaling limit.
\subsection*{Outline of the approach}
Before explaining the arguments in detail, we give a brief overview of our approach.
We color the faces of $\mathbb T_{n}$ and $\mathbb Z^2$ in a checkerboard manner.
Let $\mathbb Z^2_{\bullet}$ (resp.\ $\mathbb Z^2_{\circ}$) be the square lattice of side length $\sqrt2 $ rotated by $\frac{\pi}4$ whose vertices are the black (resp.\ white) faces of $\mathbb Z^2$, and
where two vertices are adjacent if the corresponding faces of $\mathbb Z^2$ share a vertex (see Fig.~\ref{fig:BKW1}).
Let $\mathbb T_{n}^{\bullet}$ and $\mathbb T_{n}^{\circ}$ be defined analogously for $\mathbb T_{n}$.
The first main ingredient of the proofs of both main theorems is the Baxter--Kelland--Wu correspondence~\cite{BKW} between the six-vertex model on $\mathbb T_{n}$ with parameter $c\in[\sqrt{3},2]$
and the critical random cluster model on $\mathbb T_{n}^{\bullet}$
with cluster parameter
\begin{align} \label{eq:qc}
q=(c^2-2)^2\in [1,4].
\end{align}
For $c\in[\sqrt{3},2)$, unlike for $c\in [2,\infty)$,
this representation is not a stochastic coupling, but rather a complex-measure coupling between the six-vertex model and the (slightly modified) critical
random cluster model. For this reason it has not been clear how to transfer relevant probabilistic information between the two sides of this coupling.
The main novelty of our approach is an extension of this correspondence to identities between correlation functions of certain observables.
These observables on the side of the six-vertex model are simply spins assigned to the faces of the lattice and given by
\begin{align} \label{eq:sigmadef}
\sigma(u)=i^{h(u)},
\end{align}
where $i$ is the imaginary unit. Note that from a spin configuration one recovers the six-vertex configuration in a unique (and local) way, and hence the spins
carry all the probabilistic information of the six-vertex model.
Recall that our convention is to fix the height function to be $\pm1$ with equal probability on a white face $u_0$ adjacent to the origin. This makes the distribution of spins invariant under
the sign change $\sigma \mapsto -\sigma$.
Since the parity of the height function always changes between adjacent faces,
the spins are real on the black faces and imaginary on the white faces of $\mathbb Z^2$.
Also note that $\sigma$ is always well-defined locally.
Globally however, the height function may have a non-trivial period when one goes around the torus.
If the period is nontrivial mod $4$, the spin picks up a multiplicative term of $-1$ when going around the torus. This is a technical inconvenience that we discuss in more detail in
the following sections.
To illustrate the type of identities between correlation functions obtained in this paper we briefly discuss here the simplest case of the two-point function.
Let~$\phi$ be the critical random cluster measure on the rotated lattice $\mathbb Z^2_{\bullet}$ (see Fig.~\ref{fig:large}) with cluster parameter $q\in[1,4]$ as in \eqref{eq:qc}. The fact that this measure is unique, which was
established by Duminil-Copin, Sidoravicius and Tassion in \cite{DCST}, is crucial and constitutes the second main ingredient of our proof of Theorem~\ref{thm:delocalization}.
For two black faces $u,u' \in \mathbb {Z}^2_{\bullet}$, we establish that
\begin{align} \label{eq:example}
\mathbf E_{\mu}[\sigma(u)\sigma(u')] = \mathbf E_{\phi}\big [\rho^{N(u,u')}(-\rho)^{N(u',u)}\big],
\end{align}
where
\begin{align} \label{eq:lambda}
\rho = \tan \lambda, \qquad \text{ with } \qquad \sqrt q = 2 \cos \lambda,
\end{align}
and where $N(u,u')$ is the number of loops on $\mathbb Z^2$ that disconnect $u$ from $u'$ in the loop representation of the random cluster model.
We also establish identities analogous to \eqref{eq:example} for many-point correlation functions, already at the finite-volume level of
$\mathbb T_{n}$; these are the main tool for obtaining the existence of the infinite-volume measure, and hence for proving Theorem~\ref{thm:existence}.
To show delocalization of the height function as stated in Theorem~\ref{thm:delocalization}, we use a recent result of the author~\cite{Lis19}. We first establish decorrelation of spins saying that
\begin{align} \label{eq:decorrelation}
\mathbf E_{\mu}[\sigma(u)\sigma(u')] \to 0, \qquad \text{ as } \qquad |u-u'|\to \infty.
\end{align}
This in turn implies (up to technical details that are taken care of in the present article) that there is no percolation in the associated percolation model studied in~\cite{Lis19,GlaPel}. Finally, non-percolation was shown in~\cite{Lis19} to imply delocalization.
It is clear that identity \eqref{eq:example} is useful for proving such decorrelation of spins \eqref{eq:decorrelation}. Indeed, for $c\in (\sqrt{2+\sqrt 2},2]$, we have that $0\leq \rho<1$.
On the other hand, by the results of \cite{DCST} we know that $N(u,u')\to \infty$ as $|u-u'|\to \infty$ $\phi$-almost surely. Hence, by~\eqref{eq:example} the spins decorrelate and the height function delocalizes.
The peripheral case $c=\sqrt{2+\sqrt 2}$ corresponding to $q=2$ and $\rho =1$ requires a slightly different argument.
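For concreteness, these values can be read off directly from \eqref{eq:qc} and \eqref{eq:lambda}: for $c\in[\sqrt 3,2]$ we have $\sqrt q=c^2-2\in[1,2]$, and hence
\[
c=\sqrt{2+\sqrt 2} \quad\Longrightarrow\quad \sqrt q=\sqrt 2 \quad\Longrightarrow\quad \lambda=\tfrac{\pi}4 \quad\Longrightarrow\quad \rho=\tan\tfrac{\pi}4=1,
\]
while for $c\in(\sqrt{2+\sqrt 2},2]$ we get $\cos\lambda\in(\tfrac{\sqrt 2}2,1]$, so that $\lambda\in[0,\tfrac{\pi}4)$ and $\rho=\tan\lambda\in[0,1)$.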
To finish this discussion we note that Theorem~1.1 implies decorrelation of local increments of the height function for $c\in[\sqrt 3,2]$.
To prove delocalization however, we need additional information on the global
increment between two far-away points $u,u' \in \mathbb {Z}^2_{\bullet}$.
By the results of \cite{Lis19}, the sufficient information turns out to be the parity of $({h(u)-h(u')})/2$. This is provided by~\eqref{eq:decorrelation} since, for a black face $u'$, $h(u')$ is even, and hence
\[
\sigma(u)\sigma(u')= (-1)^{({h(u)-h(u')})/2}.
\]
A natural question that remains is whether our approach to proving delocalization extends to the case $\rho =\tan \lambda>1$, or equivalently $c\in [\sqrt 3, \sqrt{2+\sqrt 2})$, possibly by studying an observable different from the spin correlations in \eqref{eq:decorrelation}.
In the rest of the paper we provide the precise statements and the remaining necessary details for the proofs of our results.
\subsection*{Acknowledgments} I would like to thank Nathana\"{e}l Berestycki for stimulating discussions and his insight into Lemma~\ref{lem:loopcorrelationslimit}, and Alexander Glazman for
many valuable discussions.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{BKW.pdf}
\caption{Eight possible ways the loops connect at a vertex.
The yellow and red edges are the open edges in the percolation configuration $\xi$ and $\xi^{\dagger}$ respectively. }
\label{fig:BKW}
\end{center}
\end{figure}
\section{The Baxter--Kelland--Wu representation of spin correlations}
The Baxter--Kelland--Wu (BKW) correspondence is the starting point of our argument.
We recall it here while simultaneously establishing closely related identities for correlations of the $\sigma$-spins.
The first step is to represent the arrow configurations on $\mathbb T_{n}$
as \emph{fully packed} configurations of directed and noncrossing loops $\vec { L}$, also on $\mathbb T_{n}$. The term fully packed means that each edge of $\mathbb T_{n}$ is
traversed exactly once by a loop from~$\vec{ L} $. The loops in $\vec L$ should follow the arrows of the arrow configuration~$\alpha$, and the only choice remaining is to
decide how the directed edges connect at each vertex to form noncrossing loops.
For configurations of type 1 and 2, there is no choice, whereas for configurations of type 3 we can choose two different types of connections,
see Fig.~\ref{fig:BKW}. On the other hand, to reverse the map in order to obtain $\alpha$ from $\vec L$, it is enough to keep the information about the orientation of each edge and otherwise forget how the loops connect at the vertices.
This gives a many-to-one map from $\vec{\mathcal L}_{n}$, defined to be the set of all fully packed oriented loop configurations, to $\mathcal O_{n}$ -- the set of all arrow configurations on $\mathbb T_{n}$.
The crucial idea now is to parametrize the six-vertex weights in terms of the types of turns the loops make at each vertex. To this end, we define the weight of an oriented loop configuration by
\begin{align} \label{eq:loopweight}
w(\vec L) = e^{\frac{i\lambda}4 (\textnormal{left}(\vec L)-\textnormal{right}(\vec L))},
\end{align}
where $\lambda$ is as in \eqref{eq:lambda}, and where $\textnormal{left}(\vec L)$ and $\textnormal{right}(\vec L)$ are the total numbers of left and right turns of all the loops in the configuration.
Note that at each vertex of type 1 or 2, the loops make turns in opposite directions and hence the joint contribution of these two turns to the weight is $1$.
On the other hand, for vertices of type 3, we either have two turns left or two turns right which yields a total weight $2\cos \tfrac{\lambda}2=c$.
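Indeed, summing over the two possible connections at a type 3 vertex gives $e^{i\lambda/2}+e^{-i\lambda/2}=2\cos\tfrac{\lambda}2$, and squaring shows that this equals $c$: by \eqref{eq:lambda} and \eqref{eq:qc},
\[
\big(2\cos\tfrac{\lambda}2\big)^2=2+2\cos\lambda=2+\sqrt q=2+(c^2-2)=c^2,
\]
with both sides nonnegative since $\lambda\in[0,\tfrac{\pi}3]$ for $q\in[1,4]$.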
This exactly means that after projecting the renormalized complex measure on
$\vec{\mathcal L}_{n}$ induced from the weight~\eqref{eq:loopweight} onto arrow configurations~$\alpha$ (i.e., summing over all oriented loop configurations corresponding to $\alpha$) we recover the six-vertex probability measure $\mu_{n}$.
The next step of the correspondence is to go from an oriented loop configuration $\vec L$ to an unoriented one by simply forgetting the orientations of the loops.
To this end, note that after reorganizing the factors in \eqref{eq:loopweight}
according to which turn is made by which loop, we obtain that
\begin{align} \label{eq:loopweight1}
w(\vec L) = \prod_{\vec \ell\in \vec L} e^{\frac{i\lambda}4 (\textnormal{left}(\vec \ell)-\textnormal{right}(\vec \ell))}= \prod_{\vec \ell\in \vec L} e^{{i\lambda}\textnormal{w}(\vec \ell)},
\end{align}
where $\textnormal{left}(\vec \ell)$ and $\textnormal{right}(\vec \ell)$ are the total numbers of left and right turns of a single oriented loop $\vec\ell$,
and where $\textnormal{w}(\vec \ell)$ is the total \emph{winding number} of the loop.
The important observation here is that if $\vec \ell$ is contractible on the underlying torus, then $\textnormal{w}(\vec \ell)=\pm 1$ depending on the counterclockwise or clockwise orientation of the loop,
and $\textnormal{w}(\vec \ell)=0$
if $\vec \ell$ is noncontractible. By an unoriented loop configuration $L$ we mean a fully packed configuration of noncrossing loops obtained from some $\vec L\in \vec{\mathcal L}_{n}$
by erasing all arrows from the edges.
From \eqref{eq:loopweight1} we can conclude that the weights $w(\vec L)$ induce a probability measure $\phi_n$ on the set of fully-packed unoriented loop configurations $\mathcal L_{n}$ given by
\begin{align} \label{eq:nu}
\phi_n( L) = \frac{1}{Z_{n}} \prod_{ \ell\in L} ( e^{{i\lambda}\textnormal{w}(\vec \ell)}+ e^{-{i\lambda}\textnormal{w}(\vec \ell)}) = \frac{1}{Z_{n}}\sqrt{q}^{|L|} \big(\tfrac{2}{\sqrt q}\big)^{|L_{\textnormal{nctr}}|},
\end{align}
where \[
Z_n = \sum_{\alpha \in \mathcal O_n} c^{N(\alpha)}
\]
is the partition function of the six-vertex model, and $L_{\textnormal{nctr}}$ is the set of noncontractible loops in $L$.
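To verify the second equality in \eqref{eq:nu}, note that each contractible loop contributes $e^{i\lambda}+e^{-i\lambda}=2\cos\lambda=\sqrt q$ to the product, while each noncontractible loop contributes $e^{0}+e^{0}=2$. Hence
\[
\prod_{\ell\in L}\big( e^{{i\lambda}\textnormal{w}(\vec \ell)}+ e^{-{i\lambda}\textnormal{w}(\vec \ell)}\big)=\sqrt q^{\,|L|-|L_{\textnormal{nctr}}|}\,2^{|L_{\textnormal{nctr}}|}=\sqrt{q}^{|L|} \big(\tfrac{2}{\sqrt q}\big)^{|L_{\textnormal{nctr}}|}.
\]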
\begin{figure}
\begin{center}
\includegraphics[scale=0.62]{largemy1.pdf} \hspace{0.1cm} \includegraphics[scale=0.62]{largeloops.pdf}
\caption{An arrow configuration on a $6\times 4$ torus and a corresponding fully packed configuration of directed loops.}
\label{fig:BKW1}
\end{center}
\end{figure}
Before discussing the connection with the random cluster model, let us derive the necessary formulas for the spin correlations as expectations of certain loop statistics under $\phi_{n}$.
As already mentioned, one needs to take slightly more care when defining the height function $h$, and hence the spins~$\sigma$,
in finite volume, as one may pick up a multiplicative factor of $-1$ when going around the torus. To circumvent this obstacle, we identify the vertices of $\mathbb T_{n}$ with those of the box
\begin{align} \label{eq:box}
\{-nj_1+1,\ldots,nj_1\}\times \{-nj_2+1,\ldots,nj_2\} \subset \mathbb Z^2.
\end{align}
As before the height function at $u_0$ is chosen to be $\pm 1$ with equal probability. For every other face, we use formula \eqref{eq:hf} with the
restriction that the path $\gamma$ cannot take a step from a vertex with the $i$-th coordinate equal to $nj_i$ to a vertex with the same coordinate equal to $-nj_i+1$ and vice versa, for $i=1,2$.
Let $u_1,\ldots,u_{p}\in \mathbb T^{\bullet}_n$ and $v_1,\ldots,v_{r} \in \mathbb T^{\circ}_n$ be
black and white faces of $\mathbb T_{n}$ respectively.
We are interested in the correlation function
\[
\mathbf E_{\mu_{n}} \Big[\prod_{i=1}^{p}\sigma(u_i) \prod_{j=1}^{r}\sigma(v_j) \Big].
\]
One can see that if either $p$ or $r$ is odd, then this expectation is zero by symmetry. Indeed, first note that after fixing spins on one sublattice and reversing all arrows,
the spins on the other sublattice change sign.
Furthermore the six-vertex model is invariant under arrow reversal, and we also choose the distribution of $\sigma(u_0)$ to be symmetric.
Hence, we can assume that $p=2k$ and $r=2l$.
We choose half of the faces $u_1,\ldots,u_{2k},v_1,\ldots,v_{2l}$ and declare them \emph{sources}, and we call the remaining half \emph{sinks}.
We now fix $k+l$ directed paths in the dual of $\mathbb T_{n}$ that connect pairwise the sources to the sinks, and
define $\Gamma$ to be the collection (sometimes called a \emph{zipper}) of directed edges of $\mathbb T_{n}$ which cross these paths from right to left, see Fig.~\ref{fig:zipper}.
It is possible that one edge crosses multiple paths. We also define $s^{+}_i$ and $s^-_i$ to be the source and sink of path number $i$ respectively for $1\leq i\leq k+l$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{zipper.pdf}
\caption{The directed path (red) with source $s^+$ and sink $s^-$. The green edges represent (up to arrow reversal) the zipper $\Gamma$ used in the computation of $\mathbf{ E}_{\mu_{n} }[\sigma(s^+)\sigma(s^-)]$
(the actual edges in the zipper cross the path from right to left).
For this arrow configuration $\alpha$, we have $\epsilon(\alpha)=\sigma(s^+)\sigma(s^-)=-1$.}
\label{fig:zipper}
\end{center}
\end{figure}
Having fixed~$\Gamma$, for any other collection of directed edges $H$, we define
\begin{align} \label{eq:epsilon}
\epsilon(H)=i^{|\Gamma\cap H|}(-i)^{|\Gamma\cap (-H)|} \in \{ 1,i,-1,-i \},
\end{align}
where $-H$ is the set of all reversed edges from $H$.
Note that
\begin{align} \label{eq:reverse}
\epsilon(H) \epsilon(-H)=1.
\end{align}
Below, with a slight abuse of notation, we will identify six-vertex configurations~$\alpha$,
oriented loop configurations $\vec L$, and single oriented loops $\vec \ell$ with the naturally associated collections of directed edges.
Recall that by our convention, $\sigma(u_i)^2=1$ and $\sigma(v_j)^2=-1$, or in other words
\[
i^{h(u_i)}=i^{-h(u_i)} \quad \textnormal{ and } \quad i^{h(v_j)}=-i^{-h(v_j)}.
\]
Writing $s$ for the number of white sinks, we have
\begingroup
\allowdisplaybreaks
\begin{align}
(-1)^s \mathbf E_{\mu_{n}} \Big[\prod_{i=1}^{2k}\sigma(u_i) \prod_{j=1}^{2l}\sigma(v_j) \Big]&= (-1)^s \mathbf E_{\mu_{n}} \Big[i^{{\sum_{i=1}^{2k}h(u_i)+ \sum_{j=1}^{2l}h(v_j) }}\Big] \nonumber\\
&= \mathbf E_{\mu_{n}} \Big[i^{\sum_{i=1}^{k+l}(h(s^+_{i})-h(s^-_i)) }\Big]\nonumber \\
&= \mathbf E_{\mu_{n}} [\epsilon (\alpha)]\nonumber \\
&= \frac 1{Z_{n}} \sum_{\vec L \in \vec{\mathcal L}_{n}} \epsilon(\vec L) w(\vec L)\nonumber\\
&= \frac 1{Z_{n}} \sum_{\vec L \in \vec{\mathcal L}_{n}} \prod_{\vec\ell \in \vec L} e^{{i\lambda}\textnormal{w}(\vec \ell)} \epsilon (\vec\ell)\nonumber\\
&= \frac 1{Z_{n}} \sum_{ L \in \mathcal L_{n}} \big( \prod_{\ell \in L} \rho(\ell)\big) \sqrt{q}^{|L|} \big(\tfrac{2}{\sqrt q}\big)^{|L_{\textnormal{nctr}}|} \nonumber,
\end{align}
where
\begin{align} \label{def:rho}
\rho(\ell)=\frac{e^{{i\lambda}\textnormal{w}(\vec \ell)} \epsilon (\vec\ell) +e^{{i\lambda}\textnormal{w}(-\vec \ell)} \epsilon (-\vec\ell)}
{e^{{i\lambda}\textnormal{w}(\vec \ell)} +e^{{i\lambda}\textnormal{w}(-\vec \ell)}},
\end{align}
\endgroup
with $\vec \ell$ and $-\vec \ell$ being the two oriented versions of $\ell$.
To get the third equality we represented each of the increments of the height function in the second line as a sum of one-step increments along the fixed paths,
and then used the definitions of $h$~\eqref{eq:hf} and $\epsilon$~\eqref{eq:epsilon}. To obtain the fourth identity we followed the same reasoning as in the
standard BKW representation. This can be done since the observable $\epsilon$ depends only on the orientations of the arrows, and this information is preserved when going from $\alpha$ to $\vec L$.
The last equality follows by forgetting the orientations of the loops as we did in \eqref{eq:nu}.
As a consequence we get the following crucial identity for correlation functions.
\begin{lemma} \label{lem:loopcorellations} Let $u_1,\ldots, u_{2k} \in \mathbb T^{\bullet}_n$ and $v_1,\ldots, v_{2l} \in \mathbb T^{\circ}_n$
be black and white faces of $\mathbb T_n$ respectively, let $\rho$ be defined as in~\eqref{def:rho}, and let $s$ be the number of white sinks. Then
\[
\mathbf E_{\mu_{n}} \Big[\prod_{i=1}^{2k}\sigma(u_i) \prod_{j=1}^{2l}\sigma(v_j) \Big] = (-1)^{s}\mathbf E_{\phi_{n}} \Big[\prod_{\ell\in L} \rho(\ell)\Big].
\]
\end{lemma}
We note for future reference that the same formula can be obtained when the numbers of white and black faces are both odd. As discussed before, both correlations are then equal to zero.
Also observe that on the side of the six-vertex model the correlations involve observables that are local functions of the spins, whereas the loop observables
depend on the global topology of all loops. Hence, slightly more care will be required when talking about convergence of the correlations under $\phi_{n}$ as $n\to \infty$, which is the subject of the next section.
The following interpretation of $\rho(\ell)$ for contractible loops will be useful in this analysis.
For a contractible loop $\ell$, let $\delta(\ell)$ be the number of sources minus the number of sinks enclosed by the loop. Then
\begin{align} \label{eq:rho}
\rho(\ell) = \begin{cases}
1& \textnormal{ if $\delta(\ell) =0 \textnormal{ mod } 4$}, \\
-\tan \lambda &\textnormal{ if $\delta(\ell) =1 \textnormal{ mod } 4$}, \\
-1 &\textnormal{ if $\delta(\ell) =2 \textnormal{ mod } 4$}, \\
\tan \lambda &\textnormal{ if $\delta(\ell) =3 \textnormal{ mod } 4$}.
\end{cases}
\end{align}
This can be obtained from~\eqref{def:rho} by computing the total flux of all the fixed directed paths through~$\ell$ (the number of times the paths cross $\ell$ from the inside to the outside minus
the number of crossings from the outside to the inside). For topological reasons, this number is independent of the particular choice of paths, and is equal to~$\delta(\ell)$.
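As a sanity check, consider the case $\delta(\ell)=1 \textnormal{ mod } 4$: with the convention that the counterclockwise orientation $\vec\ell$ satisfies $\textnormal{w}(\vec\ell)=1$ and $\epsilon(\vec\ell)=i^{\delta(\ell)}=i$, formula \eqref{def:rho} gives
\[
\rho(\ell)=\frac{e^{i\lambda}\,i+e^{-i\lambda}\,(-i)}{e^{i\lambda}+e^{-i\lambda}}=\frac{i\,(e^{i\lambda}-e^{-i\lambda})}{2\cos\lambda}=\frac{-2\sin\lambda}{2\cos\lambda}=-\tan\lambda,
\]
and the remaining three cases of \eqref{eq:rho} follow by the same computation.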
In particular if a contractible loop $\ell$ encloses all the sources and sinks, or none of them, then $\delta(\ell)=0$ and $\rho(\ell)=1$.
Moreover, if $\ell$ is noncontractible, then $\rho( \ell)=\epsilon(\vec \ell)$ if $\epsilon(\vec \ell)$ is real, and $\rho( \ell)=0$ otherwise. Here, $\vec \ell$ is any of the two orientations of $\ell$.
This means that for any $\Lambda \subset \mathbb T_n$,
\begin{align} \label{eq:quasilocal}
\prod_{\ell\in L} \rho(\ell) \mathbf 1_{T_\Lambda}(L)=\prod_{\ell \in L \cap \Lambda} \rho(\ell)\mathbf 1_{T_\Lambda}(L),
\end{align} where, with a slight abuse of notation, $L\cap \Lambda$ are the loops from $L$ that are contained in $\Lambda$, and where $T_\Lambda$ is the set of loop configurations $L$ such that $L\cap \Lambda$ contains a contractible loop surrounding all the sources and sinks.
Indeed, if $L\in T_\Lambda$, then for topological reasons,
\begin{itemize}
\item any other contractible
loop $\ell \in L$ that is not contained in $\Lambda$, either surrounds no sinks and no sources, or surrounds all of them, and hence by \eqref{eq:rho}, $\rho(\ell)=1$.
\item any noncontractible loop $\ell$ satisfies $\rho( \ell)=\epsilon(\vec \ell)=1$.
\end{itemize}
This is a form of \emph{locality} of the loop observables that we will later use to
conclude convergence of their expectations in the infinite volume limit.
To finally make a connection with the random cluster model we follow Baxter, Kelland and Wu, and interpret the unoriented loops
as interfaces winding between clusters of open edges in a bond percolation configuration and its dual configuration
\[
\xi\in \Omega^{\bullet}_n=\{ 0,1\}^{E(\mathbb T^{\bullet}_{n})} \qquad \text{and} \qquad \xi^{\dagger}\in \Omega^{\circ}_n= \{ 0,1\}^{E(\mathbb T^{\circ}_{n})}
\]
respectively (see Fig.~\ref{fig:BKW} and Fig.~\ref{fig:large}). This yields a bijection between $\mathcal L_{n}$ and $\Omega^\bullet_n$,
and we will write $L(\xi)$ for the unoriented loop configuration corresponding to $\xi$ under this map.
It turns out that the distribution of $\xi$ defined by formula~\eqref{eq:nu} is very closely related to the one of the \emph{critical random cluster model} which on the torus is given, up to normalizing constants, by
\begin{align*}
\phi_{n}^{\textnormal{rc}}(\xi) \propto q^{k(\xi)} \big(\tfrac{p_c}{1-p_c} \big)^{|\xi|} = q^{k(\xi)} \sqrt{q}^{|\xi|},
\end{align*}
where $p_c=\sqrt{q}/(1+\sqrt{q})$ is the critical parameter~\cite{BefDum}, and $k(\xi)$ is the number of connected components of $\xi$ (including isolated vertices) thought of as a subgraph of $\mathbb T^{\bullet}_{n}$.
Indeed, using Euler's formula for graphs drawn on the torus (see e.g.\ Lemma 3.9 in~\cite{Disc}), one can rewrite this as
\begin{align} \label{eq:nurc}
\phi_{n}^{\textnormal{rc}}(\xi) \propto \sqrt{q}^{|L(\xi)|}q^{s(\xi)} ,
\end{align}
where
\[s(\xi)=
\begin{cases}
1 & \text{if $\xi$ is a net}, \\
0 & \text{otherwise}.
\end{cases}
\]
Here, a \emph{net} is a subgraph of $\mathbb T_{n}^{\bullet}$ that contains two noncontractible cycles of different homotopy classes.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{largeunoriented.pdf}
\caption{
An unoriented loop configuration on a $6\times 4$ torus and the corresponding bond percolation configurations $\xi$ (yellow) and $\xi^{\dagger}$ (red).
The solid (resp.\ dotted) black lines represent the edges of $\mathbb T_{n}^{\bullet}$ (resp.\ $\mathbb T_{n}^{\circ}$).
}
\label{fig:large}
\end{center}
\end{figure}
\section{The infinite volume limit}
In this section we discuss convergence of finite volume measures as $n\to \infty$ and as a result we prove Theorem~\ref{thm:existence}.
The main tools are the formulas for spin correlations from the previous section and the results on the critical random cluster model with $q\in[1,4]$ of Duminil-Copin, Sidoravicius and Tassion~\cite{DCST}.
\subsection{Convergence of $\phi_{n}$}
Let $\Omega_\bullet=\{0,1\}^{E(\mathbb Z_\bullet^2)}$ and let $\mathcal F$ be the product $\sigma$-algebra on $\Omega_\bullet$.
Recall that $\Omega_\bullet$ with the product discrete topology is a compact space for which $\mathcal F$ is the Borel $\sigma$-algebra.
In what follows we think of $\Omega^\bullet_n$ as a subset of $\Omega_\bullet$ by cutting the torus $\mathbb T_n$ along the two noncontractible cycles at graph distance
$nj_1$ in the horizontal and $nj_2$ in the vertical direction from the origin (as was done in \eqref{eq:box}), and
extending each percolation configuration~$\xi \in \Omega^\bullet_n$ to $\Omega_\bullet$ by setting its values to zero on the edges outside the resulting (rotated) box in $\mathbb Z^2_\bullet$.
In particular, we think of $\phi_{n}$ as a measure on~$(\Omega_\bullet,\mathcal F)$.
The weak convergence of $\phi_{n}$ to~$\phi$ will be a consequence of the fact that~$\phi$ is the unique critical random
cluster measure on~$\mathbb Z_\bullet^2$~\cite{DCST}, and the close relationship between formulas~\eqref{eq:nu} and~\eqref{eq:nurc}.
\begin{lemma} \label{lem:phiconvergence}
For $c\in [\sqrt 3,2]$, $\phi_n$ converges weakly to $\phi$ as $n\to \infty$.
\end{lemma}
Before proving the result we recall some classical definitions. To this end, for $E\subset E(\mathbb Z_\bullet^2)$, let $\mathcal F_E\subset \mathcal F$ be the $\sigma$-algebra generated by the
states of the edges in~$E$. Also define $E^c = E(\mathbb Z_\bullet^2) \setminus E$.
We say that a probability measure $\phi_0$ on~$\Omega_\bullet$ is \emph{insertion}
\emph{tolerant} if there exists $\epsilon>0$ such that for every $e\in E(\mathbb Z_\bullet^2)$ and
every event $A \in \mathcal F_{ \{e\}^c} $ of positive measure, we have
\begin{align} \label{eq:insertion}
\phi_0 (\xi(e) =1 \mid A) \geq \epsilon.
\end{align}
We say that $\phi_0$ is \emph{deletion tolerant} if the law of $1-\xi$ is insertion tolerant. Finally, $\phi_0$ has \emph{finite energy} if it is both insertion and deletion tolerant.
A result that we will use is the classical Burton--Keane~\cite{BK}
theorem saying that the probability of seeing more than one infinite cluster is zero under any
translation invariant probability measure with finite energy.
We say that a probability measure $\phi_0$ on $(\Omega_\bullet,\mathcal F)$ is a \emph{critical DLR random cluster measure} with parameter $q$ if for all $A\in \mathcal F$ and all
finite boxes~$\Lambda\subset E(\mathbb Z^2_{\bullet})$,
we have
\begin{align} \label{eq:DLR}
\phi_0(A \mid \mathcal F_{ \Lambda^c})(\zeta) = \phi_{\Lambda}^{\zeta}(A) \qquad \text{for $\phi_0$-a.e.\ $\zeta$},
\end{align}
where $\phi_{\Lambda}^{\zeta}$ is the \emph{critical random cluster measure with boundary conditions} $\zeta$ defined on
\[
\Omega_{\Lambda}^{\zeta}=\{ \xi \in \Omega_\bullet: \xi(e)=\zeta(e) \textnormal{ for } e\in \Lambda^c\},
\]
and given by
\begin{align} \label{eq:bcrc1}
\phi_{\Lambda}^{\zeta}(\xi) \propto q^{k_{\Lambda}(\xi)} \sqrt{q}^{|\xi \cap \Lambda|} .
\end{align}
Here $k_{\Lambda}(\xi)$ is the number of connected components of $\xi$ that intersect $\Lambda$.
We note that if $\zeta$ contains at most one infinite cluster, then
\begin{align} \label{eq:bcrc}
\phi_{\Lambda}^{\zeta}(\xi)\propto \sqrt{q}^{|L_{\Lambda}(\xi)|},
\end{align}
where $L_{\Lambda}(\xi)$ are the loops (or biinfinite paths) in $L(\xi)$ that intersect $\Lambda$.
One can check this by establishing that both weights in~\eqref{eq:bcrc1} and \eqref{eq:bcrc} change in the same way after altering the state of a single edge.
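To illustrate, suppose the state of a single edge $e\in \Lambda$ is changed from closed to open. If $e$ connects two distinct clusters of $\xi$, then $k_{\Lambda}(\xi)$ decreases by $1$ and $|\xi\cap\Lambda|$ increases by $1$, so the weight in \eqref{eq:bcrc1} is multiplied by $\sqrt q/q=1/\sqrt q$; in the loop picture, the two loops tracing the boundaries of these clusters merge into one, so $|L_{\Lambda}(\xi)|$ decreases by $1$ and \eqref{eq:bcrc} is multiplied by the same factor. If instead $e$ closes a cycle within a cluster, then \eqref{eq:bcrc1} gains a factor $\sqrt q$, while one loop splits into two and \eqref{eq:bcrc} gains the same factor.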
The fundamental result for us will be that for $q\in [1,4]$, there exists exactly one critical DLR random cluster measure as was shown in~\cite{DCST}.
\begin{proof}[Proof of Lemma~\ref{lem:phiconvergence}]
Since $\Omega_\bullet$ is compact, the sequence $(\phi_n)$ is tight, and it is enough to prove that every subsequential limit $\phi_0=\lim_{k\to \infty} \phi_{n_k}$ is equal to $\phi$.
To show this, by the uniqueness result of~\cite{DCST}, we only need to check that $\phi_0$ satisfies the
DLR condition \eqref{eq:DLR} for any box $\Lambda$.
We will do this by arguing that in the infinite volume limit, the value of ${|L_{\textnormal{nctr}}(\xi)|}$ in \eqref{eq:nu} does
not depend on the state of $\xi$ inside $\Lambda$, which will imply that the conditional distribution of \eqref{eq:nu} simplifies to~\eqref{eq:bcrc}.
To be precise, note that by~\eqref{eq:nu} the measures $\phi_n$ have finite energy with constants that are uniform in $n$, and therefore $\phi_0$ has finite energy as the weak limit of $\phi_{n_{k}}$. Moreover, $\phi_0$ is clearly translation invariant.
Therefore by the classical Burton--Keane argument~\cite{BK} the configuration
$\xi$ has at most one infinite connected component $\phi_0$-a.s. The same holds for $\xi^\dagger$ since it has the same distribution as~$\xi$ under $\phi_0$. For topological reasons, this means that $L(\xi)$ contains at most one infinite loop (by which we mean a biinfinite path)~$\phi_0$-a.s.\ which is the interface between these potential infinite primal and dual clusters.
Let $\Lambda_N\subset E(\mathbb Z^2_\bullet)$ be a box of size $N\times N$ in $\mathbb Z^2_\bullet$ centered at the origin.
For~$N$ such that $\Lambda\subseteq\Lambda_N$, let $s_N$ be the number of paths contained in $\Lambda_N\setminus \Lambda$ (that are parts of loops in $L(\xi)$) that intersect both $\Lambda$ and the outside of~$\Lambda_N$. Note that $s_N$ must be even, and let
\[
S_{N}=\{\xi: s_N(\xi)\leq 2\}\in \mathcal F_{\Lambda^c} \cap \mathcal F_{\Lambda_N}.
\]
For future reference, also note that since there is at most one infinite loop $\phi_0$-a.s. and since $S_N$ is increasing in $N$, for every $B\in \mathcal F$, we have
\begin{align} \label{eq:indicator}
\mathbf{1}_{B}=\lim_{N\to \infty}\mathbf{1}_{B\cap S_{N}}, \qquad \phi_0\textnormal{-a.s.}
\end{align}
We now fix $\varepsilon>0$ and take $k$ so large that the laws of the percolation configuration restricted to $\Lambda_N$
under $\phi_{n_{k}}$ and $\phi_0$ are at total variation distance less than $\varepsilon$ from each other.
This is possible by the weak convergence of $\phi_{n_{k}}$ to $\phi_0$ and since $N$ is fixed.
Now observe that for two configurations $\zeta, \zeta' \in S_{N}$ such that $\zeta=\zeta'$ on $\Lambda_N$,
we have $\phi_{\Lambda}^{\zeta}=\phi_{\Lambda}^{\zeta'}$.
This follows from~\eqref{eq:bcrc}, and the fact that if $s_N(\zeta)=2$, then necessarily the two paths crossing the annulus $\Lambda_N\setminus \Lambda$
must belong to the same loop in $L(\zeta)$ (the other case $s_N(\zeta)=0$ is clear).
Moreover, by \eqref{eq:nu} we have for $\zeta\in S_N$,
\[
\phi_{n_k}(\cdot \mid \mathcal F_{ \Lambda^c})(\zeta)= \phi_{\Lambda}^{\zeta}(\cdot) \propto \sqrt{q}^{|L_{\Lambda}(\cdot)|}
\] since, on $S_{N}$, changing the state of
an edge in~$\Lambda$ cannot change the number of noncontractible loops in~$L$ (by the same reasoning as above).
Hence, for all events $A\in \mathcal F_{\Lambda_N}$ and $B\in \mathcal F_{\Lambda^c}\cap \mathcal F_{\Lambda_N}$, we can write
\begin{align*}
\int_{B\cap S_{N}} \phi_{\Lambda}^{\zeta}(A)d\phi_0(\zeta)+O(\varepsilon) &=\int_{B\cap S_{N}} \phi_{\Lambda}^{\zeta}(A)d\phi_{n_{k}}(\zeta) \\
&=\int_{B\cap S_{N}} \phi_{n_k}(A \mid \mathcal F_{ \Lambda^c})(\zeta)d\phi_{n_{k}}(\zeta) \\
&= \phi_{n_k}(A\cap B \cap S_N ).
\end{align*}
Taking first $k\to \infty$ and using the fact that $A\cap B \cap S_N $ is a local event, and then taking $N \to \infty$ and using \eqref{eq:indicator}, we get
\[
\int_B \phi_{\Lambda}^{\zeta}(A)d\phi_0(\zeta) = \phi_{0}(A\cap B )
\]
for all local events $A\in \mathcal F$ and $B\in \mathcal F_{\Lambda^c} $. This yields the DLR condition \eqref{eq:DLR} since the local events in
$ \mathcal F$ and $\mathcal F_{\Lambda^c}$ generate the respective $\sigma$-algebras.
\end{proof}
\subsection{Convergence of $\mu_{n}$}
In this section we use the correlation identities from Lemma~\ref{lem:loopcorellations} to deduce weak convergence of $\mu_{n}$ from the convergence of $\phi_{n}$.
Recall the (local) map from spin configurations $\sigma$ to arrow configurations~$\alpha$.
Using this correspondence, from now on, we will think of $\mu_n$ as a measure on $\Sigma_n: = \{-1,1\}^{\mathbb {T}^{\bullet}_n} \times \{-i,i\}^{\mathbb {T}_n^\circ}$.
Note that compared to the original definition, now~$\mu_n$ also accounts for the independent coin flip that we used to decide the value of the spin on the fixed face~$u_0$.
Similarly to previous considerations,
we will also think of $\Sigma_n$ as a subset of $\Sigma := \{-1,1\}^{\mathbb {Z}^2_\bullet} \times \{-i,i\}^{\mathbb {Z}^2_\circ}$ by setting the values of spins outside the box~\eqref{eq:box}
to $1$ or $i$ depending on the sublattice.
In particular, $\mu_n$ becomes a measure on $(\Sigma,\mathcal G)$ where $\mathcal G$ is the product $\sigma$-algebra on~$\Sigma$.
We first show convergence of spin correlations.
\begin{lemma} \label{lem:loopcorrelationslimit} Let $u_1,\ldots, u_{2k} \in \mathbb Z_{\bullet}^2$ and $v_1,\ldots, v_{2l} \in \mathbb Z_{\circ}^2$
be black and white faces of $\mathbb Z^2$ respectively, and let $\rho$ and $s$ be as in Lemma~\ref{lem:loopcorellations}. Then for $c\in [\sqrt 3,2]$,
\begin{align} \label{eq:corfunctions}
\mathbf E_{\mu_{n}} \Big[ \prod_{i=1}^{2k}\sigma(u_i) \prod_{j=1}^{2l}\sigma(v_j) \Big] \to (-1)^{s}\mathbf E_{\phi} \Big[\prod_{\ell\in L} \rho(\ell)\Big] \quad \textnormal{as } n \to \infty.
\end{align}
\begin{proof}
We will use the locality property~\eqref{eq:quasilocal} and the fact that there are infinitely many loops in $L(\xi)$
surrounding all the faces $u_1,\ldots, u_{2k},v_1,\ldots, v_{2l}$ $\phi$-a.s.
To be precise, by Lemma~\ref{lem:loopcorellations} it is enough to show that
\begin{align} \label{eq:difference}
\Big| \mathbf E_{\phi_{n}} \Big[\prod_{\ell\in L} \rho(\ell) \Big] - \mathbf E_{\phi} \Big[\prod_{\ell\in L} \rho(\ell)\Big]\Big|\to 0 \quad \textnormal{as } n \to \infty.
\end{align}
To this end, recall that $\Lambda_N\subset E(\mathbb Z^2_\bullet)$ is the box of size $N\times N$ in $\mathbb Z^2_\bullet$ centered at the origin, and
$L(\xi) \cap \Lambda_N$ is the set of loops in $L(\xi)$ that are contained in $\Lambda_N$.
Let $T_N\in\mathcal F_{\Lambda_N}$ be the event that there is a loop in $L(\xi) \cap \Lambda_N$ that surrounds all the faces $u_1,\ldots, u_{2k},v_1,\ldots, v_{2l}$.
Note that $|\prod_{\ell\in L} \rho(\ell)| \leq C$ deterministically for some $C<\infty$ that depends only on the distances between the fixed faces.
From~\cite{DCST} we know that there are infinitely many loops that surround all the fixed faces $\phi$-a.s., and therefore $\phi(T_N)\to 1$ as $N\to \infty$. Hence, for $\varepsilon >0$ we can choose~$N$ so large that $\phi(T_N) > 1-\varepsilon/C$, and therefore
\begin{align*}
\Big| \mathbf E_{\phi} \Big[\prod_{\ell\in L} \rho(\ell)\Big] - \mathbf E_{\phi} \Big[\prod_{\ell\in L} \rho(\ell) \mathbf{1}_{T_N}\Big]\Big|=
\Big| \mathbf E_{\phi} \Big[\prod_{\ell\in L} \rho(\ell)\Big] - \mathbf E_{\phi} \Big[\prod_{\ell\in L \cap \Lambda_N} \rho(\ell) \mathbf{1}_{T_N}\Big]\Big| <\varepsilon,
\end{align*}
where we used the locality property~\eqref{eq:quasilocal} to obtain the equality.
Since the random variables $\prod_{\ell\in L \cap \Lambda_N} \rho(\ell)$ and $\mathbf{1}_{T_N}$ are local, by Lemma~\ref{lem:phiconvergence} we can now take~$M$ so large that
\[
\Big|\mathbf E_{\phi} \Big[\prod_{\ell\in L \cap \Lambda_N} \rho(\ell) \mathbf{1}_{T_N}\Big] - \mathbf E_{\phi_n} \Big[\prod_{\ell\in L \cap \Lambda_N} \rho(\ell) \mathbf{1}_{T_N}\Big] \Big| <\varepsilon \quad \textnormal{and} \quad
\phi_{n}(T_N) > 1-2\varepsilon/C
\]
for all $n\geq M$. Using~\eqref{eq:quasilocal} again, we altogether get an upper bound of $4\varepsilon$ on~\eqref{eq:difference} for $n\geq M$.
\end{proof}
\end{lemma}
We are now able to prove the convergence part of Theorem~\ref{thm:existence}.
\begin{proof}[Proof of part $(i)$ of Theorem~\ref{thm:existence}]
Note again that since~$\Sigma$ is compact, it is enough to prove that all subsequential limits of $\mu_n$ are equal. By the lemma above,
all these limits have the same correlation functions of the form \eqref{eq:corfunctions}. We finish the proof by noticing that the indicator function of any local
event can be written as a linear combination of such correlation functions (see \eqref{eq:indA}). \end{proof}
We denote the limiting measure on $(\Sigma,\mathcal G)$ by $\mu$.
\begin{corollary} \label{cor:loopcorr}
In the setting of Lemma~\ref{lem:loopcorrelationslimit}, we have
\begin{align*}
\mathbf E_{\mu} \Big[ \prod_{i=1}^{2k}\sigma(u_i) \prod_{j=1}^{2l}\sigma(v_j) \Big] = (-1)^{s}\mathbf E_{\phi} \Big[\prod_{\ell\in L} \rho(\ell)\Big].
\end{align*}
\end{corollary}
\subsection{Mixing property of $\mu$}
Let $\mathcal G_{\textnormal{even}} \subset \mathcal G$ be the $\sigma$-algebra of even events, i.e., events invariant under the global sign flip $\sigma \mapsto -\sigma$.
In this section we show that $\mu$ as a measure on $(\Sigma, \mathcal {G}_{\textnormal {even}})$
(and hence also as a measure on arrow configurations $\mathcal O$ equipped with the product $\sigma$-algebra) is mixing in the sense as in part $(ii)$ of Theorem~\ref{thm:existence}.
Our argument uses Lemma~\ref{lem:loopcorrelationslimit} and heavily relies on the mixing property of the random cluster measure $\phi$ established in~\cite{DCST}.
\begin{proof}[Proof of part $(ii)$ of Theorem~\ref{thm:existence}]
We present the proof in the language of spins. The corresponding statement for arrow configurations follows immediately.
Let $A,B\in \mathcal G_{\textnormal{even}}$ depend on the state of spins in finite square boxes $V,V'\subset \mathbb Z^2_\bullet \cup \mathbb Z^2_\circ$ respectively.
We have
\begin{align}\label{eq:indA}
\mathbf 1_{A}(\sigma)&= \sum_{\tilde \sigma \in A} \prod_{v\in V}\tfrac12(1+\epsilon(v)\tilde\sigma(v)\sigma(v))
= \tfrac1{2^{|V|}} \sum_{S\subseteq V} \Big(\sum_{\tilde \sigma \in A} \prod_{v \in S} \epsilon(v) \tilde \sigma(v) \Big) \prod_{v \in S} \sigma(v),
\end{align}
where $\epsilon(v)=1$ if $v\in \mathbb Z^2_\bullet$ and $\epsilon(v)=-1$ if $v\in \mathbb Z^2_\circ$. Since $A$ is invariant under sign change, only terms involving sets $S$ of even
cardinality remain after the sum over~$\tilde \sigma$ is taken.
This means that for some (explicit) coefficients $\beta_S$, $\beta'_{S'}$,
\begin{align*}
\mathbf 1_{A}(\sigma) = \mathop{\sum_{S \subseteq V}}_{|S| \textnormal{ even}} \beta_S \sigma(S),\quad \textnormal{ and } \quad \mathbf 1_{B}(\sigma) = \mathop{\sum_{S' \subseteq V'}}_{|S'| \textnormal{ even}} \beta'_{S' }\sigma(S'),
\end{align*}
where $\sigma(S)=\prod_{u\in S} \sigma(u)$, and therefore
\begin{align*}
\mu(A\cap B) -\mu(A)\mu(B) & = \mathop{\sum_{S\subseteq V,S' \subseteq V'}}_{|S|, |S'| \textnormal{ even}}\beta_{S}\beta'_{S'}
( \mathbf E_{\mu} [ \sigma(S) \sigma(S')] - \mathbf E_{\mu} [ \sigma(S)] \mathbf E_{\mu} [ \sigma(S')]) .
\end{align*}
Hence, to get \eqref{eq:mixing} it is enough to show that there exists $\kappa>0$ such that
\begin{align} \label{eq:sigmafactorization}
| \mathbf E_{\mu} [ \sigma(S) \sigma(S')] - \mathbf E_{\mu}[\sigma(S)] \mathbf E_{\mu}[\sigma(S')] | \leq Kd(V,V')^{-\kappa}
\end{align}
for any pair of sets $S\subseteq V,S'\subseteq V'$ of even cardinality, where $K$ depends only on the size of $V$ and $V'$.
To this end, we use Lemma~\ref{lem:loopcorrelationslimit}, where we choose an equal number of sources and sinks in~$S$ (and hence also in $S'$), to get
\begin{align} \label{eq:rhom}
& \mathbf E_{\mu} [ \sigma(S) \sigma(S')] =(-1)^{s+ s'} \mathbf E_{\phi}\Big [ \prod_{\ell \in L} \rho_{S\cup S'}(\ell) \Big], \\
& \mathbf E_{\mu} [ \sigma(S)] =(-1)^s \mathbf E_{\phi}\Big [ \prod_{\ell \in L} \rho_S(\ell) \Big] \nonumber \\
& \mathbf E_{\mu} [ \sigma(S')] = (-1)^{ s'}\mathbf E_{\phi}\Big [ \prod_{\ell \in L} \rho_{ S'}(\ell) \Big], \nonumber
\end{align}
where $s$ and $ s'$ are the number of white sinks in $S$ and $ S'$, and where $\rho_{S\cup S'}$ is defined for $S\cup S'$
as in~\eqref{eq:rho}, and $\rho_S$ (resp.\ $\rho_{ S'}$) is defined for $S$ (resp.\ $ S'$) using the same (but properly restricted) choice of sinks and sources.
In particular, we have that $\rho_S= \rho_{S\cup S'}$ and $\rho_{ S'}= \rho_{S\cup S'}$ on loops not surrounding any face of $ S'$ and $S$ respectively.
We note here that $S$ can contain an even or an odd number of, say, white faces. In the latter case, the last two correlations are equal to zero. However, as mentioned before, the
formula from Lemma~\ref{lem:loopcorrelationslimit} is still valid, and we choose to use it to have a uniform treatment of both cases.
Let $\Lambda_N ,\Lambda'_N \subset E( \mathbb Z^2_{\bullet})$ be boxes of size $N\times N$ centered around the centers of $V$ and $V'$ respectively.
Define $T_{N}\in\mathcal F_{\Lambda_N}$ to be the event that there is no loop in $L$ that intersects both $V$ and the complement of $\Lambda_N$.
Analogously define $ T'_{N}\in\mathcal F_{ \Lambda'_N}$ for $ \Lambda'_N$ and $ V'$.
By the strong RSW property of $\phi$ established in~\cite{DCST}, we know that there exists $\kappa'>0$ depending only on $q$, and $K_2<\infty$ depending on $q$ and the size of $V$ and $V'$, such that for all $N>0$,
\begin{align}\label{eq:RSW}
\phi(T_N\cap T'_N)\geq 1-K_2 N^{-\kappa'}.
\end{align}
Indeed, to ensure $T_N$, it is enough to construct an open circuit in the percolation configuration $\xi$ that surrounds $V$ and stays within $\Lambda_N$.
By the strong RSW property and the positive association of $\phi$, this can be done with constant probability
for every annulus in a properly defined sequence of disjoint concentric and exponentially growing annuli centered around $V$. We leave the details of this standard argument to the reader.
Moreover we have
\[
\big |\prod_{\ell \in L} \rho_{U}(\ell) \big| \leq \max(\tan \lambda, 1)^{ |V| +|V'|} = : K_1
\]
deterministically for $U= S, S', S\cup S' $, since there can be at most $|V| +|V'|$ loops intersecting $V\cup V'$.
Hence, by \eqref{eq:rhom} and \eqref{eq:RSW} we have
\begin{align} \label{eq:in1}
& \Big|\mathbf E_{\mu} [ \sigma(S) \sigma( S')]-\mathbf E_{\phi} \Big[ \prod_{\ell \in L} \rho_{S\cup S'}(\ell)\mathbf 1_{T_N}\mathbf 1_{ T'_N}\Big]\Big| \leq K_3N^{-\kappa'}, \\
&\Big| \mathbf E_{\mu} [ \sigma(S)] -\mathbf E_{\phi}\Big [ \prod_{\ell \in L\cap \Lambda_N} \rho_S(\ell)\mathbf 1_{T_N}\Big]\Big| \leq K_3N^{-\kappa'}, \nonumber\\
&\Big| \mathbf E_{\mu} [ \sigma(S')] -\mathbf E_{\phi}\Big [ \prod_{\ell \in L\cap \Lambda'_N} \rho_{ S'}(\ell)\mathbf 1_{ T'_N}\Big]\Big| \leq K_3N^{-\kappa'}, \nonumber
\end{align}
where $K_3=K_1K_2$.
Combined with the fact that the spin correlations are by definition bounded by one, the last two inequalities give
\begin{align} \label{eq:in3}
&\Big| \mathbf E_{\mu} [ \sigma(S)] \mathbf E_{\mu} [ \sigma(S')] -\mathbf E_{\phi}\Big [ \prod_{\ell \in L\cap \Lambda_N} \rho_S(\ell)\mathbf 1_{T_N}\Big]
\mathbf E_{\phi}\Big [ \prod_{\ell \in L\cap \Lambda'_N} \rho_{ S'}(\ell)\mathbf 1_{ T'_N}\Big]\Big| \\
& \qquad \qquad \leq K_3N^{-\kappa'} (K_3N^{-\kappa'}+2) . \nonumber
\end{align}
On the other hand, we have
\begin{align} \label{eq:in2}
\prod_{\ell \in L} \rho_{S\cup S'}(\ell)\mathbf 1_{T_N}\mathbf 1_{ T'_N}= \prod_{\ell \in L \cap \Lambda_N}
\rho_S(\ell)\mathbf 1_{T_N} \prod_{\ell \in L\cap \Lambda'_N}\hspace{-0.3cm} \rho_{ S'}(\ell) \mathbf 1_{ T'_N}
\end{align}
whenever $\Lambda_N$ and $ \Lambda'_N$ are disjoint.
Moreover, since these two factors are local functions depending only on the state of edges in $\Lambda_N$ and $ \Lambda'_N$
respectively, by the mixing property of the critical random cluster model from Theorem~5 of~\cite{DCST}, we have
\begin{align} \label{eq:in4}
&\Big|\mathbf E_{\phi} \Big[ \prod_{\ell \in L \cap \Lambda_N}
\rho_S(\ell)\mathbf 1_{T_N} \prod_{\ell \in L\cap \Lambda'_N}\hspace{-0.3cm} \rho_{ S'}(\ell) \mathbf 1_{ T'_N} \Big] -
\mathbf E_{\phi} \Big[ \prod_{\ell \in L \cap \Lambda_N} \rho_S(\ell)\mathbf 1_{T_N}\Big]\mathbf E_{\phi} \Big[ \prod_{\ell \in L\cap \Lambda'_N}\hspace{-0.3cm} \rho_{ S'}(\ell) \mathbf 1_{ T'_N}\Big] \Big|
\\ & \nonumber\qquad \leq\mathbf E_{\phi} \Big[ \prod_{\ell \in L \cap \Lambda_N} |\rho_S(\ell)|\mathbf 1_{T_N}\Big]\mathbf E_{\phi} \Big[ \prod_{\ell \in L\cap \Lambda'_N}\hspace{-0.3cm}| \rho_{ S'}(\ell)| \mathbf 1_{ T'_N}\Big]
\big(\tfrac{N}{ d(\Lambda_N,\Lambda'_N)+N} \big)^{\kappa''}
\\&\nonumber \qquad \leq K_1^2 \big(\tfrac{N}{ d(\Lambda_N,\Lambda'_N)+N} \big)^{\kappa''}
\end{align}
whenever $d(\Lambda_N,\Lambda'_N)\geq N$ for some $\kappa''>0$ that depends only on $q$.
Combining \eqref{eq:in1}, \eqref{eq:in3}, \eqref{eq:in2} and \eqref{eq:in4}, we obtain that the left-hand side of \eqref{eq:sigmafactorization} is at most
\[
K_3N^{-\kappa'} (K_3N^{-\kappa'}+3) + K_1^2\big(\tfrac{N}{ d(\Lambda_N,\Lambda'_N)+N} \big)^{\kappa''} \leq K_4\big(N^{-\kappa'} +\big(\tfrac{N}{ d(\Lambda_N,\Lambda'_N)+N} \big)^{\kappa''}\big)
\]
for $d(\Lambda_N,\Lambda'_N)\geq N$, where $K_4$ depends only on $q$ and the size of $V$ and $V'$.
Taking $N=\lfloor \sqrt{d(V,V')}\rfloor$ we show \eqref{eq:sigmafactorization} and complete the proof.
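To spell out this final estimate (a routine computation): since $d(\Lambda_N,\Lambda'_N)\geq d(V,V')-2N$, this choice of $N$ satisfies $d(\Lambda_N,\Lambda'_N)\geq N$ once $d(V,V')$ is large enough (smaller distances are handled by enlarging the constant), and then, using $N\geq \tfrac12\sqrt{d(V,V')}$ and $d(\Lambda_N,\Lambda'_N)+N\geq \tfrac12 d(V,V')$,
\begin{align*}
K_4\Big(N^{-\kappa'} +\big(\tfrac{N}{ d(\Lambda_N,\Lambda'_N)+N} \big)^{\kappa''}\Big)
\leq K_5\big( d(V,V')^{-\kappa'/2}+ d(V,V')^{-\kappa''/2}\big)
\leq K\, d(V,V')^{-\kappa}
\end{align*}
for $\kappa=\tfrac12\min(\kappa',\kappa'')$, where $K_5$ and $K$ depend only on $q$ and the size of $V$ and $V'$.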
\end{proof}
We note that ergodicity of $\mu$ follows by using standard arguments where one approximates translation invariant events by local events, and then uses the established mixing property.
\subsection{Decorrelation of monochromatic spins} In this section we study the decay of spin correlations for spins on faces of the same color, and without loss of generality we choose the black faces.
The simplest case of Corollary~\ref{cor:loopcorr} says that for $u,u'\in \mathbb Z^2_{\bullet}$,
\begin{align} \label{eq:twopoint}
\mathbf E_{\mu}[\sigma(u)\sigma(u')] = \mathbf E_{\phi} \Big[\prod_{\ell\in L} \rho(\ell)\Big] = \mathbf E_{\phi} \Big[\rho^{N(u,u')}(-\rho)^{N(u',u)}\Big] ,
\end{align}
where $N(u,u')$ is the number of loops in $L=L(\xi)$ which surround $u$ but not~$u'$.
Note that for $c=2$, we have $\rho=0$ and the right-hand side becomes $\phi(N(u,u')=N(u',u)=0)$, the probability that no loop in $L(\xi)$ separates $u$ from~$u'$ (see Remark~\ref{rem:c2}).
Using that $\rho = \tan \lambda \leq 1$ for $c\in [\sqrt{2+\sqrt 2},2]$ we obtain the following result.
\begin{theorem} \label{thm:decorrelation}
For $c\in [\sqrt{2+\sqrt 2},2]$, there exists $\theta=\theta(c)>0$ such that for all $u,u'\in \mathbb Z^2_{\bullet}$,
\begin{align} \label{eq:decorrelation1}
\mathbf E_{\mu}[\sigma(u)\sigma(u')]\leq |u-u'|^{-\theta}.
\end{align}
\end{theorem}
We note that positivity of this two-point function follows from the percolation representation of the spin model described in Section~\ref{sec:omega}.
We also note that our argument does not give much information on the value of the exponent $\theta$.
\begin{proof}
We consider two cases.
\textbf{Case I:} $c\in (\sqrt{2+\sqrt 2},2]$. In this case $\rho<1$, and we can simply bound the right-hand side of \eqref{eq:twopoint} from above by $\mathbf E_{\phi} [\rho^{N(u,u')}] $.
Note that $N(u,u')$ is bounded from below by the number of loops surrounding $u$ whose diameter is smaller than $|u-u'|$. This number, in turn, stochastically dominates a binomial random variable with order $\log |u-u'|$ trials and with (uniformly in $u,u'$) positive success probability.
This is a consequence of the strong RSW results for the random cluster model obtained in~\cite{DCST}. Indeed, using the positive association of the measure and
the fact that one can cross long rectangles with uniformly positive probability under arbitrary boundary conditions, one can iteratively construct circuits of $\xi$ and $\xi^{\dagger}$
in exponentially growing annuli around $u$. Each pair of such consecutive clusters of $\xi$ and $\xi^{\dagger}$ contributes one loop to $L(\xi)$ that surrounds $u$ but not $u'$.
This yields \eqref{eq:decorrelation1} by using elementary properties of the binomial distribution. We leave the details to the reader.
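The elementary property in question is the probability generating function of the binomial distribution; in sketch form: if $N(u,u')$ stochastically dominates a $\mathrm{Bin}(n,p)$ random variable, then, since $x\mapsto \rho^x$ is decreasing for $0\leq\rho<1$,
\begin{align*}
\mathbf E_{\phi} \big[\rho^{N(u,u')}\big] \leq \mathbf E \big[\rho^{\mathrm{Bin}(n,p)}\big] = \sum_{k=0}^{n}\binom{n}{k}(p\rho)^k(1-p)^{n-k} = \big(1-p(1-\rho)\big)^{n},
\end{align*}
and with $n$ of order $\log|u-u'|$ and $p$ bounded away from zero uniformly in $u,u'$, the right-hand side is at most $|u-u'|^{-\theta}$ for some $\theta=\theta(p,\rho)>0$.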
\textbf{Case II:} $c= \sqrt{2+\sqrt 2}$. In this case $\rho=1$ and the right-hand side of \eqref{eq:twopoint} simplifies to $\mathbf E_{\phi} [(-1)^{N(u',u)}] $.
Let $v=u+(1,0),v'=u'+(1,0)$ be the two vertices of $\mathbb Z^2_{\circ}$ directly to the right of $u$ and $u'$, and let $e,e'\in E(\mathbb Z^2)$
be the edges separating $u$ from $v$, and $u'$ from $v'$ respectively. Since the law of $L(\xi)$ is invariant under translation by $(1,0)$, we
have that $N(u',u)$ has the same distribution as $N(v',v)$, and we can write
\begin{align} \label{eq:cancellations}
\mathbf E_{\mu}[\sigma(u)\sigma(u')] & =\tfrac 12 \mathbf E_{\phi} [(-1)^{N(u',u)}+(-1)^{N(v',v)}].
\end{align}
We now notice that for each configuration of $\xi$, we have $N(u',u)(\xi)=N(v',v)(\xi)$ if there is a loop in $L(\xi)$ which goes through
$e$ and surrounds $u'$ or $v'$, or there is a loop that goes through $e'$ and surrounds $u$ or $v$.
Moreover, in this case $N(u',u)$ is even.
Otherwise we have $N(u',u)(\xi)=N(v',v)(\xi)\pm 1$ and the corresponding two terms in the expression
above cancel out. All in all we obtain that \eqref{eq:cancellations} is bounded above by the probability that the cluster in $\xi$ of either $u$, $u'$, $v$ or $v'$ has radius of order $|u-u'|$.
Again by the RSW property of the critical random cluster measure, this probability decays polynomially in $|u-u'|$, and we finish the proof.
\end{proof}
\begin{remark}
We want to stress that such polynomial decorrelation (including a polynomial lower bound) for monochromatic spins is expected to hold for all positive $c$. However, so far we have not been able to obtain it
using \eqref{eq:twopoint}. The reason is that in the case when $\rho>1$ one needs to argue that the fluctuations of the
random sign $(-1)^{N(u',u)}$ and the exponential growth of $\rho^{N(u,u')+N(u',u)}$ cancel out to order $O(|u-u'|^{-\theta})$. Note that \eqref{eq:twopoint} already implies
(since the left-hand side is bounded above by one) that such
cancellations occur to order~$O(1)$.
\end{remark}
\begin{remark}
By arguments as in the previous section, polynomial decorrelation of monochromatic spins yields a similar mixing property of $\mu$ for all local events (not only even local events).
\end{remark}
\section{Delocalization of the height function}\label{sec:details}
In this section we combine the framework developed in~\cite{Lis19} with the results from the previous sections to prove delocalization of the height function for $c\in[\sqrt{2+\sqrt 2},2]$.
To this end, we need to consider a conditioned version of the six-vertex model.
We define $\mathcal O^0_{n}\subset \mathcal {O}_n$ to be the set of arrow configurations such that the spin system $\sigma$ is globally well defined on $\mathbb T_{n}$. In other words,
these are the arrow configurations such that the increment of the height function along any noncontractible cycle in the dual graph $\mathbb T_n^*$ is zero mod $4$.
We denote by $\mu^0_{n}$ the measure $\mu_{n}$ conditioned on $\mathcal O^0_{n}$. As before, we will identify $\mu^0_{n}$ with a probability measure on the set of spin configurations $\Sigma_n$, which we think of as a subset of~$\Sigma$.
\subsection{Convergence of $\mu^0_{n}$}
We will first show that $\mu^0_{n}$ also converges to $\mu$ as $n\to \infty$.
The argument is analogous to the one used to establish convergence of $\mu_n$ itself, and we will only focus here on the (topological) differences arising from the conditioning on $\mathcal O^0_{n}$.
To this end, we perform the same steps as in the unconditional BKW representation.
We first expand the arrow configurations in $\mathcal O^0_{n}$ to obtain a set of fully-packed oriented loop configurations, denoted by $\vec {\mathcal {L}}^0_n$.
We denote the sets of oriented and unoriented loop configurations composed of only contractible loops by $\vec {\mathcal {L}}^{\textnormal{ctr}}_n$ and $ {\mathcal {L}}^{\textnormal{ctr}}_n$ respectively.
We now notice that any contractible oriented loop contributes zero to the increment of the height function along any noncontractible cycle.
Hence, $\vec {\mathcal {L}}^{\textnormal{ctr}}_n \subset \vec {\mathcal {L}}^0_n$, and the complex measure induced on $\vec {\mathcal {L}}^{\textnormal{ctr}}_n$
and the probability measure induced on $ {\mathcal {L}}^{\textnormal{ctr}}_n$ by $\mu^0_n$ are the same as those induced by~$\mu_{n}$.
To treat the case involving noncontractible loops, we recall a topological fact saying that for a simple noncontractible closed curve on the torus, the algebraic numbers $(k,l)$
of times the curve intersects the equator and a fixed meridian respectively are coprime (in particular, one of them has to be odd).
Moreover, such pairs of numbers $(k,l)$ are in a one-to-one correspondence with isotopy classes of such curves.
Let $\vec L \in \vec {\mathcal {L}}_n \setminus \vec {\mathcal {L}}^{\textnormal{ctr}}_n$ contain noncontractible loops.
Note that since these loops do not intersect, they have to be, up to orientation, of the same isotopy class $(k,l)$.
Moreover, since the torus $\mathbb T_n$ is of even size, the total increment of the height function must be even along any noncontractible loop.
Combined with the fact that at least one of the numbers $(k,l)$, say $k$, is odd, this means that there must be an even number, say $2m$, of noncontractible loops in $\vec L$.
Let $m_1$ and $m_2$ be the numbers of such loops which intersect the meridian from right to left and from left to right respectively. In particular $m_1+m_2=2m$.
Then the increment of the height function of $\vec L$ is $a=(m_1-m_2)k$ along the meridian and $b=\pm(m_1-m_2)l$ along the equator.
If we now reverse the orientation of the noncontractible loop $\vec \ell_0\in \vec L$
which goes through the vertex with the smallest number (in some fixed ordering), we obtain a configuration $\vec L'$ for which these increments are
$a'=(m_1-m_2\pm2)k$ and $b'=\pm(m_1-m_2\pm2)l$ respectively. Since $m_1-m_2$ is even, we have the following cases:
if $l$ is even, then $a=a'+2 \textnormal{ (mod 4)}$ and $b=b'=0 \textnormal{ (mod 4)}$, and if $l$ is odd, then $a=b=a'+2 =b'+2\textnormal{ (mod 4)}$.
In both situations, exactly one of the two configurations $\vec L$ and $\vec L'$ belongs to $\vec {\mathcal {L}}^0_n$. Note that the correspondence~$\vec L \leftrightarrow \vec L'$
is involutive and measure preserving (since $\vec \ell_0$ has total winding zero). This implies that exactly half (in terms of the induced complex measure) of the oriented
loop configurations in $\vec {\mathcal {L}}_n \setminus \vec {\mathcal {L}}^{\textnormal{ctr}}_n$
belong to $\vec {\mathcal {L}}^0_n$.
In particular we get the following formula for the induced probability measure on $\mathcal{L}_n$,
\begin{align} \label{eq:nu0}
\phi^0_n( L) = \frac{1}{Z^0_{n}} \sqrt{q}^{|L|} \big(\tfrac{2}{\sqrt q}\big)^{|L_{\textnormal{nctr}}|}\tfrac12 \big(1+\mathbf 1_{ {\mathcal {L}}^{\textnormal{ctr}}_n}(L)\big),
\end{align}
where $L_{\textnormal{nctr}}$ is the set of noncontractible loops in $L$.
By considerations exactly as in the previous section, we obtain the following convergence.
\begin{proposition} \label{lem:0conv}
For $c\in [\sqrt 3, 2]$, $\mu^0_n \to \mu$ weakly as $n \to \infty$.
\end{proposition}
\subsection{The percolation process $\omega$} \label{sec:omega}
We follow~\cite{GlaPel,Lis19} and define a bond percolation model $\omega$ on top of the spin configuration $\sigma$ sampled according to $\mu^0_n$
(the corresponding parameters in~\cite{Lis19} are $q=q'=2$ and $a=b=c^{-1}$).
We note that the model was also used in~\cite{SpiRay} in the study of the six-vertex model in the localized regime.
Recall that the graphs $\mathbb{T}^{\bullet}_n$ and $\mathbb{T}^{\circ}_n$ (likewise $\mathbb {Z}^2_\bullet$ and $\mathbb {Z}^2_\circ$) are dual to each other,
and denote by $\sigma^{\bullet}$ and $\sigma^{\circ}$ the restrictions
of the spin configuration $\sigma$ to the vertices of the respective graphs.
We now define $\eta(\sigma^{\circ}) \subseteq E(\mathbb{T}^{\bullet}_n)$ to be the set of \emph{contours} of $\sigma^{\circ}$, i.e., edges
whose {dual} edge in $E(\mathbb{T}^{\circ}_n)$ carries two different values of the spin~$\sigma^{\circ}$ at its endpoints.
Given $\sigma$, to obtain the percolation configuration $\omega \subseteq E(\mathbb{T}^{\bullet}_n)$, we proceed in steps:
\begin{itemize}
\item[$(i)$] we start with the configuration where all edges are closed,
\item[$(ii)$] we then declare each edge in $\eta(\sigma^{\circ})$ open,
\item[$(iii)$] for each edge $\{u,u'\} \in E(\mathbb{T}^{\bullet}_n)$
still closed after step $(ii)$ and such that $\sigma(u) = \sigma(u')$, we toss an independent coin with success probability $1-1/c$.
On success, we declare the edge open, and otherwise we keep it closed,
\item[$(iv)$] we denote by $\omega$ the set of all open edges.
\end{itemize}
Note that in particular $\eta(\sigma^{\circ})\subseteq \omega$.
We will write $\mathbf P_n$ for the probability measure on configurations $(\sigma,\omega)\in \Sigma_n \times \Omega_n^{\bullet}$ obtained from
these steps when $\sigma$ is distributed according to $\mu^0_n$.
Since the above procedure is local, independent for different edges, and invariant under the global sign change $\sigma \mapsto -\sigma$, from Proposition~\ref{lem:0conv} we immediately conclude the following.
\begin{corollary} \label{cor:P}
$\mathbf P_n$ converges weakly as $n\to \infty$ to a probability measure $\mathbf P$ on
$(\Sigma\times \Omega_{\bullet}, \mathcal G \otimes \mathcal F)$ which is translation invariant, satisfies a mixing property as in Theorem~\ref{thm:existence}, and hence is ergodic on $\mathcal G_{\textnormal{even}} \otimes \mathcal F$ with respect to the translations of~$\mathbb Z^2_{\bullet}$.
\end{corollary}
The following result connecting the percolation properties of $\omega$ under $\mathbf P$ with the behaviour of the height function under $\mu$ was proved in~\cite{Lis19}.
\begin{lemma} \label{lem:Lis19}
For $c\in [\sqrt{3},2]$, if
\[
\mathbf P( \exists \textnormal{ an infinite cluster of }\omega) = 0,
\]
then
\begin{align*}
\mathbf{Var}_{\mu} [h(u)] \to \infty \quad \text{ as } \quad |u| \to \infty,
\end{align*}
where $u \in \mathbb Z^2_{\circ} \cup \mathbb Z^2_{\bullet} $ is a face of $\mathbb Z^2$.
\end{lemma}
Therefore, to prove Theorem~\ref{thm:delocalization} it is enough to show the following.
\begin{proposition}\label{prop:nopercolation}For $c\in [\sqrt{2+\sqrt 2},2]$, $\mathbf P( \exists \textnormal{ an infinite cluster of }\omega) = 0$.
\end{proposition}
We devote the rest of this section to the proof of this result. We note that percolation properties of related models were studied in~\cite{HolLi}.
We first recall a crucial property of the coupling between $\sigma$ and $\omega$ given by the following description of the conditional law of $\sigma^{\bullet}$ given $\omega$~\cite{GlaPel,SpiRay,Lis19}, which is directly analogous to the Edwards--Sokal coupling between the Potts model and the random cluster model~\cite{EdwSok}.
\begin{lemma}[Edwards--Sokal property of $\omega$ and $\sigma^{\bullet}$]\label{lem:EdSo}
Under the probability measure $\mathbf P_n$, conditionally on~$\omega$, the spins $\sigma^\bullet$ are distributed like an independent uniform assignment of a $\pm 1$ spin
to each connected component of $\omega$. The same is true for $\mathbf P$ given that $\mathbf P( \exists \textnormal{ an infinite cluster of }\omega)=0$.
\end{lemma}
As a direct consequence we obtain a relation between connectivities in $\omega$ and spin correlations,
\begin{align} \label{eq:EScor}
\mathbf P_{n}(u \textnormal{ connected to } u' \textnormal{ in } \omega) = \mathbf{E}_{\mu^0_{n}}[\sigma(u)\sigma(u')].
\end{align}
The idea now is to use this identity and the decorrelation of spins from Theorem~\ref{thm:decorrelation} to conclude no percolation for $\omega$.
\begin{remark} \label{rem:c2}
For $c=2$, the distribution of $\omega$ under $\mathbf P$ and that of $\xi$ under $\phi$ are both the critical random cluster model with $q=4$ (see \cite{Lis19}).
In this case we know that formula \eqref{eq:EScor} also holds in the infinite volume (since $\omega$ does not percolate), and it is identical to formula \eqref{eq:example}
since for $q=4$, we have $\rho=0$ and the event that there is no loop separating $u$ from $u'$ in $L(\xi)$ is the same as the event of $u$ being connected to $u'$ in~$\xi$.
\end{remark}
We will first need to prove that there is at most one infinite cluster in $\omega$ under~$\mathbf P$.
To this end,
we start with establishing insertion tolerance of $\omega$.
\begin{lemma}[Insertion tolerance of $\omega$] \label{lem:insertion}
For $c\in [\sqrt{3},2]$, the law of $\omega$ under $\mathbf P$ is insertion tolerant as defined in~\eqref{eq:insertion}.
\end{lemma}
\begin{proof}
Since $\mathbf P$ is the weak limit of $\mathbf P_n$, it is enough to prove that $\mathbf P_n$ satisfies \eqref{eq:insertion} with a constant $\epsilon>0$ that is independent of $n$.
To this end, for a configuration $\zeta \in \Omega^{\bullet}_n $ and an edge $e=\{u_1,u_2\} \in E(\mathbb T^{\bullet}_n)$, let $\zeta^e,\zeta_e \in \Omega^{\bullet}_n$ be the configurations that
agree with $\zeta$ on $E(\mathbb T^{\bullet}_n)\setminus \{ e\}$ and such that $\zeta^e(e)=1$ and $\zeta_e(e)=0$.
Note that by Lemma~\ref{lem:EdSo} we have
\begin{align}
\mathbf P_n (\omega = \zeta^e)&\geq\mathbf P_n (\omega = \zeta^e, \sigma(u_1) =\sigma(u_2), \sigma(v_1) =\sigma(v_2)) \nonumber \\
& =\tfrac p{1-p} \mathbf P_n (\omega = \zeta_e, \sigma(u_1) =\sigma(u_2), \sigma(v_1) =\sigma(v_2)) \nonumber \\
& = \tfrac p{1-p} \mathbf P_n (\omega = \zeta_e, \sigma(u_1) =\sigma(u_2)) \nonumber\\
& \geq \tfrac 12 \tfrac p{1-p} \mathbf P_n (\omega = \zeta_e) \nonumber \\
& = \tfrac 12(c-1) \mathbf P_n (\omega = \zeta_e) \label{eq:insertionbound} ,
\end{align}
where $\{v_1,v_2\}\in E(\mathbb{T}^{\circ}_n)$ is the dual edge of $e$, and $p=1-c^{-1}$ is the success probability from step $(iii)$ of the definition of $\omega$.
To get the first equality, we used the fact that if $\sigma(u_1) =\sigma(u_2)$ and $\sigma(v_1) =\sigma(v_2)$, then we can open $e$ only in step $(iii)$ by tossing a coin.
In the second equality, we used that if $e$ is closed then necessarily $\sigma(v_1) =\sigma(v_2)$. The last inequality follows from Lemma~\ref{lem:EdSo} and the fact that,
on the event that $u_1$ is not connected to $u_2$ in $\omega$, both faces obtain independent $\pm 1$ spins (otherwise, they must have the same spin).
From \eqref{eq:insertionbound}, and since the function $t\mapsto \frac{t}{1+t}$ is increasing, we get that
\[
\mathbf P (\omega(e)=1 \mid \omega(e') = \zeta(e') \textnormal{ for } e'\neq e) = \frac{\mathbf P(\omega = \zeta^e)}{\mathbf P(\omega = \zeta^e)+\mathbf P(\omega = \zeta_e)} \geq \frac{(c-1)/2}{1+(c-1)/2} = \tfrac{c-1}{c+1} ,
\]
and hence \eqref{eq:insertion} holds true with $\epsilon = (c-1)/(c+1)$. This ends the proof.
\end{proof}
We will now exclude the possibility of more than one infinite cluster in $\omega$ under~$\mathbf P$. Since the law of $\omega$ is
not deletion tolerant in the sense analogous to \eqref{eq:insertion}, we need to slightly modify the classical argument of Burton and Keane~\cite{BK}.
(Actually, one can always remove an edge from $\omega \setminus \eta(\sigma^{\circ})$ by paying a constant price, but removing edges from $\eta(\sigma^{\circ})$
cannot be done locally and the cost can be arbitrarily high.)
\begin{lemma} \label{lem:BK}
For $c\in [\sqrt{3},2]$,
\begin{align*}
\mathbf P(\exists \textnormal{ more than one infinite cluster of $\omega$})=0.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:BK}]
By Corollary~\ref{cor:P}, $\mathbf P$ is translation invariant and ergodic when projected to $\mathcal F$, and by Lemma~\ref{lem:insertion}, it is insertion tolerant. Hence, by classical arguments we have
that
\[
\mathbf P(\exists \textnormal{ more than one but finitely many infinite clusters of $\omega$})=0.
\]
To conclude the proof we therefore need to show that
\begin{align} \label{eq:Cinfty}
\mathbf P(\exists \textnormal{ infinitely many infinite clusters of $\omega$})=0.
\end{align}
To this end, we say that $0\in \mathbb Z^2_{\bullet}$ is a \emph{trifurcation} if it belongs to an infinite cluster of $\omega$
that splits into exactly three infinite and no finite clusters after removing $0$ and the edges incident on $0$.
We assume by contradiction that the probability in \eqref{eq:Cinfty} is equal to $1$ (which we may do by ergodicity of $\mathbf P$). We will show that under this assumption
\[
\mathbf P(0 \textnormal{ is a trifurcation})>0.
\]
This will yield the desired contradiction in the same way as in the original argument of Burton and Keane.
In what follows, we will construct trifurcations by modifying (in steps) the configuration $(\sigma,\omega)$ inside a large but finite box.
To this end, for $\Lambda\subset E(\mathbb Z^2_\bullet)$, let
\[
C_6(\Lambda) = \{ \partial \Lambda \textnormal{ intersects at least \emph{six} infinite clusters of } \omega|_{ \Lambda^c} \} \in\mathcal G \otimes \mathcal F_{\Lambda^c},
\]
where $\partial \Lambda$ is the set of vertices of $\Lambda$ adjacent to a vertex outside $\Lambda$, and $\omega|_{ \Lambda^c}$ is the restriction of the configuration $\omega$ to the edges of $\Lambda^c$.
We now fix $\Lambda$ to be a square box large enough so
that for $C_6=C_6(\Lambda)$,
\begin{align*} \label{eq:I6}
\mathbf P(C_6 )>1/2.
\end{align*}
For a set of black vertices $B$ and white vertices $W$, we define
\begin{align*}
S_{\pm}(B) &= \{ \sigma \textnormal{ is constant and equal } \pm 1 \textnormal{ on } B\} \in \mathcal G \otimes \mathcal F, \textnormal{ and} \\
S_{\pm}(W)& = \{ \sigma \textnormal{ is constant and equal } \pm i \textnormal{ on } W\} \in \mathcal G \otimes \mathcal F.
\end{align*}
Note that by the Edwards--Sokal property from Lemma~\ref{lem:EdSo}, for any event $I\in \mathcal G \otimes \mathcal F$ depending only on $\omega$, we have
\[
\mathbf P_{n} (S_+(B) \mid I ) \geq (\tfrac 1 2)^{|B|}
\]
independently of $n$. Hence,
by the weak convergence of $\mathbf P_{n}$ to $\mathbf P$ we know that
\[
\mathbf P (S_+(V(\Lambda)) \mid C_6 ) \geq (\tfrac 12) ^{|V( \Lambda)|}.
\]
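To justify the first of these bounds: by Lemma~\ref{lem:EdSo}, conditionally on~$\omega$, the spins on $B$ are obtained from independent uniform signs assigned to the clusters of $\omega$ intersecting $B$, of which there are at most $|B|$. Hence, for any $\omega$-measurable event $I$ of positive probability,
\begin{align*}
\mathbf P_{n} (S_+(B) \mid I ) = \mathbf E_{\mathbf P_n}\big[\mathbf P_{n} (S_+(B) \mid \omega ) \,\big|\, I\big] \geq (\tfrac 1 2)^{|B|},
\end{align*}
since the conditional probability given $\omega$ is at least $2^{-|B|}$ pointwise.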
Recall that $\eta(\sigma^\circ)\subset E(\mathbb Z^2_\bullet)$ is the set of interfaces separating spins of different value in $\sigma^\circ$, which in turn is the restriction of $\sigma$ to $\mathbb Z^2_\circ$.
The crucial observation now is that for each $(\sigma,\omega) \in S_+(V(\Lambda))\cap C_6$, one can choose a constant sign $\varsigma=\varsigma(\sigma,\omega)=\pm 1$ such that
there are at least \emph{three} infinite clusters in $ \omega|_{ \Lambda^c} \cup \eta( \sigma_{\varsigma}^\circ)$, where
\[ \sigma_{\varsigma}(u)= \begin{cases} \varsigma & \textnormal{ for } u\in V(\Lambda^*), \\
\sigma(u) & \textnormal{ otherwise},
\end{cases}
\]
and where $\Lambda^*\subset E(\mathbb Z^2_{\circ})$ is the box whose vertices are the bounded faces of $\Lambda$ (see Fig.~\ref{fig:BK}).
\begin{figure}
\begin{center}
\includegraphics[scale=1]{BK.pdf}
\caption{The dotted black lines represent the boundary edges of $\Lambda$.
The signs show the sign of $-i\sigma^{\circ}$, the blue lines represent $\eta(\sigma_{\pm}^\circ)$
and the red lines show the edges of $\omega|_{\Lambda^c} \setminus \eta(\sigma_{\pm}^\circ)$ with $(\sigma,\omega)\in C_6$. There are five infinite clusters in $\omega |_{ \Lambda^c}\cup \eta(\sigma_+^\circ) $ and one in $\omega |_{ \Lambda^c}\cup \eta(\sigma_-^\circ) $.
The red dotted edges are added at finite cost to create a trifurcation at $0$.}
\label{fig:BK}
\end{center}
\end{figure}
Moreover, we have that
\begin{align*}
T_{\pm}:= \{ (\sigma,\omega)\in S_+(V(\Lambda))\cap C_6: \varsigma (\sigma,\omega) = \pm 1\} \in \mathcal G_{ V(\Lambda^*)^c } \otimes \mathcal F_{\Lambda^c} ,
\end{align*}
where $\mathcal G_{ V(\Lambda^*)^c }$ is the $\sigma$-algebra generated by the spins outside $V(\Lambda^*)$.
Furthermore, for any $I\in \mathcal G_{ V(\Lambda^*)^c } \otimes \mathcal F_{\Lambda^c} $,
\begin{align} \label{eq:delta1}
\mathbf P (S_\pm(V(\Lambda^*)) \mid S_+( V(\Lambda))\cap I ) \geq \delta_1 >0,
\end{align}
where $\delta_1$ depends only on $\Lambda$.
Indeed, by the definition of the spin model, the only constraint on the values of spins is that if $u,u'\in \mathbb Z^2_{\bullet}$ and $v,v'\in \mathbb Z^2_{\circ}$
are incident on a common vertex of $\mathbb Z^2$, then $(\sigma(u)-\sigma(u'))(\sigma(v)-\sigma(v'))=0$. In other words, the interfaces $\eta(\sigma^\bullet)$ and $\eta(\sigma^\circ)$ cannot cross.
Since here we assume that $\sigma$ is constant on $V(\Lambda)$, we can always set $\sigma$ to be constant on $V(\Lambda^*)$ and keep this constraint satisfied.
This means that the equivalent of \eqref{eq:delta1} is satisfied by $\mathbf P_n$ for all $n$, and hence \eqref{eq:delta1} holds true by taking the weak limit.
We now define
\begin{align*}
S&= \big(S_+(V(\Lambda^*)) \cup S_-(V(\Lambda^*))\big)\cap S_+(V(\Lambda)), \textnormal{ and }\\
C_3&=\{\textnormal{there are at least \emph{three} infinite clusters in $ \omega|_{ \Lambda^c} \cup \eta( \sigma^\circ)$}\}.
\end{align*}
Note that conditioned on $C_3 \cap S$, one can construct a trifurcation with probability $\delta_2>0$ (depending only on $\Lambda$) by opening some of the edges of $\Lambda$ to create three paths connecting $0$ to three infinite clusters at the boundary of $\Lambda$, and by keeping the remaining edges closed (as depicted on the left-hand side of Fig.~\ref{fig:BK}).
Here we use the definition of the process $\omega$ and the fact that $\sigma$ is constant on $V(\Lambda^*)$, and hence the contour configuration
$\eta(\sigma^\circ)$ does not intersect the interior of $\Lambda$.
All in all, we have
\begin{align*}
\mathbf P(0 \textnormal{ is a trifurcation}) &\geq \delta_2 \mathbf P(C_3 \cap S) \\
& \geq\delta_2[ \mathbf P (S_+(V(\Lambda^*))\cap T_+ ) + \mathbf P (S_-(V(\Lambda^*))\cap T_- )] \\
& = \delta_2[\mathbf P (S_+(V(\Lambda^*))| T_+ ) \mathbf P (T_+)+\mathbf P (S_-(V(\Lambda^*))| T_- ) \mathbf P (T_-)]\\
&\geq\delta_1\delta_2[ \mathbf P (T_+)+ \mathbf P (T_-)] \\
&= \delta_1\delta_2 \mathbf P(S_+(V(\Lambda))\cap C_6) \\
&= \delta_1\delta_2\mathbf P(S_+(V(\Lambda))| C_6) \mathbf P (C_6)\\
&\geq \delta_1\delta_2 (\tfrac 12) ^{|V( \Lambda)|+1} \\
&>0.
\end{align*}
Using arguments exactly as in~\cite{BK} we finish the proof.
\end{proof}
We are finally ready to show that $\omega$ does not percolate under $\mathbf P$, which by Lemma~\ref{lem:Lis19} will yield delocalization of the height function for $c\in[\sqrt{2+\sqrt 2},2]$.
\begin{proof}[Proof of Proposition~\ref{prop:nopercolation}]
For $u,u'\in \mathbb Z^2_{\bullet}$, by Corollary~\ref{cor:P} we have
\begin{align*}
\mathbf P(u \textnormal{ connected to } u' \textnormal{ in } \omega) &= \lim_{N\to \infty} \mathbf P(u \textnormal{ connected to } u' \textnormal{ in } \omega|_{\Lambda_N}) \\
& = \lim_{N\to \infty} \lim_{n\to \infty}\mathbf P_{n}(u \textnormal{ connected to } u' \textnormal{ in } \omega|_{\Lambda_N}) \\
&\leq \lim_{n\to \infty}\mathbf P_{n}(u \textnormal{ connected to } u' \textnormal{ in } \omega)\\
& = \lim_{n\to \infty} \mathbf{E}_{\mu^0_{n}}[\sigma(u)\sigma(u')] \\
& = \mathbf{E}_{\mu}[\sigma(u)\sigma(u')].
\end{align*}
The second-to-last equality follows from the Edwards--Sokal property~\eqref{eq:EScor}, and the last one from Proposition~\ref{lem:0conv}.
Combining this with the decorrelation of spins from Theorem~\ref{thm:decorrelation}, we get that
\begin{align} \label{eq:disconnection}
\mathbf P(u \textnormal{ connected to } u' \textnormal{ in } \omega)\to 0\quad \textnormal{ as } \quad |u-u'|\to \infty.
\end{align}
To finish the proof, we now proceed by contradiction along classical lines. We assume that
$\mathbf P( \exists \textnormal{ an infinite cluster of }\omega) > 0$, and by ergodicity of $\mathbf P$ from Corollary~\ref{cor:P} and Lemma~\ref{lem:BK}, we have that
\[
\mathbf P(\exists \textnormal{ a unique infinite cluster of } \omega) =1.
\]
We now fix a box $\Lambda \subset E(\mathbb Z^2_{\bullet})$ so large that
\begin{align}\label{eq:34}
\mathbf P(\textnormal{the infinite cluster of } \omega \textnormal{ intersects }\Lambda) \geq 3/4.
\end{align}
Let
\begin{align*}
A&=\{\textnormal{the infinite cluster of } \omega \textnormal{ intersects }u+\Lambda \textnormal{ and } u'+\Lambda\},\\
B&= \{ \omega \textnormal{ is constant and equal to $1$ on $u+\Lambda$ and $u'+\Lambda$} \}.
\end{align*}
Then, by translation invariance and \eqref{eq:34} we have $\mathbf P(A) \geq 1/2$, and by insertion tolerance from Lemma~\ref{lem:insertion},
we have $\mathbf P(B\mid A) \geq \epsilon^{2|\Lambda|}$, where $\epsilon>0$ is as in \eqref{eq:insertion}.
We can now write
\begin{align*}
&\mathbf P(u \textnormal{ connected to } u' \textnormal{ in } \omega ) \geq \mathbf P(A\cap B) \geq \epsilon^{2|\Lambda|} /2.
\end{align*}
Since this lower bound is positive and independent of $u$ and $u'$, we get a contradiction with \eqref{eq:disconnection}, and we finish the proof.
\end{proof}
\bibliographystyle{amsplain}
\section{%
Introduction
}
Rowmotion (at the combinatorial level) is a bijection $R$ on the set $\JJ(P)$
of order ideals of a finite poset $P$, which assigns to $I \in \JJ(P)$
the order ideal $R(I)$ generated by the minimal elements of the complement $P \setminus I$.
The map $R$ can be also described in terms of toggles.
For each $v \in P$, let $t_v : \JJ(P) \to \JJ(P)$ be the map given by
\begin{equation}
\label{eq:Ctoggle}
t_v(I)
=
\begin{cases}
I \cup \{ v \} &\text{if $v \not\in I$ and $I \cup \{ v \} \in \JJ(P)$,} \\
I \setminus \{ v \} &\text{if $v \in I$ and $I \setminus \{ v \} \in \JJ(P)$,} \\
I &\text{otherwise,}
\end{cases}
\end{equation}
and call it the \emph{toggle} at $v$.
Then the rowmotion map $R$ is expressed as the composition
\begin{equation}
\label{eq:row=toggle}
R = t_{v_1} \circ t_{v_2} \circ \cdots \circ t_{v_N},
\end{equation}
where $(v_1, v_2, \dots, v_N)$ is any linear extension of $P$,
i.e., a list of elements of $P$ such that $v_i < v_j$ in $P$ implies $i<j$.
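Both descriptions of $R$ are easy to compare mechanically on a small example. The following Python sketch (self-contained; all names are ours, not from the paper) implements the toggle composition and the direct definition on the product of two chains $[0,1]\times[0,2]$:

```python
from itertools import combinations

# Poset: product of two chains [0,1] x [0,2], ordered componentwise.
P = [(i, j) for i in range(2) for j in range(3)]
leq = lambda u, v: u[0] <= v[0] and u[1] <= v[1]

def is_ideal(I):
    # downward closed: every element below a member is also a member
    return all(u in I for v in I for u in P if leq(u, v))

def toggle(I, v):
    # flip membership of v if the result is still an order ideal
    J = I ^ {v}
    return J if is_ideal(J) else I

def rowmotion_toggles(I):
    # toggle along a linear extension, from top to bottom
    for v in sorted(P, key=lambda v: v[0] + v[1], reverse=True):
        I = toggle(I, v)
    return I

def rowmotion_direct(I):
    # ideal generated by the minimal elements of the complement P \ I
    comp = [v for v in P if v not in I]
    mins = [v for v in comp
            if not any(leq(u, v) and u != v for u in comp)]
    return {u for u in P if any(leq(u, m) for m in mins)}
```

Running both maps over all ten order ideals of this poset confirms that they agree.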
This rowmotion has been studied from several perspectives and under various names.
See \cite{SW} and \cite{TW} for the history and references.
Rowmotion exhibits nice properties such as periodicity and homomesy
on special posets including root posets (see \cite{Pan, AST})
and minuscule posets (see \cite{RS, RW}).
In general, given a set $S$ and a bijection $f : S \to S$,
we say that a statistic $\theta : S \to \Real$ is \emph{homomesic} with respect to $f$
if there exists a constant $C$ such that for any $\langle f \rangle$-orbit $T$,
$$
\frac{ 1 }{ \# T }
\sum_{x \in T} \theta(x)
=
C.
$$
We refer the reader to \cite{R} for the homomesy phenomenon.
For a minuscule poset $P$ and a simple root $\alpha \in \Pi$, we put
\begin{equation}
\label{eq:file}
P^\alpha = \{ v \in P : c(v) = \alpha \},
\end{equation}
where $c : P \to \Pi$ is the coloring of $P$ with color set $\Pi$, the set of simple roots.
This subset $P^\alpha$ is called the \emph{file} corresponding to $\alpha$.
(See Section~3 for the definition of minuscule posets and related terminology.)
If $P$ is a minuscule poset, then the associated rowmotion map $R$ has the following properties:
\begin{theorem}
\label{thm:prototype}
Let $P$ be a minuscule poset associated to a minuscule weight $\lambda$ of a simple Lie algebra $\mathfrak{g}$.
Then we have
\begin{enumerate}
\item[(a)]
(periodicity, Rush--Shi \cite[Theorem~1.4]{RS})
The rowmotion map $R$ has a finite order equal to the Coxeter number $h$ of $\mathfrak{g}$.
\item[(b)]
(file homomesy, Rush--Wang \cite[Theorem~1.2]{RW})
For each simple root $\alpha \in \Pi$,
the refined order ideal cardinality $\# (I \cap P^\alpha)$ is homomesic with respect to $R$.
More precisely, for any $I \in \JJ(P)$, we have
$$
\frac{1}{h}
\sum_{k=0}^{h-1} \# \left( R^k(I) \cap P^\alpha \right)
=
\langle \varpi^\vee, \lambda \rangle,
$$
where $\varpi^\vee$ is the fundamental coweight corresponding to $\alpha$.
\end{enumerate}
\end{theorem}
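Both parts of Theorem~\ref{thm:prototype} can be verified numerically in small cases. The self-contained Python sketch below (our own code and names) does so for $P = P_{A_4,\varpi_2} \cong [0,1]\times[0,2]$, where $h = 5$, the file of color $\alpha_i$ consists of the cells $(a,b)$ with $2-a+b = i$, and, by the standard type $A$ formula for the inverse Cartan matrix, $\langle \varpi_i^\vee, \varpi_2 \rangle = \min(i,2)(5-\max(i,2))/5$:

```python
from fractions import Fraction
from itertools import combinations

# P_{A_4, varpi_2} as the grid [0,1] x [0,2]; the color of (a, b) is 2 - a + b.
P = [(i, j) for i in range(2) for j in range(3)]
leq = lambda u, v: u[0] <= v[0] and u[1] <= v[1]
is_ideal = lambda I: all(u in I for v in I for u in P if leq(u, v))

def R(I):
    # rowmotion: ideal generated by the minimal elements of P \ I
    comp = [v for v in P if v not in I]
    mins = [v for v in comp if not any(leq(u, v) and u != v for u in comp)]
    return {u for u in P if any(leq(u, m) for m in mins)}

h = 5  # Coxeter number of A_4

def file_average(I, i):
    # average of #(R^k(I) cap P^{alpha_i}) over one full period
    total, J = 0, set(I)
    for _ in range(h):
        total += sum(1 for (a, b) in J if 2 - a + b == i)
        J = R(J)
    return Fraction(total, h)
```

For every order ideal the file averages come out to $3/5$, $6/5$, $4/5$, $2/5$ for $i=1,\dots,4$, and $R^5$ is the identity.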
One motivation of this paper is to lift the results in the above theorem to the birational level.
Einstein--Propp \cite{EP} introduced birational rowmotion
by lifting the notion of toggles from the combinatorial level to the piecewise-linear level,
and then to the birational level.
Given a finite poset $P$, let $\hat{P} = P \sqcup \{ \hat{1}, \hat{0} \}$ be the poset obtained from $P$
by adjoining an extra maximum element $\hat{1}$ and an extra minimum element $\hat{0}$.
For positive real numbers $A$ and $B$, we put
$$
\KK^{A,B}(P)
=
\{ F : \hat{P} \to \Real_{>0} \mid F(\hat{1}) = A, \ F(\hat{0}) = B \},
$$
where $\Real_{>0}$ denotes the set of positive real numbers.
For $v \in P$, we define the \emph{birational toggle} $\tau^{A,B}_v : \KK^{A,B}(P) \to \KK^{A,B}(P)$ at $v$ by
\begin{equation}
\label{eq:Btoggle}
\left( \tau^{A,B}_v F \right)(x)
=
\begin{cases}
\dfrac{ 1 }
{ F(v) }
\cdot
\dfrac{ \sum_{w \in \hat{P}, \, w \lessdot v} F(w) }
{ \sum_{z \in \hat{P}, \, z \gtrdot v} 1/F(z) }
&\text{if $x = v$,}
\\
F(x) &\text{otherwise,}
\end{cases}
\end{equation}
where the symbol $x \gtrdot y$ means that $x$ covers $y$,
i.e., $x>y$ and there is no element $z$ such that $x>z>y$.
It is clear that $\tau^{A,B}_v$ is an involution.
(See (\ref{eq:PLtoggle}) for a definition of piecewise-linear toggles.)
Then we define \emph{birational rowmotion} $\rho^{A,B} : \KK^{A,B}(P) \to \KK^{A,B}(P)$ by
\begin{equation}
\label{eq:Brow}
\rho^{A,B}
=
\tau^{A,B}_{v_1} \circ \cdots \circ \tau^{A,B}_{v_N},
\end{equation}
where $(v_1, \dots, v_N)$ is a linear extension of $P$.
It can be shown that the definition of $\rho^{A,B}$ is independent of the choice of linear extension.
Since rowmotion is defined by toggling from top to bottom,
we have a recursion formula for the values of the birational rowmotion map:
\begin{equation}
\label{eq:Brow_inductive}
\left( \rho^{A,B} F \right)(v)
=
\frac{ 1 }
{ F(v) }
\cdot
\frac{ \sum_{w \in \hat{P}, \, w \lessdot v} F(w) }
{ \sum_{z \in \hat{P}, \, z \gtrdot v} 1/\left( \rho^{A,B} F \right) (z) }.
\end{equation}
We omit the superscript ${}^{A,B}$ and simply write $\KK(P)$, $\tau_v$ and $\rho$
when there is no confusion.
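For concreteness, here is a minimal executable sketch of the birational toggle (\ref{eq:Btoggle}) and rowmotion (\ref{eq:Brow}) on the two-element chain $x < y$, using exact rational arithmetic (the data structures and names are ours):

```python
from fractions import Fraction

A, B = Fraction(2), Fraction(3)
# covers[v]: elements w with w <. v; covered_by[v]: elements z with z >. v
covers = {"x": ["hat0"], "y": ["x"]}
covered_by = {"x": ["y"], "y": ["hat1"]}

def toggle(F, v):
    # birational toggle: F(v) -> (1/F(v)) * sum_{w <. v} F(w) / sum_{z >. v} 1/F(z)
    G = dict(F)
    G[v] = (1 / F[v]) * sum(F[w] for w in covers[v]) \
           / sum(1 / F[z] for z in covered_by[v])
    return G

def rowmotion(F):
    # toggle along a linear extension, from top to bottom
    for v in ["y", "x"]:
        F = toggle(F, v)
    return F

F = {"x": Fraction(5), "y": Fraction(7), "hat1": A, "hat0": B}
```

Each toggle is an involution, and since this chain is the minuscule poset of type $A_2$ with $h = 3$, three applications of \texttt{rowmotion} return the initial labeling.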
For birational rowmotion on a product of two chains,
periodicity and (multiplicative) file homomesy are obtained by
Grinberg--Roby \cite{GR2} and Einstein--Propp \cite{EP} respectively.
In this paper we generalize their results from products of two chains (type $A$ minuscule posets)
to arbitrary minuscule posets.
For a minuscule poset and a simple root $\alpha \in \Pi$, we define
\begin{equation}
\label{eq:Phi}
\Phi_\alpha(F)
=
\prod_{v \in P^\alpha} F(v)
\end{equation}
for $F \in \KK^{A,B}(P)$.
Our main results for birational rowmotion are summarized as follows:
\begin{theorem}
\label{thm:main1}
Let $P$ be the minuscule poset associated to a minuscule weight $\lambda$
of a finite dimensional simple Lie algebra $\mathfrak{g}$.
Let $\rho = \rho^{A,B}$ be the birational rowmotion map.
Then we have
\begin{enumerate}
\item[(a)]
(periodicity)
The map $\rho$
has finite order equal to the Coxeter number $h$ of $\mathfrak{g}$.
\item[(b)]
(reciprocity)
For any $v \in P$ and $F \in \KK^{A,B}(P)$, we have
\begin{equation}
\label{eq:reciprocity}
\left( \rho^{\rank(v)} F \right) (v)
=
\frac{ AB }
{ F( \iota v) },
\end{equation}
where $\rank : P \to \{ 1, 2, \dots, h-1 \}$ is the rank function of the graded poset $P$
and $\iota : P \to P$ is the canonical involutive anti-automorphism of $P$
(see Proposition~\ref{prop:involution}).
\item[(c)]
(file homomesy)
For a simple root $\alpha$, we have
\begin{equation}
\label{eq:homomesyR}
\prod_{k=0}^{h-1} \Phi_\alpha (\rho^k F)
=
A^{h \langle \varpi^\vee, -w_0 \lambda \rangle}
B^{h \langle \varpi^\vee, \lambda \rangle}
\end{equation}
for any $F \in \KK^{A,B}(P)$,
where $w_0$ is the longest element of the Weyl group $W$ of $\mathfrak{g}$,
and $\varpi^\vee$ is the fundamental coweight corresponding to $\alpha$.
\end{enumerate}
\end{theorem}
Part (a) of this theorem is established in \cite{GR1,GR2}
except for the type $E_7$ minuscule poset.
In this paper we provide a way to settle the $E_7$ case by using a computer.
For a type $A$ minuscule poset,
Part (b) is obtained in \cite[Theorem~30]{GR2}.
Our proof of Part (b) is based on a case-by-case analysis (with the help of a computer in types $E_6$ and $E_7$).
Part (c) in type $A$ follows from Einstein--Propp \cite[Theorems~7.3 and 8.5]{EP}
(see Musiker--Roby \cite[Theorem~2.16]{MR} for another proof).
We will give an almost uniform proof to Part (c).
We can also use tropicalization (or ultradiscretization) to deduce the results
for combinatorial rowmotion in Theorem~\ref{thm:prototype} (see Section~2).
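All three parts of Theorem~\ref{thm:main1} can also be verified numerically in small cases. The following self-contained sketch (our own code, with illustrative names) checks them for $P = P_{A_3,\varpi_2} \cong [0,1]\times[0,1]$, written as $a < b,c < d$, where $h = 4$, $\iota$ is the $180^\circ$ rotation, the files are $\{b\}$, $\{a,d\}$, $\{c\}$, and $-w_0\varpi_2 = \varpi_2$, so the $A$- and $B$-exponents in (\ref{eq:homomesyR}) coincide:

```python
from fractions import Fraction

A, B = Fraction(2), Fraction(3)
# The square poset a < b, c < d with hat-elements adjoined.
covers = {"a": ["hat0"], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
covered_by = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["hat1"]}

def toggle(F, v):
    # birational toggle at v, exactly as in the displayed formula
    G = dict(F)
    G[v] = (1 / F[v]) * sum(F[w] for w in covers[v]) \
           / sum(1 / F[z] for z in covered_by[v])
    return G

def rho(F):
    # birational rowmotion: toggle from top to bottom
    for v in ["d", "c", "b", "a"]:
        F = toggle(F, v)
    return F

F = {"a": Fraction(5), "b": Fraction(7), "c": Fraction(11),
     "d": Fraction(13), "hat1": A, "hat0": B}
orbit = [F]
for _ in range(3):
    orbit.append(rho(orbit[-1]))
```

With exact rationals the tests are equalities, not approximations: $\rho^4$ is the identity, $(\rho^{\rank(v)}F)(v) = AB/F(\iota v)$, and the file products equal $A^2B^2$, $A^4B^4$, $A^2B^2$.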
Another aim of this paper is to introduce and study birational Coxeter-motion on minuscule posets,
which is regarded as a generalization of birational promotion on a product of two chains
(see \cite[Definition~5.3]{EP}).
For a simple root $\alpha \in \Pi$, we define $\sigma^{A,B}_\alpha : \KK^{A,B}(P) \to \KK^{A,B}(P)$
as the composition
\begin{equation}
\label{eq:sigma}
\sigma^{A,B}_\alpha = \prod_{v \in P^\alpha} \tau^{A,B}_v,
\end{equation}
which is independent of the order of composition.
Then a \emph{Coxeter-motion map} is a product of all the $\sigma^{A,B}_\alpha$'s in any order.
Our results for birational Coxeter-motion are stated as follows:
\begin{theorem}
\label{thm:main2}
Let $P$ be the minuscule poset associated to a minuscule weight $\lambda$ of a finite dimensional simple Lie algebra $\mathfrak{g}$.
Let $\gamma = \gamma^{A,B}$ be a birational Coxeter-motion map.
Then we have
\begin{enumerate}
\item[(a)]
(periodicity)
The map $\gamma$ has finite order equal to the Coxeter number $h$.
\item[(b)]
(file homomesy)
For each simple root $\alpha \in \Pi$, we have
\begin{equation}
\prod_{k=0}^{h-1} \Phi_\alpha (\gamma^k F)
=
A^{h \langle \varpi^\vee, -w_0 \lambda \rangle}
B^{h \langle \varpi^\vee, \lambda \rangle}.
\end{equation}
\end{enumerate}
\end{theorem}
If $P$ is a type $A$ minuscule poset and $\pi$ is the birational promotion map
(a special case of birational Coxeter-motion maps),
then there is an explicitly defined ``recombination map'' $\mathfrak{R}$
such that $\mathfrak{R} \rho = \pi \mathfrak{R}$ (see \cite[Theorem~8.2]{EP}),
which, together with Theorem~\ref{thm:main1} (a), implies Part (a) of the above theorem.
We prove Part (a) for arbitrary minuscule posets by showing that any birational Coxeter-motion map
is conjugate to the birational rowmotion map in the birational toggle group (Theorem~\ref{thm:conj} below).
By applying tropicalization to Part (a),
we obtain the periodicity of piecewise-linear promotion,
which is proved in \cite[Theorem~1.12]{GPT} via quiver representations.
Part (b) in type $A$ is obtained in \cite[Theorem~7.3]{EP}.
Hopkins \cite{Ho} obtains another example of homomesy for the birational rowmotion
for a wider class of posets including minuscule posets.
\begin{theorem}
\label{thm:Hopkins}
(Hopkins \cite[Theorem~4.43]{Ho})
Let $P$ be a minuscule poset and $\rho = \rho^{A,B}$ the birational rowmotion map.
For $F \in \KK^{A,B}(P)$, we define
$$
\Psi(F)
=
\prod_{x \in P}
\frac{ F(x) }
{ \sum_{y \in \hat{P}, y \lessdot x} F(y) }.
$$
Then we have
$$
\prod_{k=0}^{h-1} \Psi(\rho^k F)
=
\left( \frac{ A }{ B} \right)^{\# P}.
$$
\end{theorem}
Via tropicalization, this theorem reduces to the homomesy phenomenon
of the antichain cardinality statistic, which was proved in \cite[Theorem~1.4]{RW}.
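On the two-element chain $x < y$ (the minuscule poset of type $A_2$ with $h = 3$), one birational rowmotion step works out from (\ref{eq:Brow_inductive}) to $(x,y) \mapsto (AB/y,\, Ax/y)$, which makes Theorem~\ref{thm:Hopkins} easy to check by hand or by machine; the short sketch below (our own code) does the latter with exact rational arithmetic:

```python
from fractions import Fraction

A, B = Fraction(2), Fraction(3)

def rho(F):
    # one rowmotion step on the chain x < y, in closed form
    x, y = F
    return (A * B / y, A * x / y)

def Psi(F):
    # Psi(F) = F(x)/F(hat0) * F(y)/F(x) on this poset
    x, y = F
    return (x / B) * (y / x)

F = (Fraction(5), Fraction(7))
```

Here $\rho^3$ is the identity and the product of the three values of $\Psi$ along the orbit equals $(A/B)^2 = (A/B)^{\#P}$.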
In a forthcoming paper \cite{O}, we use explicit formulas for iterations of the birational rowmotion map
to give refinements of Theorem~\ref{thm:Hopkins}.
Our refinement in type $A$ provides a birational lift of the homomesy given in \cite[Proof of Theorem~27]{EP}.
The rest of this paper is organized as follows.
We collect some general facts concerning birational rowmotion in Section~2,
and give a definition and properties of minuscule posets in Section~3.
In Sections 4 to 6 we give proofs of our main theorems.
The periodicity in Theorem~\ref{thm:main1} (a) and Theorem~\ref{thm:main2} (a) is proved in Section~4,
and the reciprocity in Theorem~\ref{thm:main1} (b) is verified in Section~5.
In Section~6, after investigating local properties around a file,
we complete the proof of file homomesy in Theorem~\ref{thm:main1} (c) and Theorem~\ref{thm:main2} (b).
\subsection*{%
Acknowledgements
}
This work was partially supported by
JSPS Grants-in-Aid for Scientific Research No.~18K03208.
The author is grateful to Tom Roby for fruitful discussions.
\section{%
Generalities on rowmotion
}
In this section, we explain how combinatorial and birational rowmotion are related,
and give some general facts about birational rowmotion.
\subsection{%
Combinatorial, piecewise-linear and birational rowmotion
}
We begin with recalling the definition of piecewise-linear toggles and rowmotion.
Given a finite poset $P$ and real numbers $a$, $b$, we put
$$
\PP^{a,b}(P)
=
\{ f : \hat{P} \to \Real : f(\hat{1}) = a, \, f(\hat{0}) = b \},
$$
where $\hat{P} = P \sqcup \{ \hat{1}, \hat{0} \}$.
We define the \emph{piecewise-linear toggles}
$\tilde{t}^{\pm,a,b}_v : \PP^{a,b}(P) \to \PP^{a,b}(P)$ at $v \in P$
by the formulas
\begin{equation}
\label{eq:PLtoggle}
\begin{aligned}
\left( \tilde{t}^{+,a,b}_v f \right)(v)
&=
\max \{ f(w) : w \in \hat{P}, \, w \lessdot v \} + \min \{ f(z) : z \in \hat{P}, \, z \gtrdot v \} - f(v),
\\
\left( \tilde{t}^{-,a,b}_v f \right)(v)
&=
\min \{ f(w) : w \in \hat{P}, \, w \lessdot v \} + \max \{ f(z) : z \in \hat{P}, \, z \gtrdot v \} - f(v),
\end{aligned}
\end{equation}
and $\left( \tilde{t}^{\pm,a,b}_v f \right)(x) = f(x)$ for $x \neq v$.
For an order ideal $I \in \JJ(P)$, let $\chi^\pm_I$ be the characteristic functions defined by
$$
\chi^+_I(v)
=
\begin{cases}
0 &\text{if $v \in I$ or $v = \hat{0}$,} \\
1 &\text{if $v \in P \setminus I$ or $v = \hat{1}$,}
\end{cases}
\quad
\chi^-_I(v)
=
\begin{cases}
1 &\text{if $v \in I$ or $v = \hat{0}$,} \\
0 &\text{if $v \in P \setminus I$ or $v = \hat{1}$.}
\end{cases}
$$
Then it follows from the definitions (\ref{eq:Ctoggle}) and (\ref{eq:PLtoggle}) that
the toggle $\tilde{t}^{\pm,a,b}_v$ is a piecewise-linear lift of the combinatorial toggle $t_v$
in the following sense:
\begin{equation}
\label{eq:C-PL-toggle}
\tilde{t}^{+,1,0}_v ( \chi^+_I )
=
\chi^+_{t_v I},
\quad
\tilde{t}^{-,0,1}_v ( \chi^-_I )
=
\chi^-_{t_v I}.
\end{equation}
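The intertwining relation (\ref{eq:C-PL-toggle}) can be checked mechanically. The self-contained sketch below (our own code and names) verifies it for every order ideal and every toggle on the square poset $a < b,c < d$:

```python
from itertools import combinations

# Square poset a < b, c < d, with hat-elements carrying the boundary values.
P = ["a", "b", "c", "d"]
covers = {"a": ["hat0"], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
covered_by = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["hat1"]}
below = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"a", "b", "c"}}

def is_ideal(I):
    return all(below[v] <= I for v in I)

def t(I, v):
    # combinatorial toggle: flip v if the result is still an ideal
    J = I ^ {v}
    return J if is_ideal(J) else I

def chi_plus(I):
    # chi^+_I extended to hat{P} with boundary values (a, b) = (1, 0)
    f = {v: 0 if v in I else 1 for v in P}
    f["hat1"], f["hat0"] = 1, 0
    return f

def pl_toggle_plus(f, v):
    # piecewise-linear toggle, max/min version
    g = dict(f)
    g[v] = max(f[w] for w in covers[v]) + min(f[z] for z in covered_by[v]) - f[v]
    return g
```

Enumerating the six order ideals of this poset and comparing both sides of (\ref{eq:C-PL-toggle}) for each toggle confirms the lift.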
The \emph{piecewise-linear rowmotion} map $\tilde{R}^{\pm,a,b} : \PP^{a,b}(P) \to \PP^{a,b}(P)$
is defined by
$$
\tilde{R}^{\pm,a,b} = \tilde{t}^{\pm,a,b}_{v_1} \circ \cdots \circ \tilde{t}^{\pm,a,b}_{v_N},
$$
where $(v_1, \dots, v_N)$ is a linear extension of $P$.
A rational function $F(X_1, \dots, X_m) \in \Rat(X_1, \dots, X_m)$ is called
\emph{subtraction-free} if
$F$ can be expressed as a ratio $F = G/H$ of two polynomials $G(X_1, \dots, X_m)$ and $H(X_1, \dots, X_m)
\in \Int[X_1, \dots, X_m]$ with nonnegative integer coefficients.
By using
$$
\lim_{\ep \to +0}
\ep \log (e^{a/\ep} + e^{b/\ep})
=
\max \{ a, b \},
\quad
\lim_{\ep \to -0}
\ep \log (e^{a/\ep} + e^{b/\ep})
=
\min \{ a, b \},
$$
we can see that, if $F(X_1, \dots, X_m)$ is subtraction-free,
then for any real numbers $x_1, \dots, x_m \in \Real$
the limits
$$
f^\pm(x_1, \cdots, x_m)
=
\lim_{\ep \to \pm 0}
\ep \log F(e^{x_1/\ep}, \cdots, e^{x_m/\ep})
$$
exist and
$f^+(x_1, \dots, x_m)$ (resp. $f^-(x_1, \dots, x_m)$) is the piecewise-linear function in $x_1, \dots, x_m$
obtained from $F$ by replacing the multiplication $\cdot$, the division $/$ and the addition $+$
with the addition $+$, the subtraction $-$ and the maximum $\max$ (resp. the minimum $\min$).
This procedure producing $f^\pm$ from $F$ is called tropicalization (or ultradiscretization).
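The limits above are easy to observe numerically. The following sketch (our own code) evaluates $\ep \log (e^{a/\ep} + e^{b/\ep})$ with the usual log-sum-exp shift to avoid floating-point overflow:

```python
import math

def trop(a, b, eps):
    # eps * log(exp(a/eps) + exp(b/eps)), computed via a log-sum-exp
    # shift so the exponentials stay bounded for small eps
    m = max(a / eps, b / eps)
    return eps * (m + math.log(math.exp(a / eps - m) + math.exp(b / eps - m)))
```

As $\ep \to +0$ this tends to $\max\{a,b\}$ and as $\ep \to -0$ to $\min\{a,b\}$; applying such limits to a subtraction-free expression variable-by-variable is exactly the $\max$/$\min$ replacement rule described above.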
\begin{prop}
\label{prop:tropical}
Let $P$ be a finite poset.
Let $R : \JJ(P) \to \JJ(P)$ and $\rho = \rho^{A,B} : \KK^{A,B}(P) \to \KK^{A,B}(P)$
be the combinatorial and birational rowmotion maps respectively.
Let $m : P \times \Int \to \Int$ be a map with finite support.
If there are integers $p$ and $q$ such that
\begin{equation}
\label{eq:Btropical}
\prod_{(v,k) \in P \times \Int}
\left[ \left( \rho^k F \right)(v) \right]^{m(v,k)}
=
A^p B^q
\end{equation}
for any $F \in \KK^{A,B}(P)$, then
\begin{equation}
\label{eq:Ctropical}
\sum_{(v,k) \in P \times \Int}
m(v,k) \chi[v \not\in R^k(I)]
=
p,
\quad
\sum_{(v,k) \in P \times \Int}
m(v,k) \chi[v \in R^k(I)]
=
q,
\end{equation}
where $\chi[S] = 1$ if $S$ is true and $0$ if $S$ is false.
\end{prop}
\begin{demo}{Proof}
By applying the tropicalization procedure to (\ref{eq:Btropical}), we obtain
$$
\sum_{(v,k) \in P \times \Int}
m(v,k) \left( \left( \tilde{R}^{\pm,a,b} \right)^k f \right)(v)
=
a p + b q
$$
for any $f \in \PP^{a,b}(P)$.
Then specializing $f = \chi^\pm_I$ and using (\ref{eq:C-PL-toggle}), we obtain (\ref{eq:Ctropical}).
\qed
\end{demo}
\begin{corollary}
\label{cor:tropical}
\begin{enumerate}
\item[(a)]
If $\left( \rho^h F \right)(v) = F(v)$ for any $F \in \KK^{A,B}(P)$ and $v \in P$,
then $R^h (I) = I$ for any $I \in \JJ(P)$.
\item[(b)]
Let $v$, $w \in P$ and let $k$ be a positive integer.
If $\left( \rho^k F \right)(v) \cdot F(w) = AB$ for any $F \in \KK^{A,B}(P)$,
then $v \in R^k(I)$ and $w \not\in I$ are equivalent for any $I \in \JJ(P)$.
\item[(c)]
Let $M$ be a subset of $P$ and $h$ be a positive integer.
If $\prod_{k=0}^{h-1} \prod_{v \in M} \left( \rho^k F \right)(v) = A^p B^q$
for any $F \in \KK^{A,B}(P)$,
then we have $\sum_{k=0}^{h-1} \# \left( R^k(I) \cap M \right) = q$ for any $I \in \JJ(P)$.
\end{enumerate}
\end{corollary}
Similar statements hold for birational Coxeter-motion.
\subsection{%
Birational rowmotion on graded posets
}
In this subsection we present some properties of birational rowmotion on graded posets.
A poset $P$ is called \emph{graded of height $n$}
if there exists a rank function $\rank : P \to \{ 1, 2, \dots, n \}$ satisfying the following three conditions:
\begin{enumerate}
\item[(i)]
If $v$ is minimal in $P$, then $\rank(v) = 1$;
\item[(ii)]
If $v$ is maximal in $P$, then $\rank(v) = n$;
\item[(iii)]
If $v$ covers $w$, then $\rank(v) = \rank(w)+1$.
\end{enumerate}
\begin{lemma}
\label{lem:order}
If $P$ is a graded poset of height $n$
and the birational rowmotion map $\rho^{A,B}$ has a finite order $N$,
then $N$ is divisible by $n+1$.
\end{lemma}
\begin{demo}{Proof}
By Corollary~\ref{cor:tropical} (a),
we have $R^N(I) = I$ for all $I \in \JJ(P)$.
On the other hand, it is easy to see that the $\langle R \rangle$-orbit of the empty order ideal $\emptyset$
has length $n+1$.
Hence we see that $n+1$ divides $N$.
\qed
\end{demo}
The following lemma gives a relation between $\rho^{A,B}$ and $\rho^{1,1}$.
\begin{lemma}
\label{lem:A=B=1}
Let $P$ be a graded poset of height $n$.
For a map $F : P \to \Real_{>0}$ and positive real numbers $A$, $B \in \Real_{>0}$,
we denote by $F^{A,B} \in \KK^{A,B}(P)$ the extension of $F$ to $\hat{P}$ such that
$F^{A,B}(\hat{1}) = A$ and $F^{A,B}(\hat{0}) = B$.
For $1 \le k \le n+1$ and $v \in P$, we have
\begin{equation}
\left(
\left( \rho^{A,B} \right)^k F^{A,B}
\right)(v)
=
\left(
\left( \rho^{1,1} \right)^k F^{1,1}
\right)(v)
\times
\begin{cases}
A &\text{if $1 \le k \le \rank(v)-1$,} \\
AB &\text{if $k = \rank(v)$,} \\
B &\text{if $\rank(v)+1 \le k \le n$,} \\
1 &\text{if $k = n+1$.}
\end{cases}
\end{equation}
\end{lemma}
\begin{demo}{Proof}
We can use the recursive formula (\ref{eq:Brow_inductive})
to proceed by double induction on $k$ and $n-\rank(v)$.
\qed
\end{demo}
\subsection{%
Change of variables
}
Let $P$ be a finite poset.
Given an initial state $X \in \KK^{A,B}(P)$,
we regard $X(v)$ ($v \in P$) as indeterminates.
In the computation of $\left( \rho^k X \right)(v)$ ($v \in P$)
of iterations of the birational rowmotion map $\rho = \rho^{A,B}$,
it is convenient to change variables from $\{ X(v) : v \in P \}$
to $\{ Z(v) : v \in P \}$ defined by the formula
\begin{equation}
\label{eq:X2Z}
Z(v)
=
\begin{cases}
X(v) &\text{if $v$ is minimal,} \\
\dfrac{ X(v) }
{ \sum_{w \in P, \, w \lessdot v} X(w) }
&\text{otherwise.}
\end{cases}
\end{equation}
This change of variables is used in \cite{MR} to describe a lattice path formula
for birational rowmotion on a type $A$ minuscule poset.
Then the inverse change of variables is given by
\begin{equation}
\label{eq:Z2X}
X(v) = \sum Z(v_1) Z(v_2) \cdots Z(v_r),
\end{equation}
where the sum is taken over all saturated chains $v_1 \gtrdot \cdots \gtrdot v_r$ in $P$
such that $v_1 = v$ and $v_r$ is minimal in $P$.
Note that this change of variables is a birational lift of Stanley's transfer map
between the order polytope and the chain polytope of a poset (see \cite[Section~3]{Sta}).
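One can confirm on a small example that (\ref{eq:X2Z}) and (\ref{eq:Z2X}) are mutually inverse; the sketch below (our own code) does so on the square poset $a < b,c < d$, computing (\ref{eq:Z2X}) through the equivalent bottom-up recursion $X(v) = Z(v) \sum_{w \lessdot v} X(w)$:

```python
from fractions import Fraction

# Square poset a < b, c < d; covers[v] lists the elements covered by v.
covers = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

def X_to_Z(X):
    # the change of variables: divide by the sum over the lower covers
    return {v: X[v] if not covers[v]
            else X[v] / sum(X[w] for w in covers[v]) for v in X}

def Z_to_X(Z):
    # summing Z over saturated chains down to a minimal element equals
    # this bottom-up recursion along a linear extension
    X = {}
    for v in ["a", "b", "c", "d"]:
        X[v] = Z[v] * (sum(X[w] for w in covers[v]) if covers[v] else 1)
    return X
```

Round-tripping generic rational values in either direction returns the input exactly.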
\section{%
Minuscule posets
}
In this section we review a definition and properties of minuscule posets.
\subsection{%
Definition and properties of minuscule posets
}
Let $\mathfrak{g}$ be a finite dimensional simple Lie algebra over the complex number field $\Comp$
of type $X_n$, where $X \in \{ A, B, C, D, E, F, G \}$ and $n$ is the rank of $\mathfrak{g}$.
We fix a Cartan subalgebra $\mathfrak{h}$ and choose a positive root system $\Delta_+$
of the root system $\Delta \subset \mathfrak{h}^*$.
Let $\Pi = \{ \alpha_1, \dots, \alpha_n \}$ be the set of simple roots,
where we follow \cite[Planche~I--IX]{B1} for the numbering of simple roots.
We denote by $\varpi_i$ the fundamental weight corresponding to
the $i$th simple root $\alpha_i$.
Let $\Delta^\vee_+ \subset \mathfrak{h}$ be the positive coroot system.
Let $W$ be the Weyl group of $\mathfrak{g}$, which acts on $\mathfrak{h}$ and $\mathfrak{h}^*$.
The simple reflections $\{ s_\alpha : \alpha \in \Pi \}$ generate $W$.
For a dominant integral weight $\lambda$,
we denote by $V_{X_n,\lambda}$ the irreducible $\mathfrak{g}$-module with highest weight $\lambda$
and by $L_{X_n,\lambda}$ the set of weights of $V_{X_n,\lambda}$.
We say that $\lambda$ is \emph{minuscule} if $L_{X_n,\lambda}$ is a single $W$-orbit.
See \cite[VIII, \S7, n${}^\circ$3]{B2} for properties of minuscule weights.
It is known that minuscule weights are fundamental weights.
Table~\ref{tab:minuscule} is the list of minuscule weights.
\begin{table}[ht]
\caption{List of minuscule weights}
\label{tab:minuscule}
\centering
\begin{tabular}{c|c|c}
type & minuscule weights & Coxeter number \\
\hline
$A_n$ & $\varpi_1, \varpi_2, \dots, \varpi_n$ & $n+1$ \\
$B_n$ & $\varpi_n$ & $2n$ \\
$C_n$ & $\varpi_1$ & $2n$ \\
$D_n$ & $\varpi_1, \varpi_{n-1}, \varpi_n$ & $2n-2$ \\
$E_6$ & $\varpi_1, \varpi_6$ & $12$ \\
$E_7$ & $\varpi_7$ & $18$ \\
$E_8$ & none & $30$ \\
$F_4$ & none & $12$ \\
$G_2$ & none & $6$
\end{tabular}
\end{table}
Let $\lambda$ be a minuscule weight of a simple Lie algebra $\mathfrak{g}$ of type $X_n$.
We equip the set of weights $L_{X_n,\lambda}$ with a poset structure by defining
$\mu \ge \nu$ if $\nu - \mu$ is a linear combination of simple roots
with nonnegative integer coefficients.
We note that $\lambda$ is the minimum element of the poset $L_{X_n,\lambda}$.
\begin{definition}
Let $\mathfrak{g}$ be a simple Lie algebra of type $X_n$
and $\lambda$ a minuscule weight.
Then the \emph{minuscule poset} $P_{X_n,\lambda}$ is defined by
\begin{equation}
P_{X_n,\lambda}
=
\{ \beta^\vee \in \Delta^\vee_+ : \langle \beta^\vee, \lambda \rangle = 1 \},
\end{equation}
where the partial ordering on $P_{X_n,\lambda}$ is given by saying that
$\alpha^\vee \ge \beta^\vee$ if $\alpha^\vee - \beta^\vee$
is a linear combination of simple coroots with nonnegative integer coefficients.
\end{definition}
\begin{prop}
\label{prop:minuscule}
Let $\lambda$ be a minuscule weight
and $P_{X_n,\lambda}$ be the corresponding minuscule poset.
Then we have
\begin{enumerate}
\item[(a)]
(\cite[Propositions~3.2, 4.1]{P})
The poset $L_{X_n,\lambda}$ is a distributive lattice.
\item[(b)]
(\cite[Theorem~11]{P})
There exists a unique map $c : P_{X_n,\lambda} \to \Pi$, called the \emph{coloring} of $P_{X_n,\lambda}$,
such that the map
$$
\JJ(P_{X_n,\lambda}) \ni I \mapsto \lambda - \sum_{v \in I} c(v) \in L_{X_n,\lambda}
$$
gives an isomorphism of posets.
\end{enumerate}
\end{prop}
If $\lambda$ is a minuscule weight, then the stabilizer $W_\lambda$ of $\lambda$ in $W$
is the maximal parabolic subgroup generated by $\{ s_\beta : \beta \in \Pi \setminus \{ \alpha \} \}$,
where $\alpha$ is the simple root corresponding to the fundamental weight $\lambda$.
\begin{prop}
\label{prop:involution}
Let $P_{X_n,\lambda}$ be the minuscule poset corresponding to a minuscule weight $\lambda$,
and $w_\lambda$ the longest element of the stabilizer $W_\lambda$.
Then the map
$$
\iota : P_{X_n,\lambda} \ni \beta^\vee \mapsto w_\lambda \beta^\vee \in P_{X_n,\lambda}
$$
gives an involutive anti-automorphism of the poset $P_{X_n,\lambda}$.
\end{prop}
\begin{demo}{Proof}
It is enough to show that $\beta^\vee > \gamma^\vee$ implies $w_\lambda \beta^\vee < w_\lambda \gamma^\vee$
for $\beta^\vee$, $\gamma^\vee \in P_{X_n,\lambda}$.
It follows from $\langle \beta^\vee, \lambda \rangle = \langle \gamma^\vee, \lambda \rangle = 1$ that
$\beta^\vee - \gamma^\vee$ is a linear combination of $\Pi^\vee \setminus \{ \alpha^\vee \}$
with nonnegative integer coefficients,
where $\Pi^\vee$ is the set of simple coroots and $\alpha^\vee$ is the simple coroot
dual to $\lambda$.
Since $w_\lambda (\Pi^\vee \setminus \{ \alpha^\vee \}) = - (\Pi^\vee \setminus \{ \alpha^\vee \})$,
we see that $w_\lambda \beta^\vee - w_\lambda \gamma^\vee$
is a linear combination of $\Pi^\vee \setminus \{ \alpha^\vee \}$
with nonpositive integer coefficients.
\qed
\end{demo}
The following properties of minuscule posets can be checked easily
(e.g., by using a description given in the next subsection).
\begin{prop}
\label{prop:minuscule2}
Let $P = P_{X_n,\lambda}$ be the minuscule poset corresponding to a minuscule weight $\lambda$,
and $c : P \to \Pi$ the coloring.
\begin{enumerate}
\item[(a)]
The poset $P$ is graded of height $h-1$, where $h$ is the Coxeter number of $\mathfrak{g}$.
\item[(b)]
The poset $P$ has a unique minimal element $v_{\min}$ and a unique maximal element $v_{\max}$.
Moreover, if we put $\alpha_{\min} = c(v_{\min})$ and $\alpha_{\max} = c(v_{\max})$,
then the simple root $\alpha_{\min}$ corresponds to the fundamental weight $\lambda$
and $\alpha_{\max} = -w_0 \alpha_{\min}$ corresponds to $-w_0 \lambda$,
where $w_0$ is the longest element of $W$.
\item[(c)]
If $v \lessdot w$ in $P$, then their colors $c(v)$ and $c(w)$ are adjacent in
the Dynkin diagram of $\mathfrak{g}$.
\item[(d)]
For each $\alpha \in \Pi$, the subposet $P^\alpha = \{ v \in P : c(v) = \alpha \}$ is a chain.
\item[(e)]
If $v$, $w \in P^\alpha$, then the difference $\rank(v) - \rank(w)$ is even.
\end{enumerate}
\end{prop}
\subsection{%
Description of minuscule posets
}
In this subsection we give an explicit description of minuscule posets and their colorings.
The minuscule posets can be embedded into the poset $\Int^2$,
where $(i,j) \le (i',j')$ in $\Int^2$ if and only if $i \le i'$ and $j \le j'$.
\paragraph{Type $A_n$.}
The positive coroot system $\Delta^\vee_+$ of type $A_n$ can be described as
$\Delta^\vee_+ = \{ e_i - e_j : 1 \le i < j \le n+1 \}$ with $e_1 + \dots + e_{n+1} = 0$.
Then we have
$$
P_{A_n,\varpi_r}
=
\{ e_i - e_j : 1 \le i \le r, \ r+1 \le j \le n+1 \}
$$
and the map $e_i - e_j \mapsto (r-i,j-r-1)$ gives an isomorphism of posets
from $P_{A_n,\varpi_r}$ to the subposet
$$
\{ (i,j) \in \Int^2 : 0 \le i \le r-1, \, 0 \le j \le n-r \}
\subset \Int^2.
$$
The poset $P_{A_n,\varpi_r}$ is a product poset $[0,r-1] \times [0,n-r]$ of two chains,
where $[0,m] = \{ 0, 1, \dots, m \}$ is a chain.
We call this poset $P_{A_n,\varpi_r}$ a \emph{rectangle poset}.
The involution $\iota$ is the $180^\circ$ rotation of the Hasse diagram.
For example, the Hasse diagram and the coloring of $P_{A_7, \varpi_3}$ are given
in Figure~\ref{fig:a},
where we label a vertex $v$ with $i$ to indicate that $c(v) = \alpha_i$.
\begin{figure}[ht]
\centering
\begin{picture}(130,130)
\put(5,45){\circle{10}}
\put(25,25){\circle{10}}
\put(25,65){\circle{10}}
\put(45,5){\circle{10}}
\put(45,45){\circle{10}}
\put(45,85){\circle{10}}
\put(65,25){\circle{10}}
\put(65,65){\circle{10}}
\put(65,105){\circle{10}}
\put(85,45){\circle{10}}
\put(85,85){\circle{10}}
\put(85,125){\circle{10}}
\put(105,65){\circle{10}}
\put(105,105){\circle{10}}
\put(125,85){\circle{10}}
\put(9,49){\line(1,1){12}}
\put(9,41){\line(1,-1){12}}
\put(29,29){\line(1,1){12}}
\put(29,21){\line(1,-1){12}}
\put(29,69){\line(1,1){12}}
\put(29,61){\line(1,-1){12}}
\put(49,9){\line(1,1){12}}
\put(49,49){\line(1,1){12}}
\put(49,41){\line(1,-1){12}}
\put(49,89){\line(1,1){12}}
\put(49,81){\line(1,-1){12}}
\put(69,29){\line(1,1){12}}
\put(69,69){\line(1,1){12}}
\put(69,61){\line(1,-1){12}}
\put(69,109){\line(1,1){12}}
\put(69,101){\line(1,-1){12}}
\put(89,49){\line(1,1){12}}
\put(89,89){\line(1,1){12}}
\put(89,81){\line(1,-1){12}}
\put(89,121){\line(1,-1){12}}
\put(109,69){\line(1,1){12}}
\put(109,101){\line(1,-1){12}}
\put(0,40){\makebox(10,10){$1$}}
\put(20,20){\makebox(10,10){$2$}}
\put(20,60){\makebox(10,10){$2$}}
\put(40,0){\makebox(10,10){$3$}}
\put(40,40){\makebox(10,10){$3$}}
\put(40,80){\makebox(10,10){$3$}}
\put(60,20){\makebox(10,10){$4$}}
\put(60,60){\makebox(10,10){$4$}}
\put(60,100){\makebox(10,10){$4$}}
\put(80,40){\makebox(10,10){$5$}}
\put(80,80){\makebox(10,10){$5$}}
\put(80,120){\makebox(10,10){$5$}}
\put(100,60){\makebox(10,10){$6$}}
\put(100,100){\makebox(10,10){$6$}}
\put(120,80){\makebox(10,10){$7$}}
\end{picture}
\caption{$P_{A_7,\varpi_3}$}
\label{fig:a}
\end{figure}
\paragraph{Type $B_n$.}
If we realize the positive coroot system $\Delta^\vee_+$ of type $B_n$ as
$\Delta^\vee_+ = \{ e_i \pm e_j : 1 \le i < j \le n \} \cup \{ 2 e_i : 1 \le i \le n \}$, then
we have
$$
P_{B_n,\varpi_n}
=
\{ e_i + e_j : 1 \le i \le j \le n \},
$$
and the map $e_i + e_j \mapsto (n-j,n-i)$ gives a poset isomorphism from $P_{B_n,\varpi_n}$
to the subposet
$$
\{ (i,j) \in \Int^2 : 0 \le i \le j \le n-1 \}
\subset \Int^2.
$$
We call $P_{B_n,\varpi_n}$ a \emph{shifted staircase poset}.
The involution $\iota$ is the horizontal flip of the Hasse diagram.
For example, the Hasse diagram of $P_{B_4,\varpi_4}$ and its coloring are given in Figure~\ref{fig:b}.
\begin{figure}[ht]
\setlength{\unitlength}{1pt}
\begin{minipage}{0.24\hsize}
\centering
\begin{picture}(70,130)
\put(5,5){\circle{10}}
\put(5,45){\circle{10}}
\put(5,85){\circle{10}}
\put(5,125){\circle{10}}
\put(25,25){\circle{10}}
\put(25,65){\circle{10}}
\put(25,105){\circle{10}}
\put(45,45){\circle{10}}
\put(45,85){\circle{10}}
\put(65,65){\circle{10}}
\put(9,9){\line(1,1){12}}
\put(9,49){\line(1,1){12}}
\put(9,41){\line(1,-1){12}}
\put(9,89){\line(1,1){12}}
\put(9,81){\line(1,-1){12}}
\put(9,121){\line(1,-1){12}}
\put(29,29){\line(1,1){12}}
\put(29,69){\line(1,1){12}}
\put(29,61){\line(1,-1){12}}
\put(29,101){\line(1,-1){12}}
\put(49,49){\line(1,1){12}}
\put(49,81){\line(1,-1){12}}
\put(0,0){\makebox(10,10){$4$}}
\put(0,40){\makebox(10,10){$4$}}
\put(0,80){\makebox(10,10){$4$}}
\put(0,120){\makebox(10,10){$4$}}
\put(20,20){\makebox(10,10){$3$}}
\put(20,60){\makebox(10,10){$3$}}
\put(20,100){\makebox(10,10){$3$}}
\put(40,40){\makebox(10,10){$2$}}
\put(40,80){\makebox(10,10){$2$}}
\put(60,60){\makebox(10,10){$1$}}
\end{picture}
\caption{$P_{B_4,\varpi_4}$}
\label{fig:b}
\end{minipage}
\begin{minipage}{0.24\hsize}
\centering
\begin{picture}(70,130)
\put(5,5){\circle{10}}
\put(5,125){\circle{10}}
\put(25,25){\circle{10}}
\put(25,105){\circle{10}}
\put(45,45){\circle{10}}
\put(45,85){\circle{10}}
\put(65,65){\circle{10}}
\put(9,9){\line(1,1){12}}
\put(9,121){\line(1,-1){12}}
\put(29,29){\line(1,1){12}}
\put(29,101){\line(1,-1){12}}
\put(49,49){\line(1,1){12}}
\put(49,81){\line(1,-1){12}}
\put(0,0){\makebox(10,10){$1$}}
\put(0,120){\makebox(10,10){$1$}}
\put(20,20){\makebox(10,10){$2$}}
\put(20,100){\makebox(10,10){$2$}}
\put(40,40){\makebox(10,10){$3$}}
\put(40,80){\makebox(10,10){$3$}}
\put(60,60){\makebox(10,10){$4$}}
\end{picture}
\caption{$P_{C_4,\varpi_1}$}
\label{fig:c}
\end{minipage}
\begin{minipage}{0.24\hsize}
\centering
\begin{picture}(70,130)
\put(5,5){\circle{10}}
\put(5,125){\circle{10}}
\put(25,25){\circle{10}}
\put(25,65){\circle{10}}
\put(25,105){\circle{10}}
\put(45,45){\circle{10}}
\put(45,85){\circle{10}}
\put(65,65){\circle{10}}
\put(9,9){\line(1,1){12}}
\put(9,121){\line(1,-1){12}}
\put(29,29){\line(1,1){12}}
\put(29,69){\line(1,1){12}}
\put(29,61){\line(1,-1){12}}
\put(29,101){\line(1,-1){12}}
\put(49,49){\line(1,1){12}}
\put(49,81){\line(1,-1){12}}
\put(0,0){\makebox(10,10){$1$}}
\put(0,120){\makebox(10,10){$1$}}
\put(20,20){\makebox(10,10){$2$}}
\put(20,60){\makebox(10,10){$5$}}
\put(20,100){\makebox(10,10){$2$}}
\put(40,40){\makebox(10,10){$3$}}
\put(40,80){\makebox(10,10){$3$}}
\put(60,60){\makebox(10,10){$4$}}
\end{picture}
\caption{$P_{D_5,\varpi_1}$}
\label{fig:d1}
\end{minipage}
\begin{minipage}{0.24\hsize}
\centering
\begin{picture}(70,130)
\put(5,5){\circle{10}}
\put(5,45){\circle{10}}
\put(5,85){\circle{10}}
\put(5,125){\circle{10}}
\put(25,25){\circle{10}}
\put(25,65){\circle{10}}
\put(25,105){\circle{10}}
\put(45,45){\circle{10}}
\put(45,85){\circle{10}}
\put(65,65){\circle{10}}
\put(9,9){\line(1,1){12}}
\put(9,49){\line(1,1){12}}
\put(9,41){\line(1,-1){12}}
\put(9,89){\line(1,1){12}}
\put(9,81){\line(1,-1){12}}
\put(9,121){\line(1,-1){12}}
\put(29,29){\line(1,1){12}}
\put(29,69){\line(1,1){12}}
\put(29,61){\line(1,-1){12}}
\put(29,101){\line(1,-1){12}}
\put(49,49){\line(1,1){12}}
\put(49,81){\line(1,-1){12}}
\put(0,0){\makebox(10,10){$5$}}
\put(0,40){\makebox(10,10){$4$}}
\put(0,80){\makebox(10,10){$5$}}
\put(0,120){\makebox(10,10){$4$}}
\put(20,20){\makebox(10,10){$3$}}
\put(20,60){\makebox(10,10){$3$}}
\put(20,100){\makebox(10,10){$3$}}
\put(40,40){\makebox(10,10){$2$}}
\put(40,80){\makebox(10,10){$2$}}
\put(60,60){\makebox(10,10){$1$}}
\end{picture}
\caption{$P_{D_5,\varpi_5}$}
\label{fig:d2}
\end{minipage}
\end{figure}
\paragraph{Type $C_n$.}
If we realize the positive coroot system $\Delta^\vee_+$ of type $C_n$ as
$\Delta^\vee_+ = \{ e_i \pm e_j : 1 \le i < j \le n \} \cup \{ e_i : 1 \le i \le n \}$, then
we have
$$
P_{C_n,\varpi_1}
=
\{ e_1 - e_2, \dots, e_1 - e_n, e_1, e_1 + e_n, \dots, e_1 + e_2 \}.
$$
The poset $P_{C_n,\varpi_1}$ is a chain, and it is isomorphic to
the subposet
$$
\{ (1,1), \dots, (1,n-1), (1,n), (2,n), \dots, (n,n) \} \subset \Int^2.
$$
For example, the Hasse diagram of $P_{C_4,\varpi_1}$ and its coloring are given in Figure~\ref{fig:c}.
Note that $P_{C_n,\varpi_1}$ is isomorphic to $P_{A_{2n-1},\varpi_1}$,
but they have different colorings.
\paragraph{Type $D_n$.}
We realize the positive coroot system $\Delta^\vee_+$ of type $D_n$ as
$\Delta^\vee_+ = \{ e_i \pm e_j : 1 \le i < j \le n \}$.
For the minuscule weight $\varpi_1$, we have
$$
P_{D_n,\varpi_1}
=
\{ e_1-e_2, \dots, e_1-e_{n-1}, e_1-e_n, e_1+e_n, e_1+e_{n-1}, \dots, e_1+e_2 \},
$$
and it is isomorphic to the subposet
$$
\{ (1,1), \dots, (1,n-1), (1,n), (2,n-1), (2,n), \dots, (n,n) \} \subset \Int^2.
$$
See Figure~\ref{fig:d1} for the Hasse diagram of $P_{D_5,\varpi_1}$ and its coloring.
The poset $P_{D_n,\varpi_1}$ is called a \emph{double-tailed diamond poset}.
The involutive anti-automorphism $\iota$ is given by
$$
\iota (e_1 + e_k) = e_1 - e_k \quad(1 \le k \le n-1),
\quad
\iota (e_1 + \ep e_n) = e_1 + (-1)^n \ep e_n.
$$
For the minuscule weights $\varpi_n$ and $\varpi_{n-1}$, we have
$$
P_{D_n,\varpi_n}
=
\{ e_i + e_j : 1 \le i < j \le n \}
$$
and $P_{D_n,\varpi_{n-1}}$ is obtained from $P_{D_n,\varpi_n}$ by replacing $e_i + e_n$ with $e_i-e_n$
for $1 \le i \le n-1$.
Both posets $P_{D_n,\varpi_n}$ and $P_{D_n,\varpi_{n-1}}$ are isomorphic to
$\{ (i,j) \in \Int^2 : 0 \le i \le j \le n-2 \}$.
For example, the Hasse diagram and the coloring of $P_{D_5,\varpi_5}$ are given in Figure~\ref{fig:d2}.
Note that $P_{D_n,\varpi_{n-1}} \cong P_{D_n,\varpi_n}$ and they are isomorphic to $P_{B_{n-1},\varpi_{n-1}}$,
but they have different colorings.
\paragraph{Type $E_6$.}
The minuscule poset $P_{E_6,\varpi_6}$ is isomorphic to the subposet
$$
\left\{
\begin{matrix}
(1,1),(2,1),(3,1),(4,1),(5,1),
(3,2),(4,2),(5,2), \\
(4,3),(5,3),(6,3),
(4,4),(5,4),(6,4),(7,4),(8,4)
\end{matrix}
\right\}
\subset \Int^2,
$$
and the Hasse diagram and the coloring are given in Figure~\ref{fig:e6}.
The involution $\iota$ is the $180^\circ$ rotation of the Hasse diagram.
As posets, $P_{E_6,\varpi_1} \cong P_{E_6,\varpi_6}$.
\begin{figure}[ht]
\begin{minipage}{0.49\hsize}
\begin{center}
\begin{picture}(90,210)
\put(5,85){\circle{10}}
\put(5,205){\circle{10}}
\put(25,65){\circle{10}}
\put(25,105){\circle{10}}
\put(25,145){\circle{10}}
\put(25,185){\circle{10}}
\put(45,45){\circle{10}}
\put(45,85){\circle{10}}
\put(45,125){\circle{10}}
\put(45,165){\circle{10}}
\put(65,25){\circle{10}}
\put(65,65){\circle{10}}
\put(65,105){\circle{10}}
\put(65,145){\circle{10}}
\put(85,5){\circle{10}}
\put(85,125){\circle{10}}
\put(9,89){\line(1,1){12}}
\put(9,81){\line(1,-1){12}}
\put(9,201){\line(1,-1){12}}
\put(29,69){\line(1,1){12}}
\put(29,61){\line(1,-1){12}}
\put(29,109){\line(1,1){12}}
\put(29,101){\line(1,-1){12}}
\put(29,149){\line(1,1){12}}
\put(29,141){\line(1,-1){12}}
\put(29,181){\line(1,-1){12}}
\put(49,49){\line(1,1){12}}
\put(49,41){\line(1,-1){12}}
\put(49,89){\line(1,1){12}}
\put(49,81){\line(1,-1){12}}
\put(49,129){\line(1,1){12}}
\put(49,121){\line(1,-1){12}}
\put(49,161){\line(1,-1){12}}
\put(69,21){\line(1,-1){12}}
\put(69,109){\line(1,1){12}}
\put(69,141){\line(1,-1){12}}
\put(0,80){\makebox(10,10){$1$}}
\put(0,200){\makebox(10,10){$1$}}
\put(20,60){\makebox(10,10){$3$}}
\put(20,100){\makebox(10,10){$3$}}
\put(20,140){\makebox(10,10){$2$}}
\put(20,180){\makebox(10,10){$3$}}
\put(40,40){\makebox(10,10){$4$}}
\put(40,80){\makebox(10,10){$4$}}
\put(40,120){\makebox(10,10){$4$}}
\put(40,160){\makebox(10,10){$4$}}
\put(60,20){\makebox(10,10){$5$}}
\put(60,60){\makebox(10,10){$2$}}
\put(60,100){\makebox(10,10){$5$}}
\put(60,140){\makebox(10,10){$5$}}
\put(80,0){\makebox(10,10){$6$}}
\put(80,120){\makebox(10,10){$6$}}
\end{picture}
\end{center}
\caption{$P_{E_6,\varpi_6}$}
\label{fig:e6}
\end{minipage}
\begin{minipage}{0.49\hsize}
\begin{center}
\begin{picture}(110,330)
\put(5,5){\circle{10}}
\put(5,165){\circle{10}}
\put(5,325){\circle{10}}
\put(25,25){\circle{10}}
\put(25,145){\circle{10}}
\put(25,185){\circle{10}}
\put(25,305){\circle{10}}
\put(45,45){\circle{10}}
\put(45,85){\circle{10}}
\put(45,125){\circle{10}}
\put(45,165){\circle{10}}
\put(45,205){\circle{10}}
\put(45,245){\circle{10}}
\put(45,285){\circle{10}}
\put(65,65){\circle{10}}
\put(65,105){\circle{10}}
\put(65,145){\circle{10}}
\put(65,185){\circle{10}}
\put(65,225){\circle{10}}
\put(65,265){\circle{10}}
\put(85,85){\circle{10}}
\put(85,125){\circle{10}}
\put(85,165){\circle{10}}
\put(85,205){\circle{10}}
\put(85,245){\circle{10}}
\put(105,105){\circle{10}}
\put(105,225){\circle{10}}
\put(9,9){\line(1,1){12}}
\put(9,169){\line(1,1){12}}
\put(9,161){\line(1,-1){12}}
\put(9,321){\line(1,-1){12}}
\put(29,29){\line(1,1){12}}
\put(29,149){\line(1,1){12}}
\put(29,141){\line(1,-1){12}}
\put(29,189){\line(1,1){12}}
\put(29,181){\line(1,-1){12}}
\put(29,301){\line(1,-1){12}}
\put(49,49){\line(1,1){12}}
\put(49,89){\line(1,1){12}}
\put(49,81){\line(1,-1){12}}
\put(49,129){\line(1,1){12}}
\put(49,121){\line(1,-1){12}}
\put(49,169){\line(1,1){12}}
\put(49,161){\line(1,-1){12}}
\put(49,209){\line(1,1){12}}
\put(49,201){\line(1,-1){12}}
\put(49,249){\line(1,1){12}}
\put(49,241){\line(1,-1){12}}
\put(49,281){\line(1,-1){12}}
\put(69,69){\line(1,1){12}}
\put(69,109){\line(1,1){12}}
\put(69,101){\line(1,-1){12}}
\put(69,149){\line(1,1){12}}
\put(69,141){\line(1,-1){12}}
\put(69,189){\line(1,1){12}}
\put(69,181){\line(1,-1){12}}
\put(69,229){\line(1,1){12}}
\put(69,221){\line(1,-1){12}}
\put(69,261){\line(1,-1){12}}
\put(89,89){\line(1,1){12}}
\put(89,121){\line(1,-1){12}}
\put(89,209){\line(1,1){12}}
\put(89,241){\line(1,-1){12}}
\put(0,0){\makebox(10,10){$7$}}
\put(0,160){\makebox(10,10){$7$}}
\put(0,320){\makebox(10,10){$7$}}
\put(20,20){\makebox(10,10){$6$}}
\put(20,140){\makebox(10,10){$6$}}
\put(20,180){\makebox(10,10){$6$}}
\put(20,300){\makebox(10,10){$6$}}
\put(40,40){\makebox(10,10){$5$}}
\put(40,80){\makebox(10,10){$2$}}
\put(40,120){\makebox(10,10){$5$}}
\put(40,160){\makebox(10,10){$5$}}
\put(40,200){\makebox(10,10){$5$}}
\put(40,240){\makebox(10,10){$2$}}
\put(40,280){\makebox(10,10){$5$}}
\put(60,60){\makebox(10,10){$4$}}
\put(60,100){\makebox(10,10){$4$}}
\put(60,140){\makebox(10,10){$4$}}
\put(60,180){\makebox(10,10){$4$}}
\put(60,220){\makebox(10,10){$4$}}
\put(60,260){\makebox(10,10){$4$}}
\put(80,80){\makebox(10,10){$3$}}
\put(80,120){\makebox(10,10){$3$}}
\put(80,160){\makebox(10,10){$2$}}
\put(80,200){\makebox(10,10){$3$}}
\put(80,240){\makebox(10,10){$3$}}
\put(100,100){\makebox(10,10){$1$}}
\put(100,220){\makebox(10,10){$1$}}
\end{picture}
\end{center}
\caption{$P_{E_7,\varpi_7}$}
\label{fig:e7}
\end{minipage}
\end{figure}
\paragraph{Type $E_7$.}
The minuscule poset $P_{E_7,\varpi_7}$ is isomorphic to the subposet
$$
\left\{
\begin{matrix}
(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),
(2,4),(2,5),(2,6), \\
(3,5),(3,6),(3,7),
(4,5),(4,6),(4,7),
(5,5),(5,6),(5,7), \\
(4,8),(4,9),
(5,8),(5,9),
(6,8),(6,9),
(7,9),
(8,9),
(9,9)
\end{matrix}
\right\}
\subset \Int^2,
$$
and the Hasse diagram and the coloring are given in Figure~\ref{fig:e7}.
The involution $\iota$ is the horizontal flip of the Hasse diagram.
\section{%
Periodicity
}
The goal of this section is to prove the periodicity of birational rowmotion
and Coxeter-motion (Theorem~\ref{thm:main1} (a) and Theorem~\ref{thm:main2} (a)).
\subsection{%
Periodicity of birational rowmotion
}
For the birational rowmotion map on minuscule posets,
the periodicity has been established in \cite{GR1, GR2}
except for the type $E_7$ minuscule poset.
Let $P$ be a minuscule poset associated to a Lie algebra $\mathfrak{g}$,
and $\rho^{A,B} : \KK^{A,B}(P) \to \KK^{A,B}(P)$ the birational rowmotion map.
Since the periodicity depends only on the poset structure,
we may assume that $\mathfrak{g}$ is simply-laced.
By Proposition~\ref{prop:minuscule2} (a) and Lemmas~\ref{lem:order} and \ref{lem:A=B=1},
it is enough to show that
$\rho = \rho^{1,1}$ satisfies $\rho^h = 1$,
where $h$ is the Coxeter number of $\mathfrak{g}$.
\begin{itemize}
\item
If $P$ is a type $A_n$ minuscule poset, i.e., if $P$ is a rectangle poset $[0,r-1] \times [0, n-r]$,
then it was shown that the birational rowmotion map $\rho$ has order $n+1$
(Grinberg--Roby \cite[Theorem~30]{GR2}, see \cite[Corollary~2.12]{MR} for another proof).
\item
If $P = P_{D_n,\varpi_1}$ is a double-tailed diamond poset,
then $P$ is a skeletal poset of height $2n-3$,
and it follows from \cite[Propositions~61, 74 and 75]{GR1} that $\rho$ has order $2n-2$
(see \cite[Section~10]{GR1} for a definition of skeletal posets and details).
\item
If $P = P_{D_n,\varpi_n}$ is a shifted staircase poset,
then Grinberg--Roby \cite[Theorem~58]{GR2} proved that $\rho$ has order $2n-2$.
\item
If $P = P_{E_6,\varpi_6}$ is the minuscule poset of type $E_6$, then by using a computer we can verify
that $\rho$ has order $12$.
\item
Let $P = P_{E_7,\varpi_7}$ be the minuscule poset of type $E_7$.
Given an initial state $X \in \KK^{1,1}(P)$,
we regard $\{ X(v) : v \in P \}$ as indeterminates
and introduce new indeterminates $\{ Z(v) : v \in P \}$ by (\ref{eq:X2Z}).
On the author's laptop, it takes about 20 seconds for Maple19
to compute all the values $\left( \rho^k X \right)(v)$ ($0 \le k \le 18$, $v \in P$)
as rational functions in $\{ Z(v) : v \in P \}$
and check that $\left( \rho^{18} X \right)(v) = X(v)$ for all $v \in P$.
\end{itemize}
This completes the proof of Theorem~\ref{thm:main1} (a).
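The computer verifications above can be imitated in exact rational arithmetic. The following sketch is not the author's Maple code: the top-to-bottom toggling convention with boundary values $A = B = 1$ and the helper name `rowmotion` are our assumptions. It checks $\rho^{n+1} = \mathrm{id}$ on the rectangle poset $P_{A_4,\varpi_2} = [0,1] \times [0,2]$:

```python
from fractions import Fraction

def rowmotion(F, a, b):
    """One step of birational rowmotion with A = B = 1 on the
    rectangle poset [0,a-1] x [0,b-1], toggling ranks from top to bottom."""
    G = dict(F)
    for r in range(a + b - 2, -1, -1):        # ranks i+j, top row first
        for i in range(a):
            j = r - i
            if not (0 <= j < b):
                continue
            # elements below keep their old values; elements above are already toggled
            num = sum((G[d] for d in ((i - 1, j), (i, j - 1)) if d in G), Fraction(0)) or Fraction(1)
            den = sum((1 / G[u] for u in ((i + 1, j), (i, j + 1)) if u in G), Fraction(0)) or Fraction(1)
            G[(i, j)] = num / (den * G[(i, j)])
    return G

# P_{A_4, varpi_2} is the rectangle [0,1] x [0,2]; the Coxeter number of A_4 is h = 5
a, b = 2, 3
F = {(i, j): Fraction(2 * i + 3 * j + 1) for i in range(a) for j in range(b)}
G = dict(F)
for _ in range(a + b):
    G = rowmotion(G, a, b)
assert G == F  # rho^h = rho^(n+1) = id
```

Working over `Fraction` keeps the check exact; one rational starting point does not prove the identity of rational maps, but any counterexample would be detected.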
\subsection{%
Periodicity of birational Coxeter-motion
}
In order to prove the periodicity of birational Coxeter-motion (Theorem~\ref{thm:main2} (a)),
we work with the birational toggle group and show that
any birational Coxeter-motion maps are conjugate to the birational rowmotion map
in this group.
Let $P$ be a finite poset and fix positive real numbers $A$ and $B$.
We define the \emph{birational toggle group}, denoted by $G(P)$,
to be the subgroup generated by birational toggles $\tau_v = \tau^{A,B}_v$ ($v \in P$)
in the group of all bijections on $\KK^{A,B}(P)$.
A key tool here is the non-commutativity graph.
Given elements $g_1, \dots, g_n$ of a group $G$,
the \emph{non-commutativity graph} $\Gamma(g_1, \dots, g_n)$ is defined as the graph
with vertex set $\{ 1, 2, \dots, n \}$, in which
two vertices $i$ and $j$ are joined if and only if $g_i g_j \neq g_j g_i$.
The following lemma is useful.
\begin{lemma}
\label{lem:conj}
(\cite[V, \S6, n${}^\circ$1, Lemma~1]{B1})
Let $g_1, \dots, g_n$ be elements of a group $G$.
If the non-commutativity graph $\Gamma(g_1, \dots, g_n)$ has no cycle,
then $g_{\nu(1)} \dots g_{\nu(n)}$ is conjugate to $g_1 \dots g_n$ in $G$
for any permutation $\nu \in S_n$.
\end{lemma}
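Lemma~\ref{lem:conj} can be illustrated in a finite group: the adjacent transpositions $s_1, s_2, s_3$ of $S_4$ have the path $1 - 2 - 3$ as non-commutativity graph, which has no cycle, so all orderings of their product are conjugate; in $S_4$ conjugacy is visible from cycle types. A small sketch (the helper names `compose` and `cycle_type` are ours):

```python
from itertools import permutations

def compose(p, q):                 # (p o q)(x) = p[q[x]]
    return tuple(p[q[x]] for x in range(len(p)))

def cycle_type(p):
    seen, ct = set(), []
    for x in range(len(p)):
        if x not in seen:
            c = 0
            while x not in seen:
                seen.add(x)
                x = p[x]
                c += 1
            ct.append(c)
    return tuple(sorted(ct))

# adjacent transpositions of S_4 (acting on points 0,1,2,3); s1 and s3 commute,
# so the non-commutativity graph is the path 1 -- 2 -- 3, a tree
s1, s2, s3 = (1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)
types = {cycle_type(compose(compose(a, b), c))
         for a, b, c in permutations((s1, s2, s3))}
assert types == {(4,)}   # every ordering yields a 4-cycle: all six products are conjugate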
First we prove that all birational Coxeter-motion maps are conjugate.
\begin{prop}
\label{prop:conj}
Let $P$ be a minuscule poset.
Then all birational Coxeter-motion maps are conjugate to each other in
the birational toggle group $G(P)$.
\end{prop}
\begin{demo}{Proof}
Note that the birational toggles $\tau_v$ and $\tau_w$ commute
unless $v \lessdot w$ or $v \gtrdot w$.
It follows from Proposition~\ref{prop:minuscule2} (c) that,
if simple roots $\alpha$ and $\beta$ are not adjacent in the Dynkin diagram of $\mathfrak{g}$,
then the corresponding elements $\sigma_\alpha$ and $\sigma_\beta$ commute with each other in $G(P)$.
Hence the non-commutativity graph $\Gamma(\sigma_{\alpha_1}, \dots, \sigma_{\alpha_n})$,
where $\alpha_1, \dots, \alpha_n$ are the simple roots,
is a subgraph (of the underlying simple graph) of the Dynkin diagram.
Since the Dynkin diagram of $\mathfrak{g}$ has no cycle,
we can use Lemma~\ref{lem:conj} to conclude that any two Coxeter-motion maps
are conjugate in $G(P)$.
\qed
\end{demo}
The periodicity of birational Coxeter-motion maps (Theorem~\ref{thm:main2} (a)) immediately follows
from the following theorem and the periodicity of the birational rowmotion map
(Theorem~\ref{thm:main1} (a)).
\begin{theorem}
\label{thm:conj}
Let $P$ be a minuscule poset.
Then any birational Coxeter-motion map is conjugate to the birational rowmotion map $\rho = \rho^{A,B}$
in the birational toggle group $G(P)$.
\end{theorem}
This theorem is a birational lift of \cite[Theorem~1.3]{RS}.
In order to prove this theorem, we use the notion of an rc-poset,
introduced by Striker--Williams \cite[Section~4.2]{SW}.
We put $\Lambda = \{ (i,j) \in \Int^2 : \text{$i+j$ is even} \}$.
A poset $P$ is called a \emph{rowed-and-columned poset} (\emph{rc-poset} for short)
if there is a map $\pi : P \to \Lambda$ such that,
if $v$ covers $u$ in $P$ and $\pi(v) = (i,j)$,
then $\pi(u) = (i+1,j-1)$ or $(i-1,j-1)$.
Minuscule posets $P = P_{X_n,\lambda}$ are rc-posets with respect to the composite map
$\pi : P \to \Lambda$ of the embedding $P \hookrightarrow \Int^2$ given in Subsection~3.2
with the map $\Int^2 \ni (i,j) \mapsto (j-i,j+i) \in \Lambda$.
A \emph{row} (resp. \emph{column}) of an rc-poset $P$ is a subset $M$ of $P$ of the form
\begin{gather*}
M = \{ v \in P : \text{the second coordinate of $\pi(v)$ equals $r$} \},
\\
\text{(resp. }
M = \{ v \in P : \text{the first coordinate of $\pi(v)$ equals $c$} \}
\text{)}
\end{gather*}
for some $r$ (resp. $c$).
If $M$ is a subset of a row or a column of $P$,
then the composition of toggles $\tau_v$ ($v \in M$) is independent of the order of composition,
so we denote by $\tau[M]$ the resulting element of the toggle group $G(P)$.
If $R_1, \dots, R_n$ are the non-empty rows of an rc-poset $P$ from bottom to top,
then the rowmotion map $\rho = \rho^{A,B}$ is given by
$$
\rho = \tau[R_1] \circ \tau[R_2] \circ \dots \circ \tau[R_n].
$$
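The order-independence of $\tau[M]$ can be spot-checked numerically: a toggle at $v$ rewrites only the value at $v$, and two incomparable elements of a row are never neighbors in the Hasse diagram. A minimal sketch on the $3 \times 3$ rectangle, assuming the toggle convention with $A = B = 1$ (the helper name `toggle` is ours):

```python
from fractions import Fraction

def toggle(F, v, a, b):
    """Birational toggle at v on the rectangle [0,a-1] x [0,b-1], with A = B = 1."""
    i, j = v
    G = dict(F)
    num = sum((F[d] for d in ((i - 1, j), (i, j - 1)) if d in F), Fraction(0)) or Fraction(1)
    den = sum((1 / F[u] for u in ((i + 1, j), (i, j + 1)) if u in F), Fraction(0)) or Fraction(1)
    G[v] = num / (den * F[v])
    return G

a, b = 3, 3
F = {(i, j): Fraction(i + 2 * j + 1) for i in range(a) for j in range(b)}
# (0,2) and (1,1) are incomparable elements of the same row (rank 2)
one = toggle(toggle(F, (0, 2), a, b), (1, 1), a, b)
two = toggle(toggle(F, (1, 1), a, b), (0, 2), a, b)
assert one == two  # toggles within a row commute, so tau[M] is well defined
```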
The following lemma is proved by exactly the same argument as in \cite{SW}.
\begin{lemma}
\label{lem:rc}
(\cite[Theorem~5.2]{SW})
Let $P$ be an rc-poset.
Let $R_1, \dots, R_n$ be the non-empty rows of $P$ from bottom to top, and
$C_1, \dots, C_m$ the non-empty columns of $P$ from left to right.
Then the rowmotion map $\rho$ is conjugate to
$\tau[C_{\nu(1)}] \circ \cdots \circ \tau[C_{\nu(m)}]$ in $G(P)$ for any $\nu \in S_m$.
\end{lemma}
We prove Theorem~\ref{thm:conj} by using this lemma.
\begin{demo}{Proof of Theorem~\ref{thm:conj}}
Let $\Pi = \{ \alpha_1, \dots, \alpha_n \}$ be the set of simple roots,
where we follow the numbering in \cite{B1},
and $C_1, \dots, C_m$ the non-empty columns of $P$
(see Figures~\ref{fig:a}--\ref{fig:e7}).
Then, by Lemmas~\ref{lem:conj} and \ref{lem:rc},
it is enough to prove that $\gamma = \sigma_{\alpha_1} \cdots \sigma_{\alpha_n}$ is conjugate to
$\tau[C_1] \cdots \tau[C_m]$.
We prove this claim by a case-by-case argument.
\begin{itemize}
\item
If $P = P_{A_n,\varpi_r}$,
then $\sigma_{\alpha_i} = \tau[C_i]$ for $1 \le i \le n$
and $\gamma = \tau[C_1] \cdots \tau[C_m]$.
\item
If $P = P_{B_n,\varpi_n}$,
then $\sigma_{\alpha_i} = \tau[C_{n+1-i}]$ for $1 \le i \le n$,
and $\gamma$ is conjugate to $\sigma_{\alpha_n} \cdots \sigma_{\alpha_1}
= \tau[C_1] \cdots \tau[C_m]$ by Lemma~\ref{lem:conj}.
\item
If $P = P_{C_n,\varpi_1}$,
then $\sigma_{\alpha_i} = \tau[C_i]$ for $1 \le i \le n$
and $\gamma = \tau[C_1] \cdots \tau[C_n]$.
\item
If $P = P_{D_n,\varpi_1}$,
then $\tau[C_i] = \sigma_{\alpha_i}$ for $i \neq n-3$
and $\tau[C_{n-3}] = \sigma_{\alpha_{n-3}} \sigma_{\alpha_n}$.
Hence $\tau[C_1] \cdots \tau[C_{n-1}] = \sigma_{\alpha_1} \cdots \sigma_{\alpha_{n-4}}
\sigma_{\alpha_{n-3}} \sigma_{\alpha_n} \sigma_{\alpha_{n-2}} \sigma_{\alpha_{n-1}}$
is conjugate to $\gamma$ by Lemma~\ref{lem:conj}.
\item
If $P = P_{D_n, \varpi_n}$,
then $\tau[C_1] = \sigma_{\alpha_{n-1}} \sigma_{\alpha_n}$
and $\tau[C_i] = \sigma_{\alpha_{n-i}}$ for $2 \le i \le n-1$.
Hence $\tau[C_1] \cdots \tau[C_{n-1}] = \sigma_{\alpha_{n-1}} \sigma_{\alpha_n}
\sigma_{\alpha_{n-2}} \cdots \sigma_{\alpha_1}$
is conjugate to $\gamma$ by Lemma~\ref{lem:conj}.
\item
If $P = P_{E_6,\varpi_6}$, then we have
$$
C_1 = P^{\alpha_1},
\quad
C_2 = P^{\alpha_3} \sqcup (C_2 \cap P^{\alpha_2}),
\quad
C_3 = P^{\alpha_4},
\quad
C_4 = P^{\alpha_5} \sqcup (C_4 \cap P^{\alpha_2}),
\quad
C_5 = P^{\alpha_6}.
$$
If we put
\begin{gather*}
g_1 = \tau[P^{\alpha_1}],
\quad
g_2 = \tau[P^{\alpha_3}],
\quad
g_3 = \tau[P^{\alpha_4}],
\quad
g_4 = \tau[P^{\alpha_5}],
\quad
g_5 = \tau[P^{\alpha_6}],
\\
g_6 = \tau[C_2 \cap P^{\alpha_2}],
\quad
g_7 = \tau[C_4 \cap P^{\alpha_2}],
\end{gather*}
then Figure~\ref{fig:e6-nc} shows the non-commutativity graph $\Gamma(g_1, \dots, g_7)$.
\begin{figure}
\centering
\setlength{\unitlength}{1pt}
\begin{picture}(130,30)
\put(5,25){\circle{10}}
\put(35,25){\circle{10}}
\put(65,25){\circle{10}}
\put(95,25){\circle{10}}
\put(125,25){\circle{10}}
\put(45,5){\circle{10}}
\put(85,5){\circle{10}}
\put(10,25){\line(1,0){20}}
\put(40,25){\line(1,0){20}}
\put(70,25){\line(1,0){20}}
\put(100,25){\line(1,0){20}}
\put(49,9){\line(1,1){12}}
\put(81,9){\line(-1,1){12}}
\put(0,20){\makebox(10,10){$1$}}
\put(30,20){\makebox(10,10){$2$}}
\put(60,20){\makebox(10,10){$3$}}
\put(90,20){\makebox(10,10){$4$}}
\put(120,20){\makebox(10,10){$5$}}
\put(40,0){\makebox(10,10){$6$}}
\put(80,0){\makebox(10,10){$7$}}
\end{picture}
\caption{Non-commutativity graph for $P_{E_6,\varpi_6}$}
\label{fig:e6-nc}
\end{figure}
Hence by applying Lemma~\ref{lem:conj}, we see that
$$
\tau[C_1] \cdots \tau[C_5]
=
\tau[P^{\alpha_1}]
\tau[P^{\alpha_3}]
\tau[C_2 \cap P^{\alpha_2}]
\tau[P^{\alpha_4}]
\tau[P^{\alpha_5}]
\tau[C_4 \cap P^{\alpha_2}]
\tau[P^{\alpha_6}]
$$
is conjugate to
$$
\gamma =
\tau[P^{\alpha_1}]
\tau[C_2 \cap P^{\alpha_2}]
\tau[C_4 \cap P^{\alpha_2}]
\tau[P^{\alpha_3}]
\tau[P^{\alpha_4}]
\tau[P^{\alpha_5}]
\tau[P^{\alpha_6}].
$$
\item
If $P = P_{E_7,\varpi_7}$, then we have
\begin{gather*}
C_1 = P^{\alpha_7},
\quad
C_2 = P^{\alpha_6},
\quad
C_3 = (C_3 \cap P^{\alpha_2}) \sqcup P^{\alpha_5},
\\
C_4 = P^{\alpha_4},
\quad
C_5 = (C_5 \cap P^{\alpha_2}) \sqcup P^{\alpha_3},
\quad
C_6 = P^{\alpha_1},
\end{gather*}
and $P^{\alpha_2} = (C_3 \cap P^{\alpha_2}) \sqcup (C_5 \cap P^{\alpha_2})$.
If we put
\begin{gather*}
g_1 = \tau[P^{\alpha_7}],
\quad
g_2 = \tau[P^{\alpha_6}],
\quad
g_3 = \tau[P^{\alpha_5}],
\quad
g_4 = \tau[P^{\alpha_4}],
\quad
g_5 = \tau[P^{\alpha_3}],
\quad
g_6 = \tau[P^{\alpha_1}],
\\
g_7 = \tau[C_3 \cap P^{\alpha_2}],
\quad
g_8 = \tau[C_5 \cap P^{\alpha_2}],
\end{gather*}
then Figure~\ref{fig:e7-nc} shows the non-commutativity graph $\Gamma(g_1, \dots, g_8)$.
\begin{figure}
\centering
\setlength{\unitlength}{1pt}
\begin{picture}(160,30)
\put(5,25){\circle{10}}
\put(35,25){\circle{10}}
\put(65,25){\circle{10}}
\put(95,25){\circle{10}}
\put(125,25){\circle{10}}
\put(155,25){\circle{10}}
\put(75,5){\circle{10}}
\put(115,5){\circle{10}}
\put(10,25){\line(1,0){20}}
\put(40,25){\line(1,0){20}}
\put(70,25){\line(1,0){20}}
\put(100,25){\line(1,0){20}}
\put(130,25){\line(1,0){20}}
\put(79,9){\line(1,1){12}}
\put(111,9){\line(-1,1){12}}
\put(0,20){\makebox(10,10){$1$}}
\put(30,20){\makebox(10,10){$2$}}
\put(60,20){\makebox(10,10){$3$}}
\put(90,20){\makebox(10,10){$4$}}
\put(120,20){\makebox(10,10){$5$}}
\put(150,20){\makebox(10,10){$6$}}
\put(70,0){\makebox(10,10){$7$}}
\put(110,0){\makebox(10,10){$8$}}
\end{picture}
\caption{Non-commutativity graph for $P_{E_7,\varpi_7}$}
\label{fig:e7-nc}
\end{figure}
Hence by applying Lemma~\ref{lem:conj}, we see that
$$
\tau[C_1] \cdots \tau[C_6]
=
\tau[P^{\alpha_7}]
\tau[P^{\alpha_6}]
\tau[C_3 \cap P^{\alpha_2}] \tau[P^{\alpha_5}]
\tau[P^{\alpha_4}]
\tau[C_5 \cap P^{\alpha_2}] \tau[P^{\alpha_3}]
\tau[P^{\alpha_1}]
$$
is conjugate to
$$
\gamma
=
\tau[P^{\alpha_1}]
\tau[C_3 \cap P^{\alpha_2}]
\tau[C_5 \cap P^{\alpha_2}]
\tau[P^{\alpha_3}]
\tau[P^{\alpha_4}]
\tau[P^{\alpha_5}]
\tau[P^{\alpha_6}]
\tau[P^{\alpha_7}].
$$
\end{itemize}
This completes the proof of Theorem~\ref{thm:conj}, and hence of Theorem~\ref{thm:main2} (a).
\qed
\end{demo}
\section{%
Reciprocity
}
In this section we prove the reciprocity for birational rowmotion (Theorem~\ref{thm:main1} (b))
and propose a conjectural reciprocity for a particular birational Coxeter-motion map.
The proof of the reciprocity for birational rowmotion is based on a case-by-case analysis.
Let $P$ be a minuscule poset associated to a simple Lie algebra $\mathfrak{g}$
and $\rho^{A,B}$ the birational rowmotion map.
We may assume that $\mathfrak{g}$ is simply-laced and that $A=B=1$ (see Lemma~\ref{lem:A=B=1}).
For a type $A$ minuscule poset,
the reciprocity was proved by Grinberg--Roby \cite[Theorem~30]{GR2}
and Musiker--Roby \cite[Corollary~2.13]{MR}.
Also we can verify the reciprocity for the minuscule posets of types $E_6$ and $E_7$
by using a computer.
The remaining minuscule posets are the shifted staircase posets $P_{D_n,\varpi_n}$
and the double-tailed diamond posets $P_{D_n,\varpi_1}$.
\subsection{
Shifted staircase posets
}
Let $P = \{ (i,j) \in \Int^2 : 0 \le i \le j \le r \}$ be a shifted staircase poset,
and $\rho = \rho^{1,1} : \KK^{1,1}(P) \to \KK^{1,1}(P)$ the birational rowmotion map on $P$.
We derive the reciprocity for $P$ from that for
the rectangle poset $\tilde{P} = \{ (i,j) \in \Int^2 : 0 \le i, j \le r \}$.
We denote by $\tilde{\rho} : \KK^{1,1}(\tilde{P}) \to \KK^{1,1}(\tilde{P})$
the birational rowmotion map on $\tilde{P}$ with $A=B=1$.
The following lemma is a consequence of \cite[Lemma~59 (c)]{GR2}
and Lemma~\ref{lem:A=B=1} (with $A = 1/2$ and $B=2$).
\begin{lemma}
\label{lem:doubling}
For $F \in \KK^{1,1}(P)$, we define $\tilde{F} \in \KK^{1,1}(\tilde{P})$ by
$$
\tilde{F}(i,j)
=
\begin{cases}
F(i,j) &\text{if $i \le j$,} \\
F(j,i) &\text{if $i > j$.}
\end{cases}
$$
Then we have
$$
\left( \rho^k F \right) (i,j)
=
\left( \tilde{\rho}^k \tilde{F} \right) (i,j)
\times
\begin{cases}
1/2 &\text{if $1 \le k \le i+j$,} \\
1 &\text{if $k=i+j+1$,} \\
2 &\text{if $i+j+2 \le k \le 2r+1$,} \\
1 &\text{if $k=2r+2$}
\end{cases}
$$
for $1 \le k \le 2r+2$ and $(i,j) \in P$.
\end{lemma}
By using this lemma and the reciprocity for the rectangle poset $\tilde{P}$, we have
$$
\left( \rho^{i+j+1} F \right)(i,j)
=
\left( \tilde{\rho}^{i+j+1} \tilde{F} \right)(i,j)
=
\frac{ 1 }{ \tilde{F} (r-i,r-j) }
=
\frac{ 1 }{ F(r-j,r-i) }.
$$
This is the desired identity for a shifted staircase poset.
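Both the identity above and the periodicity $\rho^{2r+2} = \mathrm{id}$ can be spot-checked in exact arithmetic. A minimal sketch for $r = 2$; the top-to-bottom toggling convention with boundary values $A = B = 1$ and the helper name `rowmotion` are our assumptions:

```python
from fractions import Fraction

def rowmotion(F, r):
    """Birational rowmotion with A = B = 1 on the shifted staircase
    {(i,j) : 0 <= i <= j <= r}, toggling ranks i+j from top to bottom."""
    G = dict(F)
    for rk in range(2 * r, -1, -1):
        for i in range(r + 1):
            j = rk - i
            if not (i <= j <= r):
                continue
            num = sum((G[d] for d in ((i - 1, j), (i, j - 1)) if d in G), Fraction(0)) or Fraction(1)
            den = sum((1 / G[u] for u in ((i + 1, j), (i, j + 1)) if u in G), Fraction(0)) or Fraction(1)
            G[(i, j)] = num / (den * G[(i, j)])
    return G

r = 2
F = {(i, j): Fraction(3 * i + j + 2) for j in range(r + 1) for i in range(j + 1)}
orbit = [F]
for _ in range(2 * r + 2):
    orbit.append(rowmotion(orbit[-1], r))

assert orbit[2 * r + 2] == F                       # periodicity: rho^(2r+2) = id
for (i, j) in F:                                   # reciprocity at step i+j+1
    assert orbit[i + j + 1][(i, j)] == 1 / F[(r - j, r - i)]
```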
\subsection{%
Double-tailed diamond posets
}
In this subsection, we prove the reciprocity for double-tailed diamond posets.
Let $P = P_{D_n,\varpi_1}$ be the minuscule poset associated to the minuscule weight
$\lambda = \varpi_1$ of the Lie algebra of type $D_n$.
We label elements of $P$ by
\begin{gather*}
v_i = e_1 + e_{i+1} \quad (1 \le i \le n-2),
\\
v_{n-1}^+ = e_1 + e_n,
\quad
v_{n-1}^- = e_1 - e_n,
\\
v_i = e_1 - e_{2n-1-i} \quad (n \le i \le 2n-3).
\end{gather*}
Note that $v_1$ is the maximum element and $v_{2n-3}$ is the minimum element.
Fix an initial state $X \in \KK^{1,1}(P)$.
We regard $X(v)$ ($v \in P$) as indeterminates and define $Z \in \KK^{1,1}(P)$ by (\ref{eq:X2Z}).
We write
\begin{gather*}
x_i = X(v_i) \quad(1 \le i \le 2n-3, \, i \neq n-1),
\quad
x_{n-1}^\pm = X(v_{n-1}^\pm),
\\
z_i = Z(v_i) \quad(1 \le i \le 2n-3, \, i \neq n-1),
\quad
z_{n-1}^\pm = Z(v_{n-1}^\pm).
\end{gather*}
Then we have
$$
z_i
=
\begin{cases}
\dfrac{ x_i }{ x_{i+1} } &\text{if $i \neq n-2, 2n-3$,} \\
\dfrac{ x_{n-2} }{ x_{n-1}^+ + x_{n-1}^- } &\text{if $i = n-2$,} \\
x_{2n-3} &\text{if $i=2n-3$,}
\end{cases}
\quad
z_{n-1}^\pm = \frac{ x_{n-1}^\pm }{ x_n }.
$$
For positive integers $i$ and $l$ satisfying $1 \le i \le 2n-3$ and $i+l-1 \le 2n-3$,
we define monomials $C(i;l)$ and $C^\pm(i;l)$ as follows:
\begin{enumerate}
\item[(i)]
If $1 \le i \le n-2$ and $i+l-1 \le n-2$, then we put
$$
C(i;l) = z_i z_{i+1} \cdots z_{i+l-1}.
$$
\item[(ii)]
If $1 \le i \le n-1$ and $n-1 \le i+l-1 \le 2n-3$, then we put
$$
C^\pm(i;l) = z_i z_{i+1} \cdots z_{n-2} z_{n-1}^\pm z_n \cdots z_{i+l-1}.
$$
\item[(iii)]
If $n \le i \le 2n-3$, then we put
$$
C(i;l) = z_i z_{i+1} \cdots z_{i+l-1}.
$$
\end{enumerate}
Then the original indeterminates $X(v)$ can be expressed in terms of $Z(v)$ as follows:
\begin{lemma}
\label{lem:diamond}
The values $X(v)$ ($v \in P$) are expressed in terms of $C(i;l)$ and $C^\pm(i;l)$ as follows:
$$
\begin{cases}
X(v_i) = C^+(i;2n-i-2) + C^-(i;2n-i-2) &\text{if $1 \le i \le n-2$,} \\
X(v_{n-1}^\pm) = C^\pm(n-1;n-1) &\text{if $i=n-1$,} \\
X(v_i) = C(i;2n-i-2) &\text{if $n \le i \le 2n-3$.}
\end{cases}
$$
\end{lemma}
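Lemma~\ref{lem:diamond} can be spot-checked numerically for $n = 4$; the dictionary keys and the helper `C` below are ad hoc names for $z_i$, $z_{n-1}^\pm$ and the monomials $C(i;l)$, $C^\pm(i;l)$:

```python
from fractions import Fraction as Q

# P_{D_4, varpi_1}: v_1 > v_2 > {v_3^+, v_3^-} > v_4 > v_5  (n = 4, 2n-3 = 5)
x = {1: Q(3), 2: Q(5, 2), '3+': Q(7, 3), '3-': Q(2), 4: Q(9, 4), 5: Q(6, 5)}

# the Z-variables: the special index is n-2 = 2, the minimum is 2n-3 = 5
z = {1: x[1] / x[2],
     2: x[2] / (x['3+'] + x['3-']),
     '3+': x['3+'] / x[4], '3-': x['3-'] / x[4],
     4: x[4] / x[5],
     5: x[5]}

def C(i, l, sign=None):
    """Monomial z_i z_{i+1} ... z_{i+l-1}; `sign` selects z_{n-1}^+ or z_{n-1}^-
    when the range meets the middle index n-1 = 3."""
    prod = Q(1)
    for k in range(i, i + l):
        prod *= z['3' + sign] if k == 3 else z[k]
    return prod

# the lemma: X is recovered from the monomials C(i;l) and C^{+-}(i;l)
assert x[1] == C(1, 5, '+') + C(1, 5, '-')
assert x[2] == C(2, 4, '+') + C(2, 4, '-')
assert x['3+'] == C(3, 3, '+') and x['3-'] == C(3, 3, '-')
assert x[4] == C(4, 2) and x[5] == C(5, 1)
```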
Recall that $P$ is a graded poset with rank function $\rank$ given by
$\rank(v_i) = 2n-i-2$ ($1 \le i \le 2n-3$, $i \neq n-1$) and $\rank(v_{n-1}^\pm) = n-1$.
Then it is straightforward to prove the following explicit formulas
by induction on $k$ and $i$.
(We omit the proof.)
\begin{prop}
\label{prop:diamond}
Let $v \in P$ and $k$ a positive integer.
If $1 \le k \le \rank(v)$, then the value $\left( \rho^k X \right)(v)$ of iterations of birational rowmotion
is expressed in terms of $C(i;l)$ and $C^\pm(i;l)$ as follows:
\begin{enumerate}
\item[(a)]
If $v = v_i$ with $1 \le i \le n-2$, we have
$$
\left( \rho^k X \right) (v_i)
=
\begin{cases}
\dfrac{ 1 }{ C(k ; i) } &\text{if $1 \le k \le n-i-1$,} \\
\dfrac{ 1 }{ C^+(k ; i) } + \dfrac{ 1 }{ C^-(k ; i) } &\text{if $n-i \le k \le n-1$,} \\
\dfrac{ 1 }{ C(k ; i) } &\text{if $n \le k \le 2n-i-2$.}
\end{cases}
$$
\item[(b)]
If $v = v_{n-1}^\pm$, we have
$$
\left( \rho^k X \right) (v_{n-1}^\pm)
=
\frac{ 1 }
{ C^{\ep (-1)^{k-1}}(k ; n-1) }.
$$
\item[(c)]
If $v = v_i$ with $n \le i \le 2n-3$, we have
$$
\left( \rho^k X \right) (v_i)
=
\frac{ 1 }
{ C^+(k ; i) + C^-(k ; i) }.
$$
\end{enumerate}
\end{prop}
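Proposition~\ref{prop:diamond} and the periodicity $\rho^{2n-2} = \mathrm{id}$ can be spot-checked for $n = 4$ in exact arithmetic; the toggle convention with $A = B = 1$ and all names below (`down`, `up`, `rho`, the vertex labels) are our assumptions:

```python
from fractions import Fraction as Q

# P_{D_4, varpi_1}: v1 > v2 > {v3p, v3m} > v4 > v5
down = {'v1': ['v2'], 'v2': ['v3p', 'v3m'], 'v3p': ['v4'], 'v3m': ['v4'],
        'v4': ['v5'], 'v5': []}
up = {'v1': [], 'v2': ['v1'], 'v3p': ['v2'], 'v3m': ['v2'],
      'v4': ['v3p', 'v3m'], 'v5': ['v4']}
order = ['v1', 'v2', 'v3p', 'v3m', 'v4', 'v5']   # top to bottom

def rho(F):
    """One birational rowmotion step with A = B = 1."""
    G = dict(F)
    for v in order:
        num = sum((G[d] for d in down[v]), Q(0)) or Q(1)    # A = 1 below the minimum
        den = sum((1 / G[u] for u in up[v]), Q(0)) or Q(1)  # 1/B = 1 above the maximum
        G[v] = num / (den * G[v])
    return G

X = {'v1': Q(3), 'v2': Q(5, 2), 'v3p': Q(7, 3), 'v3m': Q(2), 'v4': Q(9, 4), 'v5': Q(6, 5)}
orbit = [X]
for _ in range(6):
    orbit.append(rho(orbit[-1]))
assert orbit[6] == X            # periodicity: the Coxeter number of D_4 is 2n-2 = 6

# the middle case of (a) at i = 2, k = 2, written out with the z-monomials:
# (rho^2 X)(v_2) = 1/(z_2 z_3^+) + 1/(z_2 z_3^-)
z2 = X['v2'] / (X['v3p'] + X['v3m'])
z3p, z3m = X['v3p'] / X['v4'], X['v3m'] / X['v4']
assert orbit[2]['v2'] == 1 / (z2 * z3p) + 1 / (z2 * z3m)
```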
Since the involution $\iota : P \to P$ is given by
$$
\iota(v_i) = v_{2n-i-2}
\quad(1 \le i \le 2n-3, \, i \neq n-1),
\quad
\iota(v_{n-1}^\ep) = v_{n-1}^{\ep (-1)^n},
$$
we obtain the desired reciprocity by comparing formulas in Lemma~\ref{lem:diamond}
and Proposition~\ref{prop:diamond}.
This completes the proof of Theorem~\ref{thm:main1} (b) for all minuscule posets.
\subsection{%
Reciprocity for birational Coxeter-motion
}
We have the following conjectural reciprocity for a particular birational Coxeter-motion map.
\begin{conjecture}
\label{conj:half-period}
Let $P$ be a minuscule poset.
We decompose the simple root system $\Pi$ into a disjoint union of two subsets $\Pi_1$ and $\Pi_2$
such that the roots in $\Pi_i$ are pairwise orthogonal for each $i$.
We define $\gamma_1$ and $\gamma_2$ by
$$
\gamma_1 = \prod_{\alpha \in \Pi_1} \sigma^{A,B}_\alpha,
\quad
\gamma_2 = \prod_{\beta \in \Pi_2} \sigma^{A,B}_\beta,
$$
and put
$$
\delta
=
\underbrace{\gamma_1 \gamma_2 \gamma_1 \gamma_2 \gamma_1 \cdots}_{\text{$h$ factors}},
$$
where $h$ is the Coxeter number.
Then we conjecture that
\begin{equation}
\label{eq:half-period}
( \delta F )(v)
=
\frac{ AB }
{ F( \iota v) }
\end{equation}
for any $F \in \KK^{A,B}(P)$ and $v \in P$.
\end{conjecture}
The periodicity of birational Coxeter-motion maps is a consequence of this conjecture.
In fact, $\gamma = \gamma_1 \gamma_2$ is a Coxeter-motion map and
$$
\gamma^h
=
\begin{cases}
\delta^2 &\text{if $h$ is even,} \\
\delta_{1,2} \delta_{2,1} &\text{if $h$ is odd,}
\end{cases}
$$
where $\delta_{1,2} = \gamma_1 \gamma_2 \gamma_1 \gamma_2 \cdots \gamma_1$
and $\delta_{2,1} = \gamma_2 \gamma_1 \gamma_2 \gamma_1 \cdots \gamma_2$ are alternating products of $h$ factors.
If $h$ is even, then we have
$$
\left( \gamma^h F \right) (v)
=
\left( \delta^2 F \right) (v)
=
\frac{ AB }
{ \left( \delta F \right) (\iota v) }
=
\frac{ AB }
{ AB / F( \iota^2 v ) }
=
F(v).
$$
If $h$ is odd, we can derive $\left( \gamma^h F \right)(v) = F(v)$ from (\ref{eq:half-period})
in a similar manner.
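For completeness, here is the odd-$h$ computation, under the assumption (suggested by the symmetry of Conjecture~\ref{conj:half-period} in $\Pi_1$ and $\Pi_2$) that the analogue of (\ref{eq:half-period}) also holds for $\delta_{2,1}$. Since $\delta = \delta_{1,2}$ when $h$ is odd, and $\iota^2 = \mathrm{id}$,
$$
\left( \gamma^h F \right)(v)
=
\left( \delta_{1,2} \delta_{2,1} F \right)(v)
=
\frac{ AB }
{ \left( \delta_{2,1} F \right)(\iota v) }
=
\frac{ AB }
{ AB / F( \iota^2 v ) }
=
F(v).
$$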
\section{%
File homomesy
}
This section is devoted to the proof of the file homomesy phenomenon
(Theorem~\ref{thm:main1} (c) and Theorem~\ref{thm:main2} (b)).
\subsection{%
Local properties
}
First we investigate local properties of birational rowmotion
and Coxeter-motion around a given file.
Let $P$ be a minuscule poset with coloring $c : P \to \Pi$.
We regard the Hasse diagram of the poset $\hat{P} = P \sqcup \{ \hat{1}, \hat{0} \}$ as a directed graph,
where a directed edge $u \to v$ corresponds to the covering relation $u \lessdot v$.
For $\alpha \in \Pi$, let $\hat{N}^\alpha$ be the neighborhood of
$P^\alpha = \{ x \in P : c(x) = \alpha \}$ given by
$$
\hat{N}^\alpha
=
\{ x \in \hat{P} : \text{there is an element $y \in P^\alpha$ such that $x \lessdot y$ or $x \gtrdot y$} \}.
$$
We define $G^\alpha$ to be the bipartite directed subgraph of the Hasse diagram of $\hat{P}$
with black vertex set $P^\alpha$ and white vertex set $\hat{N}^\alpha$.
It follows from Proposition~\ref{prop:minuscule2} (c) that
$$
\hat{N}^\alpha
=
\bigsqcup_{\beta \sim \alpha} P^\beta
\sqcup
\begin{cases}
\{ \hat{1}, \hat{0} \} &\text{if $\alpha = \alpha_{\max} = \alpha_{\min}$,} \\
\{ \hat{1} \} &\text{if $\alpha = \alpha_{\max} \neq \alpha_{\min}$,} \\
\{ \hat{0} \} &\text{if $\alpha = \alpha_{\min} \neq \alpha_{\max}$,} \\
\emptyset &\text{otherwise,}
\end{cases}
$$
where $\beta$ runs over all simple roots adjacent to $\alpha$ in the Dynkin diagram,
and $\alpha_{\max}$ (resp. $\alpha_{\min}$) is the color of the maximum (resp. minimum)
element of $P$.
To describe the graph structure of $G^\alpha$,
we introduce two series of posets $G_m$ and $H_m$.
For a positive integer $m$, let $G_m$ be the poset consisting of $3m$ elements
$x_1, \cdots, x_m, y_1, \cdots, y_{m-1}, z_1, \allowbreak \cdots, z_{m-1}, u, v$ with covering relations
$$
u \lessdot x_1,
\quad
x_i \lessdot y_i \lessdot x_{i+1},
\quad
x_i \lessdot z_i \lessdot x_{i+1}
\quad(1 \le i \le m-1),
\quad
x_m \lessdot v.
$$
Note that $G_1$ is the three-element chain.
For an integer $m \ge 2$, let $H_m$ be the $(2m+1)$-element chain
$$
u \lessdot x_1 \lessdot y_1 \lessdot x_2 \lessdot y_2 \lessdot \cdots \lessdot y_{m-1} \lessdot x_m \lessdot v.
$$
We regard the Hasse diagrams of $G_m$ and $H_m$ as bipartite directed graphs with black vertices $x_1, \dots, x_m$.
For example, the Hasse diagrams of $G_4$ and $H_4$ are shown in
Figures~\ref{fig:G} and \ref{fig:H} respectively.
\begin{figure}[ht]
\setlength{\unitlength}{1.5pt}
\centering
\begin{minipage}{0.3\hsize}
\centering
\begin{picture}(50,90)
\put(25,5){\circle{3}}
\put(25,15){\circle*{3}}
\put(15,25){\circle{3}}
\put(35,25){\circle{3}}
\put(25,35){\circle*{3}}
\put(15,45){\circle{3}}
\put(35,45){\circle{3}}
\put(25,55){\circle*{3}}
\put(15,65){\circle{3}}
\put(35,65){\circle{3}}
\put(25,75){\circle*{3}}
\put(25,85){\circle{3}}
\put(25,6.5){\vector(0,1){7}}
\put(24,16){\vector(-1,1){8}}
\put(26,16){\vector(1,1){8}}
\put(16,26){\vector(1,1){8}}
\put(34,26){\vector(-1,1){8}}
\put(24,36){\vector(-1,1){8}}
\put(26,36){\vector(1,1){8}}
\put(16,46){\vector(1,1){8}}
\put(34,46){\vector(-1,1){8}}
\put(24,56){\vector(-1,1){8}}
\put(26,56){\vector(1,1){8}}
\put(16,66){\vector(1,1){8}}
\put(34,66){\vector(-1,1){8}}
\put(25,76.5){\vector(0,1){7}}
\put(25,80){\makebox(10,10){$v$}}
\put(25,70){\makebox(15,10){$x_4$}}
\put(25,50){\makebox(15,10){$x_3$}}
\put(25,30){\makebox(15,10){$x_2$}}
\put(25,10){\makebox(15,10){$x_1$}}
\put(25,0){\makebox(10,10){$u$}}
\put(0,60){\makebox(15,10){$y_3$}}
\put(0,40){\makebox(15,10){$y_2$}}
\put(0,20){\makebox(15,10){$y_1$}}
\put(35,60){\makebox(15,10){$z_3$}}
\put(35,40){\makebox(15,10){$z_2$}}
\put(35,20){\makebox(15,10){$z_1$}}
\end{picture}
\caption{$G_4$}
\label{fig:G}
\end{minipage}
\begin{minipage}{0.3\hsize}
\centering
\begin{picture}(50,90)
\put(25,5){\circle{3}}
\put(25,15){\circle*{3}}
\put(15,25){\circle{3}}
\put(25,35){\circle*{3}}
\put(15,45){\circle{3}}
\put(25,55){\circle*{3}}
\put(15,65){\circle{3}}
\put(25,75){\circle*{3}}
\put(25,85){\circle{3}}
\put(25,6.5){\vector(0,1){7}}
\put(24,16){\vector(-1,1){8}}
\put(16,26){\vector(1,1){8}}
\put(24,36){\vector(-1,1){8}}
\put(16,46){\vector(1,1){8}}
\put(24,56){\vector(-1,1){8}}
\put(16,66){\vector(1,1){8}}
\put(25,76.5){\vector(0,1){7}}
\put(25,80){\makebox(10,10){$v$}}
\put(25,70){\makebox(15,10){$x_4$}}
\put(25,50){\makebox(15,10){$x_3$}}
\put(25,30){\makebox(15,10){$x_2$}}
\put(25,10){\makebox(15,10){$x_1$}}
\put(25,0){\makebox(10,10){$u$}}
\put(0,60){\makebox(15,10){$y_3$}}
\put(0,40){\makebox(15,10){$y_2$}}
\put(0,20){\makebox(15,10){$y_1$}}
\end{picture}
\caption{$H_4$}
\label{fig:H}
\end{minipage}
\end{figure}
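For concreteness, the directed Hasse diagrams of $G_m$ and $H_m$ above can be generated as edge lists in a few lines. This is a minimal sketch using the vertex naming from the text; the edge-list representation is our own choice.

```python
from typing import List, Tuple

def hasse_G(m: int) -> Tuple[List[str], List[Tuple[str, str]]]:
    """Directed Hasse diagram of G_m (edges point upward).

    Vertices: u, x1..xm, y1..y_{m-1}, z1..z_{m-1}, v (3m in total).
    """
    verts = (["u", "v"]
             + [f"x{i}" for i in range(1, m + 1)]
             + [f"y{i}" for i in range(1, m)]
             + [f"z{i}" for i in range(1, m)])
    edges = [("u", "x1"), (f"x{m}", "v")]
    for i in range(1, m):  # the two parallel covers between x_i and x_{i+1}
        edges += [(f"x{i}", f"y{i}"), (f"y{i}", f"x{i+1}"),
                  (f"x{i}", f"z{i}"), (f"z{i}", f"x{i+1}")]
    return verts, edges

def hasse_H(m: int) -> Tuple[List[str], List[Tuple[str, str]]]:
    """Directed Hasse diagram of the (2m+1)-element chain H_m."""
    chain = ["u"]
    for i in range(1, m):
        chain += [f"x{i}", f"y{i}"]
    chain += [f"x{m}", "v"]
    return chain, list(zip(chain, chain[1:]))
```

In particular, `hasse_G(1)` returns the three-element chain $u \lessdot x_1 \lessdot v$, matching the remark in the text.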
\begin{lemma}
\label{lem:decomp}
Each bipartite directed graph $G^\alpha$ is decomposed into a disjoint union of graphs of the form $G_m$ or $H_m$
as follows:
\begin{itemize}
\item
If $P = P_{A_n,\varpi_r}$, then
$$
G^{\alpha_i}
\cong
\begin{cases}
G_i &\text{if $1 \le i \le r$,} \\
G_r &\text{if $r \le i \le s$,} \\
G_{n-i+1} &\text{if $s \le i \le n$,}
\end{cases}
$$
where $r+s = n+1$ and $r \le s$.
\item
If $P = P_{B_n,\varpi_n}$, then
$$
G^{\alpha_i}
\cong
\begin{cases}
G_i &\text{if $1 \le i \le n-1$,} \\
H_l &\text{if $i = n$.}
\end{cases}
$$
\item
If $P = P_{C_n,\varpi_1}$, then
$$
G^{\alpha_i}
\cong
\begin{cases}
G_1 \sqcup G_1 &\text{if $1 \le i \le n-2$,} \\
H_2 &\text{if $i = n-1$,} \\
G_1 &\text{if $i = n$.}
\end{cases}
$$
\item
If $P = P_{D_n,\varpi_1}$, then
$$
G^{\alpha_i}
\cong
\begin{cases}
G_1 \sqcup G_1 &\text{if $1 \le i \le n-3$,} \\
G_2 &\text{if $i = n-2$,} \\
G_1 &\text{if $i = n-1$, $n$.}
\end{cases}
$$
\item
If $P = P_{D_n,\varpi_n}$, then
$$
G^{\alpha_i}
\cong
\begin{cases}
G_l &\text{if $1 \le i \le n-2$,} \\
\left( G_1 \right)^{\sqcup \lfloor (n-1)/2 \rfloor} &\text{if $i = n-1$,} \\
\left( G_1 \right)^{\sqcup \lfloor n/2 \rfloor} &\text{if $i = n$.}
\end{cases}
$$
where $G_1^{\sqcup m}$ is the disjoint union of $m$ copies of $G_1$,
and $\lfloor x \rfloor$ stands for the largest integer not exceeding $x$.
\item
If $P = P_{E_6,\varpi_6}$, then
$$
G^{\alpha_i}
\cong
\begin{cases}
G_1 \sqcup G_1 &\text{if $i=1$, $2$, $6$,} \\
G_1 \sqcup G_2 &\text{if $i=3$, $5$,} \\
G_4 &\text{if $i=4$.}
\end{cases}
$$
\item
If $P = P_{E_7, \varpi_7}$, then
$$
G^{\alpha_i}
\cong
\begin{cases}
G_1 \sqcup G_1 &\text{if $i=1$,} \\
G_1 \sqcup G_1 \sqcup G_1 &\text{if $i=2$,} \\
G_2 \sqcup G_2 &\text{if $i=3$,} \\
G_6 &\text{if $i=4$,} \\
G_1 \sqcup G_3 \sqcup G_1 &\text{if $i=5$,} \\
G_1 \sqcup G_2 \sqcup G_1 &\text{if $i=6$,} \\
G_1 \sqcup G_1 \sqcup G_1 &\text{if $i=7$.}
\end{cases}
$$
\end{itemize}
\end{lemma}
The following relations are the key to the proof of the file homomesy phenomenon.
\begin{lemma}
\label{lem:local1}
Let $\rho = \rho^{A,B}$ be the birational rowmotion map and $\alpha$ a simple root.
\begin{enumerate}
\item[(a)]
If $G_m$ appears as a connected component of $G^\alpha$, then we have
\begin{equation}
\label{eq:local1}
\prod_{i=1}^m (\rho^{i-1} F)(x_i) \cdot \prod_{i=1}^m (\rho^i F)(x_i)
=
F(u)
\cdot
\prod_{i=1}^{m-1} (\rho^i F)(y_i)
\cdot
\prod_{i=1}^{m-1} (\rho^i F)(z_i)
\cdot
(\rho^m F)(v).
\end{equation}
\item[(b)]
If $H_m$ appears as a connected component of $G^\alpha$, then we have
\begin{equation}
\label{eq:local3}
\prod_{i=1}^m (\rho^{i-1} F)(x_i) \cdot \prod_{i=1}^m (\rho^i F)(x_i)
=
F(u)
\cdot
\prod_{i=1}^{m-1} (\rho^i F)(y_i)^2
\cdot
(\rho^m F)(v).
\end{equation}
\end{enumerate}
\end{lemma}
\begin{demo}{Proof}
(a)
It follows from (\ref{eq:Brow_inductive}) that
\begin{gather*}
F(x_m) \cdot (\rho F)(x_m)
=
(\rho F)(v) \cdot ( F(y_{m-1}) + F(z_{m-1}) ),
\\
F(x_i) \cdot (\rho F)(x_i)
=
\frac{ (\rho F) (y_i) \cdot (\rho F) (z_i) \cdot
( F(y_{i-1}) + F(z_{i-1}) ) }
{ (\rho F)(y_i) + (\rho F)(z_i) }
\quad(2 \le i \le m-1),
\\
F(x_1) \cdot (\rho F)(x_1)
=
\frac{ F(u) \cdot (\rho F)(y_1) \cdot (\rho F)(z_1) }
{ (\rho F)(y_1) + (\rho F)(z_1) }.
\end{gather*}
By replacing $F$ with $\rho^{m-1} F$ (resp. $\rho^{i-1} F$) in the first (resp. second) equation,
and then by multiplying the resulting equations together, we obtain (\ref{eq:local1}).
(b) can be checked by a similar computation.
\qed
\end{demo}
\begin{lemma}
\label{lem:local2}
Let $\alpha$ be a simple root and
$\sigma_\alpha = \prod_{v \in P^\alpha} \tau^{A,B}_v$
the product of birational toggles over $P^\alpha$.
\begin{enumerate}
\item[(a)]
If $G_m$ appears as a connected component of $G^\alpha$, then we have
\begin{equation}
\label{eq:local2}
\prod_{i=1}^m F(x_i) \cdot \prod_{i=1}^m (\sigma_\alpha F)(x_i)
=
F(u)
\cdot
\prod_{i=1}^{m-1} F(y_i)
\cdot
\prod_{i=1}^{m-1} F(z_i)
\cdot
F(v).
\end{equation}
\item[(b)]
If $H_m$ appears as a connected component of $G^\alpha$, then we have
\begin{equation}
\label{eq:local4}
\prod_{i=1}^m F(x_i) \cdot \prod_{i=1}^m (\sigma_\alpha F)(x_i)
=
F(u)
\cdot
\prod_{i=1}^{m-1} F(y_i)^2
\cdot
F(v).
\end{equation}
\end{enumerate}
\end{lemma}
\begin{demo}{Proof}
(a)
By the definition (\ref{eq:Btoggle}), we have
\begin{gather*}
F(x_m) \cdot (\sigma_\alpha F)(x_m)
=
F(v) \cdot ( F(y_{m-1}) + F(z_{m-1}) ),
\\
F(x_i) \cdot (\sigma_\alpha F)(x_i)
=
\frac{ F(y_i) \cdot F(z_i) \cdot ( F(y_{i-1}) + F(z_{i-1}) ) }
{ F(y_i) + F(z_i) }
\quad(2 \le i \le m-1),
\\
F(x_1) \cdot (\sigma_\alpha F)(x_1)
=
\frac{ F(y_1) \cdot F(z_1) \cdot F(u) }
{ F(y_1) + F(z_1) }.
\end{gather*}
Multiplying them together, we obtain (\ref{eq:local2}).
(b) can be checked by a similar computation.
\qed
\end{demo}
\subsection{%
File homomesy for birational rowmotion
}
In this subsection, we prove the file homomesy phenomenon for birational rowmotion
(Theorem~\ref{thm:main1} (c)).
The following properties of Coxeter elements will be useful
in the proof of Theorem~\ref{thm:main1} (c) and Theorem~\ref{thm:main2}(b);
the proof of the latter will be given in the next subsection.
A \emph{Coxeter element} in a Weyl group $W = \langle s_\alpha : \alpha \in \Pi \rangle$
is a product of all simple reflections $s_\alpha$ in any order.
Then it is known that all Coxeter elements are conjugate.
By definition, the Coxeter number is the order of any Coxeter element.
\begin{lemma}
\label{lem:Coxeter}
Let $c$ be a Coxeter element and $h$ the Coxeter number.
Then we have
\begin{enumerate}
\item[(a)]
If $\mu \in \mathfrak{h}^*$ satisfies $c \mu = \mu$, then $\mu = 0$.
\item[(b)]
As a linear transformation on $\mathfrak{h}^*$, we have
\begin{equation}
\label{eq:Coxeter1}
\sum_{k=0}^{h-1} c^k = 0.
\end{equation}
\item[(c)]
Let $\alpha \in \Pi$ be a simple root and $\varpi$ the corresponding fundamental weight.
If $c = s_{\alpha_1} \cdots s_{\alpha_n}$ is a Coxeter element with $\Pi = \{ \alpha_1, \dots, \alpha_n \}$
and $\beta = s_{\alpha_1} \cdots s_{\alpha_{k-1}} \alpha_k$, where $\alpha = \alpha_k$, then we have
\begin{gather}
\label{eq:Coxeter2}
c \varpi = \varpi - \beta,
\\
\label{eq:Coxeter3}
\sum_{k=1}^{h-1} \sum_{i=0}^{k-1} c^i ( \beta )
=
h \varpi.
\end{gather}
\end{enumerate}
\end{lemma}
\begin{demo}{Proof}
(a)
See \cite[V, \S6, n${}^\circ$2]{B1}.
(b) follows from $c^h = 1$ and (a).
(c)
Since $s_\gamma \varpi = \varpi - \langle \gamma^\vee, \varpi \rangle \gamma
= \varpi - \delta_{\alpha,\gamma} \alpha$ for $\gamma \in \Pi$,
we have
$c \varpi
=
\varpi - s_{\alpha_1} \cdots s_{\alpha_{k-1}} \alpha_k
=
\varpi - \beta$.
Hence we see that
$$
c^k \varpi = \varpi - \sum_{i=0}^{k-1} c^i \beta.
$$
By using (\ref{eq:Coxeter1}), we obtain
$$
0
=
\sum_{k=0}^{h-1} c^k \varpi
=
h \varpi - \sum_{k=1}^{h-1} \sum_{i=0}^{k-1} c^i \beta,
$$
from which (\ref{eq:Coxeter3}) follows.
\qed
\end{demo}
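As a sanity check, the following snippet verifies parts (b) and (c) of the lemma numerically in type $A_2$, where $h = 3$. The $2 \times 2$ matrices act on coordinates in the basis of simple roots via $s_i(\alpha_j) = \alpha_j - \langle \alpha_j, \alpha_i^\vee \rangle \alpha_i$; this choice of basis is ours, for illustration only.

```python
# Sanity check of the Coxeter-element lemma in type A_2 (Coxeter number h = 3).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

S1 = [[-1, 1], [0, 1]]   # reflection s_{alpha_1} in the simple-root basis
S2 = [[1, 0], [1, -1]]   # reflection s_{alpha_2}
C = matmul(S1, S2)       # Coxeter element c = s_1 s_2

I = [[1, 0], [0, 1]]
C2 = matmul(C, C)
assert matmul(C2, C) == I                      # c^h = 1
Z = [[I[i][j] + C[i][j] + C2[i][j] for j in range(2)] for i in range(2)]
assert Z == [[0, 0], [0, 0]]                   # part (b): 1 + c + c^2 = 0

# Part (c) with alpha = alpha_1 (k = 1), so beta = alpha_1 = (1, 0):
# sum_{k=1}^{h-1} sum_{i=0}^{k-1} c^i(beta) = 2*beta + c(beta),
# which should equal h*varpi_1 = 2*alpha_1 + alpha_2 = (2, 1).
c_beta = matvec(C, [1, 0])
total = [2 * 1 + c_beta[0], 2 * 0 + c_beta[1]]
assert total == [2, 1]
```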
In order to prove Theorem~\ref{thm:main1} (c), we consider
\begin{equation}
\label{eq:Phi'}
\Phi'_\alpha(F)
=
\prod_{v \in P^\alpha}
\left( \rho^{(\rank(v) - \rank(v^\alpha_0))/2} F \right)(v),
\end{equation}
instead of $\Phi_\alpha(F) = \prod_{v \in P^\alpha} F(v)$.
Here $v^\alpha_0$ is the minimum element of $P^\alpha$.
Note that $P^\alpha$ is a chain and $\rank(v) - \rank(v^\alpha_0)$ is an even integer
(see Proposition~\ref{prop:minuscule2} (d) and (e)).
Since $\rho$ has a finite order $h$, we have
\begin{equation}
\label{eq:Phi=Phi'}
\prod_{k=0}^{h-1} \Phi_\alpha( \rho^k F )
=
\prod_{k=0}^{h-1} \Phi'_\alpha( \rho^k F ).
\end{equation}
\begin{remark}
It is worth mentioning that $\Phi'_\alpha( \rho^k X)$ are Laurent monomials in the variables $Z(v)$
defined by (\ref{eq:X2Z}).
In a forthcoming paper \cite{O},
we will give explicit formulas for $\Phi'_\alpha( \rho^k X)$ in classical types.
\end{remark}
\begin{prop}
\label{prop:rel_Phi'}
For $\alpha \in \Pi$ and $F \in \KK^{A,B}(P)$, we have
\begin{equation}
\label{eq:rel_Phi'}
\Phi'_\alpha(F) \cdot \Phi'_\alpha ( \rho F )
=
A^{\delta_{\alpha,\alpha_{\max}}} B^{\delta_{\alpha,\alpha_{\min}}}
\prod_{\beta \sim \alpha} \Phi'_\beta ( \rho^{m_{\alpha,\beta}} F)^{-\langle \beta, \alpha^\vee \rangle},
\end{equation}
where $\beta$ runs over all simple roots adjacent to $\alpha$ in the Dynkin diagram and
$$
m_{\alpha,\beta}
=
\begin{cases}
1 &\text{if $v^\beta_0 > v^\alpha_0$,} \\
0 &\text{if $v^\beta_0 < v^\alpha_0$.}
\end{cases}
$$
\end{prop}
\begin{demo}{Proof}
We explain the proof in the case where $\mathfrak{g}$ is of type $E_7$, $\lambda = \varpi_7$
and $\alpha = \alpha_5$. (The other cases can be proved in a similar way.)
We label elements of $P^\alpha$ as $v^\alpha_0, v^\alpha_1, v^\alpha_2, \dots$ from bottom to top.
By the definition (\ref{eq:Phi'}), we have
\begin{align*}
\Phi'_{\alpha_4}(F)
&=
F(v^{\alpha_4}_0) \cdot (\rho F)(v^{\alpha_4}_1) \cdot
(\rho^2 F)(v^{\alpha_4}_2) \cdot (\rho^3 F)(v^{\alpha_4}_3) \cdot
(\rho^4 F)(v^{\alpha_4}_4) \cdot (\rho^5 F)(v^{\alpha_4}_5),
\\
\Phi'_{\alpha_5}(F)
&=
F(v^{\alpha_5}_0) \cdot (\rho^2 F)(v^{\alpha_5}_1) \cdot
(\rho^3 F)(v^{\alpha_5}_2) \cdot (\rho^4 F)(v^{\alpha_5}_3) \cdot
(\rho^6 F)(v^{\alpha_5}_4),
\\
\Phi'_{\alpha_6}(F)
&=
F(v^{\alpha_6}_0) \cdot (\rho^3 F)(v^{\alpha_6}_1) \cdot
(\rho^4 F)(v^{\alpha_6}_2) \cdot (\rho^7 F)(v^{\alpha_6}_3).
\end{align*}
The subgraph $G^{\alpha_5}$ has three connected components
\begin{gather*}
\{ v^{\alpha_4}_0, v^{\alpha_5}_0, v^{\alpha_6}_0 \} \cong G_2,
\\
\{ v^{\alpha_4}_1, v^{\alpha_4}_2, v^{\alpha_4}_3, v^{\alpha_4}_4,
v^{\alpha_5}_1, v^{\alpha_5}_2, v^{\alpha_5}_3, v^{\alpha_6}_1, v^{\alpha_6}_2 \}
\cong G_3,
\\
\{ v^{\alpha_4}_5, v^{\alpha_5}_4, v^{\alpha_6}_3 \} \cong G_1.
\end{gather*}
By applying (\ref{eq:local1}) to three connected components of $G^{\alpha_5}$,
we obtain
\begin{align*}
&
F(v^{\alpha_5}_0) \cdot (\rho F)(v^{\alpha_5}_0)
=
F(v^{\alpha_6}_0) \cdot (\rho F)(v^{\alpha_4}_0),
\\
&
F(v^{\alpha_5}_1) \cdot (\rho F)(v^{\alpha_5}_2) \cdot (\rho^2 F)(v^{\alpha_5}_3) \cdot
(\rho F)(v^{\alpha_5}_1) \cdot (\rho^2 F)(v^{\alpha_5}_2) \cdot (\rho^3 F)(v^{\alpha_5}_3)
\\
&\quad
=
F(v^{\alpha_4}_1) \cdot
(\rho F)(v^{\alpha_6}_1) \cdot (\rho F)(v^{\alpha_4}_2) \cdot
(\rho^2 F)(v^{\alpha_6}_2) \cdot (\rho^2 F)(v^{\alpha_4}_3) \cdot
(\rho^3 F)(v^{\alpha_4}_4)
\\
&
F(v^{\alpha_5}_4) \cdot (\rho F)(v^{\alpha_5}_4)
=
F(v^{\alpha_4}_5) \cdot (\rho F)(v^{\alpha_6}_3).
\end{align*}
By replacing $F$ with $\rho^2 F$ (resp. $\rho^6 F$) in the second (resp. third) equation,
and then by multiplying three resulting equations together, we have
$$
\Phi'_{\alpha_5}(F) \cdot \Phi'_{\alpha_5}(\rho F)
=
\Phi'_{\alpha_6}(F) \cdot \Phi'_{\alpha_4}(\rho F).
$$
Since $v^{\alpha_6}_0 < v^{\alpha_5}_0 < v^{\alpha_4}_0$ (see Figure~\ref{fig:e7}),
we obtain (\ref{eq:rel_Phi'}) in this case.
\qed
\end{demo}
\begin{corollary}
\label{cor:rel_Phi}
For a simple root $\beta \in \Pi$, we put
$$
\tilde{\Phi}_\beta(F)
=
\prod_{k=0}^{h-1} \Phi_\beta(\rho^k F).
$$
Then we have for fixed $\alpha \in \Pi$,
\begin{equation}
\label{eq:rel_Phi}
\prod_{\beta \in \Pi}
\tilde{\Phi}_\beta(F)^{\langle \beta, \alpha^\vee \rangle}
=
A^{h \delta_{\alpha,\alpha_{\max}}} B^{h \delta_{\alpha,\alpha_{\min}}}
\end{equation}
for any $F \in \KK^{A,B}(P)$.
\end{corollary}
\begin{demo}{Proof}
Since $\rho$ has a finite order $h$, Equation (\ref{eq:Phi=Phi'}) implies
$\tilde{\Phi}_\beta(F) = \prod_{k=0}^{h-1} \Phi'_\beta(\rho^{k+m} F)$ for any integer $m$.
Hence (\ref{eq:rel_Phi}) follows from (\ref{eq:rel_Phi'}).
\qed
\end{demo}
Now we are ready to prove Theorem~\ref{thm:main1} (c).
\begin{demo}{Proof of Theorem~\ref{thm:main1} (c)}
We define an element $\tilde{\mu}(F) \in \mathfrak{h}^*$ for $F \in \KK^{A,B}(P)$ by putting
$$
\tilde{\mu}(F)
=
\sum_{\alpha \in \Pi} \log \tilde{\Phi}_\alpha(F) \cdot \alpha.
$$
Note that, if $\varpi^\vee$ is the fundamental coweight corresponding to $\alpha$,
then we have
$$
\log \tilde{\Phi}_\alpha(F) = \langle \varpi^\vee, \tilde{\mu}(F) \rangle.
$$
Since $\varpi_{\max} = - w_0 \lambda$ (resp. $\varpi_{\min} = \lambda$)
is the fundamental weight corresponding to the color $\alpha_{\max}$ (resp. $\alpha_{\min}$)
of the maximum (resp. minimum) element of $P$ (see Proposition~\ref{prop:minuscule2} (b)),
it is enough to show
\begin{equation}
\label{eq:mu}
\tilde{\mu}(F)
=
h a \cdot \varpi_{\max} + h b \cdot \varpi_{\min},
\end{equation}
where $a = \log A$, $b = \log B$.
Since we have
$$
\sum_{\beta \in \Pi} \langle \beta, \alpha^\vee \rangle \log \tilde{\Phi}_\beta(F)
=
h a \delta_{\alpha,\alpha_{\max}}
+
h b \delta_{\alpha,\alpha_{\min}}
$$
by Corollary~\ref{cor:rel_Phi}, we see that for any $\alpha \in \Pi$
\begin{align*}
s_\alpha \tilde{\mu}(F)
&=
\sum_{\beta \in \Pi}
\log \tilde{\Phi}_\beta(F) \cdot ( \beta - \langle \beta, \alpha^\vee \rangle \alpha )
\\
&=
\sum_{\beta \in \Pi} \log \tilde{\Phi}_\beta(F) \beta
-
\left( \sum_{\beta \in \Pi} \langle \beta, \alpha^\vee \rangle \log \tilde{\Phi}_\beta(F) \right) \alpha
\\
&=
\tilde{\mu}(F)
-
\left( \delta_{\alpha,\alpha_{\max}} h a + \delta_{\alpha,\alpha_{\min}} h b \right) \alpha.
\end{align*}
Let $c = s_{\alpha_1} \cdots s_{\alpha_n}$ be a Coxeter element
and put
$$
\beta_{\max} = s_{\alpha_1} \cdots s_{\alpha_{k-1}} \alpha_k,
\quad
\beta_{\min} = s_{\alpha_1} \cdots s_{\alpha_{m-1}} \alpha_m,
$$
where $\alpha_k = \alpha_{\max}$, $\alpha_m = \alpha_{\min}$.
Then we have
$$
c \tilde{\mu}(F)
=
\tilde{\mu}(F)
-
\left( h a \cdot \beta_{\max} + h b \cdot \beta_{\min} \right).
$$
By substituting $\beta_{\max} = \varpi_{\max} - c \varpi_{\max}$
and $\beta_{\min} = \varpi_{\min} -c \varpi_{\min}$ (see (\ref{eq:Coxeter2})),
we have
$$
c \left( \tilde{\mu}(F) - h a \cdot \varpi_{\max} - h b \cdot \varpi_{\min} \right)
=
\tilde{\mu}(F) - h a \cdot \varpi_{\max} - h b \cdot \varpi_{\min}.
$$
Then it follows from Lemma~\ref{lem:Coxeter} (a) that
$$
\tilde{\mu}(F) - h a \cdot \varpi_{\max} - h b \cdot \varpi_{\min} = 0.
$$
This completes the proof of (\ref{eq:mu}) and hence of Theorem~\ref{thm:main1} (c).
\qed
\end{demo}
\subsection{%
File homomesy for birational Coxeter-motion
}
In this subsection we prove Theorem~\ref{thm:main2} (b).
The following proposition is a consequence of Lemma~\ref{lem:decomp}
and Equations (\ref{eq:local2}), (\ref{eq:local4}).
\begin{prop}
\label{prop:rel_Phi}
Let $\sigma_\alpha = \prod_{v \in P^\alpha} \tau_v : \KK^{A,B}(P) \to \KK^{A,B}(P)$
be the product of toggles over $P^\alpha$.
Then we have
\begin{enumerate}
\item[(a)]
For a simple root $\alpha$, we have
$$
\Phi_\alpha(F) \cdot \Phi_\alpha ( \sigma_\alpha F )
=
A^{\delta_{\alpha,\alpha_{\max}}} B^{\delta_{\alpha,\alpha_{\min}}}
\prod_{\beta \sim \alpha} \Phi_\beta(F)^{-\langle \beta, \alpha^\vee \rangle}.
$$
\item[(b)]
For simple roots $\alpha \neq \beta$, we have $\Phi_\beta (\sigma_\alpha F) = \Phi_\beta(F)$.
\end{enumerate}
\end{prop}
By using this proposition, we can complete the proof of the file homomesy phenomenon
for birational Coxeter-motion.
\begin{demo}{Proof of Theorem~\ref{thm:main2} (b)}
We define an element $\mu(F) \in \mathfrak{h}^*$ for $F \in \KK^{A,B}(P)$ by putting
$$
\mu(F) = \sum_{\beta \in \Pi} \log \Phi_\beta(F) \cdot \beta.
$$
First we prove
\begin{equation}
\label{eq:s-mu}
\mu (\sigma_\alpha F)
=
s_\alpha \mu(F)
+
\left(
\delta_{\alpha,\alpha_{\max}} a + \delta_{\alpha,\alpha_{\min}} b
\right) \alpha
\end{equation}
where $a = \log A$ and $b = \log B$.
By using Proposition~\ref{prop:rel_Phi}, we have
\begin{align*}
\mu (\sigma_\alpha F)
&=
\sum_{\beta \neq \alpha} \log \Phi_\beta (\sigma_\alpha F) \beta
+
\log \Phi_\alpha (\sigma_\alpha F) \alpha
\\
&=
\sum_{\beta \neq \alpha} \log \Phi_\beta (F) \beta
+
\left(
\delta_{\alpha,\alpha_{\max}} a + \delta_{\alpha,\alpha_{\min}} b
- \sum_{\beta \neq \alpha} \langle \beta, \alpha^\vee \rangle \log \Phi_\beta(F)
- \log \Phi_\alpha(F)
\right) \alpha
\\
&=
\sum_{\beta \neq \alpha} \log \Phi_\beta(F) (\beta - \langle \beta, \alpha^\vee \rangle \alpha)
- \log \Phi_\alpha(F) \alpha
+
\left(
\delta_{\alpha,\alpha_{\max}} a + \delta_{\alpha,\alpha_{\min}} b
\right) \alpha
\\
&=
\sum_{\beta \neq \alpha} \log \Phi_\beta(F) s_\alpha(\beta)
+ \log \Phi_\alpha(F) s_\alpha(\alpha)
+
\left(
\delta_{\alpha,\alpha_{\max}} a + \delta_{\alpha,\alpha_{\min}} b
\right) \alpha
\\
&=
s_\alpha(\mu(F))
+
\left(
\delta_{\alpha,\alpha_{\max}} a + \delta_{\alpha,\alpha_{\min}} b
\right) \alpha.
\end{align*}
Suppose that $\gamma = \sigma_{\alpha_1} \cdots \sigma_{\alpha_n}$,
and let $c = s_{\alpha_1} \cdots s_{\alpha_n}$ be the corresponding Coxeter element.
Then, by iteratively using (\ref{eq:s-mu}), we obtain
$$
\mu( \gamma F )
=
c ( \mu(F) ) + a \cdot \beta_{\max} + b \cdot \beta_{\min},
$$
where $\beta_{\max}$ and $\beta_{\min}$ are defined by
$\beta_{\max} = s_{\alpha_1} \cdots s_{\alpha_{k-1}} \alpha_k$,
$\beta_{\min} = s_{\alpha_1} \cdots s_{\alpha_{m-1}} \alpha_m$
with $\alpha_k = \alpha_{\max}$ and $\alpha_m = \alpha_{\min}$.
Hence by induction on $k$ we see that
$$
\mu (\gamma^k F)
=
c^k(\mu(F))
+
a \sum_{i=0}^{k-1} c^i(\beta_{\max})
+
b \sum_{i=0}^{k-1} c^i(\beta_{\min}).
$$
Therefore we have
$$
\sum_{k=0}^{h-1} \mu ( \gamma^k F )
=
\sum_{k=0}^{h-1} c^k (\mu(F))
+
a \sum_{k=1}^{h-1} \sum_{i=0}^{k-1} c^i(\beta_{\max})
+
b \sum_{k=1}^{h-1} \sum_{i=0}^{k-1} c^i(\beta_{\min}).
$$
Now it follows from (\ref{eq:Coxeter1}) and (\ref{eq:Coxeter3}) that
$$
\sum_{k=0}^{h-1} \mu ( \gamma^k F )
=
a h \cdot \varpi_{\max} + b h \cdot \varpi_{\min}.
$$
By the definition of $\mu(F)$, we have
$$
\sum_{\beta \in \Pi}
\log \left( \prod_{k=0}^{h-1} \Phi_\beta( \gamma^k F) \right) \cdot \beta
=
a h \cdot \varpi_{\max} + b h \cdot \varpi_{\min}.
$$
Then we can complete the proof by pairing both sides with the fundamental coweights $\varpi^\vee$.
\qed
\end{demo}
arXiv:2004.14439
\section{Introduction}
Web information is available in the form of websites, micro-blogs, email, and other social networking sites.Expert ranking has emerged as a new area of research since information present on the web can be authored by anyone especially in the case of micro-blogs, thus a requirement was felt to rank the expertise level of contributors\cite{expert20a}.Expert is someone with high level of knowledge related to a certain subject ~\cite{6}.Expert ranking is studied in different scenarios like finding an expert in micro-blogs~\cite{1}.Newer areas of expert findings includes author ranking~\cite{2,3,3a} and employee/contractor ranking in a large organization or online job portals~\cite{4}~\cite{expert20b},~\cite{expert20c}Recently finding influencers in bibliometric~\cite{expert20e} networks also come under expert finding techniques in academia. More recently authors~\cite{5} have proposed evaluation of web content credibility through expert ranking.The rise in large online enterprises, expert identification in large organizations and enterprises has emerged.
Two major approaches towards expert ranking are graph-based and document-based~\cite{7,17}. The graph-based techniques explore the social connections of the author, while the document-based techniques explore the documents produced by the author as evidence of expertise. Researchers have also utilized a hybrid approach to expert ranking~\cite{19}.
Previous graph-based techniques employ PageRank-like algorithms for ranking; unfortunately, none of them address the issues of collusion and negative referrals. Document-based techniques relate an author to a topic using methods such as LDA and topic modeling, but they are limited in scenarios where quality documents are not available. The problem addressed in this research is the identification and ranking of experts in large organizations. The objective of expert ranking in enterprise organizations is to answer questions like ``Who is an expert on subject X?'' or ``Does X have knowledge of subject Y?''. Although previous researchers have claimed that document-based techniques suit organizations, since documents there are maintained in an organized manner, the authors consider the situation where there is an intention to find the tacit expertise~\cite{expert20d,expert20f} of an employee that is not present in documented form, or where management wants to assess an employee's expertise for an additional task unrelated to his or her documented area of expertise. In such scenarios document-based techniques are insufficient, and link/graph-based techniques are also unable to address the issue; to fill this research gap, the authors propose a reputation-based scheme. The reputation is calculated from feedback given by other users. The technique is compared against baselines, namely the PageRank-based ExpertRank~\cite{7}, WorkerRank~\cite{8}, and expertise assessment in online labor markets~\cite{4}. Experimental results on precision and mean average error show that the proposed EER technique outperforms the baselines in identifying experts.
The rest of the paper consists of the following sections: Research Objectives, Related Work (which reviews the literature), The EER Technique, Problem Formulation, Evaluations, and Conclusion.
\section{Research Objectives}
Expert ranking in the context of enterprises and large organizations is rarely researched. However, the evolution of online enterprises spread across continents has raised the importance of these techniques, specifically when skill sets are not documented (tacit). Previous techniques generally had limitations in handling negative referrals. In the domain under discussion, existing reputation-based approaches are inadequate for truly representing opinions and are unable to adapt to dynamic behavior. Their reputation calculation structures cannot incorporate all kinds of interactions, resulting in reputation inflation. The authors therefore aim at the following research objectives:
\begin{itemize}
\item To design expert rank algorithms for a large organization/enterprises to find the tacit talent of employees.
\item A reputation-based expert rank technique based on the type of interactions.
\item A technique that can solve negative referrals and collusion problems of previous graph-based techniques.
\item To propose a solution that can use the time-based reputation with the ability to judge dynamic behaviors.
\end{itemize}
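The intended flavor of such a scheme can be sketched as follows. This is purely illustrative: the interaction-type weights, the exponential half-life, and the rater-reputation factor are assumptions made for the sketch and are not the EER formulas proposed in this paper.

```python
from dataclasses import dataclass

# Hypothetical weights per interaction type (illustrative values only):
# an endorsement counts more than a casual reply, and a negative referral
# subtracts from the score instead of adding to it.
WEIGHTS = {"endorsement": 1.0, "reply": 0.4, "negative_referral": -1.5}

@dataclass
class Interaction:
    kind: str          # one of the WEIGHTS keys
    rater_rep: float   # reputation of the employee giving the feedback
    age_days: float    # how long ago the interaction happened

def reputation(interactions, half_life_days=90.0):
    """Time-decayed, interaction-weighted reputation score.

    Older feedback decays exponentially, so the score can follow dynamic
    behavior; weighting by the rater's own reputation makes collusion
    among low-reputation accounts less effective.
    """
    score = 0.0
    for it in interactions:
        decay = 0.5 ** (it.age_days / half_life_days)
        score += WEIGHTS[it.kind] * it.rater_rep * decay
    return score

# Fresh praise outweighs the same praise received six months ago.
assert reputation([Interaction("endorsement", 1.0, 0.0)]) > \
       reputation([Interaction("endorsement", 1.0, 180.0)])
```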
\section{Related Work}
This section of the paper highlights the literature of expert ranking in different domains with focus on enterprises and large organizations. The authors have further categorized literature according to different techniques utilized in expert identification.
\subsection{Expert Ranking domain category: Enterprise and micro blogs}
Knowledge in organizations and large enterprises is documented, and the quality of information is high owing to disciplined and organized documentation policies, compared to online knowledge communities and blogs~\cite{7}. Nevertheless, expert finding remains a problem in organizations, as discussed by \cite{9} and \cite{10}. Sources of information, including self-disclosed information and document- and social-network-based information, can provide evidence for expert ranking in enterprises.
Self-disclosed information is hard to obtain and keep up to date, while document-based and social-network-based indicators are significant and can be automated to find an expert.
Most organizations use expert finding only to locate people outside the organization~\cite{11}, primarily due to the assumption that employees within the organization are well known for their expertise and skills. However, with the emergence of large enterprises that are geographically distributed, with employees of different knowledge backgrounds, education, and skills, and with employees merged in from other organizations, expert finding within the organization has also emerged as an important dimension. In certain scenarios the knowledge of employees is not documented and is thus tacit.
Research in the domain of expert finding falls into two categories, graph-based and document-based; a few researchers have explored the problem domain under discussion, but they utilized one of these two approaches or a hybrid. In the case of employee ranking in organizations and enterprises, the applicable technique depends upon the type of knowledge available regarding employee expertise. If the knowledge regarding employees is tacit and no documentary proof exists, social interactions are the source of information, while link-based and document-based techniques have been utilized when explicit knowledge in the form of documentary proof or email/official communication exists, as shown in Figure~\ref{fignew}. Social interactions have been explored using link- and graph-based techniques; however, due to their limitations, researchers have proposed reputation-based techniques using different mathematical structures. A recent work, WorkerRank, ranks workers by combining reputation information with document-based techniques. The authors of another reputation-based technique utilizing an HMM model~\cite{4} have also addressed the problem of expert identification in organizations.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{f.jpg}
\caption{Employee Knowledge Categorization}
\label{fignew}
\end{figure}
Given below is a detailed literature survey of expert ranking techniques based on social networks (graph-based), document-based techniques, and hybrid and reputation-based techniques, as also shown in Figure~\ref{fig1a}.
\subsection{Social network/Graph-based Expert Identification Systems}
There has been an increase in research related to the identification of experts. COGNOS~\cite{12} is a technique based upon the wisdom of crowds. It utilizes Twitter lists to find an expert: the name most commonly occurring in the lists for a subject returned by the crawler is considered the subject expert. COGNOS was tested with plenty of experiments, which however highlighted a shortcoming against malicious and fake users. Such users can create fake lists so as to undermine true experts.
The Twitter Who-to-Follow service~\cite{13} provides query capability to find an expert. It utilizes self-disclosed information of users, in terms of bio and other details, along with their social connections. In a more recent work~\cite{14}, the authors carried out a survey which revealed that an advisor can be regarded as reliable and trustworthy through analysis of his review history.
PageRank~\cite{15} ranks a web page highly if it is pointed to by popular pages. An advancement of the PageRank algorithm for Twitter is TwitterRank~\cite{16}. The algorithm finds experts from the number of followers of the tweets. In order to find topic experts, the authors utilized the LDA technique to relate influential tweets with a topic.
The authors of~\cite{18} are of the view that the experts returned by TwitterRank are generally already well known; they therefore proposed the metrics of number of tweets, followed tweets, and replied tweets. The data returned from these metrics are then clustered using a Gaussian Mixture Model. The evaluation was carried out against survey results.
ExpertRank~\cite{7}, inspired by the PageRank algorithm, finds an expert by analysing the documents produced by the user along with the user's influence in the social network. It targets online knowledge communities: the algorithm analyzes the documents produced by the candidate expert as well as his value/rank in the community, and together these produce better results. However, this technique and others that rely on graph-based methods are unable to differentiate between types of interactions.
These techniques take into account the number of followings or the number of interactions but are unable to address the number of unfollowings or negative interactions.
Expert finding through social referrals~\cite{19} finds experts through referrals made to neighbors chosen on the basis of profile match and associated cost. If the target is not found among the immediate neighbors, further search is carried out. When a desired expert is found upon completion of the search, the initiator pays everyone in the referral chain. Divya et al.~\cite{a1}\cite{a2} utilized the email communications of an organization to find experts using link structures. Similarly, ExpertiseRank~\cite{a6}, built from question--answer links of an online course discussion forum, is also based upon graph techniques.
\subsection{Document-based Techniques}
These techniques are utilized to find topic-based experts from the documents produced by the candidate experts. They usually employ text mining to relate a topic with an expert, most popularly via the LDA~\cite{17} technique. LDA is utilized by TwitterRank, which applies it to the tweets and posts produced by the users. The approach is effective when a large amount of organized documentation is available; note also that LDA puts all document data under one topic. Research in probabilistic topic modelling~\cite{a10} is also document-based and utilizes different mechanisms for text analysis, which are then related to topic-based experts. The work in~\cite{a11} utilizes a generalized LDA technique for topic modelling from social network annotations.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Fig1}
\caption{Classification of techniques for expert finding in enterprises}
\label{fig1a}
\end{figure}
\subsection{Hybrid Systems}
There are certain systems that utilize both document and social network information to find experts. For example, Campbell~\cite{10} utilized the popular HITS algorithm on a social network built from email communication, and also analyzed the content of the emails. However, the technique was only appropriate for small-sized data sets. Hybrid approaches~\cite{a3}\cite{a4} utilizing subject relationships and user influence gathered from link analysis have been used to find experts. Another work~\cite{19} utilized message threads in online Java forums along with analysis of the content of messages. J. Wang et al.~\cite{a5} used convolutional neural networks to predict the user with the best answer, thus indicating the subject expert.
\subsection{Reputation-based techniques}
Techniques in the literature that utilize reputation-based information for expert finding include the work by Faisal et al.~\cite{21}, which addresses the issue of expert finding in micro-blogs; the calculation structure adopted by the technique is a simple summation. WorkerRank~\cite{8} ranks employees of an organization by utilizing a weighted average method to calculate reputation, with weights generated from a normal-distribution-based calculation structure. More recently, a Hidden Markov Model (HMM) based reputation technique~\cite{4} has been proposed for expertise assessment in online labor markets. This work addresses the issues of inflated reputation scores and reputation staticity in a dynamic environment, on the view that the reputation rank must reflect recent interactions. A similar issue has been addressed by the authors of~\cite{22} in time-bound topic modeling for expert ranking. Answerer reputation~\cite{a8}\cite{a9} was measured utilizing the number of accepted and up-voted answers; that reputation model is simpler and ignores user consistency and tags~\cite{a7}.
Table~\ref{t1} summarizes various expert identification techniques along with their methodologies and the parameters required to execute them.
\begin{table*}
\centering
\caption{Expert Identification Systems Comparison}
\label{t1}
\begin{tabular}{@{}l l l@{}}
\toprule
\textbf{Framework } & \textbf{Parameters} & \textbf{Techniques}\\
\midrule
COGNOS & Twitter List & Mining Twitter List\\
Profile History & Review History & Mining Review History\\
Twitter Rank & Followers & PageRank, LDA\\
Topical Authorities &Follow,Reply & Gaussian Mixture Model\\
Expert Rank & Document Analysis, Social Rank& PageRank\\
Social Referral & Social Connections & Profile Matching\\
Fu et al &Email communication & Graph +document approach\\
Hecking et al. & Question/Answers & Graph\\
Divya et al & Email links & Link Structures\\
Worker Rank & Workers job/skills & Document + Reputation (NDR) information\\
Expert in online labor market & Skills & Hidden Markov Reputation model\\
\bottomrule
\end{tabular}
\end{table*}
\section{Problem Formulation}
This section defines the basic terminologies used in EER followed by a formal definition of the problem of Expert Employee Ranking.
\textbf{Interactions Categorization}: Interactions are categorized as alpha or beta, whereby alpha represents all kinds of positive relations, feedback, and ratings; in the case of social media, for example, these can be the number of followings. Beta represents all kinds of negative interactions, relations, feedback, and ratings; the number of unfollowings, for instance, represents beta in a social network scenario.
\textbf{Expert Employee:} An employee is considered an expert if he possesses the highest level of skills pertaining to his subject area, while \emph{tacit expertise} refers to the highest level of skills for which there is no documentary proof.
\textbf{Definition (Enterprise Expert Ranking using Employee Reputation):} Let $E=\{e_1, e_2, e_3, \cdots, e_n\}$ be the set of expert users and $Ev$ represent the expected value. Let $I=\{i_1, i_2, i_3,\cdots, i_m\}$ be the set of all interactions in which a user has participated, where categorization yields $i=\{\alpha/\beta\}$, and let $U=\{u_1, u_2, u_3,\cdots,u_n\}$ be the set of users such that each $U_i$ has some documented and undocumented skills. A user $U_i$ who has participated in the interactions $I$ is a member of $E$ if and only if $Rep(Ev, U_i, I)$ is maximal, where $Rep()$ utilizes the beta probability density function to compute the expected value for the set $I$ and user $U_i$.
\section{Expert Employee Reputation (EER)}
The proposed methodology intends to address the following problems:
\begin{itemize}
\item Identification of an expert in an organization for which no document exists, where the intention of an employer is, to find tacit expertise of an employee.
\item Finding an employee suitable for an extra task not directly related to the documented qualification and expertise.
\item Existing graph-based techniques for expertise ranking used in the given domain are vulnerable to the issue of negative referrals and collusion.
\end{itemize}
To address these problems, the authors found that document-based and graph-based techniques would be insufficient, so they propose a reputation-based approach that ranks and identifies an expert through reputation feedback and interactions.
The reputation information can be gathered from direct opinions or through organizational micro-blogs in which an employee might be involved in discussions. Figure~\ref{fig1} shows the architecture of EER. The employee interactions are categorized according to a criterion, and this information is then given as input to the beta probability density function, which generates the expected value of the employee, treated as his reputation rank.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Fig2.pdf}
\caption{EER Architecture}
\label{fig1}
\end{figure}
Reputation is defined as ``Overall quality as seen and judged by users'' according to Merriam-Webster's~\cite{23} online dictionary.
The reputation is calculated from opinions about the previous behavior of the entities, derived from the history of interactions. However, there are scenarios in which information is not present in the form of opinions; in such cases, interactions might take the form of text messages, comments, or posts. The following section presents details of the two major modules of EER, i.e. interaction categorization and the beta probability density function.
\subsection{Interactions Categorization}
Interactions are communications, ratings, followings, and query--answer replies. Interaction categorization is proposed since previous techniques were unable to distinguish them; for example, TwitterRank and other PageRank-based techniques cannot take into account the number of unfollowings or dislikes. Thus, in EER, if interactions are present in the form of text, sentiment analysis~\cite{24}\cite{a12} is one technique that can be utilized to categorize them as either positive or negative. If the interactions are measured in terms of continuous values, a threshold needs to be decided for the binary classification: values above the threshold are considered positive and those below are considered negative. The decision regarding this threshold or cutoff in most cases requires ground-truth values~\cite{25}; the mean probability, or $0.5$, is however a satisfactory threshold for interaction categorization.
Sometimes such interactions are stated explicitly as either positive or negative, in which case they can be utilized directly.
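As a minimal sketch of this categorization step (the function name, threshold default, and example values are ours, not from the paper), a fixed cutoff on normalized interaction scores might be applied as follows:

```python
def categorize_interactions(scores, threshold=0.5):
    """Split continuous interaction scores into positive (alpha) and
    negative (beta) counts using a fixed cutoff, as EER suggests."""
    alpha = sum(1 for s in scores if s > threshold)
    beta = len(scores) - alpha
    return alpha, beta

# Example: normalized scores for one employee's interactions
p, n = categorize_interactions([0.9, 0.7, 0.2, 0.6, 0.4])
print(p, n)  # 3 2
```

When explicit positive/negative labels are available, this step is skipped and the labels are counted directly.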
\subsection{Beta Probability Distribution}
The beta probability distribution~\cite{26} gives the posterior probability of binary events. The beta probability density function is parameterized by alpha and beta, representing the two events. In EER, alpha and beta are represented by the positive and negative interactions between the employees of the enterprise, respectively. The expected value is given in equation~\ref{e1a}.
\begin{equation}
\label{e1a}
E(v)= \frac{\alpha}{\alpha +\beta}
\end{equation}
The beta probability can be used to represent a subjective degree of belief. Let us take the positive and negative interactions between employees in the enterprise as the two events, where positive interactions are represented by alpha and negative interactions by beta. Let $A$ represent the number of activities of a particular context $x$; thus $A$ counts the activities of context $x$.
If $M$ represents the total number of nodes in the network, the expert node in context $x$ can be computed by utilizing equation~\ref{e1a}.
Suppose $z$ and $z_1$ are the numbers of outcomes of alpha and beta respectively, which implies that after every $z$ outcomes a further $z_1$ outcomes can be expected. In EER, let $p$ be the observed number of outcomes for $z$ and $n$ the observed number of outcomes for $z_1$; then the following equations can be derived.
\begin{equation}
\label{e1}
\alpha = z + 1
\end{equation}
\begin{equation}
\label{e2}
\beta = z_1 + 1
\end{equation}
\begin{equation}
\label{e3}
E(v)= \frac{z + 1}{z + z_1 + 2}
\end{equation}
Substituting the number of outcomes for $z$ and $z_1$, we get
\begin{equation}
\label{e4}
E(v)= \frac{p + 1}{p + n + 2}
\end{equation}
The Expert Reputation of a node is the sum of expected value $E(v)$ of all its interactions, given by
\begin{equation}
\label{e5}
E=\sum_{i=1}^{M-1} E(v)
\end{equation}
where $E$ represents the reputation of a node in a particular context.
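The computation in equations \ref{e4} and \ref{e5} can be sketched in a few lines of Python (the helper names are ours):

```python
def expected_value(p, n):
    """Posterior mean of the beta distribution, E(v) = (p+1)/(p+n+2),
    for p observed positive and n observed negative interactions."""
    return (p + 1) / (p + n + 2)

def node_reputation(interaction_counts):
    """Sum E(v) over a node's interactions with the other nodes,
    where interaction_counts is a list of (p, n) pairs."""
    return sum(expected_value(p, n) for p, n in interaction_counts)

# A node with two interaction partners: (3 positive, 1 negative)
# and (0 positive, 2 negative)
print(node_reputation([(3, 1), (0, 2)]))  # 2/3 + 1/4 ≈ 0.9167
```

Note that with no observations at all, $E(v) = 1/2$, i.e. the prior is neutral; every positive interaction pushes the value toward 1 and every negative one toward 0.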
The proposed technique differs in that it considers the type of interaction of the users. For every user in the network, the positive and negative interactions are recorded. These are then utilized to find the expected posterior behavior of the nodes by means of the beta probability expected value, as discussed above. Based upon these values all the nodes are ranked, and the node with the maximum reputation value is regarded as the expert node. Algorithm~\ref{algo-1} is given below. In lines 3--4 the interactions are categorized into positive or negative domains; lines 5--6 compute the beta probability expected value, which serves as the expert rank of the node.
\begin{algorithm}
\caption{Reputation based Expert Rank}
\label{algo-1}
\begin{algorithmic}[1]
\STATE{Load Dataset}
\LOOP
\STATE{Compute the number of positive interactions $p$ for worker $w$}
\STATE{Compute the number of negative interactions $n$ for worker $w$}
\STATE{Compute the expected value for worker $w$ as}
\STATE{$Ev \leftarrow (p+1)/(p+n+2)$}
\STATE{Let $M$ represent the total number of interactions}
\LOOP
\STATE{$E\leftarrow \sum_{i=1}^{M-1} E(v)$}
\ENDLOOP
\STATE{Let T represent the total number of nodes}
\STATE{Compare the Reputation of worker w with $T-w$ nodes }
\STATE{$max\leftarrow w$}
\ENDLOOP
\STATE{Compute max as the Expert}
\end{algorithmic}
\end{algorithm}
Lines 7--10 compute the expert rank of a node over all its interactions, and the results are summed. Lines 11--13 compare the expert rank of a node to those of the rest of the nodes in the network. The node with the maximum expert rank value is then regarded as the expert node.
Algorithm~\ref{algo-2} gives the part of the algorithm concerned with the time factor.
\begin{algorithm}
\caption{The Time factor Algorithm}
\label{algo-2}
\begin{algorithmic}[1]
\STATE{Find the time factor $t$ for node $i$ interacting with nodes $n-i$}
\IF{$t = 0$}
\STATE{Use only the latest interaction between node $i$ and nodes $n-i$}
\ELSE
\STATE{Use all interactions between node $i$ and nodes $n-i$}
\ENDIF
\ENDIF
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{algo-2} states that if the value of variable $t = 0$ then only the latest interaction is counted; otherwise, a history of interactions is counted. The algorithm can be fine-tuned to include a particular length of history, such as the previous $(1, 2, \ldots, 10)$ or any specific number of interactions, thereby excluding the rest. This feature of the algorithm addresses the dynamism of expert ranking over time, which was solved using HMM-based reputation in one of the baselines.
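A sketch of this time factor (our naming; the windowed case corresponds to the fine-tuning mentioned above), assuming each historical interaction is stored as a (positive, negative) count pair in chronological order:

```python
def windowed_expected_value(history, t=0):
    """Time factor of Algorithm 2: with t = 0 only the latest
    interaction is used; otherwise the last t interactions are used
    (all of them when t is at least the history length)."""
    window = history[-1:] if t == 0 else history[-t:]
    p = sum(pos for pos, _ in window)
    n = sum(neg for _, neg in window)
    return (p + 1) / (p + n + 2)

history = [(1, 0), (0, 1), (1, 0)]
print(windowed_expected_value(history, t=0))  # latest only: 2/3
print(windowed_expected_value(history, t=3))  # full history: 3/5 = 0.6
```

This makes the reputation value responsive to recent behavior while still allowing the full history to be consulted when desired.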
\section{Evaluation}
Two performance indicators~\cite{27} are reported in this context. The first is the mean absolute error~\cite{28} between real and predicted values: this metric compares the reputation values calculated by the proposed technique against the real values, in order to assess the ability of the technique to predict rankings close to the real ones. The other metric is precision, which is used to find the probability with which the proposed technique truly ranks the expertise.
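Hedged implementations of the two indicators (the function names and the top-$k$ convention for precision are our assumptions, not definitions from the paper):

```python
def mean_absolute_error(predicted, actual):
    """Average absolute difference between predicted and real values."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def precision_at_k(ranked_nodes, true_experts, k=10):
    """Fraction of the top-k ranked nodes that are genuine experts."""
    return sum(1 for node in ranked_nodes[:k] if node in true_experts) / k

print(mean_absolute_error([0.4, 0.6, 0.9], [0.5, 0.6, 0.7]))  # ≈ 0.1
```

The tables below report MAE per dataset and P$@$10 over the ranked lists in this spirit.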
\subsection{Baselines}
The authors carried out a comparison against ExpertRank~\cite{7}, a graph-based technique utilizing a PageRank-like algorithm, and WorkerRank~\cite{8}, a technique utilizing a normal-distribution-based reputation model for ranking employees in organizations according to their expertise.
Furthermore, another recent technique using a Hidden Markov Model (HMM)~\cite{4} as the reputation calculation structure is also used for evaluation. The HMM-based technique was proposed to solve the issues of reputation inflation and of change in reputation value over time, which are not addressed by other reputation models in general. The proposed beta-probability-based reputation model incorporates the time factor to address this shortcoming; thus, comparison to this technique is made under the time-decay metric. Since the scenario under discussion lacks documentary proof, comparison against document-based techniques is void.
\subsection{Datasets}
The effectiveness of the reputation-based expert rank algorithm is found by performing experiments on three different datasets~\cite{29}. Table~\ref{t2} highlights these datasets.
\begin{table}
\centering
\caption{Datasets}
\label{t2}
\begin{tabular}{@{}c c@{}}
\toprule
\textbf{Datasets} & \textbf{Nodes}\\
\midrule
Dataset DS1 & 46 \\
Dataset DS2 & 77\\
Dataset DS3 & 77\\
\bottomrule
\end{tabular}
\end{table}
Dataset DS1: This data set represents the interactions in terms of the expertise level of the users. The weights are based on a scale from 0 to 5. 0: I Do Not Know This Person; 1: Strongly Disagree; 2: Disagree; 3: Neutral; 4: Agree; and 5: Strongly Agree.
Dataset DS2: This data set represents the ratings given according to the degree of advice received from the users of the network. The scale of the weights is 0: I Do Not Know This Person/I Have Never Met this Person; 1: Very Infrequently; 2: Infrequently; 3: Somewhat Infrequently; 4: Somewhat Frequently; 5: Frequently; and 6: Very Frequently.
Dataset DS3: This third data set concerns the employees' knowledge of each other's skills and expertise.
The weight scale is 0: I Do Not Know This Person/I Have Never Met this Person; 1: Strongly Disagree; 2: Disagree; 3: Somewhat Disagree; 4: Somewhat Agree; 5: Agree; and 6: Strongly Agree.
\subsection{Ranking Match Test}
This test is carried out to find the capability of the proposed technique (EER) in truly representing the ratings given by the individual nodes. The results are compared against the graph-based baseline that utilizes the PageRank algorithm~\cite{15}, referred to as BL1, and a reputation-based technique for ranking workers in an enterprise~\cite{8}, referred to as BL2 in the text. The reputation structure adopted by the latter is the normal distribution (NDR)~\cite{30}. The scenario under discussion in this paper is based on the hypothesis of absence or lack of documentary proof, so comparison to the third category, document-based techniques, is not carried out.
The test was carried out on datasets DS1, DS2 and DS3. A comparison of the reputation values produced by the proposed technique against the real values is carried out using the performance metric MAE (Mean Absolute Error). This evaluates the capability of EER to predict rankings close to the real ones. The results are shown in Table~\ref{t3} and Fig.~\ref{fig2}. The MAE value for the EER technique is 0.1, as compared to 0.3 for BL1 and 1.6 for BL2 in the case of DS1. The same trend is found for the other two data sets.
\begin{table}
\centering
\caption{MAE of EER, Baseline Algorithms}
\label{t3}
\begin{tabular}{@{}l c c c c@{}}
\toprule
Data Sets& Avg Weight& EER MAE& BL1 MAE& BL2 MAE\\
\midrule
DS1 &3.8 & 0.1 & 0.3 &1.6\\
DS2 &2.76& 0.07 &0.56& 1.59\\
DS3 &4.5 &1.14& 2.3 &3.3\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Fig3.pdf}
\caption{Comparison of EER and Baselines w.r.t to MAE}
\label{fig2}
\end{figure}
\begin{table}
\centering
\caption{P$@$10 for the three datasets}
\label{t4}
\begin{tabular}{@{}l c c c @{}}
\toprule
Datasets& BL2(Worker Rank)& BL1& EER\\
\midrule
DS1 &0.0 &0.0 &0.06\\
DS2 &0.1 &0.15 &0.2\\
DS3 &0.0 &0.1 &0.2\\
\bottomrule
\end{tabular}
\end{table}
The results demonstrate the accuracy of the EER technique in representation of the expert nodes.
\textbf{\textit{Analysis}}:
The MAE results from the three datasets reveal that EER yields lower values than the baselines, as shown in Table~\ref{t3} and Figure~\ref{fig2}. Similarly, the precision results in Table~\ref{t4} and Figure~\ref{fig3} show enhanced performance of EER. The precision results of BL2 are nearly zero; analysis revealed that this is because the weights in BL2 are normally distributed. Comparatively, the performance of BL1 is better than BL2, but it also yields a zero result for dataset DS1. Further analysis of this outcome revealed that in dataset DS1 every node has an equal number of connections; the PageRank-based technique BL1 is limited in such scenarios, and even weighted PageRank variants are unable to improve the results. Careful observation of the results of the proposed technique on all three datasets shows that the precision for DS2 and DS3 is better than for DS1, the reason being the density of the data: DS1 is denser than the other two datasets, so the proposed technique produces better results for sparse datasets. It is also observed that for DS2 the precision of BL2 (WorkerRank) is quite promising, revealing that the ratings in that dataset are normally distributed as compared to DS1 and DS3. Overall, averaged over the three datasets, the precision of EER is improved by almost 7\% compared to the baselines.
This discussion leads the authors to conclude that the proposed technique EER is independent of the pattern of ratings and the density of the datasets, whereas the baselines change their behavior with these factors.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Fig4.pdf}
\caption{Comparison of techniques w.r.t Precision}
\label{fig3}
\end{figure}
\subsection{Interaction Categorization Test}
The second test was conducted to show the novel capability of the proposed EER technique to distinguish between negative and positive interactions, thereby solving the issue of negative referrals in graph-based techniques. For this, DS1 and DS2 were utilized. The data were manipulated by changing the weights of the expert node identified through the EER technique; the expert node identified through the BL1 technique was node 3. Both the EER and BL1 algorithms were executed on the manipulated dataset. Since the EER technique can identify positive and negative interactions, this time, instead of node 1, a new node, node 2, was identified as the expert node. The BL1 algorithm, however, ranked the same node 3 as the expert; this is because BL1 is based upon PageRank, which cannot take into account negative connections and referrals. It is pertinent to mention that none of the existing expert identification techniques have addressed this problem. This shows the ability of EER to identify experts by taking into account interaction categories, whereas the BL1 algorithm is unable to establish the categories of interactions.
The authors utilized DS1 and DS2 datasets to carry out the experiments.
In one scenario nodes were ranked considering only positive interactions, whereas in the second scenario nodes were ranked taking into account both positive and negative interactions, categorized using a cutoff. The variance of the top three nodes from both lists was calculated to find how closely they reflect the original opinions. The results revealed that the values are closer to the mean when both positive and negative interactions are utilized.
Table~\ref{t5} shows the MAE of the ranked lists in the two scenarios discussed above. Figure~\ref{fig4} shows that in the case of DS1 very few instances overlap between the two ranked lists.
\begin{table}
\centering
\caption{MAE of EER, BL Algorithms}
\label{t5}
\scalebox{0.85}{
\begin{tabular}{@{}c c c c @{}}
\toprule
Avg Weight set1(+ve/-ve)& Avg Weight set2(+ve) &MAE Rank set1 &MAE Rank set2\\
\midrule
3.94 &3.60 & 0.14 & 0.20\\
3.93 &3.00 & 0.12 & 0.80\\
3.89 &3.72 & 0.07 & 0.09\\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Fig5.pdf}
\caption{Interaction overlap graph DS1}
\label{fig4}
\end{figure}
The same experiment, when conducted on dataset DS2, showed almost 60\% overlap. To analyse further, a manual check of the data set was carried out, revealing that nearly all of the nodes had positive interactions, since the ratings were 3 or above.
Thus, the difference in the results was minimal. The graphical representation in Figure~\ref{fig5} depicts the interaction overlap of the two lists for DS2.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{Fig6.pdf}
\caption{Interaction overlap graph DS2}
\label{fig5}
\end{figure}
\subsection{Dynamic Behavior Test}
In order to verify the dynamism of the proposed technique EER, the authors took the specific scenario of a node with a history of 10 interactions with different other nodes. Using the proposed EER with the time factor, setting $t=0$, i.e. utilizing only the most recent interaction, produced an expected value of 0.3, as opposed to 0.4 when the whole history of interactions was utilized.
Table~\ref{t6} and Figure~\ref{fig6} show the expected value of a node with a variable history of interactions. Thus the problem of reputation staticity~\cite{4} addressed by the HMM-based technique is solved. The authors are of the view that the HMM technique is a generalized version of the beta distribution in which each state individually assumes a beta probability. From the literature, it is also evident that the HMM technique has limitations: the state durations follow a geometric pattern unsuitable for real-life examples, and the number of hidden states needs to be declared a priori. In the authors' view, the HMM-based reputation model adds further complexity when the learning module is included; furthermore, the working of the HMM model is restricted by parameter estimation, which increases its complexity~\cite{31}. The model is also hard to interpret for everyday users, compared to the simple beta probability that is parameterised by only two variables. Closely observing the results of the proposed technique and the HMM-based technique, it is evident that the change in the HMM-based technique is comparatively steeper when behavior changes, showing its ability to respond to a changing environment. The scenario of this research, however, is the capability of finding tacit experts, and tacit expertise does not usually undergo such rapid changes; such changes are usually used to estimate malicious behavior by an entity that previously had a well-established behavior.
\begin{table}
\centering
\caption{Expected Value (observation probabilities) with varying history}
\label{t6}
\scalebox{0.85}{
\begin{tabular}{@{}l c c c @{}}
\toprule
Interaction History & Expected Value(EER) & Expected Value(HMM)\\
\midrule
All History& 0.40 & 0.45\\
Latest & 0.30 & 0.20\\
Latest 3 & 0.40 & 0.40\\
Latest 5 & 0.42 & 0.50\\
Latest 7 & 0.50 & 0.60\\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Fig7.pdf}
\caption{The hidden Markov reputation model and Beta reputation model with different observation probabilities.}
\label{fig6}
\end{figure}
\section{Conclusion}
Expert ranking techniques have recently gained a lot of attention due to the emergence of online knowledge communities, micro-blogs, and internet-based business markets. Most of the work in this domain targets micro-blogs; however, few have taken up the same problem in the domain of large organizations and enterprises that are spread over continents. With the global spread of organizations, employees from diverse knowledge backgrounds join in, making expert identification a need for managers, especially when identification of the undocumented, tacit expertise of an employee is required. The authors have therefore proposed a reputation-based technique, with a strong basis in mathematics and statistics, to address this problem. The proposed technique addresses issues left unaddressed by previous techniques, namely negative referrals and collusion in graph-based techniques. Comparison is conducted against domain-specific techniques utilizing normal-distribution reputation, with its own limitations, and an HMM-based reputation model. The EER technique is able to update according to the history of interactions, so that only recent interactions are utilized for the calculations. The experimental results reveal better performance of EER in comparison to the three baselines in terms of MAE and precision.
The EER technique is not restricted to a single domain, unlike previous techniques that were specifically designed for micro-blogs or email communication; it can be adopted for other domains as well. Furthermore, with the emergence of global enterprises and online job portals, such a technique can also be helpful in fulfilling the requirement of assigning a task to the employee with the related expertise.
As a future enhancement, the application of the proposed scheme to other application domains will also be explored.
\bibliographystyle{cas-model2-names}
\section{Introduction}
The thermodynamics of black holes contains several peculiarities in contrast to standard thermodynamics.
One example is the different scaling behaviour when rescaling the thermodynamic variables. This can be directly verified noting that the entropy of a black hole is -- in general -- a quasi-homogeneous function of the extensive thermodynamic quantities describing the system~\cite{dolan2015black},
and its scaling behaviour is dictated by the Smarr relation.
Such systems are generically called \emph{quasi-homogeneous}.
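For concreteness, recall the standard definition (stated here in our notation): a function $S$ of the extensive variables is \emph{quasi-homogeneous} of degrees $(a_1,\dots,a_n)$ and order $r$ if
\begin{equation}
S\!\left(\lambda^{a_1}x_1,\dots,\lambda^{a_n}x_n\right)=\lambda^{r}\,S\!\left(x_1,\dots,x_n\right)\qquad\text{for all }\lambda>0,
\end{equation}
so that ordinary homogeneity of degree one is recovered when $a_i=1$ for all $i$ and $r=1$.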
As a consequence, it is usually recognized that using the formalism of {homogeneous} thermodynamics in the case of black
holes is not fully justified and that a modification of the thermodynamic laws for systems with
quasi-homogeneous entropy is called for~\cite{dolan2015black}.
It has been established that in systems where entropy and energy are not additive the standard way to define equilibrium
has to be adjusted and, in such case, the thermodynamic temperature may not be the correct parameter to be equated at
equilibrium~\cite{oppenheim2003thermodynamics,abe2001general,
ramirez2008violation,ramirez2008systems,biro2011zeroth,lenzi2012extensive,haggard2013death,velazquez2016remarks}.
In spite of this, it has been repeatedly argued in favour of
the existence of first order phase transitions -- i.e.,~coexistence processes -- within the framework of black hole thermodynamics.
Such arguments are based on the analogy with the van der Waals (vdW) phase diagram and use the Maxwell equal
area law to find the coexistence curve \emph{as if} the system were homogeneous (see
e.g.~\cite{dolan2015black,2012Dola,kubizvnak2012p,spallucci2013maxwell,spallucci2013maxwell?s,dolan2014vacuum,lan2015note,wei2015clapeyron,wei2015insight,mo2015coexistence,kubizvnak2017black}
and the references therein).
In this work we
consider systems whose entropy is a quasi-homogeneous function of the extensive variables
and show that Maxwell's equal area law -- based on the definition of thermodynamic equilibrium
for homogeneous systems (cf.~\cite{callen2006thermodynamics} and the discussion in Section 4.3
in~\cite{Bravetti2015377}) -- is inconsistent with the generalized Gibbs-Duhem (GGD) identity that must
hold in such cases~\cite{belgiorno2003quasi,belgiorno2011general}.
We show that this situation can be remedied by introducing a new set of variables
defining equilibrium. Based on these generalized variables, we propose a definition of thermodynamic equilibrium originating from the
GGD identity and we demonstrate that such a revision is essential in Maxwell's construction
for phase coexistence. It is worth mentioning that our \emph{generalized zeroth law} reduces to the standard definition for homogeneous systems of degree one.
To illustrate our proposal we discuss two relevant cases: on the one hand, we show that for the Schwarzschild
black hole the \emph{new} temperature characterizing equilibrium is constant, i.e.~it does not depend on its mass $M$. This coincides (up to a constant factor) with
the result in~\cite{czinner2015black},
where such parameter is obtained using a generalized zeroth law for non-extensive statistical mechanics developed
in~\cite{biro2011zeroth}. This proves that, at least in the Schwarzschild case, there is a consistency between
different approaches. On the other hand, we consider the first order phase transition in the Kerr--Anti de Sitter (Kerr--AdS) family of black holes and show that the Maxwell construction
as applied in the literature leads to a violation of the GGD. Using
the new generalized intensive parameters and according to our definition of thermodynamic equilibrium, such transition seems to
disappear. Given the importance of this example in the context of the AdS/CFT correspondence, we believe that this
can be relevant for future investigations.
This paper is structured as follows. In \secref{sec:mathincons} we review the thermodynamics of
quasi-homogeneous systems as developed in~\cite{belgiorno2003quasi,belgiorno2011general}. In~\secref{sec:GZL} we
point out the aforementioned mathematical inconsistency between Maxwell's construction based on the
standard zeroth law of thermodynamics and the Gibbs-Duhem relation in the case of quasi-homogeneous entropy,
and continue by proposing a generalized form of the zeroth law,
which is consistent with the corresponding GGD relation. To illustrate the new form of the zeroth law, we consider the examples of
Schwarzschild and Kerr--AdS black holes in \secref{sec:ex}, before we conclude in \secref{sec:conc}.
Throughout this work we use Planck units, in which $c=G=\hbar=k_{\rm B}=1$.
\section{Quasi-homogeneous thermodynamics}
\label{sec:mathincons}
In this section we briefly review some results of the thermodynamics of quasi-homogeneous systems
obtained in~\cite{belgiorno2003quasi,belgiorno2011general}.
Let us start by recalling some definitions.
Unless otherwise stated, we will not use Einstein's sum convention.
\begin{definition}[Quasi-homogeneous function] \label{def:qhf}
Let $r,\lambda\in \mathbb{R}$, $\lambda\neq 0$ and $\beta = \left(\beta_1,...,\beta_n\right)\in \mathbb{R}^n$.
A function $w$ of a set of variables $\left\{q^i\right\}_{i=1}^n$ is said to be \emph{quasi-homogeneous of degree
$r$ and type $\beta$} if
\beq \label{eq:Gquasihom}
w (\lambda^{\beta_1} q^1,..., \lambda^{\beta_n} q^n) = \lambda^{r} w(q^1,...,q^n) \,.
\eeq
\end{definition}
The particular case where $\beta_i=1$ for every value of $i$ yields the standard scaling relation of homogeneous
functions of degree $r$, i.e.
\beq
\label{eq.whom}
w(\lambda q^1,...,\lambda q^n) = \lambda^r w(q^1,...,q^n) \,.
\eeq
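As a quick numerical illustration of the definition (a sketch with a toy function, not part of the physics below), consider $w(q^1,q^2)=(q^1)^2\,q^2$, which satisfies \eqref{eq:Gquasihom} with degree $r=4$ and type $\beta=(1,2)$:

```python
import math

def w(q1, q2):
    # toy quasi-homogeneous function: degree r = 4, type beta = (1, 2)
    return q1**2 * q2

lam, q1, q2 = 1.7, 0.3, 2.5
r, b1, b2 = 4, 1, 2
lhs = w(lam**b1 * q1, lam**b2 * q2)   # w(lam^{b1} q1, lam^{b2} q2)
rhs = lam**r * w(q1, q2)              # lam^r w(q1, q2)
assert math.isclose(lhs, rhs)
```

Any choice of $\lambda\neq 0$ and of the base point gives the same agreement, since the scaling relation is exact.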
In the following we will use $S$ instead of $w$, because the function of interest in thermodynamics is the entropy.
The variables $\left\{q^i\right\}_{i=1}^n$ are the extensive variables of the system, such as internal energy $U$,
volume $V$ or number of particles $N$.
{In standard thermodynamics} of extensive systems the entropy is a homogeneous function of degree
one of the extensive variables, i.e.,
\beq\label{homS}
S(\lambda U,\lambda V,\lambda N) = \lambda S(U,V,N) \,,
\eeq
while in black hole thermodynamics the entropy is a quasi-homogeneous function as in Def.\,\ref{def:qhf}.
\begin{prop}
Let $S=S(q^1,...,q^n)$ be a quasi-homogeneous function of degree $r$ and type $\beta$. Then, the conjugate
variables to the $q^i$, defined by
\beq
\label{def.p0}
p_i\left(q^j\right) \equiv \frac{\partial}{\partial q^i} S\left(q^j\right) \,,
\eeq
are quasi-homogeneous functions of degree $r-\beta_i$ for every value of $i$.
\end{prop}
\begin{proof}
\begin{align}
\label{eq.p1}
p_i\left(\lambda^{\beta_j} q^j\right) = \frac{\partial}{\partial (\lambda^{\beta_i} q^i)}
S\left(\lambda^{\beta_j} q^j\right)
& = \frac{1}{\lambda^{\beta_i}} \frac{\partial}{\partial q^i}\big[\lambda^r S\left(q^j\right)\big]
\nonumber \\
& = \lambda^{r-\beta_i}\frac{\partial}{\partial q^i} S(q^j) \,.
\end{align}
Therefore
\beq
\label{hombeta_a-1}
p_i\left( \lambda^{\beta_j} q^j \right) =\lambda^{r-\beta_i} p_i (q^j) \,.
\eeq
\end{proof}
Note that if $S$ is homogeneous of degree $r=1$ [cf.~equation \eqref{homS} above],
then the conjugate variables $p_i$ are homogeneous functions of degree 0, i.e.~$p_i(\lambda q^j) = p_i(q^j)$:
they do not change when the system is re-scaled. Only in this case are the conjugate variables
\emph{intensive}, and we recover the usual thermodynamic quantities, e.g.~$1/T$, $p/T$,
$\mu/T$. In all other cases we shall refer to the conjugate variables $p_i$ as the \emph{would-be intensive}
quantities, as in~\cite{belgiorno2003quasi,belgiorno2011general}.
\begin{prop}[Euler's Theorem]
\label{teo.eu}
Let $S=S(q^1,...,q^n)$ be a quasi-homogeneous function of degree $r$ and type $\beta$. Then
\beq
\label{eu.teo}
r S(q^j) = \sum_{i=1}^n \beta_i \big[q^i p_i(q^j) \big] \,.
\eeq
\end{prop}
\begin{proof}
Consider the derivative of $S(\lambda^{\beta_j} q^j)$ with respect to the scaling parameter $\lambda$.
On the one hand, since $S$ is a quasi-homogeneous function of degree $r$ and type $\beta$, we have
\begin{align}
\label{teo.eu1}
\frac{\partial}{\partial \lambda} S(\lambda^{\beta_j} q^j)
=\frac{\partial}{\partial \lambda} \big[\lambda^r S(q^j) \big]
= r \lambda^{r-1} S(q^j) \,.
\end{align}
On the other hand, a direct calculation yields
\begin{align}
\label{teo.eu2}
\frac{\partial}{\partial \lambda} S(\lambda^{\beta_j} q^j)
& = \sum_{i=1}^n \frac{\partial S(\lambda^{\beta_j} q^j)}{\partial (\lambda^{\beta_i} q^i)}
\frac{\partial (\lambda^{\beta_i} q^i)}{\partial \lambda} \nonumber \\
& = \sum_{i=1}^n \frac{\partial S(\lambda^{\beta_j} q^j)}{\partial (\lambda^{\beta_i} q^i)}
\left(\beta_i \lambda^{\beta_i-1} q^i \right) \nonumber\\
& = \sum_{i=1}^n \left(\beta_i \lambda^{r-1} q^i \right) p_i(q^j),
\end{align}
where the last equality follows from Def.\,\eqref{def.p0} and \eqsref{eq.p1} and \eqref{hombeta_a-1}.
Thus, combining the results of \eqref{teo.eu1} and \eqref{teo.eu2}, \eqeqref{eu.teo} is obtained.
\end{proof}
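The same toy function used above, $w(q^1,q^2)=(q^1)^2 q^2$ with $r=4$ and $\beta=(1,2)$, gives a quick numerical check of \eqref{eu.teo}; here $p_1=\partial w/\partial q^1=2q^1q^2$ and $p_2=\partial w/\partial q^2=(q^1)^2$ (an illustrative sketch, not part of the paper):

```python
import math

def w(q1, q2): return q1**2 * q2      # toy example: r = 4, beta = (1, 2)
def p1(q1, q2): return 2 * q1 * q2    # dw/dq1
def p2(q1, q2): return q1**2          # dw/dq2

r, b1, b2 = 4, 1, 2
q1, q2 = 0.3, 2.5
euler = b1 * q1 * p1(q1, q2) + b2 * q2 * p2(q1, q2)
assert math.isclose(r * w(q1, q2), euler)   # r w = sum_i beta_i q^i p_i
```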
In standard thermodynamics the above result reduces to the well-known identity for the entropy,
\beq\label{eulers1}
S=\frac{1}{T}U+\frac{p}{T}V-\frac{\mu}{T}N \,.
\eeq
With Proposition~\ref{teo.eu}, we can write a GGD relation for the case of quasi-homogeneous thermodynamic systems.
\begin{prop}[Generalized Gibbs-Duhem identity]
Let $S(q^{1},\dots,q^{n})$ be a quasi-homogeneous function of degree $r$ and type $\beta$
and let $\left\{p_i\right\}_{i=1}^n$ be the set of conjugate variables [cf.~equation~\eqref{def.p0}].
Then,
\beq\label{eq:GD-QH}
\sum_{i=1}^n \big[ \left(\beta_i - r \right) p_i(q^j) {\rm d} q^i+ \beta_i q^i {\rm d} p_i(q^j) \big] =0 \,.
\eeq
\end{prop}
\begin{proof}
Since $S$ satisfies the hypothesis of Proposition \ref{teo.eu}, let us consider the differential of
\eqref{eu.teo}, namely
\beq
r {\rm d} S(q^j) = \sum_{i=1}^n\beta_i {\rm d}\big[q^i p_i(q^j) \big] \,.
\eeq
The left hand side is simply
\beq
\label{gd.1}
r {\rm d} S =r \sum_{i=1}^n \frac{\partial}{\partial q^i} S(q^j) {\rm d} q^i= r \sum_{i=1}^n p_i(q^j) {\rm d} q^i \,,
\eeq
whereas the right hand side yields
\beq
\label{gd.2}
\sum_{i=1}^n\beta_i {\rm d}\big[q^i p_i(q^j) \big] =
\sum_{i=1}^n \beta_i \big[q^i {\rm d} p_i(q^j) + p_i(q^j) {\rm d} q^i \big] \,.
\eeq
Subtracting \eqref{gd.1} from \eqref{gd.2} and collecting the $\beta_i$ produces the desired result.
\end{proof}
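The identity \eqref{eq:GD-QH} can be spot-checked numerically with the same toy function $w=(q^1)^2q^2$ ($r=4$, $\beta=(1,2)$, $p_1=2q^1q^2$, $p_2=(q^1)^2$), replacing the differentials by small finite displacements; the residual is then of second order in the step size (an illustrative sketch, not part of the paper):

```python
import math

def w(q1, q2): return q1**2 * q2      # toy example: r = 4, beta = (1, 2)
def p1(q1, q2): return 2 * q1 * q2    # dw/dq1
def p2(q1, q2): return q1**2          # dw/dq2

r, b = 4, (1, 2)
dt = 1e-6
q  = (1.0, 2.0)
qn = (q[0] + dt, q[1] + 3 * dt)       # arbitrary nearby state

p  = (p1(*q), p2(*q))
dq = (qn[0] - q[0], qn[1] - q[1])
dp = (p1(*qn) - p1(*q), p2(*qn) - p2(*q))

# left-hand side of the GGD identity; residual is O(dt^2)
ggd = sum((b[i] - r) * p[i] * dq[i] + b[i] * q[i] * dp[i] for i in range(2))
assert abs(ggd) < 1e-9
```

The individual terms are of order $10^{-5}$, so a residual below $10^{-9}$ shows the cancellation predicted by the identity.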
In the case where $S$ is homogeneous of degree $r$, equation
\eqref{eq:GD-QH} reduces to
\beq\label{G-D2}
(1-r) \sum_{i=1}^n p_i(q^j) {\rm d} q^i + \sum_{i=1}^n q^i {\rm d} p_i(q^j) = 0 \,.
\eeq
From this result it follows that in standard thermodynamics (with $r=1$), using the appropriate identifications of the variables, one obtains the Gibbs-Duhem relation
\beq\label{GDstandard}
U{\rm d}\left(\frac{1}{T}\right)+V{\rm d}\left(\frac{p}{T}\right)-N{\rm d}\left(\frac{\mu}{T}\right)=0\,,
\eeq
which is a mathematical identity stating that the intensive quantities are not all independent
in equilibrium~\cite{callen2006thermodynamics}.
\section{A mathematical inconsistency and its possible resolution}
\label{sec:GZL}
In this section we prove the mathematical inconsistency between the usual zeroth law
of thermodynamics, the standard Maxwell construction for coexistence between different phases
and the GGD identity, and provide a possible resolution through a
redefinition of the equilibrium parameters.
We start from the crucial fact that in ordinary thermodynamics the Gibbs-Duhem identity
\eqref{GDstandard} is mathematically consistent with Maxwell's law for phase coexistence.
Here, one considers a single system splitting into two different phases remaining at equilibrium,
i.e.~sharing the same values of their intensive quantities, while the entropy and volume of the system
change, causing a discontinuity in the extensive quantities and thus giving rise to a \emph{first order
phase transition}.
Clearly in this case the definition of equilibrium between the phases in terms of equal values
of the conjugate (intensive) quantities is consistent with~\eqref{GDstandard}.
From the above discussion on the role of the intensive variables in Maxwell's construction and its
consistency with the Gibbs-Duhem relation \eqref{GDstandard}, it is evident why such consistency is
lost in the case of quasi-homogeneous systems, where equation~\eqref{eq:GD-QH} holds. Indeed for the
two phases to be at equilibrium, the zeroth law would predict that no change in any of the would-be intensive
variables $p_{i}$ would happen, i.e.,~${\rm d} p_{i}=0$ for all $i$. This implies that the second term
in~\eqref{eq:GD-QH} vanishes identically. However, in general the first term in~\eqref{eq:GD-QH} is different
from zero, thus {leading to} an inconsistency.
For instance, in the case of a homogeneous entropy of degree $r$, it follows from the first law
${\rm d} S = \sum_{i=1}^n p_i {\rm d} q^i$ that the first term of \eqref{G-D2} is proportional to the change in
the entropy during the transition, and hence to the latent heat, which cannot be zero in a first order
phase transition.
This inconsistency leaves us with two possibilities: either one gives up the standard formulation
of phase coexistence expressed by the Maxwell construction (at least in its usual form), or one has to re-define
the conditions for equilibrium, i.e.,~the zeroth law. Due to the many indications arising from different
perspectives pointing to the fact that the zeroth law needs to be revisited for systems with non-additive
entropy and energy relations (see e.g.~\cite{oppenheim2003thermodynamics,abe2001general,
ramirez2008violation,ramirez2008systems,biro2011zeroth,lenzi2012extensive,haggard2013death,velazquez2016remarks}),
we opt for the latter route.
Motivated by the homogeneity of the first derivatives of $S$ -- see \eqref{hombeta_a-1} --
we propose the following
\begin{definition}[Generalized intensive variables] \label{def.GIV}
Let $S(q^{1},\dots,q^{n})$ be a quasi-homogeneous function of degree $r$ and type $\beta$ and let
$\left\{p_i\right\}_{i=1}^{n}$ be the set of conjugate variables. Assume that $\beta_i\neq 0$ for every $i$.
The quantities
\beq \label{eq:GenIntQuantQH}
{\tilde{p}_i(q^j)} \equiv \left[\left(q^i\right)^{\beta_i-r}\right]^{1/\beta_i} p_i (q^j)
\eeq
are called the \emph{generalized intensive variables}.
\end{definition}
Indeed, these variables reduce to~\eqref{def.p0} when $S$ is homogeneous of degree 1. Moreover, one can easily
prove the following
\begin{prop}\label{PropHomog}
The generalized intensive variables \eqref{eq:GenIntQuantQH} are quasi-homogeneous functions of degree 0.
\end{prop}
\begin{proof}
\begin{align}
{\tilde{p}_i(\lambda^{\beta_j} q^j)}
&= \left[\left(\lambda^{\beta_i} q^i\right)^{\beta_i-r}\right]^{1/\beta_i} p_i (\lambda^{\beta_j}q^j)\nonumber\\
&=\lambda^{\beta_i - r} \left[\left(q^i\right)^{\beta_i-r}\right]^{1/\beta_i}
\big[\lambda^{r-\beta_i} p_i(q^j)\big] \nonumber\\
&=\left[\left(q^i\right)^{\beta_i-r}\right]^{1/\beta_i} p_i (q^j) = {\tilde{p}_i(q^j)} \,.
\end{align}
\end{proof}
This is a desirable property for quantities defining a notion of equilibrium, as they remain invariant under a re-scaling of the system.
Note that these generalized variables could have been inferred from Eq.~(75) in~\cite{belgiorno2003quasi}.
However, in that work they were not singled out, nor were they advocated as the correct ones to describe equilibrium.
Using the generalized intensive variables~\eqref{eq:GenIntQuantQH},
we can re-write the GGD identity~\eqref{eq:GD-QH} as in~\cite{belgiorno2003quasi}:
\begin{prop}\label{prop.GGD}
Let $S(q^{1},\dots,q^{n})$ be a quasi-homogeneous function of degree $r$ and type $\beta$ and let
$\left\{{\tilde{p}_i} \right\}_{i=1}^n$ be the set of generalized intensive variables. Then,
\beq\label{GDnew}
\sum_{i=1}^n \beta_i \left(q^i\right)^{r/\beta_i} {\rm d} \tilde{p}_i(q^j) = 0\,.
\eeq
\end{prop}
\begin{proof}
From \eqeqref{eq:GenIntQuantQH} we have
\beq
p_{i}(q^j) = {\tilde{p}}_i(q^{j}) \, {\left(q^i\right)}^{r/\beta_i -1} \,,
\eeq
and we can thus rewrite the identity \eqref{eq:GD-QH} in terms of the $\tilde{p}_i(q^j)$ as
\beq\label{G-D3quasi}
\begin{split}
\sum_{i=1}^n \beta_i \Big[ \left(1 - \frac{r}{\beta_i} \right) {\tilde{p}}_i \,{\left(q^i\right)}^{r/\beta_i - 1}\,
{\rm d} q^i
+ q^i {\rm d}\left( {\tilde{p}}_i \,{\left(q^i\right)}^{r/\beta_i - 1} \right) \Big] = 0\,.
\end{split}
\eeq
By explicit calculation of the second term, we can rewrite the above identity as
\begin{eqnarray}
0 &=& \sum_{i=1}^n \beta_i \Bigg[ \left( 1 - \frac{r}{\beta_i} \right) {\tilde{p}}_i \,
{\left(q^i\right)}^{r/\beta_i - 1} \, {\rm d} q^i \nonumber \\
&+& q^i \Big[ {\left(q^i\right)}^{r/\beta_i-1}\,{\rm d} {\tilde{p}}_i+ \left( \frac{r}{\beta_i} -1 \right)
{\left(q^i\right)}^{r/\beta_i-2} \,{\tilde{p}}_i \,{\rm d} q^i \Big] \Bigg] \nonumber\\
&=& \sum_{i=1}^n \beta_i\, {\left(q^i\right)}^{r/\beta_i} {\rm d} \tilde{p}_i \,. \label{G-D4quasi}
\end{eqnarray}
\end{proof}
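Both properties -- the degree-0 invariance of Proposition~\ref{PropHomog} and the identity \eqref{GDnew} -- can be spot-checked numerically with the toy function $w=(q^1)^2q^2$ ($r=4$, $\beta=(1,2)$), whose generalized intensive variables work out to $\tilde{p}_1=2q^2/(q^1)^2$ and $\tilde{p}_2=(q^1)^2/q^2$ (an illustrative sketch, not part of the paper):

```python
import math

def ptilde(q1, q2):
    # generalized intensives of w = (q1)^2 q2, r = 4, beta = (1, 2):
    # ptilde_i = (q^i)^{(beta_i - r)/beta_i} p_i
    return 2 * q2 / q1**2, q1**2 / q2

r, b1, b2 = 4, 1, 2
q1, q2, lam = 1.3, 0.8, 2.4

# degree-0 invariance under q^i -> lam^{beta_i} q^i
a1, a2 = ptilde(q1, q2)
c1, c2 = ptilde(lam**b1 * q1, lam**b2 * q2)
assert math.isclose(a1, c1) and math.isclose(a2, c2)

# GGD in the new variables: sum_i beta_i (q^i)^{r/beta_i} d(ptilde_i) = 0
dt = 1e-6
n1, n2 = ptilde(q1 + 2 * dt, q2 - dt)          # arbitrary nearby state
ggd = b1 * q1**(r / b1) * (n1 - a1) + b2 * q2**(r / b2) * (n2 - a2)
assert abs(ggd) < 1e-9
```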
Note that the GGD identity~\eqref{eq:GD-QH} only
establishes the existence of a relation between the would-be
intensive and the would-be extensive variables, without fixing the
values of the generalized intensive variables uniquely. In this
sense our choice of the generalized intensive variables~\eqref{eq:GenIntQuantQH} is not the only one possible.
However, it is motivated by the following considerations. Firstly,~\eqref{eq:GenIntQuantQH} reduce to~\eqref{def.p0} when the entropy is homogeneous of degree $1$.
Moreover, these quantities are quasi-homogeneous functions of degree $0$, thus being true intensive variables
(under the appropriate re-scalings of the extensive ones).
Finally, as stated in Proposition~\ref{prop.GGD}, using these variables the GGD identity takes the same form as the standard one (cf.~\cite{belgiorno2003quasi}).
Indeed, Propositions~\ref{PropHomog} and~\ref{prop.GGD} suggest the following modification of the notion of
thermodynamic equilibrium:
\begin{definition}[Thermodynamic Equilibrium] \label{def.TE}
Two systems whose entropy is a quasi-homogeneous function of the same degree and type are in thermodynamic equilibrium
with each other if and only if they have the same values of the ${\tilde{p}}_i(q^j)$.
\end{definition}
This is the \emph{generalized zeroth law of thermodynamics} that we propose for any quasi-homogeneous system.
Note that Def.\,\ref{def.TE} is mathematically consistent with the identity~\eqref{eq:GD-QH}
-- cf.~\eqref{GDnew} -- even when considering processes of coexistence as in the case of the usual
Maxwell equal area law.
Let us remark that with our prescription one can consider a process of coexistence among
different phases at equilibrium without any inconsistency, as long as the definition of equilibrium is given by
equating the quantities in~\eqref{eq:GenIntQuantQH}. Note also that our simple redefinition gives a general
prediction about the quantities that have to be constant at equilibrium.
In the next section we consider examples from black hole thermodynamics and show that for the Schwarzschild
black hole our redefinition of the equilibrium condition yields a constant generalized temperature. This result coincides with a different instance of the generalized
zeroth law of thermodynamics resulting from non-extensive statistical mechanics~\cite{czinner2015black}. As a more relevant consequence, we will also show that
for the Kerr--AdS black hole our construction suggests that a reconsideration of the first order phase transition
might be in order.
\section{Quasi-homogeneous black hole thermodynamics}
\label{sec:ex}
In this section we investigate some examples for the above ideas in the context of black hole thermodynamics.
In principle, our generalization of the zeroth law can be applied to any black hole system, given that one can
easily determine the degrees of homogeneity from the Smarr relation,
\begin{equation}\label{smarr}
(D-3)M = (D-2)TS+(D-2)\Omega J
-2PV +(D-3)\Phi Q \,,
\end{equation}
where $D$ is the number of spacetime dimensions, $M$ is the mass of the black hole, $T$ is the Hawking
temperature, $S$ is the entropy and the other terms are work terms depending on the black hole family in
question~\cite{dolan2015black}. Here, we consider two in particular, namely the Schwarzschild and the Kerr--AdS
black holes, to compare our results with previous proposals and to illustrate new features.
\subsection{Schwarzschild}
The Schwarzschild black hole is the most straightforward example, since its thermodynamics is described by
only one extensive variable, i.e.,~its mass $M$. The entropy as a function of $M$ is
\begin{equation}
S(M) = 4 \pi M^2 \,,
\end{equation}
which is a homogeneous function of degree $r=2$. From this the standard temperature is derived as
\begin{equation}\label{TSchwnormal}
\frac{1}{T} = \frac{\partial S}{\partial M} = 8 \pi M \,.
\end{equation}
It is immediate to see that this is a homogeneous function of degree $1$ with respect to $M$, and therefore not a genuinely intensive quantity.
With~\eqref{TSchwnormal}
and using \eqref{eq:GenIntQuantQH}, we can obtain the generalized temperature as
\begin{equation} \label{eq:TgenSch}
\tilde{T} = TM = \frac{1}{8\pi} \,,
\end{equation}
i.e.,~a constant. Note that a constant is -- trivially -- a genuinely intensive quantity, as it does not change
under any scaling of $M$.
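This constancy is easy to verify numerically (a quick sketch, not part of the paper): computing $1/T=\partial S/\partial M$ by a central difference, the product $\tilde{T}=TM$ equals $1/8\pi$ for any mass:

```python
import math

def S(M):
    return 4 * math.pi * M**2          # Schwarzschild entropy, r = 2, beta = 1

def inv_T(M, h=1e-6):
    # 1/T = dS/dM by central difference (exact for a quadratic, up to rounding)
    return (S(M + h) - S(M - h)) / (2 * h)

for M in (0.5, 1.0, 100.0):
    T_tilde = M / inv_T(M)             # generalized temperature: T~ = T M
    assert math.isclose(T_tilde, 1.0 / (8 * math.pi), rel_tol=1e-6)
```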
Note also that by \eqref{GDnew} the generalized intensive quantities cannot all be independent. This means
that in this case, since there is only one such generalized intensive quantity, it must be a constant. This
fact underlines that the Schwarzschild black hole is not a proper thermodynamic system. However, it is
interesting to see that even in this case our formalism coincides with previous approaches. Indeed,
a similar result, i.e.,~a constant generalized temperature, has been obtained previously for the
Schwarzschild black hole~\cite{czinner2015black} by using the generalized zeroth law derived from non-extensive
statistical mechanics proposed in~\cite{biro2011zeroth}. In the latter work, the most general conditions for thermal
equilibrium of systems with non-additive energy and entropy are established by using a method based on the
definition of the so-called formal logarithms of these quantities. However, the same method was also applied
in~\cite{czinner2016kerr} in the analysis of the Kerr black hole, resulting in a constant generalized temperature,
regardless of the angular momentum, identical to the Schwarzschild case -- an indication that the result may be
unphysical, as the authors point out themselves.
Moreover, from our formalism a dependence of the generalized temperature on the angular momentum is to be expected.
Finally, in~\cite{biro2013q,czinner2015RenyiKerr} the R\'enyi entropy was used as the formal logarithm of the Bekenstein-Hawking entropy.
In this case the temperature for the Schwarzschild case depends on the mass $M$ and
is not intensive. The connection of our proposal to these approaches and the general question of the underlying
behaviour of the energy and entropy is thus not quite clear and might be addressed in future works.
\subsection{Kerr--AdS}
Kerr black holes in asymptotically Anti--de Sitter space are thermodynamically determined by three extensive variables,
namely their mass $M$, angular momentum $J$ and pressure $P$, which is defined via the cosmological constant $\Lambda$
of the spacetime as
\begin{equation}
P = -\frac{\Lambda}{8 \pi} \,.
\end{equation}
The cosmological constant is usually included as a pressure into the thermodynamic description of black holes
\cite{2009Kast,2012Dola,dolan2015black}, and thus it turns out that the internal energy of the black hole is
\begin{equation}
U = M - PV \,,
\end{equation}
and therefore the mass of the black hole is identified with the enthalpy
\begin{equation}
M \equiv H = U + PV \,.
\end{equation}
For the Kerr--AdS black hole one obtains~\cite{2012Dola}
\begin{equation}
H(S,P,J) = \frac{1}{2} \sqrt{\frac{4 \pi ^2 J^2 \left(\frac{8 P S}{3}+1\right)+\left(\frac{8 P S^2}{3}+S\right)^2}{\pi S}} \,,
\end{equation}
and from this, provided $J \neq 0$, it is possible to calculate the expression for the internal energy as
\begin{eqnarray}
U(S,V,J) &=& \left(\frac{\pi}{S}\right)^3 \Bigg[ \left( \frac{3V}{4\pi} \right) \left\{\frac{S^2}{2\pi^2} + J^2
\right\} \nonumber\\
&& -\, J^2 \left\{\left( \frac{3V}{4\pi} \right)^2 - \left( \frac{S}{\pi} \right)^3 \right\}^{1/2} \Bigg] \,.
\end{eqnarray}
For simplicity and without loss of generality, we will limit further analyses to positive angular momenta,
i.e.,~$J>0$.
The temperature and pressure can be easily obtained as
\beq
\label{eq:temp}
T = \frac{1}{8S^4} \left[ \frac{6 \pi ^{3/2} J^2 \left(9 \pi V^2-8 S^3\right)}{\sqrt{9 \pi V^2-16 S^3}} -18 \pi ^2 J^2 V-3 S^2 V \right],
\eeq
and
\beq
\label{eq:press}
P = \frac{3}{8S^3} \left[ 2 \pi ^2 J \left(\frac{3 \sqrt{\pi } J V}{\sqrt{9 \pi V^2-16 S^3}}-J\right)-S^2 \right],
\eeq
respectively.
The case of Kerr--AdS is particularly interesting for our purposes because its equation of state, i.e.,~the relation
$P(V,T)$ at fixed $J$, qualitatively shows the same oscillatory behaviour as a vdW fluid, which is generally taken as an
indication of the presence of a first order phase transition, sometimes referred to as the CCK phase
transition~\cite{dolan2014vacuum,caldarelli2000thermodynamics,altamirano2014thermodynamics}.
To see this let us fix $J=1$ from now on and first look at \figref{fig:PoverV}, where we plot $P(V,T)$ as a function of
$V$ for various choices of $T$, with the inset zooming in on one of the curves to show the characteristic vdW bump.
\begin{figure}[h!]
\includegraphics[width=0.45\textwidth]{figure1}
\caption{Equation of state $P(V,T)$ for different values of $T$ and with $J=1$.}
\label{fig:PoverV}
\end{figure}
The region of the bump is the area where one would apply the Maxwell equal area law in analogy to ordinary thermodynamics
\cite{2012Dola}. A different (equivalent) way to look at such transition is by considering the graph of the Gibbs
free energy,
\begin{equation} \label{eq:Gtheory}
G(T,P,J) = U - TS + PV \,.
\end{equation}
To illustrate the multi-valued behaviour of the Gibbs free energy we plot in \figsref{fig:Tconst} and \ref{fig:Pconst}
the cuts along the lines of constant $T$ and $P$, respectively, featuring the characteristic swallowtails.
\begin{figure}[h!]
{\includegraphics[width=0.45\textwidth]{figure2}}
\caption{Cuts of the Gibbs free energy at constant $T$.} \label{fig:Tconst}
{\includegraphics[width=0.45\textwidth]{figure3}}
\caption{Cuts of the Gibbs free energy at constant $P$.} \label{fig:Pconst}
\end{figure}
Based on the above analogy with the vdW phase diagram, it has been argued that there is a first order phase transition between
small and large Kerr--AdS black holes, for appropriate values of the temperature and pressure.
Indeed, the standard Maxwell construction can be performed, and the form of the coexistence curve can be
calculated~\cite{dolan2014vacuum,caldarelli2000thermodynamics,altamirano2014thermodynamics,wei2016analytical}.
In the following we use this example -- which is considered to be well understood in the literature -- to claim that a revision due to the
GGD identity is called for.
We start by showing that the standard Maxwell construction in
this case is inconsistent with the GGD identity.
To do so, it is more convenient to use the equation of state
$T(S,P)$, as plotted in \figref{fig:MaxC}.
For appropriate values of $T$ and $P$, this equation of state
exhibits an oscillatory behaviour, as for the case of $P(V,T)$
above (cf.~\figref{fig:PoverV}).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.45\textwidth]{figure7}
\caption{Equation of state $T(S,P)$ for $J=1$ and $P=0.002$, together with the standard Maxwell construction. The two colored areas are equal.}
\label{fig:MaxC}
\end{center}
\end{figure}
The corresponding value of the transition temperature is also calculated using Maxwell's equal area law.
Now we proceed to verify the GGD identity for this case.
Using the Smarr relation \eqref{smarr} applied to four spacetime
dimensions (and $Q=0$), we have
\begin{equation}\label{SmarrKAdS}
S = \frac{1}{2T} U + \frac{3}{2} \frac{P}{T} V - \frac{\Omega}{T} J \,.
\end{equation}
From this, we can determine the degrees of homogeneity of the variables $T$, $P$ and $\Omega$ as $\beta_T = 1/2$, $\beta_P = 3/2$
and $\beta_\Omega = 1$ (cf.~Eq.~\eqref{eu.teo}). The overall degree of homogeneity of the entropy is $r=1$.
Therefore, the GGD~\eqref{eq:GD-QH} in the Kerr--AdS case reads
\beq\label{GGDKAdS}
-\dfrac{1}{2T}{\rm d} U+\dfrac{1}{2}\dfrac{P}{T}{\rm d} V+\dfrac{1}{2}U{\rm d}\left(\dfrac{1}{T}\right)+\dfrac{3}{2}V{\rm d}\left(\dfrac{P}{T}\right)-J{\rm d}\left(\dfrac{\Omega}{T}\right)=0 \,.
\eeq
Note that the last three terms have a form analogous to the standard Gibbs-Duhem relation~\eqref{GDstandard} (although with different coefficients).
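As a consistency check of the identity~\eqref{eq:GD-QH} for this system (an illustrative numerical sketch, not part of the paper), one can evaluate its left-hand side along an arbitrary path, taking $q=(U,V,J)$, $p=(1/T,\,P/T,\,-\Omega/T)$, $r=1$ and $\beta=(1/2,\,3/2,\,1)$, with $T$, $P$ and $\Omega$ obtained from $U(S,V,J)$ by central differences; the residual is of second order in the step:

```python
import math

def U(S, V, J):
    # internal energy U(S, V, J) of the Kerr--AdS black hole (J > 0 branch)
    a = 3 * V / (4 * math.pi)
    return (math.pi / S)**3 * (a * (S**2 / (2 * math.pi**2) + J**2)
                               - J**2 * math.sqrt(a**2 - (S / math.pi)**3))

def conjugates(S, V, J, h=1e-6):
    # dU = T dS - P dV + Omega dJ, so T = U_S, P = -U_V, Omega = U_J
    T = (U(S + h, V, J) - U(S - h, V, J)) / (2 * h)
    P = -(U(S, V + h, J) - U(S, V - h, J)) / (2 * h)
    Om = (U(S, V, J + h) - U(S, V, J - h)) / (2 * h)
    return T, P, Om

def state(t):
    # an arbitrary smooth path in (S, V, J) space
    S, V, J = 1.0 + 0.3 * t, 1.0 + 0.1 * t, 1.0 + 0.2 * t
    T, P, Om = conjugates(S, V, J)
    return U(S, V, J), V, J, 1 / T, P / T, Om / T

dt = 1e-5
U0, V0, J0, iT0, PT0, OT0 = state(0.0)
U1, V1, J1, iT1, PT1, OT1 = state(dt)

# GGD with q = (U, V, J), p = (1/T, P/T, -Omega/T), r = 1, beta = (1/2, 3/2, 1)
ggd = (-0.5 * iT0 * (U1 - U0) + 0.5 * PT0 * (V1 - V0)
       + 0.5 * U0 * (iT1 - iT0) + 1.5 * V0 * (PT1 - PT0)
       - J0 * (OT1 - OT0))
assert abs(ggd) < 1e-6
```

The individual terms are of order $10^{-5}$, so the residual confirms the first-order cancellation dictated by the identity.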
Now let us use the standard Maxwell equal area law to prove an inconsistency with~\eqref{GGDKAdS}.
By the usual argument, the two coexisting phases are in equilibrium and therefore the last three terms vanish along the coexistence process.
Thus we are left with the following expression
\beq\label{GGDKAdS2}
-\dfrac{1}{2T_{\rm tr}}\Delta U+\dfrac{1}{2}\dfrac{P_{\rm tr}}{T_{\rm tr}}\Delta V=0\,,
\eeq
where $\Delta U$ and $\Delta V$ represent the jumps in these
quantities along the coexistence line and
$T_{\rm tr}$ and $P_{\rm tr}$ are the constant values of the temperature and pressure along the transition.
Eq.~\eqref{GGDKAdS2} can be further simplified to
\beq\label{GGDKAdS3}
\Delta U-P_{\rm tr}\Delta V=T_{\rm tr}\Delta S-2P_{\rm tr}\Delta V=0\,,
\eeq
where in the last equality we made use of the first law (with $J$ constant), that is, $\Delta U=T_{\rm tr}\Delta S-P_{\rm tr}\Delta V$.
Now it is an easy exercise to use the values
of $T_{\rm tr}$, $P_{\rm tr}$, $\Delta S$ and $\Delta V$ calculated using the standard Maxwell construction
to show that Eq.~\eqref{GGDKAdS3} is not satisfied, i.e.,~that there is an inconsistency with the GGD identity.
Table~\ref{table1} shows the results of these
calculations for different values of the transition pressure
$P_{\rm tr}$.
In the first column we report the chosen values of the transition pressure $P_{\rm tr}$. In the second column
we provide the corresponding transition temperature $T_{\rm tr}$, calculated using the standard Maxwell construction. In the third
column we show that the equal area law is satisfied, by checking that the deviation from zero of the difference between the two colored areas
in Fig.~\ref{fig:MaxC} is negligible.
In the last column we demonstrate that the GGD identity~\eqref{GGDKAdS3} is not satisfied,
by showing that the deviation from zero is large compared to that of the area law, and thus not negligible.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$P_{\rm tr}$ & $T_{\rm tr}$ & ${\rm Maxwell_{\rm Dev}}$ & ${\rm GGD_{\rm Dev}}$ \\
\hline
$1.0 \times 10^{-3}$ & $2.6 \times 10^{-2}$ & $1.8 \times 10^{-2}$ & $6.5 \times 10^{-1}$ \\
\hline
$1.6 \times 10^{-3}$ & $3.2 \times 10^{-2}$ & $2.2 \times 10^{-2}$ & $4.6 \times 10^{-1}$ \\
\hline
$2.0 \times 10^{-3}$ & $3.5 \times 10^{-2}$ & $2.2 \times 10^{-3}$ & $3.6 \times 10^{-1}$ \\
\hline
$2.6 \times 10^{-3}$ & $3.9 \times 10^{-2}$ & $4.5 \times 10^{-3}$ & $2.0 \times 10^{-1}$ \\
\hline
\end{tabular}
\caption{Values of $T_{\rm tr}$, ${\rm Maxwell_{\rm Dev}}$ and ${\rm GGD_{\rm Dev}}$ obtained numerically from Maxwell's equal area law for different choices of $P_{\rm tr}$. More details in the text.}
\label{table1}
\end{center}
\end{table}
Since the analysis of the phase transition in terms of the standard
definition of thermodynamic equilibrium leads to an inconsistency
with the GGD identity, we now reconsider the phase transition in
terms of the generalized intensive quantities defined in~\eqref{eq:GenIntQuantQH}.
From~\eqeqref{eq:GenIntQuantQH} and~\eqref{SmarrKAdS}, we can infer the generalized intensive variables responsible for equilibrium as
\begin{equation}
\frac{1}{\tilde{T}} = \frac{1}{T U} \quad {\rm and} \quad \frac{\tilde{P}}{\tilde{T}} = \frac{P}{T} V^{1/3} \,.
\end{equation}
Combining the two expressions, we end up with the generalized thermodynamic equilibrium parameters
\begin{equation}
\tilde{T} = TU \quad {\rm and} \quad \tilde{P} = P U V^{1/3} \,.
\end{equation}
By construction, these functions are quasi-homogeneous of degree $0$ and type $\beta = (1,3/2,1)$
with respect to the correspondingly re-scaled
extensive variables $S$, $V$ and $J$, i.e.,~
\begin{eqnarray}
\tilde{T}(\lambda^{1} S, \lambda^{3/2} V, \lambda^{1} J) &=& \lambda^0 \,\tilde{T}(S,V,J) \,, \\
\tilde{P}(\lambda^{1} S, \lambda^{3/2} V, \lambda^{1} J) &=& \lambda^0 \,\tilde{P}(S,V,J) \,.
\end{eqnarray}
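These scaling relations can be spot-checked numerically (an illustrative sketch, not part of the paper): computing $T=\partial U/\partial S$ and $P=-\partial U/\partial V$ directly from the expression for $U(S,V,J)$ above by central finite differences, the combinations $\tilde{T}=TU$ and $\tilde{P}=PUV^{1/3}$ come out unchanged under $(S,V,J)\to(\lambda S,\lambda^{3/2}V,\lambda J)$:

```python
import math

def U(S, V, J):
    # internal energy U(S, V, J) of the Kerr--AdS black hole
    a = 3 * V / (4 * math.pi)
    return (math.pi / S)**3 * (a * (S**2 / (2 * math.pi**2) + J**2)
                               - J**2 * math.sqrt(a**2 - (S / math.pi)**3))

def tilde(S, V, J, h=1e-6):
    # T = dU/dS and P = -dU/dV by central differences (dU = T dS - P dV + Omega dJ)
    T = (U(S + h, V, J) - U(S - h, V, J)) / (2 * h)
    P = -(U(S, V + h, J) - U(S, V - h, J)) / (2 * h)
    return T * U(S, V, J), P * U(S, V, J) * V**(1 / 3)   # (T~, P~)

S, V, J, lam = 1.0, 1.0, 1.0, 2.0
Tt1, Pt1 = tilde(S, V, J)
Tt2, Pt2 = tilde(lam * S, lam**1.5 * V, lam * J)          # beta = (1, 3/2, 1)
assert math.isclose(Tt1, Tt2, rel_tol=1e-5)
assert math.isclose(Pt1, Pt2, rel_tol=1e-5)
```

Using finite differences of $U$ keeps the check internally consistent, independently of the closed-form expressions for $T$ and $P$ quoted earlier.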
In terms of $S$ and $V$ (for $J=1$) these read
\begin{align}
\label{eq:Ttilde}
\tilde{T}(S,V) = & \frac{3V\pi^{13/2}}{8S^7} \left[ -36 \left( \frac{S}{\pi} \right)^3 - 10 \left( \frac{S}{\pi} \right)^5
+ 27 \left( \frac{V}{\pi} \right)^2 +\right. \nonumber\\
& \left.9 \left( \frac{S}{\pi} \right)^2 \left( \frac{V}{\pi} \right)^2 \right]+ \frac{3\pi^6}{64S^7} \sqrt{9\pi V^2 - 16S^3}\ \times \nonumber\\
& \left[ 32 \left( \frac{S}{\pi} \right)^3 - 72 \left( \frac{V}{\pi} \right)^2
- 24 \left( \frac{S}{\pi} \right)^2 \left( \frac{V}{\pi} \right)^2 - 3\left( \frac{S}{\pi} \right)^4
\left( \frac{V}{\pi} \right)^2 \right]
\end{align}
and
\begin{align}
\label{eq:Ptilde}
\tilde{P}(S,V) = &\frac{9 \pi^{5/2} V^{4/3}}{32 S^6} \sqrt{9\pi V^2 - 16S^3} \left( 2\pi^2 + S^2 \right) \ \times \nonumber\\
& \left( 6\pi^2 V + 3 S^2 V - 2\pi^{3/2} \sqrt{9\pi V^2 - 16S^3} \right) \,.
\end{align}
Note that in order to exhibit the quasi-homogeneity of these functions by rescaling the extensive variables,
it is necessary to restore the terms containing $J$, treating it as an extensive variable.
Using these expressions, we can return to the plot of the equation of state, but now in terms of the new
variables, plotting $\tilde{P}(V,\tilde{T})$ as a function of $V$ for different choices of $\tilde{T}$.
As can be seen in \figref{fig:PtildeoverV}, the curves are monotonically decreasing; therefore, the system appears
to be stable and there is no need for the Maxwell construction.
\begin{figure}[h!]
{\includegraphics[width=0.45\textwidth]{figure4}}
\caption{Equation of state $\tilde{P}(V,\tilde{T})$ at constant $\tilde{T}$.} \label{fig:PtildeoverV}
\end{figure}
The same effect can be observed using the Gibbs free energy. We can re-express definition \eqref{eq:Gtheory}
in terms of the new intensive variables $\tilde{T}$ and $\tilde{P}$ and calculate the function
$G(\tilde{T},\tilde{P},J)$ by inverting \eqsref{eq:Ttilde} and \eqref{eq:Ptilde} numerically. The result is shown in
\figsref{fig:Ttildeconst} and \ref{fig:Ptildeconst}, where cuts at constant $\tilde{T}$ and $\tilde{P}$ show that
the Gibbs free energy in terms of the generalized intensive
variables is a single-valued smooth function. \\
\begin{figure}
{\includegraphics[width=0.45\textwidth]{figure5}}
\caption{Cuts of the Gibbs free energy at constant $\tilde{T}$.} \label{fig:Ttildeconst}
{\includegraphics[width=0.45\textwidth]{figure6}}
\caption{Cuts of the Gibbs free energy at constant $\tilde{P}$.} \label{fig:Ptildeconst}
\end{figure}
We conclude that for the Kerr--AdS black hole the standard Maxwell
equal area law is inconsistent with the GGD identity. Besides,
the use of the generalized intensive variables proposed here as the parameters defining thermodynamic equilibrium
seems to indicate that there is no first order phase transition between large and small black holes, contrary to what has previously been
argued in the literature. However, our results deserve further investigation. Perhaps a comparison with explicit models
directly constructed from statistical mechanics could shed more light on the validity of such statements. Alternatively,
an analysis involving thermodynamic response functions could be interesting, although the significance of these
response functions in the context of a generalized zeroth law should be re-evaluated.
\section{Conclusions and future directions}
\label{sec:conc}
In this work we consider a generalization of the zeroth law of thermodynamics for systems whose thermodynamic entropy
is a quasi-homogeneous function of the (would-be) extensive variables (Def.\,\ref{def.TE}). Starting from the generalized
version of the Gibbs-Duhem identity, we show how to define the generalized intensive variables that can be used to define
thermodynamic equilibrium in such general cases (Def.\,\ref{def.GIV}). Moreover, we prove that this new definition
resolves an inconsistency between the use of the standard Maxwell equal area law and the GGD identity
that is usually overlooked, especially in the literature regarding the thermodynamics of black holes. Within this context,
we consider two examples where the application of our generalized zeroth law should be relevant, namely the Schwarzschild
and the Kerr--AdS black holes.
The former is important because with our approach we recover a previous result found in~\cite{czinner2015black},
derived from a different perspective. The latter example is of interest because in the usual treatment the Kerr--AdS
family of black holes shows a behavior which is very similar to that of a van der Waals fluid, including a first order
phase transition. However, we argue that the use of the standard Maxwell equal area law in such a case is not fully consistent, and that
when the generalized intensive variables introduced here are used
to define thermodynamic equilibrium, this phase transition disappears. This statement, however, should be further
investigated in other contexts in order to corroborate such a conclusion.
Our results are intended to be a step forward towards a deeper
formal understanding of the thermodynamic properties of systems
with quasi-homogeneous entropy. However, they also call for more
detailed investigations.
One can use the arguments given here to understand whether other
reported first order phase transitions in black holes are consistent
with their respective GGD identities or not (cf.~e.g.~\cite{kubizvnak2012p,spallucci2013maxwell,spallucci2013maxwells,dolan2014vacuum,lan2015note,wei2015clapeyron,wei2015insight,mo2015coexistence,kubizvnak2017black}).
It would also be interesting to study the implications of the
present analysis for the conditions of equilibrium between black
holes and heat reservoirs, e.g.~a Schwarzschild black hole in a hot
flat space. Moreover, we would
like to extend the comparison between our approach and the one presented in~\cite{biro2011zeroth,czinner2015black,czinner2016kerr,
biro2013q,czinner2015RenyiKerr} to other cases to see whether the agreement we found for the Schwarzschild black hole
holds in more general contexts.
Besides, it would be worth using explicit calculations as in~\cite{ramirez2008violation,
ramirez2008systems} to check whether our prediction of the new thermodynamic parameters defining equilibrium can be tested
by numerical experiments, and to compare our results with the formalism proposed in~\cite{PhysRevE.88.042135,PhysRevLett.114.230601}
presenting a different instance of a GGD relation for systems with long-range interactions.
These directions will be the subject of future work.\\
\section*{Acknowledgements}
A.B. was supported by a DGAPA--UNAM postdoctoral fellowship. C.G. was supported by a UNAM postdoctoral fellowship program.
F.N. received support from PAPIIT-UNAM Grant IN-111617.
\bibliographystyle{ieeetr}
1310.3886
\section{Introduction}
Recently, measurements of the spectral deviations of the Cosmic Microwave Background (CMB)
from the black-body spectrum
have become a focus of attention as important probes of the physics of
the early Universe, because powerful CMB observation missions,
PIXIE and PRISM, have been proposed~\cite{Kogut:2011xw, Andre:2013afa}.
Although the CMB spectrum is predicted to be nearly a black-body
spectrum in the standard Big Bang scenario, distortions from the black-body spectrum can be
created by energy injections into the CMB in the early universe.
Therefore, the measurement of CMB distortions is expected to serve as a probe
of the thermal evolution of the Universe~(for recent reviews, see Refs.~\cite{Chluba:2011hw,Sunyaev:2013aoa}).
The diffusion of the acoustic waves before the recombination epoch, known as
Silk damping \cite{Silk:1967kq}, is one of the major energy injection
sources
\cite{1991MNRAS.248...52B,1991ApJ...371...14D,Hu:1994bz,Chluba:2012gq,Chluba:2012we,Dent:2012ne,Chluba:2011hw,Khatri:2012tv}.
Other
energy injection sources include massive unstable relic particles which
decay before the recombination epoch \cite{Hu:1993gc},
Hawking radiation from primordial black
holes \cite{Tashiro:2008sf}, diffusion damping of acoustic wave due to the cosmic strings~\cite{Tashiro:2012pp,Tashiro:2012nb},
and dissipation of
primordial magnetic fields before and after the recombination epoch~\cite{Jedamzik:1999bm, Sethi:2004pe, Kunze:2013uja}.
The CMB distortions are typically classified into two types,
the so-called $\mu$- and $y$-distortions, depending on the epoch when
the energy injection occurs. The $\mu$-distortions
are produced by energy injections into CMB photons in the redshift
range $2 \times 10^6 \gtrsim z \gtrsim 5 \times 10^4$. On the other
hand, the $y$-distortions are created by energy injections in the
redshift range $5 \times 10^4 \gtrsim z \gtrsim 1090$, and are also produced through the cosmic reionization
process \cite{Hu:1993tc} and the thermal Sunyaev-Zel'dovich (SZ) effect \cite{Zeldovich:1969ff}
from clusters of galaxies \cite{Refregier:2000xz}.
Current constraints on these distortions have been respectively obtained as $|\mu| < 9 \times 10^{-5}$
and $y < 1.5 \times 10^{-5}$ from COBE FIRAS~\cite{Fixsen:1996nj}.
The future mission PIXIE has the potential to give
tighter constraints on both
types of distortions, $|\mu| \sim 5 \times 10^{-8}$ and
$y \sim 10^{-8}$ at the 5$\sigma$ level \cite{Kogut:2011xw},
and these limits would be improved further by an order of magnitude by the other
proposed survey, PRISM.
In this paper, we investigate CMB distortions created by
energy injections due to the damping of the primordial magnetic fields.
Primordial magnetic fields could be the
seeds of the micro-Gauss magnetic fields observed in
galaxies and galaxy clusters.
A large number of works have studied the origin of primordial magnetic fields in the early Universe,
either during inflation (see, e.g., \cite{Ratra:1991bn,Martin:2007ue,Demozzi:2009fu} and references therein) or at phase transitions
(see, e.g., \cite{Hogan:1983zz,Vachaspati:1991nm,Enqvist:1994rm,Sigl:1996dm,Kahniashvili:2012uj} and references therein).
Current upper limits on the large-scale
magnetic fields are obtained through CMB anisotropies (see, e.g., \cite{Shaw:2010ea,Shiraishi:2012rm,Yamazaki:2012pg})
and large scale structures (see, e.g., \cite{Shaw:2010ea, Pandey:2012ss, Kahniashvili:2012dy}). These upper limits allow
the existence of nano-Gauss primordial magnetic fields on Mpc
scales. Recently, there have also been several reports of lower limits on magnetic fields in
the inter-galactic medium, with strengths larger than $O(10^{-15} -
10^{-20})$~Gauss, obtained from observations of TeV
blazars~\cite{Tavecchio:2010mk,Neronov:1900zz,Dolag:2010ni,Takahashi:2011ac},
although this claim is still under discussion \cite{Broderick:2011av, Miniati:2012ge}.
The effect of primordial magnetic fields on the CMB distortions
has been studied in Refs.~\cite{Jedamzik:1999bm, Sethi:2004pe, Kunze:2013uja}. If primordial
magnetic fields exist, they induce velocities in the photon-baryon fluid through
the Lorentz force before the recombination epoch. The induced kinetic energy dissipates
through the viscosity of the photon-baryon fluid, in analogy with
Silk damping~\cite{Jedamzik:1996wp,Subramanian:1997gi}.
Even after recombination, the magnetic fields
induce velocities in the baryon fluid via the Lorentz force acting on the residual ionized baryons.
These velocity fields also dissipate by
ambipolar diffusion and decaying magnetohydrodynamical turbulence,
and, consequently, CMB distortions are produced~\cite{Sethi:2004pe, Kunze:2013uja}.
For example, by calculating the spatially averaged distortions due to
magnetic field damping before the recombination epoch
and comparing the results with the COBE-FIRAS limits, the authors of Ref.~\cite{Jedamzik:1996wp}
obtained upper limits on the strength of the
magnetic fields of $3\times 10^{-8}$~Gauss on a comoving coherence scale of $\sim 400~{\rm
pc}$ from the constraint on $\mu$-distortions ($0.3~{\rm pc}$ for $y$-distortions).
Recently, in Ref.~\cite{Kunze:2013uja},
the authors claimed that PIXIE is expected to give a constraint of $8 \times 10^{-10}$~Gauss
from the limit on $|\mu|$.
In this paper, we focus on the anisotropies of the CMB
distortions induced by primordial magnetic fields.
Future experiments are expected to be able to measure such anisotropies of the distortions
produced before the recombination epoch. We investigate the
angular power spectrum of the $\mu$- and $y$-distortions due to the
damping of primordial magnetic fields with a given initial power
spectrum.
The shape of the angular power spectrum, in particular the existence of
a peak, is expected to depend on the kind of energy injection.
We show that
the amplitude of the spectrum depends on the structure of the primordial
magnetic fields and that the peak scale informs us about their dissipation scale.
We
also evaluate the cross-correlation between the CMB distortions and
the CMB temperature anisotropies. There are several works on such a
cross-correlation in the context of searching for primordial
non-Gaussianity~\cite{Pajer:2012vz,Ganc:2012ae}.
If magnetic fields exist,
they generate, for example, an anisotropic
stress during the radiation-dominated era which becomes a source of
additional primordial curvature perturbations.
CMB temperature fluctuations induced by such primordial curvature perturbations,
sourced by the anisotropic stress of the primordial magnetic fields,
would correlate with the CMB distortions due to the damping of the primordial magnetic fields,
because both are given in terms of convolutions of the magnetic fields.
Including the analysis of this cross-correlation, we discuss the
possibility of detecting the CMB distortions caused by primordial magnetic fields.
This paper is organized as follows.
In section 2,
we briefly review CMB distortions induced from the damping of the magnetic fields and
present the formalism for calculation of angular power spectra of anisotropies of $\mu$ and $y$ parameters.
We also discuss the cross-correlation between the CMB distortions and the CMB temperature anisotropy
induced from the primordial magnetic fields.
In section 3, we numerically calculate the angular power spectra of the CMB distortions, taking the amplitude of the
primordial magnetic fields to be the largest value allowed by current CMB observations.
In section 4, we discuss the possibility of detecting anisotropic $\mu$- and $y$-distortions in future or on-going CMB experiments.
In section 5, we conclude this paper.
Throughout this paper, we use natural units, $\hbar=c=k_B=1$.
Cosmological parameters are set according to the WMAP results~\cite{Hinshaw:2012fq}: the baryon abundance $\Omega_b=0.045$,
the cold dark matter abundance $\Omega_c=0.222$, the dark energy abundance $\Omega_\Lambda=0.733$ and the Hubble constant $H_0=70.4~{\rm km/s/Mpc}$.
\section{Formulation for CMB distortions due to primordial magnetic fields}
\subsection{Primordial magnetic fields}
We assume that spatially-varying random magnetic fields
$\mathbf{B}(z,\mathbf{x})$ are created in the early universe.
We define $\mathbf{b}(z,\mathbf{x})$ as
\begin{equation}
\mathbf{B}(z,\mathbf{x})=\frac{\mathbf{b}(z,\mathbf{x})}{a^2},
\end{equation}
where
$a$ is the scale factor and $\mathbf{b}(z,\mathbf{x})$ describes
the evolution of the magnetic fields apart from the dilution due to cosmic expansion.
In addition to this dilution by cosmic expansion, small-scale magnetic fields lose their
amplitude
through dissipation in the viscous photon-baryon
fluid before the recombination epoch
\cite{Jedamzik:1996wp}.
Accordingly, the time-evolution of $\tilde{\mathbf{b}}(z,\mathbf{k})$,
which is the Fourier transformed component of $\mathbf{b}(z,\mathbf{x})$ with comoving wavenumber
$\mathbf{k}$, is given by
\begin{equation}
\mathbf{\tilde{b}}(z,\mathbf{k})=\mathbf{\tilde{b}}(\mathbf{k})\exp(-\tau(z,\mathbf{k})),
\end{equation}
where
\begin{equation} \tau(z,\mathbf{k})=\int^{t(z)}_{t(z_0)}dt^{\prime}
~\Gamma (t^\prime,\mathbf{k}),
\end{equation}
with the dissipation rate $\Gamma (t,\mathbf{k})$.
Here, we take $z=z_0$ to be an arbitrary initial redshift
at which magnetic fields on the scales of interest have hardly decayed yet,
and $\mathbf{\tilde{b}}(\mathbf{k}) = \tilde{\mathbf{b}}(z_0,\mathbf{k})$.
Note that $\tau(z,k)>1$ means that magnetic fields with wavenumber $k$ have almost completely decayed by redshift $z$.
We assume that the initial random magnetic fields are
statistically homogeneous and isotropic and obey Gaussian statistics.
Therefore, the auto-correlation function of $\mathbf{\tilde{b}}(\mathbf{k})$ is expressed as
\begin{equation}
\left< \tilde{b}_i(\mathbf{k})\tilde{b}_j(\mathbf{p})\right>=P_{ij}(\hat k)P_B(k)(2\pi)^3\delta(\mathbf{k}+\mathbf{p}),
\label{magauto}
\end{equation}
where $k=|\mathbf{k}|$, $p=|\mathbf{p}|$ and
\begin{equation}
P_{ij}(\hat k)=\delta_{ij}-\hat{k}_i\hat{k}_j,
\end{equation}
is a projection tensor which reflects the zero divergence of magnetic fields.
We assume that the power spectrum, $P_B$, is given as a blue-tilted power-law function with a cut-off scale, defined as
\begin{equation}
P_B(k)=
\begin{cases}
n\pi^2\frac{B_0^2}{k^3}\left(\frac{k}{k_c}\right)^n& \ ;k<k_c \\
0& \ ;k>k_c
\end{cases},
\label{P_B}
\end{equation}
where $n > 0$ is the spectral index\footnote{Although we do not discuss concrete models of the generation of the primordial magnetic fields here,
such a blue-tilted power spectrum is motivated by some models, e.g., the phase transition scenarios
in the early universe \cite{Hogan:1983zz,Vachaspati:1991nm,Enqvist:1994rm,Sigl:1996dm,Kahniashvili:2012uj}.}
and
$k_c$ is the cut-off wavenumber depending on the generation mechanism of the magnetic fields.
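For numerical work, the spectrum of Eq.~(\ref{P_B}) is straightforward to implement. A minimal Python sketch (the function name and unit conventions are ours; $k$ is assumed to be given in the same units as $k_c$):

```python
import numpy as np

def P_B(k, B0, n, kc):
    """Eq. (P_B): blue-tilted power law, proportional to k^(n-3),
    with a sharp cut-off at k = kc."""
    k = np.asarray(k, dtype=float)
    return np.where(k < kc, n * np.pi**2 * B0**2 / k**3 * (k / kc)**n, 0.0)
```

For $n=4$ the spectrum grows linearly with $k$ up to the cut-off, so most of the magnetic energy resides near $k_c$.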
The dissipation rate $\Gamma (t,\mathbf{k})$ is given by the
imaginary part of the solutions of the dispersion relations for the magnetohydrodynamic (MHD)
modes: the fast- and slow-magnetosonic modes and the Alfven modes.
In Ref.~\cite{Jedamzik:1996wp}, the authors have shown that, among these modes, the Alfven and slow-magnetosonic modes
can survive below the Silk damping scale. Therefore, the
energy of the magnetic fields can be stored in these modes and dissipates
with their scale-dependent damping rates.
On scales larger than the photon mean free path
$l_\gamma$, i.e., $k/a \lesssim l_\gamma^{-1}$,
the damping of the MHD modes is caused by the photon shear viscosity.
On the other hand, on scales smaller than $l_\gamma$, i.e., $k/a \gtrsim l_\gamma^{-1}$, the MHD modes are damped by occasional collisions of
the fluid particles with the background ones, parametrized by the drag coefficient
$\alpha\simeq (l_\gamma R)^{-1}$, with $R={3\rho_b \over 4\rho_r}$ being the ratio of the energy densities of baryons, $\rho_b$, and radiation, $\rho_r$.
Furthermore, the damping rate differs between the oscillatory and overdamped limits, so the
scale-dependent dissipation rate is obtained as \cite{Jedamzik:1996wp}
\begin{equation} \Gamma (t,\mathbf{k})\sim
\begin{cases}
0 & ; {\rm for}
\ \frac{k}{a}\lesssim H
~~~{\rm (no~damping~for~superHubble~modes)}\\
\frac{l_\gamma }{10 (1+R)}\left(\frac{k}{a}\right)^2 & ; {\rm for}
\ H \lesssim \frac{k}{a} \lesssim \frac{30 v_A\cos\theta(1+R)}{l_\gamma }
~~~{\rm (oscillatory~limit~for~photon~shear~viscosity)}\\
\frac{v_A^2\cos^2\theta}{5 l_\gamma } & ;{\rm for}
\ \frac{30 v_A\cos\theta(1+R)}{l_\gamma} \lesssim \frac{k}{a} \lesssim l_\gamma^{-1}
~~~ {\rm (overdamped~limit~for~photon~shear~viscosity)}\\
\frac{c_A^2\cos^2\theta}{\alpha}\left(\frac{k}{a}\right)^2 & ;{\rm for}
\ l_\gamma^{-1} \lesssim \frac{k}{a} \lesssim \frac{\alpha}{2c_A\cos\theta}
~~~ {\rm (overdamped~limit~for~occasional~collisions)}\\
\frac{\alpha}{2} & ;{\rm for}
\ \frac{k}{a} \gtrsim \frac{\alpha}{2c_A\cos\theta}
~~{~\rm (oscillatory~limit~for~occasional~collisions)}
\end{cases},
\label{Imomega}
\end{equation}
with
\begin{equation}
v_A^2 = {\hat{\mathbf{B}}_{\rm eff}^2 \over (1+R+\hat{\mathbf{B}}_{\rm eff}^2)},
~~
c_A^2 = {\hat{\mathbf{B}}_{\rm eff}^2 \over R}.
\end{equation}
Here
$H$ is the Hubble parameter, $\theta$ is the angle between $\hat{\mathbf{B}}_{\rm eff}$ and
$\mathbf{k}$,
$v_A$ and $c_A$ denote
the relativistic and non-relativistic Alfven velocities, respectively~\cite{Jedamzik:1996wp,Seshadri:2000ky,Mack:2001gc},
and the normalized mean square of the effective background field
$\hat{\mathbf{B}}_{\rm eff}$ is given by
\begin{equation}
\hat{\mathbf{B}}_{\rm eff}^2 \equiv
{\mathbf{B}_{\rm eff}^2 \over 16 \pi \rho_r / 3}
={ \left< \mathbf{B}^2(z,\mathbf{x}) \right>
\over 16 \pi \rho_r / 3}
={3 \over 16 \pi \rho_r} \int \frac{dk}{\pi^2}k^2P_B(k)\frac{1}{a^4}e^{-2\tau(z,k)}.
\end{equation}
We hereafter simply set $\cos \theta=1$ and regard $\tau(z,\mathbf{k})$ as a function of $k$ only.
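The piecewise rate of Eq.~(\ref{Imomega}) can be transcribed directly into code. The following Python sketch is our own transcription (all arguments are assumed to be given in mutually consistent physical units, with $k_{\rm phys}=k/a$, and the branch boundaries are assumed ordered as listed; $\cos\theta=1$ by default, as adopted here):

```python
def damping_rate(k_phys, H, l_gamma, R, v_A, c_A, cos_t=1.0):
    """Gamma(k) of Eq. (Imomega); k_phys = k/a is the physical wavenumber."""
    alpha = 1.0 / (l_gamma * R)                      # drag coefficient
    if k_phys <= H:                                  # super-Hubble: no damping
        return 0.0
    if k_phys <= 30 * v_A * cos_t * (1 + R) / l_gamma:
        return l_gamma / (10 * (1 + R)) * k_phys**2  # oscillatory, shear viscosity
    if k_phys <= 1.0 / l_gamma:
        return v_A**2 * cos_t**2 / (5 * l_gamma)     # overdamped, shear viscosity
    if k_phys <= alpha / (2 * c_A * cos_t):
        return c_A**2 * cos_t**2 / alpha * k_phys**2 # overdamped, collisions
    return alpha / 2                                 # oscillatory, collisions
```

Integrating this rate over cosmic time yields the damping optical depth $\tau(z,k)$.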
\subsection{Auto- and cross-correlation functions of $\mu$ and $y$ parameters}
The dissipation of the primordial magnetic fields discussed above can
act as an energy injection mechanism which
creates CMB spectral distortions.
Since the amplitude of the magnetic fields varies spatially, the dissipated
energy of the magnetic fields also fluctuates spatially, and
it can produce
anisotropic spectral distortions of the CMB.
The spectral distortions of the CMB are characterized by $\mu$ and $y$
parameters.
These parameters are given by~\cite{Sunyaev:1980vz,Hu:1992dc} \footnote{Recently, Refs.~\cite{Chluba:2012gq,Kunze:2013uja} found that
an extra factor of $1/3$ is needed in Eqs.~(\ref{mu_general}) and (\ref{y_general}), because only $1/3$ of the injected energy contributes to the distortions. However, this modification does not change our final results significantly.}
\begin{equation}
\mu(\mathbf{x})=1.4 \int^{z_{\mu,i}}_{z_{\mu,f}} dz \frac{dQ(z,\mathbf{x})/dz}{\rho_\gamma(z)} ,\label{mu_general}
\end{equation}
and
\begin{equation}
y(\mathbf{x})=\frac{1}{4} \int^{z_{y,i}}_{z_{y,f}} dz
\frac{dQ(z,\mathbf{x})/dz}{\rho_\gamma(z)}, \label{y_general}
\end{equation}
respectively.
Here, $dQ(z,\mathbf{x})/dz$ is the energy injected at redshift $z$ and
comoving coordinate $\mathbf{x}$, $\rho_\gamma(z)$ is the photon energy
density, and we take
$z_{\mu,i}= 2\times 10^6$, $z_{\mu,f}=z_{y,i}= 5\times 10^4$ and $z_{y,f} =z_{\rm rec}= 1090$,
where $z_{\rm rec}$ is the redshift of recombination.
The injected energy is given by~\cite{Jedamzik:1999bm}
\begin{equation}
\frac{dQ}{dz}(z,\mathbf{x})=-\frac{1}{8\pi a^4}\frac{d}{dz}\left(\mathbf{b}(z,\mathbf{x})\right)^2. \label{dQdz}
\end{equation}
Substituting Eq. (\ref{dQdz}) into Eqs.~(\ref{mu_general}) and (\ref{y_general}),
$\mu$ and $y$ parameters induced by dissipating magnetic fields are respectively given by
\begin{equation}
\mu(\mathbf{x})=\frac{1.4}{8\pi}\left(\frac{(\mathbf{b}(z_{\mu,i},\mathbf{x}))^2}{\rho_{\gamma,0}}-
\frac{(\mathbf{b}(z_{\mu,f},\mathbf{x}))^2}{\rho_{\gamma,0}}\right)
\label{mu_B},
\end{equation}
\begin{equation}
y(\mathbf{x})=\frac{1}{32\pi}\left(\frac{(\mathbf{b}(z_{y,i},\mathbf{x}))^2}{\rho_{\gamma,0}}-\frac{(\mathbf{b}(z_{y,f},\mathbf{x}))^2}{\rho_{\gamma,0}}\right)
\label{y_B},
\end{equation}
where $\rho_{\gamma,0}=\rho_\gamma(0)$.
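Equations (\ref{mu_B}) and (\ref{y_B}) reduce the distortions to differences of the comoving magnetic energy density at the era boundaries. A minimal Python sketch (function names are ours; all quantities are assumed to be in the same comoving units):

```python
import math

def mu_from_b2(b2_i, b2_f, rho_gamma0):
    """Eq. (mu_B): mu from the drop of b^2 between z_{mu,i} and z_{mu,f}."""
    return 1.4 / (8 * math.pi) * (b2_i - b2_f) / rho_gamma0

def y_from_b2(b2_i, b2_f, rho_gamma0):
    """Eq. (y_B): y from the drop of b^2 between z_{y,i} and z_{y,f}."""
    return (b2_i - b2_f) / (32 * math.pi * rho_gamma0)
```

For equal energy release in the two eras, the prefactors give $\mu/y = 5.6$.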
In terms of the Fourier modes of magnetic fields, $\tilde{\mathbf{b}}(z,\mathbf{k})$, we can rewrite these
parameters as
\begin{equation}
\mu(\mathbf{x})=\frac{1.4}{8\pi \rho_{\gamma,0}}\int
\frac{d^3k}{(2\pi)^3}\int \frac{d^3k^\prime}{(2\pi)^3} ~
\tilde{\mathbf{b}}(\mathbf{k})\cdot\tilde{\mathbf{b}}^*(\mathbf{k}^\prime)C_\mu(k,k^\prime)e^{i(\mathbf{k}-\mathbf{k}^\prime)\cdot \mathbf{x}},
\label{mux}
\end{equation}
\begin{equation}
y(\mathbf{x})=\frac{1}{32\pi \rho_{\gamma,0}}\int
\frac{d^3k}{(2\pi)^3}\int \frac{d^3k^\prime}{(2\pi)^3} ~
\tilde{\mathbf{b}}(\mathbf{k})\cdot\tilde{\mathbf{b}}^*(\mathbf{k}^\prime)C_y(k,k^\prime)e^{i(\mathbf{k}-\mathbf{k}^\prime)\cdot \mathbf{x}},
\label{yx}
\end{equation}
where
\begin{equation}
C_\mu(k,k^\prime)=\exp\left(-\tau(z_{\mu,i},k)\right)\exp\left(-\tau(z_{\mu,i},k^{\prime})\right)-\exp\left(-\tau(z_{\mu,f},k)\right)\exp\left(-\tau(z_{\mu,f},k^{\prime})\right),
\label{Cmu}
\end{equation}
\begin{equation}
C_y(k,k^\prime)=\exp\left(-\tau(z_{y,i},k)\right)\exp\left(-\tau(z_{y,i},k^{\prime})\right)-\exp\left(-\tau(z_{y,f},k)\right)\exp\left(-\tau(z_{y,f},k^{\prime})\right).
\label{Cy}
\end{equation}
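The kernels $C_\mu$ and $C_y$ require the damping optical depth only at the boundary redshifts of each era. A Python sketch with a hypothetical toy damping law $\tau \propto (k/k_D)^2$ (the damping wavenumbers below are invented for illustration, not the physical values):

```python
import math

def kernel(tau_at_zi, tau_at_zf, k, kp):
    """C(k, k') of Eqs. (Cmu)/(Cy): joint survival factor at the start
    of the era minus that at its end."""
    return (math.exp(-tau_at_zi(k) - tau_at_zi(kp))
            - math.exp(-tau_at_zf(k) - tau_at_zf(kp)))

# Toy damping: tau(z, k) = (k / kD(z))^2, with kD shrinking toward low z.
tau_i = lambda k: (k / 1000.0)**2  # weak damping at the start of the era
tau_f = lambda k: (k / 100.0)**2   # strong damping at its end

assert 0.0 < kernel(tau_i, tau_f, 50.0, 50.0) < 1.0
```

The kernel vanishes both for modes far below the damping scale (which never decay within the era) and for modes that have already decayed at its start.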
Let us discuss the angular power spectrum of the distortions.
First, expanding the distortion parameters $\mu$ and $y$ in
spherical harmonics, $Y_{lm}(\hat{n})$, we obtain the mode coefficients
\begin{equation}
a^{\mu}_{lm} =\int d^2\hat{n}~ \mu(r_{\rm rec}\hat{n})Y^*_{lm}(\hat{n}),
\label{eq:alm_mu}
\end{equation}
and
\begin{equation}
a^{y}_{lm} =\int d^2\hat{n}~ y(r_{\rm rec}\hat{n})Y^*_{lm}(\hat{n}) ,
\label{eq:alm_yy}
\end{equation}
where we take the sudden last-scattering approximation in which
the observed CMB photons are last-scattered simultaneously at $z=z_{\rm rec}$.
In Eqs.~(\ref{eq:alm_mu}) and (\ref{eq:alm_yy}),
$\hat{n}$ is the direction of the line of sight and $r_{\rm
rec}=\int^{z_{\rm rec}}_0 dz/H(z)\simeq 1.4\times 10^4 ~{\rm Mpc}$
is the comoving distance from the earth to the last-scattering surface.
Angular power spectra of two kinds of distortions are given by
\begin{equation}
\left< a^X_{lm} (a^Y_{l^\prime m^\prime})^*\right> = C^{XY}_l \delta_{ll^\prime}\delta_{mm^\prime},
\end{equation}
where $X$ and $Y$ are either $\mu$ or $y$.
According to Eqs.~(\ref{magauto}), (\ref{mux}), (\ref{yx}), (\ref{eq:alm_mu}) and (\ref{eq:alm_yy}),
we have
\begin{equation}
C^{\mu\mu}_l=\frac{1.4^2}{2(2\pi)^5\rho_{\gamma,0}^2}\int dp\int
dq\int^1_{-1}d\mu ~
p^2q^2 P_B(\chi)P_B(q)\left(C_\mu(\chi,q)\right)^2f(p,q,\mu)\left(j_l(pr_{\rm rec})\right)^2,
\label{CmumuB}
\end{equation}
\begin{equation}
C^{yy}_l=\frac{1}{32(2\pi)^5\rho_{\gamma,0}^2}\int dp\int
dq\int^1_{-1}d\mu~
p^2q^2 P_B(\chi)P_B(q)\left(C_y(\chi,q)\right)^2f(p,q,\mu)\left(j_l(pr_{\rm rec})\right)^2,
\label{CyyB}
\end{equation}
and
\begin{equation}
C^{\mu y}_l=\frac{1.4}{8(2\pi)^5\rho_{\gamma,0}^2}\int dp\int
dq\int^1_{-1}d\mu~
p^2q^2 P_B(\chi)P_B(q)C_\mu(\chi,q)C_y(\chi,q)f(p,q,\mu)\left(j_l(pr_{\rm rec})\right)^2,
\label{CmuyB}
\end{equation}
where
\begin{equation}
\chi=\sqrt{p^2+q^2+2pq\mu},
\quad
f(p,q,\mu)=\frac{p^2(1+\mu^2)+4pq\mu+2q^2}{p^2+2pq\mu+q^2},
\end{equation}
and $j_l$ is the $l$-th spherical Bessel function.
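The triple integral of Eq.~(\ref{CmumuB}) can be evaluated by brute force on a grid. The following Python sketch (using SciPy's \texttt{spherical\_jn}; the spectrum, kernel and units here are toy placeholders, not the physical inputs of this paper) illustrates the structure of the computation:

```python
import numpy as np
from scipy.special import spherical_jn

def _trap(y, x, axis):
    """Trapezoidal rule along one axis of y for a (possibly non-uniform) grid x."""
    ym = np.moveaxis(y, axis, -1)
    return np.sum(0.5 * (ym[..., 1:] + ym[..., :-1]) * np.diff(x), axis=-1)

def cl_mumu(l, P_B, C_mu, rho_g0, r_rec, p, q, mu):
    """Brute-force evaluation of Eq. (CmumuB) on the given grids (toy accuracy)."""
    pref = 1.4**2 / (2 * (2 * np.pi)**5 * rho_g0**2)
    P, Q, M = np.meshgrid(p, q, mu, indexing="ij")
    chi = np.sqrt(P**2 + Q**2 + 2 * P * Q * M)
    f = (P**2 * (1 + M**2) + 4 * P * Q * M + 2 * Q**2) / chi**2
    integrand = (P**2 * Q**2 * P_B(chi) * P_B(Q) * C_mu(chi, Q)**2
                 * f * spherical_jn(l, P * r_rec)**2)
    return pref * _trap(_trap(_trap(integrand, mu, 2), q, 1), p, 0)

# Toy inputs in arbitrary units:
P_toy = lambda k: np.where(k < 10.0, 1.0, 0.0)       # flat spectrum, sharp cut-off
C_toy = lambda k, kp: (np.exp(-(k / 5.0)**2 - (kp / 5.0)**2)
                       - np.exp(-k**2 - kp**2))      # weak-minus-strong damping
p = np.linspace(0.01, 10.0, 80)
q = np.linspace(0.01, 10.0, 80)
mu = np.linspace(-0.99, 0.99, 25)
cl2 = cl_mumu(2, P_toy, C_toy, 1.0, 1.0, p, q, mu)
```

In the physical calculation, $P_B$ and $C_\mu$ are given by Eqs.~(\ref{P_B}) and (\ref{Cmu}) with $r_{\rm rec}\simeq 1.4\times 10^4~{\rm Mpc}$, so that the Bessel factor confines the $p$ integral to $p \sim l/r_{\rm rec}$.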
Note that
the finite thickness of the last scattering surface cannot be neglected for small-scale anisotropies, $l\gtrsim 1000$,
and the sudden last-scattering approximation is not valid on such scales.
According to Ref.~\cite{Subramanian:1998fn}, however,
the effect of the finite thickness of the last scattering surface can be simply taken into account in the above expressions as
\begin{equation}
C^{XY}_l\approx
\begin{cases}
C^{XY,0}_l \ {\rm ;for} \ l<r_{\rm rec}/\sigma_{\rm LS} \\
\frac{C^{XY,0}_l}{l\sigma_{\rm LS}/r_{\rm rec}} \ {\rm ;for} \ l>r_{\rm rec}/\sigma_{\rm LS}
\end{cases},
\label{finiteLS}
\end{equation}
where $C^{XY,0}_l$ is the angular power spectrum
given by Eq. (\ref{CmumuB}), (\ref{CyyB}) or (\ref{CmuyB})
and $\sigma_{\rm LS}\simeq 17~{\rm Mpc}$~\cite{Subramanian:2002nh} is the thickness of the last scattering surface.
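The suppression of Eq.~(\ref{finiteLS}) amounts to a simple post-processing step applied to the computed spectra. A Python sketch (function name ours; the defaults are the numbers quoted in the text):

```python
def thickness_suppression(Cl0, l, r_rec=1.4e4, sigma_LS=17.0):
    """Eq. (finiteLS): damp C_l for multipoles probing scales below the
    last-scattering thickness sigma_LS (lengths in Mpc)."""
    l_damp = r_rec / sigma_LS          # transition multipole, ~820 here
    return Cl0 if l < l_damp else Cl0 / (l * sigma_LS / r_rec)
```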
\subsection{Cross-correlation functions between CMB distortions and CMB temperature anisotropies}
The cross-correlation between the CMB temperature
anisotropy and the CMB spectral distortion is exactly zero as long as
the primordial curvature perturbations are purely Gaussian, and hence it
would be a new probe of non-Gaussian features of the primordial curvature perturbations \cite{Pajer:2012vz,Ganc:2012ae}.
Primordial magnetic fields generate not only the CMB $\mu$- and $y$-distortions discussed in the previous
subsection, but also
large-scale temperature anisotropies of
the CMB \cite{Shaw:2010ea,Shiraishi:2012rm,Yamazaki:2012pg},
which are in general quadratic functions of the random Gaussian magnetic fields, $\mathbf{B}$, as shown below.
Since the $\mu$ and $y$ parameters are also proportional to $\mathbf{B}^2$, as shown in Eqs.~(\ref{mu_B}) and (\ref{y_B}),
the primordial magnetic fields can produce a
non-zero cross-correlation between the CMB temperature and spectral
distortion anisotropies.
Therefore, in this section we investigate such a
cross-correlation as a signature of primordial magnetic fields.
One of the effects of the primordial magnetic fields on the CMB temperature anisotropy
is the so-called scalar passive mode,
\footnote{There is also another type of CMB fluctuation, called the scalar magnetic mode~\cite{Shaw:2009nf}.
We will discuss this mode in the following sections and show that the cross-correlation angular power spectra
due to both the scalar passive mode and the scalar magnetic mode are far below the detectable level of future experiments.
The cross-correlation angular power spectrum
between the vector or tensor mode of the temperature anisotropy and the
$\mu$ or $y$ anisotropy vanishes, since $\mu$ and $y$ are scalar-like quantities.
}
which consists of extra curvature perturbations induced by the magnetic anisotropic stress
on super-horizon scales,
generated during the radiation-dominated era before neutrino decoupling.
The scalar passive mode of the
curvature perturbations on the comoving slicing, $\zeta_{sp}$, is given by~\cite{Shaw:2009nf}
\begin{equation}
\zeta_{sp}(\mathbf{k})=-\frac{1}{3}R_\gamma\Pi_B(\mathbf{k})\left(\ln\left(\frac{\eta_\nu}{\eta_B}\right)+\frac{5}{8R_\nu}-1\right),\label{zetasp}
\end{equation}
where $\Pi_B$ is the scalar part of the anisotropic stress of
magnetic fields, $R_\gamma=\rho_\gamma/\rho_r$,
$\rho_r=\rho_\gamma+\rho_\nu$ is the energy density of relativistic
particles, $\rho_\nu$ is the neutrino energy density,
$R_\nu=\rho_\nu/\rho_r$, $\eta_\nu$ is the conformal time at neutrino
decoupling and $\eta_B$ is that at magnetic field generation.
We
hereafter set $\eta_\nu/\eta_B=10^{17}$.
This value corresponds to magnetic fields generated at the energy scale of Grand Unified Theories and maximizes the scalar passive mode.
The scalar part of the anisotropic stress, $\Pi_B$, is given by
\begin{equation}
\Pi_B(\mathbf{k})=\frac{9}{2}T_{ij}(\hat{k})\Delta^{ij}(\mathbf{k}),
\end{equation}
where
\begin{equation} \Delta^{ij}(\mathbf{k})=\frac{1}{4\pi\rho_{\gamma,0}}\int
\frac{d^3p}{(2\pi)^3}\int
\frac{d^3q}{(2\pi)^3}\tilde{b}^i(\mathbf{p})\tilde{b}^j(\mathbf{q})(2\pi)^3\delta(\mathbf{k}-\mathbf{p}-\mathbf{q}),
\end{equation}
and
\begin{equation}
T_{ij}(\hat{k})=\hat{k}_i\hat{k}_j-\frac{1}{3}\delta_{ij}.
\end{equation}
The multipole coefficient of the scalar passive mode
is given in terms of $\zeta_{sp}$ as
\begin{equation}
a^{T,sp}_{lm}=4\pi
i^l\int\frac{d^3k}{(2\pi)^3}
\Delta^S_l(k)\zeta_{sp}(\mathbf{k})Y^*_{lm}(\hat k), \label{alm_Tsp}
\end{equation}
where $\Delta^S_l(k)$ is the transfer function of the scalar mode which
we calculate using CAMB~\cite{Lewis:1999bs,CAMBsite}.
Then we can compute the cross-correlation angular power spectra between the CMB distortions and the CMB temperature anisotropies, which are defined as
\begin{equation}
\left< a^X_{lm} (a^T_{l^\prime m^\prime})^*\right>=C^{XT}_l \delta_{ll^\prime}\delta_{mm^\prime},
\end{equation}
where $X$ is $\mu$ or $y$ again.
From Eqs.~(\ref{magauto}), (\ref{mux}), (\ref{yx}), (\ref{eq:alm_mu}), (\ref{eq:alm_yy}), (\ref{zetasp}) and (\ref{alm_Tsp}),
explicit forms of $C^{XT}_l$ are given by
\begin{equation}
C^{\mu T}_l=1.4A\int dp\int dq \int ^1_{-1}d\mu p^2q^2P_B(\chi)P_B(q)C_\mu(q,\chi)\Delta^S_l(p)g(p,q,\mu)j_l(pr_{\rm rec}),
\label{CmuTB}
\end{equation}
and
\begin{equation}
C^{yT}_l=\frac{1}{4}A\int dp\int dq \int ^1_{-1}d\mu p^2q^2P_B(\chi)P_B(q)C_y(q,\chi)\Delta^S_l(p)g(p,q,\mu)j_l(pr_{\rm rec}),
\label{CyTB}
\end{equation}
where
\begin{equation}
g(p,q,\mu)=\frac{(1-3\mu^2)q^2-(1+\mu^2)p^2-(1+3\mu^2)qp\mu}{3(q^2+p^2+2qp\mu)},
\end{equation}
and $A=\frac{1}{(2\pi)^5}\frac{3R_\gamma}{2\rho_{\gamma,0}^2}\left(\ln\left(\frac{\eta_\nu}{\eta_B}\right)+\frac{5}{8R_\nu}-1\right)$.
\section{Estimate of power spectra of the CMB distortion parameters}
\subsection{Current upper limit on primordial magnetic fields and scale of magnetic field decay}
\begin{figure}[tbp]
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\subfigure[$\tau(z_{\mu,f},k)$]{
\includegraphics[width=75mm]
{tau_mu_f.eps}
\label{fig:tau_mu}
}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\subfigure[$\tau(z_{y,f},k)$]{
\includegraphics[width=75mm]
{tau_y_f.eps}
\label{fig:tau_y}
}
\end{center}
\end{minipage}
\end{tabular}
\caption{The dependence of $\tau(z_{\mu,f},k)$ and $\tau(z_{y,f},k)$ on $k$.
In both figures, we plot $B_0=1.1\times 10^2\, {\rm nG}$, $n=4$ and $k_c=100\,{\rm Mpc}^{-1}$ for $\tau(z_{\mu,f},k)$ and $k_c=10\,{\rm Mpc}^{-1}$ for $\tau(z_{y,f},k)$. The dashed line in each figure represents $\tau=1$.
}
\label{fig:tau}
\end{figure}
In order to evaluate the angular power spectra of the CMB distortion anisotropies derived in the previous section,
we need to set parameters $B_0$, $n$ and $k_c$, which specify the power spectrum of primordial magnetic fields.
In this subsection, we briefly review the current constraints on these parameters.
Then, in the next subsection,
we consider situations where observational signals of $\mu$ and $y$ anisotropies are maximized within such constraints.
One of the strongest cosmological constraints on primordial magnetic fields is
that obtained from the isotropic CMB distortion limits of COBE FIRAS~\cite{Fixsen:1996nj}: $|\mu| < 9 \times 10^{-5}$
and $y < 1.5 \times 10^{-5}$.
These limits place an upper bound on the energy density of magnetic
fields that decays during the era when CMB distortions are created: $\rho_B=\mathbf{B}_{\rm
eff}^2/8\pi\lesssim10^{-4}\times\rho_\gamma$~\cite{Jedamzik:1999bm}.
In terms of $B_0$,
this constraint leads to
\begin{equation}
B_0<1.1\times 10^2~{\rm nG}, \label{COBEcons}
\end{equation}
which is independent of $n$ and $k_c$,
if $k_c$ is at the scale where magnetic fields decay while CMB distortions can be generated.
Another important constraint is that from observations of CMB temperature anisotropies.
Ref.~\cite{Yamazaki:2010nf} derived the upper limit on the amplitude of primordial magnetic fields
\begin{equation}
|B_\lambda|<3.0~{\rm nG}, \label{CMBcons}
\end{equation}
where $B_\lambda$
is the strength of primordial magnetic fields on a comoving scale of $1\,{\rm Mpc}$,
which is related to $B_0$ by
\begin{equation}
B_0=\left[\frac{2}{n\Gamma(n/2)}\right]^{1/2}(2\pi)^{n/2}\left(\frac{k_c}{k_\lambda}\right)^{n/2}B_\lambda,
\label{B0Blambda}
\end{equation}
with $k_\lambda=2\pi\, {\rm Mpc}^{-1}$.
From the above expression, we find that the constraint on $B_0$ obtained from the CMB temperature anisotropies
depends on the spectral index $n$ and the cut-off wavenumber $k_c$.
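As a cross-check, the relation of Eq.~(\ref{B0Blambda}) is easy to evaluate numerically. The following minimal sketch (function name ours, $k_c$ in units of ${\rm Mpc}^{-1}$) reproduces, for example, $B_0\simeq 32~{\rm nG}$ for $B_\lambda=3.0~{\rm nG}$, $n=1$ and $k_c=100~{\rm Mpc}^{-1}$, a value used later in the text:

```python
import math

K_LAMBDA = 2.0 * math.pi  # pivot wavenumber of the 1 Mpc scale [Mpc^-1]

def b0_from_blambda(b_lambda_nG, n, kc):
    """B_0 implied by B_lambda via Eq. (B0Blambda);
    kc is the cut-off wavenumber in Mpc^-1, result in nG."""
    prefactor = math.sqrt(2.0 / (n * math.gamma(n / 2.0)))
    return prefactor * (2.0 * math.pi * kc / K_LAMBDA) ** (n / 2.0) * b_lambda_nG
```

Note that, with $k_\lambda=2\pi\,{\rm Mpc}^{-1}$, the factor $(2\pi k_c/k_\lambda)^{n/2}$ reduces to $k_c^{n/2}$ in these units.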
The aim of this paper is to evaluate the maximum signals of the CMB distortion
anisotropies due to primordial magnetic fields.
Basically, the power spectrum of the CMB distortions due to the decay of the magnetic fields has a peak at the cut-off scale ($\sim 1/k_c$)
and the peak amplitude depends on the total decaying energy density of
magnetic fields over all scales, not only on the peak scale.
Hence, in order to obtain a large amplitude on the observable scales, which are much larger than the peak scale,
we take the peak scale, characterized by $k_c$, to be as large as possible (we discuss the
details in the next subsection). Since the typical scale of the decay of the magnetic fields grows as the Universe expands,
we set the peak scale of the power spectrum to the scale
on which magnetic fields decay around the end of the production era of
CMB distortions.
This means that $k_c$ satisfies $\tau(z_{\mu,f}, k_c) \sim 1$
for the $\mu$-distortion and
$\tau(z_{y,f}, k_c) \sim 1$
for the $y$-distortion.
We plot $\tau(z_{\mu,f},k)$ and $\tau(z_{y,f},k)$ as functions of $k$ in FIG.~\ref{fig:tau}.
In both figures, we take
$B_0=1.1\times 10^2~{\rm nG}$ and $n=4$, which satisfy the COBE bound.
According to FIG.~\ref{fig:tau}, we set
$k_c=100\,{\rm Mpc}^{-1}$ for the $\mu$ distortion and $k_c=10\,{\rm Mpc}^{-1}$ for $y$ distortion.
Note that $\tau(z_{X,f},k)<1$ for $k < k_c$ means that the magnetic fields hardly decay during the production
era of CMB distortions, so that $C^{XX}_l$ is strongly suppressed.
In FIG. \ref{fig:n-B0}, we show the region in the $n$-$B_0$ plane excluded by Eqs.~(\ref{COBEcons}) and (\ref{CMBcons}), and
we set the peak of the magnetic field power spectrum $k_c$ as $k_c=10\,{\rm Mpc}^{-1}$ or $k_c=100\,{\rm Mpc}^{-1}$ for the constraint given by Eq. (\ref{CMBcons}).
The upper limit on $B_0$ from Eq.~(\ref{COBEcons}) does not depend on $n$, because of our normalization convention (\ref{P_B}).
On the other hand, the constraint Eq.~(\ref{CMBcons}) becomes less severe as $n$ increases, since for large $n$ the magnetic field power spectrum is sharply peaked at scales smaller than those relevant to the observable CMB anisotropies, $k\lesssim k_\lambda$.
Moreover, the constraint Eq.~(\ref{CMBcons}) becomes looser for larger $k_c$, since the peak of the magnetic field power spectrum moves further away from the CMB anisotropy scales.
Eq.~(\ref{CMBcons}) is more severe than Eq.~(\ref{COBEcons}) for $n<3.3$ when $k_c=10\,{\rm Mpc}^{-1}$ and for
$n<1.6$ when $k_c=100\,{\rm Mpc}^{-1}$.
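The crossover values of $n$ quoted above can be recovered by solving $B_0(n)=1.1\times 10^2~{\rm nG}$ for $B_\lambda=3.0~{\rm nG}$ in Eq.~(\ref{B0Blambda}). A minimal bisection sketch (function names ours; the $k_c=100~{\rm Mpc}^{-1}$ root comes out near $1.55$, consistent with the quoted $1.6$ after rounding):

```python
import math

def b0_of_n(n, kc, b_lambda=3.0):
    # B_0 [nG] implied by B_lambda = 3.0 nG via Eq. (B0Blambda); kc in Mpc^-1
    return math.sqrt(2.0 / (n * math.gamma(n / 2.0))) * kc ** (n / 2.0) * b_lambda

def crossover_n(kc, b0_cobe=110.0, lo=0.5, hi=5.0):
    # bisect for the n at which the CMB-anisotropy bound equals the COBE bound;
    # b0_of_n is monotonically increasing in n over this range for kc > 1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if b0_of_n(mid, kc) < b0_cobe:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```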
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=80mm]
{nB0.eps}
\end{center}
\caption{The allowed region in the plane of $B_0$, the amplitude of the magnetic field power spectrum, and $n$, its tilt.
The black dashed line shows the upper bound given by Eq.~(\ref{COBEcons}), which is obtained from the observation of CMB distortion by COBE.
The blue (red) line shows the constraint Eq.~(\ref{CMBcons}) from current observations of CMB temperature anisotropies for $k_c=100 (10)~{\rm Mpc}^{-1}$.}
\label{fig:n-B0}
\end{figure}
\subsection{Angular power spectra of CMB distortions}
Let us study angular power spectra of CMB
distortion anisotropies.
\subsubsection{Correlations of CMB distortions}
First, we consider the auto- and cross-correlations in CMB distortions, i.e. $\mu$-$\mu$, $y$-$y$ and $\mu$-$y$.
Here, we choose the parameter sets so that we obtain the maximum amplitude of the power spectra
with the current constraint shown in FIG.~\ref{fig:n-B0} being satisfied.
We show $C^{\mu\mu}_l$ in FIG.~\ref{fig:C_mumu} for $n=1$ (black solid), 2 (red long dashed) and 3 (blue short dashed),
where we fix $k_c=100~{\rm Mpc}^{-1}$.
Following FIG.~\ref{fig:n-B0},
we set $B_0$ to the maximum allowed value for each $n$:
$32~{\rm nG}$ for $n=1$ and $1.1\times 10^2~{\rm nG}$ for $n=2$ and $n=3$.
We also show $C^{yy}_l$ in FIG. \ref{fig:C_yy} for $n=1$ (black solid), 2 (red long dashed), 3 (blue short dashed) and 4 (green dotted),
where we fix $k_c=10~{\rm Mpc}^{-1}$.
As shown in FIG.~\ref{fig:n-B0},
for $k_c=10~{\rm Mpc}^{-1}$ the current observational limit on $B_0$ mainly comes from the CMB temperature anisotropies
(denoted by the red line) for $n \lesssim 3.5$, and it depends on the spectral index $n$.
Hence, following FIG.~\ref{fig:n-B0}, we set $B_0$ to
$10~{\rm nG}$, $29~{\rm nG}$, $83~{\rm nG}$ and $1.1\times 10^2~{\rm nG}$ for $n=1$, $n=2$, $n=3$ and $n=4$, respectively.
As for the cross angular power spectrum, $C^{\mu y}_l$, which is shown in FIG. \ref{fig:C_muy},
we fix the amplitude $B_0$ to $1.1 \times 10^2$~nG and change the peak scale $k_c$ for each spectral index $n$.
Following the observational constraint from the CMB temperature anisotropies given by Eq.~(\ref{CMBcons})
and the relation between $B_\lambda$ and $B_0$ given by Eq.~(\ref{B0Blambda}),
for fixed $B_0$, the peak scale $k_c$ for each $n$ is chosen so that $B_\lambda$ takes its maximum allowed value.
Then, in this figure,
we set $k_c$ to 300, 100, 30, 10~Mpc$^{-1}$ for $n=1.2$ (black solid), 1.6 (red long dashed), 2.1 (blue short dashed) and 3.3 (green dotted), respectively.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=80mm]
{Cmumu_highl.eps}
\caption{The $\mu$-$\mu$ auto-correlation angular power spectra for $n=1$ (black solid), $n=2$ (red long dashed) and $n=3$ (blue short dashed).
For all cases, $k_c=100~{\rm Mpc}^{-1}$.
$B_0$ is set to $B_0=32~{\rm nG}$ for $n=1$, which corresponds to
$B_\lambda=3.0~{\rm nG}$, and $B_0=1.1\times 10^2~{\rm nG}$ for $n=2,3$.
}
\label{fig:C_mumu}
~\\
~\\
\includegraphics[width=80mm]
{Cyy_highl.eps}
\caption{The $y$-$y$ auto-correlation angular power spectra for $n=1$ (black solid), $n=2$ (red long dashed), $n=3$ (blue short dashed) and $n=4$ (green dotted).
For all cases, $k_c=10~{\rm Mpc}^{-1}$.
$B_0$ is set to $10~{\rm nG}$, $29~{\rm nG}$ and $83~{\rm nG}$ for $n=1$, 2 and 3 respectively, which correspond to $B_\lambda=3.0~{\rm nG}$, and to $1.1\times 10^2~{\rm nG}$ for $n=4$.
}
\label{fig:C_yy}
~\\
~\\
\includegraphics[width=80mm]
{Cmuy.eps}
\caption{The $\mu$-$y$ cross-correlation angular power spectra for parameter sets $(n, k_c[\rm{Mpc}^{-1}])=(1.2, 300)$ (black solid),
$(1.6, 100)$ (red long dashed), $(2.1, 30)$ (blue short dashed) and $(3.3, 10)$ (green dotted).
For all cases, $B_0$ is fixed to $1.1\times 10^2~{\rm nG}$.
}
\label{fig:C_muy}
\end{center}
\end{figure}
The magnitudes of the auto-correlation spectra $C^{\mu\mu}_l$ and $C^{yy}_l$ can be roughly estimated as follows.
$C_X$ given by Eqs.~(\ref{Cmu}) or (\ref{Cy}) is
\begin{equation}
C_X(k,k^\prime)\sim
\begin{cases}
1 & ;{\rm for} \ k_{X,f}<\max\{k,k^{\prime}\}<k_{X,i} \\
0 & ;{\rm otherwise}
\end{cases}, \label{CXkk}
\end{equation}
where $k_{X,i}$ and $k_{X,f}$ are
the Fourier modes of the magnetic fields which satisfy
$\tau (k_{X,i}, z_{X,i} ) = 1$
and
$\tau (k_{X,f}, z_{X,f} ) = 1$, respectively.
In this sense,
$k_c$ is almost identical to $k_{X,f}$ here.
The spherical Bessel function can be approximated as
\begin{equation}
j_l(x)\simeq
\begin{cases}
0 & ;{\rm for} \ x<l \\
\frac{1}{x}\cos \left( x-\frac{(l+1)\pi}{2}\right) & ;{\rm for} \ x>l
\end{cases}.
\end{equation}
Neglecting the effect of the finite thickness of the last scattering
surface, the auto-correlation spectrum $C^{XX}_l$ can be roughly estimated as
\begin{equation}
\frac{l(l+1)C^{XX}_l}{2\pi}\sim \left(\frac{\rho_B}{\rho_{\gamma}}\right)^2\frac{l^2}{(k_{c}r_{\rm rec})^2}, \label{CXXest}
\end{equation}
if $k_{X,f}<k_c<k_{X,i}$.
Since
we are setting $k_c\sim k_{X,f}$ as mentioned before,
this estimation is consistent with the spectra shown in FIGs. \ref{fig:C_mumu}
and \ref{fig:C_yy}, especially with respect to the dependence on
$l$, $l(l+1)C^{XX}_l/2\pi\propto l^2$, for $l\lesssim 10^3$.
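The order of magnitude implied by Eq.~(\ref{CXXest}) is easy to evaluate. A minimal sketch (function name ours), taking the COBE-limit ratio $\rho_B/\rho_\gamma \sim 10^{-4}$, $k_c = 100~{\rm Mpc}^{-1}$ and $r_{\rm rec}\simeq 10^4~{\rm Mpc}$:

```python
def cl_estimate(l, rho_ratio=1e-4, kc=100.0, r_rec=1e4):
    """Rough estimate of l(l+1)C_l^{XX}/(2 pi) from Eq. (CXXest):
    (rho_B/rho_gamma)^2 * l^2 / (k_c * r_rec)^2."""
    return rho_ratio**2 * l**2 / (kc * r_rec) ** 2

# at the peak l ~ k_c r_rec = 1e6 the estimate is ~(rho_B/rho_gamma)^2 = 1e-8;
# at l = 100 it is suppressed by (l / (k_c r_rec))^2 = 1e-8, giving ~1e-16
```

At the peak $l \sim k_c r_{\rm rec}$ the estimate gives $\sim 10^{-8}$, while on observable CMB scales it drops to $\sim 10^{-16}$, consistent with the strong suppression discussed in the text.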
The reason why $l(l+1)C^{XX}_l/2\pi \propto l$ for
$l\gtrsim 10^3$ in FIGs. \ref{fig:C_mumu}
and \ref{fig:C_yy} is that the finite thickness of the last scattering surface, introduced in Eq. (\ref{finiteLS}),
suppresses $C^{XX}_l$ by a factor $l\sigma_{\rm LS}/r_{\rm rec}$.
The peak and the cut-off of $l(l+1)C^{XX}_l/2\pi$ at $l\sim 10^6$ for $\mu$ and at $l\sim 10^5$ for $y$
correspond to those of the magnetic fields power spectrum at $k=k_c$.
Eq. (\ref{CXXest}) shows that the amplitude of $C^{XX}_l$ is
determined by the total energy density of the decaying magnetic fields.
Therefore, for fixed $k_c$, its amplitude is determined only by $B_0$, not by $n$.
Since here we
set $B_0$ for $C^{\mu\mu}_l$ to a
smaller value for $n=1$ than for $n=2$ and $n=3$
in order to satisfy the current observational constraints,
the amplitude also becomes smaller.
Naturally, the curves of $C^{\mu\mu}_l$ for $n=2$ and $n=3$ overlap each other, because $B_0$ takes the same value in both cases.
On the other hand, the amplitude of $C^{yy}_l$ appears to depend on the spectral index $n$ in FIG. \ref{fig:C_yy}.
However, since for $C^{yy}_l$ we take a smaller $k_c$ than for $C^{\mu\mu}_l$,
the value of $B_0$ taken here strongly depends on the spectral index $n$
in order to maximize the amplitude within the observational constraints shown in FIG. \ref{fig:n-B0}, as we have shown above.
In the range of
$n < 3.3$, $B_0$ is smaller for smaller $n$
and hence the amplitude of $C^{yy}_l$ becomes smaller for smaller $B_0$,
which is consistent with the simple estimation Eq.~(\ref{CXXest}).
The second factor of the RHS of Eq.~(\ref{CXXest}), which comes from the spherical Bessel function,
tells us that smaller $k_c$ leads to larger $C^{XX}_l$ for fixed $l$, as mentioned in the previous subsection.
In fact, $r_{\rm rec} \simeq 10^4\, {\rm Mpc}$ and we
take $k_c=100~{\rm Mpc}^{-1}$ for $\mu$ and
$10~{\rm Mpc}^{-1}$ for $y$ here, and hence,
for the CMB observation scales ($l \lesssim 10^4$),
$C^{XX}_l$ is strongly suppressed compared with its value at the peak scale, $l \sim k_c r_{\rm rec}$.
This suppression reflects the fact that the typical scales of the fluctuations of the
$\mu$- and $y$-parameters, $\sim 2\pi/k_c \sim 2\pi/k_{X,f}$, are much smaller than the observation scale, $\sim r_{\rm rec}/l$.
Because we take a smaller $k_c$ for $C^{yy}_l$ than for $C^{\mu\mu}_l$, the amplitude of $C^{yy}_l$
is larger than that of $C^{\mu\mu}_l$ for fixed $l$ and the same value of $B_0$.
On the other hand, as is shown in FIG.~\ref{fig:C_muy},
we find that the cross-correlation spectrum $C^{\mu y}_l$ in general cannot be as large as
the auto-correlations $C^{\mu\mu}_l$ and $C^{yy}_l$, even if the parameters $(n,~k_c,~B_0)$ are tuned.
This is because the scales of the primordial magnetic fields which mainly contribute to the $\mu$- and $y$-type distortions
are different.
In other words, this can be understood by noting that
the product $C_\mu(\chi, q)C_y(\chi,q)$ in Eq.~\eqref{CmuyB} vanishes under the rough approximation of Eq.~\eqref{CXkk}.
We thus conclude that it is difficult to observe $C^{\mu y}_l$ unless $C^{\mu\mu}_l$ and $C^{yy}_l$
are observed with high significance. Therefore, we do not take $C^{\mu y}_l$ into account in Section \ref{sec:detect},
where we discuss the detectability of primordial magnetic fields through CMB observations of the distortion power spectra.
\subsubsection{Cross-correlation with CMB temperature anisotropies}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=80mm]
{C_muT.eps}
\caption{The $\mu$-$T$ cross-correlation angular power spectra for
$n=1$ (black solid), $n=2$ (red long dashed) and $n=3$ (blue short dashed).
For all cases, $k_c=100~{\rm Mpc}^{-1}$.
$B_0$ is set to $B_0=32~{\rm nG}$ for $n=1$,
and $B_0=1.1\times 10^2~{\rm nG}$ for $n=2,3$.}
\label{fig:C_muT}
~\\
~\\
\includegraphics[width=80mm]
{C_yT.eps}
\end{center}
\caption{The $y$-$T$ cross-correlation angular power spectra for
$n=1$ (black solid), $n=2$ (red long dashed) and $n=3$ (blue short dashed).
For all cases, $k_c=10~{\rm Mpc}^{-1}$.
$B_0$ is set to $10~{\rm nG}$, $29~{\rm nG}$ and $83~{\rm nG}$ for $n=1$, 2 and 3 respectively, which correspond to $B_\lambda=3.0~{\rm nG}$.}
\label{fig:C_yT}
\end{figure}
Next, we calculate the cross-correlation between CMB distortion anisotropies and the scalar passive mode of CMB temperature anisotropies
given by Eqs. (\ref{CmuTB}) and (\ref{CyTB}).
We show $C^{\mu T}_l$ in FIG. \ref{fig:C_muT} for
$n=1$ (black), 2 (red long dashed) and 3 (blue short dashed).
For all cases, we set $k_c=100~{\rm Mpc}^{-1}$,
with $B_0=32~{\rm nG}$ for $n=1$, which corresponds to $B_\lambda=3.0~{\rm nG}$, and $B_0=1.1\times 10^2~{\rm nG}$ for $n=2,3$.
We also show $C^{y T}_l$ in FIG. \ref{fig:C_yT} for
$n=1$ (black), 2 (red long dashed) and 3 (blue short dashed).
We take $k_c=10~{\rm Mpc}^{-1}$
and $B_0=10~{\rm nG}$, $29~{\rm nG}$ and $83~{\rm nG}$ for $n=1$, 2 and 3, respectively.
As shown in
FIGs. \ref{fig:C_muT} and \ref{fig:C_yT}, $C^{\mu T}_l$ and $C^{y
T}_l$ are suppressed compared with the auto-correlations $C^{\mu \mu}_l$ and $C^{yy}_l$. This
is simply because the typical length scale of the CMB distortion fluctuations is
much smaller than the Silk damping scale $k_{\rm Silk}\sim 0.1~{\rm Mpc}^{-1}$.
As is well known, the CMB temperature fluctuations are exponentially
damped by Silk damping~\cite{Silkdamping}
on scales with $k > k_{\rm Silk}$.
On the other hand,
the amplitudes of the $\mu$ and $y$ anisotropies
on the CMB observation scales ($k < k_{\rm Silk}$), where the CMB temperature anisotropy
retains its amplitude,
are strongly suppressed, as shown in the discussion of the auto power spectra of $\mu$ and $y$ above.
Hence, even though cross correlations between the temperature and distortion anisotropies
exist, because both anisotropies are given in terms of convolutions of the Gaussian magnetic fields,
the cross correlations are more suppressed than the auto correlations of those anisotropies.
Note that, as shown in FIGs. \ref{fig:C_muT} and \ref{fig:C_yT},
the amplitudes of the cross-correlations depend not only on $B_0$ but also on $n$, in contrast to the case
of the auto power spectra of $\mu$ and $y$.
This is because the $\mu$ and $y$ anisotropies on scales larger than the Silk scale ($k < k_{\rm Silk}$)
depend not only on $B_0$, which determines the amplitudes of the $\mu$ and $y$ anisotropies on the peak scale $\sim 2\pi/k_c$, but also on the tilt $n$.
\section{Detectability of CMB distortion anisotropies} \label{sec:detect}
In this section, we study the detectability of the anisotropies of the CMB
distortion parameters in future observations by performing a signal-to-noise (SN) analysis.
To evaluate the SN ratio (SNR), we must first estimate the variance of the
angular power spectrum.
The variances of $C^{\mu\mu}_l$ and $C^{yy}_l$ estimated from a full-sky observation of the CMB are given by \cite{Knox:1995dq}
\begin{equation}
\sigma^2_{ll^\prime}=\left< \left(C^{XX}_l-\left<C^{XX}_l\right>\right) \left(C^{XX}_{l^\prime}-\left<C^{XX}_{l^\prime}\right>\right)\right>=\frac{2}{2l+1}\left(C^{XX}_l+C^{XX,N}_l\right)^2\delta_{ll^\prime},
\end{equation}
where
$C^{XX,N}_l$ is the noise power
spectrum of the observation. We assume that the foregrounds can be
removed perfectly. Under this assumption, the noise power spectrum $C^{XX,N}_l$ consists
only of the experimental noise and
can be written as \cite{Knox:1995dq}
\begin{equation}
C^{XX,N}_l=\sigma_X^2\theta_b^2 b_l^{-2},
\label{eq:auto_cov}
\end{equation}
where $\sigma_X$ is the $1\sigma$ uncertainty in $X$ per pixel, $\theta_b$ is the beam width and $b_l$ is the so-called beam transfer function given by
\begin{equation}
b_l=\exp \left(-\frac{l^2\theta_b^2 }{16\ln 2}\right).
\end{equation}
From Eq.~(\ref{eq:auto_cov}),
we can obtain the SNR in a measurement of $C^{\mu\mu}_l$ and $C^{yy}_l$ by
\begin{equation}
\left(\frac{S}{N}\right)^2=\sum_l\frac{2l+1}{2}\frac{\left(C^{XX}_l\right)^2}{\left(C^{XX}_l+C^{XX,N}_l\right)^2}.
\end{equation}
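The SNR sum above can be implemented directly. A minimal sketch (array convention ours, spectra indexed from $l_{\rm min}$), which in the noiseless limit $C^{XX,N}_l \to 0$ reduces to $(S/N)^2=\sum_l (2l+1)/2$:

```python
def snr_squared(cl_signal, cl_noise, l_min=2):
    """(S/N)^2 = sum_l (2l+1)/2 * C_l^2 / (C_l + N_l)^2,
    with cl_signal[i] and cl_noise[i] the spectra at l = l_min + i."""
    total = 0.0
    for i, (c, n) in enumerate(zip(cl_signal, cl_noise)):
        l = l_min + i
        total += 0.5 * (2 * l + 1) * c**2 / (c + n) ** 2
    return total
```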
We define
a function which represents
the detectable level of the signal $l(l+1)C^{XX}_l/2\pi$, $C_l^{DL}$, as
\begin{equation}
C_l^{DL}=\frac{l(l+1)}{2\pi}\sqrt{\frac{2}{(2l+1)l}}C^{XX,N}_l. \label{effnoise}
\end{equation}
If
$l(l+1)C^{XX}_l/2\pi>C_l^{DL}$,
the SNR exceeds unity.
When we take a logarithmically homogeneous binning in $l$ with bin width $\Delta \ln l=1$, there are about $l$ multipoles in the bin at $l$.
Since different multipoles are independent, the noise level per bin is given by $\sigma_{ll}/\sqrt{l}$.
Therefore, the detectable level of $l(l+1)C^{XX}_l/2\pi$ is roughly given by Eq. (\ref{effnoise}).
For the cross-correlations, $C^{\mu T}_l$ and $C^{yT}_l$, the variance
is obtained from
\begin{equation}
\sigma^2_{ll^\prime}=\left< \left(C^{XT}_l-\left<C^{XT}_l\right>\right) \left(C^{XT}_{l^\prime}-\left<C^{XT}_{l^\prime}\right>\right)\right>=\frac{1}{2l+1}\left(C^{XX}_l+C^{XX,N}_l\right)\left(C^{TT}_l+C^{TT,N}_l\right)\delta_{ll^\prime},
\end{equation}
where $C^{TT}_l$ is the primary CMB temperature power spectrum
and $C^{TT,N}_l$ is the noise
power spectrum for the CMB temperature observation.
Here
we assume that, compared with the CMB temperature signal, the
experimental noise is very small on the scales of interest, so we
neglect the noise power spectrum $C^{TT,N}_l$.
Under this assumption, the SNR for the cross-correlations is given by
\begin{equation}
\left(\frac{S}{N}\right)^2\simeq\sum_l(2l+1)\frac{\left(C^{XT}_l\right)^2}{\left(C^{XX}_l+C^{XX,N}_l\right)C^{TT}_l}.
\end{equation}
Let us discuss the detectability of anisotropic CMB distortions in each future experiment.
\subsection{PIXIE case}
PIXIE \cite{Kogut:2011xw} is a recently proposed satellite for CMB observation, which can measure the CMB distortion parameters with
very high accuracy.
For PIXIE, the beam width is $\theta_b=1.6^{\circ}$, and the $1\sigma$ uncertainties in the $\mu$ and $y$ parameters averaged over the full sky are $\delta \mu=10^{-8}$ and $\delta y =2\times 10^{-9}$, respectively~\cite{Kogut:2011xw}.
This leads to
\begin{equation}
C^{\mu\mu,N}_l=1.3\times 10^{-15}\times \exp
\left(\frac{l^2}{84^2}\right) , \label{CmumuNPIXIE}
\end{equation}
and
\begin{equation}
C^{yy,N}_l=5.0\times 10^{-17}\times \exp \left(\frac{l^2}{84^2}\right).\label{CyyNPIXIE}
\end{equation}
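The prefactors and the $l\simeq 84$ beam scale in Eqs.~(\ref{CmumuNPIXIE}) and (\ref{CyyNPIXIE}) can be reproduced from $\theta_b$ and the full-sky uncertainties if the per-pixel noise in Eq.~(\ref{eq:auto_cov}) is taken as $\sigma_X = \delta X\,\sqrt{4\pi}/\theta_b$ (our assumption, which gives $\sigma_X^2\theta_b^2 = 4\pi\,\delta X^2$); a quick sketch:

```python
import math

THETA_B = 1.6 * math.pi / 180.0  # PIXIE beam width in radians

# white-noise prefactors sigma_X^2 theta_b^2 = 4 pi (delta X)^2
# (assuming sigma_X = delta X * sqrt(4 pi) / theta_b per pixel)
prefactor_mu = 4.0 * math.pi * (1e-8) ** 2  # ~1.3e-15
prefactor_y = 4.0 * math.pi * (2e-9) ** 2   # ~5.0e-17

# b_l^{-2} = exp(l^2 theta_b^2 / (8 ln 2)) = exp((l / l_beam)^2)
l_beam = math.sqrt(8.0 * math.log(2.0)) / THETA_B  # ~84
```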
In FIG. \ref{fig:CN}, we show the detectable level, $C_l^{DL}$, of
the auto power spectra of $\mu$ (left panel (a)) and $y$ (right panel (b)) for each CMB experiment,
together with
the largest $C^{XX}_l$ allowed by the current observations (black dotted line).
The detectable level for PIXIE
is shown
as a function of the multipole $l$ by the red solid line.
Due to the exponential factors in Eqs.~(\ref{CmumuNPIXIE}) and (\ref{CyyNPIXIE}), PIXIE can measure only large-scale anisotropies, which
correspond to $l\lesssim 100$, and for both the $\mu$- and $y$-distortions
the black dotted line, which represents the largest signal allowed by the current observations,
lies far below the red line.
As a result, it would be difficult to detect
the auto-correlation signals of
the CMB distortion anisotropies induced by primordial magnetic
fields by PIXIE.
The cross-correlation signals between the CMB distortion and temperature anisotropies
would be even more difficult to detect with PIXIE.
A rough estimate gives $l(l+1)C^{TT}_l\sim 6.0\times 10^{-10}$ as in \cite{Pajer:2012vz}, and
we see that $C^{\mu T}_{l=100} \gtrsim 10^{-16}$ or $C^{y T}_{l=100} \gtrsim 10^{-17}$ is necessary for an SNR larger than unity.
Both of $C^{\mu T}_l$ and $C^{yT}_l$ shown in FIGs.~\ref{fig:C_muT} and \ref{fig:C_yT} are much smaller than these required values.
\begin{figure}[t]
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\subfigure[The detectable level of $l(l+1)C^{\mu\mu}_l/2\pi$.]{
\includegraphics[width=75mm]
{Cmumunoise.eps}
\label{fig:CmuN}
}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\subfigure[The detectable level of $l(l+1)C^{yy}_l/2\pi$.]{
\includegraphics[width=75mm]
{Cyynoise.eps}
\label{fig:CyN}
}
\end{center}
\end{minipage}
\end{tabular}
\caption{The detectable levels of $l(l+1)C^{\mu\mu}_l/2\pi$ and $l(l+1)C^{yy}_l/2\pi$ in PIXIE (red solid), Planck (green long dashed), LiteBIRD (blue short dashed), CMBpol (magenta chain) and SPT (light blue two-dot chain).
We also plot $l(l+1)C^{\mu\mu}_l/2\pi$ for $n=3, B_0=1.1\times 10^2~{\rm
nG}$ and $l(l+1)C^{yy}_l/2\pi$ for $n=4, B_0=1.1\times 10^2~{\rm nG}$
for comparison (black dotted).}
\label{fig:CN}
\end{figure}
\subsection{Planck case}
The authors of \cite{Ganc:2012ae} have argued that the anisotropies of the CMB
distortion parameters can be detected not only by absolutely calibrated
experiments such as PIXIE but also by relatively calibrated experiments
like WMAP and Planck, although an isotropic CMB distortion can be probed
only by absolutely calibrated experiments.
For relatively calibrated experiments, anisotropies of the CMB
distortion parameters are seen as temperature anisotropies whose
amplitude depends on the frequency channel. The temperature at photon
frequency $\nu$ is given by
\begin{equation}
T(\nu)=\frac{T_0x}{\ln \left[1+n(x)^{-1}\right]},
\end{equation}
where $x=2\pi\nu/T_0$, $T_0$ is the CMB temperature averaged over the full sky,
and $n(x)$ is the photon occupation number.
Without CMB distortions, the occupation number is given by
the Planck distribution as $n(x)=(e^x-1)^{-1}\equiv n_0(x)$.
Due to the CMB distortions the energy spectrum of CMB photons deviates
from the Planck distribution. As a result,
the apparent temperature anisotropy depending on the frequency channel
is created from the CMB distortions.
In the case of the $\mu$-distortions, the apparent temperature
anisotropy is given by~\cite{Ganc:2012ae}
\begin{equation}
\frac{\delta T(\hat n, \nu)}{T} \simeq -\frac{\delta \mu(\hat n)}{x},
\end{equation}
where $\delta \mu$ is the fluctuating part of $\mu$ and $x=2\pi \nu/T$.
In the case of $y$-distortion, the resultant temperature anisotropy is
\begin{equation}
\frac{\delta T(\hat n, \nu)}{T}=\left(x\frac{e^x+1}{e^x-1}-4\right)\delta y(\hat n)\equiv a(\nu)\delta y(\hat n),
\end{equation}
where $\delta y$ is the fluctuating part of $y$.
As shown in the above expressions,
the temperature anisotropies produced by the CMB distortions are
frequency dependent; by taking the difference between the
temperature anisotropies in different frequency channels,
we can extract $\delta \mu$ and $\delta y$.
The experimental noise power spectrum in this type of observation using
two different frequency channels $\nu_1$ and $\nu_2$ is given by
\begin{equation}
C^{\mu\mu,N}_l=\left[\frac{\nu_1\nu_2/(\nu_1-\nu_2)}{56.80{\rm GHz}}\right]^2\sum_{i=1,2}\sigma^2_{T,i}\theta_{b,i}^2b^{-2}_{i,l},
\end{equation}
for $\mu$-distortions and
\begin{equation}
C^{yy,N}_l=\left(\frac{1}{a(\nu_1)-a(\nu_2)}\right)^2\sum_{i=1,2}\sigma^2_{T,i}\theta_{b,i}^2b^{-2}_{i,l},
\end{equation}
for $y$-distortions, where $\sigma_{T,i}$ is the $1\sigma$ uncertainty in $\delta T/T$ per pixel at frequency $\nu_i$, $\theta_{b,i}$ is the beam width of channel $\nu_i$ and $b_{i,l}=\exp \left(-l^2\theta_{b,i}^2 /16\ln 2\right)$.
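As a numerical cross-check of these two-channel formulas, the white-noise prefactors of the Planck noise spectra quoted below (using the $100$ and $143~{\rm GHz}$ values of TABLE \ref{table:planck}) can be reproduced directly; the helper names here are ours, and we approximate $x=\nu/56.80~{\rm GHz}$:

```python
import math

ARCMIN = math.pi / (180.0 * 60.0)  # arcminutes to radians
X0 = 56.80                         # k_B T_0 / h in GHz

def a_of_nu(nu):
    # spectral shape of the y-distortion temperature signal
    x = nu / X0
    return x * (math.exp(x) + 1.0) / (math.exp(x) - 1.0) - 4.0

def white_noise(sigma_t, theta_b_arcmin):
    # sigma_T^2 theta_b^2 per channel (l = 0 limit, b_l = 1)
    return sigma_t**2 * (theta_b_arcmin * ARCMIN) ** 2

# Planck 100 and 143 GHz channels (values from the table)
w100 = white_noise(2.5e-6, 9.5)
w143 = white_noise(2.2e-6, 7.1)

mu_factor = (100.0 * 143.0 / (143.0 - 100.0) / X0) ** 2
y_factor = 1.0 / (a_of_nu(100.0) - a_of_nu(143.0)) ** 2

mu_terms = (mu_factor * w100, mu_factor * w143)  # ~1.6e-15, ~7.1e-16
y_terms = (y_factor * w100, y_factor * w143)     # ~2.2e-16, ~9.4e-17
```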
\begin{table}[t]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.3\hsize}
\begin{center}
\subtable[Planck (from \cite{Planck:2006aa})]{
\begin{tabular}{c|c|c}
\hline
\hline
bands [GHz] & $\theta_b$ & $\sigma_T$ \\
\hline
$100$ & $9.5^\prime$ & $2.5\times 10^{-6}$ \\
$143$ & $7.1^\prime$ & $2.2\times 10^{-6}$ \\
\hline
\hline
\end{tabular}
\label{table:planck}
}
\end{center}
\end{minipage}
\begin{minipage}{0.3\hsize}
\begin{center}
\subtable[LiteBIRD (from \cite{LiteBIRD})]{
\begin{tabular}{c|c|c}
\hline
\hline
bands [GHz] & $\theta_b$ & $\sigma_T$ \\
\hline
$90$ & $60^\prime$ & $2.1\times 10^{-8}$ \\
$150$ & $36^\prime$ & $3.3\times 10^{-8}$ \\
\hline
\hline
\end{tabular}
\label{table:LiteBIRD}
}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.3\hsize}
\begin{center}
\subtable[CMBpol (from \cite{Baumann:2008aq})]{
\begin{tabular}{c|c|c}
\hline
\hline
bands [GHz] & $\theta_b$ & $\sigma_T$ \\
\hline
$100$ & $8^\prime$ & $1.1\times 10^{-7}$ \\
$150$ & $5^\prime$ & $1.6\times 10^{-7}$ \\
\hline
\hline
\end{tabular}
\label{table:CMBpol}
}
\end{center}
\end{minipage}
\begin{minipage}{0.3\hsize}
\begin{center}
\subtable[SPT (from \cite{Reichardt:2011yv,Schaffer:2011mz})]{
\begin{tabular}{c|c|c}
\hline
\hline
bands [GHz] & $\theta_b$ & $\sigma_T$ \\
\hline
$95$ & $1.7^\prime$ & $9.6\times 10^{-6}$ \\
$150$ & $1.2^\prime$ & $5.5\times 10^{-6}$ \\
\hline
\hline
\end{tabular}
\label{table:SPT}
}
\end{center}
\end{minipage}
\end{tabular}
\addtocounter{table}{-1}
\caption{Parameters characterizing the sensitivities of various relatively calibrated experiments.
$\theta_b$ is the Gaussian beam width (FWHM) and
$\sigma_T$ is the temperature noise per pixel.
Note that each experiment has frequency bands other than those shown above;
here we show the parameters for the two bands with the best sensitivities.
}
\end{center}
\end{table}
\stepcounter{table}
According to \cite{Planck:2006aa},
Planck has the sensitivity $\sigma_T=2.5\times 10^{-6}$ with the beam
width $\theta_b=9.5^\prime$ for the $100~{\rm GHz}$ channel
and $\sigma_T=2.2\times 10^{-6}$ with $\theta_b=7.1^\prime$
for the $143~{\rm GHz}$ channel, as shown in TABLE \ref{table:planck}.
These channels have the best sensitivity among the frequency channels of Planck.
Therefore, the noise power spectra for the $\mu$- and $y$-distortions are
\begin{equation}
C^{\mu\mu,N}_l=1.6\times 10^{-15}\times e^{(l/855)^2}+7.1\times 10^{-16}\times e^{(l/1.1\times 10^3)^2},
\end{equation}
and
\begin{equation}
C^{yy,N}_l= 2.2\times 10^{-16}\times e^{(l/855)^2}+9.4\times 10^{-17}\times e^{(l/1.1\times 10^3)^2},
\end{equation}
respectively.
From the above expressions, we find that
Planck is expected to probe the anisotropies on smaller scales than PIXIE, due to the difference in the exponential factors.
However, as shown in FIG.~\ref{fig:CN},
where $C_l^{DL}$ for Planck is shown as a green long-dashed line,
neither the largest allowed $C^{\mu\mu}_l$ nor the largest allowed $C^{yy}_l$
has an amplitude large enough to be detected by Planck.
Indeed, $C^{\mu\mu}_l$ for $n=3$, $B_0=1.1\times 10^{2}~{\rm nG}$ gives
$S/N\simeq 1.3\times 10^{-2}$, and $C^{yy}_l$ for $n=4$,
$B_0=1.1\times 10^{2}~{\rm nG}$ gives $S/N\simeq 0.3$.
For the cross-correlations,
$C^{\mu T}_l$ and $C^{yT}_l$ are far below the detectable levels, which require $C^{\mu T}_l \gtrsim 10^{-18}$ and $C^{y T}_l \gtrsim 10^{-19}$ at $l\sim 10^3$.
\subsection{LiteBIRD case}
LiteBIRD~\cite{LiteBIRD} is a proposed CMB satellite which aims to detect the low-$l$ B-mode polarization anisotropy.
Although the angular resolution of LiteBIRD will be worse than that of Planck,
its sensitivity per pixel will be better,
making it a powerful experiment for detecting the CMB distortions through relative calibration.
With the beam widths and sensitivities of the $90~{\rm GHz}$ and $150~{\rm GHz}$ channels
shown in TABLE \ref{table:LiteBIRD}, the experimental noise power spectra for relative calibration with LiteBIRD
are given by
\begin{equation}
C^{\mu\mu,N}_l=2.2\times 10^{-18}\times e^{(l/135)^2}+1.8\times 10^{-18}\times e^{(l/226)^2},
\end{equation}
and
\begin{equation}
C^{yy,N}_l= 3.3\times 10^{-19}\times e^{(l/135)^2}+2.8\times 10^{-19}\times e^{(l/226)^2}.
\end{equation}
From the above expressions, we find that LiteBIRD can probe the $\mu$- and $y$-distortion anisotropies with a
sensitivity better than PIXIE, up to multipoles as high as PIXIE can reach.
As a result,
in an
observation by LiteBIRD, anisotropies of CMB distortions induced by primordial
magnetic fields
can reach the detectable level
while satisfying the current observational constraints,
as shown in FIG.~\ref{fig:CN}, where
$C_l^{DL}$ for LiteBIRD is shown as a blue short-dashed line.
In particular, $C^{yy}_l$ for
$n=3, B_0=1.1\times 10^{2}~{\rm nG}$ gives $S/N\simeq 8$ and
that for $n=4, B_0=1.1\times 10^{2}~{\rm nG}$ gives $S/N\simeq 22$.
Approximating that $C^{yy}_l$ depends on $B_0$ only through the
overall factor proportional to $B_0^4$ \footnote{ Strictly speaking,
$B_0$ affects $C^{yy}_l$ also through ${\rm Im}\, \omega(t,k)$
in Eq. (\ref{Imomega}). } and setting the detection threshold of the SNR to 4,
we find the threshold value of $B_0$ for detection of $C^{yy}_l$.
For $n\gtrsim3$, magnetic fields with
$B_0>70~{\rm nG}$ and
$k_c = 10~{\rm Mpc}^{-1}$ can generate detectable anisotropies of the $y$-distortion.
For smaller values of $n$, $C^{yy}_l$ cannot be observable while the constraint Eq.~(\ref{CMBcons}) is satisfied.
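The threshold quoted above follows from the $B_0^4$ scaling: in the noise-dominated regime the SNR scales like $C^{yy}_l\propto B_0^4$, so rescaling from $S/N\simeq 22$ at $B_0=1.1\times 10^2~{\rm nG}$ (the $n=4$ case) down to the threshold $S/N=4$ gives roughly $70~{\rm nG}$ after rounding; a one-line sketch (variable names ours):

```python
B0_REF = 110.0        # nG, reference amplitude with S/N ~ 22 for n = 4
SNR_REF = 22.0
SNR_THRESHOLD = 4.0

# noise-dominated regime: S/N scales like C_l^yy, i.e. like B_0^4
b0_threshold = B0_REF * (SNR_THRESHOLD / SNR_REF) ** 0.25  # ~72 nG
```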
$C^{\mu\mu}_l$ is slightly below the detectable level ($C^{\mu\mu}_l$ for $n=3$, $B_0=1.1\times 10^{2}~{\rm nG}$ gives $S/N\simeq 1.3$),
and the cross correlations,
$C^{\mu T}_l$ and $C^{yT}_l$, are still far too small to be detected by LiteBIRD.
\subsection{CMBpol case}
CMBpol~\cite{Baumann:2008aq} is a future CMB satellite with a sensitivity similar to LiteBIRD and an angular resolution higher than Planck.
Using the $100~{\rm GHz}$ and $150~{\rm GHz}$ bands, whose beam widths and
sensitivities are given in TABLE \ref{table:CMBpol},
we evaluate the noise power spectra of the $\mu$ and $y$ parameters as
\begin{equation}
C^{\mu\mu,N}_l=1.6\times 10^{-18}\times e^{(l/1.0\times10^3)^2}+1.6\times 10^{-18}\times e^{(l/1.6\times10^3)^2},
\end{equation}
and
\begin{equation}
C^{yy,N}_l= 1.9\times 10^{-19}\times e^{(l/1.0\times10^3)^2}+1.8\times 10^{-19}\times e^{(l/1.6\times10^3)^2}.
\end{equation}
Thanks to its high sensitivity and high angular resolution, CMBpol improves the prospects for probing anisotropies of the $y$-distortion,
as shown in FIG.~\ref{fig:CN}, where
$C_l^{DL}$ for CMBpol is shown as a magenta dot-dashed line.
Setting the detection threshold of the SNR to 4 in a similar way to the LiteBIRD case,
we find that the auto power spectrum of $y$ anisotropies
can be detected for the primordial magnetic fields with
$n\gtrsim 3$, $B_0>39~{\rm nG}$ and $k_c=10~{\rm Mpc}^{-1}$.
On the other hand, if the power spectrum of primordial magnetic fields
is less blue-tilted, that is, $n\lesssim 2$,
the primordial magnetic fields satisfying the current constraint Eq.~(\ref{CMBcons}) cannot induce detectable anisotropies of $y$.
In CMBpol experiment, $\mu$ anisotropies can also reach the detectable level as shown in FIG.~\ref{fig:CmuN}.
For $n\gtrsim 2$, in particular, the primordial
magnetic fields with $B_0>60~{\rm nG}$ and $k_c=100~{\rm Mpc}^{-1}$ create $C^{\mu\mu}_l$ detectable with a SNR larger than 4.
However the smaller value of $n$ makes $\mu$ anisotropies undetectable
as in the case of $y$ distortions.
For the cross-correlation signals,
$C^{\mu T}_l$ and $C^{yT}_l$ still cannot be detected by CMBpol.
\subsection{SPT case}
Angular power spectra of the $\mu$ and $y$ parameters induced by primordial
magnetic fields have larger amplitudes on small scales, as mentioned in the previous section,
which space-based CMB experiments such as those above cannot probe. CMB
observations on such small scales, $l>1000$, are performed by
ground-based telescopes.
The South Pole Telescope (SPT)~\cite{Reichardt:2011yv} is one of the latest such telescopes.
According to \cite{Reichardt:2011yv,Schaffer:2011mz},
SPT has $\sigma_T=9.6\times 10^{-6}$ and
$\theta_b=1.7^{\prime}$ for the
$95~{\rm GHz}$ band and
$\sigma_T=5.5\times 10^{-6}$ and $\theta_b=1.2^{\prime}$ for the $150~{\rm GHz}$ band, as shown in TABLE \ref{table:SPT}.
The noise power spectra of SPT for the two types of distortion are given by
\begin{equation}
C^{\mu\mu,N}_l=4.2\times 10^{-16}\times
e^{(l/4.8\times10^3)^2}+7.6\times 10^{-17}\times
e^{(l/6.8\times10^3)^2}, \end{equation}
and
\begin{equation}
C^{yy,N}_l=1.4\times 10^{-17}\times
e^{(l/4.8\times10^3)^2}+1.0\times 10^{-17}\times
e^{(l/6.8\times10^3)^2}. \end{equation}
The detectable level, $C_l^{DL}$, for SPT is shown as a light blue two-dot chain line in FIG. \ref{fig:CN}.
From FIG. \ref{fig:CyN}, we can see that SPT can detect $C^{yy}_l$ induced by primordial magnetic fields, although, as we will discuss shortly,
contamination from the SZ effect would be significant. On the other hand, FIG. \ref{fig:CmuN} shows that $C^{\mu\mu}_l$ induced by
primordial magnetic fields satisfying the current observational constraints is too small to be detected by SPT.
\subsection{Effects of the thermal Sunyaev-Zel'dovich effect}
So far we have discussed the detectability of $\mu$- and $y$-distortion anisotropies induced by
primordial magnetic fields, simply assuming that there are no other sources of the distortions.
However, these distortions can be generated by various processes both in the early and late-time Universe.
In particular, as is mentioned in the introduction, $y$-distortion is generated by the thermal SZ effect in the late-time Universe.
According to a recent measurement of the $y$-distortion map
by the Planck satellite \cite{Ade:2013qta}, $C^{yy}_l$ generated by the SZ effect is $\mathcal O(10^{-16})$
at $50\lesssim\ell\lesssim1000$. This is about three orders of magnitude larger than the maximum
$C^{yy}_l$ from primordial magnetic fields (see Fig. \ref{fig:CyN}). Since, at large angular scales, $C^{yy}_l$ from
the thermal SZ effect and from primordial magnetic fields have the same spectral shape, $C^{yy}_l\sim{\rm constant}$,
it would be difficult to isolate the contribution of primordial magnetic fields.
However, at smaller scales, the shapes of the spectra differ. In particular, $C^{yy}_l$ of the thermal SZ effect drops sharply
around the angular scales of galaxy clusters, $l\sim 3000$, while that from primordial magnetic fields
decays mildly as $C^{yy}_l\propto 1/l$. Thus, we expect that future observations of
$C^{yy}_l$ at small angular scales may be able to distinguish primordial magnetic fields from the SZ effect.
Furthermore, we also expect that the cross-correlation between the thermal SZ effect and the distribution of galaxy clusters (see, e.g., Ref.~\cite{Fang:2011zk})
helps us distinguish between the signals of $y$-distortion from the thermal SZ effect and the primordial magnetic fields considered here.
On the other hand, since $\mu$-distortions cannot be generated by astrophysical processes,
it is more promising to search for the signature of primordial magnetic fields
in the $\mu$-distortion.
\section{Conclusion}
In this paper, we have considered $\mu$- and $y$-distortions of the
CMB photon energy spectrum, which are generated by the decay of
primordial magnetic fields. In particular, we have focused on
anisotropies of the CMB distortions, which are induced by space-varying
random magnetic fields. Using the decay rate of magnetic fields derived
in \cite{Jedamzik:1996wp}, we have presented the formalism to calculate
the angular power spectra of these distortion parameters.
We have also considered the cross-correlations between the CMB distortion parameters and
temperature anisotropies induced by magnetic fields.
We have numerically calculated angular power
spectra $C^{\mu\mu}_l$, $C^{yy}_l$, {$C^{\mu y}_l$,} $C^{\mu T}_l$ and $C^{yT}_l$, taking
various values of the tilt of the magnetic field power spectrum $n$.
We have evaluated the maximum values
of $C^{\mu\mu}_l$, $C^{yy}_l$, {$C^{\mu y}_l$,} $C^{\mu T}_l$ and
$C^{yT}_l$ allowed by the current observational constraints on the
magnetic fields, setting the amplitude of the
magnetic fields to the upper limit of
the constraints and choosing the cut-off scales appropriately.
The peak scales of the angular power spectra correspond to the cutoff scales of
the magnetic fields and the peak amplitudes of the auto-correlation spectra $C^{\mu\mu}_l$ and $C^{yy}_l$ are basically
determined by the total energy density released by the decay of the magnetic fields. However, since the angular power spectra scale as $l^2$,
their amplitudes are suppressed on the observable scales.
On the other hand, we found that the cross-correlation between
the distortions, $C^{\mu y}_l$, is small, since the scales that dominantly
contribute to $\mu$- and $y$-distortions are different.
The cross-correlations between CMB temperature and distortion anisotropies,
$C^{\mu T}_l$ and $C^{yT}_l$, are suppressed even more strongly than
$C^{\mu\mu}_l$ and $C^{yy}_l$, since temperature fluctuations on such
small length scales are exponentially suppressed by Silk damping.
Following the numerical calculation of the angular power spectra, we have also discussed the detectability of
anisotropic CMB distortions induced by primordial magnetic fields.
Although PIXIE is an absolutely calibrated experiment with a high
sensitivity for measuring the CMB distortions, it cannot measure
$C^{\mu\mu}_l$ and $C^{yy}_l$, since it reaches only scales up
to $l\sim 100$ in its current design.
Following the method proposed in \cite{Ganc:2012ae},
relatively calibrated experiments
can also measure anisotropies of CMB distortions. LiteBIRD can detect $C^{yy}_l$ due to magnetic fields
with a large tilt, $n\gtrsim 3$, and an amplitude close
to the upper limit from current observations.
CMBpol can measure $C^{yy}_l$ induced by weaker magnetic fields.
Depending on the field parameters, it might detect $C^{\mu\mu}_l$ rather than $C^{yy}_l$.
Through observations of anisotropies of CMB distortions by these future
CMB satellites,
we might confirm the existence of primordial magnetic fields
with a highly blue-tilted power spectrum, or place novel constraints on
such magnetic fields.
On the other hand, $C^{\mu T}_l$ and $C^{yT}_l$ are far below the detectable level in
both observations.
In small-scale CMB measurements by ground-based telescopes such as SPT,
it is difficult to search for anisotropies of CMB distortions due to magnetic fields.
This is because the recent result of SPT is consistent with the SZ effect,
which induces $y$-distortions indistinguishable from those by magnetic fields,
and contributions from magnetic fields should be subdominant.
This leads to the upper limit on magnetic fields:
$B_0<1.8\times 10^2~{\rm nG}$ for $k_c=10~{\rm Mpc}^{-1}$, which is weaker by
an $\mathcal{O}(1)$ factor than the COBE constraint,
Eq.~(\ref{COBEcons}). The $\mu$ anisotropies are too small to be detected by
SPT.
We now mention another contribution of primordial magnetic fields to the cross-correlation between CMB distortion and temperature anisotropies.
In addition to the scalar passive mode considered here as the source of CMB temperature anisotropies from primordial magnetic fields,
the fields also induce the so-called compensated scalar magnetic mode of CMB temperature fluctuations \cite{Shaw:2009nf}.
Although it is subdominant compared with the scalar passive mode for $l\lesssim 5000$ \cite{Shaw:2009nf}, it becomes the dominant component for higher $l$.
This is because the scalar magnetic mode is actively produced and is not exponentially suppressed, unlike the scalar passive mode.
Since small-scale perturbations also contribute to $C^{X T}_l$ at small $l$,
the scalar magnetic mode could make a considerable contribution to $C^{X T}_l$ in the observable range
for a blue-tilted magnetic field power spectrum.
As a rough estimate of this contribution,
let us consider the unrealistic case in which the exponential suppression due to Silk damping is absent from the temperature anisotropies.
In this case,
we can expect from FIG. 2 in \cite{Shaw:2009nf} that
the temperature anisotropy induced by the scalar passive mode is larger than that generated by the scalar magnetic mode
by several orders of magnitude, even on small scales.
Furthermore, the angular cross power spectrum between the temperature anisotropy induced by the scalar passive mode
and the CMB distortions due to primordial magnetic fields, $C^{X T}_l$, is expected to be comparable to the auto power spectrum
of the temperature anisotropy, $C^{T T}_l$, in the absence of Silk damping.
This is because Eqs.~(\ref{CmumuB}) and (\ref{CmuTB}) (Eqs.~(\ref{CyyB}) and (\ref{CyTB})) have almost the same form, apart from $\mathcal{O}(1)$ prefactors, when we make the approximation $\Delta^S_l(k)\sim j_l(kr_{\rm rec})$.
We therefore see that
$C^{XT}_l$ due to the scalar magnetic mode would be smaller than $C^{X X}_l$ by several orders of magnitude.
Hence, including $C^{XT}_l$ due to the scalar magnetic mode in the detectability analysis of the previous section
would not increase the detectability of the distortions.
We therefore do not include the effects of the scalar magnetic mode in this paper,
and leave a detailed analysis including this effect for future work.
Although PIXIE will not detect anisotropic parts of $\mu$ and $y$
induced by primordial magnetic fields, it can detect their isotropic
parts if $\mu \gtrsim 5\times10^{-8}$ or $y \gtrsim 10^{-8}$, which
correspond to primordial magnetic fields with $\rho_B/\rho_\gamma\gtrsim
10^{-8}$ or $B_0\gtrsim 1~{\rm nG}$. Combining the result of PIXIE with
that of observations of $\mu$ and $y$ anisotropies by other satellites,
we might be able to confirm that the source of CMB distortions is
primordial magnetic fields. Such a type of analysis will shed light on
physics in the early Universe in a novel way.
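The quoted correspondence between $\rho_B/\rho_\gamma$ and $B_0$ can be checked at the order-of-magnitude level with a few lines. This is our back-of-the-envelope sketch; the precise conversion depends on conventions for the comoving field strength.

```python
import math

# Back-of-the-envelope check (ours) of the quoted correspondence
# rho_B / rho_gamma ~ 1e-8  <->  B_0 ~ 1 nG.  The CMB energy density today is
# u_gamma = 4 sigma_SB T^4 / c, the magnetic energy density is B_0^2 / (8 pi)
# (CGS units), and both redshift as (1+z)^4, so the ratio is constant in time.

sigma_SB = 5.670e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
c = 2.998e10          # speed of light, cm s^-1
T_cmb = 2.725         # CMB temperature today, K

u_gamma = 4 * sigma_SB * T_cmb ** 4 / c   # erg cm^-3
B0 = 1e-9                                 # 1 nG in gauss
u_B = B0 ** 2 / (8 * math.pi)             # erg cm^-3

print(u_B / u_gamma)  # ~1e-7, the same order as the quoted threshold
```
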
\acknowledgments
S.Y. would like to thank Jens Chluba for useful discussion.
K.M., S.Y. and T.S. would like to thank the Japan Society for the Promotion of Science for financial support.
H.T. is supported by the DOE at the Arizona State University.
|
1211.6558
|
\section{Introduction}\label{sec:intro}
The Einstein equivalence principle (EEP) is a far-reaching concept at the
heart of many gravity theories. If the EEP is valid, then gravitation must be a
curved-spacetime phenomenon. As a direct consequence, gravity
theories which fully embody EEP are the so-called ``metric theories of
gravity''\cite{wil93,wil06}. The validity of EEP involves three
different aspects, namely the weak equivalence principle, the local Lorentz
invariance (LLI), and the local position invariance (LPI)\cite{wil06}.
In the parametrized post-Newtonian (PPN) framework\cite{wn72,wil93,wil06}, we
present the orbital dynamics of binary systems from a generic semi-conservative
Lagrangian, based on which new tests of the strong-field LLI-violating PPN
parameters, $\hat{\alpha}_1$ and $\hat{\alpha}_2$, are proposed\cite{sw12}. New
limits are obtained from small-eccentricity relativistic neutron star
(NS)--white dwarf (WD) systems, PSRs J1012+5307\cite{lcw+01,lwj+09} and
J1738+0333\cite{akk+12,fwe+12}. Here we briefly summarize the analysis and
results of Ref.~\citen{sw12}. We also propose a new test of the
strong-field LPI-violating PPN parameter, $\hat{\xi}$, and get a new limit
$|\hat{\xi}| < 3.1 \times 10^{-4}$. All limits in this proceedings
contribution correspond to the 95\% confidence level, and the PPN
parameters with a ``hat'' represent the strong-field generalizations of the
weak-field PPN parameters (without hat). The preferred frame is assumed to be
defined by the isotropic CMB background.
\section{Local Lorentz invariance}\label{sec:lli}
LLI violation in the gravity sector is described by $\alpha_1$ and
$\alpha_2$ in the PPN framework\cite{wn72}, and these two parameters
are constrained by various observations of geophysics, Solar
System, and pulsar timing
experiments\cite{nw72,nor87,de92,bcd96,wex00,wk07,mwt08}. Recently, in
Ref.~\citen{sw12}, we found that the effects of $\hat{\alpha}_1$ and
$\hat{\alpha}_2$ on the binary orbital dynamics decouple and manifest
characteristic signatures when the orbital eccentricity is small (see Fig.~1 in
Ref.~\citen{swk12} for illustrations); hence they can be constrained
individually.
Damour and Esposito-Far\`{e}se were the first to work out the effects
of $\hat{\alpha}_1$ on the orbital dynamics of pulsar binaries\cite{de92}.
After dropping $\hat{\alpha}_2$-related terms, they found
that in the limit of a small eccentricity, $\hat{\alpha}_1$ induces a
polarization
of the orbit. The effect depends linearly on $\hat{\alpha}_1$ and on the
velocity of the binary with respect to the preferred frame, ${\bf w}$. Due to unknown
angles, previous methods could only obtain probabilistic limits on $\hat{\alpha}_1$.
Ref.~\citen{sw12} demonstrates that, given a sufficiently long observing time
span, the large periastron advance would
be able to overcome probabilistic assumptions. By utilizing the limits
of eccentricity variations, we get a robust and conservative
constraint,
\begin{equation}\label{eq:a1}
\hat{\alpha}_1 = -0.4^{+3.7}_{-3.1} \times 10^{-5} \,,
\end{equation}
from PSR J1738+0333. It surpasses the current best limit
from LLR\cite{mwt08} by a factor of five.
In the limit of a small eccentricity, $\hat{\alpha}_2$ induces a precession of
the orbital angular momentum around ${\bf w}$. It changes the orientation
of the orbital plane with respect to the Earth.
After subtracting other potential astrophysical and gravitational
contributions, we get a combined limit from PSRs J1012+5307 and
J1738+0333\cite{sw12},
\begin{equation}
\label{eq:a2}
|\hat{\alpha}_2| < 1.8 \times 10^{-4} \,.
\end{equation}
This limit is still three orders of magnitude less constraining than the limit
given in Ref.~\citen{nor87}, however, it is obtained for a strongly
self-gravitating body.
\section{Local position invariance}\label{sec:lpi}
LPI violation is described by Whitehead's term, characterized by
$\xi$\cite{wil73,wil93,wil06}. Even fully conservative theories of gravity
may have $\xi \ne 0$. From its Lagrangian, we can immediately
identify its analogy with the $\alpha_2$ term by replacing ${\bf w}$ with
${\bf v}_G \equiv |\Phi_G|^{1/2} {\bf n}_G$ and $\alpha_2$ with
$-2\xi$, where $\Phi_G$ is the Galactic potential at the position of
the binary, and ${\bf n}_G$ is the direction of the Galactic
acceleration. Hence, for small-eccentricity binaries, $\xi$ induces a
precession of the orbital angular momentum around ${\bf n}_G$, which
causes a change in the binary orientation. The same analysis as done for
$\hat{\alpha}_2$ in Ref.~\citen{sw12} applies to the $\hat{\xi}$ test. The
probability distributions of $\hat{\xi}$ from PSRs J1012+5307, J1738+0333,
and their combination are illustrated in Fig.~\ref{fig:xi} (cf. Fig.~4
in Ref.~\citen{sw12}).
From their combination, we get
\begin{equation}\label{eq:xi}
|\hat{\xi}| < 3.1 \times 10^{-4} \,,
\end{equation}
which surpasses the limit from the non-detection of anomalous Earth
tides in gravimeter data\cite{wg76,wil06} by one order of magnitude.
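The combination of the two pulsars' constraints can be sketched as follows. This is an illustrative reimplementation with Gaussian toy posteriors, not the actual probability distributions entering Fig.~\ref{fig:xi}; the widths below are placeholders.

```python
import numpy as np

# Illustrative sketch (ours) of combining the constraints from two pulsars:
# multiply the individual posterior densities of xi-hat on a common grid,
# renormalize, and read off the 95% upper limit on |xi-hat|.

xi = np.linspace(-2e-3, 2e-3, 40001)
dx = xi[1] - xi[0]

def gaussian(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

p1 = gaussian(xi, 0.0, 4e-4)    # toy posterior, stand-in for PSR J1012+5307
p2 = gaussian(xi, 0.0, 2e-4)    # toy posterior, stand-in for PSR J1738+0333

p = p1 * p2                     # combined posterior (independent data sets)
p /= p.sum() * dx               # normalize to unit integral

order = np.argsort(np.abs(xi))  # accumulate probability outward from xi = 0
cum = np.cumsum(p[order]) * dx
limit95 = np.abs(xi)[order][np.searchsorted(cum, 0.95)]
print(limit95)                  # ~1.96 * (combined sigma) for Gaussian toys
```
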
\begin{figure}[t]
\begin{center}
\psfig{file=fig_xi.eps,width=10cm}
\end{center}
\caption{Probability distributions of $\hat{\xi}$ from binary pulsars
PSRs J1012+5307 (dotted blue), J1738+0333 (dashed red) and their
combination (solid black).}
\label{fig:xi}
\end{figure}
\section*{Acknowledgements}
Lijing Shao is supported by China Scholarship Council (CSC).
|
1602.02106
|
\define\part{\partial}
\define\emp{\emptyset}
\define\imp{\implies}
\define\ra{\rangle}
\define\n{\notin}
\define\iy{\infty}
\define\m{\mapsto}
\define\do{\dots}
\define\la{\langle}
\define\bsl{\backslash}
\define\lras{\leftrightarrows}
\define\lra{\leftrightarrow}
\define\Lra{\Leftrightarrow}
\define\hra{\hookrightarrow}
\define\sm{\smallmatrix}
\define\esm{\endsmallmatrix}
\define\sub{\subset}
\define\bxt{\boxtimes}
\define\T{\times}
\define\ti{\tilde}
\define\nl{\newline}
\redefine\i{^{-1}}
\define\fra{\frac}
\define\un{\underline}
\define\ov{\overline}
\define\ot{\otimes}
\define\bbq{\bar{\QQ}_l}
\define\bcc{\thickfracwithdelims[]\thickness0}
\define\ad{\text{\rm ad}}
\define\Ad{\text{\rm Ad}}
\define\Hom{\text{\rm Hom}}
\define\End{\text{\rm End}}
\define\Aut{\text{\rm Aut}}
\define\Ind{\text{\rm Ind}}
\define\IND{\text{\rm IND}}
\define\ind{\text{\rm ind}}
\define\Res{\text{\rm Res}}
\define\res{\text{\rm res}}
\define\Ker{\text{\rm Ker}}
\define\Gal{\text{\rm Gal}}
\redefine\Im{\text{\rm Im}}
\define\sg{\text{\rm sgn}}
\define\tr{\text{\rm tr}}
\define\dom{\text{\rm dom}}
\define\supp{\text{\rm supp}}
\define\card{\text{\rm card}}
\define\bst{\bigstar}
\define\he{\heartsuit}
\define\clu{\clubsuit}
\redefine\spa{\spadesuit}
\define\a{\alpha}
\redefine\b{\beta}
\redefine\c{\chi}
\define\g{\gamma}
\redefine\d{\delta}
\define\e{\epsilon}
\define\et{\eta}
\define\io{\iota}
\redefine\o{\omega}
\define\p{\pi}
\define\ph{\phi}
\define\ps{\psi}
\define\r{\rho}
\define\s{\sigma}
\redefine\t{\tau}
\define\th{\theta}
\define\k{\kappa}
\redefine\l{\lambda}
\define\z{\zeta}
\define\x{\xi}
\define\vp{\varpi}
\define\vt{\vartheta}
\define\vr{\varrho}
\redefine\G{\Gamma}
\redefine\D{\Delta}
\define\Om{\Omega}
\define\Si{\Sigma}
\define\Th{\Theta}
\redefine\L{\Lambda}
\define\Ph{\Phi}
\define\Ps{\Psi}
\redefine\aa{\bold a}
\define\bb{\bold b}
\define\boc{\bold c}
\define\dd{\bold d}
\define\ee{\bold e}
\define\bof{\bold f}
\define\hh{\bold h}
\define\ii{\bold i}
\define\jj{\bold j}
\define\kk{\bold k}
\redefine\ll{\bold l}
\define\mm{\bold m}
\define\nn{\bold n}
\define\oo{\bold o}
\define\pp{\bold p}
\define\qq{\bold q}
\define\rr{\bold r}
\redefine\ss{\bold s}
\redefine\tt{\bold t}
\define\uu{\bold u}
\define\vv{\bold v}
\define\ww{\bold w}
\define\zz{\bold z}
\redefine\xx{\bold x}
\define\yy{\bold y}
\redefine\AA{\bold A}
\define\BB{\bold B}
\define\CC{\bold C}
\define\DD{\bold D}
\define\EE{\bold E}
\define\FF{\bold F}
\define\GG{\bold G}
\define\HH{\bold H}
\define\II{\bold I}
\define\JJ{\bold J}
\define\KK{\bold K}
\define\LL{\bold L}
\define\MM{\bold M}
\define\NN{\bold N}
\define\OO{\bold O}
\define\PP{\bold P}
\define\QQ{\bold Q}
\define\RR{\bold R}
\define\SS{\bold S}
\define\TT{\bold T}
\define\UU{\bold U}
\define\VV{\bold V}
\define\WW{\bold W}
\define\ZZ{\bold Z}
\define\XX{\bold X}
\define\YY{\bold Y}
\define\ca{\Cal A}
\define\cb{\Cal B}
\define\cc{\Cal C}
\define\cd{\Cal D}
\define\ce{\Cal E}
\define\cf{\Cal F}
\define\cg{\Cal G}
\define\ch{\Cal H}
\define\ci{\Cal I}
\define\cj{\Cal J}
\define\ck{\Cal K}
\define\cl{\Cal L}
\define\cm{\Cal M}
\define\cn{\Cal N}
\define\co{\Cal O}
\define\cp{\Cal P}
\define\cq{\Cal Q}
\define\car{\Cal R}
\define\cs{\Cal S}
\define\ct{\Cal T}
\define\cu{\Cal U}
\define\cv{\Cal V}
\define\cw{\Cal W}
\define\cz{\Cal Z}
\define\cx{\Cal X}
\define\cy{\Cal Y}
\define\fa{\frak a}
\define\fb{\frak b}
\define\fc{\frak c}
\define\fd{\frak d}
\define\fe{\frak e}
\define\ff{\frak f}
\define\fg{\frak g}
\define\fh{\frak h}
\define\fii{\frak i}
\define\fj{\frak j}
\define\fk{\frak k}
\define\fl{\frak l}
\define\fm{\frak m}
\define\fn{\frak n}
\define\fo{\frak o}
\define\fp{\frak p}
\define\fq{\frak q}
\define\fr{\frak r}
\define\fs{\frak s}
\define\ft{\frak t}
\define\fu{\frak u}
\define\fv{\frak v}
\define\fz{\frak z}
\define\fx{\frak x}
\define\fy{\frak y}
\define\fA{\frak A}
\define\fB{\frak B}
\define\fC{\frak C}
\define\fD{\frak D}
\define\fE{\frak E}
\define\fF{\frak F}
\define\fG{\frak G}
\define\fH{\frak H}
\define\fJ{\frak J}
\define\fK{\frak K}
\define\fL{\frak L}
\define\fM{\frak M}
\define\fN{\frak N}
\define\fO{\frak O}
\define\fP{\frak P}
\define\fQ{\frak Q}
\define\fR{\frak R}
\define\fS{\frak S}
\define\fT{\frak T}
\define\fU{\frak U}
\define\fV{\frak V}
\define\fZ{\frak Z}
\define\fX{\frak X}
\define\fY{\frak Y}
\define\ta{\ti a}
\define\tb{\ti b}
\define\tc{\ti c}
\define\td{\ti d}
\define\te{\ti e}
\define\tf{\ti f}
\define\tg{\ti g}
\define\tih{\ti h}
\define\tj{\ti j}
\define\tk{\ti k}
\define\tl{\ti l}
\define\tm{\ti m}
\define\tn{\ti n}
\define\tio{\ti\o}
\define\tp{\ti p}
\define\tq{\ti q}
\define\ts{\ti s}
\define\tit{\ti t}
\define\tu{\ti u}
\define\tv{\ti v}
\define\tw{\ti w}
\define\tz{\ti z}
\define\tx{\ti x}
\define\ty{\ti y}
\define\tA{\ti A}
\define\tB{\ti B}
\define\tC{\ti C}
\define\tD{\ti D}
\define\tE{\ti E}
\define\tF{\ti F}
\define\tG{\ti G}
\define\tH{\ti H}
\define\tI{\ti I}
\define\tJ{\ti J}
\define\tK{\ti K}
\define\tL{\ti L}
\define\tM{\ti M}
\define\tN{\ti N}
\define\tO{\ti O}
\define\tP{\ti P}
\define\tQ{\ti Q}
\define\tR{\ti R}
\define\tS{\ti S}
\define\tT{\ti T}
\define\tU{\ti U}
\define\tV{\ti V}
\define\tW{\ti W}
\define\tX{\ti X}
\define\tY{\ti Y}
\define\tZ{\ti Z}
\define\tcc{\ti\cc}
\define\sha{\sharp}
\define\sh{\sharp}
\define\Mod{\text{\rm Mod}}
\define\Ir{\text{\rm Irr}}
\define\sps{\supset}
\define\uP{\un P}
\define\bnu{\bar\nu}
\define\bc{\bar c}
\define\bp{\bar p}
\define\br{\bar r}
\define\bg{\bar g}
\define\hC{\hat C}
\define\bE{\bar E}
\define\bS{\bar S}
\define\bP{\bar P}
\define\bce{\bar\ce}
\define\tce{\ti\ce}
\define\bul{\bullet}
\define\uZ{\un Z}
\define\che{\check}
\define\cha{\che{\a}}
\define\chg{\che g}
\define\bfU{\bar\fU}
\define\tfK{\ti{\fK}}
\define\tfD{\ti{\fD}}
\define\tfC{\ti{\fC}}
\define\bat{\bar\t}
\define\dcl{\dot{\cl}}
\define\cir{\bul}
\define\prq{\preceq}
\define\tss{\ti{\ss}}
\define\tSS{\ti{\SS}}
\define\tcj{\ti{\cj}}
\define\tcp{\ti{\cp}}
\define\tcf{\ti{\cf}}
\define\tcb{\ti{\cb}}
\define\tcy{\ti{\cy}}
\define\y{\ti r}
\define\tip{\ti\p}
\define\chR{\check R}
\define\tcw{\ti{\cw}}
\define\tfc{\ti{\fc}}
\define\Rep{\text{\rm Rep}}
\define\Reg{\text{\rm Reg}}
\define\tLL{\ti\LL}
\define\bvt{\bar{\vt}}
\define\BFO{BFO}
\define\EW{EW}
\define\EGNO{EGNO}
\define\KL{KL1}
\define\KLL{KL2}
\define\KM{KM}
\define\SPEC{L1}
\define\ORA{L2}
\define\LEA{L3}
\define\CELLSIV{L4}
\define\POSI{L5}
\define\HEC{L6}
\define\ACTION{L7}
\define\EXCEP{L8}
\define\INV{L9}
\define\PERR{Pe}
\define\si{\sim}
\define\sqc{\sqcup}
\define\op{\oplus}
\define\Irr{\text{\rm Irr}}
\head Introduction\endhead
Let $W$ be an irreducible Weyl group with length function $l:W@>>>\NN$ and let
$S=\{s\in W;l(s)=1\}$. Let $\Irr W$ be a set of representatives for the
isomorphism classes of irreducible representations of $W$ (over $\CC$). In
\cite{\SPEC} a certain subset of $\Irr W$ was defined. The representations in
this subset were later called {\it special representations}; they play a key
role in the classification of unipotent representations of a reductive group
over a finite field $\FF_q$ for which $W$ is the Weyl group. (The definition
of special representations is reviewed in 3.1.)
It will be convenient to replace irreducible representations of $W$ with the
corresponding simple modules of the asymptotic Hecke algebra $\JJ$ (see
\cite{\HEC, 18.3}) associated to $W$ via the canonical isomorphism
$\psi:\CC[W]@>\si>>\JJ$ (see 3.1); let $E_\iy$ be the simple $\JJ$-module
corresponding to $E\in\Irr W$ under $\psi$.
In this paper we show that a special representation $E$ of $W$ is characterized
by the following positivity property of $E_\iy$: there exists a $\CC$-basis of
$E_\iy$ such that any element $t_u$ in the standard basis of $\JJ$ acts in
this basis through a matrix with all entries in $\RR_{\ge0}$.
The fact that for a special representation $E$, $E_\iy$ has the positivity
property above was pointed out (in the case where $W$ is of classical type) in
\cite{\INV}. In this paper I recall the argument of \cite{\INV} (see 3.3)
and give two other proofs which apply to any $W$. One of these proofs (see
4.4) is based on the interpretation \cite{\LEA}, \cite{\BFO}, of $\JJ$ (or its
part attached to a fixed two-sided cell) in terms of $G$-equivariant vector
bundles on $X\T X$ where $X$ is a finite set with an action of a finite group
$G$. Another proof (see Section 2) is based on the use of Perron's theorem for
matrices with all entries in $\RR_{>0}$. (Previously, Perron's theorem has
been used in the context of canonical bases in quantum groups in the study
\cite{\POSI} of total positivity and, very recently, in the context of the
canonical basis \cite{\KL} of $\CC[W]$, in \cite{\KM}; in both cases the
positivity properties of the appropriate canonical bases were used). We also
show that the Hecke algebra representation corresponding to a special
representation $E$ can be realized essentially by a $W$-graph (in the sense of
\cite{\KL}) in which all labels are natural numbers. Some of our results admit
also an extension to the case of affine Weyl groups (see Section 5).
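The role played by Perron's theorem can be illustrated numerically: a matrix with all entries in $\RR_{>0}$ has an eigenvalue of maximal absolute value which is real and simple, and the corresponding eigenvector can be normalized to have all entries in $\RR_{>0}$. The following sketch is ours and is not part of the paper; it checks this for a random positive matrix.

```python
import numpy as np

# Numerical illustration (ours) of Perron's theorem as used in Section 2: a
# matrix with all entries in R_{>0} has a real eigenvalue of maximal absolute
# value, and the corresponding eigenvector can be chosen with all entries > 0.

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(5, 5))   # a random strictly positive matrix

eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(np.abs(eigvals)))
perron_val = eigvals[i]
v = np.real(eigvecs[:, i])
v = v if v.sum() > 0 else -v             # fix the overall sign

print(abs(np.imag(perron_val)), bool(np.all(v > 0)))
```
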
\head 1. Statement of the main theorem\endhead
\subhead 1.1\endsubhead
Let $v$ be an indeterminate and let $\ca=\ZZ[v,v\i]$.
Let $\ch$ be the Hecke algebra of $W$, that is, the associative $\ca$-algebra
with $1$ with an $\ca$-basis $\{T_w;w\in W\}$ (where $T_1=1$) and with
multiplication such that $T_wT_{w'}=T_{ww'}$ if $l(ww')=l(w)+l(w')$ and
$(T_s+1)(T_s-v^2)=0$ if $s\in S$. Let $\{c_w;w\in W\}$ be the $\ca$-basis of
$\ch$ denoted by $\{C'_w;w\in W\}$ in \cite{\KL} (with $q=v^2$); see also
\cite{\HEC, 5.2}. For example, if $s\in S$, we have $c_s=v\i T_s+v\i$. The
left cells and two-sided cells of $W$ are the equivalence classes for the
relations $\si_L$ and $\si_{LR}$ on $W$ defined in \cite{\KL}, see also
\cite{\HEC, 8.1}; we shall write $\si$ instead of $\si_L$. For $x,y$ in $W$ we
have $c_xc_y=\sum_{z\in W}h_{x,y,z}c_z$ where $h_{x,y,z}\in\NN[v,v\i]$. As in
\cite{\HEC, 13.6}, for $z\in W$ we define $a(z)\in\NN$ by
$h_{x,y,z}\in v^{a(z)}\ZZ[v\i]$ for all $x,y$ in $W$ and
$h_{x,y,z}\n v^{a(z)-1}\ZZ[v\i]$ for some $x,y$ in $W$. (For example,
$a(1)=0$ and $a(s)=1$ if $s\in S$.) For $x,y,z$ in $W$ we have
$h_{x,y,z}=\g_{x,y,z\i}v^{a(z)}\mod v^{a(z)-1}\ZZ[v\i]$ where
$\g_{x,y,z\i}\in\NN$ is well defined.
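For the smallest case $W=\{1,s\}$ the relations above can be checked mechanically. The sketch below is ours (it uses the right-regular representation on the basis $\{T_1,T_s\}$, which is an assumption of the example, not a construction from the paper); it verifies the quadratic relation $(T_s+1)(T_s-v^2)=0$ and the identity $c_sc_s=(v+v\i)c_s$ for $c_s=v\i T_s+v\i$.

```python
import sympy as sp

# Toy check (ours) for W = {1, s}: in the right-regular representation on the
# basis {T_1, T_s}, T_s acts by the matrix [[0, v^2], [1, v^2 - 1]] (since
# T_s T_s = v^2 T_1 + (v^2 - 1) T_s).  We verify the quadratic relation
# (T_s + 1)(T_s - v^2) = 0 and c_s c_s = (v + v^{-1}) c_s for
# c_s = v^{-1} T_s + v^{-1}, the rank-1 instance of c_s c_d = (v + v^{-1}) c_d.

v = sp.symbols('v')
I2 = sp.eye(2)
Ts = sp.Matrix([[0, v**2], [1, v**2 - 1]])

assert sp.expand((Ts + I2) * (Ts - v**2 * I2)) == sp.zeros(2, 2)

cs = (Ts + I2) / v          # c_s = v^{-1} T_s + v^{-1}
assert sp.expand(cs * cs - (v + 1 / v) * cs) == sp.zeros(2, 2)
print("relations verified")
```
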
Let $\JJ$ be the $\CC$-vector space with basis $\{t_w;w\in W\}$. For $x,y$ in
$W$ we set $t_xt_y=\sum_{z\in W}\g_{x,y,z\i}t_z\in\JJ$. This defines a
structure of associative $\CC$-algebra on $\JJ$ with unit element of the form
$\sum_{d\in\cd}t_d$ where $\cd$ is a certain subset of the set of involutions
in $W$, see \cite{\HEC, 18.3}.
For any subset $X$ of $W$ let $\JJ_X$ be the subspace of $\JJ$ with basis
$\{t_w;w\in X\}$; let $\JJ^+_X$ be the set of elements of the form
$\sum_{w\in X}f_wt_w\in\JJ_X$ with $f_w\in\RR_{>0}$ for all $w\in X$. We have
$\JJ=\op_\boc\JJ_\boc$ where $\boc$ runs over the two-sided cells of $W$. Each
$\JJ_\boc$ is a subalgebra of $\JJ$ with unit element $\sum_{d\in\cd_\boc}t_d$
where $\cd_\boc=\boc\cap\cd$; moreover, $\JJ_\boc\JJ_{\boc'}=0$ if
$\boc\ne\boc'$.
{\it Until the end of Section 4 we fix a two-sided cell $\boc$.}
\nl
Let $L$ be the set of left cells that are contained in $\boc$. We have
$$\boc=\sqc_{\G\in L}\G=\sqc_{\G,\G'\text{ in }L}(\G\cap\G'{}\i);$$
moreover, $\G\cap\G'{}\i\ne\emp$ for any $\G,\G'\text{ in }L$. It follows that
$$\JJ_\boc=\op_{\G\in L}\JJ_\G=\op_{\G,\G'\text{ in }L}\JJ_{\G\cap\G'{}\i};$$
moreover, $\JJ_{\G\cap\G'{}\i}\ne0$. Note that for $\G\in L$, $\JJ_\G$ is a
left ideal of $\JJ_\boc$.
A line $\cl$ in $\JJ_{\G\cap\G'{}\i}$ is said to be {\it positive} if
$\cl^+:=\cl\cap\JJ^+_{\G\cap\G'{}\i}\ne\emp$; in this case, $\cl^+$ consists
of all $\RR_{>0}$-multiples of a single nonzero vector. We now state our main
result.
\proclaim{Theorem 1.2}(a) Let $\G\in L$. There is a unique left ideal $M_\G$
of $\JJ_\boc$ such that property ($\he$) below holds:
($\he$) $M_\G=\op_{\G'\in L}M_{\G,\G'}$ where for any $\G'\in L$,
$M_{\G,\G'}:=M_\G\cap\JJ_{\G\cap\G'{}\i}$ is a positive line.
(b) Let $\G\in L,\G'\in L,u\in\boc$. We have $u\in\ti\G\cap\ti\G'{}\i$ for
well-defined $\ti\G,\ti\G'$ in $L$. If $\ti\G\ne\G'$, then $t_uM_{\G,\G'}=0$.
If $\ti\G=\G'$, then $t_uM_{\G,\G'}=M_{\G,\ti\G'}$ and
$t_uM_{\G,\G'}^+=M_{\G,\ti\G'}^+$.
(c) The $\JJ_\boc$-module $M_\G$ in (a) is simple. Its isomorphism class is
independent of $\G\in L$.
(d) The subspace $\II=\op_{\G\in L}M_\G$ of $\JJ_\boc$ is a simple two-sided
ideal of $\JJ_\boc$.
(e) Let $\G,\G',\ti\G,\ti\G'$ be in $L$. If $\G\ne\ti\G'$ then
$M_{\G,\G'}M_{\ti\G,\ti\G'}=0$. If $\G=\ti\G'$, then multiplication in
$\JJ_\boc$ defines an isomorphism
$M_{\G,\G'}\ot M_{\ti\G,\G}@>\si>>M_{\ti\G,\G'}$ and a surjective map
$M_{\G,\G'}^+\T M_{\ti\G,\G}^+@>>>M_{\ti\G,\G'}^+$.
(f) Let $\G,\G'$ be in $L$. The antiautomorphism $\th:\JJ_\boc@>>>\JJ_\boc$
given by $t_x\m t_{x\i}$ for all $x\in\boc$ maps $M_{\G,\G'}$ onto
$M_{\G',\G}$ and $M_{\G,\G'}^+$ onto $M_{\G',\G}^+$.
\endproclaim
The proof is given in Section 2.
\subhead 1.3\endsubhead
As a consequence of Theorem 1.2, the simple $\JJ_\boc$-module $M_\G$ admits a
$\CC$-basis $\{\ti e_{\G'};\G'\in L\}$ with the following property:
(i) {\it If $u\in\boc$ and $\G'\in L$, then $t_u\ti e_{\G'}$ is an
$\RR_{\ge0}$-linear combination of elements $\ti e_{\G''}$ with $\G''\in L$;
more precisely, if $u\in\ti\G\cap\ti\G'{}\i$ with $\ti\G,\ti\G'$ in $L$, then
$t_u\ti e_{\G'}=\l_{u,\G',\ti\G'}\ti e_{\ti\G'}$
\nl
with $\l_{u,\G',\ti\G'}\in\RR_{>0}$ if $\ti\G=\G'$ and $\l_{u,\G',\ti\G'}=0$
if $\ti\G\ne\G'$.}
\nl
Indeed, we can take for $\ti e_{\G'}$ any element of $M_{\G,\G'}^+$ and we use
1.2(b).
\subhead 1.4\endsubhead
Let $\le$ be the standard partial order on $W$. By \cite{\KL}, to any $y\ne w$
in $W$ one can attach a number $\mu(y,w)\in\ZZ$ such that for any $s\in S$ and
any $w\in W$ with $sw>w$ we have $c_sc_w=\sum_{y\in W;sy<y}\mu(y,w)c_y$. By
\cite{\KLL} we have $\mu(y,w)\in\NN$.
\subhead 1.5\endsubhead
Let $\un\ch=\CC(v)\ot_\ca\ch$ where we use the obvious imbedding
$\ca@>>>\CC(v)$; we denote $1\ot c_w$ again by $c_w$. Let
$\un\JJ=\CC(v)\ot_\CC\JJ$ where we use the obvious imbedding
$\CC@>>>\CC(v)$. We have a homomorphism of $\CC(v)$-algebras (with $1$)
$\Ps:\un\ch@>>>\un\JJ$ given by
$$\Ps(c_x)=\sum_{d\in\cd,z\in W,d\si z}h_{x,d,z}t_z$$
for all $x\in W$, see \cite{\HEC, 18.9}. (Note that $\Ps$ is in fact the
composition of a homomorphism in {\it loc.cit.} with an automorphism of
$\un\ch$.)
\subhead 1.6\endsubhead
For any $\G\in L$ let $S_\G$ be the set of all $t\in S$ such that $rt<r$ for
some (or equivalently any) $r\in\G$.
We fix $\G\in L$. Let $\{\ti e_{\G'};\G'\in L\}$ be a $\CC$-basis of $M_\G$ as
in 1.3; we use the notation of 1.3. We shall view $\CC(v)\ot M_\G$ as an
$\un\ch$-module via $\Ps$. Let $s\in S$ and let $\G'\in L$; let $\d$ be the
unique element in $\G'\cap\cd$. We show:
(a) {\it If $s\in S_{\G'}$, then $\Ps(T_s)\ti e_{\G'}=v^2\ti e_{\G'}$.}
(b) {\it If $s\n S_{\G'}$, then
$$\Ps(T_s)\ti e_{\G'}=
-\ti e_{\G'}+\sum_{\ti\G\in L;s\in S_{\ti\G}}f_{\ti\G,\G'}v\i\ti e_{\ti\G}$$
where}
$$f_{\ti\G,\G'}=\sum_{u\in\G'\cap\ti\G\i}\mu(u,\d)\l_{u,\G',\ti\G}\in
\RR_{\ge0}.$$
\nl
By definition, we have
$$\Ps(c_s)\ti e_{\G'}=\sum_{d\in\cd,u\in\boc,d\si u}h_{s,d,u}t_u\ti e_{\G'}=
\sum_{\ti\G\in L}\sum_{d\in\cd,u\in\G'\cap\ti\G\i,d\in\G'}
h_{s,d,u}\l_{u,\G',\ti\G}\ti e_{\ti\G}.$$
Since in the last sum we have $d\in\G'$, we can assume that $d=\d$.
Thus we have
$$\Ps(c_s)\ti e_{\G'}=\sum_{\ti\G\in L}\sum_{u\in\G'\cap\ti\G\i}
h_{s,\d,u}\l_{u,\G',\ti\G}\ti e_{\ti\G}.$$
If $s\d<\d$ (that is, $s\in S_{\G'}$), we have $c_sc_\d=(v+v\i)c_\d$, so that
$h_{s,\d,u}$ is $v+v\i$ for $u=\d$ and is $0$ for $u\ne\d$; hence in this
case
$$\Ps(c_s)\ti e_{\G'}=(v+v\i)\ti e_{\G'};$$
(we use that $\l_{\d,\G',\G'}=1$.)
We now assume that $s\d>\d$ (that is, $s\n S_{\G'}$). In this case,
$h_{s,\d,u}$ is $\mu(u,\d)$ if $su<u$ and is $0$ if $su>u$ (see 1.4); hence
$$\align&\Ps(c_s)\ti e_{\G'}=\sum_{\ti\G\in L}\sum_{u\in\G'\cap\ti\G\i;su<u}
\mu(u,\d)\l_{u,\G',\ti\G}\ti e_{\ti\G}\\&=\sum_{\ti\G\in L;s\in S_{\ti\G}}
\sum_{u\in\G'\cap\ti\G\i}\mu(u,\d)\l_{u,\G',\ti\G}\ti e_{\ti\G}.\endalign$$
Now (a),(b) follow.
Note that (a),(b) show that in the $\un\ch$-module $\CC(v)\ot M_\G$ the
generators $T_s$ act with respect to the basis $\{\ti e_{\G'};\G'\in L\}$
essentially by formulas which are those in a $W$-graph (in the sense of
\cite{\KL}) in which all labels are in $\RR_{\ge0}$.
\subhead 1.7\endsubhead
In Section 4 we will give another proof of the existence part of 1.2(a) which
also shows that $\ti e_{\G'}$ in 1.3 can be chosen so that
(i) each $\ti e_{\G'}$ is a $\ZZ_{>0}$-linear combination of elements in
$\{t_x;x\in\G\cap\G'{}\i\}$,
(ii) $\l_{u,\G',\G'_1}\in\ZZ_{\ge0}$ (notation of 1.3).
\nl
In particular, with this choice of $\ti e_{\G'}$, the constants
$f_{\ti\G,\G'}$ in the ``$W$-graph formulas'' in 1.6 are in $\ZZ_{\ge0}$.
\head 2. Proof of Theorem 1.2\endhead
\subhead 2.1\endsubhead
From \cite{\HEC,\S15} we see that, for $x,y,u$ in $W$ we have:
$$\g_{x,y,u}=\g_{y,u,x}=\g_{u,x,y},\tag a$$
$$\g_{x,y,u}\ne0\imp x\si y\i,y\si u\i,u\si x\i.\tag b$$
By \cite{\HEC, 18.4(a)}:
(c) {\it for $y,z$ in $W$ we have $y\si z$ if and only if $t_yt_{z\i}\ne0$.}
\subhead 2.2\endsubhead
Let $\G,\G',\ti\G,\ti\G'$ be in $L$. From 2.1(b) we deduce:
$$\text{ If }\G\ne\ti\G',\text{ then }
\JJ_{\G\cap\G'{}\i}\JJ_{\ti\G\cap\ti\G'{}\i}=0,\tag a$$
$$\JJ_{\G\cap\G'{}\i}\JJ_{\ti\G\cap\G\i}\sub\JJ_{\ti\G\cap\G'{}\i}.\tag b$$
We show:
$$\text{If }u\in\G\cap\G'{}\i,\text{ then }
t_u\JJ_{\ti\G\cap\G\i}^+\sub\JJ_{\ti\G\cap\G'{}\i}^+.\tag c$$
Let $\x=\sum_{y\in\ti\G\cap\G\i}f_yt_y\in\JJ_{\ti\G\cap\G\i}$ with
$f_y\in\RR_{>0}$ for all $y$. We must show that
$t_u\x\in\JJ_{\ti\G\cap\G'{}\i}^+$; it is enough to show that for any
$z\in\ti\G\cap\G'{}\i$ there exists $y\in\ti\G\cap\G\i$ such that
$\g_{u,y,z\i}\ne0$ or that there exists $y\in W$ such that $\g_{z\i,u,y}\ne0$
(see 2.1(a)); such $y$ is automatically in $\ti\G\cap\G\i$. Hence it is enough
to show that for any $z\in\ti\G\cap\G'{}\i$ we have $t_{z\i}t_u\ne0$. This
holds since $z\i\si u\i$ (see 2.1(c)).
\mpb
From (c) we deduce
$$\JJ_{\G\cap\G'{}\i}^+\JJ_{\ti\G\cap\G\i}^+\sub\JJ_{\ti\G\cap\G'{}\i}^+.
\tag d$$
\subhead 2.3\endsubhead
Let $\G\in L$. For any $\G'\in L$ we define a $\CC$-linear map
$T_{\G'}:\JJ_{\G\cap\G'{}\i}@>>>\JJ_{\G\cap\G'{}\i}$ by
$$T_{\G'}(t_x)=\sum_{y\in\G\cap\G\i}t_xt_y=
\sum_{y\in\G\cap\G\i,z\in\G}\g_{x,y,z\i}t_z.$$
We show:
(a) {\it the matrix representing $T_{\G'}$ with respect to the basis
$\{t_w;w\in\G\cap\G'{}\i\}$ has all entries in $\RR_{>0}$.}
\nl
An equivalent statement is: for any $x,z$ in $\G\cap\G'{}\i$, the sum
$\sum_{y\in\G\cap\G\i}\g_{x,y,z\i}$ is $>0$. Since $\g_{x,y,z\i}\in\NN$ for
all $y$, it is enough to show that for some $y\in\G\cap\G\i$ we have
$\g_{x,y,z\i}\ne0$ or equivalently (see 2.1(a)) that for some $y\in W$ we have
$\g_{z\i,x,y}\ne0$ (we then have automatically $y\in\G\cap\G\i$). Thus, it is
enough to show that $t_{z\i}t_x\ne0$. This follows from 2.1(c) since
$z\i\si x\i$.
\mpb
Applying Perron's theorem \cite{\PERR} to the matrix in (a) we see that there
is a unique $T_{\G'}$-stable positive line $\cl_{\G,\G'}$ in
$\JJ_{\G\cap\G'{}\i}$ (the ``Perron line'').
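As a numerical illustration of the role of Perron's theorem here (illustrative only; the $3\T3$ matrix below is arbitrary and does not come from an actual two-sided cell), the following sketch approximates the Perron line of a matrix with all entries in $\RR_{>0}$ by power iteration and checks that the resulting eigenline is positive:

```python
# Power iteration illustrating Perron's theorem: a square matrix with
# all entries > 0 has a simple eigenvalue of maximal absolute value,
# and the corresponding eigenline is spanned by a vector with all
# entries > 0 (the "Perron line").  The matrix A is an arbitrary
# positive example, not one arising from a cell.

def perron_vector(A, steps=200):
    n = len(A)
    v = [1.0] * n                      # start inside the positive cone
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]         # normalise to sum 1; stays positive
    return v

A = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [1.0, 1.0, 2.0]]
v = perron_vector(A)
Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
lam = sum(Av)                          # Perron eigenvalue, since v sums to 1
assert all(x > 0 for x in v)           # the Perron line meets the positive cone
assert all(abs(Av[i] - lam * v[i]) < 1e-8 for i in range(3))
```

Power iteration converges here because the Perron eigenvalue strictly dominates the other eigenvalues in absolute value; for this $A$ it lies between the smallest and largest row sums, $4$ and $5$.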
Now let $u\in\boc$; we have $u\in\ti\G\cap\ti\G'{}\i$ with $\ti\G,\ti\G'$ in
$L$. From 2.2(a), 2.2(d), we deduce
(b) If $\ti\G\ne\G'$, then $t_u\JJ_{\G\cap\G'{}\i}=0$ hence
$t_u\cl_{\G,\G'}=0$;
(c) if $\ti\G=\G'$, then
$t_u\cl_{\G,\G'}^+\sub\JJ^+_{\G\cap\ti\G'{}\i}$, hence $t_u\cl_{\G,\G'}$
is a positive line in $\JJ_{\G\cap\ti\G'{}\i}$.
\nl
In the setup of (c), we have $t_u(T_{\G'}(\x))=T_{\ti\G'}(t_u\x)$ for any
$\x\in\JJ_{\G\cap\G'{}\i}$. It follows that $t_u\cl_{\G,\G'}$ is a
$T_{\ti\G'}$-stable line in $\JJ_{\G\cap\ti\G'{}\i}$. Thus,
(d) if $\ti\G=\G'$, then $t_u\cl_{\G,\G'}=\cl_{\G,\ti\G'}$.
\nl
We set $\cm_\G=\op_{\G'\in L}\cl_{\G,\G'}$. From (b),(d) we see that $\cm_\G$
is a $\JJ_\boc$-submodule of $\JJ_\G$.
We now see that the existence part of 1.2(a) is proved: we can take
$M_\G=\cm_\G$.
\subhead 2.4\endsubhead
Let $\G\in L$ and let $M_\G$ be any $\JJ_\boc$-submodule of $\JJ_\G$ for which
property ($\he$) in 1.2(a) holds. We show that 1.2(b) holds for $M_\G$. Let
$u\in\ti\G\cap\ti\G'{}\i$ be as in 1.2(b) and let $\G'\in L$. If
$\ti\G\ne\G'$, then $t_u\JJ_{\G\cap\G'{}\i}=0$ hence $t_uM_{\G,\G'}=0$. Now
assume that $\ti\G=\G'$. By 2.2(c), left multiplication by $t_u$ maps
$\JJ^+_{\G\cap\G'{}\i}$ into $\JJ^+_{\G\cap\ti\G'{}\i}$ hence it maps any
positive line in $\JJ_{\G\cap\G'{}\i}$ onto a positive line in
$\JJ_{\G\cap\ti\G'{}\i}$. In particular, it maps $M_{\G,\G'}$ onto a line in
$\JJ_{\G\cap\ti\G'{}\i}$, which, being also contained in $M_\G$, must be equal
to $M_{\G,\ti\G'}$; moreover, it maps $M_{\G,\G'}^+$ into
$\JJ^+_{\G\cap\ti\G'{}\i}$ hence onto $M_{\G,\ti\G'}^+$. Thus, 1.2(b) holds for
$M_\G$.
We now choose a basis $\{\ti e_{\G'};\G'\in L\}$ of $M_\G$ such that
$\ti e_{\G'}\in M_{\G,\G'}^+$ for any $\G'\in L$; then for any $u\in\boc$, the
matrix of the $t_u$-action on $M_\G$ in this basis has entries in $\RR_{\ge0}$.
Thus,
(a) $\tr(t_u,M_\G)\in\RR_{\ge0}$ for all $u\in\boc$.
\nl
We show:
(b){\it the $\CC$-linear map $\nu:\JJ_\boc@>>>\End_\CC(M_\G)$ given by the
$\JJ_\boc$-module structure on $M_\G$ is surjective.}
\nl
It is enough to show that for any $\G',\ti\G'$ in $L$ there exists $u\in\boc$
such that $\nu(t_u)$ carries the line $M_{\G,\G'}$ onto the line
$M_{\G,\ti\G'}$ and carries the line $M_{\G,\G''}$ (where $\G''\in L$,
$\G''\ne\G'$) to zero. Note that any $u\in\G'\cap\ti\G'{}\i$ has the required
properties. This proves (b).
It follows that the $\JJ_\boc$-module $M_\G$ is simple. We show:
(c) {\it Assume that $M'$ is any simple $\JJ_\boc$-module such that
$\tr(t_u,M')\in\RR_{\ge0}$ for all $u\in\boc$. Then $M'$ is isomorphic to
$M_\G$.}
\nl
Assume that this is not so. We use the orthogonality formula
$$\sum_{u\in\boc}\tr(t_u,M_\G)\tr(t_{u\i},M')=0,$$
which is a special case of \cite{\HEC, 19.2(e)} (taking into account
\cite{\HEC, 20.1(b)} and using that $u\m u\i$ maps $\boc$ into $\boc$). Since
each term in the last sum is in $\RR_{\ge0}$, it follows that each term in the
last sum is $0$. In particular, we have $\tr(t_d,M_\G)\tr(t_d,M')=0$ for any
$d\in\cd_\boc$. We show that for any $d\in\cd_\boc$ we have
$\tr(t_d,M_\G)\in\RR_{>0}$. Using the basis of $M_\G$ employed in the proof of
(a), it is enough to show that some diagonal entry of the matrix of the
$t_d$-action in this basis is $\ne0$ (all entries are in $\RR_{\ge0}$). We
have $d\in\G'$ for a unique $\G'\in L$; then $t_dM_{\G,\G'}^+=M_{\G,\G'}^+$
and the desired property holds.
From $\tr(t_d,M_\G)\tr(t_d,M')=0$ and $\tr(t_d,M_\G)\in\RR_{>0}$ we deduce
that $\tr(t_d,M')=0$ for any $d\in\cd_\boc$. Since $\sum_{d\in\cd_\boc}t_d$ is
the unit element $1_\boc$ of $\JJ_\boc$, it follows that $\tr(1_\boc,M')=0$.
This is a contradiction. This proves (c).
\mpb
Let $I$ be the simple ideal of $\JJ_\boc$ such that $I\cm_\G\ne0$. It is a
$\CC$-vector space of dimension $N^2$ where $N$ is the number of elements in
$L$. If $\ti\G\in L$, then $\cm_{\ti\G}$ is a simple $\JJ_\boc$-module such
that $\tr(t_u,\cm_{\ti\G})\in\RR_{\ge0}$ for all $u\in\boc$ (we use (a) with
$M_\G$ replaced by $\cm_{\ti\G}$); hence, by (c), we have
$\cm_{\ti\G}\cong M_\G$ as $\JJ_\boc$-modules. In particular, the isomorphism
class of $\cm_{\ti\G}$ is independent of $\ti\G$. We see that the (necessarily
direct) sum $\sum_{\ti\G\in L}\cm_{\ti\G}$ is contained in $I$ and has
dimension $N^2$ hence it is equal to $I$; we also see that $M_\G\sub I$ and,
taking intersections with $\JJ_\G$, we see that $M_\G\sub\cm_\G$, hence
$M_\G=\cm_\G$ (since $\dim M_\G=\dim\cm_\G=N$). We now see that the uniqueness
part of 1.2(a) is proved. Note that 1.2(b),1.2(c),1.2(d) are also proved and
we have $\II=I$.
\subhead 2.5\endsubhead
We prove 1.2(e). In the setup of (e), if $\G\ne\ti\G'$ then, using 2.2(a), we
have $M_{\G,\G'}M_{\ti\G,\ti\G'}=0$. Assume now that $\G=\ti\G'$. Using 2.2(d),
we see that $M_{\G,\G'}^+M_{\ti\G,\G}^+$ is contained in
$\JJ_{\ti\G\cap\G'{}\i}^+$; it is also contained in $\II$ (since $\II$ is
closed under multiplication), hence it is contained in
$\II\cap\JJ_{\ti\G\cap\G'{}\i}^+=M_{\ti\G,\G'}^+$. Thus, multiplication
restricts to a map $M_{\G,\G'}^+\T M_{\ti\G,\G}^+@>>>M_{\ti\G,\G'}^+$. This
map is necessarily surjective since $M_{\ti\G,\G'}^+$ is a single orbit of
$\RR_{>0}$ under scalar multiplication. This implies that the linear map
between lines $M_{\G,\G'}\ot M_{\ti\G,\G}@>>>M_{\ti\G,\G'}$ is an isomorphism.
This proves 1.2(e).
\subhead 2.6\endsubhead
We prove 1.2(f). Any element $\x\in\JJ_\boc$ defines a linear map
${}^t(\th(\x)):M_\G^*@>>>M_\G^*$ where $M_\G^*$ denotes the dual space and
${}^t$ denotes the transpose. This defines a $\JJ_\boc$-module structure on
$M_\G^*$ such that for any $x\in\boc$ we have
$\tr(t_x,M_\G^*)=\tr(t_{x\i},M_\G)$. By the argument in \cite{\HEC, 20.13(a)},
$\tr(t_{x\i},M_\G)$ is the complex conjugate of $\tr(t_x,M_\G)$. But the last
trace is a real number, so that $\tr(t_x,M_\G^*)=\tr(t_x,M_\G)$. It follows
that $M_\G^*\cong M_\G$ as $\JJ_\boc$-modules. From the definitions, the
simple two-sided ideal $\II'$ of $\JJ_\boc$ such that $\II'M_\G^*\ne0$
satisfies $\II'=\th(\II)$. It follows that $\th(\II)=\II$. Since
$\II=\op_{\ti\G,\ti\G'\text{ in }L}M_{\ti\G,\ti\G'}$ and
$\th(\JJ_{\G,\G'})=\JJ_{\G',\G}$, it follows that
$$\th(M_{\G,\G'})\sub
\JJ_{\G',\G}\cap\op_{\ti\G,\ti\G'\text{ in }L}M_{\ti\G,\ti\G'}=M_{\G',\G}.$$
Since $\th$ is a vector space isomorphism, it follows that
$\th(M_{\G,\G'})=M_{\G',\G}$. Note that $\th(\JJ_{\G,\G'}^+)=\JJ_{\G',\G}^+$;
hence
$$\th(M_{\G,\G'}^+)\sub\JJ_{\G',\G}^+\cap M_{\G',\G}=M_{\G',\G}^+.$$
This forces the equality $\th(M_{\G,\G'}^+)=M_{\G',\G}^+$ (since
$M_{\G',\G}^+$ is a single orbit of $\RR_{>0}$ under scalar multiplication).
This proves 1.2(f). Theorem 1.2 is proved.
\subhead 2.7\endsubhead
After an earlier version of this paper was posted, P. Etingof told me that the
line $M_{\G,\G'}$ in 1.2 is the same as the line associated in
\cite{\EGNO, 3.4.4} to the right $\JJ_{\G\cap\G\i}$-module
$\JJ_{\G\cap\G'{}\i}$ (viewed as a based module over a based ring) that is,
the unique positive line $\fL$ in $\JJ_{\G\cap\G'{}\i}$ such that $\fL$ is a
right $\JJ_{\G\cap\G\i}$-submodule. (The discussion in {\it loc.cit.} concerns
left (instead of right) indecomposable based modules over a fusion ring.)
Indeed, from the definitions we see that $\fL$ must be the same as
$\cl_{\G,\G'}$ in 2.3, hence the same as $M_{\G,\G'}$.
\head 3. Special representations\endhead
\subhead 3.1\endsubhead
When $\ch$ is tensored with $\CC$ (using the ring homomorphism $\ca@>>>\CC$,
$v\m1$), it becomes $\CC[W]$, the group algebra of $W$. (For $w\in W$ we
have $1\ot T_w=w\in\CC[W]$; we denote $1\ot c_w$ again by $c_w$.) We have a
homomorphism of $\CC$-algebras (with $1$) $\ps:\CC[W]@>>>\JJ$ given by
$$\ps(c_x)=\sum_{d\in\cd,z\in W,d\si z}h_{x,d,z}|_{v=1}t_z$$
for all $x\in W$, see \cite{\HEC, 18.9}; this is an isomorphism, see
\cite{\HEC, 20.1}. For example, if $W=\{1,s\}$ is of type $A_1$ we have
$\ps(c_1)=t_1+t_s$, $\ps(c_s)=2t_s$; hence $\ps(1)=t_1+t_s$,
$\ps(s)=-t_1+t_s$.
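The $A_1$ example above can be checked mechanically. In this case $t_1,t_s$ are orthogonal idempotents of $\JJ$ (so $\JJ\cong\CC\T\CC$ componentwise), and the values $\ps(1)=t_1+t_s$, $\ps(s)=-t_1+t_s$ determine $\ps$ by linearity. The following sanity-check sketch (assuming only these multiplication rules) verifies the homomorphism property on the basis $\{1,s\}$:

```python
# Sanity check of the A_1 example: W = {1, s}, and J has basis t_1, t_s
# which are orthogonal idempotents.  An element of C[W] is stored as
# (a, b) = a*1 + b*s; an element of J as (p, q) = p*t_1 + q*t_s.

def mult_CW(x, y):
    # group algebra product for W = Z/2: s*s = 1
    a, b = x; c, d = y
    return (a * c + b * d, a * d + b * c)

def mult_J(x, y):
    # componentwise product: t_1, t_s are orthogonal idempotents
    return (x[0] * y[0], x[1] * y[1])

def psi(x):
    # psi(1) = t_1 + t_s, psi(s) = -t_1 + t_s, extended linearly
    a, b = x
    return (a - b, a + b)

basis = [(1, 0), (0, 1)]               # the group elements 1 and s
# psi is an algebra homomorphism: check on all products of basis elements
for x in basis:
    for y in basis:
        assert psi(mult_CW(x, y)) == mult_J(psi(x), psi(y))
assert psi((1, 0)) == (1, 1)           # psi(1) = t_1 + t_s is the unit of J
```

Since $\ps$ is visibly injective on the two-dimensional space $\CC[W]$, this also illustrates that $\ps$ is an isomorphism in this case.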
For each $E\in\Irr W$ let $E_\iy$ be the simple $\JJ$-module corresponding
to $E$ under $\ps$ and let $\boc_E$ be the unique two-sided cell of $W$ such
that $\JJ_{\boc_E}E\ne0$. (Note that $E=E_\iy$ as $\CC$-vector spaces.) Let
$\Irr^\boc W=\{E\in\Irr W;\boc_E=\boc\}$ and let $a'=a(xw_0)$ for any
$x\in\boc$, where $w_0$ is the longest element of $W$.
For any $k\in\NN$ let $\fS^k$ be the $k$-th symmetric power of the reflection
representation of $W$, viewed as a representation of $W$ in an obvious way.
For $E\in\Irr W$ let $b_E$ be the smallest integer $k\ge0$ such that $E$ is a
constituent of $\fS^k$.
Now for any $E\in\Irr^\boc W$ we have $b_E\ge a'$ and there is a unique
$E\in\Irr^\boc W$ such that $b_E=a'$; this $E$ is denoted by $E^\boc$ and is
called the {\it special representation} associated to $\boc$. (This is a
reformulation of the definition of special representations given in
\cite{\SPEC}.)
\proclaim{Theorem 3.2}In the setup of Theorem 1.2, for any $\G\in L$, we have
$M_\G\cong E^\boc_\iy$ as $\JJ_\boc$-modules.
\endproclaim
We give two proofs; one is contained in 3.3,3.4,3.5. The other is given in
3.4,3.6.
\subhead 3.3\endsubhead
In this subsection we assume that $W$ is of type $A,B$ or $D$.
Let $\G\in L$. For any $\G'\in L$ we set
$$\e_{\G'}=\sum_{z\in\G\cap\G'{}\i}t_z\in\JJ^+_{\G\cap\G'{}\i}.$$
By \cite{\INV, 4.8(b)}, $\{\e_{\G'};\G'\in L\}$ is a $\CC$-basis of the
unique $\JJ$-submodule of $\JJ_\G$ isomorphic to $E^\boc_\iy$.
By the uniqueness part of 1.2(a) this $\JJ$-submodule of $\JJ_\G$ (viewed
as a $\JJ_\boc$-module) must be the same as $M_\G$ in 1.2(a). We see that in
this case, $M_\G$ is isomorphic to $E^\boc_\iy$ and $M_{\G,\G'}$ is the line
spanned by $\e_{\G'}$. In particular, 3.2 holds in our case.
\subhead 3.4\endsubhead
In this subsection we assume that $\boc$ is such that $\Irr^\boc W$ consists
of exactly $2$ irreducible representations. In this case, $W$ is of type $E_7$
(resp. $E_8$) and the $2$ irreducible representations in $\Irr^\boc W$ have
degree $512$ (resp. $4096$). Let $\G\in L$ and let $d\in\cd\cap\G$. The
$\CC$-linear map $r:\JJ_\G@>>>\JJ_\G$ given by left multiplication by
$(-1)^{l(d)}\ps(w_0)$ is in fact $\JJ$-linear (since $w_0$ is central in $W$)
and $r(t_x)=t_{x^*}$ for any $x\in\G$, where $x\m x^*$ is a certain fixed
point free involution of $\G$, see \cite{\ACTION}. Then
$\JJ_\G^1=\{\x\in\JJ_\G;r(\x)=\x\}$ is a simple $\JJ_\boc$-submodule of
$\JJ_\G$ with $\CC$-basis $\{t_x+t_{x^*};x\in\G_1\}$ where $\G_1$ is a set of
representatives for the orbits of $x\m x^*$ on $\G$. Note that, if $x\in\G$,
then $\{x,x^*\}$ is the intersection of $\G$ with the inverse of a left cell
$\G'\in L$; hence $t_x+t_{x^*}\in\JJ^+_{\G\cap\G'{}\i}$. By the uniqueness
part of 1.2(a), we must have $M_\G=\JJ_\G^1$; moreover, for any $\G'\in L$,
$M_{\G,\G'}$ is the line spanned by $\sum_{x\in\G\cap\G'{}\i}t_x$.
Now let $E\in\Irr^\boc W$ be such that $E_\iy=\JJ_\G^1$. From the definitions
we have $\tr((-1)^{l(d)}w_0,E)=|\G_1|=\dim E$. Hence, if $\e=\pm1$ is the
scalar by which $w_0$ acts on $E$, then $(-1)^{l(d)}\e=1$. We have
$l(d)=a(d)\mod2$; hence $\e=(-1)^{a(d)}$. But this equality characterizes
the special representation in $\Irr^\boc W$ (the special representation
satisfies it, the nonspecial representation doesn't satisfy it). We see that
$M_\G=\JJ_\G^1\cong E^\boc_\iy$. In particular, 3.2 holds in our case.
\subhead 3.5\endsubhead
In this subsection we assume that $W$ is of exceptional type, but that $\boc$
is not as in 3.4. In this case, $E^\boc$ is the only representation in
$\Irr^\boc W$ of dimension equal to $|L|$; since $M_\G$ (in 1.2(a)) has
dimension equal to $|L|$, it follows that $M_\G\cong E^\boc_\iy$. In
particular, 3.2 holds in our case. This completes the proof of Theorem 3.2.
\subhead 3.6\endsubhead
In this subsection we give a second proof of Theorem 3.2 assuming that $\boc$
is not as in 3.4. Let $a=a(w)$ for any $w\in\boc$. Let $\G\in L$. Let
$X=\sum_{w\in W}v^{-l(w)}T_w\in\un\ch$. We can view $\CC(v)\ot\JJ_\G$ and
$\CC(v)\ot M_\G$ as $\un\ch$-modules via $\Ps$ in 1.5. By \cite{\INV, 4.6},
for any $x\in\G$ we have
$$Xt_x=v^a\sum_{z\in\boc}t_zt_x\mod\sum_{i<a}v^i\JJ_\G.$$
By an argument as in the proof of 2.2(c), we see that
$\sum_{z\in\boc}t_zt_x\in\JJ_\G^+$. It follows that if $\G'\in L$ and
$\x\in M_{\G,\G'}$ then
$$X\x=v^a\x'\mod\sum_{i<a}v^i\JJ_\G$$
where $\x'\in \JJ_\G^+$. In particular, we have $X\x\ne0$. Thus,
$X(\CC(v)\ot M_\G)\ne0$. Using this and Theorem 4.2 in \cite{\INV} we deduce
that the simple $\un\ch$-module $\CC(v)\ot M_\G$ is a constituent of the
"involution module" $M$ in \cite{\INV, 0.1} (with $\QQ(u)$ replaced by
$\CC(v)$). According to \cite{\EXCEP} if a simple $\un\ch$-module appears in
$\CC(v)\ot\JJ_{\G'}$ for every $\G'\in L$ and it appears in $M$, then that
$\un\ch$-module corresponds to $E^\boc$. We deduce that $M_\G\cong E^\boc_\iy$.
This completes the second proof of Theorem 3.2, assuming that $\boc$ is not as
in 3.4.
\head 4. Equivariant vector bundles\endhead
\subhead 4.1\endsubhead
In this section we fix a reductive, not necessarily connected algebraic group
$G$ over $\CC$ acting on a finite set $X$. Let $G\bsl X$ be the set of
$G$-orbits on $X$. Representations of reductive groups over $\CC$ are always
assumed to be of finite dimension over $\CC$ and algebraic. For $x\in X$ let
$G_x=\{g\in G;gx=x\}$. Now $G$ acts
diagonally on $X\T X$ and we can consider the Grothendieck group $K_G(X\T X)$
of $G$-equivariant complex vector bundles ($G$-eq.v.b.) on $X\T X$. This is
an (associative) ring with $1$ under convolution, denoted by $*$ (see
\cite{\LEA, 2.2}, \cite{\CELLSIV, 10.2}). For a $G$-eq.v.b. $V$ on $X\T X$ we
denote by $V_{x,y}$ the fibre of $V$ at $(x,y)\in X\T X$. Let $B$ be the set
of pairs $(\Om,\r)$ where $\Om$ is a $G$-orbit in $X\T X$ and $\r$ is an
irreducible representation of $G_\Om$ (the isotropy group of a point
$(x,y)\in\Om$). For any $(\Om,\r)\in B$ we denote by $V^{\Om,\r}$ the
$G$-eq.v.b. on $X\T X$ such that $V^{\Om,\r}|_{X\T X-\Om}=0$ and the action of
$G_\Om$ on $V^{\Om,\r}_{x,y}$ is equivalent to $\r$. Now
$\{V^{\Om,\r};(\Om,\r)\in B\}$ is a $\ZZ$-basis of $K_G(X\T X)$. Let
$\KK_G(X\T X)=\CC\ot K_G(X\T X)$, viewed as a $\CC$-algebra.
\subhead 4.2\endsubhead
In this subsection we assume that $G$ is finite. Let $\o,\o'$ be in $G\bsl X$.
Let $V^{\o,\o'}$ be the $G$-eq.v.b. on $X\T X$ such that
$V^{\o,\o'}_{a,b}=\CC[G]$ if $(a,b)\in\o\T\o'$, $V^{\o,\o'}_{a,b}=0$ if
$(a,b)\n\o\T\o'$. (Here $\CC[G]$ is the left regular representation of $G$.)
The $G$-action $g:V^{\o,\o'}_{a,b}@>>>V^{\o,\o'}_{ga,gb}$ is left translation
by $g$ on $\CC[G]$ (if $(a,b)\in\o\T\o'$) and is $0$ if $(a,b)\n\o\T\o'$. We
show:
(a) {\it Let $(\Om,\r)\in B$; we have $\Om\sub\o_1\T\o'_1$ where $\o_1,\o'_1$
are in $G\bsl X$. Then $U':=V^{\Om,\r}*V^{\o,\o'}$ is isomorphic to a direct
sum of copies of the single $G$-eq.v.b. $V^{\o_1,\o'}$. More precisely, if
$\o'_1\ne\o$, we have $U'=0$; if $\o'_1=\o$, we have
$U'=(V^{\o_1,\o'})^{\op(\dim\r|\Om||\o_1|\i)}$.}
\nl
For $(a,b)\in X\T X$ we have
$$U'_{a,b}=\op_{z\in\o;(a,z)\in\Om}V^{\Om,\r}_{a,z}\ot\CC[G]\text{ if }
b\in\o',$$
$$U'_{a,b}=0\text{ if }b\n\o'.$$
Thus the support of $U'$ is contained in $\o_1\T\o'$ and $U'=0$ unless
$\o'_1=\o$. We now assume that $\o'_1=\o$ and $(a,b)\in\o_1\T\o'$. Then
$U'_{a,b}=\op_{z;(a,z)\in\Om}V^{\Om,\r}_{a,z}\ot\CC[G]$. We have
$\dim U'_{a,b}=\dim\r|G||\Om|/|\o_1|$. We show:
(b) {\it as a $G_a\cap G_b$-module, $U'_{a,b}$ is a multiple of the regular
representation.}
\nl
Let $\s_1,\do,\s_k$ be the various $G_a\cap G_b$-orbits contained in $\o'$. We
have $U'_{a,b}=\op_{i=1}^kR_i$, where
$R_i=\op_{z\in\s_i}V^{\Om,\r}_{a,z}\ot\CC[G]$. We pick $z_i\in\s_i$. Now
$$R_i=\ind_{G_a\cap G_b\cap G_{z_i}}^{G_a\cap G_b}(A\ot B)$$
where
$$A=\res_{G_a\cap G_b\cap G_{z_i}}^{G_a\cap G_{z_i}}(V^{\Om,\r}_{a,z_i}),\qua
B=\res_{G_a\cap G_b\cap G_{z_i}}^{G_{z_i}\cap G_b}(\CC[G]).$$
It is enough to show that $R_i$ is a multiple of the regular representation of
$G_a\cap G_b$. Since $R_i$ is induced, it is enough to show that $A\ot B$ is a
multiple of the regular representation of $G_a\cap G_b\cap G_{z_i}$. It is
also enough to show that $B$ is a multiple of the regular representation of
$G_a\cap G_b\cap G_{z_i}$. This follows from the fact that $\CC[G]$ is a
multiple of the regular representation of $G_{z_i}\cap G_b$. This proves (b).
Now (a) follows.
\mpb
Note that in $\KK_G(X\T X)$ we have
$$V^{\o,\o'}=\sum_{(\Om,\r)\in B;\Om\sub\o\T\o'}\dim\r|\Om|V^{\Om,\r}.\tag c$$
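A degenerate illustration of 4.2(a) (not used in the sequel): when $G$ is trivial, $\CC[G]=\CC$, every orbit in $X$ or $X\T X$ is a point, and $\KK_G(X\T X)$ becomes the algebra of $|X|\T|X|$ matrices with convolution equal to matrix multiplication; the basis elements $V^{\Om,\r}$ become the matrix units $E_{x,y}$, and 4.2(a) reduces to the rule $E_{x,y}E_{z,w}=\delta_{y,z}E_{x,w}$, with multiplicity $\dim\r|\Om||\o_1|\i=1$. A minimal sketch:

```python
# Degenerate case of 4.2(a): for trivial G, K_G(X x X) is the ring of
# |X| x |X| matrices, convolution is matrix multiplication, and the
# V^{Omega,rho} are matrix units E_{x,y}.

n = 4                                  # |X|; arbitrary

def unit(x, y):
    # matrix unit E_{x,y}
    return [[1 if (i, j) == (x, y) else 0 for j in range(n)]
            for i in range(n)]

def conv(U, V):
    # convolution = matrix multiplication in this degenerate case
    return [[sum(U[i][k] * V[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

zero = [[0] * n for _ in range(n)]
assert conv(unit(0, 1), unit(1, 3)) == unit(0, 3)   # omega'_1 = omega: one copy of V^{omega_1,omega'}
assert conv(unit(0, 1), unit(2, 3)) == zero         # omega'_1 != omega: the product is 0
```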
\subhead 4.3\endsubhead
We now drop the assumption (in 4.2) that $G$ is finite. Let $\hat\KK_G(X\T X)$
be the $\CC$-vector space consisting of formal (possibly infinite) linear
combinations $\sum_{(\Om,\r)\in B}f_{\Om,\r}V^{\Om,\r}$ where
$f_{\Om,\r}\in\CC$. The left $\KK_G(X\T X)$-module structure on $\KK_G(X\T X)$
given by left multiplication extends naturally to a left $\KK_G(X\T X)$-module
structure on $\hat\KK_G(X\T X)$. If $\o,\o'$ are in $G\bsl X$ then we can
define $V^{\o,\o'}\in\hat\KK_G(X\T X)$ by the sum 4.2(c) (which is now a
possibly infinite sum); we set $\bar V^{\o,\o'}=|\o|\i V^{\o,\o'}$ so that
$$\bar V^{\o,\o'}=
\sum_{(\Om,\r)\in B;\Om\sub\o\T\o'}\dim\r|\Om||\o|\i V^{\Om,\r}.\tag a$$
Now formula 4.2(a) extends to the present case as follows. Let $(\Om,\r)\in B$;
we have $\Om\sub\o_1\T\o'_1$ where $\o_1,\o'_1$ are $G$-orbits in $X$. Then
$$V^{\Om,\r}*V^{\o,\o'}=NV^{\o_1,\o'}\tag b$$
where $N\in\ZZ$ is $0$ if $\o'_1\ne\o$ and $N=\dim\r|\Om||\o_1|\i$ if
$\o'_1=\o$. Hence
$$V^{\Om,\r}*\bar V^{\o,\o'}=N'\bar V^{\o_1,\o'}\tag c$$
where $N'\in\ZZ$ is $0$ if $\o'_1\ne\o$ and $N'=\dim\r|\Om||\o|\i$ if
$\o'_1=\o$.
\mpb
Let $\o'\in G\bsl X$. Let $\ti R_{\o'}$ be the subspace of $\hat\KK_G(X\T X)$
consisting of formal (possibly infinite) linear combinations
$\sum_{(\Om,\r)\in B;pr_2\Om=\o'}f_{\Om,\r}V^{\Om,\r}$ with $f_{\Om,\r}\in\CC$.
Here $pr_2:X\T X@>>>X$ is the second projection. Note that $\ti R_{\o'}$ is a
$\KK_G(X\T X)$-submodule of $\hat\KK_G(X\T X)$.
Let $R_{\o'}$ be the subspace of $\hat\KK_G(X\T X)$ with basis formed by the
elements $\bar V^{\o,\o'}$ for various $\o\in G\bsl X$. Using (c) we see that
$R_{\o'}$ is a (simple) $\KK_G(X\T X)$-submodule of $\hat\KK_G(X\T X)$; we
have $R_{\o'}\sub\ti R_{\o'}$. Using (c) we see also that if
$\o''\in G\bsl X$ then $\bar V^{\o,\o'}\m\bar V^{\o,\o''}$ defines an
isomorphism of $\KK_G(X\T X)$-modules $R_{\o'}@>\si>>R_{\o''}$. Hence the
isomorphism class of the $\KK_G(X\T X)$-module $R_{\o'}$ is independent of the
choice of $\o'$.
\subhead 4.4\endsubhead
We now assume that $G$ is the finite group associated to $\boc$ in \cite{\ORA}
and that $X$ is the finite $G$-set $\op_{\G\in L}G/H_\G$ where $H_\G$ is the
subgroup of $G$ defined in \cite{\LEA, 3.8}.
In this case we have $\hat\KK_G(X\T X)=\KK_G(X\T X)$. By a conjecture in
\cite{\LEA, 3.15}, proved in \cite{\BFO}, there exists an isomorphism of
$\CC$-algebras $\c:\KK_G(X\T X)@>\si>>\JJ_\boc$ carrying the basis
$(V^{\Om,\r})$ of $\KK_G(X\T X)$ onto the basis $\{t_x;x\in\boc\}$ of
$\JJ_\boc$. Under $\c$, the left ideal $\JJ_\G$ of $\JJ_\boc$ (for $\G\in L$)
corresponds to the left ideal $\ti R_{\o'}$ of $\KK_G(X\T X)$ where
$\o'\in G\bsl X$ corresponds to $\G$, and the basis $\{t_x;x\in\G\}$ of
$\JJ_\G$ corresponds to the intersection of the basis $(V^{\Om,\r})$ of
$\KK_G(X\T X)$ with $\ti R_{\o'}$. The basis of $R_{\o'}$ formed by the
elements $\bar V^{\o,\o'}$ corresponds to a family of elements
$\{e_{\G'};\G'\in L\}$ in $\JJ_\G$.
From 4.3(a) we see that $e_{\G'}\in\JJ^+_{\G\cap\G'{}\i}$ for any $\G'\in L$
(in fact the coefficients of the various $t_x$, $x\in\G\cap\G'{}\i$, are in
$\ZZ_{>0}$) and from 4.3(c) we see that for $u\in\boc$ the product
$t_ue_{\G'}$ is a $\ZZ_{\ge0}$ multiple of an element $e_{\G''}$. We see that
the $\CC$-subspace of $\JJ_\G$ spanned by $\{e_{\G'};\G'\in L\}$ satisfies
property ($\he$) in 1.2(a) hence it is equal to $M_\G$. This provides another
proof in our case for the existence part of
1.2(a), with the additional integrality properties in 1.7.
\head 5. Final remarks\endhead
\subhead 5.1\endsubhead
Theorem 1.2 and its proof remain valid if $W$ is replaced by an affine Weyl
group (with $\boc$ assumed to be finite) or by a finite Coxeter group; in the
last case we use the positivity property of $h_{x,y,z}$ established in
\cite{\EW}. In these cases, the simple $\JJ_\boc$-module given by Theorem 1.2
will be called the special $\JJ_\boc$-module.
\subhead 5.2\endsubhead
Assume now that $W$ is an (irreducible) affine Weyl group and that $\boc$ is a
not necessarily finite two-sided cell of $W$. We denote again by $L$ the set
of left cells of $W$ that are contained in $\boc$; this is a finite set. Then
the $\CC$-algebra $\JJ_\boc$ with its basis $\{t_x;x\in\boc\}$ is defined. Let
$\hat\JJ_\boc$ be the set of formal (possibly infinite) linear combinations
$\sum_{u\in\boc}f_ut_u$ where $f_u\in\CC$. This is naturally a left
$\JJ_\boc$-module. For any subset $X$ of $\boc$ let $\hat\JJ_X$ be the set of
all $\sum_{u\in\boc}f_ut_u\in\hat\JJ_\boc$ such that $f_u=0$ for $u\in\boc-X$.
If $\G\in L$, then $\hat\JJ_\G$ is a $\JJ_\boc$-submodule of $\hat\JJ_\boc$.
We have $\hat\JJ_\G=\op_{\G'\in L}\hat\JJ_{\G\cap\G'{}\i}$.
According to a conjecture in \cite{\CELLSIV, 10.5}, proved in \cite{\BFO}, we
can find $G,X$ as in 4.1 and an isomorphism of $\CC$-algebras
$\c:\KK_G(X\T X)@>\si>>\JJ_\boc$ carrying the basis
$\{V^{\Om,\r};(\Om,\r)\in B\}$ of $\KK_G(X\T X)$ onto the basis
$\{t_x;x\in\boc\}$ of $\JJ_\boc$. This extends in an obvious way to an
isomorphism $\hat\c:\hat\KK_G(X\T X)@>\si>>\hat\JJ_\boc$ under which the left
$\KK_G(X\T X)$-module structure on $\hat\KK_G(X\T X)$ corresponds to the left
$\JJ_\boc$-module structure on $\hat\JJ_\boc$. If $\G\in L$, there is a unique
$\o'\in G\bsl X$ (the set of $G$-orbits in $X$) such that $\hat\c$ carries
$R_{\o'}$ (see 4.3) onto a (simple) $\JJ_\boc$-submodule $M_\G$ of
$\hat\JJ_\G$ whose isomorphism class is independent of $\G$; we say that this
is the special $\JJ_\boc$-module. The $\JJ_\boc$-module $M_\G$ admits a basis
$\{e_{\G'};\G'\in L\}$ in which any $t_u$ (with $u\in\boc$) acts by a matrix
with all entries in $\ZZ_{\ge0}$, namely the basis corresponding to the basis
$\{\bar V^{\o,\o'};\o\in G\bsl X\}$ of $R_{\o'}$.
\widestnumber\key{EGNO}
\Refs
\ref\key\BFO\by R.Bezrukavnikov, M.Finkelberg and V.Ostrik\paper On tensor
categories attached to cells in affine Weyl groups, III\jour Israel J.Math.
\vol170\yr2009\pages207-234\endref
\ref\key\EW\by B.Elias and G.Williamson\paper The Hodge theory of Soergel
bimodules\jour Ann. Math.\vol180\yr2014\pages1089-1136\endref
\ref\key\EGNO\by P.Etingof, S.Gelaki, D.Nikshych and V.Ostrik\book Tensor
categories\bookinfo Math. Surveys and Monographs\vol205\publ Amer. Math. Soc.
\yr2015\endref
\ref\key\KL\by D.Kazhdan and G.Lusztig\paper Representations of Coxeter groups
and Hecke algebras\jour Inv. Math.\vol53\yr1979\pages165-184\endref
\ref\key\KLL\by D.Kazhdan and G.Lusztig\paper Schubert varieties and
Poincar\'e duality\jour Proc. Symp. Pure Math.\vol36\publ Amer.Math.Soc.
\yr1980\pages185-203\endref
\ref\key\KM\by T.Kildetoft and V.Mazorchuk\paper Special modules over
positively based algebras\lb\jour arxiv:1601.06975\endref
\ref\key\SPEC\by G.Lusztig\paper A class of irreducible representations of a
Weyl group\jour Proc. Kon. Nederl. Akad.(A)\vol82\yr1979\pages323-335\endref
\ref\key\ORA\by G.Lusztig\book Characters of reductive groups over a finite
field \bookinfo Ann.Math.Studies\vol107\publ Princeton U.Press\yr1984\endref
\ref\key\LEA\by G.Lusztig\paper Leading coefficients of character values of
Hecke algebras\jour Proc. Symp. Pure Math.\vol47\yr1987\pages235-262\endref
\ref\key\CELLSIV\by G.Lusztig\paper Cells in affine Weyl groups, IV\jour
J. Fac. Sci. Tokyo U.(IA)\vol36\yr1989\pages297-328\endref
\ref\key\POSI\by G.Lusztig\paper Total positivity in reductive groups\inbook
Lie theory and geometry, in honor of Bertram Kostant, ed. J.-L.Brylinski
et.al.\bookinfo Progr.in Math. 123\yr1994\publ Birkh\"auser\publaddr Boston,
Basel, Berlin\pages531-568\endref
\ref\key\HEC\by G.Lusztig\book Hecke algebras with unequal parameters\bookinfo
CRM Monograph Ser.18\publ Amer. Math. Soc.\yr2003\endref
\ref\key\ACTION\by G.Lusztig\paper Action of longest element on a Hecke
algebra cell module\jour Pacific. J. Math.\vol279\yr2015\pages383-396\endref
\ref\key\EXCEP\by G. Lusztig\paper Exceptional representations of Weyl groups
\jour arxiv:1405.6686 \toappear J.Alg.\endref
\ref\key\INV\by G.Lusztig\paper An involution based left ideal in the Hecke
algebra\jour arxiv:1507.02263\endref
\ref\key\PERR\by O.Perron\paper Zur Theorie der Matrices\jour Math.Annalen\vol
64\yr1907\pages248-263\endref
\endRefs
\enddocument
\section{Introduction}\label{sec:introduction}
Labyrinth fractals are fractal dendrites in the plane that can also be viewed as a special family of Sierpi\'nski carpets. Such carpets are studied not only by mathematicians, but also by physicists, e.g., as mathematical models for porous materials, rocks, or disordered media \cite{Tarafdar_modelporstructrepeatedSC2001, AnhHoffmanSeegerTarafdar2005}. The mathematical objects called \emph{labyrinth fractals} were introduced and studied by Cristea and Steinsky \cite{laby_4x4, laby_oigemoan, mixlaby}. On the other hand, recent research in physics \cite{PotapovGrachev2012, GrachevPotapovGerman2013, PotapovGermanGrachev2013, PotapovZhang2016} uses objects called fractal labyrinths, which are strongly related to labyrinth fractals, as well as the labyrinth fractals mentioned above; in particular, \cite{GrachevPotapovGerman2013} adopts the notation and mathematical framework introduced by Cristea and Steinsky \cite{laby_4x4, laby_oigemoan}. These fractal labyrinths and labyrinth fractals appear in physics in several different contexts. To the best of our knowledge, they first occurred in the study of anomalous diffusion and particle dynamics \cite{PotapovGrachev2012}. In \cite{GrachevPotapovGerman2013} they were used as a tool for processing and analysing planar nanostructures, while in \cite{PotapovGermanGrachev2013} the authors applied them to the fractal reconstruction of complex images, signals, and radar backgrounds. The very recent article \cite{PotapovZhang2016} shows how the wide simulation abilities of labyrinth fractals were used in order to create a software tool that generates the shape of ultra-wide band fractal antennas, based on the geometry of labyrinth fractals as introduced in \cite{laby_4x4}. 
Fractal antennas are already known to have applications in, among other fields, medicine and cellular communication on base stations and mobile terminals; they have been of interest to scientists in physics and electronics for the last decade and remain a subject of ongoing research.
In nature and in technology, objects that can be described by prefractals of labyrinth fractals occur in various situations: on the one hand, the system of blood or lymphatic vessels in the body of humans or animals, the leaf veins of plants, river systems, dendrites in the brain, and electrical discharges (e.g., lightning); on the other hand, irrigation systems in agriculture, systems for the distribution of resources or information, and communication or transport networks. In the context of physics, a fractal labyrinth is defined \cite{PotapovGrachev2012} as ``a connected topological structure with fractal dimension greater than $1$ and with the scaling nature of the conducting channels''.
Thus the labyrinth fractals defined by Cristea and Steinsky \cite{laby_4x4, laby_oigemoan, mixlaby} provide a
broad class of the fractal labyrinths described and used in physics and other
applied sciences, one which, through its transparent construction method, is
amenable to rigorous mathematical treatment.
The results found for these mathematical objects have both potential and actual
applications in, and implications for, fields where their finite, ``real''
counterparts occur, such as physics, materials science, or the life sciences.
\emph{Mixed labyrinth fractals} were introduced and studied in more recent work by Cristea and Steinsky \cite{mixlaby}. They are a generalisation of the self-similar labyrinth fractals introduced and studied by the same authors in previous work \cite{laby_4x4, laby_oigemoan}. In the case of mixed labyrinth fractals, more than one pattern is used in order to construct the set, as described in Section \ref{sec:Definition}. It has been proven \cite{mixlaby} that several topological properties are preserved when passing from the self-similar case to the generalised case of mixed labyrinth sets and mixed labyrinth fractals: mixed labyrinth fractals are also dendrites in the unit square and have exactly one exit on each side of the unit square. In the self-similar case it was shown that special patterns, called blocked patterns, generate fractals that are dendrites with the property that the arc between any two points in the fractal has infinite length.
\par
In the present article we show that in the case of mixed labyrinth fractals the situation is much more complex: on the one hand, one can find sequences of blocked labyrinth patterns that generate labyrinth fractals in which the arc between any two points has finite length, and on the other hand, one can find sequences of blocked labyrinth patterns whose resulting labyrinth fractal has the property that the arc between any two points has infinite length. Moreover, we give an example of the construction of mixed labyrinth fractals in which some arcs have finite length and others have infinite length, analogous to the case when
self-similar labyrinth fractals are generated by a pattern that is horizontally but not vertically, or vertically but not horizontally blocked (see, e.g., \cite{laby_4x4}). Finally, we state a conjecture on the lengths of arcs in mixed labyrinth fractals, for future research.
\par
The results in this article provide ideas and methods for constructing fractal dendrites with desired properties regarding the lengths of arcs between points in the fractal, which could serve as models, e.g., in the context of particle transport, nanostructures, or image processing. We remark here that although there are several well known examples of continuous curves with infinite length, like the Peano curve \cite{Peano}, the Hilbert curve \cite{Hilbert}, or the von Koch curve \cite{vonKoch1904, vonKoch1906}, not all of them have the property that the arc between any two points of the curve has infinite length, as is the case for the arcs in some labyrinth fractals. Moreover, we note that random Koch curves, i.e., objects that are related, e.g., to arcs between certain points (exit points) in labyrinth fractals, are studied with respect to random walks by theoretical physicists in connection with diffusion processes, e.g., \cite{SeegerHoffmannEssex2009_randomKoch}. In this context we also mention diffusion processes of water in biological tissues. There are many more available examples that support the idea that labyrinth fractals, whether mixed or self-similar, are mathematical objects worth understanding with respect to their topological and geometrical properties, with benefits both in mathematics and in other sciences.
\section{Labyrinth fractals}\label{sec:Definition}
One way to construct labyrinth fractals is with the help of \emph{labyrinth patterns}.
Let $x,y,q\in [0,1]$ such that $Q=[x,x+q]\times [y,y+q]\subseteq [0,1]\times [0,1]$.
For any point $(z_x,z_y)\in[0,1]\times [0,1]$ we define the function
$P_Q(z_x,z_y)=(q z_x+x,q z_y+y)$.
For any integer $m\ge 1$ let $S_{i,j,m}=\{(x,y)\mid \frac{i}{m}\le x \le \frac{i+1}{m} \mbox{ and } \frac{j}{m}\le y \le \frac{j+1}{m} \}$ and
${\cal S}_m=\{S_{i,j,m}\mid 0\le i\le m-1 \mbox{ and } 0\le j\le m-1 \}$.
Any nonempty ${\cal A} \subseteq {\cal S}_m$ is called an $m$-\emph{pattern} and $m$ its \emph{width}. Let $\{{\cal A}_k\}_{k=1}^{\infty}$
be a sequence of non-empty patterns and $\{m_k\}_{k=1}^{\infty}$ be the corresponding
\emph{width-sequence}, i.e., for all $k\ge 1$ we have
${\cal A}_k\subseteq {\cal S}_{m_k}$.
We let $m(n)=\prod_{k=1}^n m_k$, for all $n \ge 1$.
Let ${\cal W}_1={\cal A}_{1}$; we call ${\cal W}_1$ the
\emph{set of white squares of level $1$}, and
define ${\cal B}_1={\cal S}_{m_1} \setminus {\cal W}_1$
as the \emph{set of black squares of level $1$}.
For $n\ge 2$ the \emph{set of white squares of level $n$} is defined as
\begin{equation} \label{eq:W_n}
{\cal W}_n=\bigcup_{W\in {\cal A}_{n}, W_{n-1}\in {\cal W}_{n-1}}\{ P_{W_{n-1}}(W)\}.
\end{equation}
\noindent We remark that ${\cal W}_n\subset {\cal S}_{m(n)}$, and we define the \emph{set of black squares of level $n$} by ${\cal B}_n={\cal S}_{m(n)} \setminus {\cal W}_n$. For $n\ge 1$, we define $L_n=\bigcup_{W\in {\cal W}_n} W$.
Thus, $\{L_n\}_{n=1}^{\infty}$ is a monotonically decreasing sequence of compact sets, and $L_{\infty}=\bigcap_{n=1}^{\infty}L_n$ is the \emph{limit set defined by the sequence of patterns
$\{{\cal A}_k\}_{k=1}^{\infty}.$ }
Figures~\ref{fig:A1A2A3}, \ref{fig:W2}, and \ref{fig:pre_dendrite_general} show examples of labyrinth patterns and illustrate the first three steps of the
construction of a mixed labyrinth set.
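The recursion \eqref{eq:W_n} is straightforward to carry out by computer. The following sketch is an illustration only; the encoding of a pattern as a set of integer cell coordinates $(i,j)$ and the function name are our own, not taken from the literature. It computes the white squares of level $n$ by substituting a scaled copy of each pattern into every white square of the previous level:

```python
def white_squares(patterns):
    """patterns: list of pairs (m_k, cells), where cells is the set of
    (i, j) grid positions (0 <= i, j < m_k) of the white squares of the
    pattern A_k.  Returns (W_n, m(n)): the white squares of level n as
    cells of the m(n) x m(n) grid, together with the grid width m(n)."""
    m, white = patterns[0]
    white = set(white)
    for m_k, pattern in patterns[1:]:
        # Substitute a copy of A_k into every white square of the previous
        # level: P_{S_{i,j,m}} maps S_{p,q,m_k} to S_{i*m_k+p, j*m_k+q, m*m_k}.
        white = {(i * m_k + p, j * m_k + q)
                 for (i, j) in white for (p, q) in pattern}
        m *= m_k
    return white, m
```

For instance, composing a plus-shaped $3$-pattern with itself yields $5^2=25$ white squares of level $2$ in a $9\times 9$ grid, in accordance with $|{\cal W}_n|=\prod_{k=1}^n |{\cal A}_k|$.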
We define, for ${\cal A}\subseteq {{\cal S}_m}$, $\mathcal{G}({\cal A})\equiv (\mathcal{V}(\mathcal{G}({\cal A})),\mathcal{E}(\mathcal{G}({\cal A})))$ to be \emph{the graph of ${\cal A}$}, i.e., the graph whose vertices $\mathcal{V}(\mathcal{G}({\cal A}))$ are the white squares in ${\cal A}$, i.e., $\mathcal{V}(\mathcal{G}({\cal A}))={\cal A}$, and whose edges $\mathcal{E}(\mathcal{G}({\cal A}))$ are the unordered pairs of white squares that have a common side. The \emph{top row} in ${\cal A}$ is the set of all white squares in $\{S_{i,m-1,m}\mid 0\le i\le m-1 \}$. The bottom row, left column, and right column in ${\cal A}$ are defined analogously. A \emph{top exit} in ${\cal A}$ is a white square in the top row, such that there is a white square in the same column in the bottom row. A \emph{bottom exit} in ${\cal A}$ is defined analogously. A \emph{left exit} in ${\cal A}$ is a white square in the left column, such that there is a white square in the same row in the right column. A \emph{right exit} in ${\cal A}$ is defined analogously.
One can of course define the above notions in the special case ${\cal A}={\cal W}_{n}$. In this case the top row (in ${\cal W}_{n}$) is called the
\emph{top row of level} $n$. The \emph{bottom row, left column, and right column of level} $n$ are defined analogously.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig-a1a2a3}
\caption{Three labyrinth patterns, ${\cal A}_1$ (a $4$-pattern), ${\cal A}_2$ (a $5$-pattern), and ${\cal A}_3$ (a $4$-pattern)}
\label{fig:A1A2A3}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig-w2}
\caption{The set ${\cal W}_2$, constructed based on the above patterns ${\cal A}_1$ and ${\cal A}_2$, that
can also be viewed as a $20$-pattern} \label{fig:W2}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig-a1a2a3pref}
\caption{A prefractal of the mixed labyrinth fractal defined by a sequence $\{{\cal A}_k\}$ where the first three patterns are ${\cal A}_1, {\cal A}_2, {\cal A}_3$, respectively, shown in Figure \ref{fig:A1A2A3}}
\label{fig:pre_dendrite_general}
\end{center}
\end{figure}
A non-empty $m$-pattern ${\cal A} \subseteq {{\cal S}_m}$, $m \ge 3$, is called an $m\times m$-\emph{labyrinth pattern} (in short, \emph{labyrinth pattern}) if ${\cal A}$ satisfies Property~\ref{prop1}, Property~\ref{prop2}, and Property~\ref{prop3}.
\begin{property}\label{prop1}
$\mathcal{G}({\cal A})$ is a tree.
\end{property}
\begin{property}\label{prop2}
There is exactly one top exit in ${\cal A}$, exactly one bottom exit, exactly one left exit, and exactly one right exit.
\end{property}
\begin{property}\label{prop3}
If there is a white square in ${\cal A}$ at a corner of ${\cal A}$, then there is no white square in ${\cal A}$ at the diagonally opposite corner of ${\cal A}$.
\end{property}
Let $\{{\cal A}_k\}_{k=1}^{\infty}$ be a sequence of non-empty patterns with $m_k\ge 3$, let $n\ge 1$, and let ${\cal W}_n$ be the corresponding set of white squares of level $n$. We call ${\cal W}_{n}$ an $m(n)\times m(n)$-\emph{mixed labyrinth set} (in short, \emph{labyrinth set}) if ${\cal A} ={\cal W}_{n}$ satisfies Property~\ref{prop1}, Property~\ref{prop2}, and Property~\ref{prop3}.
It was shown \cite{mixlaby} that if all patterns in the sequence $\{{\cal A}_k\}_{k=1}^{\infty}$ are labyrinth patterns, then ${\cal W}_n$ is a labyrinth set, for any $n\ge 1$.
The limit set $L_{\infty}$ defined by a sequence $\{{\cal A}_k\}_{k=1}^{\infty}$ of labyrinth patterns is called a \emph{mixed labyrinth fractal}.
One can immediately see that in the special case when all patterns in the sequence $\{{\cal A}_k\}_{k=1}^{\infty}$ are identical, $L_{\infty}$ is a self-similar labyrinth fractal, as defined in \cite{laby_4x4, laby_oigemoan}.
In the following we introduce some more notation. For $n\ge 1$ and $W_1,W_2 \in \mathcal{V}(\mathcal{G}({\cal W}_{n}))$ we denote by $p_n(W_1,W_2)$ the path in $\mathcal{G}({\cal W}_{n})$ that connects $W_1$ and $W_2$.
A path in $\mathcal{G}({\cal W}_{n})$ is called $\A$\emph{-path} if it leads from the top to
the bottom exit of ${\cal W}_n$.
The $\B,\C,\D,\E$, and $\F$\emph{-paths} lead from left to right, top to right, right to bottom, bottom to left, and left to
top exit, respectively.
Within a path in $\mathcal{G}({\cal W}_{n})$
each white square in the path is denoted
according to its neighbours within the path:
if it has a top and a bottom neighbour it is called $\A$-\emph{square}
(with respect to the path), and
it is called $\B,\C,\D,\E$, and $\F$-\emph{square} if its neighbours are at left-right,
top-right,
bottom-right, left-bottom, and left-top, respectively.
If the square under consideration is an exit, it is regarded as having an additional neighbour outside the unit square, on the side of the exit. A bottom exit, e.g., is regarded as having a neighbour below it, outside the unit square, in addition to its neighbour inside the unit square.
For more details on labyrinth sets and mixed labyrinth fractals and for results on topological properties of mixed labyrinth fractals we refer to the paper \cite{mixlaby}.
\section{Existing results on arcs in mixed labyrinth fractals}\label{sec:old}
In this section we list some of the results obtained for mixed labyrinth fractals \cite{mixlaby} that are useful in the context of this paper. We use the notation introduced in the previous section.
\begin{lemma}\label{lemma:Construction}(Arc Construction) Let $a,b\in L_{\infty}$, where $a\neq b$. For all $n \ge 1$, there are $W_n(a),W_n(b)\in \mathcal{V}(\mathcal{G}({\cal W}_{n}))$ such that
\begin{itemize}
\item[(a)]$W_1(a)\supseteq W_2(a)\supseteq\ldots$,
\item[(b)]$W_1(b)\supseteq W_2(b)\supseteq\ldots$,
\item[(c)]$\{a\}=\bigcap_{n=1}^{\infty}W_n(a)$,
\item[(d)]$\{b\}=\bigcap_{n=1}^{\infty}W_n(b)$,
\item[(e)]The set $\bigcap_{n=1}^{\infty}\left(\bigcup_{W\in p_n(W_n(a),W_n(b))} W\right)$ is an arc between $a$ and $b$.
\end{itemize}
\end{lemma}
We recall from \cite{laby_4x4} that the squares $W_n(a)$, $W_n(b)$, $n\ge 1$, in the above lemma are chosen in the following way: let $W(a)$ be the set of all white squares in $\bigcup_{n=1}^{\infty} {\cal W}_n $ that contain $a$. Let $W_1(a)$ be a white square in ${\cal G} ({\cal W}_1)$ that contains infinitely many white squares of $W(a)$ as subsets. For $n\ge 2$, we define $W_n(a)$ as a white square in ${\cal G}({\cal W}_n)$ such that $W_n(a)\subseteq W_{n-1}(a) $ and $W_n(a)$ contains infinitely many squares of $W(a)$ as subsets. $W_n(b)$, for $n\ge 1$, is defined analogously.
\begin{proposition}\label{lemma:m^n} Let $n,k\ge 1$, $\{W_1,\ldots,W_k\}$ be a (shortest) path in
$\mathcal{G}({\cal W}_{n})$ between the exits $W_1$ and $W_k$, $K_0=W_1 \cap \fr([0,1]\times[0,1])$, $K_k=W_k \cap \fr([0,1]\times[0,1])$, where $\fr(\cdot)$ denotes the boundary of a set, and $c$ be a curve in $L_n$
from a point of $K_0$ to a point of $K_k$. The length of $c$ is at least $(k-1)/(2\cdot m(n))$.
\end{proposition}
Let $T_n\in {\cal W}_{n}$ be the top exit of ${\cal W}_{n}$, for $n\ge 1$. The \emph{top exit of}
$L_{\infty}$ is $\bigcap_{n=1}^{\infty}T_n$. The other exits of $L_{\infty}$ are defined analogously. We note that
Property~\ref{prop2} yields that $(x,1),(x,0)\in L_{\infty}$ if and only if $(x,1)$ is the top exit of $L_{\infty}$ and $(x,0)$
is the bottom exit of $L_{\infty}$. For the left and the right exit the analogous statement holds.
Let $n\ge 1$, $W\in {\cal W}_{n}$, and $t$ be the intersection of $L_{\infty}$ with the top edge of $W$.
Then we call $t$ the \emph{top exit} of $W$. Analogously we define the \emph{bottom exit}, the \emph{left exit}, and
the \emph{right exit} of $W$.
We note that the uniqueness of each of these four exits is provided by the uniqueness of the four exits of a
mixed labyrinth fractal and by the fact that each
such set of the form
$L_{\infty} \cap W$, where $W\in {\cal W}_{n}$, is a mixed labyrinth fractal scaled by the factor $m(n)$.
We note that we have now defined exits for
three different types of objects, i.e., for ${\cal W}_{n}$ (and ${\cal A}_{k}$), for $L_{\infty}$,
and for squares in ${\cal W}_{n}$.
\begin{proposition}\label{lemma:ArcSimilarity1} Let $e_1,e_2$ be two exits in $L_{\infty}$, and $W_n(e_1),
W_n(e_2)$ be the exits in $\mathcal{G}({\cal W}_{n})$ of the same type as $e_1$ and $e_2$, respectively,
for some $n\ge 1.$
If $a$ is the arc that connects $e_1$ and $e_2$ in $L_{\infty}$, $p$ is the path in
$\mathcal{G}({\cal W}_{n})$ from $W_n(e_1)$ to $W_n(e_2)$, and $W\in {\cal W}_{n}$ is a $\A$-square
with respect to $p$, then $W\cap a$ is an arc in $L_{\infty}$ between the top and the bottom exit of $W$.
If $W$ is another type of square, the corresponding analogous statement
holds.
\end{proposition}
\noindent For the corresponding results, in detail, for self-similar fractals we refer to \cite{laby_oigemoan}.
\section{Blocked labyrinth patterns, blocked labyrinth sets and a recent conjecture}
\label{sec:Blocked}
We recall that an $m\times m$-labyrinth pattern ${\cal A}$ is called \emph{horizontally blocked} if the row (of squares)
from the left to the right exit contains at least one black square. It is called \emph{vertically blocked} if the
column (of squares) from the top to the bottom exit contains at least one black square. Analogously we define for any
$n \ge 1$ a horizontally or vertically blocked labyrinth set of level $n$.
As examples, the labyrinth patterns shown in Figures \ref{fig:A1A2A3} and \ref{fig:complement}
are horizontally and vertically blocked, while those in Figure \ref{fig:counterexample_1} are not blocked.
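Whether a given pattern is blocked can be tested mechanically. The following sketch is an illustration only; the encoding of a pattern as the set of $(i,j)$ coordinates (column $i$, row $j$) of its white squares and the function names are our own. It uses that, by Property~\ref{prop2}, the unique left and right exits lie in a common row, so horizontal blocking amounts to that row containing a black square:

```python
def is_horizontally_blocked(cells, m):
    """cells: set of (i, j) positions (column i, row j) of the white
    squares of an m-pattern.  The unique left and right exits lie in a
    common row; the pattern is horizontally blocked if that row, from
    the left to the right exit, contains at least one black square."""
    rows = [j for j in range(m) if (0, j) in cells and (m - 1, j) in cells]
    assert len(rows) == 1, "expected exactly one left/right exit (Property 2)"
    j = rows[0]
    return any((i, j) not in cells for i in range(m))

def is_vertically_blocked(cells, m):
    # Transposing the pattern swaps rows and columns, hence the two notions.
    return is_horizontally_blocked({(j, i) for (i, j) in cells}, m)
```

Note that these tests presuppose a pattern satisfying Property~\ref{prop2}; they do not check the tree property or the corner property.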
\begin{figure}
\begin{center}
\includegraphics{fig-block6x6}
\end{center}
\caption{A horizontally and vertically blocked ($6 \times 6$-labyrinth) pattern}\label{fig:complement}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig-notblocked}
\caption{Examples of labyrinth patterns that are neither horizontally nor vertically blocked}
\label{fig:counterexample_1}
\end{center}
\end{figure}
In the self-similar case the following facts were proven \cite[Theorem 3.18]{laby_oigemoan}:
\begin{theorem} Let $L_{\infty}$ be the (self-similar) labyrinth fractal generated by a horizontally and
vertically blocked $m\times m$-labyrinth pattern.
Between any two points in $L_{\infty}$ there is a unique arc ${a}$.
The length of ${a}$ is infinite.
The set of all points, at which no tangent to ${a}$ exists,
is dense in ${a}$.
\end{theorem}
For the case of mixed labyrinth fractals, Cristea and Steinsky \cite{mixlaby} recently formulated the following conjecture.
\begin{conjecture}\label{conj:main result}
Let $\{{\cal A}_k\}_{k=1}^{\infty}$ be a sequence of both horizontally and vertically blocked labyrinth patterns,
$m_k\ge 4$.
For any two points in the limit set $L_{\infty}$ the length of the arc $a\subset L_{\infty}$ that connects them is infinite and the set of all points, where no tangent to $a$ exists, is dense in $a$.
\end{conjecture}
In this article we solve the arc length problem posed by the above conjecture by showing that, depending on the choice of the both horizontally and vertically blocked labyrinth patterns in the sequence $\{{\cal A}_k\}_{k=1}^{\infty}$, both situations can occur: the arc between any two points of the fractal has finite length, or the arc between any two points of the fractal has infinite length.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig-spcross}
\caption{An example: the special cross pattern ${\cal A}_1$ with $m_1=11$}
\label{fig:ex:A_1_finite_arc}
\end{center}
\end{figure}
\vspace{0.3cm}
\noindent {\bf Example 1.}
Let $\{{\cal A}_k\}_{k=1}^{\infty}$ be a sequence of (both horizontally and vertically) blocked labyrinth patterns, $m_k\ge 11$, with $m_k= 2 a_k+1$, $a_k\ge 5$.
We consider a sequence of patterns that are both vertically and horizontally blocked and have a
``cross shape'' like ${\cal A}_1$ in Figure
\ref{fig:ex:A_1_finite_arc}, i.e., the pattern looks like a ``cross'' centered in the ``central square'' of the
pattern (here coloured in light grey), and each ``arm'' of the cross is ``blocked'', such that in order to get from the ``center'' of the cross
to any of the four exits of the pattern one has to make a detour around a black square that lies in
the same row or column as the respective exit and the mentioned ``central square'' of the pattern. More precisely, we position the four black squares between the central square
and the exits of ${\cal A}_k$ in the columns (rows) $(a_k+1)/2$ and $(3a_k+3)/2$, if $a_k$ is odd,
and in the columns (rows)
$(a_k+2)/2$ and $(3a_k+2)/2$,
if $a_k$ is even. We call these patterns
\emph{special cross patterns}.
One can immediately see that the ``central square'' of such a special cross pattern, where the four ``arms'' of
the cross meet, changes its type depending on the path in ${\cal{G}}({\cal A}_1)$ that we consider between
two exits of the pattern ${\cal A}_1$: in the $\A$-path in the pattern it is a $\A$-square, and in the
$y$-path it is a $y$-square, for any $y\in \{\B,\C,\D,\E,\F \}$.
\\[0.3cm]
We recall that the path matrix of a labyrinth set or a labyrinth pattern ${\cal A}$ is a
$6\times 6$-matrix $M$ such that the element in row $x$ and column $y$ is the number of
$y$-squares in the $x$-path in $\mathcal{G}({\cal A})$. It was proven \cite[Proposition 1]{mixlaby} that, for any sequence of labyrinth patterns $\{{\cal A}_k\}_{k\ge 1}$ with corresponding sequence of path matrices $\{M_k\}_{k\ge 1}$, and for any integer $n\ge 1$, the matrix $M(n):= \prod_{k=1}^n M_k$ is the path matrix of the mixed labyrinth set ${\cal W}_n$ (of level $n$), i.e., the sum of the entries in any row of $M(n)$ gives the length of the path between the corresponding two exits in $\mathcal{G}({\cal W}_n)$. For more details and properties of
path matrices we refer to the papers \cite{laby_4x4, laby_oigemoan, mixlaby}.\\[0.2cm]
With the help of Figures \ref{fig:ex:A_1_finite_arc} and \ref{fig:sequence_special_cross_patterns} one can easily check that for this special sequence of labyrinth patterns $\{ {\cal A}_k\}_{k \ge 1}$, the path matrix of the pattern ${\cal A}_k$ is
\[
M_k=\left(
\begin{array}{rrllll}
2a_k-3 & 0 & 2 & 2 & 2 & 2\\
0 & 2a_k-3 & 2 & 2 & 2 & 2\\
a_k-2 & a_k-2 & 3 & 2 & 2 & 2\\
a_k-2 & a_k-2 & 2 & 3 & 2 & 2\\
a_k-2 & a_k-2 & 2 & 2 & 3 & 2\\
a_k-2 & a_k-2 & 2 & 2 & 2 & 3\\
\end{array}\right)
,
~~\mbox{for}~~ k\ge 1~~ \mbox{and}~~ m_k=2a_k+1.
\]
Thus the length of the path in $\mathcal{G}({\cal A}_k)$ between any two exits is in this case
exactly $2a_k+5$.
From this we obtain that the length of any path between two exits in ${\cal W}_n$ is
$\prod_{k=1}^n(2a_k+5)=\prod_{k=1}^n(m_k+4)$.
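These path lengths can also be checked numerically. The sketch below (the function names are our own; the matrix entries are exactly those of $M_k$ above) multiplies the path matrices of special cross patterns and lets one verify that every row of $M(n)$ sums to $\prod_{k=1}^n(2a_k+5)$:

```python
def cross_path_matrix(a):
    """Path matrix M_k of the special cross pattern of width m_k = 2*a + 1;
    rows and columns are indexed by the six path/square types."""
    M = [[2] * 6 for _ in range(6)]
    M[0] = [2 * a - 3, 0, 2, 2, 2, 2]
    M[1] = [0, 2 * a - 3, 2, 2, 2, 2]
    for r in range(2, 6):
        M[r][0] = M[r][1] = a - 2
        M[r][r] = 3
    return M

def mat_mul(A, B):
    """Product of two 6x6 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(6)) for j in range(6)]
            for i in range(6)]

def level_n_matrix(a_seq):
    """M(n) = M_1 * M_2 * ... * M_n for a sequence a_1, ..., a_n."""
    M = [[int(i == j) for j in range(6)] for i in range(6)]
    for a in a_seq:
        M = mat_mul(M, cross_path_matrix(a))
    return M
```

The computation rests on the fact that a product of matrices with constant row sums again has constant row sums, equal to the product of the individual row sums.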
\begin{figure}
\begin{center}
\includegraphics[width=0.18\textwidth]{fig-spcross5}
\includegraphics[width=0.18\textwidth]{fig-spcross6}
\includegraphics[width=0.18\textwidth]{fig-spcross7}
\includegraphics[width=0.18\textwidth]{fig-spcross8}
\includegraphics[width=0.18\textwidth]{fig-spcross9}
\end{center}
\caption{Example: five consecutive elements of a sequence of special cross patterns, where $a_k=k+4$, here for $k=1,\dots,5$.}
\label{fig:sequence_special_cross_patterns}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.5]{fig-cross5-6-7.eps}
\caption{A mixed labyrinth fractal of level 2 that is a prefractal of the mixed labyrinth fractal defined by a sequence $\{{\cal A}_k\}$ of special cross patterns as shown in Figure \ref{fig:sequence_special_cross_patterns}}
\label{fig:pre_cross5-6-7}
\end{center}
\end{figure}
As a next step, we introduce, for $n=1,2,\dots$, the curves $\gamma_n^q$, for
$q \in {\cal E}=\{\A,\B,\C,\D,\E,\F \}$. Here, $q \in{\cal E}$ indicates which exits are connected by the curve,
e.g., if $q= \A$ then $\gamma_n^q$ is a curve that connects the top and the bottom exit of $L_{\infty}$,
if $q = \B$ then $\gamma_n^q$ is a curve that connects the left and the right exit of $L_{\infty}$, and so on.
In the sequel we define these simple curves. Let $n \ge 1$. For $q= \A$ we construct the curve
$\gamma_n^q$ in the following way. Let $W\in {\cal W}_n$ be, e.g., a square of type $\A$ in the path of type $q$.
Then, we define the restriction $ \gamma_n^q|W$ to be the vertical line segment that connects the
midpoints of the top and of the bottom edge of $W$. We proceed analogously in the case when $W$
is a square of type $\B,\C,\D,\E,\F $, in each case $ \gamma_n^q|W$ is the union of two line segments
(both horizontal, or one horizontal and one vertical)
that both go through the center of $W$ and the midpoint of some edge of $W$, such that the sum of their lengths
is $\frac{1}{m(n)}$. Each square of the path thus contributes exactly $\frac{1}{m(n)}$ to the length of the curve, and since in this example the paths between any two exits have the same number of squares, the length of $\gamma_n^q$ does not depend on $q$. Writing $\gamma_n$ for short, we immediately get, for $n\ge 1$:
\begin{equation}\label{length_gamma_n}
\ell(\gamma_n)=\prod_{k=1}^n \frac{m_k+4}{m_k}=\prod_{k=1}^n\left( 1+\frac{4}{m_k}\right).
\end{equation}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\textwidth]{fig-path}
\caption{This picture shows a fragment of $\gamma _n ^q$ and (dashed) of $\gamma _{n-1} ^q$. Here $q \in {\cal E}$ indicates that the arc connects the left and the right exit of a square $W \in {\cal W}_n$, the leftmost and the rightmost dotted points in the picture. }
\label{fig:consecutive_paths}
\end{center}
\end{figure}
Now we study the sequence $\{\ell(\gamma_n) \}_{n\ge 1}$. From \eqref{length_gamma_n} we easily see that
$\{\ell(\gamma_n) \}_{n\ge 1}$ is a strictly increasing sequence, thus $\lim_{n\to \infty} \ell(\gamma_n)=
\sup_{n=1,2,\dots} \ell(\gamma_n).$\\
\noindent
By standard facts of mathematical analysis, $\prod_{k=1}^n \left( 1+\frac{4}{m_k}\right)$
converges if and only if $ \sum_{k\ge 1} \frac{4}{m_k}$ converges, i.e., if and only if
$ \sum_{k\ge 1} \frac{1}{m_k}<\infty$.
By taking, e.g., $a_k=5^k$, for $k=1,2,\dots$ we obtain
$\sup_{n=1,2,\dots} \ell(\gamma_n)=\lim_{n \to \infty} \ell(\gamma_n)<\infty$.
\\[0.3cm ]
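For the choice $a_k=5^k$ the convergence can also be observed numerically. The following sketch (an illustration of ours, with our own function names) computes the partial products $\ell(\gamma_n)$ for $m_k=2\cdot 5^k+1$; they increase monotonically and stabilise near a finite limit, in agreement with $\sum_{k\ge 1} 1/m_k<\infty$:

```python
def gamma_length(n, width):
    """Partial product ell(gamma_n) = prod_{k=1}^n (1 + 4/m_k), where
    width(k) returns the pattern width m_k."""
    length = 1.0
    for k in range(1, n + 1):
        length *= 1.0 + 4.0 / width(k)
    return length

# Example 1 with a_k = 5**k, i.e. m_k = 2*5**k + 1:
lengths = [gamma_length(n, lambda k: 2 * 5**k + 1) for n in range(1, 16)]
```

In fact, for this particular sequence the factor $1+4/m_k$ equals $5(2\cdot 5^{k-1}+1)/(2\cdot 5^k+1)$, so the product telescopes to $\ell(\gamma_n)=3\cdot 5^n/(2\cdot 5^n+1)$, and the limit is $3/2$.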
\noindent
{\bf Remark.} One can verify, by using the definition of the Hausdorff distance $d_H$, that, for $q \in {\cal E}$,
the arc $\gamma^q$ in $L_{\infty}$ that connects the two exits of $L_{\infty}$ indicated by $q$ satisfies
$d_H(\gamma_n^q, \gamma^q)\to 0$, for $n\to \infty.$
Here we mean the Hausdorff distance between the images of the two curves, as sets in the Euclidean plane endowed with the Euclidean distance.
\begin{lemma}
\label{lemma:norminfinity_convergence}
With the above notation, there are parametrisations ${\tilde{\gamma}}_n^q(t):[0,1]\to [0,1]^2$
and ${\tilde{\gamma}}^q(t):[0,1]\to [0,1]^2$ of ${\gamma}_n^q$
and ${\gamma}^q$, respectively, such that for all $q \in {\cal E}$, we have
$||{\tilde{\gamma}}_n^q - {\tilde{\gamma}} ^q ||_{\infty} \to 0$, for $n \to \infty$, where $|| \cdot||_{\infty}$ is the supremum norm.
\end{lemma}
\begin{proof}Sketch: the proof is based on the fact that one can easily find parametrisations ${\tilde{\gamma}}_n^q(t):[0,1]\to [0,1]^2$
and ${\tilde{\gamma}}^q(t):[0,1]\to [0,1]^2$ of ${\gamma}_n^q$ and ${\gamma}^q$, respectively, such that
$||\tilde{\gamma}_n^q - \tilde{\gamma} ^q ||_{\infty} \le \frac{3}{2}\cdot \frac{1}{m(n)}$, for $n=1,2, \dots$. See also Figure \ref{fig:consecutive_paths}.
\end{proof}
\begin{lemma}
\label{lemma:unif_conv_implies_finite_length}
Let ${\tilde{\gamma}}_n:[0,1]\to [0,1]^2$
and ${\tilde{\gamma}}:[0,1]\to [0,1]^2$ be parametrisations of the planar curves ${\gamma}_n$ and ${\gamma}$, respectively, whose lengths we denote by $\ell({\gamma}_n)$ and $\ell({\gamma})$. If
$||{\tilde{\gamma}}_n - {\tilde{\gamma}}||_{\infty} \to 0$,
for ${n \to \infty}$, and $ \displaystyle \sup_{n}\ell({\gamma}_n) <\infty$, then $ \ell({\gamma})<\infty$. Moreover, in this case the following inequalities hold: $\displaystyle \liminf_{n \to \infty} \ell({\gamma}_n) \le \ell({\gamma}) \le \limsup_{n \to \infty} \ell({\gamma}_n)$.
\end{lemma}
\begin{proof}
We give an indirect proof of the first assertion of the lemma; the second one we leave as an exercise, since for our purposes the first assertion suffices. Assume $\ell({\gamma})=\infty$. Since $ \displaystyle \sup_{n}\ell({\gamma}_n) <\infty$, we can choose a positive integer $N$ such that $\displaystyle N> \sup_{n}\ell({\gamma}_n)+1$. Then, by the definition of the length of a curve, there exist real numbers $0=s_0<s_1<\dots < s_m=1$, $m \ge 1$, such that $\sum_{k=1}^m ||{\tilde{\gamma}}(s_k)- {\tilde{\gamma}}(s_{k-1}) ||>N$, where $||\cdot ||$ denotes the Euclidean norm in the plane. From the convergence hypothesis it follows that there exists an integer $n_0$ such that for every $n \ge n_0$ we have $||{\tilde{\gamma}}-{\tilde{\gamma}}_n ||_{\infty}<\frac{1}{2(m+1)}$, and thus $\displaystyle \max_{k=0,\dots,m} ||{\tilde{\gamma}}(s_k)-{\tilde{\gamma}}_n(s_k) ||<\frac{1}{2(m+1)}$. Moreover, by the definition of the length of a curve, $\ell(\gamma _n)\ge \sum_{k=1}^m ||{\tilde{\gamma}}_n(s_k)- {\tilde{\gamma}}_n(s_{k-1}) ||$.
Thus, we now easily obtain the following inequalities:
\begin{align*}
N &< \sum_{k=1}^m ||{\tilde{\gamma}}(s_k)- {\tilde{\gamma}}(s_{k-1}) || \\
& \le \sum_{k=1}^m \left(||{\tilde{\gamma}}(s_k)- {\tilde{\gamma}_n}(s_{k}) ||+||{\tilde{\gamma}}_n(s_k)- {\tilde{\gamma}}_n(s_{k-1}) ||+||{\tilde{\gamma}}_n(s_{k-1})- {\tilde{\gamma}}(s_{k-1}) ||\right) \\
& \le 2 \sum_{k=0}^m ||{\tilde{\gamma}}(s_k)- {\tilde{\gamma}}_n(s_{k}) || + \sum_{k=1}^m ||{\tilde{\gamma}}_n(s_k)- {\tilde{\gamma}}_n(s_{k-1}) || \le 1 + \ell(\gamma_n),
\end{align*}
which leads to a contradiction.
\end{proof}
Now, for an arbitrary $n\ge 1$, let us take $W \in {\cal W}_n$, set $L_{\infty}| W:=L_{\infty}\cap W$, and consider any two of the exits $e_1$, $e_2$ of the square $W$ (as defined in the paper \cite{mixlaby}). Then the arc in $L_{\infty}| W$ that connects $e_1$ and $e_2$ is the scaled image of the arc between two exits (of the same types) of the labyrinth fractal $L'_{\infty}$ generated by the sequence of patterns $\{{\cal A'}_k\}_{k=1}^{\infty}$, where ${\cal A'}_k={\cal A}_{k+n}$, and the scaling factor is $m(n)$. Therefore, one can easily see that the arc between any two such exits of any square $W \in {\cal W}_n$, for any $n\ge 1$, has finite length.
\\
From this it easily follows that if $x,y$ are points in the subset of $L_{\infty}$ consisting of all centers and all exits of squares of $\cup_{n\ge 1} {\cal V}({\mathcal G}({\cal W}_n))$, then the length of the arc in $L_{\infty}$ that connects $x$ and $y$ is finite.
Let ${E}_n$ be the set of all points of $L_{\infty}$ that are exits of squares of level $n$, and ${C}_n$ be the set of all points of $L_{\infty}$ that are centers of squares of level $n$. For any two distinct points $x',y' \in L_{\infty}$, we write $a (x',y')$ for the arc in $ L_{\infty}$ that connects $x'$ and $y'$. Let now $W\in {\cal V}({\mathcal G}({\cal W}_n))$, with $ n\ge 0$ (for $n=0$, $W$ is the unit square; otherwise it is a white square of level $n$, as defined before). Let $c$ be the center of $W$, and let $e$ be one of its four exits.
Now, we want to show that for any point $x \in (\inter W \cap L_{\infty})\setminus \cup_{k\ge n}(E_k \cup C_k)$, where $\inter$ denotes the interior of a set, we have $\ell(a(x,c))<\infty$ and $\ell(a(x,e))<\infty$. In the following we give a proof of the first inequality.
To this end, we consider two cases.
First, we assume
that $x$ is a point on one of the four ``main arms'' of $L_{\infty} \cap W$ (which is in fact the scaled image of the mixed labyrinth fractal defined by the sequence $\{{\cal A}_k\}_{k \ge n+1}$), i.e., $x$ lies on the arc in $L_{\infty}$ that connects the center of $W$ with one of its exits. In this case, it easily follows from the above results that the arc in $L_{\infty}$ between $x$ and the center of $W$ has finite length (less than one half of the length of the arc between two exits of $W$).
In the second case, we assume that $x$ does not lie on a ``main arm'' of $L_{\infty} \cap W$, i.e., $x$ lies on a ``branch'' of the dendrite that originates at a point, say $c'$, with $c'\in \bigcup_{k\ge n+1}C_k$, that lies on one of the four ``main arms'' of $L_{\infty} \cap W$ (which connects the center $c\in C_n$ of $W$ with one of its exits, say $e\in E_n$).
By the construction of the fractal and of arcs in the fractal (Lemma \ref{lemma:Construction}), there exists a point $ e'\in \bigcup_{k\ge n+1}E_k $ such that $x$ lies on the arc $a(c',e')$ in $L_{\infty}$ that connects $c'$ and $e'$, which, due to the above considerations, has finite length.
Since $ \ell(a(c,x))= \ell(a(c,c'))+ \ell(a(c',x))\le \ell(a(c,c'))+ \ell(a(c',e'))= \ell(a(c,e'))<\infty $, it follows that $ \ell(a(c,x)) <\infty$.
We leave the proof of the inequality $\ell(a(x,e))<\infty$ to the reader as an exercise.
Let now $x,y\in L_{\infty}$ be two distinct points, and let $W_n(x)$ and $W_n(y)$ be two squares in ${\cal W}_n$ such that $x\in W_n(x)$ and $y\in W_n(y)$, where the squares $W_n(x)$ and $W_n(y)$, $n\ge 1$, are chosen in the same way as the squares $W_n(a)$ and $W_n(b)$ in Lemma \ref{lemma:Construction}.
Let $p_n$ be the path between $W_n(x)$ and $W_n(y)$ constructed as in Lemma \ref{lemma:Construction}. Since $x\ne y$, it follows that there exists an integer $k\ge 1$ such that $p_k$ consists of at least $3$ squares. For $n\ge k$, let $W \in p_n$ be a square with $W \notin \{ W_n(x), W_n(y)\}$. By the construction of the arc $a$ in $L_{\infty}$ between $x$ and $y$ as described in Lemma \ref{lemma:Construction}, $a \cap W$ is an arc between two exits of $W$, and thus has finite length. Since $a(x,y)$ is the union of finitely many such arcs of finite length with the arcs $a(x,e_x)$ and $a(e_y,y)$, where $e_x$ is one of the exits of $W_n(x)$ and $e_y$ is one of the exits of $W_n(y)$, namely $\{e_x\}=a(x,y)\cap \fr(W_n(x))$ and $\{e_y\}=a(x,y)\cap \fr(W_n(y))$, it follows that $a(x,y)$ has finite length. \\
The above example shows that one can find a sequence of patterns that generates a mixed labyrinth fractal with the property that the length of the arc that connects any two points in the fractal is finite. Moreover, one can see that for any labyrinth pattern that contains such a ``special cross'' the lengths of the paths between the exits of the pattern do not change, i.e., they are the same as here, and thus the arc lengths in the fractal also remain finite, as in the above example.
Thus we have proven the following result.
\begin{proposition}
\label{prop:exist_patterns_finite_arcs}
There exist sequences $\{{\cal A}_k\}_{k=1}^{\infty}$ of (both horizontally and vertically) blocked labyrinth patterns, such that the limit set $L_{\infty}$ has the property that
for any two points in $L_{\infty}$ the length of the arc $a\subset L_{\infty}$ that connects them is finite.
\end{proposition}
Based on a theorem in the book of Tricot \cite[p.73, Chapter 7.1]{Tricot} regarding the existence of the tangent to a curve of finite length, we obtain the following stronger result:
\begin{theorem}
There exist sequences $\{{\cal A}_k\}_{k=1}^{\infty}$ of (both horizontally and vertically) blocked labyrinth patterns, such that the limit set $L_{\infty}$ has the property that
for any two points in $L_{\infty}$ the length of the arc $a\subset L_{\infty}$ that connects them is finite. For almost all points $x_0 \in a$ (with respect to arc length) the tangent to the arc $a$ at $x_0$ exists.
\end{theorem}
\noindent
{\bf Remarks}
\begin{enumerate}
\item It is easy to see that special cross patterns as shown in Figure
\ref{fig:ex:A_1_finite_arc}, with such a ``detour'' on each of the four arms, exist only for patterns with width $m \ge 11$. Moreover, one can easily check that for the above example both the box-counting and the Hausdorff dimension of the fractal are $\dim_B(L_{\infty})=\dim _H (L_{\infty})=1$, and the box-counting dimension of any arc that connects a pair of exits in $L_{\infty}$ is also $1$. The same holds for the arc between any two distinct points in the fractal.
\item By the definition of a mixed labyrinth fractal, by the shape of special
cross patterns, and the arc construction given in Lemma
\ref{lemma:Construction}, one can check that the fractal is the countable
union of rectifiable $1$-sets. An example of such a countable collection of
rectifiable $1$-sets is as follows: take, for any level $n\ge 1$ of the
construction, the arcs in $L_{\infty}$ that connect the center of any square
$W \in \mathcal{G}({\cal W}_n)$ and any of the midpoints of its sides, i.e.,
any of the four exits of $W,$ as well as the four arcs in $L_{\infty}$ that
connect the center of the unit square with any of its midpoints (the four exits
of the mixed labyrinth fractal).
\item
In Example 1 we could also take, e.g., cross patterns with $a_k=2^k$, for $k \ge 1$, and let the first two patterns in the sequence of generating patterns be just unblocked, symmetric cross patterns with width $m_k=2 a_k+1$, $k\in \{ 1,2\}$, and for $k\ge 3$ special cross patterns. Then ${\cal W}_n$ is blocked for all $n\ge 3$, and the resulting limit set $L_{\infty}$ still has the property that the arc between any two points in the fractal has finite length.
\end{enumerate}
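The finiteness claim in the remarks above can be illustrated numerically. The following sketch is only an illustrative aid, not part of the construction: it assumes, as in Example 2 below, that the path length between exits of a special cross pattern of width $m_k$ is $l_k=m_k+4$, and computes the partial products $\prod_{k\le n}l_k/m_k=\prod_{k\le n}(1+4/m_k)$ for $a_k=2^k$, i.e.\ $m_k=2^{k+1}+1$. Since $\sum_k 1/m_k<\infty$, the partial products converge, which reflects the finiteness of the arc lengths.

```python
from math import prod

# Widths m_k = 2*a_k + 1 for a_k = 2^k; the per-level stretching factor of
# exit-to-exit path lengths is l_k / m_k = 1 + 4/m_k, with l_k = m_k + 4.
def partial_product(n):
    return prod(1 + 4 / (2 * 2**k + 1) for k in range(1, n + 1))

# The partial products increase but stabilise quickly (sum 1/m_k converges):
for n in (5, 10, 20, 40):
    print(n, partial_product(n))
```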
In the following example we show that one can use blocked cross patterns like the one shown in Figure \ref{fig:ex:A_1_finite_arc} in order to construct mixed labyrinth fractals with the property that the arc between any two points in the fractal has infinite length.
\\\\
\noindent {\bf Example 2.} Let $\{{\cal A}_k\}_{k=1}^{\infty}$ be a sequence of special cross patterns like those occurring in Example 1, $r \ge 2$ be an arbitrarily fixed integer, and let
$\{{\cal A}'_i,~i=1,\dots,r\} \subset \{{\cal A}_k\}_{k\ge 3} $ be a finite set of blocked labyrinth patterns among those in the above infinite sequence, where $m'_i=2a'_i+1$ denotes the width of the pattern ${\cal A}'_i$, and $l'_i=2a'_i+5$ is the length of the path between any two exits in ${\cal G}({\cal A}'_i)$, for $i=1,\dots,r$. We define a new sequence of labyrinth patterns $\{{\cal A}^*_j\}_{j\ge1}$, e.g., in the following way: ${\cal A}^*_j \in \{ {\cal A}'_i, ~i = 1,\dots,r\}$. Let $L_{\infty}$ be the mixed labyrinth fractal generated by the sequence of patterns $\{{\cal A}^*_j\}_{j\ge1}$, and let $a^*:=\max \{a'_i,~i=1,\dots ,r\} $, $m^*:=2a^*+1$, and $l^*:=2a^*+5$.
Since $\frac{l'_i}{m'_i}=1+\frac{4}{m'_i}\ge 1+\frac{4}{m^*}$ for $i=1,\dots,r$, we have
\[
\prod_{i=1}^{r}\left(\frac{l'_i}{m'_i}\right)^{k_i}\ge\left(\frac{l^*}{m^*}\right)^n=\left(1+\frac{4}{m^*} \right)^n\to \infty, ~~\text{for}~n\to \infty,
\]
whenever $k_1,\dots,k_r\ge 0$ and $k_1+\dots+k_r=n$, and
it follows from Lemma \ref{lemma:Construction} that the length of the arc between any two exits in $L_{\infty}$ is infinite.
By using arguments analogous to those in Example 1, one can show that the infinite length of the arc between any two exits of $L_{\infty}$ implies that the arc between any two points in the fractal has infinite length, as in the case of self-similar labyrinth fractals generated by both horizontally and vertically blocked patterns \cite{laby_4x4, laby_oigemoan}.
Thus we have proven the following result.
\begin{proposition}
There exist sequences $\{{\cal A}_k\}_{k=1}^{\infty}$ of (both horizontally and vertically) blocked labyrinth patterns, such that the limit set $L_{\infty}$ has the property that
for any two points in $L_{\infty}$ the length of the arc $a\subset L_{\infty}$ that connects them is infinite.
\end{proposition}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig-halfblocked}
\caption{An example: a half-blocked cross pattern ${\cal A}_1$ with width $m_1=11$ that is horizontally blocked, but not vertically blocked}
\label{fig:ex:A_1_finite_and_infinite}
\end{center}
\end{figure}
\noindent
{\bf Example 3. }
Figure \ref{fig:ex:A_1_finite_and_infinite} shows a ``half-blocked'' labyrinth pattern with width $11$ that is horizontally, but not vertically, blocked. We call such a pattern (of width $m=2a+1\ge 11$) a \emph{half-blocked cross pattern}: either the horizontal or the vertical arms of the cross make a detour around a black square, positioned as in the case of the special cross patterns used in Example 1, and the ``central square'' of the pattern (where all ``cross arms'' meet) lies in column and row $a+1$.
Suppose we have a sequence $\{{\cal A}_k\}_{k=1}^{\infty}$ of such patterns, with $m_k=2a_k+1,$ and $a_k\ge 5$, for $k\ge 1$.
For any pattern ${\cal A}_k$, with width $m_k=2a_k+1$, of the above sequence, the path matrix is
\[
M_k=\left(
\begin{array}{rrllll}
2a_k+1 & 0 & 0 & 0 & 0 & 0\\
0 & 2a_k-3 & 2 & 2 & 2 & 2\\
a_k & a_k-2 & 2 & 1 & 1 & 1\\
a_k & a_k-2 & 1 & 2 & 1 & 1\\
a_k & a_k-2 & 1 & 1 & 2 & 1\\
a_k & a_k-2 & 1 & 1 & 1 & 2\\
\end{array}\right).
\]
Thus, the lengths of the paths between exits in ${\cal G}({\cal A}_k)$ are: $\A _k=2a_k+1$, $\B _k=2a_k+5$, $\C _k=\D _k=\E _k=\F _k=2a_k+3$.
One can immediately check that the length of the arc between the top and the bottom exit of the resulting labyrinth set $L_{\infty}$ is $1$, no matter how we choose the sequence $\{ a_k\}_{k\ge 1}$. Now, let us analyse the arc between the left and the right exit in $L_{\infty}$, and denote by $\B(n)$ the path in ${\cal G}({\cal W}_n)$ between the left and right exit in ${\cal W}_n$. If we choose, e.g., $a_k=k+4$, for $k\ge 1$, then $\sum_{k=1}^{\infty}\frac{1}{m_k}=\infty$.
From Proposition \ref{lemma:m^n} we obtain, with the notation used above, for $q=\B$:
\[ \ell(\gamma^q)\ge \frac{\prod_{k=1}^n(m_k+4)-1}{2 \prod_{k=1}^n m_k}=\frac12\prod_{k=1}^n\Big(1+\frac{4}{m_k}\Big)-\frac{1}{2\prod_{k=1}^n m_k}
.
\]
Under the above assumptions one can immediately check that $\displaystyle \lim_{n\to \infty}\Big(\frac12\prod_{k=1}^n\big(1+\frac{4}{m_k}\big)-\frac{1}{2\prod_{k=1}^n m_k}\Big)=\infty.$ From this one can easily infer that the four arcs in $L_{\infty}$ that connect the top or the bottom exit of $L_{\infty}$ with the left or the right exit of $L_{\infty}$ also have infinite length.
Moreover, one can show, by using arguments analogous to those mentioned when proving Proposition \ref{prop:exist_patterns_finite_arcs}, that for any $W\in {\cal W}_n$ the arc in $L_{\infty}$ that connects the left and the right exit of $W$ has infinite length. This also holds for
the arc in $L_{\infty}$ that connects the top or the bottom exit of $W$ with the left or the right exit of $W$.
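The growth of the lower bound displayed above can also be checked numerically. The following sketch is only an illustration for the choice $a_k=k+4$, i.e.\ $m_k=2k+9$: the bound $\frac12\prod_{k=1}^n(1+4/m_k)-\frac{1}{2\prod_{k=1}^n m_k}$ grows without bound, since $\sum_k 1/m_k=\infty$.

```python
# Lower bound for the length of the left-right arc (Example 3),
# with widths m_k = 2k + 9 coming from the choice a_k = k + 4.
def lower_bound(n):
    p, r = 1.0, 1.0   # r accumulates 1/prod(m_k); it underflows harmlessly to 0.0
    for k in range(1, n + 1):
        m = 2 * k + 9
        p *= 1 + 4 / m
        r *= 1 / m
    return 0.5 * p - 0.5 * r

# The bound diverges as n grows (harmonic-type divergence of sum 1/m_k):
for n in (10, 100, 1000):
    print(n, lower_bound(n))
```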
\begin{proposition}
There exist sequences of horizontally blocked and not vertically blocked labyrinth patterns, such that the resulting mixed labyrinth fractal $L_{\infty}$ has the following properties:
\begin{enumerate}
\item The arc in the fractal that connects the top and the bottom exit of $L_{\infty}$ has finite length (equal to $1$). The arc in the fractal that connects the top and bottom exit of any square in ${\cal W}_n$ is a vertical segment with finite length. Any vertical line segment that is contained in the fractal has finite length.
\item The arc in the fractal that connects the left and the right exit of $L_{\infty}$ has infinite length. The arc in the fractal that connects the left and right exit of any square in ${\cal W}_n$ has infinite length.
\item The arc in the fractal that connects the exit $e_1$ and the exit $e_2$ in $L_{\infty}$, where $e_1$ is either the top or the bottom exit, and $e_2$ either the left or the right exit, has infinite length, and the same holds for the arcs between these pairs of exits in any square in ${\cal W}_n$.
\end{enumerate}
\end{proposition}
Analogous statements hold for sequences of vertically, but not horizontally, blocked labyrinth patterns, with $L_{\infty}$ then having the corresponding analogous properties.
\par
For the above sequence of half-blocked cross patterns, the arc in $L_{\infty}$ that connects two arbitrarily chosen distinct points $x,y \in L_{\infty}$ has finite length if and only if it is contained in a vertical line segment which is, itself, a subset of $L_{\infty}$.
\vspace{2mm}
\par \noindent
{\bf Some final remarks.}
In the case of mixed, i.e., non-self-similar, labyrinth fractals, not just the shape of the patterns but also their widths play an essential role as parameters that influence the lengths of arcs between exits or between arbitrary points in the fractal.
\\\\
{\bf Conjecture:} A sequence of both horizontally and vertically blocked labyrinth patterns with the property that the sequence of widths $\{m_k\}_{k\ge 1}$ is bounded generates a mixed labyrinth fractal with the property that for any $x,y \in L_{\infty}$ the length of the arc in the fractal that connects $x$ and $y$ is infinite.
\\\\{\bf Acknowledgement.} The authors thank Bertran Steinsky for valuable remarks on the manuscript. We thank the referee for helpful comments.
2206.09471
\section{Introduction}\label{SectionIntro}
This article is a follow-up to the paper \cite{BaumNeaRees} entitled ``Interval groups related to finite Coxeter groups I'' by the first, third and fourth authors.\\
Let $(W,R)$ be a finite Coxeter system. In \cite{Carter}, Carter defines a diagram $\Delta$ associated with each conjugacy class of quasi-Coxeter elements in the simply laced types and also in type $F_4$; these diagrams
are the Coxeter diagrams and the diagrams shown in
Figures~\ref{FigureCarterDiagramDn} to \ref{FigureCarterDiagramF4a1}.
Let $w$ be a quasi-Coxeter element
(defined in Definition~\ref{DefQuasiCoxInGen}). We denote by $G([1,w])$ the interval group related to $w$ (defined in
Definition~\ref{DefIntervalGrpsCoxeter}).
The following theorem describes the main result of Section~\ref{SecPres}.
\begin{customthm}{A}\label{ThmPres}
Let $W$ be a simply laced Coxeter group,
$w$ a proper quasi-Coxeter element of $W$, and $\Delta$ the Carter diagram associated with $w$.
Then the interval group $G([1,w])$
is the quotient of the Artin group $A(\Delta)$
of the Carter diagram $\Delta$ associated with $w$ by the normal
closure of a set of twisted cycle commutators
$\tc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4}$,
one for each 4-cycle $(s_1,s_2,s_3,s_4)$ within $\Delta$, where
$\tc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4}$ is defined to be $[{\boldsymbol s}_1,{\boldsymbol s}_2^{-1}{\boldsymbol s}_3{\boldsymbol s}_4{\boldsymbol s}_3^{-1}{\boldsymbol s}_2]$.
\end{customthm}
For type $D_n$, Theorem~\ref{ThmPres} is proven in \cite{BaumNeaRees}. We restate that result in this paper as Theorem~\ref{ThmPresDn}. For the exceptional cases $E_6$, $E_7$, and $E_8$, we explain in Section~\ref{SecPres} how we prove this result computationally; that result is stated as Theorem~\ref{ThmEnIntervalProper}.
Theorem~\ref{ThmPres} evokes a similar result for Artin groups of the same types,
proved in \cite{GrantMarsh,Haley},
relating to most (but not all) of the Carter diagrams referred to in Theorem~\ref{ThmPres}. It is proven in this article as
Theorem~\ref{ThmEnArtin}.
Note that we also obtain nice presentations for the exceptional cases of non-simply laced finite Coxeter groups of types $F_4$ and $H_3$, as we describe in Theorems~\ref{ThmH3_a1}~to~\ref{ThmF4IntervalProper}.
Other important results in Section~\ref{SecPres} concern the poset related to the interval $[1,w]$. For instance, we show in Theorem~\ref{ThmLattice} that the poset is a lattice if and only if $w$ is a Coxeter element or $W$ is of type $H_3$. In particular, the interval group related to the quasi-Coxeter element $H_3(a_2)$ considered in Theorem~\ref{ThmH3_a2} is a Garside group, and by Theorem~\ref{ThmNonIsom} below, that Garside group is not isomorphic to the Artin group of type $H_3$.\\
The main result of Section~\ref{SecNonIsom} is the following. Its proof is based on an adaptation of Tits' proof in the Artin groups situation (see \cite{Tits}).
\begin{customthm}{B}\label{ThmNonIsom}
For $W$ of type $D_n$ with $n$ even and for all the exceptional types, the interval group $G([1,w])$ of a proper quasi-Coxeter element $w$ is not isomorphic to the Artin group of the same type as $W$.
\end{customthm}
The proof of Theorem~\ref{ThmNonIsom} is contained in the proofs of Theorems~\ref{NonIsoDn}~and~\ref{NonIsoEn}. A complete proof for type $D_n$ for any $n$ is given by the authors in~\cite{BHNR_Part3}.\\
\noindent
\textbf{Acknowledgements}. The third author would like to thank the DFG as he is funded through the DFG grants BA2200/5-1 and RO1072/19-1.
\section{Presentations of interval groups for quasi-Coxeter elements}\label{SecPres}
\subsection{Dual approach to Coxeter groups}\label{SubDualApproach}
Let $(W,R)$ be a Coxeter system, and let $T = \bigcup\limits_{w \in W}^{} w^{-1}Rw$ be the set of reflections of $W$. The dual approach to the Coxeter group $W$ is the study of $W$ as a group generated by $T$. Note that the classical approach uses the Coxeter system $(W,R)$ with generating set $R$.
Each $w \in W$ is a product of reflections in $T$. We define
$$
\ell_T(w):= \min \{ k \in \mathbb{Z}_{\geq 0} \mid w=t_{1}t_2 \cdots t_{k};\ t_i \in T \}
$$
called the reflection length of $w$. Let $w = t_1t_2 \cdots t_k$ with $t_i \in T$ and $k=\ell_T (w)$. We call $(t_1,t_2, \dotsc, t_k)$ (or $t_1t_2 \cdots t_k$ by abuse of notation) a reduced decomposition of $w$.
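For a small concrete case, $\ell_T$ can be computed by brute force. The sketch below is illustrative only; it works in $S_4$, viewed as the Coxeter group of type $A_3$ with $T$ the set of all transpositions, and determines reflection lengths by breadth-first search. It confirms the classical fact that in $S_n$ one has $\ell_T(w)=n-(\text{number of cycles of }w)$, so a $4$-cycle has reflection length $3$.

```python
from collections import deque
from itertools import combinations

n = 4
identity = tuple(range(n))          # permutations as tuples: w[i] = image of i

def compose(u, v):                  # (u v)(i) = u(v(i))
    return tuple(u[v[i]] for i in range(n))

def transposition(i, j):
    t = list(range(n))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

T = [transposition(i, j) for i, j in combinations(range(n), 2)]

def reflection_length(w):
    # breadth-first search from the identity, right-multiplying by reflections
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        u = queue.popleft()
        for t in T:
            v = compose(u, t)
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist[w]

four_cycle = (1, 2, 3, 0)           # the 4-cycle 0 -> 1 -> 2 -> 3 -> 0
print(reflection_length(four_cycle))   # 3 = n - (number of cycles)
```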
Since the set $T$ of reflections is closed under conjugation, there is a natural way to obtain new reflection decompositions from a given one. The braid group $\mathcal{B}_n$ acts on the set $T^n$ of $n$-tuples of reflections via
\begin{align*}
\boldsymbol{r}_i (t_1 ,\dotsc , t_n ) &:= (t_1 ,\dotsc , t_{i-1} , \hspace*{5pt} t_i t_{i+1} t_i,
\hspace*{5pt} \phantom{t_{i+1}}t_i\phantom{t_{i+1}}, \hspace*{5pt} t_{i+2} ,
\dotsc , t_n), \\
\boldsymbol{r}_i^{-1} (t_1 ,\dotsc , t_n ) &:= (t_1 ,\dotsc , t_{i-1} , \hspace*{5pt} \phantom
{t_i}t_{i+1}\phantom{t_i}, \hspace*{5pt} t_{i+1}t_it_{i+1}, \hspace*{5pt} t_{i+2} ,
\dotsc , t_n).
\end{align*}
\noindent We call this action of $\mathcal{B}_n$ on $T^n$ the Hurwitz action. It is readily observed that this action restricts to the set of all reduced reflection decompositions of a given element $w \in W$. If the latter action is transitive, then we say that the dual Matsumoto property holds for $w$.\\
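In small rank, the Hurwitz action can be explored by brute force. The following sketch is only an illustration: it enumerates all reduced reflection decompositions of a $4$-cycle in $S_4$ (a Coxeter element of type $A_3$) and checks that a single Hurwitz orbit already exhausts them, in line with the dual Matsumoto property.

```python
from itertools import combinations, product

n = 4
def compose(u, v):                  # (u v)(i) = u(v(i))
    return tuple(u[v[i]] for i in range(n))

def transposition(i, j):
    t = list(range(n))
    t[i], t[j] = t[j], t[i]
    return tuple(t)

T = [transposition(i, j) for i, j in combinations(range(n), 2)]
w = (1, 2, 3, 0)                    # a Coxeter element of S_4, with l_T(w) = 3

def product_of(ts):
    out = tuple(range(n))
    for t in ts:
        out = compose(out, t)
    return out

# all reduced reflection decompositions (t1, t2, t3) of w
reduced = {ts for ts in product(T, repeat=3) if product_of(ts) == w}

def hurwitz_neighbours(ts):
    # r_i and its inverse: both moves preserve the product t1 t2 t3
    for i in range(len(ts) - 1):
        a, b = ts[i], ts[i + 1]
        yield ts[:i] + (compose(compose(a, b), a), a) + ts[i + 2:]
        yield ts[:i] + (b, compose(compose(b, a), b)) + ts[i + 2:]

start = next(iter(reduced))
orbit, frontier = {start}, [start]
while frontier:
    ts = frontier.pop()
    for nb in hurwitz_neighbours(ts):
        if nb not in orbit:
            orbit.add(nb)
            frontier.append(nb)

print(len(reduced), orbit == reduced)   # one orbit: the action is transitive
```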
Recall that a Coxeter element $c \in W$ is defined to be any conjugate of the product of all elements of $R$ in some order. A more general notion of (parabolic) quasi-Coxeter elements is described in the next definition. It is borrowed from \cite{BaumGobet}.
\begin{definition}\label{DefQuasiCoxInGen}
\begin{itemize}
\item[(a)] A subgroup $P$ of $W$ is called a parabolic subgroup if
there is a simple system $R'= \{ r_1' , \ldots , r_n'\}$ of $W$ such that $P = \langle r_1' , \ldots , r_m' \rangle$ for some $m \leq n$.
\item[(b)] An element $w \in W$ is called a parabolic quasi-Coxeter element for $(W,T)$
if there is a reduced decomposition $w = t_1 \cdots t_m$ such that $\langle t_1 , \ldots , t_m \rangle$ is a parabolic subgroup of $W$.
If $W$ is this parabolic subgroup, then we simply call $w$ a quasi-Coxeter element.
\end{itemize}
\end{definition}
\begin{remark}
If $W$ is finite then this definition of a parabolic subgroup agrees with
the usual definition of a parabolic subgroup defined as the conjugate of a subgroup generated
by a subset of $R$ (see \cite[Section 4]{BaumGobet}).
\end{remark}
Every Coxeter element is a quasi-Coxeter element, and a quasi-Coxeter element is called proper if it is not a Coxeter element.
The dual Matsumoto property characterises the parabolic quasi-Coxeter elements (see Theorem~1.1 in \cite{BaumGobet}).
\begin{theorem}\label{LemmaTransitivityQCox}
An element $ w \in W$ is a parabolic quasi-Coxeter element if and only if the dual Matsumoto property holds for $w$.
\end{theorem}
We finally note that proper quasi-Coxeter elements exist precisely in types $D_n$ for any $n$, $E_6$, $E_7$, $E_8$, $F_4$, $H_3$, and $H_4$ (see \cite{BaumGobet}).
\subsection{Carter diagrams}\label{SubCarterDiagrams}
Let $W$ be a simply laced Coxeter group (that is, of type $A_n$, $D_n$, $E_6$, $E_7$, or $E_8$) or of type $F_4$.
Let $w$ be an element of a Coxeter group $W$. By Carter \cite{Carter}, there exists a bipartite decomposition of $w$ over the set $T$ of reflections of the form $$w = w_1 w_2 = \underset{w_1}{\underbrace{t_1 t_2 \cdots t_k}}\ \underset{w_2}{\underbrace{t_{k+1} \cdots t_{k+h}}},$$ where $\ell_T(w) = k+h$ and bipartite means that the reflections $t_i$ and $t_j$ ($i \neq j$) occurring within the decomposition of $w_1$ commute, and likewise those within
the decomposition of $w_2$.
A Carter diagram $\Delta$ related to this bipartite decomposition of $w$ has vertices that correspond to the elements $t_i$ appearing in the decomposition of $w$, and two vertices $t_i$ and $t_j$ ($i \neq j$) are joined by $o(t_it_j)-2$ edges, where $o(t_it_j)$ denotes the order of $t_it_j$ in $W$. The Carter diagram is called admissible if each of its cycles contains an even number of vertices. Carter introduced these diagrams in order to classify the conjugacy classes in Weyl groups.
Carter diagrams on $n$ vertices, where $n$ is the cardinality of $R$, describe the conjugacy classes of quasi-Coxeter elements in $W$. We now describe, for each conjugacy class of proper quasi-Coxeter elements, a related Carter diagram; each of these diagrams contains a chordless cycle on four vertices. Note that for a Coxeter element the corresponding Carter diagram is the Coxeter diagram (which has no cycle).
\subsubsection*{Carter diagrams in type $D_n$}\label{SubsubCarterDn}
There are $\lfloor n/2 \rfloor$ conjugacy classes of quasi-Coxeter elements in type $D_n$ for $n \geq 4$. The following Carter diagram describes these conjugacy classes, where $1 \leq m \leq \lfloor n/2 \rfloor$.
\tikzstyle{every node}=[circle, draw, fill=black!50,
inner sep=0pt, minimum width=4pt]
\begin{small}
\begin{figure}[H]
\begin{tikzpicture}
\node[draw, shape=circle, label=below:$s_2$] (2) at (0,0) {};
\node[draw, shape=circle, label=below:$s_3$] (3) at (1,0) {};
\node[] (0) at (2,0) {};
\node[] (00) at (3,0) {};
\node[draw, shape=circle, label=below:$s_{m-1}$] (4) at (4,0) {};
\node[draw, shape=circle, label=above:$s_m$] (5) at (5,0) {};
\node[draw, shape=circle, label=above:$s_{m+1}$] (6) at (6,1) {};
\node[draw, shape=circle, label=below:$s_1$] (7) at (6,-1) {};
\node[draw, shape=circle,label=above:$s_{m+2}$] (8) at (7,0) {};
\node[draw, shape=circle,label=below:$s_{m+3}$] (9) at (8,0) {};
\node[] (10) at (9,0) {};
\node[] (11) at (10,0) {};
\node[draw,shape=circle, label=below:$s_{n-1}$] (12) at (11,0) {};
\node[draw, shape=circle, label=below:$s_n$] (13) at (12.2,0) {};
\draw[-] (2) to (3);
\draw[-] (3) to (0);
\draw[dashed,-] (0) to (00);
\draw[-] (00) to (4);
\draw[-] (4) to (5);
\draw[-] (5) to (6);
\draw[-] (5) to (7);
\draw[-] (8) to (6);
\draw[-] (8) to (7);
\draw[-] (8) to (9);
\draw[-] (9) to (10);
\draw[dashed,-] (10) to (11);
\draw[-] (11) to (12);
\draw[-] (12) to (13);
\end{tikzpicture}\caption{Carter diagram $\Delta_{m,n}$ of type $D_n$.}\label{FigureCarterDiagramDn}
\end{figure}
\end{small}
Now we discuss the exceptional cases. For the related Carter diagrams, we will use the notation of Carter (see \cite{Carter}).
\subsubsection*{Carter diagrams in type $E_6$}\label{SubsubCarterE6}
There are two conjugacy classes of proper quasi-Coxeter elements, whose Carter diagrams are illustrated in Figure~\ref{FigureCarterDiagramsE6}.
\tikzstyle{every node}=[circle, draw, fill=black!50,
inner sep=0pt, minimum width=4pt]
\begin{small}
\begin{figure}[H]
\begin{center}
\begin{tabular}{lcl}
$a_1$:\quad \begin{tikzpicture}
\node[] (00) at (0,0) {};
\node[] (10) at (1,0) {};
\node[] (20) at (2,0) {};
\node[] (01) at (0,1) {};
\node[] (11) at (1,1) {};
\node[] (21) at (2,1) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\end{tikzpicture}
&\quad&
$a_2$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (20) to (21);
\end{tikzpicture}
\end{tabular}
\end{center}\caption{Carter diagrams $E_6(a_i)$ for $i=1,2$.}\label{FigureCarterDiagramsE6}
\end{figure}
\end{small}
\subsubsection*{Carter diagrams in type $E_7$}\label{SubsubCarterE7}
There are four conjugacy classes of proper quasi-Coxeter elements, whose Carter diagrams are illustrated in Figure~\ref{FigureCarterDiagramsE7}.
\begin{small}
\begin{figure}[H]
\begin{center}
\begin{tabular}{lcl}
$a_1$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (31) at (3,1) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (31) to (21);
\end{tikzpicture}
&\quad&
$a_2$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (m11) at (-1,1) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (m11) to (01);
\end{tikzpicture}
\\
&\quad&\\
$a_3$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (31) at (3,1) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (20) to (21);
\draw[-] (31) to (21);
\end{tikzpicture}
&\quad&
$a_4$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (12) at (1,2) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (20) to (21);
\draw[-] (12) to (21);
\draw[-] (12) to (01);
\end{tikzpicture}
\end{tabular}
\end{center}\caption{Carter diagrams $E_7(a_i)$ for $i=1,\ldots, 4$}\label{FigureCarterDiagramsE7}
\end{figure}
\end{small}
\subsubsection*{Carter diagrams in type $E_8$}
There are eight conjugacy classes of proper quasi-Coxeter elements, whose Carter diagrams are illustrated in Figure~\ref{FigureCarterDiagramsE8}.
\begin{small}
\begin{figure}[H]
\begin{center}
\begin{tabular}{clc}
$a_1$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (31) at (3,1) {};
\node[draw,shape=circle] (41) at (4,1) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (31) to (21);
\draw[-] (41) to (31);
\end{tikzpicture}
&\quad&
$a_2$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (31) at (3,1) {};
\node[draw,shape=circle] (m10) at (-1,0) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (31) to (21);
\draw[-] (m10) to (00);
\end{tikzpicture}
\\
&\quad &\\
$a_3$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (m11) at (-1,1) {};
\node[draw,shape=circle] (m10) at (-1,0) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (m11) to (01);
\draw[-] (m10) to (00);
\end{tikzpicture}
&\quad&
$a_4$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (31) at (3,1) {};
\node[draw,shape=circle] (41) at (4,1) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (20) to (21);
\draw[-] (31) to (21);
\draw[-] (31) to (41);
\end{tikzpicture}
\\
&\quad&\\
$a_5$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (31) at (3,1) {};
\node[draw,shape=circle] (m10) at (-1,0) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (20) to (21);
\draw[-] (31) to (21);
\draw[-] (00) to (m10);
\end{tikzpicture}
&\quad&
$a_6$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (31) at (3,1) {};
\node[draw,shape=circle] (30) at (3,0) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (20) to (21);
\draw[-] (31) to (21);
\draw[-] (31) to (30);
\draw[-] (20) to (30);
\end{tikzpicture}
\\
&\quad&\\
$a_7$:\quad \begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (10) at (1,0) {};
\node[draw,shape=circle] (20) at (2,0) {};
\node[draw,shape=circle] (01) at (0,1) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (21) at (2,1) {};
\node[draw,shape=circle] (12) at (1,2) {};
\node[draw,shape=circle] (13) at (2,2) {};
\draw[-] (00) to (10);
\draw[-] (10) to (20);
\draw[-] (01) to (11);
\draw[-] (11) to (21);
\draw[-] (00) to (01);
\draw[-] (10) to (11);
\draw[-] (20) to (21);
\draw[-] (12) to (21);
\draw[-] (12) to (01);
\draw[-] (13) to (12);
\end{tikzpicture}
&\quad&
$a_8$:\quad \begin{tikzpicture}[scale=.7]
\node[draw,shape=circle] (1) at (0,2) {};
\node[draw,shape=circle] (2) at (2,2) {};
\node[draw,shape=circle] (3) at (0,0) {};
\node[draw,shape=circle] (4) at (2,0) {};
\node[draw,shape=circle] (5) at (0.8,2.8) {};
\node[draw,shape=circle] (6) at (2.8,2.8) {};
\node[draw,shape=circle] (7) at (0.8,1) {};
\node[draw,shape=circle] (8) at (2.8,1) {};
\draw[-] (1) to (2);
\draw[-] (1) to (5);
\draw[-] (1) to (3);
\draw[-] (2) to (6);
\draw[-] (2) to (4);
\draw[-] (3) to (7);
\draw[-] (3) to (4);
\draw[-] (4) to (2);
\draw[-] (4) to (8);
\draw[-] (5) to (6);
\draw[-] (5) to (1);
\draw[-] (5) to (7);
\draw[-] (7) to (8);
\draw[-] (8) to (6);
\end{tikzpicture}
\end{tabular}
\end{center}\caption{Carter diagrams $E_8(a_i)$ for $i=1,\ldots,8$}\label{FigureCarterDiagramsE8}
\end{figure}
\end{small}
\subsubsection*{Carter diagrams in type $F_4$}\label{SubsubCarterF4}
There is one conjugacy class of proper quasi-Coxeter elements, whose Carter diagram is illustrated in Figure~\ref{FigureCarterDiagramF4a1}.
\begin{figure}[H]
\begin{center}
\begin{tikzpicture}
\node[draw,shape=circle] (00) at (0,0) {};
\node[draw,shape=circle] (11) at (1,1) {};
\node[draw,shape=circle] (m11) at (-1,1) {};
\node[draw,shape=circle] (02) at (0,2) {};
\draw[-] (00) to (m11);
\draw[-] (11) to (02);
\draw[-,double] (m11) to (02);
\draw[-,double] (11) to (00);
\end{tikzpicture}
\end{center}\caption{Carter diagram $F_4(a_1)$.}\label{FigureCarterDiagramF4a1}
\end{figure}
\subsection{Non-crossing partitions for quasi-Coxeter elements}\label{SubLattice}
In this section we define the set of non-crossing partitions $[1,w]$ for $w \in W$ a quasi-Coxeter element,
and embed it into the set of parabolic subgroups of $W$ as well as into the set of subspaces of
$V$, where $W \leq GL(V)$ is the Tits representation. These sets naturally carry the structure of a poset.
We also analyse the lattice property for the poset of non-crossing partitions of $w$. We provide this information because, for further explorations of the interval groups, a good knowledge of the poset of non-crossing partitions is needed.
\subsubsection*{The poset $([1,w], \preceq)$}
We start by defining left and right division in $W$.
\begin{definition}\label{DefAbsoluteOrder}
We say that $v \in W$ is a left divisor of $w$, and write $v \preceq w$, if $w = v u$ with $u \in W$ and $\ell_{T}(w) = \ell_{T}(v) + \ell_{T}(u)$. The order relation $\preceq$ is called the absolute order relation on $W$.
The interval $[1,w]$ related to an element $w \in W$ is defined to be the set of divisors of $w$ for $\preceq$.
We also call $[1,w]$ the set of non-crossing partitions for $w$.
We define division from the right similarly. We say that $v$ is a right divisor of $w$, and write $v \preceq_{r} w$, if $w = u v$ with
$u \in W$ and $\ell_{T}(w) = \ell_{T}(v) + \ell_{T}(u)$. We also define the interval $[1,w]_{r}$ of right divisors of an element $w \in W$.
\end{definition}
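As a toy example, the interval of a Coxeter element can be enumerated by brute force. The sketch below is illustrative; it uses the classical formula $\ell_T(w)=n-\#\{\text{cycles of }w\}$, valid in $S_n$, and computes $[1,c]$ for a Coxeter element $c$ of $S_4$ (type $A_3$), recovering the $14=\mathrm{Cat}(4)$ non-crossing partitions of a $4$-element set.

```python
from itertools import permutations

n = 4
def compose(u, v):                  # (u v)(i) = u(v(i))
    return tuple(u[v[i]] for i in range(n))

def inverse(u):
    inv = [0] * n
    for i, ui in enumerate(u):
        inv[ui] = i
    return tuple(inv)

def ell_T(w):
    # reflection length in S_n: n minus the number of cycles of w
    seen, cycles = set(), 0
    for i in range(n):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = w[j]
    return n - cycles

c = (1, 2, 3, 0)                    # a Coxeter element (4-cycle), ell_T(c) = 3
# left divisors: v with c = v u, u = v^{-1} c and ell_T(v) + ell_T(u) = ell_T(c)
interval = [v for v in permutations(range(n))
            if ell_T(v) + ell_T(compose(inverse(v), c)) == ell_T(c)]
print(len(interval))                # 14 = Catalan(4)
```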
The pair $([1,w], \preceq)$ is a poset. In Theorem~\ref{ThmLattice}, we will show that, except in
type $H_3$, it is a lattice if and only if the element $w$ is a Coxeter element. In type $H_3$, the poset $[1,w]$ is a lattice for each quasi-Coxeter element $w \in W$.\\
We recall Corollary~6.11 of \cite{BaumGobet}, which we state here as a lemma.
\begin{lemma}\label{PrefixQuasiCox}
Let $w \in W$ be a quasi-Coxeter element. Then every element in $[1,w]$ is a parabolic quasi-Coxeter element.
\end{lemma}
\subsubsection*{The poset $({\cal P}(w), \subseteq)$}
Let $u$ be an element in the interval $[1,w]$. According to Lemma~\ref{PrefixQuasiCox}, $u$ is a parabolic quasi-Coxeter element. Therefore, by Definition~\ref{DefQuasiCoxInGen} of a parabolic quasi-Coxeter element, there exists a reduced $T$-decomposition $u = u_1 \cdots u_k$ such that $P_u:= \langle u_1, \ldots , u_k \rangle$ is a parabolic subgroup. Recall that the parabolic closure of $u$ is the intersection of all parabolic subgroups that contain $u$. It is again a parabolic subgroup.
\begin{lemma}\label{PropParaSbgrp}
The following two properties hold.
\begin{itemize}
\item[(a)] $P_u = \langle t_1, \dots, t_k \rangle$ if $u = t_1 \cdots t_k$ is a reduced decomposition, where $t_1, \ldots, t_k \in T$.
\item[(b)] $P_u$ is the parabolic closure of $u$.
\end{itemize}
\end{lemma}
\begin{proof} Statement (a) is a consequence of \cite[Theorem~1.2]{BDSW}, while (b) follows from \cite[Theorem~6.1]{BaumGobet}.
\end{proof}
Thus the definition of $P_u$ is independent of the chosen reduced $T$-decomposition of $u$.
Let ${\cal P}(w):= \{ P_u~|~ u \in [1,w]\}$ be the set of parabolic subgroups related to the quasi-Coxeter element $w$. The poset related to ${\cal P}(w)$ is $({\cal P}(w), \subseteq)$, where $\subseteq$ is the inclusion of sets.
\subsubsection*{The poset $({\cal U}(w),\subseteq)$}
For each $u \in [1,w]$, consider the subspace $\mathrm {Fix}(u):= \ker(u- id) = \{v \in V~|~ v^u = v\}$ of $V$. Consider the commutator subgroup $\mathrm {Mov}(u):= [V,u]$ in the semi-direct product
$V \rtimes \langle u \rangle$. Then, by Maschke's theorem, $V = \mathrm {Fix}(u) \oplus \mathrm {Mov}(u)$. A short calculation shows that $\mathrm {Fix}(u) \perp \mathrm {Mov}(u)$ and therefore
$\mathrm {Fix}(u) = \mathrm {Mov}(u)^\perp$. More generally, we will also consider for $X \subseteq W$ the subspace $\mathrm {Fix}(X):= \{v \in V~|~ v^x = v ~\mbox{for all}~x \in X\}$ of $V$. Note that $\mathrm {Fix}(P_u) = \mathrm {Fix}(u)$ for any $u \in [1,w]$.
Set ${\cal U}(w):= \{\mathrm {Fix}(u)~|~u \in [1,w]\}$. Then $({\cal U}(w), \subseteq)$ is a poset, where
$\subseteq$ is the inclusion.\\
Define mappings $$\rho_1: [1,w] \rightarrow {\cal P}(w) ~\mbox{and}~
\rho_2: [1,w] \rightarrow {\cal U}(w) ~\mbox{by}~
\rho_1(u):=P_u,\,\rho_2(u):=\mathrm {Fix}(u).$$
Lemma~\ref{PropParaSbgrp} ensures that $\rho_1$ is well-defined. Furthermore, $\rho_1$ preserves the order relation while $\rho_2$ reverses it.
\begin{proposition}
The maps $\rho_1$ and $\rho_2$ are bijections. Moreover,
$\sigma: ({\cal P}(w),\subseteq) \rightarrow ( {\cal U}(w), \subseteq)$, defined by $\sigma(P_u):=\mathrm {Fix}(P_u)$ is an isomorphism
between posets which reverses the order relation.
\end{proposition}
\begin{proof}
We will use throughout the proof the following observation \cite[Lemma~1.2.1(i)]{BessisDualMonoid}: if $x \in W$ and $t \in T$, then we have
$$\mathrm {Fix}(x)\subseteq \mathrm {Fix}(t)~\text{if and only if }t \preceq x.$$
We begin by considering $\sigma$. It is a well-known fact that $\sigma$ is an order-reversing isomorphism of posets (see for instance \cite[Chapter V, \S 1.6]{Bourbaki}).
Next we show the injectivity of $\rho_2$. Let $x,y \in [1,w]$ such that $\mathrm {Fix}(x) = \mathrm {Fix}(y)$.
Since $x,y \in [1,w]$ there are $x^\prime$ and $y^\prime$ in $[1,w]$ such that
$x^\prime x = w = y^\prime y$ and $\ell_T(x^\prime) + \ell_T(x) = \ell_T(w) = n =
\ell_T(y^\prime) + \ell_T(y)$.
Further we have $\ell_T(x) = n - \dim \mathrm {Fix}(x) = n - \dim \mathrm {Fix}(y) = \ell_T(y)$ by \cite[Lemma~2]{Carter},
which also implies $\ell_T(x^\prime) = \ell_T(y^\prime)$. Let $t \in T$ be such that
$t \preceq x^\prime$. Then $t \preceq w$ and therefore $\ell_T(tw) = n-1$. This yields
that there is $0 \neq v \in \mathrm {Fix}(tw)$. It follows that $t(v) = t(tw(v)) = w(v) \neq v$, as $\mathrm {Fix}(w) = 0$.
Therefore $t$ is the reflection with respect to the hyperplane that is perpendicular to
$\alpha:= (v-w(v))/2$.
We claim that $t \preceq y^\prime$ as well. Note first that $x$ is a right divisor of $tw$, so $v \in \mathrm {Fix}(tw) \subseteq \mathrm {Fix}(x) = \mathrm {Fix}(y)$ and hence $y(v) = v$. Let $u \in \mathrm {Fix}(y^\prime)$.
Then
$(v,u) = (y^\prime (v), y^\prime(u)) = (y^\prime (v), u)$, and as $y^\prime (v) = y^\prime y(v) = w(v)$, we get $(\alpha, u) = 0$,
which implies $u \in \mathrm {Fix}(t)$ and $t \preceq y^\prime$.
Now we show $x = y$ by induction on $r:= \ell_T(x^\prime) = \ell_T(y^\prime) $.
If $r = 1$, then $t = x^\prime = y^\prime$ are reflections and $x = tw = y$.
Let $r > 1$. Then $w = x^{\prime \prime} (tx) = y^{\prime \prime} (t y)$ for some
$x^{\prime \prime}, y^{\prime \prime} \in W$ with $\ell_T(x^{\prime \prime} ) = r-1 =
\ell_T(y^{\prime \prime} )$. The assertion follows by induction as $\mathrm {Fix}(tx) = \mathrm {Fix}(t) \cap \mathrm {Fix}(x) = \mathrm {Fix}(t) \cap \mathrm {Fix}(y)$.
Since $\rho_2 = \sigma \circ \rho_1$, the claim for $\rho_1 $ holds as well.
\end{proof}
It remains an open question whether $\rho_1$ and $\rho_2$ are isomorphisms of
posets.
\subsubsection*{The lattice property}
\begin{theorem}\label{ThmLattice}
Let $w$ be a quasi-Coxeter element in a finite Coxeter group $W$. Then $([1,w],\preceq)$ is a lattice if and only if $w$ is a Coxeter element or if $W$ is of type $H_3$.
\end{theorem}
\begin{proof}
When $w$ is a Coxeter element, the fact that $([1,w],\preceq)$ is a lattice was shown in \cite{BessisDualMonoid} and \cite{BradyWatt}. Now consider a proper quasi-Coxeter element $w$.
For type $D_n$, we showed in Proposition~6.6 of \cite{BaumNeaRees} that the poset $([1,w],\preceq)$ is not a lattice, by establishing the result in type $D_4$ and then applying Theorem~2.1 of Dyer \cite{Dyer}.
Consider types $E_6$, $E_7$, and $E_8$. Since the Carter diagram related to each conjugacy class of proper quasi-Coxeter elements contains a $4$-cycle (that is, a type $D_4$ cycle), as illustrated in the figures of Section~\ref{SubCarterDiagrams}, the same Theorem~2.1 of Dyer applies. Hence we also deduce that the posets are not lattices.
Using GAP \cite{GAP4}, we show that in type $H_3$, the poset $([1,w],\preceq)$ is always a lattice and in types $H_4$ and $F_4$, the posets $([1,w],\preceq)$ of proper quasi-Coxeter elements are not lattices.
\end{proof}
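The lattice property checked above is elementary to state for a finite poset: every pair of elements must admit a least upper bound and a greatest lower bound. Our verification was carried out in GAP, but the underlying test can be sketched in a few lines of Python (a generic sketch for an arbitrary finite poset, not a transcript of our GAP code):

```python
from itertools import product

def is_lattice(elements, leq):
    """Decide whether the finite poset (elements, leq) is a lattice,
    i.e. every pair has a least upper bound and a greatest lower bound."""
    def has_bound(x, y, upper):
        # below(a, b) holds when a lies below b in the relevant direction
        below = (lambda a, b: leq(a, b)) if upper else (lambda a, b: leq(b, a))
        bounds = [z for z in elements if below(x, z) and below(y, z)]
        # a least (resp. greatest) bound must lie below (resp. above)
        # every other bound
        return any(all(below(b, z) for z in bounds) for b in bounds)
    return all(has_bound(x, y, up)
               for x, y in product(elements, repeat=2)
               for up in (True, False))
```

For instance, the divisors of $12$ ordered by divisibility form a lattice, while a poset in which some pair of elements has two incomparable minimal upper bounds does not; the interval posets $([1,w],\preceq)$ are checked in exactly this way, with the order given by $\preceq$.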
\subsection{The interval groups for quasi-Coxeter elements}\label{SubsectionIntervalGrpsOfQCoxElts}
Let $w$ be a quasi-Coxeter element of $W$. Consider the interval $[1,w] = \{u \in W\ |\ u \preceq w\}$. The interval group related to the interval $[1,w]$ is defined as follows.
\begin{definition}\label{DefIntervalGrpsCoxeter}
We define the group $G([1,w])$ by a presentation with set of generators $\bs{[1,w]}$ in bijection with the interval $[1,w]$, and relations corresponding to the relations in $[1,w]$: that is, $\bs{u}\bs{v} = \bs{r}$ whenever $u,v,r \in [1,w]$ satisfy $uv=r$ and $u \preceq r$, i.e. $\ell_{T}(r) = \ell_{T}(u) + \ell_{T}(v)$.
\end{definition}
By transitivity of the Hurwitz action on the set of reduced decompositions of $w$ (see Theorem \ref{LemmaTransitivityQCox}), we have the following result.
\begin{proposition}\label{PropDualPresDn}
Let $w \in W$ be a quasi-Coxeter element, and let $\boldsymbol{T} \subset \bs{[1,w]}$ be the copy of the set of reflections $T$ in $W$. Then $$G([1,w]) = \langle \boldsymbol{T} ~|~ \boldsymbol{t} \boldsymbol{t}' = \boldsymbol{t}' \boldsymbol{t}'' ~\mbox{for}~\boldsymbol{t} , \boldsymbol{t}', \boldsymbol{t}'' \in \boldsymbol{T}~\mbox{if}~t \neq t' , t'' \in T~\mbox{and}~t t' = t' t'' \preceq w \rangle$$
is a presentation of the interval group with respect to $w$.
\end{proposition}
Notice that the relations presented in Proposition~\ref{PropDualPresDn} are the relations that are visible on the elements of length $2$ in the poset $([1,w],\preceq)$. We call them the dual braid relations (as in \cite{BessisDualMonoid}).
The following result, due to Bessis--Digne--Michel \cite{BessisDigneMichel}, is the main theorem of interval Garside theory (see also \cite{Michel}).
\begin{theorem}\label{TheoremMainThmIntGarTheory}
If for $v \in W$, the two intervals $[1,v]$ and $[1,v]_{r}$ are equal (we say that $v$ is balanced) and if the posets $([1,v],\preceq)$ and $([1,v]_{r},\preceq_{r})$ are lattices, then the interval group $G([1,v])$ is an interval Garside group.
\end{theorem}
Since $T$ is stable under conjugation, quasi-Coxeter elements are always balanced. The only obstruction to obtaining interval Garside groups is the lattice property. Bessis \cite{BessisDualMonoid} showed the following.
\begin{theorem}\label{ThmGarsideArtin}
Let $c$ be a Coxeter element. The posets $([1,c],\preceq)$ and $([1,c],\preceq_r)$ are lattices; hence the interval group $G([1,c])$ is a Garside group. The group $G([1,c])$ is isomorphic to the Artin group associated with $W$.
\end{theorem}
Note that Garside groups are desirable since they enjoy important group-theoretical, homological, and homotopical properties. See \cite{DehornoyEtAl} for a treatment of the foundations of Garside theory.
\subsection{Presentations for the interval groups}\label{PraesentIntervalGrp}
Let $G$ be a group containing elements ${\boldsymbol s}_1$, ${\boldsymbol s}_2$, ${\boldsymbol s}_3$, and ${\boldsymbol s}_4$
that satisfy the relations of the Artin group corresponding to
the $4$-cycle that is illustrated in the figure below.
\begin{center}
\begin{tikzpicture}
\node[draw,shape=circle, label=below:${\boldsymbol s}_1$] (00) at (0,0) {};
\node[draw,shape=circle, label=right:${\boldsymbol s}_4$] (11) at (0.5,0.5) {};
\node[draw,shape=circle,label=left:${\boldsymbol s}_2$] (m11) at (-0.5,0.5) {};
\node[draw,shape=circle,label=above:${\boldsymbol s}_3$] (02) at (0,1) {};
\draw[-] (00) to (m11);
\draw[-] (11) to (02);
\draw[-] (m11) to (02);
\draw[-] (11) to (00);
\end{tikzpicture}
\end{center}
We associate two words with this $4$-cycle, which we call the {\em cycle commutator} and the {\em twisted cycle commutator}, and which we define by
$$\cc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4} := [{\boldsymbol s}_1,{\boldsymbol s}_2{\boldsymbol s}_3{\boldsymbol s}_4{\boldsymbol s}_3^{-1}{\boldsymbol s}_2^{-1}],\ \mbox{and}\
\tc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4} := [{\boldsymbol s}_1,{\boldsymbol s}_2^{-1}{\boldsymbol s}_3{\boldsymbol s}_4{\boldsymbol s}_3^{-1}{\boldsymbol s}_2].$$
It is straightforward to check that the four cycle commutators
$$\cc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4},\,
\cc{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4}{{\boldsymbol s}_1},\,
\cc{{\boldsymbol s}_3}{{\boldsymbol s}_4}{{\boldsymbol s}_1}{{\boldsymbol s}_2},\,
\cc{{\boldsymbol s}_4}{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}$$
are equivalent, in the sense that if one of them is a relator of $G$
(i.e. it evaluates to the identity in $G$), then so are the other three,
and the same is true of the corresponding twisted cycle commutators.
It follows from the braid relations between ${\boldsymbol s}_2,{\boldsymbol s}_3,{\boldsymbol s}_4$ that
${\boldsymbol s}_2^{-1}{\boldsymbol s}_3{\boldsymbol s}_4{\boldsymbol s}_3^{-1}{\boldsymbol s}_2 =_G {\boldsymbol s}_4^{-1}{\boldsymbol s}_3{\boldsymbol s}_2{\boldsymbol s}_3^{-1}{\boldsymbol s}_4$.
Hence $\tc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4}$ and
$\tc{{\boldsymbol s}_1}{{\boldsymbol s}_4}{{\boldsymbol s}_3}{{\boldsymbol s}_2}$ are equivalent.
But we cannot deduce the same relationship between
$\cc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4}$ and
$\cc{{\boldsymbol s}_1}{{\boldsymbol s}_4}{{\boldsymbol s}_3}{{\boldsymbol s}_2}$, and so the word
$\cc{{\boldsymbol s}_1}{{\boldsymbol s}_2}{{\boldsymbol s}_3}{{\boldsymbol s}_4}$ must be associated with an oriented 4-cycle of the form ${\boldsymbol s}_1 \rightarrow {\boldsymbol s}_2 \rightarrow {\boldsymbol s}_3 \rightarrow {\boldsymbol s}_4 \rightarrow {\boldsymbol s}_1$.
Notice also that both the cycle and twisted cycle commutator relators can be written as relations between positive words (see for instance Lemma~5.3 in \cite{BaumNeaRees}).
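The fact that these relators hold in the Coxeter group itself (cf. Theorem~\ref{ThmCameron}) can be checked numerically. The following Python sketch, which is merely an illustration and plays no role in the proofs, realises a $4$-cycle of reflections inside the Weyl group of type $D_4$ via an explicit (and, for this purpose, assumed) choice of roots, and verifies the diagram relations together with the vanishing of the cycle commutator. Note that in $W$ itself the cycle and twisted cycle commutators coincide, since the generators are involutions there.

```python
import numpy as np

def reflection(alpha):
    """Orthogonal reflection of R^n in the hyperplane perpendicular to alpha."""
    alpha = np.asarray(alpha, dtype=float)
    return np.eye(len(alpha)) - 2.0 * np.outer(alpha, alpha) / alpha.dot(alpha)

# An explicit 4-cycle of roots in the root system of type D4 (our choice):
# adjacent roots are non-orthogonal, opposite roots are orthogonal.
e = np.eye(4)
s1, s2, s3, s4 = (reflection(r) for r in
                  (e[0] - e[1], e[1] - e[2], e[0] + e[1], e[1] + e[2]))

inv = np.linalg.inv

def commutator(a, b):
    return a @ b @ inv(a) @ inv(b)

# the diagram relations R(Delta): braid relations along the edges ...
for a, b in ((s1, s2), (s2, s3), (s3, s4), (s4, s1)):
    assert np.allclose(a @ b @ a, b @ a @ b)
# ... and commuting relations on the diagonals
for a, b in ((s1, s3), (s2, s4)):
    assert np.allclose(a @ b, b @ a)

# the cycle commutator [s1, s2 s3 s4 s3^-1 s2^-1] evaluates to the identity in W
w = s2 @ s3 @ s4 @ inv(s3) @ inv(s2)
cc = commutator(s1, w)
assert np.allclose(cc, np.eye(4))
```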
In the remainder of the paper, we will use the following abbreviation.
Given a Carter diagram $\Delta$ and ${\boldsymbol{\mathcal{S}}}$ a set of generators related
to the vertices of $\Delta$ we denote by $R(\Delta)$ the set of braid relations
defined by $\Delta$; that is, for each pair of generators ${\boldsymbol s},{\boldsymbol t}$, we have the relation
${\boldsymbol s} {\boldsymbol t} {\boldsymbol s} = {\boldsymbol t} {\boldsymbol s} {\boldsymbol t}$ if the vertices representing ${\boldsymbol s}$ and ${\boldsymbol t}$
are joined by an edge and ${\boldsymbol s} {\boldsymbol t} = {\boldsymbol t} {\boldsymbol s}$ if they are not.
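Computationally, $R(\Delta)$ is determined by the edge set of $\Delta$ alone. The following Python sketch (with our own ad hoc encoding of a relation as a pair of words, each word a tuple of generators) makes this explicit:

```python
def braid_relations(vertices, edges):
    """Return R(Delta) as a list of pairs of words (tuples of generators):
    s t s = t s t when {s, t} is an edge of Delta, and s t = t s otherwise."""
    edge_set = {frozenset(e) for e in edges}
    relations = []
    for i, s in enumerate(vertices):
        for t in vertices[i + 1:]:
            if frozenset((s, t)) in edge_set:
                relations.append(((s, t, s), (t, s, t)))  # braid relation
            else:
                relations.append(((s, t), (t, s)))        # commuting relation
    return relations
```

Applied to the $4$-cycle pictured above, this produces four braid relations and two commuting relations.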
\subsubsection*{Type $D_n$}\label{SubsubPresDn}
The presentations in type $D_n$ were the main object of study in our first paper \cite[Theorem A]{BaumNeaRees}.
\begin{theorem}\label{ThmPresDn}
Let $w$ be a quasi-Coxeter element of the Coxeter group $W$ of type $D_n$ and $\Delta_{m,n}$ its associated Carter diagram,
as shown in Figure~\ref{FigureCarterDiagramDn}.
Then the interval group $G_{m,n}:= G([1,w])$ admits a presentation over the generators ${\boldsymbol s}_1,\dotsc,{\boldsymbol s}_n$ corresponding to the vertices of $\Delta_{m,n}$ together with the relations $R_{m,n}:= R(\Delta_{m,n})$ and the twisted cycle commutator relator
$\tc{{\boldsymbol s}_1} {{\boldsymbol s}_{m}}{{\boldsymbol s}_{m+1}}{{\boldsymbol s}_{m+2}}$, associated with the $4$-cycle $(s_1, s_{m}, s_{m+1}, s_{m+2})$ within $\Delta_{m,n}$.
\end{theorem}
We would also like to draw attention to the alternative presentations for $G_{m,n}$ that are described in \cite{BHNR_Part3}.
\subsubsection*{Types $E_6$, $E_7$ and $E_8$}
We prove the following results computationally; the computational steps used in the proofs are explained below.
Note that the presentations we obtain in Theorem~\ref{ThmEnArtin} were already described in \cite{GrantMarsh} and \cite{Haley}. The main result of this section is Theorem~\ref{ThmEnIntervalProper}.\\
We consider a Carter diagram to be orientable if its edges can be oriented in such a way that each $4$-cycle is oriented. All Carter diagrams of types $E_n(a_i)$ that appear in Figures~\ref{FigureCarterDiagramsE6} to~\ref{FigureCarterDiagramsE8} are orientable except for $E_7(a_4)$, $E_8(a_7)$, and $E_8(a_8)$.
\begin{theorem}\label{ThmEnArtin}
Let $W$ be a Coxeter group of type $E_n$ for $n=6,7$, or $8$.
Let $E_n(a_i)$ be an oriented
Carter diagram. Then the Artin group $A(E_n)$ associated with $W$ admits a presentation over the generators corresponding to the vertices of $E_n(a_i)$ with the relations $R(E_n(a_i))$ and a set of cycle commutator relators,
one corresponding to each oriented $4$-cycle in the diagram.
\end{theorem}
Note that any orientation of the Carter diagrams considered in Theorem~\ref{ThmEnArtin} yields the conclusion of the theorem.
Exactly the same diagrams are covered by \cite[Theorem 1.1]{Haley},
which derives a presentation relative to a diagram $\Gamma'$, of an Artin
group of type $\Gamma$, whenever $\Gamma'$ can be derived from $\Gamma$ by a
sequence of {\em mutations} (see also \cite{GrantMarsh}). Note that in Theorem~\ref{ThmPres}, Theorem~\ref{ThmEnIntervalProper}, and Theorem~\ref{ThmCameron}, we consider all Carter diagrams of types $E_n(a_i)$ and not only the orientable ones.
\begin{theorem}\label{ThmEnIntervalProper}
Let $W$ be a Coxeter group of type $E_n$ for $n=6,7$, or $8$.
Let $w$ be a quasi-Coxeter element, and let $\Delta$ be the Carter diagram associated with $w$.
Then the interval group $G([1,w])$ admits a presentation over the generators corresponding to the vertices of $\Delta$ with the relations $R(\Delta)$ and a set of twisted cycle commutator relators, one corresponding to each $4$-cycle in the diagram.
\end{theorem}
An analogous result for Coxeter groups arises as
a consequence of \cite[Theorem~6.10]{Cameron}, which uses a process called {\em switching}, similar to the process of mutation described in \cite{GrantMarsh, Haley}.
\begin{theorem}\label{ThmCameron}
Let $W$ be a simply laced Coxeter group, and $\Delta$ be a Carter diagram associated with $W$.
Then the Coxeter group $W$ admits a presentation over the generators corresponding to the vertices of $\Delta$ with the quadratic relations on the generators, the relations $R(\Delta)$ and a
set of cycle commutator relators, one corresponding to each $4$-cycle in $\Delta$.
\end{theorem}
Our proofs of
Theorems~\ref{ThmEnArtin} and \ref{ThmEnIntervalProper} are large computations,
some of them requiring significant computing power over a long period of time.
The presentations were established in a sequence of steps, which we now describe. First we describe the steps that prove Theorem~\ref{ThmEnIntervalProper}.\\
\textit{Step 1.} We choose a representative for each conjugacy class of quasi-Coxeter elements. Associated with each such representative is a Carter diagram (see Section~\ref{SubCarterDiagrams}). We distinguish between the conjugacy classes
using the orders of the quasi-Coxeter elements. Recall that the order of a Coxeter element is precisely the Coxeter number. For types $E_n$ ($n=6,7,8$), we summarise the orders in the next tables. In each table, the first column contains Carter diagrams and the second column the orders of the corresponding quasi-Coxeter elements.
\begin{center}
\begin{tabular}{|l|l|}
\hline
$E_6(a_1)$ & $9$\\
\hline
$E_6(a_2)$ & $6$\\
\hline
$E_6$ & $12$\\
\hline
\end{tabular}\quad
\begin{tabular}{|l|l|}
\hline
$E_7(a_1)$ & $14$\\
\hline
$E_7(a_2)$ & $12$\\
\hline
$E_7(a_3)$ & $30$\\
\hline
$E_7(a_4)$ & $6$\\
\hline
$E_7$ & $18$\\
\hline
\end{tabular}\quad
\begin{tabular}{|l|l|}
\hline
$E_8(a_1)$ & $24$\\
\hline
$E_8(a_2)$ & $20$\\
\hline
$E_8(a_3)$ & $12$\\
\hline
$E_8(a_4)$ & $18$\\
\hline
$E_8(a_5)$ & $15$\\
\hline
$E_8(a_6)$ & $10$\\
\hline
$E_8(a_7)$ & $12$\\
\hline
$E_8(a_8)$ & $6$\\
\hline
$E_8$ & $30$\\
\hline
\end{tabular}
\end{center}
\textit{Step 2.} We determine a presentation of the interval group related to the chosen quasi-Coxeter element as follows. First, we determine the length over $T$ of the elements in $W$ and then construct those of length $2$ that divide $w$. From these elements, it is easy to define the dual braid relations that describe our presentation of the interval group.
\textit{Step 3.} We choose a set of reflections $S$ of cardinality the rank of the Coxeter group such that the relations between the corresponding elements in the interval group are those that describe the relations of the Carter diagram related to the conjugacy class of the quasi-Coxeter element. We denote by ${\boldsymbol{\mathcal{S}}}$ the copy of $S$ in the interval group.
Using the dual braid relations, we determine an expression over ${\boldsymbol{\mathcal{S}}} \cup {\boldsymbol{\mathcal{S}}}^{-1}$ of all the generators in $\textbf{T} \backslash {\boldsymbol{\mathcal{S}}}$ of the interval group. Finally, we replace the elements that belong to $\textbf{T} \backslash {\boldsymbol{\mathcal{S}}}$ in the dual braid relations by their expressions over ${\boldsymbol{\mathcal{S}}} \cup {\boldsymbol{\mathcal{S}}}^{-1}$.
\textit{Step 4.} Using the package \verb|kbmag| \cite{KBMAG} of GAP \cite{GAP4} and a computation by hand, we show that all the relations other than those described by the Carter diagram and the corresponding commutator relators are redundant in the interval group (see Theorems~\ref{ThmEnArtin},~\ref{ThmEnIntervalProper} for the type of the commutator relators). This proves the presentations described in Theorems~\ref{ThmEnArtin}~and~\ref{ThmEnIntervalProper}.\\
Now Theorem~\ref{ThmEnArtin} is obtained by considering the conjugacy class of the Coxeter element in \textit{Step 1}, and then applying \textit{Step 2} to \textit{Step 4} for all the related Carter diagrams that appear in Theorem~\ref{ThmEnArtin}. We also attempted to construct the presentations for the non-orientable Carter diagrams that are excluded in Theorem~\ref{ThmEnArtin}, but we were unable to complete the computations; hence it seems likely (although it is not proved) that the $E_n$ Artin groups do not have presentations corresponding to those diagrams.\\
As evidence of how demanding these computations are, consider the case $E_8(a_6)$ of Theorem~\ref{ThmEnIntervalProper}. The related proper quasi-Coxeter element considered in \textit{Step 1} has order $10$. The number of dual braid relations we obtain in \textit{Step 2} is $3630$. Theorem~\ref{ThmEnIntervalProper} describes a presentation of the related interval group over $8$ generators and $31$ relations (the relations of the Carter diagram $E_8(a_6)$ along with $3$ twisted cycle commutators). The length of the longest relation we simplified in \mbox{\textit{Step 4}} is $2000$.
\subsubsection*{Types $H_3$, $H_4$ and $F_4$}\label{SubPresH3H4}
We start with type $H_4$, where the interval groups of proper quasi-Coxeter elements are dealt with quickly. There are ten conjugacy classes of proper quasi-Coxeter elements, and none of the corresponding intervals is a lattice, as we already
mentioned in Section~\ref{SubLattice}. We compare the results of applying the function \verb|LowIndexSubgroupsFpGroup| to these groups within GAP to show that these interval groups are not isomorphic to the Artin group of type $H_4$.\\
For type $H_3$, we have two conjugacy classes of proper quasi-Coxeter elements that we denote by $H_3(a_1)$ and $H_3(a_2)$. Using GAP, we obtain the following results.
\begin{theorem}\label{ThmH3_a1}
The interval group related to the proper quasi-Coxeter element $H_3(a_1)$ is isomorphic to the Artin group of type $H_3$. Since the interval is a lattice, this interval group is also a Garside group.
\end{theorem}
\begin{theorem}\label{ThmH3_a2}
The interval group related to $H_3(a_2)$ admits a presentation over three generators ${\boldsymbol s}_1,{\boldsymbol s}_2,{\boldsymbol s}_3$ and the relations are described by the following diagram presentation \begin{center}
\tikzset{every node/.style={font=\scriptsize}}
\begin{tikzpicture}
\node[circle, draw, fill=black!50,
inner sep=0pt, minimum width=4pt, label=left:${\boldsymbol s}_3$] (3) at (0,0) {};
\node[circle, draw, fill=black!50,
inner sep=0pt, minimum width=4pt, label=right:${\boldsymbol s}_2$] (2) at (2,0) {};
\node[circle, draw, fill=black!50,
inner sep=0pt, minimum width=4pt,label=above:${\boldsymbol s}_1$] (1) at (1,1) {};
\draw[-] (1) to (3);
\draw[-] (1) to (2);
\draw[-] (2) to node[below] {$5$} (3);
\end{tikzpicture}
\end{center}
along with the two relations
$${\boldsymbol s}_2{\boldsymbol s}_3{\boldsymbol s}_2{\boldsymbol s}_1{\boldsymbol s}_3{\boldsymbol s}_2 = {\boldsymbol s}_3{\boldsymbol s}_2{\boldsymbol s}_1{\boldsymbol s}_3{\boldsymbol s}_2{\boldsymbol s}_3, \ ({\boldsymbol s}_3{\boldsymbol s}_2{\boldsymbol s}_1)^3 = ({\boldsymbol s}_1{\boldsymbol s}_3{\boldsymbol s}_2)^3.$$
Since the interval is a lattice, this interval group is a Garside group.
\end{theorem}
We also show that the interval group of Theorem~\ref{ThmH3_a2} is not isomorphic to the Artin group of type $H_3$ by using \verb|LowIndexSubgroupsFpGroup| within GAP. Hence it defines a new Garside group.
We conjecture that this group is the fundamental group of the complement in $\mathbb{C}^3$ of an algebraic hypersurface.\\
We mention that using the same computational approach that we described previously for the cases $E_6$, $E_7$, and $E_8$, we are able to show the following two results in the case $F_4$.
\begin{theorem}\label{ThmF4Artin}
Let $W$ be a Coxeter group of type $F_4$. Let $F_4(a_1)$ be the Carter diagram illustrated in Figure~\ref{FigureCarterDiagramF4a1} with $(s_1,s_4)$ and $(s_2,s_3)$ the edges with double bonds. Then the Artin group $A(F_4)$ admits a presentation over generators corresponding to the vertices of $F_4(a_1)$ with relations $R(F_4(a_1))$ and the commutator relator $[{\boldsymbol s}_2,{\boldsymbol s}_3]$.
\end{theorem}
\begin{theorem}\label{ThmF4IntervalProper}
Let $W$ be a Coxeter group of type $F_4$. Let $F_4(a_1)$ be the Carter diagram illustrated in Figure~\ref{FigureCarterDiagramF4a1} with $(s_1,s_4)$ and $(s_2,s_3)$ the edges with double bonds. Then the interval group in this case admits a presentation over generators corresponding to the vertices of $F_4(a_1)$ with relations $R(F_4(a_1))$ and the commutator relators $[{\boldsymbol s}_2^{-1}{\boldsymbol s}_1{\boldsymbol s}_2,{\boldsymbol s}_3^{-1}{\boldsymbol s}_4{\boldsymbol s}_3]$ and $[{\boldsymbol s}_2{\boldsymbol s}_1{\boldsymbol s}_2^{-1},{\boldsymbol s}_3{\boldsymbol s}_4{\boldsymbol s}_3^{-1}]$. Furthermore, it is not isomorphic to the Artin group $A(F_4)$.
\end{theorem}
\section{Non-isomorphism results}\label{SecNonIsom}
The aim of this section is to show that none of the interval groups $G = G([1,w])$ associated with a proper quasi-Coxeter element $w$ of a Coxeter group of
type $D_n$ with $n$ even, or of type $E_n$ with $n \in \{ 6,7, 8\}$, is isomorphic to the respective Artin group $A$.
This question has already been discussed for the remaining finite Coxeter groups (types $H_3, H_4, F_4$) in
Section~\ref{PraesentIntervalGrp}.
Our first approach was to compare the abelianisations $G/[G,G]$ and $A/[A,A]$ of the respective groups.
Let $S$ be the set of generators of the presentations for the interval groups of
type $D_n$ or $E_6, E_7$ or $E_8$ given in Section~\ref{PraesentIntervalGrp}.
The related Carter diagrams are connected, and if $s,t \in S$ correspond to neighbours in that diagram, then $sts = tst$. Modulo the commutator subgroup
$G^\prime$ of $G$, this relation becomes $s = t$; hence $t^{-1}s \in G^\prime$.
Therefore all the elements in $S$ become equal in the abelianisation. Moreover, the
twisted cycle commutator relators still hold if we identify all the elements in $S$.
This shows that the abelianisations of the Artin groups as well as of the interval groups
are isomorphic to $\mathbb{Z}$. (In particular, the interval groups are infinite.)
We therefore take a different approach and consider the abelianisations of the pure
Artin groups and the pure interval groups.
Our strategy is as follows. Let $\Gamma$ be the Coxeter diagram of the Coxeter group $W$. Let $\varphi : A(\Gamma) \longrightarrow W(\Gamma) $ be a homomorphism from the Artin group $A = A(\Gamma)$ to the Coxeter group $W = W(\Gamma)$ such that the kernel $\ker(\varphi)$ is the pure Artin group $\mathrm{PA}(\Gamma)$. We call this the canonical epimorphism from $A$ to $W$.
Tits proved that
the abelianisation of $\mathrm{PA}(\Gamma)$ is isomorphic to the free abelian group of rank
$|T|$.
Note that the epimorphism $\varphi$ sends the element $s \in S$ to the respective reflection in $W$.
By Theorem~\ref{ThmCameron} there is also such an epimorphism from $G$ to
$W$; we denote its kernel by $K$ and call it the pure interval group. We show that
the abelianisation of $K$ has rank at most $|T|-2$.
Thereby we obtain a contradiction for the types $D_n$ with $n$ even, or $E_6, E_7, E_8$ by applying the following result of Cohen and Paris \cite{CP}.
\begin{theorem}\label{CohenParis}
Let $A(\Gamma)$ be an Artin group of type $D_n$ with $n$ even, or $E_n$ with
$n \in \{ 6,7, 8\}$. Then the canonical epimorphism is the unique epimorphism from
$A(\Gamma) $ to $W (\Gamma) $ up to automorphisms of $W(\Gamma)$.
In the case $D_n$ with $n$ odd there are three epimorphisms up to automorphisms of $W(\Gamma)$.
\end{theorem}
We follow the proof of Tits \cite{Tits} in the calculation of the abelianisation of the kernel $K$.
We therefore first sketch his approach, then discuss the interval groups of type
$D_n$ in detail, and finally the interval groups for $E_6$, $E_7$, and $E_8$.
\subsection{The abelianisation of the pure Artin group}\label{SubTitsArtin}
We start by introducing the notation of Tits \cite{Tits}, which we will use throughout this section. Let $(W,R)$ be the Coxeter group of type $\Gamma$ with simple system
$R = \{ r_i ~|~1 \leq i \leq n\}$.
Let $I = \{1, \ldots , n\}$ and denote by ${\bf I}$ the free group on $I$. We denote by $R(\Gamma)$ the braid relations determined by the
Coxeter graph $\Gamma$.
Let
\begin{itemize}
\item $r : {\bf I} \rightarrow W$ defined by $r(i):=r_i$ and $L := \ker(r)$.
\end{itemize}
Then we have
$W = \langle r_1, \ldots , r_n~|~r_i^2, i \in I, R(\Gamma) \rangle ~\mbox{and }~L =
\ll i^2, i \in I, R(\Gamma) \gg.$
Let
\begin{itemize}
\item $N := \ll [L,L], R(\Gamma) \gg ~ \leq L$, and $V:= {\bf I}/N$, $q_i:= iN$;
\item $q$ and $f$ the canonical epimorphisms $q: {\bf I} \rightarrow V$ and $f: V \rightarrow W$
defined by $q(i) := q_i$ and $f(q_i) := r_i$, respectively;
\item $U:= \ker f$; and
\item $B:= \langle\langle R(\Gamma) \rangle\rangle \leq N \leq {\bf I}$.
\end{itemize}
The homomorphism $f$ is well-defined since $N \subset L$ and $f\circ q = r$. Our setting implies the following.
\begin{lemma}\label{PropertiesU}
Let $A= A(\Gamma)$ be the Artin group of spherical type $\Gamma$. Then we have
\begin{itemize}
\item[(a)] $V/U \cong W$, and $U = L/N$;
\item[(b)] $U$ is an abelian normal subgroup of $V$;
\item[(c)] $U$ is the normal closure of the words $q_i^2$ in $V$, where $ i \in I$;
\item[(d)] $A := A(\Gamma) = {\bf I}/B$, and $\mathrm{PA} := \mathrm{PA}(\Gamma) = L/B$;
\item[(e)] $U$ is isomorphic to the abelianisation of the pure Artin group $\mathrm{PA}$.
\end{itemize}
\end{lemma}
\begin{proof} Assertions (a) and (b) follow from the definition of $U$, (c) is a consequence of the definition of $r$, and (d) follows
from the definitions of the Artin and the pure Artin groups.
From (d) we conclude $\mathrm{PA}/[\mathrm{PA}, \mathrm{PA}] = (L /B) / ([L,L]B / B) \cong L/([L,L] B) = L/N = U$, which is (e).
\end{proof}
In order to determine the structure of $U$, Tits uses the following property of
finite Coxeter systems $(W,R)$ \cite[Proposition~2.1]{Tits}, which does not hold in $(W,S)$, where $S$ is a generating set related to a quasi-Coxeter element.
\begin{lemma}
Let ${\bf i} \in {\bf I}$ be a positive word such that $r({\bf i})$ is $R$-reduced in $W$, and let $j,j^{\prime} \in I$ be such that
$r({\bf i})^{-1} r_j r({\bf i}) = r_{j^\prime}$. Then $q({\bf i})^{-1} q_j q({\bf i}) = q_{j^\prime}$.
\end{lemma}
Tits then proves the following \cite[Theorem~2.5]{Tits}.
\begin{theorem}\label{Tits}
The abelianisation of $\mathrm{PA}(\Gamma)$ is a free abelian group of rank $|T|$, where $T$ is the set of reflections in the Coxeter group $W(\Gamma)$.
\end{theorem}
\subsection{The abelianisation of the pure interval group of type $D_n$}\label{SubNonIsomDn}
Now we consider the interval group $G := G_{m,n}$, where $m > 1$. We adapt Tits's construction and argument for Artin groups of spherical type
to the group $G$ in order to prove an upper bound on the rank of the abelianisation $K/[K,K]$ of the ``pure interval group'' $K$ of $G$ (see the definition
below). Let
\begin{itemize}
\item $I = \{1, \ldots , n\}$ and ${\bf I}$ the free group on $I$;
\item $S = \{s_1, \ldots , s_n\}$ be the set of $n$ generators of $W$ that correspond to the vertices of $\Delta_{m,n}$;
\item $s: {\bf I} \rightarrow W$ the homomorphism sending $i$ to $s(i):= s_i$, and $L := \ker (s)$ and
\item define ${\boldsymbol s} : { \bf I} \rightarrow G_{m,n}$ by ${\boldsymbol s}(i) := {\boldsymbol s}_i$, and $B:= \ker({\boldsymbol s})$.
\end{itemize}
Then
$$W = \langle s_1, \ldots, s_n ~|~s_i^2, i \in I, R_{m,n}, \tc{s_1} {s_{m}}{s_{m+1}}{s_{m+2}} \rangle,$$
where $(s_1, s_{m}, s_{m+1}, s_{m+2})$ is the unique $4$-cycle in $\Delta_{m,n}$.
We need more notation. Define
\begin{itemize}
\item $N = \ll [L,L], R_{m,n}, \tc{1}{m}{m+1}{m+2} \gg \unlhd ~ {\bf I}$ and $V:= {\bf I}/N$,
\item homomorphisms $q: {\bf I} \rightarrow V$ by $i \mapsto q_i:= q(i)$ and $f: V \rightarrow W$ by $q_i \mapsto s_i$ and
\item $U := \ker f$.
\end{itemize}
By \cite[Theorem~A]{BaumNeaRees}, $B = \ll R_{m,n}, \tc{1}{m}{m+1}{m+2} \gg \unlhd ~ {\bf I}$.
Further we have $B[L,L] = N$, and by construction $G \rightarrow V, {\boldsymbol s}_i \mapsto q_i$ is a homomorphism.
We get properties analogous to those for the Artin groups in Lemma~\ref{PropertiesU}:
\begin{lemma}\label{Basics}
The following hold:
\begin{itemize}
\item[(a)] $V/U \cong W$, and $U = L/N$;
\item[(b)] $U$ is an abelian normal subgroup of $V$;
\item[(c)] $U$ is the normal closure of the words $q_i^2$ in $V$, where $ i \in I$;
\item[(d)] $G = {\bf I}/B$, and $K:= L/B$ is the kernel of the map from $G$ to $W$ sending
${\boldsymbol s}_i$ to $s_i$;
\item[(e)] $U$ is isomorphic to the abelianisation of $K$.
\end{itemize}
\end{lemma}
\begin{proof} The proofs of (a), (b), (d) are identical to the proofs of Lemma~\ref{PropertiesU}(a), (b), (d).
To prove (c),
let $X$ be the normal closure of $q_i^2 \in V$ for $1 \leq i \leq n$. Clearly $X \subseteq L/N$. Observe that
$q_i X$ satisfies the relations of the presentation for $W$ by \cite{Cameron}. This shows $L/N \subseteq X$,
and equality now follows. Statement (e) is immediate from the definitions of $L,N$ and $B$.
\end{proof}
In the following we assume that $n > 4$. This implies, as $m \leq n/2$, that $m +2 < n$.
The next lemma is an important fact which holds in $A(D_n)$ (see \cite{Tits}), but also in $G_{m,n}$ with $m >1$.
\begin{lemma}\label{ImportantFact}
The following two statements hold.
\begin{itemize}
\item[(a)] There is an action of $W$ on $U$ given by $u^w = u^v$
for $u \in U$ and $w \in W$, where $v \in V$ is any element
such that $f(v) = w$.
\item[(b)] $C_W(q_n^2) \geq C_W(s_n)$.
\end{itemize}
\end{lemma}
\begin{proof}
The action of $W$ on $U$ defined in (a) is well-defined as $U = \ker(f)$ and as $U$ is abelian.
It remains to prove (b).
As $n \geq 5$, the element $q_n$ commutes with $q_i$ for $1 \leq i \leq n-2$.
In our notation $s_n$ corresponds to the root $e_n - e_{n-1}$. Let
$$s := s_1^{s_{m+2} s_{m+3} \cdots s_ns_{m+1} \cdots s_{n-1}} \in W.$$
Then $s$ corresponds to the root $e_n + e_{n-1}$ and $C_W(s_n) = \langle s_1, \ldots , s_{n-2}, s_n, s \rangle$.
We note the following two properties.
The first is an elementary calculation, and the second follows from Lemma~\ref{Basics} (c).
\begin{itemize}
\item[(i)] The braid relation $q_jq_kq_j=q_kq_jq_k$ implies $q_j^{-1} q_k^2 q_j = q_k q_j^2 q_k^{-1}$;
\item[(ii)] $q_j^{-2} q_k^2 q_j^2 = q_k^2 $.
\end{itemize}
We apply these to derive $(q_n^2)^s = q_n^2$ by a direct calculation.
This then implies $C_W(q_n^2) \geq C_W(s_n)$.
\end{proof}
\begin{remark}\label{CentralizerwInG}
The following argument establishes Lemma~\ref{ImportantFact} (b) without having to do any direct calculations, and can also be used for the exceptional cases.
We consider two parabolic subgroups of $W$, the subgroups
$$P:= \langle s_i, s_n ~|~ 1 \leq i \leq n-2 \rangle \cong W(D_{n-2}) \times \mathbb{Z}_2~\mbox{and}$$
$$P_e:= \langle s_1, s_{m+1}, \ldots , s_n \rangle \cong W(D_{n-m}).$$
Then $\{s_1, s_{m+1}, \ldots , s_n\}$ is a simple system in $P_e$, and
$w_e:= s_1s_{m+1} \cdots s_n$ is a parabolic Coxeter element in $W$, as well as a prefix of $w$.
It follows from \cite[Proposition~2.1~(3)]{Tits}
that $C_{P_e}(q^2_n) \geq C_{P_e}(s_n) \cong W(D_{n-m-2} ) \times \mathbb{Z}_2^2$. Moreover, we have that $P \leq C_W(q_n^2)$ and
$ \langle C_{P_e}(s_n) , P \rangle = C_W(s_n) \cong W(D_{n-2} ) \times \mathbb{Z}_2^2$,
which proves Lemma~\ref{ImportantFact} (b).
\end{remark}
\begin{lemma}\label{ImportantFactHelp}
$C_W(q_n^2) $ is one of the three
following groups:
$C_W(s_n), O_2(W)C_W(s_n)$ or $W$.
\end{lemma}
\begin{proof}
We have that $C_W(q_n^2) \geq C_W(s_n)$ by Lemma~\ref{ImportantFact}. Recall that $W = O_2(W)\rtimes \mathrm {Sym}(n)$ and that
$C:= C_W(s_n) = O\rtimes H$, where $O = C_W(s_n) \cap O_2(W)$ is of order $2^{n-2}$ and $H \leq \mathrm {Sym}(n)$ is isomorphic to $\mathrm {Sym}(n-2) \times \mathbb{Z}_2$.
Then $H$ is a maximal subgroup of $\mathrm {Sym}(n)$. Let $M$ be a proper overgroup of $C$ in $W$. If $O_2(W) \not\leq M$, then $O_2(W)M = W$ and $MO_2(W)/O_2(W) \cong \mathrm {Sym}(n)$.
As $O_2(W)$ is a uniserial module for $\mathrm {Sym}(n)$ and $O \leq M$, it follows $M = W$. If $O_2(W) \leq M$, then $M = O_2(W)C$.
\end{proof}
Recall that the set of reflections $T$ in $W$ forms a single conjugacy class.
Therefore and by using Lemma~\ref{ImportantFactHelp} we are able to determine
the orbit of $q_i^2$ under the action of $V$.
\begin{cor}\label{Coro1}
Let $n \geq 5$.
Then the following hold:
\begin{itemize}
\item[(a)] $q^2_1, \ldots , q^2_n$ are conjugate in $V$;
\item[(b)] $\widehat{T}^2:= (q_i^2)^V = (q_k^2)^V$ for all $i,k \in I$;
\item[(c)] $| \widehat{T}^2 | = | T |/a$ where $a = |C_W(q_n^2) : C_W(s_n)|
\in \{1, 2, |T|\}$;
\end{itemize}
\end{cor}
\begin{proof} Assertion (a) is a consequence of the connectivity of the Carter diagram for $w$.
Assertion (b) follows from (a), and we obtain $| \widehat{T}^2 | = |W : C_W(q^2_n)| = |W : C_W(s_n)|/ |C_W(q^2_n) : C_W(s_n)| =
|T|/a$, where $a = |C_W(q^2_n) : C_W(s_n)| \in \{1, 2, |T|\}$ by Lemma~\ref{ImportantFactHelp}, which is (c).
\end{proof}
\begin{cor}\label{Coro2}
Let $n \geq 5$.
Then the following hold:
\begin{itemize}
\item[(a)] $U$ is generated as a group by $\widehat{T}^2$;
\item[(b)] $U$ is abelian and generated by at most $|T|/a$ elements, where $a \in \{1, 2, |T| \}$.
\end{itemize}
\end{cor}
\begin{proof}
Assertion (a) follows from Lemma~\ref{Basics} (c) and Corollary~\ref{Coro1} (b), and
(b) from Lemma~\ref{Basics} (b) and Corollary~\ref{Coro1} (c).
\end{proof}
Now we are able to show that for each of the three possible values of $a$ the group $U$ is generated by fewer than $|T|$ elements.
\begin{proposition}\label{UpperBoundDn}
Let $n \geq 4$.
Then the abelianisation of the kernel of the map from $G_{m,n}$ to $W$ that takes ${\boldsymbol s}_i$ to $s_i$ is generated by at most $|T| -2$ elements.
\end{proposition}
\begin{proof} We prove the assertion by induction on $n$. We checked by hand and using GAP that $U$ is free abelian of rank $10$ if $n = 4$.
Now let $n \geq 5$.
Let $P := \langle q_1, \ldots , q_{n-1} \rangle \leq V$. Then $P$ is a quotient of the
respective group $V_{n-1}$ related to $G_{m,n-1}$ if $m <n/2$ and to $G_{m-1,n-1}$ if $m = n/2$.
Let $T_{n-1}$ be the set of reflections in $f(P) \cong W(D_{n-1})$.
Then $|(q_1^2)^P| \leq |T_{n-1}|$ and by induction $\langle (q_1^2)^P \rangle$ is abelian of rank at most $|T_{n-1}|- 2$. This implies that
$$\ll \hat{T}^2 \gg ~= \langle \hat{T}^2 \rangle =
\langle \hat{T}_{n-1}^2 \rangle \langle \hat{T}^2 \setminus{ \hat{T}_{n-1}^2} \rangle $$
is abelian of rank at most
$ |T_{n-1}| - 2 + ( |T| - |T_{n-1}| )= |T| - 2$, as claimed.
\end{proof}
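To make the counting in the induction step explicit, consider the first step (an illustrative computation of ours; recall that a Coxeter group of type $D_n$ has $|T| = n(n-1)$ reflections):
\[
\operatorname{rank}(U) \;\le\; \bigl(|T_{n-1}|-2\bigr) + \bigl(|T|-|T_{n-1}|\bigr)
= (12-2) + (20-12) = 18 = |T|-2 \qquad (n=5).
\]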
\subsection{Non-isomorphism}
\begin{theorem}\label{NonIsoDn}
Let $W$ be a Coxeter group of type $D_n$ with $n\ge 4$ even, and $w$ a proper quasi-Coxeter element in $W$. Then $G([1,w])$ is not isomorphic
to the Artin group of type $D_n$.
\end{theorem}
\begin{proof}
By Proposition~\ref{UpperBoundDn} the abelianisation of $L/B$ is of rank less than $|T|$ in $G_{m,n}$. According to Theorem~\ref{CohenParis}
there is a unique epimorphism from $A(D_n)$ onto the Coxeter group $W(D_n)$ if $n$ is even. As, according to Tits's Theorem~\ref{Tits}, the abelianisation of the pure Artin group $\mathrm{PA}(D_n)$
is of rank $|T|$, it follows that $G_{m,n}$ and $A(D_n)$ are not isomorphic.
\end{proof}
\subsection{The cases $E_6$, $E_7$, $E_8$}\label{SubNonIsomE6E7E8}
Now consider the exceptional cases. We use the same notation as before, and we just assume that $W$ is a Coxeter group of type $E_n$ for some $n \in \{6,7,8\}$.
Observe that all the lemmata of the previous section still hold, with the exception of
Lemma~\ref{ImportantFact}(b) and Lemma~\ref{ImportantFactHelp}.
Thus it remains to show the following.
\begin{lemma}
Let $W$ be a Coxeter group of type $E_n$ with $n \in \{6,7,8\}$, and let $w$ be a proper quasi-Coxeter element.
Then $C_W(q_n^2) \geq C_W(s_n)$.
\end{lemma}
\begin{proof}
We follow the proof given in Remark~\ref{CentralizerwInG}. Our strategy is as follows.
We have that
$$P:= \langle s_i ~|~ [s_i,s_1] = 1 \rangle~\mbox{as well as}~P_m:= \langle s_i ~|~ 1 \leq i \leq n, i \neq m\rangle $$
are parabolic subgroups of $W$ for every $m \in \{1, \ldots , n\}$.
Therefore we have $P \cap P_m = \langle s_i ~|~ [s_i,s_j] = 1 $ and $s_i \in P_m \rangle $
by \cite[Chapter IV, Section 2]{Bourbaki}, which can easily be computed, as has been done in the following tables.
In the three tables, we always choose $s_1$ to correspond to the vertex in the Carter diagram which is in the upper right corner and $s_m$ to be in the bottom right corner, except for Cases $E_7(a_3)$ and $E_8(a_8)$, where $s_m$ is in the bottom left corner, and present $P$, $P_m$, $C_{P_m}(s_1)$, and $P \cap C_{P_m}(s_1) = P \cap P_m$. We describe the parabolic subgroups by writing down the type of the related root system.
As a next step we show that $\langle P, C_{P_m}(s_1) \rangle = C_W(s_1)$ for each $n \in \{6,7,8\}$ and each $E_n(a_i)$. By induction and by Lemma~\ref{ImportantFact}(b) we then get
$$C_W(q^2_1) \geq \langle C_P(q_1^2), C_{P_m}(q_1^2) \rangle \geq \langle P, C_{P_m}(s_1) \rangle =
C_W(s_1),$$ as claimed.
\bigskip\\
$n = 6$.
\noindent
$$\begin{array}{|c|c|c|c|c|}
\hline
\mbox{Type of}~w & P & P_m & C_{P_m}(s_1) & P \cap P_m \\\hline
E_6(a_1) & A_1 \times A_4 & D_5 & A_1^2 \times D_3 & A_1 \times A_3 \\
E_6(a_2) & A_1 \times A_3 & D_5 & A_1^2 \times D_3\ & A_1 \times A_3\\
\hline
\end{array}$$
In the first case $P$ and in the second $C_{P_m}(s_1)$ is a maximal subgroup in $C_W(s_1) \cong \mathbb{Z}_2 \times \mathrm {Sym}(6)$,
which yields the assertion in both cases.
\bigskip\\
$n = 7$.
\noindent
$$\begin{array}{|c|c|c|c|c|}
\hline
\mbox{Type of}~w & P & P_m & C_{P_m}(s_1) & P \cap P_m \\\hline
E_7(a_1) & A_1 \times D_5 & D_6 & A_1^2 \times D_4 & A_1 \times D_4\\
E_7(a_2) & A_1 \times A_5 & E_6 & A_1 \times A_5 & A_1 \times A_4 \\
E_7(a_3) & A_1 \times D_4 & E_6 & A_1 \times A_5 & A_1 \times A_4 \\
E_7(a_4) & A_1 \times D_4 & E_6 & A_1 \times A_5 & A_1 \times A_3\\
\hline
\end{array}$$
If $W$ is of type $E_7$, then $C_W(s_1)$ is of type $A_1 \times D_6$, i.e. isomorphic to $\mathbb{Z}_2 \times \mathbb{Z}_{2}^5 \rtimes \mathrm {Sym}(6)$.
The overgroups of the subgroup isomorphic to $\mathbb{Z}_2 \times \mathrm {Sym}(6)$ appearing in the table are
$$\mathbb{Z}_2 \times \mathrm {Sym}(6) < \mathbb{Z}_2^2 \times \mathrm {Sym}(6) < \mathbb{Z}_2 \times \mathbb{Z}_{2}^5 \rtimes \mathrm {Sym}(6)= C_W(s_1).$$ Thus, if $P \cap P_m$ is not of index two
in the other group, then we get $\langle P, C_{P_m}(s_1) \rangle = C_W(s_1)$, and therefore the assertion.
This applies in all cases except $E_7(a_1)$. In that case we construct the centraliser by hand.
\medskip
\\
$E_7(a_1)$: In Bourbaki notation the following roots give an $E_7(a_1)$-diagram:
$$e_1 + e_3, e_4 + e_1, e_5 - e_4, e_6 - e_5$$
$$e_3 - e_2, e_3 - e_1, 1/2(e_1+e_8) - 1/2(e_2+e_3+e_4 +e_5 +e_6 +e_7)$$
Using the given roots we see that $C_{P_m}(s_1)$ is related to the root system
generated by $e_6 - e_5, e_5 + e_6, e_1 + e_3, e_4 + e_1, e_3 - e_2, e_3 - e_1$.
Thus $\langle P, C_{P_m}(s_1) \rangle $ is generated by the reflections related
to the roots $e_6 - e_5, e_5 + e_6, e_1 + e_3, e_4 + e_1, e_3 - e_2, e_3 - e_1, 1/2(e_1+e_8) - 1/2(e_2+e_3+e_4 +e_5 +e_6 +e_7)$,
which generate a root system of type $A_1 \times D_6$; this is the assertion in this case.
\bigskip\\
$n = 8$.
\noindent
$$\begin{array}{|c|c|c|c|c|}
\hline
\mbox{Type of}~w & P & P_m & C_{P_m}(s_1) & P \cap P_m \\\hline
E_8(a_1) & A_1 \times E_6 & D_7 & A_1^2 \times D_5 & A_1 \times D_5 \\
E_8(a_2) & A_1 \times E_6 & D_7 & A_1^2 \times D_5 & A_1 \times D_5\\
E_8(a_3) & A_1 \times E_6 & E_7 & A_1 \times D_6 & A_1 \times A_5 \\
E_8(a_4) & A_1 \times E_6 & D_7 & A_1^2 \times D_5 & A_1 \times D_5\\
E_8(a_5) & A_1 \times E_6 & D_7 & A_1^2 \times D_5 & A_1 \times D_5\\
E_8(a_6) & A_1 \times D_5 & E_7 & A_1 \times D_6 & A_1 \times D_5\\
E_8(a_7) & A_1 \times E_6 & E_7 & A_1 \times D_6 & A_1 \times D_5\\
E_8(a_8) & A_1 \times D_4 & E_7 & A_1 \times D_6 & A_1 \times A_1^3\\
\hline
\end{array}$$
Here $C_W(s_1)$ is of type $A_1 \times E_7$. In the cases where $P_m$ is of type $E_7$, the overgroups of $C_{P_m}(s_1)$ in $C_W(s_1)$
are $$C_{P_m}(s_1) = \mathbb{Z}_2 \times \mathbb{Z}_{2}^5 \rtimes \mathrm {Sym}(6) < \mathbb{Z}_2^2 \times \mathbb{Z}_{2}^5 \rtimes \mathrm {Sym}(6) < \mathbb{Z}_2 \times E_7 = C_W(s_1).$$
It is straightforward to see that the index $|P : P \cap P_m| > 2$ in all cases, and thereby to obtain $\langle P, C_{P_m}(s_1) \rangle = C_W(s_1)$, which is the assertion. Thus it remains to consider the cases where $P_m$ is of type $D_7$. Then $P$ is a maximal subgroup of $C_W(s_1)$, but $C_{P_m}(s_1)$ is not contained in $P$, which also shows the assertion.
\end{proof}
\begin{theorem}\label{NonIsoEn}
Let $W = W(E_n)$ be a Coxeter group of type $E_n$, $n \in \{6,7,8\}$, and $w$ a proper quasi-Coxeter element in $W$. Then the interval group $G([1,w])$ is not isomorphic to the
Artin group $A(E_n)$ of type $E_n$.
\end{theorem}
\begin{proof}
Let $w$ be a proper quasi-Coxeter element in $W$, and let $P := \langle s_i ~|~ 2 \leq i \leq n\rangle $. According to our setting the Carter diagram of $w$ where we remove the vertex related to $s_1$ contains a quadrangle.
Therefore, as in the proof of Proposition~\ref{UpperBoundDn}, we get by induction that the rank of the abelianisation $U$ is at most $|T|-2$ where $T$ is the set of reflections in $W$.
Therefore, Theorems~\ref{CohenParis} and \ref{Tits} yield that $A(E_n)$ and $G([1,w])$ are not isomorphic.
\end{proof}
\section{Open questions}\label{SubNonIsonOpen}
Since the interval groups related to proper quasi-Coxeter elements are not isomorphic to the corresponding Artin groups, we pose some open questions that were originally considered in the theory of Artin groups and Garside groups. All of these questions have a positive answer for interval groups related to Coxeter elements (the case of Artin groups), but they remain open for interval groups related to proper quasi-Coxeter elements.
\begin{itemize}
\item[(a)] Can we solve the word and conjugacy problems for the interval groups?
\item[(b)] Is the centre of each interval group infinite cyclic? Note that a certain power of the lift of the quasi-Coxeter element to the interval group is always central.
\item[(c)] Are the interval groups torsion-free?
\item[(d)] Is the monoid defined from the presentation in Proposition~\ref{PropDualPresDn} (viewed as a monoid presentation) cancellative? Does it inject in the corresponding interval group?
\item[(e)] Can we describe the parabolic subgroups of the interval groups?
\item[(f)] Is the interval complex related to the poset of non-crossing partitions of a proper quasi-Coxeter element a classifying space for the interval group? This question is relevant to the $K(\pi,1)$ conjecture for Artin groups.
\end{itemize}
\bibliographystyle{alpha}
|
1111.0433
|
\section{Introduction}
\label{s1}
Consider the beta distribution
$\mathrm{Beta}(a,b)$, with density function
$$
\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}
\theta^{a-1}(1-\theta)^{b-1}.
$$
The mean of $\mathrm{Beta}(a, b)$ is readily obtained
by the formula $a/(a+b)$, but
there is no general closed formula for the median.
The median function, here denoted by $m(a,b)$,
is the function that satisfies,
$$
\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}
\int_0^{m(a,b)}\theta^{a-1}(1-\theta)^{b-1} \mathrm{d}\theta
=
\frac{1}{2}.
$$
The relationship $m(a,b)=1-m(b, a)$ holds.
Only in the special cases $a=1$ or $b=1$ do we obtain
an exact formula: $m(a, 1)=2^{-1/a}$
and $m(1, b)=1-2^{-1/b}$.
Moreover, when $a=b$, the median is exactly $1/2$.
There is a substantial literature on the
incomplete beta function and
its inverse
(see e.g. \citet{Dutka:1981} for a review).
The focus in the literature has been
on finding accurate numerical results,
but a simple and practical
approximation that is easy to compute
has been lacking.
\begin{figure}[t]
\begin{center}
\scalebox{0.80}{\includegraphics{fig-betaerr}}
\caption{\label{fig-betaerrors}
Relative errors of the approximation
$(a-1/3)/(a+b-2/3)$ of the
median of the $\mathrm{Beta}(a, b)$ distribution,
compared with the numerically computed value
for several fixed $p=a/(a+b)<1/2$.
The horizontal axis shows the shape parameter $a$
on logarithmic scale.
From left to right,
$p=0.499$, 0.49, 0.45, 0.35, 0.25, and 0.001.
}
\end{center}
\end{figure}
\section{A new closed-form approximation for the median}
Trivial bounds for the median can be derived
\citep{Payton:1989}, which are
a consequence of the more general
mode-median-mean inequality
\citep{Groeneveld:Meeden:1977}.
In the case of the beta distribution with
$1<a<b$,
the median is bounded by the
mode $(a-1)/(a+b-2)$ and the mean $a/(a+b)$:
$$
\frac{a-1}{a+b-2}
\le
m(a,b)
\le
\frac{a}{a+b}.
$$
For $a\le1$ the formula for the mode does not apply,
as the density has no interior mode.
If $1<b<a$, the order of the inequality is reversed.
Equality holds if and only if $a=b$;
in this case the mean, median, and mode are all equal to $1/2$.
This inequality shows that if the mean is kept fixed
at some $p$,
and one of the shape parameters is increased, say $a$,
then the median is sandwiched between
$p(a-1)/(a-2p)$ and $p$,
hence the median tends to $p$.
From the formulas for the mode and mean,
it can be conjectured that
the median $m(a,b)$ could be approximated by
$m(a,b;d)=(a-d)/(a+b-2d)$ for some $d\in(0,1)$,
as this form would satisfy the above inequality
while agreeing with the symmetry requirement,
that is, $m(a,b;d)=1-m(b,a;d)$.
\begin{figure}[t]
\begin{center}
\scalebox{0.80}{\includegraphics{fig-betaerrp}}
\caption{\label{fig-betaerrp}
Relative errors of the approximation
$(a-1/3)/(a+b-2/3)$ of the
median of the $\mathrm{Beta}(a, b)$ distribution
over the whole range of possible distribution means
$p=a/(a+b)$.
The smaller of the shape parameters is fixed,
i.e. for $p\le 0.5$,
the median is computed for $\mathrm{Beta}(a, a(1-p)/p)$
and for $p>0.5$,
the median is computed for $\mathrm{Beta}(bp/(1-p), b)$.
}
\end{center}
\end{figure}
Since a $\mathrm{Beta}(a,b)$ variate can be expressed as
the ratio $\gamma_1/(\gamma_1+\gamma_2)$ where
$\gamma_1\sim\mathrm{Gamma}(a)$ and
$\gamma_2\sim\mathrm{Gamma}(b)$ (both with unit scale),
it is useful to have a look at the median
of the gamma distribution.
\citet{Berg:Pedersen:2006} studied the median function
of the unit-scale gamma distribution, denoted here by $M(a)$,
for any shape parameter $a>0$,
and obtained
$M(a) = a - 1/3 + o(1)$;
the median rapidly approaches $a-1/3$ as $a$ increases.
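This behaviour is easy to check numerically. The following sketch is our own illustration, not part of the paper; the Simpson integration scheme, the bisection search, and all parameter choices are ours. It computes the gamma median from the CDF and reports $M(a)-(a-1/3)$:

```python
from math import exp, gamma

def gamma_cdf(x, a, n=4000):
    """CDF of the unit-scale Gamma(a) distribution, computed by
    composite Simpson integration on [0, x] (adequate for a >= 1)."""
    h = x / n
    f = lambda t: t ** (a - 1) * exp(-t)
    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / (3 * gamma(a))

def gamma_median(a):
    """Median M(a), found by bisection on the numerical CDF."""
    lo, hi = 0.0, 10.0 * a + 10.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if gamma_cdf(mid, a) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for a in (1, 2, 5, 10):
    # The difference is positive and shrinks toward 0 as a grows.
    print(a, gamma_median(a) - (a - 1 / 3))
```

For $a=1$ the median is exactly $\ln 2 \approx 0.693$, about $0.027$ above $a-1/3$; by $a=10$ the difference has dropped to roughly $0.002$.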
It can therefore be conjectured that
the distribution median may be approximated by,
\begin{equation}\label{eq-beta}
m(a, b) \approx
m(a, b; 1/3)
=
\frac{a-1/3}{(a-1/3)+(b-1/3)}
=
\frac{a-1/3}{a+b-2/3}.
\end{equation}
Figure~\ref{fig-betaerrors}
shows that
this approximation indeed appears to approach
the numerically computed median asymptotically
for all distribution means $p=a/(a+b)$ as the
(smaller) shape parameter $a\to\infty$.
For $a\ge1$, the relative error is less than 4\%,
and for $a\ge2$ this is already less than 1\%.
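The accuracy of (\ref{eq-beta}) can also be checked directly. The sketch below is our own illustration, not from the paper; the Simpson-rule evaluation and the test parameters are our choices. It evaluates the beta CDF at the approximate median, so a good approximation should yield values close to $1/2$:

```python
from math import gamma

def beta_cdf(x, a, b, n=20000):
    """Regularized incomplete beta I_x(a, b) by composite Simpson
    integration on [0, x]; adequate for shape parameters >= 1."""
    c = gamma(a + b) / (gamma(a) * gamma(b))
    h = x / n
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    s = f(0.0) + f(x)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return c * s * h / 3

def approx_median(a, b):
    """The closed-form approximation (a - 1/3) / (a + b - 2/3)."""
    return (a - 1 / 3) / (a + b - 2 / 3)

for a, b in [(2, 5), (3, 3), (1.5, 4), (10, 2)]:
    m = approx_median(a, b)
    print(f"Beta({a},{b}): approx median {m:.4f}, "
          f"CDF there {beta_cdf(m, a, b):.4f}")
```

With the smaller shape parameter at least 1, the CDF values fall well inside the interval $[0.4865, 0.5135]$ reported below for the tail probability.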
\begin{figure}[t]
\begin{center}
\scalebox{0.80}{\includegraphics{fig-betadisterr}}
\caption{\label{fig-betadisterr}
Logarithm of the scaled absolute error (distance)
$\log(|m(a,b;d)-m(a,b)|/p)$,
computed for a fixed distribution mean $p=0.01$ and various
$d$. The approximate median of the
$\mathrm{Beta}(a,b)$ distribution
is defined as $m(a,b;d)=(a-d)/(a+b-2d)$.
Due to scaling of the error,
the graph and its scale will not essentially change even if
the error is computed for other values of $p<0.5$.
The approximation $m(a,b;1/3)$ performs the
most consistently,
attaining the lowest absolute error eventually as the
precision of the distribution increases.
}
\end{center}
\end{figure}
Figure~\ref{fig-betaerrp} shows the relative
error over all possible distribution means $p=a/(a+b)$,
as the smallest of the two shape parameters varies from
$1$ to $4$. This illustrates how the relative error
tends uniformly to zero over all $p$ as the shape parameters
increase.
The figure also shows that
the formula consistently either underestimates
or overestimates the median depending on whether
$p<0.5$ or $p>0.5$.
However, the function
$m(a,b;d)$
approximates the median fairly accurately
if some other $d$ close to $1/3$ (say $d=0.3$) is chosen.
Figure~\ref{fig-betadisterr} displays
curves of the logarithm of the absolute
difference from the numerically computed
median for a fixed $p=0.01$, as the shape parameter
$a$ increases.
The absolute difference
has been scaled by $p$ before taking the logarithm:
due to this scaling,
the error stays approximately constant as $p$ decreases
so the picture and its scale will not essentially change even if
the error is computed for other values of $p<0.5$.
The figure shows that although some approximations, such as
$d=0.3$, have a lower absolute error for some $a$,
the error of $m(a, b; 1/3)$ tends to be lower in the long run;
moreover it performs more consistently,
decreasing at a constant rate on the logarithmic scale.
In practical applications, $d=0.333$ should be a sufficiently
good approximation of $d=1/3$.
\begin{figure}[t]
\begin{center}
\scalebox{0.7}{\includegraphics{fig-betatail}}
\caption{\label{fig-betatail}
Tail probabilities $\Pr(\theta<m)$
of the $\mathrm{Beta}(a, b)$ distribution
when $m=(a-1/3)/(a+b-2/3)$.
As the smaller of the two shape
parameters increases, the tail probability
tends rapidly and uniformly to $0.5$.
}
\end{center}
\end{figure}
Another measure of the accuracy is the
tail probability
$\Pr(\theta \le m(a,b;1/3))$ of a $\mathrm{Beta}(a, b)$ variate $\theta$:
good approximators of the
median should yield probabilities close to $1/2$.
Figure~\ref{fig-betatail} shows that
as long as the smaller of the two shape parameters
is at least 1,
the tail probability is bounded between $0.4865$ and $0.5135$.
As the shape parameters increase, the
probability tends
rapidly and uniformly to $0.5$.
Finally, let us have a look at a
well-known paper that provides further
support for the uniqueness of $m(a,b;1/3)$.
\citet{Peizer:Pratt:1968} and \citet{Pratt:1968}
provide approximations for
the probability function $\Pr(\theta\le x)$
of a $\mathrm{Beta}(a,b)$ variate $\theta$.
Although they do not provide a formula
for the inverse, their approximation to the probability
function can be evaluated at the approximate median.
According to \citet{Peizer:Pratt:1968},
$\Pr(\theta\le x)$
is well approximated by
$\Phi(z(a,b; x))$ where $\Phi$ is the
standard normal probability function,
and $z$ is a function of the shape parameters and
the quantile $x$.
Consider $m=m(a,b;d)$: for a good approximation,
$z(a,b;m)$ should be close to zero, and should at least
tend to zero quickly as $a$ and $b$ increase.
Now assume that $p$ is fixed, $a$ varies and $b=a(1-p)/p$.
The function $z(a, b; m)$ equals, rewritten with
the notation in this paper,
\begin{equation}\label{eq-peizer-beta}
\sqrt{p}\frac{1-2m}{(a-p)^{1/2}}\left(
1/3-d
-
\frac{0.02p}{a}\left[
\frac{1}{2} + \frac{1-dp/a}{p(1-p)}
\right]
\right)
\left(\frac{1+f(a,p;d)}{m(1-m)}\right)^{1/2},
\end{equation}
where the function $f(a,p;d)$ tends to zero
as $a$ increases,
being exactly zero only when $d=1/2$ or $m=1/2$.
It is evident that for the fastest convergence
rate to zero, one should choose $d=1/3$.
This is of the order $O(a^{-3/2})$;
if $d\ne 1/3$,
for example if we choose the mean $p$
as the approximation of the median ($d=0$),
the rate is at most $O(a^{-1/2})$.
|
1111.0762
|
\section{Conclusions \& Future Work}\label{sec:conc}
In this paper, we consider the challenging problem of multidimensional balanced allocation for both the sequential and the parallel $d$-choice process and show that the gap (assuming fixed $f$ populated dimensions per ball and uniform distribution of $f$ over $D$) is $O(\log\log(n))$, which is tight (within a $D/f$ factor of the lower bound). This improves the best prior bound~\cite{md-mm} of $O(\log\log(nD))$. Further, for an arbitrary number of balls $m \gg n$, the expected gap also has an upper bound of $O(\log\log(n))$, independent of $m$, for the fixed-$f$ case with uniform distribution of populated dimensions. For the variable-$f$ case with a (non-uniform) binomial distribution of populated dimensions, the gap is $O(\log(n))$ for $m=O(n)$. To the best of our knowledge, this is the first such analysis for the $d$-choice paradigm with multidimensional balls and bins.
\par Our analysis also provides a much simpler and more elegant proof technique (as compared to~\cite{petra-heavy-case}) for the $O(\log\log(n))$ gap for $m \gg n$ scalar balls thrown into $n$ bins using the symmetric multiple-choice process. Moreover, for weighted sequential scalar balls and bins in the general case $m \gg n$, we show an upper bound on the expected gap of $O(\log(n))$, which improves upon the best prior bound of $n^c$ ($c$ depends on the weight distribution, which is assumed to have a finite fourth moment) provided in~\cite{kunal-weighted}. In future work, we would like to generalize the potential function approach to parallel and weighted balls and bins.
\par Further, we consider the challenging problem of multidimensional balanced allocation for the $(1+\beta)$ choice process and show that for an arbitrarily large number of balls, the expected gap (assuming fixed $f$ populated dimensions per ball and uniform distribution of $f$ over $D$) is $O(\frac{\log(n)}{\beta})$, which is tight (within a $D/f$ factor of the lower bound) and also independent of $m$. Further, the expected gap is also independent of $m$ for a non-uniform distribution of the $f$ populated dimensions over $D$ (with fixed $f$ per ball) and for random $f$ with a binomial distribution.
\bibliographystyle{plain}
\section{Symmetric $d$-choice Process}
\par In this section, we present bounds on the gap for the symmetric $d$-choice process, covering unweighted sequential and parallel multidimensional balls and bins as well as the sequential weighted scalar case.
\subsection{Markov Chain Specification}\label{sec:markov-chain}
\label{sec:markov}
As mentioned earlier, a balls-and-bins process can be characterized by a probability distribution vector $(p_1, p_2, \ldots, p_n)$, where $p_i$ is the probability that a ball is placed in the $i^{th}$ most loaded multidimensional bin. Let $x_i^d(t)$ be the random variable that denotes the \textit{weight in dimension $d$ for bin $i$}: it equals the load of the $d^{th}$ dimension of the $i^{th}$ bin minus the average load in dimension $d$. So, $\sum_{i=1}^{n} x_i^d(t) = 0$ for all $d \in [1..D]$. Each md-ball has $f$ populated dimensions, where $f$ may be constant across the balls or a random variable with a given distribution. Let $s_i(t)$ denote the sum of the loads (minus the corresponding dimension averages) across all $D$ dimensions for bin $i$ at time $t$, i.e. $s_i(t) = \sum_{d=1}^D x_i^d(t)$. It is assumed that bins are sorted by $s_i(t)$, so $s_i \ge s_{i+1}$ for all $i \in [1..n-1]$. The process defines a Markov chain over the matrices $x(t)$ as follows:
\begin{itemize}
\item Sample $j \in_p [n]$.
\item Set $r_i = s_i(t) + f(1 - 1/n)$, for $i = j$. Since an md-ball has $f$ non-zero entries, each of these $f$ dimensions in bin $i$ will be incremented by $1 - 1/n$.
\item Set $r_i = s_i(t) - f/n$, for $i \ne j$. Since an md-ball has $f$ non-zero entries, each of the corresponding $f$ dimensions in bin $i$ will be decremented by $1/n$. This ensures that for each dimension the sum across all the bins is $0$.
\item Obtain $s(t+1)$ by sorting $r(t)$.
\end{itemize}
Fig.~\ref{fig:md-balls-bins} (in the Appendix~\ref{app:fig}) illustrates a multidimensional balls and bins scenario. The bounds on the gap will be proven for a family of probability distribution vectors $p$. As mentioned earlier, the md-bins are sorted based on their total dimensional load, i.e. sum of the weights across all dimensions for each bin ($s_i$ for bin $i$).
In the remaining analysis, we assume that when an md-ball arrives, the selection of the bins is based on $s_i$, i.e. the total sum of weights across all dimensions for the randomly selected bins (Fig.~\ref{fig:md-balls-bins} in Appendix~\ref{app:fig}). In particular, for the $d=2$ choice process, when $d$ bins are randomly selected, the md-ball (with $f$ non-zero entries) is assigned to the md-bin with the lowest $s_i$. Using this selection mechanism, we prove the upper and lower bounds on the gap obtained for the $d$-choice process. Note that this is a different allocation mechanism than the one considered in~\cite{md-mm}, where the \textit{max} criterion is used over the restricted set of $f$ populated dimensions in the current md-ball. Further, we prove an upper bound for the case $m \gg n$, while~\cite{md-mm} considers the case $m = O(n)$. The proofs below hold even for $d > 2$, though we present the case $d=2$ for the sake of clarity.
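The allocation mechanism described above can be sketched as a short simulation. This is our own illustration, not part of the paper; the parameter values ($n$, $D$, $f$, $m$) are arbitrary choices:

```python
import random

def simulate_gap(n=100, D=4, f=2, m=10000, d=2, seed=1):
    """Simulate the symmetric d-choice process with md-balls: each ball
    populates f of the D dimensions (chosen uniformly) and, among d
    uniformly sampled candidate bins, goes to the one with the smallest
    total load s_i. Returns max_i s_i minus the average total load."""
    rng = random.Random(seed)
    load = [[0] * D for _ in range(n)]
    for _ in range(m):
        dims = rng.sample(range(D), f)           # f populated dimensions
        candidates = [rng.randrange(n) for _ in range(d)]
        best = min(candidates, key=lambda i: sum(load[i]))
        for dim in dims:
            load[best][dim] += 1
    totals = [sum(row) for row in load]
    return max(totals) - sum(totals) / n

print(simulate_gap())
```

In such runs the $d=2$ gap stays a small constant even for $m \gg n$, whereas the single-choice ($d=1$) gap is markedly larger, consistent with the bounds proved below.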
\subsection{Upper Bound On Gap for Unweighted Case}\label{sec:unweighted}
\par Let there be constants $\epsilon > 0$, $\theta > 1$, and $\gamma_1, \gamma_2, \gamma_3, \gamma_4$, where $0 < \gamma_1 < \gamma_3 < 1/2 < \gamma_4 < \gamma_2 < 1$, $\theta\gamma_1 = 1$, $\gamma_1 + \gamma_2 = 1$, and $\gamma_3 + \gamma_4 = 1$. Since we consider the $d$-choice process, the probability of selecting the bins has a strong bias in favor of the lightly loaded bins. For $d=2$, this results in the following:
\begin{equation}\nonumber
\begin{aligned}
p_{(n\gamma_3)} & \le \frac{2n\gamma_3 - 1}{n^2}\\
p_{(n\gamma_4)} & \ge \frac{2n\gamma_4 - 1}{n^2}
\end{aligned}
\end{equation}
This implies that $\sum_{i \ge (n\gamma_2)} p_i \ge (1 - \gamma_2^2)$ and $\sum_{i \le (n\gamma_1)} p_i \le \gamma_1^2$. We assume that $\epsilon \le 1/4$. Further, let $\alpha = \epsilon/2f$. In the analysis below, we assume each md-ball has exactly $f$ populated dimensions ($f$ constant, the fixed-$f$ case). This is similar to the \textit{unweighted case} with scalar balls.
\par The md-bins can be arranged in a partial order, according to their $s_i$ values. Define an \textit{equi-load} group (say $p$) as a set of bins with the same $s_i$ value. Define the potential of an equi-load group ($p$) as $\Phi(G_p) = \sum_{k=0}^{|G_p|-1} \frac{e^{\alpha.s_k}}{p_0 + k}$, where $p_0$ is the beginning index for the group, $|G_p|$ is the size of the $p^{th}$ group and $e^{\alpha.s_k} = e^{\alpha.s_{k+1}}, \forall k \in [p_0..(p_0 + |G_p|-2)]$. The $n$ bins are partitioned into disjoint equi-load groups (total $|G|$ groups), i.e. each bin is assigned to only a single equi-load group. The group structure defined here helps in characterizing the change in index of the bin that gets the ball (after sorting).
\par Similarly, define another potential function for an equi-load group as, $\Psi(G_p) = \sum_{k=0}^{|G_p|-1} \frac{e^{-\alpha.s_k}}{p_0 + k}$. Now, define the following potential functions over all the groups:
\begin{equation}
\begin{aligned}
\Phi(t) & = \Phi(s(t)) = \sum_{p=1}^{|G|} \Phi(G_p)\\
\Psi(t) & = \Psi(s(t)) = \sum_{p=1}^{|G|} \Psi(G_p)\\
\Gamma(t) & = \Gamma(s(t)) = \Phi(t) + \Psi(t)\\
& = \sum_{i=1}^{n} [ \frac{e^{\alpha.s_i}}{i} + \frac{e^{-\alpha.s_i}}{i} ]\\
\end{aligned}
\end{equation}
where $s_i(t) = \sum_{d=1}^{D} x_i^d(t)$.
In the beginning, each dimension of each bin has weight $0$, thus $s_i = 0$ for all $i$, and hence $\Gamma(0) = 2\sum_{i=1}^{n} 1/i \approx 2\ln(n)$. We show that if $\Gamma(x(t)) \ge a\ln(n)$ for some $a > 0$, then $\mathbb{E}[\Gamma(t+1) | x(t)] \le (1 - \frac{\epsilon}{8n(1 + \epsilon\gamma_1)}) \Gamma(t)$. This helps in demonstrating that for every given $t$, $\mathbb{E}[\Gamma(t)] \in O(\ln(n))$. This implies that the maximum gap is $O(\ln\ln(n))$ w.h.p.
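The potential $\Gamma$ is simple to compute for a given load profile; a minimal sketch (ours, for illustration only) is:

```python
import math

def gamma_potential(s, alpha):
    """Gamma(s) = sum_i (e^{alpha*s_i} + e^{-alpha*s_i}) / i, with the
    centred total loads s ranked in decreasing order."""
    ranked = sorted(s, reverse=True)
    return sum((math.exp(alpha * x) + math.exp(-alpha * x)) / i
               for i, x in enumerate(ranked, start=1))

# At t = 0 every centred load is zero, so Gamma(0) equals twice the
# n-th harmonic number, i.e. approximately 2 ln(n).
n = 1000
print(gamma_potential([0.0] * n, alpha=0.25))
```

Any imbalance in the loads strictly increases the potential, which is why a bound on $\mathbb{E}[\Gamma(t)]$ translates into a bound on the gap.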
First, consider the changes in $\Phi(t)$ (abbreviated $\Phi$) and $\Psi(t)$ (abbreviated $\Psi$) separately when a ball is thrown with the given probability distribution.
\begin{lemma}\label{lemma-phi-base}
When an md-ball is thrown into an md-bin, the following inequality holds:
\begin{equation}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] \le \sum_{i=1}^{n} [ p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n ]. e^{\alpha.s_i}
\end{equation}
\end{lemma}
\begin{proof}
Let $\Delta_i$ be the expected change in $\Phi$ if the ball is put in bin $i$. So $r_i(t+1) = s_i(t) + f(1-1/n)$, and for $j \ne i$, $r_j(t+1) = s_j(t) - f/n$. The new values $s(t+1)$ are obtained by sorting $r(t+1)$, and $\Phi(s) = \Phi(r)$. When an md-ball is committed to bin $i$, the bin moves to the end of its previous equi-load group, or it creates a new equi-load group, and hence it can be located at index $p_0$ (the beginning location of its prior group) in the new sorted order of the bins. Thus, the expected contribution of bin $i$ to $\Delta_i$ is given as follows:
\begin{equation}\nonumber
\begin{aligned}
\EE[ \frac{e^{\alpha.(s_i + f(1-1/n))}}{p_0} ] - \frac{e^{\alpha.s_i}}{p_0}
& = \frac{e^{\alpha.s_i}}{p_0} [ e^{\alpha.f(1-1/n)} - 1 ]
\end{aligned}
\end{equation}
Similarly, the expected contribution of bin $j$ ($j \ne i$) to $\Delta_i$ is given as:
\begin{equation}\nonumber
\begin{aligned}
\EE[ \frac{e^{\alpha.(s_j - f/n)}}{j} ] - \frac{e^{\alpha.s_j}}{j}
& = \frac{e^{\alpha.s_j}}{j} [ e^{-\alpha.f/n} - 1 ]
\end{aligned}
\end{equation}
Therefore, $\Delta_i$ is given as follows:
\begin{equation}\nonumber
\begin{aligned}
\Delta_i & = \Phi_i [ e^{\alpha.f(1-1/n)} - 1] \, + \,
\sum_{j \ne i} \Phi_j(e^{-\alpha.f/n} - 1)\\
& = \Phi_ie^{-\alpha f/n} (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1).\Phi
\end{aligned}
\end{equation}
Thus, we get the overall expected change in $\Phi$ as follows:
\begin{equation}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] & = \sum_{i=1}^{n} p_i * \Delta_i\\
& = \sum_{i=1}^{n} p_i * [ \Phi_ie^{-\alpha f/n} (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1).\Phi ]\\
& = \sum_{i=1}^{n} p_i * e^{-\alpha f/n}\Phi_i (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1).p_i.\Phi\\
& = \sum_{i=1}^{n} [ p_i * e^{-\alpha.f/n} (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1) ]\Phi_i
\end{aligned}
\end{equation}
Now, $e^{-\alpha.f/n} (e^{\alpha.f} - 1)$ can be bounded as follows:
\begin{equation}\nonumber
\begin{aligned}
e^{-\alpha.f/n}(e^{\alpha.f} - 1) & \le
(1 - \alpha.f/n + (\alpha.f/n)^2) * (\alpha.f + (\alpha.f)^2)\\
& \le \alpha.f + (\alpha.f)^2 + O((\alpha.f)^2/n)\\
\Rightarrow e^{-\alpha.f/n}(e^{\alpha.f} - 1) & \lessapprox \alpha.f + (\alpha.f)^2
\end{aligned}
\end{equation}
Above, since $(\alpha.f)^2/n$ is very small for large $n$, we have ignored the lower-order terms. Similarly, $(e^{-\alpha.f/n} - 1) \lessapprox -\alpha.f/n$.
Hence, the expected change in $\Phi$ can be given by:
\begin{equation}\label{eq-phi-base}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ]
\le \sum_{i=1}^{n} [ p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n ]\Phi_i
\end{aligned}
\end{equation}
\end{proof}
Simplifying further and observing that $\Phi_i$ decreases and $p_i$ increases with increasing $i$ from $1$ to $n$, one gets the following Corollary.
\begin{corollary}\label{cor-phi-gen}
$\EE[ \Phi(t+1) - \Phi(t) | x(t) ] \le (\alpha.f)^2 * \Phi / n$
\end{corollary}
\begin{proof}
Since $p_i$ is increasing and $\Phi_i$ is decreasing, the RHS of equation~\eqref{eq-phi-base} is maximized when $p_i = 1/n$ for all $i \in [1..n]$. Simplifying, we get the result.
\end{proof}
Similarly, the change in $\Psi$ can be derived. For a detailed proof, refer to Appendix~\ref{app:1_proof}.
\begin{lemma}\label{lemma-psi-base}
When an md-ball is thrown into an md-bin, the following inequality holds:
\begin{equation}\label{eq-psi-base}
\EE[ \Psi(t+1) - \Psi(t) | x(t) ] \le
\sum_{i=1}^{n} [ p_i * (-\alpha.f + (\alpha.f)^2) + \alpha.f/n ]\Psi_i
\end{equation}
\end{lemma}
Further observing that $p_i > 0$, one gets the following Corollary.
\begin{corollary}\label{cor-psi-gen}
$\EE[ \Psi(t+1) - \Psi(t) | x(t) ] \le (\alpha.f.\Psi)/ n$
\end{corollary}
In the next two lemmas, Lemma~\ref{lemma-phi-easy-case} and Lemma~\ref{lemma-psi-easy-case}, we consider a reasonably balanced md-bins scenario. We show that for such cases, the expected potential decreases. Specifically, for $s_{(n\gamma_2)} \le 0$, the expected value of $\Phi$ decreases and for $s_{(n\gamma_1)} \ge 0$, the expected value of $\Psi$ decreases.
\begin{lemma}\label{lemma-phi-easy-case}
Let $\Phi$ be defined as above. If $s_{(n\gamma_2)}(t) < 0$ then, $\EE[ \Phi(t+1) | x(t) ] \le (1-\frac{\alpha f}{2n})\Phi$
\end{lemma}
\begin{proof}
From equation~\eqref{eq-phi-base}, we get,
\begin{equation}\label{eq-phi-1}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t)] & \le \sum_{i=1}^{n} (p_i * (\alpha f + (\alpha f)^2) - \alpha f/n). \Phi_i\\
& \le \sum_{i < n\gamma_2} (p_i * (\alpha f + (\alpha f)^2)).\Phi_i - \frac{\alpha f\Phi}{n} +
\sum_{i \ge n\gamma_2} p_i * (\alpha f + (\alpha f)^2).\Phi_i
\end{aligned}
\end{equation}
Now, we need to upper bound the term $\sum_{i < n\gamma_2} (p_i * (\alpha.f + (\alpha.f)^2).\frac{e^{\alpha.s_i}}{i})$. Since $p_i$ is non-decreasing and $\Phi_i$ is non-increasing, the maximum value is achieved when $e^{\alpha s_i} ( \sum_{i=1}^{n\gamma_2} 1/i) = \Phi$ for each $i < n\gamma_2$, i.e., when $e^{\alpha s_i} = \frac{\Phi}{\ln(n\gamma_2)}$. Hence, the maximum value is given as follows.
\begin{equation}
\begin{aligned}
\sum_{i=1}^{n\gamma_2} p_i\Phi_i & \le \frac{\Phi}{\ln(n\gamma_2)} * \sum_{i=1}^{n\gamma_2} [ \frac{2i-1}{n^2} * \frac{1}{i} ] \\
& \le \frac{2\gamma_2\Phi}{n\ln(n\gamma_2)} - \frac{\Phi}{n^2}
\end{aligned}
\end{equation}
Similarly, one can upper bound the term $\sum_{i \ge n\gamma_2} (p_i \frac{e^{\alpha.s_i}}{i})$. Since $p_i$ is non-decreasing and $\Phi_i$ is non-increasing, the maximum value is achieved when $e^{\alpha s_i} ( \sum_{i=(n\gamma_2)}^{n} 1/i) = \Phi_{(\ge n\gamma_2)}$ for each $i \ge n\gamma_2$, i.e., when $e^{\alpha s_i} = \frac{\Phi_{(\ge n\gamma_2)}}{\ln(1/\gamma_2)}$. Hence, the required upper bound is given as follows.
\begin{equation}
\begin{aligned}
\sum_{i=n\gamma_2}^{n} p_i\Phi_i & \le \frac{\Phi_{(\ge n\gamma_2)}}{\ln(1/\gamma_2)} * \sum_{i=n\gamma_2}^{n} [ \frac{2i-1}{n^2} * \frac{1}{i} ] \\
& \le \frac{2(1-\gamma_2)\Phi}{n\ln(n)} \quad \because \quad \Phi_{(\ge n\gamma_2)} \le \frac{\Phi}{\ln(n)}
\end{aligned}
\end{equation}
Thus, the expected change in $\Phi$ can be computed, using equation~\eqref{eq-phi-1} and the above bound, as follows:
\begin{equation}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t)] & \le (\alpha.f + (\alpha.f)^2)(\frac{2\gamma_2\Phi}{n\ln(n\gamma_2)} - \frac{\Phi}{n^2}) - \alpha.f/n * \Phi + (\alpha.f + (\alpha.f)^2)* \frac{2(1-\gamma_2)\Phi}{n\ln(n)}\\
& \le \frac{(\alpha.f)\Phi}{n}(\frac{2\gamma_2}{\ln(n\gamma_2)} - 1) + \frac{\epsilon^2\gamma_2\Phi}{2n\ln(n\gamma_2)} + \alpha.f\frac{2(1-\gamma_2)\Phi}{n\ln(n)}\\
& \le \frac{-\alpha f\Phi}{2n}
\end{aligned}
\end{equation}
\end{proof}
\begin{lemma}\label{lemma-psi-easy-case}
Let $\Psi$ be defined as above. If $s_{(n\gamma_1)}(t) \ge 0$ then,
$\EE[ \Psi(t+1) | x(t) ] \le (1-\frac{\alpha f}{8n})\Psi$
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lemma-phi-easy-case}. See Appendix~\ref{app:2_proof} for details of the proof.
\end{proof}
Now, we consider the remaining cases and show that, in case the load across the bins at time $t$ is not reasonably balanced, then for $s_{n\gamma_2} > 0$, either $\Psi$ dominates $\Phi$ or $\Gamma < c$, where $c = poly(1/\epsilon)$.
\begin{lemma}
\label{lemma-s-gamma-phi}
Let $s_{(n\gamma_2)} > 0$ and $\, \EE[\Delta\Phi|x(t)] \ge -\alpha f\Phi/2n$. Then, either $\Phi < \epsilon\gamma_1 * \Psi$, or $\Gamma < c$ for some $c = poly(1/\epsilon)$.
\end{lemma}
\begin{proof}
From equation~\eqref{eq-phi-base}, we get:
\begin{equation}
\begin{aligned}
\EE[\Delta\Phi | x(t)] & \le \sum_{i=1}^{n} (p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n).\Phi_i\\
& \le \sum_{i \le n\gamma_3} (p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n).\Phi_i + \sum_{i > n\gamma_3} (p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n) * \Phi_i\\
& \le [(\alpha.f + (\alpha.f)^2)*\sum_{i \le (n\gamma_3)} \frac{2i-1}{n^2i}].\frac{\Phi_{(\le n\gamma_3)}}{\ln(n\gamma_3)} +\\
& ((\sum_{i > (n\gamma_3)} \frac{2i-1}{n^2i}))* (\alpha.f + (\alpha.f)^2)\frac{\Phi_{(> n\gamma_3)}}{\ln(1/\gamma_3)} - \frac{\alpha f\Phi}{n}\\
& \le \frac{\alpha f\Phi_{\le n\gamma_3}}{n} [\frac{2\gamma_3}{\ln(n\gamma_3)} - 1 - 1/n] + \frac{\alpha f\Phi_{> n\gamma_3}}{n} [\frac{2\gamma_4}{\ln(1/\gamma_3)} - 1 - 1/n]\\
& \le \frac{\alpha f\Phi_{\le n\gamma_3}}{n} [\frac{2\gamma_3 - \ln(n\gamma_3)}{\ln(n\gamma_3)}] + \frac{\alpha f\Phi_{>n\gamma_3}}{n} [\frac{2\gamma_4 - \ln(1/\gamma_3)}{\ln(1/\gamma_3)}]
\end{aligned}
\end{equation}
Now, since $\EE[\Delta\Phi|x(t)] \ge -\alpha f\Phi/2n$, we get: $\Phi \le 4\Phi_{(> n\gamma_3)} [\frac{\gamma_4\ln(n)}{\ln(n\gamma_3)\ln(1/\gamma_3)}]$.
Let $B = \sum_{i} \max(0, s_i)$. Note that $\sum_{i} s_i = 0$, since for each dimension $d$, the update maintains that $\sum_{i=1}^{n} x_i^d(t) = 0$. Further, because $s_{n\gamma_3} > 0$, $\Phi_{(>n\gamma_3)} \le \ln(1/\gamma_3) * e^{(\frac{\alpha.B}{n\gamma_3})}$. This implies that $\Phi \le \frac{4\gamma_4\ln(n) e^{(\frac{\alpha.B}{n\gamma_3})}}{\ln(n\gamma_3)}$.
Since, $s_{(n\gamma_2)} > 0$, so, $\Psi \ge \ln(n\gamma_2) * e^{\frac{\alpha.B}{n\gamma_1}}$. If $\Phi < \epsilon\gamma_1 * \Psi$, then we are done. Else, $\Phi \ge \epsilon\gamma_1 * \Psi$. This implies:
\begin{equation}\nonumber
\frac{4\gamma_4\ln(n)e^{\frac{\alpha.B}{n\gamma_3}}}{\ln(n\gamma_3)} \ge \Phi \ge \epsilon\gamma_1 * \Psi \ge \epsilon\gamma_1\ln(n\gamma_2) * e^{\frac{\alpha.B}{n\gamma_1}}
\end{equation}
Thus, $e^{\alpha.B/n} \le (\frac{4\gamma_4}{\epsilon\gamma_1})^{\frac{\gamma_1\gamma_3}{(\gamma_3 - \gamma_1)}}$. So, $\Gamma \le \frac{1 + \theta}{\epsilon} * \Phi \le \frac{1 + \theta}{\epsilon} * \frac{4\gamma_4\ln(n)}{\ln(n\gamma_3)} * e^{\frac{\alpha.B}{n\gamma_3}} \le \frac{1 + \theta}{\epsilon} * \frac{4\gamma_4\ln(n)}{\ln(n\gamma_3)} * (\frac{4\gamma_4}{\epsilon\gamma_1})^{\frac{\gamma_1}{(\gamma_3 - \gamma_1)}}$. Hence, $\Gamma < c$, where $c = poly(1/\epsilon)$.
\end{proof}
In the lemma below, we consider the case where the load across the bins at time $t$ is not reasonably balanced and $s_{(n\gamma_1)} < 0$. Here, we show that either $\Phi$ dominates $\Psi$ or the potential function is less than $c\ln(n)$ for $c = poly(1/\epsilon)$.
\begin{lemma} \label{lemma-s-gamma-psi}
Let $s_{(n\gamma_1)} < 0$ and $\EE[\Delta\Psi|x(t)] \ge -\alpha f\Psi/8n$. Then, either $\Psi < \epsilon\gamma_1 * \Phi$, or $\Gamma < c\ln(n)$ for some $c = poly(1/\epsilon)$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lemma-s-gamma-phi}. See Appendix~\ref{app:3_proof} for details of the proof.
\end{proof}
Now, we combine the cases considered so far and show that the potential function, $\Gamma$, behaves as a super-martingale.
\begin{theorem}\label{thm-super-mart}
For the potential function, $\Gamma$, $\EE[ \Gamma(t+1) | x(t)] \le (1 - \frac{\epsilon}{24n(1 + \epsilon\gamma_1)})\Gamma(t) + \frac{c\ln(n)}{n}$, for constant $c = poly(1/\epsilon)$.
\end{theorem}
\begin{proof}
We consider the following cases on intervals of values for $s_i$.
\begin{itemize}
\item \textbf{Case 1:} $s_{(n\gamma_1)} \ge 0$ and $s_{(n\gamma_2)} \le 0$. Using Lemma~\ref{lemma-phi-easy-case} and Lemma~\ref{lemma-psi-easy-case}, we immediately see that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \epsilon^2/16n)\Gamma(t)$, and hence the result holds.\\
\item \textbf{Case 2:} $s_{n\gamma_1} \ge s_{n\gamma_2} > 0$. This represents a high load imbalance across the bins. In some cases, $\Phi$ may grow but the asymmetry in the load implies that $\Gamma$ is dominated by $\Psi$. Thus, the decrease in $\Psi$ offsets the increase in $\Phi$ and hence the expected change in $\Gamma$ is negative.\\
Specifically, if $\EE[\Delta\Phi|x] \le \frac{-\alpha f\Phi}{2n}$, then using Lemma~\ref{lemma-psi-easy-case} we get that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \alpha f/8n)\Gamma(t)$; else we consider the following two cases:\\
\begin{itemize}
\item \textbf{Case 2a:} $\Phi < \epsilon\gamma_1 * \Psi$. Here, using Lemma~\ref{lemma-psi-easy-case} and Corollary~\ref{cor-phi-gen}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & = \EE[\Delta\Phi|x] + \EE[\Delta\Psi|x]\\
& \le \frac{(\alpha.f)^2}{n}.\Phi - \frac{\alpha f}{8n} * \Psi\\
& \le -\frac{\epsilon}{24n}.\Psi\\
& \le - \frac{\epsilon}{24n(1 + \epsilon\gamma_1)}\Gamma
\end{aligned}
\end{equation}
\item \textbf{Case 2b:} $\Gamma < c\ln(n)$. Here, using Corollary~\ref{cor-psi-gen} and Corollary~\ref{cor-phi-gen}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & \le \alpha.f/n * \Gamma\\
& \le \frac{c\alpha.f\ln(n)}{n}
\end{aligned}
\end{equation}
But, $\frac{c\ln(n)}{n} - \frac{\epsilon}{4n}\Gamma \ge \frac{c\ln(n)}{n}(1 - \epsilon/4) \ge \frac{c\ln(n)}{n}(1 - \epsilon/2) \ge \frac{c\alpha.f\ln(n)}{n}$. Hence, $\EE[\Delta\Gamma|x] \le - \frac{\epsilon\Gamma}{4n} + \frac{c\ln(n)}{n}$.
\end{itemize}
\item \textbf{Case 3:} $s_{n\gamma_2} \le s_{n\gamma_1} < 0$. Here, if $\EE[\Delta\Psi|x] \le \frac{-\epsilon}{16n}\Psi$, then using Lemma~\ref{lemma-phi-easy-case}, we get that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \epsilon/16n)\Gamma(t)$; else we consider the following two cases:
\begin{itemize}
\item \textbf{Case 3a:} $\Psi < \epsilon\gamma_1 * \Phi$. Here, using Lemma~\ref{lemma-phi-easy-case} and Corollary~\ref{cor-psi-gen}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & = \EE[\Delta\Phi|x] + \EE[\Delta\Psi|x]\\
& \le -(\epsilon/4n)\Phi + \alpha.f/n * \Psi\\
& \le -(\epsilon/4n).\Phi + (\gamma_1\epsilon^2/2n)\Phi\\
& \le \frac{-\epsilon}{8n}\Phi\\
& \le \frac{-\epsilon}{(8n(1+\epsilon\gamma_1))}*\Gamma
\end{aligned}
\end{equation}
\item \textbf{Case 3b:} $\Gamma < c\ln(n)$. Here, using Corollary~\ref{cor-phi-gen} and Corollary~\ref{cor-psi-gen}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & \le \alpha.f/n * \Gamma\\
& \le c\alpha.f\ln(n)/n
\end{aligned}
\end{equation}
Hence, this case follows similarly as \textit{Case 2b} above.
\end{itemize}
\end{itemize}
\end{proof}
Now, we can prove using induction that the expected value of $\Gamma$ remains bounded.
\begin{theorem}\label{thm-pot-up-bound}
For any time $t \ge 0$, $\EE[\Gamma(t)] \le \frac{24c(1 + \epsilon\gamma_1)}{\epsilon}\ln(n)$
\end{theorem}
\begin{proof}
We prove the claim by induction on $t$. For $t = 0$, it holds trivially since $\Gamma(0) = 2\ln(n)$.
Assuming the bound holds at time $t$ and using Theorem~\ref{thm-super-mart}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Gamma(t+1)] & = \EE[\EE[\Gamma(t+1)| x(t)]]\\
& \le \EE[ (1 - \frac{\epsilon}{24n(1+\epsilon\gamma_1)})\Gamma(t) + \frac{c\ln(n)}{n}]\\
& \le \frac{24c(1 + \epsilon\gamma_1)}{\epsilon}\ln(n) - c\frac{\ln(n)}{n} + \frac{c\ln(n)}{n}\\
& \le \frac{24c(1 + \epsilon\gamma_1)}{\epsilon}\ln(n)
\end{aligned}
\end{equation}
\end{proof}
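As a quick numerical illustration (a sketch under illustrative parameter values, not part of the proof), the recurrence of Theorem~\ref{thm-super-mart} can be iterated directly; starting below the fixed point $\mathrm{drift}/\beta$, the iterate never exceeds it:

```python
def iterate_recurrence(g, beta, drift, steps):
    """Iterate g <- (1 - beta) * g + drift, the super-martingale-style
    recurrence E[Gamma(t+1)] <= (1 - beta) E[Gamma(t)] + drift.
    Its fixed point is drift / beta."""
    for _ in range(steps):
        g = (1 - beta) * g + drift
    return g

# Illustrative values only: beta stands in for eps/(24n(1 + eps*gamma_1))
# and drift for c*ln(n)/n; the fixed point drift/beta is then
# (24c(1 + eps*gamma_1)/eps) * ln(n), matching Theorem thm-pot-up-bound.
g_final = iterate_recurrence(0.0, beta=0.01, drift=0.5, steps=10000)
```

Starting from any value below the fixed point, the sequence increases monotonically toward it but never crosses it, mirroring the induction above.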
Now, we can upper bound the gap over all $D$ dimensions and all $n$ md-bins. This gap is defined as follows:
\begin{equation}
Gap(t) = \max_{d=1}^{D} [ \max_{i=1}^{n} x_i^d ]
\end{equation}
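To build intuition for this quantity, the sequential $d$-choice process and $Gap(t)$ can be simulated directly. The following minimal sketch uses our own function names and assumes $d = 2$ candidate bins with selection by smallest dimensional sum load; it is illustrative only:

```python
import random

def throw_md_balls(m, n, D, f, d=2, rng=random):
    """Sequential d-choice for md-balls: each ball populates f dimensions
    chosen uniformly from D and commits to the candidate bin (out of d
    uniform picks) with the smallest dimensional sum load."""
    load = [[0] * D for _ in range(n)]
    for _ in range(m):
        dims = rng.sample(range(D), f)
        cands = [rng.randrange(n) for _ in range(d)]
        best = min(cands, key=lambda b: sum(load[b]))
        for dim in dims:
            load[best][dim] += 1
    return load

def md_gap(load):
    """Gap(t): the largest per-dimension deviation above that dimension's average."""
    n = len(load)
    return max(max(row[dim] for row in load) - sum(row[dim] for row in load) / n
               for dim in range(len(load[0])))

loads = throw_md_balls(m=2000, n=50, D=8, f=2, rng=random.Random(1))
```

Here `md_gap` subtracts the per-dimension average, matching the normalized loads $x_i^d$ used in the definition above.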
\begin{theorem}\textbf{Fixed $f$ Case:}\label{thm-fixed-f}
Using the bias in the probability distribution in favor of lightly loaded md-bins as given by the $d$-choice algorithm, and assuming that exactly $f$ dimensions are populated in each md-ball, chosen uniformly from the $D$ dimensions, the expected and probabilistic upper bounds on the gap (maximum dimensional gap) across the multidimensional bins are given as follows. Let $\delta = \frac{24c(1 + \epsilon\gamma_1)}{\epsilon}$; then:
\begin{equation}\nonumber
\begin{aligned}
E[Gap(t)] & \le 2\log\log(n)/\epsilon + 2\log\log(\delta)/\epsilon\\
Pr[Gap(t) & > (\frac{mf}{nD})^{1/2+\zeta}*(4\log\log(n)/\epsilon + 4\log\log(\delta)/\epsilon)] \le D/mf\\
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
Let $a$ be the winning md-bin and $w$ be the winning dimension that represents $Gap(t)$ (we write $w$ rather than $m$, since $m$ already denotes the number of balls). Now,
from Theorem~\ref{thm-pot-up-bound}, we get $\EE[ e^{\alpha.s_a} ] \le \delta\log(n)$. So,
$\EE[ e^{\alpha.(x_a^w + \sum_{d \ne w} x_a^d)} ] \le \delta\log(n)$.
Let $y_a$ denote the gap as measured by the number of md-balls in bin $a$ minus the average number of balls across the bins. Then,
\begin{equation}\label{eq:sum-bound}
\begin{aligned}
E[s_a] & \le 1/\alpha * \log\log(n) + 1/\alpha * \log\log(\delta)\\
\Rightarrow E[s_a] & \le 2f\log\log(n)/\epsilon + O(2f\log\log(\delta)/\epsilon)\\
\Rightarrow f.E[y_a] & \le 2f\log\log(n)/\epsilon + O(2f\log\log(\delta)/\epsilon)\\
\Rightarrow E[y_a] & \le 2\log\log(n)/\epsilon + O(2\log\log(\delta)/\epsilon)
\end{aligned}
\end{equation}
The third inequality uses the fact that each ball has exactly $f$ populated dimensions. Since the $f$ dimensions are chosen uniformly and randomly from $D$ dimensions, the expected gap in any dimension (and hence the winning dimension with the maximum gap) is bounded by $O(\log\log(n))$.
Now, consider the case of a non-uniform distribution, where we assume that each dimension is chosen with probability at most $\kappa_2$ in each md-ball and each md-ball still has fixed $f$ populated dimensions. Here, one can see that the expected gap can be bounded by $O(\kappa_2\log\log(n))$.
Now, $Pr[s_a > 4f\log\log(n)/\epsilon + 4f\log\log(\delta)/\epsilon] \le
Pr[ \Gamma(t) \ge n\EE[\Gamma(t)]] \le 1/n$ (using Markov's inequality), where $s_a = \sum_{d=1}^{D} x_a^d$. Further, the probability that, within a single md-bin, a particular dimension has more than the expected number of $1$s can be bounded by a Chernoff bound as follows. Let $m/n$ balls be thrown into an md-bin. The number of ones in any dimension follows a Binomial distribution, $B(m/n, f/D)$. Using the Chernoff bound, and setting $t = (\frac{mf}{nD})^{1/2 + \zeta}$, we have:
\begin{equation}
\begin{aligned}
Pr[B(m/n, f/D) > (mf/nD + t)] & \le (\frac{mf/nD}{mf/nD + t})^{mf/nD + t} * e^{t}\\
\Rightarrow Pr[B(m/n, f/D) > (mf/nD + t)] & \le nD/mf
\end{aligned}
\end{equation}
Hence, $Pr[y_a > (\frac{mf}{nD})^{1/2 + \zeta}*(4\log\log(n)/\epsilon + 4\log\log(\delta)/\epsilon)] \le 1/n * nD/mf = D/mf$.
\end{proof}
\subsection{Lower Bound for Unweighted Case}
We can show that the expected upper bound for the fixed $f$ case with uniform distribution, proved in Section~\ref{sec:unweighted}, is tight to within a factor of $f/D$. Consider the case when $m$ balls are thrown into $n$ bins using the $d$-choice process. The expected dimensional sum load per bin is $fm/n$. Berenbrink et al.~\cite{petra-heavy-case} show that when $m \gg n$ balls are thrown using the $d$-choice process into $n$ bins, the load of the most loaded bin is at least $\Omega(\ln\ln(n))$ balls more than the average $m/n$. Thus, for md-balls the sum load of the most loaded md-bin is at least $\Omega(f\ln\ln(n) + fm/n)$. Since each ball has $f$ populated dimensions, there are at least $\Omega(\ln\ln(n) + m/n)$ balls in this maximum sum load bin. Since in each ball the $f$ populated dimensions are uniformly distributed over the $D$ dimensions, there exists a dimension whose load is at least $\Omega(f\ln\ln(n)/D)$ more than the average $mf/nD$. Hence, the lower bound is $\Omega(f\ln\ln(n)/D)$.
\subsection{Parallel Multidimensional Balls \& Bins: Unweighted Case}
\par Consider the following parallel $d$-choice process. Let $m$ balls be thrown in parallel into $n$ bins using the $d$-choice process. In each round, a bin sends the (ball's) rank to the ball with the lowest ID. The ball chooses the bin (out of the $d$ bins it selected) that gives the lowest rank. It can be shown that this parallel process produces exactly the same distribution of balls in the bins as the sequential \textit{Greedy with Ties} process~\cite{adler95}. In the sequential \textit{Greedy with Ties} process, when multiple chosen bins share the same lowest load, all of these bins get the ball. Using the potential function analysis as above, we can show that the gap in this case can also be bounded by $O(\log\log(n))$. We provide an overview of the proof below.
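A minimal simulation sketch of the (scalar) Greedy-with-Ties process described above, with our own function names; it is illustrative only:

```python
import random

def greedy_with_ties(m, n, d=2, rng=random):
    """Sequential Greedy with Ties: each ball samples d bins (duplicates
    collapsed); every sampled bin whose load equals the minimum load among
    the samples receives a copy of the ball."""
    load = [0] * n
    for _ in range(m):
        cands = {rng.randrange(n) for _ in range(d)}
        lo = min(load[b] for b in cands)
        for b in cands:
            if load[b] == lo:
                load[b] += 1
    return load

loads = greedy_with_ties(m=5000, n=100, rng=random.Random(7))
```

Note that, because ties replicate the ball, the total load can exceed $m$; this replication is exactly what the factor $d$ accounts for in the lemmas below.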
Consider separately the change in $\Phi(t)$ and $\Psi(t)$ (written $\Phi$ and $\Psi$ when the time is clear from context) when a ball is thrown with the given probability distribution.
\begin{lemma}\label{lemma-phi-base-par}
When an md-ball is thrown into an md-bin, the following inequality holds:
\begin{equation}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] \le \sum_{i=1}^{n} [ d*p_i * (\alpha.f + (\alpha.f)^2) - d\alpha.f/n ]\,\Phi_i
\end{equation}
\end{lemma}
\begin{proof}
In the Greedy with Ties process, some number (at most $d$) of bins, each with the same load (and hence belonging to the same equi-load group), can get the (\textit{replicated}) ball. In the worst case, all the $d$ randomly selected bins chosen by the ball have the same load and hence all get the md-ball. All of these md-bins then move to the previous equi-load group, or a new equi-load group is created. Let $\Delta$ be the expected change in $\Phi$ when the ball is put in a certain number (at most $d$) of bins. If one of these bins is $i$, then $r_i(t+1) = s_i + f(1-d/n)$. For bins $j \ne i$ that do not get the md-ball, $r_j(t+1) = s_j(t) - df/n$. The new values, i.e. $s(t+1)$, are obtained by sorting $r(t+1)$, and $\Phi(s) = \Phi(r)$. Thus, the expected contribution of bin $i$ to $\Delta$ is given as follows:
\begin{equation}\nonumber
\begin{aligned}
\EE[ \frac{e^{\alpha.(s_i + f(1-d/n))}}{i} ] - \frac{e^{\alpha.s_i}}{i}
& = \frac{e^{\alpha.s_i}}{i} [ e^{\alpha.f(1-d/n)} - 1 ]
\end{aligned}
\end{equation}
Similarly, the expected contribution of a bin $j$ ($j \ne i$) that does not get the ball to $\Delta$ is given as:
\begin{equation}\nonumber
\begin{aligned}
\EE[ \frac{e^{\alpha.(s_j - df/n)}}{j} ] - \frac{e^{\alpha.s_j}}{j}
& = \frac{e^{\alpha.s_j}}{j} [ e^{-\alpha.df/n} - 1 ]
\end{aligned}
\end{equation}
Assuming that the bins that get the replicated ball are $i_1, i_2, \ldots, i_d$, $\Delta$ is given as follows:
\begin{equation}\nonumber
\begin{aligned}
\Delta & = d\Phi_i [ e^{\alpha.f(1-d/n)} - 1] \, + \,
\sum_{j \notin \{i_1, \ldots, i_d\}} \Phi_j(e^{-\alpha.df/n} - 1)\\
& = d\Phi_ie^{-\alpha df/n} (e^{\alpha.f} - 1) + (e^{-\alpha.df/n} - 1).\Phi
\end{aligned}
\end{equation}
Thus, we get the overall expected change in $\Phi$ as follows:
\begin{equation}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] & = \sum_{i=1}^{n} p_i * \Delta\\
& = \sum_{i=1}^{n} p_i * [ d\Phi_ie^{-\alpha df/n} (e^{\alpha.f} - 1) + (e^{-\alpha.fd/n} - 1).\Phi ]\\
& = \sum_{i=1}^{n} p_i * de^{-\alpha df/n}\Phi_i (e^{\alpha.f} - 1) + (e^{-\alpha.df/n} - 1).p_i.\Phi\\
& = \sum_{i=1}^{n} [ p_i * de^{-\alpha.df/n} (e^{\alpha.f} - 1) + (e^{-\alpha.df/n} - 1) ]\Phi_i
\end{aligned}
\end{equation}
Now, $e^{-\alpha.df/n} (e^{\alpha.f} - 1)$ can be bounded as follows:
\begin{equation}\nonumber
\begin{aligned}
e^{(-\alpha.df/n)}.(e^{\alpha.f} - 1) & \le
(1 - \alpha.df/n + (\alpha.df/n)^2) * (1 + \alpha.f + (\alpha.f)^2 - 1)\\
& \sim \alpha.f + (\alpha.f)^2 + O((\alpha.df)^2/n)\\
\Rightarrow e^{(-\alpha.df/n)}.(e^{\alpha.f} - 1) & \lessapprox (\alpha.f + (\alpha.f)^2)
\end{aligned}
\end{equation}
Above, since $(\alpha.df)^2/n$ is very small for large $n$, we have ignored the lower-order terms. Similarly, $(e^{-\alpha.df/n} - 1) \lessapprox -\alpha.df/n$.
Hence, the expected change in $\Phi$ can be given by:
\begin{equation}\label{eq-phi-base-par}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ]
\le \sum_{i=1}^{n} [ p_i * d(\alpha.f + (\alpha.f)^2) - \alpha.df/n ]\Phi_i
\end{aligned}
\end{equation}
\end{proof}
Similarly, one can show the following.
\begin{lemma}\label{lemma-psi-base-par}
When an md-ball is thrown into an md-bin, the following inequality holds:
\begin{equation}
\EE[ \Psi(t+1) - \Psi(t) | x(t) ] \le \sum_{i=1}^{n} [ d*p_i * (-\alpha.f + (\alpha.f)^2) + d\alpha.f/n ]. e^{-\alpha.s_i}
\end{equation}
\end{lemma}
Following similar lines of proof as for the sequential multidimensional case, one can hence show that:\\
\begin{theorem}\label{thm-pot-up-bound-par}
For any time $t \ge 0$, $\EE[\Gamma(t)] \le \frac{24cd(1 + \epsilon\gamma_1)}{\epsilon}\log(n)$
\end{theorem}
Thus, this parallel balls and bins process with $m \gg n$ balls and $n$ bins takes $O(\frac{m}{n} + \log\log(n))$ rounds and results in a maximum bin load of $O(\frac{m}{n} + \log\log(n))$, giving an upper bound on the gap of $O(\log\log(n))$. Hence, one can derive (using Theorem~\ref{thm-fixed-f}) that the gap for the multidimensional parallel scenario is also bounded by $O((mf/nD)^{(1/2+\zeta)}\log\log(n))$ with high probability.
\subsection{Upper Bound On Gap: Weighted Case}\label{sec:weighted}
\par Here, we consider the case when the multidimensional balls have a variable number of populated dimensions, $f$. The sum of the dimensional loads in an md-ball, $f$, is thus a random variable. We assume that the distribution of $f$ has a finite second moment and mean $f^*$, and that there is a
$\lambda > 0$ such that the moment generating function $M[\lambda] =
E[e^{\lambda.f} ] < \infty$. Note that $M''(z) = E[f^2e^{zf}] \le \sqrt{E[f^4]E[e^{2zf}]}$. The above assumption implies that there is an $S \ge 1$
such that for every $|z| < \lambda/2$ it holds that $M''(z) < 2S$. Our analysis below is primarily for integer-valued $f$ and for the multidimensional case. However, it can easily be seen that a similar analysis holds for scalar balls and bins with real-valued weight $W$ per ball and $E[W] = 1$ (still assuming that the distribution of $W$ has a finite second moment).
\par The weighted case is more challenging than the unweighted case, since we have to carefully consider the change in the rank of a bin when an md-ball of total dimensional load (weight) $f$ falls in it, as the change in rank could increase the potential by a large amount. Thus, the potential function used in Section~\ref{sec:unweighted} might not work in this case, and we need to devise a new one. Assume that $\epsilon \le 1/4$. Further, let $\alpha = \min(\frac{\epsilon}{6S}, \frac{2}{\lambda}, \frac{\epsilon}{2f^*})$. Define the following potential functions over the bins:
\begin{equation}
\begin{aligned}
\Phi(t) & = \Phi(s(t)) = \sum_{i=1}^{n} \frac{e^{\alpha.s_i}}{n^2+i}\\
\Psi(t) & = \Psi(s(t)) = \sum_{i=1}^{n} \frac{e^{-\alpha.s_i}}{n^2+n-i+1}\\
\Gamma(t) & = \Gamma(s(t)) = \Phi(t) + \Psi(t)\\
& = \sum_{i=1}^{n} [ \frac{e^{\alpha.s_i}}{n^2+i} + \frac{e^{-\alpha.s_i}}{n^2+n-i+1} ]\\
\end{aligned}
\end{equation}
where $s_i(t) = \sum_{d=1}^{D} x_i^d(t)$.
In the beginning, each dimension of each bin has $0$ weight, thus $s_i = 0, \forall i$ and hence $\Gamma(0) \le 2(n/(n^2 + 1)) \le 2/n$. We show that if $\Gamma(x(t)) \ge a/n$ for some $a > 0$, then $\mathbb{E}[\Gamma(t+1) | x(t)] \le (1 - \frac{\alpha.f^*}{16n(1 + \epsilon\gamma_1)})\Gamma(t)$. This helps in demonstrating that, for every given $t$, $\mathbb{E}[\Gamma(t)] \in O(1/n)$, which in turn implies that the maximum gap is $O(\log(n))$ w.h.p.
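As with the unweighted case, these potentials can be evaluated numerically. A minimal sketch with our own function names (the all-zero initial state recovers $\Gamma(0) \le 2/n$):

```python
import math

def weighted_potentials(s, alpha):
    """Weighted-case potentials: the bin of 1-based rank i (loads sorted
    non-increasing) contributes e^{alpha*s_i}/(n^2+i) to Phi and
    e^{-alpha*s_i}/(n^2+n-i+1) to Psi."""
    n = len(s)
    s = sorted(s, reverse=True)
    phi = sum(math.exp(alpha * si) / (n * n + i + 1) for i, si in enumerate(s))
    psi = sum(math.exp(-alpha * si) / (n * n + n - i) for i, si in enumerate(s))
    return phi, psi, phi + psi

# With all loads zero, Gamma(0) = 2 * sum_i 1/(n^2+i) <= 2/n.
_, _, gamma0 = weighted_potentials([0.0] * 100, alpha=0.05)
```

The $n^2$ offset in the denominators is what damps the rank-jump contributions analyzed in the weighted lemmas below.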
First, consider separately the change in $\Phi(t)$ and $\Psi(t)$ (written $\Phi$ and $\Psi$ when the time is clear from context) when a ball is thrown with the given probability distribution. Let there be constants $0 < \gamma_1 < \gamma_2 < 1/2 < \gamma_4 < \gamma_3$, such that $\gamma_2 + \gamma_3 > 1$, $\gamma_1 + \gamma_4 < 1$, and $\gamma_2 < 7/16$.
\begin{lemma}\label{lemma-phi-base-w}
When an md-ball is thrown into an md-bin, the following inequality holds, where $\Phi_i = \frac{e^{\alpha.s_i}}{n^2+i}$:
\begin{equation}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] \le \sum_{i=1}^{n} [ p_i * (\alpha.f^* + 1/n + S\alpha^2) - \alpha.f^*/n ]\,\Phi_i
\end{equation}
\end{lemma}
\begin{proof}
Let $\Delta_i$ be the expected change in $\Phi$ if the ball is put in bin $i$. So, $r_i(t+1) = s_i + f(1-1/n)$; and for $j \ne i$, $r_j(t+1) = s_j(t) - f/n$. The new values, i.e. $s(t+1)$, are obtained by sorting $r(t+1)$, and $\Phi(s) = \Phi(r)$. When an md-ball is committed to bin $i$, it jumps to an index $i_{new}$ which is less than or equal to $i$ in the new bin order. Thus, the expected contribution of bin $i$ to $\Delta_i$ is given as follows:
\begin{equation}\label{eq-jump}
\begin{aligned}
\EE[ \frac{e^{\alpha.(s_i + f(1-1/n))}}{n^2 + i_{new}} ] - \frac{e^{\alpha.s_i}}{n^2 + i}\\
& \le \frac{e^{\alpha.s_i}}{n^2 + i} [ \frac{M(\alpha(1-1/n))(n^2+n)}{n^2 + 1} - 1 ]\\
& \le \Phi_i [ (M(0) + M'(0).\alpha(1-1/n) + M''(0)(\alpha(1-1/n))^2)(1+1/n) - 1 ]\\
& \le \Phi_i [ (1 + f^*\alpha(1-1/n) + S\alpha^2)(1 + 1/n) - 1] \\
&\because \quad M(0) = 1, M'(0) = E(f) = f^*, M''(0) \le 2S\\
& \le \Phi_i [ f^*\alpha(1-1/n) + S\alpha^2 + (1 + f^*\alpha(1-1/n) + S\alpha^2)/n]\\
& \le \Phi_i [ f^*\alpha + 1/n + S\alpha^2]
\end{aligned}
\end{equation}
The bins that were at indices $j \in [i_{new}..(i-1)]$ shift right by one position, and hence the expected contribution of such a bin $j$ to $\Delta_i$ is given as:
\begin{equation}\label{eq-one-shift}
\begin{aligned}
& \EE[ \frac{e^{\alpha.(s_j - f/n)}}{n^2 + j+1} ] - \frac{e^{\alpha.s_j}}{n^2 + j}\\
& \le \frac{e^{\alpha.s_j}}{n^2 + j} [ M(-\alpha/n) - 1 ]\\
& \le \Phi_j [ M(0) + M'(0)(-\alpha/n) + M''(0)\frac{\alpha^2}{2n^2} - 1]\\
& \le \Phi_j [ \frac{-f^*\alpha}{n} + \frac{S\alpha^2}{n^2} ]\\
\end{aligned}
\end{equation}
For all other bins, their rank does not change in the new bin order, hence, their expected contribution to $\Delta_i$ is given as:
\begin{equation}\label{eq-no-shift}
\begin{aligned}
& \EE[ \frac{e^{\alpha.(s_j - f/n)}}{n^2 + j} ] - \frac{e^{\alpha.s_j}}{n^2 + j}\\
& = \Phi_j [ \frac{-f^*\alpha}{n} + \frac{S\alpha^2}{n^2} ]
\end{aligned}
\end{equation}
Using equations~\eqref{eq-jump},~\eqref{eq-one-shift} and~\eqref{eq-no-shift}, and ignoring the lower-order $O(\frac{S\alpha^2}{n^2})$ term, $\Delta_i$ is bounded as follows:
\begin{equation}\nonumber
\Delta_i \le (\alpha.f^* + 1/n + S\alpha^2)\Phi_i - \frac{\alpha.f^*\Phi}{n}
\end{equation}
Hence, the expected change in $\Phi$ can be given by:
\begin{equation}\label{eq-phi-base-w}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ]
\le \sum_{i=1}^{n} [ p_i * (\alpha.f^* + 1/n + S\alpha^2) - \alpha.f^*/n ]\Phi_i
\end{aligned}
\end{equation}
\end{proof}
Simplifying further and observing that $\Phi_i$ decreases and $p_i$ increases with increasing $i$ from $1$ to $n$, one gets the following Corollary.
\begin{corollary}\label{cor-phi-gen-w}
$\EE[ \Phi(t+1) - \Phi(t) | x(t) ] \le (\alpha.f^* + 2S\alpha^2)\frac{\Phi}{n}$
\end{corollary}
\begin{proof}
Since, $p_i$ are increasing and $\Phi_i$ are decreasing, the maximum value taken by RHS of equation~\eqref{eq-phi-base-w} will be when $e^{\alpha.s_i} * \sum_{i=1}^{n} \frac{1}{(n^2 + i)} = \Phi$. Thus, $e^{\alpha.s_i} = n\Phi$. Hence,
\begin{equation}
\begin{aligned}
\sum_{i=1}^{n} p_i\Phi_i &\le n\Phi * \sum_{i=1}^{n} \frac{2i - 1}{n^2(n^2 + i)}\\
&\le \frac{\Phi}{n} * \frac{2n-1}{n+1}
\end{aligned}
\end{equation}
Using, equation~\eqref{eq-phi-base-w}, we get:
\begin{equation}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] &\le (\alpha.f^*(1+1/n) + 1/n + S\alpha^2) * \frac{(2n-1)\Phi}{n(n+1)} - \frac{\alpha.f^*\Phi}{n}\\
&\le (\alpha.f^* + 2S\alpha^2)\frac{\Phi}{n}
\end{aligned}
\end{equation}
\end{proof}
\par Similarly, the change in $\Psi$ can be derived. For a detailed proof, refer to Appendix~\ref{app:weighted_1_proof}.
\begin{lemma}\label{lemma-psi-base-w}
When an md-ball is thrown into an md-bin, the following inequality holds:
\begin{equation}\label{eq-psi-base-w}
\EE[ \Psi(t+1) - \Psi(t) | x(t) ] \le
\sum_{i=1}^{n} [ p_i * (-\alpha.f^* + \frac{S\alpha^2}{n^2}) + \alpha.f^*/n ]\Psi_i
\end{equation}
\end{lemma}
Further observing that $p_i > 0$, one gets the following Corollary.
\begin{corollary}\label{cor-psi-gen-w}
$\EE[ \Psi(t+1) - \Psi(t) | x(t) ] \le (\alpha.f^*\Psi)/ n$
\end{corollary}
In the next two lemmas, Lemma~\ref{lemma-phi-easy-case-w} and Lemma~\ref{lemma-psi-easy-case-w}, we consider a reasonably balanced md-bins scenario. We show that for such cases, the expected potential decreases. Specifically, for $s_{(n\gamma_2)} \le 0$, the expected value of $\Phi$ decreases and for $s_{(n\gamma_1)} \ge 0$, the expected value of $\Psi$ decreases.
\begin{lemma}\label{lemma-phi-easy-case-w}
Let $\Phi$ be defined as above. If $s_{(n\gamma_2)}(t) < 0$ then, $\EE[ \Phi(t+1) | x(t) ] \le (1-\frac{\alpha f^*}{8n})\Phi$
\end{lemma}
\begin{proof}
From equation~\eqref{eq-phi-base-w}, we get,
\begin{equation}\label{eq-phi-w-1}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t)] & \le \sum_{i=1}^{n} (p_i * (\alpha f^* + S\alpha^2) - \alpha f^*/n). \Phi_i\\
& \le \sum_{i < n\gamma_2} (p_i * (\alpha f^* + S\alpha^2)).\Phi_i - \frac{\alpha f^*\Phi}{n} +
\sum_{i \ge n\gamma_2} p_i * (\alpha f^* + S\alpha^2).\Phi_i
\end{aligned}
\end{equation}
Now, we need to upper bound the term $\sum_{i < n\gamma_2} (p_i * (\alpha.f^* + S\alpha^2).\frac{e^{\alpha.s_i}}{n^2+i})$. Since $p_i$ is non-decreasing and $\Phi_i$ is non-increasing, the maximum value is achieved when $e^{\alpha s_i} \sum_{i=1}^{n\gamma_2} \frac{1}{(n^2+i)} = \Phi$ for each $i < n\gamma_2$, i.e., when $e^{\alpha s_i} = \frac{\Phi(n+\gamma_2)}{\gamma_2}$. Hence, the maximum value is given as follows.
\begin{equation}
\begin{aligned}
\sum_{i=1}^{n\gamma_2} p_i\Phi_i & \le \frac{\Phi(n+\gamma_2)}{\gamma_2} * \sum_{i=1}^{n\gamma_2} [ \frac{2i-1}{n^2} * \frac{1}{n^2+i} ] \\
& \le \frac{(n+\gamma_2)\Phi}{n^2\gamma_2} * \frac{\gamma_2(2n\gamma_2 - 1)}{(n+\gamma_2)}\\
&\le \frac{(2n\gamma_2 - 1)\Phi}{n^2}
\end{aligned}
\end{equation}
Similarly, one can upper bound the term, $\sum_{i \ge n\gamma_2} (p_i \frac{e^{\alpha.s_i}}{(n^2+i)})$. Since $p_i$ is non-decreasing and $\Phi_i$ is non-increasing, the maximum value is achieved when $e^{\alpha s_i} ( \sum_{i=(n\gamma_2)}^{n} \frac{1}{n^2+i}) = \Phi_{(\ge n\gamma_2)}$ for each $i \ge n\gamma_2$. Hence, $e^{\alpha s_i} = \frac{\Phi_{(\ge n\gamma_2)}(n+\gamma_2)}{(1-\gamma_2)}$.
Thus, the expected change in $\Phi$ can be computed, using equation~\eqref{eq-phi-w-1} and the above bound, as follows:
\begin{equation}
\begin{aligned}
\EE[ \Delta\Phi | x(t)] & \le (\alpha.f^* + S\alpha^2)\frac{(2n\gamma_2 - 1)\Phi}{n^2} - \frac{\alpha.f^*}{n} * \Phi + (\alpha.f^* + S\alpha^2)* \frac{(2n-1)(n+\gamma_2)\Phi_{(\ge n\gamma_2)}}{n^2(n+1)}\\
& \le \frac{2\alpha.f^*\gamma_2\Phi}{n} - \frac{\alpha.f^*\Phi}{n}\\
& \le \frac{(2\gamma_2 - 1)\alpha.f^*\Phi}{n}\\
& \le \frac{-\alpha.f^*\Phi}{8n} \because \gamma_2 < (1/2 - 1/16)
\end{aligned}
\end{equation}
\end{proof}
\begin{lemma}\label{lemma-psi-easy-case-w}
Let $\Psi$ be defined as above. If $s_{(n\gamma_1)}(t) \ge 0$ then,
$\EE[ \Psi(t+1) | x(t) ] \le (1-\frac{\alpha.f^*}{2n})\Psi$
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lemma-phi-easy-case-w}. See Appendix~\ref{app:2_w_proof} for details of the proof.
\end{proof}
Now, we consider the remaining cases and show that if the load across the bins at time $t$ is not reasonably balanced, then for $s_{n\gamma_2} > 0$, either $\Psi$ dominates $\Phi$ or $\Gamma < c/n$, where $c = poly(1/\epsilon)$.
\begin{lemma}\label{lemma-s-gamma-phi-w}
Let $s_{(n\gamma_2)} \ge 0$ and $\, \EE[\Delta\Phi|x(t)] \ge -\alpha.f^*\Phi/8n$. Then, either $\Phi < \epsilon\gamma_1 * \Psi$, or $\Gamma < \frac{c}{n}$ for some $c = poly(1/\epsilon)$.
\end{lemma}
\begin{proof}
From equation~\eqref{eq-phi-base-w}, we get:
\begin{equation}
\begin{aligned}
\EE[\Delta\Phi | x(t)] & \le \sum_{i=1}^{n} (p_i * (\alpha.f^* + S\alpha^2) - \alpha.f^*/n).\Phi_i\\
& \le \sum_{i \le n\gamma_3} (p_i * (\alpha.f^* + S\alpha^2) - \alpha.f^*/n).\Phi_i + \sum_{i > n\gamma_3} (p_i * (\alpha.f^* + S\alpha^2) - \alpha.f^*/n) * \Phi_i\\
& \le \frac{n\Phi_{(\le n\gamma_3)}}{\gamma_3}(\alpha.f^* + S\alpha^2)*\sum_{i \le (n\gamma_3)} \frac{2i-1}{n^2(n^2+i)} +\\
& (\alpha.f^* + S\alpha^2)\frac{n\Phi_{(> n\gamma_3)}}{1-\gamma_3} * \sum_{i > (n\gamma_3)} \frac{2i-1}{n^2(n^2+i)} - \frac{\alpha f^*\Phi}{n}\\
& \le \frac{\alpha f^*\Phi_{\le n\gamma_3}}{n} \frac{(2n\gamma_3 - 1)}{n+\gamma_3} + \frac{\alpha f^*\Phi_{> n\gamma_3}}{1-\gamma_3} \frac{\gamma_3}{n+1} - \frac{\alpha.f^*\Phi}{n}\\
& \le \frac{\alpha f^*\Phi}{n} (\frac{2\gamma_3}{n+\gamma_3} - 1) + \alpha f^*\Phi_{>n\gamma_3} [-\frac{2n\gamma_3 - 1}{n(n+\gamma_3)} + \frac{\gamma_3}{(1-\gamma_3)(n+1)}]
\end{aligned}
\end{equation}
Now, since $\EE[\Delta\Phi|x(t)] \ge -\alpha f^*\Phi/8n$, we get: $\Phi \le 4n\Phi_{(> n\gamma_3)} [\frac{(n-2)\gamma_3 + 1}{(n+1)(1-\gamma_3)(4n+3\gamma_3)}]$.
Let $B = \sum_{i} \max(0, s_i)$. Note that $\sum_{i} s_i = 0$, since for each dimension $d$, the update maintains that $\sum_{i=1}^{n} x_i^d(t) = 0$. Further, because $s_{n\gamma_3} > 0$, $\Phi_{(>n\gamma_3)} \le \frac{1-\gamma_3}{n+\gamma_3} * e^{(\frac{\alpha.B}{n\gamma_3})}$. This implies that $\Phi \le \frac{4(n-2)\gamma_3 e^{(\frac{\alpha.B}{n\gamma_3})}}{(4n+3\gamma_3)(n+\gamma_3)}$.
Since $s_{(n\gamma_2)} > 0$, we have $\Psi \ge \frac{n}{\gamma_2} * e^{\frac{\alpha.B}{(n-n\gamma_2)}}$. If $\Phi < \epsilon\gamma_1 * \Psi$, then we are done. Otherwise, $\Phi \ge \epsilon\gamma_1 * \Psi$, which implies:
\begin{equation}\nonumber
e^{\frac{\alpha.B}{n\gamma_3}} \frac{4(n-2)\gamma_3}{(4n+3\gamma_3)(n+\gamma_3)} \ge \Phi \ge \epsilon\gamma_1 * \Psi \ge \frac{n\epsilon\gamma_1}{\gamma_2} * e^{\frac{\alpha.B}{(n-n\gamma_2)}}
\end{equation}
Thus, $e^{\alpha.B/n} \le (\frac{4(n-2)\gamma_3\gamma_2}{n\epsilon\gamma_1(4n+3\gamma_3)(n+\gamma_3)})^{\frac{(1-\gamma_2)\gamma_3}{(\gamma_3 + \gamma_2 - 1)}}$. So, $\Gamma \le \frac{1 + \theta}{\epsilon} * \Phi \le \frac{1 + \theta}{\epsilon} * \frac{4(n-2)\gamma_3}{(4n+3\gamma_3)(n+\gamma_3)} * e^{\frac{\alpha.B}{n\gamma_3}}$.
Hence, $\Gamma < c/n$, where $c = poly(1/\epsilon)$.
\end{proof}
In the lemma below, we consider the case where the load across the bins at time $t$ is not reasonably balanced, and $s_{(n\gamma_1)} < 0$. Here, we show that either $\Phi$ dominates $\Psi$ or the potential function is less than $c/n$ for $c = poly(1/\epsilon)$.
\begin{lemma} \label{lemma-s-gamma-psi-w}
Let $s_{(n\gamma_1)} < 0$ and $\EE[\Delta\Psi|x(t)] \ge -\alpha.f^*\Psi/2n$. Then, either $\Psi < \epsilon\gamma_1 * \Phi$, or $\Gamma < c/n$ for some $c = poly(1/\epsilon)$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lemma-s-gamma-phi-w}. See Appendix~\ref{app:3_w_proof} for details of the proof.
\end{proof}
Now, we consider combinations of the cases considered so far and show that the potential function, $\Gamma$, behaves as a super-martingale.
\begin{theorem}\label{thm-super-mart-w}
For the potential function, $\Gamma$, $\EE[ \Gamma(t+1) | x(t)] \le (1 - \frac{\alpha.f^*}{16n(1 + \epsilon\gamma_1)})\Gamma(t) + \frac{c}{n^2}$, for constant $c = poly(1/\epsilon)$.
\end{theorem}
\begin{proof}
We consider the following cases on intervals of values for $s_i$.
\begin{itemize}
\item \textbf{Case 1:} $s_{(n\gamma_1)} \ge 0$ and $s_{(n\gamma_2)} < 0$. Using Lemma~\ref{lemma-phi-easy-case-w} and Lemma~\ref{lemma-psi-easy-case-w}, we immediately see that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \alpha.f^*/8n)\Gamma(t)$ and hence the result follows.\\
\item \textbf{Case 2:} $s_{n\gamma_1} \ge s_{n\gamma_2} > 0$. This represents a high load imbalance across the bins. In some cases, $\Phi$ may grow but the asymmetry in the load implies that $\Gamma$ is dominated by $\Psi$. Thus, the decrease in $\Psi$ offsets the increase in $\Phi$ and hence the expected change in $\Gamma$ is negative.\\
Specifically, if $\EE[\Delta\Phi|x] \le \frac{-\alpha f^*\Phi}{8n}$, then using Lemma~\ref{lemma-psi-easy-case-w} we get that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \alpha f^*/8n)\Gamma(t)$; else we consider the following two cases:\\
\begin{itemize}
\item \textbf{Case 2a:} $\Phi < \epsilon\gamma_1 * \Psi$. Here, using Lemma~\ref{lemma-psi-easy-case-w} and Corollary~\ref{cor-phi-gen-w}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & = \EE[\Delta\Phi|x] + \EE[\Delta\Psi|x]\\
& \le \frac{\alpha.f^*}{n}.\Phi - \frac{\alpha f^*}{2n} * \Psi\\
& \le -\frac{\alpha.f^*}{4n}.\Psi\\
& \le - \frac{\alpha.f^*}{4n(1 + \epsilon\gamma_1)}\Gamma
\end{aligned}
\end{equation}
\item \textbf{Case 2b:} $\Gamma < c/n$. Here, using Corollary~\ref{cor-psi-gen-w} and Corollary~\ref{cor-phi-gen-w}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & \le \frac{\alpha.f^*}{n} * \Gamma\\
& \le \frac{c\alpha.f^*}{n^2}
\end{aligned}
\end{equation}
But, $c/n^2 - ((\alpha.f^*/8n) * \Gamma) \ge (c/n^2)(1 - \alpha.f^*/8) \ge (c/n^2)(1 - \alpha.f^*/2) \ge \frac{c\alpha.f^*}{n^2}$.\\
Hence, $\EE[\Delta\Gamma|x] \le - \frac{\alpha.f^*\Gamma}{8n} + \frac{c}{n^2}$.
\end{itemize}
\item \textbf{Case 3:} $s_{n\gamma_2} \le s_{n\gamma_1} < 0$. Here, if $\EE[\Delta\Psi|x] \le \frac{-\alpha.f^*}{2n}\Psi$, then using Lemma~\ref{lemma-phi-easy-case-w}, we get that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \alpha.f^*/8n)\Gamma(t)$; else we consider the following two cases:
\begin{itemize}
\item \textbf{Case 3a:} $\Psi < \epsilon\gamma_1 * \Phi$. Here, using Lemma~\ref{lemma-phi-easy-case-w} and Corollary~\ref{cor-psi-gen-w}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & = \EE[\Delta\Phi|x] + \EE[\Delta\Psi|x]\\
& \le -(\alpha.f^*/8n)\Phi + \alpha.f^*/n * \Psi\\
& \le -(\alpha.f^*/8n).\Phi + (\gamma_1\epsilon\alpha.f^*/n)\Phi\\
& \le \frac{-\alpha.f^*}{16n}\Phi\\
& \le \frac{-\alpha.f^*}{16n(1+\epsilon\gamma_1)}*\Gamma
\end{aligned}
\end{equation}
\item \textbf{Case 3b:} $\Gamma < c/n$. Here, using Corollary~\ref{cor-phi-gen-w} and Corollary~\ref{cor-psi-gen-w}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & \le \alpha.f^*/n * \Gamma\\
& \le \frac{c\alpha.f^*}{n^2}
\end{aligned}
\end{equation}
Hence, this case follows similarly as \textit{Case 2b} above.
\end{itemize}
\end{itemize}
\end{proof}
Now, we can prove using induction that the expected value of $\Gamma$ remains bounded.
\begin{theorem}\label{thm-pot-up-bound-w}
For any time $t \ge 0$, $\EE[\Gamma(t)] \le \frac{16c(1 + \epsilon\gamma_1)}{n\alpha.f^*}$
\end{theorem}
\begin{proof}
We prove this claim by induction. For $t = 0$, it is trivially true since $\Gamma(0) \le 2/n$.
Using Theorem~\ref{thm-super-mart-w}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Gamma(t+1)] & = \EE[\EE[\Gamma(t+1)| \Gamma(t)]]\\
& \le \EE[ (1 - \frac{\alpha.f^*}{16n(1+\epsilon\gamma_1)})\Gamma(t) + \frac{c}{n^2}]\\
& \le \frac{16c(1 + \epsilon\gamma_1)}{n\alpha.f^*} - \frac{c}{n^2} + \frac{c}{n^2}\\
& \le \frac{16c(1 + \epsilon\gamma_1)}{n\alpha.f^*}
\end{aligned}
\end{equation}
\end{proof}
\begin{theorem}\textbf{Variable $f$ Case (Weighted Case) Gap:}
Using the bias in the probability distribution in favor of lightly loaded md-bins as obtained from the $d$-choice process, and assuming that in each ball each dimension is chosen as $1$ with probability $q$ (variable $f$ case), the expected and probabilistic upper bounds on the gap (maximum dimensional gap) across the multidimensional bins are given as follows. Let $\delta = \frac{16c(1 + \epsilon\gamma_1)}{\alpha.f^*}$ and $\zeta > 0$; then:
\begin{equation}\nonumber
\begin{aligned}
E[Gap(t)] & \le 2q\log(n)/\epsilon + 2q\log(\delta)/\epsilon\\
Pr[Gap(t) & > (mq/n)^{1/2+\zeta}(4q\log(n)/\epsilon + 4q\log(\delta)/\epsilon)] \le \frac{1}{qm}\\
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
Since each dimension is assigned $1$ with probability $q$, the average number of ones per md-ball is $f^* = Dq$.
Let $a$ be the winning md-bin and $m$ be the winning dimension that represents $Gap(t)$. The number of ones in any ball, $f$, follows a Binomial($D$, $q$) distribution and has finite second moment. Using the analysis for the weighted balls case, we get $\EE[ e^{\alpha.s_a} ] \le n\delta$, where $\alpha.f^* \le \epsilon/2$. So,
$\EE[ e^{\alpha.(x_a^m + \sum_{d \ne m} x_a^d)} ] \le n\delta$.
Taking the logarithm of both sides, we get:
\begin{equation}\label{eq:sum-bound-w}
\begin{aligned}
E[x_a^m] + \sum_{d \ne m} E[x_a^d] & \le \log(n)/\alpha + \log(\delta)/\alpha\\
\end{aligned}
\end{equation}
Let $k$ be the expected number of balls thrown into bin $a$ minus the average number of balls per bin; then $E[x_a^m] = kq$ and similarly, $\sum_{d \ne m} E[x_a^d] = (D-1)kq$. Hence, we get:
\begin{equation}\nonumber
\begin{aligned}
Dkq & \le 2f^*\log(n)/\epsilon + 2f^*\log(\delta)/\epsilon\\
\Rightarrow k & \le 2\log(n)/\epsilon + 2\log(\delta)/\epsilon\\
\Rightarrow E[x_a^m] & \le 2q\log(n)/\epsilon + 2q\log(\delta)/\epsilon\\
\end{aligned}
\end{equation}
The probabilistic bound can be computed similarly to the fixed $f$ case (Theorem~\ref{thm-fixed-f}) using the Chernoff bound.
\end{proof}
Note that for the scalar case, when the expected weight of the distribution is $W^*$, the upper bound on the gap obtained is $O(W^*\log(n))$, which after normalization, i.e., $E(W) = W^* = 1$, leads to an $O(\log(n))$ gap. This improves upon the best previously known bound of $O(n^c)$ given in~\cite{kunal-weighted}.
\section{$(1+\beta)$ Choice Process with Multidimensional Balls and Bins}
\par In this section we present upper and lower bounds on the gap for the $(1+\beta)$ choice process with multidimensional balls and bins.
\subsection{Markov Chain Specification}\label{sec:markov-chain-beta}
\label{sec:markov}
As mentioned earlier, a balls-and-bins process can be characterized by a probability distribution vector $(p_1, p_2, \ldots, p_n)$, where $p_i$ is the probability that a ball is placed in the $i^{th}$ most loaded multidimensional bin. Let $x_i^d(t)$ be the random variable that denotes the \textit{weight in dimension $d$ for bin $i$} and is equal to the load of the $d^{th}$ dimension of the $i^{th}$ bin minus the average load in dimension $d$. So, $\sum_{i=1}^{n} x_i^d(t) = 0, \forall d \in [1..D]$. Let $s_i(t)$ denote the sum of the loads (minus corresponding dimension averages) across all $D$ dimensions for bin $i$ at time $t$, expressed as $s_i(t) = \sum_{d=1}^D x_i^d$. It is assumed that bins are sorted by $s_i(t)$, so $s_i \ge s_{i+1}, \forall i \in [1..n-1]$. The process defines a Markov chain over the matrices, $x(t)$, as follows:
\begin{itemize}
\item Sample $j \in_p [n]$.
\item Set $r_i = s_i(t) + f(1 - 1/n)$, for $i = j$. Since each md-ball has $f$ non-zero entries, each of these $f$ dimensions in bin $j$ will be incremented by $1 - 1/n$.
\item Set $r_i = s_i(t) - f/n$, for $i \ne j$. Since each md-ball has $f$ non-zero entries, each of the corresponding $f$ dimensions in bin $i$ will be decremented by $1/n$. This ensures that for each dimension the sum across all the bins is $0$.
\item Obtain $s(t+1)$ by sorting $r(t)$.
\end{itemize}
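The Markov chain update above can be sketched in a few lines of code. The following is an illustrative simulation (not the authors' code): \texttt{step} applies one update to the sorted vector $s$, with the probability vector \texttt{p} over rank positions left as a placeholder.

```python
import random

def step(s, f, n, p):
    """One step of the Markov chain over the sorted load vector s.
    p[i] is the probability the ball lands in the bin of rank i+1
    (rank 1 = most loaded); here p is an illustrative placeholder."""
    # Sample the rank j of the receiving bin according to p.
    j = random.choices(range(n), weights=p)[0]
    # Every bin loses f/n; the chosen bin gains f(1 - 1/n) net,
    # so the per-dimension column sums stay zero.
    r = [si - f / n for si in s]
    r[j] += f
    # Re-sort in non-increasing order so that s_i >= s_{i+1}.
    return sorted(r, reverse=True)

n, f = 8, 3
s = [0.0] * n
p = [1.0 / n] * n          # 1-choice distribution as a placeholder
for _ in range(1000):
    s = step(s, f, n, p)
assert abs(sum(s)) < 1e-6  # normalized loads always sum to 0
```

Note that the total change per step is $f(1 - 1/n) - (n-1)f/n = 0$, which the final assertion checks numerically.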
Fig.~\ref{fig:md-balls-bins} (in the Appendix~\ref{app:fig}) illustrates a multidimensional balls and bins scenario. The bounds on the gap will be proven for a family of probability distribution vectors $p$. As mentioned earlier, the md-bins are sorted based on their total dimensional load, i.e., the sum of the weights across all dimensions for each bin ($s_i$ for bin $i$).
We make the following assumptions:
\begin{itemize}
\item
$\forall i \in [1,n-1], p_i \le p_{i+1}$
This assumption states that the allocation rule is no worse than the $1$-choice scheme.
\item
For some constants, $\epsilon > 0$, $\theta > 1$ and $0 < \gamma_3 < \gamma_4 < 1$, where $\gamma_3 + \gamma_4 = 1$, it holds that:
\begin{equation}\label{ass-2}
p_{(n\gamma_3)} \le \frac{(1 - \theta\epsilon)}{n},\quad \text{and},\quad p_{(n\gamma_4)} \ge \frac{(1 + \theta\epsilon)}{n}
\end{equation}
This assumption states that the allocation rule strictly prefers the least loaded $\gamma_3$ fraction of the $n$ bins over the most loaded $(1 - \gamma_4)$ fraction.
\end{itemize}
These assumptions imply that for some constants $\gamma_1$ and $\gamma_2$, where $0 < \gamma_1 < \gamma_3 < 1/2 < \gamma_4 < \gamma_2 < 1$, $\theta\gamma_1 = 1$ and $\gamma_1 + \gamma_2 = 1$, we have the following:\\ $\sum_{i \ge (n\gamma_2)} p_i \ge (\gamma_1 + \epsilon)$ and $\sum_{i \le (n\gamma_1)} p_i \le (\gamma_1 - \epsilon)$. This will be useful in the proof. Note that the $(1+\beta)$ choice process satisfies these assumptions for $\epsilon = \beta(1 - 2\gamma_3)/\theta$, since $p_{(n\gamma_3)} \le (1 - \beta)/n + 2(n\gamma_3 - 1)\beta/n^2 \le (1 - \beta(1-2\gamma_3))/n$, and similarly $p_{(n\gamma_4)} \ge (1 + \beta(2\gamma_4 - 1))/n$.
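As a numeric sanity check (not a proof), one can verify the monotonicity assumption, the pointwise bias at rank $n\gamma_3$, and the two derived sum conditions for the closed-form $(1+\beta)$-choice distribution $p_i = (1-\beta)/n + \beta(2i-1)/n^2$, which is consistent with the bounds used above; the constants below are illustrative choices, not values fixed by the paper.

```python
# Numeric sanity check of the bias assumptions for the (1+beta)-choice
# distribution p_i = (1-beta)/n + beta*(2i-1)/n^2 (bins ranked 1..n
# from most to least loaded; illustrative constants).
n, beta = 1000, 0.5
gamma3, theta = 0.25, 8.0
gamma1 = 1.0 / theta                   # theta * gamma1 = 1
gamma2 = 1.0 - gamma1                  # gamma1 + gamma2 = 1
eps = beta * (1 - 2 * gamma3) / theta  # the epsilon claimed in the text

def p(i):  # i is the 1-based rank, 1 = most loaded
    return (1 - beta) / n + beta * (2 * i - 1) / n ** 2

assert all(p(i) <= p(i + 1) for i in range(1, n))   # no worse than 1-choice
assert p(int(n * gamma3)) <= (1 - theta * eps) / n  # under-serves the loaded
# Derived sum conditions used in the proof:
low = sum(p(i) for i in range(1, int(n * gamma1) + 1))
high = sum(p(i) for i in range(int(n * gamma2), n + 1))
assert low <= gamma1 - eps and high >= gamma1 + eps
```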
In the remaining analysis, we assume that when an md-ball arrives, the selection of the bins is based on $s_i$, i.e., the total sum of weights across all dimensions for the randomly selected bins (Fig.~\ref{fig:md-balls-bins} in Appendix~\ref{app:fig}). In particular, for the $(1+\beta)$ choice process, when two bins are randomly selected (with probability $\beta$), the md-ball (with $f$ non-zero entries) is assigned to the md-bin with the lowest $s_i$. Using this selection mechanism, we prove the upper and lower bounds on the gap obtained for the $(1+\beta)$ choice process. Note that this is a different allocation mechanism than that considered in~\cite{md-mm}, where the \textit{max} objective is considered over the restricted set of $f$ populated dimensions in the current md-ball.
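This selection mechanism can be sketched as follows; \texttt{place\_ball}, the load-matrix representation, and the uniform choice of the $f$ populated dimensions are illustrative (comparing raw bin totals is equivalent to comparing the normalized $s_i$, since the subtracted averages are the same for both bins).

```python
import random

def place_ball(loads, f, beta):
    """One (1+beta)-choice placement of an md-ball: with probability
    beta, compare two random bins by their total load and pick the
    smaller; otherwise pick a single bin uniformly at random.
    loads is an n x D matrix of per-dimension bin loads."""
    n, D = len(loads), len(loads[0])
    if random.random() < beta:
        a, b = random.randrange(n), random.randrange(n)
        j = a if sum(loads[a]) <= sum(loads[b]) else b
    else:
        j = random.randrange(n)
    for d in random.sample(range(D), f):  # the f populated dimensions
        loads[j][d] += 1
    return j

random.seed(1)
loads = [[0] * 4 for _ in range(6)]   # n = 6 bins, D = 4 dimensions
for _ in range(100):
    place_ball(loads, f=2, beta=0.5)
assert sum(sum(row) for row in loads) == 200  # 100 balls, f = 2 ones each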
\subsection{Upper Bound On the Gap}\label{sec:upper-bound-beta}
We assume that $\epsilon \le 1/4$. Further, let $\alpha = \epsilon/2f$. Define the following potential functions:
\begin{equation}
\begin{aligned}
\Phi(t) & = \Phi(s(t)) = \sum_{i=1}^{n} e^{\alpha.s_i}\\
\Psi(t) & = \Psi(s(t)) = \sum_{i=1}^{n} e^{-\alpha.s_i}\\
\Gamma(t) & = \Gamma(s(t)) = \Phi(t) + \Psi(t)
\end{aligned}
\end{equation}
where $s_i(t) = \sum_{d=1}^{D} x_i^d(t)$.
In the beginning, each dimension of each bin has $0$ weight; thus $s_i = 0, \forall i$, and hence $\Gamma(0) = 2n$. We show that if $\Gamma(x(t)) \ge an$ for some $a > 0$, then $\mathbb{E}[\Gamma(t+1) | x(t)] \le (1 - \frac{\epsilon^2(1 - 2\gamma_1)}{4n(1 + \epsilon\gamma_1)}) * \Gamma(t)$. This helps in demonstrating that for every given $t$, $\mathbb{E}[\Gamma(t)] \in O(n)$. This implies that the maximum gap is $O(\log (n))$ w.h.p.
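The potential functions translate directly into code. The following sketch (illustrative, with an arbitrary $\alpha$) also checks the initial value $\Gamma(0) = 2n$ stated above:

```python
import math

def potentials(s, alpha):
    """Compute Phi, Psi, Gamma for a vector of normalized total
    loads s_i, following the definitions in the text."""
    phi = sum(math.exp(alpha * si) for si in s)
    psi = sum(math.exp(-alpha * si) for si in s)
    return phi, psi, phi + psi

# At t = 0 every s_i = 0, so Phi = Psi = n and Gamma(0) = 2n.
n = 16
phi, psi, gamma = potentials([0.0] * n, alpha=0.125)
assert phi == psi == n and gamma == 2 * n
```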
First, we consider the changes in $\Phi(t)$ (also referred to as $\Phi$) and $\Psi(t)$ (also referred to as $\Psi$) separately when a ball is thrown with the given probability distribution.
\begin{lemma}
When an md-ball is thrown into an md-bin, the following inequality holds:
\begin{equation}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] \le \sum_{i=1}^{n} [ p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n ]. e^{\alpha.s_i}
\end{equation}
\end{lemma}
\begin{proof}
Let $\Delta_i$ be the expected change in $\Phi$ if the ball is put in bin $i$. So, $r_i(t+1) = s_i + f(1-1/n)$; and for $j \ne i$, $r_j(t+1) = s_j(t) - f/n$. The new values, i.e., $s(t+1)$, are obtained by sorting $r(t+1)$, and $\Phi(s) = \Phi(r)$. The expected contribution of bin $i$ to $\Delta_i$ is given as follows:
\begin{equation}\nonumber
\begin{aligned}
\EE[ e^{\alpha.(s_i + f(1-1/n))} ] - e^{\alpha.s_i}
& = e^{\alpha.s_i} [ e^{\alpha.f(1-1/n)} - 1 ]
\end{aligned}
\end{equation}
Similarly, the expected contribution of bin $j$ ($j \ne i$) to $\Delta_i$ is given as:
\begin{equation}\nonumber
\begin{aligned}
\EE[ e^{\alpha.(s_j - f/n)} ] - e^{\alpha.s_j}
& = e^{\alpha.s_j} [ e^{-\alpha.f/n} - 1 ]
\end{aligned}
\end{equation}
Therefore, $\Delta_i$ is given as follows:
\begin{equation}\nonumber
\begin{aligned}
\Delta_i & = e^{\alpha.s_i} [ e^{\alpha.f(1-1/n)} - 1] \, + \,
\sum_{j \ne i} e^{\alpha.s_j} (e^{-\alpha.f/n} - 1)\\
& = e^{\alpha(s_i- f/n)} (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1).\Phi
\end{aligned}
\end{equation}
Thus, we get the overall expected change in $\Phi$ as follows:
\begin{equation}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ] & = \sum_{i=1}^{n} p_i * \Delta_i\\
& = \sum_{i=1}^{n} p_i * [ e^{\alpha(s_i- f/n)} (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1).\Phi ]\\
& = \sum_{i=1}^{n} p_i * e^{\alpha(s_i- f/n)} (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1).p_i.\Phi\\
& = \sum_{i=1}^{n} [ p_i * e^{-\alpha.f/n} (e^{\alpha.f} - 1) + (e^{-\alpha.f/n} - 1) ]. e^{\alpha.s_i}
\end{aligned}
\end{equation}
Now, $e^{(-\alpha.f/n)} * (e^{\alpha.f} - 1)$ can be approximated as follows:
\begin{equation}\nonumber
\begin{aligned}
e^{(-\alpha.f/n)}.(e^{\alpha.f} - 1) & \le
(1 - \alpha.f/n + (\alpha.f/n)^2) * (1 + \alpha.f + (\alpha.f)^2 - 1)\\
& \sim \alpha.f + (\alpha.f)^2 + O((\alpha.f)^2/n)\\
e^{(-\alpha.f/n)}.(e^{\alpha.f} - 1) & \lessapprox (\alpha.f + (\alpha.f)^2)
\end{aligned}
\end{equation}
Above, since $(\alpha.f)^2/n$ is very small for large $n$, we have ignored the small terms. Similarly, $(e^{-\alpha.f/n} - 1) \lessapprox -\alpha.f/n$.
Hence, the expected change in $\Phi$ can be given by:
\begin{equation}\label{eq-phi-base-beta}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t) ]
\le \sum_{i=1}^{n} [ p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n ]. e^{\alpha.s_i}
\end{aligned}
\end{equation}
\end{proof}
Simplifying further and observing that $\Phi_i$ decreases and $p_i$ increases with increasing $i$ from $1$ to $n$, one gets the following Corollary.
\begin{corollary}\label{cor-phi-gen-beta}
$\EE[ \Phi(t+1) - \Phi(t) | x(t) ] \le (\alpha.f)^2 * \Phi / n$
\end{corollary}
\begin{proof}
Since $p_i$ is non-decreasing and $\Phi_i$ is non-increasing, the RHS of equation~\eqref{eq-phi-base-beta} is maximized when $p_i = 1/n$ for all $i \in [1..n]$. Simplifying, we get the result.
\end{proof}
Similarly, the change in $\Psi$ can be derived as follows.
\begin{lemma}
When an md-ball is thrown into an md-bin, the following inequality holds:
\begin{equation}\label{eq-psi-base-beta}
\EE[ \Psi(t+1) - \Psi(t) | x(t) ] \le
\sum_{i=1}^{n} [ p_i * (-\alpha.f + (\alpha.f)^2) + \alpha.f/n ]. e^{-\alpha.s_i}
\end{equation}
\end{lemma}
Further observing that $p_i > 0$, one gets the following Corollary.
\begin{corollary}\label{cor-psi-gen-beta}
$\EE[ \Psi(t+1) - \Psi(t) | x(t) ] \le (\alpha.f.\Psi)/ n$
\end{corollary}
In the next two lemmas, Lemma~\ref{lemma-phi-easy-case-beta} and Lemma~\ref{lemma-psi-easy-case-beta}, we consider a reasonably balanced md-bins scenario. We show that for such cases, the expected potential decreases. Specifically, for $s_{(n\gamma_2)} \le 0$, the expected value of $\Phi$ decreases and for $s_{(n\gamma_1)} \ge 0$, the expected value of $\Psi$ decreases.
\begin{lemma}\label{lemma-phi-easy-case-beta}
Let $\Phi$ be defined as above. If $s_{(n\gamma_2)}(t) \le 0$ then, $\EE[ \Phi(t+1) | x(t) ] \le (1-\frac{\epsilon^2}{4n})\Phi + 1$
\end{lemma}
\begin{proof}
From equation~\eqref{eq-phi-base-beta}, we get,
\begin{equation}\label{eq-phi-1-beta}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t)] & \le \sum_{i=1}^{n} (p_i * (\alpha f + (\alpha f)^2) - \alpha f/n). e^{\alpha s_i}\\
& \le \sum_{i < n\gamma_2} (p_i * (\alpha f + (\alpha f)^2) - \alpha f/n).e^{\alpha s_i} +
\sum_{i \ge n\gamma_2} p_i * (\alpha f + (\alpha f)^2).e^0\\
& \le \sum_{i < n\gamma_2} (p_i * (\alpha f + (\alpha f)^2) - \alpha f/n).e^{\alpha s_i} + 1
\end{aligned}
\end{equation}
The last inequality follows since $\alpha.f < 1/2$ and $\sum_{i \ge (n\gamma_2)} p_i < 1$.
Now, we need to upper bound the term $\sum_{i < n\gamma_2} (p_i * (\alpha.f + (\alpha.f)^2).e^{\alpha.s_i})$. Since $p_i$ is non-decreasing and $\Phi_i$ is non-increasing, the maximum value is achieved when $\Phi_i = (\Phi/(n\gamma_2))$ for each $i < n\gamma_2$. Hence, the maximum value is: $(\alpha.f + (\alpha.f)^2)(\gamma_2 - \epsilon)\Phi/(n\gamma_2)$.
Thus, the expected change in $\Phi$ can be computed, using equation~\eqref{eq-phi-1-beta} and the above bound, as follows:
\begin{equation}
\begin{aligned}
\EE[ \Phi(t+1) - \Phi(t) | x(t)] & \le (\alpha.f + (\alpha.f)^2)(\gamma_2 - \epsilon)\Phi/(n\gamma_2) - \alpha.f/n * \Phi + 1\\
& \le (\alpha.f)^2.\Phi/n - (\alpha f\epsilon\Phi/n\gamma_2) + 1\\
& \le \epsilon^2.\Phi/4n -\epsilon^2\Phi/(2n\gamma_2) + 1\\
& \le \frac{-\epsilon^2}{4n}\Phi + 1
\end{aligned}
\end{equation}
\end{proof}
\begin{lemma}\label{lemma-psi-easy-case-beta}
Let $\Psi$ be defined as above. If $s_{(n\gamma_1)}(t) \ge 0$ then,
$\EE[ \Psi(t+1) | x(t) ] \le (1-\frac{\epsilon^2}{4n})\Psi + 1$
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lemma-phi-easy-case-beta}. See Appendix~\ref{app:1_proof_beta} for details of the proof.
\end{proof}
Now, we consider the remaining cases and show that if the load across the bins at time $t$ is not reasonably balanced, then for $s_{n\gamma_2} > 0$, either $\Psi$ dominates $\Phi$ or the potential function is $O(n)$.
\begin{lemma}
\label{lemma-s-gamma-phi-beta}
Let $s_{(n\gamma_2)} > 0$ and $\, \EE[\Delta\Phi|x(t)] \ge -\epsilon^2\Phi/4n$. Then, either $\Phi < \epsilon\gamma_1 * \Psi$, or $\Gamma < cn$ for some $c = poly(1/\epsilon)$.
\end{lemma}
\begin{proof}
From equation~\eqref{eq-phi-base-beta}, we get:
\begin{equation}
\begin{aligned}
\EE[\Delta\Phi | x(t)] & \le \sum_{i=1}^{n} (p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n).e^{\alpha.s_i}\\
& \le \sum_{i \le n\gamma_3} (p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n).e^{\alpha.s_i} + \sum_{i > n\gamma_3} (p_i * (\alpha.f + (\alpha.f)^2) - \alpha.f/n) * e^{\alpha.s_i}\\
& \le [(1 - \theta\epsilon)/n * (\alpha.f + (\alpha.f)^2) - \alpha.f/n].\Phi_{(\le n\gamma_3)} + (\alpha.f)^2.\Phi_{(> n\gamma_3)}/n\\
& \le [-\epsilon^2/(2n\gamma_1) + \epsilon^2/4n].\Phi_{(\le n\gamma_3)} + \epsilon^2.\Phi_{(> n\gamma_3)}/4n\\
& \le [-\epsilon^2/(2n\gamma_1) + \epsilon^2/4n].\Phi + \epsilon^2\theta\Phi_{(> n\gamma_3)}/2n \quad \because \theta\gamma_1 = 1
\end{aligned}
\end{equation}
Now, since $\EE[\Delta\Phi|x(t)] \ge -\epsilon^2\Phi/4n$, we get: $\Phi \le \Phi_{(> n\gamma_3)}/\gamma_2$.
Let $B = \sum_{i} \max(0, s_i)$. Note that $\sum_{i} s_i = 0$, since for each dimension $d$, the update maintains that $\sum_{i=1}^{n} x_i^d(t) = 0$. One can observe that $\Phi_{(>n\gamma_3)} \le n\gamma_4 * e^{\alpha.B/(n\gamma_3)}$, since $\gamma_3 + \gamma_4 = 1$. This implies that $\Phi \le ((n\gamma_4)/(\gamma_2)) * e^{\alpha.B/(n\gamma_3)}$.
Since $s_{(n\gamma_2)} > 0$, we have $\Psi \ge n\gamma_1 * e^{\alpha.B/(n\gamma_1)}$. If $\Phi < \epsilon\gamma_1 * \Psi$, then we are done. Otherwise, $\Phi \ge \epsilon\gamma_1 * \Psi$, which implies:
\begin{equation}\nonumber
(n\gamma_4/\gamma_2) * e^{\alpha B/(n\gamma_3)} \ge \Phi \ge \epsilon\gamma_1 * \Psi \ge \epsilon.n\gamma_1^2 * e^{\alpha B/(n\gamma_1)}
\end{equation}
Thus, $e^{\alpha.B/n} \le (\frac{\theta^2\gamma_4}{\epsilon\gamma_2})^{\frac{\gamma_1\gamma_3}{(\gamma_3 - \gamma_1)}}$. So, $\Gamma \le ((1 + \theta)/\epsilon) * \Phi \le ((1 + \theta)/\epsilon) * (n\gamma_4/\gamma_2) * e^{\alpha.B/(n\gamma_3)} \le ((1 + \theta)/\epsilon) * (n\gamma_4/\gamma_2) * (\frac{\theta^2\gamma_4}{\epsilon\gamma_2})^{\frac{\gamma_1}{(\gamma_3 - \gamma_1)}}$. Hence, $\Gamma \le cn$, where, $c = poly(1/\epsilon)$.
\end{proof}
In the lemma below, we consider the case where the load across the bins at time $t$ is not reasonably balanced, and $s_{(n\gamma_1)} < 0$. Here, we show that either $\Phi$ dominates $\Psi$ or the potential function is $O(n)$.
\begin{lemma}
\label{lemma-s-gamma-psi-beta}
Let $s_{(n\gamma_1)} < 0$ and $\EE[\Delta\Psi|x(t)] \ge -\epsilon^2\Psi/4n$. Then, either $\Psi < \epsilon\gamma_1 * \Phi$, or $\Gamma < cn$ for some $c = poly(1/\epsilon)$.
\end{lemma}
\begin{proof}
The proof is similar to that of Lemma~\ref{lemma-s-gamma-phi-beta}. See Appendix~\ref{app:2_proof_beta} for details of the proof.
\end{proof}
Now, we consider combinations of the cases considered so far and show that the potential function, $\Gamma$, behaves as a super-martingale.
\begin{theorem}\label{thm-super-mart-beta}
For the potential function, $\Gamma$, $\EE[ \Gamma(t+1) | x(t)] \le (1 - \frac{\epsilon^2(1 - 2\gamma_1)}{4n(1 + \epsilon\gamma_1)})\Gamma(t) + c$, for constant $c = poly(1/\epsilon)$.
\end{theorem}
\begin{proof}
We consider the following cases on intervals of values for $s_i$.
\begin{itemize}
\item \textbf{Case 1:} $s_{(n\gamma_1)} \ge 0$ and $s_{(n\gamma_2)} \le 0$. Using Lemma~\ref{lemma-phi-easy-case-beta} and Lemma~\ref{lemma-psi-easy-case-beta}, we immediately see that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \epsilon^2/4n)\Gamma(t) + c$, for constant $c = poly(1/\epsilon)$, and hence the result follows.\\
\item \textbf{Case 2:} $s_{n\gamma_1} \ge s_{n\gamma_2} > 0$. This represents a high load imbalance across the bins. In some cases, $\Phi$ may grow but the asymmetry in the load implies that $\Gamma$ is dominated by $\Psi$. Thus, the decrease in $\Psi$ offsets the increase in $\Phi$ and hence the expected change in $\Gamma$ is negative.\\
Specifically, if $\EE[\Delta\Phi|x] \le -\epsilon^2/4n * \Phi$, then using Lemma~\ref{lemma-psi-easy-case-beta} we get that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \epsilon^2/4n)\Gamma(t) + c$; else we consider the following two cases:\\
\begin{itemize}
\item \textbf{Case 2a:} $\Phi < \epsilon\gamma_1 * \Psi$. Here, using Lemma~\ref{lemma-psi-easy-case-beta} and Corollary~\ref{cor-phi-gen-beta}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & = \EE[\Delta\Phi|x] + \EE[\Delta\Psi|x]\\
& \le \frac{(\alpha.f)^2}{n}.\Phi - \frac{\epsilon^2}{4n} * \Psi + 1\\
& \le -(1 - \epsilon\gamma_1)* \frac{\epsilon^2}{4n}.\Psi + 1\\
& \le - \frac{\epsilon^2(1 - \epsilon\gamma_1)}{4n(1 + \epsilon\gamma_1)}\Gamma + 1
\end{aligned}
\end{equation}
\item \textbf{Case 2b:} $\Gamma < cn$. Here, using Corollary~\ref{cor-psi-gen-beta} and Corollary~\ref{cor-phi-gen-beta}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & \le \alpha.f/n * \Gamma\\
& \le c\alpha.f
\end{aligned}
\end{equation}
But, $c - ((\epsilon^2(1 - \epsilon\gamma_1)/4n) * \Gamma) \ge c(1 - \epsilon^2(1 - \epsilon\gamma_1)/4) \ge c(1 - \epsilon/2) \ge c\alpha.f$. Hence, the result follows.
\end{itemize}
\item \textbf{Case 3:} $s_{n\gamma_2} \le s_{n\gamma_1} < 0$. Here, if $\EE[\Delta\Psi|x] \le \frac{-\epsilon^2}{4n}\Psi$, then using Lemma~\ref{lemma-phi-easy-case-beta}, we get that $\EE[ \Gamma(t+1) | x(t)] \le (1 - \epsilon^2/4n)\Gamma(t) + c$; else we consider the following two cases:
\begin{itemize}
\item \textbf{Case 3a:} $\Psi < \epsilon\gamma_1 * \Phi$. Here, using Lemma~\ref{lemma-phi-easy-case-beta} and Corollary~\ref{cor-psi-gen-beta}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & = \EE[\Delta\Phi|x] + \EE[\Delta\Psi|x]\\
& \le -(\epsilon^2/4n)\Phi + \alpha.f/n * \Psi + 1\\
& \le -(\epsilon^2/4n).\Phi + (\gamma_1\epsilon^2/2n)\Phi + 1\\
& \le \frac{\epsilon^2(\gamma_1 - 1/2)}{2n}\Phi + 1\\
& \le \frac{-\epsilon^2(1 - 2\gamma_1)}{(4n(1+\epsilon\gamma_1))}*\Gamma + 1
\end{aligned}
\end{equation}
\item \textbf{Case 3b:} $\Gamma < cn$. Here, using Corollary~\ref{cor-phi-gen-beta} and Corollary~\ref{cor-psi-gen-beta}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Delta\Gamma|x] & \le \alpha.f/n * \Gamma\\
& \le c\alpha.f
\end{aligned}
\end{equation}
Hence, this case follows similarly as \textit{Case 2b} above.
\end{itemize}
\end{itemize}
\end{proof}
Now, we can prove using induction that the expected value of $\Gamma$ remains bounded.
\begin{theorem}\label{thm-pot-up-bound-beta}
For any time $t \ge 0$, $\EE[\Gamma(t)] \le \frac{4c(1 + \epsilon\gamma_1)}{\epsilon^2(1-2\gamma_1)}n$
\end{theorem}
\begin{proof}
We prove this claim by induction. For $t = 0$, it holds trivially since $\Gamma(0) = 2n$.
Using Theorem~\ref{thm-super-mart-beta}, we get:
\begin{equation}\nonumber
\begin{aligned}
\EE[\Gamma(t+1)] & = \EE[\EE[\Gamma(t+1)\mid \Gamma(t)]]\\
& \le \EE\Big[\Big(1 - \frac{\epsilon^2(1-2\gamma_1)}{4n(1+\epsilon\gamma_1)}\Big)\Gamma(t) + c\Big]\\
& \le \frac{4c(1 + \epsilon\gamma_1)}{\epsilon^2(1-2\gamma_1)}n - c + c = \frac{4c(1 + \epsilon\gamma_1)}{\epsilon^2(1-2\gamma_1)}n
\end{aligned}
\end{equation}
\end{proof}
Now we can upper bound the gap over all $D$ dimensions and all $n$ md-bins. This gap is defined as follows:
\begin{equation}
Gap(t) = \max_{d=1}^{D} [ \max_{i=1}^{n} x_i^d ]
\end{equation}
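As a concrete illustration of this definition (our own sketch, not part of the analysis; the function and variable names are ours), the gap can be computed directly from an $n \times D$ load matrix:

```python
def max_dimensional_gap(load):
    """Gap = max over dimensions d of (max bin load in d minus the average load in d).

    `load` is an n x D matrix: load[i][d] is the load of md-bin i in dimension d.
    """
    n, D = len(load), len(load[0])
    gap = 0.0
    for d in range(D):
        col = [load[i][d] for i in range(n)]  # loads of all bins in dimension d
        gap = max(gap, max(col) - sum(col) / n)
    return gap
```

Here the normalized load $x_i^d$ of the definition corresponds to `load[i][d]` minus the per-dimension average.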
\begin{theorem}\textbf{Fixed $f$ Case:}\label{thm-fixed-f-beta}
Using the bias ($p_{n\gamma_3} \le (1 - \theta\epsilon)/n$ and $p_{n\gamma_4} \ge (1 + \theta\epsilon)/n$) in the probability distribution in favor of lightly loaded md-bins, and assuming that exactly $f$ dimensions are populated in each md-ball, with the $f$ dimensions uniformly distributed over $D$, the expected and probabilistic upper bounds on the gap (maximum dimensional gap) across the multidimensional bins are as follows. Let $\delta = \frac{4c(1 + \epsilon\gamma_1)}{\epsilon^2(1-2\gamma_1)}$; then:
\begin{equation}\nonumber
\begin{aligned}
E[Gap(t)] & \le 2\log(n)/\epsilon + 2f\log(\delta)/\epsilon\\
Pr[Gap(t) & > 4\log(n)/\epsilon + 4\log(\delta)/\epsilon] \le 1/n\\
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
Let $a$ be the winning md-bin and $m$ be the winning dimension that realises $Gap(t)$. Now,
from Theorem~\ref{thm-pot-up-bound-beta} we get $\EE[ e^{\alpha s_a} ] \le n\delta$. So,
$\EE[ e^{\alpha(x_a^m + \sum_{d \ne m} x_a^d)} ] \le n\delta$.
Let $y_a$ denote the gap as measured by the number of md-balls in bin $a$ minus the average number of balls across the bins. Then,
\begin{equation}\label{eq:sum-bound}
\begin{aligned}
\EE[s_a] & \le \log(n)/\alpha + \log(\delta)/\alpha\\
\Rightarrow \EE[s_a] & \le 2f\log(n)/\epsilon + 2f\log(\delta)/\epsilon\\
\Rightarrow f\,\EE[y_a] & \le 2f\log(n)/\epsilon + 2f\log(\delta)/\epsilon\\
\Rightarrow \EE[y_a] & \le 2\log(n)/\epsilon + 2\log(\delta)/\epsilon
\end{aligned}
\end{equation}
The third inequality uses the fact that each ball has exactly $f$ populated dimensions. Since the $f$ dimensions are chosen uniformly at random from the $D$ dimensions, the expected gap in any dimension (and hence in the winning dimension with the maximum gap) is bounded by $O(\frac{\log(n)}{\beta})$, since $\epsilon = \Theta(\beta)$ (Section~\ref{sec:markov-chain-beta}).
Now, $Pr[s_a > 4f\log(n)/\epsilon + (2f/\epsilon)\log(\delta)] \le
Pr[ \Gamma(t) \ge n\EE[\Gamma(t)]] \le 1/n$ (using Markov's inequality), where $s_a = \sum_{d=1}^{D} x_a^d$. Hence, $Pr[y_a > 4\log(n)/\epsilon + 4\log(\delta)/\epsilon] \le 1/n$.
\end{proof}
\begin{theorem}\textbf{Variable $f$ Case:}
Using the bias ($p_{n\gamma_3} \le (1 - \theta\epsilon)/n$ and $p_{n\gamma_4} \ge (1 + \theta\epsilon)/n$) in the probability distribution in favor of lightly loaded md-bins, and assuming that each dimension is populated with probability $q$ (the non-fixed $f$ case), the expected and probabilistic upper bounds on the gap (maximum dimensional gap) across the multidimensional bins are as follows. Let $\delta = \frac{4c(1 + \epsilon\gamma_1)}{\epsilon^2(1-2\gamma_1)}$; then:
\begin{equation}\nonumber
\begin{aligned}
E[Gap(t)] & \le 2\log(n)/\epsilon + \frac{m}{n}(1-q) + 2\log(\delta)/\epsilon\\
Pr[Gap(t) & > 4\log(n)/\epsilon + \frac{m}{n}(1-q) + 4\log(\delta)/\epsilon] \le 1/n\\
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
Since each dimension is assigned $1$ with probability $q$, the average number of ones per md-ball is $f^* = Dq$.
Let $a$ be the winning md-bin and $m$ be the winning dimension that realises $Gap(t)$. The number of ones in any ball, $f$, follows a Binomial($D$, $q$) distribution and has finite second moment. Using an analysis similar to that for Theorem~\ref{thm-pot-up-bound-beta}, we get (proof omitted for brevity) $\EE[ e^{\alpha s_a} ] \le n\delta$, where $\alpha f^* \le \epsilon/2$. So,
$\EE[ e^{\alpha(x_a^m + \sum_{d \ne m} x_a^d)} ] \le n\delta$.
Taking, logarithm of both sides, we get:
\begin{equation}\label{eq:sum-bound-varf}
\begin{aligned}
\EE[x_a^m] + \sum_{d \ne m} \EE[x_a^d] & \le \log(n)/\alpha + \log(\delta)/\alpha\\
\Rightarrow \EE[l_a^m] + \sum_{d \ne m} \EE[l_a^d] - D\,\frac{mq}{n} & \le \log(n)/\alpha + \log(\delta)/\alpha
\end{aligned}
\end{equation}
In the second inequality, $l_a^d$ denotes the load in dimension $d$ for bin $a$, and we use the fact that the average load in each dimension is $\frac{mq}{n}$. If $k$ balls were thrown into bin $a$, then $\EE[l_a^m] = kq$ and, similarly, $\sum_{d \ne m} \EE[l_a^d] = (D-1)kq$. Hence, we get:
\begin{equation}\nonumber
\begin{aligned}
Dkq & \le 2f^*\log(n)/\epsilon + 2f^*\log(\delta)/\epsilon + mf^*/n\\
\Rightarrow k & \le 2\log(n)/\epsilon + 2\log(\delta)/\epsilon + m/n\\
\Rightarrow \EE[x_a^m] & \le 2\log(n)/\epsilon + 2\log(\delta)/\epsilon + \frac{m}{n}(1 - q)
\end{aligned}
\end{equation}
The probabilistic bound can be computed similarly to the fixed $f$ case (Theorem~\ref{thm-fixed-f-beta}).
\end{proof}
\subsection{Lower Bound}
We can show that the upper bound for the fixed $f$ case with uniform distribution, proved in Section~\ref{sec:upper-bound-beta}, is tight to within a $D/f$ factor. Consider the case when $an \log(n)/\beta^2$ balls are thrown into $n$ bins using the $(1 + \beta)$ choice process. The expected dimensional sum load per bin is $af\log(n) / \beta^2$. Now, the expected number of balls placed by the single choice component of the $(1+\beta)$ process is $an(1-\beta)\log(n)/\beta^2$. Raab and Steger~\cite{simple-analysis-bb} show that when $cn\log(n)$ balls are thrown uniformly at random into $n$ bins, the load of the most loaded bin is at least $(c + \sqrt{c}/10)\log(n)$ balls. Using $c = a(1-\beta)/\beta^2$, one can see that the sum load in the maximum sum load bin is at least:
\begin{equation}\nonumber
\begin{aligned}
& \left(\frac{a(1-\beta)}{\beta^2} + \frac{\sqrt{a(1-\beta)}}{10\beta} \right) f\log(n)\\
& = \left(\frac{a}{\beta^2} + \frac{\sqrt{a(1-\beta)} - 10a}{10\beta} \right) f\log(n)
\end{aligned}
\end{equation}
Since each ball has $f$ populated dimensions, there are at least $\Omega(\log(n)/\beta + a\log(n)/\beta^2)$ balls in this maximum sum load bin. Since in each ball the $f$ populated dimensions are uniformly distributed over the $D$ dimensions, there exists a dimension whose load is at least $\Omega(f\log(n)/(D\beta))$ more than the average. Hence, the lower bound is $\Omega(f\log(n)/(D\beta))$.
\REM
\subsection{Bound for Multiple Choice Process}
We consider the witness tree based analysis~\cite{vocking-tree} for the multidimensional balls and bins. Consider selection of bins using the $d$ choice ($d \ge 2$) process and assign an md-ball to the md-bin that has the least total dimensional load. Now, first consider the witness tree construction~\cite{vocking-tree} with unique balls in the tree. One can see that the root node at height $L+4$ corresponds to the total dimensional load for that bin (containing this ball) as $(L+4)f$. This root node would have $d$ children that have height $L+3$ and total dimension load for the corresponding bins as at least $(L+3)f$. This is so, since each ball has exactly $f$ ones in it and these children balls would be there in the $d$ chosen bins during bin selection by the root node (ball). Continuing in this fashion, one can see that this witness tree will have leaves that represent balls at height $3$ and total dimension load of the corresponding bin as at least $3f$. Thus, one can say that the "bad" event is that there exists
an activated witness tree with distinct balls at any fixed time $t$ such that the total dimensional load of the root is $f(L+4)$. The probability of such a event is bounded by $n2^{-d^L}$ (~\cite{vocking-tree}). Choosing, $fL \ge f\log_2\log_d(n) + f\log_d(1+c)$, the probability that the maximum total dimensional load across all bins exceeds $O(f(\log\log(n))$ is bounded by $1 - 1/n^c$. Similar argument can be made (for bound on maximum total dimensional load across all bins) for the full and the pruned witness trees~\cite{vocking-tree}.
Now, assume that bin $a$ has the dimension, say, $m$, that has the maximum gap ($Gap(t)$) across all dimensions. The total dimensional sum for this bin, $a$, is also bounded by $O(f\log\log(n))$ with high probability. Now, since each ball has exactly $f$ ones, the number of ones over all dimensions except $m$, is at least $(f-1)\log\log(n)$, therefore, the maximum load in the dimension $m$ is bounded by $O(\log\log(n))$. For, this dimension the minimum load in any bin can be $0$, hence, the maximum gap across any dimension is bounded by $O(\log\log(n))$.
}
\section{Introduction}
\label{sec:intro}
Balls-into-bins processes serve as a useful abstraction for resource balancing tasks in distributed and parallel systems. Assume $m$ balls are to be put sequentially into $n$ bins, where typically the goal is to minimize the load, measured by the number of balls, in the most loaded bin. In the classic \textit{single choice} process each ball is placed in a bin chosen independently and uniformly at random. For the case of $n$ bins and $m = n$ balls it is well known that the load of the heaviest bin is at most $(1 + o(1)) \frac{\ln (n)}{\ln\ln (n)}$ balls with high probability (w.h.p.). Further, if $m > n\ln (n)$ then the load of the heaviest bin is at most $\frac{m}{n} + \sqrt{\frac{m\log(n)}{n}}$~\cite{simple-analysis-bb}.
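For intuition, the single choice process is straightforward to simulate (a minimal sketch of ours, not taken from any cited work; the function name is our own):

```python
import random

def single_choice_loads(m, n, seed=0):
    """Throw m balls into n bins, each bin chosen independently and uniformly;
    return the list of bin loads."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(m):
        load[rng.randrange(n)] += 1
    return load
```

For $m$ well above $n\ln(n)$, the maximum of the returned loads concentrates around $m/n + \sqrt{m\log(n)/n}$, matching the bound quoted above.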
An interesting and substantial decrease in the maximum load is achieved by the use of the \textbf{\textit{multiple choice}} paradigm (also referred to as the \textbf{$d$ choice} paradigm), given as: Let $Greedy(U,d)$ denote the algorithm where each ball is inserted into the least loaded among $d \ge 2$ bins sampled independently from $U$, where $U$ denotes the uniform distribution over the bins. In a seminal paper Azar et al.~\cite{bal-alloc-azar} proved that when $m = n$ and the balls are inserted by $Greedy(U,d)$, the heaviest bin has a load of $\frac{\ln\ln(n)}{\ln (d)} + \Theta(1)$ w.h.p. The case $d = 2$ was proved by Karp et al. in~\cite{karp-pram-sim}, and was later generalized by Berenbrink et al.~\cite{petra-heavy-case} to prove the following:
\begin{theorem}
Let $\gamma$ denote a suitable constant. If $m$ balls are allocated into $n$ bins using $Greedy(U,d)$ with $d \ge 2$, then the number of bins with load at least $\frac{m}{n} + i + \gamma$ is at most $n\exp(-d^i)$ with probability at least $1 - 1/n$. (\cite{petra-heavy-case})
\end{theorem}
An immediate corollary is that w.h.p. the heaviest bin has a load of $\frac{m}{n} + \frac{\log\log(n)}{\log(d)} + O(1)$. Thus, the additive gap between the maximum load and the average load is \textit{independent} of the number of balls thrown.
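A direct simulation of $Greedy(U,d)$ makes this contrast with the single choice process easy to observe (again our own sketch; function and parameter names are ours):

```python
import random

def greedy_gap(m, n, d, seed=0):
    """Run Greedy(U, d): each ball samples d bins independently and uniformly
    (with replacement) and joins the least loaded of them.
    Returns the gap: maximum load minus the average load m/n."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(m):
        best = min((rng.randrange(n) for _ in range(d)), key=lambda i: load[i])
        load[best] += 1
    return max(load) - m / n
```

For $d \ge 2$ the returned gap stays small even as $m$ grows, whereas for $d = 1$ it grows roughly like $\sqrt{m\log(n)/n}$, in line with the bounds quoted above.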
The multiple-choice paradigm and balls-and-bins models have several interesting applications. In particular, the two-choice paradigm can be used to reduce the maximum time required to search a hash table. If, instead of using a single perfectly random hash function as in a typical hash table implementation (with maximum chain length $O(\ln(n))$), we use two perfectly random hash functions, then the length of the longest chain reduces to $O(\ln\ln(n))$. In efficient PRAM simulation, the two-choice paradigm helps in reducing the contention~\cite{pram-sim-meyer} of processors accessing the same memory (DRAM). Further, the two-choice scheme can be advantageous in situations where, for example, one hopes to fit one full chain in a single cache line~\cite{lookup-mm}. The multiple-choice approach has also proven useful in online (dynamic) assignment of tasks to servers (disk servers, network servers, etc.). By using multiple choices one gets a much better load balance across the servers as compared to the single-choice approach.
In many practical problems, the underlying data can be \textit{multidimensional}. This is especially true for parallel data mining and machine learning problems, where the input data has many dimensions. In text search, for example, the distinct words in the document set can be considered as the dimensions, and the total number of dimensions equals the size of the vocabulary, which can run into millions of words. Because the collection of pages to be indexed is so large, it has to be split among $n$ servers. When a user makes a query to a front-end machine, the query is sent to all $n$ servers; results are returned to the front-end machine for merging and presentation. Hence, the time to serve the query is determined by the slowest of the servers, the critical process. The time for each server to process a one-word query is roughly proportional to the number of documents at that server containing the word of interest. Thus, to achieve better efficiency, it is necessary to split the documents among the servers in such a way that the number of documents containing a given word is roughly equal across servers.
Further, many application domains, such as telecommunications and finance, also involve a huge number of dimensions, such as genres and sub-genres of songs and videos for collaborative filtering~\footnote{http://en.wikipedia.org/wiki/Collaborative\_filtering} type correlational analysis between users. Here, one would like to predict what type of item (song or video) a user could prefer based on his inferred relationship with other similar users. Due to the \textit{massive size} of the multidimensional data in such distributed data mining and machine learning problems, one needs to devise \textit{online load balancing} algorithms.
While dimensionality reduction techniques can reduce the total number of dimensions to work on, even then one needs to handle data with a large number of dimensions. Further, this data is highly sparse, i.e., the number of filled entries in the \textit{(user $\times$ item)} matrix is a small fraction of the total possible entries. Thus, distributed data mining and machine learning (for example, in cloud computing environments) suffer from severe scalability and parallel efficiency issues due to huge load imbalance across the machines in the compute cluster (cloud). Hence, there is a strong need to address load balancing for multidimensional datasets.
\subsection{Probability Distribution for Bin Selection}
\label{subsec:prob}
The $d$-choice scheme can be characterized by a probability vector $p = (p_1, p_2, p_3, \ldots, p_n)$, where $p_i$ denotes the probability that a ball falls into the $i^{th}$ most loaded bin. Here, the bins are ordered from the most loaded to the least loaded (ties are broken arbitrarily). Then $p_1$ denotes the probability that the most loaded bin receives the current ball, $p_2$ denotes the probability that the second bin in this order receives the ball, and so on. In general, in the $d$-choice scheme, $p_i = (\frac{i}{n})^d - (\frac{i-1}{n})^{d}$. For $d = 1$, $\forall i: p_i = 1/n$, while for $d > 1$, $p_i > p_j$ for $i > j$. Thus, for $d > 1$, the process is biased towards the lighter bins. This biasing leads to an overall lower gap for the $d > 1$ choice process as compared to the single choice ($d = 1$) process.
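The bias of this probability vector is easy to verify numerically (a small sketch in our own notation, using exact rational arithmetic so the checks are exact):

```python
from fractions import Fraction

def d_choice_probabilities(n, d):
    """p_i = (i/n)^d - ((i-1)/n)^d for i = 1..n: the probability that the
    i-th most loaded bin receives the current ball under symmetric d-choice."""
    return [Fraction(i, n)**d - Fraction(i - 1, n)**d for i in range(1, n + 1)]
```

For $d = 1$ every entry equals $1/n$; for $d > 1$ the entries strictly increase with $i$ (lighter bins are favoured), and the vector always sums to $1$.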
\par In this paper, we consider the multidimensional variant of the balls and bins problem. One multidimensional variant, proposed by~\cite{md-mm}, is as follows: Consider throwing $m$ balls into $n$ bins, where each ball is a uniform $D$-dimensional $0$-$1$ vector of weight $f$. Here, each ball has exactly $f$ non-zero entries chosen uniformly among all $\binom{D}{f}$ possibilities (Fig.~\ref{fig:md-balls-bins} in Appendix~\ref{app:fig}). The average load in each dimension for each bin is $mf/(nD)$. Let $l(a,b)$ be the load in dimension $a$ for the $b^{th}$ bin. The gap in a dimension (across the bins) is given by $gap(a) = \max_b l(a,b) - avg(a)$, where $avg(a)$ is the average load in dimension $a$. The maximum gap across all the dimensions, $\max_a gap(a)$, then determines the load balance across all the bins and dimensions.
Thus, for the multidimensional balanced allocation problem, the objective is to minimize the maximum gap (across any dimension). We refer to the multidimensional ball as \textit{md-ball} and the multidimensional bin as \textit{md-bin}.
In another variation of multidimensional balanced allocation, the constraint of uniform distribution of the populated entries is removed. Here again, each ball is a $D$-dimensional $0$-$1$ vector with exactly $f$ populated dimensions, but these populated dimensions can follow an arbitrary distribution. In the third variation, the most general of the three, the number of populated dimensions, $f$, may differ across the balls; $f$ is then a random variable with an appropriate distribution.
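The first and third ball models can be sketched as follows (our own helper names; the third variant is shown with the binomial distribution used later in the analysis):

```python
import random

def ball_fixed_uniform(D, f, rng):
    """Variant 1: exactly f ones, with the f populated dimensions drawn
    uniformly from all C(D, f) possibilities."""
    ball = [0] * D
    for d in rng.sample(range(D), f):
        ball[d] = 1
    return ball

def ball_variable(D, q, rng):
    """Variant 3: each dimension is set to 1 independently with probability q,
    so the number of ones is Binomial(D, q) with mean f* = D * q."""
    return [1 if rng.random() < q else 0 for _ in range(D)]
```

Variant 2 (fixed $f$, arbitrary distribution of the populated dimensions) differs from variant 1 only in how the $f$-subset of dimensions is drawn.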
Mitzenmacher et al.~\cite{md-mm} addressed both the single choice and the $d$-choice paradigm for multidimensional balls and bins under the assumption that balls are uniform $D$-dimensional $(0,1)$ vectors, where each ball has exactly $f$ populated dimensions. They show that the gap for multidimensional balls and bins, using the two-choice process, is bounded by $O(\log\log(nD))$. However, this result is not tight and assumes that $f$ is $polylog(n)$. Due to the arbitrary number of dimensions and the resulting discrepancy across the dimensions, along with the general case of $m \gg n$, balanced allocation for multidimensional balls and bins is a challenging problem. In this paper, we compute bounds on the gap for the symmetric $d$-choice process for multidimensional balls and bins in both sequential and parallel scenarios.
\subsection{Summary of Key Results \& Techniques}
\label{subsec:key}
\par We present a detailed analysis of online sequential and parallel multidimensional balls and bins using the symmetric $d$-choice process and show that for $n$ bins and $m = O(n)$ balls, the gap achieved (assuming that exactly $f$ populated dimensions are uniformly distributed over $D$ per ball) is $O(\ln\ln(n))$. We establish the first known bound for the $d$-choice process and also show that this bound is tight (within a $D/f$ factor) by providing a matching lower bound. This improves upon the best prior bound of $O(\log\log(nD))$~\cite{md-mm}. For the general case of $m \gg n$, the upper bound on the gap is $O((\frac{mf}{nD})^{(1/2+\zeta)}\ln\ln(n))$ w.h.p., while the expected gap is still $O(\ln\ln(n))$. For a non-uniform distribution with fixed $f$, and for variable $f$ with a binomial distribution, we show that the expected gap is still independent of $m$.
\par In order to arrive at these results, a novel generic potential function based approach, together with the \textit{sum load} across the dimensions per bin, is used. This is much more challenging than the analysis presented by~\cite{kunal-beta} for the $(1+\beta)$-choice process, as we obtain a much tighter bound of $O(\ln\ln(n))$ (as compared to $O(\frac{\log(n)}{\beta})$ in~\cite{kunal-beta}). This requires a novel potential function as well as a much tighter analysis in each lemma to ensure that the expected value of the potential function is less than $O(\ln(n))$ at all times $t$ and satisfies the super-martingale property.
\par For parallel multidimensional balls and bins with multiple rounds using the $d$ choice process, we show an upper bound on the gap of $O(\log\log(n))$ for $m = O(n)$ balls, and extend this bound to the general case of $m \gg n$. This is tighter than the $O(\log\log(nD))$ that can be obtained using an analysis similar to~\cite{md-mm}.
\par For the weighted and heavy case ($m \gg n$) using the symmetric multiple choice sequential process for scalar balls, we prove an upper bound of $O(W^*\log(n))$ (where $W^*$ is the expected weight of the distribution), which improves upon the best prior bound of $O(n^c)$~\cite{kunal-weighted}. Our analysis technique also provides an alternate proof for the symmetric $d$-choice process with scalar unweighted $m \gg n$ balls into $n$ bins that is simpler and more elegant than~\cite{petra-heavy-case}.
\par Further, we present an analysis of the bounds on the gap for the $(1+\beta)$ choice multidimensional process and prove that for $m = O(n)$, the upper bound on the gap is $O(\frac{\log(n)}{\beta})$ w.h.p. for uniform distribution of $f$ dimensions over the $D$ dimensions. For a non-uniform distribution with fixed $f$, and also for variable $f$, the expected gap is $O(\frac{\log(n)}{\beta})$, which is independent of $m$. Table~\ref{table-comp} summarizes the comparison between our upper bounds and the best known prior bounds, with key results highlighted.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
\textbf{Process: $d$-choice} & \textbf{Best Prior Bound} & \textbf{Our Bound}\\
\hline
Multidim, Fixed-$f$, $m=O(n)$ & $O(\log\log(nD))$~\cite{md-mm} & $O(\ln\ln(n))$\\
\hline
\textbf{Multidim, Fixed-$f$}, $m \gg n$ & None & $O(\ln\ln(n))$ (expected)\\
\hline
\textbf{Multidim, Var-$f$} & None & $O(\log(n))$ (expected)\\
\hline
\textbf{Weighted Scalar}, $m \gg n$ & $O(n^c)$ (short memory via coupling~\cite{kunal-weighted}) & $O(\log(n))$\\
\hline
Unweighted Scalar, $m \gg n$ & $O(\ln\ln(n))$ (using layered induction & $O(\ln\ln(n))$\\
& and short memory~\cite{petra-heavy-case}) & (using potential function)\\
\hline
Parallel Multidim, $m=O(n)$ & $O(\log\log(nD))$ (adaptation of~\cite{md-mm}) & $O(\ln\ln(n))$\\
\hline
\textbf{Parallel Multidim}, $m \gg n$ & None & $O(\ln\ln(n))$ (expected)\\
\hline
Parallel Scalar & $O(\ln\ln(n))$ (for $m=O(n)$~\cite{adler95}) & $O(\ln\ln(n))$ (for $m \gg n$)\\
\hline
\hline
\textbf{Process: $(1+\beta)$-choice} & \textbf{Best Prior Bound} & \textbf{Our Bound}\\
\hline
Multidim, Fixed-$f$, $m=O(n)$ & None & $O(\frac{\log(n)}{\beta})$\\
\hline
\textbf{Multidim, Fixed-$f$}, $m \gg n$ & None & $O(\frac{\log(n)}{\beta})$ (expected)\\
\hline
\textbf{Multidim, Var-$f$} & None & $O(\frac{\log(n)}{\beta})$ (expected)\\
\hline
\end{tabular}
\caption{Upper Bound Comparison for $d$-choice and $(1+\beta)$ Process}
\label{table-comp}
\end{center}
\end{table}
\section{Related Work}
\label{sec:related}
Balls into bins is a well studied abstraction for load balancing problems. Numerous results are known for the sequential (single dimensional) allocation case when $m$ balls are thrown into $n$ bins, such as: for the multiple choice paradigm, the expected gap between the heaviest bin and the average load is $O(\frac{\log\log(n)}{\log(d)})$~\cite{petra-heavy-case}; for the $(1+\beta)$ choice paradigm, the gap is $O(\frac{\log(n)}{\beta})$~\cite{kunal-beta}; and for the single choice paradigm, the gap is $O(\sqrt{\frac{m\log(n)}{n}})$~\cite{mm-thesis}.~\cite{bal-alloc-azar} showed that the bound of $O(\frac{\log\log(n)}{\log(d)})$ for the symmetric $d$ choice process is stochastically optimal, i.e., the allocation produced by any other greedy approach that uses the placement information of the previous balls to place the current ball majorizes theirs. However, if the alternatives are drawn from different groups, then different tie-breaking rules result in different allocations.~\cite{vocking-tree} presents such an \textit{asymmetric} strategy and, using a witness tree based analysis, proves that this improves the load balance to $O(\frac{\log \log(n)}{d\log(\phi_d)})$ w.h.p., where $\phi_2$ is the golden ratio and $\phi_d$ is a simple generalization.
\par The multiple choice and in particular the two-choice paradigm and balls-and-bins models have several interesting applications.
In particular, the two-choice paradigm can be used to reduce the maximum search time in a hash table. Instead of using a single perfectly random hash function as in a typical hash table implementation (with maximum chain length as $O(\ln(n))$), if we use two perfectly random hash functions, then the length of the longest chain reduces to $O(\ln\ln(n))$. In the latter case, when inserting a key, we apply both hash functions to determine the two possible table entries where the key can be inserted. Then, of the two possible entries, we add the key to the shorter of the two chains. To search for an element, we have to search through the chains at the two entries given by both hash functions. If $n$ keys are sequentially inserted into the table, the length of the longest chain is $O(\log\log n)$ with high probability, implying that the maximum time needed to search the hash table is $O(\log\log n)$ with high probability.
Further, the two-choice scheme can be advantageous in situations where, for example, one hopes to fit one full chain in a single cache line~\cite{lookup-mm}. The two-choice approach has also proven useful in online (dynamic) assignment of tasks to servers (disk servers, network servers, etc.). By using two choices one gets a much better load balance across the servers as compared to the single-choice approach. If we use the $(1+\beta)$ choice process, then we get a gap of around $\log(n)$ (as compared to the $O(\log\log(n))$ gap for two-choice), but the communication cost to query the load of the servers is only a $(1 + \beta)/2$ fraction of that of the two-choice approach.
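The $(1+\beta)$ process mentioned above can be sketched as follows (our own code, following the standard description in~\cite{kunal-beta}: each ball uses two choices with probability $\beta$ and a single uniform choice otherwise):

```python
import random

def one_plus_beta_gap(m, n, beta, seed=0):
    """(1+beta)-choice: with probability beta the ball joins the lesser loaded
    of two uniformly sampled bins, otherwise a single uniform bin.
    Returns the gap: maximum load minus the average m/n."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(m):
        i = rng.randrange(n)
        if rng.random() < beta:  # with prob. beta, take a second sample
            j = rng.randrange(n)
            if load[j] < load[i]:
                i = j
        load[i] += 1
    return max(load) - m / n
```

Setting `beta=0` recovers the single choice process and `beta=1` recovers two-choice, which is the trade-off between gap and query cost described above.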
Cole et al.~\cite{routing-cole} show that the two-choice paradigm can be applied effectively in a different context, namely, that of routing virtual circuits in interconnection networks with low congestion. They show how to incorporate the two-choice approach to a well-studied paradigm due to Valiant for routing virtual circuits to achieve significantly lower congestion.
Kunal et al.~\cite{kunal-beta} show that for the online sequential $(1+\beta)$ choice process with $n$ bins and $m \gg n$ balls, a tight gap of $O(\frac{\log(n)}{\beta})$ can be obtained. They use a potential function based technique and further use a majorization argument to generalize their result. We present a novel generic potential function based approach with a \textit{sum load} function across all dimensions of a bin for multidimensional balls and bins, and obtain tight bounds on the gap for the $d$-choice process in both sequential and parallel scenarios. Our analysis is much more challenging than that of~\cite{kunal-beta}, since we prove a tighter bound, which requires a much tighter analysis in each lemma to show that the expected value of the potential function is less than $O(\ln(n))$ at all times $t$. Further, the lower and upper bounds for the $(1+\beta)$ choice process with multidimensional balls and bins are also provided in this paper.
\par Mitzenmacher et al.~\cite{md-mm} address both the single choice and $d$-choice paradigm for multidimensional balls and bins under the assumption that balls are uniform $D$-dimensional $(0,1)$ vectors, where each ball has exactly $f$ populated dimensions. They show that the gap for multidimensional balls and bins, using the two-choice process, is bounded by $O(\log\log(nD))$. We provide a better bound on the gap ($O(\log\log(n))$) and also provide a bound for the general case of $m \gg n$. Further, while~\cite{md-mm} assumes that $f$ is $polylog(n)$, we make no such assumption. For the multiple round multidimensional parallel balls and bins process, where in each round each bin accepts at most a single ball, one can use a layered induction based proof~\cite{md-mm} to get a similar bound of $O(\log\log(nD))$ on the gap. Using our novel potential function based analysis, we show a tighter upper bound of $O(\log\log(n))$. The bound for the general case of $m \gg n$ is also provided.
\par Berenbrink et al.~\cite{petra-heavy-case} prove an upper bound of $O(\log\log(n))$ for the general case of $m \gg n$ balls and $n$ bins using a sophisticated analysis involving two main steps. In the first step, they show that when the number of balls is polynomially bounded by the number of bins, the gap can be bounded by $O(\ln\ln(n))$, using the concept of layered induction and some additional tricks. In particular, they consider the entire distribution of the bins in the analysis (while in the typical $m = O(n)$ case the bins with load smaller than the average can be ignored). In the second step, they extend this result to the general $m \gg n$ case by showing that the multiple-choice processes are fundamentally different from the classical single-choice process in that they have \textit{short memory}. This property states that given some initial configuration with gap $\Delta$, after adding $poly(n)$ more balls the initial configuration is \textit{forgotten}. The proof of the short memory property is done by analyzing the mixing time of the underlying Markov chain describing the load distribution of the bins. The mixing time is studied via a new variant of the coupling method (called \textit{neighboring coupling}). We prove the same $O(\log\log(n))$ bound on the gap for the symmetric $d$ choice process with $m \gg n$, but by using a much simpler and more elegant potential function based approach.
\par Kunal et al.~\cite{kunal-weighted} prove that for weighted balls (weight distribution with finite fourth moment) and $m \gg n$, the expected gap is independent of the number of balls and is less than $n^c$, where $c$ depends on the weight distribution. They first prove the weak gap theorem, which says that w.h.p. $Gap(t) < t^{2/3}$. Since in the weighted case the $d$ choice process is not dominated by the one choice process, they prove the weak gap theorem via a potential function argument. Then, the \textit{short memory theorem} is proved. While in~\cite{petra-heavy-case} the short memory theorem is proven via coupling,~\cite{kunal-weighted} uses similar coupling arguments but defines a different distance function and uses a sophisticated argument to show that the coupling converges.~\cite{kunal-weighted} also presents a reduction from the real-weighted case to the integer-weighted case. We present results for the weighted case (with integer and real weights, and weight distributions with finite second moment) using an elegant and much simpler potential function based argument, and show that the gap for arbitrary $m \gg n$ is bounded by $O(W^*\log(n))$, where $W^*$ is the expected weight of the distribution.
Adler et al.~\cite{adler95} consider parallel balls and bins with multiple rounds. They prove an $O(\frac{\log\log(n)}{\log(d)})$ bound on the gap (for $m = O(n)$) using $O(\frac{\log\log(n)}{\log(d)} + d)$ rounds of communication. We generalize this result to the case of parallel multidimensional balls and bins and arbitrary $m \gg n$ balls, with a similar bound on the gap.
\input{md-bb.dchoice.gen.techrep.tex}
\input{md-bb.oneplusbeta.gen.techrep.tex}
\input{md-bb.concls.techrep.tex}
\section{Introduction}
In the quest to understand the processes governing planetary system development, the peculiar case of short-orbit gas giants (i.e., Jovian and sub-Jovian exoplanets orbiting their star with periods of less than a few weeks, which we will refer to as hot Jupiters or HJs hereafter) is a real challenge, as classical theories describing their formation and evolution do not predict their presence in the vicinity of their parent star. Although they represent a significant fraction of all exoplanets discovered (between 10 and 15 per cent\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/}}), their true occurrence rate is estimated to be around 1 per cent for mature Solar-type stars \citep{Wright2012}. Even though this discrepancy can be explained by observational biases, their scarcity raises the question of the formation channel generating this population.\par
In the most accepted explanation, future HJs form in the colder region of the protoplanetary disk (beyond a few au) and later experience orbital decay, eventually reaching a close-in orbit. Two migration mechanisms are proposed: gas disk migration (see \citealt{Baruteau2014} for a review), where the planet migrates inward as the result of angular momentum exchange between the gas giant and the disk, and high-eccentricity tidal migration. In the latter scenario, the planet is sent onto a highly eccentric orbit following a strong perturbation (planet-planet scattering, e.g. \citealt{Chatterjee2008}, or secular interactions; see \citealt{Beauge2012}, \citealt{Petrovich2016}, \citealt{Petrovich2015} and \citealt{Hamers2017} for the different proposed mechanisms). The planet now being close enough to the star at periastron, tidal forces exerted by the star act to circularise its orbit.\par
Competing with migration theories is in-situ formation, where the HJ forms in the vicinity of the host star and remains in a close orbit. This explanation has historically been rejected, as it sets restrictive constraints on the inner stellar disk: there must be enough material available to form the cores of these gas giants, and the core-formation process must be completed before the star depletes all the gas from the area, so that the future HJ can successfully accrete its gaseous envelope. Due to these constraints, in-situ formation is unlikely to occur according to the \emph{Solar nebula theory}, which assumes a disk composition similar to the one that gave birth to our Solar system \citep{Perryman2011}. Now realising that our Solar system may be far from the norm in the great diversity of planetary systems, in-situ formation has come back under the spotlight \citep{Batygin2015,Boley2015}. Recent studies, such as \cite{Bailey2018} or \cite{Dawson2018}, suggest that HJs could have a different origin in different systems and/or that a combination of the proposed mechanisms could be at play. \par
In their review paper, \cite{Dawson2018} propose to test the different theories by searching for correlations between properties of HJs and their parent stars. Among the 15 studied properties, two are flagged as requiring further observations: HJ obliquities and host star ages. This paper focuses on the latter.\par
Studying young stars is a privileged approach, as it would help to discriminate between early-stage mechanisms, such as in-situ formation or gas disk migration, and more prolonged, late-stage mechanisms, like high-eccentricity migration. However, \cite{Dawson2018} warn that HJs driven by high-eccentricity migration could arrive in close-in orbits fairly early in the system's formation, showing that the dependence of these mechanisms on stellar age is not yet completely clear. Therefore, young stars (< 20-50 Myr), and even more so, younger (< 10 Myr) low-mass (< 3 $M_{\odot}$) T Tauri pre-main-sequence (PMS) stars as defined in \citet{1989A&ARv...1..291A}, are probably the best candidates. \par
Unfortunately, with the exception of direct imaging surveys, the youngest stars are typically avoided when searching for exoplanets, as they exhibit particularly strong intrinsic variability, or stellar activity. For such stars, this activity-induced correlated noise results primarily from surface brightness features, linked to complex internal processes and a strong magnetic field. Surface features yield spurious radial velocity (RV) signatures that generally mask exoplanet signatures completely, hence preventing their discovery. Additionally, \citet{Nava2019} showed that activity can generate unexpected spurious peaks in a periodogram analysis, which could lead to false positives if no adequate treatment of the activity is applied. \par
Filtering, or mitigating, this stellar activity becomes crucial if one hopes to find traces of exoplanets orbiting young active stars. It is also important to note that effective activity-mitigating strategies are key in the search for Earth-sized planets around less active stars. In those cases, both the activity level and the planetary signature are up to 2 orders of magnitude smaller, presenting a similar situation in relative terms, although it differs slightly as additional phenomena are also at play (e.g. granulation, pulsations). The exoplanet community is actively trying to develop and assess these strategies (see \citealt{cabot20}).\par
Available data on planets with periods of less than 15\,days orbiting very young stars (< 50 Myr) are very scarce. Six planet-hosting stars have been found from transits \citep{David2016,David2019,Newton2019,Rizzuto_2020,Plavchan2020,bouma20} and three from RV searches: CI Tau b \citep{Johns-Krull2016,Flagg2019}, V830 Tau b \citep{Donati2015,Donati2016,Donati2017} and TAP 26 b \citep{Yu2017b}. Recently, however, the existence of both V830 Tau b and CI Tau b has been challenged by \citet{Damasso2020} and \citet{Donati2020}. V830 Tau b and TAP 26 b were found by the MaTYSSE (Magnetic Topologies of Young Stars and the Survival of massive close-in Exoplanets) observation programme in a sample of 33 weak-line T Tauri stars \citep{Yu2017a}. If real, these two planets would indicate a fraction of HJs as high as 6 per cent for newly born stars. In this context, it is crucial to carry on the search for close-in gas giants around young stars to better estimate their occurrence rate at that stage. \par
In this paper, we investigated the case of searches for short-period gas giants orbiting very young and active stars solely using RV data. More specifically, we injected various RV signatures mimicking single circular planet systems behind real data of the young active G dwarf HD 141943 (not known to host a massive planetary companion) and assessed our detection limits using two distinct strategies: Doppler Imaging (DI) activity filtering (Section~\ref{subsec:Method1}) and Gaussian Process (GP) Regression (Section~\ref{subsec:method_GP}). \par
Although already used in the past (DI + GP in \citealt{Donati2016,Donati2017,Yu2017b,Yu2019,Klein2020}, and GP in most exoplanet searches of the past few years), the respective performance of these two methods on legacy datasets has not been tested. We note that the underlying data were not optimised to search for exoplanets and were obtained using a non-stabilised spectrograph (e.g. with $\approx$50-100\,m\,s$^{-1}$ uncertainty on radial velocities). The limitations we describe should therefore be significantly improved upon with RV-stabilised datasets. However, they provide a strong baseline for what is achievable and are useful to investigate other datasets of this nature already available (e.g. in the Bcool \citep{Marsden2014} or TOUPIES \citep{Folsom2016,Folsom2018} surveys). We compared our results to the planet 'hide and seek' study done on the same star with no specific treatment of stellar activity \citep{Jeffers2014}.\par
This paper is organised as follows. Details on the techniques used to reduce the data, more specifically to get from raw spectra to radial velocities, are given in Section~\ref{sec:DataAnlaysis}. We then cover the methods addressing stellar variability in Section~\ref{sec:filtering}. Section~\ref{sec:maindataset} focuses on our reanalysis of HD 141943's raw dataset. Section~\ref{sec:simulations} explains how we set up our simulated datasets, and results from the analysis are laid out in Section~\ref{sec:Results}. Finally, we give our conclusions and future prospects in Sections~\ref{sec:Conclusions} and~\ref{sec:futurework}.
\begin{figure}
\includegraphics[width=\columnwidth]{plots/Diagram_technique.pdf}
\caption{Diagram of the data analysis procedure, from raw spectra to periodic signature identification. Each block is a step of the process. Bold text and the associated numbers in parenthesis respectively indicate the method used to progress from one block to the next and the section of this paper detailing the corresponding process. Each point on the bottom plot results from the analysis of a single spectrum using the entire procedure described here. }
\label{fig:diagram}
\end{figure}
\section{Data analysis}
\label{sec:DataAnlaysis}
\subsection{From spectra to line profiles}
\label{sec:spectratoline}
Both methods we utilised to disentangle stellar activity from planetary signals take radial velocity time series as input. The extraction of RV values from raw stellar spectra was performed by finding the centroid (described in Section~\ref{subsec:RV}) of a `mean line profile' obtained using Least-Squares Deconvolution (LSD, \citealp{Donati1997}, \citealp{Kochukhov2010}).
LSD convolves an observed stellar spectrum with a spectral line mask. Given an appropriate mask, the result is a `mean line profile' with an enhanced peak signal-to-noise ratio (S/N), exhibiting stellar activity induced line features. We chose the stellar mask best matching our star from the list of masks designed in the scope of the Bcool survey \citep{Marsden2014} using VALD\footnote{\url{http://vald.astro.uu.se/}}, for a star with an effective temperature of $T_{\mathrm{eff}}$ = 6000 K, a surface gravity of log~$g$ = 4.5 and [$\mathrm{Fe/H}$] = +0.2. Only spectral lines deeper than 20 per cent of the maximum line depth were kept for the LSD computation, yielding a total of 4097 lines. The outcome was a S/N increase from $\approx$ 50-100 for the observed spectra (depending on the spectrum and spectral order considered) to $\approx$ 1000 for the LSD mean line profiles.
\subsection{From line profiles to radial velocities}
\label{subsec:RV}
Classically, each RV is taken to be the mean of a Gaussian profile fitted to the obtained line profile. However, for active stars, the distortion of the line, here its `flat bottom' (see the centre plot of Figure~\ref{fig:diagram}), shows that a Gaussian fit is not suitable. We considered two alternatives. \par
Firstly, we chose a Generalised Normal Distribution (GND, \citealt{Nadarajah2005}), as shown in green on the central plot of Figure~\ref{fig:diagram} and described by the following p.d.f:
\begin{equation}
\mathrm{GND}(x) = \frac{\beta}{2\sigma\Gamma\left(\frac{1}{\beta}\right)}\exp\left(-\left|\frac{x-\mu}{\sigma}\right|^{\beta}\right)
\end{equation}
where $\Gamma$ denotes the gamma function, $\mu$ the position parameter (mean), $\sigma$ the scale parameter and $\beta$ the shape parameter. $\beta < 2$ results in wings more extended than a normal distribution and a sharper distribution peak. When $\beta = 2$, the GND becomes a Gaussian distribution (with standard deviation $\sigma/\sqrt{2}$). For $\beta > 2$, the distribution yields wings less extended than a normal distribution and tends to a uniform distribution as $\beta\to\infty$. This grants more flexibility to the distribution, resulting in a better fit to broadened profiles. Error bars on the GND parameters are given by the fitting method. The centroid $\mu$ and its associated error bars for each LSD profile constituted our RV time series. \par
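As an illustration of this centroid extraction, a GND-shaped absorption line can be fitted by least squares; the sketch below uses a synthetic flat-bottomed profile with purely hypothetical parameter values (not the HD 141943 data), and a least-squares fit rather than the exact fitting routine used in this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def gnd_line(v, depth, mu, sigma, beta):
    # absorption line modelled as a GND-shaped dip below a unit continuum;
    # beta = 2 recovers a plain Gaussian, beta > 2 produces a flat bottom
    return 1.0 - depth * np.exp(-np.abs((v - mu) / sigma) ** beta)

# synthetic flat-bottomed LSD-like profile with a known centroid of +4 km/s
v = np.linspace(-60.0, 60.0, 301)                 # velocity grid (km/s)
rng = np.random.default_rng(1)
obs = gnd_line(v, 0.3, 4.0, 25.0, 4.0) + rng.normal(0.0, 1e-3, v.size)

popt, pcov = curve_fit(gnd_line, v, obs, p0=[0.2, 0.0, 20.0, 2.0])
rv, rv_err = popt[1], np.sqrt(pcov[1, 1])         # centroid and 1-sigma error
```

The fitted $\mu$ and its covariance-derived uncertainty play the roles of the RV value and error bar described above.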
Secondly, we derived RV values using the first-order moment (generalised centroid, FOM hereafter) of each LSD profile, computed as:
\begin{equation}
\mathrm{RV} = \frac{\int\left( I_c-I(\nu)\right)\nu d\nu}{\int\left( I_c-I(\nu)\right)d\nu}
\end{equation}
with $I(\nu)$ the intensity of the profile at radial velocity $\nu$ and $I_c$ the continuum level. Here, error bars were propagated using the LSD-derived uncertainties. We note that results given by the FOM are sensitive to the integration limits (i.e. the limits on the line profiles used to compute it). This is further described in Section~\ref{subsec:RV_extractin}.
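A minimal numerical sketch of this first-order moment, with hypothetical integration limits chosen purely to illustrate the sensitivity mentioned above:

```python
import numpy as np

def fom_rv(v, intensity, continuum=1.0, vmin=-50.0, vmax=50.0):
    # discrete first-order moment of the line depth; the [vmin, vmax]
    # integration limits are hypothetical and noticeably affect the result
    m = (v >= vmin) & (v <= vmax)
    w = continuum - intensity[m]          # line depth used as weight
    return np.sum(w * v[m]) / np.sum(w)

# Gaussian absorption line centred at +3 km/s
v = np.linspace(-60.0, 60.0, 601)
line = 1.0 - 0.3 * np.exp(-0.5 * ((v - 3.0) / 15.0) ** 2)
rv = fom_rv(v, line)                      # recovers a value close to 3 km/s
```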
\section{Stellar activity filtering}
\label{sec:filtering}
Stellar activity distorts line profiles, causing a shift in the line's centroid and therefore in the measured RV. Modelling the activity thus aims to correct for these distortion induced shifts.
\subsection{Method \#1: Filtering activity using Doppler Imaging}
\label{subsec:Method1}
Section~\ref{subsec:DopperImaging} describes Doppler Imaging, representing the core of our filtering process, following \cite{Donati2014}. Sections~\ref{subsec:ZDI} and~\ref{subsec:diffrot} describe magnetic imaging (ZDI) and differential rotation, complementary to the DI technique. The actual filtering process is described in Section~\ref{subsec:DI filtering}.
\subsubsection{Doppler Imaging}
\label{subsec:DopperImaging}
Doppler Imaging (DI) is a tomographic technique that, for rapidly rotating stars ($v\sin i \gtrsim 10$\,\,km\,s$^{-1}$), uses spectroscopic observations to infer the brightness features at their surface \citep{Brown1991,Donati1997_2}.
Practically, a time-series of observed pseudo-line profiles obtained through LSD are iteratively adjusted using a tomographic algorithm. Irregularities in the profiles are interpreted as surface bright/dark spots that enhance/block Doppler shifted light due to stellar rotation. Then, iteratively, synthetic profiles, derived from the DI surface maps, are fit to the observed ones. To reach a unique solution to the ill-posed problem of DI inversion (as a single line profile can be generated from different surface map solutions), a maximum entropy selection of the solution is adopted (i.e. minimising the information content of the brightness map), while ensuring that the $\chi^2$ is kept below a defined threshold. This is done following the routine of \cite{Skilling1984} and using the entropy as defined in \cite{Hobson1998}. Further details can be found in Appendix B of \cite{Folsom2016}. The model output is constituted of a synthetic set of LSD profiles, and of the brightness surface map producing this spectral information.\par
Synthetic line profiles are obtained by integrating the Doppler-shifted flux (due to the rotation of the star) emerging from each point of the visible hemisphere. This flux is scaled according to the local surface cell projected area, brightness and limb darkening. The local line profiles are calculated using a Voigt profile, a convolution of a Gaussian and a Lorentzian profile. \par
Output products of DI include a set of synthetic profiles and a surface brightness map (or a magnetic map for Zeeman Doppler Imaging, see next section). The use of DI also enables us to constrain the stellar fundamental parameters by selecting the parameter values that optimise the brightness model (i.e., inclination of the stellar rotational axis with respect to the line-of-sight $i$, line-of-sight projected equatorial rotation velocity $v\sin i$, stellar equatorial rotation period $P_{\mathrm{eq}}$, stellar mean radial velocity $\overline{RV}$ and differential rotation $\mathrm{d}\Omega$) and line profile parameters (i.e., line depth, Gaussian and Lorentzian equivalent widths). The DI analysis of HD 141943 is described in Section~\ref{subsec:refinedstellarparam} and Figure~\ref{fig:maps_comparison}.
\subsubsection{Zeeman Doppler Imaging}
\label{subsec:ZDI}
Although Zeeman Doppler Imaging is not part of the filtering process, it is similar to the stellar mapping process and is therefore described here. \par
Similarly to DI, Zeeman Doppler Imaging (ZDI, e.g. \citealt{Semel1989}) is a technique that uses polarimetric information (i.e. Stokes V LSD profiles) to reconstruct the magnetic field structure at the surface of the star. Here we used a spherical harmonic expansion to describe the large-scale components of the magnetic field (i.e., poloidal and toroidal, \citealt{Donati2006}). The Zeeman effect allows one to infer the strength and direction of the surface magnetic field, provided one has high enough S/N line profiles, rendered possible by the LSD technique. Like DI, solving for a magnetic field configuration is an ill-posed problem and ZDI also relies on maximum entropy image reconstruction. The ZDI analysis of HD 141943 is described in Section~\ref{subsec:refinedstellarparam} and Figure~\ref{fig:mag_map}.
\begin{figure}
\includegraphics[width=\columnwidth]{plots/differential_rotation.pdf}
\caption{Reduced $\chi^2$ surface of the differential rotation (rad\,d$^{-1}$, y-axis) versus equatorial rotation period (x-axis), equivalent to the rotation frequency $\Omega_{\mathrm{eq}} = 2\pi / P_{\mathrm{eq}}$. Contours show confidence levels at 1, 3 and 5$\sigma$. The colour bar shows the reduced $\chi^2$ values.}
\label{fig:Chisquaremap}
\end{figure}
\subsubsection{Surface differential rotation}
\label{subsec:diffrot}
The information used to generate a snapshot of the stellar surface through DI and ZDI often spans multiple stellar rotation cycles. Thus, the effect of differential rotation needs to be accounted for. The code we used models differential rotation with a simplified solar-like law:
\begin{equation}
\Omega(\theta) = \Omega_{\mathrm{eq}} - \mathrm{d}\Omega \sin^2 \theta
\end{equation}
with $\Omega(\theta)$ the rotation rate at latitude $\theta$, $\Omega_{\mathrm{eq}}$ (=$\frac{2\pi}{P_{\mathrm{eq}}}$) the rotation rate at the equator and $\mathrm{d}\Omega$ the difference in rotation rate between the equator and the poles (i.e., the differential rotation). Following \cite{Petit2002} and \cite{Donati2003}, we explored the $\mathrm{d}\Omega$ and $\Omega_{\mathrm{eq}}$ parameter space, by running DI inversions for various values of the two parameters, looking for the doublet that optimizes the DI model, i.e. the $\mathrm{d}\Omega$ and $\Omega_{\mathrm{eq}}$ values that minimize the $\chi^2$ of our model at fixed entropy level. The resulting $\chi^2$ surface is used to derive our uncertainty on these two parameters. \par
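To give a feel for the shear implied by this law, one can plug in the values derived for HD 141943 later in this paper ($P_{\mathrm{eq}} = 2.198$\,d and $\mathrm{d}\Omega = 0.1331$\,rad\,d$^{-1}$, see Table~\ref{tab:HD141943}):

```python
import numpy as np

# values derived for HD 141943 in this work (Table of fundamental parameters)
p_eq = 2.198                      # equatorial rotation period (d)
d_omega = 0.1331                  # equator-to-pole shear (rad/d)

omega_eq = 2.0 * np.pi / p_eq     # equatorial rotation rate (rad/d)
omega_pole = omega_eq - d_omega   # Omega(theta) at theta = 90 deg
p_pole = 2.0 * np.pi / omega_pole # polar rotation period (d)
# p_pole is about 2.31 d: the poles lag the equator by roughly 5 per cent
```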
We performed our DI, ZDI and differential rotation analyses using the Python \textsc{zdipy} code (see the Appendix of \citet{Folsom2018} for a more detailed description of the code). The code has been adapted to run on Fawkes, the High Performance Computing (HPC) facility at the University of Southern Queensland, which allowed us to quickly explore our parameter space. Practically, we varied the stellar parameters (up to 3 at a time) to find the best solution, by which we mean the set of parameters that fits our line profiles down to the target $\chi^2$ (< 1 due to the LSD process, see \citealt{Cang2020} for a similar case and follow-up explanations) while maximising the entropy. The differential rotation analysis of HD 141943 is described in Section~\ref{subsec:refinedstellarparam} and Figure~\ref{fig:Chisquaremap}.
\subsubsection{Filtering the activity}
\label{subsec:DI filtering}
Following \cite{Donati2014}, we removed the stellar activity contribution by subtracting the RV time series derived from our modelling of the activity alone (i.e., the centroids of the synthetic profiles obtained from DI) from the values obtained from the observed LSD profiles (i.e., the centroids of the raw LSD profiles). We assumed that, for very active stars, stellar variability is to first approximation entirely due to features present on the stellar surface. We then searched for periodicity in the resulting filtered RVs using a Lomb-Scargle (LS) periodogram \citep{Lomb1976,Scargle1982}, an approach we keep here for maximal consistency with the previous papers of \cite{Donati2014,Donati2016,Yu2017b,Yu2019}.
\par
The nature of stellar variability (i.e. correlated/red noise), combined with the imperfect filtering of the activity using DI (see Figure~\ref{fig:per_5}), results in residuals exhibiting some red noise leftovers. As LS periodograms are designed for uncorrelated/white noise \citep{Vanderplas2018}, this approach is limited and should not be used alone to claim a planet detection. To assess the significance of a detection, we use the false alarm probability (FAP)\footnote{The FAP limit indicates the likelihood that a peak caused by random fluctuations in the data would reach a given height/power (see dashed lines in Figures~\ref{fig:per_5}, \ref{fig:periodograms32_36} and \ref{fig:per_22}). However, it does not indicate the probability that the data contain a periodic component.}. To compute the FAP levels, we used the Baluev approximation (see \citealt{Baluev2008}). We also tried a bootstrap approach, which yielded very similar results.\par
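As a sketch of such a periodicity search (not the actual pipeline used here), a Lomb-Scargle periodogram of filtered RVs can be computed with \textsc{scipy}; the injected signal, sampling and noise level below are purely illustrative, and in practice Baluev FAP levels would come from a dedicated implementation such as the one in \textsc{astropy}:

```python
import numpy as np
from scipy.signal import lombscargle

# illustrative filtered-RV series: 23 epochs over 11 d with an injected
# 3.7 d sinusoid (all numbers hypothetical, not the HD 141943 data)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 11.0, 23))            # observation times (d)
p_true = 3.7                                       # injected period (d)
rv = 0.1 * np.sin(2.0 * np.pi * t / p_true) + rng.normal(0.0, 0.02, t.size)
rv -= rv.mean()                                    # lombscargle expects centred data

periods = np.linspace(0.5, 10.0, 5000)             # trial periods (d)
power = lombscargle(t, rv, 2.0 * np.pi / periods, normalize=True)
p_best = periods[np.argmax(power)]                 # peak lands near p_true
```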
\begin{table}
\centering
\caption{Prior distributions of the parameters used for the Gaussian Process regression. The right column describes the prior for each parameter of the model. J(\emph{min},\emph{max}) stands for Jeffreys priors, MJ(\emph{max},\emph{knee}) for Modified Jeffreys priors, $\mathcal{N}$(\emph{mean},\emph{std}) for Gaussian priors and $\mathcal{U}$[\emph{min},\emph{max}] for uniform priors. $\overline{\sigma}_{\mathrm{RV}}$ is the mean of the RV uncertainties, $RV_{\mathrm{max}}$ the largest absolute RV value in the dataset and $RV_{\mathrm{std}}$ the standard deviation of all RV values.}
\label{tab:GPpriors}
\begin{tabular}{ll}
\hline
Parameters & Priors \\
\hline
\hline
Stellar activity & \\
$\theta_1$ (\,km\,s$^{-1}$)& MJ(1.5$\times\,RV_{\mathrm{max}}$,$\overline{\sigma}_{\mathrm{RV}}$)\\
$\theta_2$ (d)& J(1,100)\\
$\theta_3$ (d)& $\mathcal{N}$(2.2,0.05) \\
$\theta_4$ [0:1]& $\mathcal{U}$[0:1] \\
\hline
Planet & \\
$K$ (\,km\,s$^{-1}$) & MJ(2$\times\,RV_{\mathrm{max}}$,$\overline{\sigma}_{\mathrm{RV}}$)\\
$P_{\mathrm{orb}}$ (d) & J(0.1,15) \\
$\Phi$ [0:1] & $\mathcal{U}$[0:1] \\
\hline
Telescope and Noise & \\
$RV_\mathrm{0}$ (\,km\,s$^{-1}$)& $\mathcal{U}$[-$RV_{\mathrm{max}}$:$RV_{\mathrm{max}}$] \\
$\mathrm{\sigma_s}$ (\,km\,s$^{-1}$)& MJ($RV_{\mathrm{std}}$,$\overline{\sigma}_{\mathrm{RV}}$) \\
\end{tabular}
\end{table}
\subsection{Method \#2: Modelling the activity using a Gaussian Process regression}
\label{subsec:method_GP}
Our second approach uses a Gaussian Process (GP hereafter) regression to model the activity-induced RV signal and its temporal evolution, as first suggested in \cite{haywood_planets_2014} and \cite{rajpaul_gaussian_2015}. The GP regression treats stellar activity as Gaussian red (correlated) noise. This Bayesian approach is driven by the data points, treated as correlated Gaussian random variables, and by the covariance matrix $\mathbf{C}$ specifying the correlation between each pair of data points. Following \cite{haywood_planets_2014}, we computed each entry $\boldsymbol{\mathrm{C}}_{ij}$ of this covariance matrix using the following physically motivated quasi-periodic kernel, made of a sinusoidal component accounting for the rotation of the star combined with an exponential component for the appearance and decay of surface features:
\begin{equation}
\label{eq:kernel}
\mathrm{\boldsymbol{\mathrm{C}}_{ij}} = \theta_1^2 \exp \left[ - \frac{\left( t_i - t_j\right)^2 }{\theta_2^2}- \frac{ \sin^2\left( \frac{\pi (t_i-t_j)}{\theta_3}\right)}{\theta_4^2} \right] + \left( \sigma_i^2 + \sigma_s^2 \right)\delta_{ij}
\end{equation}
where the four hyper-parameters can be interpreted as:
\begin{itemize}
\item $\theta_1$ (\,km\,s$^{-1}$) : Semi-amplitude of the activity RV signature.
\item $\theta_2$ (d) : Decay parameter, or typical surface feature lifetime.
\item $\theta_3$ (d) : Recurrence timescale, expected to be very close to $P_{\mathrm{eq}}$.
\item $\theta_4$ [0:1] : Smoothing parameter, or amount of high-frequency variability in the signal. Smaller and larger values of $\theta_4$ respectively indicate variations on longer and shorter timescales. From experience (\citealp{Haywood2018,Jeffers2009}), light curves and RV time series exhibit values of around 0.3 to 0.4 for this parameter. We chose a uniform prior that largely encompasses these values.
\end{itemize}
$\sigma_i$ is the uncertainty of data point $i$ and $\sigma_s$ an extra white, uncorrelated noise parameter accounting for variations due to other sources that are not explicitly captured by the model. $\sigma_i$ and $\sigma_s$ were added in quadrature and applied only to the diagonal of our matrix (i.e. the variance of the data points). \par
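The covariance matrix of Equation~\ref{eq:kernel} can be sketched directly; this is a minimal illustration with hypothetical hyper-parameter values, not the code used in this work:

```python
import numpy as np

def qp_covariance(t, theta1, theta2, theta3, theta4, sigma, sigma_s):
    # quasi-periodic kernel: squared-exponential decay times a periodic
    # term, plus per-point and jitter variances on the diagonal only
    dt = t[:, None] - t[None, :]
    k = theta1**2 * np.exp(-(dt / theta2) ** 2
                           - np.sin(np.pi * dt / theta3) ** 2 / theta4**2)
    return k + np.diag(sigma**2 + sigma_s**2)

t = np.linspace(0.0, 11.0, 23)                    # hypothetical epochs (d)
c = qp_covariance(t, 0.2, 10.0, 2.2, 0.35,        # illustrative hyper-parameters
                  np.full(23, 0.05), 0.02)        # per-point errors and jitter
```

The resulting matrix is symmetric and positive-definite, as required for the Gaussian likelihood below.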
Our global model is the sum of: the GP model accounting for the stellar activity ($RV_{\mathrm{GP}}$), a sinusoid for the circular planetary signature ($RV_{\mathrm{pla}}$) and a constant offset ($RV_\mathrm{0}$): \par
\begin{equation}
\begin{split}
\mathrm{RV_{tot}} = &\quad \; RV_{0} \\
& + \;RV_{ \mathrm{GP}}(t,\theta_1,\theta_2,\theta_3,\theta_4,\sigma_i,\sigma_s) \\
& +\; \mathrm{RV}_{\mathrm{pla}}(t, K, P_{\mathrm{orb}},\Phi)
\end{split}
\end{equation}
We ended up with a parameter space to explore containing 5 ($\theta_1$, $\theta_2$, $\theta_3$, $\theta_4$ and $\sigma_s$) + 3 $\times$ $n$ parameters + $\mathrm{RV}_\mathrm{0}$, for $n$ planets (i.e. 9 parameters for a single-planet model). Two aspects then need to be considered in order to confidently claim the presence of a periodic signal in the data. The first is parameter estimation, where we explore the parameter space to obtain posterior distributions from which the most likely set of parameters, as well as their mean and uncertainty values, can be recovered. The second is model selection, where we assess how much more likely a model containing one planet is compared to one with stellar activity only. Commonly, parameter space exploration is performed using Monte Carlo approaches. Despite the efficiency of some algorithms (e.g. \textsc{emcee}, \citealt{Foreman-Mackey2013}), the bottleneck of planet searches is usually model selection. \par
Model selection is performed by comparing the \emph{marginal} likelihood (or evidence, $\mathcal{Z}$) of different models (i.e. activity only, activity with 1 planet, 2 planets, etc.). A detailed description of the evidence is given in Appendix~\ref{sec:appendixA}. Accurate estimation of this evidence is computationally expensive, as it implies multi-dimensional integration over potentially large parameter spaces. Recently, \citet{Nelson_2020} compared different methods for computing the evidence, as applied by different research groups. Although this comparison was preliminary and would require follow-up studies to completely generalise its results, some approaches proved to be more consistent than others.
Following their results, we developed our GP code using \textsc{pymultinest} \citep{Buchner2014}, a Python implementation of \textsc{MultiNest} \citep{Feroz2009}. This importance nested sampling algorithm estimates the evidence and provides the posterior probabilities as a by-product, so it can also be used for parameter estimation. \par For the rest of this paper, when comparing models, we will refer to the Bayes factor (BF) and/or the associated probability ($p$) in favour of a single-planet model (model $\mathcal{M}_1$) over an activity-only model (model $\mathcal{M}_0$):
\begin{equation}
\mathrm{BF} = \frac{\mathcal{Z}_1}{\mathcal{Z}_0}
\label{eq:11}
\end{equation}
with $\mathcal{Z}_0$ and $\mathcal{Z}_1$ the marginal likelihood for $\mathcal{M}_0$ and $\mathcal{M}_1$. We used the metric of \cite{Jeffreys1961} (see Table~\ref{tab:Bayes factor}) to assess significance from the BF. \par
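Since nested-sampling codes return $\ln \mathcal{Z}$ rather than $\mathcal{Z}$, the BF is best computed in log space; the sketch below also converts the BF to a probability via $p = \mathrm{BF}/(1+\mathrm{BF})$, which assumes equal prior odds on the two models (the log-evidence values are purely hypothetical):

```python
import numpy as np

def bayes_factor(ln_z1, ln_z0):
    # BF = Z1 / Z0, computed from log-evidences to avoid overflow
    return np.exp(ln_z1 - ln_z0)

def prob_m1(ln_z1, ln_z0):
    # posterior probability of M1, assuming equal prior odds on M0 and M1
    return 1.0 / (1.0 + np.exp(ln_z0 - ln_z1))

# hypothetical log-evidences differing by ln(10)
bf = bayes_factor(np.log(10.0) + 5.0, 5.0)   # BF = 10
p = prob_m1(np.log(10.0) + 5.0, 5.0)         # p ~ 0.91
```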
\subsubsection{Likelihood and Priors}
\label{subsec:priors}
Two ingredients are needed to recover the posterior probabilities: the likelihood and the prior probabilities. \par
In our case, the natural logarithm of the likelihood (i.e. probability of the data given the model and its parameters, $\mathrm{p}(\boldsymbol{y}|\theta,\mathcal{M}_i)$ or $\mathcal{L}$), is given by:
\begin{equation}
2 \ln \mathcal{L} = -n\ln(2\pi) - \ln \left( | \mathbf{C} | \right) - \mathbf{y}^T\left( \mathbf{C} \right)^{-1}\mathbf{y}
\end{equation}
with $\mathbf{y}$ the vector (of length $n$) containing the residuals after having removed both $RV_{\mathrm{pla}}$ and $RV_\mathrm{0}$ from the original RVs, and $\mathbf{C}$ the covariance matrix computed using our GP kernel from Equation~\ref{eq:kernel}. \par
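This log-likelihood can be evaluated numerically via a Cholesky factorisation of $\mathbf{C}$, a standard approach (though not necessarily the exact implementation used in this work):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_loglike(y, c):
    # ln L = 0.5 * (-n ln(2 pi) - ln|C| - y^T C^{-1} y), with the
    # determinant and inverse obtained from a Cholesky factorisation
    n = y.size
    cf = cho_factor(c, lower=True)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))
    return 0.5 * (-n * np.log(2.0 * np.pi) - logdet - y @ cho_solve(cf, y))
```

As a sanity check, for a single data point with variance 2 this reduces to the log-density of a zero-mean Gaussian with variance 2.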
Our priors, physically motivated following \cite{Gregory2007}, are listed in Table~\ref{tab:GPpriors}. Because the evidence depends on the prior probabilities, we emphasise the importance of favouring uninformative priors, such as uniform priors, Jeffreys priors (uniform in logarithmic space, see \citealt{Gregory2007}) or Modified Jeffreys priors (a Jeffreys prior that approaches a uniform distribution for values $\ll$ the \emph{knee} parameter, designed to handle priors with 0 as a lower boundary; also see \citealt{Gregory2007}), or at least priors independent of the studied data when previous, statistically valid knowledge is available in the literature. Using informative priors without justification would artificially boost the evidence. This is especially true for parameters that are not shared by the compared models (the planetary parameters in our case). The only informative prior we used here is that on $\theta_3$, as $P_{\mathrm{eq}}$ has been constrained from DI. \par
We ran \textsc{pymultinest} with an efficiency of 0.3 and 2000 live points (see \citealt{Nelson_2020}). For each run, the parameter search drew $\approx$ 50,000 posterior samples for the no-planet (activity-only) model and $\approx$ 150,000 for the single-planet model. Details of the results for all datasets are given in Table~\ref{tab:results}. \par
\begin{table}
\centering
\caption{Fundamental parameters of HD 141943.}
\label{tab:HD141943}
\begin{tabular}{lc}
\hline
Parameter & HD 141943 \\
\hline
Spectral type & G2V \\
Distance (pc) & 60.028 $\pm$ 0.083 \textsuperscript{d}\\
Age (Myr) & 17-32\textsuperscript{b} \\
$M_{\mathrm{\star}}$ (M$_{\odot}$)& 1.3\textsuperscript{a} \\
Photospheric temperature $T_{\mathrm{eff}}$ ($K$) & 5850 $\pm$ 100\textsuperscript{a} \\
Spot temperature ($K$) & $\approx$ 3950 \textsuperscript{a} \\
$R_{\mathrm{eq}}$ (R$_{\odot}$) & 1.5$^{+0.06}_{-0.05}$\textsuperscript{c} \\
$i$ ($^{\circ}$) & 70 $\pm$ 10\textsuperscript{a} \\
$v\sin i$ (\,km\,s$^{-1}$) & 35.6 $\pm$ 0.7 \textsuperscript{e} \\
Equatorial rotation period $P_{\mathrm{eq}}$ (d) & $2.198\pm 0.002$\textsuperscript{e} \\
$\mathrm{d}\Omega$ (\,rad\,d$^{-1}$) & $0.1331^{+0.0095}_{-0.0094}$\textsuperscript{e} \\
\hline
\multicolumn{2}{l}{\textsuperscript{a}\footnotesize{M11A}}\\
\multicolumn{2}{l}{\textsuperscript{b}\footnotesize{\cite{Hillenbrand2008}}}\\
\multicolumn{2}{l}{\textsuperscript{c}\footnotesize{Gaia DR2: \cite{Gaia2016,Gaia2018}}}\\
\multicolumn{2}{l}{\textsuperscript{d}\footnotesize{Gaia EDR3: \cite{Gaia2016,Gaia2020}}}\\
\multicolumn{2}{l}{\textsuperscript{e}\footnotesize{This study}}
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\columnwidth]{plots/bright_map.pdf}
\includegraphics[width=\columnwidth]{plots/bright_map_stephen.png}
\includegraphics[width=\columnwidth]{plots/bright_map_70.pdf}
\includegraphics[width=\columnwidth]{plots/bright_map_dark.pdf}
\caption{Comparison of the surface brightness maps for HD 141943's original dataset (\#5). Each map is a maximum entropy reconstructed image of the brightness features at the surface of the star. The blue and brown patches indicate regions that are warmer or colder than the photosphere, respectively. The maps are polar projected, with the centre being the visible rotation pole and the full ring labelled 90 the equator. Values on the outermost ring give the rotational phase, and the ticks indicate the phase of each observation. Top left: Map obtained using the parameters from the first line of Table~\ref{tab:bestparam}, mapping both bright and dark features. Top right: Map extracted from M11A (second line of Table~\ref{tab:bestparam}), mapping only dark features; in their approach, the colour scale expresses the spot filling factor. Bottom left: Map obtained using the parameters from the third line of Table~\ref{tab:bestparam}, with a forced inclination of 70$^{\circ}$, mapping both bright and dark features. Bottom right: Map obtained using the parameters from the fourth line of Table~\ref{tab:bestparam}, mapping only dark features.}
\label{fig:maps_comparison}
\end{figure*}
\section{Analysis of HD 141943}
\label{sec:maindataset}
Before attempting to recover injected planets behind HD 141943's activity, we analysed the raw observations (dataset \#5, containing no planet) to recover stellar parameters and ensure the star does not host any planet \emph{that we can detect}.
\subsection{Spectropolarimetric dataset}
\label{sec:spectrodataset}
Spectroscopic Stokes \emph{I} (intensity) and \emph{V} (polarised) observations of HD 141943 used in this study were acquired with the SEMPOL instrument, a visitor polarimeter operating together with the University College London Echelle Spectrograph \citep{Donati2003} mounted on the 3.9\,m Anglo-Australian Telescope (AAT) at Siding Spring, Australia. The available data comprise 92 spectra spread over 11 days between March 30 and April 09, 2007, covering 4.68 stellar revolutions, offering the well-sampled rotational phase coverage required for DI and ZDI (further details on the data can be found in \citealt{Marsden2011a}) and a suitable timescale to search for hot Jupiters. The 92 spectra were taken in chunks of four consecutive 30-minute exposures, each in a different polarisation state to perform ZDI. As each 2-hour observing run represents a very short time frame compared to $P_{\mathrm{eq}}$ (the stellar equatorial rotation period) and any simulated hot Jupiter's orbital period, this dataset can be treated as containing 23 epochs rather than 92. A previous DI and ZDI analysis of this dataset is available in \citet{Marsden2011a,Marsden2011b} (M11A/B hereafter).
Reduction of raw spectra was done using the \textsc{esprit} pipeline \citep[Echelle SPectra Reduction: an Interactive Tool,][]{Donati1997}.
\subsection{Stellar parameters and surface mapping}
\label{subsec:refinedstellarparam}
\begin{table}
\centering
\caption{Set of parameters resulting from four different analyses: (i) bright and dark features mapping, (ii) from M11A, (iii) bright and dark features mapping with an inclination angle constrained to 70$^{\circ}$, matching M11A's value, and (iv) dark features only.}
\label{tab:bestparam}
\begin{tabular}{lllll}
\hline
Best value & $v\sin i$ (\,km\,s$^{-1}$) & $P_{\mathrm{eq}}$ (d) & $\mathrm{d}\Omega$ (\,rad\,d$^{-1}$) & $i$ ($^{\circ}$) \\
\hline
This work & 35.6 & 2.198 & 0.13 & 43 \\
M11A/B & 35 & 2.182 & 0.36 & 70 \\
Fixed $i$ & 35.6 & 2.197 & 0.12 & 70 (fixed) \\
Dark only & 35.4 & 2.214 & 0.02 & 42 \\
\hline
\end{tabular}
\end{table}
HD 141943 is a young ($\approx$ 17-32 Myr, M11A and \citealt{Hillenbrand2008}), nearby (60 $\pm$ 0.08 pc, estimated using \citealt{vo:eDR3_lite_dist} with Gaia EDR3 data, \citealt{Gaia2016,Gaia2020}), active pre-main sequence G star. This Sun-like star has a mass of 1.3 M$_{\odot}$ and a radius of 1.5 $^{+0.06}_{-0.05} \ \mathrm{R_{\odot}}$ (Gaia DR2, \citealt{Gaia2018}). \cite{Soummer2014} also identified a surrounding, near edge-on debris disk, consistent with a planetesimal belt populated by two dust components at respective grain temperatures of 60\,K and 202\,K. The extended list of stellar parameters can be found in Table~\ref{tab:HD141943}. \par
We inferred stellar parameters by analysing the raw HD 141943 dataset, containing no injected planet. These are marked with the superscript \textsuperscript{d} in Table~\ref{tab:HD141943}: $v\sin i$ = 35.6 $\pm$ 0.7\,\,km\,s$^{-1}$, $i$ = 43 $\pm$ 10$^{\circ}$, $P_{\mathrm{eq}}$ = 2.198 $\pm$ 0.002\,d and $\mathrm{d}\Omega$ = $0.1331^{+0.0095}_{-0.0094}$\,\,rad\,d$^{-1}$. \par
These parameters are close to, although do not exactly match, those of the previous analysis from M11A/B (see the first two lines of Table~\ref{tab:bestparam}). This discrepancy could be explained by the fact that the DI/ZDI code used in M11A/B is slightly different from ours. In particular, \textsc{zdipy} lets us map both bright and dark surface features (spots), in contrast with only dark spots in M11A/B. The inclination is the parameter with the largest difference (43$^{\circ}$ vs 70$^{\circ}$), and also the hardest to constrain. To investigate further, we derived the best solution when fitting (i) only for dark spots and (ii) for both dark and bright spots but forcing $i$ to match M11A/B's value (i.e. 70$^{\circ}$). The resulting Doppler maps and best parameters for the three cases (dark + bright, only dark, and dark + bright with imposed 70$^{\circ}$ inclination) are given in Figure~\ref{fig:maps_comparison} and Table~\ref{tab:bestparam}, respectively.\par
These three cases yielded similar results, the main exception being the negligible differential rotation found when fitting only dark features. Forcing $i$ to 70$^{\circ}$ did not change the overall solution, and we found good agreement between the forced and unforced dark + bright analyses. The contrast difference on the Doppler maps, as seen on the bottom-left map of Figure~\ref{fig:maps_comparison}, is due to the effect of projection imposed by $i$. Spot locations are consistent across all maps and with M11A. The difficulty in constraining the inclination angle prevents a reliable deduction of the stellar radius $\mathrm{R_{eq}}$ and we therefore used Gaia DR2's value given in Table~\ref{tab:HD141943}.
Our main objective for this paper was to filter out as efficiently as possible any rotationally-modulated signal in RVs. Since setting $i$ to 43$^{\circ}$ optimises this task, we adopted this value for the inclination in the rest of this study.
\par
Figure~\ref{fig:mag_map} shows the radial (top), azimuthal (middle), and meridional (bottom) magnetic field distributions, derived with ZDI using Stokes \emph{V} LSD profiles. We find a magnetic field with a 52 and 48 per cent split between the poloidal and toroidal components respectively, in good agreement with the 47 and 52 per cent from M11A. The mean strength is $\approx$ 52 G, much lower than M11A's value of 91 G. This again can be explained by the difference in inclination angle: re-applying ZDI with a forced $i$ = 70$^{\circ}$ yields a field strength of 85 G, in better agreement with M11A.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{plots/magnetic_maps.png}
\caption{Maximum entropy image reconstruction of the radial (top), azimuthal (middle), and meridional (bottom) components of the magnetic field for HD 141943. Positive/negative field values (in Gauss) are displayed in yellow/red and blue respectively. The horizontal line shows the equator, with the numbers describing the phase. Ticks indicate each measurement's epoch.}
\label{fig:mag_map}
\end{figure}
\subsection{Planet search}
\label{subsec:Planet search}
Before injecting planets in the HD 141943 dataset, we ensured
it did not exhibit any sign of hosting a planet. \par
The top panel of Figure~\ref{fig:per_5} shows the periodogram of the raw RVs, in which we identified $P_{\mathrm{eq}}$ and its harmonics, the strongest signature being present at $P_{\mathrm{eq}}/2$. The second, third, and fourth panels show periodograms of the filtered RVs, from the dark and bright, dark and bright with imposed $i = 70^{\circ}$, and dark-only analyses, respectively. All show similar features, but one peak (around 2.7 days) displayed different heights across the analyses, and was above the 0.001 FAP threshold for the dark-spot-only analysis. However, it did not reach overwhelming significance. This dataset did not allow us to assess the impact of the varying DI solutions (dark, dark + bright, and dark + bright with forced inclination) on the planet retrieval, as it contains no injected planet. To test that, we performed a second analysis using these three configurations for dataset \#22 (see Section~\ref{sec:simulations} for details on simulated datasets), containing a simulated planet in the `uncertain' range of detection. We found that the different DI solutions did not change our conclusions regarding the planet search (detailed analysis available in Appendix~\ref{sec:appendixC}).
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{plots/periodogram_5.pdf}
\caption{Lomb-Scargle periodograms for the original dataset (\#5, containing no planet). 1st panel: Observed (raw) RVs. All other panels are for filtered RVs (i.e. raw RVs - synthetic RVs, where synthetic RVs are derived from the Doppler Imaging fitting). 2nd panel: Filtered RVs using the dark and bright (d \& b) features for DI. 3rd panel: Filtered RVs using the dark and bright features for DI and with the 70$^{\circ}$ constraint on $i$ (d \& b, $i$ = 70). 4th panel: Filtered RVs using only dark features (d) for DI. We note that the peak around 2.6 days is likely caused by the rotation period at intermediate latitudes (offering maximal visibility given the inclination angle of the star). This value is larger than the equatorial period depicted by the red, vertical lines, as expected for a differentially rotating surface (see Section~\ref{subsec:diffrot}).}
\label{fig:per_5}
\end{figure*}
The GP analysis confirmed that we were not able to detect any significant planet in the raw dataset: we found a Bayes Factor of only 0.3 (p $\approx$ 0.23) in favour of the single planet model over the activity-only model.
\begin{figure*}
\includegraphics[width=\textwidth]{plots/result_DI.pdf}
\caption{Results using the Doppler Imaging method. Each marker on the plot is a dataset containing a simulated injected planet. Orbital period (d) is on the x-axis and semi-amplitude (\,m\,s$^{-1}$) is on the y-axis. Green circles: identification of the correct planet with a periodogram peak above a FAP of 0.001. Orange squares: two peaks of similar height were found (see Section~\ref{subsec:resultDI}), preventing a safe conclusion, or the width of the correctly identified peak yielded a deviation of more than 10 per cent between the retrieved and injected orbital period. Grey crosses: no signature above the FAP = 0.001 threshold could be identified. Red crosses: the most significant peak did not match the injected period peak. Horizontal dashed lines show the stellar activity semi-amplitude ($K_{\mathrm{activity}}$) and the error bars on the retrieved RVs ($\mathrm{\sigma_{RV}}$). Vertical dashed lines show the rotation period of the star and its harmonics. Blue points show, for the `Detection' and `To be confirmed' datasets, the $P_{\mathrm{orb}}$ and $K$ values corresponding to the highest peak in the periodogram. \textsuperscript{1}: For dataset \#21, the correct peak was found, at a FAP of $1.1\times10^{-13}$. However, that peak being very broad, it yielded a 10.6 per cent deviation between the retrieved and the injected period, slightly outside our 10 per cent limit.}
\label{fig:result_DI}
\end{figure*}
\section{Simulated exoplanet datasets}
\label{sec:simulations}
We created 37 datasets, each containing a single planet on a circular orbit around HD 141943, following the procedure described in Sections~\ref{subsec:Planets} and \ref{subsec:completedatasets}. Each planet was incorporated into the raw spectra studied in the previous section.
\subsection{Injected Planets}
\label{subsec:Planets}
Injected planets were chosen to be massive short-period exoplanets, with masses ranging from 0.38 up to 5.9\,$\mathrm{M_{J}}$ and periods shorter than 6 days. We set the orbits to be circular, as is believed to be the case for most HJs, especially for orbits shorter than 3 days \citep{Dawson2018}. We should nonetheless bear in mind that eccentricity can be a key signature of high-eccentricity migration and should not be overlooked, especially when attempting to detect the slightly cooler warm Jupiters ($P_{\mathrm{orb}}$ > 10 days). The RV shift induced by each planet was defined as:
\begin{equation}
RV_{\mathrm{pla}}(t) = K\sin \left[2\pi\left(\frac{t}{P_{\mathrm{orb}}} - \Phi + 0.5 \right)\right]
\label{eq:planet}
\end{equation}
with $K$ the semi-amplitude of the signal, $P_{\mathrm{orb}}$ the planet's orbital period and $\Phi$ the phase. $\mathrm{\Phi} \in \left[ 0:1\right]$ and was defined such that when $\mathrm{\Phi} = 0$, the planet crosses the plane containing the line of sight. We set $\mathrm{\Phi} = 0$ to match the mid-point of the observations ($BJD_{{\mathrm{\star}_{mid}}}$ = 2454195.153776).
For the rest of this paper, we will refer to semi-amplitude values ($K$) for the planets rather than mass. The equivalence between $K$ and mass is described in Section~\ref{subsec:recoveredexoplanets}.
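As an illustration, the planetary RV signal of Equation~\ref{eq:planet} can be evaluated directly; the sketch below is a minimal Python implementation (not the code used in this work), with $K$ in \,m\,s$^{-1}$, $P_{\mathrm{orb}}$ in days and $t$ in the same time unit:

```python
import numpy as np

def rv_planet(t, K, P_orb, phi):
    """RV shift of a planet on a circular orbit (the equation above).

    t : time (same unit as P_orb), K : semi-amplitude,
    phi : orbital phase in [0, 1], zero at the reference epoch.
    """
    return K * np.sin(2.0 * np.pi * (t / P_orb - phi + 0.5))

# e.g. a 100 m/s signal at the fixed injected period of 3.6531 d,
# sampled at 23 epochs over an 11-day baseline
t = np.linspace(0.0, 11.0, 23)
signal = rv_planet(t, 100.0, 3.6531, 0.0)
```

With $\Phi = 0$, the signal crosses zero at $t = 0$ (the $+0.5$ term only sets the sign convention at the reference epoch).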
\subsection{Complete datasets}
\label{subsec:completedatasets}
To build our datasets, we generated an RV time series using Equation~\ref{eq:planet} at times matching our observing epochs and then shifted each spectrum accordingly in wavelength space. In order to explore our planetary parameter space (made of $K$, $P_{\mathrm{orb}}$ and $\Phi$) without being overwhelmed with the number of datasets to analyse ($\mathrm{n_{K}} \times \mathrm{n_{P_{orb}}} \times \mathrm{n_{\Phi}}$), we used the following strategy: \par
First, we created 7 datasets (\#1 to \#8, excluding \#5, the original one) at a fixed period (3.653 days), with $K$ ranging from 50 to 500\,\,m\,s$^{-1}$ and random $\Phi$. This initial analysis provided an estimate of the limiting semi-amplitude range for detectability. \par
Then we generated additional datasets, four to five at a time, filling areas of the parameter space that seemed relevant, i.e. around the noise limit, around $P_{\mathrm{eq}}$ and its harmonics, and covering empty areas of the parameter space. After generating each batch of datasets, these were randomly assigned mock numbers before analysis to avoid biases. \par
We ended up with 37 datasets (38 when including the original dataset) spanning the following ranges: 42 to 532\,\,m\,s$^{-1}$ in $K$, 0.288 to 5.69 days for $P_{\mathrm{orb}}$ and 0.02 to 0.99 in $\Phi$. See Table~\ref{tab:results} for specific details.
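To make the injection step concrete, the sketch below shows one way of Doppler-shifting a spectrum's wavelength grid by a planet-induced RV (a minimal, non-relativistic illustration; the actual pipeline details may differ):

```python
import numpy as np

C_MS = 299792458.0  # speed of light (m/s)

def shift_spectrum(wavelengths, rv_ms):
    # Doppler-shift a wavelength grid (any unit) by rv_ms (m/s),
    # using the non-relativistic approximation dlambda/lambda = v/c.
    return np.asarray(wavelengths, dtype=float) * (1.0 + rv_ms / C_MS)

# e.g. shift a line at 5000 A by +100 m/s, a typical injected amplitude
shifted = shift_spectrum([5000.0], 100.0)  # about 5000.0017 A
```

The shift is applied to every spectrum at its observing epoch, using the RV predicted by Equation~\ref{eq:planet} at that time.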
\subsection{Radial velocity extraction}
\label{subsec:RV_extractin}
We tested the RV extraction from the line profiles using both a first order moment (FOM) approach and a Generalised Normal distribution (GND) fit. For the original dataset (\#5, with no injected planet), and given our limits on the integration for the FOM approach, the average difference between FOM- and GND-extracted RVs (from the observed profiles) is 8\,\,m\,s$^{-1}$, with a maximum difference of 144\,\,m\,s$^{-1}$ for the most extreme point. The uncertainty on the RVs also differed, as the GND yielded uncertainties twice as large as those from the FOM (148.5$\pm$2.0\,\,m\,s$^{-1}$ vs 70.4$\pm$1.8\,\,m\,s$^{-1}$). \par
As previously mentioned, FOM-derived RVs and uncertainties depend on the number of points taken into account in the computation (i.e. the chosen integration limits for the line profile), and where to cut in the wings of the line profile can be somewhat arbitrary. On both sides of each profile (and for all of them), we cut at 43\,\,km\,s$^{-1}$ relative to the line centre. We tested different limits and chose the one that gave the smallest average difference with the GND approach, which is an analytical function and therefore not prone to this effect.\par
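For reference, the first order moment of a line profile is the depth-weighted mean velocity inside the chosen integration limits; a minimal sketch (assuming a continuum-normalised profile, not our actual extraction code):

```python
import numpy as np

def rv_fom(vel, profile, v0=0.0, window=43.0):
    # First order moment RV: the (1 - I)-weighted mean velocity of the
    # continuum-normalised profile I, inside |v - v0| < window (km/s).
    vel = np.asarray(vel)
    mask = np.abs(vel - v0) < window
    depth = 1.0 - np.asarray(profile)[mask]
    return np.sum(vel[mask] * depth) / np.sum(depth)

# a symmetric Gaussian absorption line centred at +5 km/s
v = np.linspace(-45.0, 55.0, 2001)
line = 1.0 - 0.5 * np.exp(-((v - 5.0) / 10.0) ** 2)
```

By symmetry, `rv_fom(v, line, v0=5.0)` recovers 5\,\,km\,s$^{-1}$ here; for an asymmetric (spot-distorted) profile the result depends on the window, which is the sensitivity to integration limits discussed above.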
\section{Results}
\label{sec:Results}
All datasets are extensively described in Table~\ref{tab:results} and will be referred to using their number (\#1, \#2, ...,\#38).
\subsection{Stellar parameters}
Stellar parameters inferred from the datasets containing injected planets are consistent with our refined parameters derived from the raw dataset (\#5, see Section~\ref{subsec:refinedstellarparam}). The mean of the retrieved stellar parameters across all datasets, along with the largest deviation from the mean (given by the $\pm$) are: $v\sin i$ = 35.6 $^{+0.27}_{-0.45}$ \,km\,s$^{-1}$, $i$ = 43 $^{+5}_{-3}$ $^\circ$, $P_{\mathrm{eq}}$ = 2.1995 $^{+0.007}_{-0.006}$ days and $\mathrm{d\Omega}$ = 0.119 $\pm 0.08$ \,rad\,d$^{-1}$. In all cases, spot distributions are similar, with slight differences in terms of contrast. This can be explained by the fact that the fit to the line profiles was sometimes performed to a slightly different $\chi^2$ level. Typically, the presence of a planet with a semi-amplitude significantly larger than the activity level (e.g. for dataset \#6) slightly impacts the performance of the DI. However, even such a large planet signature did not hamper the capacity of the DI to identify spot locations and recover the planet. \par
\begin{figure*}
\includegraphics[width=\textwidth]{plots/result_GP.pdf}
\caption{Results using the Gaussian Process method. Each marker on the plot is a simulated injected planet. Orbital period (d) is on the x-axis and semi-amplitude (\,m\,s$^{-1}$) is on the y-axis. Green circles: the Bayes Factor (BF) of the single planet model over the activity-only model corresponds to at least a strong detection, i.e. BF > 10 (probability p > 0.909). Orange squares: 10 > BF > 3, i.e. substantial evidence (0.909 > p > 0.75). Grey crosses: BF < 3 (p < 0.75). Horizontal dashed lines show the stellar activity semi-amplitude ($K_{\mathrm{activity}}$) and the error bars on the retrieved RVs ($\sigma_{{RV}}$). Vertical dashed lines show the rotation period of the star and its harmonics. Blue points show the Maximum A Posteriori (MAP) values for $P_{\mathrm{orb}}$ and $K$, linked by a line to the corresponding dataset.}
\label{fig:result_GP}
\end{figure*}
\subsection{Planet detection: Methods performance}
\subsubsection{Method 1: Doppler Imaging activity filtering}
\label{subsec:resultDI}
Results are shown in Figure~\ref{fig:result_DI}. Each marker represents a dataset with its corresponding number in order to easily refer to Table~\ref{tab:results} containing more details. Marker positions indicate the injected planets' $K$ and $P_{\mathrm{orb}}$ (as we did not identify a systematic impact of $\Phi$ on retrievals, it was omitted for clarity). Each dataset is identified by a specific marker/colour combination:
\begin{itemize}
\item a green circle when a periodogram peak was identified above a FAP threshold of 0.001 (0.1 per cent) and with a deviation from the true period of $< 10$ per cent;
\item an orange square when two peaks above a FAP of 0.001 and of similar height were found, or when the right peak was identified but with a deviation from the true period of $> 10$ per cent, preventing a safe conclusion;
\item a grey cross when no signature above FAP = 0.001 could be identified;
\item a red cross when a peak was present above the FAP threshold but was not matching the injected planetary period, i.e. a false positive.
\end{itemize}
This approach yielded 16 positive detections, 6 inconclusive findings, 11 non-detections, and 4 false positives. These 4 datasets confirm that using the FAP as a measure of significance is not the safest approach, as discussed in Section~\ref{subsec:DI filtering}. A rigorous estimation of the significance was performed with the GP analysis. \par
All 6 simulated planets with semi-amplitudes larger than 150\,\,m\,s$^{-1}$\, were well retrieved, as were slightly fewer than half (8/18) of those between 100\,\,m\,s$^{-1}$\, and 150\,\,m\,s$^{-1}$. This fraction increased to 60 per cent (8/13) when removing all datasets close to $P_{\mathrm{eq}}$ and its harmonics. Only 1 of the 13 planets below 100\,\,m\,s$^{-1}$\, was found (noting that \#25 was a very weak detection, with FAP $= 3.5\times10^{-4}$). \par
We note that all analyses that identified the right peak but with a deviation of more than 5 per cent (up to 10.6 per cent for \#21) from the true period (\#19, \#21, \#25 and \#35) had fewer than 2.5 orbital periods within our observation time span. This inaccuracy was due to the broadening of the periodogram peaks as the period becomes a significant fraction of the time span. It is safest, given our number of samples, to cover at least 2 to 3 orbital periods to achieve sufficient precision on $P_{\mathrm{orb}}$. \par
Five of the six `to be confirmed' (orange square) datasets (\#2, \#15, \#32, \#34 and \#36) exhibited two competing peaks above the FAP threshold and of similar height (FAPs of $1.8\times10^{-4}$ \& $7.2\times10^{-4}$ for \#2, $1.6\times10^{-4}$ \& $1.4\times10^{-4}$ for \#15, $1.1\times10^{-11}$ \& $1.1\times10^{-11}$ for \#32, $1.25\times10^{-6}$ \& $2.6\times10^{-4}$ for \#34 and $1.15\times10^{-10}$ \& $2.43\times10^{-9}$ for \#36), preventing us from choosing the correct period. For \#2 and \#15, the peaks are just above our detection threshold and it is therefore not surprising to find competing features. For \#32 and \#36, however, the competing peaks were both very significant. We are unsure how to interpret these peaks, which vanished when we filtered out the signature associated with either of them. As shown in Figure~\ref{fig:periodograms32_36}, these spurious peaks did not seem to correspond to any harmonic of either the planet or the star. Although complex interactions between the uneven data sampling and the periodic signatures cannot be ruled out, no significant peak could be identified in the window function (see Figure~\ref{fig:window}). The complete analysis of these two datasets can be found in Appendix~\ref{sec:appendixB}. Dataset \#21, the sixth, also falls in the `orange square' category, with a very wide identified peak yielding a 10.6 per cent deviation between the retrieved and injected period, slightly over our 10 per cent threshold. \par
False positives arose when the highest peak did not correspond to the simulated one, which would have led to false identifications (if relying solely on DI) for \#1 and \#17. Regarding \#9 and \#24, the peaks were barely above the FAP of 0.001 and would not have led to a significant detection. \par
Although we could not identify a systematic impact, the phase is expected to play a role in the retrieval of injected planets, and we can see this occurring in the zoomed box of Figure~\ref{fig:result_DI}. The only noticeable difference between \#13 and \#15 is their phases (respectively $\Phi_{13}$ = 0.4769 and $\Phi_{15}$ = 0.1093), and yet planet \#13 is recovered but not \#15. \par
Studying the periodograms of all datasets indicated that planets with periods close to $P_{\mathrm{eq}}$ (\#12 and \#26), $P_{\mathrm{eq}}/2$ (\#33 and \#34), $P_{\mathrm{eq}}/3$ (\#9 and \#24) and $P_{\mathrm{eq}}/4$ (\#30 and \#32) seem to be affected by the activity filtering. The case of \#32 has been discussed above. This effect is to some extent expected, as DI has the capacity to distort the line profiles, interpreting rotationally modulated distortions as spots on the brightness maps at harmonics of the rotation period, and is therefore likely to absorb part of a planetary signature close to one of these periods. \par
For RV searches, the LS periodogram has limitations (choosing a FAP limit, interpreting the significance of a result, the restriction to sinusoidal signals; see \citealt{Vanderplas2018}) and we emphasise that a dedicated treatment of stellar activity should be performed. We therefore advocate incorporating a second, complementary method, presented in the following section, allowing both a better quantification of the significance of a retrieved signature and a more comprehensive modelling of the activity. \par
\begin{table*}
\scriptsize
\centering
\caption{Datasets 1 to 6. Each column represents a simulated dataset. The first section (rows 1-4) gives the stellar parameters inferred from the Doppler Imaging analysis. The second section (rows 5-7) gives the values of the three parameters used to simulate the injected planet. The third section (rows 8-13) gives the results of the Method \#1 DI filtering: $RV_0$ is an offset, followed by the three recovered planet parameters and the rms and $\chi^2$ of the residuals. The fourth section (rows 14-22) gives the results of the GP (Method \#2) for the no planet (activity only) model. For the parameters ($\theta_1$ to $\sigma_s$), the values are given as: mean $\pm$ std (maximum a posteriori). We then have the rms and $\chi^2$ of the residuals and the resulting natural logarithm of the evidence. The last section (rows 23-36) gives the results of the GP for the single planet model; again, the parameter ($\theta_1$ to $\sigma_s$) values are given as: mean $\pm$ std (maximum a posteriori), followed by the rms and $\chi^2$ of the residuals and the resulting natural logarithm of the evidence. Finally, we give the Bayes Factor (BF), defined as the ratio between $\mathcal{Z}$ from the single planet model (row 34) and the no planet model (row 22). The last row is the probability in favour of the single planet model associated with the BF value. Only the first 6 columns (datasets) are shown here; the full version is available as online material.}
\label{tab:results}
\begin{tabular}{lccccccc}
\hline
Dataset & \#1 & \#2 & \#3 & \#4 & \#5 & \#6 & ...\\
\hline
\multicolumn{7}{l}{Doppler Imaging inferred stellar parameters}\\
$i$($^{\circ}$) & 43 & 40.5 & 41.5 & 42 & 42.5 & 45 & ...\\
$v\sin i$ (\,km\,s$^{-1}$) & 35.47 & 35.53 & 35.75 & 35.789 & 35.643 & 35.862 & ...\\
$P_{\mathrm{eq}}$ (d) & 2.20615 & 2.20345 & 2.1988 & 2.19408 & 2.19788 & 2.20179 & ...\\
$\mathrm{d}\Omega$ (\,rad\,d$^{-1}$) & 0.0769 & 0.09796 & 0.13333 & 0.14694 & 0.13333 & 0.12308 & ...\\
\hline
\multicolumn{7}{l}{Injected planet parameters}\\
\\
$K$ (\,m\,s$^{-1}$) & 60.6 & 82.9 & 154.9 & 267.1 & - & 532.2 & ...\\
$P_{\mathrm{orb}}$ (d) & 3.6531 & 3.6531 & 3.6531 & 3.6531 & - & 3.6531 & ...\\
$\Phi$ [0:1] & 0.271 & 0.303 & 0.445 & 0.311 & - & 0.432 & ...\\
\hline
\multicolumn{7}{l}{Method \#1 (DI) mean $\pm$ std} \\
\\
$RV_\mathrm{0}$ (\,m\,s$^{-1}$) & 13.1$\pm$9 & 1.7$\pm$9 & 0.5$\pm$9 & 12.2$\pm$10 & - & -60.0$\pm$10 & ...\\
$K$ (\,m\,s$^{-1}$) & 85.5$\pm$12 & 68.7$\pm$11 & 115.6$\pm$12 & 233.0$\pm$13 & - & 430.2$\pm$14 & ...\\
$P_{\mathrm{orb}}$ (d) & 2.549$\pm$0.054 & 1.413$\pm$0.022 & 3.538$\pm$0.067 & 3.649$\pm$0.041 & - & 3.640$\pm$0.022 & ...\\
$\Phi$ [0:1] & 0.612$\pm$0.026 & 0.209$\pm$0.031 & 0.435$\pm$0.018 & 0.301$\pm$0.009 & - & 0.427$\pm$0.006 & ...\\
rms (\,m\,s$^{-1}$) & 81.9 & 79.6 & 79.7 & 84.4 & - & 89.9 & ...\\
$\chi^2$ & 1.34 & 1.28 & 1.28 & 1.43 & - & 1.63 & ...\\
\hline
\multicolumn{7}{l}{Method \#2 (GP) / no planet model / mean $\pm$ std (MAP)} \\
\\
$\theta_1$ (\,m\,s$^{-1}$) & 314.2$\pm$113.4(217.4) & 276.8$\pm$89.9(208.2) & 265.4$\pm$68.0(217.1) & 320.5$\pm$74.1(277.5) & 357.6$\pm$109.1(296.6) & 462.9$\pm$98.0(394.1) & ...\\
$\theta_2$ (\,m\,s$^{-1}$) & 10.2$\pm$6.3(6.9) & 6.9$\pm$3.8(6.0) & 3.0$\pm$1.4(4.2) & 1.7$\pm$0.5(1.7) & 19.2$\pm$9.3(15.7) & 1.4$\pm$0.3(1.1) & ...\\
$\theta_3$ (d) & 2.190$\pm$0.036(2.148) & 2.190$\pm$0.041(2.141) & 2.205$\pm$0.045(2.164) & 2.206$\pm$0.049(2.255) & 2.215$\pm$0.020(2.216) & 2.207$\pm$0.049(2.268) & ...\\
$\theta_4$ (\,m\,s$^{-1}$) & 0.527$\pm$0.148(0.399) & 0.468$\pm$0.136(0.347) & 0.415$\pm$0.137(0.234) & 0.587$\pm$0.146(0.511) & 0.628$\pm$0.083(0.597) & 0.723$\pm$0.151(0.605) & ...\\
$RV_\mathrm{0}$ & 12.5$\pm$156.8(-20.5) & 19.7$\pm$127.6(16.6) & 35.6$\pm$97.7(52.1) & 15.8$\pm$113.3(-0.7) & 25.3$\pm$194.0(50.9) & 17.4$\pm$160.3(49.6) & ...\\
$\sigma_{s}$ & 12.3$\pm$9.5(0.4) & 11.7$\pm$9.1(2.8) & 12.0$\pm$9.2(9.8) & 12.0$\pm$9.2(3.0) & 11.5$\pm$8.8(0.6) & 12.3$\pm$9.5(0.7) & ...\\
rms (\,m\,s$^{-1}$) & 55.9 & 54.9 & 54.8 & 55.8 & 59.0 & 56.6 & ...\\
$\chi^2$ & 0.62 & 0.59 & 0.59 & 0.61 & 0.69 & 0.63 & ...\\
$\mathrm{ln{\mathcal{Z}}}$ & -550.99 & -552.15 & -560.46 & -563.85 & -545.03 & -571.93 & ...\\
\hline
\multicolumn{7}{l}{{Method \#2 (GP) / single planet model / mean $\pm$ std (MAP)}} \\
\\
$\theta_1$ (\,m\,s$^{-1}$) & 340.1$\pm$117.7(271.1) & 360.8$\pm$120.8(257.2) & 400.9$\pm$151.9(301.2) & 401.0$\pm$138.3(347.7) & 357.3$\pm$107.9(292.2) & 473.5$\pm$199.5(338.4) & ...\\
$\theta_2$ (\,m\,s$^{-1}$) & 14.0$\pm$7.8(13.3) & 17.2$\pm$10.0(17.1) & 20.9$\pm$13.2(15.3) & 21.7$\pm$12.3(19.0) & 19.9$\pm$9.7(15.9) & 27.7$\pm$16.7(22.3) & ...\\
$\theta_3$ (d) & 2.204$\pm$0.030(2.217) & 2.213$\pm$0.026(2.233) & 2.208$\pm$0.024(2.216) & 2.215$\pm$0.019(2.220) & 2.216$\pm$0.020(2.228) & 2.206$\pm$0.020(2.213) & ...\\
$\theta_4$ (\,m\,s$^{-1}$) & 0.595$\pm$0.151(0.602) & 0.642$\pm$0.154(0.601) & 0.641$\pm$0.163(0.570) & 0.677$\pm$0.143(0.645) & 0.631$\pm$0.081(0.592) & 0.705$\pm$0.156(0.680) & ...\\
$K$ (\,m\,s$^{-1}$) & 43.1$\pm$23.6(59.8) & 59.2$\pm$19.7(69.1) & 130.8$\pm$19.1(138.8) & 238.7$\pm$14.2(235.6) & 29.4$\pm$32.1(39.7) & 474.7$\pm$14.5(473.4) & ...\\
$P_{\mathrm{orb}}$ (d) & 3.008$\pm$2.386(3.331) & 3.223$\pm$1.341(3.420) & 3.571$\pm$0.438(3.598) & 3.580$\pm$0.102(3.603) & 2.481$\pm$2.528(0.172) & 3.616$\pm$0.035(3.610) & ...\\
$\Phi$ [0:1] & 0.345$\pm$0.210(0.259) & 0.305$\pm$0.115(0.286) & 0.444$\pm$0.036(0.448) & 0.306$\pm$0.013(0.309) & 0.571$\pm$0.293(0.821) & 0.432$\pm$0.005(0.430) & ...\\
$RV_\mathrm{0}$ & 38.6$\pm$166.5(-1.6) & 52.6$\pm$178.1(119.5) & 27.8$\pm$212.0(81.8) & 69.8$\pm$211.2(1.2) & 28.4$\pm$190.2(166.2) & 39.2$\pm$292.2(121.2) & ...\\
$\sigma_{s}$ & 11.0$\pm$8.3(6.8) & 11.1$\pm$8.4(3.9) & 12.4$\pm$9.4(3.2) & 11.8$\pm$9.1(5.1) & 10.7$\pm$8.2(6.3) & 12.7$\pm$9.6(2.8) & ...\\
rms (\,m\,s$^{-1}$) & 57.3 & 62.1 & 61.3 & 57.9 & 60.4 & 64.4 & ...\\
$\chi^2$ & 0.65 & 0.77 & 0.75 & 0.66 & 0.72 & 0.82 & ...\\
$\mathrm{ln{\mathcal{Z}}}$ & -551.06 & -550.91 & -556.77 & -556.41 & -546.25 & -560.60 & ...\\
Bayes Factor & 0.9 & 3.5 & 40.0 & 1702.8 & 0.3 & 83283.0 & ...\\
p($\mathrm{\mathcal{M}_1}$) & 0.48 & 0.78 & 0.98 & 1.00 & 0.23 & 1.00 & ...\\
\hline
\end{tabular}
\end{table*}
\subsubsection{Method 2: Gaussian Process regression activity modelling}
\label{subsec:GP}
Again, results are detailed in Table~\ref{tab:results} and summarised in Figure~\ref{fig:result_GP}. We defined successful retrievals (green circles) as cases where the GP strongly favoured the single planet model over the activity-only model, with a probability p > 0.909 (computed from the marginal likelihood / Bayes Factor; see Appendix~\ref{sec:appendixA} and Table~\ref{tab:Bayes factor} for further details). We then have substantial evidence (orange squares) for a planet (i.e. 0.75 < p < 0.909) and non-detections (grey crosses, i.e. p < 0.75). We note that most of the injected planets (28/37) were correctly identified by the GP, although not always significantly enough to lead to a detection claim. \par
The GP yielded 16 positive detections, 4 `to be confirmed' findings (i.e. requiring further observations), 17 non-detections and more importantly no false positives. Again here, all 6 simulated planets with semi-amplitudes larger than 150\,\,m\,s$^{-1}$ were correctly found. This drops to half (9/18) between 100\,\,m\,s$^{-1}$\, and 150\,\,m\,s$^{-1}$ (same ratio as DI although not systematically on the same datasets), and increases to 70 per cent (9/13) when removing all datasets close to $P_{\mathrm{eq}}$ and its harmonics. Finally, for planets below 100\,\,m\,s$^{-1}$, only 1 out of 13 was found (along with 2 cases requiring further observations, with \#2 a correctly identified planet and \#27 a missed identification). The GP, compared to DI, is more conservative yet more reliable (i.e. no false positives) due to its accurate measure of the significance for each finding. Figure~\ref{fig:result_GP} shows that, similar to the DI analysis, it is difficult to identify planetary signatures close to $P_{\mathrm{eq}}$ or its harmonics. \par
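The quoted probabilities follow directly from the tabulated ln-evidences; a minimal sketch of the conversion (for equal model priors), reproducing e.g. the values for dataset \#2 in Table~\ref{tab:results}:

```python
import math

def bayes_factor(ln_z_planet, ln_z_activity):
    # BF in favour of the single planet model, from the ln-evidences.
    return math.exp(ln_z_planet - ln_z_activity)

def prob_single_planet(bf):
    # p(M1) = BF / (1 + BF) for equal priors; the detection threshold
    # p > 0.909 corresponds to BF > 10.
    return bf / (1.0 + bf)

# dataset #2: lnZ = -550.91 (single planet) vs -552.15 (activity only)
bf = bayes_factor(-550.91, -552.15)  # about 3.5
p = prob_single_planet(bf)           # about 0.78
```

This is the same mapping that turns the Bayes Factor of 0.3 for the raw dataset (\#5) into p $\approx$ 0.23.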
We finally note that imprecision on the retrieved $P_{\mathrm{orb}}$ increases for longer periods (see MAP values indicated on Figure~\ref{fig:result_GP}). This is because fewer orbital periods are covered by the dataset as we move to the right of Figure~\ref{fig:result_GP}.
\subsubsection{Consistency between methods}
\label{subsec:result_method_comparison}
Utilising two distinct methods serves as cross-validation when a signature is found. However, the GP is the only Bayesian approach, and therefore the only one allowing a rigorous quantification of the evidence in favour of a particular model (i.e. the presence or absence of a sinusoidal signature in the data). \par
For signatures above 150\,\,m\,s$^{-1}$ and after removing datasets close to $P_{\mathrm{eq}}$ harmonics, both methods yielded systematic detections except for the ambiguity on \#36 when using DI. For signatures between 100\,\,m\,s$^{-1}$\, and 150\,\,m\,s$^{-1}$, the GP showed more consistency than the DI which exhibited 3 false positives. Out of the 13 datasets below 100\,\,m\,s$^{-1}$\, we ended up with 1 detection for both GP and DI. \par
Even though the Bayesian approach using a GP can (i) better handle correlated noise and (ii) more reliably estimate the significance of a detection, the use of the DI filtering method allows an independent validation.
\subsubsection{Comparison with previous work}
\citet{Jeffers2014} (J14 hereafter) injected various planets behind simulated activity signatures of two young G and K stars. The varied parameters were the planet semi-amplitude, the orbital period, and $v\sin i$ (shown to be correlated with the activity level). Stellar activity was generated based on DI maps and with different configurations (e.g. adding plages associated with spots, adding extra random spots; see J14 for more details). The G dwarf was HD 141943, making the comparison particularly relevant. Each simulated dataset was composed of a single planet on a circular orbit, to which modelled stellar activity and instrumental signatures were added. In that study, the search for injected planets was performed \emph{without} a specific treatment of stellar activity and was considered successful for periodogram peaks below FAP = 0.01 (vs 0.001 in our study).\par
With 50 observational epochs and for their less complex simulation of activity (only based on the Doppler Imaging maps), J14 were able to retrieve signatures of semi-amplitude $K=110$\,\,m\,s$^{-1}$\, when $v\sin i$ = 20\,\,km\,s$^{-1}$\, and $K=525$\,\,m\,s$^{-1}$\, when $v\sin i$ = 50\,\,km\,s$^{-1}$. Regarding their most complex simulation of activity (Doppler Imaging maps + plages + random spots, see J14 for further details) the minimum attainable planetary signature was $K=525$\,\,m\,s$^{-1}$\, when $v\sin i$ = 20\,\,km\,s$^{-1}$. In the case of $v\sin i$ = 50\,\,km\,s$^{-1}$, 200 observational epochs were required to reach the $K=525$\,\,m\,s$^{-1}$\, detection threshold. \par
We note that the data sampling is different, which might slightly hinder the comparison\footnote{J14 has one datum per night for 50, 100 or 200 consecutive nights whereas we have 23 epochs over 10 nights.}. With 23 unevenly spread epochs, the smallest signature we could reliably detect was $K=100$\,\,m\,s$^{-1}$\, (down to $70$\,\,m\,s$^{-1}$\, for dataset \#10), emphasising the benefit granted by our activity filtering approach. Although such filtering is now systematically applied by the community for planetary searches around active stars, this comparison emphasises that a dedicated treatment of stellar activity combined with robust model selection is crucial to improve detection capabilities. \par
\subsubsection{Recovered exoplanets}
\label{subsec:recoveredexoplanets}
Here we translate our results into planetary masses/orbital periods for the case of HD 141943, given $M_{\mathrm{\star}} = 1.3 M_{\mathrm{\odot}}$ and $i$ = 43$^{\circ}$. We assume that the stellar rotation axis is normal to the planet's orbital plane. In this context, our lower detection threshold of 100\,\,m\,s$^{-1}$\, is equivalent to a planet with either:
\begin{itemize}
\item $M_{\mathrm{pla}}$ = 1 $M_{\mathrm{Jup}}$, $P_{\mathrm{orb}}$ = 1.6 d (a = 0.03 au);
\item $M_{\mathrm{pla}}$ = 2 $M_{\mathrm{Jup}}$, $P_{\mathrm{orb}}$ = 12.5 d (a = 0.12 au);
\end{itemize}
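These numbers can be checked against the standard circular-orbit RV semi-amplitude relation (a sketch added for illustration, not part of the original analysis; it uses the usual 28.4329\,\,m\,s$^{-1}$ normalisation constant and neglects eccentricity):

```python
import math

# Standard circular-orbit RV semi-amplitude:
# K [m/s] = 28.4329 * (P / 1 yr)^(-1/3) * (Mp sin i / MJup) * ((M* + Mp) / Msun)^(-2/3)
M_JUP_IN_MSUN = 9.5458e-4  # approximate Jupiter mass in solar masses

def rv_semi_amplitude(m_p_jup, period_days, m_star_sun, incl_deg):
    """RV semi-amplitude in m/s for a circular orbit."""
    p_yr = period_days / 365.25
    m_tot_sun = m_star_sun + m_p_jup * M_JUP_IN_MSUN
    return (28.4329 * p_yr ** (-1.0 / 3.0)
            * m_p_jup * math.sin(math.radians(incl_deg))
            * m_tot_sun ** (-2.0 / 3.0))

# The two bullet points above (M* = 1.3 Msun, i = 43 deg):
print(rv_semi_amplitude(1.0, 1.6, 1.3, 43.0))   # ~100 m/s
print(rv_semi_amplitude(2.0, 12.5, 1.3, 43.0))  # ~100 m/s
```

Both configurations indeed evaluate to $K\approx100$\,\,m\,s$^{-1}$.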
Using M11A's inclination value of $i$ = 70$^{\circ}$, we get:
\begin{itemize}
\item $M_\mathrm{{pla}}$ = 1 $M_{\mathrm{Jup}}$, $P_{\mathrm{orb}}$ = 5 d (a = 0.062 au);
\item $M_\mathrm{{pla}}$ = 2 $M_{\mathrm{Jup}}$, $P_{\mathrm{orb}}$ = 35 d (a = 0.23 au);
\end{itemize}
In the case of a star of 1 $M_{\mathrm{\odot}}$ with a transiting exoplanet, we can hope to detect a 1 $M_{\mathrm{Jup}}$ planet orbiting with a period of up to 10 days, using typical non-stabilised observations suited to DI. This is, of course, given similar conditions in terms of data quality and quantity, observing constraints, and stellar variability level. As more numerous and precise RVs should be easily obtainable, it is fair to expect better results and to identify HJs around very young Solar-type stars. \par
\subsubsection{Dependence of planet detection on various parameters}
\label{subsec:planetdetection}
In terms of semi-amplitude, our detection threshold of around 100\,\,m\,s$^{-1}$\, corresponds to half of the stellar activity rms and a quarter of its semi-amplitude ($\approx 400$ to $500$\,\,m\,s$^{-1}$\, looking at the maximum of the datapoints, or 357$\pm\,100$\,\,m\,s$^{-1}$ according to the GP applied to Dataset \#5). Given the scarcity of planets orbiting very young stars discovered solely using RVs, comparisons with the literature are limited. When excluding searches in low activity regimes (i.e. $\mathrm{rms_{activity}}<50$\,\,m\,s$^{-1}$), only two planets provide a direct comparison: V830 Tau b \citep{Donati2017} and TAP 26 b \citep{Yu2017b}. \par
TAP 26 b is thought to have a semi-amplitude of 160\,\,m\,s$^{-1}$, or $\frac{1}{8}$ to $\frac{1}{12}$ of the stellar variability semi-amplitude and V830 Tau b ($K\approx60$\,\,m\,s$^{-1}$) up to $\frac{1}{20}$. They both exhibit activity levels with a semi-amplitude of $\approx1200$\,\,m\,s$^{-1}$\,. We believe the difference in performance (detection threshold of $\approx \frac{1}{4}$ of the activity level for this work) can be explained by the fact that (i) both \cite{Yu2017b} and \cite{Donati2017} had more data ($\approx$ 30 and 60 epochs vs 23 for us), (ii) \cite{Donati2017} had slightly better uncertainties on the RVs ($\sigma_{\mathrm{RV}}\approx50$\,\,m\,s$^{-1}$ vs 75 for both \cite{Yu2017b} and this study) and most importantly, (iii) both had a longer baseline for the observations: 100 and 35 stellar rotation cycles vs 3 for us. We also used less constraining priors as previous knowledge was not available. \par
To ensure it was not our method implementation that hindered our capacity to find smaller signatures in our datasets, we ran our code on both \cite{Donati2016}'s and \cite{Yu2017a}'s data and were able to retrieve the published periodic signatures. We note that with no access to previous knowledge, our prior distributions were less restrictive (i.e. non Gaussian and/or broader for the concerned parameters), decreasing the evidence and yielding slightly more conservative results. Our limitations can thus be seen as an upper boundary: data of better quality and quantity should allow the detection of smaller planets. \par
We find that detecting planets with orbital periods conflicting with (i.e. within 0.1 d of) either $P_{\mathrm{eq}}$ or its harmonics was unreliable, as illustrated in Figures~\ref{fig:result_DI} and~\ref{fig:result_GP}. Although longer period planets did not seem to be harder to detect, we noticed a significant loss of precision in the retrieved orbital period once we reach periods larger than 40 to 50 per cent of the observing time frame (see MAP values on Figure~\ref{fig:result_GP}). This is expected, and it is good practice to sample at least a few orbital periods to obtain reasonable constraints. A good example of a similar study can be found in \cite{Klein2020}, where the authors required 35/50 datapoints spread over 3 months ($\approx$ 15 orbital periods) to reliably detect 5/10~\,m\,s$^{-1}$ planets behind stellar activity (about 2/3 times the planetary signature level). Finally, as we saw for datasets \#32, \#36 (see Appendix~\ref{sec:appendixB} for the complete analysis of these 2 datasets) and to a lesser extent for \#2 and \#15, spurious periodicity signatures can appear with no relation to harmonics and no obvious relation with the window function, as suggested by \cite{Nava2019}. \par
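The 0.1 d conflict criterion with $P_{\mathrm{eq}}$ and its harmonics can be made concrete with a small helper (an illustrative sketch; the helper name and the example rotation period used below are assumptions, not values from the analysis):

```python
def conflicts_with_rotation(p_orb, p_eq, tol=0.1, max_harmonic=10):
    """True if p_orb (days) lies within `tol` days of p_eq, one of its
    integer multiples (n * p_eq), or one of its sub-harmonics (p_eq / n)."""
    for n in range(1, max_harmonic + 1):
        if abs(p_orb - n * p_eq) < tol or abs(p_orb - p_eq / n) < tol:
            return True
    return False

# Hypothetical equatorial rotation period of 2.2 d:
print(conflicts_with_rotation(4.45, 2.2))  # True: within 0.1 d of 2 * P_eq
print(conflicts_with_rotation(5.0, 2.2))   # False
```

Injected periods flagged by such a check would be expected to yield unreliable detections, as observed above.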
Regarding orbital phase, the significant difference in peak height between datasets \#13 (periodogram peak power = 0.3, $\Phi$ = 0.4769) and \#15 (periodogram peak power = 0.5, $\Phi$ = 0.9183) given their identical period and comparable semi-amplitude suggests that phase impacts the detection capabilities. It is not surprising that particular phases would have an impact on the periodogram, as the irregular sampling can yield different phase coverage. That being said, we did not observe a general trend with phase across all datasets. \par
Finally, the data obviously play a huge role in the detection capabilities, with crucial aspects being quality, quantity and sampling. To better characterise the activity, i.e. improve the hyperparameters of the GP, it is important to optimise the sampling (spanning multiple stellar rotation cycles with as dense and as regular sampling as possible). Another successful strategy is to apply a GP to simultaneous photometric data, or at least data not taken too far apart, so that the stellar surface features have not evolved too much. We also tested (see Appendix~\ref{sec:appendixD}) the improvement brought by the knowledge of the period of the orbiting planet (i.e. characterising a known transiting planet).
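As an illustration of the GP filtering step, the sketch below implements a quasi-periodic kernel regression in plain \textsc{numpy}, a common choice for activity RVs; the kernel form and all hyperparameter and noise values here are illustrative assumptions, not the ones fitted in this work:

```python
import numpy as np

def qp_kernel(t1, t2, amp, p_rot, l_evol, gamma):
    """Quasi-periodic covariance widely used for activity RVs:
    amp^2 * exp(-sin^2(pi dt / p_rot) / (2 gamma^2) - dt^2 / (2 l_evol^2))."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-np.sin(np.pi * dt / p_rot)**2 / (2 * gamma**2)
                           - dt**2 / (2 * l_evol**2))

def gp_predict(t_obs, y_obs, sigma, t_new, *hyper):
    """Posterior GP mean at t_new, conditioned on (t_obs, y_obs)."""
    K = qp_kernel(t_obs, t_obs, *hyper) + np.diag(sigma**2)
    Ks = qp_kernel(t_new, t_obs, *hyper)
    return Ks @ np.linalg.solve(K, y_obs)

# Toy activity signal: 23 epochs, ~400 m/s modulation, 75 m/s noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 23))
y = 400 * np.sin(2 * np.pi * t / 2.2) + rng.normal(0, 75, t.size)
sigma = np.full(t.size, 75.0)
# Hypothetical hyperparameters: amplitude, rotation period, evolution
# timescale, smoothness.
activity_model = gp_predict(t, y, sigma, t, 400.0, 2.2, 5.0, 0.5)
residuals = y - activity_model  # the RVs then searched for planets
```

The planet search is then run on `residuals`, where the activity variance has been largely absorbed by the GP.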
\section{Conclusions}
\label{sec:Conclusions}
In this paper, we assessed our capabilities to detect exoplanets behind real stellar activity signatures. We used a previously published set of observations, gathered with a non-stabilised spectrograph, of the young, active G star HD 141943, in which we injected simulated planets. We then utilised two distinct strategies, Doppler Imaging and Gaussian Process regression, to filter out the stellar activity variability, aiming to recover these injected planets. \par
Our dedicated treatment of stellar activity allowed significant improvement in the detection capabilities compared to J14, a planet search study done on the same star with no dedicated activity mitigation. As previously shown by e.g. \cite{Yu2017a}, these strategies are amongst the best tools we have to deal with large activity signatures. Although such treatment is now widely accepted in the community, we further confirmed that it is crucial, and showed that we can detect short-orbit gas giants even in non-optimally sampled datasets exhibiting 50-100\,\,m\,s$^{-1}$ RV precision. \par
We tested two alternatives to recover the RVs from the LSD line profiles (generalised normal distribution fit and first order moment), which yielded slightly different RV time series but more importantly different uncertainties. We favoured the FOM approach but other methods such as broadened profiles could be explored.\par
With a low number of epochs acquired with a non-stabilised spectrograph, the combined use of both GP and ZDI methods enabled us to set a planet detection threshold of around 100\,\,m\,s$^{-1}$\, or $\approx \frac{1}{4}$ of the activity level. Injected planets under this threshold were either non-detections or would require extra observations to confirm. The limitations we faced give a good idea of the upper limit we can hope to achieve for such systems in similar conditions. \par
Although Doppler Imaging shows less reliability than the GP, it allows us to strengthen the confidence of a finding. This lack of reliability could be explained by the fact that DI does not take into account surface variability due to the appearance/disappearance of active regions. These can evolve quickly, as shown for another G type star in \cite{Petit2004}. We suggest that claiming planets around active stars should be done with a dedicated treatment for stellar variability, preferentially using a Bayesian framework for robustness and to allow a quantification of the evidence of the presence of an orbiting planet.\par
We attempted to identify some factors that could improve the likelihood of finding exoplanets orbiting young stars. Larger and more precise datasets are an obvious one. Efficient sampling is also crucial: dense sampling of the stellar rotation, to better constrain the activity, should be combined with coverage of multiple planetary orbits. \par
Orbital periods close to $P_{\mathrm{eq}}$ and its harmonics pose serious difficulties, and often lead to non-detections. In our case, it also appears to be a good rule of thumb to sample at least 2, or even 3, orbital periods to constrain $P_{\mathrm{orb}}$ with sufficient precision. \par
Some datasets (\#2, \#15, \#32 and \#36) exhibited significant spurious peaks of mysterious origin that compete with the true planetary period, emphasising the difficulty of RV only searches. \par
Detecting young exoplanets that do not transit is difficult but essential if we want to expand the sample of massive short period exoplanets orbiting very young stars and progress toward settling the long lasting debate over their origin. This work demonstrates that we can realistically identify potential candidates for follow-up observations and even detect short-orbit gas giants planets in non-optimised datasets exhibiting large activity variability.
\section{Future work}
\label{sec:futurework}
As follow-up of this work, and to improve precision, producing mean line profiles with either classical approaches (i.e. CCF, shift and fit) or more recent proposals (\citealt{Rajpaul2020} or \citealt{Cameron2020}), rather than with LSD, could be explored for the GP analysis. Indeed, the strength of LSD is the increase in S/N it provides, at the cost of a poorer estimation of the uncertainties (usually overestimated). It is more relevant to have a better estimation of the uncertainties for the RVs used in the GP than a boosted S/N (required for Doppler Imaging). \par
Now having a better grasp on the capabilities of these activity mitigation strategies, it is possible to study real data of young solar type stars. Many projects such as the BCool \citep{Marsden2014} or the TOUPIES\footnote{\url{https://ipag.osug.fr/Anr_Toupies//}} (\citealt{Folsom2016,Folsom2018}) surveys, aimed at characterising stars using DI and ZDI, would be good starting points. \par
Among the overwhelming number of targets observed by the TESS mission \citep{Ricker2015}, many are young Solar analogues. Careful planning of follow-up and the availability of photometry for transiting planets would drastically increase the characterisation capabilities, see Appendix~\ref{sec:appendixD}. In general, using complementary tools to diagnose the activity, such as activity indicators and photometry, is strongly recommended (e.g. \citealt{Rajpaul2015,Jones2017,Oshagh2017,Kosiarek2020}).\par
\section*{Acknowledgements}
This research has been supported by an Australian Government Research Training Program Scholarship. \par
This paper uses data acquired in 2011 with the Anglo-Australian Telescope. We would like to thank the technical staff of the Australian Astronomical Observatory for their excellent assistance during these observations. We also wish to thank the observers: A. Collier Cameron, N. Dunstone, G. Hussain and J.C. Ramírez Vélez. \par
In the scope of this research, we used the University of Southern Queensland's (USQ) \href{https://www.usq.edu.au/hpc}{Fawkes HPC} which is co-sponsored by the Queensland Cyber Infrastructure Foundation (QCIF). \par
This research has made use of NASA's \href{https://ui.adsabs.harvard.edu/}{Astrophysics Data System} and the \href{http://simbad.u-strasbg.fr/simbad/}{SIMBAD database} operated at CDS, Strasbourg, France. \par
For this research we made use of the following Python packages: \textsc{astropy} \citep{astropy:2013}, \textsc{corner} \citep{corner}, \textsc{loguniform} (MIT licence, João Faria), \textsc{matplotlib} \citep{Hunter:2007}, \textsc{numpy} \citep{harris2020array}, \textsc{PyMultiNest} \citep{Buchner2014} and \textsc{scipy} \citep{2020SciPy}. \par
Finally, thanks to J.C. H for the insightful conversations on the science behind this research.
\section*{Data availability}
The data underlying this article will be shared on reasonable request
to the corresponding author. The full content of Table~\ref{tab:results} is available as online material.
\input{Main.bbl}
\newpage
\section{HornMaxSAT Algorithm with Hitting Sets} \label{sec:algs}
This section develops a MaxHS\xspace-like~\cite{bacchus-cp11,jarvisalo-sat16}
algorithm for HornMaxSAT\xspace. In addition, the section shows that this
MaxHS\xspace-like algorithm opens the possibility of solving large-scale
problems with abstraction.
\subsection{A MaxHS\xspace-Like HornMaxSAT\xspace Algorithm}
With the goal of exploiting the special structure of HornMaxSAT\xspace, a
MaxHS\xspace-like algorithm is
envisioned~\cite{bacchus-cp11,jarvisalo-sat16}.
\begin{algorithm}[t]
\input{./algs/hmaxhs.tex}
\caption{HMaxHS\xspace, a MaxHS\xspace-like~\cite{bacchus-cp11} HornMaxSAT\xspace algorithm}
\label{alg:hmaxhs}
\end{algorithm}
\autoref{alg:hmaxhs} summarizes the proposed approach. The key
observation is that each call to LTUR~\cite{minoux-ipl88} runs in
linear time. (Unit propagation as implemented in modern SAT solvers,
will also run in polynomial time, but it will be less efficient in
practice.) The original motivation for MaxHS is that finding a
minimum hitting set of $\fml{S}$ is expected to be much easier than
solving the MaxSAT\xspace problem. This is also the motivation for HMaxHS\xspace.
As observed in recent work~\cite{amms-sat15,msimp-jelia16}, MUSes
(minimal unsatisfiable subsets) can be computed in polynomial time in
the case of Horn formulas. MUS extraction, but also MCS (minimal
correction subset) extraction~\cite{msimp-jelia16}, are based on the
original LTUR algorithm~\cite{minoux-ipl88}.
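A minimal sketch of LTUR-style positive unit resolution (an illustrative reimplementation for this presentation, not the code of the cited works) makes the linear-time claim concrete: with one counter of unsatisfied negative literals per clause, each clause literal is processed at most once:

```python
from collections import deque, defaultdict

def ltur(clauses):
    """Horn satisfiability via positive unit resolution (LTUR-style).
    A clause is a list of ints: v encodes x_v, -v encodes its negation;
    each Horn clause has at most one positive literal.
    Returns (is_sat, set of variables forced to 1)."""
    neg_count = [sum(1 for lit in c if lit < 0) for c in clauses]
    head = [next((lit for lit in c if lit > 0), None) for c in clauses]
    watch = defaultdict(list)                # var -> clauses containing -var
    for ci, c in enumerate(clauses):
        for lit in c:
            if lit < 0:
                watch[-lit].append(ci)
    forced = set()
    # Clauses whose negative literals are all satisfied force their head.
    queue = deque(ci for ci in range(len(clauses)) if neg_count[ci] == 0)
    while queue:
        ci = queue.popleft()
        v = head[ci]
        if v is None:                        # all-negative clause falsified
            return False, forced
        if v in forced:
            continue
        forced.add(v)
        for cj in watch[v]:
            neg_count[cj] -= 1
            if neg_count[cj] == 0:           # body fully satisfied
                queue.append(cj)
    return True, forced
```

For instance, `ltur([[1], [-1, 2]])` reports satisfiable with $\{x_1, x_2\}$ forced, while adding the goal clause $(\neg x_1 \lor \neg x_2)$ yields unsatisfiable; the set of forced variables then serves as the basis for polynomial-time MUS/MCS extraction.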
It should be noted that some implementations of MaxHS\xspace use an ILP
(Integer Linear Programming) package (e.g.\ CPLEX or
SCIP)~\cite{bacchus-cp11,jarvisalo-sat16}\footnote{SCIP and CPLEX and
available, respectively, from \url{http://scip.zib.de/} and
\url{https://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/}.},
whereas others exploit SAT solvers for computing minimum hitting
sets~\cite{iplms-cp15,imms-ecai16}.
\subsection{Automatic Abstraction-Based Problem Solving}
\label{sec:abstract}
For some of the problems described in~\autoref{sec:probs} a possible
criticism of~\autoref{alg:hmaxhs} is that it will iteratively find
sets $\fml{U}$ consisting of a single clause, and it will essentially
add to $\fml{K}$ all the clauses in $\fml{H}$. Although this is in
fact a possibility for some problems (but not all, as investigated
in~\autoref{sec:apps}), this section shows that even for these
problems,~\autoref{alg:hmaxhs} can provide an effective problem
solving approach.
Consider the example graph in~\autoref{fig:graph01}, where the goal is
to compute a maximum independent set (or alternatively a minimum
vertex cover).
\begin{figure}[t]
\begin{subfigure}[b]{0.675\textwidth}
\scalebox{0.8275}{\input{./texfigs/graph01}}
\caption{Example graph} \label{fig:ex01-graph}
\end{subfigure}
%
\begin{subfigure}[b]{0.3\textwidth}
\scalebox{0.9}{
\renewcommand{\arraystretch}{1.25}
\renewcommand{\tabcolsep}{0.25em}
\hspace*{-0.75cm}
\begin{tabular}{c|c} \hline
Vertices & $(1+m)k$ \\ \hline
Edges & $k(k-1)/2+mk$ \\ \hline
MaxIS\xspace & $mk$ \\ \hline
MinVC\xspace & $k$ \\ \hline
MaxClique\xspace & $k$ \\ \hline
\end{tabular}
}
\caption{Graph stats} \label{fig:ex01-stats}
\end{subfigure}
\caption{Example graph for computing MaxIS\xspace \& MinVC\xspace} \label{fig:graph01}
\end{figure}
From the figure, we can conclude that the number of vertices is
$(1+m)k$, the number of edges is $(k(k-1)/2+km)$, the size of the
maximum independent set is $km$ and the size of the minimum vertex
cover is $k$.
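These counts can be verified programmatically. The sketch below constructs the graph family under an assumed reading of the figure (a $k$-clique in which each clique vertex additionally has $m$ private degree-1 neighbours):

```python
from itertools import combinations

def build_graph(k, m):
    """k-clique v_1..v_k; each clique vertex v_i gets m private
    degree-1 neighbours u_{i1}..u_{im} (assumed figure structure)."""
    clique = [("v", i) for i in range(k)]
    pendant = [("u", i, j) for i in range(k) for j in range(m)]
    edges = set(combinations(clique, 2))
    edges |= {(("v", i), ("u", i, j)) for i in range(k) for j in range(m)}
    return clique + pendant, edges

k, m = 4, 3
vertices, edges = build_graph(k, m)
assert len(vertices) == (1 + m) * k              # table: (1+m)k vertices
assert len(edges) == k * (k - 1) // 2 + m * k    # table: k(k-1)/2 + mk edges
# The mk pendant vertices form an independent set (MaxIS = mk);
# the k clique vertices form a vertex cover (MinVC = k).
```

The pendant vertices are pairwise non-adjacent, giving the maximum independent set of size $mk$, while every edge touches a clique vertex, giving the minimum vertex cover of size $k$.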
From the inspection of the reduction from MaxIS\xspace (or MinVC\xspace) to HornMaxSAT\xspace,
and the operation of~\autoref{alg:hmaxhs}, one might anticipate
that~\autoref{alg:hmaxhs} would iteratively declare each hard clause
as an unsatisfiable core, and replicate the clause in the list
$\fml{K}$ of sets to hit, thus requiring a number of iterations no
smaller than the number of edges. (More importantly, for a MaxHS\xspace-like
algorithm, the number of iterations is worst-case
exponential~\cite{bacchus-cp11}.)
However, and as shown below, the operation of HMaxHS\xspace actually
\emph{ensures} this is \emph{not} the case.
Without loss of generality, consider any of the vertices in the
clique, i.e.\ $v_1,\ldots, v_k$, say $v_i$.
Observe that, as soon as two edges $(v_i, u_{ij_1})$ and
$(v_i,u_{ij_2})$ are replicated, i.e.\ added to $\fml{K}$, a minimum
hitting set will necessarily pick $v_i$. As a result, at most two
pendant edges per clique vertex, in addition to the clique edges, can
be replicated, and so after at most $k(k-1)/2+2k$
iterations, the algorithm will terminate with the answer $mk$.
Essentially, the algorithm is capable of \emph{abstracting} away
$(m-2)k$ clauses when computing the maximum independent set. Observe
that $m$ can be made arbitrarily large.
Abstraction is a well-known topic in AI, with important
applications~\cite{walsh-aij92}. The example in this section suggests
that HornMaxSAT\xspace and the HMaxHS\xspace algorithm can effectively enable automatic
abstraction for solving large scale (graph) optimization
problems. This remark is further investigated in~\autoref{sec:res}.
It should be noted that the result above highlights what seems to be a
fundamental property of the original MaxHS\xspace
algorithm~\cite{bacchus-cp11}.
Although in the worst case, the algorithm can require an exponential
number of steps to find the required set of clauses to remove to
achieve consistency, the result above illustrates how MaxHS\xspace can be
effective at discarding irrelevant clauses, and focusing on the key
parts of the formula, thus being able to compute solutions in a number
of iterations not much larger than the minimum number of falsified
clauses in the MaxHS\xspace solution. Practical results from recent MaxSAT\xspace
Evaluations\footnote{\url{http://www.maxsat.udl.cat/}.} confirm the
practical effectiveness of MaxHS\xspace-like algorithms.
\section{HornMaxSAT in Practice} \label{sec:apps}
Besides the reference optimization problems analyzed
in~\autoref{sec:probs}, a number of practical applications can also be
shown to correspond to solving HornMaxSAT\xspace or can be reduced to HornMaxSAT\xspace.
This section investigates some of these problems, but also proposes
generic encodings from either SAT or CSP into HornMaxSAT\xspace.
\subsection{Sample Problems}
Different optimization problems in practical settings can be encoded as
HornMaxSAT\xspace.
The winner determination problem (WDP) finds important applications in
combinatorial auctions. An immediate observation is that the encoding
proposed in~\cite{larrosa-jsat08} corresponds to HornMaxSAT\xspace.
The problem of coalition structure generation (CSG) also finds
important applications in multi-agent systems. An immediate
observation is that some of the encodings proposed
in~\cite{koshimura-ictai12} correspond to HornMaxSAT\xspace.
HornMaxSAT\xspace also finds application in the area of axiom pinpointing for
$\fml{EL}^{+}$\xspace description logic, but also for other lightweight description
logics.
For the concrete case of $\fml{EL}^{+}$\xspace, the problem encoding is well-known
to be Horn~\cite{sebastiani-cade09}, with the soft clauses being unit
positive. The use of LTUR-like algorithms has been investigated
in~\cite{amms-sat15}.
\jnoteF{
Goal models \& Coalition struture generation.\\
Detail model for winner determination problem.}
As shown in the sections below, it is actually simple to map different
decision (and optimization\footnote{
In the case of optimization problems, it is simple to apply the same
technique in the setting of Boolean Lexicographic Optimization
(BLO)~\cite{msagl-amai11}. Due to lack of space, details are
omitted.}) problems into HornMaxSAT\xspace.
\subsection{Reducing SAT to HornMaxSAT\xspace} \label{ssec:sat2horn}
Let $\fml{F}$ be a CNF formula, with $N$ variables $\{x_1,\ldots,x_N\}$
and $M$ clauses $\{c_1,\ldots,c_M\}$.
Given $\fml{F}$, the reduction creates a Horn MaxSAT\xspace problem with hard
clauses $\fml{H}$ and soft clauses $\fml{S}$,
$\langle\fml{H},\fml{S}\rangle=\textsf{HEnc}(\fml{F})$.
For each variable $x_i\in X$, create new variables $p_i$ and $n_i$,
where $p_i=1$ iff $x_i=1$, and $n_i=1$ iff $x_i=0$. Thus, we need a hard
clause $(\neg p_i\lor\neg n_i)$, to ensure that we do not
simultaneously assign $x_i=1$ and $x_i=0$. (Observe that the added
clause is Horn.)
For each clause $c_j$ we require $c_j$ to be satisfied, by requiring
that one of its literals \emph{not} be falsified. For each literal
$x_i$ use $\neg n_i$ and for each literal $\neg x_i$ use $\neg p_i$.
Thus, $c_j$ is encoded with a new (hard) clause $c'_j$ with the same
number of literals as $c_j$, but with only negative literals on the
$p_i$ and $n_i$ variables, and so the resulting clause is also Horn.
The set of soft clauses $\fml{S}$ is given by $(p_i)$ and $(n_i)$ for
each of the original variables $x_i$.
If the resulting Horn formula has a HornMaxSAT\xspace solution with at least
$N$ variables assigned value 1, then the original formula is
satisfiable; otherwise the original formula is unsatisfiable. (Observe
that, by construction, the HornMaxSAT\xspace solution cannot assign value 1 to
more than $N$ variables.)
Clearly, the encoding outlined in this section can be the subject of
different improvements, e.g.\ not all clauses need to be goal
clauses.
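The reduction just described can be sketched as follows (an illustrative implementation; the variable numbering, with $p_i$ as $i$ and $n_i$ as $N+i$, is a choice of this sketch):

```python
def horn_encode(clauses, n_vars):
    """Dual-rail reduction of SAT to HornMaxSAT. Input clause literals
    are +/-i over x_1..x_N; output literals range over p_1..p_N
    (ids 1..N) and n_1..n_N (ids N+1..2N). Returns (hard, soft)."""
    p = lambda i: i              # p_i = 1 iff x_i = 1
    n = lambda i: n_vars + i     # n_i = 1 iff x_i = 0
    # x_i cannot be simultaneously 1 and 0 (Horn: all-negative).
    hard = [[-p(i), -n(i)] for i in range(1, n_vars + 1)]
    for c in clauses:
        # Literal x_i -> -n_i, literal -x_i -> -p_i (all-negative, Horn).
        hard.append([-n(l) if l > 0 else -p(-l) for l in c])
    # Unit soft clauses (p_i) and (n_i) for every original variable.
    soft = ([[p(i)] for i in range(1, n_vars + 1)]
            + [[n(i)] for i in range(1, n_vars + 1)])
    return hard, soft
```

By the argument above, the input formula is satisfiable iff the resulting HornMaxSAT\xspace instance has a solution satisfying at least $N$ soft clauses.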
\begin{comment}
\begin{example}
Let the CNF formula be:
\begin{equation}
(x_1\lor\neg x_2\lor x_3)\land(x_2\lor x_3)\land(\neg x_1\lor\neg x_3)
\end{equation}
The new variables are $\{n_1,p_1,n_2,p_2,n_3,p_3\}$. Preventing
simultaneous assignment to 0 and 1 is guaranteed with the hard
clauses:
\begin{equation}
(\neg n_1\lor\neg p_1)\land(\neg n_2\lor\neg p_2)\land(\neg n_3\lor\neg p_3)
\end{equation}
The original clauses are reencoded as follows:
\begin{equation}
(\neg n_1\lor\neg p_2\lor\neg n_3)\land(\neg n_2\lor\neg n_3)\land(\neg p_1\lor\neg p_3)
\end{equation}
Finally, the soft clauses are
$\fml{S}=\{(n_1),(p_1),(n_2),(p_2),(n_3),(p_3)\}$.
\end{example}
\end{comment}
The transformation proposed can be related with the well-known
dual-rail encoding, used in different
settings~\cite{BryantBBCS87,mfmso-ictai97,RoordaC05,jmsss-jelia14,pimms-ijcai15}. To
the best of our knowledge, the use of a dual-rail encoding for deriving a
pure Horn formula has not been proposed in earlier work.
\jnoteF{Relate with dual rail encoding.}
\jnoteF{Detail the CNF2Horn SAT encoding.}
\subsection{Reducing CSP to HornMaxSAT\xspace} \label{ssec:csp2horn}
This section investigates reductions of Constraint Satisfaction
Problems (CSP) into HornMaxSAT\xspace. Standard definitions are
assumed~\cite{walsh-bk06}.
A CSP is a triple $\langle X, D, C\rangle$, where $X=\langle
x_1,\ldots,x_N\rangle$ is an $N$-tuple of variables, $D$ is a
corresponding $N$-tuple of domains $D=\langle D_1,\ldots,D_N\rangle$,
such that $x_i\in D_i$, and $C$ is a $t$-tuple of constraints $C=\langle
C_1,\ldots,C_t\rangle$. Each $C_j$ is a pair $\langle R_{S_j}, S_j\rangle$,
where $R_{S_j}$ is a relation on the variables in $S_j$,
representing a subset of the Cartesian product of the domains of the
variables in $S_j$.
One approach to encode CSPs as HornMaxSAT\xspace is to translate the CSP to SAT
(e.g.\ \cite{walsh-cp00}), and then apply the Horn encoder outlined
in~\autoref{ssec:sat2horn}. There are, however, alternative approaches,
one of which we now detail.
We show how to adapt the well-known direct encoding of CSP into
SAT~\cite{walsh-cp00}. The set of variables is $x_{iv}$, such that
$x_{iv}=1$ iff $x_i$ is assigned value $v\in D_i$. Moreover, we
consider the \emph{disallowed} combinations of values of each
constraint $C_j$. For example, if the combination of values
$x_{i_1}=v_{i_1}\land x_{i_2}=v_{i_2}\land\cdots\land x_{i_q}=v_{i_q}$
is disallowed, i.e.\ no tuple of the relation $S_j$ associated with
$C_j$ contains these values, then add a (Horn) clause
$(\neg x_{i_1v_{i_1}}\lor\cdots\lor\neg x_{i_qv_{i_q}})$.
For each $x_i$, require that no more than one value can be used:
$\sum_{v\in D_i}x_{iv}\le 1$; this AtMost1 constraint can be encoded
with Horn clauses as shown in~\autoref{prop:cardhorn}.
Finally, the goal is to assign as many variables as possible, and so
add a soft clause $(x_{iv})$ for each $x_i$ and each $v\in D_i$.
It is immediate that the CSP is satisfiable iff the HornMaxSAT\xspace
formulation has a solution with at least $N$ satisfied soft clauses
(and by construction it cannot assign value 1 to more than $N$
variables).
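This direct encoding can be sketched as follows (illustrative; pairwise AtMost1 clauses stand in for the encoding of~\autoref{prop:cardhorn}, and are themselves Horn since all their literals are negative):

```python
from itertools import combinations

def csp_to_hornmaxsat(domains, nogoods):
    """Direct encoding of a CSP into HornMaxSAT. domains[i] lists the
    values of x_i; nogoods is a list of disallowed partial assignments,
    each a dict {variable index: value}. x_{iv} gets an integer id."""
    var = {}
    for i, dom in enumerate(domains):
        for v in dom:
            var[(i, v)] = len(var) + 1       # propositional var for x_i = v
    hard = []
    for i, dom in enumerate(domains):
        # AtMost1 value per CSP variable (pairwise; all-negative -> Horn).
        for v, w in combinations(dom, 2):
            hard.append([-var[(i, v)], -var[(i, w)]])
    for ng in nogoods:
        # Disallowed combination -> all-negative (Horn) clause.
        hard.append([-var[(i, v)] for i, v in ng.items()])
    soft = [[x] for x in var.values()]       # unit soft clauses (x_{iv})
    return hard, soft, var
```

As stated above, the CSP is satisfiable iff this HornMaxSAT\xspace instance admits a solution with at least $N$ satisfied soft clauses.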
\jnoteF{Detail the CSP2Horn SAT encoding.}
\subsection{Reducing PHP to HornMaxSAT\xspace}
The previous sections show that the optimization and decision problems
with simple reductions to HornMaxSAT\xspace are essentially endless,
as any decision problem that can be reduced to SAT or CSP can also
be reduced to HornMaxSAT\xspace.
However, it is also possible to develop specific reductions, that
exploit the original problem formulation. This section investigates
how to encode the representation of the pigeonhole principle (PHP) as
HornMaxSAT\xspace, for which propositional encodings are well-known and
extensively investigated~\cite{cook-jsl79}.
\begin{ndefinition}[Pigeonhole Principle, PHP\xspace~\cite{cook-jsl79}]
The pigeonhole principle states that if $m+1$ pigeons are
distributed into $m$ holes, then at least one hole contains more than
one pigeon. A more formal formulation is that there exists no
injective function mapping $\{1,2,...,m+1\}$ to $\{1,2,...,m\}$, for
$m\ge1$.
\end{ndefinition}
Propositional formulations of $\text{PHP\xspace}$ encode the negation of the
principle, and ask for an assignment such that the $m+1$ pigeons are
placed into $m$ holes~\cite{cook-jsl79}. Given a propositional
encoding and the reduction proposed in~\autoref{ssec:sat2horn}, we can
encode PHP formulas into HornMaxSAT\xspace. We describe below an alternative
reduction.
\begin{nreduction}[$\text{PHP\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
Let $x_{ij}=1$ iff pigeon $i$, with $1\le i\le m+1$, is placed in
hole $j$, with $1\le j\le m$.
%
For each hole $j$, $1\le j\le m$, at most 1 pigeon can be placed in
hole $j$:
\begin{equation} \label{eq:amo1}
\begin{array}{lcl}
\sum_{i=1}^{m+1}x_{ij}\le 1 & \quad\quad\quad & 1\le j\le m \\
\end{array}
\end{equation}
which can be encoded with Horn clauses, by~\autoref{prop:cardhorn}.\\
%
For each pigeon $i$, $1\le i\le m+1$, the pigeon is placed in at most 1
hole:
\begin{equation} \label{eq:amo2}
\begin{array}{lcl}
\sum_{j=1}^{m}x_{ij}\le 1 & \quad\quad\quad & 1\le i \le m+1 \\
\end{array}
\end{equation}
which can also be encoded with Horn clauses,
by~\autoref{prop:cardhorn}.\\
%
The soft clauses are $(x_{ij})$, with $1\le i\le m+1, 1\le j\le m$.
%
The PHP problem is satisfiable iff the HornMaxSAT\xspace problem has a
solution satisfying at least $m+1$ soft clauses, i.e.\ all $m+1$
pigeons are placed.
\end{nreduction}
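The reduction can be sketched as follows (illustrative; as before, pairwise AtMost1 clauses stand in for the encoding of~\autoref{prop:cardhorn} and are Horn since all literals are negative):

```python
from itertools import combinations

def php_to_hornmaxsat(m):
    """HornMaxSAT encoding of PHP with m+1 pigeons and m holes.
    x(i, j) = 1 iff pigeon i is placed in hole j."""
    x = lambda i, j: i * m + j + 1           # i in 0..m, j in 0..m-1
    hard = []
    for j in range(m):                       # each hole: at most 1 pigeon
        for i1, i2 in combinations(range(m + 1), 2):
            hard.append([-x(i1, j), -x(i2, j)])
    for i in range(m + 1):                   # each pigeon: at most 1 hole
        for j1, j2 in combinations(range(m), 2):
            hard.append([-x(i, j1), -x(i, j2)])
    soft = [[x(i, j)] for i in range(m + 1) for j in range(m)]
    return hard, soft
```

By construction at most $m$ pigeons can be placed, so the HornMaxSAT\xspace optimum satisfies only $m$ soft clauses, below the $m+1$ threshold: the encoded instance is unsatisfiable, as the principle demands.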
\section{Conclusions \& Research Directions} \label{sec:conc}
The practical success of recent MaxSAT\xspace solvers not only motivates
investigating novel applications, but it also justifies considering
subclasses of the general MaxSAT\xspace problem.
This paper investigates the subclass of MaxSAT\xspace restricted to Horn
clauses, i.e.\ HornMaxSAT\xspace. The paper shows that a comprehensive set of
optimization and decision problems are either formulated as HornMaxSAT\xspace or
admit simple reductions to HornMaxSAT\xspace. The paper also shows that
fundamental decision problems, including SAT and CSP, can be
reduced to HornMaxSAT\xspace.
Although NP-hardness of HornMaxSAT\xspace guarantees that such reductions must
exist, the paper develops simple reductions, some of which were, to
the best of our knowledge, previously unknown.
The paper also proposes a HornMaxSAT\xspace algorithm, based on a well-known
family of MaxSAT\xspace algorithms~\cite{bacchus-cp11,jarvisalo-sat16}, but
which exploits the fact that the formulas to be analyzed are Horn.
The experimental results show the promise of reductions to HornMaxSAT\xspace and
motivate investigating further the use of HornMaxSAT\xspace as a generic problem
solving approach. This also motivates the development of more efficient
implementations of HMaxHS\xspace and of alternative approaches to HMaxHS\xspace.
\section{Introduction} \label{sec:intro}
Recent years have seen very significant improvements in MaxSAT\xspace solving
technology~\cite{bacchus-cp11,mhlpms-cj13,ansotegui-aij13,jarvisalo-sat16}.
Currently, the most effective MaxSAT\xspace algorithms propose
different ways for iteratively finding and blocking unsatisfiable
cores (or subformulas).
However, and despite the promising results of MaxSAT\xspace in practical
settings, past work has not investigated dedicated approaches for
solving subclasses of the MaxSAT\xspace problem, with one concrete example
being the MaxSAT\xspace problem over Horn formulas,
i.e.\ HornMaxSAT\xspace\footnote{In contrast, for predicate logic and many of
its specializations, Horn clauses are used ubiquitously. This
includes logic programming, among many other applications.}.
The HornMaxSAT\xspace optimization problem is well-known to be
NP-hard~\cite{jaumard-ipl87}. In contrast to HornMaxSAT\xspace, the decision
problem for Horn formulas is well-known to be in P, with linear time
algorithms proposed in the 80s~\cite{gallier-jlp84,minoux-ipl88}.
This paper investigates practical uses of MaxSAT\xspace subject to Horn
formulas, and shows that
a vast number of decision and optimization problems are naturally
formulated as HornMaxSAT\xspace. More importantly, as this paper also shows, a
vast number of other decision and optimization problems admit simple
HornMaxSAT\xspace encodings.
One should observe that HornMaxSAT\xspace is NP-hard and so, by definition, any
decision problem in NP admits a polynomial time reduction to HornMaxSAT\xspace.
However, for many problems in NP, such reductions are not known, and
may result in large (even if polynomial) encodings.
With the purpose of exploiting the observation that many optimization
and decision problems have natural (and simple) reductions to HornMaxSAT\xspace,
this paper also proposes a novel algorithm for HornMaxSAT\xspace. The new
algorithm mimics
recent Implicit Hitting Set algorithms\footnote{Throughout the paper, these are referred to as MaxHS\xspace-family of MaxSAT\xspace algorithms.} proposed for
MaxSAT\xspace \cite{bacchus-cp11,jarvisalo-sat16}, thus exploiting the
fact that Horn formulas can be decided in polynomial (linear)
time~\cite{minoux-ipl88}, and for which minimal unsatisfiable cores
(or MUSes) can be computed in polynomial time~\cite{msimp-jelia16}.
The paper is organized as follows.~\autoref{sec:prelim} introduces the
definitions and notation used in the remainder of the
paper.~\autoref{sec:probs} shows that a large number of well-known
optimization, but also decision, problems already have simple HornMaxSAT\xspace
formulations which, to the best of our knowledge, have not been
exploited before.~\autoref{sec:algs} proposes a variant of recent
general-purpose MaxSAT\xspace algorithms, that is dedicated to the HornMaxSAT\xspace
problem. This section also shows that the new algorithm can elicit
automatic abstraction mechanisms when solving large scale optimization
problems. The potential of the work proposed in this paper is
assessed in~\autoref{sec:res}, and~\autoref{sec:conc} concludes the
paper.
\section{Preliminaries} \label{sec:prelim}
The paper assumes definitions and notation standard in propositional
satisfiability (SAT) and MaxSAT\xspace~\cite{sat-handbook09}.
Propositional variables are taken from a set $X=\{x_1,x_2,\ldots\}$.
A Conjunctive Normal Form (CNF) formula is defined as a conjunction of
disjunctions of literals, where a literal is a variable or its
complement. CNF formulas can also be viewed as sets of sets of
literals, and are represented with calligraphic letters, $\fml{A}$,
$\fml{F}$, $\fml{H}$, etc.
Given a formula $\fml{F}$, the set of variables is
$\vars(\fml{F})\subseteq X$.
A clause is a \emph{goal clause} if all of its literals are negative.
A clause is a \emph{definite clause} if it has exactly one positive
literal and all the other literals are negative; the number of
negative literals may be 0. A clause is Horn if it is either a goal or
a definite clause.
A truth assignment $\nu$ is a map from variables to $\{0,1\}$. Given a
truth assignment, a clause is satisfied if at least one of its
literals is assigned value 1; otherwise it is falsified. A formula is
satisfied if all of its clauses are satisfied; otherwise it is
falsified.
If there exists no assignment that satisfies a CNF formula $\fml{F}$,
then $\fml{F}$ is referred to as \emph{unsatisfiable}.
(Boolean) Satisfiability (SAT) is the decision problem for
propositional formulas, i.e.\ to decide whether a given propositional
formula is satisfiable.
Since the paper only considers propositional formulas in CNF,
throughout the paper SAT refers to the decision problem for
propositional formulas in CNF.
Modern SAT solvers instantiate the Conflict-Driven Clause Learning
paradigm~\cite{sat-handbook09}.
For unsatisfiable (or inconsistent) formulas, MUSes (minimal
unsatisfiable subsets) represent subset-minimal subformulas that are
unsatisfiable (or inconsistent), and MCSes (minimal correction
subsets) represent subset-minimal subformulas such that the complement
is satisfiable~\cite{sat-handbook09}.
To simplify modeling with propositional logic, one often represents
more expressive constraints. Concrete examples are cardinality
constraints and pseudo-Boolean constraints~\cite{sat-handbook09}.
A cardinality constraint of the form $\sum x_i\le k$ is referred to as
an $\textsf{AtMost}{}k$ constraint, whereas a cardinality constraint of the
form $\sum x_i\ge k$ is referred to as an $\textsf{AtLeast}{}k$ constraint.
Propositional encodings of cardinality and pseudo-Boolean constraints
are an area of active
research~\cite{warners-ipl98,bailleux-cp03,sinz-cp05,een-jsat06,sat-handbook09,nieuwenhuis-sat09,codish-lpar10,roussel-sat09,nieuwenhuis-cj11,nieuwenhuis-sat11,koshimura-ictai13a}.
The (plain) MaxSAT\xspace problem is to find a truth assignment that
maximizes the number of satisfied clauses. For the plain MaxSAT\xspace
problem, all clauses are \emph{soft}, meaning that these may not be
satisfied. Variants of the MaxSAT\xspace problem can consider the existence of
\emph{hard} clauses, meaning that these must be satisfied, and can also
assign weights to the soft clauses, denoting the \emph{cost} of
falsifying the clause; this is referred to as the weighted MaxSAT\xspace
problem, WMaxSAT\xspace. When addressing MaxSAT\xspace problems with weights, hard
clauses are assigned a large weight $\top$.
The HornMaxSAT\xspace problem corresponds to the MaxSAT\xspace problem when all clauses
are Horn. If clauses have weights, then HornWMaxSAT\xspace denotes the Horn
MaxSAT\xspace problem when the soft clauses have weights.
Throughout the paper, standard graph and set notations will be used.
An undirected graph $G=(V,E)$ is defined by a set $V$ of vertices
and a set $E\subseteq\{\{u,v\}\,|\,u,v\in V, u\not=v\}$. The notation
$(u,v)$ is used in this paper to represent the edges $\{u,v\}$ of $E$,
where the order of the vertices is irrelevant. Given $G=(V,E)$, the
\emph{complement graph} $G^C=(V,E^C)$ is the graph with the edges in
$\{\{u,v\}\,|\,u,v\in V, u\not=v\}$ that are not in $E$.
Moreover, it is assumed some familiarity with optimization problems
defined on graphs, including minimum vertex cover, maximum independent
set, maximum clique, among others.
Finally, the notation $\ensuremath \le_P$ is used to represent polynomial time
reducibility between problems~\cite[Section~34.3]{cormen-bk09}.
\section{Basic Reductions} \label{sec:probs}
This section shows that a number of well-known problems can be reduced
in polynomial time to the HornMaxSAT\xspace problem. Some of the reductions are
well-known; we simply highlight that the resulting propositional
formulas are Horn.
\subsection{Optimization Problems on Graphs}
\begin{ndefinition}[Minimum Vertex Cover, MinVC\xspace]
Given an undirected graph $G=(V,E)$, a vertex cover $T\subseteq V$
is such that for each $(u,v)\in E$, $\{u,v\}\cap T\not=\emptyset$.
%
A minimum (or cardinality minimal) vertex cover $T\subseteq V$ is a
vertex cover of minimum size\footnote{This corresponds to requiring
$T\subseteq V$ to be such that
$\forall_{U\subseteq V}|U|<|T|\to\exists_{(u,v)\in E},\{u,v\}\cap U=\emptyset$.
Throughout the paper, we will skip the mathematical representation
of minimum (but also maximum) size sets.}.
\end{ndefinition}
\begin{nreduction}[$\text{MinVC\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
For $u\in V$, let $x_u=1$ iff $u$ is \emph{not} included in a vertex
cover. For any $(u,v)\in E$,
add a hard clause $(\neg x_u\lor \neg x_v)$. For each $u\in V$, add
a soft clause $(x_u)$. (Any non-excluded vertex $u\in V$
(i.e.\ $x_u=0$) is in the vertex cover.)
\end{nreduction}
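To make the clause structure concrete, the reduction can be sketched in a few lines of Python. This is an illustrative sketch only: the integer-literal clause lists and the function name are ours, not a solver input format.

```python
# Sketch of the MinVC-to-HornMaxSAT reduction. Vertex u is encoded as
# the positive literal u, and its negation as -u (illustrative convention).
def minvc_to_hornmaxsat(vertices, edges):
    # x_u = 1 iff u is NOT included in the vertex cover.
    hard = [[-u, -v] for (u, v) in edges]  # goal clauses: Horn
    soft = [[u] for u in vertices]         # unit positive clauses: Horn
    return hard, soft

# A path 1-2-3: its minimum vertex cover is {2}.
hard, soft = minvc_to_hornmaxsat([1, 2, 3], [(1, 2), (2, 3)])
```

Maximizing the number of satisfied soft clauses maximizes the number of excluded vertices, which minimizes the cover; the same hard/soft structure serves the MaxIS\xspace reduction and, over the complement graph, the MaxClique\xspace reduction below.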
\begin{nremark}
The proposed reduction differs substantially from the one originally
used for proving HornMaxSAT\xspace to be NP-hard~\cite{jaumard-ipl87}, but our
working assumptions are also distinct, in that we consider hard and
soft clauses.
\end{nremark}
\begin{ndefinition}[Maximum Independent Set, MaxIS\xspace]
Given an undirected graph $G=(V,E)$, an independent set
$I\subseteq V$ is such that for each $(u,v)\in E$ either
$u\not\in I$ or $v\not\in I$.
%
A maximum independent set is an independent set of maximum size.
\end{ndefinition}
\begin{nreduction}[$\text{MaxIS\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
One can simply use the previous encoding, by noting that the
complement of a vertex cover is an independent set. For any $(u,v)\in E$,
add a hard clause $(\neg x_u\lor\neg x_v)$. For each $u\in V$, add
a soft clause $(x_u)$.
\end{nreduction}
\begin{ndefinition}[Maximum Clique, MaxClique\xspace] \label{def:mxclq}
Given an undirected graph $G=(V,E)$, a clique (or complete subgraph)
$C\subseteq V$ is such that for every pair $\{u,v\}\subseteq C$,
$(u,v)\in E$.
%
A maximum clique is a clique of maximum size.
\end{ndefinition}
\begin{nreduction}[$\text{MaxClique\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
A MaxSAT\xspace encoding for MaxClique\xspace is the following. For any $(u,v)\in
E^{C}$, add a hard clause $(\neg x_u\lor\neg x_v)$. For each $u\in
V$, add a soft clause $(x_u)$.
\end{nreduction}
\begin{ndefinition}[Minimum Dominating Set, MinDS\xspace] \label{def:mnds}
Let $G=(V,E)$ be an undirected graph. $D\subseteq V$ is a dominating
set if any $v\in V\setminus D$ is adjacent to at least one vertex in
$D$.
%
A minimum dominating set is a dominating set of minimum size.
\end{ndefinition}
\begin{nreduction}[$\text{MinDS\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
Let $x_u=1$ iff $u\in V$ is excluded from a dominating set $D$.
For each vertex $u\in V$, add a hard Horn clause
$(\neg x_u\lor\bigvee_{(u,v)\in E}\neg x_v)$.
%
The soft clauses are $(x_u)$, for $u\in V$.
\end{nreduction}
\subsection{Optimization Problems on Sets}
\begin{ndefinition}[Minimum Hitting Set, MinHS\xspace] \label{def:mnhs}
Let $\fml{C}$ be a collection of sets of some set $S$. A hitting set
$H\subseteq S$ is such that for any $D\in\fml{C}$,
$H\cap D\not=\emptyset$.
%
A minimum hitting set is a hitting set of minimum size.
\end{ndefinition}
\begin{nreduction}[$\text{MinHS\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
For each $a\in S$ let $x_a=1$ iff $a$ is excluded from $H$.
For each $D\in\fml{C}$, create a hard Horn clause
$(\bigvee_{a\in D}\neg x_a)$.
%
The soft clauses are $(x_a)$, for $a\in S$.
\end{nreduction}
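The correctness of this reduction can be checked by brute force on a tiny instance. The sketch below is illustrative (the clause representation and function names are ours): the minimum hitting set size equals the number of elements minus the optimum number of satisfied soft clauses.

```python
from itertools import product

# Sketch of the MinHS-to-HornMaxSAT reduction, checked by brute force.
def minhs_to_hornmaxsat(universe, collection):
    # x_a = 1 iff element a is excluded from the hitting set H.
    hard = [[-a for a in d] for d in collection]  # goal clauses: Horn
    soft = [[a] for a in universe]                # unit positive: Horn
    return hard, soft

def best_cost(universe, hard, soft):
    # Brute-force MaxSAT: maximize the number of satisfied soft clauses
    # over assignments satisfying all hard clauses (all-negative here).
    best = -1
    for bits in product([0, 1], repeat=len(universe)):
        val = dict(zip(universe, bits))
        if all(any(val[-l] == 0 for l in c) for c in hard):
            best = max(best, sum(val[c[0]] for c in soft))
    return best

universe = [1, 2, 3]
collection = [[1, 2], [2, 3]]
hard, soft = minhs_to_hornmaxsat(universe, collection)
# The optimum excludes two elements, so the minimum hitting set
# has size 1 (namely {2}).
assert len(universe) - best_cost(universe, hard, soft) == 1
```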
\begin{nremark} The minimum set cover (MinSC\xspace) is well-known to be
equivalent to the minimum hitting set problem. Thus, the same
reduction to HornMaxSAT\xspace can be applied.
\end{nremark}
\begin{ndefinition}[Maximum Set Packing, MaxSP\xspace] \label{def:mxsp}
Let $\fml{T}=\{T_1,\ldots,T_k\}$ be a family of sets.
$\fml{R}\subseteq\fml{T}$ is a set packing if
$\forall_{T_i,T_j\in\fml{R}}T_i\cap T_j=\emptyset$. A maximum set
packing is a set packing of maximum size.
\end{ndefinition}
\begin{nreduction}[$\text{MaxSP\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
Let $x_{i}=1$ iff $T_i$ is included in the set packing.
%
For each pair $T_i,T_j$, such that $T_i\cap T_j\not=\emptyset$,
create a hard Horn clause $(\neg x_i\lor \neg x_j)$. The soft
clauses are $(x_i)$, for $T_i\in\fml{T}$.
\end{nreduction}
\begin{nremark}
It is well-known that the maximum set packing problem can be reduced
to the maximum clique problem. The reduction above exploits this
result.
\end{nremark}
It is also immediate to conclude that the weighted version of any of the
optimization problems described in this and the previous sections can
be reduced to HornWMaxSAT\xspace.
\subsection{Handling Linear Constraints} \label{ssec:prob3}
This section argues that the propositional encodings of a number of
linear constraints are Horn. In turn, this enables solving a number of
optimization problems with HornMaxSAT\xspace.
The first observation is that the most widely used CNF
encodings of AtMost$k$ constraints are composed \emph{exclusively} of
Horn clauses\footnote{To the best of our knowledge, this property of
propositional encodings has not been investigated before.}:
\begin{nproposition}[CNF Encodings of AtMost$k$ constraints]
\label{prop:cardhorn}
The following CNF encodings of AtMost$k$ constraints are composed
solely of Horn clauses:
pairwise and bitwise encodings~\cite[Chapter~2]{sat-handbook09},
totalizers~\cite{bailleux-cp03}, sequential
counters~\cite{sinz-cp05}, sorting networks~\cite{een-jsat06},
cardinality networks~\cite{nieuwenhuis-sat09,nieuwenhuis-cj11},
pairwise cardinality networks~\cite{codish-lpar10}, and
modulo totalizers~\cite{koshimura-ictai13a}.
\end{nproposition}
\begin{proof}
Immediate by inspection of each
encoding~\cite{sat-handbook09,bailleux-cp03,sinz-cp05,een-jsat06,nieuwenhuis-sat09,codish-lpar10,nieuwenhuis-cj11,koshimura-ictai13a}.
\end{proof}
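The proposition can also be spot-checked mechanically. The sketch below generates the clauses of Sinz's sequential counter for $\sum x_i\le k$ (with our own, illustrative variable numbering) and verifies that every clause has at most one positive literal:

```python
# Sketch: generate the sequential-counter encoding (Sinz, 2005) of
# sum x_i <= k and check that every clause is Horn, i.e. contains
# at most one positive literal. Variable numbering is illustrative.
def seq_counter_atmost_k(n, k):
    x = lambda i: i                        # input variables 1..n
    s = lambda i, j: n + (i - 1) * k + j   # counter registers s_{i,j}
    clauses = [[-x(1), s(1, 1)]]
    clauses += [[-s(1, j)] for j in range(2, k + 1)]
    for i in range(2, n):
        clauses.append([-x(i), s(i, 1)])
        clauses.append([-s(i - 1, 1), s(i, 1)])
        for j in range(2, k + 1):
            clauses.append([-x(i), -s(i - 1, j - 1), s(i, j)])
            clauses.append([-s(i - 1, j), s(i, j)])
        clauses.append([-x(i), -s(i - 1, k)])
    clauses.append([-x(n), -s(n - 1, k)])
    return clauses

def is_horn(clause):
    return sum(1 for lit in clause if lit > 0) <= 1

assert all(is_horn(c) for c in seq_counter_atmost_k(n=5, k=2))
```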
For the case of the more general pseudo-Boolean constraints,
$\sum a_i x_i\le b$, with $a_i, b$ non-negative, there also exist Horn
encodings:
\begin{nproposition}[CNF Encodings of Pseudo-Boolean Constraints]\label{prop:pbhorn}
The Local Polynomial Watchdog~\cite{roussel-sat09} encoding for PB
constraints is
composed solely of Horn clauses.
\end{nproposition}
\begin{proof}
Immediate by inspection of the encoding in~\cite{roussel-sat09}.
\end{proof}
These observations have immediate impact on the range of problems that
can be solved with HornMaxSAT\xspace and HornWMaxSAT\xspace. One concrete example is the
Knapsack problem~\cite{cormen-bk09}.
\begin{ndefinition}[Knapsack problem] \label{def:knapsack}
Let $\{1,\ldots,n\}$ denote a set of $n$ objects, each with value
$v_i$ and weight $w_i$, $1\le i\le n$, and a maximum weight value
$W$. The knapsack problem is to pick a subset of objects of maximum
value that is consistent with the weight constraint. By letting
$x_i=1$ iff object $i$ is picked, we get the well-known 0-1 ILP
formulation $\text{max}\sum_{i}v_ix_i;\:\text{s.t.}\sum_{i}w_ix_i\le
W$.
\end{ndefinition}
\begin{nreduction}[$\text{Knapsack\xspace}\ensuremath \le_P\text{HornMaxSAT\xspace}$]
From~\autoref{prop:pbhorn}, there exist Horn encodings for
Pseudo-Boolean constraints.
Thus, the hard constraint $\sum_{i}w_ix_i\le W$ can be encoded
with Horn clauses.
%
The soft clauses are $(x_i)$ for each object $i$, each with cost
$v_i$.
%
Both the soft and the hard clauses in the reduction are Horn.
\end{nreduction}
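To see the objective structure of this reduction on a toy instance, the sketch below abstracts the hard Horn CNF encoding of $\sum_{i}w_ix_i\le W$ as a direct weight check and sums, by brute force, the weights $v_i$ of the satisfied soft clauses $(x_i)$. The function name and instance are ours, for illustration only.

```python
from itertools import product

# Illustration of the knapsack reduction's objective: weighted unit
# soft clauses (x_i) with cost v_i, subject to a hard weight
# constraint (here checked directly; in the reduction it is a Horn
# CNF encoding such as the Local Polynomial Watchdog).
def knapsack_by_weighted_maxsat(values, weights, W):
    best = 0
    for bits in product([0, 1], repeat=len(values)):
        if sum(w * b for w, b in zip(weights, bits)) <= W:  # hard part
            best = max(best, sum(v * b for v, b in zip(values, bits)))
    return best

# Items (v=60,w=10), (v=100,w=20), (v=120,w=30), W=50: optimum 220.
assert knapsack_by_weighted_maxsat([60, 100, 120], [10, 20, 30], 50) == 220
```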
\section{Experimental Results} \label{sec:res}
This section provides a preliminary investigation into exploiting
reductions to HornMaxSAT\xspace in practice.
All the experiments were run in Ubuntu Linux on an Intel Xeon~E5-2630
2.60GHz processor with 64GByte of memory. The time limit was set to
1800s and the memory limit to 10GByte for each process to run.
Two classes of problem instances were considered.
The first is a set of 46 PHP\xspace instances, generated by ranging
the number of holes from 10 up to 100.
The second set of benchmarks corresponds to 100 instances generated
according to the example in \autoref{fig:graph01}, with $k$ ranging
from 10 to 100 and $m$ ranging from $k$ to $20k$.
In the experiments six different MaxSAT\xspace solvers were considered.
Some solvers are core-guided~\cite{mhlpms-cj13} (namely, OpenWBO16,
WPM3, MSCG and Eva), whereas others are based on implicit hitting sets
(namely, MaxHS and LMHS)~\cite{mhlpms-cj13}.
Additionally, a variant of LMHS was considered for which the option
\texttt{-{}-no-equiv-seed} was set (LMHS-nes).
The results are summarized in the cactus plot shown
in~\autoref{fig:cactus}.
As can be observed, solvers based on implicit hitting sets (i.e.\ the
MaxHS\xspace family of MaxSAT\xspace algorithms), but also OpenWBO16, perform very
well on the instances considered\footnote{Any implementation of the
MaxHS-family of MaxSAT\xspace algorithms, by using a CDCL SAT solver,
implements a basic version of the algorithm proposed
in~\autoref{sec:algs}.
}.
The differences to the other solvers are solely due to the PHP
instances.
While propositional encodings of PHP are well-known to be extremely
hard for SAT solvers, the proposed MaxSAT\xspace encoding scales well for
MaxHS\xspace-like algorithms, but also for the core-guided MaxSAT\xspace solver
OpenWBO16.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.85\textwidth]{plots/cactus}
\caption{Cactus plot for selected solvers on PHP\xspace and MaxIS\xspace benchmarks.}
\label{fig:cactus}
\end{center}
\end{figure}
\begin{table}[t]
\begin{center}
\caption{Statistics on benchmarks generated according to the example in \autoref{fig:graph01}.} \label{tab:maxis-numbers}
\input{tables-data/maxis-numbers}
\end{center}
\end{table}
\paragraph{Analysis of the number of iterations.}
In order to validate the abstraction mechanism described in
\autoref{sec:abstract}, we considered the LMHS-nes variant,
and the benchmarks generated according to the example in
\autoref{fig:graph01}.
The reason to consider LMHS-nes is that the soft clauses are all unit
and cover the complete set of variables of the formula. If the option
is not set, then the complete CNF formula is replicated inside the MIP
solver (CPLEX), \emph{as a preprocessing step}, which results in
exactly one call to CPLEX~\cite{davies-sat13}.
\autoref{tab:maxis-numbers} presents the results obtained, where the
first and second rows show the $k$ and $m$ parameters of the instance.
The third row (UB) shows the upper bound on the number of iterations
presented in \autoref{sec:abstract}.
The fourth and fifth rows show the number of disjoint cores (\#DC) and
the number of iterations (\#I) reported by LMHS-nes.
As can be concluded from the table, the number of iterations is always
smaller than the upper bound, suggesting that the algorithm is able to
abstract clauses more effectively than in the worst case scenario.
The ability of HMaxHS\xspace algorithms to find good abstractions is expected
to represent a significant step toward deploying HornMaxSAT\xspace problem
solvers.
\section{Scratchpad}
\subsection{Introduction}
The maximum satisfiability (MaxSAT\xspace) problem for Horn formulas
(HornMaxSAT\xspace) is well-known to be NP-hard~\cite{jaumard-ipl87}, whereas
the decision problem for Horn formulas is well-known to be solved in
linear time~\cite{gallier-jlp84,minoux-ipl88}.
Algorithms for the general MaxSAT\xspace problem have seen remarkable
improvements in recent years~\cite{mhlpms-cj13}. These algorithms
propose different ways for iteratively finding and blocking
unsatisfiable cores (or subformulas).
One such family of algorithms~\cite{bacchus-cp11,jarvisalo-sat16},
which this paper refers to as MaxHS\xspace-like, consists of iteratively
computing a minimum hitting set $\fml{H}$ of a target formula
$\fml{T}$ (representing previously computed unsatisfiable cores) and
then checking the satisfiability of the original formula $\fml{F}$
without the computed minimum hitting set $\fml{H}$.
If the original formula $\fml{F}$ is Horn, then each satisfiability
checking step can be done in \emph{linear time}.
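The loop just described can be sketched compactly. In the brute-force sketch below (all names are ours), \texttt{is\_sat} and \texttt{min\_hitting\_set} stand in for what an actual implementation would use: a linear-time Horn-SAT oracle and a MIP hitting-set engine, respectively.

```python
from itertools import combinations, product

def is_sat(clauses, n_vars):
    # Brute-force SAT check (stand-in for a linear-time Horn-SAT oracle).
    for bits in product([0, 1], repeat=n_vars):
        val = {i + 1: b for i, b in enumerate(bits)}
        if all(any((l > 0) == bool(val[abs(l)]) for l in c) for c in clauses):
            return True
    return False

def min_hitting_set(cores, universe):
    # Smallest set of clause indices intersecting every core
    # (stand-in for the MIP hitting-set engine).
    for size in range(len(universe) + 1):
        for h in combinations(universe, size):
            if all(set(h) & set(core) for core in cores):
                return set(h)
    return set(universe)

def maxhs(clauses, n_vars):
    # MaxHS-like loop: hit all known cores, drop the hit clauses,
    # test satisfiability of the rest; on failure, record a new core.
    cores, universe = [], list(range(len(clauses)))
    while True:
        hs = min_hitting_set(cores, universe)
        rest = [i for i in universe if i not in hs]
        if is_sat([clauses[i] for i in rest], n_vars):
            return len(rest)  # number of clauses that can be satisfied
        cores.append(rest)    # residual formula as an (unminimized) core

# Soft clauses (x1), (not x1), (x2): at most two can be satisfied.
assert maxhs([[1], [-1], [2]], n_vars=2) == 2
```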
This paper investigates a new family of HornMaxSAT\xspace algorithms building on
MaxHS\xspace-like~\cite{bacchus-cp11,jarvisalo-sat16} approaches.
This paper also shows that a wide range of optimization problems have
natural formulations as HornMaxSAT\xspace, including different optimization
problems over graphs or over sets. Furthermore, the paper remarks that
a large number of propositional encodings, some of which in practical
use for more than a decade, corresponds to Horn formulas. As a result,
a large number of practical applications where SAT and MaxSAT\xspace have
found success can be shown to be solving HornMaxSAT\xspace optimization
problems.
|
1603.01042
|
\section{Introduction}\label{sec:intro}
The majority of classical Cepheids are single-periodic, radial pulsators. More complex pulsation is not rare, however. Double-mode Cepheids pulsating simultaneously in the radial fundamental and in the radial first overtone modes (F+1O), or in the two lowest-order radial overtones (1O+2O), have been known for years. These forms of pulsation are recognized based on characteristic period ratios of the excited pulsation modes, $P_{\rm 2O}/P_{\rm 1O}\approx 0.80-0.81$ for 1O+2O pulsators and $P_{\rm 1O}/P_{\rm F}\approx 0.715-0.74$ for F+1O pulsators \citep[e.g.][]{ogle_cep_lmc,ogle_cep_smc}. It is well known that the period ratio depends on metallicity; characteristic values may slightly differ for stars from different stellar systems. The Optical Gravitational Lensing Experiment \citep[OGLE,][]{ogleIII,ogleIV} observations led to the discovery of other forms of multi-periodic pulsation among Cepheids. Triple-mode radial pulsation, F+1O+2O and 1O+2O+3O, was identified \citep{pamtri,ogle_freaks,ogle_cep_smc,ogle_cep_blg,ogleIV_cep_multi}. A very interesting triple-mode Cepheid, with 1O, 2O and an additional longer-period mode, was discovered with {\it CoRoT} \citep{pbw}. Rare and peculiar double-mode pulsations were also discovered, including 1O+3O pulsation with 2O apparently not excited \citep{ogle_freaks,ogleIV_cep_multi} and the first double-mode 2O+3O Cepheid \citep{ogleIV_cep_multi}. For a recent review see \cite{pam14}.
The analysis of 1O Cepheids revealed another, most interesting group of double-periodic pulsators. In more than a hundred 1O Cepheids additional small-amplitude variability, with a period shorter than the first overtone period, was detected. The period ratios fall in the $P/P_{\rm 1O}\in(0.6,\,0.65)$ range and cannot correspond to two radial modes \citep{wdrs}. 35 stars were reported in the Large Magellanic Cloud \citep[LMC,][]{mk09,ogle_cep_lmc}, 138 stars in the Small Magellanic Cloud \citep[SMC,][]{ogle_cep_smc} and 1 star was found in the Galactic disc \citep{pietruk}. One LMC star with additional variability pulsates simultaneously in the radial fundamental and first overtone modes \citep{mk09}. Two stars were also identified in the {\it Kepler}-{\it K2} photometry (Plachy et al., in prep.). In the Petersen diagram, i.e.\ in the plot of the shorter-to-longer period ratio versus the longer period, these stars group into three well-separated sequences.
Interestingly, a very similar form of pulsation is present in RR~Lyrae stars; see the most in-depth and extensive studies of the phenomenon by \cite{pamsm15}, \cite{netzel1,netzel3} and \cite{jurcsik_M3}. The additional variability is detected in first overtone pulsators (RRc) or in double-mode F+1O pulsators (RRd). Period ratios fall in a similar range, $P/P_{\rm 1O}\in(0.60,\,0.64)$. Three sequences are present in the Petersen diagram as well, although they are not as well separated as in the case of Cepheids \citep{netzel3}. Thanks to ultra-precise observations by space telescopes, {\it Kepler} and {\it CoRoT} \citep[e.g.][]{szabo_corot,pamsm15,molnar,kurtz}, and detailed analysis of ground-based observations \citep{netzel1,netzel3,jurcsik_M3}, this form of pulsation is well studied in RR~Lyr stars. In particular, we know that in the frequency spectra of these stars a signal (power excess) at the subharmonic of the additional frequency is present. Signals associated with the additional variability are broad and non-coherent. In the time domain this corresponds to strong variability of amplitude and frequency on a time-scale of a few tens to a hundred days. The phenomenon must be common among RRc/RRd stars, as 14 out of 15 stars observed from space show this form of pulsation \citep[for a summary see][]{pamsm15}.
In contrast to RR~Lyr stars, 1O Cepheids have not been extensively observed from space (see Sect.~\ref{ssec:rrlcomp}). An analysis of the ground-based data, in particular of the largest sample of 138 of these interesting stars from the SMC, is missing. \cite{ogle_cep_smc} only reported the discovery of these stars and provided their periods and period ratios. In the present study we analyse the OGLE-III data for these interesting objects in detail. We do not search for additional objects, but focus on those in which we know that additional variability is present. Our goal is to study the properties of the variability in detail. In particular, we check for the presence of subharmonics of the additional signal, and analyse the amplitude distribution and time-variation of the additional signals. This information is necessary for models and theories aiming to explain this peculiar and puzzling form of pulsation.
\section{Data analysis}\label{sec:methods}
We analyse the OGLE-III $I$-band photometry for 138 stars listed in \cite{ogle_cep_smc}. All these stars were identified as 1O Cepheids with additional small-amplitude variability, with period ratios in the $P/P_{\rm 1O}\in(0.6,\,0.65)$ range. We use the standard consecutive pre-whitening technique. We identify the dominant frequencies with the help of the discrete Fourier transform (FT). Next, we fit the data with a sine series of the following form:
\begin{align}
m(t)=m_0+\sum_k A_k\sin(2\pi\nu_kt+\phi_k)\,,\label{eq:ssum}
\end{align}
using non-linear least-squares fitting. The FT of the residual data is inspected for the presence of additional signals, which are iteratively included in eq.~\eqref{eq:ssum}. Only resolved frequencies are included. We consider two peaks as well resolved if their separation is larger than $2/\Delta T$, where $\Delta T$ is the data length. In the FT a signal is considered significant if its signal-to-noise ratio ($S/N$) exceeds 4. The criterion is relaxed for signals at combination frequencies, provided that the peak is present exactly (within the frequency resolution) at the expected position (we accept $S/N>3.5$). Typically our solution consists of a low-order ($3-6$) Fourier series describing the dominant variability associated with the first overtone ($k\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi$), a sine term with the frequency of the additional variability of interest ($\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$) and possibly a combination with the first overtone frequency (typically $\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi+\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$). Additional significant signals that do not fall in the $P/P_{\rm 1O}\in(0.6,\,0.65)$ range are also included in eq.~\eqref{eq:ssum}.
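The fitting step has a convenient structure: at fixed trial frequencies, eq.~\eqref{eq:ssum} is linear in $a_k=A_k\cos\phi_k$ and $b_k=A_k\sin\phi_k$, so amplitudes and phases follow from a linear least-squares fit, after which the frequencies themselves are refined non-linearly. A minimal sketch on synthetic, unevenly sampled data (the sampling, frequency and amplitude values are invented for the example):

```python
import numpy as np

# Sketch of fitting the sine series of eq. (1) at fixed trial
# frequencies: A_k sin(2 pi nu_k t + phi_k) expands into a_k sin + b_k cos,
# so the fit is linear; amplitudes are recovered as hypot(a_k, b_k).
def fit_sine_series(t, m, freqs):
    cols = [np.ones_like(t)]
    for nu in freqs:
        cols += [np.sin(2 * np.pi * nu * t), np.cos(2 * np.pi * nu * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), m, rcond=None)
    m0, rest = coef[0], coef[1:]
    amps = [np.hypot(rest[2 * i], rest[2 * i + 1]) for i in range(len(freqs))]
    return m0, amps

rng = np.random.default_rng(0)
t = rng.uniform(0, 100, 500)  # unevenly sampled epochs (days)
m = 15.0 + 0.14 * np.sin(2 * np.pi * 0.57 * t + 1.0) \
    + 0.004 * rng.normal(size=500)
m0, amps = fit_sine_series(t, m, [0.57])
assert abs(m0 - 15.0) < 0.01 and abs(amps[0] - 0.14) < 0.01
```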
During the analysis we reject outliers ($4\sigma$ criterion) and remove slow trends by subtracting low-order polynomials or splines, fitted to the residuals, from the original data. Quite often, after prewhitening with the first overtone frequency and its harmonics, residual signal remains at a location unresolved from $k\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi$. Typically it corresponds to the long-term variation of the first overtone phase (period change). This signal may be significant, which increases the noise level in the FT and consequently may hide the additional variability. In such cases we get rid of the non-stationary first overtone variation with the help of the time-dependent prewhitening technique, described and applied to the {\it Kepler} data by \cite{pamsm15}. Its application to the ground-based OGLE data is described in more detail in \cite{netzel1}.
Strong daily aliases and 1-yr aliases are inherent to ground-based OGLE observations of the SMC. As the signals we search for are relatively weak, alias-related ambiguities can occur. In some stars, after prewhitening with the first overtone frequency, we detect a few significant peaks of similar height which are mutual daily aliases. If the period corresponding to one of them falls in the $P/P_{\rm 1O}\in(0.6,\,0.65)$ range, then this peak is accepted as the true signal, even if it is not the highest peak. All such cases are reported explicitly in the study.
\section{Results}\label{sec:results}
\subsection{Overview}\label{ssec:overview}
\begin{table*}
\caption{Properties of 1O Cepheids with additional variability. The consecutive columns contain: star's id, first overtone period, $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi$, period of the additional variability, $\ifmmode P_{\rm x}\else$P_{\rm x}$\fi$, period ratio, $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi$, amplitude of the first overtone, $A_{\rm 1O}$, and amplitude ratio, $A_{\rm x}/A_{\rm 1O}$, and remarks: `al' -- daily alias of signal at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is higher; `nsx' -- complex appearance of the signal at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$; `nsO' -- non-stationary first overtone; `cf' -- combination frequency of $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and $\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi$ detected; `sh' -- power excess at subharmonic frequency (centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$) detected; `ap' -- additional periodicity detected; `tdp' -- time-dependent analysis was conducted; `?' -- weak detection ($S/N$ given in the parenthesis). Full Table is in the Appendix~\ref{app:table} (Tab.~\ref{tab:atab}).}
\label{tab:tab}
\begin{tabular}{lrrrrrr}
star & $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi$\thinspace (d) & $\ifmmode P_{\rm x}\else$P_{\rm x}$\fi$\thinspace (d) & $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi$ & $A_{\rm 1O}$\thinspace (mag) & $A_{\rm x}/A_{\rm 1O}$ & remarks \\
\hline
OGLE-SMC-CEP-0056 & 0.9860208(7) & 0.60373(1) & 0.6123 & 0.1689 & 0.024 & ? ($S/N=3.77$) \\
OGLE-SMC-CEP-0212 & 1.741010(4) & 1.08766(4) & 0.6247 & 0.0997 & 0.036 & sh, nsx \\
OGLE-SMC-CEP-0251 & 1.796802(1) & 1.12279(2) & 0.6249 & 0.1399 & 0.029 & sh, nsx \\
OGLE-SMC-CEP-0280 & 1.675191(1) & 1.04344(2) & 0.6229 & 0.1377 & 0.026 & nsO, ap \\
OGLE-SMC-CEP-0281 & 1.2662457(7) & 0.774075(9)& 0.6113 & 0.1263 & 0.033 & al, nsx \\
OGLE-SMC-CEP-0307 & 0.9734743(7) & 0.59718(1) & 0.6134 & 0.1922 & 0.019 & nsO, ap \\
OGLE-SMC-CEP-0447 & 1.2651448(8) & 0.77624(1) & 0.6136 & 0.1300 & 0.024 & nsO, nsx \\
\ldots & & & & & & \\
\hline
\end{tabular}
\end{table*}
Results of our analysis are collected in Tab.~\ref{tab:atab} in the Appendix. For reference, a section of the Table is presented in Tab.~\ref{tab:tab}. The consecutive columns contain: star's id, first overtone period, $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi$, period of the additional variability in the $P/\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\in(0.6,\,0.65)$ range, $\ifmmode P_{\rm x}\else$P_{\rm x}$\fi$, period ratio, $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi$, amplitude of the first overtone mode, amplitude ratio, $A_{\rm x}/A_{\rm 1O}$, and remarks. The resulting Petersen diagram is plotted in Fig.~\ref{fig:pet}.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{petersen.eps}}
\caption{Petersen diagram for 138 analysed Cepheids. Stars in which two periodicities were detected, corresponding to two sequences in the diagram, are marked with squares (two squares per star). Stars with weak detection of the additional periodicity are marked with open triangles. Filled symbols correspond to stars in which power excess centred at subharmonic frequency, $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi/2$, was detected (half-filled symbols are used to indicate weak detections).}
\label{fig:pet}
\end{figure}
In the frequency spectrum, the additional variability rarely appears as a single and coherent peak. Typically more complex structures are present; examples are illustrated in Fig.~\ref{fig:ilu}. Sometimes two dominant close peaks are detected, as illustrated in the top two panels of Fig.~\ref{fig:ilu}. In other cases the signal appears as a complex cluster of peaks, as illustrated in the two lower panels of Fig.~\ref{fig:ilu}. In our analysis we pick the highest peak in the cluster, or in a group of close peaks (marked with filled diamonds in Fig.~\ref{fig:ilu}), and include its frequency in eq.~\eqref{eq:ssum}. Its properties, period and amplitude, $\ifmmode P_{\rm x}\else$P_{\rm x}$\fi$ and $A_{\rm x}$, are then given in Tab.~\ref{tab:tab}. After prewhitening, residual, unresolved power is often detected. Such appearance of the additional variability indicates that it is strongly non-stationary, with variable phase and/or amplitude (see Section~\ref{ssec:tv}). All stars in which more complex structures are detected at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ (two close peaks, clusters of peaks, residual power after prewhitening) are marked with `nsx' in the remarks column of Tab.~\ref{tab:tab}. The different appearance of the signal may result from different time-scales of the variability and from the different structure (length) of the available data. There are 80 stars marked with `nsx', which is 58\thinspace per cent of the analysed sample; the complex structures at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ are thus common.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{ilustr.eps}}
\caption{Illustration of complex structures detected at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. In the two upper panels, two well separated peaks of similar height are present, while in the two lower panels, clusters of peaks are present. The extent of the horizontal bar corresponds to the separation expected for 1-yr alias. Filled diamonds mark the location of peaks included in Tab.~\ref{tab:tab}. In the two upper panels, open diamonds indicate the location of peaks adopted by Soszy\'nski et al. (2010) (see Appendix, Sect.~\ref{ssec:igor}).}
\label{fig:ilu}
\end{figure}
The first overtone is often non-stationary as well, which manifests as strong unresolved power at its frequency after the prewhitening. These stars are marked with `nsO' in Tab.~\ref{tab:tab}. There are 56 such stars, which constitutes nearly 41\thinspace per cent of the analysed sample. In some cases the unresolved power at $\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi$ dominates the frequency spectrum and significantly increases the noise level in the Fourier transform, which may hide additional significant peaks. In all such cases we conducted time-dependent prewhitening to remove the unwanted signal. If time-dependent prewhitening was crucial for the detection of the additional variability of interest, or significantly improved the $S/N$ of the interesting peak at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, then `tdp' is included in the remarks column of Tab.~\ref{tab:tab}. In these cases the frequency and amplitude of the additional variability are determined from the data with the first overtone filtered out, independently of the determination of the amplitude and frequency of the first overtone. An inherent part of the time-dependent prewhitening is time-dependent Fourier analysis, which shows how the amplitude and phase of the first overtone change in time. In the majority of cases we observed a pronounced phase change, while amplitude changes were insignificant. No firm case of Blazhko-like modulation was found, although in some cases the variation of the first overtone phase seemed periodic.
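The season-by-season prewhitening described above can be sketched as follows: within each observing season a Fourier series at the first overtone frequency is fitted by least squares and subtracted, so that a signal with slowly varying amplitude and phase is removed while other periodicities survive. A minimal sketch in Python (function and variable names are illustrative, not the actual implementation; the constant term per season also removes trends, as noted in Sect.~\ref{ssec:selection}):

```python
import numpy as np

def time_dependent_prewhitening(t, mag, nu, seasons, order=6):
    """Fit and subtract a Fourier series of frequency nu separately in
    each observing season (illustrative sketch, not the paper's code).
    A constant term fitted per season also removes slow trends."""
    t = np.asarray(t, dtype=float)
    resid = np.asarray(mag, dtype=float).copy()
    for s in np.unique(seasons):
        m = seasons == s
        # Design matrix: constant plus sine/cosine pairs at k * nu.
        cols = [np.ones(m.sum())]
        for k in range(1, order + 1):
            arg = 2.0 * np.pi * k * nu * t[m]
            cols.extend([np.sin(arg), np.cos(arg)])
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, resid[m], rcond=None)
        resid[m] -= A @ coef
    return resid
```

Applied to a sinusoid whose amplitude drifts from season to season, this removes the signal almost completely, whereas a single global fit would leave the unresolved residual power discussed above.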
Fast period changes, on time-scales shorter than expected from evolutionary calculations, are common in 1O Cepheids. Example O--C diagrams may be found, e.g., in the studies by \cite{berd1} or \cite{berd2}. A detailed study of OGLE and MACHO data for LMC Cepheids was conducted by \cite{poleskiPC}, who detected period changes in 41 per cent of 1O LMC Cepheids and in 18 per cent of fundamental mode pulsators. The analysis of period changes of the first overtone is beyond the scope of the present analysis, however; a dedicated study is planned.
In 25 stars we find peaks at the combination frequency, $\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi+\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, and in one star (OGLE-SMC-CEP-0797) we find a peak at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi-\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi$. These stars are marked with `cf' in the last column of Tab.~\ref{tab:tab}.
Stars in which the signal at a daily alias of $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is higher are marked with `al'. As described in the previous section, we select the lower alias as the true signal if it falls well within one of the three sequences in the Petersen diagram.
In 17 stars, marked with `ap' in Tab.~\ref{tab:tab}, we detect an additional significant periodicity that does not fall into the $P/\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\in(0.60,\,0.65)$ range and cannot be interpreted as due to another radial mode. In a few stars the additional peaks appear relatively close to the radial first overtone frequency. We note that similar detections were reported by \cite{mk09}, who argue that these signals may be intrinsic to the stars and correspond to non-radial pulsation. In no case, however, do we detect a combination frequency with the first overtone. In principle, these periodicities may result from blending. We postpone the discussion of these additional signals until the analysis of the full sample of SMC Cepheids, which will allow us to draw statistically meaningful conclusions concerning their nature.
In stars plotted with triangles in Fig.~\ref{fig:pet} the detection of additional variability is weak. These stars are marked with `?' and the $S/N$ value is given in the last column of Tab.~\ref{tab:tab}. In some cases the additional signal appeared only after the time-dependent prewhitening. There are eight such cases; we discuss them in more detail in the Appendix, in Sect.~\ref{ssec:igor}, which also contains a detailed comparison of our results with those reported in \cite{ogle_cep_smc}. The period ratios for these stars fall well within the three sequences in the Petersen diagram. Despite the weak detection, we consider these stars as double-periodic in the following.
In six stars, two well separated and significant peaks were detected in the frequency range of interest; the corresponding period ratios fall within two separate sequences in Fig.~\ref{fig:pet}. In the figure these stars are marked with squares, two for each of the six stars. Their frequency spectra are plotted in Fig.~\ref{fig:2seq}. Typically, the signal corresponding to one of the sequences is dominant, while the detection of the peak corresponding to the other sequence is rather weak (but always with $S/N>4.0$). In Tab.~\ref{tab:tab}, two rows are present for these stars, each giving the characteristics of the highest peak falling within one of the two sequences. All other stars, in which we detect a significant peak corresponding to only one sequence, are marked with circles in Fig.~\ref{fig:pet}.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{2seq.eps}}
\caption{Frequency spectra for six stars in which two significant peaks, corresponding to two sequences in the Petersen diagram, were detected. These peaks are marked with filled diamonds placed at the $S/N=4.0$ level. The extent of the horizontal bar, plotted in each panel, corresponds to separation expected for 1-yr aliases. Period-ratio scale, $P/\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi$, is plotted at the top of each panel, with numerical labels plotted in the top-most panel.}
\label{fig:2seq}
\end{figure}
Finally, in many stars we detect significant power centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, i.e. at the subharmonic frequency. These stars are marked with `sh' in the remarks column of Tab.~\ref{tab:tab}, printed in {\it italics} if the detection is weak. In the Petersen diagram in Fig.~\ref{fig:pet}, stars with a firm detection of the power excess at the subharmonic frequency are marked with filled symbols; in the case of a weak detection, a half-filled symbol is plotted. These signals are discussed in detail in Section~\ref{ssec:sh}.
\subsection{The Petersen diagram and amplitude distributions}\label{ssec:amps}
Three well separated and slanted sequences are present in the Petersen diagram (Fig.~\ref{fig:pet}): 64 stars fall within the bottom sequence, 54 within the middle sequence and 26 within the top sequence. The numbers do not add up to 138, as six stars fall within two sequences simultaneously. Within each sequence, the period ratio drops with increasing pulsation period. Stars forming the bottom sequence have, on average, shorter pulsation periods, while stars forming the top sequence have, on average, longer pulsation periods.
The number of stars in the top sequence is significantly smaller than in the middle and bottom sequences. On the other hand, long-period first overtone Cepheids are not as numerous as short-period overtone pulsators. In Fig.~\ref{fig:histoX}, we study the period distribution for all 1644 first overtone SMC Cepheids from the OGLE-III collection (solid black line) and for stars with additional variability (hatched area; three different patterns show the contributions from the three sequences). Stars were counted in 0.5\thinspace d-wide bins, except for $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi<1$\thinspace d, where we used smaller, 0.25\thinspace d-wide bins, because of the sharp increase in the number of Cepheids as one moves from the $0.25$--$0.5$\thinspace d bin through the $0.5$--$0.75$\thinspace d bin to the $0.75$--$1.0$\thinspace d bin. Within each bin the incidence rate of stars with additional variability is given, with statistical errors calculated assuming that the population follows a Poisson distribution \citep[e.g.][]{alcock}.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{histo_allX.eps}}
\caption{Period distribution for all 1O SMC OGLE-III Cepheids (solid black line) and for stars with the additional variability (hatched area). Contributions from the three sequences in the Petersen diagram are marked with different patterns, as indicated in the key. Incidence rates are also provided.}
\label{fig:histoX}
\end{figure}
It is well visible that the discussed form of pulsation is not present in the shortest period 1O Cepheids with $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!0.75{\rm d}$, despite 151 stars falling into this period range. For $0.75{\rm d}\!<\!\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!1.0{\rm d}$ there are 342 1O Cepheids and only in five of them was the additional variability found. The incidence rate is very low ($1.5\pm0.7$\thinspace per cent) compared to the next longer-period bin ($10.8\pm1.4$\thinspace per cent). This is most likely a selection effect. The shortest-period overtone Cepheids are the least luminous (because of the $P-L$ relation, see Fig.~\ref{fig:cwa_basics}); consequently, one may expect a higher noise level in the Fourier transform, which may hinder the detection of low-amplitude variability. This is discussed in more detail in Sect.~\ref{ssec:selection}. For $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!>\!1{\rm d}$ the discussed form of pulsation is quite frequent. For $1{\rm d}\!\leq\!\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!2{\rm d}$ the incidence rate is $\approx\!10.5$\thinspace per cent, for $2{\rm d}\!\leq\!\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!3{\rm d}$ it is $\approx\!15.5$\thinspace per cent and for $3{\rm d}\!\leq\!\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!4{\rm d}$ it is $\approx\!8$\thinspace per cent. Taking into account the statistical errors, these numbers are not very different. We conclude that the top sequence is the least numerous of the three mostly because there are fewer long-period 1O Cepheids than short-period ones, and also because the incidence rate may be slightly lower at longer periods.
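The quoted incidence rates follow from simple Poisson counting: with $n$ detections among $N$ stars in a bin, the rate is $n/N$ and its error $\sqrt{n}/N$. A quick check against the numbers above (5 detections among 342 Cepheids in the $0.75$--$1.0$\thinspace d bin), assuming this is the error formula used:

```python
import math

def incidence_rate(n_detected, n_total):
    """Incidence rate and its Poisson error: n/N +/- sqrt(n)/N
    (illustrative sketch of the counting statistics)."""
    return n_detected / n_total, math.sqrt(n_detected) / n_total

rate, err = incidence_rate(5, 342)  # 0.75 d < P_1O < 1.0 d bin
print(f"{100 * rate:.1f} +/- {100 * err:.1f} per cent")  # -> 1.5 +/- 0.7 per cent
```

This reproduces the $1.5\pm0.7$\thinspace per cent quoted in the text.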
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{amps.eps}}
\caption{First overtone amplitude as a function of $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi$ (top panel) and $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi$ (middle panel). In the bottom panel we show $A_{\rm x}/A_{\rm 1O}$ as a function of $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi$. Stars in which power excess at subharmonic, $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, was detected are plotted with filled symbols.}
\label{fig:amps}
\end{figure}
The additional variability is always weak compared to the radial first overtone. Tab.~\ref{tab:tab} provides the amplitude of the first overtone, $A_{\rm 1O}$, and the amplitude ratio, $A_{\rm x}/A_{\rm 1O}$. The top panel of Fig.~\ref{fig:amps} shows the amplitude of the first overtone as a function of the first overtone period. The amplitude drops with increasing pulsation period. The highest (Fourier) amplitude is slightly below $0.2$\thinspace mag, the lowest is around $0.06$\thinspace mag; the most typical values fall in the $0.10$--$0.16$\thinspace mag range. In the middle panel of Fig.~\ref{fig:amps} we plot the amplitude of the first overtone as a function of the period ratio, $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi$. The trace of the three sequences present in the Petersen diagram is well visible. It is clear that amplitudes are the highest in stars forming the bottom sequence (as these stars have shorter first overtone periods, on average) and the lowest in stars forming the top sequence (stars with longer first overtone periods). The bottom panel of Fig.~\ref{fig:amps} shows the amplitude ratio, $A_{\rm x}/A_{\rm 1O}$, as a function of the period ratio. There is no significant difference between the stars corresponding to the three sequences. In stars plotted with filled symbols in Fig.~\ref{fig:amps}, a power excess at the subharmonic, $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, is detected. This is discussed in more detail in Sect.~\ref{ssec:sh}. Here we just note that, for stars of the middle and top sequences (only there was the power excess at the subharmonic detected), there is no difference in first overtone amplitude between stars with and stars without the power excess at the subharmonic. The histogram of amplitude ratios, $A_{\rm x}/A_{\rm 1O}$, for all the stars is presented in Fig.~\ref{fig:hia}. The distribution peaks at $A_{\rm x}/A_{\rm 1O}\approx 1.75$--$2.25$\thinspace per cent.
Typical amplitudes of the additional variability are in the 2--5\thinspace mmag range (see also Fig.~\ref{fig:sh_amps}). This explains why the additional variability was discovered only recently: high-quality observations are necessary to detect such low-amplitude variability. So far, the signal has been detected mostly in the excellent OGLE data. Additional variability was also detected in two 1O Cepheids observed with {\it K2} (Plachy et al., in prep.).
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{histo_arat.eps}}
\caption{Histogram of $A_{\rm x}/A_{\rm 1O}$ values.}
\label{fig:hia}
\end{figure}
\subsection{Comparison with other first overtone Cepheids without additional variability}\label{ssec:cwa}
In this Section, we check whether Cepheids with additional variability differ significantly from other, single-periodic first overtone SMC Cepheids. In Fig.~\ref{fig:cwa_basics}, we plot the location of the analysed stars in the period-luminosity (top panel) and in the colour-magnitude (bottom panel) diagrams. For the former diagram we use the reddening-free Wesenheit index as the luminosity indicator. In the plots, all 1O Cepheids are plotted with small black dots, while stars with additional variability are plotted with larger symbols; point shape and colour code the star's location in the Petersen diagram (red circles -- bottom sequence, green diamonds -- middle sequence, blue squares -- top sequence). Stars in which variability corresponding to two sequences was detected are plotted only once, as members of the sequence for which the corresponding amplitude is higher. For this plot, periods and intensity-mean $I$- and $V$-band magnitudes were taken directly from the OGLE-III ftp archive \citep{ogle_cep_smc}.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{cwa_dm.eps}}
\caption{Period-luminosity (top panel) and colour-magnitude (bottom panel) diagrams for all 1O Cepheids from the SMC OGLE-III collection (small dots) and for Cepheids with additional variability analysed in this paper (larger symbols; red circles correspond to the bottom sequence in the Petersen diagram, green diamonds to the middle sequence and blue squares to the top sequence). In addition, double-mode, F+1O Cepheids are plotted with small squares.}
\label{fig:cwa_basics}
\end{figure}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{cwa_fp.eps}}
\caption{Fourier decomposition parameters versus the first overtone period for all 1O Cepheids from the SMC OGLE-III collection (small dots) and for Cepheids with additional variability analysed in this paper (larger symbols, as in Fig.~\ref{fig:cwa_basics}).}
\label{fig:cwa_fp}
\end{figure}
Except for the lack of additional variability in the shortest period stars ($\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi<0.8$\thinspace d), which we already noted in the previous Section, we see no significant differences in the distribution of stars with and without the additional variability in the two diagrams presented in Fig.~\ref{fig:cwa_basics}. They cover similar colour and Wesenheit index ranges. The separation of stars belonging to the three sequences in the Petersen diagram is clear and pronounced. As the period-luminosity diagram indicates, it results from the different first overtone periods characteristic of the three groups. Members of the bottom sequence have shorter periods and consequently are the least luminous, while members of the top sequence have longer periods and are thus the most luminous. For stars of each sequence the covered colour range is similar to that of single-periodic stars of similar luminosity. In Fig.~\ref{fig:cwa_basics}, we also plot double-mode F+1O Cepheids (small squares). These stars cover a similar luminosity range as stars of the bottom sequence, but are shifted, on average, towards higher colour values.
In Fig.~\ref{fig:cwa_fp}, we compare the light curve shapes with the help of the Fourier decomposition parameters \citep{sl81}. Symbols used in the panels are the same as in Fig.~\ref{fig:cwa_basics}. In the consecutive panels, from top to bottom, we plot the peak-to-peak amplitude, $R_{21}$, $R_{31}$, $\varphi_{21}$ and $\varphi_{31}$, all as a function of the first overtone period. The lack of short-period Cepheids with additional variability is again apparent. Also, additional variability is not detected in stars with low first overtone amplitude, $A\lesssim 0.2$\thinspace mag (and consequently in stars with lower $R_{21}$ and $R_{31}$), which is a selection effect. First, these stars are not as numerous as higher amplitude Cepheids. Second, with the typical amplitude of the additional variability corresponding to $\sim\!2$--$4$\thinspace per cent of the first overtone amplitude, the possible signals are also of lower amplitude and likely hidden in the noise. Otherwise, Cepheids with additional variability seem to follow the same progressions as Cepheids without it. The only exception seems to be the behaviour of $\varphi_{21}$ in the narrow period range of $1.4$--$1.6$\thinspace d. In this period bin, 1O Cepheids cover the $4\lesssim\varphi_{21}\lesssim 5$ range, but stars with additional variability belonging to the bottom sequence (red circles) prefer the low values, $\varphi_{21}\lesssim 4.3$.
We conclude that there is no significant difference between 1O Cepheids with and without the additional variability with regard to their location in the period-luminosity and colour-magnitude diagrams and their light curve shapes (with the possible exception of $\varphi_{21}$ in a relatively narrow period range).
\subsection{Subharmonics}\label{ssec:sh}
In many stars we detect a significant signal centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, i.e. at the subharmonic of $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. Typically this signal has a complex form: a broad cluster of peaks centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is detected. Stars in which such a power excess was detected are marked with `sh' in Tab.~\ref{tab:tab}; a weak detection is marked with `{\it sh}'. What we regard as `weak' is somewhat subjective. In general, if $3.5\!<\!S/N\!<\!4.0$ for the highest peak at around $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, or the power excess became evident only after the time-dependent prewhitening, the star is marked as a weak detection. In all cases, however, the power excess was clear. Altogether, it was detected in 48 stars, of which 14 are marked as weak detections. This constitutes 35\thinspace per cent of the analysed sample or, if weak detections are excluded, 25\thinspace per cent. A detailed characterization of the frequency spectra of stars with a power excess detected at the subharmonic frequency is collected in Tab.~\ref{tab:sh}. The power excess is characterized by the frequency and amplitude of the highest peak detected at around $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi$ and $A_{\rm sh}$, respectively.
The table contains: the star's id, the period ratio, $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi$, the frequency of the additional variability, $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, the frequency of the highest peak detected around $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi$, the amplitude of the additional variability, $A_{\rm x}$, the amplitude ratio, $A_{\rm sh}/A_{\rm x}$, the approximate $S/N$ for the peak at $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi$, and remarks: `weak' -- weak detection; `broad' -- broad power excess; `tdp' -- time-dependent prewhitening of all signals except $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi$ conducted.
\begin{figure*}
\centering
\noindent\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-0251_new_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-1119_new_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-3343_new_f.eps}}\\
\noindent\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-4205_new_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-4262_new_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-4388_new_f.eps}}
\caption{Frequency spectra for selected stars with additional variability present at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and simultaneously with significant power excess centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. In the top panel a section of frequency spectrum centred at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is plotted. In the bottom panel a section of frequency spectrum centred at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi/2$ is plotted. The frequency range is the same in two panels. The extent of the horizontal bar plotted in each panel (at $S/N=4.0$) corresponds to separation expected for 1-yr aliases.}
\label{fig:sh_nice}
\end{figure*}
\begin{figure*}
\centering
\noindent\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-1856_new_tdp_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-3944_new_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-4255_new_f.eps}}
\caption{The same as Fig.~\ref{fig:sh_nice}, but for stars with broad power excess at subharmonic frequency.}
\label{fig:sh_broad}
\end{figure*}
\begin{figure*}
\centering
\noindent\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-0708_new_tdp_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-2627_new_tdp_f.eps}}
\resizebox{0.33\hsize}{!}{\includegraphics{OGLE-SMC-CEP-3239_new_f.eps}}
\caption{The same as Fig.~\ref{fig:sh_nice}, but for stars with weak detection of power excess at subharmonic frequency.}
\label{fig:sh_weak}
\end{figure*}
\input{table_sh_r1.tex}
No firm detection of a power excess at other subharmonic frequencies, i.e. at $3/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, $5/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, etc., was made. There are a few ambiguous cases in which a power excess is present at $3/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, but in some of them it is an alias of the power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ or of the unresolved residual power at the first overtone frequency. Time-dependent prewhitening does not help here: by removing, e.g., the non-stationary variation at $\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi$, we also remove the power at its daily aliases.
Before we discuss the properties of the signals detected at subharmonic frequencies, in Figs.~\ref{fig:sh_nice}, \ref{fig:sh_broad} and \ref{fig:sh_weak} we show some examples of the structures detected in the frequency spectra at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. In Fig.~\ref{fig:sh_nice}, we show cases in which the signal at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is firmly detected and is relatively narrow. In Fig.~\ref{fig:sh_broad}, we show some of the cases in which the signal at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is broad. Finally, in Fig.~\ref{fig:sh_weak}, we show some cases in which the detection of the power excess is weak. The structure of these three figures is the same. For each star two panels are plotted: in the top panel the frequency spectrum centred at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is plotted, while in the bottom panel the frequency spectrum centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is plotted. The plotted frequency range is the same in the two panels; it is wider in Figs.~\ref{fig:sh_broad} and \ref{fig:sh_weak} for better visualization of the signal at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$.
Based on the content of Tab.~\ref{tab:sh} and on Figs.~\ref{fig:sh_nice}, \ref{fig:sh_broad} and \ref{fig:sh_weak}, we now discuss the properties of the signal detected at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. We first note that the detected power excess is indeed well centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. The mean value of $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi/\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ for all the stars is $0.5003\pm0.0010$; values of $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi/\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi<0.5$ are as common as $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi/\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi>0.5$. The values of $|\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi/\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi-0.5|$ (see Tab.~\ref{tab:sh}) never exceed $0.02$. The largest deviations are present for stars in which a broad power excess at the subharmonic is observed, like those plotted in Fig.~\ref{fig:sh_broad} (OGLE-SMC-CEP-1856, -4255). Still, there is no doubt that the power excess is centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$; only the highest peak within the power excess is slightly offset.
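The centring statistic is elementary: the mean of $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi/\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ over the sample together with its scatter. A minimal sketch with synthetic ratios (the values below are illustrative, not the tabulated ones; whether the quoted $\pm0.0010$ is a dispersion or a standard error of the mean is not stated, so both are returned):

```python
import numpy as np

def subharmonic_centring(nu_sh, nu_x):
    """Mean of nu_sh/nu_x with its sample standard deviation and
    standard error of the mean (illustrative sketch)."""
    r = np.asarray(nu_sh, dtype=float) / np.asarray(nu_x, dtype=float)
    std = r.std(ddof=1)
    return r.mean(), std, std / np.sqrt(r.size)

# Synthetic example: ratios scattered symmetrically around 0.5.
mean, std, sem = subharmonic_centring([0.5040, 0.4985, 0.4975, 0.5000],
                                      [1.0, 1.0, 1.0, 1.0])
```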
Stars that show a power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ are not uniformly distributed in the Petersen diagram. In Fig.~\ref{fig:pet}, stars with a firm power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ are plotted with filled symbols, while stars in which the detection of the power excess is weak are marked with half-filled symbols. The majority of the 48 stars with a power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ fall within the middle sequence (40 stars, including 8 weak cases), a few stars fall within the top sequence (8 stars, including 6 weak cases) and none fall within the bottom sequence. Thus $74$\thinspace per cent of the stars from the middle sequence and $31$\thinspace per cent of the stars from the top sequence show the power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. If we exclude the weak detections, the numbers are $59$ and $8$\thinspace per cent, respectively. We conclude that the occurrence of a power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is strongly correlated with the location of a star in the Petersen diagram: the power excess is detected in the majority of stars from the middle sequence, in a significantly smaller fraction of stars from the top sequence and in no star from the bottom sequence.
The amplitude of the signal at $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi$ is always in the mmag range and is comparable to the amplitude of the signal at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. This is investigated in more detail in Fig.~\ref{fig:sh_amps}, in the top panel of which we plot the histogram of the amplitude ratio, $\ifmmode A_{\rm sh}\else$A_{\rm sh}$\fi/\ifmmode A_{\rm x}\else$A_{\rm x}$\fi$, and in the bottom panel $\ifmmode A_{\rm sh}\else$A_{\rm sh}$\fi$ versus $\ifmmode A_{\rm x}\else$A_{\rm x}$\fi$. The distribution of amplitude ratios is wide, without a pronounced peak. It is truncated at $\ifmmode A_{\rm sh}\else$A_{\rm sh}$\fi/\ifmmode A_{\rm x}\else$A_{\rm x}$\fi\approx 0.5$, which is not surprising: as the signals at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ are weak, detected with typical $S/N\approx 4$--$6$, we cannot detect signals with amplitudes significantly lower, below $\approx0.5\ifmmode A_{\rm x}\else$A_{\rm x}$\fi$ -- these are hidden in the noise. We note eight cases in which the peak at $\ifmmode \nu_{\rm sh}\else$\nu_{\rm sh}$\fi$ is higher than the peak detected at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. We may safely conclude that the amplitudes $\ifmmode A_{\rm x}\else$A_{\rm x}$\fi$ and $\ifmmode A_{\rm sh}\else$A_{\rm sh}$\fi$ are comparable. In the bottom panel of Fig.~\ref{fig:sh_amps}, green diamonds correspond to stars located within the middle sequence in the Petersen diagram, while blue squares correspond to stars located within the top sequence. The amplitudes are (weakly) correlated; the higher $\ifmmode A_{\rm x}\else$A_{\rm x}$\fi$, the higher $\ifmmode A_{\rm sh}\else$A_{\rm sh}$\fi$. Amplitudes in stars from the top sequence are in general smaller (see also Sect.~\ref{ssec:selection}).
\begin{figure}
\centering
\noindent\resizebox{\hsize}{!}{\includegraphics{sh_amps.eps}}
\caption{Histogram of amplitude ratios $\ifmmode A_{\rm sh}\else$A_{\rm sh}$\fi/\ifmmode A_{\rm x}\else$A_{\rm x}$\fi$ (top panel) and plot of $\ifmmode A_{\rm sh}\else$A_{\rm sh}$\fi$ versus $\ifmmode A_{\rm x}\else$A_{\rm x}$\fi$ (bottom panel).}
\label{fig:sh_amps}
\end{figure}
\subsection{Possible impact of observational selection effects on incidence rates}\label{ssec:selection}
The period distribution of stars with additional variability, and the incidence rate of the power excess at the subharmonic frequency within each sequence, may be affected by observational selection effects. The most important selection effect is related to the star's luminosity. At the short-period end, the Cepheids are the least luminous (Fig.~\ref{fig:cwa_basics}) and we therefore expect a larger noise level in the Fourier transform. As a consequence, low-amplitude variability may be hidden in the noise, which could explain the lack of additional variability at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ in the shortest period first overtone stars, or the lack of subharmonics in stars of the bottom sequence in the Petersen diagram. At the long-period end, the stars are more luminous and the noise level should be lower.
To quantify the impact of noise on the detection of low-amplitude variability, we first estimate the noise level in the Fourier transform as a function of pulsation period. To this aim, we analysed the data for all SMC OGLE-III 1O Cepheids in the following, homogeneous way. Using time-dependent prewhitening on a season-to-season basis, we removed from the data the (possibly non-stationary) variability associated with the first overtone (sixth order Fourier series). This technique also removes trends possibly present in the data. Possible signals at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ remain in the data but, as these signals are of low amplitude and present in less than 10\thinspace per cent of all 1O Cepheids, they should not alter our estimate significantly. Then, in the frequency spectrum of the residual data for each star, we computed the average noise level in the frequency range $(0,\,3\ifmmode \nu_{\rm 1O}\else$\nu_{\rm 1O}$\fi)$. The resulting data, noise versus first overtone period, were fitted with a spline function. This function, multiplied by 4, is plotted with a dashed line in Fig.~\ref{fig:sel} and represents the estimate of the detection threshold as a function of the first overtone period.
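The construction of the threshold curve can be sketched as follows: smooth the per-star noise estimates as a function of period and multiply by the adopted $S/N$ limit of 4. A minimal sketch (the paper fits a spline; a low-order polynomial stands in here, and all names are illustrative):

```python
import numpy as np

def detection_threshold(periods, noise, snr=4.0, deg=3):
    """Fit a smooth curve to per-star noise estimates versus first
    overtone period (a polynomial here, standing in for the spline
    used in the text) and return the threshold curve snr * noise(P)."""
    coef = np.polyfit(periods, noise, deg)
    return lambda P: snr * np.polyval(coef, P)

# Illustrative input: noise falling roughly linearly with period.
P = np.linspace(0.5, 4.0, 50)
threshold = detection_threshold(P, 2.0 - 0.3 * P, deg=1)
```

A signal of amplitude $A$ at period $P$ then counts as detectable when $A$ exceeds `threshold(P)`.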
In the top panel of Fig.~\ref{fig:sel}, we consider the influence of selection effects on the detection of additional variability at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. Different symbols represent the data for the three sequences. As noted in Sect.~\ref{ssec:amps} (see also Fig.~\ref{fig:histoX}), the discussed form of pulsation is not present for $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi<0.75{\rm d}$ and is very scarce in the $0.75{\rm d}\!<\!\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!1{\rm d}$ range, in which 1O Cepheids are very numerous. The question we can answer is whether the sharp decrease of the incidence rate at shorter periods can be explained by observational selection, assuming that the amplitude distribution of the signals at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is similar to that observed at longer periods.
For the shortest periods, $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!0.75{\rm d}$, it is clear from Fig.~\ref{fig:sel} that the noise level is very high. It strongly depends on pulsation period, but already at $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi=0.75{\rm d}$ the detection of signals with amplitudes below 4.5\thinspace mmag might be difficult, and the situation worsens quickly as the period decreases further. Thus, the lack of additional variability in the $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!0.75{\rm d}$ range may be entirely due to selection effects.
In the $0.75{\rm d}\!<\!\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!1{\rm d}$ range the situation is less clear-cut. We note that at longer periods we can detect signals with amplitudes above $\sim\!2$ mmag. Thus, the very low number of stars with additional variability within box A in Fig.~\ref{fig:sel}, caused by the high noise level, may be responsible for the small incidence rate in the discussed period range as compared to longer periods. How many detections may we miss in this box? As an estimate, we can count the stars within the same area but at longer periods, for example within the boxes marked B or C in Fig.~\ref{fig:sel}. There are 21 stars within box B and 15 stars within box C. These numbers are scaled by the factor $N_{\rm A}/N_{\rm B}$ or $N_{\rm A}/N_{\rm C}$, where $N_{\rm X}$ is the total number of 1O Cepheids within the period range corresponding to a given box. Using these numbers to estimate the incidence rate within the $0.75{\rm d}\!<\!\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\!<\!1{\rm d}$ range, we get $11.1\pm 1.7$\thinspace per cent or $14.6\pm 1.9$\thinspace per cent, using data from box B or C, respectively. We conclude that the sharp decrease of the incidence rate at $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi<1{\rm d}$ may be entirely explained by observational selection.
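The box-scaling estimate can be sketched as below. The total sample sizes $N_{\rm A}$ and $N_{\rm B}$ are not quoted in the text, so the values used here are hypothetical, and the simple binomial standard error is our assumption; the error treatment in the paper may differ.

```python
import math

def expected_in_box_a(n_ref, N_ref, N_A):
    """Detections expected in box A, obtained by scaling the count in a
    reference box (B or C) by the ratio of sample sizes: n_ref * N_A / N_ref."""
    return n_ref * N_A / N_ref

def incidence_rate(n_det, n_tot):
    """Incidence rate n_det/n_tot with a simple binomial standard error,
    sqrt(p * (1 - p) / n_tot)."""
    p = n_det / n_tot
    return p, math.sqrt(p * (1.0 - p) / n_tot)

# Hypothetical sample sizes (the text quotes only the 21 detections in box B):
N_A, N_B = 150, 190
print(expected_in_box_a(21, N_B, N_A))  # detections box B predicts for box A
print(incidence_rate(21, N_B))          # incidence rate implied by box B
```

Note that after the $N_{\rm A}/N_{\rm B}$ scaling, the incidence rate implied for the short-period range reduces to the rate measured in the reference box itself.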
For the period range characteristic of the middle and top sequences in the Petersen diagram, the noise level is roughly the same; it slowly decreases with increasing period. Thus, the small decrease of the incidence rate of the discussed form of pulsation with pulsation period, noted in Sect.~\ref{ssec:amps}, is likely real and not a result of observational selection.
In the bottom panel of Fig.~\ref{fig:sel}, we consider the influence of observational selection on the incidence rate of power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ within the three sequences. Symbols correspond to observational data. We first note that amplitudes of signals in stars of the top sequence are, on average, lower than amplitudes of signals in stars of the middle sequence. Also, the incidence rate of power excess at the subharmonic is significantly lower for the top sequence. This is obviously not due to different noise levels; signals with the amplitude characteristic of the middle sequence should be easily detected at longer periods. The likely explanation is that amplitudes of the signals at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ in stars of the top sequence are lower than in stars of the middle sequence. The situation is similar for the stars of the bottom sequence. Although the noise level increases with decreasing period, the increase is pronounced only for $\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\lesssim 1{\rm d}$. Amplitudes as high as in the middle sequence should be detected, but they are not. Thus, the amplitudes of signals at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ in stars of the bottom sequence must be lower than in stars of the middle sequence. In fact, to remain undetectable, they cannot be higher than in the top sequence. The sequence-dependent amplitude of the power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is in line with the theory proposed recently by \cite{wd16} to explain the discussed form of pulsation. In this theory, the signals at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ should be observed for all sequences and correspond to non-radial modes of different $\ell$, for which the observed amplitudes differ due to geometric cancellation. We discuss this in more detail in Sect.~\ref{ssec:nature}.
\begin{figure}
\centering
\noindent\resizebox{\hsize}{!}{\includegraphics{selef2.eps}}
\caption{The influence of the observational selection effects on the period distribution of stars with the additional variability at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ (top panel) and on the incidence rate of power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ for stars of the three sequences in the Petersen diagram (bottom panel).}
\label{fig:sel}
\end{figure}
\subsection{Time-variability}\label{ssec:tv}
Complex, non-coherent, and often broad structures, detected at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and, in some stars, at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ (Figs.~\ref{fig:sh_nice}, \ref{fig:sh_broad} and \ref{fig:sh_weak}), indicate strong time variation of the amplitude and/or phase of the variability these structures represent. Because of the low amplitudes of these signals, typically between 2 and 5 mmag, and the relatively high noise level in the ground-based observations, it is difficult to analyse this variability in more detail, with high time resolution. Still, some analysis is possible for stars in which the signals are detected at relatively high $S/N$. For these stars we divided the data into four groups, each consisting of two (or in some cases three) observing seasons. Then, for each group, we calculated the discrete Fourier transform and investigated the frequency spectrum around $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. Results of the analysis are presented in Fig.~\ref{fig:s1119} (for OGLE-SMC-CEP-1119) and in Fig.~\ref{fig:s3944} (for OGLE-SMC-CEP-3944). Results are qualitatively similar for a few other stars for which such analysis was possible.
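The grouping of a light curve into sets of observing seasons can be sketched as follows; the 50-day gap criterion and the equal-size grouping are our assumptions for illustration, not the exact procedure of the paper.

```python
import numpy as np

def split_by_seasons(t, m, gap=50.0, n_groups=4):
    """Split a light curve into observing seasons (separated by gaps
    longer than `gap` days), then merge consecutive seasons into
    `n_groups` groups, each to be Fourier-analysed separately."""
    order = np.argsort(t)
    t, m = t[order], m[order]
    # season boundaries wherever consecutive points are > gap days apart
    edges = np.concatenate(([0], np.where(np.diff(t) > gap)[0] + 1, [t.size]))
    seasons = [(t[a:b], m[a:b]) for a, b in zip(edges[:-1], edges[1:])]
    # merge consecutive seasons into roughly equal groups
    per = max(1, int(np.ceil(len(seasons) / n_groups)))
    groups = [seasons[i:i + per] for i in range(0, len(seasons), per)]
    return [(np.concatenate([s[0] for s in g]),
             np.concatenate([s[1] for s in g])) for g in groups]
```

Each group's discrete Fourier transform is then inspected around $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$.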
In Fig.~\ref{fig:s1119}, we observe that from season to season the amplitude and location of the peaks present at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ strongly vary. In particular, the signal at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is significant in the 2001--2005 seasons, while it is not significant later on. The signal at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ was insignificant in the first observing seasons, while it is clearly present from 2003 onwards.
For OGLE-SMC-CEP-3944, analysed in Fig.~\ref{fig:s3944}, we observe that the signal at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is always present, but its amplitude and/or phase clearly vary. The signal at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is weakly marked between 2000 and 2006. The structure of the observed broad power excess varies in time. As a result, a broad and essentially flat power excess is present in the analysis of all data.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{OGLE-SMC-CEP-1119_new_seasonal.eps}}
\caption{Seasonal analysis of the frequency spectra centred at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ (left panels) and at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ (right panels) for OGLE-SMC-CEP-1119. The bottom panels show the frequency spectra for all data. The structure at $\nu-0.5\nu_{\rm x}\approx 0.04$ (in the right panels) is a daily alias of the structure centred at $0.5\nu_{\rm x}$.}
\label{fig:s1119}
\end{figure}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{OGLE-SMC-CEP-3944_new_seasonal.eps}}
\caption{Same as Fig.~\ref{fig:s1119}, but for OGLE-SMC-CEP-3944.}
\label{fig:s3944}
\end{figure}
\section{Discussion}
\subsection{Comparison with RR~Lyr stars}\label{ssec:rrlcomp}
A very similar form of variability is detected in RRc stars, as already mentioned in the Introduction. The obvious similarity is the characteristic period ratio, which falls into a similar range, $P/\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\in(0.60,\,0.65)$, and the occurrence of the additional variability in first overtone stars (or in double-mode stars, with fundamental and first overtone modes simultaneously excited). The present study allows a more detailed comparison.
\begin{itemize}
\item {\bf Incidence rate of the phenomenon.} The phenomenon is common among RRc stars; space observations leave no doubt: 14 out of 15 RRc/RRd stars observed from space show the phenomenon \citep[see][]{pamsm15}. Incidence rates in top-quality ground-based observations are also high: 27\thinspace per cent in the Galactic bulge sample of RRc stars studied by \cite{netzel3} and 38\thinspace per cent in the M3 sample observed by \cite{jurcsik_M3}. Unfortunately, for Cepheids we lack a systematic analysis of a large sample of stars. The 138 stars considered here constitute $8.4$\thinspace per cent of the OGLE-III sample of 1O SMC Cepheids. The incidence rate depends on the pulsation period; in particular, the phenomenon does not occur in the shortest-period ($\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\lesssim 0.8$\thinspace d) Cepheids. For longer periods the incidence rate is $\approx\!8-15$\thinspace per cent. Space observations of 1O Cepheids are scarce. Polaris was observed with the star tracker on board the {\it Coriolis} satellite \citep{bruntt}, SZ~Tau was observed with {\it MOST} \citep{evans_MOST}, and two other stars were observed with {\it CoRoT} \citep{poretti}. In these stars no additional variability was detected. Two stars observed with {\it K2} show the discussed form of pulsation (Plachy et al. in prep.).
\item {\bf The Petersen diagram.} Both for Cepheids and for RR~Lyr stars three sequences are present -- see Fig.~\ref{fig:rrce}. The Cepheid sequences are slanted and well separated. The bottom sequence is the most populated, but the middle and top sequences are well represented, too. In the case of RR~Lyr stars, the sequences are nearly horizontal, not as well separated, and the majority of stars fall within the bottom sequence. In both groups, stars that belong to more than one sequence are found. While in the case of RR~Lyr stars there are very good examples of stars that belong to three sequences simultaneously \citep[see fig.~5 in][]{netzel3}, in the case of Cepheids only stars that belong to two sequences are found, and these are rather weak cases (Fig.~\ref{fig:2seq}).
\item{\bf Amplitudes.} Both in Cepheids and in RR~Lyr stars the additional periodicity is of low amplitude, in the mmag range. In both cases the most typical amplitude is around 2\thinspace per cent of the first overtone amplitude [compare fig.~7 in \cite{netzel3} and Fig.~\ref{fig:hia}].
\item{\bf Subharmonics.} Both in Cepheids and in RR~Lyr stars significant power excess centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is detected. Subharmonics are detected in 20\thinspace per cent of the sample of Galactic bulge RRc stars with additional variability analysed by \cite{netzel3} and in 35\thinspace per cent of the present Cepheid sample. \cite{netzel3} reported the power excess also at $3/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$; other cases are known from space observations \citep[see e.g.][]{pamsm15,molnar,kurtz}. No firm detection of power excess at $3/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is found in the present Cepheid sample. Stars with power excess at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, both RR~Lyr stars and Cepheids, are not uniformly distributed in the Petersen diagram. This is illustrated in Fig.~\ref{fig:rrce}, in which we plot the stars from the present sample of SMC Cepheids and from the study of \cite{netzel3} \citep[see also][]{netzel_vise}; stars with power excess at the subharmonic are marked with filled symbols.
\item{\bf Time variability of the additional signals.} Both for Cepheids and for RR~Lyr stars the signals at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ are complex: power excesses, or clusters of peaks (sometimes very broad), are detected, rather than single and coherent peaks. Compare fig.~11 in \cite{netzel3} and fig.~2 in \cite{netzel_vise} with Figs.~\ref{fig:sh_nice}, \ref{fig:sh_broad} and \ref{fig:sh_weak}. Such complex structures in the frequency spectrum correspond to strong and irregular variability of the signal's phase and/or amplitude.
\end{itemize}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics{pet_rrce.eps}}
\caption{Petersen diagram with RRc stars with additional periodicity from Netzel et al. (2015b) and 1O Cepheids from the present study. Filled symbols correspond to stars with power excess detected at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$. The arrow indicates the reciprocal of the golden ratio.}
\label{fig:rrce}
\end{figure}
\subsection{Nature of the additional variability}\label{ssec:nature}
Based on the comparison just presented, we conclude that the double-periodic pulsation observed both in RRc stars and in 1O Cepheids, with characteristic period ratios $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi\in(0.60,\, 0.65)$, is qualitatively very similar. Consequently, the nature of the additional variability and the mechanism of its excitation are most likely the same. In both cases it cannot be pulsation in two radial modes \citep{wdrs,pamsm15}. A model or theory common to the two groups should be searched for.
\cite{wd12} proposed an explanation for Cepheids. The additional variability at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ was interpreted as a non-radial f-mode of high angular degree. The three Cepheid sequences were reproduced assuming that the additional modes have $\ell=42$ (top sequence), $\ell=46$ (middle sequence) and $\ell=50$ (bottom sequence). Due to geometric cancellation, modes of such large angular degree are expected to have very low observed amplitudes. Two problems arise then, however. First, geometric cancellation is lower for even-$\ell$ modes and, at high degrees, is roughly the same for all modes of the same parity. Hence, if $\ell=42$, $46$ and $50$ are observed, then $\ell=44$ and $\ell=48$ should be observed as well (all these modes are linearly unstable), which is not the case. Second, the required intrinsic amplitudes are very high, implying large broadening of the spectral lines. Both problems are discussed in \cite{wd12}. When this model was proposed, it was not known that subharmonics are detected in these stars. RR~Lyr stars were not studied by \cite{wd12}.
\cite{golden} focused on RR~Lyr stars, in particular on one star observed by {\it Kepler} (KIC5520878) and noticed that its period ratio is close to the reciprocal of the golden ratio, $1/\varphi\approx 0.618$. \cite{golden} argue that the dynamics driven by two frequencies in the golden ratio maximally resist perturbations. As noted by \cite{rs_iau} and also well visible in Fig.~\ref{fig:rrce}, in which we mark $1/\varphi$ with an arrow, the stars avoid the golden ratio. In our opinion the proximity of period ratio to $1/\varphi$ in some RR~Lyr stars is a pure coincidence.
Recently, \cite{wd16} proposed a new explanation, in which the signal observed at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ corresponds to non-radial, $\ell=7-9$ modes. The signal at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is its harmonic, which gains a significant amplitude due to a non-linear, quadratic effect. According to this model, stars in which subharmonics are detected should not be distributed uniformly among the sequences visible in the Petersen diagram. As geometric cancellation is lower for even-$\ell$ modes, we should observe the power excess at the subharmonic, i.e. we should detect the true non-radial mode, preferentially in sequences that correspond to even-$\ell$ pulsation. In the case of Cepheids it is the middle sequence ($\ell=8$; $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi=2\nu_8$). The top sequence corresponds to $\ell=7$, while the bottom sequence to $\ell=9$. Geometric cancellation is slightly lower for $\ell=7$ than for $\ell=9$. In Fig.~\ref{fig:rrce}, we observe a nice qualitative confirmation of this theory for Cepheids.
In the case of RR~Lyr stars, the top sequence corresponds to even $\ell$ ($\ell=8$) and subharmonic detection (detection of the non-radial $\ell=8$ mode) should be more common there. According to the \cite{wd16} theory, the bottom, most populated sequence for RR~Lyr stars corresponds to an odd-$\ell$ mode ($\ell=9$). The weakly marked middle sequence corresponds to the combination frequency ($\nu_8+\nu_9$) and no subharmonics should be detected there. This is indeed what we observe -- Fig.~\ref{fig:rrce}. A detailed description of the model is in preparation (Dziembowski \& Smolec, in prep.).
\section{Summary}
We have analysed 138 1O Cepheids from the SMC in which \cite{ogle_cep_smc} reported additional variability with $P/\ifmmode P_{\rm 1O}\else$P_{\rm 1O}$\fi\in(0.60,\, 0.65)$. These stars form three sequences in the Petersen diagram. Our most important findings are the following.
\begin{itemize}
\item The three sequences in the Petersen diagram are not equally populated. In 64 stars we detect periodicities corresponding to the bottom sequence, in 54 stars corresponding to the middle sequence and in 26 stars corresponding to the top sequence. The numbers do not add up to 138 as in the frequency spectra of six stars we detect two significant periodicities that correspond to two of the three sequences.
\item The additional variability is always of low amplitude, typically about 2--4\thinspace per cent of the first overtone amplitude (between 2 and 5\thinspace mmag) -- Figs.~\ref{fig:amps} and \ref{fig:hia}.
\item In 35\thinspace per cent of stars (25\thinspace per cent if weak cases are excluded), we detect power excess centred at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$, i.e. at the subharmonic. The power excess is often broad and of complex structure (Figs.~\ref{fig:sh_nice} and \ref{fig:sh_broad}). The amplitude of the signal detected at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ is comparable to the amplitude of the signal detected at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ -- Fig.~\ref{fig:sh_amps}.
\item Stars in which power excess at the subharmonic is detected are not uniformly distributed in the Petersen diagram -- Figs.~\ref{fig:pet} and \ref{fig:rrce}. Subharmonics are detected most frequently in the stars of the middle sequence (in 74\thinspace per cent of stars; 59\thinspace per cent excluding weak cases). The incidence rate is much lower for the top sequence (31\thinspace per cent, or 8\thinspace per cent without weak cases). Subharmonics are not detected in stars of the bottom sequence.
\item The additional variability (both at $\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$ and at $1/2\ifmmode \nu_{\rm x}\else$\nu_{\rm x}$\fi$) is strongly non-stationary; its amplitude and/or phase strongly vary in time (Figs.~\ref{fig:s1119} and \ref{fig:s3944}).
\item A similar form of pulsation, in which the radial first overtone and an additional low-amplitude variability with period ratios $\ifmmode P_{\rm x}/P_{\rm 1O}\else$P_{\rm x}/P_{\rm 1O}$\fi\in(0.60,\,0.65)$ are detected, is also present in RR~Lyr stars. The detailed comparison we have carried out indicates that the nature of this phenomenon is most likely the same in both groups of classical pulsators. Therefore, a common theory explaining this puzzling form of pulsation should be searched for.
\item In the Petersen diagram, the distribution of stars in which power excess at the subharmonic is detected is not uniform, both for Cepheids and for RR~Lyr stars (Fig.~\ref{fig:rrce}). The observed distribution favours the model proposed recently by \cite{wd16} (Sect.~\ref{ssec:nature}).
\end{itemize}
It is very important to establish the incidence rate of the discussed form of pulsation in Cepheids, to check whether it is as common as in RR~Lyr stars. A systematic search in the OGLE data has been started. It seems crucial, however, to observe 1O Cepheids from space with the {\it K2} mission. The high-precision photometry it gathers offers the possibility of detecting periodicities of very low amplitude. Detection of power excess at subharmonic frequencies for stars that belong to different sequences, and study of the amplitude distribution of these signals, is crucial to test the model proposed by \cite{wd16}.
\section*{Acknowledgements}
This research is supported by the Polish National Science Centre through grants DEC-2012/05/B/ST9/03932 and DEC-2015/17/B/ST9/03421. We are grateful to Pawe\l{} Moskalik for detailed reading of the manuscript and many comments that significantly improved its content. Fruitful and stimulating discussions with Wojtek Dziembowski are acknowledged. We acknowledge the summer student program at Nicolaus Copernicus Astronomical Center during which part of this work was completed.
2108.05107
\section{Introduction and summary}
Conformal defects are extended objects in conformal field theories that preserve a fraction of the full conformal symmetry. They are important physical observables and their properties should be studied with the same emphasis as the spectrum of local operators. In three dimensions, the critical Ising model has been the subject of intensive research during the past years, and part of this work has focused on its spectrum of defects: conformal boundary conditions were studied using bootstrap techniques in \cite{Liendo:2012hy,Gliozzi:2015qsa}, while the existence of a monodromy defect was proposed in \cite{Billo:2013jda}, and further studied in \cite{Gaiotto:2013nva}.
The motivation behind this work is the study of monodromy defects in the $\mathcal{N}=2$ Wess-Zumino model, which can be considered a supersymmetric counterpart of the standard $3d$ Ising model that preserves four supercharges.\footnote{The $\mathcal{N}=1$ super Ising model can also be formulated as a Wess-Zumino model \cite{Fei:2016sgs}, and has been studied successfully using the numerical bootstrap \cite{Rong:2018okz,Atanasov:2018kqw}.}
In order to achieve our goal, several intermediate results are necessary, and some of them are interesting in their own right.
In particular, our analysis contains applications valid for non-supersymmetric monodromy defects, for general codimension-two defects and for the Wess-Zumino model without defects.
The purpose of this detailed introduction is to summarize the paper and provide an outlook of the most relevant results.
Consider a $d$-dimensional Euclidean conformal field theory.
Whenever there is a complex scalar $\phi(x)$ invariant under $U(1)$ transformations $\phi(x) \to e^{i\alpha} \phi(x)$, a monodromy defect is introduced by demanding that the scalar picks up a phase when transported around the origin:
\begin{align}
\begin{split}
\label{eq:monodromy-def}
\phi(r, \theta + 2\pi, \vec y) & = e^{2 \pi i v} \phi(r, \theta, \vec y) \, .
\end{split}
\end{align}
Here $0 \le v < 1$ is a real parameter that characterizes the monodromy, and we are using polar coordinates $(r,\theta)$ in the plane orthogonal to the defect.
The critical Ising model provides the simplest example: since the global symmetry is $\mathbb{Z}_2$, there exists a monodromy defect with $v = 1/2$.
This defect was studied in \cite{Billo:2013jda,Gaiotto:2013nva} using Monte-Carlo simulations, $\veps$--expansion calculations and numerical bootstrap (see also \cite{Yamaguchi:2016pbj}).
For the case of the $O(N)$ models, there exist monodromy defects with general $v$, which were studied in the $\veps$--expansion in \cite{Soderberg:2017oaa}, and recently the very systematic study of \cite{Giombi:2021uae} has extended these results and obtained new ones in the large-$N$ limit.\footnote{
The monodromy defect geometry is reminiscent of two intersecting boundaries at an angle $\theta = 2\pi v$, although the latter setup breaks more symmetry \cite{Antunes:2021qpy}.}
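As a concrete numerical illustration (ours, not taken from the works cited above), a single angular mode with the shifted dependence $e^{i(n+v)\theta}$ satisfies the monodromy condition \eqref{eq:monodromy-def}; the specific values of $v$, $n$ and the radial profile below are arbitrary choices.

```python
import numpy as np

v = 0.3   # monodromy parameter, 0 <= v < 1 (arbitrary example value)
n = 2     # any integer angular mode number

def phi(r, theta):
    # one angular mode of a field with monodromy v:
    # radial profile (arbitrary here) times e^{i*(n+v)*theta}
    return r ** abs(n + v) * np.exp(1j * (n + v) * theta)

r, theta = 0.7, 1.1
lhs = phi(r, theta + 2 * np.pi)
rhs = np.exp(2j * np.pi * v) * phi(r, theta)
assert np.isclose(lhs, rhs)  # phi(r, theta + 2*pi) = e^{2*pi*i*v} * phi(r, theta)
```

Any superposition of such modes over integer $n$ inherits the same monodromy, which is why the defect mode expansion is labelled by $n+v$ rather than by integers.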
In the present work, an important observable we consider is the two-point correlation function of scalar fields in the presence of a monodromy defect.
Since the monodromy partly breaks conformal symmetry, the two-point function depends on two conformal cross ratios $x$ and $\xb$, to be defined in \eqref{eq:def-cross-ratios}.
As a result, the correlator reads
\begin{align}
\label{eq:two-pt-def}
\langle \phi(x_1) \bar \phi(x_2) \rangle
= \frac{{\mathcal{G}}(x,\xb)}{(r_1 r_2)^{\Dp}} \, .
\end{align}
Analogously to four-point functions in homogeneous CFT, the correlator ${\mathcal{G}}(x,\xb)$ captures an infinite amount of CFT data thanks to the Operator Product Expansion (OPE).
In the presence of a defect, two different OPEs are possible, one as a sum of bulk operators, the other in terms of operators localized on the defect \cite{Billo:2016cpy}.
For two-point functions, these OPEs give two conformal block decompositions which must be equal:\footnote{Here and in the rest of the paper we use the shorthand notation $\mu_{\Dh,s} = |b_{\phi\widehat{\mathcal{O}}}|^2$ and $c_{\Delta,\ell} = a_{{\mathcal{O}}} \lambda_{\phi\bar\phi{\mathcal{O}}}$, where $b_{\phi\widehat{\mathcal{O}}}$, $a_{{\mathcal{O}}}$, $\lambda_{\phi\bar\phi{\mathcal{O}}}$ are OPE coefficients defined in the main text.}
\begin{align}
\label{eq:crossing-eq}
{\mathcal{G}}(x, \xb)
= \sum_{\Dh,s} \mu_{\Dh,s} \hat f_{\Dh,s}(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x)(1-\xb)}\right)^\Dp
\sum_{\Delta,\ell} c_{\Delta,\ell} f_{\Delta,\ell}(x, \xb) \, .
\end{align}
In this paper, we follow the bootstrap philosophy which uses the crossing equation \eqref{eq:crossing-eq} as the starting point.
Indeed, we will see that in favorable situations, \eqref{eq:crossing-eq} together with basic structural properties of the bulk theory and mild physical assumptions, can be used to fully determine the correlator ${\mathcal{G}}(x,\xb)$.
In the case of conformal boundaries, this approach has been successfully carried out in a number of interesting examples \cite{Liendo:2012hy,Bissi:2018mcq,Mazac:2018biw,Kaviraj:2018tfd,Gimenez-Grau:2020jvf,Dey:2020jlc}.
The main technical tool we will use to solve crossing analytically is the so-called Lorentzian inversion formula (LIF).
The original LIF was derived for four-point correlation functions in CFTs without defects \cite{Caron-Huot:2017vep}.
In the case of two-point functions in defect CFT, there exist two inversion formulas, one for each of the OPE channels in the crossing equation \eqref{eq:crossing-eq}.
These formulas were obtained in \cite{Lemos:2017vnx,Liendo:2019jpu} and were already used to study the $\mathbb Z_2$ Ising monodromy defect. In this work, we continue with this program and use the LIF to solve more general monodromy defects in the $\veps$--expansion.
We start in section \ref{sec:wf-monodromy} with the Wilson-Fisher (WF) fixed point with global $O(2N)$ symmetry.
This model is described in $d = 4-\veps$ dimensions by the non-trivial fixed point of the following Lagrangian
\begin{align}
\begin{split}
\label{eq:wf-lagrangian}
L_{\text{WF}}
= \frac12 (\partial_\mu \phi_i)^2
+ \frac{\lambda}{4!} (\phi_i \phi_i)^2 \, , \qquad
i = 1, \ldots, 2N \, .
\end{split}
\end{align}
We define the complex scalar $\phi = \phi_1 + i \phi_2$ and impose a monodromy $v$ under rotations \eqref{eq:monodromy-def}.
Since this model is weakly coupled for $0 < \veps \ll 1$, one can use the Lagrangian description to compute CFT data using Feynman diagrams \cite{Gaiotto:2013nva,Soderberg:2017oaa,Giombi:2021uae}.
However, this is not the approach we follow in this work. Although still perturbative in nature, our analysis relies solely on modern analytical bootstrap techniques. The bootstrap has several advantages which allow us to present improvements on previous results.
On the one hand, we obtain closed-form expressions for the correlation function ${\mathcal{G}}(x,\xb)$ to order $O(\veps)$, which allows us to extract previously unknown bulk CFT data in an efficient way.
On the other hand, we show that the correlator is an analytic function of the monodromy $v$, and the transformation $v \to v+1$ has the interpretation of a change of boundary condition for low-lying defect operators.
We also clarify subtleties related to codimension-two defects that had not appeared in the literature.
In particular, we obtain conformal blocks for odd-spin bulk operators, which are related to the existence of parity-odd one-point tensor structures when the codimension is two.
In order to accommodate these operators, we also have to extend the bulk-to-defect Lorentzian inversion formula \cite{Lemos:2017vnx}.
These results are applicable not only to monodromy defects, but to any type of codimension-two defect.
Having used the Wilson-Fisher model as a testing ground for our techniques, we move on to the Wess-Zumino (WZ) model, which is the simplest superconformal model preserving four supercharges.
This model consists of a complex scalar $\phi(x)$ and a two-component complex fermion $\psi(x)$.
The allowed interactions are fully fixed by supersymmetry, so the action depends on a single coupling constant $g$:
\begin{align}
\label{eq:wz-lagrangian}
L_{\text{WZ}}
= (\partial_\mu \bar \phi) (\partial_\mu \phi)
+ \psi^\dag \bar \sigma^\mu \partial_\mu \psi
+ \frac g2 (\psi \psi \phi + \psi^\dag \psi^\dag \bar \phi)
+ \frac {g^2}{4} (\phi \bar \phi)^2 \, .
\end{align}
Similarly to the WF case, this model has a fixed point in $d = 4-\veps$ dimensions that can be studied in diagrammatic perturbation theory.\footnote{See \cite{Fei:2016sgs} for a nice summary and introduction to the literature.}
Compared to the Wilson-Fisher fixed point, which has received a lot of attention from the bootstrap community \cite{Gopakumar:2016wkt,Gopakumar:2016cpb,Dey:2016mcs,Dey:2017oim,Alday:2017zzv,Henriksson:2018myn,Henriksson:2020fqi,Henriksson:2021lwn}, the literature on the Wess-Zumino model using the modern conformal bootstrap is much scarcer, the most notable exceptions being \cite{Bobev:2015jxa,Bobev:2015vsa}.
In section \ref{sec:wess-zumino-bulk} we take a small detour in order to fill this gap.
In this section we forget momentarily about defects, and we start by modifying the original LIF \cite{Caron-Huot:2017vep} into a formula that directly extracts OPE coefficients of exchanged superconformal primaries.
The main virtues of this formula are that it unmixes the contributions of nearly-degenerate operators, and that it applies to general superconformal theories with four supercharges in any number of dimensions.
With this newly developed machinery, we carry out the bootstrap program for bulk four-point functions of chiral operators and extract bulk CFT data to leading order in $\veps$.
This is the simplest application of our formalism, and we hope to present a more detailed treatment of the Wess-Zumino model using LIF technology elsewhere.
In section \ref{sec:wess-zumino-defect} we put all the pieces together and study monodromy defects in the Wess-Zumino model.
We start by reviewing the relevant superconformal blocks \cite{Gimenez-Grau:2020jvf}, and then move on to use the input of section \ref{sec:wess-zumino-bulk} and the LIF to bootstrap two-point functions of chiral fields.
The final result can be written in a compact form in terms of a class of one- and two-variable special functions which are defined by their series expansions. Because these functions might be relevant for future bootstrap calculations, we study some of their analytic properties in more detail. In particular, we explain how to extract their behavior around $x,\xb \sim 1$ given their series expansions around $x,\xb \sim 0$. This amounts to extracting both bulk and defect CFT data to leading order in $\veps$, which was one of the original goals of this work.
\section{Wilson-Fisher: Monodromy defects}
\label{sec:wf-monodromy}
In this section we study monodromy defects at the Wilson-Fisher fixed point; previous work on this subject includes~\cite{Soderberg:2017oaa,Giombi:2021uae}.\footnote{See also \cite{Bianchi:2021snj,Dowker:2021gqj} for other works using methods slightly different from ours.} Here we present some small improvements by obtaining the full correlation function at order $O(\veps)$ and extracting the bulk CFT data.
This model, interesting on its own, is also a good testing ground for our techniques, which we will later apply to the Wess-Zumino model in section \ref{sec:wess-zumino-defect}.
We start this section by studying the kinematics of codimension-two defects in $d$-dimensional Euclidean spacetime.
Even though kinematics of defect CFTs are well understood in general \cite{Billo:2016cpy}, the codimension-two case turns out to be subtle.
In particular, we obtain bulk conformal blocks for odd-spin operators, which have not appeared in the literature before.
Furthermore, we extend the bulk-to-defect inversion formula of \cite{Lemos:2017vnx}, in order to accommodate odd-spin operators for generic codimension-two defects.
We end the section by bootstrapping two-point functions of bulk scalars $\langle \phi(x_1) \bar \phi(x_2) \rangle$ in the presence of monodromy defects, first for free theories, and then for the more interesting case of the Wilson-Fisher fixed point.
\subsection{Conformal cross ratios}
As anticipated in the introduction, the two-point function of scalars in the presence of a defect depends on a function of two conformal cross ratios
\begin{align}
\label{eq:two-pt-def-2}
\langle \phi(x_1) \bar \phi(x_2) \rangle
= \frac{{\mathcal{G}}(x,\xb)}{|x_1^\bot|^{\Dp} |x_2^\bot|^{\Dp}} \, .
\end{align}
In this work, we use the same cross ratios as \cite{Lemos:2017vnx}, which are defined by\footnote{Our cross-ratios are related to the ones in \cite{Giombi:2021uae} as $e^{i\theta} = \sqrt{x/\xb}$ and $\xi = (1-\sqrt{x\xb})^2 / (4 \sqrt{x\xb})$.}
\begin{align}
\sqrt{x \xb} + \frac{1}{\sqrt{x \xb}}
= \frac{|x_{12}^{\|}|^2 + |x_1^\bot|^2 + |x_2^\bot|^2}{|x_1^\bot| |x_2^\bot|}
\, , \qquad
\sqrt{\frac{x}{\xb}} + \sqrt{\frac{\xb}{x}}
= \frac{2 x_1^\bot \cdot x_2^\bot}{|x_1^\bot| |x_2^\bot|}
\, .
\label{eq:def-cross-ratios}
\end{align}
Here we are assuming a flat defect, with $x^{\|}$ directions parallel to the defect and $x^\bot$ orthogonal directions.
In order to give a geometric interpretation of the cross ratios, it is convenient to use a conformal transformation to go to a simpler frame.
In the frame of interest, the defect sits at the origin, the two operators $\phi(x_1)$, $\bar\phi(x_2)$ lie on a plane orthogonal to the defect, and $\bar \phi(x_2)$ is fixed at unit distance from the defect.
The position of $\phi(x_1)$ is unfixed and depends on two coordinates, which are precisely the two cross ratios in \eqref{eq:two-pt-def-2}.
In Euclidean signature, it is convenient to parametrize the position of $\phi(x_1)$ with complex conjugate coordinates $x$ and $\xb = x^*$, namely:
\begin{align}
\label{eq:cross-ratios-plane}
x_1 = \left( \tfrac12 (x + \xb), \tfrac{1}{2i}(x-\xb), \vec y \right), \qquad
x_2 = \left( 1, 0, \vec y \right).
\end{align}
Here $\vec y$ parametrizes the directions parallel to the defect.
Continuing the CFT to Lorentzian signature, one sees that the two cross ratios $x$, $\xb$ become real and independent.
Because of their interpretation as coordinates in a plane and their reality conditions, the defect CFT cross ratios $x,\xb$ are close analogs of the four-point cross-ratios $z,\zb$ which are familiar in homogeneous CFT.
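As a quick numerical sanity check (ours, not part of the original discussion), one can verify that the frame \eqref{eq:cross-ratios-plane} reproduces the defining relations \eqref{eq:def-cross-ratios}; the following Python sketch evaluates both sides for a sample point:

```python
import cmath

# Sample point: in Euclidean signature xb = x* (complex conjugate).
x = 0.3 + 0.4j
xb = x.conjugate()

# Frame of eq. (cross-ratios-plane): both operators at the same parallel
# position, so x12_parallel = 0; only the two orthogonal directions matter.
x1_perp = (((x + xb) / 2).real, ((x - xb) / 2j).real)
x2_perp = (1.0, 0.0)

def norm(v):
    return (v[0]**2 + v[1]**2) ** 0.5

r1, r2 = norm(x1_perp), norm(x2_perp)
dot12 = x1_perp[0]*x2_perp[0] + x1_perp[1]*x2_perp[1]
x12_par_sq = 0.0  # operators share the parallel coordinates \vec y

# Left- and right-hand sides of eq. (def-cross-ratios)
lhs1 = cmath.sqrt(x*xb) + 1/cmath.sqrt(x*xb)
rhs1 = (x12_par_sq + r1**2 + r2**2) / (r1*r2)
lhs2 = cmath.sqrt(x/xb) + cmath.sqrt(xb/x)
rhs2 = 2*dot12 / (r1*r2)

assert abs(lhs1 - rhs1) < 1e-12 and abs(lhs2 - rhs2) < 1e-12
```

In particular, $|x_1^\bot| = \sqrt{x\xb}$ and $|x_2^\bot| = 1$ in this frame, which makes the geometric meaning of the radial combination $x\xb$ manifest.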
\subsection{Conformal blocks}
The two-point function ${\mathcal{G}}(x,\xb)$ admits two different expansions, the defect-channel expansion and the bulk OPE expansion, see \eqref{eq:crossing-eq}.
These expansions are formulated in terms of the conformal blocks that we now study.
The discussion that follows is always restricted to codimension-two defects.
\subsubsection{Defect channel}
The defect-channel expansion expresses a bulk field as an infinite sum of defect fields.
In the coordinates $x$, $\xb$ of equation \eqref{eq:cross-ratios-plane}, the defect sits at $x=\xb=0$, while the operator $\bar \phi(x_2)$ sits at $x = \xb = 1$.
The defect OPE limit dominates when $\phi(x_1)$ approaches the defect, namely when $x\xb \to 0$ keeping $x/\xb$ fixed.
To leading order in $x\xb$ and to all orders in $x/\xb$, we normalize the defect expansion as
\begin{align}
\label{eq:leading-def-ope}
\phi(x, \xb, \vec y)
\sim \sum_{\widehat {\mathcal{O}}} b_{\phi \widehat {\mathcal{O}}}
\left( \frac{\xb}{x} \right)^s
( x \xb)^{(\widehat \Delta-\Dp)/2}
\left[ \widehat {\mathcal{O}}(\vec y) + O(x\xb) \right] \, .
\end{align}
Inserting \eqref{eq:leading-def-ope} in the two-point function and comparing with the defect expansion \eqref{eq:crossing-eq} gives the leading behavior of defect blocks
\begin{align}
\label{eq:bdy-cond}
\hat f_{\hat\Delta,s}(x, \xb) \sim x^{(\Dh-s)/2} \xb^{(\Dh+s)/2} + O(x\xb) \, .
\end{align}
The full cross-ratio dependence of the conformal block $\hat f_{\hat\Delta,s}(x,\xb) = (\xb/x)^s g_{\hat\Delta}(x\xb)$ can be determined from the Casimir equation derived in \cite{Billo:2016cpy}, namely
\begin{align}
\left[
(1-y) y^2 \partial_y^2
- \frac{1}{2} y (d y+d-4) \partial_y
- \frac{1}{4} \Dh (\Dh-d+2) (1-y) \right] g(y)
= 0 \, .
\end{align}
This equation has two hypergeometric solutions, and the one with the correct boundary conditions \eqref{eq:bdy-cond} leads to the final form of defect-channel conformal blocks:
\begin{align}
\label{eq:cod2-def-block}
\begin{split}
\hat f_{\hat\Delta,s}(x, \xb)
& = x^{(\Dh-s)/2} \xb^{(\Dh+s)/2}
{}_2F_1 \big( \Dh,d/2-1;\Dh + 2 - d/2; x \xb \big) \,.
\end{split}
\end{align}
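As a consistency check (ours, not part of the original derivation), one can verify numerically that the radial part of \eqref{eq:cod2-def-block} solves the Casimir equation above. A minimal mpmath sketch with generic sample values of $d$ and $\Dh$:

```python
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 40  # high precision for the numerical derivatives

d, Dh = mpf("3.6"), mpf("2.3")  # generic, non-degenerate sample values

def g(y):
    # radial part of the defect block (cod2-def-block), y = x*xb
    return y**(Dh/2) * hyp2f1(Dh, d/2 - 1, Dh + 2 - d/2, y)

def casimir_residual(y):
    # (1-y) y^2 g'' - (1/2) y (d y + d - 4) g' - (1/4) Dh (Dh-d+2) (1-y) g
    return ((1 - y) * y**2 * diff(g, y, 2)
            - y * (d*y + d - 4) / 2 * diff(g, y, 1)
            - Dh * (Dh - d + 2) * (1 - y) / 4 * g(y))

y0 = mpf("0.4")
assert abs(casimir_residual(y0)) < 1e-12 * abs(g(y0))
```

The boundary condition \eqref{eq:bdy-cond} selects this solution over the shadow solution with $\Dh \to d - 2 - \Dh$.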
\paragraph{Monodromy defects:}
Even though this section applies to arbitrary codimension-two defects, let us return momentarily to monodromy defects.
Since we work in Euclidean signature, the cross ratios are complex conjugates of each other $x^* = \xb$, and moving $\phi(x_1)$ around the defect corresponds to analytically continuing $x$ ($\xb$) around the origin counterclockwise (clockwise).
Together with \eqref{eq:monodromy-def}, we conclude that our correlation function must satisfy the boundary condition
\begin{align}
\label{eq:twist-def}
{\mathcal{G}}(x^\circlearrowleft, \xb^\circlearrowright) = e^{+2\pi i v} {\mathcal{G}}(x, \xb) \, .
\end{align}
The monodromy \eqref{eq:twist-def} combined with the form of the defect block \eqref{eq:cod2-def-block} requires the defect spectrum to consist of non-integer transverse spins:
\begin{align}
\label{eq:monodromy-defect-spins}
s = -v + n \quad \text{for} \quad n \in \mathbb Z \, .
\end{align}
This observation will be important in modifying the Lorentzian inversion formula in section \ref{sec:inv-form}, and in the study of monodromy defects starting in section \ref{sec:gff-non-susy}.
\subsubsection{Bulk channel}
\label{sec:bulk-blocks}
Let us turn to the bulk-channel expansion, where the product $\phi(x_1) \bar \phi(x_2)$ is expanded in an infinite sum of bulk operators by means of the usual operator product expansion (OPE).
In the frame of equation \eqref{eq:cross-ratios-plane}, the second operator is located at $x = \xb = 1$, so the bulk-channel expansion dominates in the regime $(1-x)(1-\xb) \to 0$.
Since $\phi$ and $\bar \phi$ are distinct operators, the bulk OPE contains both even- and odd-spin operators.
As is customary, we use index-free notation ${\mathcal{O}}^{(\ell)}(x, u) = {\mathcal{O}}^{\mu_1 \ldots \mu_\ell}(x) u_{\mu_1} \ldots u_{\mu_\ell}$ and assume the following normalization for the OPE\footnote{The awkward factor $2^{\ell/2}$ leads to four-point blocks normalized as $g_{\Delta,\ell}(z,\zb) \sim z^{(\Delta-\ell)/2} \zb^{(\Delta+\ell)/2}$ in the lightcone limit.}
\begin{align}
\begin{split}
& \phi(x_1) \bar \phi(x_2)
\sim \sum_{{\mathcal{O}}^{(\ell)}} \lambda_{12{\mathcal{O}}} \, 2^{\ell/2}
\frac{{\mathcal{O}}^{(\ell)}(x_2, x_{12})}{x_{12}^{\Delta_1+\Delta_2-\Delta+\ell}}
+ \ldots \, ,
\end{split}
\end{align}
where we keep the leading order in the bulk OPE limit $x_{12}^2 \to 0$.
For general defects, only even-spin operators can have one-point functions \cite{Billo:2016cpy}.
However, a peculiarity of codimension-two defects is that odd-spin operators can also have one-point functions:
\begin{align}
\label{eq:one-pt-funcs}
\begin{split}
\ell \text{ even:} \qquad
& \langle {\mathcal{O}}^{(\ell)}(x, u) \rangle
= \frac{2^{\ell/2} a_{\mathcal{O}}}{|x^i|^{\Delta}}
\left( \frac{(x^i u^i)^2}{|x^i|^{2}} - u^i u^i \right)^{\ell/2} \, , \\
\ell \text{ odd:} \qquad
& \langle {\mathcal{O}}^{(\ell)}(x,u) \rangle
= - \frac{i \, 2^{\ell/2} a_{\mathcal{O}} \, \veps_{ij} u^i u^j}{|x^i|^{\Delta+1}}
\left( \frac{(x^i u^i)^2}{|x^i|^{2}} - u^i u^i \right)^{(\ell-1)/2} \, .
\end{split}
\end{align}
Here $i,j=1,2$ are indices in the two directions orthogonal to the defect, and $\veps_{ij}$ is the two-index antisymmetric tensor, which is an allowed tensor structure for codimension-two defects.
Combining the bulk OPE with the form of the one-point function gives the leading order behavior of blocks with even and odd spin:
\begin{align}
f_{\Delta,\ell}(x, \xb)
\sim \big[ (1-x)(1-\xb) \big]^{(\Delta-\ell)/2} (x - \xb)^\ell \, ,
\qquad x, \xb \to 1 \, .
\end{align}
It is perhaps surprising that odd-spin bulk blocks are antisymmetric under $x \leftrightarrow \xb$, but it is a direct consequence of the existence of parity-odd one-point functions \eqref{eq:one-pt-funcs}.
It is also interesting to consider the normalization of bulk blocks in the lightcone limit
\begin{align}
\label{eq:lightcone-asymptotics}
f_{\Delta,\ell}(x, \xb)
= \left\{
\begin{array}{ll}
(1-x)^{(\Delta-\ell)/2} (1-\xb)^{(\Delta+\ell)/2} &
\qquad 0 < 1-x \ll 1-\xb \ll 1 \, , \\
(-1)^\ell (1-\xb)^{(\Delta-\ell)/2} (1-x)^{(\Delta+\ell)/2} &
\qquad 0 < 1-\xb \ll 1-x \ll 1 \, .
\end{array}
\right.
\end{align}
As before, the full dependence of $f_{\Delta,\ell}$ on the cross-ratios can be obtained by solving the Casimir differential equation, which has been worked out in \cite{Billo:2016cpy,Isachenkov:2018pef}.
We are interested in the codimension-two case, when the differential operator in $x,\xb$ coordinates reads
\begin{align}
\label{eq:cas-eq}
\begin{split}
& \left(
D_x
+ D_\xb
+ (d-2) \frac{(1-x)(1-\xb)}{1 - x \xb}
\big( x \partial_x + \xb \partial_\xb \big)
- \frac{1}{2} c_2
\right) f_{\Delta,\ell}(x,\xb)
= 0 \, , \\
& D_x
= (1-x)^2 x \partial^2_x
+ (1-x)^2 \partial_x \, ,
\end{split}
\end{align}
and the Casimir eigenvalue is $c_2 = \Delta(\Delta-d) + \ell (\ell + d -2)$.
The similarity of \eqref{eq:cas-eq} with the Dolan and Osborn differential operator \cite{Dolan:2003hv,Dolan:2011dv} is apparent.
Indeed, it was originally pointed out in \cite{Billo:2016cpy} that in terms of $z,\zb$ coordinates
\begin{align}
\label{eq:map-xz}
x = 1-z \, , \qquad
\xb = (1 - \zb)^{-1} \, ,
\end{align}
the two differential operators are the same.
By comparing the lightcone limit of the defect block \eqref{eq:lightcone-asymptotics} with the lightcone limit of four-point blocks, we obtain the precise mapping
\begin{align}
\label{eq:map-4pt-def-blocks}
f_{\Delta,\ell}(x, \xb)
= (-1)^{-(\Delta+\ell)/2}
g_{\Delta,\ell}\left(1-x, \frac{\xb-1}{\xb}\right).
\end{align}
Our discussion makes it clear that this relation is valid both for even- and odd-spin bulk operators.
In the four-dimensional case, which is relevant for the present work, simple closed-form expressions for the four-point blocks are known \cite{Dolan:2000ut}, which in the defect case map to
\begin{align}
\label{eq:bulk-4d-blocks}
\begin{split}
f_{\Delta,\ell}(x, \xb)
& = \frac{(1-x)(1-\xb)}{1 - x\xb} \Big(
k_{\Delta-\ell-2}^{0,0}(1-x) k_{\Delta+\ell}^{0,0}(1-\xb)
+ (-1)^\ell \big( x \leftrightarrow \xb \big)
\Big) \, , \\
k^{r,s}_\beta(x)
& = x^{\beta/2}
{}_2F_1 \left( \frac{\beta-r}{2}, \frac{\beta+s}{2}; \beta; x \right) \, .
\end{split}
\end{align}
It is easy to check that this is normalized according to \eqref{eq:lightcone-asymptotics}.
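This check is easily automated. The sketch below (our code, using mpmath) verifies the lightcone normalization \eqref{eq:lightcone-asymptotics} and the $(-1)^\ell$ behavior of \eqref{eq:bulk-4d-blocks} under $x \leftrightarrow \xb$:

```python
from mpmath import mp, mpf, hyp2f1

mp.dps = 40

def k(beta, w):
    # SL(2) block k^{0,0}_beta of eq. (bulk-4d-blocks)
    return w**(beta/2) * hyp2f1(beta/2, beta/2, beta, w)

def f4d(D, l, x, xb):
    # d=4 bulk-channel block, eq. (bulk-4d-blocks)
    pre = (1 - x) * (1 - xb) / (1 - x*xb)
    return pre * (k(D - l - 2, 1 - x) * k(D + l, 1 - xb)
                  + (-1)**l * k(D - l - 2, 1 - xb) * k(D + l, 1 - x))

D = mpf("5.3")

# odd spin is antisymmetric: f(xb, x) = (-1)^l f(x, xb)
a, b = mpf("0.3"), mpf("0.6")
assert abs(f4d(D, 3, b, a) + f4d(D, 3, a, b)) < 1e-30

# lightcone limit 0 < 1-x << 1-xb << 1: f ~ (1-x)^{(D-l)/2} k_{D+l}(1-xb)
l, eps, xb = 2, mpf("1e-8"), mpf("0.9")
ratio = f4d(D, l, 1 - eps, xb) / (eps**((D - l)/2) * k(D + l, 1 - xb))
assert abs(ratio - 1) < 1e-4
```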
For general space-time dimensions $d$, one makes an ansatz of the form \cite{Simmons-Duffin:2016wlq}
\begin{align}
\label{eq:ligh-exp-bos-blocks}
f_{\Delta,\ell}(x, \xb)
= \sum_{n=0}^\infty \sum_{j=-n}^n A_{n,j}(\Delta,\ell)
(1-x)^{(\Delta-\ell)/2+n} k_{\Delta+\ell+2j}^{0,0}(1-\xb) \, ,
\end{align}
and fixes the coefficients recursively with the Casimir equation \eqref{eq:cas-eq}.
This process can be implemented efficiently using a computer.
For the sake of clarity, we present some low-lying coefficients:
\begin{align}
A_{0,0} (\Delta,\ell) = 1 \, , \qquad
A_{1,0} (\Delta,\ell) = \frac{\Delta -\ell}{4} \, , \qquad
A_{1,-1}(\Delta,\ell) = -\frac{(d-2) \ell}{2 \ell + d - 4} \, .
\end{align}
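As an independent check (ours, not in the original text), one can verify numerically that the closed-form $d=4$ blocks \eqref{eq:bulk-4d-blocks} are annihilated by the Casimir operator \eqref{eq:cas-eq}, including odd spin:

```python
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 40
d = 4

def k(beta, w):
    return w**(beta/2) * hyp2f1(beta/2, beta/2, beta, w)

def f4d(D, l, x, xb):
    # d=4 bulk block of eq. (bulk-4d-blocks)
    pre = (1 - x) * (1 - xb) / (1 - x*xb)
    return pre * (k(D - l - 2, 1 - x) * k(D + l, 1 - xb)
                  + (-1)**l * k(D - l - 2, 1 - xb) * k(D + l, 1 - x))

def casimir_residual(D, l, x, xb):
    f = lambda u, w: f4d(D, l, u, w)
    fx = diff(f, (x, xb), (1, 0)); fxx = diff(f, (x, xb), (2, 0))
    fxb = diff(f, (x, xb), (0, 1)); fxbxb = diff(f, (x, xb), (0, 2))
    Dx = (1 - x)**2 * x * fxx + (1 - x)**2 * fx
    Dxb = (1 - xb)**2 * xb * fxbxb + (1 - xb)**2 * fxb
    cross = (d - 2) * (1 - x)*(1 - xb)/(1 - x*xb) * (x*fx + xb*fxb)
    c2 = D*(D - d) + l*(l + d - 2)  # Casimir eigenvalue
    return Dx + Dxb + cross - c2/2 * f(x, xb)

D, l = mpf("6.3"), 3
x, xb = mpf("0.3"), mpf("0.6")
assert abs(casimir_residual(D, l, x, xb)) < 1e-12 * abs(f4d(D, l, x, xb))
```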
\subsection{Bulk-to-defect inversion formula}
\label{sec:inv-form}
The Lorentzian Inversion Formula (LIF) \cite{Caron-Huot:2017vep,Simmons-Duffin:2017nub} is a central tool for the analytic bootstrap program.
In the presence of defects, one can consider a bulk-to-defect LIF \cite{Lemos:2017vnx} and a defect-to-bulk LIF \cite{Liendo:2019jpu}.
The bulk-to-defect LIF is of particular importance in this work, as will become clear in subsequent sections.
For codimension-two defects, we need a small extension of the formula presented in \cite{Lemos:2017vnx} which we outline below, and we refer the reader to \cite{Lemos:2017vnx} for further details.
\subsubsection{Derivation}
The LIF of \cite{Lemos:2017vnx} was derived assuming that the correlator ${\mathcal{G}}(x,\xb)$ is a symmetric function of $x,\xb$, which is true when the external scalars are identical and the theory preserves parity. In our setup, the bulk expansion generically contains even- and odd-spin blocks, which are symmetric and antisymmetric respectively, so the full correlator has no definite symmetry. Furthermore, our derivation is valid for non-integer values of $s$, which is the relevant situation for monodromy defects.
The central object of this discussion is the function $\mu(\Delta,s)$, which encodes dimensions of defect operators as poles and their OPE coefficients as residues:
\begin{align}
\label{eq:residue}
\mu_{\Dh_*, s} \equiv b^2_{\Dh_*, s} = - \text{Res}_{\Dh = \Dh_*} \mu(\Dh, s) \, .
\end{align}
Let us introduce coordinates $x = rw$ and $\xb = r/w$, which in Euclidean signature correspond to a radial coordinate $r$ and a phase $w$.
The conformal block \eqref{eq:cod2-def-block} can be decomposed as $\hat f_{\Dh,s}(r,w) = w^{-s} \hat f_{\Dh}(r)$, and the correlation function admits a partial wave expansion
\begin{align}
\label{eq:partial-wave-decomp}
{\mathcal{G}}(r, w)
= \sum_{s} \int_{p/2-i\infty}^{p/2+i\infty}
\frac{d\Dh}{2\pi i} \mu(\Dh, s) w^{-s} \Psi_\Dh(r) \, , \quad
\Psi_{\Dh}(r)
\equiv \frac{1}{2} \left(
\hat f_{\Dh}
+ \frac{K_{p-\Dh}}{K_{\Dh}} \hat f_{p-\Dh} \right) ,
\end{align}
where the sum runs over all $-\infty < s < \infty$ and we introduced $K_{\Dh} = \Gamma(\Dh) / \Gamma(\Dh - p/2)$ and $p = d-2$.
When the partial wave $\Psi_{\Dh}(r)$ has dimension $\Dh = p/2 + i \nu$ it obeys an orthonormality relation \cite{Lemos:2017vnx}:
\begin{align}
\begin{split}
& \int_0^1 dr \frac{(1-r^2)^{d-2}}{r^{d-1}} \Psi_{\Dh_1}(r) \Psi_{\Dh_2}(r)
= \frac{\pi}{2} \frac{K_{p-\Dh_2}}{K_{\Dh_1}}
[ \delta(\nu_1 - \nu_2) + \delta(\nu_1 + \nu_2) ] \, .
\end{split}
\end{align}
Furthermore, we assume the defect spectrum is such that the transverse spins are integer separated $s_1 - s_2 \in \mathbb{Z}$.
In this case, we have the orthonormality relation
\begin{align}
& \oint \frac{dw}{2\pi i w} w^{s_1-s_2} = \delta_{s_1,s_2} \, ,
\end{align}
where the integral is along the unit circle $|w| = 1$.
Combining the partial wave decomposition \eqref{eq:partial-wave-decomp} with the orthonormality of our basis, one readily obtains the Euclidean inversion formula:
\begin{align}
\mu(\Dh, s)
= \frac{2 K_\Dh}{K_{p-\Dh}}
\oint \frac{dw}{2\pi i w} w^{s}
\int_0^1 dr \frac{(1-r^2)^{d-2}}{r^{d-1}} \Psi_\Dh(r) {\mathcal{G}}(r, w) \, .
\end{align}
Let us stress that this formula is only valid for physical values of the transverse spin $s$.
Now we would like to deform the integration contour of $w$ into Lorentzian kinematics, leading to a formula analytic in $s$.
However, in order to deform the contour safely, one needs the asymptotic behavior of ${\mathcal{G}}(r, w)$ for large and small $w$:
\begin{align}
\label{eq:regge-behavior}
{\mathcal{G}}(r, w) \lesssim w^{-s^*_+} \quad \text{as} \quad w \to 0 \, , \qquad
{\mathcal{G}}(r, w) \lesssim w^{ s^*_-} \quad \text{as} \quad w \to \infty \, .
\end{align}
Then we conclude that for $s > s^*_+$ we can contract the contour towards the origin picking up a discontinuity around the cut $w \in [0, r]$.
Similarly, for $s < -s^*_-$ we blow up the contour to infinity, picking up a discontinuity around the cut $w \in [1/r, \infty]$.
We then rewrite the resulting integral in terms of $x,\xb$, and keep only poles in $\mu(\Dh,s)$ corresponding to the exchanged operator and not its shadow.
After the dust settles, we obtain the bulk-to-defect Lorentzian inversion formula in its final form:
\begin{align}
\label{eq:inv-formula}
\mu(\Dh, s)
= \begin{cases}
\int_0^1 dx \int_1^{1/x} d\xb \, I_{\Dh,s}(x, \xb) \Disc_\xb {\mathcal{G}}(x, \xb)
\quad \text{for} \quad s>s^*_+ \\
\int_0^1 d\xb \int_1^{1/\xb} dx \, I_{\Dh,s}(x, \xb) \Disc_x {\mathcal{G}}(x, \xb)
\quad \text{for} \quad s<-s^*_- \\
\end{cases} .
\end{align}
In the above formula, the integration kernel and discontinuities are given by:
\begin{align}
\label{eq:details-lif}
\begin{split}
& I_{\Dh, s}(x, \xb)
= \frac{1}{4 \pi i}
\, x^{-\frac{\Dh-s+2}{2}} \xb^{-\frac{\Dh+s+2}{2}} (1 - x \xb)
{}_2F_1 \bigg( \! {\begin{array}{c c}
{1-\Dh, 2-d/2} \\
{d/2-\Dh}
\end{array}; x \xb} \bigg) \, , \\
& \Disc_x {\mathcal{G}}(x, \xb)
= {\mathcal{G}}(x + i 0, \xb) - {\mathcal{G}}(x - i0, \xb) \, , \\
& \Disc_\xb {\mathcal{G}}(x, \xb)
= {\mathcal{G}}(x, \xb + i 0) - {\mathcal{G}}(x, \xb - i0) \, .
\end{split}
\end{align}
This is equal to the inversion formula obtained in \cite{Lemos:2017vnx} for $s>s^*_+$, but one has to exchange the role of $x \leftrightarrow \xb$ to obtain the defect CFT data for $s<-s^*_-$.
The difference arises because \cite{Lemos:2017vnx} assumed that the correlator ${\mathcal{G}}(x,\xb)$ is a symmetric function of $x$, $\xb$, which is true for defects of codimension greater than two and for codimension-two defects without parity-odd operators.
Instead, here we focus on codimension two and allow ${\mathcal{G}}(x,\xb)$ to have no definite symmetry.
As we will see in section \ref{sec:wess-zumino-defect}, this extension of the original LIF is necessary for applications in the Wess-Zumino model.
Let us also mention that for the particular case when ${\mathcal{G}}(x,\xb)$ is symmetric, the LIF can be simplified.
Indeed, for symmetric correlators equation \eqref{eq:regge-behavior} implies $s_-^* = s_+^* \equiv s^*$ and the two contributions in the inversion formula can be combined:
\begin{align}
\label{eq:inv-formula-even}
\mu(\Dh, s)
= \int_0^1 dx \int_1^{1/x} d\xb \, I_{\Dh,|s|}(x, \xb) \Disc_\xb {\mathcal{G}}(x, \xb)
\;\; \text{for} \;\; |s|>s^* \;\; \text{and} \;\; {\mathcal{G}}(x,\xb) = {\mathcal{G}}(\xb,x) \, .
\end{align}
The advantage is that now one recovers the positive and negative transverse-spin trajectories at the same time.
\subsubsection{Applications}
\label{sec:lif-applications}
Let us also briefly discuss how to use the inversion formula in practice.
The inversion formula uses the discontinuity across branch cuts that start at $x,\xb=1$.
It is thus possible to compute this discontinuity term by term using the bulk-channel expansion, which is an expansion in powers of $(1-x)$, $(1-\xb)$.
It follows from section \ref{sec:bulk-blocks} that bulk blocks have the structure
\begin{align}
\label{eq:red-blocks}
f_{\Delta,\ell}(x,\xb)
& = \big[ (1-x)(1-\xb) \big]^{(\Delta-\ell)/2} \tilde f_{\Delta,\ell}(x, \xb) \, .
\end{align}
Here the prefactor is possibly non-analytic around $x,\xb = 1$, while $\tilde f_{\Delta,\ell}$ is analytic at $x,\xb = 1$.
Equivalently, $\tilde f_{\Delta,\ell}$ admits a convergent power series in integer powers of $1-x$ and $1-\xb$:
\begin{align}
\tilde f_{\Delta,\ell}(x, \xb)
= \sum_{n,m \ge 0} k_{n,m} (1-x)^n (1-\xb)^m \, .
\end{align}
As a result, the discontinuity picks only the contribution from the prefactor in \eqref{eq:red-blocks}, so focusing on $\Disc_\xb$ for concreteness
\begin{align}
\label{eq:compute-disc}
\begin{split}
\Disc_\xb {\mathcal{G}}(x,\xb)
&= \Disc_\xb \left( \frac{\sqrt{x \xb}}{(1-x)(1-\xb)} \right)^{\Dp}
\sum_{\Delta,\ell} c_{\mathcal{O}} f_{\Delta,\ell}(x,\xb) \\
&= \left( \frac{\sqrt{x \xb}}{(1-x)} \right)^{\Dp}
\sum_{\Delta,\ell} c_{\mathcal{O}} \, (1-x)^{\frac{\Delta-\ell}{2}} \tilde f_{\Delta,\ell}(x,\xb)
\Disc_\xb (1-\xb)^{\frac{\Delta-\ell}{2} - \Dp} \, .
\end{split}
\end{align}
There are two ways in which $\Disc_\xb (1-\xb)^{\alpha}$ can be non-vanishing: either ${\alpha}$ is non-integer, or $\alpha = -n$ is a negative integer.
In these two cases the discontinuity reads
\begin{align}
\label{eq:disc-cases}
\begin{split}
& \Disc_\xb (1-\xb)^{\alpha} = -2i \sin (\pi {\alpha}) \, (\xb-1)^{\alpha}
\quad \text{for} \quad \alpha \notin \mathbb{Z} \, , \\
& \Disc_\xb \frac{1}{(1-\xb)^n} = \frac{2 \pi i}{(n-1)!} \, \delta^{(n-1)}(\xb-1)
\quad \text{for} \quad n \in \mathbb{N}_+ \, .
\end{split}
\end{align}
The first formula follows straightforwardly from the definition of discontinuity \eqref{eq:details-lif}, while the second can be justified by integrating against a test function, see for example (3.7) in \cite{Bissi:2019kkx}.
All in all, comparing \eqref{eq:compute-disc} and \eqref{eq:disc-cases}, it is clear that only two classes of bulk operators contribute to the inversion formula:
\begin{enumerate}
\item Operators below the double-twist dimension $\Delta < 2 \Dp + \ell$.
The most important example of this kind is the bulk identity $\Delta = \ell = 0$, which is present in any CFT.
This contribution will be studied in detail in section \ref{sec:gff-non-susy}.
Other examples are single-trace operators in large-$N$ CFTs \cite{Barrat:2021yvp}, but they play no role in the present paper.
\item Double-twist operators with anomalous dimension $\Delta = 2 \Dp + \ell + 2n + \gamma$. These operators are the ones that will contribute in our study of the Wilson-Fisher and Wess-Zumino models in subsequent sections.
\end{enumerate}
Summarizing, the LIF kills bulk operators with exact double-twist dimension $\Delta = 2\Dp + \ell + 2n$.
This is ultimately the reason why the LIF is so powerful.
\subsection{GFF monodromy defect}
\label{sec:gff-non-susy}
Having developed the necessary techniques, we are ready to study monodromy defects using analytic bootstrap.
We start with a generalized free field (GFF) $\phi(x)$ of dimension $\Dp$.
It is well known that the bulk spectrum of GFF consists of the identity and double-twist operators $\Delta_{\ell,n} = 2\Dp + \ell + 2n$, and we just discussed that these do not contribute to the inversion formula.
As a result, we can reconstruct the full defect CFT data from the discontinuity of the bulk identity:
\begin{align}
\Disc_\xb {\mathcal{G}}(x, \xb)
= \Disc_\xb \left( \frac{\sqrt{x \xb}}{(1-x)(1-\xb)}\right)^\Dp
= 2i \sin (\pi \Dp) \left( \frac{\sqrt{x}}{1-x}\right)^\Dp
\left( \frac{\sqrt{\xb}}{\xb-1}\right)^\Dp .
\end{align}
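The branch structure here is easy to check numerically: evaluating the correlator slightly above and below the cut at $\xb > 1$, with principal branches throughout, reproduces the quoted discontinuity. A small mpmath sketch (ours):

```python
from mpmath import mp, mpf, mpc, sin, pi, sqrt

mp.dps = 40

Dphi = mpf("0.37")
x, xb = mpf("0.3"), mpf("1.5")
delta = mpf("1e-25")  # distance from the cut

def G(x, xb):
    # bulk-identity contribution, principal branches throughout
    return (sqrt(x*xb))**Dphi / ((1 - x)**Dphi * (1 - xb)**Dphi)

# Disc_xb G = G(xb + i0) - G(xb - i0)
disc = G(x, xb + mpc(0, 1)*delta) - G(x, xb - mpc(0, 1)*delta)
expected = (2j * sin(pi*Dphi)
            * (sqrt(x)/(1 - x))**Dphi * (sqrt(xb)/(xb - 1))**Dphi)
assert abs(disc - expected) < 1e-20 * abs(expected)
```

Note that the resulting discontinuity, once inserted in the inversion formula with the kernel \eqref{eq:details-lif}, produces manifestly positive coefficients for $0 < \Dp < 1$, as required by $\mu_{s,n} = b^2_{\Dh,s}$.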
Plugging the discontinuity in the LIF \eqref{eq:inv-formula-even}, one can obtain the defect spectrum and the OPE coefficients.
This is worked out in detail in \cite{Lemos:2017vnx}; the main result is that the defect spectrum is given by $\Delta_{s,n} = \Dp + |s| + 2n$ with the following OPE coefficients:
\begin{align}
\label{eq:mft-defect}
\mu_{s,n}^\GFF(\Dp, d)
= \frac{( \Dp + 1 - d/2 )_n (\Dp)_{2 n+|s|}}{n! (n+|s|)! (\Dp+n+|s|+1-d/2 )_n} \, .
\end{align}
For now we assume that the LIF converges down to $s = 0$, and we come back to the problem of convergence in section \ref{sec:altern-bc}.
We would like to use the defect data, which is analytic in $s$, to consider a monodromy defect in a bulk GFF.
As pointed out around equation \eqref{eq:monodromy-defect-spins}, one obtains a monodromy defect by allowing the transverse spin to take non-integer values $s \in -v+\mathbb{Z}$.
Since we know the full defect CFT data, we can try to resum it and obtain the full correlation function:
\begin{align}
\begin{split}
{\mathcal{G}}^\GFF_{\Dp,d,v}(x, \xb)
= \sum_{n=0}^\infty \sum_{s \in \mathbb Z -v}
\mu_{s,n}^\GFF(\Dp, d)
\hat f_{\Dp + |s| + 2n, s}(x, \xb) \, .
\end{split}
\end{align}
As a consistency check, we note that the trivial case with no monodromy defect, $v = 0$, resums to the bulk identity as one would expect:
\begin{align}
{\mathcal{G}}^\GFF_{\Dp,d,v=0}(x, \xb)
= \sum_{m=0}^\infty \sum_{s\in \mathbb Z}
\mu_{s,m}^\GFF(\Dp, d)
\hat f_{\Dp + s + 2m, s}(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x)(1-\xb)}\right)^\Dp \, .
\end{align}
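This resummation can be verified numerically by truncating the sums. The following sketch (ours) checks it for generic $\Dp$ and $d$, with $x$, $\xb$ taken real and independent so that both sums converge quickly:

```python
from mpmath import mp, mpf, hyp2f1, rf, factorial, sqrt

mp.dps = 30

Dphi, d = mpf("1.1"), mpf("3.4")
x, xb = mpf("0.2"), mpf("0.5")

def mu_gff(s, n):
    # GFF defect OPE coefficients, eq. (mft-defect)
    a = abs(s)
    return (rf(Dphi + 1 - d/2, n) * rf(Dphi, 2*n + a)
            / (factorial(n) * factorial(n + a)
               * rf(Dphi + n + a + 1 - d/2, n)))

def block(Dh, s, x, xb):
    # defect-channel block, eq. (cod2-def-block)
    return (x**((Dh - s)/2) * xb**((Dh + s)/2)
            * hyp2f1(Dh, d/2 - 1, Dh + 2 - d/2, x*xb))

# v = 0: the sum over integer transverse spin resums to the bulk identity
total = mpf(0)
for n in range(50):
    for s in range(-70, 71):
        total += mu_gff(s, n) * block(Dphi + abs(s) + 2*n, s, x, xb)

identity = (sqrt(x*xb) / ((1 - x)*(1 - xb)))**Dphi
assert abs(total/identity - 1) < 1e-10
```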
In the sections below, we consider three simple cases where the two-point correlator ${\mathcal{G}}(x,\xb)$ can also be obtained in closed form.
\subsubsection{Free theory monodromy defect}
\label{sec:free-monodromy-def}
The first simplification is to consider free bulk fields, which have conformal dimension $\Delta_\phi^\free = (d-2)/2$.
In this case only the leading transverse-twist trajectory $n=0$ contributes to the defect expansion, see \eqref{eq:mft-defect}.
Ideally we would like to find
$ {\mathcal{G}}_{d,v}^\free(x, \xb) \equiv {\mathcal{G}}_{(d-2)/2, d,v}^\GFF(x, \xb)$ for general values of $d$ and $v$, but this turns out to be hard.\footnote{After this paper was submitted to the \texttt{arXiv}, we have been made aware by Y. Linke that there exists a closed form expression for ${\mathcal{G}}_{d,v}^\free(x, \xb)$ in terms of Appell $F_1$ functions. The precise formula can be provided by the authors upon request.}
Fortunately, for even spacetime dimension $d = 4, 6, \ldots$ the calculation simplifies dramatically and one can obtain closed form expressions.
For example, the $d = 4$ correlator is \cite{Giombi:2021uae}
\begin{align}
\begin{split}
\label{eq:free-4d-corr}
{\mathcal{G}}^\free_{4,v}(x,\xb)
& = \frac{\sqrt{x \xb}}{(1-x) (1-\xb)}
\frac{(1-\xb) x^v + (1-x) \xb^{1-v}}{1-x \xb} \, .
\end{split}
\end{align}
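Equation \eqref{eq:free-4d-corr} can be checked against the defect-channel resummation: in $d=4$ the free field has $\Dp = 1$, only $n=0$ survives in \eqref{eq:mft-defect}, and all OPE coefficients equal one. A short numerical sketch (ours):

```python
from mpmath import mp, mpf, sqrt

mp.dps = 30

v = mpf("0.3")
x, xb = mpf("0.2"), mpf("0.5")

# In d=4 the defect blocks collapse to x^{(Dh-s)/2} xb^{(Dh+s)/2}/(1-x*xb),
# since 2F1(Dh, 1; Dh; y) = 1/(1-y); here Dh = 1 + |s| and s runs over Z - v.
total = mpf(0)
for m in range(-200, 201):
    s = m - v
    Dh = 1 + abs(s)
    total += x**((Dh - s)/2) * xb**((Dh + s)/2) / (1 - x*xb)

closed = (sqrt(x*xb) / ((1 - x)*(1 - xb))
          * ((1 - xb)*x**v + (1 - x)*xb**(1 - v)) / (1 - x*xb))
assert abs(total/closed - 1) < 1e-10
```

The two branches of the sum, $s > 0$ and $s < 0$, are geometric series that produce the $x^v$ and $\xb^{1-v}$ terms of \eqref{eq:free-4d-corr} respectively.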
Similar expressions, though more lengthy, can be obtained for higher even values of $d$.
Keeping only the leading terms as $x \to 1$, the expressions simplify and it is possible to guess a formula for the correlator which is analytic in $d$
\begin{align}
\label{eq:free-lightcone-corr}
{\mathcal{G}}_{d,v}^\free(x, \xb)
= & \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{\Delta^\free_\phi}
\bigg(1 + \\
& + C_{d,v}^\free \big( (1-x)(1-\xb) \big)^{\Delta^\free_\phi}
\bigg[ {}_2F_1
\bigg( \! \begin{array}{*{20}{c}}
{\Delta^\free_\phi, \Delta^\free_\phi+v } \\
{d-1}
\end{array};1-\xb \bigg) + O(1-x) \bigg] \bigg) \, , \nonumber
\end{align}
where we introduced the constants
\begin{align}
C^\GFF_{\Dp,d,v}
= -\frac{\Gamma (\Dp+1-v) \Gamma (\Dp+1+v)}{(\Dp+v) \Gamma (2 \Dp+1) \Gamma (v) \Gamma (1-v)} \, , \qquad
C^\free_{d,v} = C^\GFF_{(d-2)/2,d,v} \, .
\end{align}
Even though \eqref{eq:free-lightcone-corr} has been obtained by non-rigorous means, it passes a number of non-trivial consistency checks.
It is correct for any even $d = 4,6,\ldots$, it is consistent with the result \cite{Liendo:2019jpu} for $v=1/2$ and general $d$, and it is consistent with the result \eqref{eq:mft-oeps} in $d=4-\veps$ dimensions.
The power of equation \eqref{eq:free-lightcone-corr} is that it captures all the bulk CFT data.
Indeed, since the bulk theory is free, the spectrum consists of double-twist operators $\Delta_{\ell,0} = 2 \Delta_\phi^\free + \ell$, namely
\begin{align}
\label{eq:free-bulk-exp}
{\mathcal{G}}^\free_{d,v}(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{(d-2)/2} \left(
1
+ \sum_{\ell=0}^\infty c^\free_\ell f_{\ell+d-2,\ell}(x, \xb)
\right) \, .
\end{align}
Here we remind the reader that we use the shorthand notation $c_{{\mathcal{O}}} = \lambda_{\phi\bar\phi{\mathcal{O}}} a_{\mathcal{O}}$.
Using the bulk blocks \eqref{eq:ligh-exp-bos-blocks} and comparing \eqref{eq:free-lightcone-corr}-\eqref{eq:free-bulk-exp} at leading order in $(1-x)$, one can obtain the bulk CFT data order by order in $(1-\xb)$.
For the first few coefficients we find
\begin{align}
\label{eq:first_cL_coeffs}
\begin{split}
c_0^\free & = C^\free_{d,v}, \\
c_1^\free & = \frac{(d-2) (2 v-1)}{4 (d-1)} C^\free_{d,v}, \\
\end{split}
\begin{split}
c_2^\free & = \frac{(d-2) (v-1) v}{8 (d-1)} C^\free_{d,v}, \\
c_3^\free & = \frac{(d-2) (d+2) (v-1) v (2 v-1)}{96 (d-1) (d+1)} C^\free_{d,v} \, .
\end{split}
\end{align}
The first three coefficients are in perfect agreement with the explicit calculation of \cite{Giombi:2021uae} up to a difference in normalization.\footnote{The value of $c_3$ also agrees with \texttt{v2} of \cite{Giombi:2021uae}.}
The main advantage of knowing the correlation function is that we can extract the bulk data for very high values of the spin $\ell$.
In doing so, we observed that the CFT data satisfies a simple two-step recursion relation
\begin{align}
\label{eq:non-susy-rec}
c_{\ell+2}^\free
= \frac{(2 v-1) (d+2 \ell)}{4 (\ell+2) (d+\ell)} c_{\ell+1}^\free
+ \frac{(\ell-1) (d+\ell-3) (d+2 \ell-2) (d+2 \ell)}
{16 (\ell+2) (d+\ell) (d+2 \ell-3) (d+2 \ell-1)} c_\ell^\free \, ,
\end{align}
with the initial conditions as given in \eqref{eq:first_cL_coeffs}.\footnote{For $d=4$ we managed to obtain a closed-form expression by inverting the exact correlator \eqref{eq:free-4d-corr}:
\begin{align}
\begin{split}
c_\ell^\free
\stackrel{d=4}{=} &
\frac{\Gamma (\ell-1) \Gamma (\ell+1)^2 \sin ^2(\pi v)}
{2^{4 \ell+1} \pi \Gamma (\ell+\frac{1}{2})
\Gamma(\ell+\frac{3}{2}) \Gamma(\ell-v+1) \Gamma (\ell+v)}
\Bigg[ \\
& \quad \Gamma (2-v) \Gamma (\ell+v) \,
{}_3F_2\left( {\begin{array}{*{20}{c}}
{\ell+1, \ell+1, \ell-1 } \\
{2(\ell+1), \ell-v+1}
\end{array};1} \right)
+ (-1)^\ell \big( v \leftrightarrow 1-v \big)
\Bigg].
\end{split}
\end{align}
}
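The recursion \eqref{eq:non-susy-rec} can be checked symbolically against \eqref{eq:first_cL_coeffs}: running it once from $(c_0^\free, c_1^\free)$ and once from $(c_1^\free, c_2^\free)$ reproduces $c_2^\free$ and $c_3^\free$ for arbitrary $d$ and $v$. A short sympy sketch (ours; the overall factor $C^\free_{d,v}$ drops out):

```python
import sympy as sp

d, v = sp.symbols("d v", positive=True)

# coefficients of eq. (first_cL_coeffs), stripped of the common factor C
c = {0: sp.Integer(1),
     1: (d - 2)*(2*v - 1)/(4*(d - 1)),
     2: (d - 2)*(v - 1)*v/(8*(d - 1)),
     3: (d - 2)*(d + 2)*(v - 1)*v*(2*v - 1)/(96*(d - 1)*(d + 1))}

def step(l, c_l, c_l1):
    # one step of the recursion (non-susy-rec): returns c_{l+2}
    return ((2*v - 1)*(d + 2*l)/(4*(l + 2)*(d + l)) * c_l1
            + (l - 1)*(d + l - 3)*(d + 2*l - 2)*(d + 2*l)
              / (16*(l + 2)*(d + l)*(d + 2*l - 3)*(d + 2*l - 1)) * c_l)

assert sp.simplify(step(0, c[0], c[1]) - c[2]) == 0
assert sp.simplify(step(1, c[1], c[2]) - c[3]) == 0
```

Note that for $\ell = 1$ the second term of the recursion vanishes, so $c_3^\free$ is fixed by $c_2^\free$ alone.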
\subsubsection{Alternate boundary condition}
\label{sec:altern-bc}
The inversion formula predicts $\Delta_s = \Delta_\phi^\free + |s|$ for the free theory defect spectrum.
However, as we pointed out in section \ref{sec:inv-form}, this result only holds for spins $|s| > s_*$, where the threshold spin $s_*$ cannot be fixed from the bootstrap perspective.
In this subsection, we relax the assumption $s_* = 0$ for defects in free theories, which we show is related to continuing the correlator as $v \to v + n$ for $n \in \mathbb{Z}$.
In a free theory, the bulk equations of motion imply the defect spectrum is of the form
\begin{align}
\Delta^\pm_s
= \frac{d-2}{2} \pm |s| \, .
\end{align}
The positive modes $\Delta^+_s$ are given by the inversion formula, while the negative modes $\Delta^-_s$ can arise as low transverse-spin ambiguities for $|s| < s_*$.\footnote{In the setup of \cite{Giombi:2021uae}, the values $\Delta^\pm_s$ correspond to the two possible boundary conditions certain KK modes can have on the boundary of hyperbolic space $H^{d-1}$. We borrow the name of the section from this reference.}
The negative modes were studied in great detail in \cite{Lauria:2020emq,Behan:2020nsf} (see also \cite{Bianchi:2019sxz,Bianchi:2021snj}).
The outcome of these works is that if both $\Delta_s^+$ and $\Delta_s^-$ are present, the resulting defect is non-trivial.
Since we are interested in free defects, let us assume that for $s = -v$ we have a negative mode instead of a positive mode.
To obtain the correlator we subtract the positive mode and add the negative one:
\begin{align}
\label{eq:def-altern-corr}
{\mathcal{G}}^{\free,-}_{d,v}(x, \xb)
= {\mathcal{G}}^{\free}_{d,v}(x, \xb)
- \frac{\Gamma(\Delta_\phi^\free+v)}{\Gamma (\Delta_\phi^\free) \Gamma (1+v)}
\hat f_{\Delta_{-v}^+, -v}(x, \xb)
+ \mu_{-v}^{\free,-}
\hat f_{\Delta_{-v}^-, -v}(x, \xb) \, .
\end{align}
The OPE coefficient $\mu_{-v}^{\free,-}$ cannot be obtained with the inversion formula because this operator lies outside the range of convergence.
Instead, we determine the OPE coefficient indirectly by demanding that ${\mathcal{G}}^{\free,-}_{d,v}(x, \xb)$ has a consistent bulk-channel expansion.
To achieve this, we expand the correlator to leading order in $1-x$ and order by order in $1-\xb$:
\begin{align}
\label{eq:raw-expansion}
\begin{split}
{\mathcal{G}}^{\free,-}_{d,v}(x, \xb)
= & \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{(d-2)/2} \Bigg[
1 \\
& + \big[(1-x)(1-\xb)\big]^{\frac{d-2}{2}}
\left( k_0 + k_1 (1-\xb) + k_2 (1-\xb)^2 + \ldots \right) \\
& + \left(\frac{1-x}{1-\xb}\right)^{\frac{d-2}{2}}
\left( q_0 + q_1 (1-\xb) + q_2 (1-\xb)^2 +\ldots \right)
+ O\big((1-x)^{\frac{d}{2}} \big)
\Bigg] \, .
\end{split}
\end{align}
The constants $k_i$ and $q_i$ can be determined to high order by expanding \eqref{eq:def-altern-corr} with computer algebra software.
At the same time, because we are considering a free theory, we know the bulk spectrum consists of double-twist operators, so the block expansion takes the form:
\begin{align}
\label{eq:block-exp-altern}
{\mathcal{G}}^{\free,-}_{d,v}(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{(d-2)/2} \left(
1
+ \sum_{\ell=0}^\infty c^{\free,-}_\ell f_{\ell+d-2,\ell}(x, \xb)
\right) \, .
\end{align}
Perhaps unexpectedly, these two expansions are inconsistent with each other, because the powers $q_i (1-\xb)^i$ in \eqref{eq:raw-expansion} cannot be reproduced from the blocks in \eqref{eq:block-exp-altern}.
The only way out is that the unknown OPE coefficient must take the value
\begin{align*}
\mu_{-v}^{\free,-}
= \frac{\Gamma(\Delta_\phi^\free-v)}{\Gamma (\Delta_\phi^\free) \Gamma (1-v)} \, ,
\end{align*}
in which case $q_i = 0$ for $i \ge 0$, rendering the bulk expansion consistent.
This formula for $\mu_{-v}^{\free,-}$ is in perfect agreement with the explicit calculation of \cite{Giombi:2021uae}.
Now, the $x \to 1$ limit of the free correlator is given by \eqref{eq:free-lightcone-corr}; using hypergeometric identities, one can combine \eqref{eq:free-lightcone-corr} with \eqref{eq:def-altern-corr} to obtain
\begin{align}
\label{eq:corr-altern-bc}
{\mathcal{G}}_{d,v}^{\free,-}&(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{\Delta^\free_\phi}
\bigg(1 + \\
& C_{v+1,d} \big( (1-x)(1-\xb) \big)^{\Delta^\free_\phi}
\bigg[ {}_2F_1
\bigg( \! \begin{array}{*{20}{c}}
{\Delta^\free_\phi, \Delta^\free_\phi+v+1 } \\
{d-1}
\end{array};1-\xb \bigg) + O(1-x) \bigg] \bigg) \, . \nonumber
\end{align}
Interestingly, this is just the original expression with the replacement $v \to v+1$.
Since \eqref{eq:corr-altern-bc} completely determines the bulk CFT data, and the bulk spectrum is independent of $v$, the full correlator satisfies the same relation:
\begin{align}
\begin{split}
{\mathcal{G}}_{d,v}^{\free,-}(x, \xb) = {\mathcal{G}}_{d,v+1}^{\free}(x, \xb) \, .
\end{split}
\end{align}
As a result, the bulk OPE coefficients for alternate boundary conditions are obtained from \eqref{eq:first_cL_coeffs} by $v \to v+1$.
For spins $\ell = 0, 1$ we find perfect agreement with the explicit calculations of \cite{Giombi:2021uae}:
\begin{align}
\begin{split}
c_0^{\free,-} = C_{v+1,d}, \qquad
c_1^{\free,-} = \frac{(d-2) (2 v+1)}{4 (d-1)} C_{v+1,d} \, .
\end{split}
\end{align}
One can turn on more negative modes in a similar way. Note that in general these violate the defect unitarity bound, but this does not affect the discussion.
In particular, if we use negative modes for $s=-v,-v-1,\ldots,-v-n+1$, we find that the correlator is given by ${\mathcal{G}}^\free_{d,v+n}(x,\xb)$.
Similarly, if we turn on negative modes for $s=1-v,2-v,\ldots,n-v$ the correlator is given by ${\mathcal{G}}^\free_{d,v-n}(x,\xb)$.
More complicated choices of negative modes do not seem to generate such a simple structure.
\subsubsection{GFF monodromy defect in \texorpdfstring{$d=4-\veps$}{d=4-eps}}
\label{sec:gff-monodromy-def}
In preparation for the analysis of the Wilson-Fisher fixed point, let us study GFF as a perturbation around the free theory.
Consider a GFF scalar of dimension $\Dp = 1 - \delta_\phi \veps$ in $d=4-\veps$ dimensions. The defect data has been presented in equation \eqref{eq:mft-defect}.
In order to also extract bulk CFT data, it is necessary to resum the defect expansion.
The zeroth-order result appears in \eqref{eq:free-4d-corr}, while here we carry out the resummation to order $O(\veps)$.
For the leading transverse-twist family, there are contributions at $O(\veps)$ from the OPE coefficients, the defect blocks and the defect dimensions.
Furthermore, there are higher-twist families with $n > 0$ that only contribute with tree-level dimensions and OPE coefficients.
The complete $O(\veps)$ contribution is then:
\begin{align}
\label{eq:mft-oeps}
\begin{split}
{\mathcal{G}}^{\GFF,O(\veps)}_{1 - \delta_\phi \veps,4-\veps,v}(x, \xb)
& = \veps \sum_{n=0}^\infty \sum_{s \in -v + \mathbb{Z}}
\partial_\veps \left( \mu_{s,n}^\GFF(1-\delta_\phi\veps, 4-\veps) \hat f_{1 + |s| - \delta_\phi \veps, s}(x, \xb) \right)_{\veps=0} \\
& = -\delta_\phi \veps \frac{(x \xb)^{1/2}}{1-x\xb} \bigg[
\frac{x^v}{1-x}
\left(
\Phi(x, 1, v) + H_{v-1} + \log\left( \frac{\sqrt{x \xb}}{1-x\xb} \right)
\! \right) \\
& \qquad \qquad +
\frac{\xb x^{v}}{1-\xb}
\big( \Phi(x,1,v)-\Phi(x \xb,1,v) \big)
+ (x \leftrightarrow \xb, v \leftrightarrow 1-v) \bigg] \, .
\end{split}
\end{align}
The result is written in terms of harmonic numbers $H_{n}$ and the Hurwitz-Lerch zeta function $\Phi(x,1,v)$, whose properties are reviewed in appendix \ref{sec:hurwitz-zeta}.
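The Lerch transcendent is available in standard numerical libraries, so the representation used above is easy to probe numerically. The following minimal sketch (not part of the derivation; it relies on \texttt{mpmath}) compares the defining series $\Phi(x,1,v)=\sum_{k\ge0}x^k/(k+v)$ against the built-in implementation, together with the elementary special case $\Phi(x,1,1)=-\log(1-x)/x$:

```python
import mpmath as mp

mp.mp.dps = 30
x, v = mp.mpf("0.3"), mp.mpf("0.7")

# Series definition of the Lerch transcendent: Phi(x, 1, v) = sum_{k>=0} x^k / (k + v)
series = mp.nsum(lambda k: x**k / (k + v), [0, mp.inf])
phi = mp.lerchphi(x, 1, v)                    # built-in evaluation

# Elementary special case: Phi(x, 1, 1) = -log(1 - x) / x
special = mp.lerchphi(x, 1, 1) + mp.log(1 - x) / x
```

Both differences vanish to working precision, confirming the conventions for $\Phi(x,1,v)$ used in the text.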
As a consistency check, for a free defect $\delta_\phi = 1/2$, the correlation function \eqref{eq:mft-oeps} at leading order in $x \to 1$ agrees with \eqref{eq:free-lightcone-corr} at leading order in $\veps$.
Let us also mention that there is a curious non-trivial cancellation of terms such that the final result is proportional to $\delta_\phi$.
We are now ready to expand in the bulk channel.
Once more, since the bulk theory is of the GFF type, the spectrum contains higher-twist families:
\begin{align}
{\mathcal{G}}_{\Dp,d,v}^\GFF(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{1-\delta_\phi\veps} \left(
1
+ \sum_{n=0}^\infty \sum_{\ell=0}^\infty c^\GFF_{\ell,n} f_{2\Dp+\ell+2n,\ell}(x, \xb)
\right) \, .
\end{align}
As explained before, the bulk OPE coefficients can be extracted order by order in $(1-x)$ and $(1-\xb)$ using the bulk blocks in the form \eqref{eq:ligh-exp-bos-blocks}.
Some of the low-lying coefficients are:
\begin{align}
\begin{split}
& c^\GFF_{0,0} = C^\GFF_{\Dp,d,v} + O(\veps^2) \, , \\
& c^\GFF_{1,0} = \frac{1}{18} C^\GFF_{\Dp,d,v} (2 v-1) (3 - \delta_\phi \veps)
+ O(\veps^2) \, , \\
& c^\GFF_{2,0} = \frac{1}{36} C^\GFF_{\Dp,d,v} v (v-1) (3 - \delta_\phi \veps)
+ O(\veps^2) \, , \\
& c^\GFF_{0,1} = \frac{\veps}{96} (2 \delta_\phi -1) v (v-1) \left(v^2-v+4\right)
+ O(\veps^2) \, .
\end{split}
\end{align}
The interested reader can find more OPE coefficients in the attached \texttt{mathematica}\xspace notebook.
\subsection{Wilson-Fisher monodromy defect}
The last model we consider in this section is the $O(2N)$ Wilson-Fisher (WF) fixed point in $d=4-\veps$ dimensions.\footnote{The literature on the WF $O(N)$ model without defects is too vast to review here. However, let us mention the nice references \cite{Alday:2017zzv,Henriksson:2018myn}, which use analytic bootstrap techniques that inspired our work.}
Following \cite{Soderberg:2017oaa,Giombi:2021uae}, we impose a monodromy $v$ on the complex scalar $\phi = \phi_1 + i \phi_2$.
Besides the defect CFT data, we improve on existing results by computing the two-point function to order $O(\veps)$ and by extracting the bulk CFT data.
As already announced, we use the Lorentzian Inversion formula \eqref{eq:inv-formula}, which reconstructs defect CFT data from the discontinuity of the correlator $\Disc {\mathcal{G}}(x,\xb)$.
In perturbative CFTs, the discontinuity can be computed using information which is known from bulk physics at lower orders in perturbation theory.
As discussed in section \ref{sec:lif-applications}, only the bulk identity and double-twist operators with anomalous dimensions can contribute to the discontinuity.
For the Wilson-Fisher fixed point, this leads to dramatic simplifications that make it easy to bootstrap the correlator.
The key property is that the leading-twist trajectory has anomalous dimensions starting at order $O(\veps^2)$, i.e. $\Delta = 2 \Dp + \ell + O(\veps^2)$ for $\ell > 0$, and only the $\ell=0$ operator gets corrected at order $O(\veps)$:
\begin{align}
\label{eq:anom-dim-ppb}
\Delta_{\phi\bar\phi}
= 2 \Delta_\phi + \gamma_{\phi\bar\phi}^{(1)} \veps + O(\veps^2)
= 2 \Delta_\phi + \frac{N+1}{N+4} \veps + O(\veps^2).
\end{align}
As a result, the discontinuity can be obtained to leading order $O(\veps)$ from a single bulk block
\begin{align}
\Disc_\xb {\mathcal{G}}(x,\xb) \big|_{O(\veps)}
= \Disc_\xb \left(\frac{\sqrt{x \xb}}{(1-x)(1-\xb)} \right)^\Dp
\left(1 + f_{\Delta_{\phi\bar\phi},0}(x,\xb) \right) \, .
\end{align}
Notice that the bulk identity contribution has been studied separately in sections \ref{sec:free-monodromy-def} and \ref{sec:gff-monodromy-def}.
In particular, the external scalar has dimension $\Dp = (d-2)/2 + O(\veps^2)$ so this part of the correlator behaves as in free theory.
In what follows we neglect the identity contribution, and focus on the piece generated by the $\phi\bar\phi$ operator.
Section \ref{sec:lif-applications} describes how to compute the discontinuity.
In particular, combining equations \eqref{eq:compute-disc} and \eqref{eq:disc-cases}, expanding in $\veps$ and keeping only the $O(\veps)$ term we find
\begin{align}
\Disc_x {\mathcal{G}}(x,\xb)
= \Disc_\xb {\mathcal{G}}(x,\xb)
= 2 \pi i \, \frac{\veps}{2} \frac{v(v-1)}{2} \frac{N+1}{N+4} \frac{(x \xb)^{1/2} \log(x \xb)}{1-x\xb} \, .
\end{align}
To obtain this discontinuity we also used the $d=4$ OPE coefficient $c_0^\free$ in \eqref{eq:first_cL_coeffs}, the anomalous dimension $\gamma_{\phi\bar\phi}^{(1)}$ in \eqref{eq:anom-dim-ppb}, and the bulk block $f_{2,0}$ from \eqref{eq:bulk-4d-blocks}.
Having derived the discontinuity, we are ready to extract the defect CFT data using the inversion formula.
Since the discontinuity is symmetric under $x \leftrightarrow \xb$, we can use the simpler formula \eqref{eq:inv-formula-even}, which applies to both positive and negative transverse spins.
Furthermore, since the discontinuity is $O(\veps)$, we can evaluate the LIF integration kernel exactly in $d=4$.
The resulting double integral is simple to do, giving
\begin{align}
\label{eq:wf-cft-data}
\begin{split}
\mu(\Dh, s)
& = \veps \frac{v (v-1)}{8} \frac{(N+1)}{(N+4)}
\int_0^1 dx \int_1^{1/x} d\xb \log (x \xb)
x^{-\frac{\Dh-|s|+1}{2}} \xb^{-\frac{\Dh+|s|+1}{2}} \\
& = - \veps \frac{v(v-1)}{2} \frac{(N+1)}{(N+4)}
\frac{1}{|s| \big( \Dh -|s|-1 \big)^2} \, .
\end{split}
\end{align}
It is well understood that in perturbative settings a double pole in $\Dh$ indicates defect anomalous dimensions, see \cite{Lemos:2017vnx} for details.
If one adds the contribution from the bulk identity to \eqref{eq:wf-cft-data}, then one concludes that the defect spectrum consists of a single family with CFT data
\begin{align}
\begin{split}
\Dh_s
& = \frac{d-2}{2} + |s| + \veps \hat \gamma_s^{(1)}
+ O(\veps^2) \, , \qquad
\hat \gamma_s^{(1)}
= \frac{v(v-1)}{2}
\frac{(N+1)}{(N+4)}
\frac{1}{|s|}\, , \\
\mu_s
& = \frac{(1-\veps/2)_{|s|}}{|s|!} + O(\veps^2) \, .
\end{split}
\end{align}
This is in perfect agreement with the literature \cite{Soderberg:2017oaa,Giombi:2021uae}.
Let us now extract the bulk OPE coefficients to order $O(\veps)$.
As in the free and GFF cases, the first step is to resum the defect expansion.
The contribution of the bulk identity to the full correlator has been computed in equations \eqref{eq:free-4d-corr} and \eqref{eq:mft-oeps}, where one has to set $\delta_\phi = 1/2$ because $\phi$ behaves as a free field plus $O(\veps^2)$ corrections.
A contribution which is new at the Wilson-Fisher fixed point comes from the defect anomalous dimensions:
\begin{align}
\begin{split}
{\mathcal{G}}_{\text{WF}}(x, \xb)
& = \veps \sum_{s \in -v + \mathbb{Z}}
b_{|s|}^2 \hat \gamma_{|s|}^{(1)} \partial_{\Dh} \hat f_{\Dh, s}(x, \xb)\big|_{\Dh=|s|+1} \\
& = \veps \frac{v(v-1)}{4} \frac{(N+1)}{(N+4)}
\frac{(x \xb)^{1/2} \log(x \xb)}{1-x\xb} \Big[
x^v \Phi(x, 1, v)
+ \xb^{1-v} \Phi(\xb, 1, 1-v)
\Big] \, .
\end{split}
\end{align}
For $v = 1/2$ and $N=1/2$, this reproduces the Ising $\mathbb Z_2$ monodromy defect result \cite{Liendo:2019jpu}.
We have obtained the full two-point correlation function to $O(\veps)$, so it is now an easy exercise to extract the bulk OPE coefficients.
Besides the twist-two family there is also a twist-four family:\footnote{The absence of higher-twist families at this order was suggested in \cite{Liendo:2012hy} for $\ell = 0$, and then proven in \cite{Alday:2017zzv}.}
\begin{align}
\begin{split}
{\mathcal{G}}^\free_{d,v}(x, \xb)
+ {\mathcal{G}}_{\text{WF}}(x, \xb)
& = \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{\Dp} \Bigg(
1
+ c_{0,0} f_{d-2+\veps\gamma_{\phi\bar\phi}^{(1)},0}(x, \xb) \\
& \qquad \qquad
+ \sum_{\ell=1}^\infty c_{\ell,0} f_{\ell+d-2,\ell}(x, \xb)
+ \sum_{\ell=0}^\infty c_{\ell,1} f_{\ell+4,\ell}(x, \xb)
\Bigg) \, .
\end{split}
\end{align}
The OPE coefficients of the leading-twist trajectory take a particularly simple form after normalizing by the free piece
\begin{align}
\begin{split}
c_{0,0}
& = c_{0}^\free \left( 1 + \frac{\veps}{2} \frac{(N+1) }{(N+4)} (H_{v-1} + H_{-v})
+ O(\veps^2) \right) \, , \\
c_{1,0}
& = c_{1}^\free \left( 1 + \frac{3\veps}{2} \frac{(N+1)}{(N+4)} + O(\veps^2) \right) \, , \\
c_{2,0}
& = c_{2}^\free \left( 1 + \frac{\veps}{6} \frac{(N+1)}{(N+4)}
\frac{(3 v-2) (3 v-1)}{v(v-1)} + O(\veps^2) \right) \, , \\
c_{3,0}
& = c_{3}^\free \left( 1 + \frac{\veps}{6} \frac{(N+1)}{(N+4)}
\frac{\left(10 v^2-10 v+3\right)}{v(v-1)} + O(\veps^2) \right) \, .
\end{split}
\end{align}
On the other hand, the subleading-twist trajectory has the following CFT data:
\begin{align}
\begin{split}
c_{0,1}
& = \frac{\veps}{16} \frac{(N+1)}{(N+4)} v^2 (v-1)^2 + O(\veps^2) \, , \\
c_{1,1}
& = \frac{\veps}{144} \frac{(N+1)}{(N+4)} v^2 (v-1)^2 (2 v-1) + O(\veps^2) \, , \\
c_{2,1}
& = \frac{\veps}{1920} \frac{(N+1)}{(N+4)} v^2 (v-1)^2 \left(5 v^2-5 v+2\right) + O(\veps^2) \, .
\end{split}
\end{align}
All our results are in perfect agreement with the Ising $\mathbb Z_2$ monodromy defect \cite{Liendo:2019jpu}.
The interested reader can find the bulk OPE coefficients for higher values of $\ell$ in the attached \texttt{mathematica}\xspace notebook.
Before concluding, let us remind the reader that the bulk OPE coefficients are defined as $c_{\mathcal{O}} = \lambda_{\phi\bar\phi{\mathcal{O}}} a_{\mathcal{O}}$, where $a_{\mathcal{O}}$ is proportional to the one-point function of ${\mathcal{O}}$.
Therefore, one can obtain the one-point functions of leading-twist operators as $a_{\ell,0} = c_{\ell,0} / \lambda_{\phi\bar\phi {\mathcal{O}}_{\ell,0}}$, where the three-point OPE coefficient $\lambda_{\phi\bar\phi {\mathcal{O}}_{\ell,0}}$ is well known at order $O(\veps)$ \cite{Dey:2016mcs,Henriksson:2018myn}.
Unfortunately, the twist-four trajectory contains nearly-degenerate operators, so our OPE coefficient has to be interpreted as a sum over these operators
\begin{align}
c_{\ell,1}
= \langle\!\langle a_{\ell,1} \lambda_{\phi\bar\phi {\mathcal{O}}_{\ell,1}} \rangle\!\rangle
\equiv \sum_{\text{deg. ops. } {\mathcal{O}}^{(n)}}
a^{(n)}_{\ell,1} \lambda^{(n)}_{\phi\bar\phi {\mathcal{O}}_{\ell,1}} \, .
\end{align}
In this case, the best we can do is to extract an average density of one-point OPE coefficients defined as $\langle\!\langle a_{\ell,1} \rangle\!\rangle = \langle\!\langle a_{\ell,1} \lambda_{\phi\bar\phi {\mathcal{O}}_{\ell,1}} \rangle\!\rangle / \langle\!\langle \lambda^2_{\phi\bar\phi {\mathcal{O}}_{\ell,1}} \rangle\!\rangle^{1/2}$, where once again the average over three-point OPE coefficients $\langle\!\langle \lambda^2_{\phi\bar\phi {\mathcal{O}}_{\ell,1}} \rangle\!\rangle$ is known \cite{Henriksson:2018myn}.
\section{Wess-Zumino: Bulk theory}
\label{sec:wess-zumino-bulk}
Superconformal field theories (SCFTs) in non-integer dimensions were studied in \cite{Bobev:2015jxa,Bobev:2015vsa}, where the numerical bootstrap gave evidence that the Wess-Zumino model \eqref{eq:wz-lagrangian} is perhaps the simplest SCFT preserving four supercharges.
In this section we study the Wess-Zumino model in $d = 4-\veps$ dimensions (without defects) using the analytic bootstrap,
and the results will be needed for the study of defects in section \ref{sec:wess-zumino-defect}. We work to leading order in $\veps$, but the same methods also apply at higher orders, a subject that we plan to study in future work.
For the reader who is mostly interested in the final results, we present a self-contained summary of the CFT data in section \ref{sec:wz-summary}.
\subsection{Generalities}
Let us briefly review some generalities of SCFTs in non-integer dimensions; more details can be found in \cite{Bobev:2015jxa}.\footnote{A different type of superconformal theory in non-integer dimensions also appears in the context of Parisi-Sourlas supersymmetry \cite{Kaviraj:2019tbg,Kaviraj:2020pwv}.}
The conformal part of the algebra is generated by the usual operators $D$, $P_i$, $K_i$ and $M_{ij}$ with $i = 1, \ldots, d$.
There are exactly four Poincaré supercharges $Q^+_{\alpha}$ and $Q^-_{\dot{\alpha}}$ and four conformal supercharges $S^{{\dot{\alpha}}+}$ and $S^{{\alpha}-}$, where the indices take two values ${\alpha},{\dot{\alpha}}=1,2$ regardless of the spacetime dimension. The supercharges obey the usual supersymmetry algebra
\begin{align}
\{ Q^+_{\alpha}, Q^-_{\dot{\alpha}} \} = \Sigma^i_{{\alpha}{\dot{\alpha}}} P_i\,, \qquad
\{ S^{{\dot{\alpha}}+}, S^{{\alpha}-} \} = \bar \Sigma_i^{{\dot{\alpha}}{\alpha}} K_i \,.
\end{align}
There is also a generator $R$ of $U(1)_R$ symmetry, under which $Q^+_{\alpha}$ and $Q^-_{\dot{\alpha}}$ have charge $+1$ and $-1$ respectively.
The monodromy defects in section \ref{sec:wess-zumino-defect} will be naturally obtained by twisting this $U(1)_R$ symmetry.
In what follows, we focus our attention on chiral-primary operators $\phi$ and their complex conjugates $\bar \phi$.
These operators are killed by supercharges of the same chirality, and the superconformal algebra fixes their conformal dimension in terms of their $R$-charge:
\begin{align}
\label{eq:chiral-primary}
\left[ Q^+_{\alpha}, \phi(0) \right]
= \left[ Q^-_{\dot{\alpha}}, \bar\phi(0) \right]
= 0 \quad \Rightarrow \quad
\Delta_{\phi}
= \Delta_{\bar\phi}
= \frac{d-1}{2} R_\phi
= - \frac{d-1}{2} R_{\bar\phi}\,.
\end{align}
In order to bootstrap the Wess-Zumino model without defects, we consider four-point functions of $\phi$ and $\bar \phi$.
If we focus on the $s$-channel expansion, there are three inequivalent orderings of the external operators:
\begin{align}
\begin{split}
& \langle \phi(x_1) \phi(x_2) \bar \phi(x_3) \bar \phi(x_4) \rangle
= \frac{{\mathcal{F}}(z, \zb)}{(x_{12}^2 x_{34}^2)^\Dp}
= \frac{1}{(x_{12}^2 x_{34}^2)^\Dp}
\sum_{\Delta,\ell \text{ even}} a_{\Delta,\ell} \, g_{\Delta,\ell} (z, \zb)\, , \\
& \langle \phi(x_1) \bar \phi(x_2) \phi(x_3) \bar \phi(x_4) \rangle
= \frac{{\mathcal{G}}(z, \zb)}{(x_{12}^2 x_{34}^2)^\Dp}
= \frac{1}{(x_{12}^2 x_{34}^2)^\Dp}
\sum_{\Delta,\ell} b_{\Delta,\ell} \,
G_{\Delta,\ell}(z, \zb)\, , \\
& \langle \bar \phi(x_1) \phi(x_2) \phi(x_3) \bar \phi(x_4) \rangle
= \frac{\tilde {\mathcal{G}}(z, \zb)}{(x_{12}^2 x_{34}^2)^\Dp}
= \frac{1}{(x_{12}^2 x_{34}^2)^\Dp}
\sum_{\Delta,\ell} (-1)^\ell b_{\Delta,\ell} \, \tilde G_{\Delta,\ell}(z, \zb)\, .
\end{split}
\end{align}
In the above formula $a_{\Delta,\ell}$ and $b_{\Delta,\ell}$ are shorthand notation for three-point OPE coefficients squared.
The three orderings above are related to each other by simple crossing relations:
\begin{align}
\label{eq:susy-crossing}
\begin{split}
& {\mathcal{G}}(z, \zb)
= \left( \frac{z \zb}{(1-z)(1-\zb)} \right)^{\Dp} {\mathcal{G}}(1-z, 1-\zb) \, , \\
& {\mathcal{F}}(z, \zb)
= \left( \frac{z \zb}{(1-z)(1-\zb)} \right)^{\Dp} \tilde {\mathcal{G}}(1-z, 1-\zb) \, .
\end{split}
\end{align}
The functions ${\mathcal{G}}(z,\zb)$ and $\tilde {\mathcal{G}}(z,\zb)$ capture the same CFT data in their $s$-channel expansion, since they are related by $1 \leftrightarrow 2$.
The constraints of supersymmetry are accounted for by expanding the correlation function in terms of superconformal blocks \cite{Bobev:2015jxa}.
It can be shown that in the $\phi \times \phi$ OPE, the superconformal blocks reduce to regular non-supersymmetric blocks $g_{\Delta,\ell}$.
On the other hand, the superblocks $G_{\Delta,\ell}$ are non-trivial for the $\phi \times \bar\phi$ OPE.
Interestingly, in any dimension the superblocks take the simple form of non-supersymmetric blocks for unequal external operators with a suitable prefactor:
\begin{align}
\label{eq:superblocks-shift}
\begin{split}
G_{\Delta,\ell}(z, \zb)
= (z \zb)^{-1/2} g^{1,1}_{\Delta+1,\ell}(z, \zb) \, , \qquad
\tilde G_{\Delta,\ell}(z, \zb)
= (z \zb)^{-1/2} g^{1,-1}_{\Delta+1,\ell}(z, \zb) \, .
\end{split}
\end{align}
Superconformal blocks capture the contributions to the OPE of all exchanged operators that belong to the same supermultiplet, which means they should decompose as finite sums of non-supersymmetric blocks with relative coefficients fixed by susy.
This is indeed the case:
\begin{align}
\label{eq:superblocks-explicit}
\begin{split}
& G_{\Delta,\ell}(z, \zb)
= g_{\Delta,\ell}
+ a_1 \, g_{\Delta+1,\ell+1}
+ a_2 \, g_{\Delta+1,\ell-1}
+ a_3 \, g_{\Delta+2,\ell} \, , \\
& \tilde G_{\Delta,\ell}(z, \zb)
= g_{\Delta,\ell}
- a_1 \, g_{\Delta+1,\ell+1}
- a_2 \, g_{\Delta+1,\ell-1}
+ a_3 \, g_{\Delta+2,\ell} \, , \\
\end{split}
\end{align}
where the explicit coefficients are
\begin{align}
\label{eq:coeffs-blocks}
\begin{split}
& a_1 = \frac{(\Delta+\ell)}{4(\Delta+\ell+1)} \, , \\
& a_2 = \frac{\ell(\ell+d-3)(\Delta-\ell-d+2)}
{(2\ell+d-4)(2\ell+d-2)(\Delta-\ell-d+3)} \, , \\
& a_3 = \frac{\Delta (\Delta-d+3)(\Delta+\ell)(\Delta-\ell-d+2)}
{4(2\Delta-d+4)(2\Delta-d+2)(\Delta+\ell+1)(\Delta-\ell-d+3)} \, .
\end{split}
\end{align}
\subsubsection{Comments on degenerate operators}
There is an important difference between the Wilson-Fisher fixed point studied in \cite{Alday:2017zzv,Henriksson:2018myn} and the Wess-Zumino model studied here, namely the existence of nearly-degenerate operators in the leading-twist family $[\phi\bar\phi]_{\ell,0}$.
Indeed, from the Lagrangian \eqref{eq:wz-lagrangian} it is clear that we can construct two leading-twist operators for $\ell > 0$:
\begin{align}
\label{eq:two-ops}
\begin{split}
{\mathcal{O}}_{\ell,0}^{(1)}
\sim k_{11} \phi \partial_{\mu_1} \ldots \partial_{\mu_\ell} \bar \phi
+ k_{12} \psi \partial_{\mu_1} \ldots \partial_{\mu_{\ell-1}} \sigma_{\mu_\ell} \psi^\dag
\, , \\
{\mathcal{O}}_{\ell,0}^{(2)}
\sim k_{21} \phi \partial_{\mu_1} \ldots \partial_{\mu_\ell} \bar \phi
+ k_{22} \psi \partial_{\mu_1} \ldots \partial_{\mu_{\ell-1}} \sigma_{\mu_\ell} \psi^\dag \, .
\end{split}
\end{align}
The coefficients $k_{nm}$ are fixed demanding that the operators ${\mathcal{O}}^{(n)}_{\ell,0}$ are conformal-primary operators, with well-defined scaling dimensions in the interacting theory, and orthonormal with respect to two-point functions.
Near the free theory, when the anomalous dimensions $\gamma_{\ell,0}^{(n)}$ are small, the expansion of the four-point function in conformal blocks has to be interpreted as a sum over nearly-degenerate operators
\begin{align}
{\mathcal{G}}(z,\zb)
\sim p_{0,0} g_{2\Dp,0}
+ \sum_{\ell=1}^\infty \left(
\langle\!\langle p_{\ell,0} \rangle\!\rangle g_{2\Dp+\ell,\ell}
+ \langle\!\langle p_{\ell,0} \gamma_{\ell,0} \rangle\!\rangle \partial_\Delta g_{2\Dp+\ell,\ell}
+ \ldots \right)
+ \ldots \, .
\end{align}
In the previous equation higher-twist operators are neglected, and the expansion coefficients are sums over the two operators in \eqref{eq:two-ops}:
\begin{align}
\langle\!\langle p_{\ell,0} \rangle\!\rangle
= \sum_{n=1,2} p_{\ell,0}^{(n)} \, , \qquad
\langle\!\langle p_{\ell,0} \gamma_{\ell,0} \rangle\!\rangle
= \sum_{n=1,2} p_{\ell,0}^{(n)} \gamma_{\ell,0}^{(n)} \, , \qquad
\text{etc.}
\end{align}
Although we have focused on the leading-twist trajectory for clarity, similar complications also occur with higher-twist trajectories.
For a general CFT, it would be quite challenging to solve this mixing problem using bootstrap techniques.
Fortunately, the supersymmetry of the Wess-Zumino model allows for a simple resolution.
The main observation is that, in terms of supersymmetry representations, one of the combinations in \eqref{eq:two-ops} is a superprimary operator, while the other is a superdescendant operator.
This can be checked with the superconformal blocks \eqref{eq:superblocks-explicit}, noticing that for each superprimary operator with quantum numbers $(\Delta, \ell)$, there is a superdescendant operator with equal twist and one more unit of spin, $(\Delta+1, \ell+1)$.
For example, in free theory the first operator in the $\phi \times \bar\phi$ OPE is the superprimary $\phi\bar\phi$ with $(\Delta,\ell) = (2\Dp,0)$.
Then, the descendant $(\phi\bar\phi)_{\text{desc}}$ with quantum numbers $(2\Dp+1, 1)$ will be degenerate with a superprimary ${\mathcal{O}}^{\text{prim}}_{1,0}$ with the same quantum numbers.
Continuing in this way, the descendant of ${\mathcal{O}}^{\text{prim}}_{1,0}$ will be degenerate with the superprimary ${\mathcal{O}}^{\text{prim}}_{2,0}$, and so on and so forth.
The moral of the story is that for the Wess-Zumino model, the degeneracies in the leading-twist family can be understood as arising from the supersymmetry of the model.
Therefore, by using a superconformal block expansion
\begin{align}
{\mathcal{G}}(z, \zb)
= \sum_{\Delta,\ell} b_{\Delta,\ell} \, G_{\Delta,\ell}(z, \zb)\, ,
\end{align}
it is guaranteed that all degeneracies in the leading-twist family are taken into account.
In other words, we have argued that the OPE coefficients $b_{\ell,0}$ capture the contributions of individual superprimary operators.
If one is interested in the contribution of a certain superdescendant, it is then sufficient to use the superconformal blocks \eqref{eq:superblocks-explicit} to relate it to the superprimary.
On the other hand, we expect the OPE coefficients of higher-twist families $b_{\ell,n\ge1}$ to be sums over nearly-degenerate operators.
\subsection{Inversion formula}
The next tools we need are inversion formulas, which reconstruct the CFT data from certain discontinuities of correlators \cite{Caron-Huot:2017vep}.
The main objects of interest are functions that encode dimensions as poles and OPE coefficients as residues:
\begin{align}
a_{\Delta,\ell} = - \Res_{\Delta'=\Delta} a(\Delta', \ell) \, , \qquad
b_{\Delta,\ell} = - \Res_{\Delta'=\Delta} b(\Delta', \ell).
\end{align}
Let us start with the inversion formula that reconstructs $a(\Delta, \ell)$.
Since the $\phi \times \phi$ OPE uses non-supersymmetric blocks, we can use the inversion formula originally derived by Caron-Huot \cite{Caron-Huot:2017vep}:
\begin{align}
\label{eq:inv-form-nonsus}
\begin{split}
& a(\Delta, \ell)
= \frac{1 + (-1)^\ell}{4} \kappa^{0,0}_{\Delta+\ell}
\int_0^1 \int_0^1 \frac{dz d\zb}{(z \zb)^d}
\left| z-\zb \right|^{d-2}
g_{\ell+d-1, \Delta+1-d}(z, \zb) \dDisc[{\mathcal{F}}(z, \zb)] \, . \\
\end{split}
\end{align}
The double discontinuity is defined in the usual way
\begin{align}
& \dDisc[{\mathcal{F}}(z, \zb)]
= {\mathcal{F}}(z, \zb)
- \frac{1}{2} {\mathcal{F}}(z, \zb^\circlearrowleft)
- \frac{1}{2} {\mathcal{F}}(z, \zb^\circlearrowright) \, ,
\end{align}
where the analytic continuation is performed around the branch point $\zb = 1$ in the directions indicated by the arrows.
The overall constant has the following value
\begin{align}
\begin{split}
& \kappa_{2\hb}^{2r,2s}
= \frac{\Gamma(\hb+r)\Gamma(\hb-r)\Gamma(\hb+s)\Gamma(\hb-s)}
{2\pi^2 \Gamma(2\hb-1) \Gamma(2\hb)} \, .
\end{split}
\end{align}
Similarly, there exists an inversion formula that reconstructs $b(\Delta,\ell)$.
In order to obtain it, note that superconformal blocks are non-supersymmetric blocks with shifted arguments \eqref{eq:superblocks-shift}.
Using the inversion formula for completely general external operators \cite{Caron-Huot:2017vep,Simmons-Duffin:2017nub}, after some manipulations we find
\begin{align}
\label{eq:inv-form-sus}
\begin{split}
b(\Delta, \ell)
= \frac{\kappa^{1,1}_{\Delta+\ell+1}}{4}
\int_0^1 \int_0^1 \frac{dz d\zb}{(z \zb)^d}
& |z - \zb|^{d-2}
\bigg(
g^{1,1}_{\ell+d-1, \Delta-d+2}(z, \zb)
\dDisc\big[(z \zb)^{1/2} {\mathcal{G}}(z,\zb)\big] \\
&
+ (-1)^{\ell+1}
g^{-1,1}_{\ell+d-1, \Delta-d+2}(z, \zb)
\dDisc \big[ (z \zb)^{1/2} \tilde {\mathcal{G}}(z,\zb) \big]
\bigg) \, .
\end{split}
\end{align}
A simple way to see that the $t$- and $u$-channel contributions must be different is to note that the superconformal blocks used in the expansion of ${\mathcal{G}}(z,\zb)$ and $\tilde {\mathcal{G}}(z,\zb)$ are different \eqref{eq:superblocks-shift}.
In practice, it is convenient to expand the integrand of the inversion formulas in the limit $z \to 0$ and integrate term by term.
In the limit $z \to 0$ the correlator has an expansion of the following form
\begin{align}
\label{eq:Fsing}
\begin{split}
{\mathcal{F}}(z, \zb)
& = \sum_{n=0}^\infty \sum_{p=0}^\infty \,
z^{\Dp+n} \log^p \! z \, {\mathcal{F}}_{n,p}(\zb) \, ,
\end{split}
\end{align}
and similarly for ${\mathcal{G}}(z,\zb)$ and $\tilde {\mathcal{G}}(z,\zb)$.
The inversion formula integration kernels can also be expanded in the limit $z \to 0$:
\begin{align}
\frac{1}{z} \left( \frac{\zb - z}{z \zb} \right)^{d-2}
g^{r,s}_{\ell+d-1,\Delta+1-d}(z, \zb)
= z^{-(\Delta-\ell)/2}
\sum_{m=0}^\infty \sum_{j=-m}^m
{\mathcal{C}}^{r,s}_{m,j}(\Delta,\ell) z^m k^{r,s}_{\Delta+\ell + 2j}(\zb) \, .
\end{align}
Similarly to equation \eqref{eq:ligh-exp-bos-blocks}, the coefficients in this expansion can be fixed recursively using the four-point Casimir equation.
This type of expansion has been described in detail in the appendices of \cite{Caron-Huot:2017vep,Liu:2020tpf}.
After expanding the inversion formula as above, the only non-trivial integrals left to do are of the form
\begin{align}
\label{eq:inv-integral}
\begin{split}
\INV[g(\zb)](\beta)
& = \int_0^1 \frac{d\zb}{\zb^2} k_{\beta}(\zb)
\dDisc \big[ g(\zb) \big] \, , \\
\SINV^\pm[g(\zb)](\beta)
& = \int_0^1 \frac{d\zb}{\zb^{3/2}} k^{\pm1,1}_{\beta+1}(\zb)
\dDisc \! \big[ g(\zb) \big] \, .
\end{split}
\end{align}
Finally, the last integral in $z$ is elementary and produces poles in $\Delta$.
Collecting the ingredients together, we have obtained new versions of the Lorentzian inversion formula.
For $a(\Delta,\ell)$ we find
\begin{align}
\label{eq:non-susy-inv}
\begin{split}
& a(\Delta, \ell)
= - \sum_{n,p = 0}^\infty
\frac{S_{n, p}(\Delta, \ell)}{(\Delta - 2\Delta_\phi - \ell - 2n)^{p+1}} \, , \\
& S_{n, p}(\Delta, \ell)
= \big( 1 + (-1)^\ell \big) 2^p p! \, \kappa_{\Delta+\ell}^{0,0}
\sum_{m=0}^n \sum_{k=-m}^m
{\mathcal{C}}^{0,0}_{m,k}(\Delta, \ell)
\INV[{\mathcal{F}}_{n-m, p}(\zb)](\Delta+\ell+ 2k) \, .
\end{split}
\end{align}
Similarly, one obtains $b(\Delta,\ell)$ using the following formula:
\begin{align}
\label{eq:susy-inv}
\begin{split}
& b(\Delta, \ell)
= - \sum_{n,p = 0}^\infty
\frac{S_{n, p}(\Delta, \ell)}{(\Delta - 2\Delta_\phi - \ell - 2n)^{p+1}} \, , \\
& S_{n, p}(\Delta, \ell)
= 2^p p! \, \kappa_{\Delta+\ell+1}^{1,1}
\sum_{m=0}^n \sum_{k=-m}^m \Bigg[
{\mathcal{C}}^{1,1}_{m,k}(\Delta+1, \ell)
\SINV^+[{\mathcal{G}}_{n-m, p}(\zb)](\Delta+\ell+ 2k) \\
& \hspace{10.5em}
+ (-1)^{\ell+1}
{\mathcal{C}}^{-1,1}_{m,k}(\Delta+1, \ell)
\SINV^-[\tilde {\mathcal{G}}_{n-m, p}(\zb)](\Delta+\ell+ 2k)
\Bigg] \, .
\end{split}
\end{align}
These new formulas are simpler to use in perturbative settings, such as the ones we consider in this paper.
\subsection{Generalized free field theory}
\label{sec:gff-bulk}
As a first application of the inversion technology, let us consider generalized free field theory (GFF).
In order to extract the CFT data $a_{\Delta,\ell}$ in the $\phi \times \phi$ OPE we have to use the GFF correlation function
\begin{align}
\begin{split}
{\mathcal{F}}(z, \zb) & = (z \zb)^\Dp + \left( \frac{z \zb}{(1-z)(1-\zb)} \right)^\Dp \, .
\end{split}
\end{align}
The first term is regular around $\zb = 1$, so it is killed by the discontinuity and does not contribute to the inversion formula.
Expanding in $z \to 0$ and using the definition \eqref{eq:Fsing} we find
\begin{align}
\label{eq:sing-part-gff}
\begin{split}
{\mathcal{F}}(z, \zb)|_{\text{singular}}
= \left( \frac{z \zb}{(1-z)(1-\zb)} \right)^{\Dp} \quad \Rightarrow \quad
{\mathcal{F}}_{n,p}(\zb)
= \delta_{p,0} \frac{(\Dp)_n}{n!} \left( \frac{\zb}{1-\zb}\right)^\Dp.
\end{split}
\end{align}
The next step is to compute the integral \eqref{eq:inv-integral}.
A useful trick is to use the Euler representation of the hypergeometric function, and swap the order of integration.
The result is \cite{Caron-Huot:2017vep}:
\begin{align}
\begin{split}
\INV \left[ \left( \frac{\zb}{1-\zb}\right)^p \, \right](\beta)
& = 2 \pi^2 \frac{\Gamma (\beta)}{\Gamma (\beta/2)^2}
\frac{\Gamma (\beta/2 + p - 1)}{\Gamma (p)^2 \Gamma (\beta/2 - p +1)}.
\end{split}
\end{align}
All the ingredients can be combined using equation \eqref{eq:non-susy-inv} to obtain the dimensions and OPE coefficients for low values of $n$.
We find the family of operators $[\phi\phi]_{\ell,n}$ with dimensions $\Delta_{\ell,n} = 2\Dp + \ell + 2n$, and their OPE coefficients agree with the results of \cite{Fitzpatrick:2011dm}:
\begin{align}
\label{eq:gff-abulk}
\begin{split}
a^{\GFF}_{\ell,n}(\Dp, d)
& = \frac{2 \left(\Dp+1-d/2\right)_n^2 (\Dp)_{\ell+n}^2}
{\ell! n! \left(\ell + d/2\right)_n (2 \Dp+n+1-d)_n (2 \Dp+\ell+2 n-1)_\ell } \\
& \hspace{16em}
\times \frac{1}{\left(2 \Dp+\ell+n - d/2\right)_n}.
\end{split}
\end{align}
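Closed-form coefficients of this type are easy to corrupt in transcription, so a quick sanity check with exact rational arithmetic can be reassuring. The following Python sketch (the helper names \texttt{poch} and \texttt{a\_gff} are ours, not from the paper) evaluates \eqref{eq:gff-abulk} and confirms the expected free-field values at $\Dp=1$, $d=4$:

```python
from fractions import Fraction
from math import factorial

def poch(a, n):
    """Rising factorial (Pochhammer symbol) (a)_n = a(a+1)...(a+n-1)."""
    r = Fraction(1)
    for j in range(n):
        r *= a + j
    return r

def a_gff(ell, n, dp, d):
    """GFF OPE coefficient a^GFF_{ell,n}(dp, d) of eq. (eq:gff-abulk), exact rationals."""
    num = 2 * poch(dp + 1 - d / 2, n)**2 * poch(dp, ell + n)**2
    den = (factorial(ell) * factorial(n) * poch(ell + d / 2, n)
           * poch(2*dp + n + 1 - d, n) * poch(2*dp + ell + 2*n - 1, ell)
           * poch(2*dp + ell + n - d / 2, n))
    return num / den

dp, d = Fraction(1), Fraction(4)       # free chiral field in d = 4
assert a_gff(0, 0, dp, d) == 2         # matches a_{0,0} = 2 at leading order
assert a_gff(2, 0, dp, d) == Fraction(1, 3)
assert a_gff(0, 2, dp, d) == 0         # subleading-twist families drop out in free theory
```

Note that at the free point the $n=1$ coefficient is a $0/0$ limit of the closed formula (both $(\Dp+1-d/2)_1$ and $(2\Dp+n+1-d)_1$ vanish there), so the vanishing is checked at $n=2$ instead.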
A similar calculation allows one to obtain the OPE coefficients in the $\phi \times \bar\phi$ OPE.
Now the relevant GFF correlation functions are
\begin{align}
\begin{split}
{\mathcal{G}}(z, \zb) = 1 + \left( \frac{z \zb}{(1-z)(1-\zb)} \right)^{\Dp} \, , \qquad
\tilde {\mathcal{G}}(z, \zb) = 1 + (z \zb)^\Dp \, .
\end{split}
\end{align}
Clearly ${\mathcal{G}}(z,\zb)$ has the same singular part as ${\mathcal{F}}(z,\zb)$, see equation \eqref{eq:sing-part-gff}, while $\tilde {\mathcal{G}}(z,\zb)$ is regular around $\zb = 1$ and does not contribute to the LIF.
Using the same techniques as before, one obtains the following integral:
\begin{align}
\SINV^+ \left[ \left( \frac{\zb}{1-\zb}\right)^p \, \right](\beta)
& = 2 \pi ^2
\frac{\Gamma (\beta +1)}{\Gamma (\beta/2 +1)^2}
\frac{\Gamma (\beta/2 + p)}{\Gamma (p)^2 \Gamma (\beta/2 -p+1)} \, .
\end{align}
Once again, using \eqref{eq:susy-inv} one can obtain the first few OPE coefficients $b_{\ell,n}$ of the operators $[\phi \bar \phi]_{\ell,n}$.
They are in perfect agreement with the values reported in \cite{Bobev:2015jxa}:
\begin{align}
\label{eq:gff-bbulk}
\begin{split}
b^\GFF_{\ell,n}(\Dp, d)
& = \frac{\left(\Dp+1-d/2\right)_n^2 (\Dp)_{\ell+n}^2}
{\ell! n! \left(\ell + d/2\right)_n (2 \Dp+n+2-d)_n (2 \Dp+\ell+2 n)_\ell} \\
& \hspace{15em}
\times \frac{1}{\left(2 \Dp+\ell+n+1 - d/2\right)_n} \, .
\end{split}
\end{align}
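As for the $\phi\times\phi$ channel, one can check \eqref{eq:gff-bbulk} numerically at the free point with exact rational arithmetic (a sketch; the helper names are ours). At $\Dp=1$, $d=4$ only the $n=0$ family survives, and $b_{1,0}=1/3$:

```python
from fractions import Fraction
from math import factorial

def poch(a, n):
    """Rising factorial (a)_n = a(a+1)...(a+n-1)."""
    r = Fraction(1)
    for j in range(n):
        r *= a + j
    return r

def b_gff(ell, n, dp, d):
    """GFF OPE coefficient b^GFF_{ell,n}(dp, d) of eq. (eq:gff-bbulk), exact rationals."""
    num = poch(dp + 1 - d / 2, n)**2 * poch(dp, ell + n)**2
    den = (factorial(ell) * factorial(n) * poch(ell + d / 2, n)
           * poch(2*dp + n + 2 - d, n) * poch(2*dp + ell + 2*n, ell)
           * poch(2*dp + ell + n + 1 - d / 2, n))
    return num / den

dp, d = Fraction(1), Fraction(4)       # free chiral field in d = 4
assert b_gff(0, 0, dp, d) == 1
assert b_gff(1, 0, dp, d) == Fraction(1, 3)
assert b_gff(0, 1, dp, d) == 0         # only the n = 0 family survives in free theory
```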
\subsection{Wess-Zumino model}
We are now ready to solve the Wess-Zumino model at leading order in $\veps = 4-d$.
There is a well-known Lagrangian formulation for this model \eqref{eq:wz-lagrangian}, which consists of a single chiral superfield $\Phi$ interacting through a cubic superpotential ${\mathcal{W}} \sim \Phi^3$.
In this section we follow a bootstrap approach similar to \cite{Alday:2017zzv}, but it is useful to keep in mind the Lagrangian \eqref{eq:wz-lagrangian}.
At the end, we check that our results are in perfect agreement with the literature.
\subsubsection{A family of solutions to crossing}
At order $O(\veps^0)$ the theory consists of a free chiral multiplet in $d=4$.
The spectrum and OPE coefficients can be obtained from the previous section by setting $\Dp = 1$.
In particular, formulas \eqref{eq:gff-abulk} and \eqref{eq:gff-bbulk} imply that only the leading double-twist families $n=0$ contribute.
When we turn on interactions for small $\veps$, the dimension of the external chiral field gets corrected to $\Delta_\phi = 1 - \delta_\phi \veps + O(\veps^2)$.
Furthermore, the operators in the two OPEs $\phi\times\phi$ and $\phi\times\bar\phi$ can also get corrected, and new families of operators could appear in the OPEs.
Let us start by studying the $\phi\times\phi$ CFT data at the next order $O(\veps)$.
The LIF \eqref{eq:inv-form-nonsus} reconstructs the CFT data from the discontinuity of ${\mathcal{F}}(z,\zb)$.
Using the crossing equation \eqref{eq:susy-crossing}, the discontinuity can be computed in terms of the $\phi \times \bar \phi$ CFT data.
There is one contribution from the bulk identity, which is considered in section \ref{sec:gff-bulk}, and a contribution from anomalous dimensions.
The corrections from anomalous dimensions are of order $O(\veps^2)$ and can be neglected.
Since the inversion formula is not expected to converge for low values of $\ell$, we should also include a term ${\mathcal{H}}(z, \zb)$ with finite support in spin:
\begin{align}
\begin{split}
{\mathcal{F}}(z, \zb) &
= (z \zb)^\Dp
+ \left( \frac{z \zb}{(1-z)(1-\zb)} \right)^\Dp
+ \veps {\mathcal{H}}(z, \zb) \, .
\end{split}
\end{align}
Solutions to crossing with finite support in spin were studied in \cite{Alday:2016jfr}, and it was found that around $d=4$ there is one such solution that takes the form
\begin{align}
{\mathcal{H}}(z, \zb)
= k \big( 1 - \partial_\Delta \big) g^{d=4}_{\Delta,0}(z, \zb) \big|_{\Delta=2} \, .
\end{align}
For now the constant $k$ should be treated as an unknown; its value will be fixed later.
This correlator has the following decomposition in conformal blocks
\begin{align}
{\mathcal{F}}(z, \zb)
=
\left(a_{0,0}^{(0)} + \veps a_{0,0}^{(1)} \right)
g_{2\Dp+\veps\gamma,0}
+ \sum_{\substack{\ell=2 \\ \ell \text{ even}} }^\infty
a_{\ell, 0}
g_{2\Dp+\ell,\ell}
+ \sum_{\substack{\ell=0 \\ \ell \text{ even}} }^\infty
a_{\ell, 1} g_{2\Dp+2+\ell,\ell} \, .
\end{align}
Notice there is a new family of twist-four operators with tree-level OPE coefficients.
To the order we are working, we have $a_{\ell, n} = a_{\ell, n}^\GFF(\Dp,d)$.
The only exception is the $\ell=n=0$ case, when the $[\phi\phi]_{0,0}$ operator has the following CFT data:
\begin{align}
a_{0,0} = 2 + \veps k \, , \qquad
\gamma = - \frac k2 \, .
\end{align}
Let us now turn to the CFT data in the $\phi \times \bar\phi$ OPE.
The inversion formula \eqref{eq:inv-form-sus} has a $t$-channel contribution and a $u$-channel contribution.
As before, one uses the crossing equation \eqref{eq:susy-crossing} and the OPE expansion to see which terms contribute.
The $t$-channel contribution consists of the identity, which has been studied in section \ref{sec:gff-bulk}, and anomalous dimensions that contribute at order $O(\veps^2)$.
An unfamiliar feature of the supersymmetric inversion formula \eqref{eq:inv-form-sus} is that the $u$-channel contribution produces $O(\veps)$ corrections to the CFT data.
Using crossing, the part of $\tilde {\mathcal{G}}(z,\zb)$ proportional to $\log(1-\zb)$ is given by the $[\phi\phi]_{0,0}$ operator we just studied:\footnote{Here $g_{\Delta,\ell}(z,\zb) = (z \zb)^{(\Delta-\ell)/2} \tilde g_{\Delta,\ell}(z, \zb)$ is defined analogously to \eqref{eq:red-blocks}.}
\begin{align}
\label{eq:gtild-sing}
\begin{split}
\tilde {\mathcal{G}}(z,\zb) \big|_{\log(1-\zb)}
& = \frac{\veps}{2} a_{0,0} \gamma (z \zb)^\Dp \log(1-\zb)
\tilde g_{2,0}(1-z, 1-\zb) \\
& = -\frac{\veps}{2} k (z \zb)^\Dp \log(1-\zb) \frac{\log z - \log \zb}{z-\zb} \, .
\end{split}
\end{align}
From this result, it is clear that the only inversion integrals one needs are:
\begin{align}
\label{eq:uchann-inv}
\begin{split}
& \SINV^- \! \big[ \zb^{-n} \log(1-\zb) \big](\beta)
= \frac{2 \pi ^2 \Gamma (\beta+1)}{\Gamma (\beta/2+1)^2} \, , \\
& \SINV^- \! \big[ \zb^{-n} \log(1-\zb) \log \zb \big](\beta)
= 0 \, .
\end{split}
\end{align}
In order to obtain these inversions, we expand the integrand in powers of $(1-\zb)/\zb$, integrate term by term, and in the end resum an asymptotic expansion in powers of $1/\beta$.
This procedure has been explained in detail in \cite{Alday:2017zzv,Alday:2019clp}.
The ingredients \eqref{eq:gtild-sing}-\eqref{eq:uchann-inv} can be combined using \eqref{eq:susy-inv} to find $b(\Delta,\ell)$.
We find that to this order in $\veps$, the $\phi \times \bar\phi$ OPE consists only of the leading-twist family
\begin{align}
{\mathcal{G}}(z, \zb)
= 1
+ \sum_{\ell=0}^\infty
b_{\ell, 0}
G_{2\Dp+\ell+\veps\gamma_\ell,\ell}
+ O(\veps^2) \, ,
\end{align}
where the CFT data can be readily obtained using the inversion formula
\begin{align}
\label{eq:bulk-wz-cft-data-k}
\gamma_\ell = k \frac{(-1)^{\ell+1}}{\ell+1} \, , \qquad
b_{\ell,0}
= b_{\ell,0}^\GFF(\Dp,d) \left( 1
+ k (-1)^{\ell+1} \frac{ \left(H_\ell - H_{2\ell+1}\right)}{(\ell+1)} \veps
+ O(\veps^2) \right) \, .
\end{align}
An important observation is that this result makes sense even for spin $\ell = 0$.
Furthermore, we expect the Lorentzian inversion formula to have better convergence properties in supersymmetric theories \cite{Lemos:2021azv}.
Thus, we make the plausible assumption that \eqref{eq:bulk-wz-cft-data-k} is valid for all $\ell \ge 0$.
\subsubsection{Fixing the coefficients}
We have found a two-parameter family of solutions to crossing, which depends on $k$ and $\delta_\phi$. Let us now fix these coefficients from basic physical requirements.
The first condition is that the stress tensor is conserved.
The stress tensor belongs to a short multiplet with a superprimary of dimension $\Delta=d-1$ and spin $\ell=1$, as can be seen from the form of the superconformal block:
\begin{align}
\label{eq:stress-tensor-block}
G_{d-1,1}
= g_{d-1,1}
+ \frac{d}{4(d+1)} g_{d,2}\,.
\end{align}
As a result, conservation of the stress tensor requires that the operator $[\phi\bar\phi]_{1,0}$ has dimension $d-1$.
This relates $\delta_\phi$ and $k$ as follows
\begin{align}
2 \Delta_\phi + 1 + \veps \gamma_1 = d-1
\qquad \Rightarrow \qquad
\delta_\phi = \frac{k+2}{4} \, .
\end{align}
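This relation is simple enough to verify symbolically. The following sympy sketch (using only the $O(\veps)$ data quoted above, with $\gamma_1 = k/2$ from \eqref{eq:bulk-wz-cft-data-k}) reproduces $\delta_\phi = (k+2)/4$:

```python
import sympy as sp

k, dphi, eps = sp.symbols('k delta_phi epsilon')
Delta_phi = 1 - dphi*eps                     # external dimension to O(eps)
gamma_1 = k/2                                # gamma_ell = k(-1)^(ell+1)/(ell+1) at ell = 1
# stress-tensor multiplet: the ell = 1 superprimary must have dimension d - 1
cond = sp.Eq(2*Delta_phi + 1 + eps*gamma_1, (4 - eps) - 1)
sol = sp.solve(cond, dphi)[0]
assert sp.simplify(sol - (k + 2)/4) == 0
```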
On the other hand, the identification of the operator $[\phi\phi]_{0,0}$ allows us to fix the remaining free parameter.
As discussed in \cite{Bobev:2015jxa}, this operator can be identified with a chiral-primary operator $\phi^2$, in which case:
\begin{align}
[\phi\phi]_{0,0} = \phi^2
\quad \Rightarrow \quad
2 \Delta_\phi + \veps \gamma_0 = 2 \Dp
\quad \Rightarrow \quad
k = 0, \;
\delta_\phi = \frac{1}{2} \, .
\end{align}
We conclude that if $[\phi\phi]_{0,0} = \phi^2$ the theory is free in $d=4-\veps$ dimensions.
A second possibility discussed in \cite{Bobev:2015jxa} is that $[\phi\phi]_{0,0}$ is a level-two descendant of $\bar \phi$:
\begin{align}
[\phi\phi]_{0,0} = (Q^+)^2 \bar \phi
\quad \Rightarrow \quad
2 \Delta_\phi + \veps \gamma_0 = \Dp + 1
\quad \Rightarrow \quad
k = - \frac{2}{3}, \;
\delta_\phi = \frac{1}{3} \, .
\end{align}
This leads to a non-vanishing $k$, so we have found a non-trivial supersymmetric CFT in $d=4-\veps$ dimensions.
In the following section we provide evidence that this CFT is indeed the Wess-Zumino model.
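Both branches can be double-checked by solving the stress-tensor constraint together with each identification of $[\phi\phi]_{0,0}$ at order $O(\veps)$. A minimal sympy sketch (the equations are exactly the ones displayed above, with $\gamma = -k/2$ the anomalous dimension of $[\phi\phi]_{0,0}$):

```python
import sympy as sp

k, dphi, eps = sp.symbols('k delta_phi epsilon')
Delta_phi = 1 - dphi*eps
gamma_00 = -k/2                              # anomalous dimension of [phi phi]_{0,0}
stress = sp.Eq(dphi, (k + 2)/4)              # from stress-tensor conservation

def order_eps(expr):
    """O(eps) coefficient of an expression that vanishes at eps = 0."""
    return sp.expand(expr).coeff(eps)

# Branch 1: [phi phi]_{0,0} = phi^2, i.e. 2 Delta_phi + eps gamma_00 = 2 Delta_phi
chiral = sp.Eq(order_eps(2*Delta_phi + eps*gamma_00 - 2*Delta_phi), 0)
# Branch 2: [phi phi]_{0,0} = (Q^+)^2 bar phi, i.e. 2 Delta_phi + eps gamma_00 = Delta_phi + 1
descendant = sp.Eq(order_eps(2*Delta_phi + eps*gamma_00 - Delta_phi - 1), 0)

sol1 = sp.solve([stress, chiral], [k, dphi], dict=True)[0]
sol2 = sp.solve([stress, descendant], [k, dphi], dict=True)[0]
assert sol1[k] == 0 and sol1[dphi] == sp.Rational(1, 2)
assert sol2[k] == sp.Rational(-2, 3) and sol2[dphi] == sp.Rational(1, 3)
```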
\subsubsection{Summary and discussion}
\label{sec:wz-summary}
Let us summarize our results on the Wess-Zumino model at order $O(\veps)$.
The first result of our bootstrap analysis is the dimension of the external chiral field:
\begin{align}
\label{eq:dimPhi}
\Dp = \frac{d-1}{3} \, .
\end{align}
This is actually a well-known result.
Recall that the Wess-Zumino model has a cubic superpotential ${\mathcal{W}} \sim \Phi^3$, which must have $R$-charge $R_{\mathcal{W}} = 2$ at the fixed point.
As a result, the chiral-primary field $\phi(x)$ must have charge $R_\phi = 2/3$, or equivalently $\Dp = (d-1)/3$, which means that \eqref{eq:dimPhi} is in fact an exact result to all orders in $\veps$.
The $\phi \times \phi$ OPE consists of double-twist operators $[\phi \phi]_{\ell,n}$, which are of the schematic form $\phi \Box^n \partial_{\mu_1} \ldots \partial_{\mu_\ell} \phi$.
The two families $n=0,1$ contribute at order $O(\veps)$, with CFT data given by the GFF results in section \ref{sec:gff-bulk}.
The only exception is the $[\phi \phi]_{0,0}$ operator, which has the following CFT data:
\begin{align}
a_{0,0} = 2 - \frac{2}{3} \veps + O(\veps^2) \, , \qquad
\Delta_{0,0} = 2 \Dp + \frac{\veps }{3} + O(\veps^2) \, .
\end{align}
The first observation is that $\Delta_{0,0} \ne 2 \Dp$ so we cannot interpret $[\phi\phi]_{0,0}$ as a chiral-primary operator $\phi^2$.
This is consistent because the Wess-Zumino model has a chiral ring relation $\phi^2 = 0$ due to the cubic superpotential.
Instead, the correct interpretation is $[\phi\phi]_{0,0} = (Q^+)^2 \bar\phi$, which agrees with our results since $\Delta_{0,0} = \Dp + 1$ and the $R$-charge is conserved.
The presence of such an operator is consistent with the OPE selection rules \cite{Bobev:2015jxa}, and it was also suggested by the numerical bootstrap results of \cite{Bobev:2015vsa}.
Thus, we expect the relation $\Delta_{0,0} = \Dp + 1$ to hold to all orders in $\veps$.
The $\phi \times \bar \phi$ OPE contains superconformal primaries and superconformal descendants, and their precise contribution can be obtained from the superconformal blocks \eqref{eq:superblocks-explicit}.
We expect superprimaries of the schematic form ${\mathcal{O}}_\ell = \phi \partial_{\mu_1} \ldots \partial_{\mu_\ell} \bar \phi + \psi \partial_{\mu_1} \ldots \partial_{\mu_{\ell-1}} \sigma^{\mu_\ell}\bar \psi$, where the precise relative coefficients should be fixed by demanding $S^{\pm}{\mathcal{O}}_\ell = 0$.
From our bootstrap analysis we found the following CFT data:
\begin{align}
\label{eq:bulk-wz-cft-data}
\begin{split}
b_{\ell}
& = b_{\ell,0}^\GFF(\Dp,d) \left(
1
+ (-1)^\ell \frac{2 \left(H_\ell - H_{2\ell+1}\right)}{3 (\ell+1)} \veps
+ {\mathcal{O}}(\veps^2)
\right) \, , \\
\Delta_{\ell}
& = 2 \Dp + \ell + \frac{2}{3} \frac{(-1)^{\ell}}{\ell+1} \veps
+ O(\veps^2) \, .
\end{split}
\end{align}
It is natural to identify the $\ell = 0$ operator with $\phi \bar \phi$, which has dimension $\Delta_{\phi\bar\phi} = 2 + O(\veps^2)$ \cite{Fei:2016sgs}, in perfect agreement with our results.
Finally, using \eqref{eq:stress-tensor-block} one can relate the OPE coefficient $b_1$ to the central charge\footnote{We define the central charge as in \cite{Poland:2018epd}, such that the stress-tensor contribution to the OPE is of the form:
$\langle \phi \bar \phi \phi \bar \phi \rangle \supset \frac14 (\frac{d}{d-1})^2 \frac{\Delta_\phi^2}{C_T} g_{d,2} \, $.
}
\begin{align}
C_T
= \frac{d(d+1)}{(d-1)^2} \frac{\Delta_\phi^2}{b_1}
= \frac{20}{3}-\frac{17 \veps }{9} + O(\veps^2) \, .
\end{align}
Once again this is in perfect agreement with the literature \cite{Fei:2016sgs}, up to a difference in normalization.
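The quoted expansion of $C_T$ can be reproduced independently by assembling the ingredients already derived: $b^{\GFF}_{1,0}$ from \eqref{eq:gff-bbulk}, the $O(\veps)$ correction factor from \eqref{eq:bulk-wz-cft-data}, and $\Dp = (d-1)/3$. A sympy sketch:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
d = 4 - eps
Dp = (d - 1)/3                               # exact dimension of phi in the WZ model

# b^GFF_{1,0} from eq. (eq:gff-bbulk) at ell = 1, n = 0
b_gff_10 = Dp**2 / (2*Dp + 1)
# O(eps) correction factor from eq. (eq:bulk-wz-cft-data) at ell = 1
H = sp.harmonic
b1 = b_gff_10 * (1 - 2*(H(1) - H(3))/(3*2) * eps)
# central charge in the normalization of the footnote
C_T = d*(d + 1)/(d - 1)**2 * Dp**2 / b1
C_T_series = sp.series(C_T, eps, 0, 2).removeO()
assert sp.simplify(C_T_series - (sp.Rational(20, 3) - sp.Rational(17, 9)*eps)) == 0
```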
\section{Wess-Zumino: Monodromy defects}
\label{sec:wess-zumino-defect}
In this section we generalize the analysis of section \ref{sec:wf-monodromy} to superconformal theories with four Poincare supercharges. We study half-BPS monodromy defects that preserve two Poincare supercharges and focus on two-point functions of chiral operators.
We start the section with general results valid for monodromy defects in arbitrary superconformal theories, and then move on to the specific case of a monodromy defect for the Wess-Zumino model studied in section \ref{sec:wess-zumino-bulk}.
\subsection{Superconformal blocks}
Let us start by calculating the relevant superconformal blocks. We use techniques originally developed for bulk four-point functions \cite{Bobev:2015jxa,Bobev:2017jhk} and later applied to superconformal boundaries \cite{Gimenez-Grau:2020jvf}.
Here we only outline the calculation; the interested reader can find further details in the aforementioned references. We stress again that this section applies to general half-BPS codimension-two defects, which need not be monodromy defects.
\subsubsection{Defect superconformal algebra}
As in section \ref{sec:wf-monodromy}, we choose our codimension-two defect to sit at $x^1 = x^2 = 0$.
The subalgebra of conformal transformations that preserve the defect is generated by $D$, $P_a$, $K_a$ and $M_{ab}$, where $a,b = 3, \ldots, d$ are indices parallel to the defect.
Since translation symmetry is partly broken, at most half of the original supercharges can be preserved by the defect.
Following the conventions of section \ref{sec:wess-zumino-bulk}, we choose the preserved supercharges to be:
\begin{align}
\label{eq:defect-supercharges}
{\mathcal{Q}}_1 = Q^+_1 \,, \qquad
{\mathcal{Q}}_2 = Q^-_1 \,, \qquad
{\mathcal{S}}_1 = S^{1+} \,, \qquad
{\mathcal{S}}_2 = S^{1-} \,.
\end{align}
Using the following Clifford algebra representation $\Sigma^i_{{\alpha}{\dot{\alpha}}} = (\bar \Sigma_i^{{\dot{\alpha}}{\alpha}})^* = (\sigma_1, \sigma_2, \sigma_3, i \mathds{1})$, it is possible to check in $d = 3$ and $d=4$ that the supercharges generate a subalgebra of the full superconformal algebra.
For non-integer dimensions $3 \le d \le 4$ this construction is less rigorous; nevertheless, we will obtain perfectly consistent results.
The anticommutators of the supercharges generate translations and special conformal transformations parallel to the defect:
\begin{align}
\{ {\mathcal{Q}}_A, {\mathcal{Q}}_B \} = \widehat \Sigma_{AB}^a P_a\,, \qquad
\{ {\mathcal{S}}_A, {\mathcal{S}}_B \} = \widehat \Sigma_{AB}^a K_a\,, \qquad
a = 3, \ldots, d, \quad
A, B = 1,2 \, .
\end{align}
Similarly, by considering anticommutators of the form $\{ {\mathcal{Q}}, {\mathcal{S}} \}$, we observe that the defect does not preserve $R$--symmetry or transverse rotations independently, but only a particular linear combination of them:\footnote{The full subalgebra for $d=3$ can be found in \cite{Agmon:2020pde} in conventions slightly different to ours.}
\begin{align}
\label{eq:twist-trans-spin}
{\mathcal{M}} = M_{12} + \frac{d-1}{2} R \,.
\end{align}
With these conventions in mind, we proceed to obtain the superconformal blocks.
\subsubsection{Defect channel}
Let us start with the defect OPE $\phi(x) \sim \sum \widehat {\mathcal{O}}(\vec y)$.
In this channel only one operator per defect supermultiplet contributes to the OPE, and as a result, the defect superconformal blocks $\hat F_{\Dh,s}(x,\xb)$ reduce to bosonic blocks $\hat f_{\Dh,s}(x,\xb)$.
In our conventions $\Dh,s$ label the conformal primary exchanged in the OPE, and not the superprimary in the corresponding multiplet.
We justify the above claim following an argument from \cite{Poland:2010wg}.
Since the chirality condition \eqref{eq:chiral-primary} is preserved by the defect supercharges \eqref{eq:defect-supercharges}, it turns out that $[{\mathcal{Q}}_1, \phi(x)] = [{\mathcal{S}}_1, \phi(x)] = 0$.
Inserting these relations in the OPE implies $[{\mathcal{Q}}_1, \widehat {\mathcal{O}}(\vec y)] = [{\mathcal{S}}_1, \widehat {\mathcal{O}}(\vec y)] = 0$.
However, only one operator in each defect supermultiplet can satisfy both of these conditions, hence superblocks in this channel are just standard bosonic blocks.
\subsubsection{Bulk channel}
\label{sec:bulk-4eps}
In the bulk channel, up to four conformal primaries in each supermultiplet can contribute to the OPE.
Their contributions are organized in superconformal blocks which we now calculate.
Following \cite{Dolan:2003hv,Fitzpatrick:2014oza}, we characterize superconformal blocks as solutions to the supersymmetric Casimir equation.
The superconformal Casimir can be split naturally into a non-supersymmetric and a supersymmetric piece: $C_{\text{full}} = C_{\text{bos}} + C_{\text{susy}}$.
The first contribution $\frac12 C_{\text{bos}}$ leads to the differential operator in equation \eqref{eq:cas-eq}.
The second contribution is due to supersymmetry:
\begin{align}
\label{eq:supercas}
\begin{split}
C_{\text{susy}}
& =
- \frac{d-1}{2} R^2
+ \frac{1}{2} [S^{{\dot{\alpha}}+}, Q^-_{\dot{\alpha}}]
+ \frac{1}{2} [S^{{\alpha}-}, Q^+_{\alpha}]\,.
\end{split}
\end{align}
Following \cite{Bobev:2015jxa}, our goal is to massage \eqref{eq:supercas} into a differential operator that can be added to \eqref{eq:cas-eq}.
Using the commutation relations, the chirality properties of $\phi$ and $\bar \phi$, and equation (51) from~\cite{Bobev:2015jxa} we find:
\begin{align}
\begin{split}
\left[ C_{\text{susy}}, \phi(x_1) \bar \phi(x_2) \right] |0\rangle
& = i x_{12}^\mu \bar \Sigma_\mu^{{\dot{\alpha}}{\alpha}}
\left[ Q^-_{\dot{\alpha}}, \phi_1(x_1) \right]
\left[ Q^+_{\alpha} , \bar \phi_2(x_2) \right] |0\rangle
+ 4 \Dp \phi(x_1) \bar \phi(x_2) |0\rangle \,.
\end{split}
\end{align}
Using superconformal Ward identities as in \cite{Bobev:2015jxa,Gimenez-Grau:2020jvf} to rewrite the $Q$-dependent part as a differential operator we get
\begin{align}
\label{eq:blk-contrib-PPb}
\frac12 C_{\text{susy}} \langle \phi_1(x_1) \bar \phi_2(x_2) \rangle
\to
-\big[(1-x) \partial_x
+ \xb (1-\xb) \partial_\xb \big] F_{\Delta,\ell}(x,\xb) \,.
\end{align}
Combining the bosonic equation \eqref{eq:cas-eq}, the supersymmetric one \eqref{eq:blk-contrib-PPb}, and using the appropriate supersymmetric eigenvalue $c_2 = \Delta (\Delta - d + 2) + \ell (\ell+d-2)$, we obtain a differential equation for the superconformal block $F_{\Delta,\ell}(x,\xb)$.
In $d=4$ the solution with correct boundary conditions takes a simple form:
\begin{align}
\begin{split}
F_{\Delta,\ell}(x, \xb)
= \frac{\sqrt{(1-x)(1-\xb)}}{1 - x\xb}
& \Big(
k_{\Delta-\ell-1}^{1,-1}(1-x) k_{\Delta+\ell+1}^{1,1}(1-\xb) \\
& + (-1)^\ell k_{\Delta+\ell+1}^{1,-1}(1-x) k_{\Delta-\ell-1}^{1,1}(1-\xb)
\Big) \,.
\end{split}
\end{align}
For general $d$, we use an expansion of the form
\begin{align}
\label{eq:lightcone-series-superblocks}
F_{\Delta,\ell}(x, \xb)
= \sum_{n=0}^\infty \sum_{j=-n}^n B_{n,j}(\Delta,\ell)
(1-x)^{(\Delta-\ell)/2 + n} (1-\xb)^{-1/2} k_{\Delta+\ell+1+2j}^{1,1}(1-\xb) \, ,
\end{align}
and we fix the coefficients using the supercasimir equation.
The procedure is easy to implement using a computer algebra system.
For the first few coefficients we find:
\begin{align}
B_{0,0}(\Delta,\ell) = 1 \, , \quad
B_{1,-1}(\Delta,\ell) = \frac{(2-d) \ell}{d+2 \ell-4} \, , \quad
B_{1,1}(\Delta,\ell) = \frac{(2-d) \Delta (\Delta +\ell) (\Delta +\ell+2)}{16 (2 \Delta +4-d) (\Delta +\ell+1)^2} \, .
\end{align}
Finally, let us mention that the superconformal block has a decomposition into a sum of four bosonic blocks:
\begin{align}
F_{\Delta,\ell}(x, \xb)
= f_{\Delta,\ell}(x, \xb)
+ a_1 \, f_{\Delta+1,\ell+1}(x, \xb)
- a_2 \, f_{\Delta+1,\ell-1}(x, \xb)
- a_3 \, f_{\Delta+2,\ell}(x, \xb) \, .
\end{align}
The coefficients can be found in \eqref{eq:coeffs-blocks}.
The fact that the coefficients are the same as the four-point blocks of chiral operators might seem surprising at first.
Actually, with the identification \eqref{eq:map-xz} the defect bulk blocks $F_{\Delta,\ell}(x,\xb)$ are the analytic continuation of the four-point blocks $\tilde G_{\Delta,\ell}(z,\zb)$ \cite{Gimenez-Grau:2020jvf}.
What we have found is that the close connection between codimension-two defects and four-point functions also holds at the superconformal level.
\subsection{Free and GFF half-BPS monodromy defect}
\label{sec:gff-superdef}
Armed with the superconformal blocks, we can now bootstrap superconformal monodromy defects.
In this section we focus on defects in (generalized) free theories, while we leave the more interesting defect in the Wess-Zumino model for the next section.
Fortunately, we can recycle many results from the non-supersymmetric case studied in section \ref{sec:wf-monodromy}.
Let us start with the case of a free bulk theory preserving four supercharges.
Since $\phi(x)$ is a free field, its correlation function ${\mathcal{G}}^\free_{d,v}(x, \xb)$ is independent of the rest of the field content of the theory, so it is given by the non-supersymmetric formulas \eqref{eq:free-4d-corr}-\eqref{eq:free-lightcone-corr}.
Moreover, the defect superblocks reduce to non-supersymmetric blocks, so the defect CFT data is given by \eqref{eq:mft-defect}.
The story is more interesting in the bulk channel, because now in order to obtain the CFT data one must use superconformal blocks:
\begin{align}
{\mathcal{G}}^\free_{d,v}(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{(d-2)/2} \left(
1
+ \sum_{\ell=0}^\infty d_\ell^\free F_{d-2+\ell,\ell}(x, \xb)
\right) \, .
\end{align}
Once again, we use the shorthand notation $d_{\mathcal{O}} = \lambda_{\phi\bar\phi{\mathcal{O}}}a_{\mathcal{O}}$.
Since the bulk theory is free, only the leading-twist family contributes.
Using the series representation \eqref{eq:lightcone-series-superblocks} for the superblocks, we can extract the CFT data order by order in $(1-x)$ and $(1-\xb)$.
For the first few coefficients we find:
\begin{align}
\begin{split}
d_0^\free = C^\free_{d,v}, \quad
d_1^\free = \frac{(d-2) (v-1)}{2 (d-1)} C^\free_{d,v}, \quad
d_2^\free = \frac{(d-2) (v-1) (d v-d+v)}{8 (d-1) (d+1)} C^\free_{d,v} \, .
\end{split}
\end{align}
Similarly to section \ref{sec:wf-monodromy}, the coefficients satisfy a two-step recursion relation which can be used to efficiently go to high values of $\ell$:\footnote{Once again, in the $d = 4$ case it is possible to obtain a closed analytic formula:
\begin{align}
\begin{split}
d_\ell^\free
\stackrel{d=4}{=}
\frac{\Gamma (\ell) \Gamma (\ell+1) \Gamma (\ell+2) \sin ^2(\pi v)}
{2^{4 \ell+3}\pi \Gamma(\ell+3/2)^2}
\Bigg[
\frac{\Gamma (2-v)}{\Gamma (\ell-v+2)}
{}_3F_2\left( {\begin{array}{*{20}{c}}
{\ell, \ell+1, \ell+2 } \\
{2\ell+3, \ell-v+2}
\end{array};1} \right) \\
+ (-1)^{\ell+1} \frac{\Gamma (v)}{\Gamma (\ell+v)}
{}_3F_2\left( {\begin{array}{*{20}{c}}
{\ell, \ell+1, \ell+1 } \\
{2\ell+3, \ell+v}
\end{array};1} \right)
\Bigg] \, .
\end{split}
\end{align}}
\begin{align}
\begin{split}
d^\free_{\ell +2}
& = \frac{(d+2 \ell ) \left(d^2 (v-1)+d (4 v-3) \ell +d+(4 v-3) \ell ^2-v\right)}{2 (\ell +2) (d+\ell ) (d+2 \ell -1) (d+2 \ell +1)} d^\free_{\ell+1} \\
& \quad +\frac{\ell (d+\ell -2) (d+2 \ell -2) (d+2 \ell )}{16 (\ell +2) (d+\ell ) (d+2 \ell -1)^2} d^\free_{\ell} \, .
\end{split}
\end{align}
The next simplest example is a monodromy defect in a bulk GFF theory.
As in the free case, the full correlator ${\mathcal{G}}^\GFF_{\Dp,d,v}(x, \xb)$ is the same as in the non-supersymmetric theory, and the defect CFT data is given by \eqref{eq:mft-defect}.
For the bulk data we can use \eqref{eq:mft-oeps}, which is the leading order correlator in $\veps = 4-d$ around the free value $\Dp = 1 - \delta_\phi \veps$. Expanding in bulk blocks
\begin{align}
{\mathcal{G}}^\GFF_{\Dp,d,v}(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{\Dp} \left(
1
+ \sum_{n=0}^\infty \sum_{\ell=0}^\infty d_{\ell,n}^\GFF F_{2\Dp+\ell+2n,\ell}(x, \xb)
\right) \, ,
\end{align}
it is relatively straightforward to extract CFT data up to high values of $\ell$ and $n$ using the expansion \eqref{eq:lightcone-series-superblocks}.
Some of the low-lying coefficients are
\begin{align}
\label{eq:bulk-gff-sus-opes}
\begin{split}
d_{0,0}^\GFF & = C_{\Dp,d,v} + O(\veps^2) \, , \\
d_{1,0}^\GFF & = \frac{1}{9} (v-1) (3 - \delta_\phi \veps) C_{\Dp,d,v} + O(\veps^2) \, , \\
d_{0,1}^\GFF & = \frac{\veps}{96} (2 \delta_\phi -1) v (v-1) (v-2) (v-3) + O(\veps^2) \, , \\
\end{split}
\end{align}
while we give more coefficients in the attached \texttt{mathematica}\xspace notebook.
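A quick cross-check against the free-theory result of the previous subsection: as $\veps \to 0$ (assuming the normalization $C_{\Dp,d,v} \to C^\free_{d,v}$ in this limit), the coefficient $d^\GFF_{1,0}$ should reduce to $d^\free_1$ evaluated at $d=4$. A sympy sketch:

```python
import sympy as sp

v, C, eps, dphi = sp.symbols('v C epsilon delta_phi')

d1_free = (4 - 2)*(v - 1)/(2*(4 - 1)) * C                # d^free_1 at d = 4
d1_gff = sp.Rational(1, 9)*(v - 1)*(3 - dphi*eps) * C    # from eq. (eq:bulk-gff-sus-opes)
assert sp.simplify(d1_gff.subs(eps, 0) - d1_free) == 0
```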
\subsection{Wess-Zumino model}
Finally, we proceed to bootstrap the two-point function of chiral operators in the Wess-Zumino model to order $O(\veps)$ in the $\veps$--expansion.
The derivation requires the bulk CFT data obtained in section \ref{sec:wess-zumino-bulk} and the inversion formula of section \ref{sec:inv-form}.
Although the calculations for the Wess-Zumino model are similar in spirit to the Wilson-Fisher fixed point, in practice they are more challenging and require extra technology which we develop in the appendix.
Let us remind the reader that the Wess-Zumino model is a theory of a single chiral superfield with cubic superpotential ${\mathcal{W}} \sim \Phi^3$.
At the fixed point, the chiral-primary field $\phi(x)$ must have charge $R_\phi = 2/3$, or equivalently $\Dp = (d-1)/3$.
Since the external dimension differs from free theory at order $O(\veps)$, there is a GFF contribution with $\delta_\phi = 1/3$, which has been discussed in section \ref{sec:gff-superdef}.
Furthermore, as discussed in section \ref{sec:wess-zumino-bulk}, the bulk OPE contains double-twist operators $[\phi\bar\phi]_{\ell,n}$.
Importantly, the leading-twist operators $n=0$ have OPE coefficients of order $O(1)$ and anomalous dimensions $\gamma_{\ell}$ of order $O(\veps)$, see \eqref{eq:bulk-wz-cft-data}.
As a result, the entire leading-twist family contributes to $\Disc {\mathcal{G}}(x,\xb)$.
Indeed, the part of the correlator with non-vanishing discontinuity is
\begin{align}
\label{eq:sing-part-wz}
\begin{split}
{\mathcal{G}}(x,\xb) |_{\text{singular}}
& = \frac{\veps}{2} (x \xb)^{\Dp/2} \log \! \big[ (1-x)(1-\xb) \big]
\sum_{\ell=0}^\infty d^\free_{\ell} \gamma_{\ell} \tilde F_{2\Dp+\ell, \ell}(x, \xb) \, \\
& = - \frac{\veps}{3} v(v-1) (x \xb)^{1/2} \log \! \big[ (1-x)(1-\xb) \big]
\frac{h \! \left(\frac{\xb-1}{\xb}\right) - h(1-x)}{1 - x \xb} \, ,
\end{split}
\end{align}
where we introduced $h(z) = z \, _3F_2(1,1,v+1;2,3;z)$.
From here it is in principle straightforward to extract the defect CFT data using the bulk-to-defect Lorentzian inversion formula.
However, for the sake of clarity, we defer the details to appendix \ref{sec:app-inversion-wz}.
Below we present the defect CFT data, which contains contributions from the bulk identity (GFF) and from \eqref{eq:sing-part-wz}.
\paragraph{Leading transverse-twist family:}
The first family consists of defect operators of transverse twist approximately one.
Since these operators are present in the free theory, their conformal dimensions can get corrected at this order in perturbation theory:
\begin{align}
\begin{split}
\label{eq:dim-lead-trans-twist}
\Dh_{s,0} & = \frac{d-1}{3} + |s| + \veps \hat \gamma_{s}^{(1)} + O(\veps^2) \, , \qquad
\hat \gamma_{s}^{(1)}
= \begin{cases}
0 & \text{for } s>0 \, , \\
\frac{2 (v-1)}{3 |s|} & \text{for } s<0 \, .
\end{cases}
\end{split}
\end{align}
Furthermore, their OPE coefficients also get corrected as follows:
\begin{align}
\begin{split}
& \mu_{s>0,0}
= 1
+ \frac{-(2 |s| + 1 - v)H_{|s|} + (|s|+1-v) H_{|s|+1-v} - (1-v) H_{1-v}}{3 |s|} \veps
+ O(\veps^2) \, , \\
& \mu_{s<0,0}
= 1
+ \frac{-(2|s| + v - 1)H_{|s|} + (|s|+v-1) H_{|s|+v-1} - (v-1) H_{v-1}}{3 |s|} \veps
+ O(\veps^2) \, .
\end{split}
\end{align}
An important feature of the CFT data is that it is not symmetric under $s \leftrightarrow -s$.
Even though this seems surprising at first, it follows because $\phi(x)$ is a complex field: complex conjugation relates positive transverse-spin modes of $\phi(x)$ with the negative modes of $\bar \phi(x)$.
From a technical point of view, this asymmetry is due to \eqref{eq:sing-part-wz} not being symmetric under $x \leftrightarrow \xb$.
In particular, one would observe a similar phenomenon in the $O(N)$ Wilson-Fisher fixed point starting at order $O(\veps^2)$ for $N>1$.
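The $s \leftrightarrow -s$ asymmetry can be made precise: since conjugation exchanges $\phi$ and $\bar\phi$ modes, one expects the two coefficients to map into each other under $s \to -s$ combined with $v \to 2-v$, and this indeed holds identically for the expressions above. A sympy sketch (where the symbol $s$ plays the role of $|s|$):

```python
import sympy as sp

s, v, eps = sp.symbols('s v epsilon', positive=True)   # s plays the role of |s|
H = sp.harmonic

mu_pos = 1 + (-(2*s + 1 - v)*H(s) + (s + 1 - v)*H(s + 1 - v)
              - (1 - v)*H(1 - v))/(3*s) * eps
mu_neg = 1 + (-(2*s + v - 1)*H(s) + (s + v - 1)*H(s + v - 1)
              - (v - 1)*H(v - 1))/(3*s) * eps

# complex conjugation: positive-spin phi modes <-> negative-spin bar-phi modes,
# which acts on the monodromy parameter as v -> 2 - v
assert sp.simplify(mu_pos.subs(v, 2 - v) - mu_neg) == 0
```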
\paragraph{Subleading transverse-twist families:}
The next families of defect operators have transverse twist $2n + 1$.
At this order in perturbation theory, only the tree-level dimensions contribute
\begin{align}
\begin{split}
\Dh_{s,n} & = 1 + |s| + 2n + O(\veps) \quad \; \text{for} \quad n \ge 1 \, .
\end{split}
\end{align}
Notice that these families receive contributions both from the bulk identity and from \eqref{eq:sing-part-wz}, and as a result, the defect OPE coefficients differ from the GFF values:
\begin{align}
\label{eq:wz-def-norm}
\begin{split}
\mu_{s>0,n}
= \frac{|s| + 2 (1-v)}{6 n (|s| + n)} \veps + O(\veps^2) \, , \qquad
\mu_{s<0,n}
=\frac{|s| + 2 (v-1)}{6 n (|s| + n)} \veps + O(\veps^2) \, .
\end{split}
\end{align}
\paragraph{Fractional transverse-twist families:}
Perhaps surprisingly, there is another family of defect operators with non-integer transverse twist.
Indeed, their tree-level conformal dimensions are
\begin{align}
\begin{split}
\Dh_{s>0,n}^{\text{fr}} = 1 + |s| + 2(n + 1 - v) \, , \quad
\Dh_{s<0,n}^{\text{fr}} & = 1 + |s| + 2(n + v - 1) \, ,
\qquad \text{for} \quad n \ge 1 \, .
\end{split}
\end{align}
Notice that this family is generated exclusively from the bulk leading-twist family \eqref{eq:sing-part-wz}.
Once more, the tree-level OPE coefficients take a rather simple form:
\begin{align}
\label{eq:wz-def-weird}
\begin{split}
& \mu_{s>0,n}^{\text{fr}}
= \frac{n}{3 (n+1-v) (|s| + n + 1-v)}\, , \quad
\mu_{s<0,n}^{\text{fr}}
=\frac{n}{3 (n+v-1) (|s| + n + v-1)} \, .
\end{split}
\end{align}
Having reviewed the structure of the defect CFT data, we can now resum the defect-channel expansion in order to obtain the full correlation function at order $O(\veps)$:
\begin{align}
{\mathcal{G}}^{\GFF}_{\frac{d-1}{3},d,v}(x,\xb)
+ {\mathcal{G}}_{\text{WZ}}(x,\xb)
= \sum_{s \in -v + \mathbb{Z}} \left(
\sum_{n=0}^\infty \mu_{s,n} \hat f_{\Dh_{s,n}, s}(x, \xb)
+ \sum_{n=1}^\infty \mu_{s,n}^{\text{fr}} \hat f_{\Dh_{s,n}^{\text{fr}}, s}(x, \xb)
\right) \, .
\end{align}
The GFF part can be found in equation \eqref{eq:mft-oeps} with $\delta_\phi = 1/3$.
The new contribution from the Wess-Zumino model is significantly harder:
{\allowdisplaybreaks
\begin{align}
\label{eq:wz-full}
{\mathcal{G}}_{\text{WZ}}(x,\xb)
& = -\frac{\veps}{3} \frac{\sqrt{x \xb}}{(1-x \xb)} \Bigg[ \nonumber \\
& + x^v (1-v) \big(j_{2 v-1,v}(x)-j_{v,v}(x)-H_{v-1} \Phi _v(x)+\Phi _v(x) \log (x \xb)\big) \nonumber \\
& +\xb^{1-v} (1-v) \big(j_{1-v,1-v}(\xb)-j_{2-2 v,1-v}(\xb)+H_{1-v} \Phi _{1-v}(\xb)\big) \nonumber \\
& + x^v \frac{H_{v-1}-H_{2 v-2}+\Phi _v(x)-\Phi _{2 v-1}(x)}{1-x} \nonumber \\
& + \xb^{1-v} \frac{H_{-v}-H_{1-2 v}+\Phi _{1-v}(\xb)-\Phi _{2-2 v}(\xb)}{1-\xb} \nonumber \\
& -x^{1-v} \xb^{2-2 v} \left((v-1) J_{2-2 v,1-v}(\xb,x)+\frac{\Phi _{2-2 v}(\xb)-x \Phi _{2-2 v}(x \xb)}{1-x}\right) \nonumber \\
& +x^{2 v-1} \xb^{v-1} \left((v-1) J_{2 v-1,v-1}(x,\xb)-\frac{\Phi _{2 v-1}(x)-\xb \Phi _{2 v-1}(x \xb)}{1-\xb}\right) \nonumber \\
& -\xb x^{v+1} \left((v-1) J_{v+1,1}(x,\xb)-\frac{\Phi _{v+1}(x)-\xb \Phi _{v+1}(x \xb)}{1-\xb}\right) \nonumber \\
& +x \xb^{2-v} \left((v-1) J_{2-v,1}(\xb,x)+\frac{\Phi _{2-v}(\xb)-x \Phi _{2-v}(x \xb)}{1-x}\right) \Bigg] \, .
\end{align}}
We could not express this correlation function in terms of elementary functions.
Instead, we introduced the following two special functions
\begin{align}
\begin{split}
j_{a,b}(x)
\equiv \sum_{n=0}^\infty \frac{x^n H_{n+a}}{n+b} \, , \qquad
J_{a,b}(x, \xb)
\equiv \sum_{n=0}^\infty \sum_{m=0}^n \frac{x^n}{(n+a)} \frac{\xb^m}{(m+b)} \, .
\end{split}
\end{align}
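For $|x|, |\xb| < 1$ both series converge and can be evaluated by direct partial summation, using the recurrence $H_{n+a} = H_{n-1+a} + 1/(n+a)$. A minimal stdlib-Python sketch (names and truncation choices are ours; the integral tail estimate in the harmonic-number helper is an approximation):

```python
import math

def harmonic(a: float, terms: int = 100000) -> float:
    """H_a = sum_{k>=1} (1/k - 1/(k+a)), with a midpoint-rule tail estimate."""
    s = sum(1.0/k - 1.0/(k + a) for k in range(1, terms + 1))
    # Tail: sum_{k>terms} a/(k(k+a)) ~ log(1 + a/(terms + 0.5)).
    return s + math.log1p(a / (terms + 0.5))

def j_func(a: float, b: float, x: float, nmax: int = 2000) -> float:
    """Truncated series for j_{a,b}(x) = sum_n x^n H_{n+a}/(n+b), |x| < 1."""
    Hna = harmonic(a)        # H_a, then updated by recurrence
    total, xn = 0.0, 1.0
    for n in range(nmax + 1):
        if n > 0:
            Hna += 1.0 / (n + a)   # H_{n+a} = H_{n-1+a} + 1/(n+a)
            xn *= x
        total += xn * Hna / (n + b)
    return total

def J_func(a: float, b: float, x: float, xb: float, nmax: int = 2000) -> float:
    """Truncated series J_{a,b}(x,xb) = sum_n sum_{m<=n} x^n xb^m/((n+a)(m+b))."""
    total, inner, xn, xbm = 0.0, 0.0, 1.0, 1.0
    for n in range(nmax + 1):
        if n > 0:
            xn *= x
            xbm *= xb
        inner += xbm / (n + b)     # running inner sum over m = 0..n
        total += xn * inner / (n + a)
    return total
```

For instance, the sketch reproduces the closed form $j_{0,1}(x) = \ln^2(1-x)/(2x)$ at $x = 1/2$.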
In appendix \ref{sec:resum-wz} we derive some interesting properties of these functions; in particular, we give an efficient algorithm to generate their expansions in powers of $(1-x)$ and $(1-\xb)$.
This allows us to expand the correlation function in the bulk channel
\begin{align}
{\mathcal{G}}_{\text{WZ}}(x, \xb)
= \left( \frac{\sqrt{x \xb}}{(1-x) (1-\xb)} \right)^{(d-1)/3} \left(
1
+ \sum_{n=0}^\infty \sum_{\ell=0}^\infty d_{\ell,n}^{\text{WZ}} F_{\Delta_{\ell,n},\ell}(x, \xb)
\right) \, .
\end{align}
Once again, we can extract the CFT data using software such as \texttt{Mathematica}\xspace.
Some of the low-lying bulk OPE coefficients are
\begin{align}
\label{eq:bulk-wz-opes}
& d_{0,0}^{\text{WZ}}
= \veps (v-1) \left(
\frac{1}{3} (2 v-1) \left(H_{-2 v}+H_{2 v}\right)
-\frac{1}{6} (3 v-2) \left(H_{-v}+H_v\right)
-\frac{5 v^2-v+1}{6 v}\right), \nonumber \\ \quad
& d_{1,0}^{\text{WZ}}
= \veps (v-1)^2 \left(
\frac{1}{18} (2 v-1) \left(H_{-2 v}+H_{2 v}\right)
-\frac{1}{36} (5 v-2) \left(H_{-v}+H_v\right)
+\frac{5 v^3+10 v^2-12 v+3}{108 v (v-1)}\right), \nonumber \\ \quad
& d_{0,1}^{\text{WZ}}
= \frac{\veps}{144} v (v-1) \left(17 v^2-37 v+18\right) \, .
\end{align}
Let us emphasize that the total OPE coefficients are obtained by combining \eqref{eq:bulk-gff-sus-opes} and \eqref{eq:bulk-wz-opes}, namely $d_{\ell,n} = d_{\ell,n}^{\text{WZ}} + d_{\ell,n}^{\GFF}$.
An interesting feature of the CFT data is that the leading-twist family $d_{\ell,0}^{\text{WZ}}$ depends on harmonic numbers $H_a$, while all higher-twist families $d_{\ell,n\ge1}^{\text{WZ}}$ have only polynomial dependence in $v$.
This can be understood heuristically remembering that $d_{\mathcal{O}} = \lambda_{\phi\bar\phi{\mathcal{O}}} a_{{\mathcal{O}}}$.
For the leading-twist family, $\lambda_{\phi\bar\phi{\mathcal{O}}} \sim O(\veps^0)$, so the $O(\veps)$ term in the one-point coefficient $a_{\mathcal{O}}$ contributes to $d_{\mathcal{O}}$.
Therefore, our result captures a one-point function computed at one loop in terms of Feynman diagrams, where the one-loop integrals would be responsible for the appearance of harmonic numbers.
On the other hand, for higher-twist families we have $\lambda_{\phi\bar\phi{\mathcal{O}}} \sim O(\veps)$, so only the tree-level part of $a_{\mathcal{O}}$ contributes to $d_{\mathcal{O}}$, giving an intuitive reason why no harmonic numbers appear in this case.
As usual, we give a larger list of bulk coefficients in the notebook attached to this publication.
\section{Conclusions}
In this work we used analytical bootstrap techniques to study monodromy defects in the $\veps$--expansion.
This program has been highly successful for four-point functions without defects, where CFT data has been extracted up to fourth order in $\veps$ for the Wilson-Fisher fixed point \cite{Alday:2017zzv,Henriksson:2018myn}.
Our analysis can be considered as the first step towards applying these techniques to monodromy defects in CFT.
Our main result is equation \eqref{eq:wz-full}, which describes the full leading-order two-point correlator of chiral fields in the Wess-Zumino model. In order to obtain the defect correlator, it was necessary to calculate the leading order CFT data of the Wess-Zumino model without defects (see section \ref{sec:wz-summary}), a result that is interesting on its own and that we plan to extend to higher orders in the future.
We also studied monodromy defects in the Wilson-Fisher $O(N)$ model, reproducing and in some cases improving previous results.
A natural extension of this work is to consider higher orders in the $\veps$--expansion, although this will require dealing with degeneracies in the bulk spectrum.
Another related system is the large-$N$ limit of the $O(N)$ model, which has been studied using bootstrap in \cite{Alday:2019clp}.
Monodromy defects in the large-$N$ limit have been studied in \cite{Giombi:2021uae}, and they might be good candidates for a bootstrap analysis.
Yet another system in which the techniques used in this paper are directly applicable is a Wilson line defect in ${\mathcal{N}}=4$ SYM at strong coupling.
The strong-coupling planar spectrum of ${\mathcal{N}}=4$ SYM contains double-trace operators which are killed by the discontinuity in the inversion formula. This is very similar to the setup of this paper, and indeed two-point functions of half-BPS operators can be reconstructed by inverting a finite number of conformal blocks \cite{Barrat:2021yvp}. It might also be possible to consider other maximally-supersymmetric models in $3 \le d \le 6$, and bootstrap their defect correlators in suitable limits.
On a more speculative side, the functions studied in appendix \ref{sec:resum-wz} are close cousins of the Hurwitz-Lerch zeta function. Perhaps these functions will find applications in other perturbative calculations or in other branches of mathematical physics. Finally, the study of higher-point functions is one of the long-term goals of the bootstrap. Progress in this direction was made in \cite{Buric:2020zea}, where higher-point functions in the presence of defects were studied. Eventually, one should be able to obtain the corresponding Lorentzian inversion formulas, and implement the multi-point bootstrap in order to obtain even more restrictive constraints.
\section*{Acknowledgments}
We are particularly grateful to J.~Barrat, E.~Lauria and P.~van Vliet for discussions and collaboration on related projects.
AG wants to acknowledge S.~Lacroix for many useful comments.
We also thank I.~Buric, A.~Kaviraj, J.~Rong and V.~Schomerus for interesting discussions, and the anonymous JHEP referee for many comments that helped improve this work.
Finally, we thank the Simons Collaboration on the Non-perturbative Bootstrap for many stimulating activities.
This work is supported by the DFG through the Emmy Noether research group ``The Conformal Bootstrap Program'' project number 400570283.
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{S}{emi-supervised} learning is of great significance for learning left atrium (LA) segmentation models with insufficient labelled data. Automated and accurate LA segmentation is a crucial task to aid the diagnosis and treatment of patients with atrial fibrillation (AF) \cite{razeghi2020fully,xiong2019fully,yang2020simultaneous,chen2021jas}. Deep learning based approaches have great potential for LA segmentation \cite{zhang2021fully,xiong2020global}. However, it is expensive and laborious for experienced experts to annotate the large amounts of data required to train an accurate deep learning based LA segmentation model \cite{yu2019uncertainty}. Semi-supervised learning can alleviate the need for labelled data by effectively exploiting unlabelled data to learn deep models \cite{cao2021uncertainty}, and is therefore able to overcome the shortage of labelled data, advancing accurate LA segmentation and benefiting the subsequent diagnosis and treatment of patients with AF.
Generalising semi-supervised learning to cross-domain data for LA segmentation is of high importance for improving model robustness. Semi-supervised learning aims to mine effective hidden information from unlabelled data to support model learning \cite{chapelle2009semi}. Because of noise interference and the limited collection capabilities of data sources, a single data domain cannot always provide sufficient high-quality unlabelled data and abundant data characteristics for robust semi-supervised LA segmentation. For example, a single data domain usually offers only limited variety in LA contrast, shape and texture for robust model learning. Compared to a single data domain, cross-domain data not only provides more available high-quality data, but also provides complementary domain information and more comprehensive data characteristics to describe the LA of interest \cite{yang2019comprehensive}. Therefore, it is important to effectively ensemble cross-domain data for robust semi-supervised LA segmentation.
However, generalising semi-supervised learning to cross-domain data is difficult due to the difference in distributions and the sample mismatch, as shown in Fig. \ref{fig:ahdc_v}: (1) The difference of cross-domain data distributions. Semi-supervised learning with generative models, low-density separation or graph-based methods can work, but relies on a consistent data distribution under certain model assumptions, including the smoothness, cluster or manifold assumptions \cite{chapelle2009semi}. Performance degradation of the semi-supervised model may occur whenever the assumptions adopted for a particular task do not match the characteristics of the data distribution \cite{chapelle2009semi}. In the real world, cross-domain data collected from different sources exhibit heterogeneous properties \cite{campello2021multi}, which leads to differences in distributions. For example, in medical image analysis, because of different subject groups, scanners, or scanning protocols, the distributions of cross-domain data differ \cite{cheplygina2018transfer}. Therefore, generalising semi-supervised learning to cross-domain data directly is not trivial. (2) Sample mismatch of cross-domain data. Semi-supervised learning with disagreement-based methods requires matched samples from different domains, where the information of different domains is regarded as different characteristics of the matched samples \cite{dong2018tri}. Since the collection of cross-domain data is independent, the samples in different domains are not matched. This restricts the cross-domain generalisation of semi-supervised learning.
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.99}{
\includegraphics[width=1\linewidth]{Challenge_solution.pdf
}
\end{center}
\caption{Our proposed adaptive hierarchical dual consistency overcomes the differences in data distributions and the sample mismatch between domains for cross-domain semi-supervised segmentation.}
\label{fig:ahdc_v}
\end{figure}
In order to overcome the issues mentioned above, we propose an \emph{\textbf{A}daptive \textbf{H}ierarchical \textbf{D}ual \textbf{C}onsistency} framework called \textbf{AHDC} for semi-supervised LA segmentation on cross-domain data, as shown in Fig. \ref{fig:ahdc_v}. The AHDC consists of two modules: (1) A Bidirectional Adversarial Inference (BAI) module, which performs mutual domain adaptation to align the distributions of, and match the samples between, two different data domains. The adapted domains and the two corresponding source domains are merged to obtain two matched domains, which not only expand the number of samples in each source domain, but also provide complementary representations for the samples in that domain. (2) A Hierarchical Dual Consistency learning (HDC) module, which performs hierarchical semi-supervised segmentation with dual consistency on the obtained matched domains. The HDC builds two dual-modelling networks applied to the matched domains for mining complementary information both intra-domain and inter-domain. Within a specific domain, the segmentation task is represented as global modelling and local modelling, and we enforce a consistency between the complementary LA estimates for intra-domain semi-supervised learning. For the inter-domain part, we build a consistency between the outputs of the dual-modelling networks estimated from different domains to exploit the complementary domain information.
Our main contributions are summarised as follows:
\begin{itemize}
\item We propose a semi-supervised LA segmentation framework that generalises across domains. It provides a solution for generalising semi-supervised LA segmentation to cross-domain data that remains effective under both distribution differences and sample mismatch.
\item We propose a paradigm of hierarchical dual consistency learning to mine effective information both inter-domain and intra-domain. It explicitly enforces consistency under complementary information.
\item We have conducted comprehensive experiments on four 3D MR datasets from different centres and one 3D CT dataset. The experimental results demonstrate the feasibility and superiority of our proposed cross-domain semi-supervised segmentation framework.
\end{itemize}
\section{Related Work}
\subsection{Domain Adaptation}
Domain adaptation, which aims to overcome the distribution differences between domains, has drawn great attention in computer vision \cite{patel2015visual}. Because generative adversarial networks (GANs) have great superiority in capturing data distributions, they have been widely used in domain adaptation for aligning the distributions of different domains \cite{li2020towards,zhang2020collaborative, chen2019discriminative, chen2020unsupervised, chen2019joint}. Different GAN-based structures achieve domain adaptation in different ways. For unidirectional domain adaptation, a GAN usually leverages a generator and a discriminator to push the source-domain distribution towards the target-domain distribution by adversarial learning. To handle high-resolution images with an emphasis on pixel-level reconstruction, Pix2pixHD extends conditional GANs, leveraging a decomposed generator and three multi-scale discriminators to achieve domain adaptation \cite{wang2018high}. For bidirectional domain adaptation, CycleGAN \cite{zhu2017unpaired}, DualGAN \cite{yi2017dualgan} and DiscoGAN \cite{kim2017learning} concatenate two generators with two discriminators to enforce two cycle-consistency constraints for the bidirectional adaptation of two different domains. ALI \cite{dumoulin2017adversarially} and BiGAN \cite{donahue2017adversarial} employ two generators and a discriminator to match the joint distributions of different domains. However, ALI and BiGAN do not focus on pixel-level reconstruction and thus cannot effectively capture the position, colour and style of targets. ALICE extends ALI with cycle-consistency to focus on pixel-level reconstruction for the target domain \cite{li2017alice}. It also proposes to enforce cycle-consistency using fully adversarial learning with an extra discriminator. Our domain adaptation method is based on the ALICE framework.
We extend it to focus on bidirectional pixel-level reconstruction for the two domains simultaneously. In order to reduce the computational cost and training difficulty of fully adversarial learning, we adopt an explicit cycle-consistency, thus requiring only two generators and a discriminator for bidirectional domain adaptation with pixel-level reconstruction.
\subsection{Semi-supervised Learning}
Semi-supervised learning alleviates the problem of the lack of labelled data. Here we only discuss related consistency-based and disagreement-based semi-supervised learning; more information about semi-supervised learning can be found in \cite{chapelle2009semi}. The consistency-based methods constrain the prediction consistency under different perturbations and ensembles. For example, the $\Pi$ model enforces prediction consistency under input perturbations with different Gaussian noise and model perturbation with the dropout operation \cite{samuli2017temporal}. Unsupervised data augmentation (UDA) replaces the traditional noise perturbations with high-quality data augmentations (e.g., RandAugment, back-translation and TF-IDF) to improve consistency learning \cite{xie2020unsupervised}. FixMatch uses a separate weak augmentation and a strong augmentation on the input data for consistency regularisation \cite{sohn2020fixmatch}. In contrast to these methods, Temporal Ensembling (TE) penalises the inconsistency between the current prediction and an exponential moving average (EMA) of previous predictions \cite{samuli2017temporal}. Compared to TE, the Mean Teacher instead averages the weights of a base model \cite{tarvainen2017mean}. However, these methods need multiple inference passes to provide predictions for consistency learning, and are thus subject to a higher computational cost.
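The EMA ingredient behind TE and the Mean Teacher can be sketched in a few lines: the teacher parameters track an exponential moving average of the student parameters, and a consistency term penalises student--teacher disagreement. A schematic stdlib-Python sketch (names and the simple squared penalty are illustrative, not the exact losses of the cited papers):

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean-Teacher-style update: teacher <- alpha*teacher + (1-alpha)*student,
    applied element-wise over flattened parameter lists."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

def consistency_loss(pred_student, pred_teacher):
    """Mean squared disagreement between two prediction vectors."""
    n = len(pred_student)
    return sum((a - b) ** 2 for a, b in zip(pred_student, pred_teacher)) / n
```

In a training loop, `ema_update` runs after every student optimisation step, while `consistency_loss` is evaluated on unlabelled inputs and added to the supervised loss.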
Disagreement-based semi-supervised learning exploits the disagreement among the predictions of multiple task learners during the learning process \cite{dong2018tri}, and includes co-training and co-regularisation. Co-training leverages two sufficient and redundant views of the data to train two task models that annotate the unlabelled data. The unlabelled data with high prediction confidence are then added to the training set to further improve the models \cite{qiao2018deep,xia2020uncertainty}. Co-regularisation instead directly minimises the prediction disagreement on unlabelled samples across different views \cite{zhao2017multi}.
\begin{figure*}[!ht]
\begin{center}
\scalebox{.92}{
\includegraphics[width=1\textwidth]{AHDC_framework.pdf
}
\end{center}
\caption{Overview of our proposed AHDC framework for cross-domain semi-supervised segmentation. The framework consists of a bidirectional adversarial inference (BAI) module and a hierarchical dual consistency learning (HDC) module. The BAI module employs two mapping networks to perform a mutual adaptation of two different domains of $D_{1}$ and $D_{2}$ to obtain matched domains of $D_{p1}$ and $D_{p2}$. The HDC module applies two dual-modelling networks to the matched domains for performing semi-supervised segmentation tasks. Each dual-modelling network contains a global-modelling branch ($S_{g1}$/$S_{g2}$) used to capture the global correlation of feature maps to estimate LA ($\widetilde{y}_{g1}/\widetilde{y}_{g2}$), and a local-modelling branch ($S_{l1}$/$S_{l2}$) used to capture the local correlation of feature maps to estimate LA ($\widetilde{y}_{l1}/\widetilde{y}_{l2}$). In intra-domain, a consistency is performed between $\widetilde{y}_{l1}/\widetilde{y}_{l2}$ and $\widetilde{y}_{g1}/\widetilde{y}_{g2}$ estimated by complementary modellings, respectively. In inter-domain, a consistency is performed between $\widetilde{y}_{l1}/\widetilde{y}_{g1}$ and $\widetilde{y}_{l2}/\widetilde{y}_{g2}$ estimated by complementary domain networks, respectively.}\label{fig:AHDC}
\end{figure*}
\begin{table}[!hbtp]
\captionsetup{justification=centering}
\caption{Summary of notations}
\centering
\scalebox{.80}{
\begin{tabular}{|c|c|c|c|}
\hline
Notation & Definition & Notation & Definition\cr \hline
$D_{1}$ &Domain from source1 &$D_{2}$ &Domain from source2\cr \hline
$D_{1t2}$ &\tabincell{c}{Domain adapted\\from $D_{1}$ to $D_{2}$} &$D_{2t1}$ & \tabincell{c}{Domain adapted\\from $D_{2}$ to $D_{1}$}\cr \hline
$D_{p1}$ &$D_{1}\cup D_{2t1}$ &$D_{p2}$ & $D_{2}\cup D_{1t2}$ \cr \hline
$D^{l},D^{u}$ &\tabincell{c}{Labelled domain, \\Unlabelled domain} &$G_{1}, G_{2}$ &\tabincell{c}{Mutual mapping nets \\ of $D_{1}$ and $D_{2}$} \cr \hline
\tabincell{c}{$S_{1}=\{S_{l1}, S_{g1}\},$ \\ $S_{2}=\{S_{l2},S_{g2}\}$} & \tabincell{c}{Dual-modelling nets \\ $\{$local net, global net$\}$} &$T$ &Discriminator \cr \hline
\tabincell{c}{$x_{1},x_{2},x_{1t2}$,\\$x_{2t1},x_{p1},x_{p2}$} &\tabincell{c}{Images from $D_{1},D_{2}$,\\$D_{1t2},D_{2t1},D_{p1},D_{p2}$} &\tabincell{c}{$\widetilde{y}_{l1},\widetilde{y}_{g1}$, \\ $\widetilde{y}_{l2},\widetilde{y}_{g2}$} & \tabincell{c}{Estimated LAs from \\ $S_{l1},S_{g1},S_{l2},S_{g2}$}\cr \hline
$\hat{x}_{1},\hat{x}_{2}$ & \tabincell{c}{Reconstructions\\of $x_{1},x_{2}$} & $x^{l},x^{u}$ &\tabincell{c}{Labelled data,\\unlabelled data} \cr \hline
$j(x_{p1},x_{p2})$ & \tabincell{c}{Joint distribution \\of $D_{p1}$,$D_{p2}$} &$y$ &Ground truth \cr \hline
\tabincell{c}{$p(x_{1}),q(x_{2})$, \\$p(x_{p1}),q(x_{p2})$} &\tabincell{c}{Marginal distributions \\of $D_{1}$,$D_{2}$,$D_{p1}$,$D_{p2}$} &\tabincell{c}{$p_{\varphi_{1}}(x_{2}|x_{1})$ \\ $q_{\varphi_{2}}(x_{1}|x_{2})$} &\tabincell{c}{Parameterised \\ conditional distributions} \cr \hline
$\varphi_{1},\varphi_{2}$ &Params of $G_{1},G_{2}$ &$\psi_{1}$ &Param of $T$ \cr \hline
\tabincell{c}{$\theta_{1}=$ \\ $\{\theta^{f}_{1},\theta^{l}_{1},\theta^{g}_{1}\}$} &\tabincell{c}{Param of $S_{1}$ in the \\ modules of feature, \\ local-modelling, \\ global-modelling} &\tabincell{c}{$\theta_{2}=$ \\ $\{\theta^{f}_{2},\theta^{l}_{2},\theta^{g}_{2}\}$} &\tabincell{c}{Param of $S_{2}$ in the \\ modules of feature, \\ local-modelling, \\ global-modelling} \cr \hline
$L (\cdot)$ & Loss function &$\lambda$ & Weight Param \cr \hline
\end{tabular}
}
\label{table:correlation results}
\end{table}
\section{Method}
\subsection{Overview}
The overview of our proposed AHDC framework is illustrated in Fig. \ref{fig:AHDC}. The notations are summarised in TABLE \uppercase\expandafter{\romannumeral1}. The AHDC framework consists of two modules: a BAI module and a HDC module. Consider two different data domains, denoted by $D_{1}$ and $D_{2}$: $D_{1}$ contains both labelled data $D^{l}_{1}$ and unlabelled data $D^{u}_{1}$, where $D^{l}_{1}=\{((x^{l}_{1})^{i},y^{i})\}^{n_{1}}_{i=1}$ with $n_{1}$ labelled samples and $D^{u}_{1}=\{(x^{u}_{1})^{i}\}^{n_{1}+n_{2}}_{i=n_{1}+1}$ with $n_{2}$ unlabelled samples, respectively, whereas $D_{2}$ contains only unlabelled data, denoted as $D^{u}_{2}=\{(x^{u}_{2})^{i}\}^{m_{1}}_{i=1}$ with $m_{1}$ unlabelled samples. The BAI module employs two mapping networks, $G_{1}$ and $G_{2}$, to generate complementary domains by adapting $D_{1}$ and $D_{2}$ to each other, where the domain adapted from $D_{1}$ to $D_{2}$ is denoted as $D_{1t2}$ while the domain adapted from $D_{2}$ to $D_{1}$ is denoted as $D_{2t1}$. Then the targeted domains ($D_{1}$ and $D_{2}$) and the corresponding adapted domains ($D_{2t1}$ and $D_{1t2}$) merge to form two matched domains, $D_{p1}$ and $D_{p2}$. Finally, two dual-modelling networks, $S_{1}=\{S_{l1},S_{g1}\}$ and $S_{2}=\{S_{l2},S_{g2}\}$, are fed matched samples drawn from $D_{p1}$ and $D_{p2}$ to predict LAs, where the LAs predicted by the local modelling $S_{l1}$ and the global modelling $S_{g1}$ are denoted as $\widetilde{y}_{l1}$ and $\widetilde{y}_{g1}$, while the LAs predicted by the local modelling $S_{l2}$ and the global modelling $S_{g2}$ are denoted as $\widetilde{y}_{l2}$ and $\widetilde{y}_{g2}$, respectively.
\subsection{Bidirectional Adversarial Inference for Distribution Alignment and Sample Matching.}
Consider a $D_{1}$ to $D_{2}$ domain mapping network $G_{1}:x_{1} \rightarrow x_{2}$, and likewise a $D_{2}$ to $D_{1}$ domain mapping network $G_{2}:x_{2} \rightarrow x_{1}$. We denote the marginal distributions of $D_{1}$ and $D_{2}$ as $p(x_{1})$ and $q(x_{2})$. One domain can be inferred from the other using the parameterised conditional distributions $p_{\varphi_{1}}(x_{2}|x_{1})$ and $q_{\varphi_{2}}(x_{1}|x_{2})$, where $\varphi_{1}$ and $\varphi_{2}$ denote the parameters of the two distributions. Then, we have the joint distributions $p_{\varphi_{1}}(x_{1},x_{2})=p_{\varphi_{1}}(x_{2}|x_{1})p(x_{1})$ and $q_{\varphi_{2}}(x_{1},x_{2})=q_{\varphi_{2}}(x_{1}|x_{2})q(x_{2})$. We aim to match $p_{\varphi_{1}}(x_{2})=\int p_{\varphi_{1}}(x_{2},x_{1})dx_{1}$ to $q(x_{2})$ and $q_{\varphi_{2}}(x_{1})=\int q_{\varphi_{2}}(x_{1},x_{2})dx_{2}$ to $p(x_{1})$ by matching $p_{\varphi_{1}}(x_{1},x_{2})$ and $q_{\varphi_{2}}(x_{1},x_{2})$. We then use a discriminator network $T_{\psi_{1}}(x_{1},x_{2})$, parameterised by $\psi_{1}$, to penalise mismatches between the joint distributions $p_{\varphi_{1}}(x_{1},x_{2})$ and $q_{\varphi_{2}}(x_{1},x_{2})$. Specifically, we consider the following objective:
\begin{equation}
\label{equationali}
\begin{split}
&\mathop{\min_{\varphi_{1},\varphi_{2}}\max_{\psi_{1}}} O^{d}(\varphi_{1},\varphi_{2},\psi_{1})\\
&=E_{(x_{1},x_{2})\sim p_{\varphi_{1}}(x_{1},x_{2})}[\log \sigma(T_{\psi_{1}}(x_{1},x_{2}))]\\
&+E_{(x_{1},x_{2})\sim q_{\varphi_{2}}(x_{1},x_{2})}[\log (1-\sigma( T_{\psi_{1}}(x_{1},x_{2})))]\\
\end{split}
\end{equation}
where $\sigma(\cdot)$ denotes the sigmoid function.
Intuitively, if equation (1) is achieved, $p_{\varphi_{1}}(x_{1},x_{2})$ and $q_{\varphi_{2}}(x_{1},x_{2})$ match each other, which not only implies that $p_{\varphi_{1}}(x_{2})$ and $q(x_{2})$ match each other, but also that $q_{\varphi_{2}}(x_{1})$ and $p(x_{1})$ match each other. However, equation (1) does not specify or constrain the relationship between the random variables $x_{1}$ and $x_{2}$. In order to obtain paired samples, following \cite{li2017alice}, we extend the conditional-entropy constraint from a single direction to both directions ($H(x_{1}|x_{2})$ and $H(x_{2}|x_{1})$), which imposes constraints on the conditionals $p_{\varphi_{1}}(x_{2}|x_{1})$ and $q_{\varphi_{2}}(x_{1}|x_{2})$ simultaneously. Because there are no explicit distributions with which to compute the conditional entropies, again following \cite{li2017alice}, we bound them using the cycle-consistency terms $L^{x_{1}\rightarrow \hat{x}_{1}}(\varphi_{1},\varphi_{2})$ and $L^{x_{2}\rightarrow \hat{x}_{2}}(\varphi_{1},\varphi_{2})$:
\begin{equation}
\begin{split}
&H(x_{1}|x_{2})\\
=&-E_{x_{1}\sim p(x_{1}),x_{2}\sim p_{\varphi_{1}}(x_{2}|x_{1})}[\log p_{\varphi_{1}}(x_{1}|x_{2})]\\
=&-E_{x_{1}\sim p(x_{1}),x_{2}\sim p_{\varphi_{1}}(x_{2}|x_{1})}[\log q_{\varphi_{2}}(x_{1}|x_{2})]\\
&-E_{x_{1}\sim p(x_{1}),x_{2}\sim p_{\varphi_{1}}(x_{2}|x_{1})}[\log p_{\varphi_{1}}(x_{1}|x_{2})-\log q_{\varphi_{2}}(x_{1}|x_{2})]\\
=&-E_{x_{1}\sim p(x_{1}),x_{2}\sim p_{\varphi_{1}}(x_{2}|x_{1})}[\log q_{\varphi_{2}}(x_{1}|x_{2})]\\
&-E_{p_{\varphi_{1}}(x_{2})}[KL(p_{\varphi_{1}}(x_{1}|x_{2})||q_{\varphi_{2}}(x_{1}|x_{2}))]\\
\leq &-E_{x_{1}\sim p(x_{1}),x_{2}\sim p_{\varphi_{1}}(x_{2}|x_{1})}[\log q_{\varphi_{2}}(x_{1}|x_{2})]= L^{x_{1}\rightarrow \hat{x}_{1}}(\varphi_{1},\varphi_{2})
\end{split}
\end{equation}
Similarly,
\begin{equation}
\begin{split}
&H(x_{2}|x_{1})\\
=&-E_{x_{2}\sim q(x_{2}),x_{1}\sim q_{\varphi_{2}}(x_{1}|x_{2})}[\log q_{\varphi_{2}}(x_{2}|x_{1})]\\
\leq &-E_{x_{2}\sim q(x_{2}),x_{1}\sim q_{\varphi_{2}}(x_{1}|x_{2})}[\log p_{\varphi_{1}}(x_{2}|x_{1})]= L^{x_{2}\rightarrow \hat{x}_{2}}(\varphi_{1},\varphi_{2})
\end{split}
\end{equation}
where $\hat{x}_{1}$ and $\hat{x}_{2}$ denote the reconstructions of $x_{1}$ and $x_{2}$, and KL denotes the Kullback--Leibler divergence. According to equations (2) and (3), on the one hand we have a function $G_{3}:x_{1}\rightarrow \hat{x}_{1}$ defined by $G_{3}=G_{2} \circ G_{1}$, which first generates $x_{2}$ from $x_{1}$ using $G_{1}$, after which $G_{2}$ produces $\hat{x}_{1}$ from the generated $x_{2}$. On the other hand, we have a function $G_{4}:x_{2}\rightarrow \hat{x}_{2}$ defined by $G_{4}=G_{1} \circ G_{2}$, which first generates $x_{1}$ from $x_{2}$ using $G_{2}$, after which $G_{1}$ produces $\hat{x}_{2}$ from the generated $x_{1}$. In contrast to fully adversarial training for minimising $L^{x_{1}\rightarrow \hat{x}_{1}}(\varphi_{1},\varphi_{2})$ and $L^{x_{2}\rightarrow \hat{x}_{2}}(\varphi_{1},\varphi_{2})$, we employ a reconstruction loss to reduce the difficulty of model training. Specifically, we consider the following objectives:
\begin{equation}
\begin{split}
\mathop{\min_{\varphi_{1},\varphi_{2}}} &O^{x_{1}\rightarrow \hat{x}_{1}}(\varphi_{1},\varphi_{2})\\
&=E_{\hat{x}_{1}\sim q_{\varphi_{2}}(\hat{x}_{1}|x_{2}),x_{2}\sim p_{\varphi_{1}}(x_{2}|x_{1}) }\ L_{mae}(x_{1},\hat{x}_{1})
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\mathop{\min_{\varphi_{1},\varphi_{2}}} &O^{x_{2}\rightarrow \hat{x}_{2}}(\varphi_{1},\varphi_{2})\\
&=E_{\hat{x}_{2}\sim p_{\varphi_{1}}(\hat{x}_{2}|x_{1}),x_{1}\sim q_{\varphi_{2}}(x_{1}|x_{2})}\ L_{mae}(x_{2},\hat{x}_{2})
\end{split}
\end{equation}
where $L_{mae}(\cdot)$ denotes the mean absolute error. Finally, we have the following objective for BAI:
\begin{equation}
\begin{split}
\mathop{\min_{\varphi_{1},\varphi_{2}}\max_{\psi_{1}}}\ &\lambda_{d}O^{d}(\varphi_{1},\varphi_{2},\psi_{1}) \\
&+\lambda_{r}O^{x_{1}\rightarrow \hat{x}_{1}}(\varphi_{1},\varphi_{2})\\
&+\lambda_{r}O^{x_{2}\rightarrow \hat{x}_{2}}(\varphi_{1},\varphi_{2})
\end{split}
\end{equation}
where $\lambda_{d}$ and $\lambda_{r}$ are hyperparameters to balance the adversarial loss and the reconstruction loss.
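As an illustration, the combined objective (6) for one batch, from the generators' point of view, can be sketched as follows. This is a schematic stdlib-Python sketch; the function names, the logit inputs, and the default weights are illustrative, not the authors' implementation:

```python
import math

def sigmoid(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

def mae(xs, ys) -> float:
    """Mean absolute error L_mae between two flattened sample vectors."""
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def bai_loss(t_on_p, t_on_q, x1, x1_rec, x2, x2_rec,
             lam_d=1.0, lam_r=10.0) -> float:
    """Batch estimate of the BAI objective (generators minimise this).

    t_on_p / t_on_q: discriminator logits on joint samples drawn from
    p_{phi1}(x1, x2) and q_{phi2}(x1, x2); x1_rec / x2_rec are the
    cycle reconstructions x1 -> x2 -> x1_rec and x2 -> x1 -> x2_rec.
    """
    eps = 1e-12  # numerical guard inside the logs
    adv = (sum(math.log(sigmoid(t) + eps) for t in t_on_p) / len(t_on_p)
           + sum(math.log(1.0 - sigmoid(t) + eps) for t in t_on_q) / len(t_on_q))
    rec = mae(x1, x1_rec) + mae(x2, x2_rec)
    return lam_d * adv + lam_r * rec
```

In practice the discriminator would maximise the adversarial term while the two generators minimise the full expression, alternating the two updates.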
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.99}{
\includegraphics[width=1\linewidth]{BAI.pdf
}
\end{center}
\caption{Structure of bidirectional adversarial inference network. The mapping network $G_{1}$ and the mapping network $G_{2}$ have the same structure.}
\label{fig:BAI}
\end{figure}
\subsection{Hierarchical Dual Consistency for Semi-supervised Segmentation}
The BAI module adapts the cross-domain data to each other to produce matched domains. In detail, the domain adapted from $D_{1}$ to $D_{2}$ is denoted as $D_{1t2}=D^{l}_{1t2} \cup D^{u}_{1t2}$, where $D^{l}_{1t2}=\{((x^{l}_{1t2})^{i},y^{i})\}^{n_{1}}_{i=1}$ with $n_{1}$ labelled samples and $D^{u}_{1t2}=\{(x^{u}_{1t2})^{i}\}^{n_{1}+n_{2}}_{i=n_{1}+1}$ with $n_{2}$ unlabelled samples. The domain adapted from $D_{2}$ to $D_{1}$ is denoted as $D_{2t1}=D^{u}_{2t1}=\{(x^{u}_{2t1})^{i}\}^{m_{1}}_{i=1}$ with $m_{1}$ unlabelled samples. Then we merge the two source domains and the two adapted domains to obtain the matched domains $D_{p1}$ and $D_{p2}$: $D_{p1}=D_{1}\cup D_{2t1}=D^{l}_{1}\cup D^{u}_{1}\cup D^{u}_{2t1}=D^{l}_{p1} \cup D^{u}_{p1}$, where $D^{l}_{p1}=\{((x^{l}_{p1})^{i},y^{i})\}^{n_{1}}_{i=1}$ with $n_{1}$ labelled samples and $D^{u}_{p1}=\{(x^{u}_{p1})^{i}\}^{n_{1}+n_{2}+m_{1}}_{i=n_{1}+1}$ with $n_{2}+m_{1}$ unlabelled samples; and $D_{p2}=D_{2}\cup D_{1t2}=D^{l}_{1t2}\cup D^{u}_{1t2}\cup D^{u}_{2}=D^{l}_{p2} \cup D^{u}_{p2}$, where $D^{l}_{p2}=\{((x^{l}_{p2})^{i},y^{i})\}^{n_{1}}_{i=1}$ with $n_{1}$ labelled samples and $D^{u}_{p2}=\{(x^{u}_{p2})^{i}\}^{n_{1}+n_{2}+m_{1}}_{i=n_{1}+1}$ with $n_{2}+m_{1}$ unlabelled samples. We denote the marginal distributions of $D_{p1}$ and $D_{p2}$ as $p(x_{p1})$ and $q(x_{p2})$, respectively, and the joint distribution of $D_{p1}$ and $D_{p2}$ as $j(x_{p1}, x_{p2})$.
Based on the matched domains, we investigate complementary LA modelling and complementary domain knowledge learning to provide inherent prediction perturbation for the consistency-based cross-domain semi-supervised learning. Therefore, a hierarchical dual consistency is investigated. Specifically, for the intra-domain, we consider two dual-modelling networks $S_{1}:x_{p1}\rightarrow (\widetilde{y}_{l1},\widetilde{y}_{g1})$ parameterised by $\theta_{1}=\{\theta^{f}_{1},\theta^{l}_{1},\theta^{g}_{1}\}$ and $S_{2}:x_{p2}\rightarrow (\widetilde{y}_{l2},\widetilde{y}_{g2})$ parameterised by $\theta_{2}=\{\theta^{f}_{2},\theta^{l}_{2},\theta^{g}_{2}\}$, applied to the matched domains $D_{p1}$ and $D_{p2}$, respectively. Each dual-modelling network estimates two targets by considering the local and global information of the image: $S_{1}$ simultaneously performs the global modelling $S_{g1}:x_{p1}\rightarrow \widetilde{y}_{g1}$ parameterised by $\{\theta^{f}_{1},\theta^{g}_{1}\}$ and the local modelling $S_{l1}:x_{p1}\rightarrow \widetilde{y}_{l1}$ parameterised by $\{\theta^{f}_{1},\theta^{l}_{1}\}$. Similarly, $S_{2}$ simultaneously performs the global modelling $S_{g2}:x_{p2}\rightarrow \widetilde{y}_{g2}$ parameterised by $\{\theta^{f}_{2},\theta^{g}_{2}\}$ and the local modelling $S_{l2}:x_{p2}\rightarrow \widetilde{y}_{l2}$ parameterised by $\{\theta^{f}_{2},\theta^{l}_{2}\}$. We then encourage the global and local modelling of each dual-modelling network to predict consistent targets via the consistency losses:
\begin{equation}
\begin{split}
\min_{\theta_{1}}\ O^{intra1}(\theta_{1})=E_{x^{u}_{p1} \sim p(x_{p1})}L_{d}(S_{l1}(x^{u}_{p1}),S_{g1}(x^{u}_{p1}))
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\min_{\theta_{2}}\ O^{intra2}(\theta_{2})=E_{x^{u}_{p2} \sim q(x_{p2})}L_{d}(S_{l2}(x^{u}_{p2}),S_{g2}(x^{u}_{p2}))
\end{split}
\end{equation}
where $L_{d}(\cdot)$ denotes the dice loss function. For the dual consistency in inter-domain, we maximise the agreement on two matched domains. Therefore, we encourage $S_{1}$ and $S_{2}$ to predict similar outputs by:
\begin{equation}
\begin{split}
\min_{\theta_{1},\theta_{2}}\ &O^{inter}(\theta_{1},\theta_{2})\\
=&E_{(x^{u}_{p1},x^{u}_{p2}) \sim j(x_{p1},x_{p2})} L_{c}(S_{1}(x^{u}_{p1}),S_{2}(x^{u}_{p2}))\\
=&E_{(x^{u}_{p1},x^{u}_{p2}) \sim j(x_{p1},x_{p2})}(L_{c}(S_{l1}(x^{u}_{p1}),S_{l2}(x^{u}_{p2}))\\
+&L_{c}(S_{g1}(x^{u}_{p1}),S_{g2}(x^{u}_{p2})))
\end{split}
\end{equation}
where $L_{c}(\cdot)$ denotes the cross-entropy loss function. To prevent $S_{1}$ and $S_{2}$ from gradually resembling each other, we encourage them to produce conditionally independent features by orthogonalising the weights of their feature layers:
\begin{equation}
\begin{split}
\min_{\theta_{1},\theta_{2}}\ O^{ow}(\theta_{1},\theta_{2})=\frac{1}{N}\sum^{N}_{i=1}\Bigg(\frac{1}{K^{2}_{i}}\sum^{K_{i}}_{j=1}\sum^{K_{i}}_{k=1}\bigg|\frac{(\theta^{f}_{1ij})^{T}\theta^{f}_{2ik}}{\|\theta^{f}_{1ij}\|\,\|\theta^{f}_{2ik}\|}\bigg|\Bigg)
\end{split}
\end{equation}
where $N$ denotes the number of feature layers in $S_{1}$ and $S_{2}$, and $K_{i}$ represents the number of features in the $i$th layer. $\theta^{f}_{1ij}$ and $\theta^{f}_{2ik}$ denote the $j$th and $k$th feature weights of the $i$th layer in $S_{1}$ and $S_{2}$, respectively.
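As an illustration, the per-layer penalty averages the absolute cosine similarity over all $K_{i}^{2}$ pairs of feature weights of $S_{1}$ and $S_{2}$. Below is a minimal sketch, assuming each weight is already flattened to a vector and both layers hold the same number of features (the actual implementation would operate on convolutional kernels):

```python
import math

def orthogonal_weight_penalty(weights_1, weights_2):
    """Mean absolute cosine similarity between all feature-weight pairs of one
    layer: the inner term of O^ow for a single layer i. Each argument is a list
    of K flattened weight vectors (illustrative stand-ins for theta^f_1i and
    theta^f_2i); the K^2 pairwise terms are averaged."""
    k = len(weights_1)
    total = 0.0
    for w1 in weights_1:
        for w2 in weights_2:
            dot = sum(a * b for a, b in zip(w1, w2))
            norm1 = math.sqrt(sum(a * a for a in w1))
            norm2 = math.sqrt(sum(b * b for b in w2))
            total += abs(dot / (norm1 * norm2))
    return total / (k * k)
```

The penalty is $0$ when every weight of one network is orthogonal to every weight of the other, and $1$ when they are perfectly aligned, which is exactly the behaviour the constraint discourages.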
Beyond the consistency learning above, $S_{1}$ and $S_{2}$ can explicitly learn from $D^{l}_{p1}$ and $D^{l}_{p2}$ with the supervision of the labels:
\begin{equation}
\begin{split}
\min_{\theta_{1}}\ O^{super1}(\theta_{1})=&E_{x^{l}_{p1} \sim p(x_{p1})} L_{s}(S_{1}(x^{l}_{p1}),y)\\
=&E_{x^{l}_{p1} \sim p(x_{p1})}(L_{s}(S_{l1}(x^{l}_{p1}),y)\\
+&L_{s}(S_{g1}(x^{l}_{p1}),y))
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\min_{\theta_{2}}\ O^{super2}(\theta_{2})=&E_{x^{l}_{p2} \sim q(x_{p2})}L_{s}(S_{2}(x^{l}_{p2}),y)\\
=&E_{x^{l}_{p2} \sim q(x_{p2})}(L_{s}(S_{l2}(x^{l}_{p2}),y)\\
+&L_{s}(S_{g2}(x^{l}_{p2}),y))
\end{split}
\end{equation}
where $y$ denotes the LA label and $L_{s}(\cdot)$ denotes the supervised loss functions (the cross-entropy and dice loss functions). The final training objective for the learning of $S_{1}$ and $S_{2}$ is then:
\begin{equation}
\begin{split}
\min_{\theta_{1},\theta_{2}}\ O^{total}(\theta_{1},\theta_{2})&=\lambda_{super}(O^{super1} +O^{super2})\\
&+\lambda_{intra}(O^{intra1}+O^{intra2})\\
&+\lambda_{inter}O^{inter}+\lambda_{ow}O^{ow}\\
\end{split}
\end{equation}
where $\lambda_{super}$, $\lambda_{intra}$, $\lambda_{inter}$ and $\lambda_{ow}$ are hyperparameters to balance the loss terms.
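The weighting in the objective above can be sketched as follows (the dictionary keys and loss values are illustrative placeholders, not names from the released code):

```python
def total_objective(losses, lambdas):
    """Weighted sum of the AHDC loss terms: supervised, intra-domain
    consistency, inter-domain consistency and orthogonal-weight penalty."""
    return (lambdas["super"] * (losses["super1"] + losses["super2"])
            + lambdas["intra"] * (losses["intra1"] + losses["intra2"])
            + lambdas["inter"] * losses["inter"]
            + lambdas["ow"] * losses["ow"])
```

With unit losses and the weights reported later in the paper ($\lambda_{super}=0.5$, $\lambda_{inter}=1.0$, $\lambda_{ow}=0.1$, and $\lambda_{intra}$ at its ramped-up value of $1.0$), the total is $0.5\cdot 2 + 1.0\cdot 2 + 1.0 + 0.1 = 4.1$.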
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.99}{
\includegraphics[width=1\linewidth]{dual_modelling.pdf}
}
\end{center}
\caption{Dual-modelling network for intra-consistency learning. The local-modelling branch and global-modelling branch share a feature extractor. For the global-modelling branch, the extracted feature maps from input images are split into $8\times 8$ patches. These $8\times 8$ patches are taken as a sequence of vectors to be fed to a self-attention based global-modelling structure.}
\label{fig:global_modelling}
\end{figure}
\subsection{Network Configuration}
The BAI module contains three subnetworks: two domain mapping networks ($G_{1}$, $G_{2}$) and a discriminative network $T$. We use the 2D U-Net with bilinear upsampling as the network backbone of both $G_{1}$ and $G_{2}$. $T$ has six convolution layers with $32$, $64$, $128$, $256$, $256$ and $1$ filters, respectively. Each of the first five $3\times 3$ convolutional layers, with a stride of $2$, is followed by a batch normalisation layer and a ReLU layer. The final $1\times 1$ convolutional layer, with a stride of $1$, is followed by a sigmoid layer.
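As a quick sanity check of this discriminator design (assuming a padding of $1$ for the $3\times 3$ convolutions, which the text does not state), the standard output-size formula shows that a $256\times 256$ input is reduced to an $8\times 8$ score map:

```python
def conv_out(n, kernel, stride, padding):
    """Standard convolution output-size formula: floor((n + 2p - k)/s) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

def discriminator_map_size(n=256):
    # Five 3x3 stride-2 convolutions (padding 1 assumed) halve the spatial
    # size each time; the final 1x1 stride-1 convolution preserves it.
    for _ in range(5):
        n = conv_out(n, kernel=3, stride=2, padding=1)
    return conv_out(n, kernel=1, stride=1, padding=0)
```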
The hierarchical dual-modelling network contains two dual-modelling networks with the same structure. Each dual-modelling network contains a 2D U-Net with bilinear upsampling used to extract image features and two branch networks used to estimate targets. The two branch networks are the global-modelling network and the local-modelling network. The global-modelling network is based on self-attention \cite{dosovitskiy2021image,carion2020end,wang2018non}, as shown in Fig. \ref{fig:global_modelling}. In the global-modelling network, we use sinusoidal position encoding to emphasise the sequential relationship between input feature patches \cite{vaswani2017attention}. The local-modelling network consists of three convolution blocks. The details are shown in Fig. \ref{fig:global_modelling}.
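The sinusoidal position encoding follows the fixed formulation of \cite{vaswani2017attention}: $PE(pos,2i)=\sin(pos/10000^{2i/d})$ and $PE(pos,2i+1)=\cos(pos/10000^{2i/d})$. A self-contained sketch (the sequence length and dimension here are illustrative, not the network's actual sizes):

```python
import math

def sinusoidal_position_encoding(num_positions, dim):
    """Fixed sinusoidal encodings (Vaswani et al., 2017):
    PE[pos, 2i] = sin(pos / 10000^(2i/dim)), PE[pos, 2i+1] = cos(same angle)."""
    table = []
    for pos in range(num_positions):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        table.append(row)
    return table
```

Each feature patch in the sequence receives a distinct encoding vector, which is added to the patch embedding before self-attention so that patch order is not lost.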
\begin{table*}
\captionsetup{justification=centering}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Comparison of four LGE-CMRI datasets from different centres. Abbreviations: TE, Echo Time; TR, Repetition Time; CARMA, Comprehensive Arrhythmia Research and Management.}
\centering
\scalebox{.9}{
\setlength{\floatsep}{10pt plus 3pt minus 2pt}
\begin{tabular}{cccccc}
\addlinespace
\toprule
Centres & \multicolumn{1}{c}{Acquired Resolution} & \multicolumn{1}{c}{TE/TR} & \multicolumn{1}{c}{Scanner} & \multicolumn{1}{c}{Source} &\multicolumn{1}{c}{Amount of Data}\\ \midrule
C1 & $(1.4\sim 1.5)\times (1.4\sim 1.5)\times 4\ $mm$^{3}$ &2.2$/$5.2 ms &1.5 Tesla Avanto &Royal Brompton Hospital & $165\ $ LGE-CMR scans \\ \midrule
C2 & $1.25\times 1.25\times 2.5\ $ mm$^{3}$ &2.3/5.4 ms &\tabincell{c}{1.5 Tesla Avanto,\\ 3.0 Tesla Vario} & CARMA, University of Utah & $153\ $ LGE-CMR scans \\ \midrule
C3 &$1.4\times 1.4\times 1.4\ $mm$^{3}$ &2.1/5.3 ms &1.5T Philips Achieva &Beth Israel Deaconess Medical Center & $20\ $ LGE-CMR scans \\ \midrule
C4 &$1.3\times 1.3\times 4\ $mm$^{3}$ &2.1/5.3 ms &1.5T Philips Achieva &Imaging Sciences at King’s College London & $20\ $ LGE-CMR scans \\ \midrule
\end{tabular}
}
\label{table:datasets}
\end{table*}
\section{Experiments}
\subsection{Overview of Experiments}
Comprehensive experiments were performed to validate our proposed AHDC.
\textbf{(1) The feasibility of AHDC for generalising across domains}: Our proposed AHDC was validated on four 3D late gadolinium enhancement cardiac MR (LGE CMR) datasets and a 3D CT dataset combined in pairs, following an independent validation protocol. Furthermore, we also investigated the impact of different ratios ($r=\{5\%, 10\%, 20\%\}$) of labelled data on our proposed AHDC.
\textbf{(2) The superiority of AHDC for generalising across domains}: We compared AHDC with widely used and state-of-the-art semi-supervised methods on cross-domain data, including the mean teacher (MT) method \cite{tarvainen2017mean}, the uncertainty-aware self-ensembling model (UA-MT) \cite{yu2019uncertainty}, dual-task consistency (DTC) \cite{luo2021semi} and Dual-Teacher \cite{li2020dual}. It is of note that MT, UA-MT and DTC were proposed for single-domain semi-supervised learning, while Dual-Teacher was proposed for cross-domain learning. Besides, Dual-Teacher requires labelled data from both domains for model learning. For a fair comparison, MT, UA-MT and DTC were performed on one of the matched domains, i.e., $D_{p1}$. We also compared with a joint-training method that combines the cross-domain data directly for LA segmentation based on our proposed semi-supervised method.
\textbf{(3) The effectiveness of the components in AHDC}: Firstly, we compared the performance between different architectures of the BAI module. On the one hand, to validate the effectiveness of bidirectional reconstruction for specifying the relationship of matched samples, an experiment was performed on bidirectional adversarial inference without bidirectional reconstruction (BAI$_{wbr}$/ALI/BiGAN). On the other hand, to validate the effectiveness of the skip connections of the domain mapping network for keeping the target structure consistent, an experiment was performed on bidirectional adversarial inference without skip connections in the domain mapping network (BAI$_{eds}$). Then, we further validated the performance of BAI by comparing it with the fully adversarial ALICE \cite{li2017alice} on the downstream semi-supervised tasks. Finally, to validate the effectiveness of HDC, we decomposed the HDC into independent intra-domain dual consistency learning (HDC$_{intra}$), obtained by removing a dual-modelling network, and inter-domain dual consistency learning (HDC$_{inter}$), obtained by removing the global-modelling branch but retaining the local-modelling branch.
\textbf{(4) The effectiveness of the BAI for matching domains}: Firstly, we performed principal component analysis to show the data distributions of the source domains ($D_{1}$ and $D_{2}$) and the adapted domains ($D_{1t2}$ and $D_{2t1}$). The data distributions of the source and adapted domains were compared to validate the effectiveness of AHDC for aligning distributions. Then, we qualitatively visualised images before and after the bidirectional adversarial inference to validate the effectiveness of AHDC for matching samples.
\textbf{(5) The effectiveness of the HDC for the availability of complementary information}: To validate the availability of complementary modelling information in the intra-domain, we compared the segmentation performance of the dual-modelling network (local-global modelling structure) to that of networks without a dual-modelling structure. Specifically, we replaced the local-modelling branch with the global-modelling branch (global-global modelling structure) and replaced the global-modelling branch with the local-modelling branch (local-local modelling structure) in the dual-modelling network for experiments. To validate the availability of complementary domain information in the inter-domain, we compared the segmentation performance of HDC with and without the orthogonal weight constraint (WOW and WOOW).
\textbf{(6) The effects of parameter settings on model performance}: We explored two important parameter settings: (\romannumeral1) the impact of different patch sizes ($4\times 4$, $8\times 8$ and $16\times 16$) for global modelling; (\romannumeral2) the impact of different values of $\lambda_{ow}$ ($0.0$, $0.1$ and $1.0$) for inter-domain learning.
\subsection{Datasets}
To evaluate the performance of our proposed AHDC, four 3D LGE-MRI datasets (C1, C2, C3 and C4) and a 3D CT dataset (C5) were collected as a retrospective study. In our experiments, the collected datasets of C1 and C2 included segmentation of the LA epicardium and LA endocardium while the collected datasets of C3, C4 and C5 included segmentation of the LA endocardium. We have summarised the characteristics of the four 3D LGE-MRI datasets to emphasise their differences as shown in TABLE \ref{table:datasets}.
LGE-MRI scanning sequence of centre 1 (C1): Cardiac MR data were acquired in patients with longstanding persistent atrial fibrillation (AF) on a Siemens Magnetom Avanto 1.5T scanner (Siemens Medical Systems, Erlangen, Germany). Transverse navigator-gated 3D LGE-CMRI \cite{peters2009recurrence} was performed using an inversion prepared segmented gradient echo sequence (TE/TR 2.2ms/5.2ms) 15 minutes after gadolinium administration (Gadovist-gadobutrol, 0.1mmol/kg body weight, BayerSchering, Berlin, Germany) \cite{haissaguerre1998spontaneous}. The inversion time was set to null the signal from normal myocardium. The acquired resolution parameter of LGE-CMRI data was $(1.4-1.5)\times (1.4-1.5)\times 4$ mm$^{3}$ (reconstructed to $(0.7-0.75)\times (0.7-0.75)\times 2$ mm$^{3}$). LGE-CMRI data were acquired during free-breathing using a crossed-pairs navigator positioned over the dome of the right hemi-diaphragm with navigator acceptance window size of $5mm$ and CLAWS respiratory motion control \cite{keegan2014improved,keegan2014navigator}. The LGE CMR data were collected from the Royal Brompton Hospital. In total, 165 scans were used in this study.
LGE-MRI scanning sequence of centre 2 (C2): Cardiac MR data were obtained on a 1.5 Tesla Avanto scanner or a 3.0 Tesla Vario (Siemens Medical Solutions, Erlangen, Germany). The scan was acquired 20–25 minutes after 0.1 mmol/kg gadolinium contrast (Multihance, Bracco Diagnostics Inc., Princeton, NJ) using a 3D respiratory navigated, inversion recovery prepared gradient echo pulse sequence. Typical acquisition parameters were free breathing using navigator gating, a transverse imaging volume with voxel size = $1.25\times 1.25\times 2.5$ mm$^{3}$ (reconstructed to $0.625\times 0.625\times 2.5$ mm$^{3}$), TR/TE = 5.4/2.3 ms and inversion time (TI) = 270–310 ms. The TI value for the LGE-MRI scan was identified using a scout scan. Typical scan times for the LGE-MRI study were between 8 and 15 min at 1.5 T and 6–11 min using the 3T scanner (for Siemens sequences), depending on subject respiration and heart rates. The LGE CMR data were collected from the Comprehensive Arrhythmia Research and Management, University of Utah. In total, 153 scans were used in this study.
LGE-MRI scanning sequence of centre 3 (C3): C3 is from the ISBI 2012 Left Atrium Fibrosis and Scar Segmentation Challenge \cite{karim2013evaluation,li2021atrialgeneral}. The LGE CMR data were collected from the Beth Israel Deaconess Medical Center. In total, 20 scans were used in this study.
LGE-MRI scanning sequence of centre 4 (C4): C4 is also from the ISBI 2012 Left Atrium Fibrosis and Scar Segmentation Challenge \cite{karim2013evaluation,li2021atrialgeneral}. The LGE CMR data were collected from the Imaging Sciences at King’s College. In total, 20 scans were used in this study.
CT scanning sequence of centre 5 (C5): C5 is from the Multi-modality Whole Heart Segmentation (MM-WHS) 2017 dataset \cite{zhuang2016multi,zhuang2013challenges,zhuang2015multiatlas,zhuang2010registration}. In total, 60 CT scans were used in this study.
\subsection{Experimental Setup}
\textbf{(1) Data partitioning}: For C1, the 3D LGE-MRI dataset with 165 scans was randomly split into a training set with 99 scans and a testing set with 66 scans (33 pre-ablation scans and 33 post-ablation scans). The training set was then randomly split into a labelled training set with 20 scans (20\%) and an unlabelled training set with 79 scans (80\%). For C2, the 3D LGE-MRI dataset with 153 scans was randomly split into a training set with 91 scans and a testing set with 62 scans (31 pre-ablation scans and 31 post-ablation scans). The training set was then randomly split into a labelled training set with 18 scans (20\%) and an unlabelled training set with 73 scans (80\%). For C3 and C4, each 3D LGE-MRI dataset with 20 scans was randomly split into a training set with 12 scans and a testing set with 8 scans (4 pre-ablation scans and 4 post-ablation scans). The training set was then randomly split into a labelled training set with 4 scans and an unlabelled training set with 8 scans. Because C5 only provides 60 CT scans, including 20 labelled scans and 40 unlabelled scans, we randomly selected 15 scans from the 20 labelled scans as a testing set. The remaining 5 labelled scans (labelled training set) and 40 unlabelled scans (unlabelled training set) were used together as the training set. Since each patient may contain multiple 3D LGE-MRI scans, the 3D LGE-MRI datasets were split under the strategy that all scans from each unique patient were only in one of the training or testing sets.
\textbf{(2) Implementation details}: Experiments were performed on the five datasets combined in pairs for the cross-centre study (C1 and C2, C3 and C4) and the cross-modality study (C2 and C5). To reduce the dependence of models on annotated data and to avoid the impact of label variations from different centres, there were two kinds of experiment settings for each pair of cross-domain data. Take the experiments on C1 and C2 as an example: one setting used C1 to support C2, in which the model was trained using the labelled training set (18 labelled cases) of C2, the unlabelled training set (73 unlabelled cases) of C2 and the whole training set (99 unlabelled cases) of C1. The other setting used C2 to support C1, in which the model was trained using the labelled training set (20 labelled cases) of C1, the unlabelled training set (79 unlabelled cases) of C1 and the whole training set (91 unlabelled cases) of C2. We denoted the results obtained by the fully supervised model trained with the labelled training set from C1 (20 cases), C2 (18 cases), C3 (4 cases), C4 (4 cases) and C5 (5 cases) as the baseline, and the results obtained by the fully supervised model trained with the whole training set from C1 (99 cases), C2 (91 cases), C3 (12 cases) and C4 (12 cases) as the upper bound.
We pre-processed the data with normalisation. Smaller patches of $256\times 256$ centred on the LA region were cropped. To avoid overfitting, we applied data augmentation with random rotation. The training time of our model is about 17.17 hours, while the testing time for one 3D case is about 0.259 seconds. For the learning of the BAI network, we used the Adam method to optimise the two mapping networks with an initial learning rate of $0.001$ and a decay rate of $0.98$. The optimiser used in the discriminative network was Adam with a fixed learning rate of $0.0001$. For the learning of the two dual-modelling networks, we also used the Adam method with an initial learning rate of $0.001$ and a decay rate of $0.98$. The current statistics of batch normalisation were used for both training and testing. All experiments were performed with an independent test. For the dual consistency learning, in each iteration, we first performed the intra-consistency with both labelled and unlabelled data, then performed the inter-consistency with both labelled and unlabelled data, and finally performed supervised learning with the labelled data. Our deep learning model was implemented using Tensorflow $1.2.1$ on an Ubuntu $16.04$ machine (the code will be released publicly once the manuscript is accepted for publication via https://github.com/Heye-SYSU/AHDC). It was trained and tested using an Nvidia RTX 8000 GPU (48GB GPU memory).
The coefficients $\lambda_{d}$ and $\lambda_{r}$ used to balance the adversarial loss and the reconstruction loss, were automatically learned based on the strategy of uncertainty \cite{kendall2018multi}. The coefficient $\lambda_{intra}$ was dynamically changed over time with the function of $f(t)=e^{-5*(1-\frac{t}{t_{max}})^{2}}$. The coefficients $\lambda_{inter}$, $\lambda_{super}$ and $\lambda_{ow}$ were set to the values of $1.0$, $0.5$ and $0.1$, respectively.
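The time dependence of $\lambda_{intra}$ is the Gaussian ramp-up widely used in mean-teacher style training; a small sketch (the clamp at $t_{max}$ is our assumption, since the paper only specifies $f(t)$ for $t \le t_{max}$):

```python
import math

def consistency_rampup(t, t_max):
    """Gaussian ramp-up weight f(t) = exp(-5 * (1 - t/t_max)^2) for
    lambda_intra; clamped so the weight stays at 1.0 after t_max (assumption)."""
    t = min(t, t_max)
    return math.exp(-5.0 * (1.0 - t / t_max) ** 2)
```

The weight starts near $e^{-5}\approx 0.0067$, so the intra-domain consistency term contributes little while the segmentation networks are still unreliable, then ramps smoothly to $1.0$ at $t=t_{max}$.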
\textbf{(3) Evaluation criteria}: To evaluate the segmentation performance, we used region-based metrics \cite{dice1945measures,taha2015metrics}, e.g., the Dice Similarity Coefficient (DSC) and the Jaccard Index (JI), to validate the predicted segmentation map against the manually defined ground-truth. We also used a surface-based metric called Average Surface Distance (ASD) to provide the distance in $\mathrm{mm}$ to quantify the accuracy of the predicted mesh ($S$) compared to the ground-truth mesh ($S^\prime$) \cite{taha2015metrics}.
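For reference, the two region-based metrics can be computed from flattened binary masks as follows (a self-contained sketch; the surface-based ASD requires mesh distances and is omitted here):

```python
def dice_and_jaccard(pred, truth):
    """Region-based overlap metrics on flat binary masks (0/1 sequences):
    DSC = 2*|P intersect T| / (|P| + |T|),  JI = |P intersect T| / |P union T|.
    Both masks empty is treated as a perfect match (a common convention)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dsc = 2.0 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    ji = inter / union if union else 1.0
    return dsc, ji
```

Note that DSC and JI are monotonically related (DSC $= 2\,$JI$/(1+$JI$)$), so they rank methods identically; both are reported for comparability with prior work.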
\begin{table*}
\captionsetup{justification=centering}
\caption{\centering Quantitative comparison between our proposed AHDC and other methods on multi-centre data. Abbreviations: DSC, Dice Similarity Coefficient; JI, Jaccard Index; ASD, Average Surface Distance.}\label{table:c1c2c3}
\begin{subtable}[h]{0.99\textwidth}
\caption{Experiments on C1 (MR) and C2 (MR)}
\centering
\begin{tabular}{c|ccc|ccc}
\addlinespace
\toprule
\multirow{2}*{Method} & \multicolumn{3}{c|}{C2 (MR) supports C1 (MR)} & \multicolumn{3}{c}{C1 (MR) supports C2 (MR)}\\ \cline{2-7}
& DSC & JI &ASD (mm) & DSC & JI &ASD (mm)\\ \hline
Upper Bound & $0.932\pm0.026$ & $0.874\pm0.044$ & $1.28\pm0.847$ & $0.926\pm0.021$ & $0.863\pm0.036$ & $0.867\pm0.46$ \cr
Baseline & $0.869\pm0.078$ & $0.775\pm0.111$ & $2.81\pm2.08$ & $0.860\pm0.103$ & $0.765\pm0.122$ & $4.00\pm5.08$ \\ \hline
MT &$0.882\pm0.059$ &$0.793\pm0.090$ &$2.45\pm1.67$ &$0.880\pm0.071$ &$0.792\pm0.096$ &$1.79\pm1.96$ \cr
UA-MT &$0.885\pm0.060$ &$0.799\pm0.092$ & $2.19\pm1.47$ &$0.884\pm0.072$ &$0.799\pm0.098$ & $2.79\pm3.76$ \cr
DTC &$0.887\pm0.061$ &$0.803\pm0.094$ &$2.25\pm1.54$ &$0.888\pm0.076$ &$0.806\pm0.102$ &$2.40\pm3.48$ \cr
Dual-Teacher &$0.899\pm0.046$ &$0.820\pm0.073$ &$1.83\pm1.02$ &$0.896\pm0.064$ &$0.816\pm0.088$ &$1.97\pm3.13$ \cr \hline
Joint-training &$0.889\pm0.059$ &$0.805\pm0.091$ &$2.05\pm1.22$ &$0.887\pm0.056$ &$0.801\pm0.081$ &$1.59\pm1.56$ \cr \hline
AHDC & $\mathbf{0.916\pm0.041}$ &$\mathbf{0.848\pm0.066}$ & $\mathbf{1.47\pm0.846}$ & $\mathbf{0.917\pm0.026}$ &$\mathbf{0.848\pm0.043}$ & $\mathbf{1.17\pm1.60}$ \\ \midrule
\end{tabular}
\end{subtable}
\vspace{0.25cm}
\\
\begin{subtable}[h]{0.99\textwidth}
\caption{Experiments on C3 (MR) and C4 (MR)}
\centering
\begin{tabular}{c|ccc|ccc}
\addlinespace
\toprule
\multirow{2}*{Method} & \multicolumn{3}{c|}{C4 (MR) supports C3 (MR)} & \multicolumn{3}{c}{C3 (MR) supports C4 (MR)}\\ \cline{2-7}
& DSC & JI &ASD (mm) & DSC & JI &ASD (mm)\\ \hline
Upper Bound & $0.808\pm0.035$ & $0.679\pm0.050$ & $2.42\pm0.645$ & $0.841\pm0.043$ & $0.727\pm0.062$ & $2.07\pm0.543$ \cr
Baseline & $0.684\pm0.098$ & $0.528\pm0.109$ & $6.17\pm3.63$ & $0.742\pm0.081$ & $0.596\pm0.095$ & $4.20\pm1.08$ \\ \hline
MT &$0.749\pm0.073$ &$0.604\pm0.092$ &$3.59\pm1.77$ &$0.797\pm0.101$ &$0.673\pm0.122$ &$2.42\pm1.10$ \cr
UA-MT &$0.760\pm0.081$ &$0.619\pm0.100$ &$3.96\pm2.10$ &$0.811\pm0.086$ &$0.690\pm0.108$ &$2.21\pm0.718$ \cr
DTC &$0.765\pm0.066$ &$0.624\pm0.086$ &$3.40\pm1.72$ &$0.809\pm0.088$ &$0.687\pm0.111$ &$2.94\pm0.935$ \cr
Dual-Teacher &$0.773\pm0.050$ &$0.633\pm0.067$ &$3.03\pm0.988$ &$0.817\pm0.089$ &$0.699\pm0.112$ &$2.48\pm0.937$ \cr \hline
Joint-training &$0.770\pm0.056$ &$0.629\pm0.074$ &$3.45\pm1.51$ &$0.811\pm0.094$ &$0.691\pm0.115$ &$2.70\pm0.889$ \cr \hline
AHDC & $\mathbf{0.795\pm0.044}$ & $\mathbf{0.661\pm0.061}$ & $\mathbf{2.47\pm0.681}$ & $\mathbf{0.830\pm0.057}$ & $\mathbf{0.713\pm0.077}$ & $\mathbf{2.07\pm0.659}$ \\ \midrule
\end{tabular}
\end{subtable}
\end{table*}
\begin{table*}
\captionsetup{justification=centering}
\caption{\centering Quantitative comparison between our proposed AHDC and other methods on multi-modality data. Abbreviations: DSC, Dice Similarity Coefficient; JI, Jaccard Index; ASD, Average Surface Distance.}\label{table:MR_CT}
\centering
\scalebox{.99}{
\begin{tabular}{c|ccc|ccc}
\addlinespace
\toprule
\multirow{2}*{Method} & \multicolumn{3}{c|}{C5 (CT) supports C2 (MR)} & \multicolumn{3}{c}{C2 (MR) supports C5 (CT)}\\ \cline{2-7}
& DSC & JI &ASD (mm) & DSC & JI &ASD (mm)\\ \hline
Upper Bound & $0.923\pm0.025$ & $0.858\pm0.042$ & $1.20\pm1.96$ & - & - & - \cr
Baseline & $0.858\pm0.107$ & $0.763\pm0.121$ & $2.72\pm4.59$ & $0.828\pm0.115$ & $0.722\pm0.157$ & $4.88\pm3.53$ \\ \hline
MT &$0.874\pm0.072$ &$0.782\pm0.099$ &$2.42\pm3.14$ &$0.861\pm0.086$ &$0.765\pm0.121$ &$4.54\pm3.35$ \cr
UA-MT &$0.881\pm0.056$ &$0.791\pm0.084$ &$1.84\pm2.66$ &$0.878\pm0.050$ &$0.786\pm0.077$ &$2.39\pm1.47$ \cr
DTC &$0.888\pm0.059$ &$0.803\pm0.084$ &$1.93\pm3.29$ &$0.880\pm0.064$ &$0.791\pm0.096$ &$2.96\pm2.10$ \cr
Dual-Teacher &$0.888\pm0.041$ &$0.801\pm0.062$ &$1.41\pm1.47$ &$0.891\pm0.036$ &$0.806\pm0.057$ &$2.38\pm1.14$ \cr \hline
Joint-training &$0.869\pm0.069$ &$0.774\pm0.100$ &$2.18\pm3.89$ &$0.834\pm0.117$ &$0.731\pm0.156$ &$5.62\pm5.10$ \cr \hline
AHDC & $\mathbf{0.911\pm0.028}$ & $\mathbf{0.837\pm0.047}$ & $\mathbf{1.07\pm0.872}$ & $\mathbf{0.916\pm0.031}$ & $\mathbf{0.846\pm0.052}$ & $\mathbf{1.30\pm0.319}$ \\ \midrule
\end{tabular}
}
\end{table*}
\section{Results and Analysis}
In this section, we present the results of the above-mentioned experiments to validate our proposed AHDC for cross-domain semi-supervised segmentation.
\subsection{The Feasibility Analysis of AHDC for Generalising Across Domains}
TABLE \ref{table:c1c2c3} and TABLE \ref{table:MR_CT} summarise the quantitative segmentation results of AHDC on multi-centre data and multi-modality data. As we can see, our proposed AHDC obtains consistent improvements in terms of DSC, JI and ASD against the baselines. Furthermore, as summarised in TABLE \ref{table:ratio_label}, one can see that our proposed AHDC obtains consistent improvements against fully supervised learning under the $5\%$, $10\%$ and $20\%$ labelled-data settings. Fig. \ref{fig:la_vis_comparison} and Fig. \ref{fig:vis_3d} provide 2D and 3D qualitative comparisons of the LAs estimated by AHDC against the ground truth. It is observed that our proposed AHDC has the ability to segment the LA accurately. These quantitative and qualitative results indicate the feasibility of our proposed AHDC for generalising across domains.
\subsection{The Superiority Analysis of AHDC for Generalising Across Domains}
TABLE \ref{table:c1c2c3} and TABLE \ref{table:MR_CT} summarise the experiment results on multi-centre data and multi-modality data combined in pairs for comparison. It is observed that the widely used semi-supervised MT method improves the segmentation accuracy of the LA compared to the baseline. One can see that after adding uncertainty information to the MT, its performance is improved (UA-MT). The DTC method further improves the segmentation accuracy, indicating the effectiveness of dual-task consistency for semi-supervised learning. Although these methods have the ability to mine effective information from unlabelled data to support task learning, they have no proper mechanism to exploit the cross-domain information, thus leading to limited segmentation results. Compared to these methods, Dual-Teacher leverages two teacher models to guide a student model in learning both intra-domain and inter-domain knowledge, thus achieving substantial improvements in segmentation accuracy. Notably, our proposed AHDC obtains the best segmentation accuracy among these widely used and state-of-the-art semi-supervised methods, which shows its superiority for generalising across domains. Furthermore, it is observed that our proposed AHDC generally improves the segmentation accuracy compared to joint training, which combines the cross-domain data directly for semi-supervised LA segmentation. This demonstrates that our proposed AHDC can leverage cross-domain information to improve model performance. We also provide a qualitative comparison between different methods in Fig. \ref{fig:la_vis_comparison}. It is observed that the LAs estimated by other methods present fragmentary parts and unsmooth boundaries, while the LAs estimated by our proposed method are closer to the ground truth, with smoother boundaries.
\subsection{Ablation Studies}
We performed ablation studies on C1 and C2 (C1 supports C2) to validate the effectiveness of our proposed AHDC for cross-domain semi-supervised segmentation.
\begin{figure*}[!ht]
\begin{center}
\scalebox{.85}{
\includegraphics[width=1\textwidth]{la_vis_comparison.pdf}
}
\end{center}
\caption{2D visual comparisons on LA segmentation results estimated by different methods. It is observed that our estimated LAs (AHDC) are more similar to the ground truth (GT) than others (DSC based segmentation accuracies of AHDC for the 2D slices from row 1 to row 4 are $0.859$, $0.897$, $0.907$ and $0.949$, respectively). Abbreviations: DSC, Dice Similarity Coefficient.}\label{fig:la_vis_comparison}
\end{figure*}
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.99}{
\includegraphics[width=1\linewidth]{vis_3d.pdf}
}
\end{center}
\caption{3D visualisation of LA segmentation results estimated by AHDC. Each DSC score is calculated for the whole 3D LGE-MRI image (the DSC-based segmentation accuracies of AHDC for the 3D images from column $1$ to column $3$ are $0.936$, $0.917$ and $0.898$, respectively). Abbreviations: DSC, Dice Similarity Coefficient.}
\label{fig:vis_3d}
\end{figure}
\begin{table}[!hbtp]
\captionsetup{justification=centering}
\caption{\centering The performance of AHDC on different percentages of labelled data. Abbreviations: Lx (\%): the ratio of labelled data in the training set of centre x; Ux (\%): the ratio of unlabelled data in the training set of centre x; DSC, Dice Similarity Coefficient; JI, Jaccard Index; ASD, Average Surface Distance.}\label{table:ratio_label}
\centering
\scalebox{.7}{
\setlength{\floatsep}{10pt plus 3pt minus 2pt}
\begin{tabular}{c|cc|ccc}
\addlinespace
\toprule
\multirow{2}*{Method} & \multicolumn{2}{c|}{Rate} & \multicolumn{3}{c}{Metrics}\\ \cline{2-6}
& L2/U2 (\%) & L1/U1 (\%) & DSC & JI &ASD\\ \hline
Upper Bound & $100/0$ & $0/0$ & $0.926\pm0.021$ & $0.863\pm0.036$ & $0.867\pm0.46$ \\ \hline
Baseline & $20/0$ & $0/0$ & $0.860\pm0.103$ & $0.765\pm0.122$ & $4.00\pm5.08$ \cr
AHDC & $20/80$ & $0/100$ & $0.917\pm0.026$ &$0.848\pm0.043$ & $1.17\pm1.60$ \\ \hline
Baseline &$10/0$ & $0/0$ & $0.815\pm0.142$ & $0.706\pm0.153$ & $4.84\pm6.09$ \cr
AHDC & $10/90$ & $0/100$ &$0.891\pm0.039$ &$0.805\pm0.060$ &$1.98\pm2.65$ \\ \hline
Baseline & $5/0$ & $0/0$ &$0.776\pm0.134$ &$0.650\pm0.146$ &$6.51\pm5.52$ \cr
AHDC & $5/95$ & $0/100$ & $0.871\pm0.041$ &$0.773\pm0.060$ & $1.95\pm1.44$ \\ \midrule
\end{tabular}
}
\end{table}
\textbf{(1) Model variation study for bidirectional adversarial inference}: As summarised in TABLE \ref{table:bai_hcr}, the bidirectional adversarial inference with bidirectional reconstruction improves the LA segmentation accuracy in terms of DSC, JI and ASD compared with BAI$_{wbr}/$ALI$/$BiGAN. The reason behind the improvement is that bidirectional reconstruction specifies and constrains the relationship between matched samples. It guarantees that the matched samples are in one-to-one correspondence for subsequent effective hierarchical dual consistency learning on cross-domain data. It is also observed that the segmentation accuracy drops when the skip connections are removed from the domain mapping network. The reason behind this is that the domain mapping network (U-Net structure) employs skip connections to deliver low-level information. This allows the samples adapted to another domain to maintain the same LA structures, which makes subsequent dual consistency learning effective. Furthermore, one can see that our proposed BAI has better performance on the downstream semi-supervised LA segmentation task compared to the fully adversarial ALICE method, which indicates the superiority of our proposed BAI.
\begin{table}[!hbtp]
\captionsetup{justification=centering}
\caption{\centering Model variation study on C1 and C2 (C1 supports C2). Abbreviations: Lx (\%): the ratio of labelled data in the training set of centre x; Ux (\%): the ratio of unlabelled data in the training set of centre x; DSC, Dice Similarity Coefficient; JI, Jaccard Index; ASD, Average Surface Distance.}\label{table:bai_hcr}
\centering
\setlength{\floatsep}{10pt plus 3pt minus 2pt}
\scalebox{.9}{
\begin{tabular}{c|ccc}
\addlinespace
\toprule
\multirow{2}*{Method} & \multicolumn{3}{|c}{Metrics}\\ \cline{2-4}
& DSC & JI &ASD\\ \hline
Lower Bound & $0.860\pm0.103$ & $0.765\pm0.122$ & $4.00\pm5.08$ \\ \hline
BAI$_{eds}$ + HDC &$0.879\pm0.039$ & $0.786\pm0.060$ &$1.59\pm0.875$ \cr
BAI$_{wbr}$ + HDC &$0.885\pm0.051$ & $0.798\pm0.076$ &$1.37\pm0.802$ \cr
ALICE + HDC &$0.896\pm0.033$ & $0.814\pm0.053$ &$1.42\pm1.09$ \cr
BAI + HDC$_{intra}$ &$0.893\pm0.048$ &$0.809 \pm 0.073$ &$1.90 \pm 2.81$ \cr
BAI + HDC$_{inter}$ &$0.900\pm0.045$ &$0.822\pm0.066$ &$1.51\pm1.24$ \cr
AHDC & $0.917\pm0.026$ &$0.848\pm0.043$ & $1.17\pm1.60$ \\ \midrule
\end{tabular}
}
\end{table}
\textbf{(2) Model variation study for hierarchical dual consistency}: As summarised in TABLE \ref{table:bai_hcr}, both intra-domain and inter-domain dual consistency learning, applied independently, improve the LA segmentation accuracy over the lower-bound model. This indicates that both forms of dual consistency learning effectively exploit the unlabelled cross-domain data. Furthermore, the intra-domain and inter-domain dual consistency are observed to reinforce each other for cross-domain semi-supervised segmentation. These results demonstrate the effectiveness of the hierarchical dual consistency for semi-supervised segmentation on cross-domain data.
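For reference, the overlap metrics compared throughout these tables can be computed from binary masks in a few lines; the sketch below is illustrative NumPy code, not the authors' evaluation pipeline:

```python
import numpy as np

def dice_and_jaccard(pred, gt):
    """Compute DSC and JI for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    ji = inter / np.logical_or(pred, gt).sum()
    return dsc, ji
```

Note that the two metrics are linked by DSC $= 2\,\mathrm{JI}/(1+\mathrm{JI})$, which is consistent with the paired values reported in the tables.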
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.85}{
\includegraphics[width=1\linewidth]{adaptation1.pdf}
}
\end{center}
\caption{Principal component analysis based visualisation of the data distributions of the testing sets of C1 and C2. (a) The data distribution of domain $D_{1}$. (b) The data distribution of domain $D_{2}$. (c) The data distributions of domain $D_{1}$ and the domain $D_{2t1}$ adapted from $D_{2}$ to $D_{1}$. (d) The data distributions of domain $D_{2}$ and the domain $D_{1t2}$ adapted from $D_{1}$ to $D_{2}$.}
\label{fig:adaptation}
\end{figure}
\subsection{The Effectiveness Analysis of BAI for Matching Domains}
The effectiveness of the bidirectional adversarial inference is further validated by the qualitative results on distribution alignment and sample matching in the testing set.
\textbf{(1) Distribution alignment}: In Fig. \ref{fig:adaptation}, samples from the different domains and adapted domains are coloured to highlight their correspondence (brown and blue for samples from domains $D_{2}$ and $D_{1}$, respectively; peru and green for samples from the domain $D_{2t1}$ adapted from $D_{2}$ to $D_{1}$ and the domain $D_{1t2}$ adapted from $D_{1}$ to $D_{2}$, respectively). As shown in Fig. \ref{fig:adaptation} (a) and (b), the domains $D_{1}$ and $D_{2}$ have different distributions. After bidirectional adversarial inference, as shown in Fig. \ref{fig:adaptation} (c) and (d), the distribution of the adapted domain $D_{2t1}$ is consistent with that of $D_{1}$, while the distribution of the adapted domain $D_{1t2}$ is consistent with that of $D_{2}$. One can also see that the adapted domains $D_{1t2}$ and $D_{2t1}$ make the distribution spaces of $D_{1}$ and $D_{2}$ more complete. These results indicate the effectiveness of BAI for distribution alignment.
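The 2-D projections used for this visualisation can in principle be obtained with standard PCA; the following is a minimal SVD-based sketch (row-vector samples assumed; not necessarily the exact pipeline used for Fig. \ref{fig:adaptation}):

```python
import numpy as np

def pca_project(X, k=2):
    """Project samples (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                        # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # coordinates in the top-k PC basis
```

Projecting the samples of $D_{1}$, $D_{2}$ and the adapted domains into one shared 2-D basis is what makes their distribution overlap directly comparable.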
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.9}{
\includegraphics[width=1\linewidth]{Adaptation_img.pdf}
}
\end{center}
\caption{Qualitative visualisation of images and corresponding adapted images in the testing sets of C1 and C2. Abbreviations: $x_{1}$,\ image from domain $D_{1}$; $x_{1t2}$,\ image adapted from domain $D_{1}$ to domain $D_{2}$; $x_{2}$,\ image from domain $D_{2}$; $x_{2t1}$,\ image adapted from domain $D_{2}$ to domain $D_{1}$.}
\label{fig:adaptation_img}
\end{figure}
\textbf{(2) Sample matching}: Fig. \ref{fig:adaptation_img} provides a 2D visualisation of some examples before and after bidirectional adversarial inference. The images in the first two columns are from the domain $D_{1}$ and the domain adapted from $D_{1}$ to $D_{2}$; the images in the last two columns are from the domain $D_{2}$ and the domain adapted from $D_{2}$ to $D_{1}$. The target shape and structure in corresponding images are consistent, whereas the texture and brightness differ. These results illustrate that the bidirectional adversarial inference is effective in producing matched samples.
\subsection{The Effectiveness Analysis of HDC for the Availability of Complementary Information}
\textbf{(1) Availability of the complementary modelling information}: TABLE \ref{table:local-global} summarises the experimental results on different modelling structures (Local-Global, Local-Local and Global-Global) for intra-domain semi-supervised learning. The dual-modelling structure (Local-Global) achieves higher segmentation accuracy. This is because the two modelling branches complement each other during training and can thus provide effective prediction perturbation for consistency-based learning. We also visualise examples estimated by the local modelling branch and the global modelling branch at different training epochs, as shown in Fig. \ref{fig:local_global_difference}. The absolute difference between the local and global modelling results demonstrates that the two branches are modelled separately, which provides effective prediction perturbation for consistency-based learning.
\begin{table}[!hbtp]
\captionsetup{justification=centering}
\caption{\centering Performance comparison between dual structure (Local-Global) and non-dual structures (Local-Local and Global-Global) in terms of DCS, JI and ASD. The results are presented in the form of the mean (standard deviation). Abbreviations: DSC, Dice Similarity Coefficient; JI, Jaccard Index; ASD, Average Surface Distance; Local, local modelling network; Global, global modelling network.}\label{table:local-global}
\centering
\setlength{\floatsep}{10pt plus 3pt minus 2pt}
\begin{tabular}{c|ccc}
\addlinespace
\toprule
\multirow{2}*{Method} & \multicolumn{3}{|c}{Metrics}\\ \cline{2-4}
& DSC & JI &ASD\\ \hline
Local-Local & $0.875\pm0.071$ & $0.785\pm0.102$ & $2.66\pm4.18$ \cr
Global-Global &$0.879\pm0.069$ & $0.789\pm0.098$ &$3.01\pm4.80$ \cr
Local-Global &$0.893\pm0.048$ & $0.809\pm0.073$ &$1.90\pm2.81$ \cr \hline
\end{tabular}
\end{table}
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.99}{
\includegraphics[width=1\linewidth]{local_global_difference.pdf}
}
\end{center}
\caption{Visualisation of the evolution of the dual-modelling results (first and second rows) and their absolute difference (third row). The second to fourth columns correspond to the LAs estimated at epochs 5, 25 and 50 during model learning.}
\label{fig:local_global_difference}
\end{figure}
\textbf{(2) Availability of complementary domain information}: Fig. \ref{fig:feature_correlation} (a) provides the experimental results on hierarchical dual consistency learning with and without the orthogonal weight constraint, and Fig. \ref{fig:feature_correlation} (b) provides examples of feature correlations between corresponding layers of the two dual-modelling networks in both settings. When the orthogonal weight constraint is removed from inter-domain semi-supervised learning, the segmentation performance drops while the feature correlations between the two dual-modelling networks become higher. This is because inter-domain semi-supervised learning with the orthogonal weight constraint provides more effective prediction perturbation for consistency-based learning. It is also observed that, even without the orthogonal weight constraint, the feature correlations between the two dual-modelling networks remain moderate ($<0.3$). In this case the two networks can still learn complementary domain knowledge that provides effective prediction perturbation, achieving a high segmentation accuracy of 0.907 in terms of DSC.
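The exact formulation of the orthogonal weight constraint is not restated here, but a common soft penalty that decorrelates the weights of two networks can be sketched as follows (the function name and formulation are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def orthogonal_weight_penalty(w1, w2):
    """Soft orthogonality penalty between two weight matrices:
    the squared Frobenius norm of their cross-correlation w1 @ w2.T.
    It is zero exactly when every row of w1 is orthogonal to every row of w2."""
    cross = w1 @ w2.T
    return np.sum(cross ** 2)
```

In training, such a penalty would enter the total loss weighted by a coefficient such as $\lambda_{ow}$, the hyper-parameter validated in TABLE \ref{table:parameter_validation}.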
\begin{figure}[!hbtp]
\begin{center}
\scalebox{.9}{
\includegraphics[width=1\linewidth]{fcorrelation.pdf}
}
\end{center}
\caption{Segmentation performance comparison and feature correlation analysis between AHDC without orthogonal weights (WOOW) and AHDC with orthogonal weights (WOW). The experiments were performed on C1 and C2 (C1 supports C2). DMN1 and DMN2 represent two dual-modelling networks, respectively. The grape and asparagus bars denote the mean values with standard deviations.}
\label{fig:feature_correlation}
\end{figure}
\subsection{The Effects of Parameter Settings on Model Performance}
TABLE \ref{table:parameter_validation} presents the performance of our model for LA segmentation under different parameter settings. Our model achieves the best performance when the patch size and $\lambda_{ow}$ are set to $8\times 8$ and $0.1$, respectively.
\begin{table}[!hbtp]
\captionsetup{justification=centering}
\caption{\centering Parameter validation for AHDC framework. The results are presented in the form of mean $\pm$ standard deviation. Abbreviations: DSC, Dice Similarity Coefficient; JI, Jaccard Index; ASD, Average Surface Distance.}
\centering
\scalebox{.9}{
\begin{tabular}{ccccc}
\toprule
Parameter & \multicolumn{1}{c}{Value} & \multicolumn{1}{c}{DSC} & \multicolumn{1}{c}{JI} & \multicolumn{1}{c}{ASD (mm)}\\ \midrule
\multirow{3}*{Patch Size} & $4\times 4$ & $0.874\pm0.062$ &$0.780 \pm 0.086$ &$2.12 \pm 2.22$\cr
& $8\times 8$ &$0.893\pm0.048$ & $0.809\pm0.073$ &$1.90\pm2.81$\cr
& $16\times 16$ & $0.881\pm0.069$ &$0.794 \pm 0.098$ &$2.40 \pm 3.85$\\ \midrule
\multirow{3}*{$\lambda_{ow}$} & 0.0 & $0.907\pm0.033$ &$0.831\pm0.052$ & $1.34\pm1.58$\cr
& 0.1 & $0.917\pm0.026$ &$0.848\pm0.043$ & $1.17\pm1.60$ \cr
& 1.0 & $0.913\pm0.032$ &$0.841\pm0.052$ & $1.34\pm1.68$\\ \midrule
\end{tabular}
}
\label{table:parameter_validation}
\end{table}
\section{Discussion}
In this study, we have developed a semi-supervised LA segmentation framework that generalises across domains. The framework comprises a BAI module and an HDC module, and the effectiveness of each has been validated in the ablation study presented in TABLE \ref{table:bai_hcr}. It is of note that self-attention based global modelling requires computational resources that grow with the image dimensions. In our proposed framework, we perform the self-attention based global modelling on the image feature maps, correlating $8\times 8$ patches instead of all pixels, which greatly reduces the computational requirements during model training. Moreover, during the testing phase and in practical applications, the self-attention based global modelling branches are removed from the framework, so the LA targets are predicted only by the local modelling branch at low computational cost.
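The saving from patch-level attention follows from the quadratic growth of query-key pairs with sequence length; a back-of-the-envelope sketch (the feature-map size is an illustrative assumption):

```python
def attention_pairs(h, w, patch=1):
    """Number of query-key pairs for self-attention over an h x w map
    split into patch x patch blocks (patch=1 means pixel-level attention)."""
    n = (h // patch) * (w // patch)   # sequence length after patching
    return n * n                      # attention cost is quadratic in length

# e.g. an illustrative 256 x 256 feature map:
pixel_cost = attention_pairs(256, 256)           # (256*256)^2 pairs
patch_cost = attention_pairs(256, 256, patch=8)  # (32*32)^2 pairs
```

For this example, attending over $8\times 8$ patches rather than pixels reduces the number of pairs by a factor of $8^{4} = 4096$.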
\begin{table}[!hbtp]
\captionsetup{justification=centering}
\caption{\centering Performance comparison between the vanilla model and our proposed AHDC using all the data available from four centres. Abbreviations: Lx ($\%$): the ratio of labelled data in the training set of centre x; Ux ($\%$): the ratio of unlabelled data in the training set of centre x; DSC, Dice Similarity Coefficient; JI, Jaccard Index; ASD, Average Surface Distance.}\label{table:annotation_difference}
\centering
\scalebox{.65}{
\setlength{\floatsep}{10pt plus 3pt minus 2pt}
\begin{tabular}{c|cccc|ccc}
\addlinespace
\toprule
\multirow{2}*{Method} & \multicolumn{4}{c|}{Rate} & \multicolumn{3}{c}{Metrics}\\ \cline{2-8}
& \tabincell{c}{L1$/$U1 \\ ($\%$)} & \tabincell{c}{L2$/$U2 \\ ($\%$)} & \tabincell{c}{L3$/$U3 \\ ($\%$)} & \tabincell{c}{L4$/$U4 \\ ($\%$)} & DSC & JI &ASD\\ \hline
\multirow{2}*{U-Net} & $0/0$ & $100/0$ &$0/0$ &$0/0$ & $0.923\pm0.025$ & $0.858\pm0.042$ & $1.20\pm1.96$ \cr
& $100/0$ & $100/0$ & $100/0$ &$100/0$ &$0.927\pm0.022$ & $0.864\pm0.037$ &$0.728\pm0.578$ \cr \hline
AHDC & $0/100$ & $100/0$ & $0/100$ & $0/100$ &$0.938\pm 0.015$ & $0.883\pm 0.026$ &$0.506\pm0.164$ \cr \midrule
\end{tabular}
}
\end{table}
The AHDC requires complementary domain information for inter-domain learning. In multi-centre studies, although the domains from different sources exhibit heterogeneous properties \cite{campello2021multi}, they still share some specific information because they come from the same LGE image modality. To make the model focus on the heterogeneous properties of different domains during inter-domain learning, we use an orthogonal weight constraint to extract conditionally independent features of different domains for subsequent target modelling. We have explored the effectiveness of the orthogonal weight constraint, together with its weight coefficient $\lambda_{ow}$, for inter-domain learning. As shown in TABLE \ref{table:parameter_validation}, the orthogonal weight constraint generally improves the segmentation accuracy. Furthermore, the performance of AHDC is not very sensitive to the choice between $\lambda_{ow}$ values of 0.1 and 1.0. The orthogonal weight constraint could therefore exploit the heterogeneous properties among different domains for inter-domain learning.
Considering the scarcity of data annotation in medical image analysis, our proposed method only requires labelled data from one of the multiple centres during cross-domain learning, further reducing the model's dependence on annotated data. As shown in TABLE \ref{fig:test} and TABLE \ref{table:MR_CT}, our proposed method is able to generalise across two different domains simultaneously. We further explored how the task model generalises across multiple domains by applying our proposed method to the LGE CMRI data available from four centres. For comparison, we also trained a vanilla model (U-Net) with the LGE CMRI data from a single target centre and with all the LGE CMRI data from the four centres. As shown in TABLE \ref{table:annotation_difference}, compared with using all annotated data from a single domain, using all the data available from four centres yields only small improvements in segmentation accuracy, due to the domain shift and label variations across centres. In contrast, our proposed AHDC clearly improves the segmentation accuracy, which indicates its capability for cross-domain semi-supervised learning.
\section{Conclusion}
In this paper, we proposed an adaptive hierarchical dual consistency framework for cross-domain semi-supervised LA segmentation. It first overcomes the distribution difference and sample mismatch between domains via bidirectional adversarial inference, and then exploits the complementary modelling and domain information within and across domains for semi-supervised LA segmentation via the hierarchical dual consistency. Comprehensive experiments on four 3D LGE CMR datasets and one CT dataset demonstrated the feasibility and superiority of our proposed method for cross-domain semi-supervised LA segmentation.
|
2301.12206
|
\section{Introduction}
Tagging can be seen as an initial step in many tasks, such as dependency parsing as in \citep{Vacareanu2020}, part-of-speech (POS) tagging, and named entity recognition (NER) tagging.
POS and NER tagging for semantic parsing are rather restricted: they capture lexical semantics only with some shortcomings. Universal semantic tagging (semtagging) is motivated by the need to reduce and compensate for these limitations.
A further motivation is that the parsing community is shifting from syntactic dependency tree parsing to semantic dependency graph parsing, and semtagging can be seen as an initial step in these investigations.
Semantic tagging is the task of assigning language-neutral semantic categories to words. The necessity of semantic tagging is well illustrated by recent research on semantic parsing. \citep{Zheng2020} decomposes semantic parsing into two parts. In the first part, the input utterance $x$ is tagged with semantic symbols. In the second part, a sequence-to-sequence model uses these semantic features to produce the final semantic parse, which can be represented in different meaning formalisms such as lambda calculus or SQL queries. The semantic labels $z$ in \citep{Zheng2020} are unobserved and are treated as a latent variable; the model learns $p(z|x;\theta)$, where $\theta$ are the parameters of a BiLSTM model.
\citep{Bjerva2016} uses deep residual networks for the task of semantic tagging, feeding the input as both word and character representations. They used the Groningen Meaning Bank (GMB) corpus as well as the Parallel Meaning Bank (PMB) dataset, and obtained better signal propagation as well as less overfitting in these deep networks.
Semantic tagging has two major applications: it can be used for multitask learning, as in \citep{Abdou2018}, or in a pipeline to improve the quality of vector representations for downstream tasks such as machine translation, as described in \citep{Belinkov2018}.
\section{Modeling Semantic Tagging}
\citep{Huang2015} uses a combination of biLSTM and CRF for sequence tagging. \citep{Lample2016} improves the biLSTM-CRF model through a better vector representation of words that considers both compositional form and function, inspired by \citep{Ling2015}. \citep{Ma2015} combines biLSTM-CRF with convolutional neural networks.
\begin{figure} [h!]
\centering
\includegraphics[width=100mm,scale=0.5]{model.png}
\caption{LSTM-CRF model for semtagging}
\label{fig:model}
\end{figure}
\subsection{LSTM-CRF Model}
The semtagging dataset of \citep{Abzianidze2017} is used for training in the present paper; its 73 sem-tags are grouped into 13 meta-tags. The baseline in the present paper is similar to \citep{Huang2015}, but an LSTM is used instead of a biLSTM, and whereas \citep{Huang2015} targeted named entity recognition, the present paper focuses on semantic tag prediction. Figure~\ref{fig:model} shows the architecture of the LSTM-CRF, which takes a sentence as a sequence of words $x=x_{1},\ldots,x_{T}$ and outputs a sequence of semantic tags $y=y_{1},\ldots,y_{T}$.
Instead of using pretrained word embeddings such as GloVe, the LSTM-CRF model in the present paper learns the embedding end-to-end: the embedding is modelled as a simple linear layer that maps token indices to word vector representations, and the weights of this linear layer are learned along with the weights of the LSTM and CRF.
The features are produced by the LSTM, while the CRF uses tag information and learns the parameters of the transition matrix. The matrix of scores output by the model is denoted by $f$; its entry $[f_{\theta}]_{t,y_{t}}$ is the score of semtag $y_{t}$ at the word at time $t$ given model parameters $\theta$, and is called the emission score.
$A$ is the matrix of transition scores, which is position independent (shared across time steps); its entry $[A]_{y_{t},y_{t+1}}$ denotes the transition score from state $y_{t}$ to state $y_{t+1}$ for a pair of consecutive time steps. The score of a path of tags $y$ for a sentence $x$ is the sum of transition and emission scores. The goal is to learn the matrix $A$ and the parameters $\theta$:
\begin{equation}
s(x,y)=\sum_{t=1}^{T}( [A]_{y_{t},y_{t+1}} +[f_\theta]_{t,y_{t}} )
\end{equation}
Now, the probability of a semtag sequence $y$ is given by the following softmax:
\begin{equation}\label{cond}
p(y|x)=\frac{e^{s(x,y)}}{\sum_{\tilde{y}\in Y_{X}} e^{s(x,\tilde{y})}}
\end{equation}
where $Y_{X}$ in \eqref{cond} represents the set of all possible semtag sequences. The following log-probability of the gold semtag sequence is maximised during training:
\begin{equation}\label{final_cond}
\log p(y|x)= s(x,y) -\log \sum_{\tilde{y}\in Y_{X}} e^{s(x,\tilde{y})}
\end{equation}
The second term in \eqref{final_cond} is the logarithm of the partition function and can be calculated efficiently using the forward ($\alpha$) algorithm, a dynamic programming algorithm.
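A minimal log-space sketch of this forward recursion for small dense score matrices (illustrative code, not the training implementation):

```python
import numpy as np

def log_partition(emit, trans):
    """Forward (alpha) recursion: log of the sum of exp(score) over all tag paths.
    emit:  (T, k) emission scores [f_theta]_{t, y_t}
    trans: (k, k) transition scores [A]_{y_t, y_{t+1}}
    """
    alpha = emit[0].copy()
    for t in range(1, len(emit)):
        # log-sum-exp over the previous tag for every current tag
        scores = alpha[:, None] + trans + emit[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

The recursion costs $O(Tk^{2})$ instead of the $O(k^{T})$ of naive enumeration over $Y_{X}$.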
Once the transition matrix parameters and emission function parameters are learned, the Viterbi algorithm is used to obtain the most likely semtag sequence. The Viterbi variable $\pi[j,s]$ is computed by dynamic programming via the following recursive relation:
\begin{equation} \label{viterbi}
\pi[j,s]= \underset{s'\in \{1,\ldots,k\}}{\max}\ \pi[j-1,s']\times A(s|s') \times f(x_{j}|s)
\end{equation}
The Viterbi algorithm is also used during training. $\pi[j,s]$ in \eqref{viterbi} is the maximum probability of a sequence ending in state $s$ at time $j$. Backpointers $bp[j,s]$ are also recorded during the Viterbi algorithm:
\begin{equation} \label{backpointers}
bp[j,s]= \underset{s'\in \{1,\ldots,k\}}{\operatorname{arg\,max}}\ \pi[j-1,s']\times A(s|s') \times f(x_{j}|s)
\end{equation}
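Equations \eqref{viterbi} and \eqref{backpointers} translate directly into a few lines of dynamic programming; the sketch below works with log-scores (sums instead of products), which is equivalent up to monotonicity (illustrative code, not the training implementation):

```python
import numpy as np

def viterbi(emit, trans):
    """Most likely tag path under score = sum of emission and transition scores.
    emit: (T, k) log emission scores; trans: (k, k) log transition scores."""
    T, k = emit.shape
    pi = emit[0].copy()                # pi[j, s] of the recursion, in logs
    bp = np.zeros((T, k), dtype=int)   # backpointers bp[j, s]
    for t in range(1, T):
        scores = pi[:, None] + trans + emit[t][None, :]
        bp[t] = scores.argmax(axis=0)
        pi = scores.max(axis=0)
    path = [int(pi.argmax())]
    for t in range(T - 1, 0, -1):      # follow backpointers from the end
        path.append(int(bp[t, path[-1]]))
    return path[::-1]
```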
\subsection{BERT-LSTM-CRF Model}
Another approach to word representation is BERT \citep{Devlin2019}.
The second model in the present paper uses BERT for dynamic embedding of words.
BERT embeddings capture contextual information better than pretrained or traditional embeddings, as noted in \citep{He2019}. Thus, BERT embeddings are investigated as the vector representation of the input to the LSTM-CRF semtagging model.
Context-informed word embeddings such as BERT capture additional forms of information, resulting in more accurate feature representations than traditional word2vec algorithms. Word2Vec assigns a fixed, global representation to each word, whereas BERT produces a dynamic representation conditioned on the context of the given sentence. The second model in the present paper, called BERT-LSTM-CRF, is shown in Figure~\ref{fig:secondModel}. Since the semtagging dataset is small, this second model, which suffers from the curse of dimensionality, needs more data to tune its weights.
\begin{figure} [h!]
\centering
\includegraphics[width=100mm,scale=0.5]{BERT-LSTM-CRF.png}
\caption{BERT-LSTM-CRF model for semtagging}
\label{fig:secondModel}
\end{figure}
\subsection{Experiments}
All experiment parameters are listed in Table~\ref{fig:experiments}. The last experiment uses BERT-LSTM-CRF, while the rest are based on the LSTM-CRF model. The number of epochs in all experiments is fixed at 20 so that the rate of convergence under different settings can be easily observed. Appendix~\ref{appendix:res} shows the training and validation accuracy and loss for all experiments. A dynamic learning rate is used: it is reduced automatically every 10 epochs by a factor of 0.1 in all experiments.
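The stepped schedule just described admits a simple closed form; a sketch (the base learning rate is an illustrative assumption):

```python
def stepped_lr(epoch, base_lr=0.01, step=10, gamma=0.1):
    """Learning rate after `epoch` epochs under a step decay:
    multiplied by `gamma` once every `step` epochs."""
    return base_lr * gamma ** (epoch // step)
```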
\begin{table}[h!]
\begin{center}
\caption{Different experiments for semtagging}
\label{fig:experiments}
\begin{tblr}{h{3em}h{3em}h{3em}h{3em}h{3em}h{3em}}
\hline
{ex} & {opt} & {epochs}& {batch size} &{embDim} &{hidDim} \\
\hline
1 & Adam & 20 & 5 & 50 & 8 \\
2 & Adam & 20 & 5 & 100 & 20 \\
3 & SGD & 20 & 5 & 100 & 20 \\
4 & Adam & 20 & 20 & 100 & 20 \\
5 & Adam & 20 & 5 & 100 & 30 \\
6 & Adam & 20 & 5 & 100 & 50 \\
7 & SGD & 20 & 5 & 768 & 600 \\
\end{tblr}
\end{center}
\end{table}
Experiment 6 shows a training accuracy of 95 percent and a validation accuracy of 89 percent, since model complexity is increased by an embedding dimension of 100 and a hidden dimension of 50, as shown in Table~\ref{fig:experiments}.
Experiment 7 in Table~\ref{fig:experiments} uses BERT embeddings instead of an internal embedding layer and converges with more difficulty, mainly because the word embedding has a hidden size of 768. This creates a curse of dimensionality: there is not enough semantic tagging data, and the added complexity makes the overall model very data hungry.
The fluctuations in Figure~\ref{fig:ex7} have three causes. The first is the higher model complexity created by the hidden dimension of BERT. The second is the relatively small batch size. The third, and least significant, is the use of the SGD optimizer instead of the Adam optimizer.
\section{Conclusion}
The importance of semtagging and some of its applications have been explained. It has been shown how LSTM-CRF and BERT-LSTM-CRF can predict semantic tags; the first model converges quickly even on a small dataset, since its model complexity is relatively low.
It should be emphasized that semtagging could have a major impact on improving semantic parsing in all formalisms, such as lambda calculus, abstract meaning representation (AMR) and discourse representation structure (DRS).
One research direction is improving semantic operator prediction in \citep{Noravesh2023}, either by augmenting POS tags with semtags or by using semtags together with pretrained word embeddings. Another direction is using knowledge distillation to obtain a lower-complexity model, since the current semtagging dataset is relatively small for BERT word embeddings with the default size of 768. Many knowledge distillation models have been proposed in the literature, e.g.\ \citep{Sanh2019}, which produce models of smaller complexity suitable for small datasets like the universal semantic tagging dataset.
\section{Appendix}
\label{appendix:res}
\begin{figure} [!h]
\centering
\includegraphics[width=100mm,scale=0.5]{ex1.png}
\caption{experiment 1}
\label{fig:ex1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm,scale=0.5]{ex2.png}
\caption{experiment 2}
\label{fig:ex2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm,scale=0.5]{ex3.png}
\caption{experiment 3}
\label{fig:ex3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm,scale=0.5]{ex4.png}
\caption{experiment 4}
\label{fig:ex4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm,scale=0.5]{ex5.png}
\caption{experiment 5}
\label{fig:ex5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm,scale=0.5]{ex6.png}
\caption{experiment 6}
\label{fig:ex6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=100mm,scale=0.5]{ex7.png}
\caption{experiment 7}
\label{fig:ex7}
\end{figure}
\bibliographystyle{agsm}
|
1112.5850
|
\newtheorem{theorem}{Theorem}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\newtheorem{hypothesis}{Hypothesis}
\newtheorem{lemma}{Lemma}
\newtheorem{question}{Question}
\def\FP{{the dollar trader }}
\def\AP{{the euro trader }}
\def\RP{{the sterling trader }}
\def\MP{{the yen trader }}
\DeclareMathOperator*{\lcm}{lcm}
\newcommand{\calA}{\mathscr{A}}
\newcommand{\calR}{\mathscr{R}}
\newcommand{\calB}{\mathscr{B}}
\newcommand{\calT}{\mathscr{T}}
\newcommand{\calD}{\mathscr{D}}
\newcommand{\calI}{\mathscr{I}}
\newcommand{\bbG}{\mathbb{G}}
\newcommand{\bbD}{\mathbb{D}}
\newcommand{\bbR}{\mathbb{R}}
\newcommand{\bA}{\boldsymbol{A}}
\newcommand{\bD}{\boldsymbol{D}}
\newcommand{\bG}{\boldsymbol{G}}
\newcommand{\hbA}{\boldsymbol{\Hat{A}}}
\newcommand{\tbA}{\boldsymbol{\Tilde{A}}}
\newcommand{\ttbA}{\boldsymbol{\Tilde{\Tilde{A}}}}
\newcommand{\be}{\boldsymbol{e}}
\newcommand{\bv}{\boldsymbol{v}}
\newcommand{\bw}{\boldsymbol{w}}
\ifpdf
\usepackage[bookmarks=false,
pdfstartview=FitH,linkbordercolor={0.5 1 1},
citebordercolor={0.5 1 0.5},unicode,
pagebackref,hyperindex]{hyperref}
\else
\fi
\renewcommand{\topfraction}{.97}
\renewcommand{\bottomfraction}{.97}
\renewcommand{\textfraction}{.03}
\renewcommand{\floatpagefraction}{.9}
\renewcommand{\dbltopfraction}{.97}
\renewcommand{\dblfloatpagefraction}{.9}
\newcommand{\zer}{\hphantom{-}0}
\newcommand{\on}{\hphantom{-}1}
\makeatletter
\newenvironment{smallmatrix-mod}{\null\,\vcenter\bgroup
\Let@\restore@math@cr\default@tag
\baselineskip6\ex@ \lineskip2.5\ex@ \lineskiplimit\lineskip
\ialign\bgroup\hfill$\m@th\scriptstyle##$\hfil&&\space\hfill
$\m@th\scriptstyle##$\hfil\crcr
}
\crcr\egroup\egroup\,
}
\def\ps@pprintTitle{%
\let\@oddhead\@empty
\let\@evenhead\@empty
\def\@oddfoot{\footnotesize\itshape~\hfill\today}%
\let\@evenfoot\@oddfoot}
\makeatother
\begin{document}
\begin{frontmatter}
\title{Periodic Sequences of Arbitrage:
A Tale of Four Currencies\tnoteref{ak}}
\tnotetext[ak]{The authors are grateful to Andrew Caplin, Patrick
Minford, Stephen Ross and Dimitri Vayanos for useful comments or
suggestions. The usual disclaimer of responsibility applies. The
authors would also like to thank two anonymous referees of this
journal for their incisive and constructive suggestions as to the
revision of this paper.}
\author[uc]{Rod Cross}
\ead{rod.cross@strath.ac.uk}
\author[ittp]{Victor Kozyakin\corref{t1}}
\ead{kozyakin@iitp.ru}
\author[ucc]{Brian O'Callaghan}
\ead{briantoc@gmail.com}
\author[ucc]{Alexei Pokrovskii\corref{t1}\fnref{t2}}
\ead{A.Pokrovskii@ucc.ie}
\author[lce]{Alexey Pokrovskiy}
\ead{A.Pokrovskiy@lse.ac.uk}
\cortext[t1]{Authors were partially supported by the Federal Agency
for Science and Innovations of the Russian Federation (state
contract no. 02.740.11.5048)}
\fntext[t2]{It is with great sadness we report that Alexei
Pokrovskii died shortly after this paper had been completed.}
\address[uc]{Department of Economics,
University of Strathclyde,\\ Sir William Duncan Building, 130
Rottenrow, Glasgow, G4 0GE, Scotland}
\address[ittp]{Institute for Information Transmission Problems, Russian Academy of Sciences,\\
Bolshoj Karetny lane 19, Moscow
127994 GSP-4, Russia}
\address[ucc]{Department of Applied Mathematics,
University College Cork, Ireland}
\address[lce]{London School of Economics and
Political Science,\\ Houghton Street, London WC2A 2AE, UK}
\begin{abstract}
This paper investigates arbitrage chains involving four currencies
and four foreign exchange trader-arbitrageurs. In contrast with the
three-currency case, we find that arbitrage operations when four
currencies are present may appear periodic in nature, and not
involve smooth convergence to a ``balanced'' ensemble of exchange
rates in which the law of one price holds. The goal of this
article is to understand some interesting features of sequences of
arbitrage operations, features which might well be relevant in
other contexts in finance and economics.
\end{abstract}
\begin{keyword}
Limits to arbitrage\sep Four currencies\sep Recurrent sequences\sep
Asyn\-chron\-ous systems
\medskip
\textit{JEL Classification}: C60, F31, D82
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{S-intro}
An arbitrage operation involves buying some good or asset for a
lower price than that for which it can be sold, taking advantage of
any imbalance in the quoted prices. The ``law of one price'' is a
statement of a key implication of the absence of arbitrage
opportunities. In turn arbitrage is often the process invoked to
explain why goods or assets that are in some sense ``identical''
should have a common price.
A study of commodity prices since 1273 concluded that ``\ldots
despite the steady decline in transportation costs over the past
700 years, the repeated intrusion of wars and disease, and the
changing fashions of commercial policy, the volatility and
persistence of deviations in the law of one price have remained
quite stable'' \cite[p. 18]{rogoff1996}. The present paper
investigates a relatively neglected complication regarding
arbitrage operations, namely the order in which information about
arbitrage opportunities is presented, illustrating this in relation
to arbitrage chains involving four currencies. The key finding is
that arbitrage operations can be periodic in nature, rather than
involving a smooth convergence to a law of one price.
The early literature on the law of one price is coeval with the
purchasing power parity explanation of foreign exchange rates. The
terminology was coined in \cite{Cassel1916}, involving arbitrage
between relatively homogeneous goods priced in different currencies
\cite{rogoff1996,rogoff2001}. Empirical tests suggest that
arbitrage operations in goods do not exert a strong influence on
exchange rates until the price index deviations involved exceed
about 25\% \cite{Engel1999,obstfeld2001}. Innovations that were
expected to reduce price dispersion, such as the European Single
Market legislation coming into effect in 1992, and the Economic and
Monetary Union project beginning in 1999, have had little effect on
price level disparities \cite{wolf2003}. The degree of price level
dispersion between US cities has displayed no marked trend over
time \cite{Rogers2001}. A study of the prices charged for identical
products in IKEA stores in twenty-five countries revealed typical
price divergences of 20--50\%, differences that could not be
attributed to just country or location-specific factors
\cite{Haskel2001}. Among the most cited reasons for deviations from
the law of one price are transaction costs, taxes, transport costs,
trade barriers, the costs of searching for price differences,
nominal price rigidities, customer market pricing, nominal exchange
rate rigidities and differences in market power \cite{taylor2002}.
In relation to assets, an early application of the law of one price
was to the interest rate parity theory of the forward exchange
rate, whereby the ratio of the forward to spot exchange rate
between two currencies is equal to the ratio of the interest rates
in the two currencies over the forward period in question
\cite[p.~130]{keynes1923}. An arbitrage opportunity in relation to
assets can be defined as ``an investment strategy that guarantees a
positive payoff in some contingency with no possibility of a
negative payoff and with no net investment''
\cite[online]{dybvig2008}. The absence of such arbitrage
opportunities has been seen as the unifying concept underlying
mainstream theories in finance, no-arbitrage principles being
applied in the Modigliani--Miller theorem of corporate capital
structure, in the Black--Scholes model of option pricing and in the
arbitrage pricing model of asset prices \cite{ross1978}. Actual
arbitrage operations in relation to assets often involve net
investment and risk and/or uncertainty, in addition to the
complications arising in relation to arbitrage in goods. Notable
deviations from the law of one price in financial markets have been
documented in relation to comparable circumstances applying to
closed-end country funds, American Depository Receipts, twin
shares, dual share classes and corporate spin-offs
\cite{lamont2003}. Among the limits to arbitrage in financial
markets are those arising from transactions costs
\cite{deardorff1979}, and those involving the capital requirements
of conducting arbitrage operations \cite{shleifer1997}. A
spectacular illustration of the capital limits to arbitrage was
provided by the demise of the Long-Term Capital Management (LTCM)
hedge funds. The arbitrage discrepancies being exploited in LTCM's
``convergence trades'' widened in 1998. LTCM attempted
unsuccessfully to raise new capital to finance its arbitrage
positions. To avoid a major financial collapse the New York
Federal Reserve Board organised a bail-out by creditors
\cite{lowenstein2000}.
In what follows we focus on the limits to arbitrage arising from
the order in which information is disseminated to arbitrage
traders. The illustration used is for a foreign exchange (FX)
market with four FX traders and four currencies, see
Sections~\ref{S-3currencies} and \ref{S-4currencies}. An Arbiter,
the metaphorical equivalent of an unpaid auctioneer in a Walrasian
system, knows all the actual exchange rates. The individual FX
traders, however, initially know only the exchange rates involving
their own, domestic currencies. Justification for the assumptions
used in our model is provided in Section~\ref{S-structFX}. So the
US FX trader knows the exchange rates for the dollar against the
euro, sterling and yen, but not the cross exchange rates for the
non-dollar currencies. There are no transactions costs, no net
capital requirements and no risks involved in the arbitrage
operations. Instead we focus on the information dissemination
problem, and show that the order in which information about cross
exchange rate discrepancies, and hence arbitrage opportunities, is
presented makes an important difference to the sequences of
arbitrage operations conducted.
A general discussion of arbitrage dynamics is given in
Section~\ref{S-arbitrages}. An unexpected feature of the processes
considered in this paper is that, \emph{rather than there being a
smooth convergence to an ensemble of exchange rates with no
arbitrage opportunities, the arbitrage operations may display
periodicity and no necessary convergence on a cross exchange rate
law of one price}. See Proposition~\ref{32} in Section~\ref{mrSS}
for a rigorous explanation. A further unexpected feature is that,
\emph{starting at an ensemble of exchange rates which is not
balanced, and using special periodic sequences of arbitrages, the
Arbiter can achieve \textrm{any} balanced (satisfying the law of
one price) exchange rate ensemble}. See, in particular,
Theorem~\ref{arbH} in Section~\ref{mrSS} and Theorem~\ref{irratBC}
in Section~\ref{S-gencase}. These counter-intuitive results are
new, as far as we are aware. In line with the renowned
``impossibility theorem'' of \cite{arrow1951} these results suggest
an ``arbitrage impossibility theorem''. Proofs are relegated to
Section~\ref{S-proofs}.
The mathematical approach taken in this paper to the analysis of
arbitrage operation chains may be understood as a typical example
of the asynchronous interactions that are important in systems
theory and in control theory, see the monographs
\citep{BertTsi:89,AKKK:92:e,KasBh:2000} and the surveys
\cite{Koz:BCRI03-13,Koz:ICDEA04}. The arbitrage chains are
particularly relevant to desynchronised systems theory, see
\cite{AKKK:92:e}. The presence of an asynchronous interaction often
leads to a dramatic complication of the related mathematical
problems. \cite{Koz:AiT90:6:e,Koz:AiT03:9:e} proved that many
asynchronous problems cannot be solved algorithmically, and also
\cite{BT:IPL97,BT:SCL00,BT:Autom00} and \cite{TB:MCSS97}
demonstrated that, even in the cases when the problem is
algorithmically solvable, it is typically as hard to solve
numerically as the famous ``Travelling salesman problem,'' see
\cite{NP2} (that is, in mathematical language, the problem is
NP-hard, an abbreviation of ``Non-deterministic Polynomial-time
hard'', which in the theory of algorithms means that a problem is
very hard to solve, if it can be solved at all, see \cite{NP}). In
this context the fact that the principal questions that arise in
analysis of arbitrage operation chains admit straightforward
combinatorial analysis came to the authors as a pleasant surprise.
Our construction uses a geometrical approach to visualisation of
arbitrage chains presented in
Sections~\ref{simpleASS}--\ref{S-proofs}, which may be useful in
relation to other problems in mathematical economics.
The periodicity results in this paper have implications for several
strands of literature. One is that dealing with the disequilibrium
foundations of equilibrium economics. The stability analysis of
Fisher poses the question: ``can one expect to prove that an
economy with rational agents conscious of disequilibrium and taking
advantage of arbitrage opportunities is driven (asymptotically) to
any equilibrium, Walrasian or constrained?''
\cite[pp.~86--87]{Fisher}. Fisher uses the assumption of ``no
favorable surprise'' as a means of demonstrating that a cessation
of exogenous shocks can lead to convergence to equilibrium. The
results in this paper suggest that there can be endogenous reasons,
arising from the cyclical response of arbitrage sequences to an
exogenous shock that gives rise to an arbitrage opportunity, why
convergence to equilibrium may not take place.
Another strand of literature to which our results relate is that on
market segmentation and arbitrage networks. Goods and assets are
not traded on a single exchange. Instead there are various trading
posts, such as commodity and stock exchanges. Other trades,
including a sizeable proportion of foreign exchange deals, are
conducted ``over the counter'' in direct transactions that bypass
formal exchanges. ``As a result, various clienteles trade on
different exchanges, and very few retail clients trade on more than
one exchange, let alone on all of them simultaneously''
\cite[p.~3]{RZ}. A key aspect of segmentation in the foreign
exchange ``market'' is that dealing rooms tend to specialise in
domestic currency trades. This provides a rationale for the
specification in this paper that foreign exchange dealers initially
are aware of only the exchange rates involving their domestic
currencies. We restrict our analysis to the case of four
currencies, with six principal exchange rates. The Financial Times gives daily
quotes for 52 currencies. The 1,326 principal exchange rates
involved suggest richer potential opportunities for arbitrage than
in the four currency case studied in the present paper. Bank for
International Settlements (BIS) data indicate that, in 2010,
transactions in these four currencies accounted for 155.9\%{} of
global FX market turnover, the currency components being US dollars
(84.9\%), euros (39.1\%), Japanese yen (19.0\%) and pound sterling
(12.9\%). Because two currencies are involved in each transaction,
the \%{} shares sum to 200\%{} \cite[Table~B.4]{BIS}.
\section{Micro Structure of the FX market}\label{S-structFX}
In a centralised market trade takes place at prices that are public
information and traders face the same potential trading
opportunities. In contrast the FX market is decentralised, with the
end-user bank customers, banks, brokers and central banks involved
facing several possible methods of executing transactions, and
possibly different exchange rate quotes, some of which constitute
private information. BIS data for FX spot exchange rate
transactions in 2010 \cite[Table~E.24]{BIS} indicate the following
breakdown in execution methods as a \%{} of total global turnover:
inter-dealer direct (14.9\%), customer direct (21.6\%), voice
broker (8.6\%), electronic broking system (26.0\%), single-bank
electronic proprietary trading platforms (14.3\%) and multi-bank
dealing systems (14.5\%). Until the late 1980s FX transactions were
conducted largely by telephone, with FX dealers phoning
counterparties to get bid (buy) and offer (sell) quotes for
specific transaction amounts, there also being indirect dealing via
voice brokers who would search for matching interests between
clients, see \cite{GH}. The last two decades have seen a growth in
electronic methods of execution, a distinction being between
electronic broking systems such as Reuters Matching and the
Electronic Broking System Spot Dealing System (EBS), and single or
multi-bank proprietary dealing platforms.
A burgeoning literature investigates how this fragmented trading
structure impacts on price determination in FX markets, see
\cite{Lyons} and \cite{Evans} for surveys. The key contrast is
between the decentralised transactions conducted by FX dealers who
quote bid and offer prices that are not public information, and the
one-way bid or offer limit orders to buy and sell currencies at a
specific price that are accumulated by FX brokers in the
quasi-centralised segment of the market. This means that market
information is fragmented, FX dealers having private information
about the transactions forthcoming at their own quoted bid and
offer prices, and having access to the public information regarding
the order flows accumulated by the FX brokers. For analysis,
discussion and evidence regarding how order flows impact on
intra-day exchange rates see \cite{EL02,ST,EL08}.
The analysis in the present paper assumes that FX dealers
initially know only the exchange rates for their own domestic
currencies, the order in which they discover imbalances in the
exchange rate ensemble, in the form of cross exchange rate
discrepancies, playing a key role in the arbitrage sequences
conducted. The fragmented nature of information in FX markets
suggests that this strong assumption has a whiff of reality in that
different FX dealers are likely to have disjoint information sets
and can conduct trades at different prices. Individual FX dealers
conducting bi-lateral trades with end-users receive private
information in the form of the orders forthcoming at their quoted
bid and ask prices, and this can give rise to profitable arbitrage
opportunities. For example, an FX dealer specialising in US dollars
might simultaneously receive large buy orders for euros and large
sell orders for Japanese yen, and suspect that euros are
under-priced relative to Japanese yen. After checking out the euro
-- Japanese yen exchange rates quoted in the inter-dealer market,
or in the brokered section of the market where information is
public, the dealer might discover that this is indeed the case, and
exploit this arbitrage opportunity regarding which other FX traders
are initially unaware.
The BIS data on the geographical distribution of FX market turnover
is informative in relation to the assumption in the present paper
that FX dealers initially are aware of only the exchange rates
involving their own domestic currency. Banks located in the UK
account for 37\%{} of global FX turnover, followed by the US
(18\%), Japan (6\%), Singapore (5\%), Switzerland (5\%), Hong Kong
(5\%) and Australia (4\%) --- see \cite[Graph~B.7]{BIS}. Although
cross-border transactions account for nearly two-thirds of FX
market turnover, this still leaves 35\%{} of the turnover being
local in nature \cite[Table~3.2]{BIS}, suggesting that the tendency
of FX dealers initially to focus on the exchange rates involving
their own domestic currencies assumed in the present paper is
evident in a significant section of the FX market.
Traders could be better informed about exchange rate developments
involving their own domestic currencies for a variety of reasons.
This could be simply because their core end-users have the domestic
currency as a unit of account, and means of payment, so the
domestically-based FX dealers have a ``home bias'' when it comes to
the exchange rates that they consider first. The psychology
literature indicates that there are quite tight limits to the pieces
of information that the working memory can take into account when
decisions are made \cite{Bad04}, suggesting that there could well
be advantages to FX traders if they focus, at least initially, on a
limited number of exchange rates. Alternatively the ``home bias''
could be due to the existence of different time zones. So, for
example, Japanese FX traders may be more able to react to new
information relevant to the Japanese yen during the Asian trading
hours in which North American and European markets are closed. The
evidence is that most FX trades initiated in Japan and Australia
occur during Asian trading hours; most trades initiated in the US
and Canada occur during North American hours; while UK-initiated
trades tend to be bunched in the overlapping Asia -- Europe and
Europe -- North America time zones \cite[Table~2]{Souza}. A
further reason for ``home bias'' is that the localised or
institutionalised links that FX traders have with domestic clients
give them order flow information about the likely course of the
exchange rates involving the domestic currency before the price
impact of this information becomes publicly available, via the
effects on inter-dealer trades, to FX traders operating in foreign
locations.
In \cite{CM02} the authors pose the question ``does Tokyo know more
about the yen?''. Prior to December 22, 1994 the Japanese FX market
closed for lunch from 12:00 to 13:30, Tokyo time. On the
basis that local order flow conveys informational advantages, the
authors postulate that the trades of informed Tokyo traders would
be bunched before the lunch-time FX market closure, an effect that
would disappear once the lunch-time closure was abolished. They
found a significant tendency of foreign quotes on the Japanese yen
-- US dollar market to lag behind the Tokyo quotes in this
pre-lunch period, suggesting either that Tokyo-based traders were
better informed about the Japanese yen -- US dollar exchange rate
than FX traders based in foreign locations, or that foreign-based
traders believed this to be the case. Further evidence for a ``home
bias'' in the FX market was found in a study of the Canadian dollar
-- US dollar and Australian dollar -- US dollar markets
\cite{Souza}. The author calculates the impulse response functions
of the exchange rates to trades, measured by the order flows,
initiated in different locations. Trades initiated in Canada had a
larger long-run impact on the Canadian dollar -- US dollar exchange
rate than those initiated in the US during North American trading
hours, and than Australian and Japanese trades initiated during
Asian trading hours. UK-initiated trades had a slightly larger
long-run effect during European trading hours, but this effect was
much larger before the start of North American trading hours.
Somewhat similarly, trades initiated in Australia had a larger
long-run impact on Australian dollar -- US dollar exchange rate
than trades initiated in the US and elsewhere. The conclusion is
that ``dealers operating both at the same time and in the same
geographic region as fundamentally driven customers have a natural
informational advantage'' \cite[pp.~23--24]{Souza}.
A major challenge to theories based on the idea that macroeconomic
``fundamentals'' drive exchange rates was presented by the
Meese-Rogoff results that such models did not forecast any better
than the ``naive'' postulate that the exchange rate would
remain unchanged \cite{MR83}. Engel and West \cite{EW05} showed
that exchange rates would display something close to the random
walk implied by the naive forecast if the fundamentals followed an
$I(1)$ process and the factor for discounting future fundamentals
was close to one. The microstructure literature has shown a way out
of this impasse, showing that micro-based information regarding
order flows, information which is not necessarily publicly
available, can explain a significant component of exchange rate
variation. So, for example, Evans and Lyons \cite{EL05} show that
end-user order flow data can explain around 16\%{} of the variance
in the monthly spot rate between the US dollar and the euro,
outperforming both standard macro fundamentals models and the
random walk specification. The microstructure literature has also
focussed attention onto high frequency data sets. Osler
\cite{Osl05}, for example, analyses minute-by-minute quotes for the
US dollar spot exchange rates with the Deutschmark, Japanese yen
and pound sterling, discovering significant effects from stop-loss
order flows, where the stop-loss order is one that instructs FX
dealers to buy (sell) a certain amount of a currency at the
``market'' rate once the exchange rate has risen (fallen) to a
pre-specified level.
A full survey of the theoretical and empirical literature on FX
exchange rate determination has been beyond the scope of the
present paper (see \cite{Evans} for such a survey). What we would
argue is that the foregoing selective review of the literature
provides some justification for the assumptions used in the
analysis of the arbitrage sequences that follows. The assumption
that FX dealers initially know only the exchange rates for their
domestic currency finds some support in the ``home bias'' evidence
cited above. The evidence on the fragmented nature of the FX market
lends support to the assumption that FX traders can have privileged
access to initially private information, stemming from order flows
from end-user clients, that would allow them to identify imbalances
in cross exchange rates, and hence identify arbitrage opportunities
before FX traders based in other locations can identify such
opportunities. There is also evidence that there are arbitrage
opportunities to be exploited. In \cite{MTY} the authors use
binding quote and transactions data from the electronic broking
system, EBS, for the US dollar, euros, Japanese yen, the pound
sterling and the Swiss franc. Triangular arbitrage opportunities
are identified within two-minute time horizons, and can be
exploited by three trades on the EBS trading screen. Each
identified arbitrage opportunity involved the US dollar and the
euro, the third currency being the Japanese yen, pound sterling or
Swiss franc. The estimated mean arbitrage profits, net of bid-offer
spreads and 0.2 basis point trade fees, ranged from 2.8 to 3.0
basis points \cite[p.~4]{MTY}. So here is evidence of profits,
albeit small ones, to be had from arbitrage operations on a
quasi-centralised, electronic broking trading platform. Once the
decentralised sections of the FX market are considered, the
existence of initially private information is likely to extend the
range and size of the profitable arbitrage opportunities
available.
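The scale of such round-trip profits is easy to illustrate. The
following sketch (our own construction, with hypothetical mid quotes
rather than the EBS data of \cite{MTY}) computes the gross profit, in
basis points, of a dollar--euro--yen chain:

```python
# Hypothetical mid quotes (not the EBS data of the cited study); each
# rate gives units of the second currency per unit of the first.
r_usd_eur = 0.92          # dollars to euros
r_eur_jpy = 130.00        # euros to yen
r_jpy_usd = 1 / 119.56    # yen back to dollars

# Value, in dollars, of one dollar sent around the chain
# $ -> euro -> yen -> $.
round_trip = r_usd_eur * r_eur_jpy * r_jpy_usd

# Gross profit per dollar in basis points (1 bp = 0.01 per cent);
# bid-offer spreads and trade fees would be netted off in practice.
profit_bp = (round_trip - 1.0) * 10_000
print(f"round-trip factor {round_trip:.6f}, profit {profit_bp:.1f} bp")
```

With these illustrative quotes the chain returns roughly 1.0003
dollars per dollar, a gross profit of about 3 basis points, the same
order of magnitude as the profits reported in the study.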
\section{The Three Currency Case}\label{S-3currencies}
Consider a foreign exchange (FX) market that involves only three
currencies: Dollars (\$), Euros (\euro) and Sterling (\pounds).
This FX market involves three pair-wise exchange operations:
\[
Dollar \rightleftarrows Euro, \quad Dollar \rightleftarrows Sterling,
\quad Euro \rightleftarrows Sterling.
\]
The currencies are measured in natural currency units, and the
corresponding (strictly positive) exchange rates,
$r_{\textrm{\$\euro}}$, $r_{\textrm{\$}\textrm{\pounds}}$,
$r_{\textrm{\euro\pounds}}$, are well defined. For instance, one
dollar can be exchanged for $r_{\textrm{\$\euro}}$ euros.
The rates related to the inverted arrows
are reciprocal:
\begin{equation}\label{rec3}
r_{\textrm{\euro}\textrm{\$}}=
\frac{1}{r_{\textrm{\$}\textrm{\euro}}},\quad
r_{\textrm{\pounds}\textrm{\$}}=
\frac{1}{r_{\textrm{\$}\textrm{\pounds}}},\quad
r_{\textrm{\pounds}\textrm{\euro}}=
\frac{1}{r_{\textrm{\euro}\textrm{\pounds}}}\ .
\end{equation}
We treat the triplet
\begin{equation}\label{prr3}
\left(
r_{\textrm{\$}\textrm{\euro}}, \
r_{\textrm{\$}\textrm{\pounds}}, \
r_{\textrm{\euro}\textrm{\pounds}}
\right)
\end{equation}
as the ensemble of \emph{principal exchange rates.}
We suppose that, prior to a reference time moment $0$, each FX
trader knows only the exchange rates involving his domestic
currency. So the dollar trader does not know the value of
$r_{\textrm{\euro}\textrm{\pounds}}$, the euro trader is unaware of
$r_{\textrm{\$}\textrm{\pounds}}$, and the sterling trader is
unaware of $r_{\textrm{\$}\textrm{\euro}}$. We are interested in
the case where the initial rates are unbalanced in the following
sense. By assumption, the dollar trader can exchange one dollar for
$r_{\textrm{\$}\textrm{\euro}}$ euros. Let us suppose that
unbeknownst to him the exchange rate between sterling and euro is
such that the the dollar trader could make a profit by first
exchanging a dollar for $r_{\textrm{\$}\textrm{\pounds}}$ units of
sterling and then exchanging these for euros. The inequality which
guarantees that the dollar trader can take advantage of this arbitrage
opportunity is that the
product
$r_{\textrm{\$}\textrm{\pounds}}r_{\textrm{\pounds}\textrm{\euro}}$
is greater than $r_{\textrm{\$}\textrm{\euro}}$:
\begin{equation}\label{unb1}
r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\euro}}>
r_{\textrm{\$}\textrm{\euro}}.
\end{equation}
Let us consider the situation where the inequality \eqref{unb1}
holds, and, after the reference time moment $0$, one of the three
traders becomes aware of the third exchange rate. The evolution of
this FX market depends on \emph{which trader is the first to
discover the information concerning the third exchange rate}. The
following three cases are relevant.
\subsection{Case 1.}
The dollar trader becomes aware of the value of the rate
$r_{\textrm{\euro}\textrm{\pounds}}$. Therefore, the dollar trader
contacts the euro trader and makes a request to increase the rate
$r_{\textrm{\$}\textrm{\euro}}$ to the new fairer value
\[
r^{new}_{\textrm{\$}\textrm{\euro}}=
r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\euro}} =
\frac{r_{\textrm{\$}\textrm{\pounds}}}{r_{\textrm{\euro}\textrm{\pounds}}}.
\]
The reciprocal exchange rate $r_{\textrm{\euro}\textrm{\$}}$ is
also to be adjusted to the new level:
\[
r^{new}_{\textrm{\euro}\textrm{\$}}=
\frac{1}{r^{new}_{\textrm{\$}\textrm{\euro}}}.
\]
The result is that the principal exchange rates become balanced
at the levels:
\[
r^{new}_{\textrm{\$}\textrm{\euro}}=
\frac{r_{\textrm{\$}\textrm{\pounds}}}{ r_{\textrm{\euro}\textrm{\pounds}}}, \quad r_{\textrm{\$}\textrm{\pounds}}, \quad
r_{\textrm{\euro}\textrm{\pounds}}.
\]
\subsection{Case 2.}
The euro trader is the first to discover the third exchange rate
$r_{\textrm{\$}\textrm{\pounds}}$. By \eqref{rec3}, inequality
\eqref{unb1} may be rewritten as
\[
\frac{r_{\textrm{\$}\textrm{\pounds}}}{r_{\textrm{\euro}\textrm{\pounds}}}<\frac{1}{r_{\textrm{\euro}\textrm{\$}}},
\]
which is, in turn, equivalent to
$r_{\textrm{\euro}\textrm{\$}}\cdot
r_{\textrm{\$}\textrm{\pounds}}>
r_{\textrm{\euro}\textrm{\pounds}}$. In this case the euro trader
could do better by first exchanging euros for dollars, and then
by exchanging the dollars for sterling. Therefore, the euro trader
requests adjustment of the rate
$r_{\textrm{\euro}\textrm{\pounds}}$ to the value
\[
r^{new}_{\textrm{\euro}\textrm{\pounds}}=
r_{\textrm{\euro}\textrm{\$}}\cdot r_{\textrm{\$}\textrm{\pounds}}= \frac{r_{\textrm{\$}\textrm{\pounds}}}{r_{\textrm{\$}\textrm{\euro}}}.
\]
In terms of the principal exchange rates the outcome is that the
FX market adjusts to the following balanced rates:
\[
r_{\textrm{\$}\textrm{\euro}}, \quad
r_{\textrm{\$}\textrm{\pounds}}, \quad
r^{new}_{\textrm{\euro}\textrm{\pounds}}=
\frac{r_{\textrm{\$}\textrm{\pounds}}}{r_{\textrm{\$}\textrm{\euro}}}.
\]
\subsection{Case 3.}
The sterling trader is the first to discover the third exchange
rate $r_{\textrm{\$}\textrm{\euro}}$. The inequality \eqref{unb1}
may be rewritten as $ r_{\textrm{\pounds}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\$}}>r_{\textrm{\pounds}\textrm{\$}}. $
Thus, the sterling trader requests adjustment of the rate
$r_{\textrm{\pounds}\textrm{\$}}$ to
$r^{new}_{\textrm{\pounds}\textrm{\$}}=
r_{\textrm{\pounds}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\$}}$. In this case the principal
exchange rates become balanced at the levels:
\[
r_{\textrm{\$}\textrm{\euro}},\quad
r^{new}_{\textrm{\$}\textrm{\pounds}}=
{r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}},\quad
r_{\textrm{\euro}\textrm{\pounds}}.
\]
\emph{After the adjustment of the principal exchange rates
\eqref{prr3}, following the new information being revealed, the
exchange rates become balanced, and this is the end of the
arbitrage evolution of an FX market with three currencies. Having
established the reasonably straightforward application of arbitrage
to three currencies, we now turn to investigating what happens when
the FX market contains four currencies and four currency traders.}
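The three cases above can be checked in a few lines of Python. The
sketch below is our own minimal illustration: the rate values are
hypothetical, and the names \verb|r_se|, \verb|r_sp|, \verb|r_ep|
abbreviate the dollar--euro, dollar--sterling and euro--sterling
principal rates.

```python
# Minimal sketch of the three-currency adjustment (hypothetical rates).
r_se, r_sp, r_ep = 0.92, 0.79, 0.85

# The unbalanced condition r_sp * r_pe > r_se, with r_pe = 1 / r_ep.
assert r_sp * (1 / r_ep) > r_se

def balanced(se, sp, ep, tol=1e-12):
    """A triplet is balanced when the cross rate satisfies ep = sp / se."""
    return abs(ep - sp / se) < tol

# Case 1: the dollar trader adjusts r_se upwards to r_sp / r_ep.
case1 = (r_sp / r_ep, r_sp, r_ep)
# Case 2: the euro trader adjusts r_ep to r_sp / r_se.
case2 = (r_se, r_sp, r_sp / r_se)
# Case 3: the sterling trader's adjustment of the sterling-dollar rate
# is, in principal-rate terms, an adjustment of r_sp to r_se * r_ep.
case3 = (r_se, r_se * r_ep, r_ep)

# Whichever trader moves first, the resulting ensemble is balanced.
for case in (case1, case2, case3):
    assert balanced(*case)
```

The final loop confirms the point made in the text: with three
currencies, a single adjustment ends the arbitrage evolution,
regardless of which trader discovers the third rate first.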
\section{Four Currencies}\label{S-4currencies}
Consider an FX market \$\textrm{\euro}\pounds\yen{} that involves
four currencies: Dollars (\$), Euros (\textrm{\euro}), Sterling
(\pounds) and Yen (\yen). This FX market involves six exchange
relationships:
\begin{alignat*}{3}
Dollar &\rightleftarrows Euro,\quad & Dollar &\rightleftarrows
Sterling,\quad & Dollar &\rightleftarrows Yen,\\
Euro &\rightleftarrows Sterling, \quad & Euro &\rightleftarrows
Yen, \quad & Sterling &\rightleftarrows Yen.
\end{alignat*}
The exchange rates are:
\[
\begin{array}{llllll}
r_{\textrm{\$}\textrm{\euro}},& r_{\textrm{\$}\textrm{\pounds}},&r_{\textrm{\$}\textrm{\yen}}, &r_{\textrm{\euro}\textrm{\$}},& r_{\textrm{\euro}\textrm{\pounds}},&r_{\textrm{\euro}\textrm{\yen}},\\
r_{\textrm{\pounds}\textrm{\$}},& r_{\textrm{\pounds}\textrm{\euro}},&r_{\textrm{\pounds}\textrm{\yen}},& r_{\textrm{\yen}\textrm{\$}},& r_{\textrm{\yen}\textrm{\euro}},&r_{\textrm{\yen}\textrm{\pounds}}.
\end{array}
\]
The rates relating to the inverted arrows are reciprocal:
\begin{equation}\label{rec}
\begin{alignedat}{3}
r_{\textrm{\euro}\textrm{\$}}&=\frac{1}{r_{\textrm{\$}\textrm{\euro}}},\quad&
r_{\textrm{\pounds}\textrm{\$}}&=\frac{1}{r_{\textrm{\$}\textrm{\pounds}}},\quad&
r_{\textrm{\yen}\textrm{\$}}&=\frac{1}{r_{\textrm{\$}\textrm{\yen}}}, \\
r_{\textrm{\pounds}\textrm{\euro}}&=\frac{1}{r_{\textrm{\euro}\textrm{\pounds}}},\quad&
r_{\textrm{\yen}\textrm{\euro}}&=\frac{1}{r_{\textrm{\euro}\textrm{\yen}}},\quad &
r_{\textrm{\yen}\textrm{\pounds}}&=\frac{1}{r_{\textrm{\pounds}\textrm{\yen}}}.
\end{alignedat}
\end{equation}
Our market may be described by the ensemble of six {principal
exchange rates}
\begin{equation}\label{excr}
\calR=\left(r_{\textrm{\$}\textrm{\euro}},\
r_{\textrm{\$}\textrm{\pounds}},\
r_{\textrm{\$}\textrm{\yen}},\
r_{\textrm{\euro}\textrm{\pounds}},\
r_{\textrm{\euro}\textrm{\yen}},\
r_{\textrm{\pounds}\textrm{\yen}}\right)
\end{equation}
together with the reciprocal exchange rates \eqref{rec}.
The following characterisation of balanced, no-arbitrage, exchange
rates \eqref{excr}, that is the ensembles of exchange rates such
that no trader could do better by trading indirectly, is
convenient.
\begin{proposition}\label{balp}
Ensemble \eqref{excr} of the principal exchange rates is balanced
if and only if the following relationships hold:
\begin{equation}\label{invss}
r_{\textrm{\euro}\textrm{\pounds}} =
\frac{r_{\textrm{\$}\textrm{\pounds}}}{r_{\textrm{\$}\textrm{\euro}}},\quad
r_{\textrm{\euro}\textrm{\yen}} =
\frac{r_{\textrm{\$}\textrm{\yen}}}{r_{\textrm{\$}\textrm{\euro}}},\quad
r_{\textrm{\pounds}\textrm{\yen}} =
\frac{r_{\textrm{\$}\textrm{\yen}}}{r_{\textrm{\$}\textrm{\pounds}}}.
\end{equation}
\end{proposition}
\begin{proof}
This assertion can be proved by inspection.
\end{proof}
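Proposition~\ref{balp} translates directly into a numerical balance
check. The sketch below is our own illustration: the rate values and
the tuple convention (the six principal rates in the order of
\eqref{excr}) are hypothetical.

```python
# Balance check for a four-currency ensemble: the three cross rates
# must satisfy r_ep = r_sp / r_se, r_ey = r_sy / r_se, and
# r_py = r_sy / r_sp, where s, e, p, y stand for dollar, euro,
# sterling and yen.

def is_balanced(r, tol=1e-12):
    """r = (r_se, r_sp, r_sy, r_ep, r_ey, r_py), the six principal rates."""
    r_se, r_sp, r_sy, r_ep, r_ey, r_py = r
    return (abs(r_ep - r_sp / r_se) < tol
            and abs(r_ey - r_sy / r_se) < tol
            and abs(r_py - r_sy / r_sp) < tol)

r_se, r_sp, r_sy = 0.92, 0.79, 150.0   # dollar rates (hypothetical)
balanced = (r_se, r_sp, r_sy, r_sp / r_se, r_sy / r_se, r_sy / r_sp)
assert is_balanced(balanced)

# Perturbing any one cross rate destroys the balance.
unbalanced = (r_se, r_sp, r_sy,
              1.01 * r_sp / r_se, r_sy / r_se, r_sy / r_sp)
assert not is_balanced(unbalanced)
```

Note that the three dollar rates determine the balanced ensemble
completely: the cross rates are then fixed by the relationships of
the proposition.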
\section{Arbitrages}\label{S-arbitrages}
Let us suppose that initially each trader is aware only of the
three exchange rates involving his domestic currency. For instance,
the dollar trader knows only the rates
$r_{\textrm{\$}\textrm{\euro}}$, $r_{\textrm{\$}\textrm{\pounds}}$,
$r_{\textrm{\$}\textrm{\yen}}$.
\emph{We are interested in the case where the rates
$r_{\textrm{\$}\textrm{\euro}}$, $r_{\textrm{\$}\textrm{\pounds}}$,
$r_{\textrm{\$}\textrm{\yen}}$,
$r_{\textrm{\euro}\textrm{\pounds}}$,
$r_{\textrm{\euro}\textrm{\yen}}$,
$r_{\textrm{\pounds}\textrm{\yen}}$ are unbalanced}.
For instance, let us suppose that \FP can make a profit by first
exchanging one dollar for $r_{\textrm{\$}\textrm{\pounds}}$ units
of sterling, and then by exchanging this sterling for euros. This
means that the product $r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\euro}}$ is greater than
$r_{\textrm{\$}\textrm{\euro}}$:
\begin{equation}\label{unb}
r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\euro}}>
r_{\textrm{\$}\textrm{\euro}}.
\end{equation}
Suppose that \FP becomes aware of the rate
$r_{\textrm{\euro}\textrm{\pounds}}$, and, therefore, of the
inequality \eqref{unb}. The dollar trader then asks \AP to increase
the exchange rate $r_{\textrm{\$}\textrm{\euro}}$ to the new fairer
value
\[
r^{new}_{\textrm{\$}\textrm{\euro}}=
{r_{\textrm{\$}\textrm{\pounds}}}\cdot
r_{\textrm{\pounds}\textrm{\euro}}=
\frac{r_{\textrm{\$}\textrm{\pounds}}}{r_{\textrm{\euro}\textrm{\pounds}}}.
\]
Along with the adjustment of the exchange rate
$r_{\textrm{\$}\textrm{\euro}}$ the reciprocal rate
$r_{\textrm{\euro}\textrm{\$}}$ would be adjusted to
\[
r^{new}_{\textrm{\euro}\textrm{\$}}=\frac{1}{r^{new}_{\textrm{\$}\textrm{\euro}}}.
\]
We call this procedure \$\textrm{\euro}\pounds-\emph{arbitrage},
and we use the notation
$\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$ to represent
it. We denote by
$\calR\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$ the
ensemble of the new principal exchange rates:
\[
\calR^{new}=\calR\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}=
\left(r^{new}_{\textrm{\$}\textrm{\euro}},\
r_{\textrm{\$}\textrm{\pounds}},\
r_{\textrm{\$}\textrm{\yen}},\
r_{\textrm{\euro}\textrm{\pounds}},\
r_{\textrm{\euro}\textrm{\yen}},\
r_{\textrm{\pounds}\textrm{\yen}}
\right).
\]
We also use the notation
$\calR\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$ in the
case where the inequality \eqref{unb} does not hold. In this case,
of course,
$\calR\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}=\calR$, and
we say that the arbitrage
$\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$ is \emph{not
active}. This particular arbitrage is an example of
the 24 possible arbitrages listed in Table~\ref{tab1}. We will also
use, where convenient, the notation $\calA^{(n)}$ for the arbitrage
number $n$ from this table: for instance,
$\calA^{(1)}=\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$.
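The activation test \eqref{unb} and the subsequent rate adjustment can be sketched as follows; this is a minimal illustration with our own function and variable names, using exact rational arithmetic in place of real market quotes:

```python
from fractions import Fraction

def arbitrage_usd_eur_gbp(r_de, r_dp, r_ep):
    """One step of A_{$ euro pound}: if exchanging $ -> pound -> euro
    beats the direct rate (r_dp * (1/r_ep) > r_de, inequality (unb)),
    raise r_de to the fairer value r_dp / r_ep.  Returns the (possibly
    updated) rate r_de; the reciprocal r_ed is always kept as 1 / r_de.
    Names r_de ($->euro), r_dp ($->pound), r_ep (euro->pound) are ours."""
    if r_dp / r_ep > r_de:        # the arbitrage is active
        r_de = r_dp / r_ep        # r^new_{$ euro} = r_{$ pound} / r_{euro pound}
    return r_de

# Active case: 1 dollar -> 3/4 pound -> (3/4)/(5/6) = 9/10 euro > 4/5 euro.
new_rate = arbitrage_usd_eur_gbp(Fraction(4, 5), Fraction(3, 4), Fraction(5, 6))
```

Applying the same step again returns the rate unchanged: after the adjustment the indirect and direct routes coincide, so the arbitrage is no longer active.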
\begin{table}[!htbp]
\caption{List of arbitrages}\label{tab1}
\begin{tabular}{llll}
Number&Arbitrage&Activation condition&Actions\\
\hline \\
1& $\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$&
$r_{\textrm{\$}\textrm{\pounds}}>r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}$ &
$r^{new}_{\textrm{\$}\textrm{\euro}}=
r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}^{-1}$
\\
2& $\calA_{\textrm{\$}\textrm{\euro}\textrm{\yen}}$&
$r_{\textrm{\$}\textrm{\yen}}>r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\euro}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\euro}\textrm{\yen}}^{-1}$\\
3& $\calA_{\textrm{\$}\textrm{\pounds}\textrm{\euro}}$&
$r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}>r_{\textrm{\$}\textrm{\pounds}}$
&
$r^{new}_{\textrm{\$}\textrm{\pounds}}= r_{\textrm{\$}\textrm{\euro}}\cdot r_{\textrm{\euro}\textrm{\pounds}}$\\
4& $\calA_{\textrm{\$}\textrm{\pounds}\textrm{\yen}}$&
$r_{\textrm{\$}\textrm{\yen}}>r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\pounds}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\pounds}\textrm{\yen}}^{-1}$ \\
5& $\calA_{\textrm{\$}\textrm{\yen}\textrm{\euro}}$&
$r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\yen}}>r_{\textrm{\$}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\yen}}= r_{\textrm{\$}\textrm{\euro}}\cdot r_{\textrm{\euro}\textrm{\yen}}$ \\
6& $\calA_{\textrm{\$}\textrm{\yen}\textrm{\pounds}}$&
$r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}>r_{\textrm{\$}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\yen}}= r_{\textrm{\$}\textrm{\pounds}}\cdot r_{\textrm{\pounds}\textrm{\yen}}$ \\
7& $\calA_{\textrm{\euro}\textrm{\$}\textrm{\pounds}}$&
$r_{\textrm{\$}\textrm{\pounds}}<r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}$ &
$r^{new}_{\textrm{\$}\textrm{\euro}}= r_{\textrm{\$}\textrm{\pounds}}\cdot r_{\textrm{\euro}\textrm{\pounds}}^{-1}$ \\
8& $\calA_{\textrm{\euro}\textrm{\$}\textrm{\yen}}$&
$r_{\textrm{\$}\textrm{\yen}}<r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\euro}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\euro}\textrm{\yen}}^{-1}$\\
9& $\calA_{\textrm{\euro}\textrm{\pounds}\textrm{\$}}$&
$r_{\textrm{\$}\textrm{\pounds}}>r_{\textrm{\euro}\textrm{\pounds}}\cdot
r_{\textrm{\$}\textrm{\euro}}$ &
$r^{new}_{\textrm{\euro}\textrm{\pounds}}= r_{\textrm{\$}\textrm{\pounds}}\cdot r_{\textrm{\$}\textrm{\euro}}^{-1}$ \\
10 & $\calA_{\textrm{\euro}\textrm{\pounds}\textrm{\yen}}$&
$r_{\textrm{\euro}\textrm{\yen}}>r_{\textrm{\euro}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}$ &
$r^{new}_{\textrm{\euro}\textrm{\pounds}}= r_{\textrm{\euro}\textrm{\yen}}\cdot r_{\textrm{\pounds}\textrm{\yen}}^{-1}$ \\
11 & $\calA_{\textrm{\euro}\textrm{\yen}\textrm{\$}}$&
$r_{\textrm{\$}\textrm{\yen}}>r_{\textrm{\euro}\textrm{\yen}}\cdot
r_{\textrm{\$}\textrm{\euro}}$ &
$r^{new}_{\textrm{\euro}\textrm{\yen}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\$}\textrm{\euro}}^{-1}$ \\
12 & $\calA_{\textrm{\euro}\textrm{\yen}\textrm{\pounds}}$&
$r_{\textrm{\euro}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}>r_{\textrm{\euro}\textrm{\yen}}$
&$r^{new}_{\textrm{\euro}\textrm{\yen}}= r_{\textrm{\euro}\textrm{\pounds}}\cdot r_{\textrm{\pounds}\textrm{\yen}}$ \\
13& $\calA_{\textrm{\pounds}\textrm{\$}\textrm{\euro}}$&
$r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}<r_{\textrm{\$}\textrm{\pounds}}$
&$r^{new}_{\textrm{\$}\textrm{\pounds}}= r_{\textrm{\$}\textrm{\euro}}\cdot r_{\textrm{\euro}\textrm{\pounds}}$\\
14& $\calA_{\textrm{\pounds}\textrm{\$}\textrm{\yen}}$&
$r_{\textrm{\$}\textrm{\yen}}<r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\pounds}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\pounds}\textrm{\yen}}^{-1}$ \\
15& $\calA_{\textrm{\pounds}\textrm{\euro}\textrm{\$}}$&
$r_{\textrm{\$}\textrm{\pounds}}<r_{\textrm{\euro}\textrm{\pounds}}\cdot
r_{\textrm{\$}\textrm{\euro}}$ &
$r^{new}_{\textrm{\euro}\textrm{\pounds}}= r_{\textrm{\$}\textrm{\pounds}}\cdot r_{\textrm{\$}\textrm{\euro}}^{-1}$ \\
16 & $\calA_{\textrm{\pounds}\textrm{\euro}\textrm{\yen}}$&
$r_{\textrm{\euro}\textrm{\yen}}<r_{\textrm{\euro}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}$ &
$r^{new}_{\textrm{\euro}\textrm{\pounds}}= r_{\textrm{\euro}\textrm{\yen}}\cdot r_{\textrm{\pounds}\textrm{\yen}}^{-1}$ \\
17& $\calA_{\textrm{\pounds}\textrm{\yen}\textrm{\$}}$&
$r_{\textrm{\$}\textrm{\yen}}>r_{\textrm{\pounds}\textrm{\yen}}\cdot
r_{\textrm{\$}\textrm{\pounds}}$ &
$r^{new}_{\textrm{\pounds}\textrm{\yen}}=
r_{\textrm{\$}\textrm{\yen}}\cdot
r_{\textrm{\$}\textrm{\pounds}}^{-1}$
\\
18& $\calA_{\textrm{\pounds}\textrm{\yen}\textrm{\euro}}$&
$r_{\textrm{\euro}\textrm{\yen}}>r_{\textrm{\pounds}\textrm{\yen}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}$ &
$r^{new}_{\textrm{\pounds}\textrm{\yen}}= r_{\textrm{\euro}\textrm{\yen}}\cdot r_{\textrm{\euro}\textrm{\pounds}}^{-1}$\\
19& $\calA_{\textrm{\yen}\textrm{\$}\textrm{\euro}}$&
$r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\yen}}<r_{\textrm{\$}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\yen}}= r_{\textrm{\$}\textrm{\euro}}\cdot r_{\textrm{\euro}\textrm{\yen}}$ \\
20& $\calA_{\textrm{\yen}\textrm{\$}\textrm{\pounds}}$&
$r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}<r_{\textrm{\$}\textrm{\yen}}$ &
$r^{new}_{\textrm{\$}\textrm{\yen}}= r_{\textrm{\$}\textrm{\pounds}}\cdot r_{\textrm{\pounds}\textrm{\yen}}$ \\
21 & $\calA_{\textrm{\yen}\textrm{\euro}\textrm{\$}}$&
$r_{\textrm{\$}\textrm{\yen}}<r_{\textrm{\euro}\textrm{\yen}}\cdot
r_{\textrm{\$}\textrm{\euro}}$ &
$r^{new}_{\textrm{\euro}\textrm{\yen}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\$}\textrm{\euro}}^{-1}$ \\
22 & $\calA_{\textrm{\yen}\textrm{\euro}\textrm{\pounds}}$&
$r_{\textrm{\euro}\textrm{\pounds}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}<r_{\textrm{\euro}\textrm{\yen}}$
&$r^{new}_{\textrm{\euro}\textrm{\yen}}= r_{\textrm{\euro}\textrm{\pounds}}\cdot r_{\textrm{\pounds}\textrm{\yen}}$ \\
23& $\calA_{\textrm{\yen}\textrm{\pounds}\textrm{\$}}$&
$r_{\textrm{\$}\textrm{\yen}}<r_{\textrm{\pounds}\textrm{\yen}}\cdot
r_{\textrm{\$}\textrm{\pounds}}$ &
$r^{new}_{\textrm{\pounds}\textrm{\yen}}=
r_{\textrm{\$}\textrm{\yen}}\cdot
r_{\textrm{\$}\textrm{\pounds}}^{-1}$
\\
24& $\calA_{\textrm{\yen}\textrm{\pounds}\textrm{\euro}}$&
$r_{\textrm{\euro}\textrm{\yen}}<r_{\textrm{\pounds}\textrm{\yen}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}$ &
$r^{new}_{\textrm{\pounds}\textrm{\yen}}= r_{\textrm{\euro}\textrm{\yen}}\cdot r_{\textrm{\euro}\textrm{\pounds}}^{-1}$\\
\end{tabular}
\end{table}
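Every row of Table~\ref{tab1} follows the same pattern: the arbitrage $\calA_{xyz}$ is active when the indirect route $x\to z\to y$ beats the direct rate $x\to y$, in which case the principal rate of the pair $\{x,y\}$ is adjusted to the indirect value. Assuming this reading of the table, all 24 arbitrages can be generated from the ordered triples of distinct currencies; the currency codes and data layout below are illustrative:

```python
from fractions import Fraction
from itertools import permutations

CCY = ("USD", "EUR", "GBP", "JPY")

def rho(rates, x, y):
    """Units of y per unit of x; reciprocal pairs are derived as 1/r."""
    return rates[(x, y)] if (x, y) in rates else 1 / rates[(y, x)]

def apply_arbitrage(rates, x, y, z):
    """A_{xyz}: if the indirect route x -> z -> y beats the direct rate
    x -> y, replace the principal rate of the pair {x, y} accordingly."""
    new = dict(rates)
    better = rho(rates, x, z) * rho(rates, z, y)
    if better > rho(rates, x, y):
        if (x, y) in rates:
            new[(x, y)] = better
        else:                      # the principal rate is stored as (y, x)
            new[(y, x)] = 1 / better
    return new

# The 24 ordered triples of distinct currencies, one per row of the table.
ALL_24 = list(permutations(CCY, 3))
```

In particular, a balanced ensemble is a fixed point of every one of the 24 arbitrages, since for balanced rates the indirect and direct routes agree exactly.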
\emph{The principal distinction of the FX market with four
currencies from that with only three currencies is that applying a
single arbitrage operation does not bring the FX market to a
balance in which no arbitrage opportunities exist, and in which the
law of one price holds.}
\section{Main Results}\label{mrSS}
One can apply arbitrages from Table~\ref{tab1} sequentially in any
order and to any initial exchange rates $\calR$. The situation that
we have in mind is the following. Suppose that there exists an
\emph{Arbiter} who knows the current ensemble $\calR$ of exchange
rates. This \emph{Arbiter} could provide information to the FX
traders in any order he wants, thus activating the \emph{chain} (or
\emph{superposition}) of corresponding arbitrages. The principal
question is:
\begin{question}\label{que1}
How powerful is the \emph{Arbiter}?
\end{question}
The short answer is: the \emph{Arbiter is surprisingly powerful.}
Let us explain at a more formal level what we mean.
For a finite chain of arbitrages $ {\bA} = \calA_1 \cdots \calA_n$,
and for a given ensemble $\calR$ of initial exchange rates, we
denote the resulting ensemble of principal exchange rates as
\begin{equation}\label{arbsec}
\calR{\bA}=\calR\calA_1 \cdots \calA_n
\end{equation}
If $\calR$ is balanced, then $\calR{\calA}=\calR$ for any
individual arbitrage, and therefore $\calR{\bA}=\calR$ for any
chain \eqref{arbsec}. If, on the contrary, $\calR$ is not balanced,
then different arbitrage chains \eqref{arbsec} could result in
different balanced or unbalanced ensembles of principal exchange
rates. Denote by $S(\calR)$ the collection of the ensembles
$\calR{\bA}$ corresponding to all possible chains \eqref{arbsec}.
Denote also by $S^{bal}(\calR)$ the subset of $S(\calR)$ that
includes only the balanced exchange-rate ensembles. Our principal
observation is the
following:
\emph{For a typical unbalanced exchange rate ensemble $\calR$, the
set $S^{bal}(\calR)$ is unexpectedly rich; therefore the Arbiter,
who prescribes a particular sequence of arbitrages, is an
unexpectedly powerful figure.}
To avoid cumbersome notation and technical details when providing a
rigorous formulation of this observation, we concentrate on the
simplest initial ensemble. Let us consider the ensemble
\begin{equation}\label{dist}
\bar\calR_{\alpha}=\left(\alpha \cdot \bar{r}_{\textrm{\$}\textrm{\euro}},\
\bar{r}_{\textrm{\$}\textrm{\pounds}},\ \bar{r}_{\textrm{\$}\textrm{\yen}},\ \bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \bar{r}_{\textrm{\euro}\textrm{\yen}},\ \bar{r}_{\textrm{\pounds}\textrm{\yen}}
\right),
\end{equation}
where $\alpha>0, \alpha \not= 1$ and $\bar\calR$ is
a given balanced ensemble of principal exchange
rates. The ensemble \eqref{dist} is not balanced.
The ensemble \eqref{dist} may have emerged as
follows. Let us suppose that the underlying balanced
rates
\begin{equation}\label{distbar}
\bar\calR=\left(\bar{r}_{\textrm{\$}\textrm{\euro}},\ \bar{r}_{\textrm{\$}\textrm{\pounds}},\
\bar{r}_{\textrm{\$}\textrm{\yen}},\ \bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \bar{r}_{\textrm{\euro}\textrm{\yen}},\ \bar{r}_{\textrm{\pounds}\textrm{\yen}}
\right)
\end{equation}
had been in operation up to a certain reference time moment $0$. At
this moment {\FP} has decided to increase his price for euros by a
factor $\alpha>1$. A natural respecification of Question \ref{que1}
is the following:
\begin{question}\label{que2}
To which balanced exchange rates can the Arbiter now bring the
foreign exchange market?
\end{question}
The possible general structure of elements from the corresponding
sets $S(\bar\calR_{\alpha})$ and $S^{bal}(\bar\calR_{\alpha})$ is
easy to describe. To this end we denote by $T_{\alpha}(\bar\calR)$
the collection of all sextuples of the form
\begin{equation}\label{prod}
\left(\alpha^{n_{1}} \cdot \bar{r}_{\textrm{\$}\textrm{\euro}},\
\alpha^{n_{2}} \cdot \bar{r}_{\textrm{\$}\textrm{\pounds}},\
\alpha^{n_{3}} \cdot \bar{r}_{\textrm{\$}\textrm{\yen}},\
\alpha^{n_{4}}\cdot \bar{r}_{\textrm{\euro}\textrm{\pounds}},\
\alpha^{n_{5}}\cdot \bar{r}_{\textrm{\euro}\textrm{\yen}} ,\
\alpha^{n_{6}}\cdot \bar{r}_{\textrm{\pounds}\textrm{\yen}}
\right),
\end{equation}
where the $n_{i}$ are integers (positive, negative, or zero). We
also denote by $T^{bal}_{\alpha}(\bar\calR)$ the subset of elements of
$T_{\alpha}(\bar\calR)$ that satisfy the relationships
\[
n_{4} = n_{2}-n_{1},\quad n_{5} = n_{3}-n_{1},\quad n_{6} =
n_{3}-n_{2}.
\]
\begin{proposition}\label{rep1P}
The following inclusions hold:
\begin{align}\label{talp}
S(\bar\calR_{\alpha})&\subset T_{\alpha}(\bar\calR),\\
\label{palpb}
S^{bal}(\bar\calR_{\alpha})&\subset T^{bal}_{\alpha}(\bar\calR).
\end{align}
\end{proposition}
\begin{proof}
The ensemble \eqref{dist} belongs to $T_{\alpha}(\bar\calR)$. To verify
\eqref{talp} we show that the set $T_{\alpha}$ is invariant with
respect to each arbitrage $\calA$ from Table~\ref{tab1}. This
statement can be checked by inspection. Let us, for instance, apply
to a sextuple \eqref{prod} the first arbitrage
$\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$. Then, by
definition, either this arbitrage is inactive, or it changes the
first component $\alpha^{n_{1}} \cdot
\bar{r}_{\textrm{\$}\textrm{\euro}} $ of \eqref{prod} to the new
value
\begin{equation}\label{bb}
r^{new}_{\textrm{\$}\textrm{\euro}}= \frac{\alpha^{n_{2}} \cdot \bar{r}_{\textrm{\$}\textrm{\pounds}}}
{\alpha^{n_{4}}\cdot \bar{r}_{\textrm{\euro}\textrm{\pounds}}}= \alpha^{n_{2}-n_{4}}\cdot
\frac{\bar{r}_{\textrm{\$}\textrm{\pounds}}}{ \bar{r}_{\textrm{\euro}\textrm{\pounds}}}.
\end{equation}
However, the ensemble ${\bar\calR}$ is balanced, and, by the first
equation \eqref{invss},
$\frac{\bar{r}_{\textrm{\$}\textrm{\pounds}}}{\bar{r}_{\textrm{\euro}\textrm{\pounds}}}=
\bar{r}_{\textrm{\$}\textrm{\euro}}$.
Therefore, \eqref{bb} implies that the image of the sextuple
\eqref{prod} under
$\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$ may again
be represented in the form \eqref{prod}. We have proved the
first part of the proposition, related to the set
$S(\bar\calR_{\alpha})$. The inclusion \eqref{palpb} follows now
from Proposition~\ref{balp}.
\end{proof}
Proposition~\ref{rep1P} in no way answers Question~\ref{que2}. This
proposition, however, allows us to reformulate this question in a
more constructive form:
\begin{question}\label{que3}
How big is the set $S^{bal}(\bar\calR_{\alpha})$, compared with the
collection $T^{bal}_{\alpha}(\bar\calR)$ of all elements that
satisfy the restrictions imposed by Proposition~\ref{rep1P}?
\end{question}
The naive expectation would be that the set
$S^{bal}(\bar\calR_{\alpha})$ is finite and, at least for values of
$\alpha$ close to 1, that all elements of
$S^{bal}(\bar\calR_{\alpha})$ are close to $\bar\calR$. However, the
following statement, describing an unexpected feature of the power
of the \emph{Arbiter}, is true.
\begin{theorem}\label{arbH}
The set $S^{bal}(\bar\calR_{\alpha})$ coincides with
$T^{bal}_{\alpha}(\bar\calR)$:
\begin{equation}\label{palpe}
S^{bal}(\bar\calR_{\alpha})= T^{bal}_{\alpha}(\bar\calR).
\end{equation}
Moreover each balanced ensemble \eqref{prod} may be achieved via a
chain of arbitrage operations no longer than
\begin{equation}
\label{estN} N(n_{1},n_{2},n_{3})=3(|n_{1}-1|+|n_{2}|+|n_{3}|)+ 3.
\end{equation}
\end{theorem}
Loosely speaking, this theorem means that the \emph{Arbiter} is
extremely powerful. An assertion similar to Theorem~\ref{arbH}
was formulated as a hypothesis in \cite{KozCalPok:ArXiv10}. We
describe the algorithms corresponding to this theorem in the next
section.
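For instance, for the balanced target ensemble with
$(n_{1},n_{2},n_{3})=(2,1,-1)$ the estimate \eqref{estN} gives
\[
N(2,1,-1)=3\,\bigl(|2-1|+|1|+|-1|\bigr)+3=3\cdot 3+3=12,
\]
so a chain of at most twelve arbitrage operations suffices.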
The following assertion certifies that the estimate \eqref{estN}
from Theorem \ref{arbH} is close to optimal.
\begin{proposition}\label{VictorP}
The inequalities
\[
\label{dist2}
|n_1-n_2+n_4|,~|n_1-n_3+n_5|,~|n_2-n_3+n_6|\le 1
\]
hold for any $\calR\in S(\bar\calR_{\alpha})$.
Here $n_i$ are the integers from representation \eqref{prod} of
$\calR$.
\end{proposition}
\begin{proof} This assertion is a special case of
Lemma~\ref{specialC} which will be considered below.
\end{proof}
Note that the set $S(\bar\calR_{\alpha})$ is, in contrast to
\eqref{palpe}, much smaller than the totality
$T_{\alpha}(\bar\calR)$ of all ensembles of the form \eqref{prod}.
In particular, the following assertion holds:
\begin{proposition}\label{boundP}
Let ${\bA}$ denote a chain of arbitrages of length $N$, and
$\calR=\bar\calR_{\alpha}{\bA}$. Then
$3(|n_{1}-1|+|n_{2}|+|n_{3}|)\le N+8$, where $n_1,n_2,n_3$ are the
integers from the representation \eqref{prod} of $\calR$.
\end{proposition}
Let us consider an infinite arbitrage chain:
\begin{equation}\label{perse}
{\bA}={\calA_{1}}{\calA_{2}}{\calA_{3}} \cdots {\calA_{n}} \dotsm .
\end{equation}
This chain is periodic with minimal period $p$ if
${\calA_{n}}={\calA_{n+p}}$ for $n=1,2,\ldots $, and $p$ is the
minimal positive integer with this property. Various periodic
chains of arbitrages play a special role in the context of this
article, and we summarise below some interesting features of such
periodic arbitrage chains. For a periodic chain \eqref{perse} and
for an initial (unbalanced) exchange rate ensemble $\calR_{0}$ we
consider the sequence
\begin{equation}\label{pero}
{\calR_{0}},{\calR_{1}},{\calR_{2}}, \ldots , {\calR_{n}}, \ldots
\end{equation}
defined by $ \calR_{n}=\calR_{n-1}\calA_{n}$, $n=1,2,\dotsc$.
\begin{proposition} \label{perpro}
Either (i) the sequence \eqref{pero} is periodic for $n\ge 36p$;
or (ii) this sequence is diverging:
at least one of the following six relationships holds:
\[
{r_{\textrm{\$}\textrm{\euro}}}_{n}\to 0,\
{r_{\textrm{\$}\textrm{\pounds}}}_{n}\to 0,\
{r_{\textrm{\$}\textrm{\yen}}}_{n}\to 0,\
{r_{\textrm{\$}\textrm{\euro}}}_{n}\to \infty,\
{r_{\textrm{\$}\textrm{\pounds}}}_{n}\to \infty,\
{r_{\textrm{\$}\textrm{\yen}}}_{n}\to \infty.
\]
Moreover, in Case (i) the minimal period of the sequence is a
divisor of $24p$; in Case (ii) there exists a divisor $q$ of $24p$
and factors
$\gamma_{\textrm{\$}\textrm{\euro}},\ldots,\gamma_{\textrm{\pounds}\textrm{\yen}}$
such that the relationships ${r_{\textrm{\$}\textrm{\euro}}}_{n+q}=
\gamma_{\textrm{\$}\textrm{\euro}}
{r_{\textrm{\$}\textrm{\euro}}}_{n},\ \ldots,\
{r_{\textrm{\pounds}\textrm{\yen}}}_{n+q}=
\gamma_{\textrm{\pounds}\textrm{\yen}}
{r_{\textrm{\pounds}\textrm{\yen}}}_{n} $ hold for $n\ge 36p$.
\end{proposition}
\begin{proof}
This statement follows from Lemmas~\ref{comp} and~\ref{trans}.
\end{proof}
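A minimal instance of Case (i) can be checked mechanically: a chain of period $p=1$ that repeats a single arbitrage makes the sequence \eqref{pero} constant, hence periodic with minimal period $1$ (a divisor of $24p=24$), after its first active step. The sketch below encodes an arbitrage $\calA_{xyz}$ as ``raise the $x\to y$ rate when the route via $z$ beats it'' (our reading of Table~\ref{tab1}; the currency codes and numbers are illustrative):

```python
from fractions import Fraction

def rho(rates, x, y):
    """Units of y per unit of x; reciprocal pairs are derived as 1/r."""
    return rates[(x, y)] if (x, y) in rates else 1 / rates[(y, x)]

def apply_arbitrage(rates, x, y, z):
    """A_{xyz}: raise the x -> y rate when the route x -> z -> y beats it."""
    new = dict(rates)
    better = rho(rates, x, z) * rho(rates, z, y)
    if better > rho(rates, x, y):
        if (x, y) in rates:
            new[(x, y)] = better
        else:
            new[(y, x)] = 1 / better
    return new

# \bar{R}_alpha with alpha = 2: a balanced ensemble whose $ -> euro rate is doubled.
a, b, c = Fraction(4, 5), Fraction(3, 4), Fraction(120)
R0 = {("USD", "EUR"): 2 * a, ("USD", "GBP"): b, ("USD", "JPY"): c,
      ("EUR", "GBP"): b / a, ("EUR", "JPY"): c / a, ("GBP", "JPY"): c / b}

# A periodic chain of minimal period p = 1 repeating arbitrage no. 15,
# A_{pound euro $}: the sequence (pero) freezes after the first active step.
orbit = [R0]
for _ in range(5):
    orbit.append(apply_arbitrage(orbit[-1], "GBP", "EUR", "USD"))
```

After the first step the euro--pound rate has absorbed the perturbation on its own triangle, the arbitrage becomes inactive, and the orbit is constant from then on.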
To conclude this discussion, we note one more unexpected feature of
periodic chains of arbitrage. A chain \eqref{perse} is
\emph{regular for the initial ensemble $\calR_0$} if this chain
includes all 24 arbitrages, and each arbitrage is active infinitely
many times while generating the sequence \eqref{pero}. By analogy
with typical results from the theory of desynchronised systems, one
could expect that, for a regular chain of arbitrages, the elements
of the corresponding sequence \eqref{pero} should be balanced for
sufficiently large $n$. However, this is not the case: the
sequence \eqref{pero} may be either periodic (after some transient
period) or diverging.
As an instructive example consider the 24-periodic chain
${\bA}_{*}$ which is defined by the following equations:
\begin{alignat*}{4}
\calA_{1}&=\calA^{(15)},\quad&\calA_{2}&=\calA^{(10)},\quad&\calA_{3}&=\calA^{(3)},\quad&\calA_{4}&=\calA^{(21)},\\
\calA_{5}&=\calA^{(11)},\quad&\calA_{6}&=\calA^{(8)},\quad&\calA_{7}&=\calA^{(24)},\quad&\calA_{8}&=\calA^{(17)},\\
\calA_{9}&=\calA^{(6)},\quad&\calA_{10}&=\calA^{(9)},\quad&\calA_{11}&=\calA^{(16)},\quad&\calA_{12}&=\calA^{(13)},\\
\calA_{13}&=\calA^{(12)},\quad&\calA_{14}&=\calA^{(22)},\quad&\calA_{15}&=\calA^{(14)},\quad&\calA_{16}&=\calA^{(18)},\\
\calA_{17}&=\calA^{(23)},\quad&\calA_{18}&=\calA^{(15)},\quad&\calA_{19}&=\calA^{(5)},\quad&\calA_{20}&=\calA^{(7)},\\
\calA_{21}&=\calA^{(4)},\quad&\calA_{22}&=\calA^{(19)},\quad&\calA_{23}&=\calA^{(1)},\quad&\calA_{24}&=\calA^{(5)}.
\end{alignat*}
\begin{proposition} \label{32}
For the initial ensemble $\calR_0= \bar\calR_{\alpha}$ the
corresponding sequence \eqref{pero} is periodic with minimal period
24, and all arbitrages from ${\bA}_{*}$ are active.
\end{proposition}
\begin{proof}
By inspection.
\end{proof}
This proposition demonstrates that chains of arbitrage operations may
display periodicity without necessarily converging to cross
exchange rates satisfying the law of one price. See Figs.~\ref{GraphLeha1_24},
\ref{GraphLeha3} and formula \eqref{geomrout} below for an
explanation of the geometrical meaning of the arbitrage chain
${\bA}_{*}$.
\section{The Basic Algorithm}\label{simpleASS}
Introduce the following chains of arbitrages of length $3$:
{\small\begin{alignat*}{3}
{\bA}_{+}^{(1)}&=\calA^{(21)}\calA^{(16)}\calA^{(1)},\quad&
{\bA}_{+}^{(2)}&=\calA^{(3)}\calA^{(17)}\calA^{(10)},\quad&
{\bA}_{+}^{(3)}&=\calA^{(5)}\calA^{(18)}\calA^{(12)},\\
{\bA}_{-}^{(1)}&=\calA^{(8)}\calA^{(9)}\calA^{(11)},\quad&
{\bA}_{-}^{(2)}&=\calA^{(15)}\calA^{(18)}\calA^{(14)},\quad&
{\bA}_{-}^{(3)}&=\calA^{(21)}\calA^{(23)}\calA^{(20)}.
\end{alignat*}}
\noindent It is convenient to define the mapping $\sigma$ that
assigns the symbol ``$+$'' to a non-negative integer $n$,
and the symbol ``$-$'' to a negative integer.
\begin{proposition}\label{algP}
The chain
\begin{equation} \label{alg0e} {\bA}(n_{1},n_{2},n_{3})=
{\left({\bA}_{\sigma( n_{3})}^{(3)}\right)}^{|n_{3}|}
{\left({\bA}_{\sigma (n_{2})}^{(2)}\right)}^{|n_{2}|}
\calA^{(15)}\calA^{(18)}
{\left({\bA}_{\sigma (n_{1})}^{(1)}\right)}^{|n_{1}-1|}
\calA^{(5)}
\end{equation}
satisfies Theorem~\ref{arbH}: the ensemble
$\bar\calR_{\alpha}{\bA}(n_{1},n_{2},n_{3})$ coincides with
\[
\left(\alpha^{n_{1}} \cdot \bar{r}_{\textrm{\$}\textrm{\euro}} ,\
\alpha^{n_{2}} \cdot \bar{r}_{\textrm{\$}\textrm{\pounds}},\
\alpha^{n_{3}} \cdot \bar{r}_{\textrm{\$}\textrm{\yen}} ,\
\alpha^{n_{2}-n_{1}}\cdot \bar{r}_{\textrm{\euro}\textrm{\pounds}},\
\alpha^{n_{3}-n_{1}}\cdot \bar{r}_{\textrm{\euro}\textrm{\yen}} ,\
\alpha^{n_{3}-n_{2}}\cdot \bar{r}_{\textrm{\pounds}\textrm{\yen}}
\right),
\]
and the length $N$ of the chain \eqref{alg0e} satisfies
$
N\le 3(|n_{1}-1|+|n_{2}|+|n_{3}|)+ 3.
$
\end{proposition}
The legitimacy of this algorithm may be verified by induction.
However, a simple geometric proof is much more instructive; it
will be given later on. The chain \eqref{alg0e} is not always the
shortest: for instance, in the case $n_{1}= n_{2}=n_{3}=0$ the
shortest chain ${\bA}$ is of length one: ${\bA}=\calA^{(7)}$.
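Counting the letters of the word \eqref{alg0e} immediately yields the bound on $N$: the three-arbitrage blocks contribute $3(|n_{1}-1|+|n_{2}|+|n_{3}|)$ letters, and the three fixed letters $\calA^{(15)}$, $\calA^{(18)}$, $\calA^{(5)}$ contribute the remaining $3$. A sketch of this bookkeeping, with chains represented as lists of row numbers from Table~\ref{tab1} (an illustration, not the paper's code):

```python
def sigma(n):
    """The sign mapping: '+' for a non-negative n, '-' for a negative n."""
    return "+" if n >= 0 else "-"

# The three-arbitrage blocks bA_{+/-}^{(i)}, written as table row numbers.
BLOCKS = {
    ("+", 1): [21, 16, 1], ("+", 2): [3, 17, 10], ("+", 3): [5, 18, 12],
    ("-", 1): [8, 9, 11], ("-", 2): [15, 18, 14], ("-", 3): [21, 23, 20],
}

def chain(n1, n2, n3):
    """The word bA(n1, n2, n3) of (alg0e), spelled out left to right."""
    word = []
    word += BLOCKS[(sigma(n3), 3)] * abs(n3)
    word += BLOCKS[(sigma(n2), 2)] * abs(n2)
    word += [15, 18]
    word += BLOCKS[(sigma(n1), 1)] * abs(n1 - 1)
    word += [5]
    return word
```

The length of the word is $3|n_{3}|+3|n_{2}|+2+3|n_{1}-1|+1$, which is exactly the bound stated in Proposition~\ref{algP}.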
\section{General case}\label{S-gencase}
\subsection{Direct Generalisation}\label{directSS}
We begin with the following comment. The ensemble \eqref{dist} is
the first item in the list
\begin{equation}\label{list}
\begin{split}
\bar\calR^{1}_{\alpha}&=\left(\alpha \cdot \bar{r}_{\textrm{\$}\textrm{\euro}},\
\bar{r}_{\textrm{\$}\textrm{\pounds}},\ \bar{r}_{\textrm{\$}\textrm{\yen}},\ \bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \bar{r}_{\textrm{\euro}\textrm{\yen}},\ \bar{r}_{\textrm{\pounds}\textrm{\yen}}\right),\\
\bar\calR^{2}_{\alpha}&=\left(\bar{r}_{\textrm{\$}\textrm{\euro}},\
\alpha\cdot\bar{r}_{\textrm{\$}\textrm{\pounds}},\ \bar{r}_{\textrm{\$}\textrm{\yen}},\ \bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \bar{r}_{\textrm{\euro}\textrm{\yen}},\ \bar{r}_{\textrm{\pounds}\textrm{\yen}}\right),\\
\bar\calR^{3}_{\alpha}&=\left(\bar{r}_{\textrm{\$}\textrm{\euro}},\
\bar{r}_{\textrm{\$}\textrm{\pounds}},\ \alpha\cdot\bar{r}_{\textrm{\$}\textrm{\yen}},\ \bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \bar{r}_{\textrm{\euro}\textrm{\yen}},\ \bar{r}_{\textrm{\pounds}\textrm{\yen}}\right),\\
\bar\calR^{4}_{\alpha}&=\left(\bar{r}_{\textrm{\$}\textrm{\euro}},\
\bar{r}_{\textrm{\$}\textrm{\pounds}},\ \bar{r}_{\textrm{\$}\textrm{\yen}},\ \alpha\cdot\bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \bar{r}_{\textrm{\euro}\textrm{\yen}},\ \bar{r}_{\textrm{\pounds}\textrm{\yen}}\right),\\
\bar\calR^{5}_{\alpha}&=\left(\bar{r}_{\textrm{\$}\textrm{\euro}},\
\bar{r}_{\textrm{\$}\textrm{\pounds}},\ \bar{r}_{\textrm{\$}\textrm{\yen}},\ \bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \alpha\cdot\bar{r}_{\textrm{\euro}\textrm{\yen}},\ \bar{r}_{\textrm{\pounds}\textrm{\yen}}\right),\\
\bar\calR^{6}_{\alpha}&=\left(\bar{r}_{\textrm{\$}\textrm{\euro}},\
\bar{r}_{\textrm{\$}\textrm{\pounds}},\ \bar{r}_{\textrm{\$}\textrm{\yen}},\ \bar{r}_{\textrm{\euro}\textrm{\pounds}},\ \bar{r}_{\textrm{\euro}\textrm{\yen}},\ \alpha\cdot\bar{r}_{\textrm{\pounds}\textrm{\yen}}
\right).
\end{split}
\end{equation}
A natural ``relabelling'' procedure confirms that the main results
described in Section~\ref{mrSS} hold without any changes for the
first three initial ensembles from the list \eqref{list}. In particular, Theorem
\ref{arbH} implies
\begin{corollary}\label{arbAH}
The equality $S^{bal}(\bar\calR^{i}_{\alpha})=
T^{bal}_{\alpha}(\bar\calR)$ holds for $i=2,3$. Moreover each
balanced ensemble \eqref{prod} may be achieved via a chain of
arbitrage operations no longer than $N^{i}(n_{1},n_{2},n_{3})$,
where
\begin{align*}
N^{2}(n_{1},n_{2},n_{3})&=3(|n_{1}|+|n_{2}-1|+|n_{3}|)+3,\\
N^{3}(n_{1},n_{2},n_{3})&=3(|n_{1}|+|n_{2}|+|n_{3}-1|)+3.
\end{align*}
\end{corollary}
To describe the corresponding algorithms we introduce the auxiliary
chains {\small\begin{alignat*}{3}
{\tbA}_{+}^{(1)}&=\calA^{(1)}\calA^{(21)}\calA^{(16)},\quad&
{\tbA}_{+}^{(2)}&=\calA^{(13)}\calA^{(23)}\calA^{(16)},\quad&
{\tbA}_{+}^{(3)}&=\calA^{(24)}\calA^{(12)}\calA^{(19)}.\\
{\tbA}_{-}^{(1)}&=\calA^{(9)}\calA^{(11)}\calA^{(8)},\quad&
{\tbA}_{-}^{(2)}&=\calA^{(9)}\calA^{(34)}\calA^{(4)},\quad&
{\tbA}_{-}^{(3)}&=\calA^{(6)}\calA^{(11)}\calA^{(17)};\\
{\ttbA}_{+}^{(1)}&=\calA^{(18)}\calA^{(12)}\calA^{(5)},\quad&
{\ttbA}_{+}^{(2)}&=\calA^{(23)}\calA^{(16)}\calA^{(13)},\quad&
{\ttbA}_{+}^{(3)}&=\calA^{(18)}\calA^{(12)}\calA^{(5)}.\\
{\ttbA}_{-}^{(1)}&=\calA^{(20)}\calA^{(21)}\calA^{(23)},\quad&
{\ttbA}_{-}^{(2)}&=\calA^{(4)}\calA^{(9)}\calA^{(24)},\quad&
{\ttbA}_{-}^{(3)}&=\calA^{(20)}\calA^{(21)}\calA^{(23)}.
\end{alignat*}}
The equation \eqref{alg0e} can be modified to the form
{\small\begin{alignat*}{1}
{\bA}_{2}(n_{1},n_{2},n_{3}) &=
{\left({\tbA}_{\sigma( n_{1})}^{(1)}\right)}^{|n_{1}|}
\calA^{(24)}\calA^{(12)} {\left({\tbA}_{\sigma
(n_{3})}^{(3)}\right)}^{|n_{3}|} {\left({\tbA}_{\sigma
(n_{2})}^{(2)}\right)}^{|n_{2}-1|} \calA^{(1)}
\end{alignat*}}
for $i=2$, and to the form
{\small\begin{alignat*}{1}
{\bA}_{3}(n_{1},n_{2},n_{3})
&= {\left({\ttbA}_{\sigma (n_{2})}^{(2)}\right)}^{|n_{2}|}
{\left({\ttbA}_{\sigma (n_{1})}^{(1)}\right)}^{|n_{1}|}
\calA^{(12)}\calA^{(10)}
{\left({\ttbA}_{\sigma (n_{3})}^{(3)}\right)}^{|n_{3}-1|}
\calA^{(3)}
\end{alignat*}}
for $i=3$.
Let us turn to the initial ensembles $\bar\calR^{i}_{\alpha}$,
$i=4,5,6$.
\begin{proposition}\label{arbA2H}
The equality $ S^{bal}(\bar\calR^{i}_{\alpha})=
T^{bal}_{\alpha}(\bar\calR)$ holds for $i=4,5,6$. Moreover each
balanced ensemble \eqref{prod} may be achieved via a chain of
arbitrage no longer than $N^{i}(n_{1},n_{2},n_{3})$, where
\[
N^{4,5,6}(n_{1},n_{2},n_{3})=3(|n_{1}|+|n_{2}|+|n_{3}|)+4.
\]
\end{proposition}
The corresponding chains ${\bA}_{i}(n_{1},n_{2},n_{3})$, $i=4,5,6$,
may be defined by the following equations:
\begin{multline*}
{\bA}_{4}(n_{1},n_{2},n_{3})=\calA^{(12)}{\bA}(n_{1}+1,n_{2},n_{3})\\
=\calA^{(12)}{\left({\bA}_{\sigma( n_{3})}^{(3)}\right)}^{|n_{3}|}
{\left({\bA}_{\sigma (n_{2})}^{(2)}\right)}^{|n_{2}|}
\calA^{(15)}\calA^{(18)}
{\left({\bA}_{\sigma( n_{1})}^{(1)}\right)}^{|n_{1}|}
\calA^{(5)},
\end{multline*}
\begin{multline*}
{\bA}_{5}(n_{1},n_{2},n_{3})=\calA^{(16)}{\bA}(n_{1}+1,n_{2},n_{3})\\
=\calA^{(16)}{\left({\bA}_{\sigma (n_{3})}^{(3)}\right)}^{|n_{3}|}
{\left({\bA}_{\sigma (n_{2})}^{(2)}\right)}^{|n_{2}|}
\calA^{(15)}\calA^{(18)}
{\left({\bA}_{\sigma (n_{1})}^{(1)}\right)}^{|n_{1}|}
\calA^{(3)},
\end{multline*}
\begin{multline*}
{\bA}_{6}(n_{1},n_{2},n_{3})=\calA^{(16)}{\bA}_{2}(n_{1}+1,n_{2},n_{3})\\
=\calA^{(10)} {\left({\tbA}_{\sigma (n_{1})}^{(1)}\right)}^{|n_{1}|}
\calA^{(24)}\calA^{(12)}
{\left({\tbA}_{\sigma (n_{3})}^{(3)}\right)}^{|n_{3}|}
{\left({\tbA}_{\sigma (n_{2})}^{(2)}\right)}^{|n_{2}-1|}
\calA^{(1)}.
\end{multline*}
\begin{proof}
This assertion may be proved analogously to Theorem~\ref{arbH}.
\end{proof}
\subsection{Arbitrage Discrepancies\label{DiscSS}}
To formulate further generalisations we need an additional notion.
To each ensemble $\calR= \left(r_{\textrm{\$}\textrm{\euro}},
r_{\textrm{\$}\textrm{\pounds}}, r_{\textrm{\$}\textrm{\yen}},
r_{\textrm{\euro}\textrm{\pounds}},
r_{\textrm{\euro}\textrm{\yen}},
r_{\textrm{\pounds}\textrm{\yen}}\right)$
we attach an \emph{arbitrage
discrepancy ensemble}, using the relationships for
balanced principal exchange rates given in \eqref{invss} above
\[
\calD(\calR)=\left(d_{\textrm{\euro}\textrm{\pounds}}(\calR),d_{\textrm{\euro}\textrm{\yen}}(\calR),d_{\textrm{\pounds}\textrm{\yen}}(\calR)\right)
\]
as follows:
\begin{equation}\label{discrep}
\begin{split}
d_{\textrm{\euro}\textrm{\pounds}}(\calR)&=
\log r_{\textrm{\euro}\textrm{\pounds}}-\log r_{\textrm{\$}\textrm{\pounds}}+\log r_{\textrm{\$}\textrm{\euro}},\\
d_{\textrm{\euro}\textrm{\yen}}(\calR)&=
\log r_{\textrm{\euro}\textrm{\yen}}-\log r_{\textrm{\$}\textrm{\yen}}+\log r_{\textrm{\$}\textrm{\euro}},\\
d_{\textrm{\pounds}\textrm{\yen}}(\calR)&= \log
r_{\textrm{\pounds}\textrm{\yen}}-\log
r_{\textrm{\$}\textrm{\yen}}+\log r_{\textrm{\$}\textrm{\pounds}}.
\end{split}
\end{equation}
For instance
\begin{equation}\label{fori}
\begin{alignedat}{3}
\calD(\bar\calR^{1}_{\alpha})&=a(1,1,0),\quad&
\calD(\bar\calR^{2}_{\alpha})&=a(-1,0,1),\quad&
\calD(\bar\calR^{3}_{\alpha})&=a(0,-1,-1),\\
\calD(\bar\calR^{4}_{\alpha})&=a(1,0,0),\quad&
\calD(\bar\calR^{5}_{\alpha})&=a(0,1,0),\quad&
\calD(\bar\calR^{6}_{\alpha})&=a(0,0,1),
\end{alignedat}
\end{equation}
where $a=\log \alpha$.
\begin{proposition}\label{disc0P}
The ensemble $\calR$ is balanced if and only if $\calD(\calR)=0$.
\end{proposition}
\begin{proof}
Follows from Proposition~\ref{balp} and equations \eqref{discrep}.
\end{proof}
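The discrepancies \eqref{discrep} are straightforward to compute in log space; the sketch below (with an illustrative tuple layout of our own) also confirms the first entry of \eqref{fori} numerically:

```python
import math

def discrepancies(r):
    """The arbitrage discrepancy triple (d_ep, d_ey, d_py) of (discrep),
    for r = (r_de, r_dp, r_dy, r_ep, r_ey, r_py); the tuple layout
    ($->euro, $->pound, $->yen, then the three cross rates) is ours."""
    l_de, l_dp, l_dy, l_ep, l_ey, l_py = (math.log(v) for v in r)
    return (l_ep - l_dp + l_de,
            l_ey - l_dy + l_de,
            l_py - l_dy + l_dp)
```

For a balanced ensemble all three discrepancies vanish, in line with Proposition~\ref{disc0P}; multiplying the dollar--euro rate by $\alpha$ produces the triple $a(1,1,0)$ with $a=\log\alpha$, as in the first entry of \eqref{fori}.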
\subsection{Case A\label{caseA}}
The case where two of the discrepancies \eqref{discrep} are equal
to zero was implicitly considered in Section~\ref{directSS}: see
the second line in \eqref{fori} and Proposition~\ref{arbA2H}.
\subsection{Case B\label{caseB}}
Consider now the case when one of the discrepancies in
\eqref{discrep} is equal to zero, while two others are not. We will
be particularly interested in the situation where two nonzero
discrepancies are different. This situation may have emerged, for
instance, as follows. Let us suppose that the underlying balanced
rates \eqref{distbar} had been in operation up to a certain
reference time moment $0$. At this moment the Euro trader has
decided to change two of his three rates, namely
$r_{\textrm{\euro}\textrm{\pounds}}$ and
$r_{\textrm{\euro}\textrm{\yen}}$, by different factors $\alpha$
and $\beta$. Then at this moment the two discrepancies would
acquire different non-zero values, while the third discrepancy
remains equal to zero.
Suppose, for example, that $d_{\textrm{\pounds}\textrm{\yen}}=0$,
while $d_{\textrm{\euro}\textrm{\pounds}},
d_{\textrm{\euro}\textrm{\yen}}\not=0$. We introduce the ratio
\begin{equation}\label{ratio}
q(\calR)=\frac{d_{\textrm{\euro}\textrm{\yen}}(\calR)}{d_{\textrm{\euro}\textrm{\pounds}}(\calR)}.
\end{equation}
\begin{theorem}\label{irratBC}
Let the number \eqref{ratio} be irrational. Then the set
$S^{bal}(\calR)$ is dense in the totality $T^{bal}$ of all possible
balanced ensembles.
\end{theorem}
A proof of this assertion will be given later on.
Consider also the case where $q=q(\calR)$ is a rational number:
$q=m/n$ with co-prime integers $m,n$ (including the possibilities
$m=1$ or $n=1$). Denote also
\[
\label{aplhaB} \alpha=\exp (d_{\textrm{\euro}\textrm{\yen}}/n).
\]
The following assertion is a straightforward analog of Proposition
\ref{rep1P}.
\begin{proposition}\label{repBP}
The inclusions $ S(\calR)\subset T_{\alpha}(\calR) $ and
$S^{bal}(\calR)\subset T^{bal}_{\alpha}(\calR)$ hold.
\end{proposition}
The following is an analog of Theorem~\ref{arbH}:
\begin{proposition}\label{arbBH}
The equality $S^{bal}(\calR)= T^{bal}(\calR)$ holds.
\end{proposition}
A proof of this assertion will be given later on.
Note that expressions like \eqref{estN} are not valid in general.
Similar expressions may be established, however, for the cases
$m=1$ or $n=1$. Note also that the case when the discrepancy
triplet is of one of the forms $(a,a,0)$, $(a,0,-a)$, or $(0,a,a)$,
$a\not=0$, was implicitly considered in Section~\ref{directSS}: see
the first line in \eqref{fori} and Proposition~\ref{arbA2H}.
\subsection{Case C\label{caseC}}
Consider the case where all three arbitrage discrepancies
\eqref{discrep} are not equal to zero.
\begin{corollary}\label{irratCC}
Let at least one of the ratios
\begin{equation}\label{ratiose}
q_{1}(\calR)=
\frac{d_{\textrm{\euro}\textrm{\yen}}(\calR)}{d_{\textrm{\euro}\textrm{\pounds}}(\calR)},
\quad
q_{2}(\calR)=\frac{d_{\textrm{\pounds}\textrm{\yen}}(\calR)}{d_{\textrm{\euro}\textrm{\pounds}}(\calR)}
\end{equation}
be irrational. Then the set $S^{bal}(\calR)$ is dense in the
totality $T^{bal}$ of all possible balanced ensembles.
\end{corollary}
Suppose now that both ratios \eqref{ratiose} are rational:
\[
q_{1}(\calR)=\frac{m_{1}}{n_{1}},\quad
q_{2}(\calR)=\frac{m_{2}}{n_{2}}.
\]
Denote by $\lcm(n_1,n_2)$ the least common multiple of the
corresponding denominators, and set
\[
\alpha(\calR)=\exp\left(\frac{d_{\textrm{\euro}\textrm{\pounds}}(\calR)}{\lcm(n_1,n_2)}\right).
\]
\begin{proposition}\label{rep1CP}
The relationships $ S(\calR)\subset T_{\alpha}(\calR)$ and
$S^{bal}(\calR)\subset T^{bal}_{\alpha}(\calR)$ hold.
\end{proposition}
\begin{corollary}\label{arbCH}
Let
\begin{equation}
\label{resC} \lcm(n_1,n_2)=n_{1}\cdot n_{2}.
\end{equation}
Then $S^{bal}(\calR)= T^{bal}_{\alpha}(\calR)$.
\end{corollary}
\begin{proof}
This assertion, as well as Corollary~\ref{arbC2H} formulated below,
follows from Proposition~\ref{arbBH} together with
Proposition~\ref{colP}.
\end{proof}
Consider finally the case when the ratios $q_{1}(\calR)$ and
$q_{2}(\calR)$ are rational, but \eqref{resC} does not hold. In
this case we introduce the number $\gamma$ such that
$d_{i}=k_{i}\gamma$, where the numbers $k_{i}$ are integers and
their greatest common divisor $\gcd(k_{1},k_{2},k_{3})$ equals
$1$. Consider also the following six numbers:
\begin{equation}\label{lcd2}
\begin{alignedat}{2}
a_{1}&=\gcd(k_{1},k_{2}), &a_{2}&=\gcd(k_{1},k_{3}),\\[1mm]
a_{3}&=\gcd(k_{2},k_{3}), &a_{4}&=\gcd(k_{1},k_{2}-k_{3}),\\[1mm]
a_{5}&=\gcd(k_{2},k_{1}+k_{3}),\quad &a_{6}&=\gcd(k_{3},k_{1}-k_{2}).
\end{alignedat}
\end{equation}
Introduce also the numbers $\alpha_{i}=\exp a_{i}$, $i=1,\ldots,
6$.
\begin{corollary}\label{arbC2H}
The equation $S^{bal}(\calR)=\cup_{i=1}^{6}
T^{bal}_{\alpha_{i}}(\calR)$ holds.
\end{corollary}
Note that all six numbers in \eqref{lcd2} may indeed be greater
than one. For instance, consider $k_1 = 595$, $k_2 = 1683$, $k_3 =
308$. By inspection, $\gcd(k_1, k_2, k_3)=1$, and
\begin{alignat*}{3}
a_1&=\gcd(k_1, k_2)=17,& a_2&=\gcd(k_1, k_3)=7,\\
a_3&=\gcd(k_2, k_3)=11,& a_4&=\gcd(k_1, k_2 - k_3)=5,\\
a_5&=\gcd(k_2, k_1 + k_3)=3,\quad& a_6&=\gcd(k_3, k_1 - k_2)=4.
\end{alignat*}
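The arithmetic can be confirmed mechanically; the following sketch evaluates the six gcds of \eqref{lcd2} for the triplet above:

```python
from math import gcd

k1, k2, k3 = 595, 1683, 308
assert gcd(gcd(k1, k2), k3) == 1   # the triplet is primitive

# a_1, ..., a_6 in the order of the definitions (lcd2).
a = [gcd(k1, k2), gcd(k1, k3), gcd(k2, k3),
     gcd(k1, abs(k2 - k3)), gcd(k2, k1 + k3), gcd(k3, abs(k1 - k2))]
# All six values exceed one and are pairwise distinct.
```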
\section{Proofs}\label{S-proofs}
From this point onward we discuss the proofs of the theorems
formulated above. This part of the paper is organised as follows.
In Section~\ref{S-sarb} we introduce, as a useful auxiliary tool,
stronger arbitrage procedures. Using strong arbitrages, we
``linearise the problem'', reducing it to investigation of all
possible products of 12 explicitly written $6\times 6$-matrices.
Afterwards, in Section~\ref{S-scs} we single out a family of 12
$3\times 3$-matrices $G^{(i)}$ such that the products of these
matrices completely describe the dynamics of the discrepancy
triplets. The properties of such products appear to be of key
importance, and these are investigated in Section~\ref{S-struct}.
The results are applied in Section~\ref{S-discrep}.
Sections~\ref{S-incdyn} and \ref{S-proofT} are dedicated to
finalising the proof of Theorem~\ref{arbH}. Finally, in
Sections~\ref{knotsSS}--\ref{fp2SS} we provide proofs for
Theorem~\ref{irratBC} and Proposition \ref{arbBH}.
\subsection{Strong Arbitrages}\label{S-sarb}
We use, as an auxiliary tool, stronger arbitrage procedures. Let us
begin with an example. Consider the currencies triplet
$(\textrm{\$}\textrm{\euro}\textrm{\pounds})$. For a given $\calR$
we define the strong arbitrage
$\Hat\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}\calR$ as
$\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$ if the
inequality \eqref{unb} holds, and as
$\calA_{\textrm{\euro}\textrm{\$}\textrm{\pounds}}$, otherwise.
Note that in both cases the result in terms of principal exchange
rates is the same: the rate $r_{\textrm{\$}\textrm{\euro}}$ is
changed to $r_{\textrm{\$}\textrm{\euro}}^{new}=
\frac{r_{\textrm{\$}\textrm{\pounds}}}{r_{\textrm{\euro}\textrm{\pounds}}}$.
The strong arbitrage
$\Hat\calA_{\textrm{\$}\textrm{\euro}\textrm{\yen}}$ is the second
entry in Table~\ref{starbT} of the possible 12 strong arbitrages.
The meaning of a strong arbitrage is simple: it is an arbitrage
balancing a sub-FX market such as
$\textrm{\$}\textrm{\euro}\textrm{\yen}$ by changing the exchange
rate for a pair such as $\textrm{Dollar} \leftrightarrows
\textrm{Euro}$. We will use, where convenient, the notation
$\Hat\calA^{(n)}$ for the arbitrage number $n$ from this table.
\begin{table}[!htbp]
\caption{Strong arbitrages}\label{starbT}
\begin{tabular}{llll}
Number& Strong arbitrage& Action& Numbers of arbitrages\\
\hline \\
1& $\Hat\calA_{\textrm{\$}\textrm{\euro}\textrm{\pounds}}$& $
r^{new}_{\textrm{\$}\textrm{\euro}}=
r_{\textrm{\$}\textrm{\pounds}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}^{-1}$ & 1, 7
\\
2& $\Hat\calA_{\textrm{\$}\textrm{\euro}\textrm{\yen}}$ & $
r^{new}_{\textrm{\$}\textrm{\euro}}=
r_{\textrm{\$}\textrm{\yen}}\cdot
r_{\textrm{\euro}\textrm{\yen}}^{-1} $ &2, 8
\\
3& $\Hat\calA_{\textrm{\$}\textrm{\pounds}\textrm{\euro}}$ & $
r^{new}_{\textrm{\$}\textrm{\pounds}}=
r_{\textrm{\$}\textrm{\euro}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}$ &3, 13
\\
4& $\Hat\calA_{\textrm{\$}\textrm{\pounds}\textrm{\yen}}$& $
r^{new}_{\textrm{\$}\textrm{\pounds}}=
r_{\textrm{\$}\textrm{\yen}}\cdot
r_{\textrm{\pounds}\textrm{\yen}}^{-1}$ & 4, 14
\\
5& $\Hat\calA_{\textrm{\$}\textrm{\yen}\textrm{\euro}}$ & $
r^{new}_{\textrm{\$}\textrm{\yen}}=
r_{\textrm{\$}\textrm{\euro}}\cdot r_{\textrm{\euro}\textrm{\yen}}$ &5, 19
\\
6 &$\Hat\calA_{\textrm{\$}\textrm{\yen}\textrm{\pounds}}$&
$ r^{new}_{\textrm{\$}\textrm{\yen}}= r_{\textrm{\$}\textrm{\pounds}}\cdot r_{\textrm{\pounds}\textrm{\yen}}$
& 6, 20
\\
7& $\Hat\calA_{\textrm{\euro}\textrm{\pounds}\textrm{\$}}$&
$ r^{new}_{\textrm{\euro}\textrm{\pounds}}= r_{\textrm{\$}\textrm{\pounds}}\cdot r_{\textrm{\$}\textrm{\euro}}^{-1}$
& 9, 15
\\
8& $\Hat\calA_{\textrm{\euro}\textrm{\pounds}\textrm{\yen}}$&
$ r^{new}_{\textrm{\euro}\textrm{\pounds}}= r_{\textrm{\euro}\textrm{\yen}}\cdot r_{\textrm{\pounds}\textrm{\yen}}^{-1}$
& 10, 16
\\
9& $\Hat\calA_{\textrm{\euro}\textrm{\yen}\textrm{\$}}$&
$ r^{new}_{\textrm{\euro}\textrm{\yen}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\$}\textrm{\euro}}^{-1}$
& 11, 21
\\
10& $\Hat\calA_{\textrm{\euro}\textrm{\yen}\textrm{\pounds}}$&
$ r^{new}_{\textrm{\euro}\textrm{\yen}}= r_{\textrm{\euro}\textrm{\pounds}}\cdot r_{\textrm{\pounds}\textrm{\yen}}$
& 12, 22
\\
11& $\Hat\calA_{\textrm{\pounds}\textrm{\yen}\textrm{\$}}$&
$ r^{new}_{\textrm{\pounds}\textrm{\yen}}= r_{\textrm{\$}\textrm{\yen}}\cdot r_{\textrm{\$}\textrm{\pounds}}^{-1}$
& 17, 23
\\
12& $\Hat\calA_{\textrm{\pounds}\textrm{\yen}\textrm{\euro}}$& $
r^{new}_{\textrm{\pounds}\textrm{\yen}}=
r_{\textrm{\euro}\textrm{\yen}}\cdot
r_{\textrm{\euro}\textrm{\pounds}}^{-1}$ & 18, 24
\end{tabular}
\end{table}
\begin{proposition}
For any arbitrage chain \eqref{arbsec}, and any initial exchange
rates $\calR$, there exists a chain $\hbA=\Hat\calA_1 \cdots
\Hat\calA_n $ of strong arbitrages such that
$\calR\hbA=\calR{\bA}$. Conversely, for any chain $\hbA=\Hat\calA_1
\cdots \Hat\calA_n $ of strong arbitrages, and any initial exchange
rates $\calR$, there exists a chain ${\bA}$ of arbitrages such that
$\calR\hbA=\calR{\bA}$.
\end{proposition}
This proposition reduces the investigation of the questions from
the previous section to that of analogous questions related to
chains of strong arbitrages.
Now we relate each strong arbitrage to a $6\times 6$ matrix
$B(\calA)$ as follows:
{\small\[ B_{\textrm{\$}\textrm{\euro}
\textrm{\pounds}}=B^{(1)} = \left(
\begin{smallmatrix-mod}
\zer& \zer& \zer& \zer& \zer& \zer \\ -1& 1& \zer& \zer& \zer& \zer \\ \zer& \zer& 1& \zer& \zer& \zer\\
1& \zer& \zer& 1& \zer& \zer \\ \zer& \zer& \zer& \zer& 1& \zer \\ \zer& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}
\right),\quad
B_{\textrm{\$} \textrm{\euro}\textrm{\yen}}=B^{(2)}=
\left(
\begin{smallmatrix-mod}
\zer& \zer& \zer& \zer& \zer& \zer \\ \zer& 1& \zer& \zer& \zer& \zer \\ \zer& \zer& 1& \zer& \zer& \zer \\
\zer& \zer& \zer& 1& \zer& \zer \\ -1& \zer& \zer& \zer& 1& \zer \\ 1& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}
\right),
\]
\[
B_{\textrm{\$} \textrm{\pounds} \textrm{\euro}}=B^{(3)}=
\left(
\begin{smallmatrix-mod}
1& \zer& \zer& 1& \zer& \zer \\ \zer& 1& \zer& 1& \zer& \zer \\ \zer& \zer& 1& \zer& \zer& \zer\\
\zer& \zer& \zer& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& 1& \zer \\ \zer& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}
\right),\quad
B_{\textrm{\$} \textrm{\pounds} \textrm{\yen}}=B^{(4)}=
\left(
\begin{smallmatrix-mod}
1& \zer& \zer& \zer& \zer& \zer \\ \zer& 1& \zer& \zer& \zer& \zer \\ \zer& \zer& 1& -1& \zer& \zer\\
\zer& \zer& \zer& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& 1& \zer \\ \zer& \zer& \zer& 1& \zer& 1
\end{smallmatrix-mod}
\right),
\]
\[
B_{\textrm{\$} \textrm{\yen} \textrm{\euro}}=B^{(5)}=
\left
(\begin{smallmatrix-mod}
1& \zer& \zer& \zer& \zer& 1\\ \zer& 1& \zer& \zer& \zer& \zer \\ \zer& \zer& 1& \zer& \zer& \zer\\
\zer& \zer& \zer& 1& \zer& \zer \\ \zer& \zer& \zer& \zer& 1& 1\\ \zer& \zer& \zer& \zer& \zer& \zer
\end{smallmatrix-mod}
\right),\quad
B_{\textrm{\$} \textrm{\yen} \textrm{\pounds}}=B^{(6)}=
\left
(\begin{smallmatrix-mod}
1& \zer& \zer& \zer& \zer& \zer \\ \zer& 1& \zer& \zer& \zer& \zer \\ \zer& \zer& 1& \zer& \zer& 1\\
\zer& \zer& \zer& 1& \zer& 1\\ \zer& \zer& \zer& \zer& 1& \zer \\ \zer& \zer& \zer& \zer& \zer& \zer
\end{smallmatrix-mod}
\right),
\]
\[
B_{\textrm{\euro} \textrm{\pounds} \textrm{\$}}=B^{(7)}=
\left(\begin{smallmatrix-mod}
1& -1& \zer& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& \zer& \zer \\ \zer& \zer& 1& \zer& \zer& \zer \\
\zer& 1& \zer& 1& \zer& \zer \\ \zer& \zer& \zer& \zer& 1& \zer \\ \zer& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}\right),\quad
B_{\textrm{\euro} \textrm{\pounds} \textrm{\yen}}=B^{(8)}=
\left(\begin{smallmatrix-mod}
1& \zer& \zer& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& \zer& \zer \\ \zer& -1& 1& \zer& \zer& \zer \\
\zer& \zer& \zer& 1& \zer& \zer \\ \zer& 1& \zer& \zer& 1& \zer \\ \zer& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}\right),
\]
\[
B_{\textrm{\euro} \textrm{\yen} \textrm{\$}}=B^{(9)}=
\left(\begin{smallmatrix-mod}
1& \zer& \zer& \zer& -1& \zer \\ \zer& 1& \zer& \zer& \zer& \zer \\ \zer& \zer& 1& \zer& \zer& \zer \\
\zer& \zer& \zer& 1& \zer& \zer \\ \zer& \zer& \zer& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& 1& 1
\end{smallmatrix-mod}\right),\quad
B_{\textrm{\euro} \textrm{\yen} \textrm{\pounds}}=B^{(10)}=
\left(\begin{smallmatrix-mod}
1& \zer& \zer& \zer& \zer& \zer \\ \zer& 1& \zer& \zer& 1& \zer \\ \zer& \zer& 1& \zer& 1& \zer \\
\zer& \zer& \zer& 1& \zer& \zer \\ \zer& \zer& \zer& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}\right),
\]
\[
B_{\textrm{\pounds} \textrm{\yen} \textrm{\$}}=B^{(11)}=
\left(\begin{smallmatrix-mod}
1& \zer& \zer& \zer& \zer& \zer \\ \zer& 1& \zer& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& \zer& \zer \\
\zer& \zer& -1& 1& \zer& \zer \\ \zer& \zer& \zer& \zer& 1& \zer \\ \zer& \zer& 1& \zer& \zer& 1
\end{smallmatrix-mod}\right),\quad
B_{\textrm{\pounds} \textrm{\yen} \textrm{\euro}}=B^{(12)}=
\left(\begin{smallmatrix-mod}
1& \zer& \zer& \zer& \zer& \zer \\ \zer& 1& -1& \zer& \zer& \zer \\ \zer& \zer& \zer& \zer& \zer& \zer \\
\zer& \zer& \zer& 1& \zer& \zer \\ \zer& \zer& 1& \zer& 1& \zer \\ \zer& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}\right).
\]}
For any ensemble $\calR = \left(r_{\textrm{\$}\textrm{\euro}},
r_{\textrm{\$}\textrm{\pounds}},r_{\textrm{\$}\textrm{\yen}},
r_{\textrm{\euro}\textrm{\pounds}},
r_{\textrm{\euro}\textrm{\yen}}, r_{\textrm{\pounds}\textrm{\yen}}
\right)$ we denote
\[
\log\calR =\left(\log r_{\textrm{\$}\textrm{\euro}},\ \log
r_{\textrm{\$}\textrm{\pounds}},\ \log
r_{\textrm{\$}\textrm{\yen}},\
\log r_{\textrm{\euro}\textrm{\pounds}},\ \log r_{\textrm{\euro}\textrm{\yen}},\
\log r_{\textrm{\pounds}\textrm{\yen}}
\right).
\]
\begin{proposition}\label{oldprop}
The equation $\log (\calR \Hat\calA^{(i)}) = (\log \calR) B^{(i)}$
holds for $i=1,\ldots , 12$.
\end{proposition}
\begin{proof}
Follows from definitions.
\end{proof}
\subsection{A Special Coordinate System}\label{S-scs}
In the six-dimensional real coordinate space $\bbR^{6}$ we
introduce the vectors
\[
{\bv}_{1}=(1,-1,0,1,0,0), \
{\bv}_{2}=(1,0,-1,0,1,0), \
{\bv}_{3}=(0,1,-1,0,0,1).
\]
By definition, for any ensemble $\calR$,
\[
\langle {\bv}_{1}, \log \calR \rangle= d_{\textrm{\euro}\textrm{\pounds}}(\calR) , \
\langle {\bv}_{2}, \log \calR \rangle= d_{\textrm{\euro}\textrm{\yen}}(\calR), \
\langle {\bv}_{3}, \log \calR \rangle= d_{\textrm{\pounds}\textrm{\yen}}(\calR),
\]
where $\langle \cdot, \cdot \rangle$ denotes the usual inner
product in $\bbR^{6}$.
Propositions~\ref{balp} and~\ref{oldprop} together imply
\begin{corollary}
The three-dimensional subspace $\langle {\bv}_{1}, {\bv} \rangle =
\langle {\bv}_{2}, {\bv} \rangle
=\langle {\bv}_{3}, {\bv} \rangle =0$
is invariant with respect to each linear operator ${\bv}\to
{\bv}B^{(i)}$, $i=1,\ldots, 12$.
\end{corollary}
We introduce in $\bbR^{6}$ the new basis
\[
\{{\be}_{1},{\be}_{2},{\be}_{3},{\bv}_{1},{\bv}_{2},{\bv}_{3}\};
\]
here ${\be}_{1}=(1,0,0,0,0,0)$, ${\be}_{2}=(0,1,0,0,0,0)$,
${\be}_{3}=(0,0,1,0,0,0)$. By the last corollary, in this basis the
matrices of the linear operators ${\bv}\to {\bv}B^{(i)}$ have the
block-triangular form:
\[
D^{(i)}=\left(
\begin{array}{ll}
{\bf 1} & {\bf 0} \\
H^{(i)} & G^{(i)}
\end{array}
\right) .
\]
Here
\[
{\bf 0}=\left(\begin{array}{lll}
0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{array}\right),
\quad
{\bf 1}=\left(\begin{array}{lll}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}\right),
\]
and $G^{(i)}, H^{(i)}$ are some $3\times 3$-matrices.
Denote
\[
{\bv}(\calR)= \left(
\log r_{\textrm{\$}\textrm{\euro}},\
\log r_{\textrm{\$}\textrm{\pounds}},\
\log r_{\textrm{\$}\textrm{\yen}},\
d_{\textrm{\euro}\textrm{\pounds}}(\calR),\
d_{\textrm{\euro}\textrm{\yen}}(\calR),\
d_{\textrm{\pounds}\textrm{\yen}}(\calR)
\right).
\]
\begin{proposition}\label{blockC}
The equality $ {\bv}(\calR \Hat{\calA}^{(i)})={\bv}(\calR)D^{(i)} $
holds for $i=1,\ldots, 12$.
\end{proposition}
\begin{proof}
Follows from Lemma~\ref{Qlem} and Proposition~\ref{oldprop}.
\end{proof}
The matrices $D^{(i)} $ may be written explicitly as
\begin{equation}\label{expl}
QB^{(i)}Q^{-1},
\end{equation}
where
\begin{equation}\label{explQ}
Q=\left(\begin{smallmatrix-mod}
1& \zer& \zer& \zer & \zer & \zer \\
0& 1& \zer&1& \zer& \zer \\
0& \zer& 1& \zer& \zer & \zer \\
1& -1& \zer& 1& \zer& \zer \\
1& \zer& -1& \zer& 1& \zer \\
0& 1& -1& \zer& \zer& 1
\end{smallmatrix-mod}\right),\quad
Q^{-1}=\left(\begin{smallmatrix-mod}
1& \zer& \zer& -1& -1& \zer \\
0& 1& \zer&1& \zer& -1\\
0& \zer& 1& \zer& 1& 1\\
0& \zer& \zer& 1& \zer& \zer \\
0& \zer& \zer& \zer& 1& \zer \\
0& \zer& \zer& \zer& \zer& 1
\end{smallmatrix-mod}\right).
\end{equation}
\begin{lemma}\label{Qlem}
The following equations are valid:
\[
G^{(1)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
\zer & -1 & \zer \\
\zer & 1 & \zer \\
\zer & \zer & 1
\end{smallmatrix-mod}\right),\quad
G^{(2)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
1 & \zer & \zer \\
-1 & \zer & \zer \\
\zer & \zer & 1
\end{smallmatrix-mod}\right),\quad
G^{(3)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
\zer & \zer & 1 \\
\zer & 1 & \zer \\
\zer & \zer & 1
\end{smallmatrix-mod}\right),
\]
\[
G^{(4)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
1 & \zer & \zer \\
\zer & 1 & \zer \\
1 & \zer & \zer
\end{smallmatrix-mod}\right),\quad
G^{(5)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
1 & \zer & \zer \\
\zer & \zer & -1 \\
\zer & \zer & 1
\end{smallmatrix-mod}\right),\quad
G^{(6)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
1 & \zer & \zer \\
\zer & 1 & \zer \\
\zer & -1 & \zer
\end{smallmatrix-mod}\right),
\]
\[
G^{(7)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
\zer & \zer & \zer \\
\zer & 1 & \zer \\
\zer & \zer & 1
\end{smallmatrix-mod}\right),\quad
G^{(8)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
\zer & \zer & \zer \\
1 & 1 & \zer \\
-1 & \zer & 1
\end{smallmatrix-mod}\right),\quad
G^{(9)}\hphantom{^{0}}=\left(\begin{smallmatrix-mod}
1 & \zer & \zer \\
\zer & \zer & \zer \\
\zer & \zer & 1
\end{smallmatrix-mod}\right),
\]
\[
G^{(10)}=\left(\begin{smallmatrix-mod}
1 & 1 & \zer \\
\zer & \zer & \zer \\
\zer & 1 & 1
\end{smallmatrix-mod}\right),\quad
G^{(11)}=\left(\begin{smallmatrix-mod}
1 & \zer & \zer \\
\zer & 1 & \zer \\
\zer & \zer & \zer
\end{smallmatrix-mod}\right),\quad
G^{(12)}=\left(\begin{smallmatrix-mod}
1 & \zer & -1 \\
\zer & 1 & 1 \\
\zer & \zer & \zer
\end{smallmatrix-mod}\right),
\]
and
\[
H^{(1)}=\left( \begin{smallmatrix-mod}
-1&\zer &\zer \\
\zer &\zer &\zer \\
\zer &\zer &\zer
\end{smallmatrix-mod}\right),\quad
H^{(2)}=\left( \begin{smallmatrix-mod}
\zer &\zer &\zer \\
-1&\zer &\zer \\
\zer &\zer &\zer
\end{smallmatrix-mod}\right),\quad
H^{(3)}=\left( \begin{smallmatrix-mod}
\zer &1&\zer \\
\zer &\zer &\zer \\
\zer &\zer &\zer
\end{smallmatrix-mod}\right),
\]
\[
H^{(4)}=\left( \begin{smallmatrix-mod}
\zer &\zer &\zer \\
\zer &\zer &\zer \\
\zer &-1&\zer
\end{smallmatrix-mod}\right),\quad
H^{(5)}=\left( \begin{smallmatrix-mod}
\zer &\zer &\zer \\
\zer &\zer &1\\
\zer &\zer &\zer
\end{smallmatrix-mod}\right),\quad
H^{(6)}=\left( \begin{smallmatrix-mod}
\zer &\zer &\zer \\
\zer &\zer &\zer \\
\zer &\zer &1
\end{smallmatrix-mod}\right),
\]
\[
H^{(i)}= { \bf 0},\quad i=7, \ldots , 12 .
\]
\end{lemma}
\begin{proof}
Follows by inspection from \eqref{expl}, \eqref{explQ}.
\end{proof}
\begin{proposition}\label{disc1P}
The discrepancy ensemble $\calD(\calR \Hat\calA)$ depends only on
$\calD(\calR)$ and $\Hat\calA$, and may be written as follows: $
\calD(\calR \Hat\calA^{(i)})= \calD(\calR) G^{(i)}$. Here $i$ is
the number of a strong arbitrage as listed in Table~\ref{starbT}.
\end{proposition}
\begin{proof}
Follows from Proposition~\ref{blockC}.
\end{proof}
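Proposition~\ref{disc1P} lends itself to a direct numerical check. The following sketch applies each of the twelve log-rate updates of Table~\ref{starbT} to a randomly chosen ensemble and compares the resulting discrepancy triplet with $\calD(\calR)G^{(i)}$, the matrices $G^{(i)}$ being transcribed from Lemma~\ref{Qlem}:

```python
import random

def discrepancies(x):
    # <v1, x>, <v2, x>, <v3, x> for x = log R, in the order
    # (r_$E, r_$P, r_$Y, r_EP, r_EY, r_PY).
    return (x[0] - x[1] + x[3], x[0] - x[2] + x[4], x[1] - x[2] + x[5])

# Log-rate updates of the 12 strong arbitrages (Table starbT):
# x[idx] <- x[j] + sign * x[k].
UPDATES = [
    (0, 1, 3, -1), (0, 2, 4, -1), (1, 0, 3, +1), (1, 2, 5, -1),
    (2, 0, 4, +1), (2, 1, 5, +1), (3, 1, 0, -1), (3, 4, 5, -1),
    (4, 2, 0, -1), (4, 3, 5, +1), (5, 2, 1, -1), (5, 4, 3, -1),
]

# The matrices G^(1), ..., G^(12), rows as printed in Lemma Qlem.
G = [
    [[0, -1, 0], [0, 1, 0], [0, 0, 1]],
    [[1, 0, 0], [-1, 0, 0], [0, 0, 1]],
    [[0, 0, 1], [0, 1, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 1, 0], [1, 0, 0]],
    [[1, 0, 0], [0, 0, -1], [0, 0, 1]],
    [[1, 0, 0], [0, 1, 0], [0, -1, 0]],
    [[0, 0, 0], [0, 1, 0], [0, 0, 1]],
    [[0, 0, 0], [1, 1, 0], [-1, 0, 1]],
    [[1, 0, 0], [0, 0, 0], [0, 0, 1]],
    [[1, 1, 0], [0, 0, 0], [0, 1, 1]],
    [[1, 0, 0], [0, 1, 0], [0, 0, 0]],
    [[1, 0, -1], [0, 1, 1], [0, 0, 0]],
]

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(6)]
d = discrepancies(x)
max_err = 0.0
for (idx, j, k, s), M in zip(UPDATES, G):
    y = list(x)
    y[idx] = x[j] + s * x[k]                       # apply the arbitrage
    lhs = discrepancies(y)                         # D(R A^(i))
    rhs = [sum(d[r] * M[r][c] for r in range(3))   # D(R) G^(i)
           for c in range(3)]
    max_err = max(max_err, max(abs(p - q) for p, q in zip(lhs, rhs)))
```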
By the last proposition, the discrepancy ensemble $\calD(\calR \hbA)$
related to an arbitrage chain $\hbA=\Hat\calA_{1}\cdots
\Hat\calA_{n}$ may be written as
\[
\calD(\calR \hbA)=\calD(\calR)\prod_{i=1}^{n}G_{i}.
\]
Therefore the set ${\bbG}$ of all possible products of the matrices
$G^{(i)}$ is of interest.
\subsection{Structure of the Set ${\bbG}$}\label{S-struct}
The following assertion is the key observation of our paper:
\begin{lemma}\label{Gfini}
The set ${\bbG}$ consists of 229 elements.
\end{lemma}
\begin{proof}
By inspection.
\end{proof}
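This claim can also be checked by machine. The sketch below (under the assumption that the matrices of Lemma~\ref{Qlem} are transcribed faithfully) closes the set of generators under matrix multiplication; by Lemma~\ref{Gfini} the closure should stabilise at 229 elements, in agreement with the count $6\cdot 24 + 7\cdot 12 + 1$ implied by Lemma~\ref{comp} below:

```python
# Generators: the matrices G^(1), ..., G^(12) of Lemma Qlem,
# flattened row-major into 9-tuples so they are hashable.
GENS = [
    (0, -1, 0,  0, 1, 0,  0, 0, 1), (1, 0, 0,  -1, 0, 0,  0, 0, 1),
    (0, 0, 1,  0, 1, 0,  0, 0, 1),  (1, 0, 0,  0, 1, 0,  1, 0, 0),
    (1, 0, 0,  0, 0, -1,  0, 0, 1), (1, 0, 0,  0, 1, 0,  0, -1, 0),
    (0, 0, 0,  0, 1, 0,  0, 0, 1),  (0, 0, 0,  1, 1, 0,  -1, 0, 1),
    (1, 0, 0,  0, 0, 0,  0, 0, 1),  (1, 1, 0,  0, 0, 0,  0, 1, 1),
    (1, 0, 0,  0, 1, 0,  0, 0, 0),  (1, 0, -1,  0, 1, 1,  0, 0, 0),
]

def mul(A, B):
    # Product of two 3x3 matrices stored as flat 9-tuples.
    return tuple(sum(A[3 * r + k] * B[3 * k + c] for k in range(3))
                 for r in range(3) for c in range(3))

# Semigroup closure: all finite products of the generators.
S = set(GENS)
frontier = set(GENS)
while frontier:
    new = {mul(A, B) for A in S for B in frontier}
    new |= {mul(B, A) for A in S for B in frontier}
    frontier = new - S
    S |= frontier

ZERO = (0,) * 9
# The zero matrix arises, e.g., as the product G^(7) G^(9) G^(11);
# Lemma Gfini asserts that len(S) equals 229.
```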
Denote by $\Hat{\mathbb A}$ the totality of all finite chains of
strong arbitrages.
\begin{corollary}\label{Dfini}
For a given $\calR$ the set $ \bbD(\calR)=\{\calD(\calR \hbA): \
\hbA\in \Hat{\mathbb A}\} $ consists of less than 230 elements.
\end{corollary}
Let us discuss briefly the structure of the set ${\bbG}$. A subset
${\bG}$ of ${\bbG}$ is called a connected component if for any
$G_{1},G_{2}\in {\bG}$ there exists $G\in {\bbG}$ satisfying
$G_{2}=G_{1}G$. By definition, distinct connected components do not
intersect.
\begin{lemma}\label{comp}
The set ${\bbG}$ is partitioned into 14 connected components
$U_{1},\ldots, U_{14}$. Each of the first six connected components
consists of 24 matrices of rank two; each of the connected
components $U_{7}, \ldots, U_{13}$ consists of 12 matrices of rank
one; the last component contains the single zero matrix.
\end{lemma}
The sets $U_{1},\ldots, U_{6}$ may be characterised by the following
inclusions:
\[
G^{(2i-1)},\ G^{(2i)}\in U_{i},\quad i=1, \ldots ,6.
\]
To identify the connected components $U_{7}, \ldots, U_{13}$ we
list below the lexicographically smallest matrices from these
components
\[
\left( \begin{smallmatrix-mod}
-1&-1&\zer \\
\zer &\zer &\zer \\
\zer &\zer &\zer
\end{smallmatrix-mod}\right)\in U_{7},\hphantom{_{0}}\quad
\left( \begin{smallmatrix-mod}
\zer &\zer &\zer \\
-1&-1&\zer \\
\zer &\zer &\zer
\end{smallmatrix-mod}\right)\in U_{8},\hphantom{_{0}}\quad
\left( \begin{smallmatrix-mod}
\zer &\zer &\zer \\
\zer &\zer &\zer \\
-1&-1&\zer
\end{smallmatrix-mod}\right)\in U_{9},\hphantom{_{0}}
\]
\[
\left( \begin{smallmatrix-mod}
-1&-1&\zer \\
1&1&\zer \\
\zer &\zer &\zer
\end{smallmatrix-mod}\right)\in U_{10},\quad
\left( \begin{smallmatrix-mod}
-1&-1&\zer \\
\zer &\zer &\zer \\
-1&-1&\zer
\end{smallmatrix-mod}\right)\in U_{11},\quad
\left( \begin{smallmatrix-mod}
\zer &\zer &\zer \\
-1&-1&\zer \\
1&1&\zer
\end{smallmatrix-mod}\right)\in U_{12},
\]
\[
\left( \begin{smallmatrix-mod}
-1&-1&\zer \\
1&1&\zer \\
-1&-1&\zer
\end{smallmatrix-mod}\right)\in U_{13}.
\]
One can move from one connected component $U_{i}$ to another
component $U_{j}$ by applying a matrix $G^{(k)}$, $k=1,\ldots, 12$.
Let us describe the set of possible transitions. We will use the
notation $U_{i} \succ U_{j}$ if such a transition is possible.
\begin{lemma}\label{trans}
The following relationships hold:
\begin{alignat*}{3}
U_{1} &\succ U_{9},U_{10},U_{13},\quad & U_{2} &\succ
U_{8},U_{11},U_{13},\quad &U_{3} &\succ
U_{7},U_{12},U_{13}, \\
U_{4} &\succ U_{8},U_{9},U_{12},\quad & U_{5} &\succ
U_{7},U_{9},U_{11},\quad &U_{6} &\succ U_{7},U_{8},U_{10}.
\end{alignat*}
Also $U_{i} \succ U_{14}$, $i=1, \ldots,13$.
\end{lemma}
\begin{proof} By inspection.
\end{proof}
\begin{lemma}
For any $G\in{\bbG}$ either $G$ or $G^2$ or $G^3$ is a projector.
\end{lemma}
\begin{proof} By inspection.
\end{proof}
\subsection{Discrepancy Dynamics}\label{S-discrep}
The structure of the set ${\bbG}$ explained above induces a
corresponding structure on the set of discrepancies, which we
discuss below. We say that a set ${\bD}$ of discrepancies is a
connected component if for any $\calD_1,\calD_2\in {\bD}$ there
exists an arbitrage chain ${\bA}$ satisfying $ \calD_1{\bA}=\calD_2$.
For given reals $a,b$ we denote by ${\bD}(a,b)$ the set of distinct
triplets from the collection
\begin{equation}\label{24disc}
\begin{alignedat}{2}
\calD_{1}(a,b)&=\left(a,b,-a+b\right),\quad& \calD_{2}(a,b)&=\left(-a+b,b,a\right),\\
\calD_{3}(a,b)&=\left(a,a-b,-b\right),\quad& \calD_{4}(a,b)&=\left(-a+b,-a,-b\right),\\
\calD_{5}(a,b)&=\left(-b,a-b,a\right),\quad& \calD_{6}(a,b)&=\left(-b,-a,-a+b\right),\\
\calD_{7}(a,b)&=\left(0,b,-a+b\right),\quad&\calD_{8}(a,b)&=\left(a,0,-a+b\right),\\
\calD_{9}(a,b)&=\left(a,b,0\right),\quad& \calD_{10}(a,b)&=\left(0,b,a\right),\\
\calD_{11}(a,b)&=\left(-a+b,0,a\right),\quad& \calD_{12}(a,b)&=\left(-a+b,b,0\right),\\
\calD_{13}(a,b)&=\left(0,-a,-b\right),\quad& \calD_{14}(a,b)&=\left(-a+b,0,-b\right),\\
\calD_{15}(a,b)&=\left(-a+b,-a,0\right),\quad& \calD_{16}(a,b)&=\left(0,a-b,-b\right),\\
\calD_{17}(a,b)&=\left(a,0,-b\right),\quad& \calD_{18}(a,b)&=\left(a,a-b,0\right),\\
\calD_{19}(a,b)&=\left(0,a-b,a\right),\quad& \calD_{20}(a,b)&=\left(-b,0,a\right),\\
\calD_{21}(a,b)&=\left(-b,a-b,0\right),\quad& \calD_{22}(a,b)&=\left(0,-a,-a+b\right),\\
\calD_{23}(a,b)&=\left(-b,0,-a+b\right),\quad& \calD_{24}(a,b)&=\left(-b,-a,0\right).
\end{alignedat}
\end{equation}
\begin{lemma}
Each set ${\bD}(a,b)$ is a connected component, and each connected
component coincides with a certain set ${\bD}(a,b)$.
\end{lemma}
\begin{proof}
This statement may be proved by inspection.
\end{proof}
Let us discuss briefly the structure of the sets ${\bD}(a,b)$ for
different values of $a,b$. Clearly, ${\bD}(0,0)$ consists of the
single zero triplet $\calD_{0}=(0,0,0)$. The connected components
${\bD}(\pm a,0)$, ${\bD}(0,\pm a)$, ${\bD}(a,a)$, ${\bD}(-a,-a)$
coincide and consist of the following 12 elements:
\begin{equation}\label{12disc}
\begin{alignedat}{2}
\calD_{1}(a)&=a(\hphantom{-}0,\hphantom{-}0,\hphantom{-}1),\quad&
\calD_{2}(a)&=a(-1,\hphantom{-}0,\hphantom{-}1),\\
\calD_{3}(a)&=a(-1,\hphantom{-}0,\hphantom{-}0),\quad&
\calD_{4}(a)&=a(-1,-1,\hphantom{-}0),\\
\calD_{5}(a)&=a(\hphantom{-}0,-1,\hphantom{-}0),\quad&
\calD_{6}(a)&=a(\hphantom{-}0,-1,-1),\\
\calD_{7}(a)&=a(\hphantom{-}0,\hphantom{-}0,-1),\quad&
\calD_{8}(a)&=a(\hphantom{-}1,\hphantom{-}0,-1),\\
\calD_{9}(a)&=a(\hphantom{-}1,\hphantom{-}0,\hphantom{-}0),\quad&
\calD_{10}(a)&=a(\hphantom{-}1,\hphantom{-}1,\hphantom{-}0),\\
\calD_{11}(a)&=a(\hphantom{-}0,\hphantom{-}1,\hphantom{-}0),\quad&
\calD_{12}(a)&=a(\hphantom{-}0,\hphantom{-}1,\hphantom{-}1).
\end{alignedat}
\end{equation}
We use the notation ${\bD}(a)$ for this set. Geometrically the set
${\bD}(a)$ represents the vertices of a partly distorted
cuboctahedron, or triangular orthobicupola, shown in Fig.~\ref{tc}.
The structure of this component will be explained in more detail in
Section~\ref{S-proofT}. The sets ${\bD}(a, -a)$, ${\bD}(a, 2a)$,
and ${\bD}(a, a/2)$ also consist of 12 elements each. Geometrically
these sets represent the vertices of a distorted truncated
tetrahedron, shown in Fig.~\ref{tt}. Otherwise a set ${\bD}(a,b)$
consists of 24 elements and represents the vertices of a distorted
truncated octahedron, shown in Fig.~\ref{to}. The structure of this
component will be explained in more detail in
Section~\ref{knotsSS}.
\begin{figure}[htbp!]
\begin{center}
\hfill\includegraphics*[width=0.25\textwidth]{hull3d0505b}
\hfill
\includegraphics*[width=0.25\textwidth]{hull3d0505x}
\hfill~
\caption{Left: the form of a polyhedron with vertices ${\bD}(a)$, $a\not= 0$;
Right: the same polyhedron transparent.\label{tc}}
\end{center}
\end{figure}
\begin{figure}[htbp!]
\begin{center}
\hfill\includegraphics*[width=0.25\textwidth]{hull3dm1010b}
\hfill
\includegraphics*[width=0.25\textwidth]{hull3dm1010x}
\hfill~
\caption{Left: the form of the polyhedra with vertices ${\bD}(a,-a)$,
${\bD}(a,2a)$, or ${\bD}(a,a/2)$, $a\not= 0$;
Right: the same polyhedron, transparent.\label{tt}}
\end{center}
\end{figure}
\begin{figure}[htbp!]
\begin{center}
\hfill\includegraphics*[width=0.25\textwidth]{hull3d05m1b}
\hfill
\includegraphics*[width=0.25\textwidth]{hull3d05m1x}
\hfill~
\caption{Left: a typical form of a generic polyhedron
with vertices
${\bD}(a,b)$;
Right: the same polyhedron transparent.\label{to}}
\end{center}
\end{figure}
We also formulate a corollary of Lemma~\ref{trans}. For a set
${\bD}$ of discrepancies we denote by $G({\bD})$ the collection of
elements of the form $\calD G^{(i)}$, $\calD\in {\bD}$, $i=1,
\ldots , 12$.
\begin{corollary}\label{Dfini2} The equality
\[
G({\bD}(a,b)) ={\bD}(a,b)\bigcup {\bD}(a)\bigcup {\bD}(b)\bigcup {\bD}(a-b)
\]
holds for $a\not= b$. Also $G({\bD}(a))={\bD}(a)\bigcup \{(0,0,0)\}$.
\end{corollary}
Some discrepancy triplets do not belong to any connected component;
however any element of the form $\calD G^{(i)}$ must belong to a
connected component. More precisely:
\begin{proposition}\label{colP}
The following inclusions hold:
\begin{alignat*}{2}
(a,b,c)G^{(1,2)}&\in {\bD}(c,-a+b),\quad&(a,b,c)G^{(3,4)}&\in {\bD}(a-c,b),\\
(a,b,c)G^{(5,6)}&\in {\bD}(-b+c,a),\quad&(a,b,c)G^{(7,8)}&\in {\bD}(c,b),\\
(a,b,c)G^{(9,10)}&\in {\bD}(a,-c),\quad&(a,b,c)G^{(11,12)}&\in
{\bD}(a,b).
\end{alignat*}
\end{proposition}
\begin{proof} This assertion may be proved by inspection.
\end{proof}
\subsection{Incremental Dynamics}\label{S-incdyn}
For a given sextuple $\calR$ we denote by $\calR'$ the triplet of
the first three components of $\calR$: $
\calR'=\left(r_{\textrm{\$}\textrm{\euro}},
r_{\textrm{\$}\textrm{\pounds}},
r_{\textrm{\$}\textrm{\yen}}\right)$. Denote further $
\calI(\calR,\Hat\calA)=\log (\calR\Hat\calA)'-\log\calR', $ where
$\Hat\calA$ is a strong arbitrage.
\begin{proposition}
\label{incrP} $\calI(\calR,\Hat\calA)$ depends only on $\Hat\calA$
and $\calD(\calR)$ and may be described as follows:
\begin{alignat*}{3}
\calI(\calR,\Hat\calA^{(1)})&=d(\calR)H^{(1)}=&-d_{\textrm{\euro}\textrm{\pounds}}(\calR)\left(
1,0,0 \right), \\
\calI(\calR,\Hat\calA^{(2)})&=d(\calR)H^{(2)}=&-d_{\textrm{\euro}\textrm{\yen}}(\calR)\left( 1,0,0 \right) , \\
\calI(\calR,\Hat\calA^{(3)})&=d(\calR)H^{(3)}=&\hphantom{-}d_{\textrm{\euro}\textrm{\pounds}}(\calR)\left(0,
1,0 \right),\\
\calI(\calR,\Hat\calA^{(4)})&=d(\calR)H^{(4)}=&-d_{\textrm{\pounds}\textrm{\yen}}(\calR)\left(0, 1,0 \right), \\
\calI(\calR,\Hat\calA^{(5)})&=d(\calR)H^{(5)}=&\hphantom{-}d_{\textrm{\euro}\textrm{\yen}}(\calR)
\left( 0,0,1\right), \\
\calI(\calR,\Hat\calA^{(6)})&=d(\calR)H^{(6)}=&\hphantom{-}d_{\textrm{\pounds}\textrm{\yen}}(\calR)
\left( 0,0,1\right).
\end{alignat*}
Also the equalities
$\calI(\calR,\Hat\calA^{(i)})=d(\calR)H^{(i)}=(0,0,0)$ hold for
$i=7,8,9,10,11,12$.
\end{proposition}
\begin{proof}
Follows from Proposition~\ref{blockC}.
\end{proof}
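Proposition~\ref{incrP} admits the same kind of numerical check as Proposition~\ref{disc1P}. A sketch, using the log-rate conventions of Section~\ref{S-scs} and the updates of Table~\ref{starbT} (the arbitrages $i\geq 7$ change only cross rates, so their increments vanish trivially):

```python
import random

def discrepancies(x):
    # <v1, x>, <v2, x>, <v3, x> for the log-rate sextuple x.
    return (x[0] - x[1] + x[3], x[0] - x[2] + x[4], x[1] - x[2] + x[5])

# Log-rate updates of the strong arbitrages 1..6 (Table starbT):
# x[idx] <- x[j] + sign * x[k].
UPDATES = [(0, 1, 3, -1), (0, 2, 4, -1), (1, 0, 3, +1),
           (1, 2, 5, -1), (2, 0, 4, +1), (2, 1, 5, +1)]

# Claimed increments: (sign, index of the discrepancy, coordinate).
EXPECT = [(-1, 0, 0), (-1, 1, 0), (+1, 0, 1),
          (-1, 2, 1), (+1, 1, 2), (+1, 2, 2)]

random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(6)]
d = discrepancies(x)
max_err = 0.0
for (idx, j, k, s), (sgn, di, coord) in zip(UPDATES, EXPECT):
    y = list(x)
    y[idx] = x[j] + s * x[k]
    incr = [y[t] - x[t] for t in range(3)]   # I(R, A^(i))
    want = [0.0, 0.0, 0.0]
    want[coord] = sgn * d[di]
    max_err = max(max_err, max(abs(p - q) for p, q in zip(incr, want)))
```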
\subsection{Proof of Theorem~\ref{arbH}}\label{S-proofT}
The proof proceeds by tracing in detail the dynamics of the
arbitrage discrepancies. In this section we use the shorthand
notation $\calD_{i}$ instead of $\calD_{i}(a)$.
\begin{lemma}
\label{specialC} For any initial exchange rate ensemble belonging
to the list \eqref{list}, and for any arbitrage chain, the
corresponding sequence of discrepancies includes only elements from
the union $\{\calD_{0}\} \bigcup {\bD}(a)$, $a=\log\alpha$; see
\eqref{12disc}. The possible transition paths, arising from the
strong arbitrages listed in Table~\ref{tab1}, are given in
Table~\ref{tab3}.
Figure~\ref{GraphLeha1} plots the corresponding graph.
Figure~\ref{GraphLeha1_24} plots a similar graph, where the numbers
of the arbitrages from Table~\ref{tab1} are included, instead of
the numbers of strong arbitrages.
\end{lemma}
\begin{proof}
Follows by inspection from Proposition~\ref{disc1P}.
\end{proof}
If we ignore the zero vertex $\calD_0$, the edges leading to this
vertex, and the directions of the remaining edges, the graph
plotted in Fig.~\ref{GraphLeha1} admits another, polyhedral,
representation, given in Fig.~\ref{GraphLeha2}. The corresponding
polyhedron is a distorted triangular orthobicupola, shown in
Fig.~\ref{tc}. The adjacency matrix $I$ of the graph plotted in
Fig.~\ref{GraphLeha2} is as follows:
\[
I=
\left(
\begin{smallmatrix-mod}
1& 1& 0& 0& 1& 0& 0& 0& 1& 0& 0& 1\\
1& 1& 1& 1& 0& 0& 0& 0& 0& 0& 0& 1\\
0& 1& 1& 1& 0& 0& 1& 0& 0& 0& 1& 0\\
0& 1& 1& 1& 1& 1& 0& 0& 0& 0& 0& 0\\
1& 0& 0& 1& 1& 1& 0& 0& 1& 0& 0& 0\\
0& 0& 0& 1& 1& 1& 1& 1& 0& 0& 0& 0\\
0& 0& 1& 0& 0& 1& 1& 1& 0& 0& 1& 0\\
0& 0& 0& 0& 0& 1& 1& 1& 1& 1& 0& 0\\
1& 0& 0& 0& 1& 0& 0& 1& 1& 1& 0& 0\\
0& 0& 0& 0& 0& 0& 0& 1& 1& 1& 1& 1\\
0& 0& 1& 0& 0& 0& 1& 0& 0& 1& 1& 1\\
1& 1& 0& 0& 0& 0& 0& 0& 0& 1& 1& 1
\end{smallmatrix-mod}
\right) .
\]
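As a sanity check (a sketch; the matrix is transcribed verbatim from the display above), one can verify that $I$ is symmetric and that, once the unit diagonal is discarded, every vertex has degree four, giving 24 edges, the edge count of a triangular orthobicupola:

```python
I = [
    [1,1,0,0,1,0,0,0,1,0,0,1],
    [1,1,1,1,0,0,0,0,0,0,0,1],
    [0,1,1,1,0,0,1,0,0,0,1,0],
    [0,1,1,1,1,1,0,0,0,0,0,0],
    [1,0,0,1,1,1,0,0,1,0,0,0],
    [0,0,0,1,1,1,1,1,0,0,0,0],
    [0,0,1,0,0,1,1,1,0,0,1,0],
    [0,0,0,0,0,1,1,1,1,1,0,0],
    [1,0,0,0,1,0,0,1,1,1,0,0],
    [0,0,0,0,0,0,0,1,1,1,1,1],
    [0,0,1,0,0,0,1,0,0,1,1,1],
    [1,1,0,0,0,0,0,0,0,1,1,1],
]
n = len(I)
symmetric = all(I[r][c] == I[c][r] for r in range(n) for c in range(n))
degrees = [sum(row) - 1 for row in I]   # discard the unit diagonal
edges = sum(degrees) // 2
```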
\begin{table}[!htbp]
\caption{Transitions caused by the strong arbitrages from
Table~\ref{tab1}}\label{tab3}
\begin{tabular}
{l|llllllllllll}
& $ \calD_{1} $ & $ \calD_{2} $ & $ \calD_{3} $ & $ \calD_{4} $ & $ \calD_{5} $ & $ \calD_{6} $ & $ \calD_{7} $ & $ \calD_{8} $ & $ \calD_{9} $ & $ \calD_{10} $ & $ \calD_{11} $ & $ \calD_{12}$ \\
\hline\\
$\Hat\calA^{(1)} $ & $ \calD_{1} $ & $ \calD_{12} $ & $ \calD_{11} $ & $ \calD_{0} $ & $ \calD_{5} $ & $ \calD_{6} $ & $ \calD_{7} $ & $ \calD_{6} $ & $ \calD_{5} $ & $ \calD_{0} $ & $ \calD_{11} $ & $ \calD_{12}$ \\
$\Hat\calA^{(2)} $ & $ \calD_{1} $ & $ \calD_{2} $ & $ \calD_{3} $ & $ \calD_{0} $ & $ \calD_{9} $ & $ \calD_{8} $ & $ \calD_{7} $ & $ \calD_{8} $ & $ \calD_{9} $ & $ \calD_{0} $ & $ \calD_{3} $ & $ \calD_{2}$ \\
$\Hat\calA^{(3)} $ & $ \calD_{1} $ & $ \calD_{0} $ & $ \calD_{7} $ & $ \calD_{6} $ & $ \calD_{5} $ & $ \calD_{6} $ & $ \calD_{7} $ & $ \calD_{0} $ & $ \calD_{1} $ & $ \calD_{12} $ & $ \calD_{11} $ & $ \calD_{12}$ \\
$\Hat\calA^{(4)} $ & $ \calD_{1} $ & $ \calD_{0} $ & $ \calD_{3} $ & $ \calD_{4} $ & $ \calD_{5} $ & $ \calD_{4} $ & $ \calD_{3} $ & $ \calD_{0} $ & $ \calD_{9} $ & $ \calD_{10} $ & $ \calD_{11} $ & $ \calD_{10}$ \\
$\Hat\calA^{(5)} $ & $ \calD_{1} $ & $ \calD_{2} $ & $ \calD_{3} $ & $ \calD_{2} $ & $ \calD_{1} $ & $ \calD_{0} $ & $ \calD_{7} $ & $ \calD_{8} $ & $ \calD_{9} $ & $ \calD_{8} $ & $ \calD_{7} $ & $ \calD_{0}$ \\
$\Hat\calA^{(6)} $ & $ \calD_{5} $ & $ \calD_{4} $ & $ \calD_{3} $ & $ \calD_{4} $ & $ \calD_{5} $ & $ \calD_{0} $ & $ \calD_{11} $ & $ \calD_{10} $ & $ \calD_{9} $ & $ \calD_{10} $ & $ \calD_{11} $ & $ \calD_{0}$ \\
$\Hat\calA^{(7)} $ & $ \calD_{1} $ & $ \calD_{1} $ & $ \calD_{0} $ & $ \calD_{5} $ & $ \calD_{5} $ & $ \calD_{6} $ & $ \calD_{7} $ & $ \calD_{7} $ & $ \calD_{0} $ & $ \calD_{11} $ & $ \calD_{11} $ & $ \calD_{12}$ \\
$\Hat\calA^{(8)} $ & $ \calD_{2} $ & $ \calD_{2} $ & $ \calD_{0} $ & $ \calD_{4} $ & $ \calD_{4} $ & $ \calD_{6} $ & $ \calD_{8} $ & $ \calD_{8} $ & $ \calD_{0} $ & $ \calD_{10} $ & $ \calD_{10} $ & $ \calD_{12}$ \\
$\Hat\calA^{(9)} $ & $ \calD_{1} $ & $ \calD_{2} $ & $ \calD_{3} $ & $ \calD_{3} $ & $ \calD_{0} $ & $ \calD_{7} $ & $ \calD_{7} $ & $ \calD_{8} $ & $ \calD_{9} $ & $ \calD_{9} $ & $ \calD_{0} $ & $ \calD_{1}$ \\
$\Hat\calA^{(10)} $ & $ \calD_{12} $ & $ \calD_{2} $ & $ \calD_{3} $ & $ \calD_{3} $ & $ \calD_{0} $ & $ \calD_{6} $ & $ \calD_{6} $ & $ \calD_{8} $ & $ \calD_{9} $ & $ \calD_{10} $ & $ \calD_{0} $ & $ \calD_{12}$ \\
$\Hat\calA^{(11)} $ & $ \calD_{0} $ & $ \calD_{3} $ & $ \calD_{3} $ & $ \calD_{4} $ & $ \calD_{5} $ & $ \calD_{5} $ & $ \calD_{0} $ & $ \calD_{9} $ & $ \calD_{9} $ & $ \calD_{10} $ & $ \calD_{11} $ & $ \calD_{10}$ \\
$\Hat\calA^{(12)} $ & $ \calD_{0} $ & $ \calD_{2} $ & $ \calD_{2} $
& $ \calD_{4} $ & $ \calD_{6} $ & $ \calD_{6} $ & $ \calD_{0} $ & $
\calD_{8} $ & $ \calD_{8} $ & $ \calD_{10} $ & $ \calD_{12} $ & $
\calD_{12}$
\end{tabular}
\end{table}
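Table~\ref{tab3} can be checked mechanically. The sketch below (an illustrative transcription of the table, not part of any proof) verifies that the transitions are closed in $\{\calD_0,\ldots,\calD_{12}\}$ and that from every nonzero discrepancy some strong arbitrage leads directly to $\calD_0$:

```python
# T[k][i-1] is the index of the discrepancy reached from D_i by the strong
# arbitrage number k, transcribed from Table 3.  Illustrative only.
T = {
    1:  [1, 12, 11, 0, 5, 6, 7, 6, 5, 0, 11, 12],
    2:  [1, 2, 3, 0, 9, 8, 7, 8, 9, 0, 3, 2],
    3:  [1, 0, 7, 6, 5, 6, 7, 0, 1, 12, 11, 12],
    4:  [1, 0, 3, 4, 5, 4, 3, 0, 9, 10, 11, 10],
    5:  [1, 2, 3, 2, 1, 0, 7, 8, 9, 8, 7, 0],
    6:  [5, 4, 3, 4, 5, 0, 11, 10, 9, 10, 11, 0],
    7:  [1, 1, 0, 5, 5, 6, 7, 7, 0, 11, 11, 12],
    8:  [2, 2, 0, 4, 4, 6, 8, 8, 0, 10, 10, 12],
    9:  [1, 2, 3, 3, 0, 7, 7, 8, 9, 9, 0, 1],
    10: [12, 2, 3, 3, 0, 6, 6, 8, 9, 10, 0, 12],
    11: [0, 3, 3, 4, 5, 5, 0, 9, 9, 10, 11, 10],
    12: [0, 2, 2, 4, 6, 6, 0, 8, 8, 10, 12, 12],
}
# Closure: every transition stays inside {D_0, ..., D_12}.
assert all(0 <= t <= 12 for row in T.values() for t in row)
# From every nonzero discrepancy some strong arbitrage leads directly to D_0.
reach0 = {i: any(T[k][i - 1] == 0 for k in T) for i in range(1, 13)}
assert all(reach0.values())
print(sorted(reach0))
```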
\begin{figure}[!htbp]
\begin{center}
\includegraphics*{ChBD}
\caption{Graph of the transitions caused by the strong arbitrages.
\label{GraphLeha1}}
\end{center}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\includegraphics*{ChBD24}
\caption{The previous graph with the arbitrage numbers, instead of
the strong arbitrage numbers. \label{GraphLeha1_24}}
\end{center}
\end{figure}
\begin{figure}[!htbp]
\begin{center}
\includegraphics*{pChBD}
\caption{The polyhedral representation of the principal
graph.\label{GraphLeha2}}
\end{center}
\end{figure}
Now let us deal with the coupled discrepancies and the incremental
dynamics.
\begin{corollary}\label{specialIC} For any arbitrage chain the
corresponding sequence of increments includes only the zero triplet
$\calI_{0}=(0,0,0)$ or one of the following six triplets:
\begin{alignat*}{3}
\calI_{1}&=a(1,\hphantom{-}0,\hphantom{-}0),\quad&
\calI_{2}&=a({-}1,\hphantom{-}0,\hphantom{-}0),\quad&
\calI_{3}&=a(0,\hphantom{-}1,\hphantom{-}0),\\
\calI_{4}&=a(0,{-}1,\hphantom{-}0),\quad&
\calI_{5}&=a(\hphantom{-}0,\hphantom{-}0,\hphantom{-}1),\quad&
\calI_{6}&=a(0,\hphantom{-}0,{-}1).
\end{alignat*}
\end{corollary}
The dynamics of the increments $\calI$ is conveniently visualised
in Fig.~\ref{GraphLeha3}.
\begin{figure}[!htbp]
\begin{center}
\includegraphics*{plabD}
\caption{The increment dynamics graph\label{GraphLeha3}}
\end{center}
\end{figure}
The correctness of this description of the increment dynamics
follows immediately from Lemma~\ref{specialC} and
Proposition~\ref{incrP}. The legitimacy of the algorithms relevant
to Theorem~\ref{arbH}, and therefore the proofs of Theorem
\ref{arbH} and Proposition~\ref{arbA2H}, follow from
Figs.~\ref{GraphLeha1_24} and~\ref{GraphLeha3}.
We note also that the 24-periodic chain of arbitrages from
Proposition~\ref{32} was found by inspecting
Figs.~\ref{GraphLeha1_24} and~\ref{GraphLeha3}. The corresponding
route is quite natural from this perspective, and is given by
\begin{equation}\label{geomrout}
\begin{alignedat}{17}
\calD_{10}&\to&~\calD_{11}&\to&~\calD_{10}&\to&~\calD_{12}&\to&~\calD_{1}&\to& ~\calD_{12}&\to&
~\calD_{2}&\to&~\calD_{3}&\to&\\
\calD_{2}&\to&\calD_{4}& \to& \calD_{5}&\to&\calD_{4}&\to&
\calD_{6}&\to& \calD_{7}&\to& \calD_{6}& \to& \calD_{8}&\to\\
\calD_{9}&\to& \calD_{8}&\to&
\calD_{10}&\to&\calD_{8}&\to& \calD_{6}&\to& \calD_{4}&\to& \calD_{2}&\to&~\calD_{12}&\to&~\calD_{10}.
\end{alignedat}
\end{equation}
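This route can be checked directly against Table~\ref{tab3}: in the sketch below (an illustrative transcription, not part of the proof), every consecutive pair in the 24-step closed route is realised by at least one strong arbitrage:

```python
# T[k][i-1] is the discrepancy index reached from D_i by the strong arbitrage
# number k, transcribed from Table 3.  Illustrative only.
T = {
    1:  [1, 12, 11, 0, 5, 6, 7, 6, 5, 0, 11, 12],
    2:  [1, 2, 3, 0, 9, 8, 7, 8, 9, 0, 3, 2],
    3:  [1, 0, 7, 6, 5, 6, 7, 0, 1, 12, 11, 12],
    4:  [1, 0, 3, 4, 5, 4, 3, 0, 9, 10, 11, 10],
    5:  [1, 2, 3, 2, 1, 0, 7, 8, 9, 8, 7, 0],
    6:  [5, 4, 3, 4, 5, 0, 11, 10, 9, 10, 11, 0],
    7:  [1, 1, 0, 5, 5, 6, 7, 7, 0, 11, 11, 12],
    8:  [2, 2, 0, 4, 4, 6, 8, 8, 0, 10, 10, 12],
    9:  [1, 2, 3, 3, 0, 7, 7, 8, 9, 9, 0, 1],
    10: [12, 2, 3, 3, 0, 6, 6, 8, 9, 10, 0, 12],
    11: [0, 3, 3, 4, 5, 5, 0, 9, 9, 10, 11, 10],
    12: [0, 2, 2, 4, 6, 6, 0, 8, 8, 10, 12, 12],
}
# The closed 24-step route of the display above.
route = [10, 11, 10, 12, 1, 12, 2, 3, 2, 4, 5, 4, 6, 7, 6,
         8, 9, 8, 10, 8, 6, 4, 2, 12, 10]
assert route[0] == route[-1] and len(route) - 1 == 24
# Each step D_s -> D_t must be realised by at least one strong arbitrage.
steps_ok = all(any(T[k][s - 1] == t for k in T)
               for s, t in zip(route, route[1:]))
assert steps_ok
print(steps_ok)
```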
\subsection{Commuters, Terminals and Knots}\label{knotsSS}
Now we move to a proof of Theorem~\ref{irratBC} and Proposition
\ref{arbBH}. The case $d_{\textrm{\euro}\textrm{\pounds}}=
d_{\textrm{\euro}\textrm{\yen}}$ has been considered in
Section~\ref{directSS}. Thus we can assume that
$d_{\textrm{\euro}\textrm{\pounds}}\not=
d_{\textrm{\euro}\textrm{\yen}}$.
The focus is again on the dynamics of the exchange rate
discrepancies. The set of all discrepancies achievable from
$\calD=(a, b, 0)$ contains altogether 61 different elements, see
Corollary~\ref{Dfini2}. The connected component ${\bD}(a,b)$
containing $\calD=(a, b, 0)$ consists of the 24 elements listed in
\eqref{24disc}. To describe the detailed structure of this set we
introduce some new notation. The set ${\bD}(a,b)$ contains six
elements with all three components non-zero, and we re-denote these
elements by
\begin{alignat*}{3}
C_{1} &=\left(a,b,-a+b\right),& C_{2} &=\left(-a+b,b,a\right),&
C_{3} &=\left(a,a-b,-b\right),\\
C_{4} &=\left(-a+b,-a,-b\right),\quad& C_{5}
&=\left(-b,a-b,a\right),\quad& C_{6} &=\left(-b,-a,-a+b\right).
\end{alignat*}
We call these ensembles \emph{commuters} by way of analogy with
passenger travel.
We call an element with two non-zero components, say $d_1$ and
$d_2$, a \emph{terminal} if $d_1\not= \pm d_2$. There are
altogether 18 terminals in ${\bD}(a,b)$. To each commuter $C_{i}$,
$i=1,\ldots, 6$, we relate three terminals $T^{j}_{i}$, $j=1,2,3$,
as follows:
\begin{alignat*}{3}
T^{1}_{1}&=\left(0,b,-a+b\right), &
T^{2}_{1}&=\left(a,0,-a+b\right), &
T^{3}_{1}&=\left(a,b,0\right);\\
T^{1}_{2}&=\left(0,b,a\right), & T^{2}_{2}&=\left(-a+b,0,a\right),
&
T^{3}_{2}&=\left(-a+b,b,0\right);\\
T^{1}_{3}&=\left(0,-a,-b\right),&
T^{2}_{3}&=\left(-a+b,0,-b\right),&
T^{3}_{3}&=\left(-a+b,-a,0\right);\\
T^{1}_{4}&=\left(0,a-b,-b\right), &
T^{2}_{4}&=\left(a,0,-b\right),&
T^{3}_{4}&=\left(a,a-b,0\right);\\
T^{1}_{5}&=\left(0,a-b,a\right), & T^{2}_{5}&=\left(-b,0,a\right),
&
T^{3}_{5}&=\left(-b,a-b,0\right);\\
T^{1}_{6}&=\left(0,-a,-a+b\right),\quad&
T^{2}_{6}&=\left(-b,0,-a+b\right),\quad&
T^{3}_{6}&=\left(-b,-a,0\right).
\end{alignat*}
\begin{lemma}
\label{commuL} The equalities
\begin{alignat*}{2}
C_{i}G^{(7)}&=T^{1}_{i},&\quad
C_{i}H^{(7)}&=(0,0,0),\\
C_{i}G^{(9)}&=T^{2}_{i},&\quad
C_{i}H^{(9)}&=(0,0,0),\\
C_{i}G^{(11)}&=T^{3}_{i},&\quad
C_{i}H^{(11)}&=(0,0,0)
\end{alignat*}
hold for $i=1,\ldots,6$. Also the following equalities hold: $
T^{j}_{i}G^{(k)}=C_{i}$, for $i=1,\ldots, 6$, $j=1,2,3$,
$k=8,10,12$.
\end{lemma}
We group the commuters and terminals in six knots, $K_1, \ldots,
K_6$ as follows:
\[
K_{i}=\left\{ C_{i},T^{1}_{i},T^{2}_{i},T^{3}_{i}\right\},\quad
i=1,\ldots, 6.
\]
Figure~\ref{CommuterLeha} illustrates behaviour at a knot.
\begin{figure}[!htbp]
\begin{center}
\includegraphics*{commuter}
\caption{The ``commuter--terminals'' graph of a
knot\label{CommuterLeha}}
\end{center}
\end{figure}
\subsection{Travel Between Knots}\label{S-knots}
Departing from a particular terminal, and applying some arbitrages
with numbers $k=7, \ldots, 12$, one can travel to another terminal
belonging to a different knot, simultaneously ``loading some
cargo'' onto the corresponding triplet $\calR'$. Details are given
in the following proposition.
\begin{proposition}
\label{cargoP} The following groups of equalities hold:
{\footnotesize\[ \left\{
\begin{alignedat}{4}
T^{1}_{1}G^{(3)}&=T^{1}_{2},\quad &
T^{1}_{1}H^{(3)}&=(0,a,0),\quad&
T^{1}_{1}G^{(5)}&=T^{2}_{4},\quad & T^{1}_{1}H^{(5)}&=(0,0,b);\\
T^{2}_{1}G^{(1)}&=T^{1}_{6},\quad &
T^{2}_{1}H^{(1)}&=(-a,0,0),\quad &
T^{2}_{1}G^{(6)}&=T^{3}_{3},\quad & T^{2}_{1}H^{(6)}&=(0,0,-a);\\
T^{3}_{1}G^{(2)}&=T^{2}_{6},\quad &
T^{3}_{1}H^{(2)}&=(-b,0,0),\quad &
T^{3}_{1}G^{(4)}&=T^{3}_{2},\quad & T^{3}_{1}H^{(4)}&=(0,a-b,0);
\end{alignedat}
\right.\\[2mm]
\]
\[
\left\{
\begin{alignedat}{4}
T^{1}_{2}G^{(2)}&=T^{2}_{5},\quad & T^{1}_{2}H^{(2)}&=(-b,0,0),\quad &
T^{1}_{2}G^{(4)}&=T^{3}_{1},\quad & T^{1}_{2}H^{(4)}&=(0,-a,0);\\
T^{3}_{2}G^{(1)}&=T^{2}_{2},\quad & T^{3}_{2}H^{(1)}&=(a-b,0,0),\quad &
T^{3}_{2}G^{(6)}&=T^{3}_{3},\quad & T^{3}_{2}H^{(6)}&=(0,0,-a);
\end{alignedat}
\right.\\[2mm]
\]
\[
\left\{
\begin{alignedat}{4}
T^{1}_{3}G^{(2)}&=T^{2}_{4},\quad & T^{1}_{3}H^{(2)}&=(a,0,0),\quad &
T^{1}_{3}G^{(4)}&=T^{3}_{6},\quad & T^{1}_{3}H^{(4)}&=(0,b,0);
\end{alignedat}
\right.\\[2mm]
\]
\[
\left\{
\begin{alignedat}{4}
T^{1}_{4}G^{(2)}&=T^{2}_{3},\quad & T^{1}_{4}H^{(2)}&=(-a+b,0,0),\quad &
T^{1}_{4}G^{(4)}&=T^{3}_{5},\quad & T^{1}_{4}H^{(4)}&=(0,b,0); \\
T^{3}_{4}G^{(1)}&=T^{1}_{3},\quad & T^{3}_{4}H^{(1)}&=(-a,0,0),\quad &
T^{3}_{4}G^{(6)}&=T^{3}_{1},\quad & T^{3}_{4}H^{(6)}&=(0,0,-b);
\end{alignedat}
\right.\\[2mm]
\]
\[
\left\{
\begin{alignedat}{4}
T^{1}_{5}G^{(2)}&=T^{2}_{2},\quad & T^{1}_{5}H^{(2)}&=(-a+b,0,0),\quad &
T^{1}_{5}G^{(4)}&=T^{3}_{4},\quad & T^{1}_{5}H^{(4)}&=(0,-a,0); \\
T^{3}_{5}G^{(1)}&=T^{1}_{2},\quad & T^{3}_{5}H^{(1)}&=(b,0,0),\quad &
T^{3}_{5}G^{(6)}&=T^{3}_{6},\quad & T^{3}_{5}H^{(6)}&=(0,0,a);
\end{alignedat}
\right.\\[2mm]
\]
\[
\left\{
\begin{alignedat}{4}
T^{1}_{6}G^{(2)}&=T^{2}_{1},\quad & T^{1}_{6}H^{(2)}&=(a,0,0),\quad&
T^{1}_{6}G^{(4)}&=T^{3}_{3},\quad & T^{1}_{6}H^{(4)}&=(0,a-b,0).
\end{alignedat}
\right.
\]}
\end{proposition}
We introduce the ``travel between knots'' directed graph $\Gamma$,
shown in Fig.~\ref{GraphLehaCargo}, as follows. This graph has $6$
vertices that correspond to the knots $K_1, \ldots, K_6$. A knot
$K_i$ is connected by an arrow with another knot $K_j$ if one of
terminals belonging to $K^{j}$ figures in the rows belonging to the
$i$-th subset of equalities from Proposition~\ref{cargoP}. For
instance, the knot $K_{1}$ is connected with
$K_{2},K_{3},K_{4},K_{6}$. Moreover each arrow corresponds to the
three dimensional ``cargo vector(s)'': these vectors are related in
a natural way to the increment vectors in the equalities above. For
instance, we attach the cargo-vectors $(0,a-b,0)$ and $(0,a,0)$ to
the $K_1\to K_2$ arrow. The incidence matrix of this graph is
written below.
\[
I(\Gamma)=\left(
\begin{array}{cccccc}
0&1&1&1&0&1\\
1&0&1&0&1&0\\
0&0&0&1&0&1\\
1&0&1&0&1&0\\
0&1&0&1&0&1\\
1&0&1&0&0&0
\end{array}
\right)
\]
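As a cross-check, one can read the knot-level arrows directly off Proposition~\ref{cargoP} (each equality $T^{j}_{i}G^{(k)}=T^{j'}_{i'}$ gives an arrow $K_i\to K_{i'}$) and confirm that the three cycles used in the proof of Lemma~\ref{CyclesP} are all present. The sketch below is illustrative and drops intra-knot moves:

```python
# Knot-level arrows transcribed from Proposition cargoP, with intra-knot
# moves (source knot equal to target knot) dropped.  Illustrative only.
edges = {
    (1, 2), (1, 3), (1, 4), (1, 6),
    (2, 1), (2, 3), (2, 5),
    (3, 4), (3, 6),
    (4, 1), (4, 3), (4, 5),
    (5, 2), (5, 4), (5, 6),
    (6, 1), (6, 3),
}
# The three cycles used in the proof of Lemma CyclesP.
cycles = [
    [1, 2, 5, 6, 1],
    [1, 2, 3, 6, 1],
    [1, 2, 3, 4, 1],
]
for cyc in cycles:
    assert all((s, t) in edges for s, t in zip(cyc, cyc[1:]))
print(len(edges))
```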
\begin{figure}[!htbp]
\begin{center}
\includegraphics*{knots}
\caption{The ``travel between knots'' graph
$\Gamma$\label{GraphLehaCargo}}
\end{center}
\end{figure}
\subsection{Finalising the proof of Theorem~\ref{irratBC}
and Proposition~\ref{arbBH}}\label{fp2SS}
If a single transition $K_{i}\to K_{j}$ is possible we write
$W^{i\to j}$ for the corresponding cargo; we write $W^{i\to
j}_1,W^{i\to j}_2$ if two transitions are possible. In the latter
case $W^{i\to j}_1$ refers to the upper vector indicated on the
graph $\Gamma$. For instance, $W^{1\to 2}_{1}=(0,a-b,0)$, $W^{1\to
2}_{2}=(0,a,0)$, $W^{2\to 1}=(0,-a,0)$, etc.
\begin{lemma}
\label{CyclesP} For any positive integers $N_{1},N_{2},N_{3}$ there
exists a chain ${\hbA}$ of strong arbitrages such that
$\calR{\hbA}$ has the form
\[
(r_{\textrm{\$}\textrm{\euro}}+m_{1}a-N_{1}b,
r_{\textrm{\$}\textrm{\pounds}}+m_{2}a+N_{2}b,
r_{\textrm{\$}\textrm{\yen}}+m_{3}a-N_{3}b)
\]
where $m_{1},m_{2},m_{3}$ are some positive integers.
\end{lemma}
\begin{proof}
Since the moves from one terminal to another, within a particular
knot, are always possible and do not change $\calR'$ (see Lemma
\ref{commuL}), any route allowed by the graph $\Gamma$ can be
performed, and any combination of corresponding cargo can be
loaded. For the cycle $K_1\to K_{2}\to K_{5}\to K_{6}\to K_{1}$ we
have
\[
W^{1\to 2}_{1}+W^{2\to 5}_{1}+W^{5\to 6}+W^{6\to 1}=( a-b ,a,0).
\]
For the cycle $K_1\to K_{2}\to K_{3}\to K_{6}\to K_{1}$ we have
\[
W^{1\to 2}_{2}+W^{2\to 3}+W^{3\to 6}+W^{6\to 1}=( a ,a+b,a).
\]
For the cycle $K_1\to K_{2}\to K_{3}\to K_{4}\to K_{1}$ we have
\[
W^{1\to 2}_{2}+W^{2\to 3}+W^{3\to 4}+W^{4\to 1}=( a ,a,a-b).
\]
\end{proof}
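The bookkeeping in this proof can be sketched symbolically, storing each component as a pair of coefficients of $a$ and $b$. The three cycle cargoes are taken verbatim from the proof above; the multiplicities $N_1,N_2,N_3$ are arbitrary example values:

```python
# Each vector component is a pair (coefficient of a, coefficient of b).
def add(u, v):
    return tuple((ua + va, ub + vb) for (ua, ub), (va, vb) in zip(u, v))

def scale(n, v):
    return tuple((n * ca, n * cb) for ca, cb in v)

v1 = ((1, -1), (1, 0), (0, 0))   # cycle K1->K2->K5->K6->K1: (a-b, a,   0)
v2 = ((1, 0), (1, 1), (1, 0))    # cycle K1->K2->K3->K6->K1: (a,   a+b, a)
v3 = ((1, 0), (1, 0), (1, -1))   # cycle K1->K2->K3->K4->K1: (a,   a,   a-b)

N1, N2, N3 = 2, 5, 3             # arbitrary positive multiplicities (example)
total = add(add(scale(N1, v1), scale(N2, v2)), scale(N3, v3))
# total has the form (m1*a - N1*b, m2*a + N2*b, m3*a - N3*b), m_i > 0:
(m1, b1), (m2, b2), (m3, b3) = total
assert (b1, b2, b3) == (-N1, N2, -N3)
assert m1 > 0 and m2 > 0 and m3 > 0
print(total)
```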
\begin{corollary}
\label{abc} For any non-negative integers $N_1,N_{2},N_{3}$ and
$M_{1},M_{2},M_{3}$ there exists a chain ${\hbA}$ of strong
arbitrages such that $\calR{\hbA}$ has the form
$(r_{\textrm{\$}\textrm{\euro}}+M_{1}a-N_{1}b,
r_{\textrm{\$}\textrm{\pounds}}
+M_{2}a+N_{2}b,r_{\textrm{\$}\textrm{\yen}}+M_{3}a-N_{3}b)$.
\end{corollary}
\begin{proof}
From the lemma above it follows that we can achieve the state
\[
(r_{\textrm{\$}\textrm{\euro}}+m_{1}a-(N_{1}-1)b,
r_{\textrm{\$}\textrm{\pounds}}
+m_{2}a+(N_{2})b,r_{\textrm{\$}\textrm{\yen}}
+m_{3}a-(N_{3}+1)b,a,b,-a+b).
\]
Then moving to the terminal $T^{3}_{1}$ and applying arbitrage
$\Hat\calA^{(6)}$ we arrive at
\[
(r_{\textrm{\$}\textrm{\euro}}+(m_{1}-1)a-(N_{1})b,
r_{\textrm{\$}\textrm{\pounds}}
+m_{2}a+(N_{2})b,r_{\textrm{\$}\textrm{\yen}}
+m_{3}a-N_{3}b,0,a,0).
\]
From this state we can, by Proposition~\ref{arbAH}, adjust the
numbers $m_1,m_2, m_3$ to the targets $M_1,M_2, M_3$.
\end{proof}
Theorem~\ref{irratBC} and Proposition~\ref{arbBH} follow
immediately from this last corollary.
\section{Concluding Remarks}\label{S-conclude}
The key contribution of this paper is to ask what happens to
arbitrage sequences when the number of goods or assets under
consideration is four, rather than the two, or occasionally three,
usually considered. The model is illustrated by a foreign exchange
market with four currencies and four traders, so that there
are $C_{4}^{2}=6$ principal exchange rates. Despite abstracting
from various complications -- such as transaction costs, capital
requirements and risk -- that are often invoked to explain the
limits to arbitrage, we find that the arbitrage operations
conducted by the FX traders can generate periodicity or more
complicated behaviour in the ensemble of exchange rates, rather
than smooth convergence to a ``balanced'' ensemble where the law of
one price holds.
We use the fiction of an Arbiter, who knows all the actual
exchange rates and what a balanced ensemble would be, to bring out
the information problem. FX traders tend to specialise in
particular currencies, so the assumption that the FX traders are
initially aware only of the exchange rates for their own
``domestic'' currencies is not entirely implausible. We show that
the order in which the Arbiter reveals information to individual
traders regarding discrepancies in exchange rate ensembles makes a
key difference to the arbitrage sequences that will be pursued. The
sequences are periodic in nature, and show no clear signs of
convergence on a balanced ensemble of exchange rates. The Arbiter
might know the law of one price exchange rate ensemble, but the
traders have little chance of stumbling onto such an ensemble by
way of their arbitrage operations.
The analysis in the present paper raises several issues to pursue
in future research. An obvious extension is to allow for a larger
number of currencies and ask what happens to the arbitrage
sequences as this number becomes large. One interesting
modification of the analysis would allow the FX traders to learn
that arbitrage sequences tend to be periodic and modify their
arbitrage strategies to take the periodicity into account. Another
modification would allow some arbitrage operations to be pursued
simultaneously, and ask what happens in the limiting case where all
arbitrage operations are exploited simultaneously. An
alternative reformulation of the analysis would be as a Markov
process where the states are sextuples of exchange rates between
the four currencies and the passages between the states reflect the
effects of arbitrage operations pursued. It would be interesting to
see if this could be done without compromising the relative
simplicity of the present formulation. Finally, but by no means
exhaustively, it would be interesting to work with high frequency
data sets to test for the existence of the types of arbitrage
sequences postulated in the present paper.
arXiv:1112.5827
\section{Introduction}
Magnetite (Fe$_3$O$_4$), the earliest known magnetic material, is an abundant mineral in nature and is important due to its potential applications in magnetoelectronics and in catalysis. It is one of the most interesting iron oxides formed during corrosion processes (rusting). Ultra-thin films and nanostructures formed by noble metals on an iron-oxide support show enhanced catalytic properties compared with clean oxide surfaces. Thus, understanding the properties of magnetite surfaces is of utmost importance from the viewpoint of both basic science and applications. However, even clean iron oxide surfaces are relatively little explored, which is connected with difficulties in the preparation of well-defined surfaces \cite{WeiRan02,GonFN08}.
At room temperature and under normal pressure magnetite crystallizes in the inverse spinel structure (space group $Fd\bar{3}m$), in which the tetrahedral positions are occupied by ferric (Fe$^{3+}$) iron atoms while the octahedral ones contain equal numbers of ferric and ferrous (Fe$^{2+}$) iron atoms. Magnetite's primitive rhombohedral cell contains two formula units of Fe$_3$O$_4$ and its volume is equal to one quarter of that of the spinel unit cell. Bulk magnetite is a semi-metal. Since the Fe$^{3+}$ ions are aligned antiferromagnetically and the ratio of Fe$^{3+}$ to Fe$^{2+}$ ions is 2:1, the overall crystal structure is ferrimagnetic.
At $T\approx 120$ K, the metal-insulator Verwey transition occurs which is connected with a long-range change of the degree of localization of electrons in the octahedral Fe atoms \cite{PiePO96,Lodz07,RowPG09}.
The structure of magnetite can also be represented by the hexagonal conventional unit cell, which contains eight formula units. In this stacking, (111)-oriented layers of oxygen atoms separate alternating Fe monolayers of octahedral (Fe$_{\rm oct1}$) sites and Fe trilayers consisting of a Fe$_{\rm oct2}$ monolayer in octahedral sites with Fe$_{\rm tet1}$ and Fe$_{\rm tet2}$ layers in tetrahedral sites on either side. Thus the stacking sequence of the atomic planes perpendicular to the [111] direction can be written \cite{WeiRan02} as Fe$_{\rm oct2}$-Fe$_{\rm tet1}$-O$_1$-Fe$_{\rm oct1}$-O$_2$-Fe$_{\rm tet2}$-.
The (111) surface, which is the dominant cleavage plane of magnetite and is often exposed on naturally grown crystals, can have six different terminations \cite{WeiRan02}.
Only four of them have been confirmed experimentally, namely Fe$_{\rm oct2}$ \cite{LenCLMTV96}, Fe$_{\rm tet1}$ \cite{RitWei99}, and the densely packed oxygen planes O$_2$ and O$_1$ \cite{BerMMS04,BerMMS04a}.
Both iron- and oxygen-terminated Fe$_3$O$_4$(111) surfaces are polar, of type III according to Tasker's classification \cite{Tasker79}. The surface termination is very sensitive to the preparation conditions of the samples \cite{WeiRan02}, which are either Fe$_3$O$_4$ single crystals or epitaxial Fe$_3$O$_4$ films on single-crystal substrates. Despite many experimental investigations, information about the (111) surface at the atomic level is very scarce. There are few data on its magnetic properties or on how its structure and composition change with temperature and oxygen pressure.
Magnetite is a strongly correlated system, and DFT with standard (local or semi-local) exchange-correlation functionals does not describe its electronic structure correctly because of the inadequate treatment of the strong Coulomb interaction between the $3d$ electrons localized on the Fe ions.
This shortcoming of DFT is corrected in practice by either of two semi-empirical approaches: hybrid functionals, where the exact Hartree-Fock exchange is partially mixed with the DFT exchange, or DFT+$U$, where the on-site Coulomb repulsion is described by an additional Hubbard term $U$.
The question of which termination of the Fe$_3$O$_4$(111) surface is the most stable is still open, and controversies about its structural details remain. An earlier scanning tunneling microscopy (STM) study \cite{LenCLMTV96} reported two coexisting surface terminations: one was assigned to Fe$_{\rm oct1}$ atoms and the other to Fe$_{\rm oct2}$--Fe$_{\rm tet1}$ layers. A low-energy electron diffraction (LEED) study \cite{RitWei99} concluded that the (111) surface termination corresponds to 1/4 monolayer of Fe atoms over a hexagonal close-packed O layer underneath. This disagreed with the \textit{ab initio} periodic Hartree-Fock calculations of Ahdjoudj \textit{et al.} \cite{AhdMMvHS99}, who found the Fe$_{\rm oct2}$--Fe$_{\rm tet1}$ bilayer to be the most favorable termination of the clean surface. Lemire \textit{et al.}~\cite{LemMHSF04} studied the surface structure of Fe$_3$O$_4$(111) films by CO adsorption and concluded that the (111) surface is terminated with Fe$_{\rm oct2}$.
Recent full-potential linearized augmented plane-wave (FP-LAPW) calculations by Zhu \textit{et al.} \cite{ZhuYL06} determined the structure, composition and relative stability of five different terminations of the Fe$_3$O$_4$(111) surface. The effect of different exchange-correlation functionals (the generalized gradient approximation (GGA) and the local density approximation+$U$ (LDA+$U$)) was also discussed. According to that study, the Fe$_{\rm oct2}$ termination is the most stable one. This, however, was not confirmed by more recent first-principles calculations performed by Grillo \textit{et al.} \cite{GriFR08} within GGA+$U$ and by Martin \textit{et al.} \cite{MarCVW09} using GGA, according to which the Fe$_{\rm tet1}$-terminated surface has the lowest surface energy. Recent STM experiments reported that the stoichiometric (111) surface of magnetite corresponds to the Fe$_{\rm tet1}$ termination \cite{PauSCSB07}. Another recent combined STM and first-principles study \cite{ShiJKKK10} also predicted the Fe$_{\rm tet1}$ termination to be the most stable one.
The adsorption of metal atoms on magnetite has rarely been studied. On the theoretical side, only the interaction of alkali metal atoms with the Fe$_{\rm tet1}$-terminated Fe$_3$O$_4$(111) surface has been studied, using DFT at the GGA level \cite{YanWLWJ09}. To our knowledge, studies of the adsorption of noble and transition metal atoms on Fe$_3$O$_4$(111), which are important in catalysis, have not been reported so far.
In this work we first revisit calculations for different terminations of the clean Fe$_3$O$_4$(111) surface using the DFT and DFT+$U$ approaches, in order to explore the influence of strong on-site electronic correlations on the structure and physical properties of magnetite, and to form a firm basis for our studies of Au and Pd atom adsorption on the Fe$_3$O$_4$(111) surface, which is the main subject of this work.
\begin{figure}
\includegraphics*[width=8.4cm]{./fig_1.eps}
\caption{(Color online) The Fe$_3$O$_4(111)$ slab used in the surface calculations. The successive terminations were created by removing the top and bottom layers from the thickest, Fe$_{\rm tet1}$-terminated slab. Iron and oxygen atoms are represented by small and large balls, respectively. The right-hand panels show top views of the considered terminations. The parallelograms mark the 1$\times$1 surface cell applied in the calculations.}\label{fig1_slab}
\end{figure}
\section{Methods and computational details}
The calculations presented in this work are based on the spin density functional theory as implemented in the VASP package \cite{KreHaf93,KreFur96}. The calculations employed the GGA-PW91 version of the exchange and correlation energy functional \cite{PerCVJPSF92} with the spin interpolation of Vosko \textit{et al.} \cite{VosWN80}, and the GGA plus on-site Coulomb interaction term $U$ (GGA+$U$) using the Dudarev \textit{et al.} \cite{DudBSHS98} approach. Following previous calculations \cite{JenGH06,Lodz07} for the bulk and surfaces of Fe$_3$O$_4$ the GGA+$U$ calculations were performed with the effective parameter of interaction between electrons $U_{\rm eff}=U-J=3.61$ eV (the Coulomb and screened exchange parameters $(U,J)=(4.5,0.89)$ eV, respectively).
The electron--ion-core interactions were described by the projector augmented wave (PAW) method \cite{KreJou98}. A plane-wave basis with a cut-off energy of 500 eV and the conjugate gradient algorithm were applied to determine the electronic ground state. The integrations over the Brillouin zone were performed using Monkhorst-Pack grids \cite{MonPac76}. A Gaussian broadening of the Fermi surface of 0.2 eV was applied to improve the convergence of the solutions. The results presented in this work were obtained using $k$-point meshes of 6$\times$6$\times$6 for the bulk and 6$\times$6$\times$1 for the surface calculations, which ensured total-energy convergence to within 1 meV. For the surface calculations $\Gamma$-centered grids were used.
\begin{figure*}
\subfigure{\includegraphics*[width=8.3cm]{./fig_2a.eps}}
\hspace{0.5cm}
\subfigure{\includegraphics*[width=8.3cm]{./fig_2b.eps}}
\caption{(Color online) The dependence of the surface energy on the oxygen chemical potential, $\mu_{\rm O}$, showing the relative stability of the six terminations of the Fe$_3$O$_4(111)$ surface calculated within GGA and GGA+$U$. Vertical dashed lines mark the allowed range of the oxygen chemical potential. \label{phsdiag}
}
\end{figure*}
Calculations of the Fe$_3$O$_4$ bulk structure showed that, among the nonmagnetic and the different magnetic phases, the ferrimagnetic phase is the most stable, with the magnetic moments on the Fe$_{\rm oct}$ atoms antiparallel to those on the Fe$_{\rm tet}$ atoms. The lattice constant and bulk modulus of the magnetite crystal calculated within GGA, 8.377 {\AA} and 172 GPa, respectively, are in very good agreement with experiment (8.396 {\AA} \cite{OkuKM96}, 8.393 \AA\ \cite{Fleet82}; 181 GPa \cite{OkuKM96}) and with other GGA calculations \cite{MarCVW09,PinEll06}. The GGA magnetic moments on the Fe$_{\rm tet}$ and Fe$_{\rm oct}$ atoms are $-$3.45$\mu_{B}$ and 3.49--3.61$\mu_{B}$, respectively. The moments on the O atoms are much smaller ($\sim$0.08$\mu_B$). The total magnetic moment per formula unit (3.66$\mu_{B}$) is about 0.4$\mu_{B}$ lower than the experimental value \cite{Aragon92}. The GGA+$U$ calculations give a lattice constant (8.473 {\AA}) and bulk modulus (182 GPa) in good agreement with the experimental data and other calculations \cite{PinEll06,GriFR08}. They improve the magnetic moments on the Fe atoms compared to GGA, giving 4.04$\mu_{B}$ and 3.91--3.95$\mu_{B}$ on the Fe$_{\rm tet}$ and Fe$_{\rm oct}$ atoms, respectively. With the moments on the O atoms reduced by 50\%, the total magnetic moment per formula unit is 3.97$\mu_{B}$, in very good agreement with experiment (4.05$\mu_B$ \cite{Aragon92}).
The optimized lattice parameters were used to construct the Fe$_3$O$_4$(111) surface slabs, consisting of 19 to 29 atomic layers separated by a vacuum region. Starting from the thickest symmetric slab with the Fe$_{\rm tet1}$ termination (Fig.~\ref{fig1_slab}), the other terminations were created by stripping off subsequent atomic layers, without changing the supercell size, from both the top and the bottom of the slab. Thus the surface was separated from its periodic replicas by a vacuum region ranging from 15.5 to 24 {\AA} (the latter for the thinnest, Fe$_{\rm oct2}$-terminated slab). In the surface calculations the positions of all atoms were relaxed until the forces were smaller than 0.02 eV/\AA.
The stability of the different terminations of the Fe$_3$O$_4(111)$ surface as a function of the oxygen partial pressure was considered within the \textit{ab initio} thermodynamics approach \cite{ReuSch01}, in which the surface free energy $\gamma (T,P)$ is expressed as a function of pressure and temperature through the chemical potentials $\mu_{\rm Fe}$, $\mu_{\rm O}$ of the constituents:
\begin{equation}
\gamma(T,P) = {1\over{2A}}\left[G_{{\rm Fe}_3{\rm O}_4}^{\rm slab} - N_{\rm Fe} \mu_{\rm Fe}(T,P) -N_{\rm O}\mu_{\rm O}(T,P)\right], \label{eq3}
\end{equation}
where the Gibbs free energy, $G_{{\rm Fe}_3{\rm O}_4}^{\rm slab}$, can be expressed by the total energy of the slab $E_{\rm tot}^{\rm slab}$, and $N_{\rm Fe}$ and $N_{\rm O}$ represent the number of Fe and O atoms in the system. The Gibbs free energy per magnetite formula unit, $g$, is related to the chemical potentials of iron and oxygen through the relation $g^{\rm bulk}_{{\rm Fe}_3{\rm O}_4}= 3\mu_{\rm Fe} + 4\mu_{\rm O}$. Consequently, the surface energy as a function of oxygen chemical potential can be written as
\begin{equation}
\gamma = {1\over{2A}}\left[E_{\rm tot}^{\rm slab} - {1\over 3}N_{\rm Fe} g^{\rm bulk}_{{\rm Fe}_3{\rm O}_4} + \left({4\over 3}N_{\rm Fe} - N_{\rm O}\right)\mu_{\rm O}\right], \label{eq6}
\end{equation}
where the Gibbs free energy is approximated by the internal energy from DFT calculations \cite{ReuSch01}. $\mu_{\rm O}$ is referenced with respect to the chemical potential of oxygen in a gas phase $\mu^{\rm gas}_{\rm O} = {{1}\over{2}}E^{\rm tot}_{{\rm O}_2}$, where the total energy of an oxygen molecule $E^{\rm tot}_{{\rm O}_2}$ is calculated in a large box.
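A minimal numerical sketch of Eq.~\eqref{eq6} is given below. All input numbers (slab energy, bulk Gibbs energy, atom counts, surface area) are hypothetical placeholders, not values from this work; the sketch only illustrates that a stoichiometric slab ($N_{\rm O}=\tfrac{4}{3}N_{\rm Fe}$) has a $\mu_{\rm O}$-independent surface energy, while an O-rich slab is stabilised as $\mu_{\rm O}$ increases:

```python
# Sketch of Eq. (2); all numerical inputs are hypothetical placeholders.
def gamma(E_slab, g_bulk, N_Fe, N_O, mu_O, A):
    """Surface free energy per unit area for a symmetric slab (two surfaces)."""
    return (E_slab - N_Fe * g_bulk / 3.0
            + (4.0 * N_Fe / 3.0 - N_O) * mu_O) / (2.0 * A)

# A stoichiometric slab (N_O = 4*N_Fe/3) has a mu_O-independent surface energy:
g0 = gamma(-395.0, -100.0, 12, 16, -1.0, 30.0)
g1 = gamma(-395.0, -100.0, 12, 16, -3.0, 30.0)
assert abs(g0 - g1) < 1e-12
# An O-rich slab (N_O > 4*N_Fe/3) becomes more favourable as mu_O increases:
assert gamma(-395.0, -100.0, 12, 18, -1.0, 30.0) < \
       gamma(-395.0, -100.0, 12, 18, -3.0, 30.0)
print(g0)
```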
Au and Pd atoms were adsorbed in four different adsorption sites of the 1$\times$1 surface unit cell, on both sides of the relaxed, symmetric Fe$_3$O$_4$(111) slab. The adsorption binding energy was calculated as
\begin{equation}
E_{\rm ad} = -(E^{{\rm X}/{\rm sub}} -E^{\rm sub} - 2E^{\rm X})/2 ,
\end{equation}
where $E^{{\rm X/sub}}$ is the total energy of the slab covered with adsorbate X, $E^{\rm sub}$ represents the energy of the relaxed bare oxide support, and $E^{\rm X}$ is the energy of a free adsorbate atom.
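The sign convention of this definition can be illustrated with a minimal sketch; the energies used below are hypothetical placeholders, chosen only so that adsorption lowers the total energy and $E_{\rm ad}$ comes out positive:

```python
# Sketch of the adsorption-energy convention in Eq. (3); the energies are
# hypothetical placeholders, not values from this work.
def e_ad(E_x_sub, E_sub, E_x):
    """Binding energy per adatom for a symmetric slab with X on both faces."""
    return -(E_x_sub - E_sub - 2.0 * E_x) / 2.0

# If adsorption lowers the total energy, E_ad comes out positive (bound state):
assert e_ad(E_x_sub=-212.0, E_sub=-200.0, E_x=-5.0) == 1.0
print(e_ad(-212.0, -200.0, -5.0))
```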
\section{Results}
\subsection{Clean Fe$_3$O$_4$(111) surface}
The variation of the surface energy as a function of the oxygen chemical potential $\mu_{\rm O}$ is displayed in Fig.~\ref{phsdiag}. The accessible range of $\mu_{\rm O}$ is limited, at the lower end, by the onset of magnetite decomposition into bulk iron and, at the upper end, by the onset of oxygen condensation on the magnetite surface. The Fe$_{\rm tet1}$ termination minimizes the surface free energy over a wide range of $\mu_{\rm O}$ for both types of calculations, in agreement with previous predictions \cite{GriFR08,PauSCSB07,ShiJKKK10}.
However, at low $\mu_{\rm O}$, which corresponds to very low oxygen pressures, the GGA+$U$ results show that the Fe$_{\rm oct2}$ termination may become stable. Zhu \textit{et al.} \cite{ZhuYL06} predicted this termination to be the most stable one. The GGA+$U$ calculations do not yield the stable oxygen terminations of the Fe$_3$O$_4$(111) surface seen at higher pressures in the GGA results. In the following we investigate only four clean surfaces of magnetite -- the two iron-terminated ones (Fe$_{\rm tet1}$, Fe$_{\rm oct2}$) and the two oxygen-terminated ones (O$_1$ and O$_2$).
\begin{figure}
\includegraphics[width=6.5cm]{./fig_3.eps}
\caption{(Color online) Relaxations, $\Delta_{ij}$, of the interplanar distance for different Fe$_3$O$_4$(111) terminations. Experimental data for the Fe$_{\rm tet1}$ termination are taken from Ref. \onlinecite{RitWei99}. In the ideal bulk crystal the successive separations of the (111) layers calculated within GGA (GGA+$U$) are as follows: Fe$_{\rm tet1}$-O$_1$, 0.637 (0.644) \AA; O$_1$-Fe$_{\rm oct1}$ and Fe$_{\rm oct1}$-O$_2$, 1.176 (1.190) \AA; O$_2$-Fe$_{\rm tet2}$, 0.637 (0.644) \AA; Fe$_{\rm tet2}$-Fe$_{\rm oct2}$ and Fe$_{\rm oct2}$-Fe$_{\rm tet1}$, 0.605 (0.611) \AA.
} \label{relaxations}
\end{figure}
After structural optimization the surfaces are strongly relaxed. The relaxation of the interplanar distance is calculated as $\Delta_{ij}=(d_{ij}-d)/d$, where $d_{ij}$ is the distance between the relaxed planes $i$ and $j=i+1$, and $d$ is the corresponding distance in the bulk. The calculated relaxations (Fig.~\ref{relaxations}) show an oscillatory, though not very regular, character of the contraction-expansion type. In general, the geometry changes calculated within the GGA and GGA+$U$ approaches are qualitatively similar and predict a large contraction of the first interplanar distance. They differ, however, in the topmost-layer relaxation of the Fe$_{\rm oct2}$ termination, where GGA predicts a large (40\%) contraction of the Fe$_{\rm oct2}$-Fe$_{\rm tet1}$ layer distance, whereas GGA+$U$ shows no relaxation. Top-layer relaxations calculated within LDA+$U$ \cite{ZhuYL06} are substantially larger than our GGA values.
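The relaxation measure $\Delta_{ij}=(d_{ij}-d)/d$ is straightforward to evaluate from the $z$-coordinates of successive (111) planes; the sketch below uses hypothetical plane positions, not the relaxed geometries of this work, together with the bulk spacings quoted in the figure caption:

```python
# Percent relaxation Delta_ij = (d_ij - d)/d of each interplanar spacing,
# computed from hypothetical plane positions (surface plane first).
def relaxations(z_layers, d_bulk):
    """z_layers: z-coordinates of successive planes (Angstrom);
    d_bulk: bulk spacing for each successive pair (Angstrom)."""
    return [100.0 * ((z_layers[i] - z_layers[i + 1]) - d) / d
            for i, d in enumerate(d_bulk)]

z = [10.00, 9.59, 8.41, 7.23]    # illustrative plane positions (A)
d = [0.637, 1.176, 1.176]        # bulk GGA (111) spacings from the caption (A)
print(relaxations(z, d))         # first spacing strongly contracted
```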
As seen from Fig.~\ref{relaxations}, for the Fe$_{\rm tet1}$-terminated surface the relaxations are in very good agreement with experimental data \cite{RitWei99} and agree well with the results of the LDA+$U$ calculations \cite{ZhuYL06}. The largest relaxations (up to about 90\%) are observed for the O$_2$ termination. The first and second interlayer distances exhibit large contractions of about 50\% (GGA; 53\%, GGA+$U$) and 56\% (GGA; 60\%, GGA+$U$), respectively. These are almost compensated by a large expansion of the third interlayer spacing (92\%, GGA; 85\%, GGA+$U$). The relaxations of the O$_2$ surface are much larger (up to three times) than those found from LDA+$U$ \cite{ZhuYL06}.
They lead to a specific final configuration: a surface region of reduced thickness formed by a triple layer, slightly separated from the deeper-lying oxide layers. Similar structures are observed for the other considered terminations: separated triple layers for the Fe$_{\rm tet1}$ and Fe$_{\rm oct2}$ terminations, and a separated double layer for the O$_1$-terminated surface. In general, the relaxations of the O-terminated surfaces of magnetite are much larger than those observed on the O-terminated hematite (0001) surface \cite{KiePab11}. The topmost-layer relaxations on the iron-terminated surfaces of magnetite (111) are similar to those calculated for the hematite (0001) surfaces \cite{KiePab11}.
The calculated contraction of the spacing between the topmost Fe and O surface layers might be considered as a mechanism which reduces the total electrostatic dipole moment and thus stabilizes these polar surfaces. Unlike the flat O layers of the ideal bulk, the relaxed O layers show a small corrugation. On the O$_1$ plane one of the O atoms is slightly shifted down, giving a 0.03 \AA\ corrugation with respect to the average position of the plane. A similar corrugation is caused by an upward shift of one of the O atoms of the O$_2$ plane.
\begin{figure*}
\subfigure{\includegraphics*[width=8.0cm]{./fig_4a.eps}}
\hspace{0.5cm}
\subfigure{\includegraphics*[width=8.0cm]{./fig_4b.eps}}
\caption{(Color online) Density of states of bulk magnetite and LDOS on atoms of six topmost atomic layers of different surface terminations decomposed into the contributions from the iron and oxygen atomic layers. Majority and minority spin states are respectively displayed as positive and negative. The left and right hand side panels show respectively GGA and GGA+$U$ results. } \label{f4_DOS_all}
\end{figure*}
Figure \ref{f4_DOS_all} shows the calculated density of states (DOS) of bulk magnetite and the local density of states (LDOS) at the four Fe$_3$O$_4$(111) surface terminations. As can be seen, the DOS and LDOS resulting from the GGA and GGA+$U$ calculations differ substantially. Both methods predict half-metallic behavior \cite{ZhaSap91} of bulk magnetite, with a band gap in the majority spin channel that is much wider when calculated within GGA+$U$. However, all considered surface terminations except O$_2$ show a metallic LDOS when calculated within GGA. The metallic character of the O$_1$ termination agrees with the results of a combined spin-polarized STM and DFT study reported by Berdunov \textit{et al.} \cite{BerMMS04a}. This prediction is at variance with the results of photoemission spectroscopy studies \cite{SchSTFKSSMBC05,FonDPRG07} and with our GGA+$U$ results (Fig. \ref{f4_DOS_all}), which show that almost all terminations remain half-metallic. The exception is the Fe$_{\rm tet1}$ surface, where a narrow ($\sim$0.3 eV) energy gap opens below the Fermi level.
The location of the band gap alternates between the majority and minority spin bands, depending on the termination. Compared with the GGA, the LDOS calculated within GGA+$U$ extend over a wider energy range and the main weights of the Fe$_{\rm tet1}$ and Fe$_{\rm oct2}$ states are shifted to lower energies. At the Fe$_{\rm oct2}$ termination the Fe $3d$ states are responsible for the half-metallic character, whereas at the O$_1$ and O$_2$ terminations the half-metallicity is due to the hybridized O $2p$ and Fe $3d$ states.
\begin{table}
\caption{Work function of different Fe$_3$O$_4(111)$ surfaces. \label{tab1-WF} }
\begin{ruledtabular}
\begin{tabular}{lcc}
Termination & \multicolumn{2}{c}{Work function (eV)} \\ \cline{2-3}
& GGA & GGA+$U$\\ \hline
Fe$_{\rm tet1}$ & 4.93 & 5.48 \\
Fe$_{\rm oct2}$ & 3.58 & 3.90 \\
O$_1$ & 7.04 & 8.09 \\
O$_2$ & 6.88 & 7.66
\end{tabular}
\end{ruledtabular}
\end{table}
The difference in the electronic structure of the differently terminated surfaces is manifested in their work function values. The work function was calculated as the difference between the electrostatic potential in the vacuum and the Fermi energy of the slab. The work function of the O-terminated surfaces is 2-3.5 eV (GGA) and 2-4 eV (GGA+$U$) larger than that of the Fe-terminated surfaces. The GGA work functions are about 0.4-1.0 eV lower than those determined from GGA+$U$ (Table \ref{tab1-WF}). Generally, the work function is lowest for the Fe$_{\rm oct2}$-terminated surface and highest for the O$_1$ termination.
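The work function defined here, the vacuum electrostatic potential minus the slab Fermi energy, can be extracted from a planar-averaged potential profile as sketched below; the toy potential, grid, and averaging window are hypothetical placeholders, not output of the present calculations:

```python
# Work function = (planar-averaged electrostatic potential deep in the
# vacuum) - (Fermi energy). The toy potential below stands in for the
# planar-averaged output of a DFT slab calculation.
import numpy as np

def work_function(v_planar, z, fermi_energy, vacuum_window):
    lo, hi = vacuum_window
    mask = (z >= lo) & (z <= hi)          # sample only deep vacuum
    return float(np.mean(v_planar[mask]) - fermi_energy)

z = np.linspace(0.0, 30.0, 301)           # z-grid along the slab normal (A)
v = np.where(z > 20.0, 4.9, -8.0)         # toy profile with flat vacuum level
print(work_function(v, z, 0.0, (25.0, 29.0)))   # vacuum level minus E_F
```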
The calculated Bader charges \cite{Bader90,HenAJ06} on atoms at the exposed surface terminations show that the atoms in the topmost Fe layer, both of the Fe$_{\rm tet1}$- and Fe$_{\rm oct2}$-terminated surfaces, gain electron charge compared to that on atoms in the bulk crystal layers (0.28$e$; 0.76$e$ and 0.27$e$, respectively). The oxygen atoms of the O-terminated surfaces, as well as the O atoms of the Fe$_{\rm tet1}$ termination, lose electrons. In contrast, in the Fe$_{\rm oct2}$ termination, not only do the Fe$_{\rm oct2}$ and Fe$_{\rm tet1}$ atoms gain electrons but so do most of the O atoms of the subsurface layers, except for one of the atoms of the O$_1$ layer, which loses 0.14$e$.
The changes in the magnetic moments on the iron atoms due to the presence of the surface and its relaxation are practically limited to the 2-3 outermost surface layers. The directions of the magnetic moments do not change and the surface remains ferrimagnetic, but the magnitudes of the moments are smaller by about 0.1-1.6$\mu_{B}$ (GGA) and 0.05-0.56$\mu_B$ (GGA+$U$) than for the corresponding atoms in bulk magnetite. The strongest change of the magnetic moments is observed for the topmost surface layer. At the Fe$_{\rm tet1}$ termination the moments of the topmost Fe layer are reduced to 3.11$\mu_B$ (GGA) and 3.51$\mu_{B}$ (GGA+$U$). At the Fe$_{\rm oct2}$-terminated surface the moment is enhanced to 4.02$\mu_{B}$ (GGA) and 4.45$\mu_B$ (GGA+$U$).
As can be seen from the lowest panel of Fig.~\ref{relaxations}, this is connected with the large change in the relaxation of the topmost Fe$_{\rm oct2}$ layer, which for GGA+$U$ is practically reduced to zero. The atoms of the subsurface O planes of the Fe-terminated surfaces exhibit small magnetic moments of $\approx$0.1-0.4$\mu_{B}$.
The GGA+$U$ moments on the oxygen atoms of the O-terminated surfaces are in the range 0.15-0.23$\mu_{B}$ and $(-0.02)$-0.20$\mu_B$ for the O$_1$ and O$_2$ termination, respectively. For the O$_2$ termination one of the O atoms in the topmost layer has a very small ($\approx$0.02$\mu_{B}$) local moment oriented opposite to the other three atoms. The moments on the Fe atoms of the subsurface planes of these two oxygen terminations show a significant reduction when calculated within GGA. In contrast, the GGA+$U$ values deviate relatively little from the bulk moments.
\begin{figure}
\includegraphics[width=7.0cm]{./fig_5.eps}
\caption{(Color online) Schematic illustration of the initial Au and Pd adsorption places at the Fe$_{\rm tet1}$ terminated surface in a top (left) and side (right) view.} \label{fig_sites_Fetet1}
\end{figure}
\begin{figure}
\includegraphics[width=6.5cm]{./fig_6.eps}
\caption{(Color online) Relaxation of the interplanar distance after Au/Pd adsorption in most stable sites on the Fe$_{\rm tet1}$ terminated Fe$_3$O$_4$(111) surface. } \label{fig_relaxAuPd_Fe}
\end{figure}
\subsection{Adsorption on the Fe$_{\rm tet1}$ terminated surface}
The adsorbate atoms were initially placed well above four different sites (Fig. \ref{fig_sites_Fetet1}) of the Fe$_{\rm tet1}$-terminated surface: site A, on top of the Fe$_{\rm tet1}$ atom; site D, above the O$_1$ atom; and the hollow sites B and C, a deep hollow over the O$_2$ atom and a hollow above the Fe$_{\rm oct1}$ atom, respectively. All of the adsorption sites turned out to be stable after structure relaxation. The most stable position for Au adsorption is site A, whereas Pd atoms adsorb preferentially in site C.
The changes in the surface relaxation due to Au/Pd adsorption are presented in Fig. \ref{fig_relaxAuPd_Fe}. The adsorption of Au and Pd in any considered adsorption site tends to suppress the relaxation of the first interplanar distance of the Fe$_{\rm tet1}$ termination compared to the clean surface. The relaxation patterns resulting from GGA and GGA+$U$ calculations are very similar, with the relaxation of the first four layers being larger for GGA+$U$. For Au adsorbed in the most preferred on-top site A, the large contraction of the clean surface (35\%, GGA; 40\%, GGA+$U$) is converted into a distinct expansion ($\sim$10\%, GGA; $\sim$15\%, GGA+$U$), which makes the separation larger than the respective one in the bulk crystal. A Pd adatom in the on-top site A causes much smaller changes of the interplanar distance, which remains contracted compared to that in bulk magnetite. In general, the changes in the surface geometry due to Pd adsorption are much smaller than those caused by Au. A Au/Pd atom adsorbed in any other considered site does not change the relaxation pattern qualitatively. For both adsorbates, the distances between deeper oxide layers show relatively small changes.
\begin{figure}
\includegraphics[width=7.0cm]{./fig_7.eps}
\caption{(Color online) Adsorption energy, $E_{\rm ads}$, work function change, $\Delta\Phi$, and adatom-surface distance for a single X (=Au,Pd) atom adsorbed in different sites of the Fe$_{\rm tet1}$ terminated surface.} \label{fig_adsAuPd_Fe}
\end{figure}
Figure \ref{fig_adsAuPd_Fe} displays the calculated adsorption energies and adatom-surface distances for single Au/Pd atom adsorption. As can be seen, the GGA+$U$ binding is in general 0.2-0.5 eV stronger than that resulting from GGA. The binding of Pd to the Fe$_{\rm tet1}$-terminated surface is stronger than that of Au in all sites except site A. This agrees with the results for Au/Pd adsorption on the iron-terminated (0001) surface of hematite \cite{KiePab11}. The adsorption energies in different sites are less differentiated for Pd than for Au. For Au adatoms the most stable position is site A (on top of the Fe$_{\rm tet1}$ atom), whereas for Pd it is the threefold O-coordinated hollow C (Fig.~\ref{fig_sites_Fetet1}). The binding energy of Au (Table \ref{tab2_ads_Fetet1}) in the most preferred on-top site A is 1.66 eV (GGA) and 1.98 eV (GGA+$U$). In this site the Au-Fe$_{\rm tet1}$ bond is perpendicular to the magnetite surface, with a bond length of 2.43 \AA\ (GGA and GGA+$U$).
For Pd adsorption the most stable site is the threefold O-coordinated hollow site C, with an adsorption energy of 1.76 eV (GGA) and 2.20 eV (GGA+$U$). The respective Pd-O bond lengths are 2.18 {\AA}, 2.46 {\AA}, and 2.46 {\AA}, not much smaller than the distance (2.68 \AA) between the Pd and the Fe$_{\rm oct1}$ atom in the third surface layer.
\begin{table}
\caption{Adsorption energy, $E_{\rm ads}$, adatom-surface distance, $d_{\rm X-Fe_{tet1}}$, and work function change, $\Delta\Phi$, for X (=Au,Pd) adatom in most stable sites on the Fe$_{\rm tet1}$-terminated surface. For each quantity the left and right hand side columns display GGA and GGA+$U$ values, respectively.} \label{tab2_ads_Fetet1}
\begin{ruledtabular}
\begin{tabular}{c|cc|cc|cc}
X (site) & \multicolumn{2}{c|}{$E_{\rm ads}$ (eV)} & \multicolumn{2}{c|}{$d_{\rm X-Fe_{tet1}}$ ({\AA})} & \multicolumn{2}{c}{$\Delta\Phi$ (eV)} \\
\hline
Au (A) & 1.66 & 1.98 & 2.41 & 2.43 & 0.84 & 0.52\\
Pd (C) & 1.76 & 2.20 & 1.06 & 1.19 & -0.67 & -1.20\\
\end{tabular}
\end{ruledtabular}
\end{table}
The stronger binding of Pd compared to Au can be understood by inspecting the layer-resolved LDOS presented in Fig.~\ref{ldos_AuPd_Fetet1}. They were calculated for Au in site A and Pd in site C, their most stable positions. The GGA LDOS of the surface layers with Au and Pd show metallic character. The principal peaks of the Pd $4d$ electron states are higher in energy and hybridize with the O $2p$ states closer to the Fermi level than the Au $5d$ peaks, which results in stronger binding of Pd. In the case of Au adsorption the metallicity of the oxide is enhanced by the Au $5d$ states in the energy range between $-3$ eV and the Fermi level, with a smaller contribution from the Au $6s$ states at energies close to the Fermi level. In the GGA+$U$ approach the adsorbed Au introduces a small density of $6s$ states at the Fermi level, which contributes to the reactivity of this surface and changes its semiconducting character. The Pd $4d$ states tend to close the energy gap on this oxide termination and convert it from a semiconductor to a semimetal. For GGA+$U$ the main weight of the Pd $4d$ states lies about 1 eV closer to the Fermi level than that of the Au $5d$ states, which again implies stronger binding of Pd than Au.
\begin{figure*}
\includegraphics[width=15cm]{./fig_8.eps}
\caption{(Color online) Local density of states for Au and Pd adsorption in most stable sites on the Fe$_{\rm tet1}$ termination resulting from GGA and GGA+$U$ calculations. LDOS of the adsorbate atom and of the three (two Fe and one O) topmost atomic layers of the magnetite (111) surface are shown. Corresponding LDOS for the clean Fe$_{\rm tet1}$ termination are displayed for comparison.}
\label{ldos_AuPd_Fetet1}
\end{figure*}
The differences between Au and Pd adsorption are also seen in the work function changes (Fig.\ \ref{fig_adsAuPd_Fe}). In all considered sites Pd lowers the work function, while Au distinctly increases it only when adsorbed in site A. In the remaining sites Au either slightly reduces the work function (GGA+$U$) or has only a very small effect on it (GGA). In general, GGA predicts a larger work function increase (by $\approx$0.3 eV) than GGA+$U$ for Au adsorption and a smaller (by 0.4-0.7 eV) lowering of the work function due to the Pd adatom. For Au adsorption in the most preferred site A, the work function increases by 0.84 eV (GGA) and 0.52 eV (GGA+$U$). In contrast, the work function of the Pd/Fe$_{\rm tet1}$ system, with Pd in the most stable site C, is lower by 0.67 eV (GGA) and 1.20 eV (GGA+$U$) than that of the clean surface.
The above work function changes are consistent with the electron charge transfer to/from the surface atoms upon adsorption of Au/Pd. The Bader charge analysis \cite{Bader90,HenAJ06} shows that a Au adatom in site A gains electrons (0.27$e$, GGA; 0.32$e$, GGA+$U$) at the expense of the surface Fe and O atoms. In the GGA+$U$ case most of the charge (0.22$e$) is donated by the Fe$_{\rm tet1}$ atom beneath the Au. In the case of Pd adsorption in site C, charge (0.42$e$, GGA; 0.36$e$, GGA+$U$) is transferred from the adatom to the surface, mostly to the Fe$_{\rm tet1}$ atom (0.13$e$, GGA; 0.05$e$, GGA+$U$).
The changes of the magnetic moments caused by Au/Pd adsorption are small and limited to substrate atoms in the 2-3 topmost layers. For Au in the most stable site A, the GGA+$U$ magnetic moment on the topmost Fe$_{\rm tet1}$ atoms is enhanced by about 0.42$\mu_{B}$ with respect to that of the clean surface. For Pd adsorption in the most preferred site C the GGA+$U$ change is negligible ($\approx$0.01$\mu_{B}$).
Small magnetic moments appear on the Au and Pd adatoms. The moment on the Au atom in the different sites is in the range of 0.14-0.20$\mu_B$ (GGA) and 0.10-0.19$\mu_{B}$ (GGA+$U$), the lowest values corresponding to the Au atom in the most stable site A, and has the same direction as the moments on the Fe atoms in the topmost Fe$_{\rm tet1}$ layer. The magnetic moment on the Pd adatom in sites A, C, and D is in the range of 0.07-0.27$\mu_{B}$ (GGA) and 0.03-0.16$\mu_{B}$ (GGA+$U$), and has the same direction as the moment on the Fe atoms in the topmost surface layer. The magnetic moment on Pd in site B is 0.41$\mu_{B}$ and 0.62$\mu_{B}$ for GGA and GGA+$U$, respectively, and is oriented upwards. The moment on the Pd adatom in the most preferred site C is the same ($-$0.09$\mu_{B}$) in the GGA and GGA+$U$ calculations.
\begin{figure}
\includegraphics[width=7.0cm]{./fig_9.eps}
\caption{(Color online) Schematic illustration of the initial Au and Pd adsorption places at the O$_2$-terminated Fe$_3$O$_4$(111) surface in a top and side view.}\label{fig_sites_O2}
\end{figure}
\subsection{Adsorption on the O$_2$ terminated surface}
For the oxygen termination the following four adsorption sites (Fig.~\ref{fig_sites_O2}) were considered: the hollow over the Fe$_{\rm oct2}$ atom (site A), a deep hollow over the O$_1$ atom (site B), a hollow over the Fe$_{\rm tet2}$ atom (site C), and the on-top site above the lower O$_2$ atom (site D). All of the considered positions remained stable during structure optimization, and the energetically preferred site for both Au and Pd adsorption is site B, where the adsorbate atom is coordinated by three O atoms.
The adsorbate-induced changes in the relaxation pattern resulting from GGA and GGA+$U$ calculations are qualitatively and quantitatively similar for both adsorbates (Fig.~\ref{fig_relax_AuPd_O2}). The adsorption of a Au/Pd atom on the O$_2$-terminated surface suppresses the relaxations of the first two interplanar distances. The contraction of the first interplanar distance is reduced by one half with respect to the clean surface, but the distance still remains 25\% shorter than in the bulk crystal. The changes in the relaxations of the deeper distances are rather small. Interestingly, the difference in the sign of the relaxation of the fourth interlayer distance between the GGA and GGA+$U$ results disappears in the adsorbate-oxide system.
\begin{figure}
\includegraphics[width=6.5cm]{./fig_10.eps}
\caption{(Color online) Interlayer relaxation (in \%) of the interplanar distance after Au and Pd adsorption in most stable sites on the O$_2$ terminated Fe$_3$O$_4$(111) surface.} \label{fig_relax_AuPd_O2}
\end{figure}
The adsorption binding energies in the considered sites and the adatom distances to the topmost oxide layer are displayed in Fig.~\ref{fig_ads_AuPd_O2}. Additionally, Table \ref{tab3_ads_O2} lists the numerical values for the most stable site. In general, the GGA+$U$ binding is up to about 2 eV stronger than that calculated within GGA. As for the Fe$_{\rm tet1}$ termination, Pd atoms bind more strongly than Au to the O$_2$-terminated surface. The Au binding energy is 1.74 eV (GGA) and 3.66 eV (GGA+$U$), whereas for Pd it is 3.39 eV (GGA) and 4.87 eV (GGA+$U$). The bond lengths between Au and the three coordinating oxygens are not symmetric: 1.99, 2.07, and 2.07 {\AA} for GGA+$U$ (2.03, 2.29, and 2.29 {\AA} for GGA). These values are very close to the corresponding Pd-O bond lengths of 1.95, 2.10, and 2.10 {\AA}.
\begin{figure}
\includegraphics[width=7.0cm]{./fig_11.eps}
\caption{(Color online) The same as in Fig. \ref{fig_adsAuPd_Fe} but for the O$_2$ terminated surface.}\label{fig_ads_AuPd_O2}
\end{figure}
\begin{table}
\caption{Adsorption energy, $E_{\rm ads}$, adatom-surface distance, $d_{\rm X-O_2}$, and work function change, $\Delta\Phi$, for X (=Au,Pd) adatom in most stable sites on the O$_{2}$-terminated surface. For each quantity the left and right hand side columns display GGA and GGA+$U$ values, respectively.} \label{tab3_ads_O2}
\begin{ruledtabular}
\begin{tabular}{c|cc|cc|cc}
X (site) &\multicolumn{2}{c|}{$E_{\rm ads}$ (eV)} & \multicolumn{2}{c|}{$d_{\rm X-O_2}$ ({\AA})} & \multicolumn{2}{c}{$\Delta\Phi$ (eV)}\\
\hline
Au (B) & 1.74 & 3.67 & 1.18 & 0.39 & -1.91 & -1.82\\
Pd (B) & 3.39 & 4.87 & 0.93 & 0.82 & -2.05 & -2.43\\
\end{tabular}
\end{ruledtabular}
\end{table}
Figure \ref{ldos_AuPd_O2} displays the layer-decomposed LDOS for Au and Pd adsorbed in the energetically favored site B. The adsorbate-covered surface remains half-metallic both in GGA and in GGA+$U$, although in the latter case a nonvanishing LDOS of the hybridized Au $5d$ and O $2p$ states at the Fermi level makes the bands look more metallic. The Au and Pd states are much more delocalized and extend over a wider range of energies than at the Fe$_{\rm tet1}$ termination. The half-metallicity of the GGA+$U$ bands of the Au/O$_2$ system is due to the minority spin band of the Fe $3d$ states. The energy gap in the majority spin band is shifted to lower energies compared to the clean surface.
Upon Au adsorption the hybridization of the Au 5$d$ and O $2p$ states is strongest up to about 0.5 eV below the Fermi level while for the Pd $4d$ states the hybridization with the O $2p$ states occurs up to the Fermi level. This stronger (compared to that of Au) hybridization of the oxide and Pd states at energies closer to the Fermi level explains the stronger binding of Pd than Au to the O$_2$ terminated surface (Fig.~\ref{fig_ads_AuPd_O2}).
The resulting LDOS are dominated by the O $2p$ electron states of the surface O$_2$ layer hybridized with the Au 5$d$ or Pd $4d$ states. The states of Fe atoms of the underlying Fe$_{\rm tet2}$ and Fe$_{\rm oct1}$ layer contribute significantly only to the LDOS at the lower energy range, below $-6$ eV.
For Pd adsorption the LDOS on the Pd atom is more localized than on Au, and its main weight lies between $-1.5$ eV and the Fermi level. In contrast to the clean termination, in the presence of adsorbed Pd the energy gap in the minority spin band is responsible for the half-metallic character of the surface.
\begin{figure*}
\includegraphics[width=15cm]{./fig_12.eps}
\caption{(Color online) Same as in Fig.~\ref{ldos_AuPd_Fetet1} but for Au/Pd adsorption on the O$_2$ termination. }
\label{ldos_AuPd_O2}
\end{figure*}
The work function of the oxygen-terminated surface decreases dramatically upon Au/Pd adsorption (Fig.~\ref{fig_ads_AuPd_O2}). Au adsorption in the most preferred site B decreases the work function by about 1.9 eV (GGA) and 1.73 eV (GGA+$U$). For Pd adsorption this decrease is even larger, 2.0 eV (GGA) and 2.4 eV (GGA+$U$), which amounts to a nearly 30\% reduction with respect to the clean-surface value. The reduction of the work function is connected with a decrease of the surface dipole moment and indicates a decrease in polarity.
For both Au and Pd a charge transfer from the adsorbate to the substrate is obtained. This differs from adsorption on the iron-terminated surface, where a transfer from the adsorbate to the surface is observed only for Pd, but it is consistent with the large work function decrease described above. The Au adatom loses 0.70$e$ (GGA) or 0.77$e$ (GGA+$U$), and Pd loses 0.89$e$ (GGA) or 0.92$e$ (GGA+$U$). Most of this charge is transferred to the atoms of the O$_2$ layer closest to the adatom: 0.08$e$ (GGA) and 0.30$e$ (GGA+$U$) for Au, and 0.28$e$ (GGA) and 0.25$e$ (GGA+$U$) for Pd adsorption.
The magnetic moments on the oxygen atoms in the topmost surface layer are only slightly affected by the presence of the adsorbate. For a Au atom in the most preferred site B, the GGA+$U$ moments on the O atoms change by 0.11-0.16$\mu_{B}$. A Pd atom in the same site induces smaller changes (0.02-0.07$\mu_B$). For both adsorbates in the most stable site B, the moment of one of the O atoms in the topmost substrate layer is oriented downward, opposite to the direction of the remaining three O atoms, as on the clean O$_2$-terminated surface.
The changes on the Fe$_{\rm tet2}$ atoms of the first subsurface layer resulting from the GGA+$U$ calculations for Au and Pd adsorption are -0.40$\mu_{B}$ and -0.29$\mu_{B}$, respectively. A decreased magnitude of the magnetic moments of the Fe$_{\rm tet2}$ atoms makes them comparable with those in the bulk magnetite. The changes of moments of atoms in deeper layers compared with the bulk are negligible.
On the O$_2$-terminated surface the magnetic moments on the Au atoms differ from those on the Pd atoms. The moments on the Au atom are lower than 0.1$\mu_{B}$; only the GGA magnetic moments of the Au atom in sites B and C are larger (0.13$\mu_B$ and $-$0.15$\mu_B$, respectively). The moment on Au in the most preferred site B is $-$0.02$\mu_{B}$ (GGA+$U$). The magnetic moments on Pd are much larger than those on Au and lie in the range 0.4-0.9$\mu_{B}$. In the most stable site B the magnetic moment on Pd is 0.81$\mu_{B}$ (GGA) and 0.92$\mu_{B}$ (GGA+$U$).
\section{Summary and conclusion}
We have presented a detailed DFT and DFT+$U$ study of the structural, electronic, and magnetic properties of the clean magnetite (111) surface and of the adsorption of Au and Pd atoms on its two stable terminations, Fe$_{\rm tet1}$ and O$_2$. Inclusion of on-site Coulomb correlations in the GGA+$U$ approach profoundly modifies the electronic structure of the magnetite surfaces. Based on \textit{ab initio} thermodynamics, the Fe$_{\rm tet1}$-terminated surface is confirmed to be the most stable one over a broad range of oxygen pressures. It shows metallic character when calculated within GGA and a half-metallic DOS, with vanishing LDOS at the Fermi level, when calculated within GGA+$U$. All terminations studied exhibit a large inward relaxation of the first interlayer distance. Both Au and Pd bind strongly to the magnetite surface and induce large changes in the surface geometry. At the Fe$_{\rm tet1}$ termination different sites are favored for the adsorption of Au and Pd: for Au the most favorable site is on top of the Fe$_{\rm tet1}$ atom, whereas for Pd it is a threefold-coordinated hollow. At the O$_2$ termination the threefold hollow site B, where the adatom is coordinated by three O atoms, is the most stable one for both Au and Pd adsorption.
The GGA+$U$ bonding is 0.2-0.5 eV and 0.5-1.5 eV stronger on the iron- and oxygen-terminated surfaces, respectively, than that resulting from standard GGA calculations. The binding is stronger for Pd than for Au and, for both adsorbates, is distinctly stronger on the oxygen- than on the iron-terminated surface.
The Au/Pd bonding to both the iron- and oxygen-terminated (111) surfaces of magnetite closely resembles that reported by us for Au/Pd adsorption on the hematite (0001) surface terminations, with respect to both the site preference and the strength of the bonding \cite{KiePab11}.
\begin{acknowledgments}
This work was supported by the Polish Ministry of Science and Higher Education in the years 2008-11 under Grant No.\ N N202 072535. We are grateful to Ernst Bauer for useful comments. AK acknowledges the hospitality of Risto Nieminen, Aalto University, Finland, and the access to high performance computers of the CSC Espoo, Finland, under the HPC-Europa2 project (No.\ 228398) with the support of the European Commission -- Capacities Area -- Research Infrastructures. We also acknowledge provision of computer time from the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) of the Warsaw University within the Project No.\ G44-23.
\end{acknowledgments}
2002.11277
\section{Proof of Theorem~\ref{th:kronecker}}\label{app:th:kronecker}
Note that the current form of the objective function \eqref{learn_kronecker} can be expressed as
\begin{align} \label{eq:smoothness:kronecker}
&\frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \bar{\bx}_{m}^{T} (\underset{[K_{0}]}{\otimes} \bW_{i}) \bone - \bx_{m}^{T} (\underset{[K_{0}]}{\otimes} \bW_{i}) \bx_{m} \nonumber\\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \langle \bar{\cX}_{m},\one \underset{[K_{0}]}{\times} \bW_{i} \rangle - \langle \cX_{m},\cX_{m} \underset{[K_{0}]}{\times} \bW_{i} \rangle
\end{align}
using the properties of the Kronecker product. Let us define the following: the tensor $\cY_{m} = \cX_{m} \underset{[K_{0}] \setminus k}{\times} \bW_{i}^{1/2} \underset{k}{\times} \bI_{k}$, the matrix $\bS_{k} = \frac{1}{M_{0}}\sum_{m=1}^{M_{0}} \cY_{m(k)} \cY_{m(k)}^{T}$, $\bar{\bd}_{k}$ as a vector of weighted degrees of the product adjacency matrix $\underset{[K_{0}] \setminus k}{\otimes} \bW_{i}$, $\bd_{k}$ as the vector of weighted degrees of $\bW_{k}$, $\bar{\by}_{mj}$ as the $j$-th column of $\bar{\cX}_{m(k)}$, $\by_{mj}$ as the $j$-th column of $\cX_{m(k)}$, and finally the matrix $\bar{\bS}_{k} = \frac{1}{M_{0}}\sum_{m=1}^{M_{0}}\sum_{j=1}^{n/n_{k}} (\bar{\bd}_{k})_{j} \by_{mj} \by_{mj}^{T}$. Then we can further express the terms in \eqref{eq:smoothness:kronecker} as:
\begin{align} \label{eq:smoothness:kronecker:1}
& \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \langle \cX_{m},\cX_{m} \underset{[K_{0}]}{\times} \bW_{i} \rangle \nonumber\\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \tr \big( \bW_{k} \cX_{m(k)} (\underset{[K_{0}] \setminus k}{\otimes} \bW_{i}) \cX_{m(k)}^{T} \big) \nonumber \\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \tr \big( \bW_{k} \cY_{m(k)} \cY_{m(k)}^{T} \big) = \alpha~\tr\big( \bW_{k} \bS_{k} \big),
\end{align}
and,
\begin{align} \label{eq:smoothness:kronecker:2}
& \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \langle \bar{\cX}_{m},\one \underset{[K_{0}]}{\times} \bW_{i} \rangle \nonumber\\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \tr \big( \bW_{k} \bar{\cX}_{m(k)} (\underset{[K_{0}] \setminus k}{\otimes} \bW_{i}) \one_{(k)}^{T} \big) \nonumber \\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \tr \big( \bW_{k} \bar{\cX}_{m(k)} \bar{\bd}_{k} \bone^{T} \big) = \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \tr \big( \bd_{k}^{T} \bar{\cX}_{m(k)} \bar{\bd}_{k} \big) \nonumber \\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \tr \big( \bd_{k}^{T} [\bar{\by}_{m1}, \bar{\by}_{m2}, \cdots, \bar{\by}_{mn/n_{k}}] \bar{\bd}_{k} \big) \nonumber \\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}}~\underset{j = 1}{\overset{n/n_{k}}{\sum}} \tr \big( (\bar{\bd}_{k})_{j} \by_{mj}^{T} \bD_{k} \by_{mj} \big) = \alpha~\tr(\bD_{k} \bar{\bS}_{k}).
\end{align}
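As a sanity check, the mode-$k$ trace identity used in the first step of \eqref{eq:smoothness:kronecker:1} can be verified numerically in the two-factor case ($K_{0}=2$, $k=1$). The sketch below is illustrative only; the variable names are ours, and row-major vectorization is assumed so that the mode-1 matricization of a signal tensor is the matrix itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 4

def random_adjacency(n):
    # Symmetric, nonnegative, zero-diagonal weight matrix.
    A = rng.random((n, n))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    return A

W1, W2 = random_adjacency(n1), random_adjacency(n2)
X = rng.standard_normal((n1, n2))  # one graph signal, viewed as an n1 x n2 tensor
x = X.reshape(-1)                  # row-major vec: the mode-1 matricization is X itself

# Quadratic form with the full Kronecker adjacency matrix ...
lhs = x @ np.kron(W1, W2) @ x
# ... equals the mode-1 trace form tr(W1 X W2 X^T).
rhs = np.trace(W1 @ X @ W2 @ X.T)
assert np.isclose(lhs, rhs)
```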
Moreover, for the terms in \eqref{eq:smoothness:kronecker:1} and \eqref{eq:smoothness:kronecker:2}, the difference from their expected values takes the form of:
\begin{align} \label{eq:smoothness:kronecker:3}
&\alpha~\Big|\tr\big( \bW_{k} \bS_{k} \big) - \E_{\bx} \big[ \tr\big( \bW_{k} \bS_{k} \big) \big]\Big| \nonumber\\
&= \alpha~\Big| \tr\big( \bW_{k} \bS_{k} \big) - \tr\big( \bW_{k} \E_{\bx}[\bS_{k}]\big) \Big| \nonumber\\
&= \alpha~\Big| \tr\big( \bW_{k} (\bS_{k} - \E_{\bx}[\bS_{k}]) \big) \Big| \nonumber\\
&= \alpha~\Big| \underset{i,j}{\overset{ }{\sum}} \big( \bW_{k} \big)_{i,j} \big( \bS_{k} - \E_{\bx}[\bS_{k}] \big)_{j,i} \Big| \nonumber\\
&\leq \alpha~\underset{i,j}{\max}\Big|(\bS_{k} - \E_{\bx}[\bS_{k}])_{i,j}\Big|~\underset{i,j}{\overset{ }{\sum}} \big( \bW_{k} \big)_{i,j} \nonumber\\
&= \alpha n_{k}~\underset{i,j}{\max}\Big|(\bS_{k} - \E_{\bx}[\bS_{k}])_{i,j}\Big| \nonumber\\
&\leq C_{1} \frac{n_{k}^{2}\log(n_{k})}{nM_{0}} \Big({1+\big\|\underset{[K_{0}] \setminus k}{\otimes} \whbW_{i} - \underset{[K_{0}] \setminus k}{\otimes} \bW_{i}^{*} \big\|_{F}} \Big),
\end{align}
and
\begin{align} \label{eq:smoothness:kronecker:4}
&\alpha~\Big|\tr\big( \bD_{k} \bar{\bS}_{k} \big) - \E_{\bx} \big[ \tr\big( \bD_{k} \bar{\bS}_{k} \big) \big]\Big| \nonumber\\
&\leq \alpha~\underset{i,j}{\max}\Big|(\bar{\bS}_{k} - \E_{\bx}[\bar{\bS}_{k}])_{i,j}\Big|~\underset{i,j}{\overset{ }{\sum}} \big( \bD_{k} \big)_{i,j} \nonumber\\
&= \alpha n_{k}~\underset{i,j}{\max}\Big|(\bar{\bS}_{k} - \E_{\bx}[\bar{\bS}_{k}])_{i,j}\Big| \nonumber\\
&\leq C_{2} \frac{n_{k}^{2} \log(n_{k}) }{nM_{0}} \Big({1+\big\|\underset{[K_{0}] \setminus k}{\otimes} \whbW_{i} - \underset{[K_{0}] \setminus k}{\otimes} \bW_{i}^{*} \big\|_{F}} \Big),
\end{align}
Both inequalities hold with probability exceeding $1-4n_{k}^2 \Big[\exp\big(\frac{-nM_{0}}{2n_{k}}\big) + \exp\big(-(0.25+\sqrt{\log n_{k}})^2\big)\Big]$; the details of the last inequalities of \eqref{eq:smoothness:kronecker:3} and \eqref{eq:smoothness:kronecker:4} can be found in \cite[Lemma B.1]{sun2015non}. Here $\whbW_{i}$ is the current estimate of the $i$-th adjacency matrix, while $\bW_{i}^{*}$ is the original $i$-th generating factor. The last inequalities in both expressions follow from \cite{sun2015non} because the estimates $\whbW_{i}$ of the other factors are used when estimating the $k$-th factor, and by choosing $\alpha \leq \sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}}$.
With these bounds, the error between the sample-based objective and the population-based objective, defined as $\tr\big( \bW_{k} \bS_{k} \big) - \tr\big( \bD_{k} \bar{\bS}_{k} \big)$ with $M_{0}$ samples and as $\E_{\bx} \big[ \tr\big( \bW_{k} \bS_{k} \big) \big] - \E_{\bx} \big[ \tr\big( \bD_{k} \bar{\bS}_{k} \big) \big]$ with infinitely many samples, respectively, can be upper bounded as:
\begin{align}
&\alpha~\Big|\tr\big( \bW_{k} \bS_{k} \big) - \tr\big( \bD_{k} \bar{\bS}_{k} \big) \nonumber\\
&- \E_{\bx} \big[ \tr\big( \bW_{k} \bS_{k} \big) \big] + \E_{\bx} \big[ \tr\big( \bD_{k} \bar{\bS}_{k} \big) \big]\Big| \nonumber\\
&\leq \alpha~\Big|\tr\big( \bW_{k} \bS_{k} \big) - \E_{\bx} \big[ \tr\big( \bW_{k} \bS_{k} \big) \big]\Big| \nonumber\\
&\quad + \alpha~\Big|\tr\big( \bD_{k} \bar{\bS}_{k} \big) - \E_{\bx} \big[ \tr\big( \bD_{k} \bar{\bS}_{k} \big) \big]\Big| \nonumber\\
&\leq C \frac{n_{k}^{2}\log(n_{k})}{nM_{0}} \Big({1+\big\|\underset{[K_{0}] \setminus k}{\otimes} \whbW_{i} - \underset{[K_{0}] \setminus k}{\otimes} \bW_{i}^{*} \big\|_{F}} \Big).
\end{align}
To derive the bound on the error between the factor estimate $\whbW_{k}$ and the original generating $\bW_{k}^{*}$, let us first define the following convex function of $\bDelta$:
\begin{align} \label{eq:app:th:kronecker:FDelta}
F_{k}(\bDelta) &= \alpha~\tr\big( \bar{\bS}_{k} \diag(\bDelta \bone) \big) - \alpha~\tr\big( \bS_{k} \bDelta \big),
\end{align}
where $\bDelta = \bW_{k} - \bW_{k}^{*}$, and $\diag(\bDelta \bone) = \bD_{k} - \bD_{k}^{*}$.
Now, we want to prove that $F_{k}(\bDelta) > 0$ for every $\bDelta \in \R^{n_{k}\times n_{k}}$ with $\|\bDelta\|_{F} = \|\bW_{k} - \bW_{k}^{*}\|_{F} = R\sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}}$, where $R > 0$ is a constant.
Consider $F_{k}(\cdot)$ at $\whbDelta = \whbW_{k} - \bW_{k}^{*}$, which is the minimizer of $F_{k}(\bDelta)$ because $\whbW_{k}$ is the minimizer of our factor-wise minimization of \eqref{learn_kronecker}. Then we have:
\begin{align}
F_{k} (\whbDelta) &= \alpha \tr\big( \whbD_{k} \bar{\bS}_{k} - \whbW_{k} \bS_{k} \big) - \alpha \tr\big( \bD_{k}^{*} \bar{\bS}_{k} - \bW_{k}^{*} \bS_{k} \big), \nonumber\\
&\leq F_{k} (\bzero) = \alpha \tr\big( \bD_{k}^{*} \bar{\bS}_{k} - \bW_{k}^{*} \bS_{k} \big) - \nonumber\\
&\quad\quad\quad\quad\quad\quad \alpha \tr\big( \bD_{k}^{*} \bar{\bS}_{k} - \bW_{k}^{*} \bS_{k} \big) = 0.
\end{align}
If we can prove that $F_{k}(\bDelta) > 0$ for all $\bDelta \in \R^{n_{k}\times n_{k}}$ with the prescribed norm, then, since $F_{k} (\whbDelta) \leq 0$ and $F_{k}$ is convex, $\whbDelta$ must satisfy $\|\whbDelta\|_{F} < R\sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}}$.
To see that $F_{k}(\bDelta) > 0$ for $\bDelta \in \R^{n_{k}\times n_{k}}$ with the prescribed norm, first consider the following, which uses a trace inequality for the product of matrices \cite{fang1994inequalities}:
\begin{align}
&\tr\big( \bar{\bS}_{k} \diag(\bDelta \bone) \big) \geq \lambda_{n_{k}}(\bar{\bS}_{k}) \tr\big( \diag(\bDelta \bone) \big) \nonumber\\
&\quad\quad\geq \lambda_{n_{k}}(\bar{\bS}_{k}) \|\diag(\bDelta \bone) \|_{F} = \lambda_{n_{k}}(\bar{\bS}_{k}) \|\bDelta \bone \|_{F} > 0
\end{align}
since $\|\bDelta \|_{F} > 0$, and where $\lambda_{n_{k}}(\bar{\bS}_{k})$ is the minimum eigenvalue of $\bar{\bS}_{k}$.
Secondly, one can also see that:
\begin{align}
\tr\big( \bS_{k} \bDelta \big) &\leq \lambda_{1}(\bS_{k}) \tr\big( \bDelta \big) = 0,
\end{align}
where $\lambda_{1}(\bS_{k})$ is the largest eigenvalue of $\bS_{k}$, and $\tr\big( \bDelta \big) = 0$ because of the adjacency constraints. Using the upper and lower bounds on the trace terms in \eqref{eq:app:th:kronecker:FDelta}, one can see that $F_{k}(\bDelta) > 0$, which completes the proof. \qed
\section{Proof of Theorem~\ref{th:cartesian}}\label{app:th:cartesian}
Let us focus on the Cartesian objective function in \eqref{learn_cartesian}:
\begin{align} \label{eq:smoothness:cartesian}
&\frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m}^{T} (\underset{[K_{0}]}{\oplus} \bW_{i}) \bx_{m} \nonumber\\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \underset{k = 1}{\overset{K_{0}}{\sum}} \bx_{m}^{T} ( (\underset{[k-1]}{\otimes} \bI_{i}) \otimes \bW_{k} \otimes (\underset{[K_{0}] \setminus [k]}{\otimes} \bI_{j}) ) \bx_{m}\nonumber\\
&= \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \underset{k = 1}{\overset{K_{0}}{\sum}} \langle \cX_{m},\cX_{m} \underset{k}{\times} \bW_{k} \rangle \nonumber\\
&= \frac{\alpha}{M_{0}} \underset{m = 1}{\overset{M_{0}}{\sum}} \underset{k = 1}{\overset{K_{0}}{\sum}} \tr \big( \bW_{k} \cX_{m(k)} \cX_{m(k)}^{T} \big) = \alpha \underset{k = 1}{\overset{K_{0}}{\sum}} \tr\big( \bW_{k} \bT_{k} \big),
\end{align}
where the matrix $\bT_{k} = \frac{1}{M_{0}}\sum_{m=1}^{M_{0}} \cX_{m(k)} \cX_{m(k)}^{T}$. Similar steps along the lines of \eqref{eq:smoothness:kronecker:2} can be followed to arrive at $\alpha/M_{0} \sum_{m=1}^{M_{0}} \bx_{m}^{T} (\underset{[K_{0}]}{\oplus} \bD_{i}) \bx_{m} = \alpha \sum_{k=1}^{K_{0}} \tr\big( \bD_{k} \bar{\bT}_{k} \big)$. Thus, the objective function can be expressed as a sum of terms, each of which depends on only one of the factor adjacency matrices $\bW_{k}$. As in Appendix~\ref{app:th:kronecker}, using \cite[Lemma B.1]{sun2015non} we can see that, with high probability:
\begin{align}
&\underset{i,j}{\max}\Big|(\bT_{k} - \E_{\bx}[\bT_{k}])_{i,j}\Big| \leq C_{1} \frac{n_{k}^{2}\log(n_{k})}{nM_{0}},
\end{align}
since each term in the objective function \eqref{eq:smoothness:cartesian} is dependent on only one factor adjacency matrix.
After this, one can follow the steps in Appendix~\ref{app:th:kronecker} to obtain the final error bounds. \qed
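The separable form derived in \eqref{eq:smoothness:cartesian} can also be checked numerically for two factors. The sketch below uses our own variable names and row-major vectorization, so that the mode-1 matricization is $\bX$ and the mode-2 matricization is $\bX^{T}$; it verifies that $\bx^{T}(\bW_{1}\oplus\bW_{2})\bx = \tr(\bW_{1}\bX\bX^{T}) + \tr(\bW_{2}\bX^{T}\bX)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 3, 4

def random_adjacency(n):
    # Symmetric, nonnegative, zero-diagonal weight matrix.
    A = rng.random((n, n))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    return A

W1, W2 = random_adjacency(n1), random_adjacency(n2)
X = rng.standard_normal((n1, n2))
x = X.reshape(-1)  # row-major vec: mode-1 matricization is X, mode-2 is X^T

# Cartesian product adjacency: W1 kron I + I kron W2.
W_cart = np.kron(W1, np.eye(n2)) + np.kron(np.eye(n1), W2)

lhs = x @ W_cart @ x
# Separable form: one trace term per factor, tr(W_k X_(k) X_(k)^T).
rhs = np.trace(W1 @ X @ X.T) + np.trace(W2 @ X.T @ X)
assert np.isclose(lhs, rhs)
```

The separability is what allows the Cartesian factors to be updated independently, as noted after Theorem~\ref{th:cartesian}.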
\section{Proof of Theorem~\ref{th:strong}}\label{app:th:strong}
Focusing again on the objective in \eqref{learn_strong}, the terms involving only the $k$-th factor $\bL_{k}$ can be expressed as:
\begin{align} \label{eq:smoothness:strong}
&\frac{\alpha}{M_{0}}~\sum_{m = 1}^{M_0} \sum_{j = 0}^{K_{0}-1} \sum_{\bp \in \bP(k,j)} \cX_{m} {:} (\cX_{m} \times_{\bp} \bW_{\bp} \times_{k} \bW_{k}) \nonumber\\
&= \frac{\alpha}{M_{0}}~\sum_{m = 1}^{M_0} \sum_{j = 0}^{K_{0}-1} \sum_{\bp \in \bP(k,j)} \tr \big( \bW_{k} \cX_{m(k)} \bM_{\bp} \cX_{m(k)}^{T} \big) \nonumber \\
&= \frac{\alpha}{M_{0}}~\sum_{m = 1}^{M_0} \tr \big( \bW_{k} \cX_{m(k)} \Big[ \sum_{j = 0}^{K_{0}-1} \sum_{\bp \in \bP(k,j)} \bM_{\bp} \Big] \cX_{m(k)}^{T} \big) \nonumber \\
&= \frac{\alpha}{M_{0}}~\sum_{m = 1}^{M_0} \tr \big( \bW_{k} \cX_{m(k)} \bQ_{k} \cX_{m(k)}^{T} \big) = \alpha~\tr\big( \bW_{k} \bZ_{k} \big),
\end{align}
where $\bp$ denotes a column of the matrix $\bP(k,j)$, $\sum_{\bp \in \bP(k,j)}$ denotes the summation over the columns of $\bP(k,j)$, and the columns of $\bP(k,j)$ are the different combinations of indices given by $C^{[1,\dots,K_0]\setminus[k]}_{j}$. Additionally, $\bM_{\bp}$ denotes a matrix of size $(n/n_{k}) \times (n/n_{k})$ that contains the appropriate Kronecker product of identity matrices and factor adjacency matrices in accordance with the entries of the vector $\bp$, $\bQ_{k} = \sum_{j = 0}^{K_{0}-1} \sum_{\bp \in \bP(k,j)} \bM_{\bp}$, and $\bZ_{k} = \frac{1}{M_{0}}\sum_{m = 1}^{M_0} \cX_{m(k)} \bQ_{k} \cX_{m(k)}^{T}$.
Expressing the objective as a sum of terms that all contain the $k$-th factor facilitates the process of solving factor-wise problems.
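To make the matrices $\bM_{\bp}$ concrete, consider $K_{0}=2$ and $k=1$: the only index combinations are the empty one ($j=0$, giving $\bM_{\bp}=\bI$) and $\bp=(2)$ ($j=1$, giving $\bM_{\bp}=\bW_{2}$), so the inner bracket collapses to $\bI + \bW_{2}$. The sketch below (our variable names, row-major vectorization) checks that the terms of $\bx^{T}(\bW_{1}\boxtimes\bW_{2})\bx$ containing $\bW_{1}$ equal $\tr\big(\bW_{1}\cX_{(1)}(\bI+\bW_{2})\cX_{(1)}^{T}\big)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 4

def random_adjacency(n):
    # Symmetric, nonnegative, zero-diagonal weight matrix.
    A = rng.random((n, n))
    A = (A + A.T) / 2
    np.fill_diagonal(A, 0.0)
    return A

W1, W2 = random_adjacency(n1), random_adjacency(n2)
X = rng.standard_normal((n1, n2))
x = X.reshape(-1)

I1, I2 = np.eye(n1), np.eye(n2)
# Strong product adjacency: Kronecker plus Cartesian.
W_strong = np.kron(W1, W2) + np.kron(W1, I2) + np.kron(I1, W2)

# Sum of the M_p matrices for k = 1: identity (j = 0) plus W2 (j = 1).
Q1 = I2 + W2
terms_with_W1 = np.trace(W1 @ X @ Q1 @ X.T)

# The only term of the quadratic form not containing W1 is the I kron W2 one.
lhs = x @ W_strong @ x - x @ np.kron(I1, W2) @ x
assert np.isclose(lhs, terms_with_W1)
```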
To derive the error bounds, note that the objective function in \eqref{learn_strong} can be expressed as $\frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \big( \bar{\bx}_{m}^{T} (\underset{[K_{0}]}{\boxtimes} \bW_{i}) \bone - \bx_{m}^{T} (\underset{[K_{0}]}{\boxtimes} \bW_{i}) \bx_{m} \big)$.
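This is the familiar decomposition of the Laplacian quadratic form: assuming, as in the usual smoothness measure, that $\bar{\bx}$ denotes the elementwise square of $\bx$, one has $\bx^{T}\bL\bx = \bar{\bx}^{T}\bW\bone - \bx^{T}\bW\bx$ for any weighted adjacency matrix $\bW$ with Laplacian $\bL$. A quick numerical check of this identity (our variable names):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# Symmetric, nonnegative, zero-diagonal adjacency matrix.
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W   # combinatorial Laplacian
x = rng.standard_normal(n)
x_bar = x * x                    # elementwise square (assumed meaning of the bar)

lhs = x @ L @ x                  # Laplacian quadratic form
rhs = x_bar @ W @ np.ones(n) - x @ W @ x
assert np.isclose(lhs, rhs)
```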
The difference of the second term in this expression from its expected value can be further expressed and upper bounded as:
\begin{align}
&\frac{\alpha}{M_{0}} \Big|\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m}^{T} (\underset{[K_{0}]}{\boxtimes} \bW_{i}) \bx_{m} - \E_{\bx} \big[ \bx_{m}^{T} (\underset{[K_{0}]}{\boxtimes} \bW_{i}) \bx_{m} \big]\Big| \nonumber \\
&=\frac{\alpha}{M_{0}} \Big|\tr \Big( (\underset{[K_{0}]}{\boxtimes} \bW_{i}) \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T}\Big) - \nonumber \\
&\quad\quad\quad\quad\quad\quad \tr \Big( (\underset{[K_{0}]}{\boxtimes} \bW_{i}) \E_{\bx} \big[\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T}\big] \Big) \Big| \nonumber \\
&\leq \frac{\alpha \lambda_{1}(\bW_{k})}{M_{0}} \Big|\tr \Big( ( \underset{[k-1]}{\boxtimes} \bW_{i} \boxtimes \bI_{k} \underset{[K_{0}] \setminus [k]}{\boxtimes} \bW_{j} ) \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T}\Big) \nonumber \\
&\quad\quad - \tr \Big( ( \underset{[k-1]}{\boxtimes} \bW_{i} \boxtimes \bI_{k} \underset{[K_{0}] \setminus [k]}{\boxtimes} \bW_{j} ) \E_{\bx} \big[\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T}\big] \Big) \Big| \nonumber \\
&\leq C_{1} \frac{n_{k}\log(n)}{M_{0}} \Big({1+2\big\|\underset{[K_{0},k]}{\boxtimes} \whbW_{i} - \underset{[K_{0}]}{\boxtimes} \bW_{i}^{*} \big\|_{F}} \Big),
\end{align}
where $\underset{[K_{0},k]}{\boxtimes} \whbW_{i} = \underset{[k-1]}{\boxtimes} \whbW_{i} \boxtimes \bI_{k} \underset{[K_{0}] \setminus [k]}{\boxtimes} \whbW_{j}$, and the last inequality follows with high probability from \cite[Lemma B.1]{sun2015non}. With this bound, the remaining steps are similar to the proof of Theorem~\ref{th:cartesian}, and are thus omitted in the interest of space. \qed
\section{Proof of Theorem~\ref{th:convergence}}\label{app:th:convergence}
To prove this theorem,
we first note that since each factor-wise problem is convex, the update for each factor-wise problem is guaranteed to converge to its minimum.
The rate of convergence can then be established through the work in \cite{xu2013block}, which provides convergence guarantees and rates of convergence of block coordinate descent for multiconvex objectives. Each factor-wise problem \eqref{eq:lagrangian} for learning factor graphs is strongly convex. The strong convexity of the factor-wise problems, in conjunction with \cite[Theorem~2.9]{xu2013block},
implies that Alg.~\ref{alg_learn_product_graph} presented in this paper converges to a critical point at a linear rate.\qed
\end{appendices}
\section{Conclusion}\label{sec:conc}
In this paper, we introduced a new linear formulation of the graph learning problem from graph signals. We demonstrated the performance of this new formulation with numerical experiments and derived bounds on its estimation performance.
Based on the proposed formulation, we also posed the problem to learn product graphs from data. We devised a block coordinate descent based algorithm for learning product graphs, and derived the associated error bounds for various product structures. Finally, we validated the performance characteristics and superior learning capabilities of our proposed method through numerical simulations on synthetic and real datasets.
\section{Introduction} \label{sec:intro}
Graph signal processing (GSP) is an emerging field in data science and machine learning that aims to generalize existing information processing methods to data that live on an irregular domain. This underlying irregular domain can be represented as a graph, and the analysis of signals on the vertices of this graph, aptly named graph signals, is enabled by the graph shift operator (GSO). Recent developments in GSP have already established that GSO-based data processing outperforms classical signal processing for several common tasks such as noise removal, signal filtering, wavelet representations, etc. \cite{bronstein2017geometric,sandryhaila2013discrete,sandryhaila2014big,shuman2013emerging,ortega2018graph}. The GSO is at the core of graph signal processing and could refer to either the adjacency matrix or one of the many types of Laplacian matrices associated with a graph. The exact choice of the GSO depends on the signal domain and the application of interest.
The eigenvectors of the GSO provide bases for the spectral analysis of graph signals and generalize the concept of bandlimited signals to the graph domain \cite{bronstein2017geometric,sandryhaila2013discrete,sandryhaila2014big,shuman2013emerging,ortega2018graph}. The GSO also facilitates the synthesis of graph-based filters \cite{pasdeloup2017characterization,egilmez2018graph}
and plays a pivotal role in the description of the notion of \emph{smoothness} for graph signals \cite{bronstein2017geometric,sandryhaila2013discrete,sandryhaila2014big,shuman2013emerging,ortega2018graph}.
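As a minimal illustration of this spectral viewpoint, the sketch below (a toy path graph of our choosing, not from the paper) forms the combinatorial Laplacian as the GSO and projects a signal onto its eigenvectors, i.e., computes a graph Fourier transform:

```python
import numpy as np

# A tiny undirected graph: path on 4 nodes.
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W      # combinatorial Laplacian (a common GSO)

# Graph Fourier transform: project a signal onto the Laplacian eigenvectors.
eigvals, U = np.linalg.eigh(L)
x = np.array([1.0, 2.0, 3.0, 4.0])  # a graph signal
x_hat = U.T @ x                     # spectral representation
x_rec = U @ x_hat                   # inverse GFT

assert np.allclose(x_rec, x)
assert np.isclose(eigvals[0], 0.0)  # a Laplacian always has a zero eigenvalue
```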
The underlying graph (and hence the GSO) for some real-world datasets is either known a priori, or can trivially be constructed through domain knowledge. As an example, consider weather data collected over a region. In this example, different weather stations would act as nodes, their observations as graph signals, and one (possible) way to construct the graph would be to connect physically adjacent nodes.
For most real-world data, however, such a trivially constructed graph is either non-optimal or it cannot be constructed in the first place due to lack of precise knowledge about the data generation process. This presents the need to learn the true underlying graph from the data itself. In this regard, the problem of graph learning from the observed data (i.e., graph signals) has gained a lot of attention in recent years \cite{pasdeloup2017characterization,segarra2017network,egilmez2018graph,chepuri2017learning,kalofolias2017large,egilmez2017graph,dong2016learning,bronstein2017geometric}.
Graph learning refers to the problem of learning an unknown underlying graph from observed graph signals by exploiting some property of the graph signals. Traditional approaches for graph learning have proposed algorithms whose best-case complexity scales quadratically with the number of nodes in the graph \cite{dong2016learning,kalofolias2017large,kalofolias2016learn,chepuri2017learning,egilmez2017graph}. These approaches might be suitable for learning small graphs, but even for moderately sized graphs the learning cost would be prohibitive. Moreover, for learning an arbitrary graph (Laplacian), the number of parameters one needs to learn also scales quadratically with the number of nodes. Both of these problems hinder the amenability of traditional graph learning approaches to large-scale real-world datasets. Our work on graph learning, in contrast, hinges on the fact that real-world data is often generated over graphs that have an additional inherent structure. This inherent structure is dictated by either the way the data is acquired, by the arrangement of the sensors, or by the inherent relation of variables being observed \cite{sandryhaila2014big}.
Moreover, this inherent structure of the graph being considered also presents itself in the associated GSO, which can incidentally be represented in terms of the product of several smaller \emph{factor} GSOs. In this paper, we will focus on three such structures that can be described in terms of three different products termed Kronecker, Cartesian, and strong products. Although the research community is aware of the presence of these product structures in real-world graphs (and the associated GSOs) \cite{sandryhaila2014big}, it has yet to propose algorithms that incorporate the graph product structure in the graph learning procedure.\footnote{During the course of revising this paper, we became aware of a recent work~\cite{kadambari2020learning} that aims to learn a Cartesian structured graph from data. This work targets a different objective function than ours, and (additionally) our method also addresses the problems of learning Kronecker and strong graphs, both of which fall outside the scope of the method proposed in \cite{kadambari2020learning}.} Additionally, as the number of free parameters scales quadratically with the number of nodes in the graph, and given the massive nature of the datasets available today, it has become imperative to devise methods that fully utilize the product structure of graphs to reduce the number of parameters to be learned. Moreover, posing the problems in terms of smaller factor graphs instead of the graph itself can enable efficient data representation \cite{sandryhaila2014big}, and can result in reduced sample complexity as one has to learn fewer parameters.
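To make the parameter reduction concrete, consider a hypothetical three-factor example (the sizes below are ours, chosen only for illustration). A symmetric zero-diagonal adjacency matrix on $n$ nodes has $n(n-1)/2$ free parameters:

```python
def n_params(n):
    # Free parameters of a symmetric adjacency matrix with zero diagonal.
    return n * (n - 1) // 2

factor_sizes = [10, 20, 30]   # hypothetical factor graph sizes
n = 10 * 20 * 30              # product graph on n = 6000 nodes

arbitrary = n_params(n)                             # learning the full graph
product = sum(n_params(nk) for nk in factor_sizes)  # learning only the factors

assert arbitrary == 17_997_000
assert product == 670
```

In this toy setting, learning the three factors involves 670 parameters instead of roughly 18 million for an unstructured graph on the same node set.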
To this end, the main objective of this work is to investigate the problem of learning product graphs from data in an efficient and scalable manner.
\subsection{Prior work} \label{sec:prior}
The existing works in graph signal processing can mainly be divided into four chronological categories.
The first set of works in GSP introduced the idea of information processing over graphs \cite{shuman2013emerging,sandryhaila2013discrete,ortega2018graph}. These works highlight the advantages and superior performance of the graph-based signal processing approach (with a known underlying graph) over classical signal processing.
The second wave of research in this area built upon the first one to exploit knowledge of the underlying graph for graph signal recovery from samples obtained over a subset of graph nodes or from noisy observations of all nodes \cite{chen2015signal,chen2016signal}. Through these works, the idea of bandlimitedness was extended to graph signals and the concept of smooth graph signals was introduced.
Since the underlying graph is not always available beforehand, the third wave of GSP analyzed the problem of recovering the underlying graph through observations over the graph \cite{pasdeloup2017characterization,segarra2017network,egilmez2018graph,chepuri2017learning,kalofolias2017large,egilmez2017graph,dong2016learning}. Finally, the fourth wave of research in GSP has focused on joint identification of the underlying graph and graph signals from samples/noisy observations using interrelated properties of graph signals and graphs \cite{ioannidis2018semi,ceci2018signal,sardellitti2019graph,dong2016learning}.
Within the third set of papers in GSP, our work falls in the category of combinatorial graph Laplacian estimation \cite{chepuri2017learning,kalofolias2017large,egilmez2017graph,dong2016learning} from graph signals. The combinatorial graph Laplacian refers to the unnormalized Laplacian of an unstructured graph with no self-loops \cite{egilmez2017graph}. The earliest work in this category \cite{dong2016learning} aims to jointly denoise noisy graph signals observed at the nodes of the graph and also learn the graph from these denoised graph signals. The authors pose this problem as a multiconvex problem in the graph Laplacian and the denoised graph signals, and then solve it via an alternating minimization approach that solves a convex problem in each unknown. The authors in \cite{segarra2017network} examine the problem of graph Laplacian learning when the eigenvectors of the graph Laplacian are known beforehand. They achieve this by formulating a convex program to learn a valid graph Laplacian (from a feasible set) that is diagonalized by the noiseless and noisy Laplacian eigenvectors.
The work in \cite{chepuri2017learning} takes a slightly different route and learns a sparse unweighted graph Laplacian matrix from noisy graph signals through an alternating minimization approach that restricts the number of edges in the graph. In contrast to earlier work, \cite{pasdeloup2017characterization} focuses on learning a graph diffusion process from observations of stationary signals on graphs through convex formulations. In this regard, the authors also consider different criteria in addition to searching for a valid Laplacian and devise specialized algorithms to infer the diffusion processes under these criteria. In \cite{kalofolias2016learn,kalofolias2017large} the graph learning problem is addressed by posing it in terms of learning a sparse weighted adjacency matrix from the observed graph signals. Finally, the authors in \cite{egilmez2017graph} provide a comprehensive unifying framework for inferring several types of graph Laplacians from graph signals. They also make connections with the state of the art and describe where the past works fit in light of their proposed framework.
It should be mentioned here that since Laplacian matrices (and thus adjacency matrices) are related to precision matrices, defined as the (pseudo-)inverses of covariance matrices, imposing a structure on the graph adjacency matrix amounts to imposing a structure on the covariance of data. Earlier works in the field have already made comparisons of Laplacian learning approaches with those for learning precision matrices from data \cite{dong2016learning,egilmez2017graph}, and established the superior graph recovery performance of Laplacian-based learning. Some recent works have also investigated learning structured covariance and precision matrices, and their usefulness in efficiently representing real-world datasets \cite{friedman2008sparse,greenewald2017tensor,tsiligkaridis2013covariance}. While these models work well in practice, we will demonstrate through our experiments that there are scenarios where graph-based learning outperforms structured covariance-based learning (see Sec.~\ref{sec:numerical_experiments} for details).
\subsection{Our contributions} \label{sec:our}
Our first contribution in this work is a novel formulation of the graph learning problem as a linear program. We show, both theoretically and empirically, that graph adjacency matrices can be learned through a simple and fast linear program. We then shift our attention towards learning structured graphs. Most of the prior works regarding graph learning have considered arbitrary graphs with either some connectivity constraints \cite{kalofolias2017large}, or no constraints at all \cite{egilmez2018graph,dong2016learning}. In all cases, the complexity of the graph learning procedure and the number of free parameters scale quadratically with the number of nodes in the graph, which can be prohibitively large in real-world scenarios. In contrast, our work focuses on inferring the underlying graph from graph signals in the context of structured graphs. Specifically, we investigate graphs that can be represented as Kronecker, Cartesian, and strong products of several smaller graphs. We first show how, for these product graphs, the graph adjacency matrix, the graph Laplacian, the graph Fourier transform, and the graph smoothness measure can be represented with far fewer parameters than required for arbitrary graphs. This reduction in number of parameters to be learned results in reduced sample complexity and helps avoid overfitting. Afterwards, we outline an algorithm to learn these product graphs from the data and provide convergence guarantees for the proposed algorithm in terms of the estimation error of factor graphs. We validate the performance of our algorithm with numerical experiments on both synthetic and real data.
\subsection{Notation and organization}
The following notational convention is used throughout the rest of this paper. We use bold lower-case and bold upper-case letters to represent vectors and matrices, respectively. Calligraphic letters are used to represent tensors, which are arrays of three or more dimensions. For a tensor $\cT$, $\cT_{(i)}$ represents its matricization (flattening) in the $i$-th mode and $\tvec(\cT)$ represents its vectorization along the first mode \cite{kolda2009tensor}. Also, ``$:$" represents the scalar product or double dot product between two tensors, which results in a scalar \cite{kolda2009tensor}. The Hadamard product (elementwise product) of two vectors or matrices is denoted by ``$\circ$". For a matrix $\bA$, $\|\bA\|_{F}$ represents its Frobenius norm, $\|\bA\|$ represents its spectral norm, and $\bA^\dagger$ represents its Moore-Penrose inverse. Moreover, $\|\bA\|_{1}$ represents the $\ell_{1}$-norm of the entries of $\bA$, while $\|\bA\|_{1,\text{off}}$ represents the $\ell_{1}$-norm of the off-diagonal entries of $\bA$. The Kronecker and Cartesian products of two matrices $\bA$ and $\bB$ are denoted by $\bA \otimes \bB$ and $\bA \oplus \bB$, respectively \cite{horn1991topics}. The strong product of two matrices is denoted by $ \bA \boxtimes \bB = \bA \otimes \bB ~+~ \bA \oplus \bB$, which is the sum of Cartesian and Kronecker products of the respective matrices. Furthermore, $\underset{\bs}{\otimes}$, $\underset{\bs}{\oplus}$, and $\underset{\bs}{\boxtimes}$, respectively denote the Kronecker, Cartesian, and strong products taken over the indices provided by the entries of the vector $\bs$. Finally, $\times_{i}$ represents matrix multiplication in the $i$-th mode of a tensor and $\underset{\bs}{\times}$ represents matrix multiplications in the modes of a tensor specified in the entries of the vector $\bs$.
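The three matrix products above can be formed directly from Kronecker products; a minimal sketch with two toy path-graph adjacency matrices (our choice, for illustration):

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 0.]])        # path graph on 2 nodes
B = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])    # path graph on 3 nodes

kron = np.kron(A, B)                                  # Kronecker product of A and B
cart = np.kron(A, np.eye(3)) + np.kron(np.eye(2), B)  # Cartesian product of A and B
strong = kron + cart                                  # strong product: their sum

assert strong.shape == (6, 6)       # product graph on 2 * 3 = 6 nodes
assert np.allclose(strong, strong.T)
```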
We use $\diag(\bx)$ to denote a diagonal matrix with diagonal entries given by the entries of the vector $\bx$, $\bone$ to denote a vector of all ones with appropriate length, and $\mathds{1}$ to denote a tensor of all ones of appropriate size. We denote the set with elements $\{1,2,\dots,K\}$ as $[K]$, and $[K] \setminus k$ represents the same set without the element $k$. The sets of valid Laplacian and weighted adjacency matrices are represented by $\cL$ and $\cW$, respectively. The set of weighted adjacency matrices with any product structure is denoted by $\cW_{p}$.
{The rest of this paper is organized as follows. In Sec.~\ref{sec:prob_problem_form} we give a probabilistic formulation of the graph learning problem in line with existing literature. Then, we propose our novel formulation of the graph learning problem as a linear program in Sec.~\ref{sec:graph_learning_linear}. Sec.~\ref{sec:product_graphs} describes the motivation for product graphs and formulates the graph learning problem in the context of product graphs. In Sec.~\ref{sec:learning_algo} we propose an algorithm for learning product graphs from data and derive error bounds on estimated factor graphs. We present our numerical experiments with synthetic and real datasets in Sec.~\ref{sec:numerical_experiments}, and the paper is concluded in Sec.~\ref{sec:conc}.}
\section{Algorithm for learning product graphs} \label{sec:learning_algo}
In the previous section we highlighted some properties and advantages of product graphs and we posed the optimization problems for learning these graphs. We now propose an algorithm for solving these product graph learning problems. To this end, we first recognize that even though the problems posed in \eqref{learn_kronecker}, \eqref{learn_cartesian}, and \eqref{learn_strong} are nonconvex, minimization of each objective with respect to any single factor adjacency matrix is still convex (while all the other factor matrices are fixed). Moreover, these factor-wise minimization problems can be solved through Algorithm~\ref{alg_learn_graph} proposed in the earlier sections. These observations lead us to propose a block coordinate descent (BCD) based algorithm, named B-PGL (BCD for product graph learning), that minimizes over each factor adjacency matrix in cyclic fashion. The proposed algorithm is provided in Algorithm~\ref{alg_learn_product_graph}, and in the following discussion we present the factor-wise problems for each product graph.
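The cyclic structure of such a BCD scheme can be sketched as follows. This is an illustrative skeleton only: `solve_factor` is a hypothetical placeholder for the convex factor-wise solver (in the paper, the linear-programming step of Algorithm~\ref{alg_learn_graph}), and the initialization is arbitrary.

```python
import numpy as np

def bcd_product_graph(data, factor_sizes, solve_factor, n_iter=50):
    """Cyclic block coordinate descent over factor adjacency matrices.

    `solve_factor(data, k, others)` is a placeholder for the convex
    factor-wise solver; it receives the data and the current estimates
    of all the other factors, and returns the updated k-th factor.
    """
    # Initialize each factor as a random symmetric zero-diagonal matrix.
    rng = np.random.default_rng(0)
    W = []
    for n_k in factor_sizes:
        A = rng.random((n_k, n_k))
        A = (A + A.T) / 2
        np.fill_diagonal(A, 0.0)
        W.append(A)

    for _ in range(n_iter):          # outer iterations
        for k in range(len(W)):      # cycle through the factors
            others = [W[i] for i in range(len(W)) if i != k]
            W[k] = solve_factor(data, k, others)
    return W
```

For Cartesian graphs the inner loop could even be parallelized, since Theorem~\ref{th:cartesian} shows the objective is separable across factors.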
\subsection{Kronecker graphs}
Since Algorithm~\ref{alg_learn_product_graph} utilizes factor-wise minimization, we can characterize the error for product graph learning in terms of the factor-wise errors of each factor adjacency matrix.
The error bounds for learning Kronecker graphs are provided in the following theorem with the proof in Appendix~\ref{app:th:kronecker}.
\begin{theorem} \label{th:kronecker}
After each outer iteration of Algorithm~\ref{alg_learn_product_graph}, the error between the sample-based minimization (with $M_{0}$ samples) and the population-based minimization (with infinitely many samples) of \eqref{learn_kronecker} with respect to the ${k}$-th factor $\bW_{k}$ satisfies $\cO \Big(\frac{n_{k}^{2}\log(n_{k})}{nM_{0}} \Big(1+\big\|\underset{[K_{0}] \setminus k}{\otimes} \whbW_{i} - \underset{[K_{0}] \setminus k}{\otimes} \bW_{i}^{*} \big\|_{F} \Big) \Big)$, with high probability as $\frac{nM_{0}}{n_{k}} \rightarrow \infty$, when $\alpha \leq \sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}}$.
Additionally, the error between the estimated factor $\whbW_{k}$ and the original generating factor $\bW_{k}^{*}$ also satisfies $\| \whbW_{k} - \bW_{k}^{*}\|_{F} = \cO \Bigg( \sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}} \Bigg)$, with high probability as $\frac{nM_{0}}{n_{k}} \rightarrow \infty$.
\end{theorem}
\begin{remark}
The error between the estimated $k$-th factor and the original generating $k$-th factor for learning Kronecker graphs also depends on the estimation errors of the other factors. This dependence, however, is absorbed in big-$\cO$.
\end{remark}
\subsection{Cartesian graphs}
The following theorem characterizes the error of the factor-wise minimization of the Cartesian graph learning problem.
\begin{theorem} \label{th:cartesian}
The objective function \eqref{learn_cartesian} for the Cartesian graph learning problem can be represented as a sum of terms that are linear in each factor adjacency matrix, and is therefore convex.
After each outer iteration of Algorithm~\ref{alg_learn_product_graph}, the error between the sample-based minimization (with $M_{0}$ samples) and the population-based minimization (with infinitely many samples) of \eqref{learn_cartesian} with respect to the ${k}$-th factor $\bW_{k}$ satisfies $\cO \Big(\frac{n_{k}^{2}\log(n_{k})}{nM_{0}} \Big)$, with high probability as $\frac{nM_{0}}{n_{k}} \rightarrow \infty$, when $\alpha \leq \sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}}$.
Additionally, the error between the estimated factor $\whbW_{k}$ and the original generating factor $\bW_{k}^{*}$ also satisfies $\| \whbW_{k} - \bW_{k}^{*}\|_{F} = \cO \Bigg( \sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}} \Bigg)$, with high probability as $\frac{nM_{0}}{n_{k}} \rightarrow \infty$.
\end{theorem}
The proof of this theorem is given in Appendix~\ref{app:th:cartesian}. The theorem states, remarkably, that the objective function for learning Cartesian product graphs is convex and separable in each factor adjacency matrix, i.e., the objective function can be represented as a sum of linear terms, each of which is dependent on only one factor adjacency matrix. As a consequence of this fact, for learning Cartesian graphs, one can optimize over all factor adjacency matrices in parallel!
\begin{remark}
The error between the estimated $k$-th factor and the original generating $k$-th factor for learning Cartesian graphs is independent of the estimation errors of the other factors due to the convexity and separability of \eqref{learn_cartesian}.
\end{remark}
\subsection{Strong graphs}
The following theorem, with its proof in Appendix~\ref{app:th:strong}, characterizes the behavior of factor-wise minimization for learning strong product graphs.
\begin{theorem} \label{th:strong}
After each outer iteration of Algorithm~\ref{alg_learn_product_graph}, the error between the sample-based minimization (with $M_{0}$ samples) and the population-based minimization (with infinitely many samples) of \eqref{learn_strong} with respect to the ${k}$-th factor $\bW_{k}$ satisfies $\cO \Big(\frac{n_{k}\log(n)}{M_{0}} \Big(1+2\big\|\underset{[K_{0}] \setminus k}{\boxtimes} \whbW_{i} - \underset{[K_{0}] \setminus k}{\boxtimes} \bW_{i}^{*} \big\|_{F} \Big) \Big)$, with high probability as $\frac{nM}{n_{k}} \rightarrow \infty$, when $\alpha \leq \sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}}$.
Additionally, the error between the estimated factor $\whbW_{k}$ and the original generating factor $\bW_{k}^{*}$ also satisfies $\| \whbW_{k} - \bW_{k}^{*}\|_{F} = \cO \Bigg( \sqrt{\frac{n_{k}\log(n_{k})}{nM_{0}}} \Bigg)$, with high probability as $\frac{nM}{n_{k}} \rightarrow \infty$.
\end{theorem}
\begin{remark}
The error between the estimated $k$-th factor and the original generating $k$-th factor for learning strong graphs depends on the estimation errors of the other factors. This dependence on the other factors, however, is absorbed into the big-$\cO$ constant.
\end{remark}
\subsection{Error bound for arbitrary graphs} \label{sec:product_graph_prob:arbitrary}
As a byproduct of Theorem~\ref{th:kronecker}, we can also obtain an error bound for the arbitrary graph learning problem. To this end, note that the objective in \eqref{learn_lap_3} for learning arbitrary graphs can be expressed as $\alpha/M_{0} \sum_{m=1}^{M_{0}}\bx_{m}^{T} \bL \bx_{m} = \alpha \tr( \bD \bar{\bS}) - \alpha \tr( \bW \bS)$, where $\bS$ and $\bar{\bS}$ are similar to the definitions in Appendix~\ref{app:th:kronecker}.
Following along the lines of Theorem~\ref{th:kronecker}, by solving \eqref{learn_lap_3} via Algorithm~\ref{alg_learn_graph} one is guaranteed to converge to the original generating adjacency matrix of the unstructured graph with error $\| \whbW - \bW^{*}\|_{F} = \cO_{P} \Big( \sqrt{\log(n)/M_{0}} \Big)$, with high probability as $M \rightarrow \infty$.
\subsection{Computational complexity}
The computational complexity of solving each factor-wise problem scales quadratically with the number of nodes in the graph. This implies that when the product structure is imposed, one only has to solve $K_{0}$ smaller problems each with computational complexity of $\rmO(\bar{n}^{2})$, assuming the special case of $n_{1} = n_{2} = \dots = n_{K_{0}} = \bar{n}$. In contrast, for learning unstructured graphs the computational complexity would scale as $\rmO(n^{2}) = \rmO(\prod_{k=1}^{K_{0}} n_{k}^{2}) \sim \rmO(\bar{n}^{2K_{0}})$. Thus, the computational gains are huge in comparison to the original problem for learning unstructured graphs!
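The back-of-the-envelope arithmetic behind this comparison can be sketched as follows (hypothetical sizes; the variable names are illustrative):

```python
# Back-of-the-envelope complexity comparison (hypothetical sizes).
K0 = 3            # number of factors
n_bar = 12        # common factor size n_1 = ... = n_K0
n = n_bar ** K0   # total nodes in the product graph: 1728

structured_cost = K0 * n_bar ** 2   # K0 factor problems, O(n_bar^2) each
unstructured_cost = n ** 2          # one problem on the full product graph

print(structured_cost, unstructured_cost)   # 432 vs 2985984
```

Even for these modest sizes, the structured formulation is several orders of magnitude cheaper.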
This computational gain is only made possible by the way we pose the original graph learning problem as a linear program. Posing the problem this way makes the objective amenable to factor-wise minimization, which would not be possible if the original graph learning problem were posed in the traditional way found in the existing literature.
\subsection{Convergence properties \& sample complexity}
The overall convergence of the algorithm to a stationary point of the problem can be established through the following theorem, whose proof is provided in Appendix~\ref{app:th:convergence}.
\begin{theorem} \label{th:convergence}
Since each factor-wise problem for each graph learning problem is convex, Algorithm~\ref{alg_learn_product_graph} for product graph learning is guaranteed to converge to a stationary point at a linear rate.
\end{theorem}
We now provide insight into our theorem statements with regard to the required number of observations. Theorems~\ref{th:kronecker}, \ref{th:cartesian}, and \ref{th:strong} claim that the estimated factor lies within a ball of radius $\sqrt{\frac{\log(n_{k})}{(n/n_{k})M_{0}}}$ around the true factor. The accuracy of the estimate increases with the number of available observations $M_0$ and with the product of the dimensions of the other factors, $n/n_{k} = \prod_{j\neq k} n_{j}$. Moreover, the accuracy decreases as the dimension of the factor being estimated grows.
Taking a closer look at the error bounds for learning arbitrary and structured graphs reveals an important point. The denominator in the error bound is the number of observations available to estimate the graph. For $M_{0}$ observed graph signals, the number of observations available for arbitrary graph learning is (obviously) $M_{0}$; however, for estimating the $k$-th factor adjacency matrix when learning product graphs, the effective number of observations is $\prod_{j\neq k} n_{j} \times M_{0}$. This means that imposing the product structure results in an increased number of effective observations for estimating each factor adjacency matrix. This, combined with the reduced number of parameters required to learn these graphs, makes product graphs very attractive for real-world applications.
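The effective-sample-size argument above can be illustrated with a small, hypothetical example:

```python
# Effective observations per factor for a hypothetical 12 x 12 x 12
# product graph observed through M0 = 5 graph signals.
n_factors = [12, 12, 12]
M0 = 5
n = 1
for nk in n_factors:
    n *= nk                                   # n = 1728 total nodes

effective = [(n // nk) * M0 for nk in n_factors]
# Each factor is estimated from 144 * 5 = 720 effective observations,
# versus only M0 = 5 for an unstructured graph on all 1728 nodes.
```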
\section{Numerical experiments} \label{sec:numerical_experiments}
This section provides results for learning product graphs from synthetic and real datasets. We first present experiments for learning arbitrary graphs through our proposed linear program in Sec.~\ref{sec:graph_learning_linear}, and then the results for learning product graphs from synthetic data through Alg.~\ref{alg_learn_product_graph}. Afterwards, we validate the performance of our proposed algorithm for product graphs on real-world datasets.
\begin{figure*} [th]
\begin{center}
\begin{tabular}{cccc}
{\includegraphics[height=2.6cm] {er_fmeas.png}}
{\includegraphics[height=2.6cm] {gauss_fmeas.png}}
{\includegraphics[height=2.6cm] {regular_fmeas.png}}
{\includegraphics[height=2.6cm] {community_fmeas.png}}\\
{\includegraphics[height=2.6cm] {spiral_fmeas.png}}
{\includegraphics[height=2.6cm] {tree_fmeas.png}}
{\includegraphics[height=2.6cm] {grid_fmeas.png}}
{\includegraphics[height=2.6cm] {pa_fmeas.png}}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:arbitrary:fmeas_all}
F-measure values for various graphs for our proposed graph learning algorithm (GLP), LOG, and CGL.
}
\end{figure*}
\subsection{Synthetic data: Arbitrary graphs} \label{sec:numerical_experiments:synthetic_data:arbitrary}
To showcase the performance of our new formulation for graph learning, we run synthetic experiments on a graph with $n = 64$ nodes. We generate various graph types: (i) a sparse random graph with Gaussian weights, (ii) an Erdos-Renyi random graph with edge probability 0.7, (iii) a scale-free graph with preferential attachment (6 edges at each step), (iv) a random regular graph where each node is connected to $0.7n$ other nodes, (v) a uniform grid graph, (vi) a spiral graph, (vii) a community graph, and (viii) a low stretch tree graph on a grid of points. The related details of how the graphs are simulated can be found in \cite{perraudin2014gspbox,dong2016learning}. For each kind of graph, we generate 20 different realizations, and for each realization we generate observations using a degenerate multivariate Gaussian distribution with the graph Laplacian as the precision matrix \cite{dong2016learning, egilmez2017graph,kalofolias2016learn}.
We compare the performance of our proposed method with two other state-of-the-art methods for arbitrary graph learning: (i) combinatorial graph learning from \cite{egilmez2017graph} (which we refer to as CGL), and (ii) graph learning method from \cite{kalofolias2017large} (which we refer to as LOG), which also aims to learn a combinatorial graph Laplacian through a slightly different optimization problem than \cite{egilmez2017graph}. We choose $\alpha$ for our algorithm in the range $0.75^{\{0:14\}} \times \sqrt{\frac{\log(n)}{M_{0}}}$, as dictated by the error bounds for learning graphs in App.~\ref{app:th:kronecker} and by the existing literature \cite{dong2016learning,egilmez2017graph,kalofolias2016learn}. Furthermore, we choose $\rho = 0.75/\log(M_{0})$ as the value that works best for most cases. For each algorithm, in the prescribed range of the optimization parameters, we choose the parameters that produce the best results.
The results of our experiments are shown as F-measure values in Fig.~\ref{fig:arbitrary:fmeas_all}. The F-measure is the harmonic mean of precision and recall, and signifies the overall accuracy of the algorithm \cite{egilmez2017graph}. Precision here denotes the fraction of true graph edges among all the recovered edges, and recall denotes the fraction of the true graph edges that are recovered.
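A minimal sketch of these metrics, assuming edge recovery is judged on the thresholded upper-triangular support (the function name and tolerance are our own choices):

```python
import numpy as np

def f_measure(W_true, W_est, tol=1e-8):
    """Precision, recall, and F-measure of the recovered edge support.

    Edges are read off the strictly upper-triangular part of the
    symmetric adjacency matrices, thresholded at `tol`.
    """
    iu = np.triu_indices_from(W_true, k=1)
    true_edges = np.abs(W_true[iu]) > tol
    est_edges = np.abs(W_est[iu]) > tol
    tp = np.sum(true_edges & est_edges)
    precision = tp / max(np.sum(est_edges), 1)
    recall = tp / max(np.sum(true_edges), 1)
    f = 0.0 if tp == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f
```

For instance, if the true graph has edges $\{(0,1),(1,2)\}$ and the estimate recovers $\{(0,1),(0,2)\}$, precision, recall, and F-measure all equal $0.5$.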
One can see that our algorithm (except for community graphs) performs as well as or better than the existing state-of-the-art algorithms. Moreover, the average performance over all graphs in Fig.~\ref{fig:arbitrary:fmeas_avg} shows that on average we outperform the existing algorithms. A runtime comparison of all algorithms in Fig.~\ref{fig:arbitrary:fmeas_avg} also reveals competitive runtime for our proposed scheme. The LOG algorithm, despite having the smallest runtime, incurs a large computational overhead from the construction of a matrix of pairwise distances between all rows of the data matrix $\bX$. In contrast, the other algorithms work with the graph signal observations directly and do not require such preprocessing steps.
\begin{remark}
For some graphs in Fig.~\ref{fig:arbitrary:fmeas_all}, the performance of GLP seems to worsen as the number of observations grows. This is likely due to the limited range we have considered when searching for the optimization parameter; for a larger range, this downward trend should disappear. The range we have prescribed is the one most commonly used in the literature and works well on average in most settings.
\end{remark}
\begin{figure} [t]
\begin{center}
\begin{tabular}{cccc}
{\includegraphics[height=2.6cm] {avg_fmeas.png}}
{\includegraphics[height=2.6cm] {avg_run_time.png}}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:arbitrary:fmeas_avg}
Average F-measure values over all graphs (left) from Fig.~\ref{fig:arbitrary:fmeas_all} for our proposed graph learning algorithm (GLP), LOG, and CGL. Average run times over 30 trials for each algorithm (right), with increasing number of nodes.}
\end{figure}
\subsection{Synthetic data: Product graphs} \label{sec:numerical_experiments:synthetic_data}
We now present the results of our numerical experiments involving synthetic data for product graphs. We run experiments for random Erdos-Renyi factor graphs with $n = n_{1} n_{2} n_{3} = 12 \times 12 \times 12$ nodes, and having either Cartesian, Kronecker or strong structure. We then use our proposed algorithms to learn the generated graphs with varying number of observations and compare the performance with LOG as its performance was the second best in Fig.~\ref{fig:arbitrary:fmeas_all}. The results for all three types of product graphs are shown in Fig.~\ref{fig:fmeas_run_time} (top). For a fixed number of observations, Cartesian product graphs can be learned with the highest F-measure score followed by strong and then Kronecker graphs. The figure also shows that for each graph, imposing product structure on the learned graph drastically improves the performance of the learning algorithm.
Fig.~\ref{fig:fmeas_run_time} (bottom) also shows the runtime comparison of our approach BPGL with the algorithm in \cite{kalofolias2017large}. Even for a graph of this size, with a total of $n = 1728$ nodes, we can see a considerable reduction in runtime. Thus, our learning algorithm, which explicitly incorporates the product structure of the graph, enjoys superior performance, reduced computational complexity, and faster runtimes.
\begin{figure} [h]
\begin{center}
\begin{tabular}{c}
{\includegraphics[height=4.5cm] {all_12_12_12_fmeas.png}} \\
{\includegraphics[height=4.5cm] {all_12_12_12_run_time.png}}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:fmeas_run_time}
F-measure values (top) and runtimes (bottom) for learning Cartesian, Kronecker, and strong product graphs with $n = 12 \times 12 \times 12$ nodes, compared with LOG, for a varying number of observations.}
\end{figure}
\subsection{United States wind speed data} \label{sec:numerical_experiments:us_wind}
The first real data we use for experimentation is NCEP wind speed data.
The NCEP wind speed data \cite{kalnay1996ncep} represents wind conditions in the lower troposphere and contains daily averages of U (east-west) and V (north-south) wind components over the years $1948-2012$. Similar to the experiments in \cite{tsiligkaridis2013covariance} with preprocessed data, we use a grid of $n_{1} n_{2} = 10 \times 10 = 100$ stations to extract the data available for the United States region. From this extracted data, we choose the years $2003-2007$ for training and keep the years $2008-2012$ for testing purposes. Using a non-overlapping window of length $n_{3} = 8$, which amounts to dividing the data into chunks of 8 days, we obtain $M_{0} = 228$ samples, each of length $n = n_{1} n_{2} n_{3}$. Therefore, for graph learning we have 228 samples, where each sample contains spatiotemporal data for 100 stations over 8 days. Next, the testing procedure consists of introducing missing values in each sample of the test data by omitting the data for the 8th day, and then using a linear minimum-mean-square-error (LMMSE) estimator \cite[Chapter 4]{luenberger1997optimization} to predict the missing values. As in training, 228 samples are obtained for testing through the same procedure.
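The windowing step described above can be sketched as follows (synthetic stand-in data; the space/time flattening convention is our own assumption):

```python
import numpy as np

# Cut daily station data into non-overlapping 8-day windows (synthetic
# stand-in; the space/time flattening convention is our own assumption).
n_stations, n3 = 100, 8
T = 1826                                   # days in 2003-2007
X_daily = np.random.default_rng(0).standard_normal((n_stations, T))

M0 = T // n3                               # 228 complete windows
samples = np.stack(
    [X_daily[:, m * n3:(m + 1) * n3].reshape(-1) for m in range(M0)],
    axis=1,
)                                          # shape (n1*n2*n3, M0) = (800, 228)
```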
Our proposed method estimates the (structured) adjacency matrix of the graph (which is related to the precision matrix of the data), and we use $\bW + \bI$ in place of the data covariance for the LMMSE estimator.
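A minimal sketch of the LMMSE prediction step, with a generic zero-mean surrogate covariance in place of $\bW + \bI$ (the function and argument names are our own illustrative choices):

```python
import numpy as np

def lmmse_predict(Sigma, x_obs, obs_idx, mis_idx):
    """Zero-mean LMMSE prediction: x_mis = Sigma_mo Sigma_oo^{-1} x_obs."""
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]   # covariance of observed entries
    S_mo = Sigma[np.ix_(mis_idx, obs_idx)]   # cross-covariance missing/observed
    return S_mo @ np.linalg.solve(S_oo, x_obs)

# Toy example: two correlated variables; observe the first, predict the second.
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
pred = lmmse_predict(Sigma, np.array([2.0]), [0], [1])   # -> [1.0]
```

In the experiment above, `Sigma` would be the learned $\bW + \bI$ and the missing indices would correspond to the 8th day of each test sample.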
We make a comparison of the following approaches: (1) sample covariance matrix (SCM) learning, (2) the permuted rank-penalized least squares (PRLS) approach \cite{tsiligkaridis2013covariance} with $r = 6$ Kronecker components, (3) PRLS with $r = 2$ Kronecker components, (4) time-varying graph learning (TVGL) approach from \cite{yamada2019time} (which was shown to outperform the approach in \cite{kalofolias2017learning}), (5) spatiotemporal strong graph with BPGL with a spatial component of size $n_{1} n_{2}$ and a temporal component of size $n_{3}$, and (6) spatiotemporal Cartesian graph with BPGL of the same dimensions. The parameters for PRLS were chosen for optimal performance as given in \cite{tsiligkaridis2013covariance}. The optimization parameters for TVGL and BPGL were manually tuned for best performance. It should be noted here that SCM and PRLS aim to learn a covariance matrix and a structured covariance matrix from the data, respectively.
The SCM aims to estimate $\frac{n(n+1)}{2}$ parameters, while the number of parameters that PRLS needs to estimate is $r ( \frac{n_{1} n_{2}(n_{1} n_{2}+1)}{2} + \frac{n_{3}(n_{3}+1)}{2} )$. On the other hand, TVGL estimates $n_{3} ( \frac{n_{1} n_{2}(n_{1} n_{2}+1)}{2})$ parameters, while BPGL needs to learn only $\frac{n_{1} n_{2}(n_{1} n_{2}+1)}{2} + \frac{n_{3}(n_{3}+1)}{2}$ parameters for both strong and Cartesian graphs.
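These counts can be checked directly from the stated formulas (the counts reported in Table~\ref{table_wind} differ by a handful of entries, presumably due to a slightly different convention for the diagonal terms):

```python
# Parameter counts from the formulas above for the wind-data dimensions.
def tri(m):
    """Free parameters of an m x m symmetric matrix."""
    return m * (m + 1) // 2

s, t = 100, 8        # n1 * n2 stations, n3 days
n = s * t

scm = tri(n)                       # 320400

def prls(r):
    return r * (tri(s) + tri(t))   # r Kronecker components

tvgl = t * tri(s)                  # 40400
bpgl = tri(s) + tri(t)             # 5086
```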
The mean prediction root-mean-squared errors (RMSE) for all methods are shown in Table~\ref{table_wind}. One can see that our proposed method outperforms PRLS and TVGL while estimating far fewer parameters than both. The table also shows that learning a strong graph for this data results in a higher RMSE reduction over the baseline (SCM), and is thus better suited for this data than the Cartesian product graph.
\begin{table}
\caption{Comparison of prediction RMSE for US wind speed data}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Method} & {\textbf{\begin{tabular}[c]{@{}c@{}}RMSE reduction \\over SCM (dB)\end{tabular}}} & \textbf{parameters} \\
\hline
{SCM} & -- & 320400 \\
\hline
{TVGL \cite{yamada2019time}} & 1.0461 & 40656 \\
\hline
{PRLS \cite{tsiligkaridis2013covariance}} ($r=6$) & 1.7780 & 30492 \\
\hline
{PRLS \cite{tsiligkaridis2013covariance}} ($r=2$) & -1.5473 & 10164 \\
\hline
\textbf{BPGL strong} & \textbf{1.8640} & \textbf{5082} \\
\hline
{BPGL Cartesian} & {1.3105} & 5082 \\
\hline
\end{tabular}
{\label{table_wind}\\
Comparison of our graph learning method with SCM, PRLS and TVGL. Our proposed method outperforms the existing methods for time-varying graph learning and for learning a structured covariance matrix. Moreover, our proposed procedure outperforms them while using considerably fewer parameters.}
\end{table}
\subsection{ABIDE fMRI data: Exploratory data analysis} \label{sec:numerical_experiments:fmri}
The second real dataset that we use as an application for our proposed algorithm is a part of the ABIDE fMRI dataset \cite{craddock2013neuro,narayan_2015}. Our aim is to learn graphs over the fMRI data of control and autistic subjects and to use the learned graphs to highlight the differences between control and autistic brains. The data we obtain is already preprocessed to remove various fMRI artifacts and for controlization of the obtained scans \cite{narayan2016mixed}. The final preprocessed data consists of measurements from $n_{1} = 111$ brain regions scanned over $116$ time instances for each subject. The data contains scans for control and autistic subjects. To avoid class imbalance, we randomly choose $47$ subjects for each class. Out of the $47$ subjects for each class, we then randomly choose $30$ subjects for training and keep the remaining $17$ for testing purposes. We use a non-overlapping window of length $n_{2} = 29$, which results in $M_{0} = 120$ samples of length $n = n_{1} \times n_{2}$.
As before, we compare the performance of our proposed approach with SCM and PRLS. Table~\ref{table_fmri} shows the results of our experiments. One can see that our approach performs very similarly to PRLS for both Cartesian and strong product graphs, all the while using far fewer parameters (five times fewer). We also see that strong product graphs are more suited to modeling brain activity. The work in \cite{narayan2016mixed} suggests that autistic brains exhibit hypoconnectivity in different regions of the brain as compared to control subjects. The results from our graph learning procedure go a step further and bring more insight into the spatiotemporal dynamics of the brain. Firstly, as already suggested in \cite{narayan2016mixed}, we see clear evidence of spatial hypoconnectivity (see Fig.~\ref{fig:fmri:spatial}). More importantly, our learned graphs in Fig.~\ref{fig:fmri:temporal} reveal that, in addition to spatial hypoconnectivity, autistic brains also suffer from temporal hypoconnectivity.
\begin{table}
\caption{Comparison of prediction RMSE for ABIDE fMRI data}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Method} & {\textbf{\begin{tabular}[c]{@{}c@{}}RMSE reduction \\over SCM (dB)\end{tabular}}} & \textbf{parameters} \\
\hline
{SCM} & -- & 5182590 \\
\hline \hline
\textbf{PRLS Normal} & \textbf{2.1793} & 33255 \\
\hline
{Cartesian GL Control} & 2.0980 & 6651 \\
\hline
{Strong GL Control} & {2.1753} & 6651 \\
\hline \hline
\textbf{PRLS Autism} & \textbf{2.375} & 33255 \\
\hline
{Cartesian GL Autism} & {2.3400} & 6651 \\
\hline
{Strong GL Autism} & {2.3563} & 6651 \\
\hline
\end{tabular}
{\label{table_fmri}\\
Comparison of our graph learning method with SCM and PRLS.}
\end{table}
\begin{figure}
\begin{center}
\begin{tabular}{c c}
{\includegraphics[height=3.8cm] {spatial_normal_strn_112_29_p5689.png}}
{\includegraphics[height=3.8cm] {spatial_autism_cart_112_29_p3893.png}}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:fmri:spatial}
This figure shows the adjacency matrices of the spatial components learned for control (left) and autism (right) subjects with the strong graph learning algorithm. The images reveal, in line with the existing literature, that the control brain is much more connected than the autistic brain.}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c c}
{\includegraphics[height=3.4cm] {temporal_normal_strn_112_29_p5689.png}}
{\includegraphics[height=3.4cm] {temporal_autism_cart_112_29_p3893.png}}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:fmri:temporal}
This figure shows the adjacency matrices of the temporal components learned for control (left) and autism (right) subjects with the strong graph learning algorithm. The images reveal that control brains exhibit more temporal connections than autistic brains. This new finding is made possible only by considering the spatiotemporal dynamics of the brain rather than spatial connectivity alone.}
\end{figure}
\subsection{Estrogen receptor data} \label{sec:numerical_experiments:er}
The final dataset that we experiment on is the estrogen receptor data \cite{dobra2009variable,li2010inexact}, which consists of 157 samples of 693 probe sets related to estrogen receptor pathway.
We aim to learn a Kronecker-structured graph on this data using $120$ randomly selected samples for training and the remaining $37$ for testing. We choose a Kronecker-structured graph for this data because transposable models, i.e., models that learn a Kronecker-structured data covariance, have been shown to work well for genomic data in the existing literature \cite{allen2010transposable}. As pointed out in Sec.~\ref{sec:product_graph_prob:kron}, a Kronecker-structured adjacency matrix corresponds to a Kronecker-structured data covariance.
For testing purposes, we follow a procedure similar to that of the previous subsections.
We compare our graph learning approach with SCM, PRLS, and sparse covariance estimation (SEC) from \cite{cui2016sparse}. Optimization parameters are manually tuned for the best results for each method. We learn a graph through our method (and a covariance through PRLS) with a Kronecker structure composed of two factor matrices of dimensions $n_{1} = 21$ and $n_{2} = 33$. We then use the LMMSE estimator to predict $33$ probe set measurements removed from the test data. PRLS, SEC and BPGL result in an improvement of $\mathbf{0.91347}$ dB, $\mathbf{0.93598}$ dB, and $\mathbf{1.0242}$ dB over SCM, respectively. This demonstrates that our method outperforms the state-of-the-art unstructured and structured sparse covariance estimation techniques, and provides a better model for real datasets.
\section{Probabilistic Problem formulation} \label{sec:prob_problem_form}
In this section we formulate the arbitrary graph learning problem from a probabilistic standpoint. Let us assume access to $m = 1,\dots,M_{0}$ graph signals $\bx_{m} \in \R^n$ observed on the $n$ nodes of an undirected graph $G = \{V, E\}$ without any self-loops, where $V$ and $E$ represent the nodes and edges of the graph. The weighted edges of this graph can be represented by a weighted adjacency matrix $\bW \in \R^{n \times n}$, which has a zero diagonal owing to the absence of self-loops. Based on the adjacency matrix $\bW$, one can define the degree matrix $\bD = \diag(\bW \bone)$, a diagonal matrix containing the weighted degree of each node at the respective diagonal entry. The associated unnormalized graph Laplacian can then be defined as $\bL = \bD - \bW$. The adjacency matrix $\bW$ of the graph can be decomposed as $\bU \bLambda \bU^{T}$, and its eigenvectors define the graph Fourier basis for the graph Fourier transform \cite{sandryhaila2014big}.
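These definitions can be sketched in a few lines (a hypothetical 4-node path graph):

```python
import numpy as np

# Basic graph operators for a small path graph (sketch).
n = 4
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0     # unweighted path 0-1-2-3

D = np.diag(W @ np.ones(n))             # degree matrix diag(W 1)
L = D - W                               # unnormalized graph Laplacian

# W = U Lambda U^T: the eigenvectors U give the graph Fourier basis
Lam, U = np.linalg.eigh(W)
x = np.arange(1.0, n + 1)               # a signal on the nodes
x_hat = U.T @ x                         # graph Fourier transform of x
x_rec = U @ x_hat                       # inverse transform recovers x
```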
The signals observed on the nodes of a graph are assumed to have a joint distribution given by a multivariate normal distribution, i.e., $\bx_{m} \sim \cN(0, \bL^{\dagger})$, where $\bL^{\dagger}$ is the pseudoinverse of $\bL$ and $\bL$ represents the graph Laplacian. In words, signals generated over a graph can be seen as being generated over a Gaussian Markov Random Field (GMRF) whose precision matrix is the graph Laplacian \cite{dong2016learning}. Given independent observations $\{\bx_{m}\}$, the maximum likelihood estimate (MLE) of $\bL$ can be expressed as:
\begin{align} \label{learn_lap}
\widehat{\bL}_{\text{MLE}} &= \underset{\bL \in \cL}{\argmax} |\bL|^{\frac{M_{0}}{2}} \exp(-\frac{1}{2} \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m}^{T} \bL \bx_{m}) \nonumber\\
&= \underset{\bL \in \cL}{\argmin} -\log|\bL| + \frac{1}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m}^{T} \bL \bx_{m},
\end{align}
where $\cL$ represents the class of valid Laplacians, i.e., symmetric positive semi-definite matrices with rows that sum to zero and nonpositive off-diagonal entries. With the Laplacian constraints, the problem in \eqref{learn_lap} can be further expressed as:
\begin{align} \label{learn_lap_2}
&\widehat{\bL}_{\text{MLE}} =~\underset{\bL}{\argmin} ~ -\log|\bL| + \frac{1}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m}^{T} \bL \bx_{m} \nonumber\\
&\quad\quad\quad~\text{s.t. } \bL \bone = \bzero,~ \tr (\bL) = n,~(\bL)_{ij} = (\bL)_{ji} \leq 0.
\end{align}
Interactions in the real world tend to be mostly local, and thus not all nodes in a graph are connected to each other in real-world datasets. To impose only local interactions, usually a sparsity term regularizing the off-diagonal entries of the Laplacian matrix is added to the graph learning objective to learn \textit{sparse} graphs. Therefore, traditional graph learning approaches \cite{chepuri2017learning,kalofolias2017large,egilmez2017graph,dong2016learning,kalofolias2016learn} take a form similar to the following:
\begin{align} \label{learn_lap_sparse}
&\widehat{\bL}_{\text{REG}} =~\underset{\bL}{\argmin} ~ -\log|\bL| + \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m}^{T} \bL \bx_{m} + \beta \|\bL\|_{1,\text{off}} \nonumber\\
&\quad\quad\quad~\text{s.t. } \bL \bone = \bzero,~ \tr (\bL) = n,~(\bL)_{ij} = (\bL)_{ji} \leq 0,
\end{align}
where $\|\bL\|_{1,\text{off}}$ represents a sparsity penalty on the off-diagonal entries of $\bL$, and parameters $\alpha$ and $\beta$ control the penalty on the quadratic term and the density of the graph, respectively.
In the following section, we show that the traditional graph learning problem can be significantly simplified and that
graphs can actually be learned through a simple linear program.
\section{Graph learning as a linear program} \label{sec:graph_learning_linear}
Let us start by inspecting the traditional graph learning problem in \eqref{learn_lap_sparse}. In particular, let us first focus on the term $\log|\bL|$ in the objective function and the constraint $\tr (\bL) = n$. We can express this log-determinant term in the objective as $\log|\bL| = \sum_{i=1}^{n} \log \lambda_{i}$, where $\lambda_{i}$ is the $i$-th largest eigenvalue of $\bL$. Thus, through this $\log|\bL|$ term, \eqref{learn_lap_sparse} constrains the spectrum of the Laplacian matrix to be estimated.
However, for our problem of estimating the Laplacian matrix, the constraint $\tr (\bL) = \sum_{i=1}^{n} \lambda_{i} = n$ already places a hard constraint on the sum of the eigenvalues of the Laplacian matrix. Moreover, the constraint $\bL \bone = \bzero$ forces the smallest eigenvalue of the Laplacian to be zero. In the presence of these constraints, the log-determinant regularization in the objective function is no longer necessary to arrive at a valid Laplacian matrix or to avoid trivial solutions. Another advantage of removing the log-determinant term is a substantial saving in computational complexity, as this term forces one to employ a singular value decomposition at each step of the learning algorithm \cite{greenewald2017tensor,tsiligkaridis2013covariance}.
Let us also examine the term $\sum_{m=1}^{M_{0}} \bx_{m}^{T} \bL \bx_{m}$ in the objective in \eqref{learn_lap}. This term comes from the likelihood of the observed signals with the Laplacian as the precision matrix, and also represents the sum of Dirichlet energies or ``smoothness'' of the observed graph signals \cite{kalofolias2016learn,dong2016learning,egilmez2017graph}. It has been shown in the existing literature \cite{kalofolias2016learn} that this term can be expressed as a weighted sparsity regularization on the graph adjacency matrix as $\sum_{m=1}^{M_{0}} \bx_{m}^{T} \bL \bx_{m} = \tr(\bX^{T} \bL \bX) = \frac{1}{2}\| \bW \circ \bZ\|_{1}$. Here $\bX$ is the data matrix with $\bx_{m}$ as the $m$-th column, and $\bZ$ is the matrix of pairwise squared distances between the rows of $\bX$, such that the $(i,j)$-th entry of $\bZ$ is the squared Euclidean distance between the $i$-th and $j$-th rows of $\bX$. This implies that the sum of Dirichlet energies in the objective implicitly regularizes the sparsity of $\bW$ and thus controls the density of edges in the graph. Therefore, the presence of this term in the objective eliminates the need to explicitly regularize the sparsity of the graph to be learned.
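In the form given in \cite{kalofolias2016learn}, the identity reads $\tr(\bX^{T} \bL \bX) = \frac{1}{2}\|\bW \circ \bZ\|_{1}$ with $Z_{ij}$ the squared Euclidean distance between rows $i$ and $j$ of $\bX$; a quick numerical check on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 6, 5
A = rng.random((n, n))
W = np.triu(A, 1) + np.triu(A, 1).T        # symmetric, zero diagonal
L = np.diag(W.sum(axis=1)) - W
X = rng.standard_normal((n, M))            # row i = signal values at node i

# Z[i, j] = squared Euclidean distance between rows i and j of X
Z = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)

lhs = np.trace(X.T @ L @ X)                # sum_m x_m^T L x_m
rhs = 0.5 * np.sum(W * Z)                  # (1/2) ||W o Z||_1
```

The factor $1/2$ appears because each edge is counted twice in the sum over ordered node pairs.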
In light of the preceding discussion, we propose to solve the following linear program \cite{boyd2004convex} for learning graphs:
\begin{align} \label{learn_lap_3}
\widehat{\bL} =~&\underset{\bL}{\min} ~~~ \frac{\alpha}{M_{0}}~\underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m}^{T} \bL \bx_{m} \nonumber\\
&\text{s.t. } \bL \bone = \bzero,~ \tr (\bL) = n,~(\bL)_{ij} = (\bL)_{ji} \leq 0,
\end{align}
where $\alpha$ is a regularization parameter that controls the smoothness of the graph signals and thus the sparsity of edges in the graph.
\subsection{Fast solver for the graph learning linear program} \label{sec:learning_algo:linear_problem}
We now present an algorithm, named \textbf{G}raph learning with \textbf{L}inear \textbf{P}rogramming (GLP), for solving the linear graph learning problem \eqref{learn_lap_3}.
To proceed, note that the objective term in the graph learning problem can be reformulated as:
\begin{align} \label{smoothness_adjacency}
\frac{1}{M_{0}}\underset{m = 1}{\overset{M_{0}}{\sum}} &\bx_{m}^{T} \bL \bx_{m} = \frac{1}{M_{0}}\underset{m = 1}{\overset{M_{0}}{\sum}} (\bx_{m}^{T} \bD \bx_{m} - \bx_{m}^{T} \bW \bx_{m}) \nonumber\\
&= \frac{1}{M_{0}}\underset{m = 1}{\overset{M_{0}}{\sum}} (\bx_{m}^{T} \diag(\bW \bone) \bx_{m} - \bx_{m}^{T} \bW \bx_{m}) \nonumber\\
&= \frac{1}{M_{0}}\underset{m = 1}{\overset{M_{0}}{\sum}} (\bx_{m}^{T} \diag(\bx_{m} ) \bW \bone - \bx_{m}^{T} \bW \bx_{m}) \nonumber\\
&= \frac{1}{M_{0}}\underset{m = 1}{\overset{M_{0}}{\sum}} (\bar{\bx}_{m}^{T} \bW \bone - \bx_{m}^{T} \bW \bx_{m}) \nonumber\\
&= \frac{1}{M_{0}}\underset{m = 1}{\overset{M_{0}}{\sum}} \tr(\bar{\bx}_{m}^{T} \bW \bone - \bx_{m}^{T} \bW \bx_{m}) \nonumber\\
&= \frac{1}{M_{0}}\tr \Big[ \bW \underset{m = 1}{\overset{M_{0}}{\sum}} \bone \bar{\bx}_{m}^{T} - \bW \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T} \Big] \nonumber\\
&= \tr \Big[ \bW \Big(\underset{m = 1}{\overset{M_{0}}{\sum}} \bone \bar{\bx}_{m}^{T} - \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T} \Big)/M_{0} \Big] \nonumber\\
&= \tr (\bW \wtbS) = \tvec(\wtbS)^{T}\tvec(\bW) = \wtbs^{T} \bM \bw,
\end{align}
where the matrix $\wtbS = (\underset{m = 1}{\overset{M_{0}}{\sum}} \bone \bar{\bx}_{m}^{T} - \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T} )/M_{0}$, the vector $\bar{\bx}_{m}^{T} = \bx_{m}^{T} \diag(\bx_{m} ) = \bx_{m}^{T} \circ \bx_{m}^{T}$, $\tvec(\bW) = \bM \bw$, $\bw$ is the vector of distinct elements of the upper-triangular part of the symmetric matrix $\bW$, and $\bM$ is the duplication matrix that duplicates the elements of $\bw$ to generate a vectorized version of $\bW$. With this rearrangement of the objective and $\bW$, our graph learning problem can be posed as:
\begin{align}
\widehat{\bw} =~&\underset{\bw}{\min}~\alpha~\wtbs^{T} \bM \bw \quad \text{s.t. } \bA \bw = \bb,~ \bw_{i} \geq 0, i \in F,
\end{align}
where $\bA$ is a matrix that represents the equality constraints from \eqref{learn_lap_3} in terms of equality constraints on $\bw$, $\bb = [\bzero^{T}, n]^{T}$, and $F$ is the set containing the indices of the off-diagonal elements in $\bw$. Once the solution $\whbw$ is obtained, it can be converted to the symmetric adjacency matrix $\whbW$, which can then be used to get $\whbL$.
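As a quick sanity check, the reformulation in \eqref{smoothness_adjacency} can be verified numerically. The NumPy sketch below uses a random placeholder adjacency and random observations (not actual graph data) and confirms that the average Dirichlet energy equals $\tr(\bW \wtbS)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M0 = 5, 20
W = np.triu(rng.random((n, n)), 1)      # placeholder adjacency (upper triangle)
W += W.T                                # symmetric, zero diagonal
X = rng.standard_normal((M0, n))        # rows are the observations x_m

L = np.diag(W @ np.ones(n)) - W         # L = diag(W 1) - W
lhs = np.mean([x @ L @ x for x in X])   # (1/M0) sum_m x_m^T L x_m

# S_tilde = (sum_m 1 xbar_m^T - sum_m x_m x_m^T) / M0, with xbar_m = x_m o x_m
ones = np.ones((n, 1))
S_tilde = sum(ones @ (x * x)[None, :] - np.outer(x, x) for x in X) / M0
rhs = np.trace(W @ S_tilde)
assert np.isclose(lhs, rhs)
```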
The standard way of solving a linear program with mixed (equality and inequality) constraints is through interior point methods whose complexity scales quadratically with the problem dimension \cite{boyd2004convex}. A better alternative is to deploy a first-order method whose per-iteration complexity is linear in the number of nonzero entries of $\bA$. However, a first-order method would exhibit slow convergence for a linear program because of the lack of smoothness and strong convexity in linear programs \cite{wang2017new}.
To overcome these issues, linear programs have been solved through the Alternating Direction Method of Multipliers (ADMM) \cite{eckstein1990alternating,wang2017new}. To solve our proposed linear formulation of graph learning, we follow a recent algorithm proposed in \cite{wang2017new}. This ADMM-based algorithm for linear programs relies on a new variable splitting scheme that reaches an $\epsilon$-accurate solution within $\cO(\|\bA\|^2\log(1/\epsilon))$ iterations. To this end, we start by modifying the original graph learning problem with the introduction of an additional variable $\by$ as follows:
\begin{align}
\widehat{\bw} =~&\underset{\bw}{\min}~\bc^{T} \bw \quad \text{s.t. } \bA \bw = \bb,~ \bw = \by,~ \by_{i} \geq 0, i \in F,
\end{align}
where $\bc = \alpha \bM^{T} \wtbs$. The corresponding augmented Lagrangian can then be expressed as follows:
\begin{align} \label{eq:lagrangian}
L(\bw,\by,\bz) = \bc^{T} \bw + h(\by) + \bz^{T} (\bA_{\bw} \bw + \bA_{\by} \by - \wtbb) \nonumber\\
+ \rho/2 \| \bA_{\bw} \bw + \bA_{\by} \by - \wtbb \|_{2}^{2},
\end{align}
where $h(\by)$ is the indicator function encoding the non-negativity constraints on the entries of $\by$ indexed by $F$, i.e., $h(\by) = 0$ when $y_{i} \geq 0$ for all $i \in F$, and $h(\by) = \infty$ otherwise.
Moreover, $\bz = [\bz_{\bw}^{T}, \bz_{\by}^{T}]^{T}$, where $\bz_{\bw}$ and $\bz_{\by}$ are the Lagrange multipliers associated with the constraints $\bA \bw = \bb$ and $\bw = \by$, respectively; $\bA_{\bw} = [\bA^{T}, \bI]^{T}$, $\bA_{\by} = [\bzero, -\bI]^{T}$, and finally $\wtbb = [\bb^{T}, \bzero^{T}]^{T}$. One can then use ADMM to go through the
steps outlined in Algorithm~\ref{alg_learn_graph} until convergence to obtain $\whbw$.
\begin{algorithm}[t]
\caption{\textbf{: GLP---ADMM for graph learning with linear programming}}
\label{alg_learn_graph}
{\textbf{Input:} Observations $\{\bx_{m}\}_{m = 1}^{M_{0}} $, maximum iterations $T_{0}$, and parameters $\alpha,\rho > 0$}\\
\textbf{Initialize:} $\by^{(1)} \leftarrow \bzero$~,~$\bz^{(1)}
\leftarrow \bone$
\begin{algorithmic}
\STATE \textbf{for} $t=1$ to $T_{0}$
\STATE \hspace{\algorithmicindent} $\be^{(t+1)} \leftarrow - \bA_{\bw}^{T} [\bz^{(t)} + \rho(\bA_{\by} \by^{(t)} - \wtbb)] - \bc$
\STATE \hspace{\algorithmicindent} $\bw^{(t+1)} \leftarrow \rho^{-1} (\bI + \bA^{T}\bA)^{-1} \be^{(t+1)}$
\STATE \hspace{\algorithmicindent} $\by^{(t+1)} \leftarrow [\bw^{(t+1)} + \bz_{\by}^{(t)}/\rho]_{\underset{F}{\geqslant} \bzero}$
\STATE \hspace{\algorithmicindent} $\bz^{(t+1)} \leftarrow \bz^{(t)} + \rho(\bA_{\bw} \bw^{(t+1)} + \bA_{\by} \by^{(t+1)} - \wtbb)$
\STATE \textbf{end}
\end{algorithmic}
\textbf{Output:} Final adjacency estimate $\whbw \leftarrow \bw^{(t+1)}$
\end{algorithm}
In Algorithm~\ref{alg_learn_graph}, $[\cdot]_{\underset{F}{\geqslant} \bzero}$ is entrywise thresholding that projects the entries with indices in $F$ to the nonnegative orthant. As we can see from the algorithm, all updates have closed-form solutions and the most computationally expensive step is the $\bw^{(t+1)}$ update that involves matrix inversion. This matrix inversion, however, can be computed efficiently using the
identity $(\bI + \bA^{T}\bA)^{-1} = \bI - \bA^{T}(\bI + \bA\bA^{T})^{-1}\bA$. Since matrix $\bA$ is a fat matrix, $\bA\bA^{T}$ has smaller dimensions than $\bA^{T}\bA$. Moreover, one can easily see that $\bA\bA^{T}$ is a matrix of dimensions $(n+1) \times (n+1)$, and
\begin{align}
\bA\bA^{T} &=
\begin{bmatrix}
c_{n} & \bone^{T}\\
\bone & \bI
\end{bmatrix},
\end{align}
where $c_{n} = 2n^2 - n$. In addition, the inverse only needs to be computed once at the start of the algorithm since this matrix is deterministic and depends only on the size of the adjacency matrix being estimated.
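For concreteness, the updates of Algorithm~\ref{alg_learn_graph}, together with the inversion identity above, can be sketched in a few lines of NumPy. The toy equality-constrained LP below is only illustrative (it is not the actual graph-learning constraint matrix $\bA$), and the choices of $\rho$ and the iteration budget are arbitrary:

```python
import numpy as np

def admm_lp(c, A, b, rho=1.0, iters=20000):
    """Sketch of the GLP-style updates for min c^T w s.t. A w = b, w >= 0
    (here every entry of w is sign-constrained, i.e. F covers all indices)."""
    p, d = A.shape
    Aw = np.vstack([A, np.eye(d)])                  # A_w = [A; I]
    Ay = np.vstack([np.zeros((p, d)), -np.eye(d)])  # A_y = [0; -I]
    bt = np.concatenate([b, np.zeros(d)])           # b_tilde = [b; 0]
    # (I + A^T A)^{-1} computed via the smaller (p x p) inverse of (I + A A^T)
    inv = np.eye(d) - A.T @ np.linalg.inv(np.eye(p) + A @ A.T) @ A
    y = np.zeros(d)
    z = np.ones(p + d)
    for _ in range(iters):
        e = -Aw.T @ (z + rho * (Ay @ y - bt)) - c
        w = inv @ e / rho
        y = np.maximum(w + z[p:] / rho, 0.0)        # projection onto y >= 0
        z = z + rho * (Aw @ w + Ay @ y - bt)
    return w

# Toy LP: min -w1 - 2 w2  s.t.  w1 + w2 = 1, w >= 0; optimum is w = (0, 1).
w = admm_lp(np.array([-1.0, -2.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
assert np.allclose(w, [0.0, 1.0], atol=1e-3)
```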
\subsection{Parameter and computational complexities}
The number of parameters that one needs to estimate in order to learn a graph adjacency matrix is $\frac{n(n+1)}{2}$. This implies that the number of unknown parameters scales quadratically with the number of nodes in the graph. Additionally, the per-iteration computational complexity of the proposed method also scales quadratically with the number of nodes \cite{wang2017new}. The same computational and memory complexities also hold for the existing state-of-the-art graph learning algorithms \cite{dong2016learning,kalofolias2017large,kalofolias2016learn,chepuri2017learning,egilmez2017graph}. However, while these complexities are manageable for small graphs, current methods become prohibitive for real-world datasets with even hundreds of nodes. To overcome these issues, we next examine the problem of learning product graphs.
\section{Product graphs: Problem formulation \\\quad\& advantages in data representation} \label{sec:product_graphs}
In this section we briefly review product graphs and their implications towards graph learning. We investigate how product graphs provide a way to efficiently represent graphs with a huge number of nodes, and we revisit the notion of smoothness of signals over product graphs.
Let us consider $K_{0}$ graphs $G_{k} = \{V_{k}, E_{k}\}$, for $k = 1,\dots,K_{0}$, where $V_{k}$ and $E_{k}$ represent the vertices and edges of the $k$-th graph. The product of these graphs would result in a product graph $G = \{V, E\}$, with $V$
and $E$ representing the vertices and edges of the resultant graph. The three most commonly investigated graph products and their respective adjacency matrices are discussed below. Note that graph adjacency matrices are considered in this work because each kind of product structure is directly reflected in the adjacency matrix of the resultant graph.
\subsection{Kronecker graphs} \label{sec:product_graph_prob:kron}
For the Kronecker product of graphs $G_{k}$, for $k = 1,\dots,K_{0}$, with adjacency matrices $\bW_{k}$, the Kronecker product graph can be expressed as $G = \underset{[K_{0}]}{\otimes} G_{k} = G_{K_0} \otimes G_{K_0-1} \otimes \dots \otimes G_{1}$. The respective Kronecker-structured adjacency matrix of the resultant graph can be written in terms of the component/factor adjacency matrices as $\bW =\underset{[K_{0}]}{\otimes} \bW_{k}$. Additionally, if the factor adjacency matrix $\bW_{k}$ can be expressed via its eigenvalue decomposition (EVD) as $\bW_{k} = \bU_{k} \bLambda_{k} \bU_{k}^{T}$, then the Kronecker adjacency matrix can be written as (using properties in \cite{sandryhaila2014big}):
\begin{align} \label{eq:kronecker}
\bW &= (\bU_{K_{0}} \bLambda_{K_{0}} \bU_{K_{0}}^{T}) \otimes \dots \otimes (\bU_{1} \bLambda_{1} \bU_{1}^{T}) \nonumber\\
&= (\underset{[K_{0}]}{\otimes} \bU_{k})~(\underset{[K_{0}]}{\otimes} \bLambda_{k})~(\underset{[K_{0}]}{\otimes} \bU_{k}^{T}) = \bU \bLambda_{\mathrm{kron}} \bU^{T}.
\end{align}
One can see that both the eigenmatrix and the eigenvalue matrix of the Kronecker adjacency matrix have a Kronecker structure in terms of the component eigenmatrices and component eigenvalue matrices, respectively. Given that the number of edges in the $k$-th component graph is $|E_{k}|$, the number of edges in the Kronecker graph is $|E| = 2^{K_{0}-1} \overset{K_{0}}{\underset{k = 1}{\prod}} |E_{k}|$.
An example of a Kronecker product graph is the bipartite graph of a recommendation system such as Netflix \cite{allen2010transposable}, where the graph between users and movies can be seen as a Kronecker product of two smaller factor graphs.
In fact, the adjacency matrix of any bipartite graph can be represented in terms of a Kronecker product of appropriate factor matrices \cite{leskovec2010kronecker}. As adjacency matrices are also closely related to precision matrices (inverse covariance matrices), and the inverse of a Kronecker product is the Kronecker product of the inverses \cite{sandryhaila2014big}, imposing a Kronecker structure on the adjacency matrix also amounts to imposing a Kronecker structure on the covariance matrix of the data.
The optimization problem in \eqref{learn_lap_3} can be specialized to the case of learning Kronecker graphs by explicitly imposing the Kronecker product structure on the adjacency matrix and posing the problems in terms of the individual factor adjacency matrices, rather than the bigger adjacency matrix produced after the product. This leads us to the following nonconvex problem for learning Kronecker graphs:
\begin{align} \label{learn_kronecker}
\underset{\{\bW_{k} \in \cW \}_{k=1}^{K_{0}}}{\min} &\frac{\alpha}{M_{0}}\tr \Bigg[ \big[ \underset{[K_{0}]}{\otimes} \bW_{k} \big] \Big(\underset{m = 1}{\overset{M_{0}}{\sum}} \bone \bar{\bx}_{m}^{T} - \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T} \Big) \Bigg].
\end{align}
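The eigenstructure in \eqref{eq:kronecker} is straightforward to verify numerically for two factors. In the NumPy sketch below, the random symmetric matrices are placeholders for actual factor adjacencies:

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_adj(n):
    # Placeholder symmetric factor adjacency with zero diagonal.
    W = np.triu(rng.random((n, n)), 1)
    return W + W.T

W1, W2 = sym_adj(3), sym_adj(4)
l1, U1 = np.linalg.eigh(W1)
l2, U2 = np.linalg.eigh(W2)

W = np.kron(W2, W1)                     # Kronecker product adjacency
U = np.kron(U2, U1)                     # eigenmatrix inherits the structure
Lam = np.kron(np.diag(l2), np.diag(l1)) # Kronecker-structured eigenvalues
assert np.allclose(U @ Lam @ U.T, W)
```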
\subsection{Cartesian graphs} \label{sec:product_graph_prob:cart}
The Cartesian product (also called the Kronecker sum) of graphs $G_{k}$ is represented as $G = \underset{[K_{0}]}{\oplus} G_{k} = G_{K_0} \oplus G_{K_0-1} \oplus \dots \oplus G_{1}$. The corresponding Cartesian adjacency matrix can be written in terms of the component adjacency matrices as $\bW =\underset{[K_{0}]}{\oplus} \bW_{k}$. Furthermore, with the EVDs of the component adjacency matrices, the Cartesian adjacency matrix can be decomposed as \cite{sandryhaila2014big}:
\begin{align} \label{eq:cartesian}
\bW &= (\bU_{K_{0}} \bLambda_{K_{0}} \bU_{K_{0}}^{T}) \oplus \dots \oplus (\bU_{1} \bLambda_{1} \bU_{1}^{T}) \nonumber\\
&= (\underset{[K_{0}]}{\otimes} \bU_{k})~(\underset{[K_{0}]}{\oplus} \bLambda_{k})~(\underset{[K_{0}]}{\otimes} \bU_{k}^{T}) = \bU \bLambda_{\mathrm{cart}} \bU^{T}.
\end{align}
This means that the eigenmatrix and the eigenvalue matrix of the Cartesian adjacency matrix are represented, respectively, as Kronecker and Cartesian products of the component eigenmatrices and eigenvalue matrices. The number of edges in the Cartesian graph is $|E| = \sum_{k = 1}^{K_{0}} \big(\prod_{i \neq k} n_{i}\big) |E_{k}|$, where $|E_{k}|$ and $n_{k}$ represent the number of edges and the number of vertices in the $k$-th component graph, respectively.
A typical example of a Cartesian product graph is an image. Images reside on two-dimensional rectangular grids that can be represented as the Cartesian product of two line graphs pertaining to the rows and columns of the image \cite{sandryhaila2014big}. A social network can also be approximated as a Cartesian product of an inter-community graph with an intra-community graph \cite{sandryhaila2014big}.
Similar to the previous discussion, the optimization problem in \eqref{learn_lap_3} can be specialized to learning Cartesian graphs by explicitly imposing the Cartesian structure and posing the problem in terms of the factor adjacency matrices as follows:
\begin{align} \label{learn_cartesian}
\underset{\{\bW_{k} \in \cW \}_{k=1}^{K_{0}}}{\min} &\frac{\alpha}{M_{0}}\tr \Bigg[ \big[ \underset{[K_{0}]}{\oplus} \bW_{k} \big] \Big(\underset{m = 1}{\overset{M_{0}}{\sum}} \bone \bar{\bx}_{m}^{T} - \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T} \Big) \Bigg].
\end{align}
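The decomposition \eqref{eq:cartesian} can be checked the same way. For two factors the Kronecker sum is $\bW_{2} \oplus \bW_{1} = \bW_{2} \otimes \bI + \bI \otimes \bW_{1}$, and its eigenvalues are all pairwise sums of the factor eigenvalues; the random factors below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 4
W1 = np.triu(rng.random((n1, n1)), 1); W1 += W1.T
W2 = np.triu(rng.random((n2, n2)), 1); W2 += W2.T

# Kronecker sum: W2 (+) W1 = W2 (x) I + I (x) W1.
W = np.kron(W2, np.eye(n1)) + np.kron(np.eye(n2), W1)

l1, U1 = np.linalg.eigh(W1)
l2, U2 = np.linalg.eigh(W2)
U = np.kron(U2, U1)                          # Kronecker eigenmatrix
lam = (l2[:, None] + l1[None, :]).ravel()    # all pairwise eigenvalue sums
assert np.allclose(U @ np.diag(lam) @ U.T, W)
```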
\subsection{Strong graphs} \label{sec:product_graph_prob:strong}
The strong product of graphs $G_{k}$ can be represented as $G = \underset{[K_{0}]}{\boxtimes} G_{k} = G_{K_0} \boxtimes G_{K_0-1} \boxtimes \dots \boxtimes G_{1}$. The respective strong adjacency matrix of the resultant strong graph is given in terms of the component adjacency matrices as $\bW =\underset{[K_{0}]}{\boxtimes} \bW_{k}$, and can be further expressed as:
\begin{align} \label{eq:strong}
\bW &= (\bU_{K_{0}} \bLambda_{K_{0}} \bU_{K_{0}}^{T}) \boxtimes \dots \boxtimes (\bU_{1} \bLambda_{1} \bU_{1}^{T}) \nonumber\\
&= (\underset{[K_{0}]}{\otimes} \bU_{k})~(\underset{[K_{0}]}{\boxtimes} \bLambda_{k})~(\underset{[K_{0}]}{\otimes} \bU_{k}^{T}) = \bU \bLambda_{\mathrm{str}} \bU^{T},
\end{align}
in terms of the EVDs of the component adjacency matrices.
The strong product adjacency matrix can be seen as the sum of the Kronecker and Cartesian products of the factor adjacency matrices.
An example of data conforming to the strong product graph is a spatiotemporal sensor network graph, which consists of a strong product of a spatial graph and a temporal graph (representing the temporal dependencies of the sensors). The spatial graph has as many nodes as the number of sensors in the sensor network and represents the spatial distribution of sensors. On the other hand, the temporal graph has as many nodes as the number of temporal observations of the whole sensor network and represents the overall temporal dynamics (changes in connectivity over time) of the network \cite{sandryhaila2014big}.
By making the strong product structure explicit in terms of the factor adjacency matrices, the optimization problem for learning strong graphs can be expressed as the following nonconvex problem:
\begin{align} \label{learn_strong}
\underset{\{\bW_{k} \in \cW \}_{k=1}^{K_{0}}}{\min} &\frac{\alpha}{M_{0}}\tr \Bigg[ \big[ \underset{[K_{0}]}{\boxtimes} \bW_{k} \big] \Big(\underset{m = 1}{\overset{M_{0}}{\sum}} \bone \bar{\bx}_{m}^{T} - \underset{m = 1}{\overset{M_{0}}{\sum}} \bx_{m} \bx_{m}^{T} \Big) \Bigg].
\end{align}
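Since the strong adjacency matrix is the sum of its Kronecker and Cartesian counterparts, its eigenstructure in \eqref{eq:strong} follows directly; for two (placeholder) factors:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2 = 3, 4
W1 = np.triu(rng.random((n1, n1)), 1); W1 += W1.T
W2 = np.triu(rng.random((n2, n2)), 1); W2 += W2.T

# Strong product adjacency = Kronecker part + Cartesian part.
W = np.kron(W2, W1) + np.kron(W2, np.eye(n1)) + np.kron(np.eye(n2), W1)

l1, U1 = np.linalg.eigh(W1)
l2, U2 = np.linalg.eigh(W2)
U = np.kron(U2, U1)
# Eigenvalues combine multiplicatively and additively: l2*l1 + l2 + l1.
lam = (l2[:, None] * l1[None, :] + l2[:, None] + l1[None, :]).ravel()
assert np.allclose(U @ np.diag(lam) @ U.T, W)
```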
\subsection{Product graph Fourier transform}
One can see from \eqref{eq:kronecker}, \eqref{eq:cartesian}, and \eqref{eq:strong} that the graph Fourier transform of a product graph (given by the eigenmatrix of the product adjacency matrix) has a Kronecker structure in terms of the eigenmatrices of the component graph adjacency matrices: $\bU = \underset{[K_{0}]}{\otimes} \bU_{k}$. This structure leads to an efficient implementation of the graph Fourier transform as (using the properties of Kronecker products and tensors \cite{kolda2009tensor}):
\begin{align}
\bU^{T} \bx = (\underset{[K_{0}]}{\otimes} \bU_{k})^{T} \bx = \tvec (\cX \underset{[K_{0}]}{\times} \bU_{k}^{T}),
\end{align}
where $\bx \in \R^{n_{1} n_{2} \dots n_{K_{0}}}$ is an arbitrary graph signal on the product graph, and $\cX \in \R^{n_{1} \times n_{2} \times \dots \times n_{K_{0}}}$ represents the appropriately tensorized version of the signal $\bx$. Because of this, one does not need to form the huge Fourier matrix $\bU$ and can avoid costly matrix multiplications by simply applying the component graph Fourier matrices to each respective mode of the tensorized observation $\cX$ and then vectorizing the result.
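For two factors, the mode-wise implementation can be checked against the explicit Kronecker Fourier matrix. The sketch below assumes a column-major vectorization convention and uses random orthonormal matrices as stand-ins for the factor eigenmatrices:

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2 = 3, 4
# Orthonormal placeholders for the factor eigenmatrices U_1 and U_2.
U1 = np.linalg.qr(rng.standard_normal((n1, n1)))[0]
U2 = np.linalg.qr(rng.standard_normal((n2, n2)))[0]
x = rng.standard_normal(n1 * n2)

direct = np.kron(U2, U1).T @ x          # full (n1 n2 x n1 n2) transform
X = x.reshape(n1, n2, order="F")        # column-major tensorization
modewise = (U1.T @ X @ U2).ravel(order="F")
assert np.allclose(direct, modewise)
```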
\subsection{Smoothness } \label{sec:product_graphs:smoothness}
Smoothness of a graph signal is one of the core concepts in graph signal processing \cite{bronstein2017geometric,sandryhaila2013discrete,sandryhaila2014big,shuman2013emerging,ortega2018graph}, and product graph Laplacians provide an efficient representation for the notion of smoothness. The smoothness of a graph signal can be measured through the Dirichlet energy defined as $\bx^{T} \bL \bx$. The Dirichlet energy can be re-expressed as: $\bx^{T} \bL \bx = \bx^{T} (\bD - \bW) \bx = \bx^{T} \bD \bx - \bx^{T} \bW \bx.$
Let us now focus on each term separately in the context of product graphs. For the term involving $\bW$ we have:
\begin{align} \label{eq:product_graphs:dirchilet:adjacency}
\bx^{T} \bW \bx &= \bx^{T} \bU \bLambda \bU^{T} \bx = (\bU^{T}\bx)^{T} \bLambda (\bU^{T}\bx) \nonumber\\
&= \tvec (\cX \underset{[K_{0}]}{\times} \bU_{k}^{T})^{T} \bLambda \tvec (\cX \underset{[K_{0}]}{\times} \bU_{k}^{T}).
\end{align}
Similarly, the term involving $\bD$ can be re-expressed as:
\begin{align} \label{eq:product_graphs:dirchilet:degree}
\bx^{T} \bD \bx &= \bx^{T} \diag( \bW \bone) \bx = \bx^{T} \diag(\bx) \bW \bone \nonumber\\
&= (\bx \odot \bx)^{T} \bW \bone = \bar{\bx}^{T} \bW \bone,
\end{align}
which can be computed efficiently along the lines of \eqref{eq:product_graphs:dirchilet:adjacency}. With this reformulation, one circumvents the need to explicitly form the prohibitively large eigenmatrix $\bU$ and can evaluate the Dirichlet energy much more efficiently with just mode-wise products with the smaller component eigenmatrices.
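For two factors, both terms of the Dirichlet energy can indeed be evaluated at the factor level without ever forming $\bW$, $\bD$, or $\bU$. The NumPy sketch below (random placeholder factors and signal) checks the factor-level evaluation against the explicit computation:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2 = 3, 4
W1 = np.triu(rng.random((n1, n1)), 1); W1 += W1.T
W2 = np.triu(rng.random((n2, n2)), 1); W2 += W2.T
x = rng.standard_normal(n1 * n2)
X = x.reshape(n1, n2, order="F")        # column-major tensorized signal

# Reference: form the full product Laplacian explicitly.
W = np.kron(W2, W1)
L = np.diag(W @ np.ones(n1 * n2)) - W
direct = x @ L @ x

# Factor-level evaluation: neither W nor L is ever formed.
quad = np.sum(X * (W1 @ X @ W2.T))      # x^T W x via mode products
d1, d2 = W1 @ np.ones(n1), W2 @ np.ones(n2)
deg = np.sum((X * X) * np.outer(d1, d2))  # x^T D x = (x o x)^T W 1
assert np.isclose(direct, deg - quad)
```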
\subsection{Representation complexity} \label{sec:product_graphs:representation}
Let us consider an unknown graph $G$ with $|V| = n = \prod_{k =1}^{K_{0}} n_{k}$ nodes, where $n_{k}$ represents the number of nodes in each component graph and $K_{0}$ is the total number of component graphs. If one were to learn this graph by means of an arbitrary adjacency matrix, the number of parameters to be estimated would be $\frac{n(n + 1)}{2}$ (since the graph adjacency matrix is symmetric). On the other hand, for the same graph, by utilizing the product model of the graph adjacency matrix, one would need to estimate only $\sum_{k=1}^{K_{0}} \frac{n_{k}(n_{k} + 1)}{2}$ parameters. In the special case of $n_{1} = n_{2} = \cdots = n_{K_{0}}= \bar{n}$, imposing the product structure on the graph adjacency matrix thus reduces the number of parameters to be learned by a factor of approximately $\bar{n}^{2(K_{0}-1)}/K_{0}$.
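The savings are easy to tabulate; the helper functions below are illustrative:

```python
import math

def full_params(n):
    # Symmetric n x n adjacency: upper triangle including the diagonal.
    return n * (n + 1) // 2

def product_params(factors):
    # One symmetric factor adjacency per component graph.
    return sum(nk * (nk + 1) // 2 for nk in factors)

factors = [10, 10, 10]                  # K0 = 3 factors, n = 1000 nodes
n = math.prod(factors)
print(full_params(n), product_params(factors))   # 500500 165
```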
\section{Introduction}
Developing glasses with tailored surface properties is
an important step in many types of applications such as
the production of ultra-thin flexible displays and energy
efficient windows, catalysis technology, electronics, and biomaterials
\cite{Pantano1989,bocko1991surface,bach1997advanced,zhuravlev_surface_2000,pugliara2016assessing,dey2016cleaning,zheng2019protein}.
Among the problems one faces in the design of glasses with specific
properties is the presence of surface defects, which can lead to a
dramatic drop of the mechanical strength or a strong alteration of the
chemical reactivity of the samples. In spite of the considerable number
of experimental and computational studies that have probed the surface
properties of silicate glasses, we still lack atomistic insight into
how the glass composition affects the structure of such surfaces or the
concentration of defects, or how the local structure of the surface
influences the spectroscopic and electronic properties.
Note that when we discuss here glass surfaces, we refer to surfaces that
have been obtained by cooling the glass-former from the melt, i.e., we
do not consider the case in which the surface is produced by a fracture
process~\cite{anderson_fracture_2017,zhang_surfcl_2020}.
Experimental techniques such as low-energy ion scattering (LEIS)
spectroscopy, X-ray photoelectron spectroscopy (XPS), or atomic force
microscopy (AFM) have provided information on surface composition and its
microstructure~\cite{kelso_comparison_1983,almeida_low-energy_2014,almeida_low-energy_2016,cushman2018low,radlein1997atomic,poggemann2001direct,poggemann2003direct,frischat2004nanostructure}.
The LEIS technique has, e.g., made it possible to demonstrate that
melt-formed surfaces of binary oxide glasses are depleted of the
modifier atoms, which seem to evaporate when the sample is still
in the liquid state~\cite{almeida_low-energy_2016}. AFM measurements have made it possible
to probe the structural features of glass surfaces with atomic
resolution, and thus to obtain structural information such as interatomic
distances and the grouping of atoms, but the nature of the defects could not be
determined~\cite{radlein1997atomic,poggemann2001direct,poggemann2003direct,frischat2004nanostructure}.
Information on the surface structure can also be
obtained from spectroscopic techniques such as infrared (IR)
spectroscopy, nuclear magnetic resonance (NMR), extended X-ray absorption fine structure (EXAFS) or electron paramagnetic resonance (EPR)~\cite{radzig2007point,berruyer2017three}. However, in order to obtain from such techniques
information about the structural properties of the surface it is
usually necessary to make a hypothesis on the nature of the defects
and/or to combine spectral, kinetic and computational data, a task that is not
straightforward at all~\cite{comas2017understanding,berruyer2017three}.
Pure silica is the simplest silicate glass, and because
of its importance in industrial and engineering
applications, such as a support medium for modern heterogeneous
catalysts and biomolecules, it has been widely studied in the past
\cite{zhuravlev_surface_2000,varshneya2013fundamentals,rimola_silica_2013,tielens2019Characterization}.
Experimental as well as theoretical studies have given evidence that the
local structure of the outermost layer of silica surfaces consists of
SiOSi bridges (called siloxane bridges) and SiO$_4$ tetrahedra bearing one
or two OH groups~\cite{zhuravlev_surface_2000,rimola_silica_2013}. Using
appropriate heating and thermal treatment (above 700$^{\circ}$C),
the concentration of these silanol groups can be reduced,
making it possible to generate partially or even fully dehydroxylated silica
surfaces~\cite{morrow1976infrared,bunker1989infrared,grabbe1995strained,coperet2003homogeneous,sot2019fully}.
With the reduction of the surface hydroxylation, defective structures
are generated, in particular strained two-membered (2M) rings, i.e.~two
tetrahedra that share an edge. The presence of this
type of defect, completely absent in the bulk sample, was inferred from
the appearance of certain features in the IR spectra, namely
two bands at 888 and 908~cm$ ^{-1} $, and a shoulder at 932~cm$ ^{-1}
$~\cite{morrow1976infrared,bunker1989infrared,grabbe1995strained,ferrari1995reactions}.
These 2M rings are under high local stress and hence are
considered to be important reactive sites capable of promoting the
functionalization of the surface, as indicated by various experimental
studies~\cite{coperet2003homogeneous,comas2017understanding}.
Other experiments indicate the existence of further local
defects, such as under-coordinated silicon and non-bridging
oxygen atoms, but their concentration and
the way they modify the network are not
known~\cite{vaccaro2008luminescence,sot2019fully}.
These experimental efforts have been complemented
by computer simulation studies, pioneered by the
classical molecular dynamics (MD) simulations of Garofalini and
co-workers~\cite{garofalini_molecular_1983,feuston_topological_1989,garofalini_molecular_1990}.
Using various types of interaction potentials, the surfaces of
silica glasses were investigated in detail in order to identify the different
structural features and in particular the concentration of the mentioned
defects~\cite{wilson2000hydrolysis,roder_structure_2001,rarivomanantsoa_classical_2001,wang2003molecular1,du_molecular_2005,gonccalves2016molecular,rimsza_surface_2017,halbert2018modelling}.
Although most of these studies did indeed report a finite
concentration of the various local defect sites, the values
did not match the experimental data well, likely because of the
protocol used to generate the samples or inaccuracies in the
interaction potential. Similar investigations have also been
carried out for surfaces of more complex glasses, and it was found
that their structure differed significantly from that of the bulk
system~\cite{garofalini1985differences,ren_surface_2017,garofalini2018simulations}.
Note that most of the effective force fields used to carry out these simulations
have been developed to describe the bulk properties of glasses. Therefore,
it is far from obvious whether such classical MD simulations are able to
give a quantitatively correct description of the local structure of the
surface, since the arrangement of the atoms is very different from the
one encountered in the bulk. This problem can be avoided by using an
\textit{ab initio} approach in which the forces are directly calculated
from the electronic degrees of freedom~\cite{kob_first-principles_2016}. This approach is
thus not only more reliable but also allows one to determine
the electronic signatures of the main structural features of samples
with surfaces.
The goal of the present work is thus to provide a detailed description
of silicate glass surfaces in terms of their structural, vibrational, and
electronic properties and to probe how these properties depend on the
composition of the glass. The use of {\it ab initio} calculations will
allow us to extract the spectroscopic and electronic signatures of the
defective sites and in particular to understand how the presence of Na
atoms affects the various properties.
The remainder of the paper is organized as follows: The next section
gives details on the composition of the studied glasses, the protocol
used to generate samples with surfaces, as well as on the adopted
computational framework. In Sec.~\ref{sec:structure} we will present a description
of the structural properties of the surface domain and compare them to those
of the interior (bulk-like) domain. Subsequently we will
show and discuss the vibrational (in Sec.~\ref{sec:vibrations}) and
electronic (in Sec.~\ref{sec:electronic}) properties of the studied
compositions. Finally, the last section will summarize the conclusions
and give perspectives of the present work.
\section{Simulation details} \label{sec:sim}
In the present study, we have considered three glass-forming systems: Pure silica
(SiO$_2$) and two binary sodo-silicates, Na$_2$O-5SiO$_2$ and
Na$_2$O-3SiO$_2$, denoted hereafter as NS5 and NS3, respectively. To
start we prepared a bulk liquid sample containing around 400 atoms
randomly placed in a cubic simulation box and carried out classical
molecular dynamics simulations at relatively high temperatures
(3600~K for SiO$_2$ and 3000~K for the two sodosilicates), using
periodic boundary conditions. The initial box side was chosen so
that the density coincides with that of the glass of the corresponding
composition at room temperature~\cite{bansal_handbook_1986}.
More details can be found in Ref.~\cite{zhang_fracture_2020}.
The final
configurations of these classical simulations were then used as starting
points for the equilibration runs carried out within the framework of {\it
ab initio} molecular dynamics (AIMD) simulations at the same temperatures
and using the constant volume–constant temperature ($NVT$) ensemble. The
lengths of these AIMD trajectories were 12.2~ps, 15.6~ps, and 11.8~ps
for silica, NS5 and NS3, respectively, a time span that was sufficiently
long to completely equilibrate the samples. More details (composition,
number of atoms, densities, box sizes of bulk samples) are
given in Table~\ref{tab: ab simu-parameters}. For each composition,
two independent samples were prepared and the results presented in the
following sections are their averaged properties.
\begin{table*}[ht]
\small
\center
\begin{tabular}{lcccccc}
\hline
&{\hspace{2mm}} \#atoms {\hspace{2mm}} & Na$_2$O-mole\% {\hspace{2mm}} &{\hspace{2mm}} $L_{\rm bulk}$~(\AA) {\hspace{2mm}} & {\hspace{2mm}} $\rho_{\rm bulk}$~(g/cm$^3$) {\hspace{2mm}} & {\hspace{2mm}} $T_0$~(K) {\hspace{1mm}} & {\hspace{2mm}} $T_1$~(K) \\ \hline
SiO$_2$ & 384 &0.0 & 17.96 & 2.20 & 3600 & 2500 \\
NS5 & 414 &16.7 & 18.07 & 2.35 & 3000 & 2000 \\
NS3 & 396 &25.0 & 17.62 & 2.43 & 2200 & 1500 \\ \hline
\end{tabular}
\caption{
\label{tab: ab simu-parameters}
Simulation parameters. See the main text for the definitions of $T_0$ and $T_1$.}
\end{table*}
In order to generate samples that have surfaces, we cleaved the bulk liquid
samples along the $z$-axis and inserted a vacuum layer between the two
surfaces, thus creating a sample with a slab geometry. The height of
this vacuum layer was 18~\AA, large enough to prevent the two surfaces
from interacting with each other. Due to the cleavage process, the structure
close to the surfaces was strongly out of equilibrium and hence we
re-equilibrated the sample at a temperature $T_0$
(see Tab~\ref{tab: ab simu-parameters}). Note that the presence of the free
surfaces requires that this equilibration is done with some caution:
On one hand, the temperature should be high enough to allow the atoms to
diffuse within a reasonable amount of time. On the other hand a temperature
that is too high will result in the evaporation of the surface atoms
and/or a large expansion of the sample. For this reason, the equilibration
temperature $T_0$ for silica and NS5 sample was identical to the one
at which the liquid was equilibrated, since the evaporation rate is
small, while for the Na-rich composition NS3, for which the rate is high,
we had to choose a lower temperature, namely $2200$~K.
The time for equilibration at $T_0$ was around 12~ps, which is long
enough for the structure to relax. The samples were subsequently quenched
down to an intermediate temperature $T_1$ using a nominal cooling rate
of $5\times10^{14}$~K$/$s, and then to 300~K using a faster cooling rate
of $2\times10^{15}$~K$/$s. The temperature $T_1$ was 2500~K, 2000~K, and
1500~K for silica, NS5, and NS3, respectively, values that were chosen
such that they are below the glass transition temperature $T_g$ of
the simulated glasses which, due to the fast cooling rates, are above
the experimental $T_g$'s. Finally, the samples were annealed at room
temperature for another 3~ps. All simulations were carried out using
the $NVT$ ensemble. For the calculation and analysis of the observables
of interest, we discarded the first 4~ps from the total length of the
runs at $T_0$, and 0.5~ps at 300~K.
Finally, the samples
were quenched to 0~K and relaxed, after which we calculated the dynamical matrix and
the Born charge tensors in order to compute the vibrational density of
states (VDOS) as well as the imaginary part of the dielectric function
(see Ref.~\cite{pedesseau_first-principles_2015-1} for
details).
The AIMD simulations were performed by using the Vienna \textit{ab initio}
package (VASP)~\cite{kresse_efficiency_1996,kresse_efficient_1996} which
implements the Kohn-Sham (KS) formulation of the density functional theory
(DFT)~\cite{kohn_self-consistent_1965,martin_electronic_2004} to
compute the electronic structure. For the exchange and correlation term,
we used the generalized gradient approximation (GGA) with the PBEsol
functional~\cite{perdew_generalized_1996,perdew_restoring_2008}. The KS
orbitals were expanded in a plane-wave basis at the $\Gamma$ point and the
electron-ion interaction was described within the projector-augmented-wave
formalism~\cite{blochl_projector_1994,kresse_ultrasoft_1999}. The
plane-wave basis set included all components with energies up to 600~eV. For
solving the KS equations, the residual minimization method with direct
inversion in the iterative subspace (RMM-DIIS) was used, and the electronic convergence
criterion was fixed at $ 1\times 10^{-6}$~eV during the glass production
process and at $5\times 10^{-7}$~eV for the geometric optimization
procedure.
The time step for the simulations was 1~fs and temperature was
controlled by a Nos\'e thermostat~\cite{nose_molecular_1984}. We
note that the simulation parameters chosen here are similar
to the ones of previous \textit{ab initio} studies of silicate liquids and
glasses in the bulk~\cite{pedesseau_first-principles_2015,pedesseau_first-principles_2015-1,sundararaman_new_2018,sundararaman_new_2019},
and which have demonstrated that the resulting properties of the liquid
and glass compare very well with experimental results.
\begin{figure*}[t]
\centering
\includegraphics[width=0.3\textwidth]{fig1a-ab-ns0-snapshot-300K1.eps}
\includegraphics[width=0.3\textwidth]{fig1b-ab-ns5-snapshot-300K.eps}
\includegraphics[width=0.3\textwidth]{fig1c-ab-ns3-snapshot-300K.eps}
\caption{Snapshots of the atomic structure of the three glasses
at 300~K. Si, O, and Na atoms are represented by spheres in blue, red, and green,
respectively. The sticks represent Si-O bonds with bond length smaller
than 2~\AA.
}
\label{fig:ab nsx-snapshots-300K}
\end{figure*}
\section{Structure}\label{sec:structure}
In this section we describe how to identify the surface and interior
domains of our sandwich samples. Subsequently we will characterize their atomic
structure in terms of pair and bond angle distribution functions as well
as the concentration of the various species and local environments. These features
will then be discussed with respect to both their compositional dependence
and location in the two sub-domains, i.e. surface and interior.
\begin{figure*}[th]
\centering
\includegraphics[width=0.94\textwidth]{fig2abc-ab-nsx-r2-massdensity.eps}
\includegraphics[width=0.94\textwidth]{fig2def-ab-nsx-r2-numfraction.eps}
\caption{Atomic distribution along the $z$-direction. Panels (a)-(c) show
the mass density profiles for silica, NS5, and NS3. Panels (d)-(f) show the
atomic number fraction along the $z$-direction for silica, NS5, and NS3. In
all graphs, the solid lines with symbols are for the liquids at temperature $T_0$,
see Table~\ref{tab: ab simu-parameters}. The dashed lines are the corresponding quantities
for glasses at 300~K and for clarity are shown only for NS5. The vertical dashed
lines indicate the boundary between the surface and interior layers. }
\label{fig:ab nsx-density-numfrac}
\end{figure*}
\subsection{Defining the surface domain}
Figure~\ref{fig:ab nsx-snapshots-300K} shows snapshots of the simulation
boxes of the glasses for the three compositions at 300~K. One sees
that the slab has a disordered atomic network structure which
becomes increasingly depolymerized with the addition of Na$_2$O. In
Fig.~\ref{fig:ab nsx-density-numfrac} we plot the density and atomic
concentrations for the liquid state ($T=T_0$) as a function of the $z$-coordinate,
i.e.~perpendicular to the surface. (Note that in the $z$-direction
the center of mass of the sample is defined to be at $z=0$.) For all
three compositions the total density distributions show a relatively
flat region for $|z|\leq 6$ \AA, with densities around 2.2 g$/$cm$^3$
(silica), 2.3 g$/$cm$^3$ (NS5) and 2.4 g$/$cm$^3$ (NS3), Fig.~\ref{fig:ab
nsx-density-numfrac}(a)-(c). However, as we will discuss below, although
these values are similar to the bulk densities, reported in
Tab.~\ref{tab: ab simu-parameters}, this similarity does not imply that
the inner region of the sandwich sample presents the same properties as
a real bulk glass.
Also included in Fig.~\ref{fig:ab nsx-density-numfrac}(b) are the
density distributions for the NS5 glass at 300~K (dashed lines). Within
the available statistics we do not find significant differences between
the distributions for the liquid and the ones for the glass, except for
the fact that the latter are slightly narrower due to the shrinking of
the sample during the cooling process, resulting in a density of the
interior part which is slightly higher than that of the liquid. These
observations hold for all three compositions. This fact allows us to use
in the following a simple criterion for defining the different domains for
both liquids and glasses: Atoms having a $z$-coordinate with $ |z|\leq
6$ \AA\ will be defined to belong to the interior part of the sample,
while atoms with a $z$-coordinate beyond this threshold are defined to belong to
the surface layers. A similar strategy for defining surfaces was also
used in previous simulation studies of glass surfaces, see, e.g.,
Refs.~\cite{roder_structure_2001,rarivomanantsoa_classical_2001,ren_surface_2017,halbert2018modelling}.
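This classification can be sketched in a few lines (illustrative code, not the analysis scripts used for the paper; coordinates are assumed to be given in \AA\ relative to the center of mass of the sample):

```python
import numpy as np

# Minimal sketch of the domain criterion described in the text:
# atoms with |z| <= 6 Angstrom belong to the interior of the slab,
# all others to the surface layers. Coordinates are assumed to be
# centered on the sample's center of mass (z = 0).

def split_domains(z_coords, z_cut=6.0):
    """Return boolean masks (interior, surface) for the given z coordinates."""
    z = np.asarray(z_coords, dtype=float)
    interior = np.abs(z) <= z_cut
    return interior, ~interior

# Toy example with fabricated coordinates (Angstrom):
z = [-8.2, -3.1, 0.0, 5.9, 7.4]
interior, surface = split_domains(z)
frac_interior = interior.sum() / len(z)
```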
Figures~\ref{fig:ab nsx-density-numfrac}(d)-(f) depict the profiles
of the atomic number fraction along the $z$-direction. For silica,
we find that the oxygen concentration in the top 2~\AA\ of the
surface layers is enhanced with respect to the interior, falls
slightly below the bulk value at around 3-4~\AA, and, after a small
secondary peak, attains the bulk value, observations that are in
agreement with previous classical and \textit{ab initio} simulations of silica
surfaces~\cite{garofalini_molecular_1983,roder_structure_2001,rimola_silica_2013,halbert2018modelling}.
For the sodium silicate glasses, i.e., NS5 and NS3, the surface layers
are strongly enriched in Na and consequently the fractions of Si and O
decrease. This Na enrichment reaches about a factor of 3 (5) with respect
to the bulk value for NS5 (NS3). For the NS3 surface the
Na fraction in fact reaches 100\%, i.e.~the whole outermost layer is composed
of pure Na. These findings are consistent with experimental observations of
the surfaces of alkali silicate glasses by using LEIS
spectroscopy~\cite{kelso_comparison_1983,almeida_low-energy_2014,almeida_low-energy_2016}
as well as with recent findings from classical molecular
simulations of sodosilicate glasses with reactive force
fields~\cite{mahadevan2020hydration}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{fig3abc-ab-nsx-r2-pdf-sio.eps}
\includegraphics[width=0.95\textwidth]{fig3def-ab-nsx-r2-pdf-sisi.eps}
\caption{Probability distribution function of nearest neighbor distance
for the glasses at 300~K. Upper and lower panels are for Si-O and Si-Si
pairs, respectively. From left to right the compositions are silica,
NS5 and NS3; solid lines are for the surface domains and dashed
lines for the interior ones.
}
\label{fig:ab nsx-gr-sio-sisi}
\end{figure}
\subsection{Bond lengths and angles}
Further insight into the structure of the glass surfaces can be obtained
by investigating the interatomic distances, bond angles and by the
identification of the main local structural motifs. To start, we show in
Fig.~\ref{fig:ab nsx-gr-sio-sisi} the (normalized) probability distribution function
(PDF) of the nearest neighbor distances for the Si-O and Si-Si pairs
calculated for the interior and surface regions. For the Si-O pair,
panels (a)-(c) in Fig.~\ref{fig:ab nsx-gr-sio-sisi}, we clearly see
that the distribution for the surfaces shifts to larger distances with
respect to the interior ones. In addition, the Si-O PDFs are broader
for the surface domains of NS5 and NS3 systems, which reflects the
increase of the network depolymerization and disorder with respect to
the corresponding properties in the interior.
As we will see below, in the surface domain the fraction
of non-bridging oxygens (NBO) is indeed enhanced with respect to the
interior, while the fraction of bridging oxygens (BO) is smaller.
(Note that BO atoms are oxygen atoms bonded to two silicons, while
NBOs are bonded to only one silicon.) For the Si-Si pair, Fig.~\ref{fig:ab
nsx-gr-sio-sisi}(d)-(f), a prominent feature is the peak at around
2.4~\AA, in particular for the surfaces. This length corresponds
to the Si-Si first neighbor distances between two tetrahedra that share
an edge and thus form a two-membered (2M) ring, a structural
defect often found on dehydroxylated and dry surfaces of silica
glass~\cite{morrow1976infrared,michalske1984slow,bunker1989infrared,dubois1993bonding,dubois1993reaction,grabbe1995strained,tielens2019Characterization}.
Hence our results show that such structures are not only present at the
surface of silica but also in the sodo-silicate systems.
\begin{table}[htb]
\small
\center
\begin{tabularx}{13cm}{lYYY}
\hline
Glass & Silica & NS5 & NS3 \\ \hline
Bond & int.~[\AA] / surf.~[\AA] & int.~[\AA] / surf.~[\AA] & int.~[\AA] / surf.~[\AA] \\ \hline
Si-Si & 3.041 / 2.928 & 3.041 / 2.927 & 3.021 / 2.986 \\
csSi-csSi &3.048 / 3.006 &3.041 / 3.059 &3.023 / 3.024 \\
csSi-esSi &3.055 / 3.058 &3.189 / 3.000 &2.956 / 3.108 \\
esSi-esSi &2.443 / 2.483 &2.370 / 2.408 &2.437 / 2.408 \\ \hline
Si-O & 1.638 / 1.650 & 1.648 / 1.651 & 1.645 / 1.651 \\
Si-NBO & - / 1.563 & 1.578 / 1.576 & 1.588 / 1.585 \\
Si-BO & 1.638 / 1.652 & 1.652 / 1.664 & 1.655 / 1.670 \\
esSi-esBO & 1.721 / 1.684 & 1.678 / 1.695 & 1.704 / 1.695 \\ \hline
Na-O & - & 2.423 / 2.362& 2.423 / 2.375 \\
Na-NBO & - & 2.276 / 2.264& 2.347 / 2.315 \\
Na-BO & - & 2.526 / 2.531& 2.545 / 2.494 \\ \hline
\end{tabularx}
\caption{
\label{tab:ab-bond-liquid-glass}
Average bond lengths for the sandwich glass samples at 300~K for both
surface and interior domains.
csSi and esSi denote, respectively, corner-sharing and edge-sharing Si.
}
\end{table}
In Tab.~\ref{tab:ab-bond-liquid-glass}, we report the average first-neighbor
distances for the Si-O, Si-Si and Na-O pairs, as well
as the ones related to 2M-rings. The bond lengths with respect to
oxygens are further decomposed with respect to the two species BO and
NBO. For both domains, we see that the distances Si-NBO and Na-NBO
are significantly shorter than the Si-BO and Na-BO distances, as already pointed
out in simulations for bulk systems and in agreement with experimental
findings~\cite{ispas_structural_2001,tilocca2006structural,angeli_insight_2011,pedesseau_first-principles_2015-1,kilymis_vibrational_2019}.
For the sodo-silicates we find that the average Na-O distances in the
surface domains are shorter than the ones in the interior, a consequence
of the Na enrichment of the surface domains (see Fig.~\ref{fig:ab
nsx-density-numfrac}(e)-(f)) which increases the fraction of
NBOs and thus makes the network less polymerized and less
constrained than the interior part. The same trend has also been
found in a recent study of the surface structure of sodium silicates
glasses using classical MD~\cite{zhang_surfcl_2020}.
In silicate glasses, the Si-Si first neighbor distance is a measure of
the inter-tetrahedral distance between two corner sharing (cs) tetrahedra,
with typical values around $3.00-3.08$~\AA~\cite{wright1994neutron}. This
range is compatible with the values we find in the interior domain of our
three glasses, while for the surface domains this distance is shorter by 2-3\%,
see Tab.~\ref{tab:ab-bond-liquid-glass}. A further decomposition of
the structure into local motifs shows that this reduction in the Si-Si
distance for the surfaces is due to the presence of edge-sharing (es)
tetrahedra forming the 2M rings mentioned above. The 2M rings found in
our samples have tetrahedra that are strained, characterized by short
Si-Si distances, elongated Si-O bonds and reduced Si-O-Si and O-Si-O
angles (see below). As a consequence, the esSi-esSi distance gives rise to an
additional peak seen in Fig.~\ref{fig:ab nsx-gr-sio-sisi}(d)-(f)
located at around 2.4~\AA~(see also Tab.~\ref{tab:ab-bond-liquid-glass}).
Table~\ref{tab:ab-bond-liquid-glass} shows that in our
sandwich samples the esSi-esSi distance is close to 2.4~\AA\ and, within
the accuracy of our data, independent of the composition and of whether the atoms
are located in the surface or in the interior domain, values that
compare well with results obtained from previous classical MD simulations
\cite{garofalini_molecular_1983,feuston_topological_1989,roder_structure_2001,rarivomanantsoa_classical_2001,du_molecular_2005,halbert2018modelling},
showing that this distance is not very dependent on the
potential used for the simulations. The 2M rings found in
our samples are also characterized by Si-O bonds that are
stretched with respect to those in corner-sharing tetrahedra,
and the values reported in Tab.~\ref{tab:ab-bond-liquid-glass}
are in good agreement with those found in classical MD
simulations~\cite{garofalini_molecular_1983,feuston_topological_1989,du_molecular_2005}
as well as in a recent AIMD investigation of dehydroxylated silica
surfaces~\cite{comas2016amorphous}. This elongation of the bond is
also compatible with DFT calculations of crystalline fibrous silica
containing chains of 2M rings~\cite{hamann1997energies} as well
as Hartree-Fock calculations of clusters and molecules containing
2M rings~\cite{okeeffe1984defects,bunker1989infrared} which found
esSi-esBO bond lengths around 1.67~\AA~and esSi-esSi distances between
2.38-2.42~\AA. Finally we mention that the presence of elongated esSi-esBO
bonds together with small Si-O-Si angles as structural fingerprints
of 2M rings is also consistent with the findings from a recent DFT study considering
the surfaces of $\beta$-cristobalite~\cite{le2018structural}. As a consequence we
conclude that i) for silica the geometrical properties of our 2M rings are compatible
with previous results and ii) the geometry of these rings is basically independent
of the environment of the ring.
\begin{table}[htb]
\small
\center
\begin{tabularx}{\textwidth}{lYYYYYY}
\hline
& \multicolumn{2}{c}{Silica} & \multicolumn{2}{c}{NS5} & \multicolumn{2}{c}{NS3} \\ \cline{2-7}
& liquid & glass & liquid & glass & liquid & glass \\
\% & int. / surf. & int. / surf. & int. / surf. & int. / surf. & int. / surf. & int. / surf. \\ \hline
$N_{\rm domain}$ & 65.8 / 34.2 & 66.7 / 33.3 & 58.6 / 41.4 & 60.7 / 39.3 & 58.5 / 41.5 & 62.9 / 37.1 \\ \hline
Si & 33.2 / 33.7 & 33.3 / 33.4 & 29.1 / 26.0 & 29.1 / 25.7 & 25.5 / 24.4 & 25.8 / 23.7 \\
O & 66.8 / 66.3 & 66.7 / 66.6 & 61.9 / 60.0 & 62 / 59.7 & 58.3 / 58.4 & 58.3 / 58.4 \\
Na & 0 / 0 & 0 / 0 & 9.1 / 14.0 & 8.9 / 14.6 & 16.3 / 17.2 & 15.9 / 17.9 \\ \hline
Si$^3$ & 2.3 / 3.7 & 0 / 1.6 & 0.7 / 1.7 & 0 / 0 & 0.1 / 0.8 & 0 / 0 \\
Si$^4$ & 29.6 / 28.8 & 32.9 / 31.8 & 27.2 / 23.6 & 27.5 / 25.7 & 24.3 / 23.3 & 25.4 / 23.7 \\
Si$^5$ & 1.2 / 0.8 & 0.4 / 0 & 1.2 / 0.6 & 1.6 / 0 & 1.1 / 0.3 & 0.4 / 0 \\ \hline
NBO & 2.4 / 4.6 & 0 / 2.3 & 8.1 / 16.7 & 6.6 / 15.6 & 14.3 / 19.7 & 14.9 / 19 \\
BO & 64.4 / 61.7 & 66.7 / 64.3 & 53.8 / 43.3 & 55.4 / 44.1 & 43.9 / 38.7 & 43.5 / 39.3 \\ \hline
esBO & 4.9 / 11.1 & 1.4 / 12.8 & 3.4 / 7.5 & 1.6 / 9.8 & 1.8 / 4 & 0.8 / 4.1 \\
esSi & 4.6 / 11.3 & 1.6 / 12.5 & 3.3 / 7.1 & 1.2 / 10.5 & 1.6 / 4.1 & 0.4 / 4.8 \\ \hline
\end{tabularx}
\caption{
\label{tab:ab struc-liquid-glass}
Percentages of various atomic species in the interior and surface domains
for the silica and sodo-silicate samples. Liquids correspond to simulation
at $T_0$ (see Table~\ref{tab: ab simu-parameters}), and glasses are at
300~K. On the first row, $N_{\rm domain}$ denotes the percentage of atoms
in a specific domain with respect to the total number of atoms of the
sample. The proportions of the atomic species are given relative to their
concentration in the considered domain. Note that, for the surface
domain, we give the sum of the amounts on the top and bottom surface layer.
}
\end{table}
In order to characterize the structure of our sandwich systems in a more
quantitative manner we have determined the fractions of various atomic
species present in the interior and surface domains, and the data are
summarized in Table~\ref{tab:ab struc-liquid-glass}. One recognizes
that for the sodo-silicate systems the surface domains are enriched in sodium,
in agreement with the atomic distribution along the
$z$-axis shown in Figs.~\ref{fig:ab nsx-density-numfrac}(d)-(f). For NS5
this enrichment is about 50\% while for NS3 it is still around 10\%,
independent of whether one considers the liquid or the glass state.
Furthermore, we have decomposed in both domains the concentration
of the silicon and oxygen atoms with respect to their coordination
numbers. (For this we used a cutoff distance of 2.0~\AA~to define bonded pairs.) We find
that most Si atoms are 4-fold coordinated but also note the
presence of under-and over-coordinated atoms, Si$^3$ and Si$^5$,
respectively. For the systems with sodium we see that in the liquid
state there is a small concentration of 5-fold coordinated Si for
both surface and interior domains but that during the quench these
defects disappear in the surface domains, while a very small number
are still present in the interior domains, possibly as a consequence of
the high quench rate~\cite{pedesseau_first-principles_2015}. For the
3-fold Si we note that in the liquid state they are more concentrated
in the surface layers than in the interiors and that in the glassy state
they are absent for the sodium silicate systems.
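The bookkeeping behind this decomposition, i.e.~counting Si neighbors of each oxygen within the 2.0~\AA\ cutoff and labelling the oxygen as BO or NBO accordingly, can be sketched as follows (illustrative code with a toy geometry, not the authors' analysis scripts; periodic boundary conditions are ignored for brevity):

```python
import numpy as np

# Hedged sketch of the species classification used in the text: a Si-O pair
# counts as bonded if its distance is below the 2.0 Angstrom cutoff; an oxygen
# bonded to two silicons is a bridging oxygen (BO), one bonded to exactly one
# silicon is a non-bridging oxygen (NBO). Periodic boundaries are ignored.

def classify_oxygens(si_pos, o_pos, cutoff=2.0):
    si = np.asarray(si_pos, dtype=float)
    o = np.asarray(o_pos, dtype=float)
    # distance matrix between every O (rows) and every Si (columns)
    d = np.linalg.norm(o[:, None, :] - si[None, :, :], axis=-1)
    n_si_neighbors = (d < cutoff).sum(axis=1)   # Si coordination of each O
    return np.where(n_si_neighbors == 2, "BO",
           np.where(n_si_neighbors == 1, "NBO", "other"))

# Toy geometry: one O bridging two Si, one O attached to a single Si
si_pos = [[0.0, 0.0, 0.0], [3.2, 0.0, 0.0]]
o_pos = [[1.6, 0.0, 0.0], [5.0, 0.0, 0.0]]
labels = classify_oxygens(si_pos, o_pos)
```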
From the data reported in Tab.~\ref{tab:ab struc-liquid-glass} one also recognizes that the surface domains have a
significantly higher concentration of NBOs than the interior. With
increasing Na$_2$O content, the concentration of surface NBO increases,
and this may account for the reduction of both under- and over-coordinated
silicons since with increasing Na content more NBO are formed, allowing
the Si atoms to gain or shed O atoms and hence to form a more regular local
environment, i.e.~to become four-fold coordinated.
The last two rows in Tab.~\ref{tab:ab struc-liquid-glass} give the percentages of the silicon and
oxygen atoms that form 2M rings (labelled esSi and esBO respectively). As
expected, these rings are more abundant on the surface than in the
interior, by a factor of around 2 in the liquid state and a factor of 5-8
in the glass samples. This temperature dependence is mainly due to the
$T$-dependence of the concentration of the 2M rings in the interior since
on the surface this concentration is basically independent of $T$. This indicates
that these structures are energetically very
unfavorable in the bulk while they are an energetically reasonable
building block in the presence of a free surface.
From the numbers of 2M rings we can calculate their area density by
dividing this number by the surface area. For silica, we find a density of
1.5/nm$^2$, which has to be compared with the estimates obtained
from IR experiments, which give values of 0.2 to 0.4/nm$^2$.
(As discussed
below, 2M rings have a spectroscopic signature in the absorption IR
spectrum, with two peaks at 888 and 908~cm$^{-1}$ and a weak shoulder at
932~cm$^{-1}$~\cite{morrow1976infrared,bunker1989infrared,ferrari1995reactions,grabbe1995strained,chiang1993first}.)
Also previous simulation studies on dry silica
have reported smaller densities of
2M rings, and this might be rationalized by the fact that most of
these simulations were carried out using classical MD approaches, thus
allowing for quench rates that are considerably lower than the one used
in the present study~\cite{rarivomanantsoa_classical_2001,halbert2018modelling}. To our knowledge, the models of silica surfaces labelled
as \textit{ab initio} in the literature were initially prepared by melting
and quenching liquid silica using effective, i.e.~classical, potentials
and the obtained structure was processed within a first-principles
framework only at 300~K (see for example Ref.~\cite{rimola_silica_2013}
and references therein). The present samples are hence the first ones
generated by the quench of a liquid surface within an AIMD approach,
admittedly with a very high quench rate which prevents the annealing
and relaxation of the glass surface.
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\textwidth]{fig4abc-ab-nsx-r2-bad-osio.eps}
\includegraphics[width=0.95\textwidth]{fig4def-ab-nsx-r2-bad-siosi.eps}
\caption{Bond angle distribution. Upper and lower panels are for O-Si-O
and Si-O-Si angles, respectively. From left to right the compositions are
silica, NS5, and NS3.
}
\label{fig:ab nsx-bad-osio-siosi}
\end{figure}
More insight into the structural differences between the surface
and interior domains and the compositional effect can be obtained by
computing the bond angle distributions (BAD) shown in Fig.~\ref{fig:ab
nsx-bad-osio-siosi} for the glass samples.
The main peak in the BAD for O-Si-O is located at around
109\degree, the expected angle for a perfect tetrahedron, panels
(a)-(c). One also notices that the distribution of the surface O-Si-O
angles is slightly wider than the interior one, which indicates that the
[SiO$_n$] units on the surface are more distorted than the interior
ones. For pure silica, we see that the main peak of the Si-O-Si BAD
($>100^\circ$), panel (d), is narrower and more asymmetric for the
surface domain. The peak at around 130$^\circ$ is similar to the one
found in the NS5 and NS3 systems, panels (e) and (f), i.e.~the glasses that
are more depolymerized.
For the NS5 system, panel (e), the mentioned asymmetry is still present but
less pronounced, while for the Na-rich glass it has basically disappeared
due to the presence of Na atoms. For the interior domains, the main
peak becomes sharper and shifts to smaller angle with the addition of
Na$_2$O, a trend pointed out also in a previous \textit{ab initio} study
of bulk sodium silicate glasses~\cite{kilymis_vibrational_2019}.
For both the O-Si-O and Si-O-Si BADs, we observe a peak at
around 90\degree, which is due to the 2M rings. This peak is
more pronounced for the surfaces, which is consistent with the
structural data discussed above, i.e.~the presence of small
Si-Si distances and a significant fraction of the esSi and
esBO in the surface domains. The location of these peaks
is in qualitative agreement with earlier MD simulations with classical
potentials~\cite{garofalini_molecular_1983,feuston_topological_1989,rarivomanantsoa_classical_2001,du_molecular_2005,halbert2018modelling};
also the aforementioned DFT~\cite{hamann1997energies} and molecular orbital
calculations~\cite{okeeffe1984defects,bunker1989infrared} obtained
optimized structures of edge-sharing tetrahedra with similarly highly
distorted Si-O-Si angles around $90\degree$.
In order to visualize some of the above mentioned structural features
of the surfaces and their compositional differences, we show in
Fig.~\ref{fig:ab nsx-surf-snapshot} snapshots corresponding to the outermost
atoms of the surface domains of silica and NS3 glasses. In
order to select the atoms shown in Fig.~\ref{fig:ab nsx-surf-snapshot},
we have first identified the Si atoms belonging to the surface using
the tetrahedralization-based method proposed by Edelsbrunner and
M\"{u}cke~\cite{edelsbrunner_three-dimensional_1994}. The probing sphere
radius used for this algorithm was chosen as 3.2~\AA, i.e., around the Si-Si
nearest neighbor distance (see Refs.~\cite{zhang_fracture_2020,zhang_surfcl_2020}
for details). The first nearest O and Na neighbors of these surface
silicons have then been found and included in the snapshots, whereas
the other atoms have been removed for the sake of clarity. For silica,
Fig.~\ref{fig:ab nsx-surf-snapshot}(a), 2- to 9-M rings are found on the
surface. With the addition of sodium, the network of the atomic surface
layer of the NS3 glass, Fig.~\ref{fig:ab nsx-surf-snapshot}(b), becomes
less connected, and the proportion of 2M-rings is decreased: While for
silica we observe five 2M-rings, only two are found for NS3. As argued
above, this difference in concentration is likely due to the fact that
2M-rings are strongly strained, with smaller Si-O-Si bond angles and
longer Si-O bond lengths with respect to the typical values (see bond
color code in Fig.~\ref{fig:ab nsx-surf-snapshot}). The presence of Na
can effectively relieve surface tension by breaking some of these Si-O
bonds (notably in small rings) and thus make the surface energetically
more stable.
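A much-simplified stand-in for the surface-identification step can convey the idea (this is NOT the Edelsbrunner-M\"{u}cke alpha-shape construction actually used; it merely keeps, per lateral grid cell, the topmost Si atom, and all names and cutoffs are illustrative):

```python
import numpy as np

# Simplified stand-in for surface-Si selection (NOT the alpha-shape method
# of Edelsbrunner and Muecke used in the text): bin Si atoms on an (x, y)
# grid with a cell size of the order of the Si-Si distance and keep the
# topmost Si in each cell. The returned indices point into si_xyz.

def topmost_si(si_xyz, cell=3.2):
    si = np.asarray(si_xyz, dtype=float)
    ij = np.floor(si[:, :2] / cell).astype(int)   # lateral cell indices
    best = {}
    for idx, (key, z) in enumerate(zip(map(tuple, ij), si[:, 2])):
        if key not in best or z > si[best[key], 2]:
            best[key] = idx                       # keep highest z per cell
    return sorted(best.values())

# Toy example: two Si in the same cell (the higher one wins), one elsewhere
si_xyz = [[0.5, 0.5, 7.0], [0.6, 0.4, 9.0], [4.0, 0.5, 8.0]]
surf = topmost_si(si_xyz)
```

In practice one would afterwards collect the O and Na first neighbors of the selected silicons, as done for the snapshots in the figure.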
\begin{figure}[htb]
\centering
\includegraphics[width=0.98\textwidth]{fig5-ab-surf-top-view.eps}
\caption{Top view of snapshots showing structural motifs at the surfaces
of (a) silica and (b) NS3 glasses. Only the outermost layer of silicon
atoms and their nearest neighbors O and Na atoms are shown (see the text
for the construction method). Two-membered rings are highlighted with
a circle. Only Si-O bonds are shown and a color code is used taking into
account their length, given in \AA. The visualization was realized using Ovito~\cite{stukowski_visualization_2010}.
}
\label{fig:ab nsx-surf-snapshot}
\end{figure}
\section{Vibrational properties} \label{sec:vibrations}
\subsection{Vibrational density of states}
In this subsection we will discuss the vibrational properties of our
systems in terms of the total as well as partial vibrational density of
states (VDOS).
After having relaxed each sample to 0~K, we have determined and
diagonalized its dynamical matrix, from which one can obtain the
total VDOS as
\begin{equation}
g(\omega)=\frac{1}{3N-3}\sum_{p=4}^{3N}\delta(\omega-\omega_p),
\label{eq1}
\end{equation}
\noindent
where $N$ is the total number of atoms in the sample, $\omega$ is the
frequency and $\omega_p$ is one of the $3N$ eigenfrequencies of the dynamical
matrix. This total VDOS can be decomposed further into the contributions
from different species, allowing one to define the partial VDOS
\begin{equation}
g_\alpha(\omega)=\frac{1}{3N-3}\sum_{p=4}^{3N}\sum_{I=1}^{N_\alpha}
\sum_{k=1}^{3}|{\bf e}_{I,k}(\omega_p)|^2\delta(\omega-\omega_p) \quad .
\label{eq2}
\end{equation}
\noindent
Here $\alpha\in \{ \mathrm{Si, O, Na, BO, NBO, csSi, esSi, csBO,
esBO}\}$, $N_\alpha$ is the number of particles of type $\alpha$, and
${\bf e}_{I,k}(\omega_p)$ is the part of the $3N$-component eigenvector
${\bf e}(\omega_p)$ that contains the three components of the particle
$I$. (Note that in Eqs.~(\ref{eq1}) and (\ref{eq2}) we do not consider the three trivial
translational modes of the system.) All vibrational spectra that will
be discussed in the following have been obtained by convoluting the
discrete distribution given by Eq.~(\ref{eq2}) with a Gaussian function
with a full width at half maximum of 30~cm$^{-1}$ and averaged over two
independent samples.
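The pipeline of Eqs.~(\ref{eq1}) and (\ref{eq2}) plus the Gaussian broadening can be sketched as follows (an illustration under simplifying assumptions, not the production code: the dynamical matrix is taken as already mass-weighted, units are arbitrary, and the three translational modes are simply the lowest three):

```python
import numpy as np

# Schematic implementation of Eqs. (1)-(2): diagonalize the dynamical matrix,
# discard the three translational modes, and replace each delta function by a
# Gaussian of given full width at half maximum (30 cm^-1 in the text).
# An optional per-component weight vector (|e_{I,k}|^2 restricted to one
# species) turns the total VDOS into a partial one.

def vdos(dyn_matrix, omega_grid, fwhm=30.0, weights=None):
    evals, evecs = np.linalg.eigh(dyn_matrix)
    # frequencies; clip tiny negative eigenvalues of the translational modes
    omega_p = np.sqrt(np.clip(evals, 0.0, None))[3:]   # drop 3 trivial modes
    n_modes = len(omega_p)                             # = 3N - 3
    if weights is None:
        w = np.ones(n_modes)
    else:
        # per-mode weight: sum over selected components of |e_{I,k}|^2
        w = (np.abs(evecs[:, 3:])**2 * np.asarray(weights)[:, None]).sum(axis=0)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> std dev
    g = np.zeros_like(omega_grid, dtype=float)
    for wp, wt in zip(omega_p, w):
        g += wt * np.exp(-0.5 * ((omega_grid - wp) / sigma)**2)
    return g / (np.sqrt(2.0 * np.pi) * sigma * n_modes)  # 1/(3N-3) norm
```

With unit weights the broadened total VDOS integrates to one, matching the normalization stated for Fig.~\ref{fig:ab nsx-VDOS-tot-partial}(a).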
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{fig6-ab-vdos-nsx-tot-partial_new.eps}
\caption{(a) Total vibrational density of states (VDOS) of the
two sandwich glasses at 0~K for the interior and the surface domains.
Panels (b)-(d) show the partial
VDOS for the Si, O and Na atoms, respectively. Also included in (a)
is the total VDOS for a bulk silica glass from \textit{ab initio}
calculations~\cite{sundararaman_new_2018}. The total VDOS in panel (a)
is normalized to unity and equals the sum of the partial VDOS
depicted in (b)-(d).
}
\label{fig:ab nsx-VDOS-tot-partial}
\end{figure*}
The total VDOS of the silica and NS3 systems are shown in Fig.~\ref{fig:ab
nsx-VDOS-tot-partial}(a), alongside the partial contributions
of their constituent atoms $\alpha \in \{{\mathrm {Si,\, O,\,
Na} }\}$, Fig.~\ref{fig:ab nsx-VDOS-tot-partial}(b)-(d). We
recognize that each of these distributions has three main bands:
A low-frequency band with $\omega<500$~cm$^{-1}$, a mid-frequency
band with $500<\omega<900$~cm$^{-1}$, and a high-frequency band
with $\omega>900$~cm$^{-1}$. In order to recognize the influence
of the surface on the spectra, we have included in Fig.~\ref{fig:ab
nsx-VDOS-tot-partial}(a) also the total VDOS of a bulk silica glass sample
which was obtained from \textit{ab initio} calculations within a
framework that was similar to the one used in the present
work~\cite{sundararaman_new_2018}. Due to the presence of the surfaces,
all the sharp peaks observed in the bulk glass at around 400,
800 and 1000~cm$^{-1}$ are significantly smeared out and the high
frequency band is shifted to somewhat lower frequencies, so that
the gap between the mid and high frequency bands is partially filled
up. In addition one recognizes that due to the surface the double
peak structure of the high frequency band is completely washed out.
With the addition of Na$_2$O, the height of the peaks at 400 and
800~cm$^{-1}$ decreases further while the high frequency band is not
modified in a significant manner, although it does shift to lower $\omega$.
As it has been shown before, this softening is due to the depolymerization of
the network which increases the contribution from NBO-related
motions~\cite{zotov_calculation_1999,kilymis_vibrational_2019}.
Finally we note that the shape of the low frequency band
changes strongly in that a new peak at around 150~cm$^{-1}$
starts to grow with increasing Na concentration, a feature that is also
seen in spectra of bulk sodo-silicate glasses~\cite{kilymis_vibrational_2019}.
A better understanding of these changes can be obtained by
inspecting the partial VDOS, presented in Fig.~\ref{fig:ab
nsx-VDOS-tot-partial}(b)-(d). (Note that the sum of the
three partials gives the total VDOS shown in Fig.~\ref{fig:ab
nsx-VDOS-tot-partial}(a).) It is clearly seen from Fig.~\ref{fig:ab
nsx-VDOS-tot-partial}(b) that the band at around $800$~cm$^{-1}$ is
related to the vibrational motion of Si, in agreement with earlier
studies which have shown that the peak is related to the complex motion
of Si against BO~\cite{zotov_calculation_1999}. The decrease of the peak
height with the addition of Na can thus be expected to be related to the
(partial) breaking up of the network, i.e.~the decreasing number of BO.
Figure~\ref{fig:ab nsx-VDOS-tot-partial}(c) shows that oxygen is the
dominant contributor to the spectrum in the low frequency band and that
also in the high frequency band its partial VDOS is larger than the one
of Si.
\begin{figure*}[h]
\centering
\includegraphics[width=0.95\textwidth]{fig7-ab-vdos-nsx-cs-es.eps}
\caption{Per-atom VDOS of the surface atoms. Panels (a) and (b) are for corner-sharing and edge-sharing Si atoms, respectively. Panels (c) and (d) are for corner-sharing and edge-sharing BO, respectively. All curves are normalized to unity.}
\label{fig:ab nsx-VDOS-es-cs}
\end{figure*}
The experimental IR spectra for silica surfaces show two strong
peaks at 888 and 908~cm$^{-1}$ and a shoulder at 932~cm$^{-1}$
~\cite{morrow1976infrared,bunker1989infrared,chiang1993first,grabbe1995strained,ferrari1995reactions}.
These features have been related to the presence of 2M rings, an assignment
which is supported by electronic structure calculations for small
terminated 2M ring clusters~\cite{bromley2003tworing} as well as of
dehydroxylated silica surface~\cite{ceresoli_two-membered_2000}. In
order to identify the vibrational signal of 2M rings in our sandwich
samples, we have decomposed the partial VDOS of surface BO and Si into
contributions from edge-sharing and corner-sharing atoms, Fig.~\ref{fig:ab
nsx-VDOS-es-cs}. (We mention that to a first approximation the VDOS
of the corner-sharing atoms, Fig.~\ref{fig:ab nsx-VDOS-es-cs}(a) and
(c), are the same as the spectra for the bulk. In reality, however,
the presence of the surface gives rise to a slight modification of the
spectra.) Panels~\ref{fig:ab nsx-VDOS-es-cs}(b) and (d) clearly show that
esSi as well as esBO have a strong signal between 800 and 900~cm$^{-1}$,
a frequency range in which the spectra for csSi and csBO have low
intensity. The main peak in this range is at around 850~cm$^{-1}$, i.e.,~a
frequency which is somewhat lower than the experimental window which
ranges from 888 to 932~cm$^{-1}$, but a value that agrees well
with previous DFT calculations~\cite{ceresoli_two-membered_2000}.
In addition to the vibrational features discussed above, we note in
the partial VDOS for the esBO a further signature of the 2M rings in
the form of a pronounced peak at around 700~cm$^{-1}$, Fig.~\ref{fig:ab
nsx-VDOS-es-cs}(d). This peak is completely absent in the spectra for the
csBO and its position shifts to lower frequencies with increasing Na$_2$O
concentration. At this frequency also the partial VDOS of the csSi shows a peak, but its intensity is not very high.
To the best of our knowledge, the existence of these peaks
in the vibrational spectra of 2M rings has not been reported before, and
at present we do not know which type of motion they correspond to.
Due to the presence of the surface one can expect that the vibrational
modes are no longer isotropic and that hence also the VDOS will become
anisotropic. That for the case of NS3 this is indeed the case is
demonstrated in Fig.~\ref{fig:ab nsx-VDOS-ns3-xyz} where we present the
partial VDOS as obtained for the three different directions: $x$ and $y$
parallel to the surface and $z$ orthogonal to it. We see that the curves
for the $x$ and $y$ directions coincide with high accuracy, indicating
that the error bars are small. The spectrum for the $z$-direction shows
significant deviations from the two other curves, see arrows, notably
at around $100$~cm$^{-1}$, i.e.~the peak that is directly related
to the vibrational motion of the Na atoms. Panel (c) shows that the
vibrations in the $z$ direction are a bit softer than in the two other
directions (the peak is shifted to lower frequencies), a result that is
reasonable since the Na atoms are less constrained in the $z$
direction. The anisotropy can also be seen in the high frequency band
in that the intensity of the spectrum in the $z$ direction for Si and O
is lower than the one in the orthogonal directions. This result can be
rationalized by the fact that close to the surface the Si-O network is
more anisotropic since, as shown in Fig.~\ref{fig:ab nsx-density-numfrac},
there is a layering effect in the composition.
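The direction-resolved partial VDOS underlying this decomposition can be written in the standard harmonic form (the notation here is ours; in practice the $\delta$-functions are broadened by Gaussians):
\begin{equation}
g_\alpha^{\beta}(\omega)=\frac{1}{N_\alpha}\sum_{j}\sum_{i\in\alpha}
\big|e_j^{\beta}(i)\big|^{2}\,\delta(\omega-\omega_j),
\qquad \beta\in\{x,y,z\},
\end{equation}
where the inner sum runs over the $N_\alpha$ atoms of species $\alpha$ and $e_j^{\beta}(i)$ is the $\beta$-component of the eigenvector of mode $j$ at atom $i$.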
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{fig8-ab-vdos-ns3-si-o-na-layers-xyz_avg.eps}
\caption{Decomposition of the partial VDOS of NS3 into contributions from
different directions. Panels (a)-(c) are for Si, O
and Na, respectively. The arrows indicate the locations at which
the spectra depend significantly on the direction. All curves
are normalized to unity.\\
}
\label{fig:ab nsx-VDOS-ns3-xyz}
\end{figure*}
\subsection{Infrared response}
In the previous subsection we have discussed the vibrational features
of our sandwich systems, focusing on the frequency and composition
dependences of the partial and total VDOS. In order to make a direct
connection to experimental data, it is useful to compute the IR response
of the samples. This quantity can be obtained directly from the frequency
dependence of the dielectric function $\epsilon(\omega)$ which can be
calculated from the vibrational eigenmodes and the Born effective charges
of the atoms. (The details of the method and the relevant relations
are documented in Ref.~\cite{pedesseau_first-principles_2015-1}).
In Fig.~\ref{fig:ir-spectra} we present $\epsilon_2(\omega)$, the
imaginary part of the dielectric function, for our three systems,
for bulk silica as well as the experimental spectrum from
Ref.~\cite{Philipp1998}. Since $\epsilon_2(\omega)$
has an \(\omega\)-dependence which is very similar to the one of the IR absorption,
see Ref.~\cite{pedesseau_first-principles_2015-1}, we
present here the former quantity. We also recall that the
IR experimental studies exhibiting the well-defined
frequency window between 888 and 932~cm$ ^{-1} $ assigned to 2M
rings~\cite{morrow1976infrared,bunker1989infrared,chiang1993first,ferrari1995reactions,grabbe1995strained}
are absorption spectra, thus motivating this choice.
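For completeness, we recall the standard harmonic expression that links $\epsilon(\omega)$ to the eigenmodes and the Born effective charges (the notation here is generic; the precise form we use is the one of Ref.~\cite{pedesseau_first-principles_2015-1}):
\begin{equation}
\epsilon_{\alpha\beta}(\omega)=\epsilon^{\infty}_{\alpha\beta}
+\frac{4\pi}{V}\sum_{m}\frac{S_{m,\alpha}\,S_{m,\beta}}{\omega_m^{2}-\omega^{2}},
\qquad
S_{m,\alpha}=\sum_{i}\sum_{\gamma}\frac{Z^{*}_{i,\alpha\gamma}\,e_{m}(i,\gamma)}{\sqrt{M_i}},
\end{equation}
where $\omega_m$ and $e_m$ are the frequency and eigenvector of mode $m$, $Z^{*}_{i}$ and $M_i$ are the Born effective charge tensor and the mass of atom $i$, and $V$ is the cell volume; $\epsilon_2(\omega)$ follows from the imaginary part once a small damping $\omega_m^{2}-\omega^{2}\to\omega_m^{2}-\omega^{2}-i\gamma\omega$ is introduced.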
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{fig9-eps2-sandwich-bulk-SI-v4.eps}
\caption{Imaginary part of the dielectric function $\epsilon_2 (\omega)$
for bulk silica as well as for the three sandwich systems. Panel (a) shows
the calculated spectra of $\epsilon_2 (\omega)$ for bulk (black dotted
line) and sandwich sample (black line), and the experimental spectrum
(red dashed line) for bulk amorphous silica~\cite{Philipp1998}. Panel
(b) shows the calculated $\epsilon_2 (\omega)$ for the silica, NS5
and NS3 sandwich systems, black, blue and green full line, respectively.
}
\label{fig:ir-spectra}
\end{figure*}
Comparing in Fig.~\ref{fig:ir-spectra}(a) the theoretical spectra for the
bulk with the experimental data, we see that the simulation reproduces
correctly the three main resonances, although the peak positions are
down-shifted by about 25-30~cm$^{-1}$ and the height of the peak at
$\approx$430~cm$^{-1}$ is lower. These discrepancies might be due to the
small size of our system or related to the fact that the DFT functional
we have used in the present simulations is known to produce frequencies
that are about 5\% too low~\cite{delapierre2016vibrational}. Overall,
however, the agreement between the calculated $\epsilon_2(\omega)$ and
the experimental data is good, and on this basis we can proceed to
understand the evolution of the IR response due to the presence of the
surface as well as to the composition.
Also included in panel~(a) is the spectrum for the silica sandwich
sample. We see that with respect to the corresponding bulk data the curve
is shifted to lower frequencies by about 30~cm$^{-1}$ and that the three
main peaks have become broader. These changes can be explained by the fact
that the defective structures present in the sandwich samples (NBO, 2M
rings) induce distortion of the glass network and this increased disorder
leads to a softening of the vibrations and broadening of the peaks. This
modification is most pronounced for the band at $\approx$780~cm$ ^{-1}
$, which corresponds to the symmetric stretching of the SiOSi bridges.
This band not only becomes broader but also asymmetric, with a new
peak located close to 850~cm$ ^{-1} $, a frequency which coincides with
the one of the characteristic peaks of esBO and esSi discussed in the
context of Fig.~\ref{fig:ab nsx-VDOS-es-cs}. Thus we can conclude that
the IR spectra can indeed reveal the presence of 2M rings in the sample. However, we also note that at \(\omega \approx 700\)~cm$^{-1}$ we find no marked peak in \(\epsilon_2\), i.e.~the peak we find at this frequency in the VDOS (see Fig.~\ref{fig:ab nsx-VDOS-es-cs}(d)) seems not to be IR active.
In order to understand the dependence of the spectrum on the composition
we present in Fig.~\ref{fig:ir-spectra}(b) the calculated imaginary part
of the dielectric function for the three sandwich samples. Firstly we
notice for the NS5 and NS3 glasses the presence of a broad band below
300~cm$^{-1}$, with an intensity that grows with the concentration
of Na. This trend is in agreement with experimental IR studies for bulk
glasses~\cite{merzbacher1988structure,kapoutsis1994alkali,ingram2000origins}
and a comparison with the VDOS from Fig.~\ref{fig:ab nsx-VDOS-tot-partial}
shows that this band is indeed directly related to the vibrational
motion of the sodium atoms. In contrast to this the pronounced peak at
around 400~cm$^{-1}$ depends only weakly on the concentration of sodium,
a result due to the fact that the rocking motions of SiOSi bridges, which are IR-active modes, are not much affected by the presence of Na~\cite{kilymis_vibrational_2019}.
A stronger dependence on the Na concentration is observed for
the band from 700 to 900~cm$ ^{-1} $ in that it shifts significantly
to lower frequencies, becomes more intense, and slightly broader. The
softening of this spectral region with the addition of Na has also been
seen in experimental IR spectra for bulk glasses and attributed to
the increasing depolymerization of the network, in agreement with our
observations for our sandwich samples (see Sec.~\ref{sec:structure}).
Regarding the 2M rings we recall that their concentration decreases with
increasing Na content, accompanied by a decreasing signal in the VDOS at
$\approx$850~cm$ ^{-1} $, see Fig.~\ref{fig:ab nsx-VDOS-es-cs}. Panel~(b)
shows that at this frequency the systems with sodium do not show any sign
of a peak, i.e.~for such glasses IR spectroscopy experiments cannot be
expected to detect the presence of 2M rings in this frequency range.
Finally we mention that in the high-frequency region the addition of
Na leads to a broadening of the band and a shift of the peak to lower
frequencies. These modifications are the signature of the increasing
number of NBOs, and they are consistent with the changes reported in
experimental
works~\cite{merzbacher1988structure,kapoutsis1994alkali,ingram2000origins}.
\section{Electronic properties}\label{sec:electronic}
In this section we present the electronic properties of our samples,
i.e.~the electronic density of states (eDOS), Bader charges, and the
electron localization function (ELF). The presence of a surface in
combination with the addition of Na changes these properties
significantly with respect to the ones of bulk silica, and we will discuss
these modifications in connection with defective structures such as
2M rings or NBO.
\subsection{Electronic density of states}
The eDOS, $D(E)$, can be obtained directly from the Kohn-Sham
energies calculated for the structure relaxed at $T=0$~K, see
Ref.~\cite{pedesseau_first-principles_2015-1} for details.
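In practice, $D(E)$ is the broadened sum over the Kohn-Sham eigenvalues $\epsilon_i$ (the Gaussian width $\sigma$ is a numerical parameter):
\begin{equation}
D(E)=\sum_{i}\delta(E-\epsilon_i)\;\approx\;
\sum_{i}\frac{1}{\sqrt{2\pi}\,\sigma}
\exp\!\left[-\frac{(E-\epsilon_i)^{2}}{2\sigma^{2}}\right].
\end{equation}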
Figure~\ref{fig:ab nsx-eDOS-layers} shows the eDOS for the interior
and surface domains of the studied compositions. For the sake of
comparison, we include in panel (a) also the data for a bulk silica
glass (dashed line), computed using the same structural model as the VDOS
discussed in the previous section~\cite{SI-commpriv}. For this bulk
system we recognize features that have been documented in the literature
before~\cite{sarnthein_model_1995,benoit_model_2000}: (i) The states
at around $-20$~eV are O~$2s$ states; (ii) The states from $-10$ to $-4$~eV
are bonding states between Si~$sp^3$ hybrids and (mainly) O~$2p$
orbitals; (iii) The states above $-4$~eV up to the Fermi level ($E=0$~eV)
are O~$2p$ nonbonding orbitals. The estimated band gap is found to
be around 5~eV, in good agreement with previous \textit{ab initio}
calculations~\cite{sarnthein_model_1995,benoit_model_2000,du_structure_2006}.
Here we mention that in general DFT calculations underestimate
the experimental band gap of materials and our result confirms
this flaw since the experimental value of the gap for silica is
9~eV~\cite{himpsel_inverse_1986,grunthaner_chemical_1986}.
For the interior layer of the silica sandwich, one recognizes from
Fig.~\ref{fig:ab nsx-eDOS-layers}(a) that its eDOS is very similar to
the one of the bulk model. The main difference is that some of the peaks
are less sharp and that the main bands are shifted by around 1~eV to
higher energies. These results might be attributed to the protocol used
to prepare the samples (sandwich geometry, quench rate). (Glasses produced
with a lower cooling rate are likely to be in a lower energy state.) No
shift is found for the high-energy band, with the consequence that the band
gap for the sandwich geometry is reduced to 4.1~eV.
The eDOS for the interior layers of the NS5 and NS3 sandwich samples,
presented in Fig.~\ref{fig:ab nsx-eDOS-layers}(b) and (c), are quite
similar to the one for silica. Certain features do, however, depend on the
composition: 1) The eDOS shifts to lower energies when Na is added. 2)
The splitting between O~$sp-$Si~$sp^3$ bonding and anti-bonding states is
washed out. 3) The lowest energy band has a new peak at around $-17$~eV,
and its intensity grows if the Na content increases. Below we will see
that this peak is related to the electronic states of NBO and we will also
discuss the connection of other features with structural properties.
Note that the shift in the energy scales implies that the band gaps shrink
with respect to the value for silica: We find 2.9~eV for NS5 and
2.7~eV for NS3. These two values are also compatible with the
calculated band gap (2.8~eV) for sodium tetrasilicate glass (i.e.,
20~mol-\% of Na$_2$O)~\cite{ispas_structural_2001}.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig10-ab-edos-nsx-layers.eps}
\caption{Electronic density of states of the sandwich glasses at
0~K. Panels (a), (b) and (c) are for silica, NS5, and NS3, respectively. The
eDOS of the sandwich glasses are decomposed with respect to the surface
(surf.) and interior (int.) layers. Panel (a) shows also the eDOS for
bulk silica. All distributions are normalized with respect to
the number of atoms. The Fermi level energy $E_f$ is at 0~eV. \\
}
\label{fig:ab nsx-eDOS-layers}
\end{figure}
The eDOS for the surface layers do not differ strongly from their
counterpart for the interior layer. The distributions at negative energies
are shifted to slightly higher energies, by about 1~eV, an effect that is
likely related to the defective structures on the surface. In addition
we find that the height of the peaks is modified, notably the ones at
the lowest energies, i.e.~the O 2$s$ states, a result that is reasonable
since in the outermost layer the structure of the oxygen is quite different
from the one inside the bulk (see Fig.~\ref{fig:ab nsx-density-numfrac}).
Finally we mention that for the case of silica the splitting between
O~$sp-$Si~$sp^3$ bonding and anti-bonding states has vanished for the
surface layer, i.e.~for these energies the eDOS is now very similar to the
one of the systems with sodium, an effect that is likely related to the increased
structural disorder.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{fig11-ab-edos-nsx-surf-partial.eps}
\caption{Decomposition of the surface eDOS of the silica and NS3
glasses. Left panels: Silica. Right panels: NS3. The Fermi level energy
$E_f$ is at 0~eV. (a) and (d): Decomposition with respect to atomic
species, i.e., Si, O and Na. (b) and (e): Decomposition of O into NBO,
csO and esO. (c) and (f): Decomposition of Si into csSi and esSi.
The eDOS are normalized with respect to the number of atoms.
}
\label{fig:ab nsx-eDOS-partial-surf}
\end{figure*}
To get insight into the relationship between the atomic structure and the
electronic properties of the glasses, we have decomposed the eDOS of the
surface layers into partial contributions from the constituent atoms,
i.e.~Si, O, and Na, see Fig.~\ref{fig:ab nsx-eDOS-partial-surf}(a)
and (d). Subsequently we have decomposed the eDOS of Si atoms into
contributions from csSi and esSi atoms and the one of the O~atoms into
csBO, esBO and NBO atoms, i.e. the species we have found to be relevant
to characterize the structural properties of the samples, panels (b),
(c), (e), and (f). Figures~\ref{fig:ab nsx-eDOS-partial-surf}(a) and (b)
show that both Si and O contribute to the band at lowest energy,
but that the distribution per atom is about 3 times larger for O than for
Si. For the energies between $-10$~eV and $-5$~eV both species have a very
similar density, as is the case for the states with positive energy,
whereas in the band between $-5$~eV and the Fermi energy the signal is
strongly dominated by oxygen. These results hold also for the case of
NS3, panel (d), since we see that Na contributes basically only to the
band at positive energies, i.e.~the conduction band.
The further decomposition of the eDOS for the silica surface shows that
the two small peaks at around $-14$ and 2~eV in the total eDOS are mainly
due to states of NBO atoms, see Fig.~\ref{fig:ab nsx-eDOS-partial-surf}(b)
and (e), with a weak contribution from csSi atoms, panel (c). Therefore, these two peaks can be assigned to Si-O dangling bonds,
in agreement with the findings of previous first principles simulations
for hydrated silica~\cite{benoit_nature_2008}. These NBO also give rise
to a signal at around $-17$~eV, which gives the total eDOS for silica
a shoulder at this energy, panel (a), and the one for NS3 a
pronounced peak, panel (d).
For the silica surface, we note that the main valence band for the
edge-sharing atoms is shifted by about 2~eV to higher energies, panel
(b). This shift makes the peaks and valleys in the distributions for
the csBO and esBO cancel each other, resulting in a total distribution
that is rather featureless, i.e.~the splitting between O~$2p-$Si~$sp^3$
bonding and O~$2p$ nonbonding states in the total eDOS of the silica surface
has disappeared, panel (a). The atoms of the 2M rings, i.e.~esSi and esBO, give rise
to peaks between $-20$ and $-15$~eV and between $-10$ and $0$~eV,
features that are consistent with DFT calculations for crystalline fibrous silica
containing these particular defective structures~\cite{hamann1997energies}.
Comparing panels (b) and (c) for silica with the corresponding ones
for NS3, panels (e) and (f), one sees that the various distributions
are quite similar. The main difference is that the ones for NS3 are
slightly shifted to lower energies. Hence we can conclude that the
shift with sodium concentration, already noted in the context
of Fig.~\ref{fig:ab nsx-eDOS-layers}, is due to the shift of the energies
of the individual species.
\subsection{Bader charges}
Further insight into the electronic properties of the glasses can be
obtained by analyzing how the charge density can be assigned to the various
type of atoms.
To this aim we have employed the ``atom in molecule''
(AIM) approach proposed by Bader~\cite{bader_atoms_1985}, which allows one to
partition the electron density $\rho({\bf r})$ among the constituent atoms
and thus to define the atomic charges. The Bader
charge is given by
\begin{equation}
Q_\alpha^{\rm Bader} = Z_\alpha -\int_{V_{\rm Bader} } \rho({\bf r})dV,
\end{equation}
\noindent
where $Z_\alpha$ is the number of valence electrons of an atom $\alpha$
and $V_{\rm Bader}$ is the so-called Bader volume around the atom.
By definition, the Bader volume is limited by a
surface $S({\bf r})$ which exhibits a zero flux property, i.e., the inner
product $\nabla\rho({\bf r})\cdot{\bf n}=0$, where ${\bf n}$ is the unit
vector oriented perpendicular to $S({\bf r})$~\cite{bader_atoms_1985}.
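To make the partitioning concrete, on the real-space grid of a DFT calculation the integral above reduces to a sum of the density over the voxels assigned to an atom. The following minimal Python sketch illustrates only this final step; it assumes the voxel-to-atom assignment `labels` has already been produced by a grid-based Bader algorithm, and the function name and toy numbers are ours, not part of the actual workflow used here:

```python
import numpy as np

def bader_charge(z_val, rho, labels, atom_index, voxel_volume):
    """Q_Bader = Z_valence - (integral of rho over the atom's Bader volume).

    rho:    electron density on a real-space grid (electrons per unit volume)
    labels: integer array of the same shape assigning each voxel to an atom
    """
    n_electrons = rho[labels == atom_index].sum() * voxel_volume
    return z_val - n_electrons

# Toy check: a uniform density of 0.5 e/volume on a 4x4x4 grid of unit
# voxels, split evenly between two "atoms" -> each volume holds 16 electrons.
rho = np.full((4, 4, 4), 0.5)
labels = np.zeros((4, 4, 4), dtype=int)
labels[2:] = 1                      # half of the voxels belong to atom 1
print(bader_charge(6.0, rho, labels, 0, voxel_volume=1.0))  # 6 - 16 = -10.0
```

In a real calculation the zero-flux surfaces are found numerically, e.g.~by steepest-ascent assignment of grid points to density maxima.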
In Tab.~\ref{tab: ab bader-charge} we list the average Bader charges of
various atomic species in the three glasses. Note that, in contrast to
the structural analysis, for the charge analysis we did not distinguish
between the surface and interior layers since we found no significant
difference between the two. This suggests that the Bader partition scheme cannot be used to establish direct relationships with the different structural and vibrational properties of the surface and interior domains discussed in the previous sections.
\begin{table*}[ht]
\small
\center
\begin{tabularx}{10cm}{lYYY}
\hline
Charge ($e$) & Silica & NS5 & NS3 \\ \hline
Si & 3.154(0.151) & 3.150(0.106) & 3.146(0.025) \\
Si$^{3}$ & 2.458(0.469) & - & - \\
Si$^4$ & 3.176(0.018) & 3.156(0.024) & 3.146(0.025) \\
Si$^5$ & 3.201 & 3.178(0.009) & 3.158(0.027) \\ \hline
$Q_2$ & 3.136 & 3.113(0.007) & 3.105(0.017) \\
$Q_3$ & 3.142(0.009) & 3.133(0.018) & 3.134(0.015) \\
$Q_4$ & 3.177(0.018) & 3.169(0.016) & 3.168(0.015) \\ \hline
O & -1.577(0.08) & -1.586(0.055) & -1.588(0.03) \\
NBO & -1.106(0.243) & -1.529(0.071) & -1.543(0.01) \\
BO & -1.587(0.014) & -1.599(0.012) & -1.606(0.011) \\ \hline
esBO & -1.563(0.009) & -1.584(0.012) & -1.586(0.011) \\
esSi & 3.144(0.013) & 3.13(0.022) & 3.119(0.025) \\ \hline
Na & - & 0.847(0.015) & 0.84(0.016) \\ \hline
\end{tabularx}
\caption{
\label{tab: ab bader-charge}
Average Bader charge of atoms and various species found in the three
glasses at 0~K. The values given in parentheses are the standard deviations of
their distributions. A missing value in parentheses means that only one such
species has been found.
}
\end{table*}
For Si, the average charge of Si$^4$ (i.e.,~an Si bonded to four O) in
the silica glass is about +3.18~$e$, in good quantitative agreement with
the result found in quartz (+3.20~$e$)~\cite{gibbs_model_1999}, in bulk
amorphous silica~\cite{pasquarello1997dynamical}, for a silver/silica
interface~\cite{balout2019density} or for $\beta$-cristobalite
surfaces~\cite{le2018structural}. In addition, we find that
$q_{\rm Si}$ increases with increasing coordination number $n$,
see rows Si$^n$, in qualitative agreement with observations from a high-energy
synchrotron-radiation study of stishovite (the high-pressure polymorph
of silica)~\cite{kirfel_electron-density_2001}. Furthermore we note
that $q_{\rm Si}$ depends also on the character of the tetrahedron,
$Q_m$, where $m$ denotes the number of BO connected to the Si atom,
in that $q_{\rm Si}$ increases with $m$. By comparing the Si charge
of the three glasses, one notices that $q_{\rm Si}$ decreases with
increasing Na concentration. An inspection of the Na-dependence of the
$Q_m$ species shows that this decrease is likely due to the change in
their concentrations and not to the Na-dependence of
their charges, since the latter is rather weak.
For oxygen we find that the average charge of BO is close to $-$1.59~$e$,
a value which is in agreement with the one obtained for $\alpha-$quartz,
$-1.60~e$~\cite{gibbs_model_1999}, and other systems containing silicon and
oxygen~\cite{pasquarello1997dynamical,balout2019density,le2018structural}.
The table also shows that $q_{\rm BO}$ is more negative than $q_{\rm
NBO}$, a deficiency of the Bader charge analysis which has already
been found in previous \textit{ab initio} simulations, see for example
Refs.~\cite{du_structure_2006,pedesseau_first-principles_2015-1}. Despite
this flaw, it is still instructive to discuss the atomic charges
in different systems using the same description. Table~\ref{tab:
ab bader-charge} shows, e.g., that the $q_{\rm O}$ becomes slightly
more negative with the addition of Na. This trend is mainly due to the
pronounced Na-dependence of the charge of the NBO.
Regarding the 2M~rings, we find that the esSi atoms are slightly less
charged than the average Si atoms. This can be rationalized by the fact
that in 2M~rings the two oxygen atoms are quite close to each other,
which pushes their electron clouds in the direction
of the Si atoms, thus decreasing the charge of the latter. This
interpretation is consistent with the observation that the esBO have a
charge that is less negative than the one of the ordinary BO.
Finally, we note that the Na charge has a value of $\approx+$0.84~$e$
and is basically independent of the Na concentration. This result is in
good quantitative agreement with a previous \textit{ab initio} simulation
of a sodium borosilicate glass, where a Bader charge of +0.83~$e$ was
found for Na ions~\cite{pedesseau_first-principles_2015}.
\subsection{Electron localization function}
\begin{figure*}[htp]
\center
\includegraphics[width=0.91\textwidth]{fig12abcdef-ELF-ns0-surface-isosurfs.eps}
\includegraphics[width=0.81\textwidth]{fig12gh-ELF-ns0-surface-2d-contour-plot_new.eps}
\caption{Analysis of chemical bonding on a SiO$_2$ surface by means of the
electron localization function (ELF). (a) The representation of the ELF for a small region
on the surface highlighting a SiO$_4$ tetrahedron, centered on an
Si atom labeled Si1, bonded to one NBO, O1, and three BO atoms (O2-O4).
The iso-surface (in yellow) corresponds to the ELF at a value of 0.83.
(b) and (c): 2D contour plots of the ELF in the planes defined by three
atoms: Si1-O2-Si2, panel (b), and O1-Si1-O4, panel (c), where the atoms are identified in panel (a).
The increment of iso-lines is 0.05. (d)-(f): The same
representation as in (a)-(c) but for a two-membered ring structure. (g) and
(h): Line profiles of the ELF along the bond paths as shown in (a)
and (d), respectively. Also included in (g) is the average ELF profile
of the BO-Si bonds that belong to a Si-BO-Si connection in the interior
domain (green dash-dotted line). The O atom is at $r=0$. For each
bond path the point corresponding to the maximum ELF is indicated in the
parenthesis ($r$, ELF$(r)$). The arrows show the location of the average Si-O bond length. The
visualization of the ELF was realized by VESTA~\cite{momma_vesta_2011}.
}
\label{fig:ab elf-ns0-surf}
\end{figure*}
In this subsection we discuss the nature of the chemical
bonding in the glasses using the electron localization function
(ELF)~\cite{becke_simple_1990}. The ELF, $\eta({\bf r})$, is related to the
probability of finding electron pairs, normalized by the
corresponding probability for a uniform electron gas. By definition,
$\eta$ takes at any point of space a value between 0 and 1:
a value of 1 corresponds to a perfect localization of the electron
pairs, while a value of 0.5 corresponds to a uniform electron
gas. Details of the calculation can be found in Ref.~\cite{savin_elf:_1997}.
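For reference, we recall the standard Becke--Edgecombe form of the ELF (written here for a spin-unpolarized density; see the cited references for details):
\begin{equation}
\eta({\bf r})=\frac{1}{1+\chi^{2}({\bf r})},
\qquad
\chi=\frac{D({\bf r})}{D_{h}({\bf r})},
\end{equation}
where $D=\tau-\frac{1}{8}\,|\nabla\rho|^{2}/\rho$ is the Pauli kinetic energy density ($\tau$ being the positive-definite kinetic energy density) and $D_{h}=\frac{3}{10}(3\pi^{2})^{2/3}\rho^{5/3}$ is the corresponding quantity for the uniform electron gas, so that $\chi=1$ indeed yields $\eta=0.5$.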
In Fig.~\ref{fig:ab elf-ns0-surf} we illustrate some of the properties
of the ELF for the case of the silica glass surface. Panel (a) shows the
iso-surface of the distribution evaluated at the value $\eta=0.83$. The
region we consider includes a SiO$_4$ tetrahedron with one NBO (marked
as O1) and three BO (O2-O4). For each BO we observe a hemispherical
domain along each Si-O bond, see for example the bridge Si1-O2-Si2 in
panel (a), and this domain can be assigned to a
pair of bonding electrons. One also finds a banana-shaped domain
at the reflex side of the Si-BO-Si bridge, which is orthogonal to
the Si-BO-Si plane. This domain is assigned to two lone pairs of
electrons, i.e., the four valence electrons that are not involved in
bonding. These non-bonding domains are substantially
larger than the bonded hemispherical domains along the Si-O bonds,
in agreement with the ELF mapping of the SiOSi linkage in silicate
minerals~\cite{gibbs_mapping_2005}. For the NBO atoms, as for example
the atom labelled O1, we observe that, besides the bond pair domain,
there is a concave hemispherical-shaped domain which seems to
have rotational symmetry along the Si-NBO direction. This domain
can be ascribed to the non-bonding electrons, and it appears to have
a larger volume than the nonbonding electron domain of the BO.
This observation is reasonable since presumably there are five nonbonding
electrons for the NBO while only four for the BO.
Figure~\ref{fig:ab elf-ns0-surf}(b) shows the two-dimensional contour
plot of the ELF in a plane spanned by Si1, O2, and Si2, i.e.~for a BO,
and, panel (c), for the plane given by O2, Si1, and O1, i.e.~for a NBO.
The aforementioned bonding and nonbonding domains are clearly visible
from the contour plots. In addition, one recognizes from panel (c) that
the probability distribution of electron pairs around the NBO is more
spread out than that of the BO. This observation can be rationalized by
the fact that the NBO has more free volume on the side opposite to the
Si-O bond than the BO atoms.
A further important structural unit, namely a 2M ring, is depicted in
Fig.~\ref{fig:ab elf-ns0-surf}(d). One notices that the O atoms in the
2M ring, O7 and O8, have electron pair domains that are similar to the
ones of ordinary BO atoms, e.g., O2 in panel (a). Figure~\ref{fig:ab
elf-ns0-surf} (e) and (f) show the ELF contour plots corresponding to
two Si-O-Si linkages associated with the 2M ring. (Note that the Si-O-Si
linkage in panel (e) involves an edge-sharing Si, Si4.) One sees that the
angle Si3-O5-Si4 is much larger than the one in panel (b), demonstrating
that the strong angular constraint in the 2M ring also affects the
linkages of its neighbors. Consequently, the bond and lone pair domains
around the BO in panel (e) are not as well structured as the ones in
panel (b). Panel (f) shows the ELF contour plots of the 2M ring. One
observes that the bond and lone pair domains are well structured and
can be clearly distinguished. Another noticeable feature is that the
bond paths, i.e. the lines connecting neighboring atoms, are no longer
axes of symmetry for the bond pair domains. This is likely due to the
strong repulsion of the electrons from the two opposing esBO atoms.
To describe the ELF in a more quantitative manner we show in
Fig.~\ref{fig:ab elf-ns0-surf}(g) and (h) the line profile of the
ELF along the bond paths starting from the oxygen atom ($r=0$). Note
that all BO in panel (a) and (g) are ordinary corner-sharing BO,
i.e.~csBO. Figure~\ref{fig:ab elf-ns0-surf}(g) shows that the ELF of the
NBO-Si bond is smaller than the one of the BO-Si bond, implying that the
ELF around the NBO is more spread out, in agreement with the contour
plot in panel (c). In addition we note that the BO-Si bond peaks at a
larger $r$ than the Si-NBO bond (see the values in the parentheses of the
legend), in agreement with the observation that for the NBO the ELF
is extended in the direction opposite to the Si-O bond. Also included in
panel (g) is the ELF profile corresponding to a Si-BO-Si linkage in the
interior of the sample and which has an angle close to the Si1-O2-Si2
linkage shown in panel (a).
The presence of the surface does not seem to affect in a significant manner the ELF profile of the Si-BO bonds, although the BO-Si bond length (indicated by the vertical arrows) in the
interior is slightly smaller than the one at the surface (see also
Tab.~\ref{tab:ab-bond-liquid-glass}).
Figure~\ref{fig:ab elf-ns0-surf}(h) compares
the ELF line profiles of the esBO-esSi and csBO-esSi bonds and one notices
that the ELF of the esBO-esSi bonds shifts to a larger $r$ relative to
the csBO-esSi bonds but seems to have the same maximum height. However,
since for the esBO-esSi bond the bond path does not pass through the
maximum of the ELF, see panel (f), the real maximum value of the ELF
for this bond is in fact higher than the one for the csBO-esSi bond, i.e. the electrons are
more localized.
Figure~\ref{fig:ab elf-ns3-surf} shows the ELF results for the NS3
glass surface. We note that, in addition to the structural modification
discussed in the previous sections, the presence of Na induces also changes
in the bonding. For example, panel (a), the bond pair domain for the
NBO-Si bond O1-Si1 is much smaller than the corresponding domain in silica,
Fig.~\ref{fig:ab elf-ns0-surf}(a). Figure~\ref{fig:ab elf-ns3-surf}(b)
shows that the presence of Na also leads to an asymmetry of the lone
pair domain of the NBO (i.e.~O1). This effect is also seen from the
two dimensional contour plot in the plane defined by Na1-O1-Si1, panel (c).
For the NBO, O1, we note
that the domains in the directions of the Na atoms can be ascribed to
the Na-O bond pair interaction superimposed on the lone pair domains,
panel (c). Similar results were found for earth materials containing alkali
metals~\cite{gibbs_mapping_2005}.
\begin{figure*}[htp]
\includegraphics[width=0.95\textwidth]{fig13abcdef-ELF-ns3-surface-isosurfs.eps}
\includegraphics[width=0.85\textwidth]{fig13gh-ELF-ns3-surface-2d-contour-plot_new.eps}
\caption{Analysis of chemical bonding on the surface of NS3 by the electron localization
function (ELF). (a) A map of the ELF for the structures
on the surface, highlighting a SiO$_4$ tetrahedron which has a Na in its neighborhood. The dashed lines are the O-Na bonds with $r_{\rm
O-Na}<2.5$~\AA. The iso-surface represents the ELF surface at a value of 0.83 and the
assignment of different domains are the same as in Fig.~\ref{fig:ab elf-ns0-surf}. (b)
and (c): 2D contour plots of the ELF in the planes defined by three
atoms. The increment of iso-lines is 0.05. (d-f): The same
representation as in (a)-(c) but for a two-membered ring structure. (g) and
(h): Line profiles of the ELF along the bond paths as shown in
(a) and (d), respectively. The oxygen atom is at $r=0$. For each bond path the
point corresponding to the maximum ELF is indicated in the parenthesis. The
arrows show the location of the average Si-O or Na-O bond length. NBO$^{\rm 2M}$
denotes the NBO bonded to an esSi. The
visualization of the ELF was realized by VESTA~\cite{momma_vesta_2011}.
}
\label{fig:ab elf-ns3-surf}
\end{figure*}
Figure~\ref{fig:ab elf-ns3-surf}(d) shows a 2M ring with one of the Si atoms connected to a
NBO and its nearby Na atoms.
Panel (e) shows that, for the 2M rings,
the distribution is no longer symmetric around the O7(esBO)-Si3 connection, an
observation that is consistent with the finding for the 2M rings in silica,
see Fig.~\ref{fig:ab elf-ns0-surf}. For the NBO, O6, we find that the ELF contour plot is quite similar to the one for O1 shown in panel (b), in spite of the presence of the neighboring 2M ring.
Figure~\ref{fig:ab elf-ns3-surf}(f)
clearly shows that the ELF for the esBO (O7) bonded to the Na is less spread out
than the distribution for the other esBO (O8) in the 2M ring, demonstrating that O7 is indeed
bonded to the Na atom.
Figure~\ref{fig:ab elf-ns3-surf}(g) shows the average ELF line profiles of
various types of O-Si bonds. (Note that the NBO atom connected to an esSi
atom is denoted as NBO$^{\rm 2M}$.) One observes that the ELF profile of
the NBO-csSi bond is very similar to the one of the NBO$^{\rm 2M}$-esSi
bond, indicating that the NBO-Si bond character is basically independent
of the Si type. Furthermore we find that the ELF values of the NBO-Si
bonds are smaller than that of the esBO-esSi bond, in accordance with
the fact that the distribution of the electron pairs around the NBO
is more spread out than the one for the esBO-esSi bond. (Also here we
recall that the ELF for the esBO-esSi is not symmetric with respect to
the connecting axis, see panel (c), and hence the maximum value is even
higher.) For all three NBO-Si bonds, the maximum of the ELF is located
at $r\approx0.68$~\AA, independent of the bond type. Figure~\ref{fig:ab
elf-ns3-surf}(h) shows the profiles for the O-Na pairs and one sees
that the maxima of the curves are located
at $r\approx0.61$, 0.63 and 0.67~\AA\ for the NBO$^{\rm 2M}$-Na, NBO-Na,
and esBO-Na bonds, respectively. These results indicate that the character
of the O-Na bond is more sensitive to changes in the local environment
than that of the NBO-Si bond. We also note that the maxima of the ELF for the O-Na
bonds are closer to the oxygen atoms (at $r=0$) than the ones of the O-Si
bonds. This result demonstrates that the O-Na bonds are less covalent (i.e.,
more ionic) than the O-Si bonds. In addition, based on the locations of the
ELF maxima, it can be deduced that the esBO-Na bond is more covalent
than the NBO-Na bonds.
Finally, we note that the locations of the maxima of the ELF profiles
for the NBO-Si and esBO-esSi bonds are very close to the corresponding
values found for the silica glass. This similarity indicates that the
presence of Na affects the position of the bond pair domains of the O-Si
bonds only weakly.
\section{Summary and Conclusions} \label{sec:conclusions}
Using {\it ab initio} calculations, we have studied the structural,
vibrational, and electronic properties of the surface of amorphous silica
and two binary sodo-silicate glasses. Previous studies have shown that,
for the case of silica, two-membered rings are an important structural
motif at the surface~\cite{rimola_silica_2013,tielens2019Characterization}.
The present analysis of the compositional
dependence of the surface and interior domains of our sandwich samples
shows that the concentration of defect sites is considerably reduced
with increasing Na content since sodium migrates from the interior to
the surface and transforms energetically unfavorable local structures, such as 2M rings,
into more relaxed ones. As a consequence the frequency of two-membered
rings decreases rapidly with the addition of sodium.
From the dynamical matrix of the samples we have calculated the total
vibrational density of states as well as the contributions of the various atomic
species and structural elements to this distribution. This has allowed
us to identify the spectroscopic signatures of the 2M rings and see how
these change as a function of the sodium content. In addition we have
computed the IR spectra and have determined also for this observable
the signature of the 2M rings.
These calculations show that not all vibrational modes of the 2M rings are IR active, thus pointing out the need to use additional experimental techniques to study these rings.
In
addition the present study can serve as a benchmark for simulations of
glass surfaces using effective potentials, since our results allow one to
compare the results of classical MD simulations with highly accurate
microscopic structural and vibrational data.
Taking advantage of the {\it ab initio} approach, we have probed the
electronic properties of the glass samples with a particular focus on
the surfaces. The analysis of the electron localization function shows
that 2M rings and NBOs have distinct electronic distributions, and we
have investigated how these are affected by the presence of sodium.
To the best of our knowledge, the current simulations and analysis
represent the first study that investigates simultaneously the structural,
spectroscopic, and electronic properties of the silica glass surface and how
they evolve with Na addition. Hence this approach circumvents the
frequently encountered problem that the samples probed with different
techniques usually have different production histories (cooling
rates, composition, atmospheres, etc.), which makes the unambiguous
identification of the various structural features difficult. As a
consequence the present work should be a relevant step forward in our
understanding of the properties of oxide glasses on a quantitative level.
\section*{Acknowledgements}
Z.Z. acknowledges financial support by the China Scholarship Council (No. 201606050112).
W.K. is a member of the Institut Universitaire de France.
This work was granted access to the HPC resources
of CINES under the allocations A0030907572, A0050907572, and A0070907572
attributed by GENCI (Grand Equipement National de Calcul
Intensif).
2107.12292
\section{Introduction}
\begin{figure}[!tb]
\vspace{-0.33in}
\centering {\includegraphics[width=0.46\textwidth]{intro.pdf}}
\vspace{-0.1in}
\caption{Comparison between conventional self-attention and our Contextual Transformer (CoT) block. (a) Conventional self-attention solely exploits the isolated query-key pairs to measure the attention matrix, but leaves the rich contexts among keys under-exploited. Instead, (b) the CoT block first mines the static context among keys via a 3$\times$3 convolution. Next, based on the query and the contextualized key, two consecutive 1$\times$1 convolutions are utilized to perform self-attention, yielding the dynamic context. The static and dynamic contexts are finally fused as the output.}
\label{fig:fig1}
\vspace{-0.28in}
\end{figure}
Convolutional Neural Networks (CNNs) \cite{chollet2017xception,dai2017deformable,he2016deep,krizhevsky2012imagenet,simonyan2014very,szegedy2015going,tan2019efficientnet} demonstrate a high capability of learning discriminative visual representations, and generalize well to a series of Computer Vision (CV) tasks, e.g., image recognition, object detection, and semantic segmentation. The de-facto recipe of CNN architecture design is based on discrete convolutional operators (e.g., 3$\times$3 or 5$\times$5 convolutions), which effectively impose spatial locality and translation equivariance. However, the limited receptive field of convolution hinders the modeling of global/long-range dependencies, even though such long-range interactions benefit numerous CV tasks \cite{mottaghi2014role,rabinovich2007objects}. Recently, the Natural Language Processing (NLP) field has witnessed the rise of the Transformer with self-attention in powerful language modeling architectures \cite{devlin2018bert,vaswani2017attention}, which triggers long-range interaction in a scalable manner. Inspired by this, there has been a steady momentum of breakthroughs \cite{bello2019attention,carion2020end,dosovitskiy2020image,li2021scheduled,pan2020x,ramachandran2019stand,zhao2020exploring} that push the limits of CV tasks by integrating CNN-based architectures with Transformer-style modules. For example, ViT \cite{dosovitskiy2020image} and DETR \cite{carion2020end} directly process image patches or CNN outputs using self-attention as in the Transformer. \cite{ramachandran2019stand,zhao2020exploring} present a stand-alone design of a local self-attention module, which can completely replace the spatial convolutions in ResNet architectures.
Nevertheless, previous designs mainly hinge on the independent pairwise query-key interaction for measuring attention matrix as in conventional self-attention block (Figure \ref{fig:fig1} (a)), thereby ignoring the rich contexts among neighbor~keys.
In this work, we ask a simple question: \emph{is there an elegant way to enhance Transformer-style architectures by exploiting the richness of context among input keys over a 2D feature map}? For this purpose, we present a unique design of Transformer-style block, named Contextual Transformer (CoT), as shown in Figure \ref{fig:fig1} (b). This design unifies both context mining among keys and self-attention learning over the 2D feature map in a single architecture, and thus avoids introducing an additional branch for context mining. Technically, in the CoT block, we first contextualize the representation of the keys by performing a 3$\times$3 convolution over all the neighbor keys within the 3$\times$3 grid. The contextualized key feature can be treated as a \emph{static} representation of the inputs that reflects the \emph{static} context among local neighbors. After that, we feed the concatenation of the contextualized key feature and the input query into two consecutive $1\times1$ convolutions to produce the attention matrix. This process naturally exploits the mutual relations between each query and all keys for self-attention learning under the guidance of the \emph{static} context. The learnt attention matrix is further utilized to aggregate all the input values, yielding the \emph{dynamic} contextual representation of the inputs. We take the combination of the \emph{static} and \emph{dynamic} contextual representations as the final output of the CoT block. In summary, our launching point is to simultaneously capture these two kinds of spatial contexts among input keys, i.e., the \emph{static} context via 3$\times$3 convolution and the \emph{dynamic} context based on contextualized self-attention, to boost visual representation learning.
Our CoT can be viewed as a unified building block, and is an alternative to standard convolutions in existing ResNet architectures without increasing the parameter and FLOP budgets.
By directly replacing each 3$\times$3 convolution in a ResNet structure with CoT block, we present a new Contextual Transformer Networks (dubbed as CoTNet) for image representation learning.
Through extensive experiments over a series of CV tasks, we demonstrate that our CoTNet outperforms several state-of-the-art backbones.
Notably, for image recognition on ImageNet, CoTNet obtains a 0.9\% absolute reduction of the top-1 error rate against ResNeSt (101 layers). For object detection and instance segmentation on COCO, CoTNet improves over ResNeSt by 1.5\% and 0.7\% absolute mAP, respectively.
\section{Related Work}
\subsection{Convolutional Networks} Sparked by the breakthrough performance of AlexNet \cite{krizhevsky2012imagenet} on the ImageNet dataset, Convolutional Networks (ConvNets) have become the dominant architecture in the CV field. One mainstream of ConvNet design follows the primary rule of LeNet \cite{lecun1998gradient}, i.e., stacking low-to-high convolutions in series by going deeper: the 8-layer AlexNet, 16-layer VGG \cite{simonyan2014very}, 22-layer GoogLeNet \cite{szegedy2015going}, and 152-layer ResNet \cite{he2016deep}. Since then, a series of innovations has been proposed for ConvNet architecture design to strengthen the capacity of visual representation. For example, inspired by the split-transform-merge strategy of Inception modules, ResNeXt \cite{xie2017aggregated} upgrades ResNet with aggregated residual transformations in the same topology.
DenseNet \cite{huang2017densely} additionally enables the cross-layer connections to boost the capacity of ConvNet.
Instead of exploiting spatial dependencies in ConvNet \cite{jaderberg2015spatial,mottaghi2014role}, SENet \cite{hu2018squeeze,hu2020squeeze} captures the interdependencies between channels to perform channel-wise feature recalibration.
\cite{tan2019efficientnet} further scales up an auto-searched ConvNet to obtain a family of EfficientNet networks, which achieve superior accuracy and~efficiency.
\subsection{Self-attention in Vision} Inspired by self-attention in the Transformer, which continuously achieves impressive performances in various NLP tasks, the research community has started to pay more attention to self-attention in vision scenarios. The original self-attention mechanism in the NLP domain \cite{vaswani2017attention} is devised to capture long-range dependencies in sequence modeling. In the vision domain, a simple migration of the self-attention mechanism from NLP to CV is to directly perform self-attention over feature vectors across different spatial locations within an image. In particular, one of the early attempts to explore self-attention in ConvNets is the non-local operation \cite{wang2018non}, which serves as an additional building block to employ self-attention over the outputs of convolutions. \cite{bello2019attention} further augments convolutional operators with a global multi-head self-attention mechanism to facilitate image classification and object detection. Instead of using global self-attention over the whole feature map \cite{bello2019attention,wang2018non}, which scales poorly, \cite{hu2019local,ramachandran2019stand,zhao2020exploring} employ self-attention within a local patch (e.g., a 3$\times$3 grid). Such a design of local self-attention effectively limits the parameters and computation consumed by the network, and thus can fully replace convolutions across the entire deep architecture. Recently, by reshaping raw images into a 1D sequence, a sequence Transformer \cite{chen2020generative} was adopted to auto-regressively predict pixels for self-supervised representation learning. Next, \cite{carion2020end,dosovitskiy2020image} directly apply a pure Transformer to sequences of local features or image patches for object detection and image recognition. Most recently, \cite{srinivas2021bottleneck} designs a powerful backbone by replacing the final three 3$\times$3 convolutions in a ResNet with global self-attention layers.
\begin{figure*}[!tb]
\vspace{-0.1in}
\centering {\includegraphics[width=0.9\textwidth]{framework.pdf}}
\vspace{-0.05in}
\caption{The detailed structures of (a) the conventional self-attention block and (b) our Contextual Transformer (CoT) block. \textcircled{+} and \textcircled{$\ast$} denote the element-wise sum and local matrix multiplication, respectively.}
\label{fig:framework}
\vspace{-0.2in}
\end{figure*}
\subsection{Summary} Here we also focus on exploring self-attention for the architecture design of vision backbones. Most existing techniques directly capitalize on the conventional self-attention and thus ignore the explicit modeling of the rich contexts among neighbor keys. In contrast, our Contextual Transformer block unifies both context mining among keys and self-attention learning over the feature map in a single architecture with a favorable parameter budget.
\section{Our Approach}
In this section, we first provide a brief review of the conventional self-attention widely adopted in vision backbones. Next, a novel Transformer-style building block, named Contextual Transformer (CoT), is introduced for image representation learning. This design goes beyond the conventional self-attention mechanism by additionally exploiting the contextual information among input keys to facilitate self-attention learning, and ultimately improves the representational properties of deep networks. After replacing the 3$\times$3 convolutions with CoT blocks across the whole deep architecture, two kinds of Contextual Transformer Networks, i.e., CoTNet and CoTNeXt, derived from ResNet \cite{he2016deep} and ResNeXt \cite{xie2017aggregated}, respectively, are further elaborated.
\subsection{Multi-head Self-attention in Vision Backbones}
Here we present a general formulation for the scalable local multi-head self-attention in vision backbones \cite{hu2019local,ramachandran2019stand,zhao2020exploring}, as depicted in Figure \ref{fig:framework} (a). Formally, given an input 2D feature map $X$ of size $H \times W \times C$ ($H$: height, $W$: width, $C$: channel number), we transform $X$ into queries $Q = XW_q$, keys $K = XW_k$, and values $V=XW_v$ via the embedding matrices $W_q$, $W_k$, and $W_v$, respectively. Notably, each embedding matrix is implemented as a 1$\times$1 convolution in space. After that, we obtain the local relation matrix $R \in {\mathbb{R}}^{{H}\times{W}\times{(k \times k \times C_h)}}$ between keys $K$ and queries $Q$ as:
\begin{equation}\small
\label{eq:sa1}
R=K \oast Q,
\end{equation}
where $C_h$ is the head number, and $\oast$ denotes the local matrix multiplication operation that measures the pairwise relations between each query and the corresponding keys within the local $k \times k$ grid in space. Thus, each feature $R^{(i)}$ at $i$-th spatial location of $R$ is a $k \times k \times C_h$-dimensional vector, that consists of $C_h$ local query-key relation maps (size: $k \times k$) for all heads. The local relation matrix $R$ is further enriched with the position information of each $k \times k$ grid:
\begin{equation}\small
\label{eq:sa2}
\hat{R}=R+P \oast Q,
\end{equation}
where $P\in {\mathbb{R}}^{k \times k \times C_k}$ represents the 2D relative position embeddings within each $k \times k$ grid, and is shared across all $C_h$ heads.
Next, the attention matrix $A$ is achieved by normalizing the enhanced spatial-aware local relation matrix $\hat{R}$ with Softmax operation along channel dimension for each head: $A=\text{\texttt{Softmax}}(\hat{R})$. After reshaping the feature vector at each spatial location of $A$ into $C_h$ local attention matrices (size: $k \times k$), the final output feature map is calculated as the aggregation of all values within each $k \times k$ grid with the learnt local attention matrix:
\begin{equation}\small
\label{eq:sa3}
Y=V \oast A.
\end{equation}
Note that the local attention matrix of each head is only utilized for aggregating the corresponding slice of $V$, evenly divided along the channel dimension, and the final output $Y$ is the concatenation of the aggregated feature maps of all heads.
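As a toy illustration of Eqs. (\ref{eq:sa1})--(\ref{eq:sa3}), the following pure-Python sketch (a hypothetical single-head example; relative position embeddings and the multi-head split are omitted) computes the local self-attention output at one spatial location from its query and the $k \times k$ neighboring keys and values:

```python
import math

def softmax(xs):
    # numerically stable softmax over a flat list of relation scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def local_attention_at(q, keys, values):
    """Single-head local self-attention at one spatial location.
    q: query vector (length C); keys/values: k*k vectors of length C.
    Implements R = K (*) Q, A = Softmax(R), Y = V (*) A at this location."""
    # pairwise relations between the query and each key in the k x k grid
    r = [sum(qi * ki for qi, ki in zip(q, key)) for key in keys]
    a = softmax(r)  # local attention weights over the grid positions
    # aggregate the values with the learnt local attention weights
    return [sum(w * v[c] for w, v in zip(a, values)) for c in range(len(q))]
```

Running this over every spatial location (and once per head) reproduces the aggregation described above.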
\subsection{Contextual Transformer Block}
Conventional self-attention nicely triggers the feature interactions across different spatial locations depending on the inputs themselves. Nevertheless, in the conventional self-attention mechanism, all the pairwise query-key relations are independently learnt over isolated query-key pairs, without exploring the rich contexts in between. That severely limits the capacity of self-attention learning over 2D feature map for visual representation learning.
To alleviate this issue, we construct a new Transformer-style building block, i.e., the Contextual Transformer (CoT) block in Figure \ref{fig:framework} (b), which integrates both contextual information mining and self-attention learning into a unified architecture. Our launching point is to fully exploit the contextual information among neighbor keys to boost self-attention learning in an efficient manner, and to strengthen the representative capacity of the output aggregated feature map.
\begin{table}[!tb]\scriptsize
\centering
\caption{\small The detailed structures of ResNet-50 (left) and CoTNet-50 (right). The shapes and operations within a residual building block are shown inside the brackets and the number of stacked blocks in each stage is listed outside. CoTNet-50 has a slightly smaller number of parameters and FLOPs than ResNet-50.}
\setlength\extrarowheight{1.1pt}
\begin{tabular}{c|c|c|c}
\Xhline{2\arrayrulewidth}
stage & ResNet-50 & \textbf{CoTNet-50} & \!\!output\!\! \\ \hline
res1 & 7 $\!\times\!$ 7 conv, 64, stride 2 & 7 $\!\times\!$ 7 conv, 64, stride 2 & \!\!112 $\!\times\!$ 112\!\!\\ \hline
\multirow{2}{*}{res2} & 3 $\!\times\!$ 3 max pool, stride 2 & 3 $\!\times\!$ 3 max pool, stride 2 & \multirow{2}{*}{\!\!56 $\!\times\!$ 56\!\!}\\ \cline{2-3}
& $\left[ \begin{array}{l} 1 \!\times\! 1,64\\ 3 \!\times\! 3,64\\ 1 \!\times\! 1,256 \end{array} \right] \!\times\! 3$
& $\left[ \begin{array}{l} 1 \!\times\! 1,64\\ {\textbf{\color{blue}CoT}},64\\ 1 \!\times\! 1,256 \end{array} \right] \!\times\! 3$
& \\ \hline
res3
& $\left[ \begin{array}{l} 1 \!\times\! 1,128\\ 3 \!\times\! 3,128\\ 1 \!\times\! 1,512 \end{array} \right] \!\times\! 4$
& $\left[ \begin{array}{l} 1 \!\times\! 1,128\\ {\textbf{\color{blue}CoT}},128\\ 1 \!\times\! 1,512 \end{array} \right] \!\times\! 4$
& \!\!28 $\!\times\!$ 28\!\! \\ \hline
res4
& $\left[ \begin{array}{l} 1 \!\times\! 1,256\\ 3 \!\times\! 3,256\\ 1 \!\times\! 1,1024 \end{array} \right] \!\times\! 6$
& $\left[ \begin{array}{l} 1 \!\times\! 1,256\\ {\textbf{\color{blue}CoT}},256\\ 1 \!\times\! 1,1024 \end{array} \right] \!\times\! 6$
& \!\!14 $\!\times\!$ 14\!\! \\ \hline
res5
& $\left[ \begin{array}{l} 1 \!\times\! 1,512\\ 3 \!\times\! 3,512\\ 1 \!\times\! 1,2048 \end{array} \right] \!\times\! 3$
& $\left[ \begin{array}{l} 1 \!\times\! 1,512\\ {\textbf{\color{blue}CoT}},512\\ 1 \!\times\! 1,2048 \end{array} \right] \!\times\! 3$
& \!\!7 $\!\times\!$ 7\!\! \\ \hline
& \makecell{global average pool \\ 1000-d fc, softmax} & \makecell{global average pool \\ 1000-d fc, softmax} & 1 $\!\times\!$ 1 \\ \hline
\# params & \textbf{25.56} $\!\times\!$ $10^6$ & \textbf{22.21} $\!\times\!$ $10^6$ & \\ \hline
FLOPs & \textbf{4.12} $\!\times\!$ $10^9$ & \textbf{3.28} $\!\times\!$ $10^9$ & \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.2in}
\label{table:ResNet}
\end{table}
In particular, suppose we have the same input 2D feature map $X \in {\mathbb{R}}^{H \times W \times C}$. The keys, queries, and values are defined as $K=X$, $Q=X$, and $V=XW_v$, respectively. Instead of encoding each key via a 1$\times$1 convolution as in typical self-attention, the CoT block first employs a $k \times k$ group convolution over all the neighbor keys within the $k \times k$ grid to spatially contextualize each key representation. The learnt contextualized keys $K^1\in {\mathbb{R}}^{H \times W \times C}$ naturally reflect the static contextual information among local neighbor keys, and we take $K^1$ as the static context representation of the input $X$. After that, conditioned on the concatenation of the contextualized keys $K^1$ and queries $Q$, the attention matrix is obtained through two consecutive 1$\times$1 convolutions ($W_{\theta}$ with ReLU activation function and $W_{\delta}$ without activation function):
\begin{equation}\small
\label{eq:cot1}
A=[K^1, Q]W_{\theta}W_{\delta}.
\end{equation}
In other words, for each head, the local attention matrix at each spatial location of $A$ is learnt based on the query feature and the contextualized key feature, rather than on isolated query-key pairs. This enhances self-attention learning with the additional guidance of the mined static context $K^1$. Next, based on the contextualized attention matrix $A$, we calculate the attended feature map $K^2$ by aggregating all values $V$ as in typical self-attention:
\begin{equation}\small
\label{eq:cot2}
K^2=V \oast A.
\end{equation}
Since the attended feature map $K^2$ captures the dynamic feature interactions among the inputs, we refer to $K^2$ as the dynamic contextual representation of the inputs. The final output of our CoT block ($Y$) is thus measured as the fusion of the static context $K^1$ and the dynamic context $K^2$ through an attention mechanism \cite{li2019selective}.
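To make Eq. (\ref{eq:cot1}) concrete, here is a minimal per-location sketch (with hypothetical toy weights; a 1$\times$1 convolution acts as an independent linear map at each spatial position, so the two convolutions reduce to two matrix-vector products):

```python
def relu(xs):
    return [max(0.0, x) for x in xs]

def matvec(W, x):
    # apply a linear map (rows of W) to vector x; stands in for a 1x1 convolution
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def cot_attention_logits(k1, q, W_theta, W_delta):
    """Attention logits at one spatial location: A = [K^1, Q] W_theta W_delta.
    k1 is the contextualized key (static context), q the query; [K^1, Q] is
    their channel-wise concatenation. W_theta is followed by ReLU, W_delta is not."""
    z = k1 + q  # channel-wise concatenation [K^1, Q]
    return matvec(W_delta, relu(matvec(W_theta, z)))
```

The resulting logits at each location are then reshaped into the $k \times k$ local attention matrix per head and used to aggregate the values, as in Eq. (\ref{eq:cot2}).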
\begin{table}[!tb]\scriptsize
\centering
\caption{\small The detailed structures of ResNeXt-50 with a 32$\times$4d template (left) and CoTNeXt-50 with a 2$\times$48d template (right). The shapes and operations within a residual building block are shown inside the brackets and the number of stacked blocks in each stage is listed outside. $C$ denotes the number of groups within grouped convolutions.
Compared to ResNeXt-50, CoTNeXt-50 has a slightly larger number of parameters but similar FLOPs.
}
\setlength\extrarowheight{1.1pt}
\begin{tabular}{c|c|c|c}
\Xhline{2\arrayrulewidth}
\!\!stage & ResNeXt-50 (32$\!\times\!$4d) & \textbf{CoTNeXt-50 (2$\!\times\!$48d)} & \!\!output\!\! \\ \hline
\!\!res1 & 7 $\!\times\!$ 7 conv, 64, stride 2 & 7 $\!\times\!$ 7 conv, 64, stride 2 & \!\!112 $\!\times\!$ 112\!\! \\ \hline
\multirow{2}{*}{\!\!res2}
& 3 $\!\times\!$ 3 max pool, stride 2 & 3 $\!\times\!$ 3 max pool, stride 2 & \multirow{2}{*}{\!\!56 $\!\times\!$ 56\!\!} \\ \cline{2-3}
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,128\\ 3 \!\times\! 3,128,C\!\!=\!\!32\\ 1 \!\times\! 1,256 \end{array} \!\!\right] \!\!\times\!\! 3$\!\!
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,96\\ {\textbf{\color{blue}CoT}},96,C\!\!=\!\!2\\ 1 \!\times\! 1,256 \end{array} \!\!\right] \!\!\times\!\! 3$\!\!
& \\ \hline
\!\!res3
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,256\\ 3 \!\times\! 3,256,C\!\!=\!\!32\\ 1 \!\times\! 1,512 \end{array} \!\!\right] \!\!\times\!\! 4$\!\!
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,192\\ {\textbf{\color{blue}CoT}},192,C\!\!=\!\!2\\ 1 \!\times\! 1,512 \end{array} \!\!\right] \!\!\times\!\! 4$\!\!
& \!\!28 $\!\times\!$ 28\!\! \\ \hline
\!\!res4
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,512\\ 3 \!\times\! 3,512,C\!\!=\!\!32\\ 1 \!\times\! 1,1024 \end{array} \!\!\right] \!\!\times\!\! 6$\!\!
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,384\\ {\textbf{\color{blue}CoT}},384,C\!\!=\!\!2\\ 1 \!\times\! 1,1024 \end{array} \!\!\right] \!\!\times\!\! 6$\!\!
& \!\!14 $\!\times\!$ 14\!\! \\ \hline
\!\!res5
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,1024\\ 3 \!\times\! 3,1024,C\!\!=\!\!32\\ 1 \!\times\! 1,2048 \end{array} \!\!\right] \!\!\times\!\! 3$\!\!
& \!$\left[\!\! \begin{array}{l} 1 \!\times\! 1,768\\ {\textbf{\color{blue}CoT}},768,C\!\!=\!\!2\\ 1 \!\times\! 1,2048 \end{array} \!\!\right] \!\!\times\!\! 3$\!\!
& \!\!7 $\!\times\!$ 7\!\! \\ \hline
& \makecell{global average pool \\ 1000-d fc, softmax} & \makecell{global average pool \\ 1000-d fc, softmax} & \!\!1 $\!\times\!$ 1\!\! \\ \hline
\# params & \textbf{25.03} $\!\times\!$ $10^6$ & \textbf{30.05} $\!\times\!$ $10^6$ & \\ \hline
FLOPs & \textbf{4.27} $\!\times\!$ $10^9$ & \textbf{4.33} $\!\times\!$ $10^9$ & \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.2in}
\label{table:ResNeXt}
\end{table}
\subsection{Contextual Transformer Networks}
The design of our CoT is a unified self-attention building block, and acts as an alternative to standard convolutions in ConvNets. As a result, it is feasible to replace convolutions with their CoT counterparts to strengthen vision backbones with contextualized self-attention. Here we present how to integrate CoT blocks into existing state-of-the-art ResNet architectures (e.g., ResNet \cite{he2016deep} and ResNeXt \cite{xie2017aggregated}) without significantly increasing the parameter budget. Tables \ref{table:ResNet} and \ref{table:ResNeXt} show two different constructions of our Contextual Transformer Networks (CoTNet) based on the ResNet-50 and ResNeXt-50 backbones, called CoTNet-50 and CoTNeXt-50, respectively. Please note that our CoTNet is flexible enough to generalize to deeper networks (e.g., ResNet-101).
\textbf{CoTNet-50.} Specifically, CoTNet-50 is built by directly replacing all the 3$\times$3 convolutions (in the stages res2, res3, res4, and res5) of ResNet-50 with CoT blocks. As our CoT blocks are computationally similar to typical convolutions, CoTNet-50 has a similar (even slightly smaller) number of parameters and FLOPs compared to ResNet-50.
\textbf{CoTNeXt-50.} Similarly, for the construction of CoTNeXt-50, we first replace all the 3$\times$3 convolution kernels in the group convolutions of ResNeXt-50 with CoT blocks. Compared to typical convolutions, the depth of the kernels within group convolutions is significantly decreased when the number of groups (i.e., $C$ in Table \ref{table:ResNeXt}) is increased; in ResNeXt-50, the computational cost of the group convolutions is thus reduced by a factor of $C$. Therefore, in order to achieve a number of parameters and FLOPs similar to ResNeXt-50, we additionally change the template of CoTNeXt-50 from 32$\times$4d to 2$\times$48d. Finally, CoTNeXt-50 requires only 1.2$\times$ more parameters and 1.01$\times$ more FLOPs than ResNeXt-50.
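The factor-of-$C$ saving from grouping can be checked with a small parameter-counting helper (a sketch; bias terms are ignored, and the 128-channel res2 bottleneck of ResNeXt-50 from Table \ref{table:ResNeXt} is used as the example):

```python
def conv_params(k, c_in, c_out, groups=1):
    """Weight count of a k x k convolution with `groups` groups (no bias):
    each output channel only connects to c_in / groups input channels."""
    assert c_in % groups == 0
    return k * k * (c_in // groups) * c_out

# 3x3 convolution of the res2 bottleneck in ResNeXt-50 (32x4d): 128 -> 128 channels
dense = conv_params(3, 128, 128)                # ungrouped cost: 147456 weights
grouped = conv_params(3, 128, 128, groups=32)   # grouped cost: 4608 weights
print(dense // grouped)                         # grouping divides the cost by C = 32
```

This is why the CoT replacement must widen the template (2$\times$48d) to stay parameter-comparable once the heavy grouping is removed.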
\subsection{Connections with Previous Vision Backbones}
In this section, we discuss the detailed relations and differences between our Contextual Transformer and the previous most related vision backbones.
\textbf{Blueprint Separable Convolution \cite{haase2020rethinking}} approximates the conventional convolution with a 1$\times$1 pointwise convolution plus a $k \times k$ depthwise convolution, aiming to reduce the redundancies along the depth axis. In general, such a design shares some commonalities with Transformer-style blocks (e.g., the typical self-attention block and our CoT block). This is because a Transformer-style block also utilizes a 1$\times$1 pointwise convolution to transform the inputs into values, and the subsequent aggregation with the $k \times k$ local attention matrix is performed in a similar depthwise manner. Besides, for each head, the aggregation in a Transformer-style block adopts a channel sharing strategy for efficient implementation without any significant accuracy drop. This channel sharing strategy can also be interpreted as tied block convolution \cite{wang2020tied}, which shares the same filters over equal blocks of channels.
\textbf{Dynamic Region-Aware Convolution \cite{chen2020dynamic}} introduces a filter generator module (consisting of two consecutive 1$\times$1 convolutions) to learn specialized filters for region features at different spatial locations. It therefore shares a similar spirit with the attention matrix generator in our CoT block, which produces a dynamic local attention matrix for each spatial location. Nevertheless, the filter generator module in \cite{chen2020dynamic} produces the specialized filters based on the primary input feature map alone. In contrast, our attention matrix generator fully exploits the complex feature interactions between contextualized keys and queries for self-attention learning.
\textbf{Bottleneck Transformer \cite{srinivas2021bottleneck}} is a contemporary work that also aims to augment ConvNets with a self-attention mechanism by replacing 3$\times$3 convolutions with Transformer-style modules. Specifically, it adopts global multi-head self-attention layers, which are computationally more expensive than the local self-attention in our CoT block. Therefore, for the same ResNet backbone, BoT50 in \cite{srinivas2021bottleneck} only replaces the final three 3$\times$3 convolutions with Bottleneck Transformer blocks, while our CoT block can completely replace the 3$\times$3 convolutions across the whole deep architecture. In addition, our CoT block goes beyond the typical local self-attention of \cite{hu2019local,ramachandran2019stand,zhao2020exploring} by exploiting the rich contexts among input keys to strengthen self-attention learning.
\section{Experiments}
In this section, we verify and analyze the effectiveness of our Contextual Transformer Networks (CoTNet) as a backbone via empirical evaluations over multiple mainstream CV applications, ranging from image recognition, object detection, to instance segmentation. Specifically, we first undertake experiments for image recognition task on ImageNet benchmark \cite{deng2009imagenet} by training our CoTNet from scratch. Next, after pre-training CoTNet on ImageNet, we further evaluate the generalization capability of the pre-trained CoTNet when transferred to downstream tasks of object detection and instance segmentation on COCO dataset \cite{lin2014microsoft}.
\subsection{Image Recognition}
\textbf{Setup.} We conduct the image recognition task on the ImageNet dataset, which consists of 1.28 million training images and 50,000 validation images from 1,000 classes. Both the top-1 and top-5 accuracies on the validation set are reported for evaluation. For this task, we adopt two different training setups in the experiments, i.e., the default training setup and the advanced training setup.
The default training setup is the setting widely adopted for classic vision backbones (e.g., ResNet \cite{he2016deep}, ResNeXt \cite{xie2017aggregated}, and SENet \cite{hu2018squeeze}), which trains networks for around 100 epochs with standard preprocessing. Specifically, each input image is cropped to 224$\times$224, and only the standard data augmentation (i.e., random crops and horizontal flips with 50\% probability) is performed. All the hyperparameters are set as in the official implementations without any additional tuning. Our CoTNet is likewise trained in an end-to-end manner, through backpropagation using SGD with momentum 0.9 and label smoothing 0.1. We set the batch size to $B=512$, which enables a practical implementation on an 8-GPU machine. For the first five epochs, the learning rate is scaled linearly from 0 to $\frac{0.1 \cdot B}{256}$, and is subsequently decayed via a cosine schedule \cite{loshchilov2016sgdr}. As in \cite{bello2021lambdanetworks}, we adopt an exponential moving average with weight 0.9999 during training.
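The warmup-plus-cosine schedule described above can be sketched as follows (a minimal illustration at epoch granularity; the 100-epoch horizon and per-epoch stepping are assumptions):

```python
import math

def learning_rate(epoch, total_epochs=100, warmup_epochs=5,
                  base_lr=0.1, batch_size=512):
    """Linear warmup from 0 to base_lr * B / 256 over the first five epochs,
    followed by cosine decay over the remaining epochs."""
    peak = base_lr * batch_size / 256.0
    if epoch < warmup_epochs:
        return peak * epoch / warmup_epochs
    # cosine decay from peak down to zero
    t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * peak * (1.0 + math.cos(math.pi * t))
```

With $B=512$ the peak rate is $0.1 \cdot 512 / 256 = 0.2$, reached at the end of warmup and annealed to zero by the final epoch.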
For fair comparison with state-of-the-art backbones (e.g., ResNeSt \cite{zhang2020resnest}, EfficientNet \cite{tan2019efficientnet}, and LambdaNetworks \cite{bello2021lambdanetworks}), we additionally adopt an advanced training setup with longer training and improved data augmentation \& regularization. In this setup, we train our CoTNet for 350 epochs, coupled with the additional data augmentations of RandAugment \cite{cubuk2020randaugment} and mixup \cite{zhang2017mixup}, and the regularizations of dropout \cite{srivastava2014dropout} and DropConnect \cite{wan2013regularization}.
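Of the augmentations listed, mixup admits a particularly compact description: each training example becomes a convex combination of two samples and of their labels. A minimal sketch (the Beta parameter alpha=0.2 is a common choice for ImageNet, not a value specified in the text):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two (input, one-hot label) pairs with a Beta-distributed weight."""
    lam = random.betavariate(alpha, alpha)
    # both the inputs and the labels are mixed with the same coefficient
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```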
\textbf{Performance Comparison.} We compare with several state-of-the-art vision backbones under the two training settings (i.e., the default and advanced training setups) on the ImageNet dataset. The performance comparisons are summarized in Tables \ref{table:r1} and \ref{table:r2} for each training setup, respectively. Note that we construct several variants of our CoTNet and CoTNeXt with two depths (i.e., 50-layer and 101-layer), yielding CoTNet-50/101 and CoTNeXt-50/101. In the advanced training setup, as in LambdaResNet \cite{bello2021lambdanetworks}, we additionally include an upgraded version of our CoTNet, i.e., SE-CoTNetD-101, where the 3$\times$3 convolutions in the res4 and res5 stages are replaced with CoT blocks under the SE-ResNetD backbone \cite{he2019bag,bello2021revisiting}. Moreover, in the default training setup, we also report the performances of our models with an exponential moving average for fair comparison against LambdaResNet.
\begin{table}[!tb]\small
\centering
\caption{Performance comparisons with the state-of-the-art vision backbones for image recognition on ImageNet (default training setup). Models with same depth (50-layer/101-layer) are grouped for efficiency comparison. $^\star$ indicates the use of exponential moving average during training.}
\setlength{\tabcolsep}{0.5pt}
\begin{tabular}{c|c|cc|cc}
\Xhline{2\arrayrulewidth}
Backbone & Res. & Params & GFLOPs & Top-1 Acc. & Top-5 Acc. \\ \hline
ResNet-50 \cite{he2016deep} & 224 & 25.5M & 4.1 & 77.3 & 93.6 \\
Res2Net-50 \cite{gao2019res2net} & 224 & 25.7M & 4.3 & 78.0 & 93.9 \\
ResNeXt-50 \cite{xie2017aggregated} & 224 & 25.0M & 4.2 & 78.2 & 93.9 \\
SE-ResNeXt-50 \cite{hu2018squeeze} & 224 & 27.6M & 4.3 & 78.6 & 94.2 \\
LR-Net-50 \cite{hu2019local} & 224 & 23.3M & 4.3 & 77.3 & 93.6 \\
Stand-Alone$^\star$ \cite{ramachandran2019stand} & 224 & 18.0M & 3.6 & 77.6 & - \\
AA-ResNet-50 \cite{bello2019attention} & 224 & 25.8M & 4.2 & 77.7 & 93.8 \\
BoTNet-S1-50 \cite{srinivas2021bottleneck} & 224 & 20.8M & 4.3 & 77.7 & - \\
ViT-B/16 \cite{dosovitskiy2020image} & 384 & - & - & 77.9 & - \\
SAN19 \cite{zhao2020exploring} & 224 & 20.5M & 3.3 & 78.2 & 93.9 \\
LambdaResNet-50$^\star$\cite{bello2021lambdanetworks} & 224 & 15.0M & - & 78.4 & - \\ \hline
\textbf{CoTNet-50} & 224 & 22.2M & 3.3 & \textbf{79.2} & \textbf{94.5} \\
\textbf{CoTNet-50}$^\star$ & 224 & 22.2M & 3.3 & \textbf{79.8} & \textbf{94.9} \\
\textbf{CoTNeXt-50} & 224 & 30.1M & 4.3 & \textbf{79.5} & \textbf{94.5} \\
\textbf{CoTNeXt-50}$^\star$ & 224 & 30.1M & 4.3 & \textbf{80.2} & \textbf{95.1} \\
\textbf{SE-CoTNetD-50} & 224 & 23.1M & 4.1 & \textbf{79.8} & \textbf{94.7} \\
\textbf{SE-CoTNetD-50}$^\star$ & 224 & 23.1M & 4.1 & \textbf{80.5} & \textbf{95.2} \\\hline\hline
ResNet-101 \cite{he2016deep} & 224 & 44.6M & 7.9 & 78.5 & 94.2 \\
ResNeXt-101 \cite{xie2017aggregated} & 224 & 44.2M & 8.0 & 79.1 & 94.4 \\
Res2Net-101 \cite{gao2019res2net} & 224 & 45.2M & 8.1 & 79.2 & 94.4 \\
SE-ResNeXt-101 \cite{hu2018squeeze} & 224 & 49.0M & 8.0 & 79.4 & 94.6 \\
LR-Net-101 \cite{hu2019local} & 224 & 42.0M & 8.0 & 78.5 & 94.3 \\
AA-ResNet-101 \cite{bello2019attention} & 224 & 45.4M & 8.1 & 78.7 & 94.4 \\ \hline
\textbf{CoTNet-101} & 224 & 38.3M & 6.1 & \textbf{80.0} & \textbf{94.9} \\
\textbf{CoTNet-101}$^\star$ & 224 & 38.3M & 6.1 & \textbf{80.9} & \textbf{95.3} \\
\textbf{CoTNeXt-101} & 224 & 53.4M & 8.2 & \textbf{80.3} & \textbf{95.0} \\
\textbf{CoTNeXt-101}$^\star$ & 224 & 53.4M & 8.2 & \textbf{81.3} & \textbf{95.6} \\
\textbf{SE-CoTNetD-101} & 224 & 40.9M & 8.5 & \textbf{80.5} & \textbf{95.1} \\
\textbf{SE-CoTNetD-101}$^\star$ & 224 & 40.9M & 8.5 & \textbf{81.4} & \textbf{95.6} \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.26in}
\label{table:r1}
\end{table}
\begin{table}[!tb]\small
\centering
\caption{Performance comparisons with the state-of-the-art vision backbones for image recognition on ImageNet (advanced training setup). Models with similar top-1/top-5 accuracy are grouped for efficiency comparison.}
\setlength{\tabcolsep}{0.5pt}
\begin{tabular}{c|c|cc|cc}
\Xhline{2\arrayrulewidth}
Backbone & Res. & Params & GFLOPs & Top-1 Acc. & Top-5 Acc. \\ \hline
ResNet-50 \cite{he2016deep} & 224 & 25.5M & 4.1 & 78.3 & 94.3 \\
CoaT-Lite Mini \cite{xu2021co} & 224 & 11M & 2.0 & 78.9 & - \\
EfficientNet-B1 \cite{tan2019efficientnet} & 240 & 7.8M & 0.7 & 79.1 & 94.4 \\
SE-ResNet-50 \cite{hu2018squeeze} & 224 & 28.1M & 4.1 & 79.4 & 94.6 \\
XCiT-T24 \cite{el2021xcit} & 224 & 12.1M & 2.3 & 79.4 & - \\
EfficientNet-B2 \cite{tan2019efficientnet} & 260 & 9.2M & 1.0 & 80.1 & 94.9 \\
BoTNet-S1-50 \cite{srinivas2021bottleneck} & 224 & 20.8M & 4.3 & 80.4 & 95.0 \\
ResNeSt-50-fast \cite{zhang2020resnest} & 224 & 27.5M & 4.3 & 80.6 & - \\
ResNeSt-50 \cite{zhang2020resnest} & 224 & 27.5M & 5.4 & 81.1 & - \\
Twins-PCPVT-S \cite{chu2021twins} & 224 & 24.1M & 3.7 & 81.2 & - \\
Swin-T \cite{liu2021swin} & 224 & 28.3M & 4.5 & 81.3 & - \\ \hline
\textbf{CoTNet-50} & 224 & 22.2M & 3.3 & \textbf{81.3} & \textbf{95.6} \\
\textbf{CoTNeXt-50} & 224 & 30.1M & 4.3 & \textbf{82.1} & \textbf{95.9} \\
\textbf{SE-CoTNetD-50} & 224 & 23.1M & 4.1 & \textbf{81.6} & \textbf{95.8} \\ \hline\hline
ResNet-101 \cite{he2016deep} & 224 & 44.6M & 7.9 & 80.0 & 95.0 \\
ResNet-152 \cite{he2016deep} & 224 & 60.2M & 11.6 & 81.3 & 95.5 \\
SE-ResNet-101 \cite{hu2018squeeze} & 224 & 49.3M & 7.9 & 81.4 & 95.7 \\
TNT-S \cite{han2021transformer} & 224 & 23.8M & 5.2 & 81.5 & 95.7 \\
EfficientNet-B3 \cite{tan2019efficientnet} & 300 & 12.0M & 1.8 & 81.6 & 95.7 \\
BoTNet-S1-59 \cite{srinivas2021bottleneck} & 224 & 33.5M & 7.3 & 81.7 & 95.8 \\
CoaT-Lite Small \cite{xu2021co} & 224 & 19.8M & 4.0 & 81.9 & - \\
ResNeSt-101-fast \cite{zhang2020resnest} & 224 & 48.2M & 8.1 & 82.0 & - \\
ResNeSt-101 \cite{zhang2020resnest} & 224 & 48.3M & 10.2 & 82.3 & - \\
LambdaResNet-101\cite{bello2021lambdanetworks} & 224 & 36.9M & - & 82.3 & - \\
XCiT-S24 \cite{el2021xcit} & 224 & 47.6M & 9.1 & 82.6 & - \\
CaiT-S-24 \cite{touvron2021going} & 224 & 46.9M & 9.4 & 82.7 & - \\
Twins-PCPVT-B \cite{chu2021twins} & 224 & 56.0M & 8.3 & 82.7 & - \\ \hline
\textbf{CoTNet-101} & 224 & 38.3M & 6.1 & \textbf{82.8} & \textbf{96.2} \\
\textbf{CoTNeXt-101} & 224 & 53.4M & 8.2 & \textbf{83.2} & \textbf{96.4} \\
\textbf{SE-CoTNetD-101} & 224 & 40.9M & 8.5 & \textbf{83.2} & \textbf{96.5} \\ \hline\hline
SE-ResNet-152 \cite{hu2018squeeze} & 224 & 66.8M & 11.6 & 82.2 & 95.9 \\
ConViT-B \cite{d2021convit} & 224 & 86.5M & 16.8 & 82.4 & 95.9 \\
BoTNet-S1-110 \cite{srinivas2021bottleneck} & 224 & 54.7M & 10.9 & 82.8 & 96.3 \\
TNT-B \cite{han2021transformer} & 224 & 65.6M & 14.1 & 82.9 & 96.3 \\
XCiT-L24 \cite{el2021xcit} & 224 & 189.1M & 36.1 & 82.9 & - \\
EfficientNet-B4 \cite{tan2019efficientnet} & 380 & 19.0M & 4.2 & 82.9 & 96.4 \\
CaiT-S-36 \cite{touvron2021going} & 224 & 68.2M & 13.9 & 83.3 & - \\
Twins-PCPVT-L \cite{chu2021twins} & 224 & 99.2M & 14.8 & 83.3 & - \\
Swin-B \cite{liu2021swin} & 224 & 87.7M & 15.4 & 83.3 & - \\
BoTNet-S1-128 \cite{srinivas2021bottleneck} & 256 & 75.1M & 19.3 & 83.5 & 96.5 \\
EfficientNet-B5 \cite{tan2019efficientnet} & 456 & 30.0M & 9.9 & 83.6 & 96.7 \\ \hline
\textbf{SE-CoTNetD-152} & 224 & 55.8M & 17.0 & \textbf{84.0} & \textbf{97.0} \\ \hline\hline
SENet-350 \cite{hu2018squeeze} & 384 & 115.2M & 52.9 & 83.8 & 96.6 \\
EfficientNet-B6 \cite{tan2019efficientnet} & 528 & 43.0M & 19.0 & 84.0 & 96.8 \\
BoTNet-S1-128 \cite{srinivas2021bottleneck} & 320 & 75.1M & 30.9 & 84.2 & 96.9 \\
Swin-B \cite{liu2021swin} & 384 & 87.7M & 47.0 & 84.2 & - \\
EfficientNet-B7 \cite{tan2019efficientnet} & 600 & 66.0M & 37.0 & 84.3 & 97.0 \\ \hline
\textbf{SE-CoTNetD-152} & 320 & 55.8M & 26.5 & \textbf{84.6} & \textbf{97.1} \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.26in}
\label{table:r2}
\end{table}
As shown in Table \ref{table:r1}, under the same depth (50-layer or 101-layer), the results across both top-1 and top-5 accuracy consistently indicate that our CoTNet-50/101 and CoTNeXt-50/101 achieve better performance than existing vision backbones with a favorable parameter budget, including both ConvNets (e.g., ResNet-50/101 and ResNeXt-50/101) and attention-based models (e.g., Stand-Alone and AA-ResNet-50/101). The results generally highlight the key advantage of exploiting contextual information among keys in self-attention learning for the visual recognition task. Specifically, under the same 50-layer backbones, by exploiting local self-attention in the deep architecture, LR-Net-50 and Stand-Alone exhibit better performance than ResNet-50, which ignores long-range feature interactions. Next, AA-ResNet-50 and LambdaResNet-50 enable the exploration of global self-attention over the whole feature map and thereby boost performance. However, the performance of AA-ResNet-50 and LambdaResNet-50 is still lower than that of the stronger ConvNet (SE-ResNeXt-50), which strengthens the capacity of the visual representation with channel-wise feature re-calibration. Furthermore, by fully replacing the 3$\times$3 convolutions with CoT blocks across the entire deep architecture of ResNet-50/ResNeXt-50, CoTNet-50 and CoTNeXt-50 outperform SE-ResNeXt-50. This confirms that unifying context mining among keys and self-attention learning in a single architecture is an effective way to enhance representation learning and thus boost visual recognition.
When additionally using an exponential moving average as in LambdaResNet, the top-1 accuracy of CoTNeXt-50/101 is further improved to 80.2\% and 81.3\%, respectively, which is, to date, the best published performance on ImageNet under the default training setup.
Similar observations are attained in the advanced training setup, as summarized in Table \ref{table:r2}. Note that here we group all the baselines with similar top-1/top-5 accuracy or network depth. In general, our CoTNet-50 \& CoTNeXt-50 and CoTNet-101 \& CoTNeXt-101 perform consistently better than the other vision backbones across both metrics in each group. In particular, the top-1 accuracy of our CoTNeXt-50 and CoTNeXt-101 reaches 82.1\% and 83.2\%, an absolute improvement of 1.0\% and 0.9\% over the best competitors (ResNeSt-50 and ResNeSt-101/LambdaResNet-101), respectively. More specifically, the attention-based backbones (BoTNet-S1-50 and BoTNet-S1-59) exhibit better performance than ResNet-50 and ResNet-101 by replacing the final three 3$\times$3 convolutions in ResNet with global self-attention layers. LambdaResNet-101 further boosts performance by leveraging computationally efficient global self-attention layers (i.e., Lambda layers) to replace the convolutional layers. Nevertheless, LambdaResNet-101 is inferior to CoTNeXt-101, which capitalizes on the contextual information among input keys to guide self-attention learning. Even under the heavy setting with deeper networks, our SE-CoTNetD-152 (320) still manages to outperform the strong backbones BoTNet-S1-128 (320) and EfficientNet-B7 (600), while sharing similar (even smaller) FLOPs with BoTNet-S1-128 (320).
\begin{figure}[!tb]
\vspace{-0.10in}
\centering {\includegraphics[width=0.4\textwidth]{default_inference_time.pdf}}
\vspace{-0.10in}
\caption{Inference Time vs. Accuracy Curve on ImageNet (default training setup).}
\vspace{-0.10in}
\label{fig:inference_time}
\end{figure}
\begin{figure}[!tb]
\vspace{-0.10in}
\centering {\includegraphics[width=0.4\textwidth]{advanced_inference_time.pdf}}
\vspace{-0.10in}
\caption{Inference Time vs. Accuracy Curve on ImageNet (advanced training setup).}
\vspace{-0.10in}
\label{fig:inference_time_advance}
\end{figure}
\begin{table}[!tb]\small
\centering
\caption{Performance comparisons across different ways of exploiting contextual information, i.e., using only static context (\textbf{Static Context}), using only dynamic context (\textbf{Dynamic Context}), linearly fusing the static and dynamic contexts (\textbf{Linear Fusion}), and the full version of the CoT block. The backbone is CoTNet-50 and we adopt the default setup for training on ImageNet.}
\setlength{\tabcolsep}{3.5pt}
\begin{tabular}{c|c|c|c|c}
\Xhline{2\arrayrulewidth}
& Params & GFLOPs & Top-1 Acc. & Top-5 Acc. \\ \hline
Static Context & 17.1M & 2.7 & 77.1 & 93.5 \\
Dynamic Context & 20.3M & 3.3 & 78.5 & 94.1 \\
Linear Fusion & 20.3M & 3.3 & 78.7 & 94.2 \\ \hline
CoT & 22.2M & 3.3 & 79.2 & 94.5 \\ \Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.22in}
\label{table:as}
\end{table}
\textbf{Inference Time vs. Accuracy.}
Here we evaluate our CoTNet models with regard to both inference time and top-1 accuracy on the image recognition task. Figures \ref{fig:inference_time} and \ref{fig:inference_time_advance} show the inference time-accuracy curves under the default and advanced training setups for our CoTNet and the state-of-the-art vision backbones. As shown in the two figures, our CoTNet models consistently obtain better top-1 accuracy with less inference time than the other vision backbones across both training setups. In short, our CoTNet models achieve a better inference time-accuracy trade-off than existing vision backbones. More remarkably, compared to the high-quality EfficientNet-B6 backbone, our SE-CoTNetD-152 (320) achieves 0.6\% higher top-1 accuracy while running 2.75$\times$ faster at inference.
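Throughput numbers of the kind plotted in these curves are typically obtained by timing repeated forward passes over a fixed batch; a minimal, framework-agnostic harness (illustrative only, not the measurement protocol used here):

```python
import time

def throughput(fn, batch, n_iters=10):
    """Measure examples/second for a batch-processing callable `fn`."""
    fn(batch)                              # one warm-up call, excluded from timing
    start = time.perf_counter()
    for _ in range(n_iters):
        fn(batch)
    elapsed = time.perf_counter() - start
    return n_iters * len(batch) / elapsed  # examples per second
```

In practice one would also synchronize the GPU before reading the clock and average over many more iterations.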
\begin{table*}[!tb]\small
\centering
\caption{Effect of utilizing different replacement settings on the four stages (\textbf{res2}$\rightarrow$\textbf{res3}$\rightarrow$\textbf{res4}$\rightarrow$\textbf{res5}) in the basic backbone of ResNet-50 and two widely adopted architecture changes, ResNet-D \cite{he2019bag} and Squeeze-and-Excitation \cite{hu2018squeeze} (\textbf{D-SE}). $\checkmark$ denotes the stage is replaced with our CoT blocks. $\star$ denotes the use of architecture changes (D-SE). We adopt the default setup for training on ImageNet.}
\setlength{\tabcolsep}{3.8pt}
\begin{tabular}{c|ccccc|c|c|c|c|c}
\Xhline{2\arrayrulewidth}
& res2 & res3 & res4 & res5 & D-SE & Params & GFLOPs & Infer & Top-1 Acc. & Top-5 Acc. \\ \hline
ResNet-50 & & & & & & 25.5M & 4.1 & 508 ex/s & 77.3 & 93.6 \\ \hline
\multirow{4}{*}{CoTNet-50}
& & & & $\checkmark$ & & 23.5M & 4.0 & 491 ex/s & 78.5 & 94.1 \\
& & & $\checkmark$ & $\checkmark$ & & 22.4M & 3.7 & 443 ex/s & 79.0 & 94.3 \\
& & $\checkmark$ & $\checkmark$ & $\checkmark$ & & 22.3M & 3.4 & 390 ex/s & 79.0 & 94.4 \\
& $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & & 22.2M & 3.3 & 331 ex/s & 79.2 & 94.5 \\\hline
SE-ResNetD-50 & & & & & $\star$ & 35.7M & 4.4 & 444 ex/s & 79.1 & 94.5 \\
SE-CoTNetD-50 & & & $\checkmark$ & $\checkmark$ & $\star$ & 23.1M & 4.1 & 414 ex/s & 79.8 & 94.7 \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.22in}
\label{table:rs}
\end{table*}
\textbf{Ablation Study.}
Here we investigate how each design choice in our CoT block influences the overall performance of CoTNet-50. In the CoT block, we first mine the static context among keys via a 3$\times$3 convolution. Conditioned on the concatenation of the query and the contextualized key, we then obtain the dynamic context via self-attention. The CoT block dynamically fuses the static and dynamic contexts as the final output. Here we include one variant of the CoT block that directly sums the two kinds of contexts, named Linear Fusion.
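The difference between Linear Fusion and the full block's attention-based fusion can be illustrated on per-channel context values (a toy sketch, not the block's actual implementation; the fusion weights, which the block would predict from the features themselves, are plain inputs here):

```python
import math

def linear_fusion(static, dynamic):
    """Ablation variant: direct summation of the two contexts."""
    return [s + d for s, d in zip(static, dynamic)]

def attention_fusion(static, dynamic, w_static, w_dynamic):
    """CoT-style dynamic fusion: per-channel softmax weights over
    the two contexts (weights are hypothetical inputs here)."""
    out = []
    for s, d, ws, wd in zip(static, dynamic, w_static, w_dynamic):
        es, ed = math.exp(ws), math.exp(wd)
        a = es / (es + ed)          # attention put on the static context
        out.append(a * s + (1 - a) * d)
    return out
```

With equal weights the attention fusion reduces to averaging; learned weights let the network emphasize whichever context is more informative per channel.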
Table \ref{table:as} details the performances across different ways of exploiting contextual information in the CoTNet-50 backbone. Solely using the static context (Static Context) achieves 77.1\% top-1 accuracy, which can be interpreted as a kind of ConvNet without self-attention. By directly exploiting the dynamic context via self-attention, Dynamic Context exhibits better performance. The linear fusion of the static and dynamic contexts lifts the top-1 accuracy to 78.7\%, which validates the complementarity of the two contexts. The full CoT block further benefits from the attention-based dynamic fusion, finally reaching 79.2\% top-1 accuracy.
\textbf{Effect of Replacement Settings.}
To show the relationship between performance and the number of stages replaced with our CoT blocks, we progressively replace the stages of the ResNet-50 backbone with CoT blocks (res2$\rightarrow$res3$\rightarrow$res4$\rightarrow$res5) and compare the resulting performances. The results in Table \ref{table:rs} indicate that increasing the number of stages replaced with CoT blocks generally leads to performance improvement, while the number of parameters and FLOPs slightly decrease. Taking a closer look at the throughputs and accuracies of the different replacement settings, the replacement of CoT blocks in the last two stages (res4 and res5) contributes most of the performance boost. Additionally replacing the first two stages (res2 and res3) leads to only a marginal improvement (0.2\% top-1 accuracy in total), while requiring 1.34$\times$ the inference time. Therefore, to seek a better speed-accuracy trade-off, we follow \cite{bello2021lambdanetworks} and construct an upgraded version of our CoTNet, named SE-CoTNetD-50, where only the 3$\times$3 convolutions in the res4 and res5 stages are replaced with CoT blocks under the SE-ResNetD-50 backbone. Note that the SE-ResNetD-50 backbone is a variant of ResNet-50 with two widely adopted architecture changes (ResNet-D \cite{he2019bag} and Squeeze-and-Excitation in all bottleneck blocks \cite{hu2018squeeze}). As shown in Table \ref{table:rs}, compared to its SE-ResNetD-50 counterpart, our SE-CoTNetD-50 achieves better performance with a virtually negligible decrease in throughput.
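The replacement settings studied above amount to choosing, stage by stage, whether the 3$\times$3 operator is a standard convolution or a CoT block; a small configuration sketch (stage names follow the table; the helper itself is hypothetical):

```python
def block_types(replace_from="res4"):
    """Return the 3x3 operator used in each ResNet stage when CoT blocks
    replace convolutions from `replace_from` onward (e.g. the SE-CoTNetD-50
    setting replaces only res4 and res5)."""
    stages = ["res2", "res3", "res4", "res5"]
    start = stages.index(replace_from)
    return {s: ("cot" if i >= start else "conv") for i, s in enumerate(stages)}
```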
\subsection{Object Detection}
\textbf{Setup.} We next evaluate the pre-trained CoTNet on the downstream task of object detection on the COCO dataset. For this task, we adopt Faster-RCNN \cite{ren2016faster,ren2015faster} and Cascade-RCNN \cite{cai2018cascade} as the base object detectors, and directly replace the vanilla ResNet backbone with our CoTNet. Following the standard setting in \cite{xie2017aggregated}, we train all models on the COCO-2017 training set ($\sim$118K images) and evaluate them on the COCO-2017 validation set (5K images). The standard single-scale AP metric is adopted for evaluation. During training, the size of the shorter side of each input image is sampled from the range [640, 800]. All models are trained with FPN \cite{lin2017feature} and synchronized batch normalization \cite{zhang2018context}. We utilize the 1x learning rate schedule for training. For fair comparison with other vision backbones on this task, we set all hyperparameters and detection heads as in \cite{zhang2020resnest}.
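The multi-scale training rule above (shorter side sampled from [640, 800], aspect ratio preserved) can be sketched as:

```python
import random

def resize_shape(height, width, short_min=640, short_max=800):
    """Sample a target shorter-side length and rescale both dimensions
    so that the aspect ratio is preserved."""
    target = random.randint(short_min, short_max)
    scale = target / min(height, width)
    return round(height * scale), round(width * scale)
```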
\textbf{Performance Comparison.} Table \ref{table:od} summarizes the performance comparisons on COCO for object detection with Faster-RCNN and Cascade-RCNN using different pre-trained backbones. We group the vision backbones with the same network depth (50-layer/101-layer). Our pre-trained CoTNet models (CoTNet-50/101 and CoTNeXt-50/101) exhibit a clear performance boost over the ConvNet backbones (ResNet-50/101 and ResNeSt-50/101) at each network depth, across all IoU thresholds and object sizes. The results demonstrate the advantage of integrating self-attention learning with contextual information mining in CoTNet, even when transferred to the downstream task of object detection.
\begin{table}[!tb]\scriptsize
\centering
\caption{Performance comparisons with the state-of-the-art vision backbones on the downstream task of object detection (Base detectors: Faster-RCNN and Cascade-RCNN). Average Precision (AP) is reported at different IoU thresholds and for three different object sizes: small, medium, large (s/m/l).}
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}{c|c|ccc|ccc}
\Xhline{2\arrayrulewidth}
& Backbone & $AP$ & $AP_{50}$ & $AP_{75}$ & ${AP_s}$ & $AP_m$ & $AP_{l}$ \\ \hline
\multirow{10}{*}{\rotatebox[origin=c]{90}{Faster-RCNN}}
& ResNet-50 \cite{he2016deep} & 39.34 & 59.47 & 42.76 & 23.57 & 42.42 & 51.30 \\
& ResNeXt-50 \cite{xie2017aggregated} & 41.31 & 62.23 & 44.91 & 25.33 & 44.52 & 53.20 \\
& ResNeSt-50 \cite{zhang2020resnest} & 42.39 & 63.73 & 46.02 & 26.25 & 45.88 & 54.24 \\
& CoTNet-50 & \textbf{43.50} & \textbf{64.84} & \textbf{47.53} & \textbf{26.36} & \textbf{47.54} & \textbf{56.49} \\
& CoTNeXt-50 & \textbf{44.06} & \textbf{65.76} & \textbf{47.65} & \textbf{27.08} & \textbf{47.70} & \textbf{57.21} \\ \cline{2-8}
& ResNet-101 \cite{he2016deep} & 41.46 & 61.99 & 45.38 & 25.31 & 44.75 & 54.62 \\
& ResNeXt-101 \cite{xie2017aggregated} & 42.91 & 63.77 & 46.89 & 25.96 & 46.42 & 55.47 \\
& ResNeSt-101 \cite{zhang2020resnest} & 44.13 & 61.91 & 47.67 & 26.02 & 47.69 & 57.48 \\
& CoTNet-101 & \textbf{45.35} & \textbf{66.80} & \textbf{49.18} & \textbf{28.65} & \textbf{49.47} & \textbf{58.82} \\
& CoTNeXt-101 & \textbf{46.10} & \textbf{67.50} & \textbf{50.22} & \textbf{29.44} & \textbf{49.84} & \textbf{59.26} \\ \hline\hline
\multirow{10}{*}{\rotatebox[origin=c]{90}{Cascade-RCNN}}
& ResNet-50 \cite{he2016deep} & 42.45 & 59.76 & 46.09 & 24.90 & 45.64 & 55.86 \\
& ResNeXt-50 \cite{xie2017aggregated} & 44.53 & 62.45 & 48.38 & 27.29 & 48.01 & 57.87 \\
& ResNeSt-50 \cite{zhang2020resnest} & 45.41 & 63.92 & 48.70 & 28.77 & 48.69 & 58.43 \\
& CoTNet-50 & \textbf{46.11} & \textbf{64.68} & \textbf{49.75} & 28.71 & \textbf{49.76} & \textbf{60.28} \\
& CoTNeXt-50 & \textbf{46.79} & \textbf{65.54} & \textbf{50.53} & \textbf{29.74} & \textbf{50.49} & \textbf{61.04} \\ \cline{2-8}
& ResNet-101 \cite{he2016deep} & 44.13 & 61.91 & 47.67 & 26.02 & 47.69 & 57.48 \\
& ResNeXt-101 \cite{xie2017aggregated} & 45.83 & 63.61 & 49.89 & 27.75 & 49.53 & 59.14 \\
& ResNeSt-101 \cite{zhang2020resnest} & 47.51 & 66.06 & 51.35 & 30.25 & 50.96 & 61.23 \\
& CoTNet-101 & \textbf{48.19} & \textbf{67.00} & \textbf{52.17} & 30.00 & \textbf{52.32} & \textbf{62.87} \\
& CoTNeXt-101 & \textbf{49.02} & \textbf{67.67} & \textbf{53.03} & \textbf{31.44} & \textbf{52.95} & \textbf{63.17} \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.22in}
\label{table:od}
\end{table}
\subsection{Instance Segmentation}
\textbf{Setup.} Here we evaluate the pre-trained CoTNet on another downstream task, instance segmentation, on the COCO dataset. This task goes beyond the box-level understanding of object detection by additionally predicting the mask of each detected object, pursuing a pixel-level understanding of the visual content. Specifically, Mask-RCNN \cite{he2017mask,he2018mask} and Cascade-Mask-RCNN \cite{cai2018cascade} are utilized as the base models for instance segmentation. In the experiments, we replace the vanilla ResNet backbone in Mask-RCNN with our CoTNet. As before, all models are trained with FPN and synchronized batch normalization. We adopt the 1x learning rate schedule during training, and all other hyperparameters are set as in \cite{zhang2020resnest}. For evaluation, we report the standard COCO metrics, including both bounding box and mask AP (${AP}^{bb}$ and ${AP}^{mk}$).
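The AP metrics reported below threshold the intersection-over-union (IoU) between predictions and ground truth; for boxes, the IoU is computed as (a standard formula, included for completeness):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # intersection area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)        # intersection over union
```

Mask IoU is defined analogously on pixel sets; ${AP}_{50}$ and ${AP}_{75}$ count a detection as correct when the IoU exceeds 0.5 and 0.75, respectively.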
\begin{table}[!tb]\scriptsize
\centering
\caption{Performance comparisons with the state-of-the-art vision backbones on the downstream task of instance segmentation (Base models: Mask-RCNN and Cascade-Mask-RCNN). The bounding box and mask Average Precision (${AP}^{bb}$, ${AP}^{mk}$) are reported at different IoU thresholds. Note that BoTNet-50/101 is fine-tuned with a larger input size (1024$\times$1024) and more training epochs (36).}
\setlength{\tabcolsep}{1.8pt}
\begin{tabular}{c|c|ccc|ccc}
\Xhline{2\arrayrulewidth}
& Backbone & $AP^{bb}$ & $AP^{bb}_{50}$ & $AP^{bb}_{75}$ & $AP^{mk}$ & $AP^{mk}_{50}$ & $AP^{mk}_{75}$ \\ \hline
\multirow{12}{*}{\rotatebox[origin=c]{90}{Mask-RCNN}}
& ResNet-50 \cite{he2016deep} & 39.97 & 60.19 & 43.73 & 36.05 & 57.02 & 38.54 \\
& ResNeXt-50 \cite{xie2017aggregated} & 41.74 & 62.32 & 45.60 & 37.41 & 59.24 & 39.98 \\
& ResNeSt-50 \cite{zhang2020resnest} & 42.81 & 63.93 & 46.85 & 38.14 & 60.54 & 40.69 \\
& BoTNet-50 \cite{srinivas2021bottleneck} & 43.6 & 65.3 & 47.6 & 38.9 & 62.5 & 41.3 \\
& CoTNet-50 & \textbf{44.06} & 64.99 & \textbf{48.29} & \textbf{39.28} & 62.12 & \textbf{42.17} \\
& CoTNeXt-50 & \textbf{44.47} & \textbf{65.74} & \textbf{48.71} & \textbf{39.62} & \textbf{62.70} & \textbf{42.35} \\ \cline{2-8}
& ResNet-101 \cite{he2016deep} & 41.78 & 61.90 & 45.80 & 37.50 & 58.78 & 40.21 \\
& ResNeXt-101 \cite{xie2017aggregated} & 43.25 & 63.61 & 47.23 & 38.60 & 60.74 & 41.37 \\
& ResNeSt-101 \cite{zhang2020resnest} & 45.75 & 66.88 & 49.75 & 40.65 & 63.76 & 43.68 \\
& BoTNet-101 \cite{srinivas2021bottleneck} & 45.5 & - & - & 40.4 & - & - \\
& CoTNet-101 & \textbf{46.17} & \textbf{67.17} & \textbf{50.63} & \textbf{40.86} & \textbf{64.18} & 43.64 \\
& CoTNeXt-101 & \textbf{46.66} & \textbf{67.70} & \textbf{50.90} & \textbf{41.21} & \textbf{64.45} & \textbf{44.27} \\ \hline\hline
\multirow{10}{*}{\rotatebox[origin=c]{90}{Cascade-Mask-RCNN}}
& ResNet-50 \cite{he2016deep} & 43.06 & 60.29 & 46.55 & 37.19 & 57.61 & 40.01 \\
& ResNeXt-50 \cite{xie2017aggregated} & 44.91 & 62.66 & 48.80 & 38.57 & 59.83 & 41.59 \\
& ResNeSt-50 \cite{zhang2020resnest} & 46.23 & 64.62 & 50.15 & 39.64 & 61.86 & 42.88 \\
& CoTNet-50 & \textbf{46.94} & \textbf{65.36} & \textbf{50.69} & \textbf{40.25} & \textbf{62.37} & \textbf{43.38} \\
& CoTNeXt-50 & \textbf{47.63} & \textbf{65.93} & \textbf{51.64} & \textbf{40.76} & \textbf{63.32} & \textbf{44.01} \\ \cline{2-8}
& ResNet-101 \cite{he2016deep} & 44.79 & 62.31 & 48.46 & 38.51 & 59.33 & 41.53 \\
& ResNeXt-101 \cite{xie2017aggregated} & 46.24 & 64.01 & 49.92 & 39.77 & 61.19 & 43.06 \\
& ResNeSt-101 \cite{zhang2020resnest} & 48.44 & 66.80 & 52.60 & 41.52 & 64.03 & 45.02 \\
& CoTNet-101 & \textbf{48.97} & \textbf{67.42} & \textbf{53.10} & \textbf{41.98} & \textbf{64.81} & \textbf{45.39} \\
& CoTNeXt-101 & \textbf{49.35} & \textbf{67.88} & \textbf{53.53} & \textbf{42.20} & \textbf{65.00} & \textbf{45.69} \\
\Xhline{2\arrayrulewidth}
\end{tabular}
\vspace{-0.22in}
\label{table:is}
\end{table}
\textbf{Performance Comparison.}
Table \ref{table:is} details the performances of Mask-RCNN with different pre-trained vision backbones for the downstream task of instance segmentation on COCO. Similar to the observations for the object detection task, our pre-trained CoTNet models yield consistent gains over both the ConvNet backbones (ResNet-50/101 and ResNeSt-50/101) and the attention-based model (BoTNet-50/101) at most IoU thresholds. This generally highlights the generalizability of our CoTNet for the challenging instance segmentation task. In particular, BoTNet-50 achieves better performance than the best ConvNet (ResNeSt-50). This might be attributed to the additional modeling of global self-attention in BoTNet plus the more advanced fine-tuning setup with a larger input size (1024$\times$1024) and more training epochs (36). However, by uniquely exploiting the contextual information among neighboring keys for self-attention learning, our CoTNet-50 manages to achieve performance boosts on most metrics, even when fine-tuned with a smaller input size and fewer epochs (12). The results again confirm the merit of simultaneously performing context mining and self-attention learning in our CoTNet for visual representation learning.
\section{Conclusions}
In this work, we propose a new Transformer-style architecture, termed the Contextual Transformer (CoT) block, which exploits the contextual information among input keys to guide self-attention learning. The CoT block first captures the static context among neighboring keys, which is further leveraged to trigger the self-attention that mines the dynamic context. Such a design elegantly unifies context mining and self-attention learning in a single architecture, thereby strengthening the capacity of the visual representation. Our CoT block can readily replace standard convolutions in existing ResNet architectures while retaining a favorable parameter budget. To verify our claim, we construct Contextual Transformer Networks (CoTNet) by replacing the 3$\times$3 convolutions in ResNet-style architectures (e.g., ResNet or ResNeXt). The CoTNet architectures trained on ImageNet validate our proposal and analysis. Experiments conducted on COCO in the context of object detection and instance segmentation further demonstrate the generalizability of the visual representations pre-trained by our CoTNet.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The most popular paradigm for the origin of dark matter (DM) in the Universe is the {\em thermal freeze-out}.
In that scenario, the dark matter particle with mass $M_X$ annihilates into matter with a cross section
$\left\langle \sigma \, v \right\rangle_{\rm thermal} \sim 3 \times 10^{-26} {\rm cm}^3/{\rm s}$.
This ensures that dark matter is in thermal equilibrium with the rest of the plasma in the early universe
while $T \gtrsim M_X$, but decouples when $T \sim M_X/20$, leaving the relic abundance in agreement with
the value $\Omega_{X} = 0.228 \pm 0.027$ measured by WMAP \cite{Komatsu:2010fb}.
Incidentally, $\left\langle \sigma \, v \right\rangle_{\rm thermal}$ is a generic cross section for a particle with a weak-scale
mass interacting with order-one couplings, a fact referred to as the {\em WIMP miracle}.
In spite of these attractive features,
non-thermal mechanisms of dark matter production have also received considerable attention.
Examples include right-handed neutrinos produced by oscillations \cite{Dodelson:1993je},
axions produced by vacuum misalignment \cite{Preskill:1982cy}, winos produced from moduli
decays \cite{Moroi:1999zb}, and super-massive dark matter ({\em WIMP-zillas}) produced
during reheating after inflation \cite{Wimpzillas}. These studies allow one to recognize a
wider range of possible collider and astrophysical signals of dark matter than what would result from
the thermal WIMP scenario.
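As a rough check on the numbers quoted above, the standard WIMP rule of thumb relates the relic density to the annihilation cross section as $\Omega_X h^2 \sim 3 \times 10^{-27}\,{\rm cm}^3/{\rm s} \, / \left\langle \sigma v \right\rangle$; a small numerical sketch (order-of-magnitude only; a precise result requires solving the Boltzmann equation):

```python
def relic_density(sigma_v_cm3_per_s):
    """Standard WIMP rule of thumb: Omega_X * h^2 ~ 3e-27 cm^3/s / <sigma v>.
    Order-of-magnitude estimate only."""
    return 3e-27 / sigma_v_cm3_per_s

def freeze_out_temperature(m_x_gev):
    """Freeze-out at roughly T ~ M_X / 20, as quoted in the text."""
    return m_x_gev / 20.0
```

For the thermal value $\left\langle \sigma v \right\rangle \sim 3 \times 10^{-26}\,{\rm cm}^3/{\rm s}$ this gives $\Omega_X h^2 \sim 0.1$, in the right ballpark of the measured abundance; a multi-TeV particle ($M_X \gtrsim 2$ TeV) freezes out at $T_{\mathrm{FO}} \gtrsim 100$ GeV, i.e., at or above $T_{\mathrm{EW}}$, which is the regime exploited later in the paper.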
In this paper we study the possibility of non-thermal dark matter production during a
first-order electroweak (EW) phase transition. Bubble collisions at the end of the EW phase
transition may give rise to abundant non-thermal particle production when a sizable amount
of the energy budget of the transition is stored in the bubble walls, possibly leading to
new and appealing scenarios. Many models of dark matter contain a direct coupling between the Higgs
and the dark matter candidate fields (MSSM and its extensions, Little Higgs theories
with T-parity and DM extensions of the standard model (SM) via the Higgs portal, to name a few).
It is thus reasonable to expect that dark matter may be abundantly produced non-thermally at the end of
a first-order EW phase transition. Note that, much like in the thermal WIMP case, dark matter would then be a particle
with $M_X \sim 10 \, \mathrm{GeV} - 10 \, \mathrm{TeV}$ with significant coupling to the SM, thus being
within reach of colliders and DM direct detection experiments.
There is, however, one generic problem with this scenario. Since the temperature of the Universe right after the EW phase transition is
$T_{\mathrm{EW}} \sim 50-100$ GeV (for strong transitions $T_{\mathrm{EW}}$ may be somewhat lower
than $100$ GeV), thermalization will typically lead to a wash-out of the non-thermal abundance, thus rendering the particle production
at the EW phase transition irrelevant for the subsequent evolution of the Universe.
The wash-out process can nevertheless be avoided in a number of ways,
each resulting in a scenario where non-thermal dark matter production is (fully or partially) responsible for the
observed dark matter relic density. One possibility, recently outlined in \cite{Konstandin:2011dr}, is to allow for a few e-foldings
of inflation prior to the beginning of the transition (which can happen for very strong EW phase transitions),
diluting the plasma and leaving the Universe partially empty. If the reheating temperature
after the phase transition is low, $T_{\mathrm{RH}} \ll 100$ GeV, it may be possible for a dark
matter candidate with weak couplings to the Higgs field and mass $M_X \sim 100$ GeV to remain
out of thermal equilibrium after the EW phase transition.
In this paper we investigate other scenarios allowing for a survival of the non-thermal abundance.
One possibility corresponds to the case of relatively heavy (multi-TeV) dark matter: for $M_X \gtrsim 1$~TeV, dark matter
will be very non-relativistic when the EW phase transition takes place, and the decoupling/freeze-out
temperature $T_{\mathrm{FO}}$ will satisfy $T_{\mathrm{FO}} \sim M_X / 20 \gtrsim T_{\mathrm{EW}}$.
Then, heavy dark matter produced non-thermally through bubble collisions may remain out of thermal equilibrium
after the EW phase transition (or at least wash-out will be partially avoided).
Another possibility is that bubble collisions produce super-heavy dark matter, $M_X \sim 10^6$-$10^8$ GeV,
a scenario we call \textit{``baby-zillas"}.
We argue this may be possible for a very strong EW phase transition and dark matter with a large coupling to the Higgs.
In order for baby-zillas with $M_X \gg v_{\mathrm{EW}}$ to be a viable dark matter candidate,
they must have never reached thermal equilibrium in the early universe after inflation, since
otherwise they would have over-closed the universe.
This sets a relatively low upper bound on the reheating temperature after inflation in that scenario.
Finally, asymmetric dark matter production might make it possible to avoid complete wash-out
of the non-thermal abundance through thermalization after the EW phase transition.
The paper is organized as follows: in Section~2 we review the formalism
that describes particle production at the end of the EW phase transition for the case
of very elastic bubble collisions \cite{Hawking:1982ga,Watkins:1991zt} and extend it to the
case of very inelastic ones, highlighting the differences between
both scenarios \cite{KS}. Then, in Section~3 we explicitly
compute the particle production efficiency of scalar, fermion, and vector boson particles
coupled to the
Higgs (either directly or via an effective Higgs portal).
In Sections~4~and~5 we focus on dark matter production at the end of the EW phase transition.
First we discuss in Section~4 the conditions for non-thermally
produced dark matter to avoid subsequent
wash-out and constitute the bulk of the present dark matter density, selecting heavy
(multi-TeV) vector boson dark matter as a viable example.
We go on to analyze in detail non-thermal dark matter production in that scenario and the subsequent evolution
of the non-thermally generated abundance after the EW phase transition, including finally
a discussion on the current XENON100 bounds and direct detection prospects.
Then, in Section~5 we study the non-thermal production of very heavy ($M_X \gg v_{\mathrm{EW}}$) vector boson
dark matter, and outline the conditions under which these {\em baby-zillas} constitute a viable dark matter
candidate.
In the case of asymmetric non-thermal dark matter production, we find it difficult to avoid subsequent wash-out, and the discussion is left for an appendix.
We summarize our results and conclude in Section 6.
\section{Particle Production at the EW Phase Transition}
\subsection{Bubble Collisions in the EW Phase Transition}
\label{section21}
If the early Universe was hotter than $T_{\rm EW} \sim 100$ GeV it must have undergone an EW
phase transition at some point in its history. Within the SM, the EW phase transition is a smooth cross-over;
however, it is conceivable that new degrees of freedom beyond the SM modify the Higgs potential so as to
make the transition first order. This is what we assume throughout this paper, without specifying the
full theory that makes the first order transition possible. In that case, the EW phase transition
proceeded through nucleation and expansion of bubbles of true Higgs vacuum, which eventually
collided with each other, completing the transition. As this was happening during the
radiation dominated era, the bubble expansion process would then have taken place in a thermal
environment (except for the case when a period of inflation would have preceded the phase transition).
For a first order phase transition occurring in a thermal environment, the study of the bubble expansion
process reveals that the thermal plasma exerts some amount of friction on the expanding bubble wall,
and this friction tends to balance the pressure difference on the bubble wall driving the expansion.
In the usual picture, nucleated bubbles reach
a stationary state after a very short period of acceleration, with a constant wall velocity
depending specifically on the interactions of the bubble wall with the degrees of freedom in the
plasma \cite{Friction1,Friction2} and on the resulting
fluid dynamics \cite{Steinhardt:1981ct,Gyulassy:1983rq,Laine:1993ey} (see \cite{EKNS} for a review).
In this case, the amount of energy stored in the bubble walls at the time of the bubble collisions
is negligible compared to the available energy of the transition, since most of this available energy gets
converted into plasma bulk motion and thermal energy \cite{Kamionkowski:1993fg}.
However, this picture was recently challenged in \cite{BM}, where it was shown that the friction
exerted by the plasma may saturate to a finite value for ultrarelativistic bubble walls. As a consequence,
the stationary-state assumption no longer holds when the pressure difference across the bubble wall exceeds the friction
saturation value, which may happen for strongly first-order phase transitions. In this scenario, if there
are no hydrodynamic obstacles that prevent the bubble walls
from becoming highly relativistic in the first place (see however \cite{KN}), bubbles will expand in an accelerated way
(the so-called {\em runaway bubbles}), with almost all the energy of the transition being used to accelerate the bubble
walls\footnote[1]{This situation may also arise if, under very specific circumstances, a few e-foldings of inflation
are achieved prior to the beginning of the EW phase transition (see \cite{Konstandin:2011dr} for a natural realization
of this scenario), diluting the plasma
and leaving the Universe mostly empty. In this case the expansion of the bubbles effectively takes
place in vacuum, and the nucleated bubbles expand in an accelerated way due to the absence of friction.}\cite{EKNS}.
By the end of the phase transition (when bubbles start colliding), these runaway bubbles may reach very large values of $\gamma_w$:
\begin{equation}
\label{eq:gammaw}
\gamma_w \lesssim \gamma^{\mathrm{max}}_w \sim \frac{\beta^{-1}}{H^{-1}}\,
\frac{M_{\mathrm{pl}}}{v_T} \sim 10^{15}\, ,
\end{equation}
\noindent with $v_{T}$ the value of the Higgs VEV in the broken phase and
$\beta^{-1} \sim \left( 10^{-3} - 10^{-2} \right) H^{-1}$ being the duration of the phase
transition \cite{Hogan}.
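As a quick numerical sanity check of this estimate (a sketch with illustrative inputs: $\beta^{-1} = 10^{-2}\, H^{-1}$ at the upper end of the quoted range, $v_T = 100$ GeV and $M_{\mathrm{pl}} \simeq 1.22 \times 10^{19}$ GeV), one indeed finds $\gamma^{\mathrm{max}}_w$ of order $10^{15}$:

```python
# Rough numerical evaluation of the gamma_w^max estimate,
# gamma_max ~ (beta^-1 / H^-1) * (M_pl / v_T).
# Inputs are illustrative assumptions, not fitted values.
M_pl = 1.22e19        # Planck mass in GeV
v_T = 100.0           # Higgs VEV in the broken phase, GeV
beta_over_H = 1e2     # i.e. beta^-1 = 10^-2 H^-1

gamma_max = (1.0 / beta_over_H) * (M_pl / v_T)
print(f"gamma_w^max ~ {gamma_max:.1e}")
```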
The estimate (\ref{eq:gammaw}) follows from balancing the surface energy on the bubble wall
(\ref{energywall}) and the energy available inside the bubble.
Once bubbles start colliding, the energy stored on the bubble walls will be liberated into the plasma.
As argued above, for ``runaway" bubbles this will correspond to a very large portion of the energy
budget of the phase transition, and therefore this process can be very important. Under certain circumstances,
this may also hold true for highly relativistic bubble walls ($\gamma_w \gg 1$) that reach
a stationary state long before bubble collisions start (meaning that $\gamma_w \ll \gamma^{\mathrm{max}}_w$),
in which case the amount of energy stored in the bubble walls will be very small compared to the available energy of the transition,
but still important when released into the plasma at the end of the transition.
The process of bubble collisions
in cosmological first order phase transitions is by itself a very complicated one.
Consider a configuration of two planar bubble walls\footnote[2]{At the time of the collision, the
bubbles are so large compared to the relevant microscopic scales that their walls
may, to a good approximation, be considered planar.} initially far away from each other, that approach and
collide \cite{Hawking:1982ga,Watkins:1991zt,Chinos}.
Depending on the shape of the potential for the scalar field $\phi$ driving the transition (in our case, the Higgs field $h$),
the bubble collision will be approximately elastic or partially inelastic \cite{Hawking:1982ga,Watkins:1991zt}
(see also \cite{KS}). In the first case, the walls reflect off one another after the collision, which reestablishes a region
of symmetric phase between the bubble walls. For a perfectly elastic collision the field profile of the colliding walls
in the limit of infinitely thin bubble walls (taken as step-functions) can be written as \cite{Watkins:1991zt}
\begin{equation}
\label{2WallsFieldConfiguration}
h(z,t) = h_{\infty} \equiv \left\lbrace\begin{array}{lll}
0 & \mathrm{if} \, \, v_w\, t < z < -v_w\, t & \quad t < 0, \\
0 & \mathrm{if} \, -v_w\, t < z < v_w\, t & \quad t > 0, \\
v_{T} & \mathrm{Otherwise},
\end{array} \right.
\end{equation}
\noindent where $v_w$ is the bubble wall velocity,
the bubble walls move in the $z$-direction and the collision is assumed to happen at $t = 0$.
Since we are ultimately interested in scenarios where
$\gamma_w \gg 1$, we will take the ultrarelativistic limit $v_w \rightarrow 1$ in the rest of the section.
The field profile (\ref{2WallsFieldConfiguration}) neglects the thickness of the bubble walls $l_w$
(generically, $l_w \sim (10-30)/T_{\mathrm{EW}}$, with
$T_{\mathrm{EW}} \sim 50 - 100$ GeV).
To capture the wall thickness effects one can consider another ansatz for the profile of the colliding
bubble walls:
\begin{eqnarray}
\label{2WallsFieldConfiguration2}
h(z,t) = h_{l_{w}} \equiv \frac{v_{T}}{2} \left[\mathrm{Tanh}\left(\gamma_w\frac{t + \left| z \right|}{l_w}\right) -
\mathrm{Tanh}\left(\gamma_w\frac{t - \left| z \right|}{l_w}\right) \right] \nonumber \\
= \frac{v_{T}}{2} \left[2 + \mathrm{Tanh}\left(\gamma_w\frac{z -\left| t \right|}{l_w}\right) -
\mathrm{Tanh}\left(\gamma_w\frac{z + \left| t \right|}{l_w}\right) \right].
\end{eqnarray}
A perfectly elastic collision is
however an idealized situation, as one expects a certain degree of inelasticity in a realistic collision.
Moreover, even for a very elastic collision the bubble walls will eventually be drawn back together by
vacuum pressure and collide
again. A quantitative picture of the collision of two planar bubble walls can be obtained by studying the evolution
equation for the scalar field configuration $h(z,t)$ subject to the potential $V(h)$:
\begin{equation}
\label{eqmotion}
\left(\partial^2_t - \partial^2_z \right) h(z,t) = -\frac{\partial V(h)}{\partial h},
\end{equation}
\noindent with the initial condition corresponding to two planar bubble walls far away from
each other and moving in opposite directions (given approximately by $h_{l_{w}}$ in the
limit $t \rightarrow - \infty$).
In the ultrarelativistic limit the ansatz (\ref{2WallsFieldConfiguration2})
will also be an approximate solution of (\ref{eqmotion}) before the bubble collision\footnote[3]{
Each bubble wall in (\ref{2WallsFieldConfiguration2}) interpolates between the symmetric
and broken minima of $V(h)$, and so $\partial V(h)/\partial h = 0$ outside the bubble wall. Then,
for very thin walls the
equation of motion approximately simplifies (before the collision) to
$\left(\partial^2_t - \partial^2_z \right) h(z,t) = 0$,
for which any function of the form $f(z+t)$ or $f(z-t)$ is a solution.} (for $t < 0$). In
this limit, the kinetic energy per unit
area contained in the field configuration $h(z,t)$ prior to the collision is given by
\begin{equation}
\label{energywall}
\frac{E_w}{A} = \frac{2}{3}\, v_{T}^2 \,\frac{\gamma_w}{l_w}.
\end{equation}
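The coefficient in (\ref{energywall}) can be verified numerically by integrating the kinetic plus gradient energy density of a single boosted wall, $h = (v_T/2)\left[1 + \mathrm{Tanh}\left(\gamma_w (z+t)/l_w\right)\right]$, for which $|\partial_t h| = |\partial_z h|$, and doubling the result for the two walls. A sketch, in arbitrary illustrative units:

```python
import numpy as np
from scipy.integrate import quad

# Kinetic + gradient energy per unit area of one boosted wall:
# |dh/dt| = |dh/dz| = (v_T gamma_w / 2 l_w) sech^2(gamma_w (z+t)/l_w).
v_T, gamma_w, l_w = 1.0, 100.0, 1.0   # arbitrary illustrative units

def energy_density(z):
    dh = (v_T * gamma_w / (2.0 * l_w)) / np.cosh(gamma_w * z / l_w) ** 2
    return 0.5 * dh**2 + 0.5 * dh**2   # kinetic + gradient parts

# The wall is a feature of width l_w/gamma_w around z = 0; the tails are
# exponentially negligible beyond |z| ~ 0.5 for these parameters.
E_per_A_one_wall, _ = quad(energy_density, -0.5, 0.5, points=[0.0])
E_per_A = 2.0 * E_per_A_one_wall      # two colliding walls

print(E_per_A, (2.0 / 3.0) * v_T**2 * gamma_w / l_w)  # should agree
```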
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth, clip ]{PotentialSymmetric.eps} \hspace{5mm}
\includegraphics[width=0.45\textwidth, clip ]{PotentialAsymmetric.eps}
\caption{\small LEFT: Potential with nearly degenerate minima. RIGHT: Potential with very non-degenerate minima.
Each one shows the behaviour of the field immediately after the collision in the region close to the collision
point, as described in the text: 1) ``Kick" to field values larger than $v(T)$. 2) Large field oscillation, successful
(LEFT) or unsuccessful (RIGHT) in driving the field over the potential barrier. 3) Oscillations
around the symmetric (LEFT) or broken (RIGHT) minimum.}
\label{Fig:3}
\end{center}
\end{figure}
At the moment of the collision, the field configuration makes an ``excursion" to field values larger
than $v_{T}$ in a small region around the collision
point \cite{Chinos} (resulting in $\partial V /\partial h \neq 0$ in this region).
The subsequent evolution
of $h(z,t)$ strongly depends on the shape of the potential $V(h)$.
The field close to the collision region oscillates back after the initial ``kick" in field space, and for a potential
with nearly degenerate minima this oscillation is able to drive the field over the potential barrier and
into the basin of attraction of the symmetric minimum (Figure \ref{Fig:3} - Left), where it will perform small-amplitude
oscillations. In this case the collision is approximately elastic as described above, with
the bubble walls being effectively reflected as a region of symmetric phase is re-established between them.
The walls then move away from each other until vacuum pressure makes them approach and collide again, repeating the
process several times. In each collision some fraction of the energy stored in the walls is radiated
into scalar waves and quanta of the fields coupled to $h$, until all of the energy in the walls is radiated away.
In contrast to this scenario, for a potential $V(h)$
with very non-degenerate minima (Figure \ref{Fig:3} - Right), the field oscillation after the ``kick" in the region
close to the collision point does not effectively drive the field over the potential barrier. As a consequence, the field
stays in the basin of attraction of the broken minimum $v_T$ and performs relatively large-amplitude oscillations around it,
giving rise to a large amount of energy radiated into scalar waves (as opposed to the previous scenario).
In this case the collision is very inelastic.
Following \cite{KS}, we compute the numerical solution for the field profile
$h(z,t)$ corresponding to the collision of two bubble walls, obtained from solving (\ref{eqmotion}) with a toy potential $V(h)$
of the form
\begin{equation}
V(h) = a^2 h^2 - b^2 h^3 + \lambda h^4
\end{equation}
\noindent both in the case of nearly degenerate minima (Figure 1 - Left) and very non-degenerate minima
(Figure 1 - Right).
The results are shown in Figure \ref{Fig:4} (similar plots appeared earlier in \cite{joydivision}).
Figure 2 - Left (corresponding to the
potential of Figure 1 - Left) shows an approximately elastic bubble collision,
while Figure 2 - Right (corresponding to the
potential of Figure 1 - Right) shows a very inelastic one.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth, clip ]{CollisionSymmetric.eps}
\includegraphics[width=0.49\textwidth, clip ]{CollisionAsymmetric.eps}
\caption{\small Snapshots of the field profile $h(z,t)$ during a bubble collision ($t$ increasing downwards).
LEFT: Bubble collision for the potential with nearly degenerate minima (Figure 1 - Left).
RIGHT: Bubble collision for the potential with very non-degenerate minima (Figure 1 - Right). In both cases,
$\gamma_w = 10^2$, $l_w = 15/T_{\mathrm{EW}}$ and $T_{\mathrm{EW}} = 100$ GeV.}
\label{Fig:4}
\end{center}
\end{figure}
Guided by the numerical solution for $h(z,t)$ in the case of a very inelastic collision, we can
obtain an analytic solution
$h(z,t) = h_{\mathrm{TI}}$ for the case of a ``totally inelastic
collision" (as opposed to the ``perfectly elastic collision" described earlier), in which
all the energy is radiated in the form of scalar waves after the bubble collision. For $t < 0$
(before the collision) we have
\begin{equation}
\label{inelastic1}
h_{\mathrm{TI}}(z,t < 0) = v_{T} + \frac{v_{T}}{2}
\left[\mathrm{Tanh}\left(\gamma_w\frac{z + t}{l_w}\right) -
\mathrm{Tanh}\left(\gamma_w\frac{z - t}{l_w}\right) \right]
\end{equation}
\noindent which matches $h_{l_{w}}(z,t < 0)$. In order to obtain $h_{\mathrm{TI}}(z,t)$ for $t > 0 $, we note
that the field will
not leave the basin of attraction of the broken minimum $v_T$ after the collision. We can then approximate
the potential $V(h)$ felt by the field for $t > 0$ as
\begin{equation}
V(h) \simeq \frac{m_h^2}{2} \left(h - v_T \right)^2 = \frac{m_h^2}{2} \delta h^2
\end{equation}
\noindent This allows us to solve the equation of motion (\ref{eqmotion}) explicitly for
$\delta h_{\mathrm{TI}}(z,t) \equiv h_{\mathrm{TI}}(z,t) - v_T$:
\begin{eqnarray}
\label{eqmotioninelastic}
\left(\partial^2_t - \partial^2_z \right) \delta h_{\mathrm{TI}}(z,t) = -m_h^2 \, \delta h_{\mathrm{TI}}(z,t) \nonumber \\
\, \nonumber \\
\delta h_{\mathrm{TI}}(z,0) = h_{l_{w}}(z,0) - v_T = 0 \\
\, \nonumber \\
\partial_t \, \delta h_{\mathrm{TI}}(z,0) =
\frac{v_T\, \gamma_w}{l_w \left[ \mathrm{Cosh}\left(\frac{\gamma_w\, z}{l_w} \right) \right]^2} \nonumber
\end{eqnarray}
\noindent where the boundary conditions follow from imposing continuity of $\delta h_{\mathrm{TI}}(z,t)$ and
$\partial_t \, \delta h_{\mathrm{TI}}(z,t)$ at $t = 0$. From (\ref{eqmotioninelastic}), we finally obtain
\begin{equation}
h_{\mathrm{TI}}(z,t > 0) = v_{T} \left[1 + \frac{l_w}{\gamma_w}
\int_{0}^{\infty} d p_z \, \frac{p_z}{\sqrt{p_z^2 + m_h^2}} \,
\frac{\mathrm{Cos}\left(p_z\, z \right)}{\mathrm{Sinh} \left(\frac{\pi\, l_w \, p_z}{2\, \gamma_w} \right)}
\, \mathrm{Sin}\left(\sqrt{p_z^2 + m_h^2}\, t \right) \right]
\end{equation}
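As a consistency check, differentiating the solution above under the integral sign and setting $t = 0$ gives $\partial_t h_{\mathrm{TI}}(z,0) = v_T (l_w/\gamma_w) \int_0^{\infty} dp_z\, p_z \cos(p_z z)/\mathrm{Sinh}\left(\pi l_w p_z/2\gamma_w\right)$, which should reproduce the initial condition in (\ref{eqmotioninelastic}). A numerical sketch (illustrative units):

```python
import numpy as np
from scipy.integrate import quad

# Check that d/dt h_TI(z, t=0) from the integral solution reproduces
# the boundary condition v_T * gamma_w / (l_w cosh^2(gamma_w z / l_w)).
v_T, gamma_w, l_w = 1.0, 10.0, 1.0   # illustrative units
z = 0.05

def integrand(p):
    # Differentiating Sin(sqrt(p^2+m^2) t) in t at t = 0 gives a factor
    # sqrt(p^2+m^2) that cancels the 1/sqrt(p^2+m^2) in the solution.
    return p * np.cos(p * z) / np.sinh(np.pi * l_w * p / (2.0 * gamma_w))

# Integrand decays like exp(-pi l_w p / 2 gamma_w); 500 is a safe cutoff.
integral, _ = quad(integrand, 0.0, 500.0, limit=200)
dh_dt = v_T * (l_w / gamma_w) * integral

expected = v_T * gamma_w / (l_w * np.cosh(gamma_w * z / l_w) ** 2)
print(dh_dt, expected)  # should agree
```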
Notice that in the limit $m_h \rightarrow 0$, (\ref{eqmotioninelastic}) becomes
$\left(\partial^2_t - \partial^2_z \right) \delta h_{\mathrm{TI}}(z,t) = 0$ and (\ref{inelastic1}) is also a solution for
$t > 0$, in which case the two bubble walls would pass through each other without actually colliding.
The analysis for the dynamics of bubble collisions presented here may be extended to phase transitions
involving multiple fields (see for example \cite{Chinos}), although in this case the analysis of the field evolution
after the bubble collision becomes much more complicated (since the scalar potential is multidimensional and the
field ``excursion" at the moment of the bubble collision will involve several fields), and we will not attempt
it here.
\subsection{Particle Production Through Bubble Collisions}
\label{section22}
The bubble collision processes analyzed in the previous section allow the energy contained in the bubble walls
to be liberated into the plasma. This can happen either via direct particle production in the collisions
or via radiation of classical scalar waves which will subsequently decay into particles. For bubble collisions
taking place in a thermal environment, the number densities $n_{\alpha}$ for the different particle species created
during the collisions should very quickly approach the ones in thermal equilibrium $n^{\mathrm{EQ}}_{\alpha}$ after the phase transition,
thus rendering the particle production process irrelevant for the subsequent evolution of the Universe.
However, as briefly discussed in the introduction, under certain conditions
fast thermalization of certain species after the phase transition
may be avoided, which can make the particle production process very important in that case.
In order to study the particle production through bubble collisions, we will treat the
scalar field configuration $h(z,t)$ as a classical external field and the states coupled to it as quantum
fields in the presence of this source. In doing so, we will neglect the back-reaction of particle production
on the evolution of the bubble walls themselves throughout the collision, which should be a good approximation
when the energy of the produced particles (for each species) is much less than the energy contained in the field configuration
$h(z,t)$. The probability of particle production is given by \cite{Watkins:1991zt}
\begin{equation}
\label{Number}
\mathcal{P} = 2 \, \mathrm{Im}\left(\Gamma\left[h\right]\right) \quad \quad \quad \quad \left( \mathcal{P} \ll 1 \right)
\end{equation}
\noindent where $\Gamma\left[ h \right]$ is the effective action.
$\Gamma\left[ h \right]$ is the generating
functional of 1PI Green functions, and to quadratic order in $h$
\begin{equation}
\label{EffAction}
\Gamma\left[ h \right] = \frac{1}{2} \int d^{4}x_1 \, d^{4}x_2 \, h(x_1) \, h(x_2) \, \Gamma^{(2)} \left(x_1, x_2 \right)
\end{equation}
\noindent with $\Gamma^{(2)} \left(x_1, x_2 \right) \equiv \Gamma^{(2)} \left(x_1 - x_2 \right)$ being the 2-point 1PI Green function.
In terms of its Fourier transform $\tilde{\Gamma}^{(2)} \left(p^2\right)$, and using
(\ref{Number}) and (\ref{EffAction}) we get
\begin{equation}
\label{Number2}
\mathcal{P} = \int
\frac{d^{4}p}{(2 \pi)^4} \mathrm{Im} \left( \tilde{\Gamma}^{(2)} \left(p^2 \right)
\right) \int d^{4}x_1 \, d^{4}x_2 \, h(x_1) \, h(x_2) \, e^{i p (x_1-x_2)}
\end{equation}
The last integral in (\ref{Number2}) is just $\left| \tilde{h}(p) \right|^2$, with $\tilde{h}(p)$ being the Fourier transform
of the Higgs field configuration $h(x)$
\begin{equation}
\label{Number3}
\tilde{h}(p) = \int d^{4}x \, h(x) \, e^{i p \, x}
\end{equation}
For a background field configuration $h(z,t)$, its Fourier transform is given by $\tilde{h}(p) =
(2 \pi)^2 \, \delta (p_x) \, \delta (p_y) \, \tilde{h}(p_z,\omega)$. Then, using (\ref{Number2}), we obtain
the mean number of particles produced per unit area \cite{Watkins:1991zt}:
\begin{equation}
\label{Number4}
\frac{\mathcal{N}}{A} = 2 \int \frac{d p_z \, d \omega}{(2\,\pi)^2} \left| \tilde{h}(p_z,\omega) \right|^2
\mathrm{Im} \left( \tilde{\Gamma}^{(2)} \left(\omega^2 - p_z^2\right) \right)
\end{equation}
The physical interpretation of (\ref{Number4}) is rather simple \cite{Watkins:1991zt}: the scalar field configuration
$h(z,t)$, corresponding to the two bubble walls that approach and collide, can be decomposed into
modes of definite four-momentum $p^2 = \omega^2 -p_z^2$ via the Fourier transform. Modes with $p^2>0$ represent
propagating field quanta with mass squared $m^2 = p^2$. Then, (\ref{Number4}) integrates over the amount of field quanta
of mass squared $p^2$ contained in the field configuration, multiplied by the probability for those quanta to decay.
The Fourier transform of the background field configuration $h(z,t)$ can be performed explicitly both for the case of a perfectly
elastic collision and of a totally inelastic one analyzed in the previous section. For a perfectly elastic collision, in the limit
of infinitely thin walls ($h(z,t) = h_{\infty}$), we obtain
\begin{equation}
\label{2WallsFieldConfigurationFT}
\tilde{h}(p_z,\omega) = \tilde{h}_{\infty}(p_z,\omega) \equiv \frac{4\, v_T}{\omega^2- p_z^2}
\end{equation}
However, since the highest values of $p_z$ and $\omega$ available in the field configuration are naively expected to be of order
$\gamma_w/l_w$ (modes with $p_z,\omega \gg \gamma_w/l_w$ will be exponentially damped), the integration in (\ref{Number4})
should in this case be cut off
for $p_z > \gamma_w/l_w$ and $\omega > \gamma_w/l_w$. From (\ref{Number4}) and (\ref{2WallsFieldConfigurationFT})
we then obtain
\begin{equation}
\label{Number5}
\frac{\mathcal{N}_{\infty}}{A} = \frac{32\, v_T^2}{\pi^2} \int_0^{\frac{\gamma_w}{l_w}} d \omega
\int_0^{\frac{\gamma_w}{l_w}} d p_z \,
\frac{\mathrm{Im}\left(\tilde{\Gamma}^{(2)} \left(\omega^2 - p_z^2\right) \right)}{\left(\omega^2-p_z^2 \right)^2}
\end{equation}
Alternatively, when the thickness of the bubble walls is accounted for ($h(z,t) = h_{l_w}$), the Fourier transform
of (\ref{2WallsFieldConfiguration2}) gives
\begin{equation}
\label{2WallsFieldConfigurationFT2}
\tilde{h}(p_z,\omega) = \tilde{h}_{l_w}(p_z,\omega) \equiv \frac{\pi \, l_w \, \omega}{2\,\gamma_w} \,
\frac{4\, v_T}{\mathrm{Sinh} \left[ \frac{\pi \, l_w \, \omega}{2\, \gamma_w}\right]} \, \frac{1}{\omega^2-p_z^2}
\end{equation}
\noindent which automatically incorporates the exponential damping for $\omega, p_z \gg \gamma_w/l_w$.
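Indeed, since $x/\mathrm{Sinh}(x) \rightarrow 1$ as $x \rightarrow 0$, (\ref{2WallsFieldConfigurationFT2}) reduces to the thin-wall result (\ref{2WallsFieldConfigurationFT}) for $\omega \ll \gamma_w/l_w$. A numerical sketch of the ratio of the two Fourier transforms:

```python
import numpy as np

# h_tilde_{l_w} / h_tilde_infty = x / sinh(x), with x = pi l_w omega / (2 gamma_w):
# the finite-thickness profile reproduces the thin-wall Fourier transform for
# omega << gamma_w / l_w and damps it exponentially for omega >> gamma_w / l_w.
def ratio(omega, gamma_w=100.0, l_w=1.0):   # illustrative parameter values
    x = np.pi * l_w * omega / (2.0 * gamma_w)
    return x / np.sinh(x)

print(ratio(1.0))     # ~ 1: soft modes unaffected by the wall thickness
print(ratio(1000.0))  # << 1: exponential suppression above gamma_w / l_w
```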
The mean number of particles per unit area now reads
\begin{equation}
\label{Number5Bis}
\frac{\mathcal{N}_{l_w}}{A} = \frac{8\, v_T^2 \, l_w^2}{\gamma_w^2} \int_0^{\infty} d \omega
\int_0^{\infty} d p_z \,
\frac{\mathrm{Im}\left(\tilde{\Gamma}^{(2)} \left(\omega^2 - p_z^2\right) \right)}{\left(\omega^2-p_z^2 \right)^2}
\frac{\omega^2}{\left( \mathrm{Sinh} \left[ \frac{\pi \, l_w \, \omega}{2\, \gamma_w}\right] \right)^2}
\end{equation}
For the opposite case of a totally inelastic collision ($h(z,t) = h_{\mathrm{TI}}$), the Fourier transform is given by
\begin{equation}
\label{2WallsFieldConfigurationFT3}
\tilde{h}(p_z,\omega) = \tilde{h}_{\mathrm{TI}}(p_z,\omega) \equiv \frac{\pi \, l_w \, p_z}{2\,\gamma_w} \,
\frac{2\, v_T}{\mathrm{Sinh} \left[ \frac{\pi \, l_w \, p_z}{2\, \gamma_w}\right]} \,
\left(\frac{1}{\omega^2-p_z^2} - \frac{1}{\omega^2-p_z^2 - m_h^2} \right)
\end{equation}
The relative ``$- $" sign between the two contributions in (\ref{2WallsFieldConfigurationFT3})
can be easily understood by noticing that in the limit $m_h \rightarrow 0$ the Fourier transform of $h_{\mathrm{TI}}(z,t)$ should give
$\tilde{h}(p_z,\omega) \sim \delta (\omega \pm p_z)$. From (\ref{2WallsFieldConfigurationFT3}), the mean number
of particles produced per unit area in the case of a totally inelastic collision is given by
\begin{equation}
\label{Number5BisBis}
\frac{\mathcal{N}_{\mathrm{TI}}}{A} = \frac{2\, v_T^2 \, l_w^2}{\gamma_w^2} \int_0^{\infty} d \omega
\int_0^{\infty} d p_z \,
\frac{m_h^4 \, \mathrm{Im}\left(\tilde{\Gamma}^{(2)} \left(\omega^2 - p_z^2\right) \right)}
{\left(\omega^2-p_z^2 \right)^2 \left(\omega^2-p_z^2 - m_h^2\right)^2}
\,
\frac{p_z^2 }{\left( \mathrm{Sinh} \left[ \frac{\pi \, l_w \, p_z}{2\, \gamma_w}\right] \right)^2}
\end{equation}
The expressions (\ref{Number5}), (\ref{Number5Bis}) and (\ref{Number5BisBis}) can be rewritten in a more compact
form by making the change of variables $\chi = \omega^2 - p_z^2$, $\Psi = \omega^2 + p_z^2$.
After performing the integral in $\Psi$, the mean number of particles produced per unit area finally reads
\begin{equation}
\label{Number6}
\frac{\mathcal{N}}{A} = \frac{1}{2\, \pi^2} \int_0^{\infty} d \chi
\, f(\chi) \,
\mathrm{Im}\left(\tilde{\Gamma}^{(2)} \left( \chi \right) \right)
\end{equation}
The function $f(\chi)$ encodes the details of the bubble collision process and quantifies the efficiency of particle production.
For a perfectly elastic collision, in the limit of infinitely thin bubble walls, we have
\begin{equation}
\label{Number7}
f(\chi) = f_{\infty}(\chi) \equiv \frac{16\, v_T^2 \,
\mathrm{Log}\left[\frac{2\,\left(\frac{\gamma_w}{l_w}\right)^2 -\chi +
2\,\frac{\gamma_w}{l_w} \sqrt{\left(\frac{\gamma_w}{l_w}\right)^2 - \chi} }{\chi}\right]}{\chi^2} \, \,
\Theta \left[\left(\frac{\gamma_w}{l_w}\right)^2 - \chi \right]
\end{equation}
For a perfectly elastic collision, and for bubble walls with finite thickness, we have
\begin{equation}
\label{Number7Bis}
f(\chi) = f_{l_w}(\chi) \equiv \frac{2\, \pi^2 \, l_w^2 \, v_T^2}{\gamma_w^2} \, \frac{1}{\chi^2}
\int_{\chi}^{\infty} d\Psi \, \frac{\Psi +\chi}{\sqrt{\Psi^2 - \chi^2}} \,
\frac{1}{\left( \mathrm{Sinh} \left[ \frac{\pi \, l_w \, \sqrt{\Psi + \chi}}{2\,\sqrt{2} \,\gamma_w}\right] \right)^2}
\end{equation}
Finally, for a totally inelastic collision, we have
\begin{equation}
\label{Number7BisBis}
f(\chi) = f_{\mathrm{TI}}(\chi) \equiv \frac{\pi^2 \, l_w^2 \, v_T^2}{2 \, \gamma_w^2} \,
\frac{m_h^4}{\chi^2 \left(\chi - m_h^2 \right)^2}
\int_{\chi}^{\infty} d\Psi \, \frac{\Psi -\chi}{\sqrt{\Psi^2 - \chi^2}} \,
\frac{1}{\left( \mathrm{Sinh} \left[ \frac{\pi \, l_w \, \sqrt{\Psi - \chi}}{2\,\sqrt{2} \,\gamma_w}\right] \right)^2}
\end{equation}
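The two elastic-collision efficiencies (\ref{Number7}) and (\ref{Number7Bis}) can be compared numerically; the substitution $\Psi = \chi + u^2$ removes the integrable square-root singularity at the lower endpoint of (\ref{Number7Bis}). A sketch, in illustrative units $v_T = l_w = 1$, $\gamma_w = 100$:

```python
import numpy as np
from scipy.integrate import quad

# Numerical comparison of the elastic-collision efficiencies f_infty and f_{l_w}.
v_T, l_w, gamma_w = 1.0, 1.0, 100.0   # illustrative units
k = gamma_w / l_w                      # highest momentum scale in the wall profile

def f_inf(chi):
    # Thin-wall elastic efficiency, valid for chi < (gamma_w / l_w)^2.
    return 16.0 * v_T**2 * np.log(
        (2*k**2 - chi + 2*k*np.sqrt(k**2 - chi)) / chi) / chi**2

def f_lw(chi):
    # Finite-thickness elastic efficiency after substituting Psi = chi + u^2,
    # which turns the integrand into a smooth function of u.
    def integrand(u):
        s = np.sqrt(2*chi + u**2)
        return 2.0 * s / np.sinh(
            np.pi * l_w * s / (2.0 * np.sqrt(2.0) * gamma_w))**2
    integral, _ = quad(integrand, 0.0, 50.0 * k, limit=200)
    return (2.0 * np.pi**2 * l_w**2 * v_T**2 / gamma_w**2) * integral / chi**2

chi = 1.0   # chi << (gamma_w / l_w)^2
print(f_inf(chi), f_lw(chi))   # same order of magnitude, as in Figure 5
```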
In Figure \ref{Fig:5} we compare the efficiency $f(\chi)$ for the various cases (\ref{Number7}), (\ref{Number7Bis}) and
(\ref{Number7BisBis}). Notice that $f_{\mathrm{TI}}(\chi)$ diverges as $\chi \rightarrow m_h^2$.
This divergence is artificial, due to considering $h(z,t)$ over infinite time and space, and should be cut off since our solution
is not valid over distances larger than the bubble radius $R_B$.
Implementing this cut-off is well approximated by replacing in (\ref{Number7BisBis})
\begin{equation}
\label{Number7Shift}
\left(\chi - m_h^2 \right)^2 \rightarrow \left(\chi - m_h^2 \right)^2 + (m_h^6 \,l_w^2)/\gamma_w^2.
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth, clip ]{CollisionComparisonChiFinal.eps}
\includegraphics[width=0.49\textwidth, clip ]{CollisionComparisonChiFinal2.eps}
\caption{\small Particle production efficiency $f(\chi \equiv \omega^2 - p_z^2)$ for $\gamma_w = 10^2$ (LEFT) and $\gamma_w = 10^3$
(RIGHT), $l_w = 15/T_{\mathrm{EW}}$ and $T_{\mathrm{EW}} = 100$ GeV, in the case of a perfectly
elastic collision with infinitely thin bubble walls (\ref{Number7}) (solid red) and with a finite bubble wall thickness
(\ref{Number7Bis}) (dashed-black), and in the case of a totally inelastic collision (\ref{Number7BisBis}) (solid blue) with
$m_h = 125$ GeV. The $\chi$-axis is displayed in units of $(100\,\, \mathrm{GeV})^2$.}
\label{Fig:5}
\end{center}
\end{figure}
Defining $\chi_{\mathrm{min}}$ as the minimum value of $\chi$ for which particle production is possible (corresponding
to the square of the sum of the masses $M_{\alpha}$ of the particles being produced), we immediately see from
Figure \ref{Fig:5} that for a totally inelastic collision, production of light particles ($\chi_{\mathrm{min}} < m_h^2$)
may be very efficient, while production of heavy particles ($\chi_{\mathrm{min}} \gg m_h^2$) will be extremely suppressed.
For a perfectly elastic collision, however, the production of heavy particles may be relatively efficient (we will
comment further on this point at the end of section \ref{section23}).
For the study of the efficiency of particle production in the various scenarios considered in the next sections, we will use
(\ref{Number7}) for the case of an elastic collision, while for the case of a very inelastic one it is possible to show that
(\ref{Number7BisBis}) (together with (\ref{Number7Shift})) can be approximated as
\begin{equation}
\label{Number7BisBisBis}
f_{\mathrm{TI}}(\chi) \simeq 4 \, v_T^2\, m_h^4 \, \frac{
\mathrm{Log}\left[\frac{2\,\left(\frac{\gamma_w}{l_w}\right)^2 +\chi +
2\,\frac{\gamma_w}{l_w} \sqrt{\left(\frac{\gamma_w}{l_w}\right)^2 + \chi} }{\chi}\right]}{\chi^2
\left[\left(\chi - m_h^2 \right)^2 + m_h^6 \frac{l_w^2}{\gamma_w^2} \right]} \, .
\end{equation}
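One can check numerically that (\ref{Number7BisBisBis}) indeed tracks the exact expression (\ref{Number7BisBis}) with the replacement (\ref{Number7Shift}) applied. A sketch (illustrative units, using the substitution $\Psi = \chi + u^2$ to handle the endpoint singularity):

```python
import numpy as np
from scipy.integrate import quad

# Compare the exact totally-inelastic efficiency (with the cut-off replacement
# applied to the pole factor) against its closed-form approximation.
v_T, l_w, gamma_w, m_h = 1.0, 1.0, 100.0, 1.0   # illustrative units
k = gamma_w / l_w

def denom(chi):
    # Regularized pole factor chi^2 [(chi - m_h^2)^2 + m_h^6 l_w^2 / gamma_w^2].
    return chi**2 * ((chi - m_h**2)**2 + m_h**6 * l_w**2 / gamma_w**2)

def f_TI_exact(chi):
    def integrand(u):   # Psi = chi + u^2 removes the endpoint singularity
        return 2.0 * u**2 / np.sqrt(2*chi + u**2) / np.sinh(
            np.pi * l_w * u / (2.0 * np.sqrt(2.0) * gamma_w))**2
    integral, _ = quad(integrand, 0.0, 50.0 * k, limit=200)
    return (np.pi**2 * l_w**2 * v_T**2 / (2.0 * gamma_w**2)) \
        * m_h**4 * integral / denom(chi)

def f_TI_approx(chi):
    log = np.log((2*k**2 + chi + 2*k*np.sqrt(k**2 + chi)) / chi)
    return 4.0 * v_T**2 * m_h**4 * log / denom(chi)

chi = 2.0
print(f_TI_exact(chi), f_TI_approx(chi))   # agree to within O(1)
```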
Let us now turn to the evaluation of the imaginary part of the 2-point 1PI Green function's Fourier transform
$\tilde{\Gamma}^{(2)} \left(\chi \equiv \omega^2 - p_z^2\right)$. Through the optical theorem, we can write:
\begin{equation}
\label{N/A2}
\mathrm{Im}\left(\tilde{\Gamma}^{(2)} \left(\chi \right) \right) =
\frac{1}{2} \sum_{\alpha} \int d\Pi_{\alpha} \left|\overline{\mathcal{M}}(h \rightarrow \alpha) \right|^2
\, \Theta \left[\chi- \chi_{\mathrm{min}} \right]
\end{equation}
\noindent where $\left|\overline{\mathcal{M}}(h\rightarrow \alpha) \right|^2$ is the spin-averaged squared amplitude
for the decay of $h$ into a set of particles $\alpha$ with masses $M_{\alpha}$, $\chi_{\mathrm{min}} \equiv \left(\sum M_{\alpha}\right)^2$
is the minimum value of $\chi$ for which this decay is possible and $d\Pi_{\alpha}$
is the \textit{relativistically invariant} $n$\textit{-body phase space} element
\begin{equation}
d\Pi_{\alpha} = \left(\prod_i \frac{d^3\,k_i}{(2\,\pi)^3} \frac{1}{2\,E_i}\right) (2\,\pi)^4 \, \delta^4(p-\sum_i k_i)
\end{equation}
Then, the number of particles of a certain type $\alpha$ produced per unit area during the bubble collision
follows directly from (\ref{Number6}) and (\ref{N/A2}):
\begin{equation}
\label{N/A3}
\left.\frac{\mathcal{N}}{A}\right|_{\alpha} =
\frac{1}{4\, \pi^2} \int_{\chi_{\mathrm{min}}}^{\infty} d \chi
\, f(\chi) \, \int d\Pi_{\alpha} \left|\overline{\mathcal{M}}(h\rightarrow \alpha) \right|^2
\end{equation}
The amount of energy produced per unit area in the form of particles $\alpha$ is obtained by weighting (\ref{N/A3})
by the energy of each decaying Fourier mode. This yields
\begin{equation}
\label{E/A1}
\left.\frac{\mathcal{E}}{A}\right|_{\alpha} =
\frac{1}{4\, \pi^2} \int_{\chi_{\mathrm{min}}}^{\infty} d \chi
\, f(\chi) \sqrt{\chi} \, \int d\Pi_{\alpha} \left|\overline{\mathcal{M}}(h\rightarrow \alpha) \right|^2
\end{equation}
From (\ref{N/A3}) and (\ref{E/A1}), the non-thermally produced energy density $\rho_{\alpha}$
(assuming that the produced particles quickly diffuse into the bubble interior) reads
\begin{equation}
\label{Energydensity}
\rho_{\alpha} \equiv \left.\frac{\mathcal{E}}{V}\right|_{\alpha} = \left.\frac{\mathcal{E}}{A}\right|_{\alpha} \, \frac{A}{V}
\simeq \left.\frac{\mathcal{E}}{A}\right|_{\alpha} \, \frac{3}{2 \, R_B}
\end{equation}
\noindent with $A \sim 4 \,\pi\, R_B^2$ being the total collision area and $V$ the volume of the two colliding bubbles.
From (\ref{Energydensity}), and bearing in mind that $R_B \simeq \beta^{-1}$, the non-thermally generated comoving energy density is
\begin{equation}
\label{ComEnergydensity}
\Upsilon_\alpha = \frac{\rho_{\alpha}}{s(T_{\mathrm{EW}})} \simeq
\frac{20}{\sqrt{\pi\, g_*}} \, \frac{1}{M_{\mathrm{Pl}}\, T_{\mathrm{EW}}} \, \frac{\beta}{H} \,
\left.\frac{\mathcal{E}}{A}\right|_{\alpha}
\end{equation}
\noindent with $s(T_{\mathrm{EW}})$ the entropy density after the EW phase transition.
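For orientation, (\ref{ComEnergydensity}) is straightforward to evaluate numerically. The sketch below assumes $g_* = 100$ and a representative $\beta/H = 100$; both are illustrative choices, and $\mathcal{E}/A$ must be supplied from the particle-production computation of the next section:

```python
import math

M_PL = 1.2e19        # Planck mass, GeV (value used later in the text)
G_STAR = 100.0       # relativistic degrees of freedom (assumed)
T_EW = 100.0         # GeV
BETA_OVER_H = 100.0  # representative beta/H for the EW phase transition (assumed)

def upsilon(E_per_A):
    """Comoving energy density, eq. (ComEnergydensity); E_per_A in GeV^3."""
    prefactor = 20.0 / math.sqrt(math.pi * G_STAR)
    return prefactor * BETA_OVER_H * E_per_A / (M_PL * T_EW)

print(20.0 / math.sqrt(math.pi * G_STAR))   # -> ~1.13 for g_* = 100
```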
\section{Particle Production via the Higgs Portal}
\label{section23}
The efficiency of particle production may strongly depend on the nature of the particles being produced.
In this section we will analyze the particle production efficiency for scalars $S$, fermions $f$ and vector bosons
$V_{\mu}$ coupled to the Higgs field. Apart from estimating the production of SM fermions and gauge bosons through
this process, we will consider a simple Higgs-portal extension of the SM in order to study the production of other
possible scalar, fermion or vector boson particles. Furthermore, we will restrict ourselves to $Z_2$ symmetric
Higgs-portal scenarios, since we will ultimately be interested in dark matter analyses.
We also comment on how to interpret the results in cases where the calculated particle production
exceeds the energy available in the bubble wall.
\subsection{Scalars}
\label{section231}
For the complex scalar $S$ interacting with the SM via the Higgs portal, the relevant part of the
Lagrangian is given by
\begin{equation}
\label{scalarsLagrangian}
-\Delta \mathcal{L}_s = m_s^2\, |S|^2
+ \lambda_s \left|H\right|^2 \, |S|^2 \quad \quad \quad \mathrm{with} \quad H =
\left(\begin{array}{c}
0 \\
\frac{h + v_T}{\sqrt{2}}
\end{array}\right).
\end{equation}
In this case, $\left|\mathcal{M}(h \rightarrow S \, \bar S) \right|^2 = \lambda_s^2 \, v_T^2$, and one immediately obtains
\begin{equation}
\label{scalarsProd1}
\mathrm{Im}\left[\tilde{\Gamma}^{(2)} \left( \chi \right) \right]_{S} =
\lambda_s^2 \, v_T^2 \int d\Pi_{S} = \sqrt{1-4 \frac{M_{s}^2}{\chi}}\, \frac{\lambda_s^2 \, v_T^2}{8 \pi}\, \,
\Theta \left(\chi- 4 M_{s}^2\right)
\end{equation}
\noindent with $M_s^2 \equiv m_s^2 + (\lambda_s/2)\, v_T^2$ being the scalar squared mass. Then,
using (\ref{Number6}), (\ref{Number7}), (\ref{Number7BisBisBis}), (\ref{ComEnergydensity}) and
(\ref{scalarsProd1}) we can compute the $S$-scalar comoving energy density generated through
the bubble collisions (normalized to the observed dark matter comoving energy
density) as a function of $M_s$ and $\lambda_s$. The results are shown in Figure \ref{Fig:9}.
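A sketch of this pipeline in the totally inelastic limit is given below. Since the Breit-Wigner-like factor in (\ref{Number7BisBisBis}) has an extremely narrow width $m_h^3\, l_w/\gamma_w$, the $\chi$-integral in (\ref{N/A3}) for light scalars is dominated by the peak at $\chi \simeq m_h^2$ and can be estimated in a narrow-width approximation; we also use (\ref{N/A2}) to trade $\int d\Pi\, |\overline{\mathcal{M}}|^2$ for $2\,\mathrm{Im}\,\tilde{\Gamma}^{(2)}$. Parameter values match the figure caption; this reproduces the structure of the computation, not the published curves:

```python
import math

GAMMA_W, T_EW = 1.0e8, 100.0
L_W, V_T, M_H = 15.0 / T_EW, 246.0, 125.0
GW_OVER_LW = GAMMA_W / L_W                    # gamma_w / l_w, in GeV
WIDTH2 = M_H**6 * (L_W / GAMMA_W)**2          # squared width of the chi ~ m_h^2 peak

def f_TI(chi):
    """Totally inelastic efficiency factor, eq. (Number7BisBisBis)."""
    a = GW_OVER_LW
    log_term = math.log((2.0 * a**2 + chi + 2.0 * a * math.sqrt(a**2 + chi)) / chi)
    return 4.0 * V_T**2 * M_H**4 * log_term / (chi**2 * ((chi - M_H**2)**2 + WIDTH2))

def im_gamma_S(chi, lam_s, M_s):
    """Im Gamma^(2) for Higgs-portal scalars, eq. (scalarsProd1)."""
    if chi <= 4.0 * M_s**2:
        return 0.0
    return math.sqrt(1.0 - 4.0 * M_s**2 / chi) * lam_s**2 * V_T**2 / (8.0 * math.pi)

def n_per_area(lam_s, M_s):
    """N/A from eq. (N/A3), with int dPi |M|^2 = 2 Im Gamma from eq. (N/A2).
    The chi-integral is dominated by the narrow peak at chi = m_h^2, of width
    Gamma_eff = m_h^3 l_w / gamma_w, so we use the narrow-width approximation
    int dchi g(chi) / ((chi - m_h^2)^2 + Gamma_eff^2) ~ pi g(m_h^2) / Gamma_eff."""
    chi0 = M_H**2
    if chi0 <= 4.0 * M_s**2:
        return 0.0     # peak below threshold: heavy-scalar production suppressed
    gamma_eff = math.sqrt(WIDTH2)
    a = GW_OVER_LW
    log_term = math.log((2.0 * a**2 + chi0 + 2.0 * a * math.sqrt(a**2 + chi0)) / chi0)
    smooth = 4.0 * V_T**2 * M_H**4 * log_term / chi0**2 * 2.0 * im_gamma_S(chi0, lam_s, M_s)
    return smooth * math.pi / gamma_eff / (4.0 * math.pi**2)

print(n_per_area(1.0, 30.0))    # light scalar: efficient production
print(n_per_area(1.0, 500.0))   # heavy scalar: the chi ~ m_h^2 peak is below threshold
```

For heavy scalars ($4 M_s^2 > m_h^2$) the peak lies below threshold and the estimate collapses, mirroring the strong suppression seen in Figure \ref{Fig:9}.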
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth, clip ]{ProductionElasticBoson.eps}
\includegraphics[width=0.49\textwidth, clip ]{ProductionInelasticBoson.eps}
\caption{\small Generated comoving energy density $\Upsilon$ in scalars $S$ (normalized to the observed dark
matter comoving energy density) as a function of the scalar mass $M_s$ in the
perfectly elastic collision limit (LEFT) and totally inelastic collision limit (RIGHT) for $\gamma_w = 10^8$,
$l_w = 15/T_{\mathrm{EW}}$ and $T_{\mathrm{EW}} = 100$ GeV. The solid-black line corresponds to the
observed dark matter comoving energy density, and the dashed-black line (not seen in LEFT) corresponds
to the maximum comoving energy density that can be generated ($\mathcal{E} = E_w$).}
\label{Fig:9}
\end{center}
\end{figure}
From Figure \ref{Fig:9} it can be clearly seen that scalar particle production is quite suppressed
for elastic collisions. For very inelastic collisions, heavy-scalar particle production is extremely suppressed,
while production of light scalars turns out to be very efficient in this case. In fact, Figure \ref{Fig:9} shows
that for large values of $\lambda_s$ ($\lambda_s \sim 1$) the naively calculated energy of the produced particles
$\mathcal{E}$ exceeds the amount of energy in the bubble walls, $E_w$.
This inconsistency indicates that backreaction cannot be neglected in these cases.
We will comment and expand on this issue in section \ref{section24}.
\subsection{Fermions}
\label{section232}
Turning now to fermionic particle production, in the presence of a tree-level
Yukawa coupling between the Higgs and the fermions
$\lambda_{f} H\, \overline{f} \, f$, the squared decay amplitude reads
\begin{equation}
\label{fermionsProd1}
\left|\overline{\mathcal{M}}(h \rightarrow \overline{f}\, f) \right|^2
= 2\, \lambda_{f}^2 \left(p^2 - 4\, m_{f}^2 \right)
\end{equation}
\noindent which, in the case of SM fermions, leads directly to
\begin{equation}
\label{fermionsProd2}
\mathrm{Im}\left[\tilde{\Gamma}^{(2)} \left( \chi \right) \right]_{f} =
\frac{m_f^2}{4 \pi\,v_T^2} \, \,\chi \,\left(1-\frac{4\,m_{f}^2}{\chi} \right)^{\frac{3}{2}} \,\,
\Theta \left(\chi- 4 m_{f}^2\right)
\end{equation}
The production of (SM) fermions will then be enhanced with respect to that of
Higgs-portal $S$-scalars (especially in the limit of very elastic collisions, see
Figure \ref{Fig:10}) due to the extra factor $\left(\chi - 4\, m_{f}^2 \right)$ in (\ref{fermionsProd2}).
Scenarios where the fermionic particle production might be important include (apart from the SM itself)
the MSSM and its various extensions, due to the tree-level coupling between
Higgses, Higgsinos and Gauginos\footnote[4]{In particular, the production of neutralino dark matter
might have an impact on the subsequent evolution of the Universe.}.
In the absence of a direct coupling, the interaction between the Higgs and the fermions will
occur via an effective operator. This is the case for the so-called fermionic Higgs-portal:
\begin{equation}
\label{fermionsLagrangian}
-\Delta \mathcal{L}_{f} = m_{f}\, \overline{f} f
+ \frac{\lambda_{f}}{\Lambda} \left|H\right|^2 \, \overline{f} f
\end{equation}
However, since bubble collisions may excite very massive Higgs field modes
($p^2 \gg T_{\mathrm{EW}}^2$), particle production in this case may be sensitive to
the $\mathrm{UV}$ completion of the Higgs-portal effective theory, making it unreliable
to compute the particle production in the
fermionic Higgs-portal via (\ref{fermionsLagrangian}).
Here we consider a simple $\mathrm{UV}$ completion for the fermionic Higgs-portal,
and compute the particle production in this case. We add a singlet scalar field $S$
as a mediator between the Higgs field and the fermion $f$, the relevant part of the Lagrangian being
\begin{equation}
\label{fermionsLagrangian2}
-\Delta \mathcal{L}_{f} = \frac{m_s^2}{2}\, S^2
+ \frac{\lambda_s}{2} \left|H\right|^2 \, S^2 +\mu_s \left|H\right|^2 \, S + m_{f}\, \overline{f} f
+ \lambda_{f} S \, \overline{f} f
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth, clip ]{ProductionElasticFermion.eps}
\includegraphics[width=0.49\textwidth, clip ]{ProductionInelasticFermion.eps}
\caption{\small Generated comoving energy density $\Upsilon$ in fermions $f$ (normalized to the observed dark
matter comoving energy density) as a function of the fermion mass $m_f$ in the
perfectly elastic collision limit (LEFT) and totally inelastic collision limit (RIGHT) for $\gamma_w = 10^8$,
$l_w = 15/T_{\mathrm{EW}}$ and $T_{\mathrm{EW}} = 100$ GeV. Red lines: production
in the presence of a direct tree-level Yukawa coupling between fermions and Higgs (\ref{fermionsProd2}).
Blue lines: production for a tree-level effective coupling (\ref{fermionsProd5}),
for $\mu_s = M_s=500$ GeV (solid) and $5$ TeV (dashed). Yellow lines: production for a 1-loop effective coupling
(\ref{fermionsProd6}). The solid-black line corresponds to the observed dark matter comoving
energy density, and the dashed-black line corresponds to the maximum comoving energy density
that can be generated ($\mathcal{E} = E_w$).}
\label{Fig:10}
\end{center}
\end{figure}
For simplicity, we arrange for $S$ not to acquire a vev (this can be ensured by adding a linear term for $S$ in
(\ref{fermionsLagrangian2})). For $\mu_s \neq 0$ the effective fermionic Higgs-portal operator
$\left|H\right|^2 \,\overline{f} f $ will be generated at tree-level. The squared decay amplitude for
$h \rightarrow \overline{f}\, f$ will then be
\begin{equation}
\label{fermionsProd3}
\left|\overline{\mathcal{M}}(h \rightarrow \overline{f}\, f) \right|^2
= 2\, \frac{\lambda_{f}^2\,\mu_s^2\,v_T^2}{\left(p^2 - M_{s}^2\right)^2 +
\Gamma^2_s\,M_{s}^2} \left(p^2 - 4\, m_{f}^2 \right)
\end{equation}
\noindent with
\begin{equation}
\label{fermionsProd4}
\Gamma_s = \frac{\lambda_{s}^2\,v_{T}^2 + \mu_s^2}{16\,\pi\,M_{s}} \sqrt{1- \frac{4\,m_{h}^2}{M_{s}^2}} \,\,
\Theta \left( M^2_{s} - 4\, m_{h}^2 \right) +
\frac{\lambda_{f}^2\, M_{s}}{8\,\pi} \left(1- \frac{4\,m_{f}^2}{M_{s}^2}\right)^{\frac{3}{2}} \,\,
\Theta \left( M^2_{s} - 4\, m_{f}^2 \right)
\end{equation}
\noindent leading finally to
\begin{equation}
\label{fermionsProd5}
\mathrm{Im}\left[\tilde{\Gamma}^{(2)} \left( \chi \right) \right]_{f} =
\frac{\lambda_{f}^2\,\mu_s^2\,v_T^2}{4 \pi} \, \frac{\chi}{\left(\chi - M_{s}^2\right)^2 + \Gamma^2_s\,M_{s}^2}
\, \left(1-\frac{4\,m_{f}^2}{\chi}\right)^{\frac{3}{2}}
\Theta \left(\chi- 4 m_{f}^2\right)
\end{equation}
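The resonance structure of (\ref{fermionsProd5}) is easy to explore numerically. The sketch below implements (\ref{fermionsProd4}) and (\ref{fermionsProd5}) for $\mu_s = M_s = 500$ GeV (as in the solid blue curves of Figure \ref{Fig:10}), with $\lambda_s = \lambda_f = 1$ and $m_f = 1$ GeV assumed for illustration:

```python
import math

V_T = 246.0   # Higgs vev, GeV

def gamma_s(lam_s, lam_f, mu_s, M_s, m_f, m_h=125.0):
    """Width of the mediator S, eq. (fermionsProd4)."""
    g = 0.0
    if M_s**2 > 4.0 * m_h**2:
        g += (lam_s**2 * V_T**2 + mu_s**2) / (16.0 * math.pi * M_s) \
             * math.sqrt(1.0 - 4.0 * m_h**2 / M_s**2)
    if M_s**2 > 4.0 * m_f**2:
        g += lam_f**2 * M_s / (8.0 * math.pi) * (1.0 - 4.0 * m_f**2 / M_s**2)**1.5
    return g

def im_gamma_f(chi, lam_f, mu_s, M_s, m_f, width):
    """Im Gamma^(2) for the tree-level effective coupling, eq. (fermionsProd5)."""
    if chi <= 4.0 * m_f**2:
        return 0.0
    bw = (chi - M_s**2)**2 + width**2 * M_s**2
    return lam_f**2 * mu_s**2 * V_T**2 / (4.0 * math.pi) * chi / bw \
           * (1.0 - 4.0 * m_f**2 / chi)**1.5

M_s, m_f = 500.0, 1.0
width = gamma_s(1.0, 1.0, 500.0, M_s, m_f)
print(im_gamma_f(M_s**2, 1.0, 500.0, M_s, m_f, width))         # on the S resonance
print(im_gamma_f(1.5 * M_s**2, 1.0, 500.0, M_s, m_f, width))   # off resonance: much smaller
```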
When $\mu_s = 0$ the effective fermionic Higgs-portal operator
is not generated at tree-level, but rather the decay
$h \rightarrow \overline{f} \, f$ occurs via a finite 1-loop diagram, yielding
\begin{equation}
\label{fermionsProd6}
\mathrm{Im}\left[\tilde{\Gamma}^{(2)} \left( \chi \right) \right]_{f} =
\frac{\left(\lambda_s \, \lambda_{f}^2\right)^2}{\left(4 \pi\right)^5}
\, F \left[m_{f}^2,\, M_{s}^2,\, \chi \right]\, \chi \, \left(1-\frac{4\,m_{f}^2}{\chi} \right)^{\frac{3}{2}}\, \,
\Theta \left(\chi- 4 m_{f}^2\right)
\end{equation}
\noindent where $F \left[m_{f}^2,\, M_{s}^2,\, \chi \right]$ is a form factor that scales as
\begin{equation}
\label{fermionsProd7}
F \left[m_{f}^2,\, M_{s}^2,\, \chi \right] \, \longrightarrow \,
\frac{m_{f}^4}{\chi^{2}} \, \mathrm{Log}\left(\frac{\chi}{m_{f}^2}\right) \quad \quad \quad \chi
\gg m_{f}^2,\, M_{s}^2
\end{equation}
Fermionic Higgs-portal particle production in both the $\mu_s = 0$ and $\mu_s \neq 0$ cases is shown in
Figure \ref{Fig:10}, where it can be clearly seen that the production in the absence of a direct
coupling between the Higgs and the fermions $f$ differs from what would have been naively obtained
using (\ref{fermionsLagrangian}). As for the case of scalar particle production, under certain
circumstances the estimate of fermionic particle production neglecting backreaction exceeds the
amount of energy stored in the bubble walls ($\mathcal{E} > E_w$), and in order to obtain a
physically meaningful result backreaction should be included (we will expand on this issue in section
\ref{section24}).
\subsection{Vector Bosons}
\label{section233}
Finally, we study the production of vector boson particles. In the presence of a tree-level
coupling between the Higgs and the vector bosons $\lambda_{V} M_{V} \, h \, V_{\mu} V_{\mu}$,
the squared decay amplitude reads
\begin{equation}
\label{VectorProd1}
\left|\overline{\mathcal{M}}(h \rightarrow V_{\mu}\,V_{\mu}) \right|^2
= \lambda^2_{V} M^2_{V} \left(3 - \frac{p^2}{M^2_{V}} + \frac{p^4}{4\,M^4_{V}} \right)
\end{equation}
\noindent leading to
\begin{equation}
\label{VectorProd2}
\mathrm{Im}\left[\tilde{\Gamma}^{(2)} \left( \chi \right) \right]_{V} =
\frac{\lambda^2_{V} M^2_{V}}{8 \pi} \, \left(3 - \frac{\chi}{M^2_{V}} + \frac{\chi^2}{4\,M^4_{V}} \right)
\sqrt{1-4 \frac{M_{V}^2}{\chi}}\, \,
\Theta \left(\chi- 4 M_{V}^2\right)
\end{equation}
Comparing (\ref{scalarsProd1}), (\ref{fermionsProd2}) and (\ref{VectorProd2}) we immediately
observe the relative efficiency of particle production for scalars, fermions and vector bosons. While
$\mathrm{Im}\,[\tilde{\Gamma}^{(2)} \left( \chi \right)]$ scales as $\chi^0$ for scalars,
and as $\chi$ for fermions, in the case of vector bosons it scales as $\chi^2$, thus greatly enhancing
production of vector bosons with respect to scalars or fermions for very elastic collisions
(see Figure \ref{Fig:11}). It is then expected that most of the available energy from the EW
phase transition will go into $W_{\mu}$ and $Z_{\mu}$ gauge boson production and (possibly)
other vector bosons coupled at tree-level to the Higgs in extensions of the
SM\footnote[5]{such as Little Higgs theories or extra-dimensional
scenarios with gauge fields living in the bulk.}.
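These scalings can be verified directly from (\ref{scalarsProd1}), (\ref{fermionsProd2}) and (\ref{VectorProd2}): doubling $\chi$ far above threshold should multiply $\mathrm{Im}\,\tilde{\Gamma}^{(2)}$ by $2^0$, $2^1$ and $2^2$ respectively. A minimal check, with masses and couplings chosen arbitrarily for illustration:

```python
import math

V_T = 246.0   # Higgs vev, GeV

def im_scalar(chi, lam, M):
    """Eq. (scalarsProd1): approaches a constant (chi^0) at large chi."""
    if chi <= 4.0 * M**2:
        return 0.0
    return math.sqrt(1.0 - 4.0 * M**2 / chi) * lam**2 * V_T**2 / (8.0 * math.pi)

def im_fermion(chi, m_f):
    """Eq. (fermionsProd2): grows like chi at large chi."""
    if chi <= 4.0 * m_f**2:
        return 0.0
    return m_f**2 / (4.0 * math.pi * V_T**2) * chi * (1.0 - 4.0 * m_f**2 / chi)**1.5

def im_vector(chi, lam, M):
    """Eq. (VectorProd2): grows like chi^2 at large chi."""
    if chi <= 4.0 * M**2:
        return 0.0
    return lam**2 * M**2 / (8.0 * math.pi) \
           * (3.0 - chi / M**2 + chi**2 / (4.0 * M**4)) \
           * math.sqrt(1.0 - 4.0 * M**2 / chi)

chi = 1.0e8   # GeV^2, far above the 100 GeV thresholds used here
print(im_scalar(2.0 * chi, 1.0, 100.0) / im_scalar(chi, 1.0, 100.0))   # -> ~1
print(im_fermion(2.0 * chi, 100.0) / im_fermion(chi, 100.0))           # -> ~2
print(im_vector(2.0 * chi, 1.0, 100.0) / im_vector(chi, 1.0, 100.0))   # -> ~4
```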
In the absence of a direct coupling, the interaction between the Higgs and the vector bosons may
occur via an effective operator, as in the so-called vector Higgs-portal \cite{Lebedev:2011iq}:
\begin{equation}
\label{vectorsLagrangian}
-\Delta \mathcal{L}_{V} = \frac{1}{2} m_{V}^2\, V_{\mu}V^{\mu}
+ \lambda_{V} \left|H\right|^2 \, V_{\mu}V^{\mu}
\end{equation}
However (like for the fermionic Higgs-portal) an analysis of vector boson particle production
in the context of the effective theory (\ref{vectorsLagrangian}) will be unreliable due to
very massive Higgs field modes ($p^2 \gg T_{\mathrm{EW}}^2$) being excited during the bubble
collisions. Vector boson particle production will then be sensitive to the way in which the
effective operator $\left|H\right|^2 \, V_{\mu}V^{\mu}$ is generated. One possible way of generating
the effective operator at tree-level, with $V_{\mu}$ a hidden $U(1)$ gauge field, is by
integrating out a $U(1)$-charged complex scalar $S$ which has a Higgs-portal coupling
$\left|H\right|^2 \, S^* S$, the relevant part of the Lagrangian then being
\begin{equation}
\label{vectorsLagrangian2}
-\Delta \mathcal{L}_{V} = \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - D_{\mu}S^{*}D^{\mu}S + V(S) + \lambda_{hs}\,
\left|H\right|^2 \, S^*S
\end{equation}
In this scenario, the vector boson $V_{\mu}$ acquires a mass via the spontaneous breaking of the hidden $U(1)$,
through a vev $v_{S}$ for the $S$-scalar\footnote[6]{This implies that there may have been another phase
transition in the early Universe associated with the spontaneous breaking of the hidden $U(1)$
gauge symmetry, which we must require to have occurred long before the EW phase transition,
since otherwise the EW phase transition would have been effectively multi-field and our present
analysis of particle production would be unreliable.}. The squared decay amplitude for
$h \rightarrow V_{\mu}\, V_{\mu}$ will then be
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth, clip ]{ProductionElasticVector.eps}
\includegraphics[width=0.49\textwidth, clip ]{ProductionInelasticVector.eps}
\caption{\small
Generated comoving energy density $\Upsilon$ in vector bosons $V_{\mu}$ (normalized to the observed dark
matter comoving energy density) as a function of the vector boson mass $M_{V}$ in the
perfectly elastic collision limit (LEFT) and totally inelastic collision limit (RIGHT) for $\gamma_w = 10^8$,
$l_w = 15/T_{\mathrm{EW}}$ and $T_{\mathrm{EW}} = 100$ GeV. Red line: production
in the presence of a direct tree-level coupling between vector bosons and Higgs (\ref{VectorProd2}).
Blue line: production for a tree-level effective coupling (\ref{VectorProd5}),
for $\lambda_{hs} = 1$ and $M_s = 500$ GeV. The solid-black line corresponds to the observed dark matter comoving
energy density, and the dashed-black line corresponds to the maximum comoving energy density
that can be generated ($\mathcal{E} = E_w$).}
\label{Fig:11}
\end{center}
\end{figure}
\begin{equation}
\label{VectorProd3}
\left|\overline{\mathcal{M}}(h \rightarrow V_{\mu}\, V_{\mu}) \right|^2
= \frac{\lambda_{hs}^2}{4} \, \frac{v_T^2\, M^4_{V}}{\left(p^2 - M_{s}^2\right)^2 + \Gamma^2_s\,M_{s}^2}
\,\left(3 - \frac{p^2}{M^2_{V}} + \frac{p^4}{4\,M^4_{V}} \right)
\end{equation}
\noindent with $\Gamma_s$ being the decay width of $S$. This leads to
\begin{equation}
\label{VectorProd5}
\mathrm{Im}\left[\tilde{\Gamma}^{(2)} \left( \chi \right) \right]_{V} =
\frac{\lambda_{hs}^2}{32 \,\pi} \, \,
\frac{v_T^2\,M^4_{V}
\left(3 - \frac{\chi}{M^2_{V}} + \frac{\chi^2}{4\,M^4_{V}} \right)}{\left(\chi - M_{s}^2\right)^2 +
\Gamma^2_s\,M_{s}^2}
\,\sqrt{1- \frac{4\,M_{V}^2}{\chi}}
\, \,
\Theta \left(\chi- 4 M_{V}^2\right)
\end{equation}
Vector boson effective Higgs-portal particle production is shown in Figure \ref{Fig:11}; it is strongly
suppressed with respect to the case in which the vector bosons and the Higgs couple
directly at tree-level, especially for very elastic collisions. From Figure \ref{Fig:11} it is also clear
that backreaction is most important for direct vector boson particle production (for which the production
estimate yields $\mathcal{E} \gg E_w$).
\subsection{Backreaction and Relative Efficiency}
\label{section24}
Clearly, for the present analysis of particle production to be physically meaningful it must be assumed that
the total energy of the produced particles is less than the energy contained in the background field
configuration $h(z,t)$. Moreover, when the energy of the produced particles starts being comparable to
the energy of the background field we expect backreaction on $h(z,t)$ due to the particle production
to be important. Then, in order for the previous analysis to be reliable, we require
\begin{equation}
\label{Backreaction}
\left.\frac{\mathcal{E}}{A}\right|_X \ll
\frac{E_w}{A} = \frac{2}{3}\, v_{T}^2 \,\frac{\gamma_w}{l_w}
\end{equation}
As shown in the previous section, for fermion or vector boson particle production
condition (\ref{Backreaction}) is not always satisfied, and in some cases
even $\mathcal{E} \gg E_w$ is obtained (Figure \ref{Fig:11} LEFT),
signaling the extreme importance of backreaction in those scenarios.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth, clip ]{ProductionRelativeElastic.eps}
\includegraphics[width=0.49\textwidth, clip ]{ProductionRelativeElastic2.eps}
\caption{\small
Efficiency of vector boson (solid lines) and fermion (dashed line) particle production (scalars are too
inefficiently produced to be shown) for a perfectly elastic collision, normalized to the most efficiently
produced particles (in this case $W_{\mu}$ and $Z_{\mu}$) and to the energy contained in the bubble
walls, for $\gamma_w = 10^8$ (LEFT) and $\gamma_w = 10^{15}$ (RIGHT),
$l_w = 15/T_{\mathrm{EW}}$ and $T_{\mathrm{EW}} = 100$ GeV. The solid-black
line corresponds to the observed dark matter comoving energy density (normalized to the energy contained
in the bubble walls).}
\label{Fig:12}
\end{center}
\end{figure}
Since incorporating backreaction into the present analysis of particle production is extremely difficult
and lies beyond the scope of this paper, we simply note that the relative efficiency in particle
production for the different species in the present analysis should be roughly correct even when backreaction
is important. Then, an estimate of the particle production in cases where some of the species
are very efficiently produced may be obtained just by normalizing the production to the total energy in
the bubble walls. For very elastic bubble collisions, it has been shown in section \ref{section233}
that production of $W_{\mu}$ and $Z_{\mu}$ gauge bosons is extremely efficient, which will then leave
very little energy in the bubble walls for producing other particle species. The relative efficiencies
(defined as ratios of energy in produced particles) of the different species for a perfectly elastic collision,
normalized to the energy contained in the
bubble walls (assuming that most of the available energy goes into producing $W_{\mu}$ and $Z_{\mu}$)
are shown in Figure \ref{Fig:12}. A good estimate of the non-thermally generated comoving
energy density (per particle species $\alpha$) in this case may then be given by
\begin{equation}
\label{ComEnergydensityRelative}
\Upsilon_{\alpha} \simeq \frac{20}{\sqrt{\pi\, g_*}} \,
\frac{1}{M_{\mathrm{Pl}}\, T_{\mathrm{EW}}} \, \frac{\beta}{H} \,
\left.\frac{\mathcal{E}}{A}\right|_{\alpha} \, \left(\left.\frac{\mathcal{E}}{A}\right|_{W_{\mu}} \right)^{-1}
\,\frac{E_w}{A}
\end{equation}
This estimate is reliable for very elastic collisions because the high-$p^2$ modes of the bubble wall
carry almost all of its energy. The energy in these high-$p^2$ modes will then mostly go into vector
boson production (their production efficiency at high $p^2$ is much larger than that of fermions or
scalars), a result that holds even without incorporating backreaction into the analysis.
On the other hand, for very inelastic collisions the results from the previous section show that
particle production is only effective for light particles ($M_X \lesssim m_h / 2$). Therefore,
production of $W_{\mu}$ and $Z_{\mu}$ will be very suppressed in this case, along with any other
heavy particle, and most of the available energy will go into production of SM fermions (mainly bottom
quarks) and (possibly) new light scalars or fermions with sizable couplings to the Higgs.
\section{Non-thermal Multi-TeV WIMP Dark Matter}
\label{BabyWIMP}
In this section we focus on the case of relatively heavy dark matter, $M_X \gtrsim$~TeV,
and explore the conditions under which the amount of
non-thermally produced heavy dark matter can end up accounting for a sizable part of the observed dark
matter relic density (dark matter may nevertheless still have a thermal component coming
from the usual freeze-out process).
The first condition is clearly that bubble collisions have to be fairly elastic: it has been
shown in section \ref{section23} that for very inelastic bubble collisions only light
($M_X \lesssim m_h / 2$) particles are efficiently produced, while heavy particle production is extremely
suppressed. Since fast thermalization of light species after the EW phase transition seems unavoidable\footnote{%
Dark matter may be coupled to the Higgs weakly enough to avoid thermalization; however, in that case we find it is not produced in sufficient quantities to account for the observed relic abundance. For a discussion of the asymmetric dark matter case, see appendix \ref{section4}.}, for very inelastic bubble collisions either dark matter is
not efficiently produced or it thermalizes immediately after the end of the EW phase transition, not
having any influence on the subsequent evolution of the Universe.
For very elastic bubble collisions, the analysis from sections \ref{section23} and \ref{section24} shows
that electroweak gauge bosons $W_{\mu}$ and $Z_{\mu}$ are most efficiently produced, while the
relative production efficiency of heavy fermions and scalars is too low for them to
account for a sizable part of the observed dark matter relic abundance (see Figure \ref{Fig:12}).
This leaves heavy vector bosons with a direct coupling to the Higgs field
as the only viable candidate for non-thermally produced dark matter during the
EW phase transition.
In the following we perform an analysis of heavy vector boson dark matter coupled to the Higgs,
including an overview of thermal freeze-out and direct detection constraints from XENON100 \cite{XENON}
(see \cite{Lebedev:2011iq,Djouadi:2011aa} for more details), and a comparison between
the amount of non-thermally produced dark matter and the amount of dark matter produced
through thermal freeze-out. We also study the evolution of the non-thermally produced dark
matter component after the EW phase transition.
\subsection{Higgs-Vector Dark Matter Interplay}
Consider a vector boson dark matter candidate with mass $M_V$ and a tree-level coupling
to the Higgs \cite{Lebedev:2011iq,Hambye:2009fg},
\begin{equation}
\mathcal{L}_{V} = \frac{1}{2} M_V^2 \, V_\mu V_\mu + \lambda_V v_T h V_\mu V_\mu
\end{equation}
This coupling mediates the dark matter annihilation into Standard Model particles, as well as the
elastic scattering on nucleons relevant for dark matter direct detection. Concerning the former process,
the Higgs boson can mediate annihilation of dark matter into electroweak gauge bosons (for heavy dark matter
they are the most important annihilation channel) through the couplings
\begin{equation}
\frac{h}{v_T} \left (2 M_W^2 W_\mu^+ W_\mu^- + M_Z^2 Z_\mu Z_\mu \right )
\end{equation}
The spin-averaged amplitude squared for the annihilation
process $V_\mu V_\mu \to W_\mu^+ W_\mu^-$ in the limit $s \gg m_h^2$ is given by
\begin{equation}
\label{annVVWW}
|\overline{\mathcal{M}}_{VV\to W/Z,W/Z}|^2 \approx \frac{2}{3} \, \lambda_V^2
\left (\frac{s^2}{4 M_V^4} - \frac{s}{M_V^2} + 3 \right )
\end{equation}
Given (\ref{annVVWW}), the thermally averaged annihilation cross section is given by
\begin{equation}
\label{annVVWW2}
\left\langle \sigma v \right\rangle_{VV\to W/Z, W/Z} = \frac{z \, \lambda_V^2}{192\, \pi\, M_V^2 \, K_2(z)^2}
\int_{4}^\infty d x \, \sqrt{x - 4 }\, K_1(\sqrt{x}\, z) \, \left(\frac{x (x-4)}{4} + 3 \right)
\end{equation}
\noindent where $z = M_V/T$, and $K_1(z), K_2(z)$ are modified Bessel functions of the second kind. For $z \gg 1$, (\ref{annVVWW2})
reduces to
\begin{equation}
\label{annVVWW3}
\left\langle \sigma v \right\rangle_{VV\to W/Z, W/Z} \approx \frac{\lambda_V^2}{16 \,\pi\, M_V^2}
\end{equation}
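The reduction of (\ref{annVVWW2}) to (\ref{annVVWW3}) can be verified numerically. The sketch below evaluates the integral with $K_n$ computed from its standard integral representation $K_n(z) = \int_0^\infty e^{-z\cosh t}\cosh(n t)\, dt$, so that only the Python standard library is needed; step counts and integration cutoffs are pragmatic choices:

```python
import math

def bessel_k(n, z, tmax=5.0, steps=2000):
    """Modified Bessel function of the second kind via the integral
    representation K_n(z) = int_0^inf exp(-z cosh t) cosh(n t) dt (Simpson)."""
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-z * math.cosh(t)) * math.cosh(n * t)
    return total * h / 3.0

def sigma_v(lam, M_V, z, umax=4.0, steps=800):
    """Thermally averaged cross section, eq. (annVVWW2), with z = M_V/T.
    The substitution x = 4 + u^2 removes the sqrt(x - 4) kink at threshold."""
    h = umax / steps
    integral = 0.0
    for i in range(steps + 1):
        u = i * h
        x = 4.0 + u * u
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        integral += w * 2.0 * u * u * bessel_k(1, math.sqrt(x) * z) \
                    * (x * (x - 4.0) / 4.0 + 3.0)
    integral *= h / 3.0
    return z * lam**2 * integral / (192.0 * math.pi * M_V**2 * bessel_k(2, z)**2)

z = 50.0                                      # deep non-relativistic regime
full = sigma_v(1.0, 1000.0, z)
limit = 1.0 / (16.0 * math.pi * 1000.0**2)    # eq. (annVVWW3) with lambda_V = 1
print(full / limit)                           # -> close to 1
```

At $z = 50$ the ratio is close to unity, with the residual deviation of order $1/z$.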
The thermal cross section giving rise to the observed value of the relic density
$\left\langle \sigma v \right\rangle_{\mathrm{WMAP}} \approx 2.6 \cdot 10^{-9}\, \mathrm{GeV}^{-2}$
corresponds, for heavy dark matter $M_V \gg m_h$ and using (\ref{annVVWW3}), to
\begin{equation}
\label{annVVWW4}
\left[ \frac{\lambda_V}{M_V(\mathrm{TeV})}\right]_{\mathrm{WMAP}} \approx 0.3
\end{equation}
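Inverting (\ref{annVVWW3}) against $\left\langle \sigma v \right\rangle_{\mathrm{WMAP}}$ gives the scaling in (\ref{annVVWW4}). In the sketch below the channel factor is an assumption: taking $W^+W^-$ alone gives $\lambda_V/M_V(\mathrm{TeV}) \approx 0.36$, while adding a $ZZ$ channel with an identical-particle factor $1/2$ gives $\approx 0.30$:

```python
import math

SIGMA_V_WMAP = 2.6e-9   # GeV^-2

def lambda_wmap(M_V_TeV, channel_factor=1.5):
    """Coupling reproducing <sigma v>_WMAP via eq. (annVVWW3).
    channel_factor is an assumption about final-state counting: 1.0 uses
    eq. (annVVWW3) for a single W+W- channel, while 1.5 adds a ZZ channel
    with an identical-particle factor 1/2."""
    M_V = 1000.0 * M_V_TeV   # GeV
    return math.sqrt(16.0 * math.pi * M_V**2 * SIGMA_V_WMAP / channel_factor)

print(lambda_wmap(1.0))        # -> ~0.30, consistent with eq. (annVVWW4)
print(lambda_wmap(1.0, 1.0))   # -> ~0.36 for the single-channel normalization
```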
\vspace{3mm}
Turning now to dark matter direct detection, the spin-averaged amplitude squared for Higgs-mediated
dark matter elastic scattering on nucleons reads
\begin{equation}
|\overline{\mathcal{M}}_{V N \to V N} |^2 =
\frac{8 \,\lambda_V^2\, f_N^2\, m_N^2}{3\, (t - m_h^2)^2}
\left( 2 + \frac{(M_V^2 - \frac{t}{2})}{M_V^2} \right ) \left (2 m_N^2 - \frac{t}{2} \right)
\approx \frac{16\, \lambda_V^2\, f_N^2\, m_N^4}{m_h^4}
\end{equation}
\noindent
Here, $m_N \approx 0.939$ GeV is the proton/neutron mass
and $f_N$ is the effective Yukawa coupling of the Higgs to nucleons which, following \cite{Djouadi:2011aa},
we take $f_N = 0.326$ based on the lattice estimate in \cite{Young:2009zb}.
In the last step we have taken the limit $t \ll m_N^2,m_h^2,M_V^2$.
The elastic scattering cross section then reads
\begin{equation}
\sigma_{V N \to V N} \approx \frac{\lambda_V^2 \,f_N^2 \, m_N^4}{\pi\, M_V^2\, m_h^4}
\approx 4.2 \cdot 10^{-44} {\rm cm}^2 \,\left[ \frac{\lambda_V}{M_V(\mathrm{TeV})} \right]^2
\end{equation}
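The quoted prefactor can be reproduced with a few lines of arithmetic, using the standard conversion $1\ \mathrm{GeV}^{-2} \approx 3.894 \times 10^{-28}\ \mathrm{cm}^2$:

```python
import math

GEV2_TO_CM2 = 3.894e-28   # 1 GeV^-2 expressed in cm^2

def sigma_SI(lam, M_V, f_N=0.326, m_N=0.939, m_h=125.0):
    """Spin-independent V-nucleon cross section (t << all masses limit), in cm^2."""
    sigma_gev = lam**2 * f_N**2 * m_N**4 / (math.pi * M_V**2 * m_h**4)
    return sigma_gev * GEV2_TO_CM2

print(sigma_SI(1.0, 1000.0))   # -> ~4.2e-44 cm^2, matching the quoted prefactor
```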
On the other hand, the XENON100 bound on the dark matter elastic scattering cross
section on nucleons, for $M_V \gtrsim$ TeV is approximately
\begin{equation}
\label{Xenon}
\sigma_{V N \to V N} < M_V(\mathrm{TeV}) \cdot 2.2 \cdot 10^{-44} {\mathrm{cm}}^2
\end{equation}
Therefore, (\ref{annVVWW4}) and (\ref{Xenon}) leave a sizable window in the parameter space
$(M_V,\,\lambda_V)$ for which the dark matter abundance obtained via thermal freeze-out is significantly
smaller
than the observed dark matter relic density, and still the value of $\lambda_V$ (as a function of
$M_V$) is below the XENON100 bound (as shown in Figure \ref{Fig:13}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.65\textwidth, clip ]{ThermalvsNonThermalDM.eps}
\caption{\small
The dashed-black line corresponds to the limits on $\lambda_V$ from XENON100 (\ref{Xenon}).
The solid-black line corresponds to the value of $\lambda_V$ for which the observed DM relic density is obtained
via thermal freeze-out (\ref{annVVWW4}): below it the thermal DM density is larger
than the observed DM relic density (and thus this region is excluded). Above, the thermal DM density
is only a fraction of the observed DM relic density, and the red lines show the percentage of relic density
accounted for by the thermal density.}
\label{Fig:13}
\end{center}
\end{figure}
\subsection{Fate of Non-Thermally Produced Vector Dark Matter}
\label{section31}
Given the results from the previous section (summarized in Figure \ref{Fig:13}), it is fair
to ask whether, in the region of $(M_V,\,\lambda_V)$ parameter space in which the thermal component
is not enough to account for the observed dark matter relic density, dark matter produced
non-thermally at the EW phase transition could account for the extra needed amount.
Using the results from production efficiency of heavy vector boson dark matter obtained in sections
\ref{section233} and \ref{section24}, we show in Figure \ref{Fig:14} the value of $\lambda_V$ (as a
function of $M_V$) for which the amount of non-thermal vector boson production equals the observed
dark matter relic density (dashed-blue line). Then, for values of $\lambda_V$ above the one yielding
the thermal cross section $\left\langle \sigma\, v\right\rangle_{\mathrm{WMAP}}$,
non-thermal production of heavy vector bosons is so efficient that it generates amounts of dark matter much
larger than the observed dark matter relic density.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.65\textwidth, clip ]{ThermalvsNonThermalDM2.eps}
\caption{\small
Black lines are the same as in Figure \ref{Fig:13}.
The dashed-blue line corresponds to the value of $\lambda_V$ needed for the non-thermally produced energy
density in vector bosons (with a direct coupling to the Higgs) to be equal to the DM relic density, for
$\gamma_w = 10^8$. The red lines show the values of $\lambda_V$ yielding the "non-thermal" cross section
(\ref{Boltzmann4}) (for which the final dark matter abundance, taking into account its evolution
after non-thermal production, corresponds to the observed dark matter relic density)
for several values of $T_{\mathrm{EW}}$.}
\label{Fig:14}
\end{center}
\end{figure}
Assuming that at the time of the EW phase transition vector boson dark matter is already frozen out
($T_{\mathrm{fo}} \simeq M_V/20 > T_{\mathrm{EW}}$), we can study the evolution of the non-thermally
generated dark matter abundance via a simple Boltzmann equation in which the comoving dark matter number
density $Y$ fulfills $Y(z) \gg Y_{EQ}(z)$ (with $Y_{EQ}(z)$ being the equilibrium comoving number density),
yielding
\begin{equation}
\label{Boltzmann1}
\frac{d Y}{dz} = -
\alpha \, \frac{ \left\langle \sigma \, v \right\rangle M_{\mathrm{Pl}} \,M_V}{z^2} \,
Y(z)^2 \, \longrightarrow
\, \frac{d y}{dz} = - \frac{1}{z^2}\, y^2(z)
\end{equation}
\noindent with $\alpha = (4 \pi^2 \sqrt{\xi \, g_{*}})/45 \simeq 2.642$ ($g_* \sim 100$ being the
number of relativistic degrees of freedom in the thermal plasma and $\xi \equiv 90/(32 \,\pi^3)$),
$M_{\mathrm{Pl}} = 1.2 \times 10^{19}$ GeV and $y(z) = \alpha \, \left\langle \sigma \, v
\right\rangle M_{\mathrm{Pl}} \,M_V \, Y(z)$.
Integration of (\ref{Boltzmann1}) for $z > z_{\mathrm{EW}}$ yields
\begin{equation}
\label{Boltzmann2}
\frac{1}{y(z)} - \frac{1}{y(z_{\mathrm{EW}})} = \frac{1}{z_{\mathrm{EW}}}-
\frac{1}{z}\, \longrightarrow \,
\frac{1}{y(\infty)} = \frac{1}{z_{\mathrm{EW}}} + \frac{1}{y(z_{\mathrm{EW}})}
\end{equation}
Then, given that the non-thermally produced vector boson abundance is much larger
than the observed relic density in the $(M_V,\,\lambda_V)$ region of interest, we can take the limit
$y(z_{\mathrm{EW}}) \gg z_{\mathrm{EW}}$, obtaining
\begin{equation}
\label{Boltzmann3}
y(\infty) \simeq z_{\mathrm{EW}}
\end{equation}
From (\ref{Boltzmann3}), we immediately obtain that the value of the annihilation
cross section that will yield the observed dark matter relic density once the non-thermally
generated dark matter evolves after the EW phase transition is simply given by
\begin{equation}
\label{Boltzmann4}
\left\langle \sigma\, v\right\rangle = \left\langle \sigma\, v\right\rangle_{\mathrm{WMAP}}
\frac{T_{\mathrm{fo}}}{T_{\mathrm{EW}}}
\end{equation}
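For orientation, (\ref{Boltzmann4}) can be evaluated directly. The numbers below are illustrative assumptions of this sketch, not outputs of the analysis: the benchmark $\left\langle \sigma\, v\right\rangle_{\mathrm{WMAP}} \simeq 2.6 \times 10^{-9}\ \mathrm{GeV}^{-2}$ (the standard thermal-relic value, $\sim 3\times 10^{-26}\ \mathrm{cm}^3/\mathrm{s}$) and $T_{\mathrm{fo}} \simeq M_V/20$:

```python
# Illustrative evaluation of eq. (Boltzmann4): the "non-thermal" annihilation
# cross section for which wash-out of the non-thermally produced abundance
# ends at the observed relic density. Assumed values, not results:
sigma_v_wmap = 2.6e-9      # GeV^-2, standard thermal-relic benchmark
T_ew = 100.0               # GeV, EW phase transition temperature

for M_V in (2e3, 5e3, 1e4):            # multi-TeV dark matter masses, GeV
    T_fo = M_V / 20.0                  # freeze-out temperature, T_fo ~ M_V/20
    sigma_v = sigma_v_wmap * T_fo / T_ew
    print(f"M_V = {M_V:>8.0f} GeV: <sigma v> = {sigma_v:.2e} GeV^-2 "
          f"({T_fo / T_ew:.0f} x thermal)")
```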
The red lines in Figure \ref{Fig:14} show the values of $\lambda_V$ yielding the correct ``non-thermal''
annihilation cross section (\ref{Boltzmann4}) for several values of $T_{\mathrm{EW}}$.
This analysis shows that non-thermal production of multi-TeV vector boson dark matter at the EW phase transition
(in the region of $(M_V,\,\lambda_V)$ parameter space in which the amount of dark matter yielded by thermal
freeze-out is not enough to account for the observed dark matter relic density) is efficient enough
to generate a dark matter amount much larger than the observed relic density. This results in a reactivation
of thermalization processes that lead to partial wash-out of the non-thermally
generated dark matter (wash-out is not complete due to the reactivation happening
for $T < T_{\mathrm{EW}} < T_{\mathrm{fo}}$),
meaning that multi-TeV dark matter may have a thermal spectrum despite a
large fraction of it having been produced non-thermally at the EW phase transition.
As shown in Figure \ref{Fig:14}, in the presence of these non-thermally produced WIMPs,
the relation between mass and coupling giving rise to the
observed dark matter relic density gets modified with respect to the usual
thermal freeze-out scenario, leading to better detection prospects in the multi-TeV region
for future dark matter direct detection experiments.
\section{\textit{Baby-zillas}: Super-Heavy Dark Matter from the EW Phase Transition}
\label{BabyWIMP2}
In this section we study the production of super-heavy dark matter with a mass $M_X$
satisfying $M_{\mathrm{GUT}} \gg M_X \gg v_{\mathrm{EW}}$ in the bubble collisions at the end of a very strong EW phase
transition. We call these dark matter particles {\em baby-zillas} because of their many similarities (apart from the smaller mass) to the WIMP-zilla scenario.
From Figure \ref{Fig:12}, it can be inferred that for $\gamma_w \sim 10^{14} - 10^{15}$ non-thermal heavy vector
boson production in elastic bubble collisions can be so efficient as to generate the observed dark matter relic
density even for
very large dark matter masses $M_V \sim 10^6 - 10^8$ GeV and perturbative values of the coupling $\lambda_V$.
Using (\ref{ComEnergydensityRelative}), we plot in Figure \ref{Fig:15} the region in parameter
space ($M_V, \lambda_V$) for which non-thermal $V_{\mu}$ production directly yields the observed dark matter
relic density.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.55\textwidth, clip ]{Babyzillas.eps}
\caption{\small
Region in the ($M_V, \lambda_V$) parameter space for which non-thermal $V_{\mu}$ production yields the observed dark
matter relic density for $l_w = 15/T_{\mathrm{EW}}$ (with $T_{\mathrm{EW}} = 100$ GeV) and
$\gamma_w = 10^{14} - 10^{15}$.}
\label{Fig:15}
\end{center}
\end{figure}
\subsection{Bounds on the Reheating Temperature After Inflation}
\label{section3}
A stable particle with mass $M_V \sim 10^5 - 10^8$ GeV would yield a much larger relic abundance than the observed DM relic density were it in thermal equilibrium
at some stage after inflation.
For such a massive species, the annihilation cross section is always smaller than the one needed to yield the observed DM relic
density through thermal freeze-out. This particle species must therefore never have reached thermal
equilibrium after the end of inflation. This sets an upper bound on the reheating temperature after inflation,
specifically $T_{\mathrm{RH}} < T_{\mathrm{fo}}$ (with $T_{\mathrm{fo}}$ being the temperature below which
the particle is decoupled from the thermal plasma). For a heavy vector boson $V_{\mu}$
annihilating into $SU(2)$ gauge bosons (the most important annihilation channel in this case) through the Higgs,
$T_{\mathrm{fo}}$ satisfies
\begin{equation}
\frac{M_V}{T_{\mathrm{fo}}} \simeq 20.4 + \mathrm{Log}\left( \frac{M_V}{100 \,\mathrm{GeV}}\right) +
\mathrm{Log}\left( \frac{\left\langle \sigma v \right\rangle}{10^{-9}\,\mathrm{GeV}^{-2}}\right)
\end{equation}
\noindent where the thermally averaged annihilation cross section $\left\langle \sigma v \right\rangle$
is given by (\ref{annVVWW2}). In Figure \ref{Fig:16} we plot
the minimum value of $z$ (corresponding to the maximum allowed value of the reheating
temperature $T_{\mathrm{RH}}$) as a function of the mass $M_V$ for the range of $\lambda_V$ values giving rise
to the observed dark matter relic abundance for $\gamma_w = 10^{14} - 10^{15}$ (see Figure \ref{Fig:15}). We see that
the upper bound on $T_{\mathrm{RH}}$ is relatively insensitive to the precise value of $\gamma_w$, and roughly scales
as $T^{\mathrm{max}}_{\mathrm{RH}} \sim M_V/10$.
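The freeze-out relation above fixes $z_{\mathrm{fo}} = M_V/T_{\mathrm{fo}}$, and with it the maximal reheating temperature $T^{\mathrm{max}}_{\mathrm{RH}} = T_{\mathrm{fo}}$, by direct evaluation. A sketch with illustrative inputs ($\mathrm{Log}$ taken here as the natural logarithm, and a placeholder cross section standing in for (\ref{annVVWW2})):

```python
import math

def z_fo(M_V, sigma_v):
    """M_V / T_fo from the freeze-out relation in the text.
    Assumes Log is the natural logarithm; sigma_v in GeV^-2."""
    return 20.4 + math.log(M_V / 100.0) + math.log(sigma_v / 1e-9)

M_V = 1.0e6          # GeV, a baby-zilla mass
sigma_v = 1.0e-13    # GeV^-2, placeholder; eq. (annVVWW2) gives the real value
z = z_fo(M_V, sigma_v)
T_rh_max = M_V / z   # maximal reheating temperature allowed by T_RH < T_fo
print(f"z_fo = {z:.1f}, T_RH^max = {T_rh_max:.2e} GeV")
```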
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.55\textwidth, clip ]{Babyzillas2.eps}
\caption{\small
Bounds on the reheating temperature after inflation from the requirement that dark matter
never reaches thermal equilibrium after inflation, namely $T_{\mathrm{RH}} \leq T_{\mathrm{fo}}$, as
a function of the dark matter mass $M_V$, assuming the $\lambda_V(M_V)$ for which non-thermal production
yields the observed relic abundance (as shown in Figure \ref{Fig:15}).
\label{Fig:16}
\end{center}
\end{figure}
\section{Conclusions}
Dark matter may have been efficiently produced at the end of a first order EW phase transition if it has a
large coupling to the Higgs field. In this paper we investigated the conditions for this non-thermal
production mechanism to account for most of dark matter in the Universe.
We considered scalar, fermion and vector dark matter coupled to the SM through the Higgs (either via a
direct, tree-level interaction or an effective Higgs-portal coupling), and found that
production of vector bosons directly coupled to the Higgs is most efficient, while for scalars and fermions
most of the energy stored in the bubble walls is bound to be released into production of SM particles.
This analysis singles out vector dark matter in the present context.
For very inelastic bubble collisions only dark matter with $M_X \lesssim 100$ GeV can
be efficiently produced, while the production of heavier dark matter is extremely suppressed.
Unfortunately, for a dark matter mass in this range, we did not find a way to avoid subsequent
thermalization and the wash-out of the non-thermal component, and therefore in this case dark matter
production at the EW phase transition is irrelevant.
The situation is quite different for highly elastic bubble collisions.
In that case, dark matter with $M_X \gg 100$ GeV can be efficiently produced for the so-called
{\em runaway} bubbles, that expand with a very large $\gamma$-factor.
We have identified two scenarios where wash-out of dark matter produced at the EW phase transition can be naturally
avoided. One has dark matter in the multi-TeV range, which makes it possible for
non-thermally produced dark matter to remain out of thermal equilibrium after the EW phase
transition. We determined the region in the parameter space of dark matter mass and coupling to the Higgs
where the correct relic abundance is reproduced. For a given mass, the coupling has to be {\em larger} than
in the usual thermal freeze-out scenario for Higgs portal dark matter, which can be especially relevant
for direct detection searches, as it opens the possibility of detecting a signal from multi-TeV
non-thermal dark matter in the near future by XENON100 and LUX experiments.
The other scenario is {\em baby-zilla} dark matter with $M_X \sim 10^6$-$10^8$ GeV.
Surprisingly enough, such super-heavy dark matter can be produced in important quantities at the end of a strongly
first-order EW phase transition, provided the dark matter coupling to the Higgs is large, and the $\gamma$ factor
of the bubble walls is near its maximal value of $\gamma_w \sim 10^{15}$.
In order for the baby-zillas to be a viable dark matter candidate, they must have never reached
thermal equilibrium, which then constrains the reheating temperature after inflation in this scenario.
\section*{Acknowledgments}
We especially thank Francesco Riva for very useful discussions and collaboration
in the early stages of this work, and also Thomas Konstandin, Michel Tytgat, Yann Mambrini, Stephan
Huber and Stephen West for discussions and comments. The work of J.M.N. is supported by
the Science and Technology Facilities
Council (STFC) under grant number ST/J000477/1.
\section{Introduction}
Perception of just noticeably different colors and the definition of distances between
very similar colors have received considerable attention
\cite{macadam1942vsc,robertson1990hdc,ciede2000,sharma2005ccd,color_book}. In the CIE
(International Commission on Illumination) community, distances of up to 7 CIELAB units, where 1
CIELAB unit approximately corresponds to 1 just noticeable difference, are considered medium
distances \cite{large_color_diffs}. In this paper we refer to 0-7 CIELAB distances as very similar,
as they capture just a small fraction of similar colors.
See \figRefText \ref{euc7}.
\begin{figure}[h!] \centering
\psfrag{distance}[c][c]{{\footnotesize distance}}
\psfrag{distgraphtitle}[c][c]{{\footnotesize Euclidean distance to blue in L*a*b* space}}
\includegraphics[width=0.7\textwidth]{figs/euc/a.eps}
\caption{ This figure should be viewed in color, preferably on a computer screen. \newline
The x-axis is an image of colors, sorted by their Euclidean distance in L*a*b* space to blue. We can see that distances of up to 7 CIELAB units
capture just a small fraction of similar colors (zoom in to see, on the left, a black line touching the y-axis at 7 CIELAB units).
}
\label{euc7}
\end{figure}
The CIEDE2000 color difference is considered the state of the art perceptual color difference
\cite{ciede2000,ciede2000_test_on_crt,large_color_diffs}. CIEDE2000's recommended range for use is
0 to 5 CIELAB units \cite{cie_recommendation_5_units}. The COM dataset was used to train and
perceptually test the CIEDE2000 color difference. More than 95 percent of the distances between
color pairs in the COM dataset are below 5 CIELAB units apart.
It was pointed out that the resulting color differences do not correspond well with human
perception for medium to large distances. Rubner \etal \cite{rubner_emd} and Ruzon and Tomasi
\cite{ruzon_compass} used a negative exponent on the color difference. Namely, all totally
different colors are essentially assigned the same large distance. Pele and Werman
\cite{Pele-iccv2009} noted that a negative exponent also changes the values in the small-distance
range, and both they and Rubner \etal \cite{rubner_emd} observed a reduction
in performance due to this change. Pele and Werman therefore suggested thresholding the color difference
instead, as thresholding does not change the small distances. Thresholding color distances is justified by the fact that
if people are directly asked for a judgment of the dissimilarity of colors far apart in color
space, subjects typically find themselves unable to express a more precise answer than ``totally
different'' \cite{indow1994metrics}.
An additional advantage of thresholding color distances is that it allows fast computation of cross-bin distances such as the Earth Mover's Distance \cite{Pele-iccv2009}, or the transformation into a similarity measure (one minus the distance divided by the threshold), as in the Quadratic-Chi \cite{peleeccv2010}.
\begin{figure}[h!] \centering
\psfrag{similar}[c][c]{{\footnotesize similar}}
\psfrag{different}[c][c]{{\footnotesize different}}
\psfrag{colors}[c][c]{{\footnotesize colors}}
\psfrag{(a)}[c][c]{(a)}
\psfrag{(b)}[c][c]{(b)}
\psfrag{(c)}[c][c]{(c)}
\psfrag{(d)}[c][c]{(d)}
\psfrag{(e)}[c][c]{(e)}
\psfrag{(f)}[c][c]{(f)}
\includegraphics[width=0.8\textwidth]{figs/color_patches/color_patches.eps}
\caption{ This figure should be viewed in color, preferably on a computer screen. \newline Each
sub-figure (a)-(d) contains a pair of similar colors and a pair of different colors.
In all of these examples, the CIEDE2000 distance between the visually similar
colors is higher than the distance between the different colors.
Our proposed distance succeeds in all of these examples.
}
\label{patchesFig}
\end{figure}
This paper shows that CIEDE2000 is not a good distance for the medium range, and that using any monotonic
function of CIEDE2000 (including a thresholding function) cannot solve the problem. For example, no
thresholding function can make DarkSkyBlue more similar to Blue than to HotPink. See
\figRefText \ref{patchesFig} for more examples.
We suggest an improvement based on basic color terms. Specifically, we use Berlin and Kay's eleven
English basic color terms \cite{berlin_kay}; the generalization to other color terms is
straightforward. We suggest adding to the color difference the distance between the colors' basic
color term probability vectors. As basic color terms are correlated (\eg red and orange), we
suggest using a cross-bin distance for these probability vectors; that is, a distance which takes
the relationships between bins (each bin represents a basic color term) into account. Specifically,
we use the Earth Mover's Distance \cite{rubner_emd}, as it has been used successfully in many
applications (\eg \cite{rubner_emd,rubner_emd_comparison,ruzon_compass,Pele-iccv2009} and
references within).
The probability vectors are obtained with the color naming method developed by van de Weijer \etal
\cite{vandeweijer2009lcn}. Other methods for color naming, such as
\cite{conway1992experimental,lammens1994computational,seaborn1999fuzzy,benaventeBOV00,griffin2004optimality,mojsilovic2005computational,benavente2006data,menegaz2006discrete,menegaz2007semantics,benavente2008parametric},
can also be used. We chose the van de Weijer \etal method as it has excellent performance on real-world
images and its code is publicly available. However, CIEDE2000 was learned under
calibrated conditions, while the van de Weijer \etal method was learned from natural images. Thus,
other color naming methods might produce better results; this is left for future work.
Our proposed solution is not equivalent to increasing the weight of the hue component in the color
difference, as color names are not equivalent to hue. For example, although a rainbow spans a
continuous spectrum of colors, people see in it distinct bands which correspond to basic color
terms: red, orange, yellow, green, blue and purple. In addition, some basic color terms do not
differ in their hue component, \eg achromatic colors such as white, gray and black, or orange and
brown, which share the same hue.
A second problem that occurs when using color differences for edge detection is that many small
distances around the just noticeable difference may produce false edges. We suggest using a
sigmoid function to reduce the effect of small distances. As mentioned before, using a negative
exponent function to assign all totally different color pairs the same distance reduced
performance \cite{rubner_emd,Pele-iccv2009}. We explain this by the fact that a negative exponent
is a concave function; we show that a convex function should instead be applied to small differences.
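The concavity argument can be checked numerically. Below, a negative-exponent transform (with $\gamma=14$, the value used by Ruzon and Tomasi) is compared to a logistic sigmoid around small distances; the sigmoid parameters are illustrative, chosen only to match the form of our final distance:

```python
import math

def neg_exp(d, gamma=14.0):
    """Concave transform: its slope is largest exactly at d = 0."""
    return 1.0 - math.exp(-d / gamma)

def sigmoid(d, T=20.0, Z=10.0):
    """Convex near d = 0: nearly flat for small distances."""
    return 1.0 / (1.0 + math.exp(-(Z * d / T - Z / 2.0)))

# increments over the first few CIELAB units (around a just noticeable difference):
# the concave transform already assigns sizable values, the sigmoid does not
for d in (1.0, 2.0, 3.0):
    print(f"d={d}: neg_exp gain {neg_exp(d) - neg_exp(0.0):.4f}, "
          f"sigmoid gain {sigmoid(d) - sigmoid(0.0):.4f}")
```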
We present experimental results for color edge detection. We show that by using our new color
difference the results are perceptually more meaningful.
Our solution is just the first step in designing a perceptual color difference for the full range of
distances. Our major contribution is pointing out the problem of current state-of-the-art color
differences in the small and medium distance range.
This paper is organized as follows. Section \ref{related_work} is an overview of related
work. Section \ref{colDist_sec} introduces the new color difference. Section \ref{results_sec}
presents the results. Finally, conclusions are drawn in Section \ref{conclusions_sec}.
\begin{figure*}[t] \centering
\psfrag{a}[r][c]{{\tiny \coldist}}
\psfrag{b}[r][c]{{\tiny CIEDE2000}}
\includegraphics[width=0.95\textwidth]{figs/dists_for_col/a.eps}
\caption{ This figure should be viewed in color, preferably on a computer screen. Use the pdf
viewer's zoom to see the colors. We show colors, sorted by their distance to the color on the
left. \coldist{} is our new perceptual color difference. Several observations can be derived from
these graphs. First, our distance is perceptually better in the medium distance range. Note that
the group of similar colors (left side of each color legend) is more similar to the color on the
left using our distance. For example, in the top row light blues are close to blue, while using CIEDE2000
they are very different (and thus appear on the right). It should be noted that our
distance uses a sigmoid function, so that very similar colors on the left are essentially
assigned the same small distance and totally different colors on the right are essentially
assigned the same large distance. Finally, although our distance is perceptually more
meaningful, it is still far from being perfect. }
\label{color_dist_graph}
\end{figure*}
\section{Related Work}
\label{related_work}
MacAdam's \cite{macadam1942vsc} pioneering work on chromaticity discrimination ellipses, which
measured human perception of just noticeable differences, led the way to the development of the
L*a*b* space \cite{robertson1990hdc}, which is considered perceptually uniform; \ie for very-similar
colors, the Euclidean distance in the L*a*b* space corresponds to the human perception of color
difference well. Luo \etal \cite{ciede2000} developed the CIEDE2000 color difference which is now
considered the state of the art perceptual color difference
\cite{ciede2000,ciede2000_test_on_crt,large_color_diffs}.
Although color is commonly experienced as an indispensable quality in describing the world around
us, state-of-the-art computer vision methods are mostly based on shape description and ignore
color information. Recently this has changed with the introduction of new color descriptors
\cite{color_features_1,vdw12,vdw13,vdw11,vandeweijer2006clf,BurghoutsCVIU2009,songlocal}. However, although color is a
point-wise property (\eg bananas are yellow), most of these features capture geometric relations
such as color edges.
Wertheimer \cite{wert} suggested that among perceptual stimuli there are ``ideal types'' that are
anchor points for perception. Rosch \cite{rosch1975cognitive} proposed that in certain perceptual
domains, such as color, salient prototypes develop non-arbitrarily. An influential paper by Berlin and
Kay \cite{berlin_kay} defined basic colors as color names in a language which are applied to
diverse classes of objects and whose meaning is not subsumable under one of the other basic color
names and which are used consistently and by consensus by most of the speakers of the language. In
their pioneering anthropological study, they found that color was usually partitioned into a
maximum of eleven basic color categories of which three were achromatic (black, white, grey) and
eight chromatic (red, green, yellow, blue, purple, orange, pink and brown). This partitioning
reflects a universal tendency to group colors around specific focal points, as conjectured by Wertheimer
\cite{wert} and Rosch \cite{rosch1975cognitive}.
Considerable work has been carried out in the field of computational color naming, see \eg
\cite{conway1992experimental,lammens1994computational,seaborn1999fuzzy,benaventeBOV00,griffin2004optimality,mojsilovic2005computational,benavente2006data,menegaz2006discrete,menegaz2007semantics,benavente2008parametric,vandeweijer2009lcn}
and references within. Recently van de Weijer \etal \cite{vandeweijer2009lcn} presented a new color
naming method based on real-world images. The color names are Berlin and Kay's \cite{berlin_kay}
eleven English basic color terms. Van de Weijer and Schmid \cite{vandeweijer2007acn} showed that a
color description based on these color names outperforms descriptions based on photometric
invariants. The explanation is that photometric invariance reduces the discriminative power of the
descriptor.
Inspired by van de Weijer and Schmid's work, we suggest using the basic color names to correct the
state-of-the-art color difference, CIEDE2000, in the medium distance range.
\section{\coldist : The New Color Difference}
\label{colDist_sec}
Given two colors $C^1=[R^1,G^1,B^1]$ and $C^2=[R^2,G^2,B^2]$, we first convert\footnote{We used
Matlab's default conversion which uses CIE illuminant D50 as the reference illuminant, known as
the white point, which is also the default illuminant specified in the International Color
Consortium specifications.} them into L*a*b* : $S^1=[L^1,a^1,b^1]$ and
$S^2=[L^2,a^2,b^2]$. Second, we compute the basic color term probability vectors: $P^1,P^2$; where
$P^{n}_i$ is the probability that the color $C^n$ is the basic color term $i$ (\ie black, blue,
brown, grey, green, orange, pink, purple, red, white or yellow). These probability vectors are
computed using the van de Weijer \etal color naming method \cite{vandeweijer2009lcn}. Now each
color $C^n$ is represented by a 14-dimensional vector: $V^n=[S^n,P^n]=[L^n , a^n , b^n, P^n_{1},
\ldots,P^n_{11}]$. The distance between the two colors (parameterized with $T$, $D$, $\alpha$ and
$Z$) is defined as:
\begin{align}
d_1(S^1,S^2)&= \frac{\min( \text{CIEDE2000}(S^1,S^2) , T )}{T} \label{d1_eq} \\
d_2(P^1,P^2)&= \text{EMD}(P^1,P^2,D) \label{d2_eq} \\
d_3(V^1,V^2)&= \alpha d_1 + (1-\alpha)d_2 \label{d3_eq} \\
\text{\coldist}& (V^1,V^2)= \frac{1}{1+e^{-(Zd_3 - \frac{Z}{2})}} \label{coldist_eq}
\end{align}
In \eqRefText \ref{d1_eq}, $d_1$ is a thresholded and scaled CIEDE2000 color difference. We threshold
it as it is recommended for use only for small distances \cite{cie_recommendation_5_units}. We
used $T=20$ as was used in Pele and Werman \cite{Pele-iccv2009}. We divide by $T$ so that $d_1$
is between 0 and 1.
In \eqRefText \ref{d2_eq}, $d_2$ is the distance between the two basic color term probability vectors.
As the bins in the eleven-term probability vectors are correlated (\eg orange and red), we
use the Earth Mover's Distance, which takes this correlation into account. The correlation is encoded
in $D$, an $11 \times 11$ matrix where $D_{ij}$ is the distance between basic color terms
$i$ and $j$. We estimated $D$ using the joint distribution of the basic color
terms. That is, given the matrix $M$ of all probability vectors for the colors in the RGB cube
(a $2^{15} \times 11$ matrix, as each dimension of the RGB cube was quantized in steps of $8$
\cite{vandeweijer2009lcn}), we define $D_{ij}$ as:
\begin{align}
\hat{D}_{ij}&= 1 - 2 \left( \frac{\sum_{n} \min( M_{ni},M_{nj} ) }{ \sum_{n} M_{ni}+M_{nj} } \right) \\
D_{ij}&= \frac{ \min (\hat{D}_{ij},t) }{t}
\end{align}
We threshold $\hat{D}_{ij}$ as the EMD is recommended for use with thresholded ground distances
\cite{Pele-iccv2009}; we used the threshold $t=0.7$. Finally, we scale it so that $0 \leq D_{ij}
\leq 1$ (which implies $0 \leq d_2 \leq 1$). The resulting matrix (see \figRefText
\ref{basic_color_ground_distance_fig}) is perceptually plausible; \ie similar basic color terms
are: grey and white, grey and black, orange and red, etc.
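The estimation of $D$ can be sketched as follows; here $M$ is a small random stand-in for the real $2^{15}\times 11$ table of color-name probabilities from \cite{vandeweijer2009lcn}:

```python
# Sketch of the ground-distance estimation from the joint distribution of
# basic color terms. M is an (N x 11) matrix of color-name probability
# vectors; a random stand-in replaces the real RGB-cube table here.
import random

random.seed(0)
N, K = 1000, 11
M = [[random.random() for _ in range(K)] for _ in range(N)]

def ground_distance(M, t=0.7):
    K = len(M[0])
    D = [[0.0] * K for _ in range(K)]
    for i in range(K):
        for j in range(K):
            num = sum(min(row[i], row[j]) for row in M)
            den = sum(row[i] + row[j] for row in M)
            d_hat = 1.0 - 2.0 * num / den       # histogram-intersection form
            D[i][j] = min(d_hat, t) / t         # threshold at t, scale to [0, 1]
    return D

D = ground_distance(M)
print(D[0][0])   # diagonal entries are exactly 0
```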
\begin{figure}[h!] \centering
\psfrag{0}{\scriptsize 0}
\psfrag{0.1}{\scriptsize 0.1}
\psfrag{0.2}{\scriptsize 0.2}
\psfrag{0.3}{\scriptsize 0.3}
\psfrag{0.4}{\scriptsize 0.4}
\psfrag{0.5}{\scriptsize 0.5}
\psfrag{0.6}{\scriptsize 0.6}
\psfrag{0.7}{\scriptsize 0.7}
\psfrag{0.8}{\scriptsize 0.8}
\psfrag{0.9}{\scriptsize 0.9}
\psfrag{1}{\scriptsize 1}
\psfrag{black}[c][l]{\scriptsize black}
\psfrag{blue}[c][l]{\scriptsize blue}
\psfrag{brown}[c][l]{\scriptsize brown}
\psfrag{grey}[c][l]{\scriptsize grey}
\psfrag{green}[c][l]{\scriptsize green}
\psfrag{orange}[c][l]{\scriptsize orange}
\psfrag{pink}[c][l]{\scriptsize pink}
\psfrag{purple}[c][l]{\scriptsize purple}
\psfrag{red}[c][l]{\scriptsize red}
\psfrag{white}[c][l]{\scriptsize white}
\psfrag{yellow}[c][l]{\scriptsize yellow}
\psfrag{blackx}[c][c]{\scriptsize black}
\psfrag{bluex}[c][c]{\scriptsize blue}
\psfrag{brownx}[c][c]{\scriptsize brown}
\psfrag{greyx}[c][c]{\scriptsize grey}
\psfrag{greenx}[c][c]{\scriptsize green}
\psfrag{orangex}[c][c]{\scriptsize orange}
\psfrag{pinkx}[c][c]{\scriptsize pink}
\psfrag{purplex}[c][c]{\scriptsize purple}
\psfrag{redx}[c][c]{\scriptsize red}
\psfrag{whitex}[c][c]{\scriptsize white}
\psfrag{yellowx}[c][c]{\scriptsize yellow}
\includegraphics[width=0.95\textwidth]{figs/basic_color_ground_distance/basic_color_ground_distance2.eps}
\caption{
The learned basic color terms ground distance matrix.
}
\label{basic_color_ground_distance_fig}
\end{figure}
The Earth Mover's Distance (EMD) \cite{rubner_emd} is defined as the minimal cost that must be paid
to transform one histogram into another, where there is a ``ground distance'' (that is, the matrix
$D$) between the basic features that are aggregated into the histogram. Here the basic features are
the eleven English basic color terms. The formula for the EMD between the two probability vectors
$P^1$ and $P^2$ is defined as\footnote{This is a simplification of the original definition for the
case where the histograms are probability vectors.}:
\begin{align}
\begin{split}
\text{EMD}(P^1,P^2,D)&= \min_{\{F_{ij}\}} \sum_{i,j} F_{ij} D_{ij} \;\;\;\; \text{s.t.} \\
\sum_j F_{ij} = P^1_i \;\; &, \;\;
\sum_i F_{ij} = P^2_j \;\; , \\
\sum_{i,j} F_{ij} = 1 \;\; &, \;\; F_{ij} \geq 0
\end{split}
\label{EMD_orig_eq}
\end{align}
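For probability vectors, \eqRefText \ref{EMD_orig_eq} is a small transportation linear program (the $\sum_{i,j} F_{ij} = 1$ constraint is implied by the marginal constraints when the vectors sum to one). A sketch using SciPy's LP solver, which we assume is available; the toy ground distance is illustrative:

```python
# EMD between two probability vectors as a transportation LP.
import numpy as np
from scipy.optimize import linprog

def emd(P1, P2, D):
    """F is vectorized row-major: F[i*K + j] = F_ij."""
    K = len(P1)
    c = np.asarray(D, dtype=float).ravel()        # objective: sum_ij F_ij D_ij
    A_eq, b_eq = [], []
    for i in range(K):                            # sum_j F_ij = P1_i
        row = np.zeros(K * K); row[i * K:(i + 1) * K] = 1.0
        A_eq.append(row); b_eq.append(P1[i])
    for j in range(K):                            # sum_i F_ij = P2_j
        row = np.zeros(K * K); row[j::K] = 1.0
        A_eq.append(row); b_eq.append(P2[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

# toy three-term example with a symmetric ground distance
D = [[0.0, 0.4, 1.0],
     [0.4, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
print(emd([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], D))   # all mass moves one bin: 0.4
print(emd([0.2, 0.8, 0.0], [0.2, 0.8, 0.0], D))   # identical vectors: 0.0
```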
In \eqRefText \ref{d3_eq}, $d_3$ is a linear combination of $d_1$ and $d_2$. We used
$\alpha=\frac{1}{2}$.
Finally, in \eqRefText \ref{coldist_eq}, the distance is scaled so that it lies in the range
$[-\frac{Z}{2},\frac{Z}{2}]$ (we used $Z=10$), and then the logistic function (a sigmoid)
is applied. The sigmoid function reduces the effect of small distances and essentially
gives all totally different colors the same distance.
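The combination in \eqRefText \ref{d3_eq}-\ref{coldist_eq} can be sketched directly, assuming the two sub-distances $d_1$ (thresholded, scaled CIEDE2000) and $d_2$ (EMD between the color-name probability vectors) have already been computed and lie in $[0,1]$:

```python
import math

def coldist(d1, d2, alpha=0.5, Z=10.0):
    """Eqs. (3)-(4): linear combination followed by the logistic sigmoid."""
    d3 = alpha * d1 + (1.0 - alpha) * d2
    return 1.0 / (1.0 + math.exp(-(Z * d3 - Z / 2.0)))

print(coldist(0.05, 0.0))   # very similar colors: close to 0
print(coldist(0.5, 0.5))    # mid-range pair: exactly 0.5
print(coldist(1.0, 1.0))    # totally different colors: close to 1
```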
\section{Results}
\label{results_sec}
In this section we present color edge detection results. We used Ruzon and Tomasi's generalized
compass edge detector \cite{ruzon_compass} for two reasons. First, the code is
publicly available. Second, the code uses only color cues for the edge detection, which enables us to
isolate the performance of the color difference.
Ruzon and Tomasi's method \cite{ruzon_compass} divides a circular window around each pixel in half
with a line segment. Then it computes a sparse color histogram (coined a signature in their paper)
for each half and computes the Earth Mover's Distance (EMD) \cite{rubner_emd} between the two
histograms. The EMD uses a ground distance matrix $D$ between the colors. Ruzon and Tomasi
converted the images to L*a*b* and then used a negative exponent of the Euclidean distance as the
ground distance between colors:
\begin{align}
d_e(S^1,S^2)&= 1-e^{ \frac{-||[L^1,a^1,b^1]-[L^2,a^2,b^2]||_2}{\gamma}}
\end{align}
Ruzon and Tomasi used $\gamma=14$ in their experiments. We compare the edge detection results using
this distance to our proposed \coldist. We also compare to $d_1$, the thresholded CIEDE2000
distance used by Pele and Werman for image retrieval \cite{Pele-iccv2009}. In addition, we tried our
proposed \coldist{} without the sigmoid function, without the color correction ($\alpha=1$), and
without the CIEDE2000 term ($\alpha=0$), but the results using the full \coldist{} were the best. Results
are presented in \figsRefText \ref{res_1},\ref{res_2},\ref{res_3},\ref{res_4}. They show that the new color
difference detects color edges much better than the state of the art; the resulting edge
maps are much cleaner. See the figure captions for more details.
\begin{figure*}[htbp] \centering
\begin{tabular}{cc}
\includegraphics[width=0.3\textwidth]{figs/RES/1/I1.eps} &
\includegraphics[width=0.3\textwidth]{figs/RES/1/I2.eps} \\
(NE) & (TC) \\
\includegraphics[width=0.3\textwidth]{figs/RES/1/I3.eps} &
\includegraphics[width=0.3\textwidth]{figs/RES/1/im.eps} \\
(\coldist) & (IM)
\end{tabular}
\caption{ Edge detection with the generalized compass edge detection \cite{ruzon_compass} using the
following color differences: (NE) A negative exponent applied on the Euclidean distance in L*a*b*
space (used in \cite{ruzon_compass}). (TC) A thresholded CIEDE2000 distance (used in
\cite{Pele-iccv2009} for image retrieval). See \eqRefText \ref{d1_eq}. (\coldist) Our proposed
\coldist. (IM) The original image. \newline Our result is much cleaner. Note that our method
detects the right boundary of the basket without detecting many false edges, while in (NE) and
(TC) the magnitude of the false edges is larger than that of the basket's right boundary. }
\label{res_1}
\end{figure*}
\begin{figure*}[htbp] \centering
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{figs/RES/2/I1.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/2/I2.eps} \\
(NE) & (TC) \\
\includegraphics[width=0.45\textwidth]{figs/RES/2/I3.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/2/im.eps} \\
(\coldist) & (IM) \\
\newline \\
\includegraphics[width=0.45\textwidth]{figs/RES/3/I1.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/3/I2.eps} \\
(NE) & (TC) \\
\includegraphics[width=0.45\textwidth]{figs/RES/3/I3.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/3/im.eps} \\
(\coldist) & (IM)
\end{tabular}
\caption{ Edge detection with the generalized compass edge detection \cite{ruzon_compass} using the
following color differences: (NE) A negative exponent applied on the Euclidean distance in L*a*b*
space (used in \cite{ruzon_compass}). (TC) A thresholded CIEDE2000 distance (used in
\cite{Pele-iccv2009} for image retrieval). See \eqRefText \ref{d1_eq}. (\coldist) Our
proposed \coldist. (IM) The original image. \newline
Our results are much cleaner. Note, in the top row, the clean detection of the bushes' boundaries.
}
\label{res_2}
\end{figure*}
\begin{figure*}[htbp] \centering
\begin{tabular}{cc}
\includegraphics[width=0.3\textwidth]{figs/RES/5/I1.eps} &
\includegraphics[width=0.3\textwidth]{figs/RES/5/I2.eps} \\
(NE) & (TC) \\
\includegraphics[width=0.3\textwidth]{figs/RES/5/I3.eps} &
\includegraphics[width=0.3\textwidth]{figs/RES/5/im.eps} \\
(\coldist) & (IM)
\end{tabular}
\caption{ Edge detection with the generalized compass edge detection \cite{ruzon_compass} using the
following color differences: (NE) A negative exponent applied on the Euclidean distance in L*a*b*
space (used in \cite{ruzon_compass}). (TC) A thresholded CIEDE2000 distance (used in
\cite{Pele-iccv2009} for image retrieval). See \eqRefText \ref{d1_eq}. (\coldist) Our
proposed \coldist. (IM) The original image. \newline Our results are much cleaner. }
\label{res_3}
\end{figure*}
\begin{figure*}[htbp] \centering
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{figs/RES/6/I1.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/6/I2.eps} \\
(NE) & (TC) \\
\includegraphics[width=0.45\textwidth]{figs/RES/6/I3.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/6/im.eps} \\
(\coldist) & (IM) \\
\newline \\
\includegraphics[width=0.45\textwidth]{figs/RES/4/I1.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/4/I2.eps} \\
(NE) & (TC) \\
\includegraphics[width=0.45\textwidth]{figs/RES/4/I3.eps} &
\includegraphics[width=0.45\textwidth]{figs/RES/4/im.eps} \\
(\coldist) & (IM)
\end{tabular}
\caption{ Edge detection with the generalized compass edge detection \cite{ruzon_compass} using the
following color differences: (NE) A negative exponent applied on the Euclidean distance in L*a*b*
space (used in \cite{ruzon_compass}). (TC) A thresholded CIEDE2000 distance (used in
\cite{Pele-iccv2009} for image retrieval). See \eqRefText \ref{d1_eq}. (\coldist) Our proposed
\coldist. (IM) The original image. \newline Our results are much cleaner. Note the strong
responses on the bear's fur and on the T-shirt of the left person in the top image using (NE)
and (TC). The spots around the swimmer in all methods are due to successful detection of
the water drops.}
\label{res_4}
\end{figure*}
\section{Conclusions}
We presented a new color difference, \coldist{}, and showed that it is perceptually more
meaningful than the state-of-the-art color difference CIEDE2000. We believe that this is just the
first step in designing perceptual color differences that perform well in the medium range.
It is easy to generalize our method to other color name sets (such as Russian, which separates
blue into goluboi and siniy). All one needs to do is to calculate the ground distance between all
color terms, which can be done using the joint distribution of the new set of color terms. In
future work it will be interesting to examine other color naming methods such as
\cite{conway1992experimental,lammens1994computational,seaborn1999fuzzy,benaventeBOV00,griffin2004optimality,mojsilovic2005computational,benavente2006data,menegaz2006discrete,menegaz2007semantics,benavente2008parametric}.
A major difficulty of analyzing color images is the illumination variability of scenes. Color
invariants are often used to overcome this problem. However, Van de Weijer \etal
\cite{vandeweijer2009lcn,vandeweijer2007acn} showed that invariants are not discriminative
enough. For example, invariants usually do not distinguish between achromatic colors (black, gray and
white). Using color constancy or partial normalization algorithms
\cite{color_constancy_0,color_constancy_1,color_constancy_2,color_constancy_3,color_constancy_4,vdw8,lucolor,bianco2008improving,tancolor}
which do not necessarily reduce all distinctiveness may partially alleviate this problem. Such an
approach was used to improve color naming by Benavente \etal \cite{benaventeBOV00}.
Color perception is also affected by spatial and texture cues. It will be interesting to combine
\coldist{} with spatial and texture models \cite{texture_spatial_color_2,texture_spatial_color_1,texture_spatial_color_3}.
\label{conclusions_sec}
{\small
\bibliographystyle{ieee}
|
1712.04410
|
\section{Introduction}
The general Degasperis-Procesi model (\cite{DegProc}, 1999) is the five-parameter family of conservation laws
\begin{align}
&\frac{\partial }{\partial t}\left\{u-\alpha^2\varepsilon^2\frac{\partial^2 u}{\partial x^2}\right\}\label{1}\\
&+\frac{\partial}{\partial x}\left\{c_0u+c_1u^2
-c_2\varepsilon^2\Big(\frac{\partial u}{\partial x}\Big)^2+\varepsilon^2\big(\gamma-c_3u\big)\frac{\partial^2 u}{\partial x^2}\right\}=0, \; x \in \mathbb{R}^1, \; t > 0,\notag
\end{align}
which describes, in particular, the dynamics of out-flows of shallow water.
Here $\alpha$, $c_0,\dots,c_3$, $\gamma$ are real parameters and $\varepsilon$ characterizes the dispersion.
It is known (see e.g. \cite{ELY}) that the family (\ref{1}) contains only three special cases that satisfy the asymptotic integrability condition: the Korteweg-de Vries, the Camassa-Holm, and the Degasperis-Procesi equations. In more detail:
1. Obviously, if we set $\alpha=c_2=c_3=0$, then we obtain the KdV equation
\begin{equation}
\frac{\partial u}{\partial t}+\frac{\partial}{\partial x}\left\{c_0u+c_1u^2
+\gamma\varepsilon^2\frac{\partial^2 u}{\partial x^2}\right\}=0,\label{2}
\end{equation}
which describes the wave propagation at the free surface of shallow water under the influence of gravity.
2. For $c_1=3c_3/(2\alpha^2)$, $c_2=c_3/2$, $\gamma=0$, and $v=c_3u$ Eq. (\ref{1}) becomes the Camassa-Holm equation
modeling the propagation of shallow water waves over a flat bottom,
\begin{equation}
\frac{\partial }{\partial t}\left\{v-\alpha^2\varepsilon^2\frac{\partial^2 v}{\partial x^2}\right\}+\frac{\partial}{\partial x}\left\{c_0v+\frac{3}{2\alpha^2}v^2
-\varepsilon^2\left(\frac12\Big(\frac{\partial v}{\partial x}\Big)^2+v\frac{\partial^2 v}{\partial x^2}\right)\right\}=0.\label{3}
\end{equation}
3. If $c_1=2c_3/\alpha^2$, $c_2=c_3$, $c_0=\gamma=0$, and $v=c_3u$, then Eq. (\ref{1}) becomes the Degasperis-Procesi equation,
\begin{equation}
\frac{\partial }{\partial t}\left\{v-\alpha^2\varepsilon^2\frac{\partial^2 v}{\partial x^2}\right\}+\frac{\partial}{\partial x}\left\{\frac{2}{\alpha^2}v^2
-\varepsilon^2\left(\Big(\frac{\partial v}{\partial x}\Big)^2+v\frac{\partial^2 v}{\partial x^2}\right)\right\}=0.\label{4}
\end{equation}
It is known that the KdV equation and the Camassa-Holm equation for $c_0>0$ admit smooth solitary wave solutions called ``solitons''; see
Fig. 1.
\begin{figure}[H]
\centering
\includegraphics[height=2in,width=3.5in]{S1ejemplo1.eps}
\caption{Soliton solution of the Camassa-Holm equation for $c_0>0$}
\label{f2}
\end{figure}
In contrast,
the Degasperis-Procesi equation and the Camassa-Holm equation for $c_0=0$ have continuous (but not smooth) solitary wave solutions called ``peakons''; see Fig. 2.
\begin{figure}[H]
\centering
\includegraphics[width=13cm]{Pejemplo3.eps}
\caption{Peakon solution of the Camassa-Holm equation for $c_0=0$}
\label{f3}
\end{figure}
Moreover, it is known that the solitary wave solutions of the equations (\ref{3}) and (\ref{4}) interact elastically, that is in the same manner as the KdV solitons.
However, the special cases (\ref{2}) - (\ref{4}) exhaust what is known about the family (\ref{1}). To begin the study of wave propagation for nonintegrable versions of (\ref{1}), we should first distinguish two basic situations: smooth and non-smooth traveling wave solutions. We do this in Section 2. The next question is the scenario of the solitary wave interaction; we discuss it in Section 3 for the case of solitons.
\section{Solitary wave solution}
Let us set the ansatz
\begin{equation}\label{5}
u=A\omega\big(\beta(x-Vt)/\varepsilon\big),
\end{equation}
where $\omega(\eta)$ is a smooth even function such that
\begin{align}
&\omega(\eta)\to0\quad\text{as}\quad \eta\to\pm\infty,\label{6}\\
&\omega(0)=1,\label{7}
\end{align}
the amplitude $A>0$ is a free parameter, and the parameters
$\beta$, $V$ should be determined. In what follows we assume that
\begin{equation}\label{8}
\gamma\geq0,\quad c_0\geq0,\quad\alpha>0,\quad c_k>0,\quad k=1,2,3.
\end{equation}
Substituting (\ref{5}) into Eq. (\ref{1}), integrating, and using (\ref{6}), we obtain the second order ODE
\begin{align}
\left\{1-\frac{c_3A }{\gamma+\alpha^2V}\omega\right\}\frac{d^2 \omega}{d \eta^2}&=\frac{c_2A }{\gamma+\alpha^2V}\left(\frac{d \omega}{d \eta}\right)^2\notag\\
&+\frac{1}{\beta^2(\gamma+\alpha^2V)}\big((V-c_0)\omega-c_1A\omega^2\big).\label{9}
\end{align}
Next we define the auxiliary parameter $\beta$,
\begin{equation}\label{10}
\beta^2=c_1/c_3,
\end{equation}
rescale the function $\omega$,
\begin{equation}\label{11}
W=p\omega,\quad p=c_3A/(\gamma+\alpha^2V),
\end{equation}
and denote the constants
\begin{equation}
r=c_3/(c_2+c_3), \quad q=c_3(V-c_0)/\big(c_1(\gamma+\alpha^2V)\big).\label{12}
\end{equation}
Using (\ref{10}) - (\ref{12}) we deduce that $W$ satisfies the equation
\begin{equation}\label{13}
(1-W)\frac{d^2 W}{d\eta^2}=\frac{1-r}{r}\left(\frac{d W}{d\eta}\right)^2+qW-W^2.
\end{equation}
The next step is the substitution
\begin{equation}\label{14}
W(\eta)=1-g(\eta)^r,
\end{equation}
which allows us to eliminate the first derivatives from the model equation (\ref{13}). Taking into account the condition (\ref{7}) and the evenness property $g(-\eta)=g(\eta)$, we pass to the ``boundary'' problem
\begin{align}
&r\frac{d^2 g}{d\eta^2}=g-(2-q)g^{1-r}+(1-q)g^{1-2r},\quad \eta\in(0,\infty),\label{15}\\
& g^r\big|_{\eta=0}=1-p,\quad g|_{\eta\to\infty}=1.\label{16}
\end{align}
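For the reader's convenience, the elimination of the first derivative can be traced explicitly (a routine check). Since $1-W=g^{r}$, the substitution (\ref{14}) gives
$$
\frac{dW}{d\eta}=-rg^{r-1}\frac{dg}{d\eta},\qquad
\frac{d^{2}W}{d\eta^{2}}=-rg^{r-1}\frac{d^{2}g}{d\eta^{2}}-r(r-1)g^{r-2}\Big(\frac{dg}{d\eta}\Big)^{2}.
$$
Substituting into (\ref{13}), the $(dg/d\eta)^{2}$ terms on the two sides are both equal to $r(1-r)g^{2r-2}(dg/d\eta)^{2}$ and cancel, leaving
$$
-rg^{2r-1}\frac{d^{2}g}{d\eta^{2}}=(q-1)+(2-q)g^{r}-g^{2r},
$$
which, after multiplication by $-g^{1-2r}$, is exactly the equation in (\ref{15}).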
Notice that the well-posedness of (\ref{16}) requires the assumption
\begin{equation}
\text{if}\quad p>1, \quad \text{then}\quad r=(2k+1)/(2l+1),\quad k,l\in \mathbb{Z}.\label{17}
\end{equation}
Now we integrate (\ref{15}) and pass to the first order ODE
\begin{equation}\label{18}
r\left(\frac{d g}{d\eta}\right)^2=F(g),\quad \eta\in(0,\infty);\quad
g|_{\eta=0}=g_*,
\end{equation}
where
\begin{align}
&F(g)=g^2-2\frac{2-q}{2-r}g^{2-r}+\frac{1-q}{1-r}g^{2-2r}-C,\label{19}\\
&C=1-2\frac{2-q}{2-r}+\frac{1-q}{1-r},\quad g_*=(1-p)^{1/r}.\label{20}
\end{align}
Considering $\eta\gg1$ we write $g=1-w$ and obtain from (\ref{18})-(\ref{20})
$$
\left(\frac{d w}{d\eta}\right)^2=q\,w^2.
$$
Thus
$$
g\to 1-e^{-\sqrt{q}\eta}\quad\text{as}\quad \eta\to\infty.
$$
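This limiting behavior can be checked directly from (\ref{19}): differentiation gives
$$
F(1)=0,\qquad \frac{dF}{dg}\Big|_{g=1}=0,\qquad \frac{d^{2}F}{dg^{2}}\Big|_{g=1}=2rq,
$$
so that near $g=1$, with $g=1-w$, equation (\ref{18}) linearizes to $r(dw/d\eta)^{2}=\tfrac12(2rq)w^{2}=rq\,w^{2}$, in agreement with the display above.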
Therefore, the solution of the problem (\ref{18}) exists, is unique for $g>0$, and satisfies the conditions (\ref{16}).
We now consider the even continuation $\widetilde{g}(\eta)$ of $g$ for negative $\eta$. Obviously, $\widetilde{g}\in C^\infty(\mathbb{R})$ if and only if
\begin{equation}\label{21}
\frac{d g}{d\eta}\Big|_{\eta=0}=0.
\end{equation}
Furthermore, since $F(1)=dF/dg|_{g=1}=0$ and $d^2F/dg^2|_{g=1}>0$, the equation
\begin{equation}\label{22}
F(g)=0
\end{equation}
has a solution $g_*\in(0,1)$ if and only if $C>0$. The last inequality is equivalent to the following assumption:
\begin{equation}\label{23}
r>q.
\end{equation}
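The root $g_*$ of (\ref{22}) on $(0,1)$ is easy to locate numerically whenever $C>0$. The sketch below is our own illustration (the parameter values are arbitrary): it brackets the root by bisection and, for $r=2/3$, compares the result with the closed form $g_*=(1-\tfrac32 q)^{3/2}$, which follows from factoring $(z-1)^{2}$ out of the cubic obtained with the substitution $z=g^{2/3}$.

```python
def F(g, r, q):
    """F(g) from eqs. (19)-(20), so that r*(dg/deta)^2 = F(g)."""
    C = 1 - 2*(2 - q)/(2 - r) + (1 - q)/(1 - r)
    return (g**2 - 2*(2 - q)/(2 - r)*g**(2 - r)
            + (1 - q)/(1 - r)*g**(2 - 2*r) - C)

def g_star(r, q, lo=1e-9, hi=0.9):
    """Smallest root of F on (0, 1), located by bisection.

    Requires r > q, i.e. C > 0, so that F(0+) = -C < 0 while F > 0
    just below the double root g = 1 (hi must lie in between).
    """
    assert F(lo, r, q) < 0 < F(hi, r, q)
    for _ in range(100):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid, r, q) < 0 else (lo, mid)
    return 0.5*(lo + hi)

# Camassa-Holm scaling r = 2/3: with z = g^(2/3) the equation F = 0 is a
# cubic with a double root z = 1, whence g_* = (1 - 3*q/2)**1.5.
```

For $r=2/3$, $q=0.4$ the bisection reproduces $(1-0.6)^{3/2}\approx0.25298$ to machine-level accuracy.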
On the other hand the initial condition in (\ref{18}) implies the relation
\begin{equation}\label{24}
V=\frac{1}{\alpha^2}\left(\frac{c_3}{1-g_*^r}A-\gamma\right).
\end{equation}
This allows us to rewrite the coefficient $q$ in (\ref{19}) as a function of $A$ and the parameters $\alpha$, $c_0,\dots,c_3$, $\gamma$, and therefore to find the solution $g_*$ of the equation (\ref{22}) as a function of $A$ and the parameters of the model (\ref{1}). Writing (\ref{23}) in explicit form, we obtain the following conclusion.
\begin{teo}
Let the assumptions (\ref{8}) hold and suppose that
\begin{equation}\label{25}
c_3-A^{-1}(\gamma+c_0\alpha^2)(1-g_*^r)<c_1r\alpha^2.
\end{equation}
Then the equation (\ref{9}) has a unique smooth solution, which is even and satisfies the conditions (\ref{6}), (\ref{7}).
\end{teo}
\textbf{Example 1.} For the Camassa-Holm equation (\ref{3}), $r=2/3$ and (\ref{22}) reduces to a cubic equation. Thus
$$
g_*= \big(1+c_3A/(c_0\alpha^2)\big)^{-3/2}\quad\text{if}\quad c_0>0\quad \text{and}\quad g_*=0\quad\text{if}\quad c_0=0.
$$
Accordingly, the condition (\ref{25}) is satisfied for $c_0>0$ and violated for $c_0=0$. In the latter case
$$
\frac{d \omega}{d\eta}\Big|_{\eta=0}=-\sqrt{2(1-q)}/p\neq 0,
$$
so $\omega(\eta)$ is only continuous. Fig. 3 depicts the graph of $F(g)$ in the case $c_0=1$, $A=2$, $c_3=2$, and $\alpha=1$.
\begin{figure}[H]
\centering
\includegraphics[width=9cm]{Fejemplo1.eps}
\caption{Behavior of the function $F(g)$ for the Camassa-Holm equation with $c_0=1$}
\label{f1}
\end{figure}
\textbf{Example 2.} For the Degasperis-Procesi equation (\ref{4}) the condition (\ref{25}) is violated and
$$
\frac{d \omega}{d\eta}\Big|_{\eta=0}=-\sqrt{(1-q)}/p\neq 0.
$$
Fig. 2 shows the graph of the peakon $\omega(\eta)$ for this equation.
\textbf{Example 3.}
Now let $c_0=\gamma=0$ and $\alpha^2c_1>c_2+c_3$. Then $q=c_3/(\alpha^2c_1)<r$ and $g_*$ does not depend on $V$. Thus
\begin{equation}
V=c_3A/\{(1-{g_*}^r)\alpha^2\}.\label{26}
\end{equation}
\textbf{Example 4.}
Let $c_3=4c_2$, so that $r=4/5$. Setting $z=g^{r/2}=g^{2/5}$, we transform the equation (\ref{22}) to the form
\begin{equation}
F=(1-z)^2f=0,\quad f=z^3+2z^2-\frac13(1-5q)z-\frac23(4-5q).\label{27}
\end{equation}
Solving the cubic equation $f=0$ we find the root $z_*=z_*(V)$. This and (\ref{24}) imply the equality
\begin{equation} \label{27a}
A=\mathfrak{A}(V),\quad \mathfrak{A}=c_3^{-1}(\gamma+\alpha^2V)\big(1-z_*(V)\big).
\end{equation}
Simple calculations show that $d\mathfrak{A}/dV|_{q=0}>0$. Thus, (\ref{27a}) allows us to define the velocity as a function of the amplitude, at least for $V-c_0\ll1$.
A similar result can be obtained in the case $c_2=3c_3/2$.
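The factorization in (\ref{27}) can be checked mechanically. With $c_3=4c_2$ one has $r=c_3/(c_2+c_3)=4/5$, and in $z=g^{2/5}$ the function $F$ becomes a quintic out of which the double root $z=1$ must divide with zero remainder. A small sketch of this check (ours; the quintic coefficients follow directly from (\ref{19}), (\ref{20})):

```python
def divide_out_root(coeffs, root):
    """Synthetic division of a polynomial (coefficients from the highest
    degree down) by (z - root); returns (quotient, remainder)."""
    out, acc = [], 0.0
    for c in coeffs:
        acc = acc*root + c
        out.append(acc)
    return out[:-1], out[-1]

def deflated_cubic(q):
    """In z = g^(2/5) (for r = 4/5), F(z) = z^5 - (5/3)(2-q) z^3
    + 5(1-q) z - C with C = 1 - (5/3)(2-q) + 5(1-q); removing the
    double root z = 1 must leave a cubic with no remainder."""
    C = 1 - (5.0/3.0)*(2 - q) + 5.0*(1 - q)
    quintic = [1.0, 0.0, -(5.0/3.0)*(2 - q), 0.0, 5.0*(1 - q), -C]
    quartic, r1 = divide_out_root(quintic, 1.0)
    cubic, r2 = divide_out_root(quartic, 1.0)
    assert abs(r1) < 1e-12 and abs(r2) < 1e-12  # z = 1 is a double root
    return cubic
```

For instance, $q=0.2$ yields (up to rounding) the coefficients $[1, 2, 0, -2]$, i.e. the cubic $z^3+2z^2-\frac13(1-5q)z-\frac23(4-5q)$ evaluated at this $q$.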
\section{Two-soliton asymptotic solution}
Obviously, there is no hope of finding either an exact multi-soliton solution of (\ref{1}) or an
asymptotics in the classical sense. So we treat $\varepsilon$ as a small parameter and construct a
weak asymptotic solution.
The Weak Asymptotics Method (see e.g. \cite{DanShel1} - \cite{Colombeau} and references therein) takes
into account the fact that soliton-type solutions which are smooth for $\varepsilon>0$ become non-smooth in the limit as $\varepsilon\to0$. Thus, it is
possible to treat such solutions as a mapping $\mathcal{C}^\infty
(0, T; \mathcal{C}^\infty (\mathbb{R}_x^1))$ for $\varepsilon=\operatorname{const}>0$ and only
as $\mathcal{C} (0, T; \mathcal{D}' (\mathbb{R}_x^1))$ uniformly
in $\varepsilon\ge0$. Accordingly, the remainder should be small in the
weak sense. The main advantage of the method is that we can ignore the actual shape of the colliding waves and look for (and find) only their main characteristics. For the equation (\ref{1}) these are the amplitudes and trajectories of the waves.
Originally, this idea was suggested by Danilov \& Shelkovich
for shock wave type solutions (\cite{DanShel1}, 1997), and by Danilov \& Omel'yanov for soliton type solutions (\cite{DanOm}, 2003). Later the method was
developed and adapted for many other problems (V. Danilov, G. Omel'yanov, V. Shelkovich, D. Mitrovic, M. Colombeau and others; see e.g. \cite{DanOmShel} - \cite{Colombeau}).
Notice finally that the treatment (Omel'yanov \cite{Om2, Om3}, 2015) of weak asymptotics as functions
which satisfy some conservation or balance laws takes us back to Whitham's classical idea of constructing a one-phase
asymptotic solution that satisfies a Lagrangian. Now, for essentially nonintegrable equations and multi-soliton solutions, we
use the appropriate number of such laws and satisfy them in the weak sense.
Let us apply these ideas to the problem of two-soliton interaction in the Degasperis-Procesi model (\ref{1}).
We set the initial data
\begin{equation} \label{28}
u|_{t=0} = \sum_{i=1}^2 A_i\omega \big(\beta (x - x_i^0)/\varepsilon\big).
\end{equation}
Here
$A_2>A_1>0$, $x_1^0 >x_2^0$, $\beta=\sqrt{c_1/c_3}$, and we assume that the trajectories $x = V_i t + x_i^0$
have a joint point $x = x^*$ at a time instant $t = t^*$, where $V_i$ are defined in the same manner as in
(\ref{24}).
To construct the weak asymptotic solution we start with the following definition of the smallness in the weak sense \cite{DanOm, Om3}:
{\bf Definition 1}
{\it A function} $v(t, x, \varepsilon)$ {\it is said to be of the value} $
O_{\mathcal{D}'}(\varepsilon^\varkappa)$ {\it if the relation}
$$ \int_{-\infty}^\infty v(t, x, \varepsilon )\psi(x) dx = O(\varepsilon^\varkappa)$$
{\it holds uniformly in} $t$ {\it for any test function} $\psi \in \mathcal{D}
(\mathbb{R}_x^1)$. {\it The right-hand side here is a} $\mathcal{C}^\infty${\it-function for}
$\varepsilon=\operatorname{const} > 0$ {\it and a piecewise continuous function
uniformly in} $\varepsilon \geq 0$.
Next we write two conservation and balance laws associated with (\ref{1}) in the differential form:
\begin{equation}\label{29}
\frac{\partial Q_j}{\partial t}+\frac{\partial P_j}{\partial x}+\varepsilon^{-1}K_j=O_{\mathcal{D}'}(\varepsilon^2),\quad j=1,2,
\end{equation}
where
\begin{align}
&Q_1=u,\quad P_1=c_0u+c_1u^2-(c_2-c_3)(\varepsilon u_x)^2,\quad K_1=0,\label{30}\\
&Q_2=u^2+\alpha^2(\varepsilon u_x)^2,\quad P_2=\mathbb{P}_2+2\alpha^2\varepsilon^2 u_x u_t,\quad
K_2=(2c_2-c_3)(\varepsilon u_x)^3,\label{31}\\
&\mathbb{P}_2=c_0u^2+\frac43c_1u^3-\big(3\gamma +(2c_2-5c_3)u\big)(\varepsilon u_x)^2,\label{32}
\end{align}
Here subscripts denote partial derivatives.
Following \cite{DanOm, Om3}, we define two-soliton weak asymptotics:
{\bf Definition 2}
{\it A sequence} $u(t, x, \varepsilon )$, {\it belonging to}
$\mathcal{C}^\infty (0, T; \mathcal{C}^\infty (\mathbb{R}_x^1))$
{\it for} $\varepsilon =\operatorname{const}> 0$ {\it and belonging to} $\mathcal{C} (0, T;
\mathcal{D}' (\mathbb{R}_x^1))$ {\it uniformly in} $\varepsilon\geq0$, {\it is
called a weak asymptotic} mod $ O_{\mathcal{D}'}(\varepsilon^2)$
{\it solution of (\ref{1}) if the relations (\ref{29}) hold uniformly in}
$t$ {\it with the accuracy} $O_{\mathcal{D}'}(\varepsilon^2)$.
Next we present the ansatz
as the sum of two distorted solitons, that is:
\begin{equation}\label{33}
u=\sum_{i=1}^2G_i\omega\big(\beta(x-\varphi_i)/\varepsilon\big),
\end{equation}
where
\begin{equation}
G_i=A_i+S_i(\tau),\,\varphi_i=\varphi_{i0}(t)+\varepsilon\varphi_{i1}(\tau),\,\tau=\beta\big(\varphi_{20}(t)-\varphi_{10}(t)\big)/\varepsilon,\label{34}
\end{equation}
and $\varphi_{i0}=V_it+x_i^0$ describe the trajectories of the non-interacting waves
(\ref{5}) with the amplitudes $A_i$.
Next we suppose that $S_i(\tau)$, $\varphi_{i1}(\tau)$ are smooth functions such that
\begin{align}
&S_i\to0 \quad \text{as}\quad \tau\to\pm\infty,\label{35}\\
&\varphi_{i1}\to0 \quad \text{as}\quad \tau\to-\infty,\quad
\varphi_{i1}\to \varphi_{i1}^\infty=\operatorname{const}_i \quad \text{as}\quad
\tau\to+\infty.\label{36}
\end{align}
It is obvious that the existence of the weak asymptotics (\ref{33}) with the properties (\ref{35}), (\ref{36})
implies that the solitary waves interact like the KdV solitons at least in the leading term.
To construct the asymptotics we should calculate the weak expansions of the terms from the left-hand sides of the relations (\ref{29}).
It is easy to check that
\begin{equation}
u=\varepsilon\beta^{-1}a_{1} \sum_{i=1}^2G_i\delta(x-\varphi_i)+O_{\mathcal{D}'}(\varepsilon^3),\label{37}
\end{equation}
where $\delta(x)$ is the Dirac delta-function. Here and in what follows we use the notation
\begin{equation}\label{38}
a_{k}\stackrel{\normalfont\text{def}}{=}\int_{-\infty}^\infty\big(\omega(\eta)\big)^k d\eta, \quad k>0,\qquad
a'_{2}\stackrel{\normalfont\text{def}}{=}\int_{-\infty}^\infty\big(\omega'(\eta)\big)^2 d\eta.
\end{equation}
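Once a profile $\omega$ is fixed, the moments (\ref{38}) are plain one-dimensional integrals. A small quadrature sketch (our illustration; the $\operatorname{sech}^{2}$ profile below is the classical KdV soliton shape, used here only as a test function, not as the $\omega$ constructed in Section 2):

```python
import math

def moment(omega, k, L=40.0, n=100001):
    """Composite trapezoidal approximation of a_k = int omega(eta)^k d(eta),
    truncated to [-L, L] (omega is assumed to decay fast)."""
    h = 2.0*L/(n - 1)
    total = 0.0
    for i in range(n):
        eta = -L + i*h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * omega(eta)**k
    return total*h

def sech2(eta):
    """Classical sech^2 soliton profile, used here only as a test function."""
    return 1.0/math.cosh(eta)**2
```

For this profile $a_{1}=2$ and $a_{2}=4/3$ exactly, which the quadrature reproduces; the trapezoidal rule is very accurate here because the integrand and its derivatives decay exponentially.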
At the same time for any even $F(u,\varepsilon u_x)\in C^1$, $F\big(u(-x),\varepsilon u_x(-x)\big)=F\big(u(x),\varepsilon u_x(x)\big)$, we have
\begin{align}
&\int_{-\infty}^\infty F\left(\sum_{i=1}^2 G_i\omega \left( \beta \frac{x - \varphi_i
}{\varepsilon}\right), \beta\sum_{j=1}^2 G_j\omega' \left( \beta \frac{x - \varphi_j
}{\varepsilon}\right)\right)\psi(x)dx\notag\\
&=\frac{\varepsilon}{\beta}\int_{-\infty}^\infty \sum_{i=1}^2F\big(A_i\omega(\eta),\beta A_i\omega'(\eta)\big)
\psi(\varphi_i+\varepsilon\frac\eta\beta)d\eta\label{39}\\
&+\frac{\varepsilon}{\beta}\int_{-\infty}^\infty\Big\{ F\Big(\sum_{i=1}^2G_i\omega(\eta_{i2}),\beta\sum_{j=1}^2G_j\omega'(\eta_{j2})\Big)\notag\\
&-\sum_{i=1}^2F\big(A_i\omega(\eta_{i2}),\beta A_i\omega'(\eta_{i2})\big) \Big\}\psi(\varphi_2+\varepsilon\frac\eta\beta)d\eta,\notag
\end{align}
where
\begin{equation}
\eta_{12}=\eta-\sigma, \quad \eta_{22}=\eta,\quad\sigma=\beta(\varphi_1-\varphi_2)/\varepsilon.\label{40}
\end{equation}
We take into account that the second integrand on the right-hand side of (\ref{39}) vanishes exponentially fast as $|\varphi_1-\varphi_2|$ grows;
thus, its main contribution is at the point $x^*$. We write
\begin{equation}
\varphi_{i0}=x^*+V_i(t-t^*)=x^*+\varepsilon \frac{V_i}{\dot{\psi}_0}\tau \quad \text{and} \quad \varphi_{i}=x^*+\varepsilon\chi_{i},\label{41}
\end{equation}
where $\dot{\psi}_0=\beta(V_2-V_1)$, $\chi_i=V_i\tau/\dot{\psi}_0 + \varphi_{i1}$.
It remains to apply the formula
\begin{equation}
f(\tau)\delta(x-\varphi_i)=f(\tau)\delta(x-x^*)-\varepsilon\chi_if(\tau)\delta'(x-x^*)+O_{\mathcal{D}'}(\varepsilon^2),\label{42}
\end{equation}
which holds for each $\varphi_i$ of the form (\ref{41}) with slowly increasing $\chi_i$ and for $f(\tau)$ from the Schwartz space.
Moreover, the second term in (\ref{42}) is $O_{\mathcal{D}'}(\varepsilon)$. Thus, under the assumptions (\ref{35}), (\ref{36})
we obtain the weak asymptotic expansion of $F(u)$ in the final form:
\begin{equation}
F(u)=\frac{\varepsilon}{\beta} \sum_{i=1}^2a_{F,i}\delta(x-\varphi_{i})+\frac{\varepsilon}{\beta}\mathfrak{R}_{F}^{(0)}\delta(x-x^*)
+O_{\mathcal{D}'}(\varepsilon^2),\label{43}
\end{equation}
where
\begin{align}
a_{F,i}=\int_{-\infty}^\infty &F\big(A_i\omega(\eta),\beta A_i\omega'(\eta)\big)d\eta, \label{44}\\
\mathfrak{R}_{F}^{(n)}=
\int_{-\infty}^\infty\eta^n\Big\{& F\Big(\sum_{i=1}^2G_i\omega(\eta_{i2}),\beta\sum_{j=1}^2G_j\omega'(\eta_{j2})\Big)\label{45}\\
&-\sum_{i=1}^2F\big(A_i\omega(\eta_{i2}),\beta A_i\omega'(\eta_{i2})\big) \Big\}d\eta,\quad n=1,2.\notag
\end{align}
Note that to define $\partial Q_2/\partial t \mod O_{\mathcal{D}'}(\varepsilon^2)$ it is necessary to
calculate $Q_2$ with the precision $O_{\mathcal{D}'}(\varepsilon^3)$. Thus, transforming (\ref{37}) with the help
of (\ref{42}) and using (\ref{43}) with $F(u)=Q_2$, we obtain modulo $O_{\mathcal{D}'}(\varepsilon^3)$:
\begin{align}
u=a_{1}\frac{\varepsilon}{\beta} \sum_{i=1}^2A_{i}\delta(x-\varphi_i)
&+a_{1}\frac{\varepsilon}{\beta} \sum_{i=1}^2S_{i}\Big\{\delta(x-x^*)-\varepsilon\chi_i\delta'(x-x^*)\Big\},\label{46}\\
Q_2=\frac{\varepsilon}{\beta}\sum_{i=1}^2a_{Q_2,i}\delta(x-\varphi_i)
&+\frac{\varepsilon}{\beta} \mathfrak{R}_{Q_2}^{(0)}
\delta(x-x^*)\label{47}\\
&-\frac{\varepsilon^2}{\beta}\big\{\chi_2\mathfrak{R}_{Q_2}^{(0)}+\beta^{-1}\mathfrak{R}_{Q_2}^{(1)} \big\}\delta'(x-x^*).\notag
\end{align}
In the same manner we derive
\begin{align}
&\varepsilon^2u_xu_t=-\varepsilon a'_{2}\beta\sum_{i=1}^2V_iA^2_{i}\delta(x-\varphi_i)
-\varepsilon a'_{2}\mathfrak{L}\delta(x-x^*),\label{48}\\
&(\varepsilon u_x)^3=\frac{\varepsilon}{\beta} \mathfrak{R}_{K_2}^{(0)}\delta(x-x^*)-\varepsilon^2 \beta a^{(1)}_{3}\sum_{i=1}^2A^3_{i}\delta'(x-\varphi_i)\notag\\
&-\frac{\varepsilon^2}{\beta} \left\{\chi_2\mathfrak{R}_{K_2}^{(0)}+\beta^{-1}\mathfrak{R}_{K_2}^{(1)}\right\}\delta'(x-x^*),\label{49}
\end{align}
where
\begin{align}
&\lambda_{(k,l)}=\frac{1}{a'_2} \int_{-\infty}^\infty \omega^{(k)}(\eta_{12})\omega^{(l)}(\eta)d\eta,\quad
a^{(1)}_{3}=\int_{-\infty}^\infty\eta\big(\omega'(\eta)\big)^3 d\eta.\label{50}\\
&\mathfrak{L}=\dot{\psi}_0\beta\sum_{i=1}^2\frac{d\varphi_{i1}}{d\tau}(G_i^2-A_i^2)
-\dot{\psi}_0\left(G_{1}\frac{dS_{2}}{d\tau}-G_{2}\frac{dS_{1}}{d\tau}\right)
\lambda_{(1,0)}\notag\\
&+\beta G_1G_2(\dot{\varphi}_1+\dot{\varphi}_2)\lambda_{(1,1)}.
\end{align}
Substituting (\ref{43})-(\ref{49}) into
(\ref{29}) we obtain linear combinations of $\delta(x-x^*)$,
$\varepsilon\delta'(x-\varphi_{i})$, $i=1,2$, and
$\varepsilon\delta'(x-x^*)$ (see also \cite{DanOm, DanOmShel, Om3}). Therefore, we pass to the following system of equations:
\begin{align}
&a_{1}V_iA_{i}-a_{P_1,i}=0, \, a_{2}V_i\beta^2A_{i}^{2}+a_{Q_2,i}V_i-a_{\mathbb{P}_2,i}+a^{(1)}_{3}\beta^2A_{i}^3=0, \, i=1,2,\label{51}\\
&\sum_{i=1}^2S_{i}=0,\quad \dot{\psi}_0 \frac{d}{d\tau}\mathfrak{R}_{Q_2}^{(0)}+\mathfrak{R}_{K_2}^{(0)}=0,\label{52}\\
&a_1\dot{\psi}_0 \frac{d}{d\tau}\sum_{i=1}^2\Big\{A_{i}\varphi_{i1}+\chi_iS_i\Big\}=f,\label{53}\\
&\dot{\psi}_0\frac{d}{d\tau}\Big\{\sum_{i=1}^2a_{Q_2,i}\varphi_{i1}+\chi_2\mathfrak{R}_{Q_2}^{(0)}+\beta^{-1}\mathfrak{R}_{Q_2}^{(1)}\Big\}=F,\label{54}
\end{align}
where
\begin{equation}
f=\mathfrak{R}^{(0)}_{P_1}, \quad F=\mathfrak{R}^{(0)}_{P_2}-a'_{2}\mathfrak{L}-\chi_2\mathfrak{R}^{(0)}_{K_2}-\beta^{-1}\mathfrak{R}^{(1)}_{K_2}.\label{55}
\end{equation}
An analysis of (\ref{51}) implies the following statement:
\begin{lem}
The algebraic equations (\ref{51}) with $\beta=\sqrt{c_1/c_3}$ imply again the relation
(\ref{24}) between $A_i$ and $V_i$.
\end{lem}
As for (\ref{52})-(\ref{54}), this system should be investigated in detail. At present we can formulate only a preliminary result.
\begin{teo} \label{teo1}
Let the assumptions (\ref{8}), (\ref{25}) be satisfied.
Suppose also that the equations (\ref{52})-(\ref{54}) admit a solution with the properties (\ref{35}), (\ref{36}). Then the solitary wave collision in the problem
(\ref{1}), (\ref{28}) preserves the elastic scenario with accuracy
$O_{\mathcal{D}'} (\varepsilon^2)$ in the sense of Definition 2.
\end{teo}
|
1712.04409
|
\section{Introduction}
The photometry of the minor body of extrasolar origin 1I/2017 U1 ('Oumuamua) revealed an unprecedented shape: \cite{2017Natur} reported a shape elongation $b/a$ close to $1/10$, which calls for a theoretical explanation. Here we show that the abrasion of a primordial asteroid by a huge number of tiny particles ultimately leads to such an elongated shape. The model (called the Eikonal equation) predicting this outcome was already suggested in \cite{2009ApJ...699..L13D} to play an important role in the evolution of asteroid shapes.
Disruptive collisions (e.g. among asteroids) generate primordial fragments with average axis ratios $2:\sqrt{2}:1$ (\cite{2000AREPS...28..367R,2008Icar..196..135S,2015NatSR...5..9147}).
Despite substantial variation, extreme ratios close to $1:10$ have never been observed; thus 'Oumuamua is very unlikely to be a primordial fragment.
The primordial fragment then starts evolving via non-disruptive impacts, whose outcome is primarily determined by the fragment/impactor mass ratio $M/m$. These impacts may include mergers (which may have shaped several Solar System asteroids); nevertheless, mergers can hardly explain elongations beyond $1:3$. Moderate to large impactor masses $m\geq M$ lead to curvature-driven abrasion (\cite{2014PloSO...88657}), which tends to make objects rounder. Another class of low-energy collisions, with high-speed impactors of mass $m\ll M$, has the opposite result: it makes the asteroid less spherical.
\section{The model}
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{Fig_1_V8}
\caption{(a) Eikonal equation acting on a planar polygon: observe the gradual decrease of the number of edges. (b) Last stage of the same evolution: observe the rapid change of elongation. (c) Simulation of the Eikonal evolution of a scanned fragment. $n$=number of collisions, $f$=number of faces, $m$=mass, $b/a$=elongation. (d) Elongation vs relative mass loss for the previous fragment and some polyhedra. Observe that the latter bracket the evolution of the former, with a gradual decrease of $f$ and a rapid decrease of $b/a$ in the second phase. (e) Relative mass at $b/a=1/10$ vs the initial value of $b/a$.
\label{fig:1}}
\end{center}
\end{figure}
Very small abrading dust particles can be described by the limit $m/M\to 0$, in which we have an exact geometric model, the so-called Eikonal equation, describing the evolution of the shape of the main body. Eikonal abrasion evolves shapes \emph{away from the sphere} and shows the following generic, global properties:
(A) Large flat areas and sharp edges emerge spontaneously as dominant geometric features. This is strongly apparent already in the initial phase of Eikonal abrasion: if the initial shape is approximated by a multi-faceted polyhedron, then the number $f$ of faces drops quickly until it reaches its ultimate value of $f=3$ or $f=4$.
(B) Shape elongation increases dramatically during the evolution. In the initial phase the elongation varies slowly; in the ultimate abrasion phase, where the mass of the object is a small fraction of the initial mass, the change is rapid and extreme elongation is reached. The elongation of the primordial shape primarily influences \emph{when} (i.e. at what mass-loss percentage) extreme elongation is reached, and has much less influence on the actual value attained.
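A toy calculation of our own already displays the mechanism behind property (B). Under the Eikonal flow every surface point recedes at the same rate, so a convex cross-section is simply offset inward; for an $a\times b$ box with $a>b$ the elongation $(b-2t)/(a-2t)$ drifts slowly at first and collapses only when the short axis is nearly consumed:

```python
def eikonal_box(a, b, steps=100):
    """Inward offset of an a-by-b rectangular cross-section (a > b > 0)
    abraded at unit normal speed, a crude 2-D caricature of Eikonal flow.
    Returns (relative_area, elongation b/a) pairs along the evolution."""
    history = []
    for i in range(steps):
        t = 0.5 * b * i / steps      # stop just before the short axis vanishes
        wa, wb = a - 2.0 * t, b - 2.0 * t
        history.append((wa * wb / (a * b), wb / wa))
    return history
```

Starting from a $2\times1$ box, the elongation is still $1:3$ after $62.5\%$ mass loss but passes $1:10$ only after roughly $94\%$ mass loss, mirroring property (B) and panel (e) of Fig. 1.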
In our paper (\cite{2009ApJ...699..L13D}) we discussed the early stages of abrasion on asteroids where property (A)
is dominant because the asteroids observed in the Solar System are likely in the first phase of this evolution process.
We suggested that they suffer disruptive and/or merger collisions with large bodies frequently enough that their global shape is regularly restructured, so their evolution never reaches the end stages of Eikonal abrasion. Now we appeal to property (B), because 'Oumuamua may well have already reached this second, ultimate phase. Most likely it had sufficient time to evolve far from its primordial shape, and its current geometry may be interpreted as a mature, far-evolved outcome of slow abrasion by tiny particles.
\section{Results}
In Fig. 1 we present abrasion scenarios that have led from a variety of initial shapes via the Eikonal evolution to highly elongated shapes. Observe the geometric features (A) and (B) in all examples.
'Oumuamua is the first known astronomical object with such extreme shape elongation. In its case a long, uninterrupted, slow abrasion is plausible, since the body resided in interstellar space for several hundred million years (\cite{2017Natur}). Being free from larger impactors, the body traveled at approximately 50 km/s, passed through two solar systems, and suffered impacts from many micrometer-sized interstellar dust grains arriving at large velocities (on the order of the velocity dispersion of the Galactic thin disk, several tens of km/s). Collisions with small grains were energetic enough to dislodge splinters from the main body. Further evidence for an abrasion history is the presumably complete lack of dust on the surface (\cite{2017Natur}).
Assuming an abrasion rate of 2--5$\mu$m/yr, the abrasion process could have easily evolved to the currently observed shape, regardless of the initial form.
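The order of magnitude behind this statement is easy to verify (our arithmetic, using only the rate and residence time quoted above):

```python
def abraded_depth_m(rate_um_per_yr, time_myr):
    """Total surface recession for a constant abrasion rate."""
    return rate_um_per_yr * 1e-6 * time_myr * 1e6

# 2 um/yr sustained over 300 Myr already removes 600 m of material,
# enough to reshape a body completely, whatever its initial form.
```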
A quantitative interpretation would have to rely on largely unknown data, such as the solar system in which 'Oumuamua was formed, its initial form, its cosmic path, and its internal composition. However, this example led to the recognition of the qualitative significance of Eikonal abrasion. Further work is needed to establish quantitative evidence.
\acknowledgments
The authors acknowledge the support of the NKFIH grants GINOP-2.3.2-15-2016-00003, K-119245, and K-125015.
|
2106.08613
|
\section{Introduction}
Video anomaly detection refers to the task of recognizing unusual events in videos. It has gained attention due to the wide deployment of video surveillance systems. Surveillance cameras are widely used for public safety, but human monitoring capacity cannot keep pace with them. Since abnormal events rarely happen in the real world compared to normal events, automatic anomaly detection systems are in high demand to reduce the monitoring burden. The task is nevertheless very challenging, because suitable datasets are difficult to obtain owing to the imbalance of events and to definitions of abnormal events that vary with the context of each video. \\
\indent One of the challenging factors of anomaly detection is the data imbalance problem: abnormal scenes are more difficult to capture than normal scenes because of their scarcity in the real world. Therefore, datasets with an equal number of both types of scenes are hard to obtain, and consequently only normal videos are provided as training data~\cite{10.1145/1541880.1541882}. This is known as the unsupervised approach to anomaly detection, used by most previous works~\cite{hasan2016learning, Nguyen_2019_ICCV, ravanbakhsh2017abnormal, ravanbakhsh2019training}. The unsupervised network needs to learn the representative features of the normal training set and flag frames with outlying features as abnormal events. Autoencoder (AE)~\cite{hinton2006reducing}-based methods~\cite{Abati_2019_CVPR, zaheer2020old, Liu_2018_CVPR} have proven successful for this task. Frame-predicting AEs~\cite{Liu_2018_CVPR, tang2020integrating} and frame-reconstructing AEs~\cite{nguyen2019hybrid, hasan2016learning} have been proposed under the assumption that anomalies unseen in the training phase cannot be predicted or reconstructed by a model trained only on normal frames. However, these methods do not consider a drawback of AEs: owing to the strong generalization capacity of convolutional neural networks (CNNs), an AE may generate anomalies as clearly as normal events~\cite{gong2019memorizing}. To mitigate this, Gong~\etal~\cite{gong2019memorizing} and Park~\etal~\cite{park2020learning} proposed memory-based methods that use only the most essential features of normal frames for the generation. However, memory-based methods are not efficient for videos with various scenes, because their performance is highly dependent on the number of memory items: many items are required to read and update the patterns of various scenes, which slows down detection.\\
\indent Another critical and challenging issue for video anomaly detection is the detection speed. The main purpose of anomaly detection is to detect abnormal events or emergencies immediately, and slow models do not serve this purpose. In previous studies, the following factors are observed to slow down detection: heavy pre-trained networks such as optical flow estimators~\cite{Liu_2018_CVPR, ravanbakhsh2017abnormal, ravanbakhsh2019training, yu2020cloze}, object detectors~\cite{georgescu2020anomaly, ionescu2019object}, and pre-trained feature extractors~\cite{ravanbakhsh2018plug, sultani2018real}. These modules are complex and computationally expensive.
\indent Therefore, we take the detection speed into account and employ a patch transformation method that is used only during training. We implement this approach by artificially generating abnormal patches, applying transformations to patches randomly selected from the training dataset. We adopt the spatial rotation transformation~(SRT) and the temporal mixing transformation~(TMT) to generate a patch anomaly at a random location within a stacked frame cuboid. Given this anomaly-included frame cuboid, our AE is trained to recognize the employed transformation and predict the upcoming normal frame. The purpose of SRT is to generate an abnormal appearance and encourage the model to learn spatially invariant features of normal events. For instance, when a dataset defines walking pedestrians as normal and everything else as abnormal, giving the model a sequence of a rotated person ({\it e.g.,} upside-down or lying flat) and forcing it to generate a normally standing person makes it learn the normal patterns of pedestrians. TMT, which shuffles the selected patch cube along the temporal axis to create abnormal motion, is intended to enhance the learning of temporally invariant features of normal events. Given a set of frames where an irregular motion takes place in a small area, the model has to learn how to rearrange the shuffled sequence in the right order to correctly predict the upcoming frame. \\
\indent To the best of our knowledge, our framework performs the fastest because, unlike~\cite{Liu_2018_CVPR,gong2019memorizing, park2020learning, ravanbakhsh2017abnormal, ionescu2019object, ravanbakhsh2018plug}, it requires no additional modules or pre-trained networks. Furthermore, the proposed patch transformation does not reduce the speed because it is detached during detection. Likewise, we designed all components of our method with the detection speed in mind, in an effort to make it suitable for anomaly detection in the real world.\\
\indent We summarize our contributions as follows:
\begin{itemize}
\item We apply a patch anomaly generation phase to the training data to enforce normal pattern learning, especially in terms of appearance and motion.
\item The proposed patch generation approach can be used in conjunction with any backbone network during the training phase.
\item Our model performs at very high speed and, at the same time, achieves competitive performance on three benchmark datasets without any pre-trained models ({\it e.g.,} optical flow networks, object detectors, and pre-trained feature extractors).
\end{itemize}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{./images/main.pdf}
\end{center}
\caption{The overview of our framework. During the training phase, SRT and TMT are employed to make our input $\mathbf{C'_f}$. The AE is trained to generate a succeeding frame that mimics the normal frame. During the testing phase, frames are fed into the AE and the corresponding output $\mathbf{P'_t}$ is generated. The normality score $S_t(P'_t, P_t)$ is used to discriminate abnormal frames. The $\mathbf{P'_t}$ in this figure is a combination of $\mathbf{P'_t}$ and a difference map for better understanding. The values in brackets indicate the [channel, temporal, height, width] dimensions of the features and the (depth, height, width) of the kernels, in order. }
\label{modelarchitecture}
\end{figure*}
\vspace{-0.3em}
\section{Related work}
\subsection{AE-based approach}
Frame-predicting and frame-reconstructing AEs have been proposed under the assumption that models trained only on normal data are not capable of predicting or reconstructing abnormal frames, because these are unseen during training. Some studies~\cite{Liu_2018_CVPR, tang2020integrating, park2020learning, lu2019future} trained AEs that predict a single future frame from several successive input frames. Additionally, many effective reconstructing AEs~\cite{nguyen2019hybrid, zaheer2020old, park2020learning, cho2020unsupervised} have been proposed. Cho~\etal~\cite{cho2020unsupervised} proposed a two-path AE, in which two encoders model appearance and motion features. Focusing on the fact that abnormal events occur in small regions, patch-based AEs~\cite{zaheer2020old, xu2017detecting, nguyen2019hybrid, fan2020video} have been proposed. However, it has been observed that AEs tend to generalize so well, mainly due to the capacity of CNNs, that they generate abnormal events clearly, which leads to anomalies being missed during detection. To alleviate this drawback, Gong~\etal~\cite{gong2019memorizing} and Park~\etal~\cite{park2020learning} suggested networks that employ memory modules to read and update memory items. These methods showcased outstanding performance on several benchmarks. However, they are observed to be ineffective for large datasets due to the limitation of memory size. Furthermore, some works~\cite{Liu_2018_CVPR, ravanbakhsh2017abnormal, ravanbakhsh2019training, ravanbakhsh2018plug} have used optical flow to estimate motion features because information about temporal patterns is crucial in anomaly detection.
\subsection{Transformation-based approach}
Many image transformation methods, such as augmentations, have been proposed to increase recognition performance and robustness in varying environments when training datasets are limited. This technique was first applied to image recognition and was later extended to video recognition.
For image-level modeling, Gidaris~\etal~\cite{gidaris2018unsupervised} suggested unsupervised learning for image classification by predicting the direction of the rotated input. Krizhevsky~\etal~\cite{krizhevsky2012imagenet} used rotation, flipping, cropping, and color jittering to enhance the learning of spatially invariant features. Furthermore, DeVries~\etal~\cite{devries2017improved} devised CutOut, a method that deletes a box at a random location to prevent the model from focusing only on the most discriminative regions. Zhang~\etal~\cite{zhang2017mixup} proposed MixUp, which blends two training samples at both the image and label levels. Yun~\etal~\cite{yun2019cutmix} put forth a combination of CutOut and MixUp, called CutMix. CutMix creates a new sample from two images by deleting a square-shaped region from one image and replacing it with a patch from another image.
For video-level modeling, augmentation techniques have been extended to the temporal axis. Ji~\etal~\cite{ji2019learning} proposed time warping and time masking, which randomly skip or adjust temporal frames. \\
\indent Several studies have used the techniques mentioned above for video anomaly detection, based on the assumption that applying transformations to the input forces the network to embed critical information better. Zaheer~\etal~\cite{zaheer2020old} suggested a pseudo-anomaly module that creates an artificial anomaly patch by blending two arbitrary patches from normal frames. They reconstructed both normal and abnormal patches, and trained a discriminator to predict the source of the reconstructed output. Hasan~\etal~\cite{hasan2016learning} and Zhao~\etal~\cite{zhao2017spatio} sampled the training data by skipping a fixed number of frames along the temporal axis. Moreover, Joshi~\etal~\cite{joshi2019unsupervised} generated abnormal frames from normal frames by cropping an object detected with a semantic segmentation model and placing it in another region of the frame to produce an abnormal appearance. Wang~\etal~\cite{wang2020cluster} applied random cropping, flipping, color distortion, rotation, and grayscale to the entire frame. In contrast to these methods, our network embeds normal patterns by training on frames with anomaly-like patches. We transform the input frames along the spatial axis or the temporal axis to generate abnormal frames within the training datasets.
\section{Proposed approach}
This section presents an explicit description of our model formation. Our model consists of two main phases: (1) the patch anomaly generation phase and (2) the prediction phase.
\subsection{Overall architecture}
Fig.~\ref{modelarchitecture} presents the overview of our framework. During the training phase, we first load $n$ adjacent frames to form a frame cuboid. We then apply our patch anomaly generation to the frame cuboid, which is forwarded to the AE. Our AE extracts spatial and temporal patterns of the input and generates a future frame. During inference, the patch anomaly generation is not employed; a raw frame cuboid is fed as input to the AE. The difference between the output of the AE and the ground-truth frame is used as a score to judge normality.
\subsection{Patch anomaly generation phase}
\label{section3}
Abnormal events in videos are categorized into two large branches: (1) anomalies regarding appearances ({\it e.g.,} pedestrians on a vehicle road) and (2) anomalies regarding motion ({\it e.g.,} an illegal U-turn or fighting in public). Hence, it is important to learn both the appearance and motion features of normal situations to detect anomalies in both cases. \\
\indent The patch anomaly generation phase takes place before feeding the frames to the generator. We load $n$ successive frames $(\mathbf{F_t, F_{t+1}, F_{t+2},} \dots , \mathbf{F_{t+n-1}})$, resize each to $240 \times 360$, and concatenate them along the temporal axis to form a 4D cuboid $\mathbf{C_f} \in \mathbb{R}^{C \times n \times 240 \times 360}$, where $C$ denotes the number of channels of each frame. After that, we select a patch cuboid $\mathbf{C_p} \in \mathbb{R}^{C \times n \times 60 \times 60}$ at a random location within $\mathbf{C_f}$ to apply the transformation. Since anomalies usually occur in the foreground, we exclude a margin of 12.5 percent of the height from the top and bottom of $\mathbf{C_f}$ from the selection area. We heuristically find that these marginal regions are generally background and therefore commonly do not contain moving objects. Thus, by limiting the range, $\mathbf{C_p}$ is more likely to capture the foreground than the background, encouraging the model to concentrate on moving objects. Then we apply SRT or TMT to $\mathbf{C_p}$ to form a transformed patch cuboid $\mathbf{C'_p}$. Only one of the two is applied, chosen randomly for every input. \\
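The patch-location sampling described above can be sketched in a few lines; treating the margin as a band that the whole patch must avoid (rather than only its top-left corner) is our assumption:

```python
import numpy as np

def sample_patch_location(frame_h=240, frame_w=360, patch=60,
                          margin_ratio=0.125, rng=None):
    """Pick a random top-left corner (y, x) for the patch cuboid C_p.

    The top and bottom `margin_ratio` of the frame height are excluded,
    since those rows are assumed to be mostly static background.
    """
    rng = rng or np.random.default_rng()
    margin = int(frame_h * margin_ratio)                      # 30 px for H = 240
    y = int(rng.integers(margin, frame_h - margin - patch + 1))
    x = int(rng.integers(0, frame_w - patch + 1))
    return y, x
```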
\indent For SRT, each patch is rotated in a random direction among 0\textdegree, 90\textdegree, 180\textdegree, and 270\textdegree, following the approach of~\cite{gidaris2018unsupervised}. By forwarding these transformed frame cuboids $\mathbf{C'_f}$ $(\mathbf{F'_t, F'_{t+1}, F'_{t+2},} \dots , \mathbf{F'_{t+n-1}})$ to the frame generator, our network is encouraged to focus on the abnormal region and recognize the spatial features of normal appearances. Suppose a network is being trained on a dataset of people walking on a road. When it is given a frame cuboid containing an upside-down person, created by a 180\textdegree~rotation, among all the other normal pedestrians, and is required to predict the next normal scene, the network learns the spatial features of a normal person, such as the head and the feet generally being placed at the top and bottom, respectively. Our SRT is defined as follows:
\begin{equation}
SRT(\mathbf{F}_i) = R\big( \mathbf{F}_i \big|_{[x,\, x+W_p] \times [y,\, y+H_p]},\ \delta_i \big),
\end{equation}
where $R$ represents the rotation function applied to the patch within the pixel range $[x, x+W_p]$ along the width axis and $[y, y+H_p]$ along the height axis of the input frame $\mathbf{F}_i$. $\delta_i$ denotes the randomly chosen direction for the $i^{th}$ frame, where $i$ is the index of the input frame in the range $[0, n-1]$. Furthermore, $W_{p}$ and $H_{p}$ represent the fixed width and height of the patch, respectively. The final $\mathbf{C'_f}$ is generated by concatenating the transformed $\mathbf{F'_i}$ along the temporal axis. \\
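SRT can be sketched in a few lines of NumPy. The function name and the use of `np.rot90` are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def srt(cuboid, y, x, patch=60, rng=None):
    """Spatial rotation transformation (SRT) sketch.

    `cuboid` has shape (C, n, H, W).  For every frame i, the square patch
    at (y, x) is rotated by an independently drawn multiple of 90 degrees.
    """
    rng = rng or np.random.default_rng()
    out = cuboid.copy()
    for i in range(cuboid.shape[1]):
        delta = int(rng.integers(0, 4))          # 0, 90, 180, or 270 degrees
        region = cuboid[:, i, y:y+patch, x:x+patch]
        out[:, i, y:y+patch, x:x+patch] = np.rot90(region, k=delta, axes=(1, 2))
    return out
```

Rotating each frame's patch independently also perturbs the motion, which matches the ablation observation that SRT generates motion anomalies as well.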
\indent TMT involves shuffling the sequence of the patch cuboid $\mathbf{C_p}$ along the temporal axis with the intention of generating abnormal movement. The network needs to detect the awkward motion and restore the normal sequence before predicting the next frame, in order to reduce the loss and generate a frame as similar as possible to the ground truth. For example, when the patch sequence is reversed and a backward-walking person is generated within a frame where only forward-walking people are annotated as normal, the model should infer the correct sequence of the abnormal person based on the learned features to predict the correct trajectory. Our TMT function is as follows:
\begin{equation}
TMT(\mathbf{F}_i) = T\big( \mathbf{F}_i,\ \mathbf{F}_{\xi_i} \big|_{[x,\, x+W_p] \times [y,\, y+H_p]} \big),
\end{equation}
where $T$ denotes a function that copies the patch located in the pixel range $[x, x+W_p]$ along the width axis and $[y, y+H_p]$ along the height axis of frame $\mathbf{F}_{\xi_i}$ and pastes it into the $i^{th}$ frame. $\xi$ represents the shuffled sequence of the $n$ patches ({\it e.g.,} the sequence $(4, 1, 0, 3, 2)$ when $n$ is 5). As with SRT, the final $\mathbf{C'_f}$ is the stack of the transformed $\mathbf{F'_i}$.
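TMT can be sketched analogously; again the function below is an illustrative sketch rather than the authors' code:

```python
import numpy as np

def tmt(cuboid, y, x, patch=60, rng=None):
    """Temporal mixing transformation (TMT) sketch.

    The patch at (y, x) is shuffled along the temporal axis while the
    rest of the cuboid keeps the original frame order.
    """
    rng = rng or np.random.default_rng()
    out = cuboid.copy()
    xi = rng.permutation(cuboid.shape[1])        # e.g. (4, 1, 0, 3, 2) for n = 5
    out[:, :, y:y+patch, x:x+patch] = cuboid[:, xi, y:y+patch, x:x+patch]
    return out
```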
\indent Our patch anomaly generation phase is computationally cheaper than the other methods that embed spatio-temporal feature extraction in networks, such as storing and updating memory items~\cite{gong2019memorizing, park2020learning}, and estimating optical flow with pre-trained networks~\cite{Liu_2018_CVPR, ravanbakhsh2017abnormal, yu2020cloze, cai2021appearance}. Therefore, our patch anomaly generation phase boosts feature learning with low cost. Furthermore, this phase is not used during the inference, meaning that it does not affect the detection speed at all. Thus, our model is low in complexity and computational costs (see Section~\ref{section4}).
\begin{figure}[t]
\centering
\subfloat[SRT]{\includegraphics[width=1\linewidth]{./images/ST_aug_srt.pdf}
}
\\
\vspace{-0.8em}
\subfloat[TMT]{\includegraphics[width=1\linewidth]{./images/ST_aug_tmt.pdf}
}\\
\vspace{0.5em}
\caption{Visualization of (a) SRT and (b) TMT. The frames in the upper rows are components of $\mathbf{C_f}$. The regions marked in color are the locations of the selected $\mathbf{C_p}$. The frames in the lower rows are the transformed components of $\mathbf{C'_f}$.}
\label{trainSRT}
\end{figure}
\subsection{AE architecture}
The AE in our network aims to learn prototypical features of normal events and produce an output frame based on those features. Its main task is to predict $\mathbf{P_t}$, the frame following $\mathbf{C_f}$, from an input frame cuboid $\mathbf{C'_f}$. Therefore, it is necessary to learn the temporal features as well as the spatial features to generate a frame of fine quality. The architecture of our model follows that of U-Net~\cite{ronneberger2015u}, in which the skip connections between the encoder and the decoder boost the generation ability by preventing vanishing gradients and achieving information symmetry. The encoder consists of a stack of three-layer blocks that reduce the resolution of the feature map. We employ 3D convolutions~\cite{tran2015learning} to embed temporal factor learning in our model. Specifically, the first block consists of one convolutional layer and one activation layer. The second and the last blocks are identical in structure: convolutional, batch normalization, and activation layers. The kernel size is set to $3\times3\times3$ for all convolutional layers. The decoder also consists of a stack of three-layer blocks and is symmetrical to the encoder, except that the convolutional layers are replaced by deconvolutional layers to upscale the feature map.
In addition, we use leakyReLU activation~\cite{maas2013rectifier} for the encoder and ReLU activation~\cite{nair2010rectified} for the decoder. \\
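A minimal PyTorch sketch of the contracting path may look as follows; the channel widths (32/64/128) and the choice to stride only along the spatial axes are our assumptions, since the text does not fix them:

```python
import torch
import torch.nn as nn

def enc_block(c_in, c_out, stride):
    """One encoder block: 3D convolution -> batch norm -> LeakyReLU."""
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm3d(c_out),
        nn.LeakyReLU(0.2, inplace=True),
    )

# Contracting path sketch: the first block has no batch norm, and the
# later blocks halve the spatial (not temporal) resolution.
encoder = nn.Sequential(
    nn.Sequential(nn.Conv3d(1, 32, 3, stride=1, padding=1), nn.LeakyReLU(0.2)),
    enc_block(32, 64, stride=(1, 2, 2)),
    enc_block(64, 128, stride=(1, 2, 2)),
)

# A gray-scale cuboid of n = 5 frames; a small dummy resolution is used here
# (the real frames are 240x360).
x = torch.randn(1, 1, 5, 48, 72)       # (batch, channel, n, H, W)
feat = encoder(x)                       # spatial resolution reduced by 4
```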
\indent Likewise, the architecture of our AE is very simple compared to those of previous studies, especially methods that employ pre-trained feature extractors~\cite{luo2017revisit, sultani2018real}. Since the running time generally depends on the complexity of the model architecture, our AE is designed with speed in mind.
\subsection{Objective function and normality score}
\noindent {\bf Prediction loss.} Our model is trained to minimize the prediction loss. We use the $L1$ distance (Eq. (\ref{L1})) and the structural similarity index (SSIM)~\cite{wang2004image} loss (Eq. (\ref{ssim})) to measure the difference between the generated frame $\mathbf{P'_t}$ and the ground truth frame $\mathbf{P_t}$. The $L1$ distance and SSIM capture the difference between frames at the pixel level and their similarity at the feature level, respectively. The functions are as follows:
\begin{equation}
\label{L1}
L_{p} (\mathbf{P'_t}, \mathbf{P_t}) = \| \mathbf{P'_t} - \mathbf{P_t} \|_1
\end{equation}
\begin{equation}
\label{ssim}
L_{f}(\mathbf{P'_t}, \mathbf{P_t}) = 1 - \frac{ (2 \mu _ { \mathbf{P'_t} } \mu _ { \mathbf{P_t} } + c _ { 1 } )(2 \sigma _ { \mathbf{P'_t}\mathbf{P_t} } + c _ { 2 } ) } { ( \mu _ { \mathbf{P'_t} } ^ { 2 } + \mu _ { \mathbf{P_t} } ^ { 2 } + c _ { 1 } )( \sigma _ { \mathbf{P'_t} } ^ { 2 } + \sigma _ { \mathbf{P_t} } ^ { 2 } + c _ { 2 } ) } ,
\end{equation}
\noindent where $\mu$ and $\sigma^ { 2 }$ denote the average and variance of each frame, respectively. Furthermore, $\sigma _ { \mathbf{P'_t}\mathbf{P_t} }$ represents the covariance. $c _ { 1 }$ and $c _ { 2 }$ are constants that stabilize the division. Following the work of Zhao~\etal~\cite{zhao2016loss}, we exploit a weighted combination of the two loss functions in our objective, as shown in Eq. (\ref{eq1}).
\begin{equation}
\label{eq1}
L_{pred} (\mathbf{P'_t}, \mathbf{P_t}) = \omega _ { p } L _ { p } (\mathbf{P'_t}, \mathbf{P_t}) + \omega _ { f } L _ { f } (\mathbf{P'_t}, \mathbf{P_t})
\end{equation}
where $\omega _ { p }$ and $\omega _ { f }$ are the weights controlling the contributions of $L_{p}$ and $L_{f}$, respectively. Consequently, our model is urged to generate outputs that resemble the ground truth frames at both the pixel and feature levels.
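The objective can be sketched as follows; note that the SSIM term is simplified here to global (non-windowed) statistics for brevity, whereas windowed SSIM as in Wang~\etal~\cite{wang2004image} is the standard formulation:

```python
import torch

def l1_loss(pred, gt):
    # Pixel-level term L_p: mean absolute difference.
    return (pred - gt).abs().mean()

def ssim_loss(pred, gt, c1=0.01 ** 2, c2=0.03 ** 2):
    # Feature-level term L_f, simplified to global image statistics.
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(unbiased=False), gt.var(unbiased=False)
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    ssim = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2))
    return 1 - ssim

def pred_loss(pred, gt, w_p=0.25, w_f=0.75):
    # Weighted objective with the weights reported in the implementation details.
    return w_p * l1_loss(pred, gt) + w_f * ssim_loss(pred, gt)
```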
\begin{table*}[!ht]
\centering
\resizebox{1.8\columnwidth}{!}{%
\begin{tabularx}{\textwidth}{c|l|c|c|c|c|c}
\hline
& Method & FPS & Prediction-based & CUHK Avenue~\cite{lu2013abnormal} & Shanghai Tech~\cite{luo2017revisit} & UCSD Ped2~\cite{mahadevan2010anomaly}\\
\hline \hline
\multirow{14}{*}{\rotatebox{90}{w/ pre-training\hspace{0.1cm}}}
&StackRNN \cite{luo2017revisit} & 50 & & 81.7 & 68.0 & 92.2 \\
&FFP \cite{Liu_2018_CVPR} & 133$^\dagger$ & {\ding{51}} & 85.1 & 72.8 & 95.4 \\
&AD \cite{ravanbakhsh2019training} & 2 & & - & - & 95.5 \\
&AMC \cite{Nguyen_2019_ICCV} & & {\ding{51}} & 86.9 & - & 96.2 \\
&MemAE \cite{gong2019memorizing} & 42$^\dagger$ & & 83.3 & 71.2 & 94.1 \\
&DummyAno \cite{ionescu2019object} & 11 & & \underline{90.4} & 84.9 & \underline{97.8} \\
&AutoregressiveAE \cite{Abati_2019_CVPR} & & & - & 72.5 & 95.4 \\
&AnoPCN \cite{ye2019anopcn} & 10&{\ding{51}} & 86.2 & 73.6 & 96.8 \\
&GCLNC \cite{zhong2019graph} & \textbf{150} & & - & \underline{84.1} & 92.8 \\
&GMVAE \cite{fan2020video} & \underline{120} & & 83.4 & - & 92.2 \\
&VECVAD \cite{yu2020cloze} &5 & & 89.6 & 74.8 & 97.3 \\
&FewShotGAN \cite{lu2020few} & &{\ding{51}} & 85.8 & 77.9 &96.2\\
&AMmem \cite{cai2021appearance} & & & 86.6 & 73.7 & 96.6 \\
&MTL \cite{georgescu2020anomaly}& 21 & & \textbf{92.8} & \textbf{90.2} & \textbf{99.8} \\
\hline
\multirow{11}{*}{\rotatebox{90}{w/o pre-training \hspace{0.1cm}}}&150Matlab \cite{Lu_2013_ICCV} & \underline{150} & & 80.9 & - & - \\
&ConvAE \cite{hasan2016learning} & & & 70.2 & 60.9 & 90.0 \\
&HybridAE \cite{nguyen2019hybrid} & & & 82.8 & - & 84.3 \\
&CVRNN \cite{lu2019future} & &{\ding{51}} & 85.8 & - & 96.1 \\
&IntegradAE \cite{tang2020integrating} & 30 &{\ding{51}} & 83.7 & 71.5 & 96.2 \\
&MNAD \cite{park2020learning} & 78$^\dagger$ &{\ding{51}} & \textbf{88.5} & 70.5 & \underline{97.0} \\
&MNAD \cite{park2020learning} & 56$^\dagger$ & & 82.8 & 69.8 & 90.2 \\
&CDDAE \cite{chang2020clustering} &32 & & \underline{86.0} & \textbf{73.3} & 96.5 \\
&OG \cite{zaheer2020old} & & & - & - & \textbf{98.1} \\
\cline{2-7}
&Baseline & \textbf{195} &{\ding{51}} & 83.2 & 72.1 & 95.7 \\
&Ours & \textbf{195} &{\ding{51}} & 85.3 & \underline{72.2} & 96.3 \\
\hline
\end{tabularx}%
}
\vspace{0.2cm}
\caption{Frame-level AUC scores (\%) of the state-of-the-art methods versus our architecture trained with the patch anomaly generation phase. Pre-training includes any additional pre-trained models such as optical flow networks, object detectors, or feature extractors. The FPS values are based on the figures reported in each paper; those marked with $\dagger$ denote FPS computed in our re-implementation, conducted on the same device and environment as our model for a fair comparison. The top two results are marked in \textbf{bold} and with \underline{underline}.}
\vspace{-0.2cm}
\label{t2}
\end{table*}
\vspace{0.3em}
\noindent {\bf Frame-level anomaly detection.}
When detecting anomalies in the testing phase, we adopt the peak signal-to-noise ratio (PSNR) as a score to estimate the abnormality of the evaluation set. We compute this value between the predicted frame $\mathbf{P'_t}$ at time step $t$ and the ground truth frame $\mathbf{P_t}$:
\begin{equation}
PSNR(\mathbf{P'_t}, \mathbf{P_t}) = 10\log_{10} \frac{[\max (\mathbf{P'_t})]^2}{\|\mathbf{P'_t}-\mathbf{P_t}\|_2^2/N},
\end{equation}
where $N$ denotes the number of pixels in the frame. Our model fails to generate an accurate prediction when $\mathbf{P_t}$ contains abnormal events, resulting in a low PSNR, and vice versa. Following the works of~\cite{Liu_2018_CVPR,gong2019memorizing,luo2017revisit}, we define the final normality score $S_t$ by normalizing $PSNR(\mathbf{P'_t}, \mathbf{P_t})$ of each video clip to the range $[0, 1]$:
\begin{equation}
\label{score}
S_t = \frac{PSNR(\mathbf{P'_t}, \mathbf{P_t}) - \min PSNR(\mathbf{P'_t}, \mathbf{P_t})} {\max PSNR(\mathbf{P'_t}, \mathbf{P_t}) - \min PSNR(\mathbf{P'_t}, \mathbf{P_t})},
\end{equation}
Therefore, our model is capable of discriminating between normal and abnormal frames using the normality score of Eq. (\ref{score}).
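The scoring procedure above can be sketched as follows; `psnr` and `normality_scores` are illustrative helper names:

```python
import numpy as np

def psnr(pred, gt):
    """PSNR between the predicted and ground-truth frames."""
    mse = np.mean((pred - gt) ** 2)
    return 10 * np.log10(pred.max() ** 2 / mse)

def normality_scores(psnr_values):
    """Min-max normalise the PSNR values of one clip to [0, 1]."""
    p = np.asarray(psnr_values, dtype=float)
    return (p - p.min()) / (p.max() - p.min())
```

Because the min-max statistics are taken per clip, the score measures how anomalous a frame is relative to the rest of its own video.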
\section{Experiments}
This section presents quantitative and qualitative experimental results obtained on three datasets, together with a detailed discussion. We also show the effectiveness of our patch anomaly generation phase for video anomaly detection.
\subsection{Implementation details}
We implement all of our experiments in PyTorch~\cite{paszke2017automatic}, using a single Nvidia GeForce RTX 3090. Our model is trained using the Adam optimizer~\cite{kingma2014adam} with a learning rate of 0.0002. Additionally, a cosine annealing scheduler~\cite{loshchilov2016sgdr} is used to reduce the learning rate to 0.0001. We train our model for 20 epochs on the Avenue dataset~\cite{lu2013abnormal} and the Ped2 dataset~\cite{mahadevan2010anomaly}, and for five epochs on the ShanghaiTech dataset~\cite{luo2017revisit}. The number of input frames $n$ is empirically set to 5. We load frames in gray scale, resize them to $240\times360$, and normalize the pixel intensities to $[-1, 1]$. In addition, we add random Gaussian noise to the training input, where the mean is set to 0 and the standard deviation is chosen randomly between 0 and 0.03. Furthermore, we set $W_p$ and $H_p$ to 60. The batch size is 4 during training. The weights of the loss function in Eq. (\ref{eq1}) are empirically set to $\omega _ { p } = 0.25$ and $\omega _ { f } = 0.75$.\\
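The input preparation described above can be sketched as follows (resizing to $240\times360$ is assumed to have been done beforehand):

```python
import numpy as np

def preprocess(frames, rng=None):
    """Training-time preprocessing sketch for gray-scale frames already
    resized to 240x360: scale intensities to [-1, 1] and add mild Gaussian
    noise with a randomly drawn standard deviation."""
    rng = rng or np.random.default_rng()
    x = frames.astype(np.float32) / 255.0 * 2.0 - 1.0   # [0, 255] -> [-1, 1]
    sigma = rng.uniform(0.0, 0.03)                      # noise std in [0, 0.03]
    return np.clip(x + rng.normal(0.0, sigma, x.shape), -1.0, 1.0)
```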
\indent We adopt the area under the curve (AUC) of the receiver operating characteristic (ROC), obtained from the frame-level scores and the ground truth labels, as the evaluation metric. This metric is used in most studies~\cite{nguyen2019hybrid, zaheer2020old} on video anomaly detection. The baseline model, mentioned throughout the following sections, denotes our model without the patch anomaly generation phase. Since the first five frames of each clip cannot be predicted, they are ignored in the evaluation, following~\cite{Liu_2018_CVPR, park2020learning, tang2020integrating}.
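The frame-level AUC can be computed with any standard ROC implementation; a minimal rank-based sketch is:

```python
import numpy as np

def frame_auc(labels, scores):
    """Frame-level ROC AUC: the probability that a randomly chosen normal
    frame (label 1) receives a higher normality score than a randomly
    chosen abnormal frame (label 0); ties count as 0.5."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    diff = scores[labels == 1][:, None] - scores[labels == 0][None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
```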
\subsection{Datasets}
{\noindent \bf CUHK Avenue~\cite{lu2013abnormal}.}
This dataset captures an avenue on a campus. It consists of 16 training and 21 testing clips. The training clips contain only normal events, and the testing clips contain a total of 47 abnormal events, such as running, loitering, and throwing objects. The frame resolution is $360 \times 640$, all in RGB. The size of people is inconsistent due to the camera angle. Furthermore, the camera is kept fixed most of the time, but brief shaking occurs in the evaluation set.
\vspace{0.3em}
{\noindent \bf UCSD Ped2~\cite{mahadevan2010anomaly}.}
The UCSD Ped2 dataset~\cite{mahadevan2010anomaly} is acquired from a pedestrian walkway by a fixed camera at a long distance. The training and testing sets consist of 16 and 12 clips, respectively. Anomalies in the testing clips are non-pedestrian objects, for instance, bikes, cars, and skateboards. The frames are in gray scale with a resolution of $240 \times 360$.
\vspace{0.3em}
{\noindent \bf ShanghaiTech Campus~\cite{luo2017revisit}.}
The ShanghaiTech Campus dataset~\cite{luo2017revisit} is acquired from 13 different scenes, each with training and testing sequences, unlike the two aforementioned datasets which follow the single scene formulation. There are a total of 330 training videos and 107 testing videos where non-pedestrian objects ({\it e.g.,} cars, bicycles) and aggressive motions ({\it e.g.,} brawling, chasing) are annotated as anomalies. Each frame is captured with $480 \times 856$ RGB pixels. This dataset is among the largest datasets for video anomaly detection.
\subsection{Experimental results}
\label{section4}
\noindent {\bf Impact of patch anomaly generation phase.}
Table~\ref{augmentationAblation} shows the impact of our patch anomaly generation, evaluated on Avenue~\cite{lu2013abnormal}, ShanghaiTech~\cite{luo2017revisit}, and Ped2~\cite{mahadevan2010anomaly}. The results cover five different conditions: (1) using only TMT, (2) using only SRT, (3) randomly applying TMT or SRT but with all patches rotated as a chunk in the same direction for SRT, where $\delta_0=\delta_{1}= \dots =\delta_{n-1}$ (represented as SRT* in Table~\ref{augmentationAblation}), (4) randomly applying TMT or SRT with varying directions for each patch, and (5) applying both TMT and SRT to the selected $\mathbf{C_p}$. From the results, it appears that SRT contributes more than TMT to the detection performance. This is because our SRT rotates each patch randomly in varying directions, generating anomalies in motion as well as in appearance.
\begin{table}
\centering
\begin{adjustbox}{width=0.8\linewidth}
\begin{tabular}{ c|c|c|c }
\hline
Method & Avenue~\cite{lu2013abnormal} & ST~\cite{luo2017revisit} & Ped2~\cite{mahadevan2010anomaly} \\
\hline\hline
Baseline & 83.2 & 72.1 & 95.7 \\
TMT & 83.0 & 72.2 & 95.1 \\
SRT & 85.0 & \textbf{72.4} & 96.0 \\
TMT $\bigvee$ SRT* & 84.5 & 72.1 & 95.2 \\
TMT $\bigvee$ SRT& \textbf{85.3} & 72.2 & \textbf{96.3} \\
TMT $\bigwedge$ SRT& 84.6 & 72.2 & 96.2 \\
\hline
\end{tabular}
\end{adjustbox}
\vspace{0.3cm}
\caption{We demonstrate the impact of our patch anomaly generation through ablation studies on CUHK Avenue~\cite{lu2013abnormal}, ShanghaiTech (ST)~\cite{luo2017revisit}, and Ped2~\cite{mahadevan2010anomaly}. We present the frame-level AUC (\%) of experiments on five variations: using only TMT, using only SRT, randomly selecting between TMT and single-directional SRT (indicated as *), randomly selecting between TMT and SRT, and using both TMT and SRT.}
\label{augmentationAblation}
\end{table}
\vspace{0.3em}
\noindent {\bf Performance comparison with existing works.}
We compare the frame-level AUC of our model with those of non-prediction-based methods~\cite{hasan2016learning,luo2017revisit,ravanbakhsh2017abnormal, sultani2018real, ravanbakhsh2019training, nguyen2019hybrid, gong2019memorizing, ionescu2019object, Abati_2019_CVPR, park2020learning, yu2020cloze, zaheer2020old, georgescu2020anomaly} and prediction-based methods~\cite{Liu_2018_CVPR, tang2020integrating, Nguyen_2019_ICCV, park2020learning} (see Table~\ref{t2}). We find that our method achieves competitive performance on the three datasets at a very high frame rate. Among the prediction-based methods, we exceed IntegradAE~\cite{tang2020integrating} on all datasets and show superior results especially on the Ped2 dataset~\cite{mahadevan2010anomaly}. Note that our model performs on par with other models without any additional modules, whereas several other prediction-based models~\cite{Liu_2018_CVPR, tang2020integrating, Nguyen_2019_ICCV} employed pre-trained optical flow networks to estimate the motion features. Among the non-prediction-based networks, Georgescu~\etal~\cite{georgescu2020anomaly} achieved superior performance by combining self-supervised learning with a pre-trained object detector.\\
\indent Furthermore, we conduct a score gap comparison, inspired by Liu~\etal~\cite{Liu_2018_CVPR}, to present the discriminating capacity of our model. Fig.~\ref{scoregap} shows that our model achieves higher gaps than FFP~\cite{Liu_2018_CVPR}, a prediction network boosted with an optical flow loss and generative learning, and MNAD~\cite{park2020learning}, a prediction method that reads and updates memory items from a memory module. The fact that the score distributions of normal and abnormal frames are far apart from each other demonstrates the effectiveness of our patch anomaly generation phase.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{./images/scoregap.pdf}
\end{center}
\vspace{-0.2cm}
\caption{Following the work of Liu~\etal~\cite{Liu_2018_CVPR}, we compare our work with FFP~\cite{Liu_2018_CVPR} and MNAD~\cite{park2020learning} by calculating the score gap between normal frames and abnormal frames on CUHK Avenue~\cite{lu2013abnormal} and UCSD Ped2~\cite{mahadevan2010anomaly}. The gap is obtained by averaging the scores of normal frames and those of abnormal frames and subtracting the two values. A higher gap represents a higher capacity for discriminating normal and abnormal frames.}
\label{scoregap}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{./images/ablation_patchsize.pdf}
\end{center}
\vspace{-0.2cm}
\caption{Results of ablation studies on patch size.}
\label{patchsizeAblation}
\end{figure}
\vspace{0.3em}
\noindent {\bf Running time.}
Our model runs at 195 frames per second (FPS). This rate is computed on the UCSD Ped2~\cite{mahadevan2010anomaly} test set with a single Nvidia GeForce RTX 3090 GPU. We obtain it by averaging the entire time consumed in both frame generation and anomaly prediction. To our knowledge, this is far faster than any previous work. We show a fair comparison with other networks in Table~\ref{t2}. We re-implemented the networks whose official code is publicly available on the same device and environment used for our network; the FPS for these is marked with $\dagger$ in the table. For methods without publicly available code, we used the figures reported in each paper. Note that our work is nearly 30\% faster than the second fastest ones~\cite{zhong2019graph, Lu_2013_ICCV}. Moreover, we computed the number of trainable parameters as further evidence of the 195 FPS. Our model has 2.15 M parameters, whereas MNAD~\cite{park2020learning} has 15.65 M at 67 FPS and FFP~\cite{Liu_2018_CVPR} has 14.53 M at 25 FPS. Our network is remarkably cheaper in computation than the compared methods.
\vspace{0.3em}
\noindent {\bf Ablation studies on patch size.}
Fig.~\ref{patchsizeAblation} shows the results of ablation experiments conducted on the Avenue dataset~\cite{lu2013abnormal} to observe the effect of patch size. The patch size determines the smallest unit on which the AE focuses. In all experiments of these ablation studies, only the size of the patch is changed among $30 \times 30$, $40 \times 40$, $60 \times 60$, and $90 \times 90$, while the frame resolution remains fixed at $240 \times 360$. This means that a comparatively small region is captured in a $\mathbf{C_f}$ with a patch size of $30 \times 30$, and a large region is captured in a $\mathbf{C_f}$ with a patch size of $90 \times 90$. Our network shows the lowest accuracy when the patch size is $90 \times 90$, which covers more than 10 percent of the frame. When the patch is considerably large, the model focuses on larger movements rather than smaller ones. Since abnormal events usually occur in small regions, lower performance is observed in this case.
\begin{figure}[t]
\centering
\subfloat[CUHK Avenue]{\includegraphics[width=1\linewidth]{./images/scoreplot_avenue.pdf}
}
\\
\vspace{-0.8em}
\subfloat[ShanghaiTech]{\includegraphics[width=1\linewidth]{./images/scoreplot_st.pdf}
}\\
\vspace{-0.8em}
\subfloat[UCSD Ped2]{\includegraphics[width=1\linewidth]{./images/scoreplot_ped.pdf}
}\\
\vspace{0.5em}
\caption{Score plot from evaluation. The red and blue lines denote $S_t$ and labels, respectively. Labels are 0 when frames are abnormal. (a) is obtained from Clips 4, 5, and 6 of Avenue~\cite{lu2013abnormal}. Running, throwing a bag, and moving in the wrong direction are well detected. (b) is obtained from Clip 1 of ShanghaiTech~\cite{luo2017revisit}. Chasing and running are detected as anomalies. (c) is obtained from Clips 1, 2, 3, and 4 of Ped2~\cite{mahadevan2010anomaly}. Captured anomalies within these clips are bicycles and a car.}
\label{scoreplot}
\end{figure}
\begin{figure}[!ht]
\centering
\subfloat[UCSD Ped2]{\includegraphics[width=1\linewidth]{./images/testimg_ped.pdf}
}
\\
\vspace{-0.8em}
\subfloat[CUHK Avenue]{\includegraphics[width=1\linewidth]{./images/testimg_avenue.pdf}
}\\
\vspace{-0.8em}
\subfloat[ShanghaiTech]{\includegraphics[width=1\linewidth]{./images/testimg_st.pdf}
}\\
\vspace{0.5em}
\caption{Examples of predicted frames and difference maps compared to our baseline. Best viewed in color.}
\label{output}
\end{figure}
\vspace{0.3em}
\noindent {\bf Qualitative results.}
We demonstrate the frame-level detection performance of our model in Fig.~\ref{scoreplot}. From the figure, it can be observed that $S_t$ rapidly decreases when anomalies appear in the frames. Once the abnormal objects disappear, $S_t$ increases immediately.\\
\indent Furthermore, the object-level detection capacity is shown in Fig.~\ref{output}. We present examples of predicted frames and the corresponding difference maps. Additionally, we highlight the results by comparing each sample with those of our baseline model. In the example of Ped2~\cite{mahadevan2010anomaly}, the bicycle is the annotated anomaly, which is an unseen appearance. In Avenue~\cite{lu2013abnormal} and ShanghaiTech~\cite{luo2017revisit}, the annotated anomalies relate to motion: a man throwing a bag and a running person. The outputs generated by our model trained with the patch anomaly generation phase are significantly blurrier than those of the baseline, validating the effectiveness of our transformation phase. Note that our model nearly erased the bag and the person in the examples of Avenue~\cite{lu2013abnormal} and ShanghaiTech~\cite{luo2017revisit}. This shows that our model does not simply infer abnormal objects by copying from the inputs, which is what the baseline model does. Moreover, for the ShanghaiTech dataset~\cite{luo2017revisit}, the difference map of our model shows a distinction in a larger region compared to that of the baseline. We observe that our model did not accept the motion in the input; it attempted to predict the trajectory of the runner as learned during training. In contrast, the baseline model generated a close copy of the input based on the given trajectory.
\section{Conclusion}
In this paper, we proposed an unsupervised prediction network for video anomaly detection with a patch anomaly generation phase. We designed a lightweight AE model to learn the common spatio-temporal features of normal frames. The proposed method generated transformed frame cuboids as inputs by applying SRT or TMT to a random patch cuboid within the frame cuboid. Our model was thereby encouraged to pay attention to the appearance and motion patterns of normal scenes. In addition, we discussed the impact of the patch anomaly generation by conducting ablation studies. Furthermore, the proposed method achieved competitive performance on three benchmark datasets and performed at a very high speed, which is as important as the detection capacity in anomaly detection.
\newpage
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\par Numerical analysis is playing an increasingly important role in marine hydrodynamics. Computational Fluid Dynamics (CFD) models based on the Navier--Stokes (NS) equations with proper turbulence modeling are the most comprehensive tools for this purpose. They are applicable to a wider range of problems than a potential-flow model, in particular when viscous flow separation and wave breaking become relevant and important. The computational costs, however, are normally too high to be affordable, which is regarded as one of the bottlenecks of CFD models if they are heavily involved in the design of marine structures.
Due to the large-volume nature of most marine structures, the inertial effect is predominant, whereas viscous effects play a secondary role. Therefore, potential-flow theory is often applied together with empirical corrections for viscous effects.
\par For potential-flow problems, the Boundary Element Method (BEM) is the most commonly used numerical method in marine hydrodynamics, as it reduces the dimension of the problem by one so that only the boundaries of the fluid domain need to be discretized. Even though the number of unknowns is reduced in the BEM compared with a volume method, it is still challenging for a conventional BEM to solve the resulting linear system when the number of unknowns is large, because the matrix is dense: $O(N^2)$ memory is required by conventional BEMs, and $O(N^2)$ and $O(N^3)$ operations are required for iterative solvers and direct solvers, respectively. Here $N$ denotes the total number of unknowns on the boundary surfaces.
\par Although the BEM is a very popular numerical method in potential-flow hydrodynamic analyses, field solvers are also widely used. \cite{wu1994finite} is among the first to use the FEM to investigate 2D nonlinear free-surface flow problems in the time domain. \cite{wu1995time} studied the fully-nonlinear wave-making problem by both FEM and BEM, and suggested that the FEM is more efficient than the BEM in terms of both CPU time and computer memory. \cite{2010FinitePart1} and \cite{2010FinitePart2} used a FEM to simulate the interaction between 3D fixed bodies and steep waves. On the other hand, high-order volume methods have gained great interest. \cite{2007On} and \cite{engsig2009efficient} developed 2D and 3D high-order Finite Difference Methods (FDMs) to study fully-nonlinear water wave problems in potential flows. \cite{2012Shao} and \cite{shao2014harmonic} proposed high-order Harmonic Polynomial Cell (HPC) methods in 2D and 3D respectively to study water waves and their interaction with structures. Some recent extensions utilize immersed boundary strategies and overset meshes to achieve better accuracy and stability \citep[e.g., see][]{hanssen2018free, tong2019numerical, tong2021adaptive,law2020numerical,liang2020liquid}. Compared to the BEMs, field solvers deal with sparse matrices, and the computational costs are roughly linearly dependent on the number of unknowns.
Ordinary boundary-element and volume methods, e.g. the BEM, FEM, FDM and HPC methods, are based on local approximations using smooth functions. Thus, very fine meshes have to be applied in areas where the fluid solution tends to be singular. Sharp edges are widely present in typical offshore structures. Examples include pontoons of semi-submersibles and tension leg platforms \citep[e.g., see][]{chen1995numerical, zhou2015resonance}, damping plates on offshore platforms \citep[e.g., see][]{tao2007spacing, shao2016stochastic, shao2019pontoon} and offshore floating wind turbine structures \citep[e.g., see][]{xu2019effect}, as well as the bilge keels on ships. From an industrial application point of view, it is essential to be able to obtain accurate numerical results with affordable computational effort. However, this is not always possible, even for the 2nd order mean wave loads.
The calculation of the 2nd order mean wave loads involves quadratic terms of the 1st order quantities, which pose great challenges at the sharp edges where the fluid velocities tend to be infinite. \cite{taylor1993effect} investigated the effect of corners on diffraction/radiation wave loads and wave-drift damping, and revealed that the most important hydrodynamic loads and the amplitudes of body motion do not change significantly as the radius of the corner approaches zero. For a floating truncated vertical cylinder free to surge and heave, \cite{zhao1989interaction} found it difficult to obtain convergent 2nd order mean wave forces via direct pressure integration. In their work, a method based on momentum and energy relations was shown to be more robust and efficient. By applying variants of Stokes's theorem, \cite{dai2005computation} and \cite{chen2007middle} developed a `middle-field formulation', which transfers the body-surface integral to a control surface at a distance from the body. A similar strategy was also applied by \cite{liang2017multi-domain}, where a multi-domain approach was developed. The middle-field formulation can be used to calculate drift forces and moments in all 6 degrees of freedom. The floating truncated vertical cylinder studied by \cite{zhao1989interaction} was revisited in \cite{2018Numerical}, and four different methods were used to calculate the vertical mean wave force: a momentum formulation implemented in a time-domain higher-order BEM \citep{shao2013second}, a semi-analytical solution \citep{mavrakos1988}, the middle-field method in HydroStar, and the near-field method in HydroStar. The first three methods matched each other very well, confirming the accuracy of the earlier results by \cite{zhao1989interaction} based on momentum and energy relations. However, the results determined by direct pressure integration were quite different in the heave resonance regime.
As elucidated in \cite{2018Numerical}, the results by direct pressure integration are not convergent, even though very fine meshes were used.
\cite{2020Comparative} used five different methods to investigate nonlinear radiation forces on bodies with sharp or rounded edges in the time domain. The first four methods are all near-field methods, while the fifth is based on momentum conservation. They found that the singularity at the sharp edge plays a significant role in the numerical computation of hydrodynamic forces in all near-field methods, while it has much less influence on results based on momentum conservation. Using an approach based on a control surface, \cite{2020A} rewrote the integral of the velocity-square terms on the body surface as the sum of two other integrals, one on a control surface enclosing the structure and the other on the free surface between the structure and the control surface. Encouraging results were obtained for double-frequency wave-radiation forces on an oscillating truncated vertical cylinder.
\par This paper aims to introduce, verify and demonstrate the XFEM as an accurate and efficient tool to calculate the linear and 2nd order wave loads on structures with sharp edges, without having to use a control surface. The XFEM provides a powerful framework, which allows knowledge of the local solution, normally known a priori, to be added to the finite-element approximation space at specific nodes. The solution enrichment at those nodes does not require any modification of the meshes. The idea of the XFEM was originally used by \cite{1999Elastic} to solve the problem of elastic crack growth, and one year later \cite{2000Arbitrary} formally named this approach the XFEM. The XFEM can be seen as an extension of the standard FEM based on the concept of Partition of Unity (PU) \citep{Babu1997THE}, and thus it maintains all the advantages of the standard FEM. Earlier concepts of PU date back to 1994, when it was first used by \cite{1994Special} to solve a rough-coefficient elliptic boundary value problem under the name of the special finite element method. Based on the ideas in \cite{1994Special}, this approach was further elaborated by \cite{1995On}, \cite{1996PUFEM}, \cite{I1997THE} and \cite{1997Approximation} under the names partition of unity method (PUM) and partition of unity finite element method (PUFEM), and was developed in \cite{2000Thedesign} and \cite{2000Thegeneralized} under the name Generalized Finite Element Method (GFEM). In the early days, the XFEM and the GFEM were developed independently, even though their basic idea is similar. A feature distinguishing the XFEM from the GFEM in early work is that the XFEM enriches only local parts of the domain, while the GFEM enriches the whole domain globally. However, \cite{2010Fries} argued that the XFEM and the GFEM are almost identical numerical methods.
The XFEM represents singular properties by adding singular basis functions, or any analytical representation of the solution, to the local approximation space, and it has been tremendously successful in dealing with singular or discontinuous problems, no matter how strong the discontinuity is \citep[see, e.g.][]{2001Modeling, 2002Non, 2015Extended, 2010Fries}. Besides, the XFEM has also been introduced into CFD to model two-phase flows \citep{2010The}.
\par In the present work, as verification and demonstration, the flow around an infinitely-thin flat plate and a heaving rectangle on the free surface will be studied via four different FEMs, namely the linear FEM, linear XFEM, quadratic FEM and quadratic XFEM. Convergence studies will be presented to illustrate the accuracy and efficiency of the XFEMs. Our results indicate that the singularities at sharp edges do not have a strong influence on the calculation of added mass and damping, confirming the conclusion from an earlier study by \cite{taylor1993effect}. However, if the near-field method is used, it is extremely challenging for conventional FEMs to achieve convergent 2nd order vertical mean forces for the heaving rectangle with affordable computational time on a normal PC. In contrast, the XFEMs with local enrichment, using corner-flow solutions \citep{Newman2017Marine} at the sharp edges, can achieve convergent results with much coarser meshes. Three different local enrichment strategies of the XFEM will also be compared and suggestions will be made for practical implementation.
\par The rest of the paper is organized as follows. In Sect.~\ref{sect:mathematical-formu}, the formulation of the boundary-value problem and the corner-flow solutions are presented. In Sect.~\ref{sect:Numerical Method}, the basic ideas of the conventional FEM and the XFEM are introduced via a mixed boundary-value problem in 2D. Besides, three enrichment strategies for the XFEM are presented and compared. In Sect.~\ref{sect:Numerical case}, as the first verification case, the velocity potential in the fluid domain and the added mass of an infinitely-thin flat plate are studied and compared with the analytical solution. The second verification concerns a heaving rectangle on a free surface, solved in the frequency domain. In Sect.~\ref{sect:Conclusion and perspective}, conclusions are drawn and future perspectives are discussed.
\section{Mathematical formulation}\label{sect:mathematical-formu}
\subsection{Governing equation and linearized boundary condition}
\begin{figure*}[t]
\centering
\includegraphics[scale=.55]{Figures/computaional_domain.eps}
\caption{An illustration of the fluid domain and its boundaries, as well as the definition of the coordinate system.}
\label{Fig.21}
\end{figure*}
\par A 2D coordinate system $Oxy$ is defined with the $Ox$ axis coinciding with the undisturbed free surface and $Oy$ axis orienting positively upward, as illustrated in Fig.~\ref{Fig.21}. The fluid domain $\Omega$ is enclosed by the body surface $S_b$, free surface $S_f$, bottom surface $S_d$, and vertical control surfaces $S_m$ at a distance from the body.
\par It is assumed that the fluid is inviscid and incompressible, and the flow is irrotational so that a velocity potential $\phi$ exists. In this study, we only consider 2D flows, and thus the governing equation in the fluid domain $\Omega$ is written as
\begin{equation}
\label{Eq.1}
\frac{\partial^2 \phi}{\partial x^2}
+ \frac{\partial^2 \phi}{\partial y^2} = 0,
\end{equation}
where $\phi$ denotes the velocity potential. Here only the radiation problem is considered, and thus the impenetrability condition on the body surface is written as:
\begin{equation}
\label{Eq.2}
\frac{\partial \phi}{\partial n} = {\boldsymbol v} \cdot {\boldsymbol n} \quad \text{at}\quad S_b,
\end{equation}
where ${\boldsymbol v}$ is the velocity of the body and ${\boldsymbol n}$ is the vector normal to the body surface pointing out of the fluid domain. Besides, the combined linearized free-surface condition is written as
\begin{equation}
\label{Eq.5}
\frac{{{\partial }^{2}}\phi }{\partial {{t}^{2}}}
+g\frac{\partial \phi }{\partial y}=0\quad \text { at }\quad S_f.
\end{equation}
The bottom condition is
\begin{equation}
\label{Eq.6}
\frac{\partial\phi}{\partial n}= 0\quad \text { at }\quad S_d.
\end{equation}
\subsection{Linearized frequency-domain analysis}
\par It is assumed that the problem is time-harmonic and that a steady state is reached. Therefore, the velocity potential can be separated into a spatial part and a temporal part as follows:
\begin{equation}
\label{Eq.35}
\phi(x,y,t) = \mathrm{Re}\{\varphi(x,y) \cdot \mathrm{e}^{\mathrm{i}\omega t}\},
\end{equation}
where $\omega$ denotes the angular frequency of oscillation, and $\mathrm{i} =\sqrt{-1}$. The motion of the body in the $j$-th mode can be defined as:
\begin{equation}
\label{Eq.53}
{\eta}_j = \mathrm{Re}\{\eta_{ja} \mathrm{e}^{\mathrm{i}\omega t}\} \quad (j=1,2,3),
\end{equation}
where $\eta_{ja}$ denotes the amplitude of the body motion in the $j$-th mode, and $j=1$, $2$, and $3$ correspond to sway, heave, and roll motions, respectively.
Accordingly, the governing equation and boundary-value problem with respect to the complex velocity potential $\varphi(x,y)$ can be written as:
\begin{equation}
\label{Eq.36}
\begin{aligned}
& \frac{\partial^{2} \varphi}{\partial x^{2}}+\frac{\partial^{2} \varphi}{\partial y^{2}}=0 &\text{in}\quad \Omega, \\
& -\omega^2 \varphi+ g\frac{\partial \varphi}{\partial y}=0 &\text{at}\quad S_f, \\
& \frac{\partial \varphi }{\partial n} =\sum\limits_{j=1}^{3}{\mathrm{i} \omega{{\eta }_{ja}}{{n}_{j}}} &\text{at}\quad S_b,\\
& \frac{\partial \varphi}{\partial y}=0 &\text {at} \quad S_d.
\end{aligned}
\end{equation}
Here $n_j$ represents the component of the normal vector in the direction of the body motion in the $j$-th mode. The dispersion relation in finite water depth is $k\tanh kh={\omega }^{2}/{g}$, where $k$ is the wavenumber and $h$ the water depth. Thus, the free-surface condition in Eq.~\eqref{Eq.36} can be rewritten as
\begin{equation}
\label{Eq.37}
-(k\tanh kh) \cdot \varphi+ \frac{\partial \varphi}{\partial y}=0 \quad \text { at } y=0.
\end{equation}
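The dispersion relation $k\tanh kh = \omega^2/g$ has no closed-form solution in finite depth, so $k$ is usually found iteratively. The following is a minimal Newton-iteration sketch (for illustration only, not part of the paper's implementation), starting from the deep-water guess $k \approx \omega^2/g$:

```python
import math

def wavenumber(omega, h, g=9.81, tol=1e-12, max_iter=50):
    """Solve the finite-depth dispersion relation k*tanh(k*h) = omega^2/g
    for the wavenumber k by Newton iteration."""
    target = omega**2 / g
    k = target  # deep-water limit as initial guess
    for _ in range(max_iter):
        t = math.tanh(k * h)
        f = k * t - target
        # d/dk [k tanh(kh)] = tanh(kh) + k*h*sech^2(kh)
        df = t + k * h * (1.0 - t * t)
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k
```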
Besides, the radiation condition requiring the radiated waves to propagate outwards can be expressed as:
\begin{equation}
\label{Eq.38}
\begin{aligned}
\frac{\partial \varphi}{\partial x}+\mathrm{i} k\, \mathrm{sgn}(x) \,\varphi \rightarrow 0 \quad \text { when } \quad |x| \rightarrow \infty.
\end{aligned}
\end{equation}
Since the horizontal distance between the rectangle and the matching boundaries is sufficiently large, the radiation condition can be imposed at the matching boundaries $S_m$:
\begin{equation}
\label{Eq.39}
\begin{aligned}
&\frac{\partial \varphi}{\partial x}+\mathrm{i} k\, \mathrm{sgn}(x)\, \varphi = 0 \quad \text { at } S_m.
\end{aligned}
\end{equation}
\subsection{Corner-flow solution}\label{sect:corner-flow}
\begin{figure}[t]
\centering
\includegraphics[scale=.40]{Figures/CoordinateSystemCorner.eps}
\caption{Definition of the Cartesian and polar coordinate systems for the corner flow problem.}
\label{Fig.1}
\end{figure}
\par In order to demonstrate the singular characteristics of the corner flow in potential-flow theory, the flow past a sharp corner with an exterior angle $\beta$ and the corresponding interior angle $\gamma = 2\pi-\beta$, as shown in Fig.~\ref{Fig.1}, is considered. If the considered semi-infinite wedge is fixed, the corner-flow solution can, according to \cite{Newman2017Marine}, be defined in the polar coordinate system $Or\theta$ as
\begin{align}
\label{Eq.7}
\varphi ={{A}_{j}}{{r}^{j\pi /\beta }}\cos \left( \frac{j\pi }{\beta }\theta \right)
={{A}_{j}}{{r}^{j\pi /(2\pi -\gamma )}}\cos \left( \frac{j\pi }{2\pi -\gamma }\theta \right),
\end{align}
where $A_j$ is a constant and $j$ is a positive integer. It is obvious that the velocity determined by Eq.~\eqref{Eq.7} is singular at the tip of the semi-infinite wedge when $j=1$ and $\gamma<\pi$. If we define
\begin{equation}
\label{Eq.8}
m_j = \frac{j\pi }{2\pi -\gamma },
\end{equation}
Eq.~\eqref{Eq.7} can be rewritten as
\begin{align}
\label{Eq.9}
\varphi ={{A}_{j}}{{r}^{m_j}}\cos \left( m_j\theta \right).
\end{align}
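To make the singular behavior concrete, the following sketch (an illustration under the assumption $A_j=1$, not part of the paper's solver) evaluates Eq.~\eqref{Eq.9} and its radial derivative, whose magnitude grows without bound as $r\to 0$ whenever $m_j<1$:

```python
import math

def corner_potential(r, theta, gamma, j=1, A=1.0):
    """Corner-flow potential A * r**m_j * cos(m_j * theta), Eq. (9),
    with m_j = j*pi / (2*pi - gamma)."""
    m = j * math.pi / (2.0 * math.pi - gamma)
    return A * r**m * math.cos(m * theta)

def radial_velocity(r, theta, gamma, j=1, A=1.0):
    """d(phi)/dr = A * m * r**(m-1) * cos(m * theta); singular as r -> 0
    when m < 1, i.e. for j = 1 and interior angle gamma < pi."""
    m = j * math.pi / (2.0 * math.pi - gamma)
    return A * m * r**(m - 1.0) * math.cos(m * theta)
```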
\section{Numerical method}\label{sect:Numerical Method}
Since the XFEM is an extension of the conventional FEM obtained by including singular basis functions in the approximation space, we will in this section start with a very brief introduction of the conventional FEM, followed by more details of the XFEM as well as different local enrichment strategies. A general description of conventional FEMs can be found in many textbooks \citep[see, e.g.][]{zienkiewicz2005finite,hughes2012finite,reddy2019introduction}.
\subsection{Finite Element Method}
\begin{figure*}[t]
\centering
\subfigure[Linear element] {\includegraphics[scale=.80]{Figures/IsoparametricElement4Node.eps}}
\subfigure[Quadratic element] {\includegraphics[scale=.80]{Figures/IsoparametricElement8Node.eps}}
\caption{Linear and quadratic quadrilateral standard elements on the $\xi\eta$-plane.}
\label{Fig.22}
\end{figure*}
\par In a FEM formulation for a potential-flow problem, the fluid domain is discretized into elements, also called finite elements, and the velocity potential in each element can be approximated as
\begin{equation}
\label{Eq.11}
\varphi =\sum\limits_{j=1}^{n_p}{{{N}_{j}}}(x,y){{\varphi }_{j}}.
\end{equation}
Here $N_j(x,y)$ is the shape function, $n_p$ is the number of nodes in the whole fluid domain and $\varphi_{j}$ denotes the nodal value of the velocity potential at node $j$. The shape functions are commonly defined element-wise; for simplicity, we use $n^e_p$ to represent the number of nodes in a single element. Referring to \cite{zienkiewicz2005finite}, for the 4-node quadrilateral linear FEM, namely $n^e_p=4$, the shape functions defined on the parametric $\xi\eta$-plane can be written as:
\begin{equation}
\label{Eq.54}
{{N}_{i}}=\frac{1}{4}\left( 1+{{\xi }_{i}}\xi \right)\left( 1+{{\eta }_{i}}\eta \right), \qquad i=1,\cdots,4,
\end{equation}
where $(\xi_i,\eta_i)$ denote the normalized coordinates at node $i$.
For an incomplete quadratic quadrilateral element, namely $n^e_p=8$, the shape function can be expressed as:
\begin{equation}
\label{Eq.55}
\begin{aligned}
{{N}_{i}}& =\frac{1}{4}\left( 1+{{\xi }_{i}}\xi \right)\left( 1+{{\eta }_{i}}\eta \right)\left( {{\xi }_{i}}\xi +{{\eta }_{i}}\eta -1 \right) \quad (i=1,\cdots,4),\\
{{N}_{i}}& =\frac{1}{2}\left( 1-{{\xi }^{2}} \right)\left( 1+{{\eta }_{i}}\eta \right)\quad(i=5,7),\\
{{N}_{i}}& =\frac{1}{2}\left( 1+{{\xi }_{i}}\xi \right)\left( 1-{{\eta }^{2}} \right)\quad(i=6,8).
\end{aligned}
\end{equation}
Examples of the 4-node and 8-node quadrilateral elements in the parametric $\xi\eta$-plane are illustrated in Fig.~\ref{Fig.22}.
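As a quick numerical check (a sketch for illustration, not the paper's code), the bilinear shape functions of Eq.~\eqref{Eq.54} can be evaluated and verified to satisfy both the partition of unity and the Kronecker-$\delta$ property:

```python
def shape_linear(xi, eta):
    """Bilinear shape functions of the 4-node quadrilateral element,
    Eq. (54), with nodes ordered (-1,-1), (1,-1), (1,1), (-1,1)
    in the parametric xi-eta plane."""
    nodes = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]
    return [0.25 * (1.0 + xi_i * xi) * (1.0 + eta_i * eta)
            for xi_i, eta_i in nodes]
```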
Application of the Galerkin method leads to
\begin{equation}
\label{Eq.12}
\iint_{\Omega }{{{N}_{i}}}(x,y)\left[ {{\nabla }^{2}}\sum\limits_{j=1}^{n_p}{{{N}_{j}}}(x,y){{\varphi }_{j}} \right]\mathrm{d}\Omega =0.
\end{equation}
Considering a general BVP with a Dirichlet boundary $S_D$, a Neumann boundary $S_N$ and a Robin boundary $S_R$, the weak form of the integral in Eq.~\eqref{Eq.12} can be obtained by applying Green's theorem and letting the test functions equal zero on $S_D$:
\begin{equation}
\label{Eq.13}
\begin{aligned}
\int_{S_N + S_R} N_i\frac{\partial \varphi}{\partial n} \mathrm{d} S
-\iint_{\Omega } \nabla N_i \cdot \sum\limits_{j\in S_D}\varphi_j\nabla N_j \mathrm{d}\Omega \\
-\iint_{\Omega }{\nabla }{{N}_{i}}\cdot \sum\limits_{j\notin {{S}_{D}}}{{{\varphi }_{j}}}\nabla {{N}_{j}}\mathrm{d}\Omega
=0 \quad (i\notin {{S}_{D}}).
\end{aligned}
\end{equation}
For a mixed Dirichlet--Neumann BVP, as we will study in Sect.~\ref{sect:flat-plate} for the flat plate in infinite fluid, $\varphi=f_p$ on $S_D$ and $\partial \varphi / \partial n = f_n $ on $S_N$ are known from the boundary conditions.
In this case, Eq.~\eqref{Eq.13} can be represented by the linear system
\begin{equation}
\label{Eq.14}
\mathbf{K\Phi = B},
\end{equation}
where
\begin{equation}
\label{Eq.15}
\mathbf{\Phi} ={{\left[ \begin{matrix}
   {{\varphi }_{1}} & {{\varphi }_{2}} & \cdots & {{\varphi }_{i}} & \cdots  \\
\end{matrix} \right]}^{T}}.
\end{equation}
Here the superscript $T$ represents the transpose of a matrix or vector. The elements in the matrix $\mathbf{K}$ and the vector $\mathbf{B}$ are defined respectively as
\begin{equation}
\label{Eq.16}
{{K}_{ij}} = \iint_{\Omega }{\nabla }{{N}_{i}}\cdot \nabla {{N}_{j}}\mathrm{d}\Omega, \quad (i\notin {{S}_{D}},j\notin {{S}_{D}})
\end{equation}
\begin{align}
\label{Eq.17}
{{B}_{i}}=\int_{{{S}_{b}}}{{{N}_{i}}{{f}_{n}}}\mathrm{d} S
-\iint_{\Omega }{\nabla }{{N}_{i}}\cdot \sum\limits_{j\in {{S}_{D}}}{{{({{f}_{p}})}_{j}}}\nabla {{N}_{j}}\mathrm{d}\Omega, \quad (i\notin {{S}_{D}}).
\end{align}
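The structure of the stiffness matrix in Eq.~\eqref{Eq.16} is easiest to see in one dimension, where the element contribution for linear elements of length $h$ is $(1/h)$ times a $2\times 2$ block. A one-dimensional analogue of the assembly of $K_{ij}$ for linear hat functions on an arbitrary mesh (a sketch for illustration only) is:

```python
def assemble_stiffness_1d(x_nodes):
    """Assemble K_ij = integral of N_i' * N_j' dx for linear 'hat'
    elements on a 1D mesh -- a 1D analogue of Eq. (16).
    Element stiffness for linear elements: (1/h) * [[1, -1], [-1, 1]]."""
    n = len(x_nodes)
    K = [[0.0] * n for _ in range(n)]
    for e in range(n - 1):
        h = x_nodes[e + 1] - x_nodes[e]
        K[e][e] += 1.0 / h
        K[e][e + 1] -= 1.0 / h
        K[e + 1][e] -= 1.0 / h
        K[e + 1][e + 1] += 1.0 / h
    return K
```

Boundary conditions are then imposed exactly as in Eq.~\eqref{Eq.17}: Dirichlet nodes are moved to the right-hand side, and Neumann data enter through the boundary integral.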
\par For a mixed Neumann--Robin BVP, as we will study in Sect.~\ref{sect:heave-rectangle} for the linear frequency-domain solution of a heaving rectangle at the free surface, the weak form can be written more specifically as:
\begin{equation}
\label{Eq.40}
\begin{aligned}
\iint_{\Omega }{\nabla {{N}_{i}}\cdot }\sum\limits_{j}{{{\varphi }_{j}}\nabla {{N}_{j}}}\mathrm{d}\Omega
+\mathrm{i} k\int_{{{S}_{m}}}{{{N}_{i}}\sum\limits_{j}{{{\varphi }_{j}}{{N}_{j}}}}\mathrm{d} S\\
-k\tanh kh\int_{{{S}_{f}}}{{{N}_{i}}\sum\limits_{j}{{{\varphi }_{j}}{{N}_{j}}}}\mathrm{d} S=\int_{{{S}_{b}}}{{{N}_{i}}{f_n}}\mathrm{d} S.
\end{aligned}
\end{equation}
Here the mean free surface $S_f$ and the control surfaces $S_m$ are Robin boundaries, where the boundary conditions are defined in Eqs.~\eqref{Eq.37} and \eqref{Eq.39}, respectively. The Neumann boundary condition on $S_b$ has been defined in Eq.~\eqref{Eq.36}.
\subsection{Extended Finite Element Method (XFEM)}
The XFEM was developed based on the concept of partition of unity (PU); a PU is a set of non-zero functions $N_i(x,y)$ in the partition-of-unity domain satisfying the following condition:
\begin{equation}
\label{Eq.18}
\sum\limits_{i}{{{N}_{i}(x,y)}} = 1.
\end{equation}
For any function in the PU domain, the following relationship holds:
\begin{equation}
\label{Eq.19}
\sum\limits_{i}{{{N}_{i}}(x,y)}\psi (x,y)=\psi (x,y).
\end{equation}
In particular, Eq.~\eqref{Eq.19} is also satisfied when $\psi (x,y)$ is a constant.
Obviously, standard shape functions, for instance those shown in Eqs.~\eqref{Eq.54} and \eqref{Eq.55}, are PU functions.
After introducing the conventional FEM and the concept of PU, the enrichment functions and extra degrees of freedom (DOFs) at the selected nodes will now be presented. For simplicity and without loss of generality, we denote by $\mathcal{I}$ the set of all nodes in the fluid domain and by $\mathcal{J}$ the subset of nodes which will be enriched. Thus the trial solution in the fluid domain with only one enrichment function at each node $j\in \mathcal{J}$ can be written as
\begin{equation}
\label{Eq.20}
\varphi =\sum\limits_{j\in \mathcal{I}}{{{N}_{j}}}(x,y){{\varphi }_{j}}+\sum\limits_{j\in \mathcal{J}}{{{N}_{j}}}(x,y)\psi (x,y){{\Psi }_{j}},
\end{equation}
where $\Psi_j$ represents the additional DOF at the enriched node $j$, $N_j(x,y)$ is the standard finite-element shape function, and $\psi(x,y)$ denotes the enrichment function representing special knowledge, e.g. a known local singularity, of the fluid solution. The products ${{{N}_{j}}}(x,y)\psi (x,y)$ may be considered as local enrichment functions, as their supports coincide with those of the conventional finite-element shape functions, preserving sparsity in the discrete equations \citep{2010Fries}. It can be seen from Eq.~\eqref{Eq.20} that the nodal values at the nodes $j\in \mathcal{J}$ differ from $\varphi_j$, which is an unfavorable property of Eq.~\eqref{Eq.20}. To ensure that the nodal values are always $\varphi_{j}$ at the enriched nodes $j\in \mathcal{J}$, the enrichment function can be shifted and Eq.~\eqref{Eq.20} rewritten as \citep[see, e.g.][]{2010Fries, 2000Arbitrary},
\begin{equation}
\label{Eq.21}
\begin{aligned}
\varphi =\sum\limits_{j\in \mathcal{I}}{{{N}_{j}}}&(x,y){{\varphi }_{j}}+\sum\limits_{j\in \mathcal{J}}{{{N}_{j}}}(x,y)[\psi (x,y)-\psi (x_j,y_j)]{{\Psi }_{j}}.
\end{aligned}
\end{equation}
As a result of the shifting, the enrichment represented by the 2nd summation on the right-hand side of Eq.~\eqref{Eq.21} vanishes at the nodes $j\in \mathcal{J}$, which recovers the Kronecker-$\delta$ property of standard finite-element approximations. Unless otherwise stated, all the enrichment functions used in this paper are shifted enrichment functions.
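The effect of the shifting in Eq.~\eqref{Eq.21} can be illustrated with a small sketch: for any enrichment function $\psi$, the shifted local enrichment vanishes at its own node, so the nodal value there remains $\varphi_j$. The particular $N_j$ and $\psi$ below are illustrative choices, not the paper's:

```python
def shifted_enrichment(N_j, psi, node_xy):
    """Build (x, y) -> N_j(x, y) * (psi(x, y) - psi(x_j, y_j)),
    the shifted local enrichment function of Eq. (21)."""
    xj, yj = node_xy
    psi_at_node = psi(xj, yj)
    return lambda x, y: N_j(x, y) * (psi(x, y) - psi_at_node)
```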
More generally, if more than one enrichment function is introduced at each node $j\in \mathcal{J}$, Eq.~\eqref{Eq.21} can be extended as
\begin{equation}
\label{Eq.22}
\begin{aligned}
\varphi =\sum\limits_{j\in \mathcal{I}}{{{N}_{j}}}&(x,y){{\varphi }_{j}}+\\
&\sum\limits_{j\in \mathcal{J}}\sum\limits_{l}{{{N}_{j}}}(x,y)[\psi^l (x,y)-\psi^l (x_j,y_j)]{{\Psi }^{l}_{j}}.
\end{aligned}
\end{equation}
Here $\psi^l$ is the $l$-th enrichment function at the enriched node $j$. For brevity, we express Eq.~\eqref{Eq.22} in matrix form as
\begin{equation}
\label{Eq.23}
\varphi =\left[ \begin{matrix}
\mathbf{N}_{std} & \mathbf{N}_{enr} \\
\end{matrix} \right]\left[ \begin{matrix}
\mathbf{\Phi} \\
\mathbf{\Psi} \\
\end{matrix} \right],
\end{equation}
where
\begin{equation*}
\begin{aligned}
&{{N}_{stdj}}={{N}_{j}}(x,y) \quad (j=1,\cdots,n_p),\\
&{{N}^{l}_{enrj}}={{N}_{j}}(x,y)\cdot [\psi^l (x,y)-\psi^l (x_j,y_j)] \quad \\
&\qquad\qquad(j\in\mathcal{J}, l=1,\cdots, n^{enr}),
\end{aligned}
\end{equation*}
are the elements in $\mathbf{N}_{std}$ and $\mathbf{N}_{enr}$, and $n^{enr}$ denotes the number of enrichment functions per node. The dimension of $\mathbf{N}_{std}$ is $1\times n_p$. If there are $n^{enr}_{p}$ enriched nodes in the whole domain, the dimension of $\mathbf{N}_{enr}$ is $1\times (n^{enr}_{p}\cdot n^{enr})$. Substituting Eq.~\eqref{Eq.23} into Eq.~\eqref{Eq.13}, we obtain the following expression:
\begin{equation}
\label{Eq.24}
\begin{aligned}
&\int_{{{S}_{N}+{S}_{R}}}{\left[ \begin{matrix}
\mathbf{N}_{std}^{T} \\
\mathbf{N}_{enr}^{T} \\
\end{matrix} \right]{\frac{\partial \varphi}{\partial n}}}\mathrm{d} S-\iint_{\Omega } \nabla \mathbf{N}_{std} \cdot \mathbf{N}_{std}^D \mathrm{d}\Omega \mathbf{\Phi}_D\\
&-\iint_{\Omega }{\left[ \begin{matrix}
\nabla \mathbf{N}_{std}^{T} \\
\nabla \mathbf{N}_{enr}^{T} \\
\end{matrix} \right]\left[ \begin{matrix}
\nabla \mathbf{N}_{std} & \nabla \mathbf{N}_{enr} \\
\end{matrix} \right]}\mathrm{d}\Omega \left[ \begin{matrix}
\mathbf{\Phi} \\
\mathbf{\Psi} \\
\end{matrix} \right]=0.
\end{aligned}
\end{equation}
Here $\mathbf{N}_{std}^D$ denotes the shape functions associated with the nodes on the Dirichlet boundary, and $\mathbf{\Phi}_D$ represents the velocity potentials of those nodes. We emphasize that there are no enrichment nodes on the Dirichlet boundary. For a mixed Dirichlet-Neumann BVP, in the same manner as Eq.~\eqref{Eq.14}, the linear system resulting from Eq.~\eqref{Eq.24} can be written as:
\begin{equation}
\label{Eq.25}
\mathbf{K X}=\mathbf{B}.
\end{equation}
The coefficient matrix $\mathbf{K}$ can be divided into four parts as follows:
\begin{equation}
\label{Eq.26}
\mathbf{K}=\left[ \begin{matrix}
{\mathbf{K}^{\varphi \varphi }} & {\mathbf{K}^{\varphi \psi }} \\
{\mathbf{K}^{\psi \varphi }} & {\mathbf{K}^{\psi \psi }} \\
\end{matrix} \right],
\end{equation}
where the elements in $\mathbf{K}^{\varphi \varphi }$, $\mathbf{K}^{\varphi \psi }$, $\mathbf{K}^{\psi \varphi }$ and $\mathbf{K}^{\psi \psi }$ are
\begin{equation}
\label{Eq.27}
\begin{aligned}
{{K}_{ij}^{\varphi \varphi }}
=& \iint_{\Omega }{\nabla }{{N}_{i}}\cdot \nabla {{N}_{j}}\mathrm{d}\Omega \quad\\
& (i\notin {{S}_{D}},j\notin {{S}_{D}}, i\in\mathcal{I}, j\in\mathcal{I}),
\end{aligned}
\end{equation}
\begin{equation}
\label{Eq.28}
\begin{aligned}
{{K}_{ij}^{\varphi \psi l}}
=& \iint_{\Omega }{\nabla }{{N}_{i}}\cdot \nabla \left[ {{N}_{j}\left(\psi^l (x,y)-\psi^l (x_j,y_j)\right)} \right]\mathrm{d}\Omega \\
& (i\notin {{S}_{D}},j\notin {{S}_{D}}, i\in\mathcal{I}, j\in\mathcal{J}),
\end{aligned}
\end{equation}
\begin{equation}
\label{Eq.29}
\begin{aligned}
{{K}_{ij}^{\psi \varphi l}}
=& \iint_{\Omega } \nabla \left[ {{N}_{i}\left(\psi^l (x,y)-\psi^l (x_i,y_i)\right)} \right] \cdot {\nabla }{{N}_{j}} \,\mathrm{d}\Omega \\
& (i\notin {{S}_{D}},j\notin {{S}_{D}}, i\in\mathcal{J}, j\in\mathcal{I}),
\end{aligned}
\end{equation}
\begin{equation}
\label{Eq.30}
\begin{aligned}
{{K}_{ij}^{\psi \psi l}}
=& \iint_{\Omega }\nabla \left[ {{N}_{i}\left(\psi^l (x,y)-\psi^l (x_i,y_i)\right)}\right] \cdot \\
& \nabla \left[ {{N}_{j}\left(\psi^l (x,y)-\psi^l (x_j,y_j)\right)} \right]\mathrm{d}\Omega \\
& (i\notin {{S}_{D}},j\notin {{S}_{D}}, i\in\mathcal{J}, j\in\mathcal{J}).
\end{aligned}
\end{equation}
$\mathbf{K}^{\varphi\varphi}$ comes from the conventional standard finite elements and involves only the standard shape functions. $\mathbf{K}^{\varphi\psi}$, $\mathbf{K}^{\psi\varphi}$ and $\mathbf{K}^{\psi\psi}$ are related to the enrichment. $\mathbf{X}$ is a $(n_p+n^{enr}_{p}\cdot n^{enr})\times1$ vector.
The right-hand-side vector in Eq.~\eqref{Eq.25} is
\begin{equation}
\label{Eq.31}
\mathbf{B}=\left[ \begin{matrix}
{\mathbf{B}^{\varphi}} \\
{\mathbf{B}^{\psi}} \\
\end{matrix} \right],
\end{equation}
where the elements in $\mathbf{B}^{\varphi}$ and $\mathbf{B}^{\psi}$ are
\begin{equation}
\label{Eq.32}
\begin{aligned}
{{B}^{\varphi }_i}
=& \int_{{{S}_{N}+{S}_{R}}}{N_i{\frac{\partial \varphi}{\partial n}}}\mathrm{d} S-\iint_{\Omega } \nabla N_i \cdot \sum\limits_{j\in S_D}\varphi_j\nabla N_j \mathrm{d}\Omega\\
& (i\notin {{S}_{D}}, i\in\mathcal{I}),
\end{aligned}
\end{equation}
\begin{equation}
\label{Eq.33}
\begin{aligned}
{{B}^{\psi l}_i}
=& \int_{{{S}_{N}+{S}_{R}}}{ {{N}_{i}\left(\psi^l (x,y)-\psi^l (x_i,y_i)\right)} {\frac{\partial \varphi}{\partial n}}}\mathrm{d} S\\
& (i\notin {{S}_{D}}, i\in\mathcal{J}).
\end{aligned}
\end{equation}
$\mathbf{B}^\varphi$ is related to the standard FEM and $\mathbf{B}^\psi$ is related to the local enrichment. To be clear, Eq.~\eqref{Eq.27} is equivalent to Eq.~\eqref{Eq.16}, and Eq.~\eqref{Eq.32} is equivalent to Eq.~\eqref{Eq.17} in the previous section.
\par For a mixed Neumann-Robin BVP, we use the corner-flow solution as the enrichment function and Eq.~\eqref{Eq.22} to construct the local approximation, and obtain a final equation system similar to Eq.~\eqref{Eq.40}:
\begin{equation}
\label{Eq.41}
\begin{aligned}
&\iint_{\Omega }{\left[ \begin{matrix}
\nabla \mathbf{N}_{std}^{T} \\
\nabla \mathbf{N}_{enr}^{T} \\
\end{matrix} \right]\left[ \begin{matrix}
\nabla \mathbf{N}_{std} & \nabla \mathbf{N}_{enr} \\
\end{matrix} \right]}\mathrm{d}\Omega \left[ \begin{matrix}
\mathbf{\Phi} \\
\mathbf{\Psi} \\
\end{matrix} \right]\\
&+\mathrm{i} k\int_{{{S}_{m}}}{\mathbf{N}_{std}^{T}} \cdot {\mathbf{N}_{std}}\mathrm{d} S\cdot \mathbf{\Phi} -\\
& k\tanh kh \int_{{{S}_{f}}}{\mathbf{N}_{std}^{T}\cdot{\mathbf{N}_{std}}}\mathrm{d} S\cdot \mathbf{\Phi}
=\int_{{{S}_{b}}}{\left[ \begin{matrix}
\mathbf{N}_{std}^{T} \\
\mathbf{N}_{enr}^{T} \\
\end{matrix} \right]{{f}_{n}}}\mathrm{d} S.
\end{aligned}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[scale=.4]{Figures/PointEnrichment.eps}
\caption{An illustration of the point enrichment.}
\label{Fig.2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=.4]{Figures/PatchEnrichment.eps}
\caption{An illustration of the patch enrichment.}
\label{Fig.3}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=.4]{Figures/RadiusEnrichment.eps}
\caption{An illustration of the radius enrichment.}
\label{Fig.4}
\end{figure}
\subsection{Enrichment strategies}\label{sect:enrichments}
\par In the previous subsection, the XFEM has been introduced through mixed BVPs. The key point of the XFEM is the local enrichment, and in this subsection we discuss three different enrichment strategies in detail. Unless otherwise mentioned, in the present work a \textit{singular point} is defined as a point where the fluid velocity is infinite, and an element is called a \textit{singular element} if it contains at least one \textit{singular point}. A \textit{singular patch} is a patch of multiple elements, among which at least one is a \textit{singular element}.
\subsubsection{Point enrichment}\label{sect:point-enri}
\par In the point enrichment approach, as depicted in Fig.~\ref{Fig.2}, singular solutions are introduced to enrich the local approximation only at the \textit{singular points}. In this way, the additional number of unknowns due to the enrichment depends only on the number of \textit{singular points} and the number of singular terms introduced at each \textit{singular point}, and is thus independent of the mesh. Therefore, this enrichment only influences the \textit{singular elements} which contain the \textit{singular points}. The influence domain of an enriched point depends on the mesh size. Consequently, the gain in solution accuracy may not grow as the mesh is refined, which will be discussed later in Sect.~\ref{sect:flat-plate}.
\subsubsection{Patch enrichment}\label{sect:patch-enri}
\par Compared with point enrichment, the patch enrichment method introduces enrichment at all points of the \textit{singular patch}, which are represented by the filled circles in Fig.~\ref{Fig.3}. The end points of the blue line are the \textit{singular points}. Similar to the point enrichment method, the enrichment domain of patch enrichment depends on the mesh size, and the additional number of unknowns does not increase even if the mesh is refined. Similar to point enrichment, patch enrichment suffers from a low convergence rate under local mesh refinement.
\subsubsection{Radius enrichment}\label{sect:radius-enri}
\par Different from point enrichment and patch enrichment, the radius enrichment method enriches the solution at all points within a circle of predefined radius $R_{enri}$. Here $R_{enri}$ must be positive and is independent of the mesh size. As demonstrated in Fig.~\ref{Fig.4}, the center of the enrichment area is the \textit{singular point}. The value of $R_{enri}$ may be taken as $1/10$ of the characteristic dimension of the domain, as suggested by \cite{laborde2005high}. In the present work, we normally take $R_{enri}=0.2$ as we are considering 2D problems. More details on how to choose the enrichment radius will be discussed in the numerical example of the heaving rectangle on the free surface. The drawback of this enrichment method is that the additional number of unknowns increases with mesh refinement.
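As a minimal illustration of this strategy, the enriched node set $\mathcal{J}$ can be selected as all nodes lying within the distance $R_{enri}$ of at least one singular point. The Python sketch below assumes simple array shapes and is not code from the present solver.

```python
import numpy as np

def radius_enrichment_nodes(node_coords, singular_points, R_enri=0.2):
    """Return the indices of the nodes to enrich: every node lying within
    a distance R_enri of at least one singular point.

    node_coords     : (n_p, 2) nodal coordinates
    singular_points : (n_s, 2) singular-point locations
    R_enri          : enrichment radius, independent of the mesh size
    """
    # pairwise distances between all nodes and all singular points
    d = np.linalg.norm(node_coords[:, None, :]
                       - np.asarray(singular_points, float)[None, :, :], axis=2)
    return np.flatnonzero((d <= R_enri).any(axis=1))
```

Because the selection depends only on $R_{enri}$, refining the mesh places more nodes inside the circle, which is exactly why the number of extra unknowns grows with mesh refinement.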
\subsection{Integrals on the singular elements}\label{sect:singular integration}
In this subsection, the integration on singular elements is discussed. To illustrate the element integration strategy explicitly, a singular function stemming from the corner-flow solution in Sect.~\ref{sect:corner-flow} is utilized as the enrichment function. In Eq.~\eqref{Eq.9}, the most singular term is the first term, i.e. the term with $j=1$:
\begin{equation}
\label{Eq.60}
\psi(x,y) = r^{m_1}\cos(m_1\theta).
\end{equation}
For demonstration purposes, we will take this term as an example of the enrichment function, and discuss the numerical integration of singular terms on the elements. In practice, more terms in Eq.~\eqref{Eq.9} can be included as enrichment functions, following a procedure similar to that described in the rest of this section.
The trial solutions in Eqs.~\eqref{Eq.20} and \eqref{Eq.21} involve the evaluation of the following enrichment shape function
\begin{equation}
\label{Eq.61}
N^{enr}_j= N_{j}(x,y) \psi(x,y) = N_{j}(x,y) r^{m_1}\cos(m_1\theta).
\end{equation}
Here $(x,y)$ is the location in physical space, which can be obtained from the isoparametric element illustrated in Fig.~\ref{Fig.22}. The derivatives of the enrichment shape function with respect to $x$ and $y$ are expressed by
\begin{equation}
\label{Eq.62}
\begin{aligned}
& \frac{\partial {N^{enr}_j}}{\partial x}=\frac{\partial {{N}_{j}}}{\partial x}{{r}^{m_1}}\cos (m_1\theta )+{{N}_{j}}\frac{\partial }{\partial x}\left( {{r}^{m_1}}\cos (m_1\theta ) \right), \\
& \frac{\partial {N^{enr}_j}}{\partial y}=\frac{\partial {{N}_{j}}}{\partial y}{{r}^{m_1}}\cos (m_1\theta )+{{N}_{j}}\frac{\partial }{\partial y}\left( {{r}^{m_1}}\cos (m_1\theta ) \right). \\
\end{aligned}
\end{equation}
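As an aside, the derivatives of the singular term admit a compact closed form through the identity $r^{m}\cos(m\theta)=\operatorname{Re}\left[(x+\mathrm{i}y)^{m}\right]$. The following Python sketch (a cross-check only, not part of the solver) evaluates the gradient this way and verifies it against central finite differences.

```python
import numpy as np

def singular_term(x, y, m):
    """psi = r^m cos(m*theta) = Re[(x + i*y)^m], cf. Eq. (60)."""
    return ((x + 1j * y) ** m).real

def singular_term_grad(x, y, m):
    """Gradient via the analytic identity d(z^m)/dz = m z^(m-1):
    d(psi)/dx = Re[m z^(m-1)],  d(psi)/dy = -Im[m z^(m-1)].
    Both behave like r^(m-1) as r -> 0, i.e. singular when m < 1."""
    w = m * (x + 1j * y) ** (m - 1)
    return w.real, -w.imag
```

For instance, $m_1=2/3$ (an interior angle of $3\pi/2$) gives a gradient behaving like $r^{-1/3}$ near the corner, consistent with the $r^{m_1-1}$ singularity discussed below.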
Substituting Eq.~\eqref{Eq.62} into Eq.~\eqref{Eq.30}, the diagonal entry of the enriched element stiffness matrix ${K}_{ii}^{\psi \psi}$ can be written as
\begin{equation}
\label{Eq.63}
K_{ii}^{\psi \psi }=\int_{{{\Omega }^{e}}}{\left[ {{\left( \frac{\partial {N^{enr}_i}}{\partial x} \right)}^{2}}+{{\left( \frac{\partial {N^{enr}_i}}{\partial y} \right)}^{2}} \right]}\mathrm{d}\Omega ,
\end{equation}
where $\Omega^e$ denotes the union of the elements having point $i$ as one of their nodes. Apparently, if the interior angle $\gamma < \pi$, the $x$- and $y$-derivatives of $ r^{m_1}\cos (m_1\theta )$ are singular, with a singularity of $r^{m_1-1}$ as $r\rightarrow 0$. Thus the squared terms in Eq.~\eqref{Eq.63} have an $r^{2m_1-2}$ singularity. It is challenging but important to calculate this singular integral accurately. In this paper, the so-called DECUHR adaptive quadrature algorithm \citep{1994DECUHR} is employed to overcome the difficulties in the numerical integration of Eq.~\eqref{Eq.63}. The DECUHR algorithm combines an adaptive subdivision strategy with an extrapolation of the error expansion, where a non-uniform subdivision of the element close to a \textit{singular point} is employed. More details of the DECUHR algorithm can be found in \cite{1994DECUHR}, and the application of this algorithm in the GFEM to deal with singular integrals can be found in \citep{2000Thedesign}. An open-source FORTRAN code of the DECUHR algorithm from the Alan Genz Software website of Washington State University, which can handle problems of dimension 2 to 15, has been applied in this study. The code does not provide an option for 1D singular integrals.
\par In the present work, an adaptive Gaussian quadrature algorithm is therefore applied to accurately calculate the 1D singular integrals. It consists of the following steps:\\
Step 1. Set a fixed \textit{tolerance}. Let $T$ denote the result and $T_3$ a temporary accumulator, and initialize $T=0$ and $T_3=0$. \\
Step 2. Apply Gaussian quadrature over the whole (sub-)element to obtain the result $T_1$, and let $T=T_1$.\\
Step 3. Divide the element into two equal sub-elements and integrate over each of them, obtaining $T_{21}$ and $T_{22}$, where $T_{21}$ denotes the result over the sub-element containing the \textit{singular point}. Let $T_{2} =T_{21}+T_{22}$ and $T_3 =T_3+T_{22}$. \\
Step 4. Calculate $error = |T_2 -T_1|$. If $error > tolerance$, let $T =T_3+T_{21}$, divide the sub-element which contains the \textit{singular point} into two, and go to Step 2. Otherwise, if $error \le tolerance$, output $T$ as the final result.
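The steps above can be sketched as follows. The Python snippet is a schematic implementation only (the Gauss order, tolerance, and iteration cap are illustrative choices, not the values used in the present solver); it assumes the singularity sits at the left endpoint of the interval.

```python
import numpy as np

def adaptive_gauss_1d(f, a, b, tol=1e-8, n_gauss=8, max_levels=60):
    """Adaptive Gaussian quadrature of f on [a, b] for an integrand that is
    singular (but integrable) at x = a. The half containing the singular
    point is bisected repeatedly; converged contributions from the regular
    halves are accumulated in `total` (T_3 in the text)."""
    xg, wg = np.polynomial.legendre.leggauss(n_gauss)

    def gauss(lo, hi):  # plain Gauss-Legendre rule on [lo, hi]
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        return half * np.sum(wg * f(mid + half * xg))

    total = 0.0                      # T_3: accepted regular contributions
    lo, hi = a, b
    T1 = gauss(lo, hi)               # Step 2: estimate on current element
    for _ in range(max_levels):
        mid = 0.5 * (lo + hi)
        T21 = gauss(lo, mid)         # Step 3: half containing the singularity
        T22 = gauss(mid, hi)         #         regular half
        if abs((T21 + T22) - T1) <= tol:   # Step 4: converged
            return total + T21 + T22
        total += T22                 # keep regular half, refine singular one
        lo, hi, T1 = lo, mid, T21
    return total + T1                # iteration cap reached
```

Applied, for example, to $\int_0^1 s^{-1/2}\,\mathrm{d}s$, the scheme recovers the exact value 2 to roughly the requested tolerance, whereas a fixed Gauss rule on the whole interval is far less accurate.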
\begin{figure}[t]
\centering
\includegraphics[scale=.413]{Figures/DoubleNode.eps}
\caption{An illustration of the double nodes on a flat plate. The blue line represents the flat plate without vanishing thickness.}
\label{Fig.5}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[Velocity potential in the fluid] {\includegraphics[scale=.30]{Figures/PointEnrichPhi.eps}}
\subfigure[Added mass] {\includegraphics[scale=.30]{Figures/PointEnrichmentAddedmass.eps}}
\caption{Results of mesh-convergence study for four FEMs using the point enrichment approach. $\Delta h$ = mesh size, $k$=slope.}
\label{Fig.6}
\end{figure*}
\section{Numerical studies}\label{sect:Numerical case}
\par For verification purposes, a uniform flow around a 2D flat plate of vanishing thickness is considered first. Then, a heaving rectangle on the free surface is studied via linear FEM, linear XFEM, quadratic FEM and quadratic XFEM.
\subsection{Uniform flow around a flat plate}\label{sect:flat-plate}
\begin{figure*}[t]
\centering
\subfigure[Velocity potential in the fluid] {\includegraphics[scale=.30]{Figures/PatchEnrichPhi.eps}}
\subfigure[Added mass] {\includegraphics[scale=.30]{Figures/PatchEnrichmentAddedmass.eps}}
\caption{Results of mesh-convergence study for four FEMs using the patch enrichment approach. $\Delta h$ = mesh size, $k$=slope.}
\label{Fig.8}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure[Velocity potential in the fluid] {\includegraphics[scale=.30]{Figures/RadiusEnrichmentPhi.eps}}
\subfigure[Added mass] {\includegraphics[scale=.30]{Figures/RadiusEnrichmentAddedmass.eps}}
\caption{Results of mesh-convergence study for four FEMs using the radius enrichment approach. $\Delta h$ = mesh size, $k$=slope.}
\label{Fig.9}
\end{figure*}
\begin{figure}[t]
\centering \includegraphics[scale=.30]{Figures/ConvergenceofRadiusPlate.eps}
\caption{The error of velocity potential versus the non-dimensional enrichment radius $R_{enri}/a$ for linear XFEM. $R_{enri}$ represents the enrichment radius, $a$ is half breadth of the plate. $64\times 64$ uniform meshes have been used.}
\label{Fig.10}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[Linear methods] {\includegraphics[scale=.30]{Figures/EfficiencyForLinear.eps}}
\subfigure[Quadratic methods] {\includegraphics[scale=.30]{Figures/EfficiencyForQuadratic.eps}}
\caption{The $L_2$ errors as function of number of unknowns for the conventional FEMs and their corresponding XFEMs using different enrichment strategies.}
\label{Fig.11}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[scale=.30]{Figures/VelocityAlongPlate.eps}
\caption{Horizontal velocity distribution along the length of plate. Results are presented for conventional linear FEM and linear XFEM, together with the analytical solutions. $a$ = half breadth of the flat plate, $v_0$ = free-stream inflow velocity.}
\label{Fig.7}
\end{figure}
\begin{table*}[htb]
\centering
\caption{The number of unknowns of linear FEM and linear XFEM with three different enrichment strategies. Four different mesh densities are considered.}
\begin{tabular}{ccccc}
\toprule
\multirow{2}*{Mesh size $(\Delta h/a)$} &\multirow{2}*{Linear FEM}&Linear XFEM&Linear XFEM& Linear XFEM \\
& & (point enrichment) & (patch enrichment) & (radius enrichment) \\
\midrule
0.5 & 84 & 86 & 104 & 86 \\
0.25 & 296 & 298 & 316 & 298 \\
0.125 & 1104 & 1106 & 1124 & 1124 \\
0.0625 & 4256 & 4258 & 4276 & 4336 \\
\bottomrule
\end{tabular}
\label{tab1}
\end{table*}
\begin{table*}[htb]
\centering
\caption{The number of unknowns of quadratic FEM and quadratic XFEM with three different enrichment strategies. Four different mesh densities are considered.}
\begin{tabular}{ccccc}
\toprule
\multirow{2}*{Mesh size $(\Delta h/a)$} & \multirow{2}*{Quad. FEM} & Quad. XFEM & Quad. XFEM & Quad. XFEM\\
& & (point enrichment) & (patch enrichment) & (radius enrichment) \\
\midrule
0.5 & 232 & 234 & 278 & 234 \\
0.25 & 848 & 850 & 894 & 860 \\
0.125 & 3232 & 3234 & 3278 & 3288 \\
0.0625 & 12608 & 12610 & 12654 & 12814 \\
\bottomrule
\end{tabular}
\label{tab2}
\end{table*}
\par The analytical solution of the complex potential for uniform flow around a 2D thin flat plate in an infinite domain can be found in the textbook of \cite{Newman2017Marine} and the Appendix, where we also show that a modification of the sign in the original formula is needed for the flow variable on the right-half plane.
\par To model the flat plate in an infinite fluid, we have to use a truncated fluid domain in our numerical method. A sketch of the truncated domain is given in Fig.~\ref{Fig.5}. Based on the analytical solution, Dirichlet boundary conditions are specified at the truncated boundaries surrounding the fluid domain, and a Neumann boundary condition on the upper and lower surfaces of the thin plate. The mathematical formulation of the mixed BVP has been described in Sect.~\ref{sect:mathematical-formu}, and the conventional FEM and XFEM are explained in Sect.~\ref{sect:Numerical Method}. Even though the flat plate has zero thickness, the velocity potentials are different on the two sides of the plate. Thus a double-node technique is used on the plate except at its two endpoints, where the velocity potential must be continuous. The double-node technique allows for two velocity-potential values at the same location. See the illustration in Fig.~\ref{Fig.5}, where the open circles and crosses represent the two different sets of nodes.
\par To solve the mixed Dirichlet-Neumann BVP numerically, we have implemented four different FEM solvers, including linear FEM, linear XFEM, quadratic FEM and quadratic XFEM. In the linear and quadratic XFEMs, we have used the analytical solution as the enrichment function at the enrichment nodes close to the singular points, in this case the two ends of the plate. Figs.~\ref{Fig.2}, \ref{Fig.3} and \ref{Fig.4} illustrate the point, patch and radius enrichment strategies, respectively. In order to compare the accuracy and efficiency of different methods, the $L_2$ errors of the velocity potential on all of the grid points will be presented as a function of mesh size $\Delta h = \Delta x = \Delta y$. The $L_2$ error is defined as
\begin{equation}
\label{Eq.34}
e_{L_2}=\sqrt{\left.\sum_{i=1}^{N}\left(\phi_{i}^{\rm num}-\phi_{i}^{\rm ana}\right)^{2} \middle/ \sum_{i=1}^{N}\left(\phi_{i}^{\rm ana}\right)^{2}\right.},
\end{equation}
where $\phi_{i}^{\rm num}$ denotes the numerical solution at the $i$th node, $\phi_{i}^{\rm ana}$ represents the corresponding analytical solution, and $N$ denotes the total number of nodes.
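In code, Eq.~\eqref{Eq.34} reduces to a few lines; the following Python sketch is for illustration only.

```python
import numpy as np

def l2_error(phi_num, phi_ana):
    """Relative L2 error of Eq. (34) over all nodal values."""
    phi_num = np.asarray(phi_num, float)
    phi_ana = np.asarray(phi_ana, float)
    return np.sqrt(np.sum((phi_num - phi_ana) ** 2) / np.sum(phi_ana ** 2))
```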
\par In Figs.~\ref{Fig.6}(a) and \ref{Fig.6}(b), the $L_2$ errors of the velocity potential and the relative errors of the added mass are presented, respectively. The presented XFEM results are based on the point enrichment strategy described in Sect.~\ref{sect:point-enri}. A general conclusion is that the XFEMs with point enrichment are superior to the conventional FEMs if the same meshes are used. It is known that the flow velocities, or in other words the gradients of the velocity potential, are singular at the two end points of the plate. Thus, the velocity potential changes dramatically within the \textit{singular elements} and the elements very close to the \textit{singular points}. Unless very fine meshes are used, neither the conventional linear FEM nor the conventional quadratic FEM is able to accurately capture such strong variations of the local flow, because they only use regular smooth functions in the local approximations. When the point enrichment is applied at the two end points of the plate, the errors are greatly reduced, indicating that the local enrichment in the XFEMs is very effective in reducing both the local and global errors, as expected.
\par However, it is surprising that all FEMs, including the quadratic FEMs, show convergence rates close to $1.0$ for the velocity potential at the fluid points. The $k$ values in the figures are the convergence rates fitted from five different mesh densities. Similarly, as seen in Fig.~\ref{Fig.6}(b), lower-than-expected mesh-convergence rates are observed for the added mass of the flat plate. The reason is that there are too few enrichment points (only one in the case of point enrichment) at either end of the plate. The influence area of the enrichment functions is smaller for a locally finer mesh close to the \textit{singular point}, due to the fact that the interpolations by the finite-element shape functions in Eqs.~\eqref{Eq.21} and \eqref{Eq.22} are piecewise. The shape function $N_j$ of point $j$ is always zero over the elements which do not have point $j$ as one of their nodes. At the non-enriched points which are sufficiently close to the \textit{singular point} but belong to none of the \textit{singular elements}, the velocity potential also changes dramatically, and the applied smooth shape functions thus have difficulties in accurately capturing the strong local singular solution. A possible strategy is to increase the number of enrichment points close to the \textit{singular point}. Therefore, we will also investigate the patch enrichment and radius enrichment methods described in Sects.~\ref{sect:patch-enri} and \ref{sect:radius-enri}, respectively.
\par The results of the convergence studies with patch enrichment are presented in Fig.~\ref{Fig.8}. For both the velocity potential at the grid nodes and the added mass, only marginal increases of the convergence rate of the linear XFEM are seen. However, the improvement in the results of the quadratic XFEM is notable for both the velocity potential and the added mass. Specifically, the convergence rate for the velocity potential becomes $k=1.73$, compared with $k=0.99$ for the point enrichment. Similarly, the convergence rate for the added mass increases to $k=1.5$ from $k=1.11$. Theoretically, we expect the convergence rate of a quadratic method to be equal to or greater than 2. Even though the overall accuracy of the linear and quadratic XFEMs has been greatly improved by adopting patch enrichment instead of point enrichment, it is still below our expectation, in particular for the quadratic XFEM. The reason is as follows: similar to the point enrichment strategy, the enrichment area of the patch enrichment strategy also shrinks as the mesh is refined, so the local enrichment based on either point enrichment or patch enrichment is mesh-dependent. If the local solution close to the \textit{singular points} converges at a much lower rate than the rest of the solution, the overall convergence rate will be lowered.
\par To eliminate the mesh-dependency of the local enrichment, the radius enrichment method, as illustrated in Fig.~\ref{Fig.4}, appears to be a good choice. In this method, the enrichment area is a predefined constant and independent of the mesh size. As demonstrated by the results of the mesh-convergence study in Fig.~\ref{Fig.9}, the superiority of the XFEMs, in particular the quadratic XFEM, is remarkable when the radius enrichment is applied. We have used a constant enrichment radius of $R_{enri} = 0.2$, as suggested by \cite{laborde2005high} for 2D problems, at both ends of the plate. Compared with the conventional linear FEM, the convergence rate of the linear XFEM increased notably from $k=0.89$ to $k=1.38$ for the velocity potential, and from $k=1.02$ to $k=1.43$ for the added mass. The convergence rate of the quadratic XFEM improved dramatically from $k=0.99$ to $k=3.44$ for the velocity potential, and from $k=1.11$ to $k=1.79$ for the added mass.
\par For the present case, since we are using the analytical solution as the enrichment function at the singular points, the accuracy of the XFEM solutions will further improve if a larger enrichment radius is applied. This is illustrated in Fig.~\ref{Fig.10}, where we present the $L_2$ errors of the velocity potential as a function of $R_{enri}/a$, the ratio between the enrichment radius and the half breadth of the plate.
\par For both the linear and quadratic XFEMs, it is apparent from the comparisons in Figs.~\ref{Fig.6}-\ref{Fig.9} that the radius enrichment strategy yields the most accurate results for a given mesh resolution, at the cost of introducing more extra DOFs (or unknowns in the final linear system) than the other two enrichment strategies. Since all points within a radius $R_{enri}$ of the singular points are enriched, too many extra DOFs may be introduced if an unnecessarily large $R_{enri}$ is chosen. For a given $R_{enri}$, the number of extra DOFs is also larger for a finer mesh. On the other hand, the point enrichment method introduces the fewest extra DOFs, but its accuracy is the lowest among the three enrichment methods. From a practical point of view, it is recommended to apply the radius enrichment method with a small enrichment area.
It is of more interest to compare the computational efforts needed to achieve a similar accuracy. In this regard, we have also plotted in Fig.~\ref{Fig.11} the $L_2$ errors of the velocity potential as a function of the total number of unknowns, which is an indicator of the CPU time. The numbers of unknowns for the different enrichment strategies and different mesh sizes are listed in Table~\ref{tab1} for the linear FEMs and in Table~\ref{tab2} for the quadratic FEMs. It is apparent that the local enrichment increases the total number of unknowns only marginally, while reducing the global errors significantly.
\par To illustrate how the XFEMs have improved the accuracy of the local flow, the horizontal velocity along the flat plate is shown in Fig.~\ref{Fig.7}. Here the conventional linear FEM and the linear XFEM are compared. The solid line represents the result of the linear XFEM, while the result of the conventional linear FEM is represented by the dashed line. The corresponding analytical solution is denoted by open circles. Thanks to the local enrichment, the linear XFEM shows very encouraging results, especially close to the singular point. On the contrary, the conventional linear FEM fails to capture the strong variation of the flow at the two ends of the plate. Since the applied FEM is only $C^0$ continuous, the velocity presented in the figure at each point is the average of the velocities calculated in the elements sharing that point. For the \textit{singular elements}, the velocity was obtained by differentiating the shape functions in Eq.~\eqref{Eq.21}, and we have added more points within the element to better illustrate the variation of the velocity therein.
\begin{figure*}[t]
\centering
\includegraphics[scale=.55]{Figures/HeavingBox.eps}
\caption{Sketch of the half rectangle heaving on the free surface.}
\label{Fig.12}
\end{figure*}
\subsection{Heaving rectangular cylinder on free surface}\label{sect:heave-rectangle}
\begin{figure}[t]
\centering
\includegraphics[scale=.30]{Figures/ConvergenceLength.eps}
\caption{Convergence performance of the horizontal length from the rectangle to the matching boundary when the square of forcing frequency is $\omega^2B/(2g)=0.1$. $A_{33}$=heave added mass, $B_{33}$=heave radiation damping, $S$=submerged cross-sectional area, $\rho$=mass density of water, $\omega$=circular frequency, $L_x$ = horizontal length from the rectangle to the matching boundary, $\lambda$ = wavelength of radiated waves. }
\label{Fig.13}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=.4]{Figures/MeshSetting.eps}
\caption{Schematic of the mesh of linear elements for the half rectangle heaving on the free surface.}
\label{Fig.14}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[Added mass]{\includegraphics[scale=.30]{Figures/BoxAddedMass.eps}}
\subfigure[Radiation damping]{\includegraphics[scale=.30]{Figures/BoxAddedDamping.eps}}
\caption{The added mass and radiation damping as functions of the oscillation frequency of a floating rectangle with beam-to-draft ratio $(B/D)$ equal to 2.0. $B$=beam, $D$=draft, $A_{33}$=heave added mass, $B_{33}$= heave radiation damping, $S$=submerged cross-sectional area, $\rho$=mass density of water, $\omega$=circular frequency.}
\label{Fig.16}
\end{figure*}
\par In this part, under the framework of linear potential-flow theory in the frequency domain, a heaving rectangular cylinder floating on the free surface is considered. $B$ and $D$ are used to represent the beam and draft of the rectangle, respectively. The considered water depth is $h = 40D$, and the beam-to-draft ratio $B/D$ is taken as 2.0. An illustration of half of the domain is presented in Fig.~\ref{Fig.12}. In theory, a radiation condition should be applied at $x \rightarrow \pm \infty$. In practice, it is impossible to model a fluid domain with infinite extension, and a truncation at a certain horizontal distance $L_x$ from the rectangle must be made. The radiation condition is then applied on a control surface $S_m$, which is chosen sufficiently far from the structure. In this study, we choose a horizontal truncation distance of twice the longest wavelength that will be studied, and use the same computational domain for all cases.
\par To reduce the computational costs, the symmetric property of the considered problem is utilized, and thus only half of the fluid domain is considered. At the symmetry plane, horizontal velocity is equal to zero, i.e. ${\partial \phi} / {\partial x} = 0$. An illustration of the computational domain is presented in Fig.~\ref{Fig.12}.
\subsubsection{Linear added mass and damping coefficients}
\par Fig.~\ref{Fig.13} displays the non-dimensional added mass and damping coefficients for different truncation distances $L_x$ from the rectangle. A non-dimensional wave number $\omega^2B/(2g)$ = 0.1 has been considered in the calculations, corresponding to the longest wave that will be considered in this section. If the selected $L_x$ has a negligible influence on the results for the longest wave, it is also considered sufficient for the shorter waves. It is apparent from the results in Fig.~\ref{Fig.13} that the hydrodynamic coefficients do not change as long as $L_x/\lambda \ge 1.0$, where $\lambda$ is the wavelength. $L_x=2\lambda$ will be applied in the later analyses in this section.
\par Matched multi-block meshes in the fluid domain are utilized as a starting point, with blocks I and II fitted to the body surface, block IV below the body surface, and blocks III and V away from the structure. See an example of the meshes in Fig.~\ref{Fig.14}, generated by the open-source mesh generator GMSH. The following parameters are defined to denote the number of elements along the sides of the blocks, which control the mesh densities in the different blocks: $N_{rx}$ is the number of elements on the bottom of the rectangle, $N_{ry}$ along the side wall of the rectangle, $N_{ix}$ along the free surface in the inner block, and $N_{iy}$ in the vertical direction of the inner block at the symmetry plane. Correspondingly, $N_{ox}$ represents the number of elements in the horizontal direction of the outer domain on the free surface, and $N_{oy}$ the number in the vertical direction of the outer domain at the symmetry plane. Here $N_{rx}$ must be equal to $N_{ix}$ so that blocks I and II match at their common boundary. For simplicity, we also take $N_{rx}=N_{ry}=N_{ix}=N_{iy}$. The meshes in blocks IV and V are stretched along the vertical direction toward the sea bottom using a stretching ratio of 1.1.
\begin{table}[htb]
\centering
\caption{The control parameters for the meshes used in the four different FEM methods that are implemented in this study to perform the hydrodynamic analyses.}
\begin{tabular}{ccccc}
\toprule
Method & $N_p$ & $N_{rx}$& $N_{ox}$ & $N_{oy}$\\
\midrule
Linear FEM & 78526& 105& 300& 60
\\
Linear XFEM & 81421& 105& 300& 60
\\
Quad. FEM & 15221 & 15& 120& 20
\\
Quad. XFEM & 15416& 15& 120& 20 \\
\bottomrule
\end{tabular}
\label{tab3}
\end{table}
\par The added mass and damping coefficients are calculated by the four different FEMs, and the results are compared with the experimental results reported in \cite{1968The} and the linear numerical potential-flow calculations by \cite{nestegard1984numerical} and \cite{liang2015application}. \cite{liang2015application} used the 2D HPC method in the frequency domain and took the local singularity into account by a domain decomposition strategy, where the local corner-flow solutions were matched with the outer domain represented by harmonic-polynomial cells. The mesh parameters used in our FEMs are listed in Table~\ref{tab3}, in which $N_p$ denotes the number of DOFs (including the additional DOFs for XFEM) in the computational domain. The present numerical results agree excellently with those by \cite{liang2015application}, and fairly well with those of \cite{nestegard1984numerical}. All numerical results seem to deviate from the experimental results at low frequencies. As commented in \cite{1968The}, the uncertainties in the experimental results for $\omega^2B/(2g)<0.25$ may have been high. For $\omega^2B/(2g)\ge 0.25$, the numerical results agree better with the experiments. The small differences may be attributed to viscous flow separation at the sharp corners and other nonlinearities which occur in reality.
From the results in Fig.~\ref{Fig.16}, we may conclude that all the numerical methods in the comparison are able to accurately predict the linear hydrodynamic coefficients with an affordable effort. It is also observed that the XFEMs do not show clear advantages in the linear hydrodynamic analysis, which is expected, as only integrals of the velocity potential (multiplied by the normal vector) over the mean wetted body surface are involved in the pressure integration. As seen in the corner-flow solution in Sect.~\ref{sect:corner-flow}, the velocity potential is not singular at the corner. However, the fluid velocity close to the sharp corners is singular, which poses great challenges in nonlinear wave-load analysis, as will be explained further.
\begin{figure*}[t]
\centering
\subfigure[Linear XFEM]
{\includegraphics[scale=.30]{Figures/ConvergenceExpensionlinear.eps}}
\subfigure[Quadratic XFEM]
{\includegraphics[scale=.30]{Figures/ConvergenceExpensionQuad.eps}}
\caption{Non-dimensional 2nd order mean vertical force versus the number of enrichment functions for non-dimensional oscillatory frequency of $\omega^2B/(2g)=1.0$. The non-dimensional 2nd order mean vertical force $\bar F_{y}^{(2)}=F_{y}^{(2)}/(\rho\omega^2\eta_{3a}^{2}B)$, $F_{y}^{(2)}$= 2nd order mean vertical force, $\rho$= mass density of water, $\eta_{3a}$= heave amplitude, $B$= beam, $n$= enrichment function number. Linear XFEM employs mesh 2 in Table~\ref{tab4}, quadratic XFEM employs mesh 2 in Table~\ref{tab5}.}
\label{Fig.30}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure[Linear XFEM]
{\includegraphics[scale=.30]{Figures/ConvergenceRadiusLinear.eps}}
\subfigure[Quadratic XFEM]
{\includegraphics[scale=.30]{Figures/ConvergenceRadiusQuad.eps}}
\caption{Non-dimensional 2nd order mean vertical force versus enrichment radius. The considered non-dimensional oscillatory frequency is $\omega^2B/(2g)=1.0$. The non-dimensional 2nd order mean vertical force $\bar F_{y}^{(2)}=F_{y}^{(2)}/(\rho\omega^2\eta_{3a}^{2}B)$, $F_{y}^{(2)}$= 2nd order mean vertical force, $\rho$= mass density of water, $\eta_{3a}$= heave amplitude, $B$= beam, $r$= enrichment radius. Linear XFEM employs mesh 2 in Table~\ref{tab4}, quadratic XFEM employs mesh 2 in Table~\ref{tab5}.}
\label{Fig.15}
\end{figure*}
\begin{figure*}[ht]
\centering
\subfigure[Linear method]
{\includegraphics[scale=.30]{Figures/MeanForceLinear.eps}}
\subfigure[Quadratic method]
{\includegraphics[scale=.30]{Figures/MeanForceQuad.eps}}
\caption{The non-dimensional 2nd order mean vertical force of a heaving floating rectangle. The non-dimensional 2nd order mean vertical force $\bar F_{y}^{(2)}=F_{y}^{(2)}/(\rho\omega^2\eta_{3a}^{2}B)$, $F_{y}^{(2)}$= 2nd order mean vertical force, $\rho$= mass density of water, $\eta_{3a}$= heave amplitude, $B$= beam, $\omega$= circular frequency. The CFM= conservation of fluid momentum. }
\label{Fig.17}
\end{figure*}
\subsubsection{The 2nd order mean vertical force}
\par The calculation of 2nd order wave loads based on pressure integration involves the integration of the quadratic terms of the fluid velocities on the body surface, which are singular but integrable near the sharp corners. \cite{zhao1989interaction} showed that it is very difficult to achieve convergent results with the near-field approach based on direct pressure integration without special consideration of the singularity, and that approaches based on momentum and energy relationships or similar are much more efficient and robust. The latter approach often involves integration on a control surface and on a free surface bounded by the control surface and the structure surface. Similar conclusions have since been obtained by many others \citep[e.g., see][]{chen2007middle, 2018Numerical, 2020A}.
\par The time averaged 2nd order vertical hydrodynamic force acting on the heaving rectangle by direct pressure integration over the mean wet body surface can be expressed as:
\begin{equation}
\label{Eq.42}
\begin{aligned}
\bar F_{y}^{(2)} =& \frac{1}{T}\int_{0}^{T}\Bigg\{-\rho \int_{S_{B0}}\Bigg[
\frac{\partial \phi^{(1)}}{\partial t}
+ \eta_{3}\frac{\partial^{2}\phi^{(1)}}{\partial y\,\partial t} +\\
& \frac{\partial \phi^{(2)}}{\partial t}
+ \frac{1}{2}\left( \frac{\partial \phi^{(1)}}{\partial x} \right)^{2}
+ \frac{1}{2}\left( \frac{\partial \phi^{(1)}}{\partial y} \right)^{2} \Bigg] n_{3}\,\text{d}S \Bigg\}\text{d}t,
\end{aligned}
\end{equation}
where $S_{B0}$ denotes the mean wetted body surface. $\eta_{3}$ is the heave motion defined as $\eta_{3} =\mathrm{Re}[\eta_{3a}\mathrm{e}^{\mathrm{i}\omega t}]$, with $\eta_{3a}$ the heave amplitude. $n_3$ is the vertical component of the normal vector. $T$ is the oscillatory period, $T = 2\pi/\omega$. $\phi^{(1)}$ and $\phi^{(2)}$ represent the first- and second-order velocity potentials, respectively. A waterline integral due to the fluctuation of the waves near the mean water level is neglected, as it does not contribute to the vertical loads in this particular case. The time derivatives of the first- and second-order velocity potentials vanish after time averaging over one period, and thus Eq.~(\ref{Eq.42}) can be simplified as:
\begin{equation}
\label{43}
\bar F_{y}^{(2)} = -\rho\overline{\int_{{{S}_{B0}}} \left[ {{\eta}_{3}}\frac{{{\partial }^{2}}{{\phi }^{( 1)}}}{\partial y\partial t} +
\frac{1}{2}\nabla\phi^{(1)}\cdot\nabla\phi^{(1)} \right] {{n}_{3}}\text{d}S}.
\end{equation}
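When the time-averaged expression above is evaluated in the frequency domain, each first-order quantity has the form $\mathrm{Re}[A\mathrm{e}^{\mathrm{i}\omega t}]$, and the period average of a product of two such signals reduces to $\frac{1}{2}\mathrm{Re}[A\bar B]$. A small numerical check of this identity (illustrative only, not the paper's implementation):

```python
import numpy as np

def period_average_product(A, B, omega=1.0, n=4096):
    # Sample one full period T = 2*pi/omega and average the product of
    # the real harmonic signals Re[A e^{i w t}] and Re[B e^{i w t}].
    t = np.linspace(0.0, 2.0 * np.pi / omega, n, endpoint=False)
    fa = np.real(A * np.exp(1j * omega * t))
    fb = np.real(B * np.exp(1j * omega * t))
    return float(np.mean(fa * fb))

A, B = 1.0 + 2.0j, -0.5 + 1.0j
numeric = period_average_product(A, B)
analytic = 0.5 * np.real(A * np.conj(B))  # frequency-domain identity
```

The two values agree to machine precision, which is why the quadratic velocity terms can be time-averaged analytically before the surface integration.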
\par From a theoretical perspective, the near-field approach, the far-field approach and the approaches based on control surfaces should be mathematically equivalent. However, since it is very difficult for conventional numerical methods, e.g. FEM, FDM and BEM, to accurately describe the exact fluid velocities close to sharp corners, slow grid convergence is expected for the near-field approach when it is applied to calculate the 2nd order wave loads. Despite the difficulty, the authors of this paper believe that the strong variation of the local velocities can be captured accurately if an appropriate numerical method is adopted, so that the near-field approach can still be a good option for 2nd order wave-load analysis. A good example of such a numerical method is the domain decomposition strategy developed by \cite{liang2015application}, where the solutions in the local domain surrounding the sharp corners are represented by the analytical corner-flow solutions. The strategy leads to very accurate and efficient near-field results, but it is not easy to implement for general purposes. The XFEM is a more powerful and general-purpose framework, which allows us to easily and explicitly include, for instance, the singular corner-flow solutions as enrichment functions in the local finite-element approximations. It also inherits other good features of the conventional FEMs, e.g. unstructured meshes.
\par For the considered rectangle, the interior angle at each corner is $\gamma = 90^{\circ}$, where $\gamma$ is the interior angle as illustrated in Fig.~\ref{Fig.1}. Eq.~\eqref{Eq.9} presents all possible fundamental solutions to the corner flows, among which we choose only the first few as our enrichment functions. The first term with $j=1$ is $\varphi = A_1 r^{\frac{2}{3}} \cos (\frac{2}{3}\theta)$, and the resulting radial velocity $\frac{\partial \varphi}{\partial r}$ and circumferential velocity $\frac{1}{r} \frac{\partial \varphi}{\partial \theta}$ exhibit an $r^{-\frac{1}{3}}$ singularity as $r\rightarrow 0$, which is difficult to approximate well with regular shape functions.
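The first corner-flow term and its singular radial velocity can be evaluated directly. A sketch with $A_1 = 1$ for illustration (the exponent $2j/3$ follows the text for the $90^{\circ}$ corner; the function names are ours):

```python
import math

def corner_term(r, theta, j=1):
    # j-th corner-flow term r^p * cos(p*theta); for the 90-degree interior
    # angle the exponent is p = 2*j/3 (amplitude A_j = 1 for illustration).
    p = 2.0 * j / 3.0
    return r**p * math.cos(p * theta)

def corner_radial_velocity(r, theta, j=1):
    # d(phi_j)/dr = p * r^(p-1) * cos(p*theta); for j = 1 this grows like
    # r^(-1/3) as r -> 0, which regular shape functions cannot capture.
    p = 2.0 * j / 3.0
    return p * r**(p - 1.0) * math.cos(p * theta)
```

Reducing $r$ by a factor of 1000 increases the $j=1$ radial velocity tenfold, the hallmark of the $r^{-1/3}$ singularity.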
In Fig.~\ref{Fig.30}, we compare the non-dimensional $\bar F_{y}^{(2)}$ when different numbers of terms from Eq.~\eqref{Eq.9} are included as enrichment functions.
As shown in Fig.~\ref{Fig.30}(a), for the linear XFEM, convergent results are achieved for enrichment function numbers $n\ge 3$. For the quadratic XFEM, convergence is achieved with $n\ge 1$, as demonstrated in Fig.~\ref{Fig.30}(b). The reason that a linear XFEM needs more enrichment functions than a quadratic XFEM is as follows: the fundamental solution of a corner flow contains a singular term with $j=1$ in Eq.~\eqref{Eq.9} and higher-order non-singular terms with $j\ge 2$. Those higher-order terms are more accurately captured by the regular quadratic shape functions, and thus it seems to be sufficient for a quadratic XFEM to use only the singular enrichment function from Eq.~\eqref{Eq.9}. Based on the discussion above and the numerical observations, only three enrichment functions will be considered in the later analyses. Adding more higher-order terms with $j>3$ does not pose extra difficulties in numerical integration; on the other hand, the extra DOFs due to enrichment increase rapidly with the number of enrichment functions at each enrichment point.
\par Fig.~\ref{Fig.15} displays the non-dimensional 2nd order mean vertical force $\bar F_{y}^{(2)}$ for $\omega^2B/(2g)=1.0$ as a function of $2R_{enri}/B$. The numerical results indicate that, for both the linear and quadratic XFEMs, convergence is achieved when $2R_{enri}/B \geq 0.2$. The results also suggest that it is unnecessary to use a very large enrichment radius, because the results do not improve further once $2R_{enri}/B$ exceeds the threshold value of approximately $0.2$. On the other hand, a larger $R_{enri}$ also means more extra DOFs and unknowns.
\par In Fig.~\ref{Fig.17}(a), the numerical results of $\bar F_{y}^{(2)}$ by the linear FEM and the linear XFEM are compared with a reference solution in \cite{liang2015application} based on conservation of fluid momentum (CFM). Direct pressure integration has been applied in the present FEM analyses. Mesh 1, mesh 2 and mesh 3 in the parentheses indicate coarse, medium and fine mesh settings, respectively. Details of the mesh parameters are shown in Table~\ref{tab4}. Apparently, the linear XFEM is more accurate than the linear FEM, as seen from their comparisons with the CFM results \citep{liang2015application}. With the linear XFEM, convergent results are reached rapidly after refining mesh 1 to mesh 2. The number of unknowns (or total DOFs) of mesh 1 and mesh 2 is 28866 and 81421, respectively, when the linear XFEM is applied. In contrast, the linear FEM has not reached convergence even with the finest mesh, i.e. mesh 3 with total DOFs of $N_p=556146$ in Table~\ref{tab4}. Note that the total number of unknowns, or total DOFs, differs between a FEM and an XFEM even when the same mesh is used, due to the extra DOFs introduced in the XFEM as a result of local enrichment.
\begin{table}[htb]
\centering
\caption{The three different meshes and DOF parameters for the two linear methods (FEM and XFEM), which are used in the calculation of the 2nd order mean vertical force.}
\begin{tabular}{ccccc}
\toprule
Method & $N_p$ & $N_{rx}$& $N_{ox}$ & $N_{oy}$\\
\midrule
Linear FEM (mesh 1)& 28275& 25& 300& 60
\\
Linear FEM (mesh 2)& 78526& 105& 300& 60
\\
Linear FEM (mesh 3)& 556146& 405& 400& 80
\\
Linear XFEM (mesh 1)& 28866& 25& 300& 60
\\
Linear XFEM (mesh 2)& 81421& 105& 300& 60 \\
\bottomrule
\end{tabular}
\label{tab4}
\end{table}
\par For the quadratic methods, we also consider three different meshes, i.e. coarse, medium and fine meshes, represented by mesh 1, 2 and 3 in Table~\ref{tab5}, respectively. As illustrated by the comparisons in Fig.~\ref{Fig.17}(b), the conventional quadratic FEM is not able to reach convergence even with the finest mesh (mesh 3), with total DOFs of $N_p=406631$. In contrast, the quadratic XFEM results converge with the medium mesh (mesh 2, $N_p=15416$). In fact, the results based on the coarse mesh (mesh 1, $N_p=9293$) are already very close to the reference results. At this coarse mesh resolution, only 4 elements are distributed on half of the rectangle bottom.
\begin{table}[htb]
\centering
\caption{Mesh parameters for the two quadratic (FEM and XFEM) methods, which are applied in the calculation of the 2nd order mean vertical force.}
\begin{tabular}{ccccc}
\toprule
Method & $N_p$ & $N_{rx}$& $N_{ox}$ & $N_{oy}$\\
\midrule
Quad. FEM (mesh 1) & 9281& 4& 120& 20
\\
Quad. FEM (mesh 2) & 15221& 15& 120& 20
\\
Quad. FEM (mesh 3)& 406631& 215& 120& 50
\\
Quad. XFEM (mesh 1) & 9293& 4& 120& 20
\\
Quad. XFEM (mesh 2)& 15416& 15& 120& 20 \\
\bottomrule
\end{tabular}
\label{tab5}
\end{table}
\par Comparing the two XFEMs, the quadratic XFEM shows a much faster mesh-convergence rate than the linear XFEM. More specifically, convergent results can be reached by the quadratic XFEM with $N_p=15416$ DOFs, while it takes $N_p=81421$ for the linear XFEM. Therefore, the quadratic XFEM is considered more competitive. From the standpoint of solution enrichment, the quadratic XFEM can be seen as a combination of global and local enrichment, with the global enrichment achieved via higher-order Lagrange polynomials in the regular shape functions, and the local enrichment realized by adding prior knowledge to the local approximation space. The linear XFEM, however, only enriches the solution locally. Therefore, it is generally expected that the quadratic XFEM outperforms the linear XFEM.
\begin{figure}[t]
\centering
\includegraphics[scale=.3]{Figures/UnstructuredMesh.eps}
\caption{An example of the unstructured mesh of linear elements for the half rectangle heaving on the free surface. $N$ with subscripts represents the number of elements along the boundaries of the fluid domain.}
\label{Fig.31}
\end{figure}
\subsubsection{Application of unstructured meshes}
\begin{figure}[ht]
\centering
\subfigure[Linear method]
{\includegraphics[scale=.30]{Figures/MeanForceLinearUnstru.eps}}
\subfigure[Quadratic method]
{\includegraphics[scale=.30]{Figures/MeanForceQuadUnstru.eps}}
\caption{The non-dimensional 2nd order mean vertical force of a heaving floating rectangle with unstructured mesh. The non-dimensional 2nd order mean vertical force $\bar F_{y}^{(2)}=F_{y}^{(2)}/(\rho\omega^2\eta_{3a}^{2}B)$, $F_{y}^{(2)}$= 2nd order mean vertical force, $\rho$= mass density of water, $\eta_{3a}$= heave amplitude, $B$= beam, $\omega$= circular frequency. The CFM= conservation of fluid momentum. }
\label{Fig.32}
\end{figure}
\begin{table*}[htb]
\centering
\caption{Mesh parameters for the conventional linear and quadratic FEMs and their corresponding XFEMs, which are applied to obtain the 2nd order mean vertical force.}
\begin{tabular}{cccccccc}
\toprule
Method & $N_p$ & $N_{rx}$& $N_{ry}$& $N_{fx}$ & $N_{bx}$ & $N_{sy}$ & $N_{my}$\\
\midrule
Linear FEM (mesh 1) & 35565& 75& 75& 1199& 69& 59& 59
\\
Linear XFEM (mesh 1) & 40854& 75& 75& 1199& 69& 59& 59
\\
Quad. FEM (mesh 2) & 5756& 10& 10& 199 & 29& 19& 19
\\
Quad. XFEM (mesh 2) & 5870& 10& 10& 199 & 29& 19& 19\\
\bottomrule
\end{tabular}
\label{tab6}
\end{table*}
\par In the previous subsections, a multi-block structured mesh was adopted for demonstration purposes, and the numerical results based on the XFEMs were very encouraging. However, it is well known that one of the most powerful features of the FEM is that it allows for the use of unstructured meshes without having to modify the numerical code. Unstructured meshes make it much easier to deal with problems involving complex boundaries. In this subsection, unstructured meshes are adopted to investigate the same problem that was studied in the previous subsection.
\par An example of the unstructured mesh close to the 2D rectangle, generated by the open-source mesh generator GMSH, is shown in Fig.~\ref{Fig.31}. The following parameters are defined to control the number of elements on the fluid boundaries: $N_{rx}$ is the number of elements on the bottom of the rectangle, $N_{ry}$ along the side wall of the rectangle, $N_{fx}$ along the free surface, $N_{sy}$ along the symmetry face, $N_{bx}$ along the bottom of the computational domain and $N_{my}$ along the matching boundary. Furthermore, for both the linear and quadratic meshes, the mesh is stretched by a fixed stretching ratio of 1.1 along the body boundary, so that the elements are finer close to the corners. The meshes are also stretched vertically toward the bottom of the fluid domain and horizontally toward the matching boundary, using stretching factors of 1.08 and 1.05, respectively. The meshes are thus adapted such that the mesh density is higher close to the body and the free surface.
\par The 2nd order mean vertical force on the heaving rectangle at free surface is studied again in the frequency domain by using the unstructured mesh and the four FEMs, and the corresponding results for linear FEMs and quadratic FEMs are shown in Fig.~\ref{Fig.32}(a) and Fig.~\ref{Fig.32}(b) respectively. The main parameters of the applied unstructured meshes are summarized in Table~\ref{tab6}.
Due to the use of unstructured meshes and stretched grids on the fluid boundaries, the required total number of unknowns is expected to be much smaller than that of the multi-block structured meshes. This has also been confirmed by our numerical results in Fig.~\ref{Fig.32}(a) and Fig.~\ref{Fig.32}(b). As seen in the figures, to achieve convergent results for $\bar F_{y}^{(2)}$, it is sufficient to use mesh 1 (total DOFs $N_p=40854$) and mesh 2 ($N_p=5870$) in Table~\ref{tab6} for the linear XFEM and quadratic XFEM, respectively. On the other hand, as expected, the results of the conventional FEMs are not convergent when the same meshes as in the corresponding XFEMs are used. One point must be clarified for the results of the conventional linear FEM and quadratic FEM: in Fig.~\ref{Fig.32}, the linear FEM results appear to be closer to the reference results than those of the quadratic FEM. This is due to the fact that mesh 1, as used by the linear FEM, is much finer than mesh 2, used by the quadratic FEM.
In \cite{liang2015application}, a modified HPC method based on a domain decomposition strategy was developed to solve the same hydrodynamic problem of the heaving rectangle on the free surface in the frequency domain. Corner-flow solutions were used in the inner domain surrounding the sharp corner, while the outer-domain solutions were represented by overlapping harmonic-polynomial cells. The inner- and outer-domain solutions were matched at their common boundaries. This method was shown to be capable of providing convergent 2nd order mean wave loads by using 80 elements along half of the bottom and in total approximately 352000 unknowns, while our linear and quadratic XFEM models need much fewer unknowns (only 5870 for the quadratic XFEM and 40854 for the linear XFEM) to achieve equally good results. In a nutshell, the superiority of the present approach over \cite{liang2015application} is twofold. Firstly, from an implementation point of view, the enrichment strategy based on the Partition of Unity in XFEMs, which includes singular functions near the corner, is easier and more flexible. Secondly, unstructured meshes are allowed in XFEMs, which enables XFEMs to deal with more complex structures, whereas this is expected to be more difficult for the HPC method.
\section{Conclusions}\label{sect:Conclusion and perspective}
\par The XFEM is applied as an accurate and efficient tool to solve 2D potential-flow hydrodynamic problems for structures with sharp edges. To demonstrate the advantages of the XFEM, four 2D FEM codes, including the conventional linear and quadratic FEMs and the two corresponding XFEMs, are implemented and compared. All of our results confirm that the XFEM is a promising framework for potential-flow hydrodynamic problems involving structures with sharp edges. Three different enrichment strategies, namely point enrichment, patch enrichment and radius enrichment, are also investigated in the study of the uniform flow around an infinitely thin flat plate. The first two enrichment methods are found to be mesh-dependent and are not able to achieve the expected spatial convergence rate.
The radius enrichment method, however, is mesh-independent and shows remarkably better accuracy and spatial convergence rate. Therefore, it is considered the best option among the three. By studying the horizontal fluid velocity along the flat plate, we also demonstrate that the XFEMs are capable of capturing the strong flow variation close to the endpoints, which cannot be represented by the conventional FEMs.
\par For a heaving rectangular cylinder on the free surface, both the conventional FEMs and the XFEMs can accurately predict the linear hydrodynamic coefficients with affordable computational efforts, indicating that the singularity at the sharp corners is inconsequential to the linear hydrodynamic loads. However, it has important effects on the 2nd order mean wave loads if direct pressure integration is employed, because the singular flow velocities are involved. Compared with the reference results based on conservation of fluid momentum, both the linear and quadratic XFEMs show encouraging results even with a relatively coarse mesh resolution, while the quadratic XFEM has an overall better performance than the linear XFEM. In contrast, it is difficult for the two conventional FEMs to achieve convergence even with an extremely fine mesh.
For the quadratic XFEM, it is also found sufficient to include only the first singular term from the corner-flow solutions in the local enrichment, while for the linear XFEM it is beneficial to include a few more terms, e.g. 3, in the local enrichment.
As a final demonstration, we show that the adoption of unstructured meshes and local refinement close to the sharp edges has great potential to further reduce the total number of unknowns needed to achieve a desired accuracy.
\begin{figure}[t]
\centering
\includegraphics[scale=.4]{Figures/PlateAndCurrent.eps}
\caption{The uniform flow around the flat plate.}
\label{Fig.18}
\end{figure}
\section*{Appendix. Analytical solution of flow over a flat plate}
\begin{figure*}[t]
\centering
\subfigure[Without modification] {\includegraphics[scale=.50]{Figures/Phi.eps}}
\subfigure[Modification]
{\includegraphics[scale=.50]{Figures/Phim.eps}}
\caption{The contour of velocity potential.}
\label{Fig.19}
\end{figure*}
\begin{figure*}[t]
\centering
\subfigure[Without modification] {\includegraphics[scale=.50]{Figures/Quiver.eps}}
\subfigure[Modification]
{\includegraphics[scale=.50]{Figures/Quiverm.eps}}
\caption{Velocity vector diagram.}
\label{Fig.20}
\end{figure*}
The complex potential of a uniform flow around a flat plate in a 2D infinite domain is \citep{Newman2017Marine}
\begin{equation}
\label{Eq.44}
W\left( z \right)=-z{{v}_{0}}\cos \alpha +\mathrm{i}{{v}_{0}}\sqrt{{{z}^{2}}-{{a}^{2}}}\sin \alpha,
\end{equation}
where $z=x+\mathrm{i} y$,
and the corresponding complex velocity is
\begin{equation}
\label{Eq.45}
u-\mathrm{i} v=-{{v}_{0}}\cos \alpha +\mathrm{i} {{v}_{0}}\frac{z}{\sqrt{{{z}^{2}}-{{a}^{2}}}}\sin \alpha,
\end{equation}
where $v_0$ denotes the velocity of the uniform stream, $\alpha$ the angle between the uniform flow and the plate, $a$ half the width of the flat plate, and $u$ and $v$ the horizontal and vertical velocity components, respectively. For convenience, we let $\alpha=\pi/2$ and $v_0=1$, as shown in Fig.~\ref{Fig.18}. The complex potential in the fluid domain then simplifies to
\begin{equation}
\label{Eq.46}
W(z)=\mathrm{i}\sqrt{{{z}^{2}}-{{a}^{2}}},
\end{equation}
and the complex velocity becomes
\begin{equation}
\label{Eq.47}
u-\mathrm{i} v=\mathrm{i}\frac{z}{\sqrt{{{z}^{2}}-{{a}^{2}}}}.
\end{equation}
The complex potential can be divided into two parts: the potential function $\phi(x,y)$ and the stream function $\chi(x,y)$,
\begin{equation}
\label{Eq.48}
W(z)=\phi( x,y)+\mathrm{i}\chi( x,y).
\end{equation}
According to Eqs.~\eqref{Eq.46} and \eqref{Eq.48}, we obtain
\begin{equation}
\label{Eq.49}
\phi =\operatorname{Re}\left\{ W(z) \right\}=\operatorname{Re}\left\{ \mathrm{i}\sqrt{{{z}^{2}}-{{a}^{2}}} \right\}.
\end{equation}
The velocity in the fluid domain can be written as:
\begin{equation}
\label{Eq.50}
u=\frac{\partial \phi }{\partial x} =\mathrm{Re}\left(\mathrm{i}\frac{z}{\sqrt{{{z}^{2}}-{{a}^{2}}}}\right).
\end{equation}
\begin{equation}
\label{Eq.51}
v=\frac{\partial \phi }{\partial y}=-\mathrm{Im}\left(\mathrm{i}\frac{z}{\sqrt{{{z}^{2}}-{{a}^{2}}}}\right).
\end{equation}
The velocity potential and velocity determined by Eqs.~\eqref{Eq.49} and \eqref{Eq.51} are not physical on the right half-plane, as shown in Fig.~\ref{Fig.19}(a) and Fig.~\ref{Fig.20}(a), respectively: the velocity potential $\phi$ and the velocity vectors are not symmetric about the $y$-axis, and there is a discontinuity along the $y$-axis. A correction is therefore applied for $x\ge 0$:
\begin{equation}
\label{Eq.52}
\begin{aligned}
\phi \rightarrow -\phi, \\
u \rightarrow -u, \\
v \rightarrow -v.
\end{aligned}
\end{equation}
After the modification, the contour of the velocity potential is shown in Fig.~\ref{Fig.19}(b) and the velocity vector diagram in Fig.~\ref{Fig.20}(b).
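The construction above, including the branch correction of Eq.~\eqref{Eq.52}, can be verified numerically. A sketch (the function name is ours) that evaluates $\phi$, $u$ and $v$ from the complex potential and checks that the corrected field is symmetric about the $y$-axis:

```python
import numpy as np

def plate_flow(z, a=1.0):
    # W = i*sqrt(z^2 - a^2) (Eq. 46) and u - i*v = i*z/sqrt(z^2 - a^2)
    # (Eq. 47); "+ 0j" forces the complex branch of the square root.
    root = np.sqrt(z * z - a * a + 0j)
    w = 1j * root
    dw = 1j * z / root
    phi, u, v = np.real(w), np.real(dw), -np.imag(dw)
    if np.real(z) >= 0:
        # branch correction of Eq. (52): flip signs on the right half-plane
        phi, u, v = -phi, -u, -v
    return float(phi), float(u), float(v)
```

After the correction, $\phi$ and $v$ are even in $x$, $u$ is odd in $x$, and the far-field velocity approaches the uniform stream $v_0 = 1$.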
% arXiv:2106.08570
\section{Introduction}\label{sec1}
Anomaly detection, which attempts to automatically identify abnormal/normal events in a given video sequence, has been actively studied in the field of computer vision. As a high-level computer vision task, anomaly detection aims to effectively distinguish abnormal from normal activities, as well as anomaly categories, in video sequences. In the last few years, many studies have investigated anomaly detection in the research community~\cite{Li2014Anomaly, Cosar2017Toward, Ravanbakhsh2017Abnormal, Hasan2016Learning, Liu2018GAN, wan2020weakly, park2020learning, zaheer2020old, Liu2019MarginLE}.
Compared with normal behaviors, an event that occurs rarely or with low probability is generally considered an~\emph{anomaly}. In practice, it is difficult to build effective anomaly detection models due to unknown event types and the indistinct definition of~\emph{anomaly}. Traditionally, anomaly detection methods have been designed from two aspects. One type of anomaly detection method is based on reconstruction and focuses on modelling normal patterns in video sequences~\cite{Antic2011Video, Ravanbakhsh2017Abnormal, Hasan2016Learning, Lu2013Abnormal, Liu2018GAN, park2020learning, zaheer2020old}. The goal of these methods is to learn a feature representation model for normal patterns. At the testing stage, these methods utilize the differences between abnormal and normal samples, such as the reconstruction cost or a specific threshold, to determine the final anomaly score of the testing data~\cite{Ravanbakhsh2017Abnormal, Hasan2016Learning, Lu2013Abnormal, Liu2018GAN, park2020learning, zaheer2020old}. Although reconstruction-based anomaly detection methods are good at reconstructing normal patterns in video sequences, their key issue is that they rely heavily on training data.
\begin{figure*}[ht]
\addtocounter{subfigure}{-6}
\centering
\subfigure{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Fighting_1.jpg}}
\subfigure{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Boxing_1.jpg}}
\subfigure{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Hurt_1}}
\subfigure{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Touching_1.jpg}}
\subfigure{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Running_a_1.jpg}}
\subfigure{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Running_n_1.jpg}}
\subfigure[\textit{Fighting}]{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Fighting_2.jpg}}
\subfigure[\textit{Boxing}]{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Boxing_2.jpg}}
\subfigure[\textit{Hurt}]{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Hurt_2.jpg}}
\subfigure[\textit{Touching}]{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Touching_2.jpg}}
\subfigure[\textit{Running}-A]{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Running_a_2.jpg}}
\subfigure[\textit{Running}-N]{\label{fig:subfig:Case}
\includegraphics[width = 2.6cm]{Figs/Examples_of_case/Running_n_2.jpg}}
\caption{Definition of anomaly in different visual scenes, where \textit{Running}-A and \textit{Running}-N are short for \textit{Running} Abnormal Events and \textit{Running} Normal Events, respectively. Columns (a) to (f): \textit{Fighting}, \textit{Boxing}, \textit{Hurt}, \textit{Touching}, \textit{Running} Abnormal Events and \textit{Running} Normal Events.}
\label{fig:Examples_of_case}
\end{figure*}
\begin{table*}[ht]
\centering \caption{The detailed information of existing video anomaly detection databases.}
\label{tab:basic_information_of_datasets}
\renewcommand\tabcolsep{20pt}
\begin{tabular}{ c c c c c c }
\toprule
Database&Year&Videos&Scenes&Supervision&Categories\\
\midrule
UCSD Ped1~\cite{Li2014Anomaly} &2014&70&1&Video-level& not specified\\
UCSD Ped2~\cite{Li2014Anomaly} &2014&28&1&Video-level& not specified\\
Avenue~\cite{Lu2013Abnormal} &2013&37&1& Video-level& not specified\\
LV~\cite{Leyva2017Video} &2017&28&28 & Video-level& not specified\\
ShanghaiTech~\cite{Luo2017A} &2017&437&13& Video-level& not specified\\
UCF-Crime~\cite{Sultani2018Real} &2018&1900&-& Video-level& 13\\
\midrule
\textbf{LAD}&\textbf{-}&\textbf{2000}&\textbf{1895}& \textbf{Video- and frame- level}& \textbf{14}\\
\bottomrule
\end{tabular}
\end{table*}
Another type of anomaly detection method regards anomaly detection as a classification problem~\cite{Zhu2013Anomaly, Colque2015Histograms}. In these methods, anomaly scores of video sequences are predicted by extracting features, such as Histograms of Optical Flow (HOF) or dynamic textures (DT), and applying a trained classifier~\cite{Zhu2013Anomaly, Colque2015Histograms}. The performance of these methods is highly dependent on the training samples. To obtain satisfactory performance, extracting effective and discriminative features is crucial for such anomaly detection methods.
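A simplified version of such a histogram-of-optical-flow descriptor can be sketched as orientation binning weighted by flow magnitude. This is an illustration of the idea only, not the exact formulation of the cited works:

```python
import numpy as np

def hof_descriptor(flow_u, flow_v, n_bins=8):
    # Bin flow orientations over [0, 2*pi) into n_bins, weighting each
    # vector by its magnitude, then L1-normalize the histogram.
    ang = np.arctan2(flow_v, flow_u) % (2.0 * np.pi)
    mag = np.hypot(flow_u, flow_v)
    bins = (ang / (2.0 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

The resulting fixed-length vector can then be fed to any off-the-shelf classifier to produce an anomaly score.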
Most of the existing anomaly detection methods are designed based on the hypothesis that any pattern different from the learned normal patterns is regarded as an anomaly. Under this assumption, the same activity in different scenes might be labeled as a normal or an abnormal event. For example, as shown in Fig.~\ref{fig:Examples_of_case}, a scene where two men are brawling may be considered abnormal, while it may be normal when the two men are boxing; a girl/boy running on the street out of panic may be considered abnormal, but it may be normal when it is raining and the girl/boy forgot to take an umbrella; an animal touching a human may be considered abnormal (e.g., a snake biting a human), while it may be normal in the case of a dolphin kissing people. Additionally, there is much redundant visual information in high-dimensional video data, which increases the difficulty of event representation in video sequences.
The main challenges of the anomaly detection task stem from the lack of a large-scale anomaly detection database with fine-grained annotations. Although several anomaly detection databases have been proposed in the research community~\cite{Li2014Anomaly, Lu2013Abnormal, Leyva2017Video, Sultani2018Real, Luo2017A}, they are limited in either scale or annotation richness. Specifically, the databases in~\cite{Li2014Anomaly, Lu2013Abnormal, Leyva2017Video} contain no more than 100 video sequences each, which cannot satisfy the training-data requirements of deep learning based models. Besides, existing databases~\cite{Li2014Anomaly, Lu2013Abnormal, Leyva2017Video, Sultani2018Real, Luo2017A} only provide video-level labels for the training set, which makes it infeasible to learn anomaly detection models in a fully-supervised manner. Moreover, the definition of anomaly is unclear, which complicates both ground-truth annotation and computational model design. Anomaly detection models designed for specific events, such as hyperspectral anomaly detection~\cite{yuan2015hyperspectral}, violence detection~\cite{Mohammadi2016Angry}, and traffic anomaly detection~\cite{Sultani2010Abnormal, yuan2016anomaly}, have limited applicability since they cannot be used to detect other abnormal events.
To address the above problems in existing anomaly detection studies, we investigate anomaly detection from the following two aspects.
\begin{itemize}
\item We build a new \textbf{L}arge-scale \textbf{A}nomaly \textbf{D}etection (\textbf{LAD}) database consisting of $2000$ video sequences and corresponding anomaly ground-truth data including video-level labels (abnormal/normal video, anomaly type) and frame-level labels (abnormal/normal video frame). There are 14 abnormal event categories in total. More than 100 video sequences are collected for each abnormal category, making it the largest database for anomaly detection to date.
\item We propose a multi-task deep neural network for anomaly detection that learns local and global spatiotemporal contextual features with a multi-task joint learning scheme. An inflated 3D convolutional network is constructed to extract local spatiotemporal contextual features, which are then fed into a designed recurrent convolutional neural network to learn global spatiotemporal contextual features. From these global features, the multi-task network predicts both the anomaly category and the anomaly score.
\end{itemize}
The rest of this paper is organized as follows. Section II reviews the related work. Section III provides the details of the built large anomaly database, including the data collection and annotations. Section IV describes the proposed method in detail. Section V briefly describes the performance evaluation metrics and the performance comparison of the proposed method. We conclude this paper in Section VI.
\section{Related Work}\label{sec_related}
\subsection{Anomaly Detection Databases}\label{Image-Based Saliency}
Currently, there are several anomaly detection databases for video sequences~\cite{Li2014Anomaly, Lu2013Abnormal, Leyva2017Video, Luo2017A, Sultani2018Real}. The detailed information of these existing databases is given in Table~\ref{tab:basic_information_of_datasets}.
\textbf{UCSD}~\cite{Li2014Anomaly} includes two subsets,~\textbf{Ped1} and~\textbf{Ped2}, where an anomalous event is defined as a car or a bicycle appearing abnormally in the street compared with the normal patterns of cars or pedestrians. The crowd density varies across video sequences. All video sequences are recorded at 10 Frames Per Second (fps) and cover two different outdoor scenes. The first subset~\textbf{Ped1} contains 34 training and 36 testing video sequences with around 8000 video frames in total, while the second subset~\textbf{Ped2} contains 16 training and 12 testing video sequences with 4950 video frames in total.
\textbf{Avenue}~\cite{Lu2013Abnormal} contains 16 training and 21 testing video sequences. In this database, abnormal events are labeled as people running, loitering, throwing,~\emph{etc.} The size of a person may vary depending on the position and angle of the camera. It provides pixel-level annotation for each video frame. Each video sequence is about 2 minutes long. There are around 31000 video frames with a resolution of $640 \times 360$ in total. All video sequences are recorded in the same visual scene.
\textbf{LV}~\cite{Leyva2017Video} contains 28 realistic video sequences covering outdoor and indoor scenes, where abnormal events include people fighting, people clashing, vandalism,~\emph{etc.} Each video sequence is divided into training and testing data.
\textbf{ShanghaiTech}~\cite{Luo2017A} contains 437 realistic video sequences of outdoor scenes. There are 13 different visual scenes in this database, where all video sequences are captured by surveillance cameras. This database contains 130 abnormal events, including running, riding bicycles, skating,~\emph{etc.}
\textbf{UCF-Crime}~\cite{Sultani2018Real} contains 13 real-world anomaly categories, including~\emph{Abuse, Arrest, Arson, Assault, Accident, Burglary, Explosion, Fighting, Robbery, Shooting, Stealing, Shoplifting} and~\emph{Vandalism}. It includes 1900 surveillance video sequences in total, composed of 950 abnormal and 950 normal video sequences, amounting to about 128 hours of video. The testing set includes 150 normal and 140 abnormal video sequences, while the rest are used as the training set. This database provides only video-level labels for training videos.
From Table~\ref{tab:basic_information_of_datasets}, we can observe that most existing anomaly detection databases provide training videos of limited scale. Although they contain a variety of abnormal events, the categories of abnormal videos are not specified. However, real-world visual scenes are diverse and complicated, with different anomaly types. Another common drawback of existing databases is the lack of frame-level labels. As a result, anomaly detection algorithms can only be learned in a weakly-supervised manner, which deteriorates performance and impedes wide usage in practical applications.
In this work, we build a new large-scale anomaly detection database, including 2000 video sequences and the corresponding video- and frame-level labels, to promote anomaly detection in a fully-supervised manner. The built database contains 1895 different visual scenes with 14 anomaly categories, including~\emph{Crash, Crowd, Destroy, Drop, Falling, Fighting, Fire, Fall Into Water, Hurt, Loitering, Panic, Thiefing, Trampled,} and~\emph{Violence}. We will introduce this database in detail in Section III.
\subsection{Anomaly Detection Methods}\label{Video-Based Saliency}
Early anomaly detection studies extract object trajectories to detect abnormal activities in video sequences, where an object deviating from the learned normal object trajectories is detected as an anomaly~\cite{Cosar2017Toward, Piciarelli2008Trajectory, Wu2010Chaotic, piciarelli2006on-line, jiang2011anomalous, tung2011goal-based, morris2011trajectory, calderara2011detecting, patino2015abnormal, yi2015understanding}. Cosar~\emph{et al.} proposed an unsupervised architecture for abnormal behavior prediction through object trajectory analysis (i.e., speed, direction, and body movement) and pixel-level analysis (i.e., appearance)~\cite{Cosar2017Toward}. Piciarelli~\emph{et al.} designed an anomaly detection model by clustering the normal trajectories of moving objects in video sequences~\cite{Piciarelli2008Trajectory, piciarelli2006on-line}. Specifically, they utilized a single-class SVM to learn normal object trajectories; in the testing stage, a new trajectory is predicted as anomalous or not by comparing it against the clustering model with a threshold. Wu~\emph{et al.} exploited chaotic invariants of Lagrangian particle trajectories to represent anomalous activities in crowded scenes~\cite{Wu2010Chaotic}. Patino~\emph{et al.} detected speed and direction changes in the trajectories of moving objects to predict anomalous events~\cite{patino2015abnormal}. Jiang~\emph{et al.} proposed a context-aware anomaly detection method~\cite{jiang2011anomalous}: by tracking all moving objects in a video sequence, anomalous events are detected by considering different levels of spatiotemporal context. Morris~\emph{et al.} studied the normal recurrent motion patterns of surveilled subjects to detect abnormalities~\cite{morris2011trajectory}. Yi~\emph{et al.} proposed a pedestrian behavior model for anomaly detection based on stationary crowd groups~\cite{yi2015understanding}. However, these methods cannot work well when objects are occluded.
To address the challenge of object occlusion, some studies use global features to represent complex scenes for anomaly detection~\cite{Li2014Anomaly, Antic2011Video, Leyva2017Video, Wang2014Detection, Mehran2009Abnormal, adam2008robust, saligrama2012video, benezeth2009abnormal, kim2009observe, kratz2009anomaly, zhang2005semi-supervised, roshtkhari2013online, zhu2013context-aware, xiao2015learning, Cui2011Abnormal, Yuan2015Online, Cheng2015Gaussian}. In~\cite{Wang2014Detection}, a nonlinear one-class support vector machine is used to learn normal patterns, and an event behavior assigned an outlier score by the trained model is considered as an anomaly. Different from the study~\cite{Wang2014Detection}, Li~\emph{et al.} proposed a joint anomaly detection model that combines temporal and spatial anomalies with a Mixture of Dynamic Textures (MDT) to model normal crowd activities~\cite{Li2014Anomaly}. Besides, Mehran~\emph{et al.} introduced a social force model to simulate the normal behaviour of a crowd, and classified video frames as normal or abnormal using a bag-of-words approach~\cite{Mehran2009Abnormal}. Cui~\emph{et al.} defined the concept of interaction energy to represent the current interaction between objects and their surrounding region; a behaviour is considered anomalous when the energy and velocity of an object change dramatically~\cite{Cui2011Abnormal}. Adam~\emph{et al.} used low-level information from multiple local monitors for anomaly detection in video sequences~\cite{adam2008robust}. Saligrama~\emph{et al.} used spatiotemporal features with a $k$-nearest neighbor method to design an anomaly detection model~\cite{saligrama2012video}. Benezeth~\emph{et al.} trained a spatiotemporal co-occurrence matrix on normal events and combined it with a Markov random field to detect anomalies~\cite{benezeth2009abnormal}.
Kim~\emph{et al.} used a mixture of probabilistic PCA models to represent local optical flow patterns, combined with a Markov random field to define normal patterns~\cite{kim2009observe}. Antic~\emph{et al.} introduced a probabilistic model that localizes abnormalities via statistical inference~\cite{Antic2011Video}. Yuan~\emph{et al.} proposed an informative Structural Context Descriptor (SCD) to describe crowd scenes for anomaly detection~\cite{Yuan2015Online}. Lu~\emph{et al.} proposed to learn multiple dictionaries to model normal patterns with a sparsity constraint~\cite{Lu2013Abnormal}. Leyva~\emph{et al.} designed an anomaly detection method based on optical flow information and foreground occupancy~\cite{Leyva2017Video}. In~\cite{athanesious2020detecting}, a hand-crafted optical-flow feature extractor named Super Orientation Optical Flow (SOOF) is proposed to efficiently capture the motion information of objects in surveillance videos. In~\cite{yuan2015hyperspectral}, a vertex- and edge-weighted graph is constructed to reduce the false-positive rate in the hyperspectral anomaly detection task. To tackle problems caused by dynamic outdoor environments in traffic scenes, Yuan~\emph{et al.} proposed a spatial localization constrained sparse coding approach as a motion descriptor.
Recently, deep learning techniques have been widely used to build anomaly detection models~\cite{Ravanbakhsh2017Abnormal, Hasan2016Learning, Liu2018GAN, Sultani2018Real, Sabokrou2017Deep, sabokrou2016video, Luo2017Remembering, Hinami2017Joint, Luo2017A, Ionescu2017Unmasking, Xu2015learning}. Sabokrou~\emph{et al.} proposed cascaded Deep Neural Networks (DNNs) for anomaly detection that hierarchically model normal patches using deep features, followed by a Gaussian classifier to identify abnormal behaviours in video sequences~\cite{Sabokrou2017Deep}. Ravanbakhsh~\emph{et al.} trained two Generative Adversarial Networks (GANs) to learn normal patterns in video sequences~\cite{Ravanbakhsh2017Abnormal}. During training, the first generator takes a normal video frame as input and produces a reconstructed optical-flow image, while the second generator takes a real optical-flow image as input and generates a reconstructed appearance image. In the testing stage, the model detects anomalies using the reconstruction differences between real data (original video frames and original optical-flow images) and generated data (reconstructed video frames and reconstructed optical-flow images). Hasan~\emph{et al.} proposed two auto-encoder models to learn temporal regularity for anomaly detection~\cite{Hasan2016Learning}. Similarly, Xu~\emph{et al.} proposed a deep neural network based model built on a stacked denoising autoencoder to learn features for abnormal event detection~\cite{Xu2015learning}. Luo~\emph{et al.} proposed a Convolutional LSTM Auto-Encoder (ConvLSTM-AE) to encode normal appearance and motion patterns for abnormal event detection~\cite{Luo2017Remembering}. Hinami~\emph{et al.} learned a convolutional neural network through multiple visual tasks, then used semantic information to detect anomalous events~\cite{Hinami2017Joint}.
Ionescu~\emph{et al.} applied the unmasking technique to train a binary classifier to distinguish two consecutive short video sequences and gradually remove the most discriminant features~\cite{Ionescu2017Unmasking}. Luo~\emph{et al.} proposed a Temporally-coherent Sparse Coding (TSC) approach for anomaly detection, in which similar adjacent frames are encoded with similar reconstruction coefficients~\cite{Luo2017A}. Liu~\emph{et al.} proposed an anomaly detection model based on the difference between a predicted frame and the ground-truth, where the temporal constraint is considered besides spatial constraints~\cite{Liu2018GAN}. Sultani~\emph{et al.} learned a generic model using deep Multiple Instance Learning (MIL) framework with weakly labeled data~\cite{Sultani2018Real}, and Wan~\emph{et al.} proposed a dynamic MIL loss and a center loss for enlarging the inter-class distance between anomalous and normal instances and reducing the intra-class distance of normal instances, respectively.
All of the above deep learning based methods formulate anomaly detection as an unsupervised or weakly-supervised learning problem due to the lack of frame-level labels in the training sets of existing anomaly detection databases. In this paper, leveraging the fine-grained frame-level annotations of our proposed LAD database, we formulate anomaly detection as a fully-supervised learning problem and propose a novel multi-task deep neural network to address anomaly detection in videos. Through extensive experimental analysis, we show that our model significantly improves anomaly detection performance.
\section{Anomaly Detection Benchmark}
\subsection{Data Collection}
\begin{figure*}[htb]
\centering
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad\qquad \textbf{Crash}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crash/resize/227.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crash/resize/229.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crash/resize/247.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crash/resize/248.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad\qquad \textbf{Crowd}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crowd/resize/27.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crowd/resize/167.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crowd/resize/259.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Crowd/resize/349.jpg}
\vspace{0.1cm}
}
\end{minipage}
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad\qquad \textbf{Destroy}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Destroy/resize/104.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Destroy/resize/503.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Destroy/resize/556.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Destroy/resize/722.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad\qquad \textbf{Drop}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Drop/resize/92.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Drop/resize/188.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Drop/resize/198.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Drop/resize/247.jpg}
\vspace{0.1cm}
}
\end{minipage}
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad \textbf{Falling}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Falling/resize3/129.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Falling/resize3/197.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Falling/resize3/213.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Falling/resize3/221.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad \textbf{FallIntoWater}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/FallIntoWater/resize/423.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/FallIntoWater/resize/424.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/FallIntoWater/resize/466.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/FallIntoWater/resize/470.jpg}
\vspace{0.1cm}
}
\end{minipage}
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad \textbf{Fighting}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fighting/resize/123.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fighting/resize/180.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fighting/resize/600.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fighting/resize/651.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad\qquad \textbf{Fire}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fire/resize/1.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fire/resize/2.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fire/resize/102.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Fire/resize/158.jpg}
\vspace{0.1cm}
}
\end{minipage}
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad\qquad \textbf{Hurt}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Hurt/resize/1.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Hurt/resize/65.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Hurt/resize/278.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Hurt/resize/296.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad\qquad \textbf{Loitering}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Loitering/resize/278.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Loitering/resize/295.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Loitering/resize/325.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Loitering/resize/354.jpg}
\vspace{0.1cm}
}
\end{minipage}
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad\qquad \textbf{Panic}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Panic/resize/213.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Panic/resize/246.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Panic/resize/308.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Panic/resize/340.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad\qquad \textbf{Thiefing}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Thiefing/resize/13.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Thiefing/resize/133.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Thiefing/resize/149.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Thiefing/resize/179.jpg}
\vspace{0.1cm}
}
\end{minipage}
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad\qquad \textbf{Trampled}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Trampled/resize/279.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Trampled/resize/281.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Trampled/resize/379.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Trampled/resize/446.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad\qquad \textbf{Violence}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Violence/resize/43.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Violence/resize/108.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Violence/resize/123.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Violence/resize/148.jpg}
\vspace{0.1cm}
}
\end{minipage}
\begin{minipage}{1\linewidth}
\centerline{
\tiny \rotatebox{90}{\qquad \textbf{Normal (I)}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize2/1.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize2/22.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize2/27.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize2/139.jpg}
\vspace{0.1cm}
\hspace{0.1cm}
\tiny \rotatebox{90}{\qquad \textbf{Normal (II)}}
\hspace{0.1cm}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize3/9.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize3/39.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize3/48.jpg}
\includegraphics[width=0.115\textwidth]{Figs/Examples_of_dataset/Normal/resize3/51.jpg}
\vspace{0.1cm}
}
\end{minipage}
\caption{Visual samples of different anomaly event categories in the proposed LAD database. The proposed database contains 14 distinct anomaly categories, including~\emph{Crash, Crowd, Destroy, Drop, Falling, Fall Into Water, Fighting, Fire, Hurt, Loitering, Panic, Thiefing, Trampled and Violence,} as well as~\emph{Normal} activities.}
\label{fig:Examples_of_dataset}
\end{figure*}
To collect large-scale representative anomalous activities, we search a large number of video sequences on public websites, including YouTube\footnote{https://www.youtube.com/}, YouKu\footnote{https://www.youku.com/}, and Tencent Video\footnote{https://v.qq.com/}. Besides, we collect some video sequences from existing activity recognition databases, such as FCVID~\cite{FCVID}, Hollywood2~\cite{Marszalek2009Actions}, and YouTube Action~\cite{Liu2009Recognizing}. Additionally, we record some normal activities and suddenly occurring abnormal events in squares and schools with a digital camera to provide a variety of visual scenes and real-world events. With these operations, we initially collect over 2500 video sequences in total.
We analyze the collected video sequences and classify them into 14 categories, including~\emph{Crash, Crowd, Destroy, Drop, Falling, Fighting, Fire, Fall Into Water, Hurt, Loitering, Panic, Thiefing, Trampled,} and~\emph{Violence}. For each category, we discard video sequences that fall into either of the following two conditions: (1) low resolution or low quality; or (2) an incomplete or unclear anomalous event. We strictly select more than 100 video sequences per category, including more than 50 normal and more than 50 abnormal video sequences. Finally, we preserve 14 distinct anomaly categories with 2000 video sequences in total. The frame rate of all video sequences is 25 fps. For each video sequence, we manually extract the video segment that represents an abnormal/normal activity by removing irrelevant video frames. In Fig.~\ref{fig:Examples_of_dataset}, we show four frames of an example video for each anomaly category, including 2 normal frames and 2 abnormal frames.
\subsection{Annotations}
\begin{figure*}[ht]
\centering
\subfigure[]{\label{fig:dataset_info_a}
\includegraphics[width = 5.4cm]{Figs/dataset_info/Class_anomaly_percentage_0527.eps}}
\subfigure[]{\label{fig:dataset_info_b}
\includegraphics[width = 5.4cm]{Figs/dataset_info/Global_anomaly_percentage.eps}}
\subfigure[]{\label{fig:dataset_info_c}
\includegraphics[width = 5.4cm]{Figs/dataset_info/Global_video_frame_number.eps}}
\caption{The statistics information of the proposed LAD database. (a) The anomaly distribution of each anomaly category; (b) the anomaly distribution of each video sequence; (c) the number of video frames for each video sequence.}
\label{fig:dataset_info}
\end{figure*}
As a high-level video analysis task, anomaly detection requires frame-level labels to identify when an abnormal event starts and ends, and video-level labels to recognize the anomaly category. Thus, we provide both video- and frame-level labels in our database. To ensure the quality of the annotations, we invite five postgraduate students to take part in the annotation experiment, in which 1 denotes an abnormal video frame and 0 a normal one. We first ask the annotators to find the video frames where an anomalous event begins and ends; all frames in between are labeled as 1 and the rest as 0. Then, we compute the average annotation score for each frame. Finally, we binarize the average scores with a threshold of 0.5 and take the binarized scores as the frame-level anomaly labels. Video-level labels represent the anomaly category, where a video sequence is labeled as anomalous if any frame in it is abnormal.
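The annotation-fusion procedure above can be sketched as follows. This is an illustrative implementation, not the authors' annotation tool; the helper name \texttt{aggregate\_annotations} and the use of $\geq 0.5$ (rather than $> 0.5$) for binarization are our assumptions.

```python
import numpy as np

def aggregate_annotations(annotations, threshold=0.5):
    """Fuse per-annotator binary frame labels into frame-level ground truth.

    annotations: array of shape (num_annotators, num_frames), 1 = abnormal
    frame, 0 = normal frame. Returns (frame_labels, video_is_abnormal).
    """
    scores = np.asarray(annotations, dtype=float).mean(axis=0)  # average over annotators
    frame_labels = (scores >= threshold).astype(int)            # binarize at 0.5 (assumed >=)
    video_is_abnormal = bool(frame_labels.any())                # video-level label
    return frame_labels, video_is_abnormal

# Five annotators, six frames: most annotators mark frames 3-5 as abnormal.
votes = [
    [0, 0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1, 0],
]
labels, abnormal = aggregate_annotations(votes)  # labels -> [0, 0, 1, 1, 1, 0]
```

Averaging before thresholding means a single dissenting annotator cannot flip a frame's label, which matches the majority-style fusion described in the text.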
In this database, a normal video sequence in an anomaly category contains behaviour that is regarded as normal for that category. For example, in the~\emph{Fighting} category, a boxing match is classified as normal although it is visually similar to a fighting anomaly; in the~\emph{Falling} category, a woman falling down while roller-skating is labeled as anomalous, while a woman squatting down is annotated as normal; in the~\emph{Hurt} category, a woman being attacked by a dog is labeled as anomalous, while a woman walking a dog is annotated as normal. We divide the built database into training and testing subsets. The testing set contains 560 sequences, composed of 20 randomly selected abnormal and 20 normal video sequences for each anomaly category; the rest are used as the training set. The statistics of all video sequences are shown in Fig.~\ref{fig:dataset_info}. In the built database, we record the entire process of each anomalous event from beginning to end, and each video sequence represents a complete event. As shown in Fig.~\ref{fig:dataset_info}, the number of video frames for most video sequences is in the range $\left[ 4000, 8000 \right]$. The anomaly percentages of the \textit{Fire} and \textit{Loitering} categories are high because these anomalous events generally last a long time; when annotating abnormal frames for \textit{Fire}, frames with smoke or small fires are also considered anomalous. By contrast, the anomaly percentage of the \textit{Falling} category is the lowest since this type of event lasts only a short time: when a person falls down, he can stand up quickly. Besides, comparing our database with UCF-Crime~\cite{Sultani2018Real}, we find some video sequences with an anomaly percentage higher than 0.5.
Since the anomalous events in these video sequences last a long time, the whole event can be fully expressed. In addition, the abnormal frames of the UCF-Crime database~\cite{Sultani2018Real} are not completely labeled. For example, for the \textit{Explosion} category, only the moment of the explosion is labeled as abnormal, while the fire generated after the explosion is regarded as normal.
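The anomaly percentage used in the comparison above is simply the fraction of abnormal frames in a sequence's frame-level labels; a trivial sketch (the helper name \texttt{anomaly\_percentage} is ours):

```python
def anomaly_percentage(frame_labels):
    """Fraction of abnormal frames (label 1) in one annotated video sequence."""
    return sum(frame_labels) / len(frame_labels)

# A short Falling-style sequence: the anomaly lasts only a few frames,
# so its anomaly percentage is low.
falling = [0] * 90 + [1] * 10
pct = anomaly_percentage(falling)  # 0.1
```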
\section{Proposed Method}\label{sec_framework}
\begin{figure*}[ht]
\centering
\includegraphics[width = 0.96\textwidth]{Framework/framework_wan}
\caption{The architecture of the proposed anomaly detection method by modeling local and global spatiotemporal contextual features.}
\label{fig:framework}
\end{figure*}
Here, we propose a multi-task deep neural network for anomaly detection. The proposed model is illustrated in Fig.~\ref{fig:framework}. It consists of two components, i.e., a local and a global spatiotemporal context-aware stream.
Our observation is that local features alone may fail to represent continuous actions. To alleviate this problem, we devise a local spatiotemporal context-aware submodule and a global spatiotemporal context-aware submodule, as shown in Fig.~\ref{fig:framework}. In particular, we first encode each video sequence with a pretrained Inflated 3D convolutional network (I3D)~\cite{CarreiraZ17}. Given a video sequence with $M$ frames, we divide it into $N$ clips of $m$ frames each, so that the video sequence can be denoted as ${\bm V} = \left\{{\bm v}_{n}\right\}_{n=1}^{N}$ with $N = M/m$. The split clips are fed into the pretrained I3D network to extract high-level visual features. For $K$ consecutive clips, the local feature vectors are represented as $\bm{X} = \left\{{\bm x}_{t}\right\}_{t=1}^{K}$.
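The clip-splitting and feature-extraction step can be sketched as follows. This is a toy illustration under our own assumptions: \texttt{fake\_i3d} merely stands in for the pretrained I3D backbone (which in the paper outputs high-level features), and trailing frames that do not fill a clip are dropped.

```python
import numpy as np

def split_into_clips(frames, m):
    """Split a video of M frames into N = M // m non-overlapping clips of m frames."""
    M = len(frames)
    N = M // m  # assumption: drop trailing frames that do not fill a clip
    return [frames[n * m:(n + 1) * m] for n in range(N)]

def fake_i3d(clip):
    """Stand-in for the pretrained I3D backbone: maps one clip to a feature vector.
    (Here just a per-pixel mean over frames; the real network is far richer.)"""
    return np.mean(clip, axis=0).ravel()

# A toy video: M = 32 frames of 8x8 "pixels", split into clips of m = 16 frames.
video = np.random.rand(32, 8, 8)
clips = split_into_clips(video, m=16)        # N = 2 clips
X = np.stack([fake_i3d(c) for c in clips])   # local features x_t for K consecutive clips
```

The stacked matrix `X` plays the role of $\bm{X} = \left\{{\bm x}_{t}\right\}_{t=1}^{K}$, the input to the global context-aware stream.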
Video sequences are high-dimensional data containing plenty of visual information. Thus, preserving important cues while filtering out redundancy is important for learning effective anomaly detection models. To learn robust global spatiotemporal contextual cues, we feed the extracted local contextual features of $K$ consecutive clips into the global context-aware stream. As shown in Fig.~\ref{fig:framework}, we adopt a two-layer Convolutional LSTM (ConvLSTM) network~\cite{Shi2015Convolutional} to learn global spatiotemporal features of a video segment. Unlike a standard LSTM, ConvLSTM~\cite{Shi2015Convolutional} takes three-dimensional data as input and replaces matrix multiplications with convolutions, which captures temporal information while extracting spatial features. At the same time, it generalizes well by reducing the number of parameters and the computational complexity. The ConvLSTM is formulated as follows.
\begin{equation}
{\bm{{i}}}_{t}=\sigma (\bm{W}_{xi} * \bm{X}_{t}+ \bm{W}_{hi} * \bm{H}_{t-1} + \bm{W}_{ci} \circ \bm{C}_{t-1}+b_{i})
\end{equation}
\begin{equation}
{\bm{{f}}}_{t}=\sigma (\bm{W}_{xf} * \bm{X}_{t} + \bm{W}_{hf} * \bm{H}_{t-1} + \bm{X}_{cf} \circ \bm{C}_{t-1}+b_{f})
\end{equation}
\begin{equation}
\bm{C}_{t}={\bm{{f}}}_{t} \circ \bm{C}_{t-1}+{\bm{{i}}}_{t} \circ {\rm tanh}(\bm{W}_{xc} * \bm{X}_{t}+\bm{W}_{hc} * \bm{H}_{t-1}+b_{c})
\end{equation}
\begin{equation}
{\bm{{o}}}_{t}=\sigma (\bm{W}_{xo} * \bm{X}_{t} + \bm{W}_{ho} * \bm{H}_{t-1} + \bm{W}_{co} \circ \bm{C}_{t}+b_{o})
\end{equation}
\begin{equation}
\bm{H}_{t}={\bm{{o}}}_{t} \circ {\rm tanh}(\bm{C}_{t})
\end{equation}
where ${\bm{{X}}}_{t}$ and $\bm{H}_{t}$ denote the input and the hidden state of ConvLSTM~\cite{Shi2015Convolutional} at time $t$; ${\bm{{i}}}_{t}$, ${\bm{{f}}}_{t}$, ${\bm{{o}}}_{t}$ and $\bm{C}_{t}$ represent the input gate, forget gate, output gate and memory cell, respectively; $*$ denotes the convolution operation; $\circ$ represents the Hadamard product; and $\sigma$ is the sigmoid activation function.
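To make the update rules above concrete, the following minimal sketch implements a single ConvLSTM step for a one-channel feature map with NumPy. The naive same-padding convolution, the small kernel sizes, and the random parameters are illustrative assumptions for exposition only, not the actual multi-channel ConvLSTM implementation used in our network.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive single-channel 'same'-padded 2D cross-correlation."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(X, H_prev, C_prev, p):
    """One ConvLSTM update following the gate equations above."""
    i = sigmoid(conv2d_same(X, p['W_xi']) + conv2d_same(H_prev, p['W_hi'])
                + p['W_ci'] * C_prev + p['b_i'])
    f = sigmoid(conv2d_same(X, p['W_xf']) + conv2d_same(H_prev, p['W_hf'])
                + p['W_cf'] * C_prev + p['b_f'])
    C = f * C_prev + i * np.tanh(conv2d_same(X, p['W_xc'])
                                 + conv2d_same(H_prev, p['W_hc']) + p['b_c'])
    o = sigmoid(conv2d_same(X, p['W_xo']) + conv2d_same(H_prev, p['W_ho'])
                + p['W_co'] * C + p['b_o'])
    H = o * np.tanh(C)  # hidden state is bounded by the tanh nonlinearity
    return H, C

rng = np.random.default_rng(0)
conv_names = ['W_xi', 'W_hi', 'W_xf', 'W_hf', 'W_xc', 'W_hc', 'W_xo', 'W_ho']
p = {n: 0.1 * rng.standard_normal((3, 3)) for n in conv_names}
p.update({n: 0.1 * rng.standard_normal((8, 8)) for n in ['W_ci', 'W_cf', 'W_co']})
p.update({n: 0.0 for n in ['b_i', 'b_f', 'b_c', 'b_o']})

X = rng.standard_normal((8, 8))
H0 = np.zeros((8, 8)); C0 = np.zeros((8, 8))
H1, C1 = convlstm_step(X, H0, C0, p)
```

Note that the Hadamard ("peephole") weights $\bm{W}_{ci}, \bm{W}_{cf}, \bm{W}_{co}$ act elementwise on the cell state, while the input and hidden transitions are convolutions.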
For ConvLSTM, we use the local features extracted from each clip of a video segment as the input. The ConvLSTM network leverages both long- and short-term cues of input features. The hidden states of the last layer of ConvLSTM are fed into three fully convolutional layers to predict the final event category and anomaly scores.
Furthermore, we design a multi-task joint learning network to learn the intrinsic relationship between anomaly detection and classification. The sub-network of the anomaly category classification task is designed to recognize the anomaly category, and we use a cross-entropy loss function in this sub-network.
\begin{equation}
L_{1}=-\sum_{i=1}^{C}\hat{y}_{i}\log{y}_{i} + \gamma\left\| \bm{W} \right\|_{2}^{2}
\end{equation}
where ${\hat{\bm y}}=[\hat{y}_{1},\hat{y}_{2},\ldots,\hat{y}_{C}]$ denotes the one-hot encoding of the anomaly category label of a video sequence; ${\bm y}=[{y}_{1},{y}_{2},\ldots,{y}_{C}]$ represents the corresponding score vector predicted by the anomaly category classification sub-network; $\left\| {\bm W} \right\|_{2}^{2}$ is an $L_{2}$-norm regularization term to avoid over-fitting; and $\gamma$ is a hyper-parameter that balances the trade-off between the loss and the regularization.
Since the sub-task of anomaly score prediction is modeled as a regression problem, we use the smooth $L_1$ loss function~\cite{Girshick2015Fast} as the learning objective in this sub-network:
\begin{equation}\label{eq5}
\begin{split}
& L_{2} = \sum_{i} smooth(s_{i}-\hat{s}_{i}) \\
& smooth(x) =
\begin{cases}
0.5x^{2}, & |x| \leq 1 \\
|x|-0.5, & \text{otherwise}
\end{cases}
\end{split}
\end{equation}
where $\hat{s}_{i}$ denotes the anomaly label of a video frame and ${s_{i}}$ represents the corresponding score predicted by the anomaly score prediction sub-network. Based on ${L}_{1}$ and ${L}_{2}$, the final loss function is written as follows:
\begin{equation}
L = {\lambda}_{1}{L}_{1} + {\lambda}_{2}{L}_{2}
\end{equation}
where $\lambda_{1}$ and $\lambda_{2}$ are hyper-parameters that weight the importance of the two sub-tasks.
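As a sanity check of the loss definitions above, the following sketch computes the cross-entropy term, the smooth $L_1$ term, and their weighted sum with NumPy. The toy predictions and labels are illustrative, and the $L_2$-norm regularization term is omitted for brevity; the weights $\lambda_1$=1 and $\lambda_2$=10 match the setting used in our experiments.

```python
import numpy as np

def cross_entropy(y_hat, y, eps=1e-12):
    # y_hat: one-hot category label, y: predicted class probabilities
    return -np.sum(y_hat * np.log(y + eps))

def smooth_l1(x):
    # smooth L1: quadratic near zero, linear in the tails
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def total_loss(y_hat, y, s, s_hat, lam1=1.0, lam2=10.0):
    L1 = cross_entropy(y_hat, y)          # category classification loss
    L2 = np.sum(smooth_l1(s - s_hat))     # frame-level score regression loss
    return lam1 * L1 + lam2 * L2

# toy example: 3 categories, 4 frame-level scores
y_hat = np.array([0.0, 1.0, 0.0])
y = np.array([0.2, 0.7, 0.1])
s = np.array([0.1, 0.9, 0.4, 0.6])
s_hat = np.array([0.0, 1.0, 0.0, 1.0])
L = total_loss(y_hat, y, s, s_hat)
```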
\section{Experimental Results}\label{sec_experiment}
\begin{table*}[ht]
\centering
\renewcommand\tabcolsep{9pt}
\caption{Data splits on Avenue~\cite{Lu2013Abnormal}, UCSD Ped2~\cite{Li2014Anomaly}, UCF-Crime~\cite{Sultani2018Real}, ShanghaiTech~\cite{Luo2017A} and LAD databases.}
\begin{tabular}{ c c c c c c c }
\toprule
Split&Subset& Avenue~\cite{Lu2013Abnormal}&UCSD Ped2~\cite{Li2014Anomaly}& ShanghaiTech~\cite{Luo2017A}&UCF-Crime~\cite{Sultani2018Real}&LAD\\
\midrule
\multirow{2}{*}{Unsupervised}& Train & 8 & 8 &175 & 800 & 958\\
& Test & 18 & 14 & 199 & 290 & 560 \\
\multirow{2}{*}{Weakly-supervised}&Train & 19 & 14 & 238 &1610 & 1440 \\
&Test& 18 & 14 & 199 & 290 & 560 \\
\multirow{2}{*}{Fully-supervised} &Train& 19 & 14 & 238 & 1610 & 1440 \\
&Test & 18 & 14 & 199 & 290 & 560 \\
\bottomrule
\end{tabular}
\label{tab:data_splits}
\end{table*}
\subsection{Implementation and Evaluation Metrics}\label{sec_experimetal_setup}
\noindent{\textbf{Implementation}}: In this work, the proposed deep anomaly detection network is implemented on Ubuntu with TensorFlow~\cite{65Abadi2016}. The experiments are conducted with an Intel Core i7-6900K CPU (16 threads, 3.20GHz), 64 GB RAM, and an Nvidia TITAN X (Pascal) GPU with 16 GB memory.
The I3D is pretrained on Kinetics-400~\cite{CarreiraZ17}, a large-scale video classification dataset. We set ${\lambda}_{1}$ and ${\lambda}_{2}$ to 1 and 10, respectively, and use a threshold of 0.5 to binarize the anomaly score. We use the Adam optimizer~\cite{Kingma2014} to update the parameters of the proposed model; the learning rate is set to 3e-4, the weight decay to 5e-4, and the batch size to 60.
We divide each video sequence into clips of $m$=16 consecutive non-overlapping frames and set $K$=5, so a total of $K \times m$=80 frames are used as the input to I3D for extracting local spatiotemporal contextual features. The output dimension of the event classification sub-network is set to $C$=14, the number of LAD anomaly categories. We extract a $p$=1024-dimensional feature from the last pooling layer of I3D and concatenate the outputs of the RGB and optical-flow I3D streams as the local spatiotemporal contextual feature of a video clip. The video frames are resized to $224 \times 224$, with their mean values removed.
The number of hidden channels is set to 128 in the proposed two-layer ConvLSTM, with a $3 \times 3$ convolutional kernel and a stride of 1. The dimension of the output features of our ConvLSTM is $4 \times 4 \times 128$. After reshaping the learned global spatiotemporal contextual feature into a 2048-dimensional vector, we feed it into four fully convolutional layers for the final anomaly score prediction. The dimensions of the first three fully convolutional layers are set to 2048, 1024, and 512, respectively. The last layers have $C$=14 and $m \times K$=80 outputs for anomaly category classification and score prediction, respectively.
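The shape bookkeeping of the prediction head described above can be sketched as follows. The weights here are random and purely illustrative (the dense-matrix view of the layers and the `tanh` activations are assumptions for exposition); only the dimensions match the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 4, 128))    # ConvLSTM output feature map
vec = feat.reshape(-1)                     # flattened 2048-dim global feature

# illustrative random weights: three layers of width 2048, 1024, 512,
# then two task-specific heads
W1 = 0.01 * rng.standard_normal((2048, 2048))
W2 = 0.01 * rng.standard_normal((2048, 1024))
W3 = 0.01 * rng.standard_normal((1024, 512))
W_cls = 0.01 * rng.standard_normal((512, 14))    # C = 14 anomaly categories
W_score = 0.01 * rng.standard_normal((512, 80))  # m * K = 80 frame scores

h = np.tanh(vec @ W1)
h = np.tanh(h @ W2)
h = np.tanh(h @ W3)
logits = h @ W_cls     # category classification output
scores = h @ W_score   # frame-level anomaly score output
```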
\begin{table*}[ht]
\centering
\renewcommand\tabcolsep{8pt}
\caption{AUC results on Avenue~\cite{Lu2013Abnormal}, UCSD Ped2~\cite{Li2014Anomaly}, UCF-Crime~\cite{Sultani2018Real}, ShanghaiTech~\cite{Luo2017A} and LAD databases, where $\mathcal{U}$, $\mathcal{W}$ and $\mathcal{S}$ represent unsupervised, weakly-supervised and fully-supervised methods, respectively. * indicates results obtained by running the publicly released source code.}
\begin{tabular}{ c c c c c c c }
\toprule
\quad&\quad& Avenue~\cite{Lu2013Abnormal}&UCSD Ped2~\cite{Li2014Anomaly}& ShanghaiTech~\cite{Luo2017A}&UCF-Crime~\cite{Sultani2018Real}&LAD\\
\midrule
Sparse~\cite{Lu2013Abnormal}& $\mathcal{U}$ & - & - & - & 65.51\,\; & 50.31* \\
ConvAE~\cite{Hasan2016Learning}& $\mathcal{U}$ & - & - & - & 50.60\,\; & 53.24* \\
GMM~\cite{Leyva2017Video} & $\mathcal{U}$ & - & - & - & 56.43* & 41.02* \\
Stacked RNN~\cite{Luo2017A}& $\mathcal{U}$ & 70.09* & 52.58* & 67.66* &- & 49.42* \\
U-Net~\cite{Liu2018GAN} & $\mathcal{U}$ & 55.26* & 71.26* & 56.59* & - & 53.96* \\
MNAD~\cite{park2020learning} & $\mathcal{U}$ & 73.58* & 46.72* & 51.13* & 56.20* & 45.84* \\
OGNet~\cite{zaheer2020old} & $\mathcal{U}$ & 63.23* & 69.08* & 69.26* & - & 55.07* \\
\midrule
DeepMIL~\cite{Sultani2018Real}& $\mathcal{W}$ & 87.53* & 90.19* & 86.30\,\; & \textbf{75.41}\,\; & 70.18* \\
MLEP~\cite{Liu2019MarginLE} & $\mathcal{W}$ & 89.20\,\; &- & 73.40\,\; & 50.01* & 50.57* \\
AR-Net~\cite{wan2020weakly} & $\mathcal{W}$ & 89.31* & 93.64* & 91.24\,\; & 74.36* & 79.84* \\
\midrule
\textbf{Our Method} & $\mathcal{F}$ & \textbf{89.33\,\;} & \textbf{95.12\,\;} & \textbf{92.97} & 74.98\,\; & \textbf{86.28\,\;} \\
\bottomrule
\end{tabular}
\label{tab:AUC_results}
\end{table*}
\noindent{\textbf{Evaluation Metrics}}: In this study, following existing anomaly detection studies~\cite{Luo2017A, Liu2019MarginLE}, we utilize the frame-level Area Under the ROC (Receiver Operating Characteristic) Curve (AUC) for quantitative performance evaluation. A higher AUC value indicates better performance. To evaluate the anomaly category classification performance of our model, we use accuracy as the metric.
\noindent{\textbf{Data Splits}}: We conduct experiments on five databases: Avenue~\cite{Lu2013Abnormal}, UCSD Ped2~\cite{Li2014Anomaly}, ShanghaiTech~\cite{Luo2017A}, UCF-Crime~\cite{Sultani2018Real} and LAD.
We adopt three data splits of each database to fit the requirements of unsupervised, weakly-supervised and fully-supervised anomaly detection methods.
\noindent\textbf{Weakly-supervised Splits} In the standard protocol of Avenue, UCSD Ped2 and ShanghaiTech, all training videos are normal, and this setting is not suitable for weakly-supervised learning, so we reorganize these databases. For Avenue and UCSD Ped2, we randomly select 50\% of the videos for training, while the rest are used as the testing set. We use the same splits as~\cite{wan2020weakly} and~\cite{Sultani2018Real} for ShanghaiTech and UCF-Crime, respectively. Only video-level anomaly labels are provided for training weakly-supervised anomaly detection models.
\noindent\textbf{Unsupervised Splits} For each database, we use only the normal videos in the training set of the weakly-supervised split to train unsupervised anomaly detection models, and we evaluate them on all videos in the test set of the weakly-supervised split.
\noindent\textbf{Fully-supervised Splits} We use the same data splits as the weakly-supervised methods. Frame-level anomaly labels and video-level anomaly category labels are provided for training. It is worth noting that UCF-Crime does not provide frame-level anomaly labels for the default training set, so we use the video-level anomaly label as the frame-level label for each training video.
The numbers of training and testing videos on different splits of the databases are shown in Table~\ref{tab:data_splits}.
\subsection{Comparison with State-of-the-art Models}\label{sec_evaluation_method}
In this section, we compare the proposed approach with several state-of-the-art unsupervised and weakly-supervised anomaly detection methods. The unsupervised anomaly detection methods include GMM~\cite{Leyva2017Video}, Sparse~\cite{Lu2013Abnormal}, ConvAE~\cite{Hasan2016Learning}, Stacked RNN~\cite{Luo2017A}, U-Net~\cite{Liu2018GAN}, MNAD~\cite{park2020learning} and OGNet~\cite{zaheer2020old}. The weakly-supervised anomaly detection methods include DeepMIL~\cite{Sultani2018Real}, MLEP~\cite{Liu2019MarginLE} and AR-Net~\cite{wan2020weakly}.
GMM is an anomaly detection model based on Gaussian Mixture Models and Markov Chains. Sparse is a dictionary-based anomaly detection model that learns a normal dictionary via sparse representation. ConvAE is the first deep-learning-based anomaly detection model, using an Auto-Encoder to model normal event patterns. Stacked RNN and U-Net detect anomalies based on the reconstruction errors between a predicted frame and the ground truth. MNAD and OGNet are recent unsupervised anomaly detection methods. We retrain these models using the unsupervised splits of each database. DeepMIL is a representative weakly-supervised anomaly detection method, and AR-Net achieves the highest AUC reported so far on ShanghaiTech. The performance of the weakly-supervised anomaly detection methods is obtained using the weakly-supervised splits of each database. Note that the unsupervised anomaly detection methods only use normal videos to train their models.
Table~\ref{tab:AUC_results} shows the comparison results on Avenue, UCSD Ped2, ShanghaiTech, UCF-Crime and LAD. Our method outperforms all competing anomaly detection models on our LAD and achieves an absolute AUC gain of 6.44\% over the state of the art~\cite{wan2020weakly}. Compared with the weakly-supervised methods, our method achieves similar AUC performance on UCF-Crime. This may be caused by training our model with noisy frame-level anomaly labels, which are identical to the video-level labels. Our model achieves higher AUC than the competing anomaly detection models on Avenue, UCSD Ped2 and ShanghaiTech, which reveals that frame-level annotation is an effective tool for promoting the anomaly detection task.
As shown in Table~\ref{tab:AUC_results}, the competing weakly-supervised anomaly detection models obtain higher AUC on Avenue, UCSD Ped2 and ShanghaiTech than the unsupervised models, which indicates that abnormal training videos are necessary for promoting the video anomaly detection task. Most competing models achieve higher AUC on Avenue and UCSD Ped2, while they obtain relatively lower AUC on ShanghaiTech, UCF-Crime and LAD. These results demonstrate that the variety of visual scenes remains a challenging issue for current anomaly detection models, and they indicate that our LAD, which contains thousands of visual scenes, is a challenging database for video anomaly detection.
\begin{table}[ht]
\centering
\renewcommand\tabcolsep{18pt}
\caption{Comparison of different local spatiotemporal feature extractors, where \textbf{R} and \textbf{O} indicate RGB and optical flow, respectively.}
\begin{tabular}{ c c c}
\toprule
\quad& Input modal & AUC\\
\midrule
C3D~\cite{Du2015Learning} & \textbf{R} & 77.21 \\
$\rm I3D^{RGB}$~\cite{CarreiraZ17} & \textbf{R} & 84.43 \\
$\rm I3D^{Optical-flow}$~\cite{CarreiraZ17} & \textbf{O} & 82.46 \\
I3D~\cite{CarreiraZ17} & \textbf{R}{\&}\textbf{O} & \textbf{86.93}\\
\bottomrule
\end{tabular}
\label{tab:ablation_feature}
\end{table}
\subsection{Ablation Study}\label{Ablation}
To evaluate the effectiveness of the local spatiotemporal feature extractor, we compare four different spatiotemporal networks, namely C3D~\cite{Du2015Learning}, $\rm I3D^{RGB}$~\cite{CarreiraZ17}, $\rm I3D^{Optical-flow}$~\cite{CarreiraZ17} and I3D~\cite{CarreiraZ17}, on LAD. As shown in Table~\ref{tab:ablation_feature}, our method with C3D achieves a frame-level AUC of 77.21\%, and the $\rm I3D^{RGB}$- and $\rm I3D^{Optical-flow}$-based variants achieve 84.43\% and 82.46\% AUC, respectively. Our method with the two-stream I3D boosts the performance to a frame-level AUC of 86.93\%.
The comparison results with different loss functions in Table~\ref{tab:lossfunction} illustrate the boost brought by the proposed multi-task loss. Our method trained with $L_{1}$ alone is treated as the baseline; it achieves a frame-level AUC of 80.43\% on LAD, while our method with both $L_{1}$ and $L_{2}$ obtains an absolute AUC gain of 6.50\%. Moreover, joint training with both losses yields the best anomaly category classification accuracy of 59.3\%.
\begin{table}[ht]
\centering
\renewcommand\tabcolsep{18pt}
\caption{AUC and accuracy results obtained with different loss functions on LAD.}
\begin{tabular}{ c c c c}
\toprule
$L_{1}$ & $L_{2}$ & AUC & Accuracy\\
\midrule
\Checkmark & \XSolid & 80.43 & 3.1 \\
\XSolid & \Checkmark & 50.00 & 58.4 \\
\Checkmark & \Checkmark & \textbf{86.93} & \textbf{59.3} \\
\bottomrule
\end{tabular}
\label{tab:lossfunction}
\end{table}
\subsection{Qualitative Analysis}\label{Qualitative Analysis}
\begin{figure}[ht]
\setlength{\abovecaptionskip}{2pt}
\centering
\includegraphics[width=0.48\textwidth]{Figs/Ablation/K.pdf}
\caption{AUC of different $K$ values.}\label{fig:k}
\end{figure}
To gain insight into the hyper-parameter $K$, we perform experiments using the I3D local spatiotemporal feature extractor with different values of $K$, as shown in Fig.~\ref{fig:k}. Our method achieves the best AUC with $K=5$, while $K=2$ yields an absolute AUC reduction of 3.97\%, which confirms the necessity of capturing global spatiotemporal features. The comparison results with different $\lambda_{2}$ values are shown in Fig.~\ref{fig:lossw}, where we set $\lambda_{1}$=1; our method performs best when $\lambda_{2}$ is set to 10.
\begin{figure}[ht]
\setlength{\abovecaptionskip}{2pt}
\centering
\includegraphics[width=0.48\textwidth]{Figs/Ablation/lossweight.pdf}
\caption{AUC of different $\lambda_{2}$ values.}
\label{fig:lossw}
\end{figure}
\begin{figure}[ht]
\setlength{\abovecaptionskip}{2pt}
\centering
\includegraphics[width=0.5\textwidth]{Figs/Confusion/matrix.jpg}
\caption{Visualization of the confusion matrix of anomalous categories classification results by using our method.
}\label{fig:Confusion}
\end{figure}
\begin{table}[ht]
\centering
\renewcommand\tabcolsep{18pt}
\caption{Experimental results of anomalous categories classification.}
\begin{tabular}{ c c c }
\toprule
\quad & UCF-Crime &LAD\\
\midrule
TCNN~\cite{hou2017tube} & 28.4 & - \\
C3D~\cite{Du2015Learning} & 23.0 & 45.9 \\
Our Method & \textbf{59.6} & \textbf{59.3} \\
\bottomrule
\end{tabular}
\label{tab:Time_and_accuracy}
\end{table}
As shown in Table \ref{tab:Time_and_accuracy}, our method outperforms the competing models on UCF-Crime and LAD. Specifically, our method obtains a relative accuracy gain of over 100\% on UCF-Crime compared to TCNN~\cite{hou2017tube} and C3D. On LAD, our method improves the accuracy over C3D by an absolute 13.4\%.
To analyze the anomaly category classification performance, we show the confusion matrix of our method in Fig.~\ref{fig:Confusion}, where we can observe that the proposed model obtains a promising performance on abnormal event classification. The worst accuracy is obtained on the violence category, since its samples are easily misclassified into the crowd or fighting categories. The best accuracy is obtained on the crash, falling, fire and fallingtowater categories, since their anomaly definitions are clear and they are well separated from the other categories.
\section{Conclusion}\label{sec_conclusion}
In this study, we contribute a large-scale benchmark for anomaly detection in video sequences. It contains 2000 video sequences covering 14 anomaly categories, with both video-level and frame-level annotations. The proposed database enables research on anomaly detection in a fully-supervised manner. We then propose a multi-task computational model of anomaly detection that effectively learns local and global spatiotemporal contextual features of video sequences. In the proposed multi-task deep neural network, local spatiotemporal features are first extracted by an Inflated 3D convolutional network from each video segment. We then feed these local spatiotemporal contextual features into a recurrent convolutional architecture to learn global spatiotemporal contextual features. Finally, anomaly scores and abnormal event categories are predicted by the fully convolutional layers of the two sub-networks. Comparison experiments show that the proposed method outperforms state-of-the-art anomaly detection methods on public databases and the built LAD database. In the future, we will further investigate anomaly detection to improve its performance on video sequences.
\section{Acknowledgment}\label{Acknowledgment}
This work was supported in part by the Jiangxi Provincial Natural Science Foundation under 20202ACB202007, the Fok Ying Tung Education Foundation under 161061, the Foundation of Jiangxi Provincial Department of Education under Grants 20203BBE53033, the Youth Foundation of Jiangxi Education Department under Grants GJJ200535 and GJJ190279, and the Postgraduate Innovation Special Fund of Jiangxi Province, China under Grant YC2020-B139.
\bibliographystyle{IEEEtran}
\section{Introduction}
The spectral norm and the nuclear norm of a third order tensor play an important role in the tensor completion and recovery problem \cite{SGCH19, YZ16}. Both norms are NP-hard to compute \cite{FL17}, and studying them further is an active research topic \cite{Hu15, JYZ17, Li16}.
In this paper, unless otherwise stated, all the discussions will be carried out in the field of real numbers.
The spectral norm of a third order tensor is the largest singular value of that tensor, and the nuclear norm is the dual norm of the spectral norm. Hence,
singular values of a third order tensor form the basis of the spectral norm and the nuclear norm. Recall that the product of a (possibly rectangular) matrix and its transpose is a positive semi-definite symmetric (square) matrix. There is a one-to-one correspondence between the singular values of the original matrix and the square roots of the eigenvalues of that positive semi-definite symmetric matrix. Then the spectral norm of the original matrix is equal to the square root of the spectral radius of that positive semi-definite symmetric matrix. Does such a relation still exist for a third order tensor? In the next section, we give an affirmative answer to this question. We show that if we contract a third order tensor with itself on one index, then we obtain a positive semi-definite biquadratic tensor. A real number is a singular value of that third order tensor if and only if it is the square root of an M-eigenvalue of that positive semi-definite biquadratic tensor. Thus, the spectral norm of that third order tensor is the square root of the spectral norm of that positive semi-definite biquadratic tensor.
In Section 3, we show that the square root of the nuclear norm of that positive semi-definite biquadratic tensor is a lower bound of the nuclear norm of that third order tensor. The equality may not hold in general.
The equality between the spectral norm of a third order tensor and the spectral norm of a positive semi-definite biquadratic tensor does not change the complexity of the problem, but it provides an alternative way to attack it. In Sections 4 and 5, by this relation, we present several upper and lower bounds for the spectral norm of a third order tensor in terms of spectral radii of some symmetric matrices. In Section 6, we establish some relations between these upper and lower bounds, and thus give a range for the spectral norm of that third order tensor.
In Section 7, we present some lower bounds for the nuclear norm of a third order tensor, in terms of the nuclear norms of some symmetric matrices.
Some final remarks are made in Section 8.
\section{Spectral Norm}
Suppose that $d_1, d_2$ and $d_3$ are positive integers. Without loss of generality, we may assume that $d_1 \le d_2 \le d_3$.
Let $\Re^{d_1 \times d_2 \times d_3}$ be the space of third order tensors of dimension $d_1 \times d_2 \times d_3$. The singular values of a tensor $\A = (a_{ijk}) \in \Re^{d_1 \times d_2 \times d_3}$ are defined as follows \cite{Lim05}.
\begin{definition}
A real number $\lambda$ is called a singular value of $\A$ if there are vectors $\x = (x_1, \cdots, x_{d_1})^\top \in \Re^{d_1}, \y = (y_1, \cdots, y_{d_2})^\top \in \Re^{d_2}, \z = (z_1, \cdots, z_{d_3})^\top \in \Re^{d_3}$ such that the following equations are satisfied:
For $i = 1, \cdots, d_1$,
\begin{equation} \label{e1}
\sum_{j=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}y_jz_k = \lambda x_i;
\end{equation}
For $j = 1, \cdots, d_2$,
\begin{equation} \label{e2}
\sum_{i=1}^{d_1}\sum_{k=1}^{d_3} a_{ijk}x_iz_k = \lambda y_j;
\end{equation}
For $k = 1, \cdots, d_3$,
\begin{equation} \label{e3}
\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} a_{ijk}x_iy_j = \lambda z_k;
\end{equation}
and
\begin{equation} \label{e4}
\x^\top \x = \y^\top \y = \z^\top \z = 1.
\end{equation}
Then $\x, \y$ and $\z$ are called the corresponding singular vectors.
\end{definition}
If $\lambda$ is a singular value of $\A$, with singular vectors $\x, \y$ and $\z$, then by definition, $-\lambda$ is also a singular value of $\A$, with singular vectors $-\x, -\y$ and $-\z$.
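As a numerical illustration of this definition, the sketch below runs an alternating (higher-order power method) iteration on a random tensor and checks that the resulting triple $(\x,\y,\z)$ and value $\lambda$ satisfy the singular value equations above up to a small residual. The tensor, the seed, and the iteration count are illustrative choices; the iteration generically converges to a singular triple, though not necessarily the largest singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4))

# alternating updates: each step solves one of the singular-value
# equations for a unit vector while holding the other two fixed
x = rng.standard_normal(2); x /= np.linalg.norm(x)
y = rng.standard_normal(3); y /= np.linalg.norm(y)
z = rng.standard_normal(4); z /= np.linalg.norm(z)
for _ in range(2000):
    x = np.einsum('ijk,j,k->i', A, y, z); x /= np.linalg.norm(x)
    y = np.einsum('ijk,i,k->j', A, x, z); y /= np.linalg.norm(y)
    z = np.einsum('ijk,i,j->k', A, x, y)
    lam = np.linalg.norm(z); z /= lam
```

At a fixed point of this iteration, all three equations hold with the same $\lambda$, and $\lambda = \langle \A, \x \otimes \y \otimes \z \rangle$.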
For $\A = (a_{ijk}), \B = (b_{ijk}) \in \Re^{d_1 \times d_2 \times d_3}$, their inner product is defined as
$$
\langle \A, \B \rangle := \sum_{i=1}^{d_1} \sum_{j=1}^{d_2} \sum_{k=1}^{d_3} a_{ijk}b_{ijk}.
$$
In a special case, if $\B$ is rank-one, i.e., $\B = (b_{ijk}) = \x \otimes \y \otimes \z$ for some nonzero vectors $\x \in \Re^{d_1}, \y \in \Re^{d_2}, \z \in \Re^{d_3}$, or equivalently $b_{ijk} = x_iy_jz_k$ for $i = 1, \cdots, d_1, j = 1, \cdots, d_2$ and $k = 1, \cdots, d_3$, then
$$\langle \A, \x \otimes \y \otimes \z \rangle \equiv \sum_{i=1}^{d_1} \sum_{j=1}^{d_2} \sum_{k=1}^{d_3} a_{ijk}x_iy_jz_k.$$
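This equivalence between the full inner product with a rank-one tensor and the direct contraction is easy to check numerically; the following NumPy sketch does so on a random tensor (the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3, 4))
x, y, z = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

rank_one = np.einsum('i,j,k->ijk', x, y, z)   # entries b_ijk = x_i y_j z_k
inner = np.sum(A * rank_one)                  # <A, x (x) y (x) z> via the inner product
contracted = np.einsum('ijk,i,j,k->', A, x, y, z)  # direct triple contraction
```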
\begin{definition}
The spectral norm of $\A \in \Re^{d_1 \times d_2 \times d_3}$ is defined \cite{FL17, Hu15, JYZ17, Li16} as
\begin{equation} \label{n1}
\| \A \| : = \max \left\{ \langle \A, \x \otimes \y \otimes \z \rangle : \x^\top \x = \y^\top \y = \z^\top \z = 1, \x \in \Re^{d_1}, \y \in \Re^{d_2}, \z \in \Re^{d_3} \right\}.
\end{equation}
\end{definition}
Then the spectral norm of $\A$ is equal to the largest singular value of $\A$ \cite{FL17, Hu15, JYZ17, Li16}.
\medskip
We now consider biquadratic tensors.
\begin{definition}
Let $\Re^{d_1 \times d_2 \times d_1 \times d_2}$ be the space of fourth order tensors of dimension $d_1 \times d_2 \times d_1 \times d_2$. Let $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$. The tensor $\T$ is called biquadratic if for all $i, p = 1, \cdots, d_1$ and $j, q = 1, \cdots, d_2$, we have
$$t_{ijpq} = t_{pjiq} = t_{pqij}.$$
The tensor $\T$ is called positive semi-definite if for any $\x \in \Re^{d_1}$ and $\y \in \Re^{d_2}$,
$$\langle \T, \x \otimes \y \otimes \x \otimes \y \rangle \equiv \sum_{i, p =1}^{d_1} \sum_{j, q = 1}^{d_2} t_{ijpq}x_iy_jx_py_q \ge 0.$$
The tensor $\T$ is called positive definite if for any $\x \in \Re^{d_1}, \x^\top \x = 1$ and $\y \in \Re^{d_2}, \y^\top \y = 1$,
$$\langle \T, \x \otimes \y \otimes \x \otimes \y \rangle \equiv \sum_{i, p =1}^{d_1} \sum_{j, q = 1}^{d_2} t_{ijpq}x_iy_jx_py_q > 0.$$
The spectral norm of $\T$ is defined by
\begin{equation} \label{n3}
\| \T \| := \max \left\{ \left| \langle \T, \x \otimes \y \otimes \x \otimes \y \rangle \right| : \x^\top \x = \y^\top \y = 1, \x \in \Re^{d_1}, \y \in \Re^{d_2} \right\}.
\end{equation}
\end{definition}
We may check that $\| \cdot \|$ defines a norm in $\Re^{d_1 \times d_2 \times d_1 \times d_2}$.
\begin{definition}
Suppose that $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ is biquadratic. A number $\mu$ is called an M-eigenvalue of $\T$ if there are vectors $\x = (x_1, \cdots, x_{d_1})^\top \in \Re^{d_1}, \y = (y_1, \cdots, y_{d_2})^\top \in \Re^{d_2}$ such that the following equations are satisfied:
For $i = 1, \cdots, d_1$,
\begin{equation} \label{e5}
\sum_{p=1}^{d_1}\sum_{j, q=1}^{d_2} t_{ijpq}y_jx_py_q = \mu x_i;
\end{equation}
For $j = 1, \cdots, d_2$,
\begin{equation} \label{e6}
\sum_{i,p=1}^{d_1}\sum_{q=1}^{d_2} t_{ijpq}x_ix_py_q = \mu y_j;
\end{equation}
and
\begin{equation} \label{e7}
\x^\top \x = \y^\top \y = 1.
\end{equation}
Then $\x$ and $\y$ are called the corresponding M-eigenvectors.
\end{definition}
\begin{theorem} \label{t1}
Suppose that $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ is biquadratic. Then its M-eigenvalues always exist. The spectral norm of $\T$ is equal to the largest absolute value of its M-eigenvalues. Furthermore, $\T$ is positive semi-definite if and only if all of its M-eigenvalues are nonnegative; $\T$ is positive definite if and only if all of its M-eigenvalues are positive. If $\T$ is positive semi-definite, then its spectral norm is equal to its largest M-eigenvalue.
\end{theorem}
{\bf Proof} Consider the optimization problem
\begin{equation} \label{e8}
\min \left\{ \langle \T, \x \otimes \y \otimes \x \otimes \y \rangle : \x^\top \x = \y^\top \y = 1, \x \in \Re^{d_1}, \y \in \Re^{d_2} \right\}.
\end{equation}
Since the objective function is continuous and the feasible region is compact, this optimization problem always has an optimal solution. Since the linear independence constraint qualification is satisfied, the optimality condition holds at that optimal solution. By optimization theory, the optimality condition of (\ref{e8}) has the form (\ref{e5}-\ref{e7}), and the optimal Lagrange multiplier $\mu$ always exists at the solution. This shows that $\T$ always has an M-eigenvalue.
Suppose that $\mu$ is an M-eigenvalue of $\T$ with corresponding vectors $\x$ and $\y$. By (\ref{e5}) and (\ref{e6}), we have
$$\mu = \langle \T, \x \otimes \y \otimes \x \otimes \y \rangle.$$
By this and (\ref{n3}), the spectral norm of $\T$ is equal to the largest absolute value of its M-eigenvalues.
By this and (\ref{e8}), $\T$ is positive semi-definite if and only if all of its M-eigenvalues are nonnegative; $\T$ is positive definite if and only if all of its M-eigenvalues are positive. If $\T$ is positive semi-definite, then all of its M-eigenvalues are nonnegative. This implies that its spectral norm is equal to its largest M-eigenvalue in this case.
\qed
For $d_1 = d_2 = 3$, the elastic tensor in solid mechanics takes the form of $\T$, with two additional symmetry properties, between indices $i$ and $j$, and between indices $p$ and $q$. Then the positive definiteness of $\T$ corresponds to the strong ellipticity condition in solid mechanics. In 2009, M-eigenvalues were introduced for the elastic tensor to characterize the strong ellipticity condition in \cite{QDH09}. An algorithm for computing the largest M-eigenvalue was presented in \cite{WQZ09}; also see \cite{QCC18} for details. Here, we extend M-eigenvalues to general biquadratic tensors and study their spectral norms.
\medskip
For $\A = (a_{ijk}) \in \Re^{d_1 \times d_2 \times d_3}$, consider its contraction with itself on the third index, $\T^{(3)} = \left(t^{(3)}_{ijpq}\right) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$, defined by
\begin{equation} \label{e9}
t^{(3)}_{ijpq} = \sum_{k=1}^{d_3} a_{ijk}a_{pqk}.
\end{equation}
Then $\T^{(3)}$ is biquadratic. For any $\x \in \Re^{d_1}$ and $\y \in \Re^{d_2}$,
$$\langle \T^{(3)}, \x \otimes \y \otimes \x \otimes \y \rangle = \sum_{k=1}^{d_3} \left( \sum_{i=1}^{d_1} \sum_{j= 1}^{d_2} a_{ijk}x_iy_j \right)^2 \ge 0.$$
Hence $\T^{(3)}$ is also positive semi-definite.
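This construction and the positive semi-definiteness argument are easy to check numerically. The sketch below builds $\T^{(3)}$ with `einsum` for a random tensor (shapes are illustrative) and verifies both the sum-of-squares identity above and its nonnegativity, as well as the pair-swap symmetry $t^{(3)}_{ijpq} = t^{(3)}_{pqij}$.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4, 5))

# contraction of A with itself on the third index: t_ijpq = sum_k a_ijk a_pqk
T3 = np.einsum('ijk,pqk->ijpq', A, A)

x = rng.standard_normal(3)
y = rng.standard_normal(4)
quad = np.einsum('ijpq,i,j,p,q->', T3, x, y, x, y)        # <T3, x(x)y(x)x(x)y>
sos = np.sum(np.einsum('ijk,i,j->k', A, x, y) ** 2)       # sum_k (sum_ij a_ijk x_i y_j)^2
```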
\begin{theorem} \label{t2}
Let $\A = (a_{ijk}) \in \Re^{d_1 \times d_2 \times d_3}$ and $\T^{(3)} = \left(t^{(3)}_{ijpq}\right) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ be constructed as above. Then $\lambda$ is a nonzero singular value of $\A$, with $\x \in \Re^{d_1}, \y \in \Re^{d_2}$ and $\z \in \Re^{d_3}$ as its corresponding singular vectors, if and only if it is a square root of an M-eigenvalue of $\T^{(3)}$, with $\x$ and $\y$ as corresponding M-eigenvectors. This also implies that the spectral norm of $\A$ is equal to the square root of the largest M-eigenvalue of $\T^{(3)}$.
\end{theorem}
{\bf Proof} Suppose that $\lambda \not = 0$ is a singular value of $\A$, with corresponding singular vectors $\x, \y$ and $\z$, satisfying (\ref{e1}-\ref{e4}). Multiplying (\ref{e1}) and (\ref{e2}) by $\lambda$ and substituting
$$\lambda z_k = \sum_{p=1}^{d_1} \sum_{q=1}^{d_2} a_{pqk}x_py_q$$
into these two equations, we see that $\mu = \lambda^2$ is an M-eigenvalue of $\T^{(3)}$, with $\x$ and $\y$ as the corresponding M-eigenvectors.
On the other hand, assume that $\mu = \lambda^2 \not = 0$ is an M-eigenvalue of $\T^{(3)}$, with corresponding M-eigenvectors $\x$ and $\y$, satisfying (\ref{e5}-\ref{e7}), where $\T^{(3)}$ is constructed as above. Let
$\z = (z_1, \cdots, z_{d_3})^\top$ with
$$ z_k = {1 \over \lambda} \sum_{i=1}^{d_1} \sum_{j=1}^{d_2}a_{ijk}x_iy_j.$$
Then (\ref{e3}) is satisfied.
$$\begin{aligned}
\z^\top \z & = {1 \over \lambda^2} \sum_{k=1}^{d_3} \left(\sum_{i=1}^{d_1} \sum_{j=1}^{d_2}a_{ijk}x_iy_j \sum_{p=1}^{d_1} \sum_{q=1}^{d_2}a_{pqk}x_py_q \right) \\
& = {1 \over \mu}\sum_{i, p=1}^{d_1}\sum_{j, q=1}^{d_2} \left( \sum_{k=1}^{d_3} a_{ijk}a_{pqk}\right)x_iy_jx_py_q \\
& = {1 \over \mu} \sum_{i=1}^{d_1} \left( \sum_{p=1}^{d_1}\sum_{j, q=1}^{d_2} t^{(3)}_{ijpq}y_jx_py_q \right)x_i \\
& = \sum_{i=1}^{d_1} x_i^2 \\
& = 1.
\end{aligned}$$
This proves (\ref{e4}).
We also have
$$\begin{aligned}
\sum_{j=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}y_jz_k & = {1 \over \lambda} \sum_{p=1}^{d_1} \sum_{j, q=1}^{d_2}\sum_{k=1}^{d_3} a_{ijk}a_{pqk}x_py_jy_q\\
& = {1 \over \lambda} \sum_{p=1}^{d_1} \sum_{j, q=1}^{d_2} t^{(3)}_{ijpq}x_py_jy_q\\
& = {\mu x_i \over \lambda}\\
& = \lambda x_i.
\end{aligned}
$$
This proves (\ref{e1}). We may prove (\ref{e2}) similarly. Hence, $\lambda$ is a singular value of $\A$, with $\x, \y$ and $\z$ as the corresponding singular vectors.
By Theorem \ref{t1}, we now conclude that the spectral norm of $\A$ is equal to the square root of the largest M-eigenvalue of $\T^{(3)}$.
\qed
{\bf Example 1} Let the entries of $\A = (a_{ijk}) \in \Re^{2\times 2 \times 3}$ be
$$ \begin{aligned} a_{111} &= 4, & a_{121} &= 1, & a_{112} &= 3, & a_{122} &= 2, & a_{113} &= 2, & a_{123} &= -1, \\ a_{211} &= -1, & a_{221} &= 2, & a_{212} &= -5, & a_{222} &= 1, & a_{213} &= 3, & a_{223} &= 4. \end{aligned}$$
Calculating the spectral norm of $\A$ from its definition, we find that it is $6.7673$.
Then the entries of $\T^{(3)} =\left(t^{(3)}_{ijpq}\right)$ are $t^{(3)}_{1111} = 29$, $t^{(3)}_{1112} = t^{(3)}_{1211} = 8$, $t^{(3)}_{1121} = t^{(3)}_{2111} = -13$, $t^{(3)}_{1212} = 6$, $t^{(3)}_{1221} = t^{(3)}_{2112} = -14$, $t^{(3)}_{1122} = t^{(3)}_{2211} = 19$, $t^{(3)}_{2121} = 35$, $t^{(3)}_{1222} = t^{(3)}_{2212} = 0$, $t^{(3)}_{2122} = t^{(3)}_{2221} = 5$, $t^{(3)}_{2222} = 21$.
Calculating the spectral norm of $\T^{(3)}$ from its definition, we find that it is $45.7959$. Its square root is $6.7673$, which is equal to the spectral norm of $\A$.
\qed
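The construction (\ref{e9}) and the norm identity of Theorem \ref{t2} are easy to check numerically. The following sketch (Python with NumPy; the grid-search verification is ours, not part of the paper's computations) rebuilds $\T^{(3)}$ for the tensor of Example 1 and compares the spectral norm of $\A$ with the square root of the largest M-eigenvalue of $\T^{(3)}$. For each fixed unit vector $\x$, the maximization over unit $\y$ is carried out exactly via a small eigenvalue (respectively singular value) problem, so only a one-dimensional grid over $\x$ is needed.

```python
import numpy as np

# The 2 x 2 x 3 tensor of Example 1; A[i, j, k] holds a_{(i+1)(j+1)(k+1)}.
A = np.zeros((2, 2, 3))
A[0, 0] = [4, 3, 2]
A[0, 1] = [1, 2, -1]
A[1, 0] = [-1, -5, 3]
A[1, 1] = [2, 1, 4]

# T^{(3)}_{ijpq} = sum_k a_{ijk} a_{pqk}: contraction over the third index.
T3 = np.einsum('ijk,pqk->ijpq', A, A)
assert T3[0, 0, 0, 0] == 29 and T3[1, 0, 1, 0] == 35 and T3[1, 1, 1, 1] == 21

best_mu = 0.0     # largest M-eigenvalue of T^{(3)} found so far
best_sigma = 0.0  # spectral norm of A found so far
for t in np.linspace(0.0, np.pi, 10001):
    x = np.array([np.cos(t), np.sin(t)])
    Mx = np.einsum('ijk,i->jk', A, x)  # the 2 x 3 slice A(x, ., .)
    # max over unit y of |A(x, y, .)|_2 is the largest singular value of Mx;
    # max over unit y of <T3, x@y@x@y> is the largest eigenvalue of Mx Mx^T.
    best_sigma = max(best_sigma, np.linalg.svd(Mx, compute_uv=False)[0])
    best_mu = max(best_mu, np.linalg.eigvalsh(Mx @ Mx.T)[-1])

print(best_sigma, np.sqrt(best_mu))  # both approximately 6.7673
```

The two printed values agree, in accordance with Theorem \ref{t2}, and match the value $6.7673$ reported in Example 1.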
\begin{corollary} \label{c1}
We may also consider the contraction of $\A$ and itself over its second index or the first index. Then we have a tensor $\T^{(2)}$ in $\Re^{d_1 \times d_3 \times d_1 \times d_3}$ and a tensor $\T^{(1)}$ in $\Re^{d_2 \times d_3 \times d_2 \times d_3}$. Theorem \ref{t2} is true for $\A$ and these two positive semi-definite biquadratic tensors $\T^{(2)}$ and $\T^{(1)}$ too.
\end{corollary}
Our numerical computation confirms the results of Theorem \ref{t2} and Corollary \ref{c1}.
\section{Nuclear Norm}
The nuclear norm is somewhat more important in the tensor completion and recovery problem \cite{SGCH19, YZ16}.
\begin{definition}
The nuclear norm of $\A \in \Re^{d_1 \times d_2 \times d_3}$ is defined \cite{FL17, Li16} as
\begin{equation} \label{n2}
\|\A \|_* := \inf \left\{ \sum_{i=1}^r |\lambda_i| : \A = \sum_{i=1}^r \lambda_i \uu_i \otimes \vv_i \otimes \w_i, {\uu_i^\top \uu_i = \vv_i^\top \vv_i = \w_i^\top \w_i = 1, \atop \lambda_i\in \Re, \uu_i \in \Re^{d_1}, \vv_i \in \Re^{d_2}, \w_i \in \Re^{d_3},} i=1, \cdots, r \right\}.
\end{equation}
\end{definition}
Then we have \cite{FL17, Li16}
\begin{equation}\label{eq:dual}
\|\A \|_* := \max \left\{ \langle \A, \B \rangle : \| \B \| = 1, \B \in \Re^{d_1 \times d_2 \times d_3} \right\}.
\end{equation}
We may define the nuclear norm of a tensor in $\Re^{d_1 \times d_2 \times d_1 \times d_2}$ similarly.
\begin{definition}
The nuclear norm of $\T \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ is defined as
\begin{equation} \label{n4}
\|\T \|_* := \inf \left\{ \sum_{i=1}^r |\lambda_i| : \T = \sum_{i=1}^r \lambda_i \uu_i \otimes \vv_i \otimes \w_i \otimes \s_i, {\uu_i^\top \uu_i = \vv_i^\top \vv_i = \w_i^\top \w_i = \s_i^\top \s_i = 1, \atop \lambda_i\in\Re, \uu_i, \w_i \in \Re^{d_1}, \vv_i, \s_i \in \Re^{d_2},} i=1, \cdots, r \right\}.
\end{equation}
\end{definition}
Then we have the following theorem.
\begin{theorem} \label{t3}
Suppose that $\A = (a_{ijk}) \in \Re^{d_1 \times d_2 \times d_3}$, and $\T^{(3)} = \left(t^{(3)}_{ijpq}\right)$ is constructed by (\ref{e9}). Assume $\| \A \|_*$ and $\|\T^{(3)}\|_*$ are defined by (\ref{n2}) and (\ref{n4}) respectively. Then
\begin {equation} \label{e14}
\| \A \|_*^2 \ge \left\| \T^{(3)} \right\|_*\ge \frac{1}{d_3}\| \A\|_*^2.
\end{equation}
\end{theorem}
{\bf Proof} For any $\epsilon > 0$, by (\ref{n2}), there exist a positive integer $r$, scalars $\lambda_i \in \Re$, and vectors $\uu_i \in \Re^{d_1}, \vv_i \in \Re^{d_2}, \w_i \in \Re^{d_3}$ such that
$$\uu_i^\top \uu_i = \vv_i^\top \vv_i = \w_i^\top \w_i = 1,$$
for $i = 1, \cdots, r$, and
$$\A = \sum_{i=1}^r \lambda_i \uu_i \otimes \vv_i \otimes \w_i$$
and
$$\|\A \|_* + \epsilon \ge \sum_{i=1}^r |\lambda_i|.$$
By (\ref{e9}), we have
$$\T^{(3)} = \sum_{i, j=1}^r \lambda_i\lambda_j\alpha_{ij} \uu_i \otimes \vv_i \otimes \uu_j \otimes \vv_j,$$
where $\alpha_{ij}=\w_i^\top \w_j$.
Since the $\uu_i$ and $\vv_i$ are unit vectors and $|\alpha_{ij}| = |\w_i^\top \w_j| \le 1$, this is a decomposition of $\T^{(3)}$ of the form required in (\ref{n4}), with coefficient sum
$$\sum_{i, j=1}^r |\lambda_i\lambda_j\alpha_{ij}| \le \left(\sum_{i=1}^r |\lambda_i|\right)^2 \le \left(\|\A \|_* + \epsilon\right)^2.$$
Hence, by (\ref{n4}), we have
$$\left(\|\A \|_* + \epsilon\right)^2 \ge \left\|\T^{(3)}\right\|_*$$
for any $\epsilon > 0$.
This proves the first inequality in (\ref{e14}).
For the lower bound in \eqref{e14}, suppose that $\B\in\Re^{d_1\times d_2\times d_3}$ is such that
\[
\|\B\|=1\ \text{and }\langle\A,\B\rangle =\|\A\|_*.
\]
For simplicity of notation, denote the $d_1\times d_2$ matrix slice $[a_{\cdot\cdot k}]$ by $A_k$ for each $k=1,\dots,d_3$, and similarly write $B_k$ for the slices of $\B$. Since $\|\A\|_*$ is the maximum of $\langle\A,\B\rangle$ over all tensors $\B$ with unit spectral norm, and the spectral norm is defined by maximizing a multilinear function over the joint sphere (cf.\ \eqref{n1}), we must have that
\[
\langle A_k,B_k\rangle\geq 0\ \text{for all }k=1,\dots,d_3\ \text{and }\|\A\|_*=\sum_{k=1}^{d_3} \langle A_k,B_k\rangle.
\]
Let the tensor $\mathcal S$ be constructed from $\B$ in the same way that $\T^{(3)}$ is constructed from $\A$, i.e., $\mathcal S=\sum_{k=1}^{d_3}B_k\otimes B_k$. It follows from Theorems \ref{t1} and \ref{t2} that
\[
\|\mathcal S\|=1.
\]
Then, by \eqref{eq:dual}, we have
\[
\left\|\T^{(3)}\right\|_*\geq \langle\T^{(3)},\mathcal S\rangle=\sum_{k=1}^{d_3}\langle A_k,B_k\rangle^2\geq \frac{1}{d_3}\left(\sum_{k=1}^{d_3}\langle A_k,B_k\rangle\right)^2= \frac{1}{d_3}\|\A\|_*^2.
\]
The second inequality in \eqref{e14} is thus proved.
\qed
Numerical computations show that strict inequality may hold in (\ref{e14}).
\begin{corollary} \label{c2}
We may also consider the contraction of $\A$ and itself over its second index or the first index. Then we have a tensor $\T^{(2)}$ in $\Re^{d_1 \times d_3 \times d_1 \times d_3}$ and a tensor $\T^{(1)}$ in $\Re^{d_2 \times d_3 \times d_2 \times d_3}$. Theorem \ref{t3} is true for $\A$ and these two positive semi-definite biquadratic tensors $\T^{(2)}$ and $\T^{(1)}$ too.
\end{corollary}
Numerical computation shows that the nuclear norms of these three positive semi-definite biquadratic tensors can be different for a third order tensor $\A$.
\section{Upper Bounds}
Theorems \ref{t2} and \ref{t3} connect the spectral norm and nuclear norm of a third order tensor with the spectral norms and nuclear norms of three positive semi-definite biquadratic tensors. This does not change the complexity of the problem, but it provides an alternative way to attack it. In particular, a biquadratic tensor has additional structure, such as diagonal structure. In 2009, Wang, Qi and Zhang \cite{WQZ09} presented a practical method for computing the largest M-eigenvalue of a biquadratic tensor.
Thus, we may apply that method to compute the spectral norm of a biquadratic tensor.
We first present an attainable bound for a biquadratic tensor.
Let $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ be a biquadratic tensor. We may unfold $\T$ to a $d_1d_2 \times d_1d_2$ matrix $T = (t_{<ij><pq>})$, where $<ij>$ is regarded as one index, $<ij>\equiv (i-1)d_2+j = 1, \cdots, d_1d_2$, and $<pq>$ is regarded as another index, $<pq> \equiv (p-1)d_2+q = 1, \cdots, d_1d_2$. Since $\T$ is biquadratic, the matrix $T$ is symmetric. Note that even if $\T$ is positive semi-definite, $T$ may not be positive semi-definite. On the other hand, if $T$ is positive semi-definite, then $\T$ is always positive semi-definite. If $\T$ is constructed from a third order tensor as in the previous sections, it can be shown that the corresponding matrix $T$ is indeed positive semi-definite. We do not go into this detail here.
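As a sketch of this flattening (Python with NumPy; zero-based indices, so $<ij>$ becomes $i\,d_2+j$, and the random tensor is purely illustrative), the unfolding is a plain reshape, and for a tensor $\T^{(3)}$ built from a third order tensor the flattened matrix is symmetric positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, d3 = 3, 4, 5
A = rng.standard_normal((d1, d2, d3))

# T^{(3)}_{ijpq} = sum_k a_{ijk} a_{pqk}; flatten (i, j) -> i * d2 + j (0-based).
T3 = np.einsum('ijk,pqk->ijpq', A, A)
Tm = T3.reshape(d1 * d2, d1 * d2)

assert np.allclose(Tm, Tm.T)                  # symmetric
assert np.linalg.eigvalsh(Tm).min() > -1e-10  # positive semi-definite
# Indeed Tm = G G^T, where G is the d1*d2 x d3 unfolding of A:
G = A.reshape(d1 * d2, d3)
assert np.allclose(Tm, G @ G.T)
```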
We say that $\T$ is rank-one if there are nonzero $\uu \in \Re^{d_1}$ and $\vv \in \Re^{d_2}$ such that $\T = \uu \otimes \vv \otimes \uu \otimes \vv$.
\begin{theorem} \label{t4}
Suppose that $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ is a biquadratic tensor. Let the symmetric $d_1d_2 \times d_1d_2$ matrix $T$ be constructed as above. Then the spectral radius of $T$ is an upper bound of the spectral norm of $\T$. This upper bound is attained if $\T$ is rank-one. Thus, this upper bound is attainable even if $\T$ is the contraction of a third order tensor $\A$ with $\A$ itself by (\ref{e9}).
\end{theorem}
{\bf Proof} The spectral radius of the symmetric matrix $T$ can be calculated as follows.
\begin{equation} \label{e15}
\rho(T) = \max \left\{ \left| \s^\top T \s \right| : \s^\top \s = 1, \s \in \Re^{d_1d_2} \right\}.
\end{equation}
We may fold $\s$ to a $d_1 \times d_2$ matrix $S = (s_{ij})$. Then
$$\s^\top T \s = \langle \T, S \otimes S \rangle \equiv \sum_{i,p=1}^{d_1} \sum_{j,q=1}^{d_2} t_{ijpq}s_{ij}s_{pq}.$$
On the other hand, let $S = \x \otimes \y$ with $\x^\top \x = \y^\top \y = 1$, $\x \in \Re^{d_1}$, $\y \in \Re^{d_2}$. Then $\x^\top \x = \y^\top \y = 1$ implies that the vector $\s$ corresponding to the matrix $S$ satisfies $\s^\top \s = 1$. Compare the maximization problems in (\ref{n3}) and (\ref{e15}). The feasible region of (\ref{n3}) is a subset of the feasible region of (\ref{e15}), and on the feasible region of (\ref{n3}) the two objective functions are equal. Thus, the optimal objective function value of (\ref{e15}), i.e., the spectral radius of the symmetric matrix $T$, is an upper bound of the optimal objective function value of (\ref{n3}), i.e., the spectral norm of $\T$. When $\T$ is rank-one, a maximizer of (\ref{e15}) corresponds to a rank-one matrix $S = \x \otimes \y$ and thus lies in the feasible region of (\ref{n3}), so the two optimal objective function values coincide and the upper bound is attained. If $\A$ is rank-one, then $\T = \T^{(3)}$ formed by (\ref{e9}) is also rank-one. Thus, this upper bound is attainable even if $\T$ is formed by (\ref{e9}).
\qed
{\bf Example 1 (Continued)} In this example, we have
$$T^{(3)} = \left(\begin{matrix} 29 & 8 & -13 & 19 \\ 8 & 6 & -14 & 0 \\ -13 & -14 & 35 & 5 \\ 19 & 0 & 5 & 21 \end{matrix}\right).$$
By calculation, the spectral radius of $T^{(3)}$ is $53.1980$. Its square root is $7.2937$. This gives an upper bound for the spectral norm of $\A$.
\qed
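This computation amounts to a single symmetric eigenvalue problem. A sketch (Python with NumPy; here the flattened $T^{(3)}$ is positive semi-definite, so its spectral radius is its largest eigenvalue):

```python
import numpy as np

# Flattened matrix T^{(3)} from Example 1 (rows/columns ordered 11, 12, 21, 22).
T = np.array([[ 29,   8, -13,  19],
              [  8,   6, -14,   0],
              [-13, -14,  35,   5],
              [ 19,   0,   5,  21]], dtype=float)

rho = np.linalg.eigvalsh(T)[-1]  # spectral radius (T is PSD here)
print(rho, np.sqrt(rho))         # approximately 53.1980 and 7.2937 (cf. the text)
```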
As in Corollaries \ref{c1} and \ref{c2}, if we take the contraction over the first or the second index of a third order tensor $\A$, we may get different upper bounds for the spectral norm of $\A$. Hence, there are in total three upper bounds for the spectral norm of a third order tensor. For Example 1, the two other upper bounds are $8.2529$ and $7.8874$, which are not better than $7.2937$. This approach involves the calculation of the spectral radius of a $d_1d_2 \times d_1d_2$ (or $d_1d_3 \times d_1d_3$ or $d_2d_3 \times d_2d_3$) symmetric matrix, which becomes high dimensional when $d_1, d_2$ and $d_3$ are large.
We now present a different way to obtain such upper bounds. Consider the contraction of $\A$ with itself over the second and third indices. This results in a matrix $B^{(1)} = \left(b^{(1)}_{ij}\right) \in \Re^{d_1 \times d_1}$, with
\begin{equation} \label{4.16}
b^{(1)}_{ij} = \sum_{k=1}^{d_2} \sum_{l=1}^{d_3} a_{ikl}a_{jkl}.
\end{equation}
Then $B^{(1)}$ is a symmetric matrix.
\begin{theorem} \label{t4.2}
Let $\A \in \Re^{d_1 \times d_2 \times d_3}$ and let $B^{(1)}$ be constructed by (\ref{4.16}). The matrix $B^{(1)}$ is positive semi-definite. The square root of its spectral radius is an upper bound of the spectral norm of $\A$. This upper bound is equal to the upper bound stated in Theorem \ref{t4}, when $\T$ in Theorem \ref{t4} is the contraction of $\A$ with $\A$ itself on its first index. Thus, this upper bound is also attainable.
\end{theorem}
{\bf Proof} We may unfold $\A = (a_{ijk})$ to a $d_1 \times d_2d_3$ matrix $A^{(1)} = (a_{i<jk>})$, where $<jk>$ is regarded as one index, $<jk> \equiv (j-1)d_3+k = 1, \cdots, d_2d_3$. The spectral norm of the matrix $A^{(1)}$ can be calculated as
\begin{equation} \label{4.17}
\left\|A^{(1)}\right\| = \max \left\{ \x^\top A^{(1)} \s : \x^\top \x = \s^\top \s = 1, \x \in \Re^{d_1}, \s \in \Re^{d_2d_3} \right\}.
\end{equation}
Compare the maximal problems in (\ref{n1}) and (\ref{4.17}). The feasible region of (\ref{n1}) is a subset of (\ref{4.17}). In the feasible region of (\ref{n1}), the two objective functions are equal. Hence, the optimal objective function value of (\ref{4.17}), i.e., the spectral norm of the matrix $A^{(1)}$, is an upper bound of the optimal objective function value of
(\ref{n1}), i.e., the spectral norm of $\A$. The spectral norm of the matrix $A^{(1)}$ is the largest singular value of $A^{(1)}$, which is equal to the square root of the spectral radius of $A^{(1)}\left(A^{(1)}\right)^\top$. We now can recognize that $B^{(1)} = A^{(1)}\left(A^{(1)}\right)^\top$. Thus, $B^{(1)}$ is symmetric and positive semi-definite, and the square root of its spectral radius is an upper bound of the spectral norm of $\A$.
When $\T = \T^{(1)}$ in Corollary \ref{c1} is the contraction of $\A$ with $\A$ itself on its first index, the upper bound obtained there is equal to the upper bound obtained here. In fact, in this case, the upper bound stated in Corollary \ref{c1}, when $\T = \T^{(1)}$, is the square root of the spectral radius of $\left(A^{(1)}\right)^\top A^{(1)}$, while the upper bound given here is the square root of the spectral radius of $A^{(1)}\left(A^{(1)}\right)^\top$. By linear algebra, these two spectral radii are equal. Hence, this upper bound is also attainable.
\qed
As $B^{(1)}$ is a $d_1 \times d_1$ symmetric matrix, this approach is relatively easy to handle. We may also consider the contraction of $\A$ with itself over the first and third indices, or over the first and second indices. This yields another way to calculate the two other upper bounds for the spectral norm of $\A$.
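All three Gram-matrix bounds can be formed by single contractions. A sketch (Python with NumPy; the tensor of Example 1 is reused as test data, and the names `B2`, `B3` for the other two contractions are ours):

```python
import numpy as np

A = np.zeros((2, 2, 3))
A[0, 0] = [4, 3, 2]
A[0, 1] = [1, 2, -1]
A[1, 0] = [-1, -5, 3]
A[1, 1] = [2, 1, 4]

# Contract A with itself over all but one index; each result is symmetric PSD.
B1 = np.einsum('ijk,pjk->ip', A, A)  # keep first index:  d1 x d1
B2 = np.einsum('ijk,iqk->jq', A, A)  # keep second index: d2 x d2
B3 = np.einsum('ijk,ijl->kl', A, A)  # keep third index:  d3 x d3

bounds = [np.sqrt(np.linalg.eigvalsh(B)[-1]) for B in (B1, B2, B3)]
print(bounds)  # approximately [7.8874, 8.2529, 7.2937], cf. the text
```

The smallest of the three values, $7.2937$, reproduces the upper bound of the continued Example 1.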
\section{Lower Bounds}
We present two attainable lower bounds for the spectral norm of the biquadratic tensor $\T$ in this section.
Let $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ be a biquadratic tensor. We say that $\T$ is diagonal with respect to its first and third indices if $t_{ijpq} = 0$ whenever $i \not = p$. We say that $\T$ is diagonal with respect to its second and fourth indices if $t_{ijpq} = 0$ whenever $j \not = q$.
\begin{theorem} \label{t5}
Let $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ be a biquadratic tensor. A lower bound for the spectral norm of $\T$ is the maximum of the spectral radii of $d_2$ symmetric $d_1 \times d_1$ matrices $(t_{ijpj})$, where $j$ is fixed, for $j = 1, \cdots, d_2$. This lower bound is attained if $\T$ is diagonal with respect to its second and fourth indices. Another lower bound for the spectral norm of $\T$ is the maximum of the spectral radii of $d_1$ symmetric $d_2 \times d_2$ matrices $(t_{ijiq})$, where $i$ is fixed, for $i = 1, \cdots, d_1$. This lower bound is attained if $\T$ is diagonal with respect to its first and third indices.
\end{theorem}
{\bf Proof} Fix $j$. Let $\y$ be a unit vector in $\Re^{d_2}$ such that its $j$th component is $1$ and its other components are zero. Then the objective function of (\ref{n3}) is equal to
$$\langle \T, \x \otimes \y \otimes \x \otimes \y \rangle = \sum_{i,p=1}^{d_1}t_{ijpj}x_ix_p.$$
Let $\x$ be a unit eigenvector of the symmetric matrix $(t_{ijpj})$ such that
$$\left|\sum_{i,p=1}^{d_1}t_{ijpj}x_ix_p\right| = \rho\left((t_{\cdot j \cdot j})\right),$$
where $\rho\left((t_{\cdot j \cdot j})\right)$ is the spectral radius of the symmetric $d_1 \times d_1$ matrix $(t_{\cdot j\cdot j})$. This holds for each $j = 1, \cdots, d_2$. Hence, the maximum of the spectral radii of the $d_2$ symmetric $d_1 \times d_1$ matrices $(t_{\cdot j\cdot j})$, $j = 1, \cdots, d_2$, is a lower bound for the spectral norm of $\T$. Suppose now that $\T$ is diagonal with respect to its second and fourth indices. Then for any feasible $\x$ and $\y$, the objective function value of (\ref{n3}) is
$$\langle \T, \x \otimes \y \otimes \x \otimes \y \rangle = \sum_{j=1}^{d_2} y_j^2 \sum_{i,p=1}^{d_1} t_{ijpj}x_ix_p,$$
a convex combination (with weights $y_j^2$) of quadratic forms, each bounded in absolute value by the spectral radius of the corresponding matrix $(t_{\cdot j\cdot j})$. Hence this lower bound is attained in this case. The other conclusion can be proved similarly.
\qed
{\bf Example 1 (Continued)} In this example, fixing $j = 1$ and $j = 2$, respectively, we obtain the two symmetric matrices
$$\left(\begin{matrix} 29 & - 13 \\ -13 & 35 \end{matrix}\right), \ \ \left(\begin{matrix} 6 & 0 \\ 0 & 21 \end{matrix}\right).$$
Their spectral radii are $45.3417$ and $21$, respectively, so the maximum is $45.3417$. This gives a lower bound for the spectral norm of $\T^{(3)}$. Its square root, $6.7336$, is a lower bound for the spectral norm of $\A$.
Similarly, fixing $i = 1$ and $i = 2$, respectively, we obtain the two symmetric matrices
$$\left(\begin{matrix} 29 & 8 \\ 8 & 6 \end{matrix}\right), \ \ \left(\begin{matrix} 35 & 5 \\ 5 & 21 \end{matrix}\right).$$
Their spectral radii are $31.5089$ and $36.6023$, respectively, so the maximum is $36.6023$. Its square root, $6.0500$, gives another lower bound for the spectral norm of $\A$.
\qed
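A sketch of these slice computations (Python with NumPy; the slice indexing is zero-based):

```python
import numpy as np

A = np.zeros((2, 2, 3))
A[0, 0] = [4, 3, 2]
A[0, 1] = [1, 2, -1]
A[1, 0] = [-1, -5, 3]
A[1, 1] = [2, 1, 4]
T3 = np.einsum('ijk,pqk->ijpq', A, A)

# Fix the second/fourth index j: symmetric d1 x d1 slices T3[:, j, :, j].
L_j = max(np.abs(np.linalg.eigvalsh(T3[:, j, :, j])).max() for j in range(2))
# Fix the first/third index i: symmetric d2 x d2 slices T3[i, :, i, :].
L_i = max(np.abs(np.linalg.eigvalsh(T3[i, :, i, :])).max() for i in range(2))

print(np.sqrt(L_j), np.sqrt(L_i))  # approximately 6.7336 and 6.0500
```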
A question is for which kind of third order tensor $\A$ these two lower bounds are attained.
As in Corollaries \ref{c1} and \ref{c2}, if we take the contraction over the first or the second index of a third order tensor $\A$, we may get different lower bounds for the spectral norm of $\A$. Hence, there are in total six lower bounds for the spectral norm of a third order tensor. In particular, for the tensor $\A$ of Example 1, we obtained above the lower bound $6.7336$ for the spectral norm of $\A$. As the spectral norm of $\A$ is $6.7673$, this lower bound is within about $0.5\%$ of the true value. Motivated by this accuracy, we calculated $1000$ randomly generated examples of $2 \times 2 \times 3$ tensors. We found that the fractions of lower bounds obtained in this way falling within $0.01\%$, $0.02\%$, $0.05\%$, $0.1\%$, $0.2\%$, $0.5\%$, $1\%$, $2\%$, $5\%$, $10\%$, $20\%$ and $50\%$ of the true value are $4.60\%$, $6.40\%$, $9.50\%$, $13.30\%$, $18.80\%$, $29.80\%$, $40.80\%$, $56.00\%$, $80.00\%$, $94.20\%$, $99.50\%$ and $100\%$, respectively. This shows that for such a third order tensor, there is a good chance of obtaining a tight lower bound in this way.
In this approach, the spectral radii of $d_i \times d_i$ symmetric matrices, for $i = 1, 2, 3$, are calculated. This only involves relatively low dimensional matrices, so this approach is relatively efficient.
\section{Relation}
The first lower bound of $\| \T\|$ in Theorem \ref{t5} may be denoted as
$$L = \max \left\{ \rho((t_{ijpj})) : j \ {\rm is\ fixed},\ j = 1, \cdots, d_2 \right\}.$$
Suppose that $\T = \T^{(3)}$ is constructed by (\ref{e9}) from a third order tensor $\A = (a_{ijk})$. Then
$$L = \max \left\{ \max \left\{ \sum_{i, p=1}^{d_1} \sum_{k=1}^{d_3} a_{ijk}a_{pjk}x_ix_p : \x^\top \x = 1, \x \in \Re^{d_1} \right\} : j = 1, \cdots, d_2 \right\}.$$
On the other hand, the spectral radius of the matrix $B^{(1)}$, constructed by (\ref{4.16}), is as follows.
$$\rho\left(B^{(1)}\right) = \max \left\{ \sum_{i, p=1}^{d_1} \sum_{j=1}^{d_2} \sum_{k=1}^{d_3} a_{ijk}a_{pjk}x_ix_p : \x^\top \x = 1, \x \in \Re^{d_1} \right\}.$$
Then we find that
$$\rho\left(B^{(1)}\right) \le d_2 L.$$
Combining this with Theorems \ref{t4.2} and \ref{t5}, we have the following theorem.
\begin{theorem} \label{t6}
Let $\A \in \Re^{d_1 \times d_2 \times d_3}$, $L$ and $B^{(1)}$ be constructed as above. Then we have
$${1 \over d_2}\rho\left(B^{(1)}\right) \le L \le \|\A\|^2 \le \rho\left(B^{(1)}\right) \le d_2L.$$
\end{theorem}
This establishes a two-sided range for the spectral norm of $\A$ in terms of either $L$ or $\rho\left(B^{(1)}\right)$. We may contract over other indices and obtain similar results. Combining them, we may get a better range for the spectral norm of $\A$.
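The chain of Theorem \ref{t6} can be checked on the tensor of Example 1; numerically, the quantities $L$ and $\rho\left(B^{(1)}\right)$ bracket $\|\A\|^2 \approx 45.7959$, the square of the spectral norm. A sketch (Python with NumPy; the value of $\|\A\|$ is taken from Example 1 rather than recomputed):

```python
import numpy as np

A = np.zeros((2, 2, 3))
A[0, 0] = [4, 3, 2]
A[0, 1] = [1, 2, -1]
A[1, 0] = [-1, -5, 3]
A[1, 1] = [2, 1, 4]
d2 = A.shape[1]

T3 = np.einsum('ijk,pqk->ijpq', A, A)
L = max(np.linalg.eigvalsh(T3[:, j, :, j])[-1] for j in range(d2))  # = 45.3417
B1 = np.einsum('ijk,pjk->ip', A, A)
rho = np.linalg.eigvalsh(B1)[-1]                                    # = 62.2108

norm_sq = 6.7673 ** 2  # ||A||^2 from Example 1
assert rho / d2 <= L <= norm_sq <= rho <= d2 * L
print(rho / d2, L, norm_sq, rho, d2 * L)
```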
\section{Lower Bounds for Nuclear Norms}
Suppose that $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ is a biquadratic tensor. Let the $d_1d_2 \times d_1d_2$ symmetric matrix $T$ be constructed as in Section 4. Then $T$ is a matrix flattening of the tensor $\T$. By Lemma 3.1 of \cite{Hu15}, there is a one-to-one correspondence between $d_1d_2 \times d_1d_2$ symmetric matrices and $d_1 \times d_2 \times d_1 \times d_2$ biquadratic tensors. Hence, with an argument similar to the proof of Proposition 4.1 of \cite{Hu15}, we have the following result.
\begin{theorem} \label{t7}
Suppose that $\T = (t_{ijpq}) \in \Re^{d_1 \times d_2 \times d_1 \times d_2}$ is a biquadratic tensor. Let the $d_1d_2 \times d_1d_2$ symmetric matrix $T$ be constructed as in Section 4. Then $\| T \|_* \le \| \T \|_*$.
\end{theorem}
Combining Theorems \ref{t3} and \ref{t7}, we obtain a lower bound for the nuclear norm of a third order tensor in terms of the nuclear norm of a matrix. Note that the nuclear norm of a tensor is NP-hard to compute, while the nuclear norm of a matrix is relatively easy to compute.
Suppose that $\A = (a_{ijk}) \in \Re^{d_1 \times d_2 \times d_3}$, $\T = \T^{(1)}$ is constructed by the contraction of $\A$ with $\A$ itself on the first index, and let $T^{(1)}$ be the $d_2d_3 \times d_2d_3$ matrix flattening of $\T^{(1)}$.
Then, the square root of the nuclear norm of $T^{(1)}$ also gives a lower bound for the nuclear norm of $\A$. Let $A^{(1)}$ be the matrix flattening of $\A$ given in the proof of Theorem \ref{t4.2}. By \cite{Hu15}, $\left\| A^{(1)}\right\|_*$ also gives a lower bound for $\|\A \|_*$. By the definition of nuclear norms, we find that $\left\|T^{(1)}\right\|_* = \left\| A^{(1)}\right\|_*^2$. Thus, the lower bound given here coincides with the lower bound given in \cite{Hu15} for $\|\A\|_*$. Let $B^{(1)}$ be the $d_1 \times d_1$ symmetric matrix constructed as in Theorem \ref{t4.2}. With an argument similar to the proof of Theorem \ref{t4.2} and by using the definition of nuclear norms, we may show that
$$\left\| T^{(1)}\right\|_* = \left\| B^{(1)}\right\|_* = \left\| A^{(1)}\right\|_*^2.$$
Since $B^{(1)}$ is symmetric and its dimension is lower, the approach using $B^{(1)}$ may be better than the approach using $A^{(1)}$ in \cite{Hu15}. In \cite{Hu15}, a range of $\|\A\|_*$ is given by $\left\| A^{(1)} \right\|_*$ as:
$$\left\| A^{(1)} \right\|_* \le \| \A \|_* \le \sqrt{ \min \{ d_2, d_3\}} \left\| A^{(1)} \right\|_*.$$
Then we have
$$\sqrt{\left\| B^{(1)} \right\|_*} \le \| \A \|_* \le \sqrt{\min \{ d_2, d_3\} \left\| B^{(1)} \right\|_*}.$$
\section{Final Remarks}
In \cite{JYZ17}, it was shown that the spectral norm and the nuclear norm of a tensor are equal to the spectral norm and the nuclear norm, respectively, of the Tucker core of that tensor. As the size of the Tucker core may be smaller than that of the original tensor, it may be possible to combine our results with that approach.
We may also explore more algorithms like that one in \cite{WQZ09} to compute the largest M-eigenvalue of a positive semi-definite biquadratic tensor, and use them for computing the spectral norm of a third order tensor.
We hope that some further research may explore more applications of the equality between singular values of a third order tensor and M-eigenvalues of the related positive semi-definite biquadratic tensor.
\bigskip
{\bf Acknowledgments} The authors are thankful to
Yannan Chen for the discussion on Theorems 4.2 and 7.1, and his calculation, to Yiju Wang and Xinzhen Zhang for their comments, and to Qun Wang for her calculation.
1807.07914
\section{Introduction}\label{sec:intro}
Quantum computing is a computation paradigm that relies on the principles of quantum mechanics in order to process information. Recent advances in both algorithmic research, which has found remarkable speed-ups for a growing number of applications \cite{Childs2010,Montanaro2016, Biamonte2017}, and hardware development \cite{Linke2017,Friis2018} continue to progress the field of quantum information processing. The near-term state of quantum computing is defined by the noisy intermediate-scale quantum (NISQ) paradigm which involves small-scale noisy quantum processors \cite{preskillNISQ2018} being used in a hybrid quantum-classical framework. In this context, recent experimental demonstrations \cite{Peruzzo2014, O'Malley2016, Kandala2017, otterbach2017, deuteron} of hybrid computations have reinforced the need for robust programming models and classical validation frameworks.
\par
The successful integration of quantum processors into conventional computational workloads is a complex task which depends on the programming and execution models that define how quantum resources interact with conventional computing systems \cite{Humble2016HPEC,Britt2017}. Many different models have been proposed for programming quantum computers and a number of software development efforts have begun focusing on high-level hybrid programming mechanisms capable of integrating both conventional and quantum computing processors together \cite{Green2013,Javadiabhari2014,Wecker2014,Humble2014,smith2016practical,liu2017q,Svore2018,pakin_2018}. For example, recent efforts have focused on Python-based programming frameworks enabling the high-level expression of quantum programs in a classical context, which may target numerical simulators or a variety of physical quantum processing units (QPUs) \cite{1608.03355, projectq, qiskit}. The eXtreme-scale ACCelerator programming model (XACC) is a recently-developed quantum-classical programming, compilation, and execution framework that enables programming across multiple languages and targets multiple virtual and physical QPUs \cite{xaccarxiv}.
\par
In all cases, the verification of quantum program correctness is a challenging and complex task due to the intrinsically noisy nature of near-term QPUs, and this is additionally complicated by remote hosting. As a remedy, numerical simulation techniques can greatly expedite the analysis of quantum-classical programming efforts by providing direct insight into the prepared quantum states, as well as serving to test a variety of quantum computing hardware models. Modeling and simulation is essential for designing effective program execution mechanisms because it provides a controlled environment for understanding how complex computational systems interact, subsequently generating feedback based on the state machine statistics. For example, the performance of existing QPUs is limited by the hardware connectivity \cite{Linke2017} and numerical simulations can draw on a broad range of parameterized models to test new processor layouts and architectures.
\par
In practice, exact brute-force simulations of quantum computing are notoriously inefficient in memory complexity due to the exponential growth in resources with respect to system size. These brute-force methods explicitly solve the Schr\"{o}dinger equation, or a mixed-state master equation, using a full representation of the quantum state in its underlying (exponentially large) Hilbert space. Limits on available memory place upper bounds on the size of the vectors or density operators that can physically be stored, severely restricting the size of the simulated quantum circuit. Even with the availability of current large-scale HPC systems, including state-of-the-art supercomputing systems, recent records for quantum circuit simulations are limited to fewer than 50 qubits \cite{nersc45, Pednault2017}. The performance of brute-force quantum circuit simulators on current supercomputing architectures is also limited by the inherently low arithmetic intensity (Flop/Byte ratio) of the underlying vector operations (sparse matrix-vector multiplications) required for simulating a discrete sequence of one- and two-qubit gates.
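To make the memory argument concrete, consider a minimal state-vector update: a one-qubit gate acts by reshaping the $2^n$-component state into an $n$-index tensor and contracting over the target axis. The sketch below (Python with NumPy; `apply_1q` is our illustrative helper, not the API of any particular simulator) shows why the state vector itself, whose size doubles with every added qubit, dominates the storage cost.

```python
import numpy as np

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)                         # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target]))  # contract target axis
    psi = np.moveaxis(psi, 0, target)                    # restore axis order
    return psi.reshape(-1)

n = 16                     # 2^16 amplitudes, 1 MiB at complex128;
                           # each additional qubit doubles this storage
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0             # |00...0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
for q in range(n):         # Hadamard on every qubit: uniform superposition
    state = apply_1q(state, H, q, n)

assert np.allclose(np.abs(state) ** 2, 1.0 / 2 ** n)
```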
\par
The inherent inefficiency of brute-force state-vector quantum circuit simulators has motivated a search for approximate numerical simulation techniques that increase the upper bound on the number of simulated qubits. As we are interested in general-purpose (universal) quantum circuit simulators, we omit efficient specialized simulation algorithms that target certain subclasses of quantum circuits, for example, quantum circuits composed of only Clifford operations \cite{Aaronson2004}. As a general solution, we advocate the use of tensor network (TN) theory as a tool for constructing factorized approximations to the exact multi-qubit wave-function tensor. The tensor-network based wave-function factorization offers two main advantages: (1) the memory (space) and time complexity of the quantum circuit simulation reflect the level of entanglement in the quantum system; (2) the numerical action of quantum gates on the factorized wave-function representation results in numerical operations (tensor contractions) which become arithmetically intensive for entangled systems, thus potentially delivering close to peak utilization of modern HPC platforms.
\par
\section{Quantum Circuit Simulation with Tensor Networks}
Tensor network theory \cite{Orus2014,Biamonte2017} provides a versatile and modular approach for the dimensionality reduction of operators acting in high-dimensional linear spaces. For the following discussion, a tensor is a generalization of a vector and is defined in a linear space constructed as the tensor product of two or more primitive vector spaces. Consequently, the components of a tensor are enumerated by a tuple of indices, instead of by a single index as is the case for vectors. From the numerical perspective, a tensor can be viewed as a multi-dimensional array of objects, which may be real or complex numbers. In this work, following the physics nomenclature, we shall refer to the number of indices in a tensor \(T_{i_1 ... i_n}\) as its rank (in this case the tensor rank is $n$). Each index represents a distinct vector space contributing to the composite space by the tensor product. The extent of the range of each index gives the dimension of the corresponding vector space. In essence, tensor networks aim at decomposing higher-rank tensors into a contraction over lower-rank tensors such that the factorized product accurately reconstructs properties of the original tensor (i.e., a variant of lossy compression in linear spaces). Any tensor can be approximated by a suitably chosen tensor network with arbitrary precision; however, the size of the tensor factors may grow exponentially in the worst case. Tensor factorizations, which we also refer to as decompositions, are not unique in general, and the problem of finding the optimal tensor decomposition is a difficult non-convex optimization problem \cite{Kolda2015}.
In practice, a tensor network factorization is typically specified by a graph in which the nodes are the tensor factors and the edges represent physical or auxiliary vector spaces which are associated with the indices of the corresponding tensor factors. A closed edge, that is, an edge connecting two nodes, represents a contracted index shared by two tensor factors over which a summation is to be performed. In a standalone tensor network, contracted indices are associated with auxiliary vector spaces. An open edge, that is, an edge connected to only one node, represents an uncontracted index of that tensor factor. Uncontracted indices are typically associated with physical vector spaces. Different tensor network architectures differ by the topology of their representative graphs. Furthermore, one can define even more general tensor network architectures by replacing graphs with hypergraphs, in which case an edge may connect three or more tensors. In the subsequent discussion, however, we will mostly deal with conventional graph topologies.
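As a concrete illustration of this graph picture (Python with NumPy; the tensors are random and purely illustrative), contracted indices appear as repeated labels in an einsum specification, while uncontracted labels survive as the open edges of the result:

```python
import numpy as np

rng = np.random.default_rng(1)
# A two-node network: T1 carries indices (a, b), T2 carries (b, c).
# The shared index b is a closed edge (summed over); a and c stay open.
T1 = rng.standard_normal((3, 4))
T2 = rng.standard_normal((4, 5))
net = np.einsum('ab,bc->ac', T1, T2)  # rank-2 result with open edges a, c

# Adding a rank-3 node contracted over c leaves open edges a, d, e.
T3 = rng.standard_normal((5, 2, 2))
out = np.einsum('ab,bc,cde->ade', T1, T2, T3)
assert out.shape == (3, 2, 2)
```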
\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{MPS.pdf}
\caption{Diagrammatic representation of the decomposition of a multi-qubit wave-function into the matrix-product state (MPS) form.}
\label{fig:tn}
\end{figure}
\par
A quantum many-body wave-function, including a multi-qubit wave-function, is essentially a high-rank tensor (its rank is equal to the number of simulated quantum particles or quasi-particles) \cite{Orus2014}. A number of different tensor network architectures have been suggested for the purpose of factorizing quantum many-body wave-functions, including the matrix-product state (MPS) \cite{White:1992ie, Schollwock2011}, the projected entangled pair-state (PEPS) \cite{Schuch2007,Verstraete2008}, the tree tensor network state (TTNS) \cite{Murg2010, Nakatani2013, Dumitrescu2017}, the multiscale entanglement renormalization ansatz (MERA) \cite{vidal2006, evenbly2009algorithms}, as well as somewhat related non-conventional schemes like the complete-graph tensor network (CGTN) \cite{cgtn}. All of the above tensor network \textit{ansaetze} differ in the factorization topology, that is, in how the tensor factors are contracted with each other to form the final quantum many-body wave-function. In a good tensor network factorization, the topology is induced by the entanglement structure of the particular quantum many-body system. Many physical systems are described by many-body Hamiltonians with only local interactions -- in many cases, nearest neighbor only -- with correlation functions decaying exponentially for non-critical states. In such cases, the locality structure of the many-body Hamiltonian induces the topology required to properly capture the quantum correlations present in the system of interest. The factorization topology also strongly affects the computational cost associated with the numerical evaluation/optimization of a specific tensor network architecture. Another important characteristic of a tensor network is its so-called maximal bond dimension, that is, the maximal dimension of the auxiliary linear spaces (i.e. the spaces that are contracted over).
Provided that the maximal bond dimension is bounded, many tensor network factorizations can be evaluated with a polynomial computational cost in the bond dimension. In practice, the entanglement structure of the underlying quantum many-body system determines the maximal bond dimension needed for a given error tolerance and a given tensor network topology. A poorly chosen tensor network topology will necessarily lead to rapidly increasing (exponentially at worst) bond dimensions in order to keep the factorization error within the error threshold.
The entanglement structure in a multi-qubit wave-function is determined by the quantum circuit and may be unknown in general. Consequently, there is no well-defined choice of a tensor network architecture (topology) that could work equally well for all quantum circuits, unless some kind of adaptive topology is used. In practice, the choice of a tensor network architecture for representing a multi-qubit wave-function is often dictated by numerical convenience and ease of implementation. For example, one of the simplest tensor network architectures, the MPS ansatz, was used to simulate Shor's algorithm for integer factorization \cite{Dang2017}. Although the inherently one-dimensional chain topology of the MPS ansatz often results in rapidly growing bond dimensions (a problem that can be remedied by a more judicious tensor network form \cite{Dumitrescu2017}), its computational convenience and well-understood theory make the MPS factorization an appealing first candidate for our quantum virtual machine (quantum circuit simulator). In the future, we plan to add more advanced tensor network architectures.
In order to simulate a general quantum circuit over an $N$-qubit register with the tensor network machinery the following steps will be necessary (see Figure \ref{fig:tnalg}):
\begin{enumerate}
\item Specify the chosen tensor network graph that factorizes the rank-$N$ wave-function tensor into a contracted product of lower-rank tensors (factors).
\item Transform the quantum circuit into an equivalent quantum circuit augmented with SWAP gates in order to maximize the number of accelerated gate applications (see below). This is an optional step.
\item Group quantum gates into ordered aggregates (super-gates) which will act as a whole on the qubit wave-function. In the simplest case, all quantum gates will be applied one-by-one in order of appearance, with no aggregation. This is an optional step.
\item Sequentially apply aggregated super-gates (or individual gates when no aggregation occurred) to the wave-function tensor network, thus evolving it towards the output state.
\end{enumerate}
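To make the role of the rank-$N$ wave-function tensor concrete, the following toy Python sketch carries out step 4 on the \emph{unfactorized} tensor itself, applying gates one-by-one via index contractions; TNQVM applies the same logic to the factorized network instead of the full tensor, and all names here are illustrative rather than TNQVM code:

```python
import numpy as np

# Each qubit is one tensor index of the rank-N wave-function tensor, and
# applying a gate is an index contraction over the targeted indices.
n = 3
psi = np.zeros((2,) * n)
psi[(0,) * n] = 1.0                              # |000> as a rank-3 tensor

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
CX = np.eye(4).reshape(2, 2, 2, 2)               # indices: out0, out1, in0, in1
CX[1, :, 1, :] = np.array([[0.0, 1.0], [1.0, 0.0]])  # control = 1: flip target

def apply_1q(psi, gate, q):
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    return np.moveaxis(psi, 0, q)

def apply_2q(psi, gate, q0, q1):
    psi = np.tensordot(gate, psi, axes=([2, 3], [q0, q1]))
    return np.moveaxis(psi, [0, 1], [q0, q1])

# Gates applied one-by-one, in order of appearance (no aggregation):
psi = apply_1q(psi, H, 0)        # Hadamard on qubit 0
psi = apply_2q(psi, CX, 0, 1)    # CNOT entangling qubits 0 and 1
amps = psi.reshape(-1)           # (|000> + |110>)/sqrt(2), qubit 2 idle
```

The dense tensor costs $O(2^N)$ memory, which is exactly what the tensor network factorizations above are designed to avoid.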
\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figure_2.pdf}
\caption{Graphical illustration of the general quantum circuit simulation algorithm with the qubit wave-function factorized as a tensor network.}
\label{fig:tnalg}
\end{figure}
In the above general algorithm, the application of a super-gate (or just an individual gate) on a multi-qubit wave-function tensor consists of the following steps:
\begin{enumerate}
\item Append the individual gates constituting the given super-gate to the input wave-function tensor network \(TN_{inp}\), thus obtaining a larger tensor network \(TN_{mid}\).
\item If there are 2- or higher-body gates present, check whether they are applied to the qubit pairs or triples, etc. that allow accelerated gate application (for example, in MPS factorization, these would be the adjacent qubit pairs, triples, and so on). If yes, evaluate their action in an accelerated fashion (see below). Otherwise, resort to the general algorithm in the next steps.
\item Instantiate a new tensor network \(TN_{out}\) by cloning \(TN_{inp}\).
\item Close \(TN_{mid}\) with \(TN_{out}\), thus obtaining a closed tensor network \(TN_{opt}\).
\item Optimize the tensors of \(TN_{out}\) so as to maximize the scalar value of \(TN_{opt}\).
\item If the value of \(TN_{opt}\) is not acceptable, increase the dimensions of the auxiliary spaces in \(TN_{out}\) and repeat Step 5.
\end{enumerate}
\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figure_3.pdf}
\caption{Graphical illustration of an accelerated evaluation of the action of a two-body gate on a pair of adjacent qubits in the matrix-product state representation.}
\label{fig:mpssvd}
\end{figure}
In cases where an accelerated gate application is possible (for example, a 2-body gate applied to adjacent qubits in the MPS-factorized wave-function), one can restrict the update procedure to only the tensor factors directly affected by the gate action. In the case of the MPS factorization, in order to apply a 2-body gate to two adjacent qubits, one contracts the gate tensor with the two MPS tensors representing the affected qubits and then performs the singular value decomposition (SVD) on the resulting tensor, thus obtaining the new (updated) MPS tensors as illustrated in Figure \ref{fig:mpssvd}.
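This two-site update can be sketched in a few lines of numpy (the shapes and truncation threshold here are assumed for illustration; this is not the ITensor or TNQVM implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
chiL, d, chi, chiR = 3, 2, 4, 3          # assumed bond/physical dimensions
A = rng.standard_normal((chiL, d, chi))  # site tensor A_{a i b}
B = rng.standard_normal((chi, d, chiR))  # site tensor B_{b j c}

# A two-qubit CNOT gate as a rank-4 tensor G_{k l i j} (outputs k,l; inputs i,j)
G = np.eye(d * d).reshape(d, d, d, d)
G[1, :, 1, :] = np.array([[0.0, 1.0], [1.0, 0.0]])  # control = 1: flip target

# 1) contract the gate with the two adjacent site tensors -> rank-4 theta
theta = np.einsum('aib,bjc,klij->aklc', A, B, G)

# 2) SVD of the matricized theta, truncating negligible singular values
M = theta.reshape(chiL * d, d * chiR)
U, S, Vh = np.linalg.svd(M, full_matrices=False)
keep = S > 1e-12 * S[0]
U, S, Vh = U[:, keep], S[keep], Vh[keep]

# 3) the updated rank-3 MPS tensors (the bond dimension may grow under
#    entangling gates, which is where truncation becomes essential)
A_new = U.reshape(chiL, d, -1)
B_new = (S[:, None] * Vh).reshape(-1, d, chiR)
theta_back = np.einsum('aib,bjc->aijc', A_new, B_new)
```

With the loose threshold used here the reconstruction is exact; raising the cutoff trades fidelity for a smaller bond dimension.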
The above general algorithm demonstrates the procedure used by TNQVM for approximate simulation of quantum circuits based on the tensor network factorization. For the sake of completeness, we should also mention quantum circuit simulators which use tensor representations for a brute-force simulation of quantum circuits with no approximations \cite{Pednault2017, fried2017qtorch}. This is different from our approach which is based on the explicit factorization of the multi-qubit wave-function tensor. In these other tensor-based schemes the entire quantum circuit as a collection of gate tensors is considered as a tensor network which is subsequently contracted over in order to compute observables or evaluate output probability distributions. In Ref.~\citenum{Pednault2017}, a clever tensor slicing technique was introduced that avoided the evaluation of the full wave-function tensor, thus reducing the memory footprint and bypassing the existing 45-qubit limit on large-scale HPC systems. Yet, despite enabling simulations of somewhat larger qubit counts, this technique does not lift the asymptotic bounds of the exact simulation cost.
\par
\section{Quantum Virtual Machines}
\label{sec:qvm}
In order to evaluate the correctness of a quantum program and its implementation via a decomposition into primitive gate operations, it is necessary to model both the conventional computing and quantum computing elements of the system architecture. In particular, it is necessary to expose the interface to the available instruction set architecture (ISA) and methods to support quantum program execution, scheduling, and layout. There are currently many different technologies available for testing and evaluating quantum processing units, and each of these technologies presents different ISAs and methods for program execution \cite{Britt2017ISA}.
\par
As shown in Fig.~\ref{fig:qvm}, a quantum virtual machine (QVM) provides a portable abstraction of technology-specific details for a broad variety of heterogeneous quantum-classical computing architectures. The hardware abstraction layer (HAL) defines a portable interface by which the underlying quantum processor technology as well as other hardware components such as memory are exposed to system libraries, runtimes and drivers running on the host conventional computer. The implementation of the HAL provides an explicit translation of quantum program instructions into native, hardware-specific syntax, which may be subsequently executed by the underlying quantum processor. The HAL serves as a convenience to ensure portability of programs across different QPU platforms, while the QVM encapsulates the environment in which applications can be developed independently from explicit knowledge of QPU details. This environment is provided by the integration of the HAL with programming tools, libraries, and frameworks as well as the host operating system.
\begin{figure}[ht]
\centering
\includegraphics[width=2.0in]{SystemLayers}
\caption{A schematic of how a quantum virtual machine (QVM) manages access to an underlying QPU through the hardware abstraction layer. A program binary exists within an application framework that accesses system resources through libraries. Library calls are managed by the host operating system, which manages and schedules requests to access hardware devices including attached QPUs. The hardware abstraction layer (HAL) provides a portable interface by which these requests are made to the underlying QPU technology.}
\label{fig:qvm}
\end{figure}
\par
Application performance within a QVM depends strongly on the efficiency with which host programs are translated into hardware-specific instructions. This includes the communication overhead between the HAL and hardware layers as well as the overhead costs for managing these interactions by the host operating system. Both algorithmic and hardware designs impact this performance by deciding when and how to allocate computational burden to specific devices. Presently, there is an emphasis on the development and validation of hybrid programs, which loosely integrate quantum processing with conventional post-processing tasks. This algorithmic design introduces a requirement for transferring memory buffers between the host and QPU systems. Memory management therefore becomes an important task for application behavior. While current QPUs are often accessed remotely through network interfaces, long-term improvements in application performance will require fine-grained control over memory management.
\section{Tensor Network Quantum Virtual Machine}
Our implementation of a QVM presented in this work is based on a previously developed hybrid quantum-classical programming framework, called XACC \cite{xaccarxiv}, combined with a quantum circuit simulator that uses tensor network theory for compressing the multi-qubit wave-function. We provide an overview of the Tensor Network Quantum Virtual Machine (TNQVM) and its applications, including its software architecture and integration with the XACC programming framework. Since XACC integrates directly with TNQVM, compiled programs can in principle be verified immediately on any classical computer, including workstations as well as HPC clusters and supercomputers. Support for different classical computer architectures (single-core, multi-core, GPU, distributed) is achieved through interchangeable numerical backends in our TNQVM simulator. These backends are numerical tensor algebra libraries which perform all underlying tensor computations on a supported classical computer. In this work, we detail the HAL implementation of TNQVM using ITensor \cite{itensor} for serial simulations, with some example applications demonstrating the utility of TNQVM. We also sketch some details of the upcoming ExaTENSOR backend that will enable large-scale quantum circuit simulations on homogeneous and heterogeneous HPC systems. Independent verification of hybrid programs within TNQVM provides increased confidence in the use of these codes to characterize and validate actual QPUs.
\subsection{XACC}
The eXtreme-scale ACCelerator programming model (XACC) has been specifically designed for enabling near-term quantum acceleration within existing classical high-performance computing applications and workflows \cite{xaccarxiv, mccaskeyicrc}. This programming model and associated open-source reference implementation follow the traditional co-processor model, akin to OpenCL or CUDA for GPUs, but take into account the subtleties and complexities arising from the interplay between classical and quantum hardware. XACC provides a high-level application programming interface (API) that enables classical applications to offload quantum programs (represented as quantum kernels, similar in structure to GPU kernels) to an attached quantum accelerator in a manner that is agnostic to both the quantum programming language and the quantum hardware. Hardware agnosticism enables quantum code portability and also aids in benchmarking, verification and validation, and performance studies for a wide array of virtual (simulators) and physical quantum platforms.
To achieve language and hardware interoperability, XACC defines three important abstractions: the quantum intermediate representation (IR), compilers, and accelerators. XACC compiler implementations map quantum source code to the IR -- the in-memory object representation that is key to integrating a diverse set of languages with a diverse set of hardware. IR instances (and therefore compiled kernels) are executed by realizations of the accelerator concept, which defines an interface for injecting physical or virtual quantum hardware. Accelerators take this IR as input and delegate execution to vendor-supplied APIs for the QPU, or an associated API for a simulator. This forms the hardware abstraction layer, or abstract device driver, necessary for a general quantum (virtual) machine.
The IR itself can be further decomposed into instruction and function concepts, with instructions forming the foundation of the IR infrastructure and functions serving as compositions of instructions (see Figure \ref{fig:xacc-visitor}). Each instruction exposes a unique name and the set of qubits that it operates on. Functions are a sub-type of the instruction abstraction that can contain further instructions. This setup, the familiar composite design pattern \cite{composite}, forms an {\it n-ary} tree of instructions where function instances serve as nodes and concrete instruction instances serve as leaves.
\begin{figure}[htb!]
\centering
\includegraphics[width=.45\textwidth]{xacc-ir-visitor}
\caption{Architecture of the XACC intermediate representation demonstrating sub-type extensibility for instructions, and the associated instruction visitor abstraction, enabling runtime-extension of concrete instruction functionality.}
\label{fig:xacc-visitor}
\end{figure}
Executing program instructions then amounts to a simple pre-order traversal of the IR tree. In order to enhance this tree of instructions with additional functionality, XACC provides a dynamic double-dispatch mechanism, specifically an implementation of the familiar visitor pattern \cite{gof}. The visitor pattern provides a mechanism for adding virtual functions to a hierarchy of common data structures dynamically, at runtime, and without modifying the underlying types. This is accomplished via the introduction of a visitor type that exposes a public set of visit functions, each one taking a single argument that is a concrete sub-type of the hierarchical data structure composition (see Figure \ref{fig:xacc-visitor}). For gate model quantum computing, XACC provides a visitor class that exposes a visit method for each concrete gate instruction (X, H, RZ, CX, etc.). All instructions expose an \texttt{accept} method that takes as input a general visitor instance and invokes the appropriate visit method on the visitor through double-dispatch. XACC instruction visitors thereby provide an extensible mechanism for dynamically operating on, analyzing, and transforming compiled IR instances at runtime.
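A minimal Python sketch of this composite tree plus accept/visit double dispatch may clarify the mechanism (the class and method names mirror the text but are illustrative, not the actual XACC C++ API):

```python
class Instruction:
    """A leaf of the IR tree: a named gate acting on a set of qubits."""
    def __init__(self, name, qubits):
        self.name, self.qubits = name, qubits

    def accept(self, visitor):
        # Double dispatch: pick the visit method from the concrete gate name.
        getattr(visitor, 'visit_' + self.name)(self)

class Function(Instruction):
    """A node of the IR tree: a composition of further instructions."""
    def __init__(self, name, instructions):
        super().__init__(name, [])
        self.instructions = instructions

    def accept(self, visitor):
        for inst in self.instructions:   # pre-order traversal of the tree
            inst.accept(visitor)

class CountingVisitor:
    """A toy visitor that tallies gates instead of simulating them."""
    def __init__(self):
        self.counts = {}

    def visit_H(self, inst):
        self._tally('H')

    def visit_CX(self, inst):
        self._tally('CX')

    def _tally(self, name):
        self.counts[name] = self.counts.get(name, 0) + 1

# A two-gate kernel: Hadamard on qubit 0 followed by CNOT on qubits 0,1.
bell = Function('bell', [Instruction('H', [0]), Instruction('CX', [0, 1])])
v = CountingVisitor()
bell.accept(v)
print(v.counts)   # {'H': 1, 'CX': 1}
```

Swapping `CountingVisitor` for a visitor whose visit methods contract gate tensors into a tensor network yields exactly the simulation scheme described in the next subsection.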
\subsection{Tensor Network Accelerator and Instruction Visitors}
The integration of a tensor network quantum circuit simulator with XACC can be accomplished through extensions of appropriate XACC concepts. In essence, this is an extension of the quantum virtual machine hardware abstraction layer that enables existing high-level programs and libraries to target a new virtual hardware instance. Injecting new simulators into the XACC framework requires a new implementation of the accelerator concept. Enabling that simulator to be extensible in the type of tensor networks, algorithmic execution, and the library backend requires different mappings of the IR to appropriate simulation data structures. This can be accomplished through individual implementations of the instruction visitor concept.
Our open-source implementation of the Tensor Network Quantum Virtual Machine (TNQVM) library extends the XACC accelerator concept with a new derived class that simulates pure-state quantum computation via tensor network theory \cite{tnqvm-github}. This library provides the TNAccelerator (Tensor Network Accelerator) that exposes an \texttt{execute} method that takes as input the XACC IR function to be executed. Generality in the tensor network graph structure and the simulation algorithm is enabled through appropriate implementations of the instruction visitor concept. For example, an instruction visitor can be implemented to map the incoming XACC IR tree to tensor operations on a matrix product state (MPS) ansatz. Walking the IR tree via pre-order traversal and invoking the instruction visitor \texttt{accept} mechanism on each instruction triggers invocation of the appropriate visit function via double dispatch. The implementation of these visit methods provides an extensible mechanism for performing instruction-specific tensor operations on a specific tensor network graph structure.
Furthermore, this visitor extension mechanism can be leveraged to not only provide new tensor network structures and operations, but also provide the means to leverage different tensor algebra backend libraries, and therefore introduce a classical parallel execution context. Different visitor implementations may provide a strictly serial simulation approach, while others can enable a massively parallel or heterogeneous simulation approach (incorporating the Message Passing Interface, OpenMP, and/or GPU acceleration via CUDA or OpenCL).
To date we have implemented two instruction visitor backends for the TNQVM and the TNAccelerator. We have leveraged the ITensor library \cite{itensor} to provide a serial matrix product state simulator, and the ExaTENSOR library from the Oak Ridge Leadership Computing Facility (OLCF) to provide a matrix product state simulator that leverages MPI, OpenMP and CUDA for distributed parallel execution on GPU-accelerated heterogeneous HPC platforms. However, the ExaTENSOR library is currently undergoing final testing before its public release, thus it has not been utilized yet as a fully functional backend of TNQVM. Nevertheless, we will provide some details on the ExaTENSOR backend below.
\subsubsection{ITensor MPS Implementation}
The ITensor MPS instruction visitor implementation provides a mechanism for the simulation of an $N$-qubit wavefunction via a matrix product state tensor network decomposition. The MPS provides a way to restrict the entanglement entropy through SVD and the associated truncation of Schmidt coefficients to reduce the overall Schmidt rank. With these MPS states, we need $O(n\chi^2)$ numbers to represent $n$ qubits, where $\chi$ is the largest Schmidt rank we keep. As long as $\chi$ is not too large (growing at most polynomially with system size), the space complexity remains feasible for classical simulation. For example, if the quantum register is used to store the gapped ground states of systems with local interactions, we can simulate larger numbers of qubits and still adequately approximate the wavefunction by keeping $\chi$ small enough.
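The scaling argument can be made concrete with a back-of-the-envelope count of stored amplitudes (a simple upper bound; boundary MPS tensors are in fact smaller):

```python
# Parameter counts for the dense state vector, O(2^n), versus the MPS
# factorization, O(n * d * chi^2) with d = 2 for qubits. This counts
# stored amplitudes, not bytes.
def dense_params(n):
    return 2 ** n

def mps_params(n, chi, d=2):
    # n rank-3 site tensors of shape at most (chi, d, chi)
    return n * d * chi * chi

for n in (30, 50, 85):
    print(n, dense_params(n), mps_params(n, chi=16))
```

Even at a generous bond dimension of $\chi = 16$, the MPS count grows only linearly in $n$, while the dense count is already intractable at $n = 50$.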
Our ITensor MPS visitor implementation begins by initializing a matrix product state tensor network using the serial tensor data structures provided by the ITensor library\cite{itensor}. Simulation of the compiled IR program is run through a pre-order tree traversal of the instruction tree. At each leaf of this tree (a concrete instruction), the \texttt{accept} method on the instruction is invoked (see Figure \ref{fig:xacc-visitor}) which dispatches a call to the correct \texttt{visit} method of the instruction visitor.
At this point, the appropriate gate tensor is contracted into the MPS representation; under single-qubit gates the MPS maps onto itself. A two-qubit gate acts on two rank-3 tensors, and the full contraction results in a rank-4 tensor. We restore the MPS structure by decomposing the rank-4 tensor into two rank-3 tensors with a diagonal matrix between them. Note that when the two qubits are not adjacent, we apply SWAP gates to intermediary qubits to bring them together; the gate is then applied and reverse SWAPs return the qubits to their original positions. Applying a gate directly to non-adjacent qubits would otherwise modify the underlying graph topology, complicating future evolution by adding a non-local loop to the tensor network.
The SVD is used to return the resulting rank-4 tensor to the canonical MPS form ($n$ rank-3 tensors and $n-1$ diagonal matrices), with the singular values below a cutoff threshold $\epsilon$ (the default is $\epsilon=10^{-4}$) being truncated. The truncation over subspaces supporting exponentially small components of the wave-function allows our MPS-based TNQVM to simulate large numbers of qubits, provided the entanglement grows sufficiently slowly. Examples and discussion may be found in the demonstrations in Sec.~\ref{sec:demonstration}.
\subsubsection{ExaTENSOR MPS Implementation}
The ExaTENSOR numerical tensor algebra backend will enable larger-scale TNQVM quantum circuit simulations on GPU-enabled and other accelerated as well as conventional multicore HPC platforms. ExaTENSOR stores tensors in distributed memory (on multiple/many nodes) as a generally sparse collection of tensor slices in a hierarchical fashion. Such distributed tensor storage lifts the memory limitations pertinent to a single node, thus extending the maximal number of simulated qubits. Although we currently target the (distributed) MPS implementation, ExaTENSOR also provides a generic tensor network builder that can be used for constructing an arbitrary tensor network. The ExaTENSOR MPS visitor implementation provides a constructor that creates the MPS representation of the simulated multi-qubit wave-function (all constituent MPS tensors are now distributed). The XACC IR tree traversal then invokes the ExaTENSOR MPS \texttt{visit} method for each traversed node (instruction). The \texttt{visit} method implements lazy visiting, namely it only caches the corresponding instruction (gate) in the instruction cache of the ExaTENSOR MPS visitor. At some point, once the instruction cache has enough work to perform, the \texttt{evaluate} method of the ExaTENSOR visitor is invoked, which implements the generic gate action algorithm shown in Section II. Specifically, it allocates the output MPS tensor network, that is, the result of the action of the cached gates on the input MPS tensor network. Then it creates the inner product (closed) tensor network by joining the gate tensors to the input MPS tensor network, subsequently closing it with the output tensor network (see Figure \ref{fig:tnalg}). This closed tensor network is a scalar whose value needs to be maximized. The ExaTENSOR MPS visitor will utilize the standard gradient descent algorithm by evaluating the gradient with respect to each tensor constituting the output tensor network.
Each of these gradients is an open tensor network itself that needs to be fully contracted. Importantly, the computational cost of this contraction of many tensors strongly depends on the order in which the pairwise tensor contractions are performed. Finding the optimal tensor contraction sequence is an NP-hard problem. Instead, ExaTENSOR implements a heuristic algorithm that delivers the best found sequence of pairwise tensor contractions in a reasonable amount of time (subseconds). This pseudo-optimal sequence of pairwise tensor contractions is then cached for subsequent reuse. Given the sequence of pairwise tensor contractions, the ExaTENSOR library numerically evaluates all of them and returns the gradients, which are subsequently used for updating the output tensor network tensors until the optimized inner product scalar reaches the desired value. If it does not, the tensors constituting the output tensor network are reallocated with increased dimensions of the auxiliary spaces and the entire procedure is repeated. At this point, the early prototype implementation of the ExaTENSOR MPS visitor in TNQVM is based on the single-node version of the ExaTENSOR library; we are currently finishing the integration of TNQVM with the distributed version of the ExaTENSOR library as well as performing the final testing of the ExaTENSOR library itself before its public release scheduled later this year.
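The impact of the pairwise contraction order can be illustrated with numpy's greedy contraction-path heuristic (numpy's heuristic, not ExaTENSOR's, but the same idea of trading optimality for a fast, reusable pairwise ordering):

```python
import numpy as np

# The same three-tensor contraction admits different pairwise orders with
# very different intermediate sizes and FLOP counts; einsum_path reports
# the order its greedy heuristic selects and the associated cost.
a = np.random.rand(8, 32)
b = np.random.rand(32, 32)
c = np.random.rand(32, 4)
expr = 'ij,jk,kl->il'

path, info = np.einsum_path(expr, a, b, c, optimize='greedy')
print(info)   # human-readable report of the chosen pairwise order and cost

# The result is identical regardless of order; only the cost differs.
naive = np.einsum(expr, a, b, c, optimize=False)
best = np.einsum(expr, a, b, c, optimize=path)
```

Caching `path` and reusing it for repeated evaluations mirrors ExaTENSOR's reuse of its pseudo-optimal contraction sequence.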
\section{Demonstration}
\label{sec:demonstration}
Here we demonstrate the utility of TNQVM by describing the overall memory scaling of our matrix product state backend for varying levels of entanglement and system size. Our demonstrations show how TNQVM can be leveraged to validate hybrid quantum-classical programming models. Specifically, we focus on random circuit simulations and the variational quantum eigensolver (VQE) hybrid algorithm.
\subsection{Profiling Random Circuit Simulations with MPS}
We demonstrate the improved resource cost of representing quantum states ($O(n\chi^2)$ vs $O(2^n)$) with TNQVM by using an MPS formulation and profiling the memory usage of simulating randomly generated circuits. We vary the entanglement structure of our random circuits by constructing time slices defined as \emph{rounds}. The first round begins with a layer of Hadamard operations on all qubits, followed by a layer of single-qubit gates (Pauli gates and other general rotations), followed by a set of nearest-neighbor CNOT entangling operations. Multiple rounds constitute multiple iterations of generating these layers (excluding the Hadamards, which appear only in the first round). Clearly, later rounds add layers of entangling CNOT operations and therefore generate states with a more complicated entanglement structure.
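The layered structure of these random circuits can be sketched as follows (the gate-name set and the staggering of the CNOT layer are illustrative assumptions, not the exact generator used for the benchmarks):

```python
import random

def random_rounds(n_qubits, n_rounds, seed=0):
    """Build a layered random circuit as a list of gate tuples."""
    rng = random.Random(seed)
    # Hadamard layer, first round only
    circuit = [('H', q) for q in range(n_qubits)]
    for r in range(n_rounds):
        # layer of random single-qubit gates
        for q in range(n_qubits):
            circuit.append((rng.choice(['X', 'Y', 'Z', 'RX', 'RY', 'RZ']), q))
        # nearest-neighbor CNOT layer (staggered offset per round)
        for q in range(r % 2, n_qubits - 1, 2):
            circuit.append(('CNOT', q, q + 1))
    return circuit

circ = random_rounds(5, 2)
```

Each additional round appends another entangling CNOT layer, which is what drives the bond-dimension growth profiled below.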
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{tnqvm_memory_profiling.pdf}
\caption{Memory usage as a function of the number of rounds (circuit depth) and with increasing number of qubits. Memory usage is constant for a small number of rounds but rapidly increases as the total circuit depth and number of qubits increases. }
\label{fig:profiling}
\end{figure}
We generate these random circuits for $5$ through $85$ qubits in increments of $5$, and for numbers of rounds ranging from $2$ through $10$ in increments of $2$. For each $(rounds, qubits)$ pair, we generate 10 random circuits, compute the heap memory usage, and compute the mean and standard deviation of the memory usage. The results are plotted in Figure \ref{fig:profiling}. For lightly-entangled systems (i.e. those generated by a small number of random rounds) we see that the MPS structure is able to encode the wavefunction of the system efficiently at a small cost. For example, for only two rounds the maximum bond dimension is $\chi=4$, independent of system size. As we increase the entanglement in our random circuits, the computational cost of the MPS simulations increases exponentially. This is because the circuits we sample from are designed to rapidly increase the entanglement, which saturates at $\chi_{max} = 2^{n/2}$ for an $n$-qubit system undergoing $m>n$ random rounds \cite{Boixo2018}.
\subsection{Variational Quantum Eigensolver}
\begin{figure}[!htb]
\begin{minipage}{0.45\textwidth}
\begin{listing}[H]
\begin{Verbatim}
h2_src = """
__qpu__ ansatz(AcceleratorBuffer b,
        double t0) {
   RX(3.1415926) 0
   RY(1.57079) 1
   RX(7.85397) 0
   CNOT 1 0
   RZ(t0) 0
   CNOT 1 0
   RY(7.8539752) 1
   RX(1.57079) 0
}
__qpu__ term0(AcceleratorBuffer b, double t0) {
   ansatz(b, t0)
   MEASURE 0 [0]
}
... (rest of measurement kernels)
"""
qpu = xacc.getAccelerator('tnqvm')
buffer = qpu.createBuffer('q', 2)
p = xacc.Program(qpu, h2_src)
p.build()
kernels = p.getKernels()
for t0 in np.linspace(-np.pi, np.pi, 100):
    for k in kernels[1:]:
        k.execute(buffer,
                  [xacc.InstructionParameter(t0)])
\end{Verbatim}
\end{listing}
\end{minipage}
\caption{XACC program compiling and executing the variational quantum eigensolver for the $H_2$ molecule.}
\end{figure}
Finally, we demonstrate the utility of our tensor network simulation XACC Accelerator backend (the TNQVM library) in validating quantum-classical algorithms. It is this rapid feedback mechanism that is critical to understanding intended algorithmic results, and it enables confidence in the programming of larger systems. Here we demonstrate this programmability and its verification and validation through a simple simulation of diatomic hydrogen via the variational quantum eigensolver algorithm. The quantum-classical program, which leverages the TNQVM library, is shown in the accompanying listing.
This code listing demonstrates the integration of XACC and our tensor network accelerator implementation. The code shows how to program, compile, and execute the VQE algorithm to compute expectation values for the simplified (symmetry-reduced), two-qubit $H_2$ Hamiltonian (see \cite{babbush_scalable_sim}). We start by defining the quantum source code as XACC quantum kernels (note: we have left out a few measurement kernels for brevity). Each of these kernels is parameterized by a single \texttt{double} representing the variational parameter for the problem ansatz circuit (the \texttt{ansatz} kernel in the \texttt{h2\_src} string). Integration with the TNQVM simulation library is done through a public XACC API function (\texttt{getAccelerator}). This accelerator reference is used to compile the program and obtain references to executable kernels that delegate work to the TN Accelerator. We then loop over all $\theta$ and compute the expectation values for each Hamiltonian measurement term. Notice that this execution mechanism is agnostic to the accelerator sub-type. This makes it possible to quickly swap between validation and verification with TNQVM and physical hardware execution on quantum computers from IBM, Rigetti, etc.
\section{Conclusion}
In this work we have discussed the concept of a general quantum virtual machine and introduced a concrete implementation of the QVM that enables quantum-classical programming with validation through an extensible tensor network quantum circuit simulator (TNQVM). We have discussed the applicability and scalability of a matrix product state backend implementation for TNQVM, as well as the role of TNQVM in benchmarking quantum algorithms and hybrid quantum-classical applications, including random circuit sequences used in quantum supremacy experiments~\cite{Boixo2018} and the variational quantum eigensolver~\cite{Peruzzo2014}. We have chosen a tensor network based quantum virtual machine due to the complexity reduction such a formalism provides for a broad range of problems. In general, TNQVM enables large-scale simulation of quantum circuits which generate states characterized by short-range entanglement. Studying systems with long-range entanglement will require further developments in implementing more advanced tensor network decomposition types. We plan to investigate the applicability of the tree tensor network and the multiscale entanglement renormalization ansatz in future work, in an effort to scale simulation capabilities to a larger number of qubits.
\section*{Acknowledgements}
This work has been supported by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory and the US Department of Energy (DOE) Early Career Research Program. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This manuscript has been authored by UT-Battelle, LLC, under contract DEAC0500OR22725 with DOE. The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan.
\bibliographystyle{apsrev}
1807.07872
\section{Introduction}\label{sec:intro}
For the following discussion, we decompose the usual face identification task into two sub-problems: \emph{recognition} and \emph{tagging}. Here we understand recognition as the unsupervised task of matching an observed face to a cluster of previously seen faces with similar appearance (disregarding variations in pose, illumination etc.), which we refer to as an \emph{identity}. Humans routinely operate at this level of abstraction to recognise familiar faces: even when people's names are not known, we can still tell them apart. Tagging, on the other hand, refers to putting names to faces, i.e.~ associating string literals to known identities.
Humans tend to create an inductive mental model of facial appearance for each person we meet, which we then query at new encounters to be able to recognise them. This is opposed to a transductive approach, attempting to match faces to specific instances from a memorised gallery of past face observations---which is how identification systems are often implemented \cite{Jafri2009}.
An alternative way to represent faces, aligned with our inductive recognition, is via \emph{generative} face models, which explicitly separate latent identity content, tied across all pictures of a same individual, from nuisance factors such as pose, expression and illumination \cite{Ioffe2006,Prince2007,Li2012}. While mostly limited to linear projections from pixel space (or mixtures thereof), the probabilistic framework applied in these works allowed tackling a variety of face recognition tasks, such as closed- and open-set identification, verification and clustering.
A further important aspect of social interactions is that, as an individual continues to observe faces every day, they encounter some people much more often than others, and the total number of distinct identities ever met tends to increase virtually without bounds. Additionally, we argue that human face recognition does not happen in an isolated environment, but situational contexts (e.g.~ `home', `work', `gym') constitute strong cues for the groups of people a person expects to meet (\cref{fig:context_aware_setting}).
With regards to tagging, in daily life we very rarely obtain named face observations: acquaintances normally introduce themselves only once, and not repeatedly whenever they are in our field of view. In other words, humans are naturally capable of semi-supervised learning, generalising sparse name annotations to all observations of the corresponding individuals, while additionally reconciling naming conflicts due to noise and uncertainty.
\begin{figure}[tb]
\centering
\subfloat[][Standard face recognition]
{\includegraphics[width=.43\textwidth]{fig/fig1a.pdf}\label{fig:std_recognition_setting}}
\hspace{2em}
\subfloat[][Context-aware model of identities]
{\includegraphics[width=.43\textwidth]{fig/fig1.pdf}\label{fig:context_aware_setting}}
\caption{Face recognition settings. Points represent face observations and boxes are name labels.}
\end{figure}
In contrast, standard computational face identification is \emph{fully supervised} (see \cref{fig:std_recognition_setting}), relying on vast labelled databases of high-quality images \cite{LearnedMiller2016}. Although many supervised methods achieve astonishing accuracy on challenging benchmarks (e.g.~ \cite{Taigman2014,Schroff2015}) and are successfully employed in practical biometric applications, this setting has arguably limited analogy to human social experience.
Expanding on the generative perspective, we introduce a unified Bayesian model which reflects all the above considerations on identity distributions, context-awareness and labelling (\cref{fig:context_aware_setting}). Our nonparametric identity model effectively represents an unbounded population of identities, while taking contextual co-occurrence relations into account and exploiting modern deep face representations to overcome limitations of previous linear generative models. Our main contributions in this work are twofold:
\begin{enumerate}
\item We propose an unsupervised face recognition model which can explicitly reason about people it has never seen; and
\item We attach to it a novel robust label model enabling it to predict names by learning from both named and unnamed faces.
\end{enumerate}
\subsection*{Related Work}\label{sec:related_work}
Other face recognition methods (even those formulated in a Bayesian framework) \cite{Zhang2003,Zhao2006,Choi2010,Tapaswi2012,Le2017} often limit themselves to point estimates of parameters and predictions, occasionally including ad-hoc confidence metrics. A distinct advantage of our approach is that it is probabilistic end-to-end, and thus naturally provides predictions with principled, quantifiable uncertainties. Moreover, we employ modern Bayesian modelling tools---namely hierarchical nonparametrics---which enable dynamically adapting model complexity while faithfully reflecting the real-world assumptions laid out above.
Secondly, although automatic face tagging is a very common task, each problem setting can impose wildly different assumptions and constraints. Typical application domains involve the annotation of personal photo galleries \cite{Zhang2003,Zhao2006,Anguelov2007,Gallagher2009}, multimedia (e.g.~ TV) \cite{Tapaswi2012,Le2017} or security/surveillance \cite{Jafri2009}. Our work focuses on egocentric human-like face recognition, a setting which seems largely unexplored, as most of the work using first-person footage appears to revolve around other tasks like object and activity recognition, face detection, and tracking \cite{Betancourt2015}. As we explained previously, the dynamic, \emph{online} nature of first-person social experience brings a number of specific modelling challenges for face recognition.
Finally, while there is substantial prior work on using contexts to assist face recognition, we emphasize that much (perhaps most) of it is effectively complementary to our unified framework. Notions of \emph{global} context such as timestamp, geolocation and image background \cite{Torralba2003,Zhao2006,Anguelov2007,Choi2010} can readily be used to inform our current context model (\cref{sec:context_model}). In addition, we can naturally augment the proposed face model (\cref{sec:face_model}) to leverage further \emph{individual} context features, e.g.~ clothing and speech \cite{Zhao2006,Anguelov2007,Tapaswi2012,Le2017}. Integration of these additional factors opens exciting avenues for future research.
\section{A Model of Identities}
In this section, we describe in isolation each of the building blocks of the proposed approach to facial identity recognition: the context model, the identity model and the face model. We assume data is collected in the form of camera \emph{frames} (either photographs or video stills), numbered $1$ to $M$, and faces are cropped with some face detection system and grouped by frame number indicators, $f_n \in \{1,\dots,M\}$. The diagram in \cref{fig:model_diagram} illustrates the full proposed graphical model, including the label model detailed in \cref{sec:label_model}.
\begin{figure}[t]
\centering
\input{fig/full_context_label_colours.tex}
\caption{Overview of the proposed generative model, encompassing the \textcolor{ctxcol}{\bf context model}, \textcolor{idtcol}{\bf identity model}, \textcolor{faccol}{\bf face model} and \textcolor{labcol}{\bf label model}. Unfilled nodes represent latent variables, shaded nodes are observed, the half-shaded node is observed only for a subset of the indices and uncircled nodes denote fixed hyperparameters.
$\boldsymbol{\pi}_0$ and $(\boldsymbol{\pi}_c)_{c=1}^C$ are the global and context-wise identity probabilities, $\boldsymbol{\omega}$ denotes the context probabilities, $(c^*_m)_{m=1}^M$ are the frame-wise context labels, indexed by the frame numbers $(f_n)_{n=1}^N$, $(z_n)_{n=1}^N$ are the latent identity indicators, $(\*x_n)_{n=1}^N$ are the face observations and $(y_n)_{n=1}^N$ are the respective name annotations, $(\theta^*_i)_{i=1}^\infty$ are the parameters of the face model and $(y^*_i)_{i=1}^\infty$ are the identities' name labels. See text for descriptions of the remaining symbols.}
\label{fig:model_diagram}
\end{figure}
\subsection{Context Model}\label{sec:context_model}
In our identity recognition scenario, we imagine the user moving between contexts throughout the day (e.g.~ home--work--gym...). Since humans naturally use situational context as a strong prior on the groups of people we expect to encounter in each situation, we incorporate context-awareness in our model of identities to mimic human-like face recognition.
The context model we propose involves a categorical variable $c_n \in \{1,\dots,C\}$ for each observation, where $C$ is some fixed number of distinct contexts.\footnote{See footnote \labelcref{foot:unbounded_contexts}.} Crucially, we assume that all observations in frame $m$, $\mathcal{F}_m = \{n : f_n = m\}$, share the same context, $c^*_m$ (i.e.~ $\forall n,\, c_n = c^*_{f_n}$).
We define the identity indicators to be independent given the context of the corresponding frames (see \cref{sec:id_model}, below). However, since the contexts are tied by frame, marginalising over the contexts captures identity co-occurrence relations. In turn, these allow the model to make more confident predictions about people who tend to be seen together in the same environment.
This formalisation of contexts as discrete semantic labels is closely related to the place recognition model in \cite{Torralba2003}, used there to disambiguate predictions for object detection. It has also been demonstrated that explicit incorporation of a context variable can greatly improve clustering with mixture models \cite{Perdikis2015}.
Finally, we assume the context indicators $c^*_m$ are independently distributed according to probabilities $\boldsymbol{\omega}$, which themselves follow a Dirichlet prior:
\begin{align}
\boldsymbol{\omega} &\sim \operatorname{Dir}(\boldsymbol{\gamma}) \\
c^*_m \mathbin{|} \boldsymbol{\omega} &\sim \operatorname{Cat}(\boldsymbol{\omega}) \,, & m &= 1,\dots,M \,,
\end{align}
where $M$ is the total number of frames. In our simulation experiments, we use a symmetric Dirichlet prior, setting $\boldsymbol{\gamma} = (\gamma_0 / C, \dots, \gamma_0 / C)$.
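As a concrete illustration, the Dirichlet--categorical context process above can be sampled in a few lines. The following is a minimal NumPy sketch; the values of $C$, $M$ and $\gamma_0$ are toy choices, not the ones used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

C, M = 3, 50        # number of contexts and frames (toy values)
gamma0 = 1.0        # Dirichlet concentration hyperparameter

# Symmetric Dirichlet prior: gamma = (gamma0/C, ..., gamma0/C)
omega = rng.dirichlet(np.full(C, gamma0 / C))

# One categorical context indicator c*_m per frame
contexts = rng.choice(C, size=M, p=omega)
```

With a small $\gamma_0$, draws of $\boldsymbol{\omega}$ tend to be sparse, so a few contexts dominate the frame sequence.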
\subsection{Identity Model}\label{sec:id_model}
In the daily-life scenario described in \cref{sec:intro}, an increasing number of unique identities will tend to appear as more faces are observed. This number is expected to grow much more slowly than the number of observations, and can be considered unbounded in practice (we do not expect a user to run out of new people to meet). Moreover, we can expect some people to be encountered much more often than others. Since a Dirichlet process (DP) \cite{Ferguson1973} displays properties that mirror all of the above phenomena \cite{Teh2010}, it is a sound choice for modelling the distribution of identities.
Furthermore, the assumption that all people can potentially be encountered in any context, but with different probabilities, is perfectly captured by a hierarchical Dirichlet process (HDP) \cite{Teh2006}. Making use of the context model, we define one DP \emph{per context} $c$, each with concentration parameter $\alpha_c$ and sharing the same \emph{global} DP as a base measure.\footnote{\label{foot:unbounded_contexts}One could further allow an unbounded number of latent contexts by incorporating a nonparametric context distribution, resulting in a structure akin to the nested DP \cite{Rodriguez2008,Blei2010} or the dual DP described in \cite{Wang2009b}. See \cref{app:random_measure} for details.} This hierarchical construction thus produces context-specific distributions over a common set of identities.
We consider that each of the $N$ face detections is associated to a latent identity indicator variable, $z_n$. We can write the generative process as
\begin{align}
\boldsymbol{\pi}_0 &\sim \operatorname{GEM}(\alpha_0) \\
\boldsymbol{\pi}_c \mathbin{|} \boldsymbol{\pi}_0 &\sim \operatorname{DP}(\alpha_c, \boldsymbol{\pi}_0) \,, & c &= 1,\dots,C \\
z_n \mathbin{|} f_n = m, \*c^*, (\boldsymbol{\pi}_c)_c &\sim \operatorname{Cat}(\boldsymbol{\pi}_{c^*_m})\,, & n &= 1,\dots,N \,,
\end{align}
where $\operatorname{GEM}(\alpha_0)$ is the DP stick-breaking distribution, ${\pi_{0i} = \beta_i \prod_{j=1}^{i-1} (1-\beta_j)}$, with ${\beta_i \sim \operatorname{Beta}(1,\alpha_0)}$ and $i=1,\dots,\infty$. Here, $\boldsymbol{\pi}_0$ is the global identity distribution and $(\boldsymbol{\pi}_c)_{c=1}^C$ are the context-specific identity distributions.
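The stick-breaking construction and the context-level DPs can be sketched numerically. Below is a truncated approximation (truncation level $T$, toy concentration values): once the global measure is restricted to a finite set of atoms, a draw from $\operatorname{DP}(\alpha_c, \boldsymbol{\pi}_0)$ reduces to a Dirichlet draw with parameter vector $\alpha_c \boldsymbol{\pi}_0$.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 100                    # truncation level (toy value)
alpha0 = 5.0               # global DP concentration
alphas = [1.0, 2.0, 0.5]   # per-context concentrations alpha_c

# GEM(alpha0): pi_0i = beta_i * prod_{j<i} (1 - beta_j), beta_i ~ Beta(1, alpha0)
beta = rng.beta(1.0, alpha0, size=T)
pi0 = beta * np.concatenate(([1.0], np.cumprod(1.0 - beta)[:-1]))
pi0 /= pi0.sum()           # renormalise the truncated sticks

# With a finite atom set, DP(alpha_c, pi0) reduces to a Dirichlet draw
pi_c = np.stack([rng.dirichlet(a * pi0) for a in alphas])
```

Each row of `pi_c` is a context-specific reweighting of the same global identities, which is exactly the sharing behaviour the HDP provides.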
Although the full generative model involves infinite-dimensional objects, DP-based models present simple finite-dimensional marginals. In particular, the posterior predictive probability of encountering a known identity $i$ is
\begin{equation}\label{eq:prob_known}
p\CondBrackets{z_{N+1} = i}{c_{N+1}=c, \*z, \*c^*, \boldsymbol{\pi}_0}
= \frac{\alpha_c \pi_{0i} + N_{ci}}{\alpha_c + N_{c \cdot}} \,,
\end{equation}
where $N_{ci}$ is the number of observations assigned to context $c$ and identity $i$ and $N_{c\cdot}$ is the total number of observations in context $c$.
Finally, such a nonparametric model is well suited for an open-set identification task, as it can elegantly estimate the prior probability of encountering an unknown identity:
\begin{equation}\label{eq:prob_unknown}
p\CondBrackets{z_{N+1} = I+1}{c_{N+1}=c, \*z, \*c^*, \boldsymbol{\pi}_0}
= \frac{\alpha_c \pi_0'}{\alpha_c + N_{c \cdot}} \,,
\end{equation}
where $I$ is the current number of distinct known identities and ${\pi_0' = \sum_{i=I+1}^\infty \pi_{0i}}$ denotes the global probability of sampling a new identity.
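The two predictive probabilities above depend only on the counts $N_{ci}$ and the global weights, so they are cheap to evaluate. A small illustrative helper (the function name and toy numbers are ours, not from an implementation):

```python
import numpy as np

def identity_predictive(pi0_known, pi0_rest, N_c, alpha_c):
    """Posterior predictive over identities in context c.

    pi0_known : global weights pi_0i of the I known identities
    pi0_rest  : pi_0' = sum of the remaining global stick weights
    N_c       : counts N_ci of past observations per identity in context c
    """
    denom = alpha_c + N_c.sum()
    p_known = (alpha_c * pi0_known + N_c) / denom  # known identity i
    p_new = alpha_c * pi0_rest / denom             # previously unseen identity
    return p_known, p_new

# Toy example: 3 known identities with unequal usage in context c
pi0_known = np.array([0.4, 0.3, 0.1])
p_known, p_new = identity_predictive(
    pi0_known, 1.0 - pi0_known.sum(), np.array([5.0, 1.0, 0.0]), alpha_c=2.0)
```

Note that the known-identity probabilities and the new-identity probability sum to one, as required.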
\subsection{Face Model}\label{sec:face_model}
In face recognition applications, it is typically more convenient and meaningful to extract a compact representation of face features than to work directly in a high-dimensional pixel space.
We assume that the observed features of the $n$\textsuperscript{th} face, $\*x_n$, arise from a parametric family of distributions, $F_\mathrm{X}$. The parameters of this distribution, $\theta^*_i$, drawn from a prior, $H_\mathrm{X}$, are unique for each identity and are shared across all face feature observations of the same person:
\begin{align}
\theta^*_i &\sim H_\mathrm{X} \,, & i &= 1,\dots,\infty \\
\*x_n \mathbin{|} z_n, \boldsymbol{\theta}^* &\sim F_\mathrm{X}(\theta^*_{z_n}) \,, & n &= 1,\dots,N \,.
\end{align}
As a consequence, the marginal distribution of faces is given by a \emph{mixture model}: $p\CondBrackets{\*x_n}{c_n=c, \boldsymbol{\theta}^*, \boldsymbol{\pi}_c} = \sum_{i=1}^\infty \pi_{ci} F_\mathrm{X}(\*x_n \mathbin{|} \theta^*_i)$.
In the experiments reported in this paper, we used the 128-dimensional embeddings produced by OpenFace, a publicly available, state-of-the-art neural network for face recognition \cite{Amos2016}, implementing FaceNet's architecture and methodology \cite{Schroff2015}. In practice, this could easily be swapped for other face embeddings (e.g.~ DeepFace \cite{Taigman2014}) without affecting the remainder of the model. We chose isotropic Gaussian mixture components for the face features ($F_\mathrm{X}$), with an empirical Gaussian--inverse gamma prior for their means and variances ($H_\mathrm{X}$).
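For concreteness, the isotropic Gaussian component density used for the face features can be written as a short log-likelihood function. This is a sketch: the dimension 128 matches the OpenFace embeddings, while the means and variance below are toy stand-ins:

```python
import numpy as np

def face_loglik(x, mu, sigma2):
    """Log-density of an embedding x under the isotropic Gaussian component
    F_X = N(mu, sigma2 * I) associated with one identity."""
    d = x.shape[-1]
    return -0.5 * (d * np.log(2.0 * np.pi * sigma2)
                   + np.sum((x - mu) ** 2, axis=-1) / sigma2)

rng = np.random.default_rng(2)
mu = rng.normal(size=128)    # stand-in for an identity's mean embedding
x_near = mu + 0.1 * rng.normal(size=128)
x_far = mu + 1.0             # an embedding far from this identity's mean
```

An observation close to an identity's mean scores far higher than a distant one, which is what drives the mixture assignments.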
\section{Robust Semi-Supervised Label Model}\label{sec:label_model}
We expect to work with only a small number of labelled observations manually provided by the user. Since the final goal is to identify any observed face, our probabilistic model needs to incorporate a semi-supervised aspect, generalising the sparse given labels to unlabelled instances. Throughout this section, the terms `identity' and `cluster' will be used interchangeably.
One of the cornerstones of semi-supervised learning (SSL) is the premise that clustered items tend to belong to the same class \cite[\S 1.2.2]{Chapelle2006}. Building on this \emph{cluster assumption}, mixture models, such as ours, have been successfully applied to SSL tasks \cite{Bouveyron2009}. We illustrate in \cref{fig:labels} our proposed label model detailed below, comparing it qualitatively to nearest-neighbour classification on a toy example.
With the motivation above, we attach a label variable (a \emph{name}) to each cluster (identity), here denoted $y^*_i$. This notation suggests that there is a single true label ${\tilde{y}_n = y^*_{z_n}}$ for each observation $n$, analogously to the observation parameters: ${\theta_n = \theta^*_{z_n}}$. Finally, the observed labels, $y_n$, are potentially corrupted through some noise process, $F_\mathrm{Y}$. Let $\mathcal{L}$ denote the set of indices of the labelled data. The complete generative process is presented below:
\begin{align}
H_\mathrm{Y} &\sim \operatorname{DP}(\lambda, L) \\
y^*_i \mathbin{|} H_\mathrm{Y} &\sim H_\mathrm{Y} \,, & i &= 1,\dots,\infty \\
y_n \mathbin{|} z_n, \*y^*, H_\mathrm{Y} &\sim F_\mathrm{Y}(y^*_{z_n}; H_\mathrm{Y}) \,, & n &\in \mathcal{L} \,.
\end{align}
\begin{figure}[tb]
\centering
\includegraphics[width=.48\textwidth]{fig/label_nn.png}
\hfill
\includegraphics[width=.48\textwidth]{fig/label_model_dp.png}
\caption{Hard label predictions of the proposed semi-supervised label model (right) and nearest-neighbour classification (left). Points represent unlabelled face observations, squares are labelled and the black contours on the right show identity boundaries. The proposed label model produces more \emph{natural} boundaries, assigning the `unknown' label (white) to unlabelled clusters and regions distant from any observed cluster, while also accommodating label noise (`Bob' $\to$ `Alice') without the spurious boundaries introduced by NN.}
\label{fig:labels}
\end{figure}
As mentioned previously, a related model for mixture model-based SSL with noisy labels was proposed in \cite{Bouveyron2009}. Instead of considering an explicit noise model for the class labels, the authors of that work model directly the conditional label distribution for each cluster. Our setting here is more general: we assume not only an unbounded number of clusters, but also of possible labels.
\subsection{Label Prior}
We assume that the number of distinct labels will tend to increase without bounds as more data is observed. Therefore, we adopt a further nonparametric prior on the cluster-wide labels:
\begin{equation}
H_\mathrm{Y} \sim \operatorname{DP}(\lambda, L) \,,
\end{equation}
where $L$ is some base probability distribution over the countable but unbounded label space (e.g.~ strings).\footnote{One could instead consider a Pitman--Yor process if power-law behaviour seems more appropriate than the DP's exponential tails \cite{Pitman1997}.}
We briefly discuss the choice of $L$ further below.
All concrete knowledge we have about the random label prior $H_\mathrm{Y}$ comes from the set of observed labels, $\*y_\mathcal{L}$. Crucially, if we marginalise out $H_\mathrm{Y}$, the predictive label distribution is simply \cite{Teh2010}
\begin{equation}\label{eq:label_posterior}
y^*_{I+1} \mathbin{|} \*y^* \sim \frac{1}{\lambda + I} \biggl( \lambda L + \sum_{\ell \in \mathcal{Y}} J_\ell \delta_\ell \biggr) \,,
\end{equation}
which we will denote $\widehat{\LabelPrior}(y^*_{I+1} \mathbin{|} \*y^*)$. Here, $\mathcal{Y}$ is the set of distinct known labels among $\*y_\mathcal{L}$ and $J_\ell = |\{i : y^*_i = \ell\}|$ is the number of components with label $\ell$ (note that $\sum_\ell J_\ell = I$).
In addition to allowing multiple clusters to have repeated labels, this formulation allows us to reason about \emph{unseen} labels. For instance, some of the learned clusters may have no labelled training points assigned to them, and the true (unobserved) labels of those clusters may never have been encountered among the training labels. Another situation in which unseen labels come into play is with points away from any clusters, for which the identity model would allocate a new cluster with high probability. In both cases, this model gives us a principled estimate of the probability of assigning a special `unknown' label.
The base measure $L$ may be defined over a rudimentary language model. For this work, we adopted a geometric/negative binomial model for the string length $|\ell|$, with characters drawn uniformly from an alphabet of size $K$:
\begin{equation}
L_{\phi, K}(\ell) = \operatorname{Geom}(|\ell|; \tfrac{1}{\phi}) \operatorname{Unif}(\ell; K^{|\ell|})
= \frac{1}{\phi-1} \left( \frac{\phi - 1}{\phi K} \right)^{|\ell|} \,,
\end{equation}
where $\phi$ is the expected string length.
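This base measure is easiest to evaluate in log space. The sketch below (our own helper, with toy $\phi$ and $K$) also verifies that the total probability over all strings of each length forms a geometric series summing to one:

```python
import math

def log_base_measure(label, phi=6.0, K=26):
    """log L_{phi,K}(ell): geometric prior on string length (mean phi),
    characters uniform over an alphabet of size K."""
    n = len(label)
    return -math.log(phi - 1.0) + n * (math.log(phi - 1.0) - math.log(phi * K))

# Sum over lengths of K^n * L(string of length n) is a geometric series -> 1
total = sum(math.exp(n * math.log(26) + log_base_measure("a" * n))
            for n in range(1, 400))
```

Longer names are exponentially less likely a priori, which gently penalises instantiating long, never-seen labels.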
\subsection{Label Likelihood}
In the simplest case, we could consider $F_\mathrm{Y}(\cdot) = \delta_\cdot$, i.e.~ noiseless labels. Although straightforward to interpret and implement, this would make inference highly unstable whenever conflicting labels arise for an identity. Moreover, in our application, the labels are provided by a human user, who may not have perfect knowledge of the target person's true name or its spelling, for example.
Therefore, we incorporate a label noise model, which can gracefully handle conflicts and mislabelling. We assume observed labels are noisy completely at random (NCAR) \cite[\S II-C]{Frenay2014}, with a fixed error rate $\varepsilon$:%
\footnote{The `true' label likelihood $F_\mathrm{Y}(\ell \mathbin{|} y^*_i; H_\mathrm{Y})$ is random due to its dependence on the unobserved prior $H_\mathrm{Y}$. We thus define $\widehat{\LabelLik}$ as its posterior expectation given the known identity labels $\*y^*$. See \cref{app:label_lik} for details.}
\begin{equation}\label{eq:label_marginal}
\widehat{\LabelLik}(\ell \mathbin{|} y^*_i; \*y^*) = \begin{cases}
1-\varepsilon \,, & \ell = y^*_i \\
\varepsilon \frac{\widehat{\LabelPrior}(\ell \mathbin{|} \*y^*)}{1-\widehat{\LabelPrior}(y^*_i \mathbin{|} \*y^*)} \,, & \ell \neq y^*_i
\end{cases} \,.
\end{equation}
Intuitively, an observed label, $y_n$, agrees with its identity's assigned label, $y^*_{z_n}$, with probability $1-\varepsilon$. Otherwise, it is assumed to come from a modified label distribution, in which we restrict and renormalise $\widehat{\LabelPrior}$ to exclude $y^*_{z_n}$. Here we use $\widehat{\LabelPrior}$ in the error distribution instead of $L$ to reflect that a user is likely to mistake a person's name for another known name, rather than for a completely random string.
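A minimal sketch of this noise model, combining the CRP-style label predictive of \cref{eq:label_posterior} with the NCAR likelihood of \cref{eq:label_marginal} (function names and the trivial toy base measure are ours):

```python
from collections import Counter

def label_predictive(label, known_labels, lam, base_prob):
    """CRP-style predictive H_hat(label | y*): base measure L mixed with
    counts J_ell of identity clusters already carrying each label."""
    J = Counter(known_labels)
    return (lam * base_prob(label) + J[label]) / (lam + len(known_labels))

def label_likelihood(observed, true_label, known_labels, eps, lam, base_prob):
    """NCAR noise model: correct with probability 1 - eps; otherwise drawn
    from the predictive restricted to exclude the true label, renormalised."""
    if observed == true_label:
        return 1.0 - eps
    num = label_predictive(observed, known_labels, lam, base_prob)
    den = 1.0 - label_predictive(true_label, known_labels, lam, base_prob)
    return eps * num / den

# Toy check with a trivial base measure and three known identity labels
base = lambda l: 0.5
known = ["alice", "alice", "bob"]
p_ok = label_likelihood("alice", "alice", known, 0.1, 1.0, base)
p_err = label_likelihood("bob", "alice", known, 0.1, 1.0, base)
```

Because the error mass is spread over known names rather than arbitrary strings, a mislabelling such as `Bob' for `Alice' retains non-negligible likelihood.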
\subsection{Label Prediction}
For label prediction, we are only concerned with the true, noiseless labels, $\tilde{y}_n$. The predictive distribution for a single new sample is given by
\begin{equation}
\begin{split}
\hspace{2em}&\hspace{-2em} p\CondBrackets{\tilde{y}_{N+1} = \ell}{\*x_{N+1}, \*z, \*c^*, \*y^*, \boldsymbol{\theta}^*, \boldsymbol{\pi}_0} \\
&= \sum_{i \leq I : y^*_i = \ell} p\CondBrackets{z_{N+1} = i}{\*x_{N+1}, \*z, \*c^*, \boldsymbol{\theta}^*, \boldsymbol{\pi}_0} \\
&\qquad + \widehat{\LabelPrior}(y^*_{I+1} = \ell \mathbin{|} \*y^*) \, p\CondBrackets{z_{N+1} = I+1}{\*x_{N+1}, \*z, \*c^*, \boldsymbol{\theta}^*, \boldsymbol{\pi}_0} \,.
\end{split}
\end{equation}
The sum in the first term is the probability of the sample being assigned to any of the existing identities that have label $\ell$, while the last term is the probability of instantiating a new identity with that label.
\section{Evaluation}
One of the main strengths of the proposed model is that it creates a single rich representation of the known world, which can then be queried from various angles to obtain distinct insights. In this spirit, we designed three experimental setups to assess different properties of the model: detecting whether a person has been seen before (outlier detection), recognising faces as different identities in a sequence of frames (clustering, unsupervised) and correctly naming observed faces by generalising sparse user annotations (semi-supervised learning).
In all experiments, we used celebrity photographs from the Labelled Faces in the Wild (LFW) database \cite{Huang2007}.\footnote{Available at: \url{http://vis-www.cs.umass.edu/lfw/}} We have implemented inference via Gibbs Markov chain Monte Carlo (MCMC) sampling, whose conditional distributions can be found in \cref{app:gibbs}, and we run multiple chains with randomised initial conditions to better estimate the variability in the posterior distribution. For all metrics evaluated on our model, we report the estimated 95\% highest posterior density (HPD) credible intervals over pooled samples from 8 independent Gibbs chains, unless stated otherwise.
\subsection{Experiment 1: Unknown Person Detection}\label{sec:eval_unknown}
In our first set of experiments, we study the model's ability to determine whether or not a person has been seen before. This key feature of the proposed model is evaluated based on the probability of an observed face not corresponding to any of the known identities, as given by \cref{eq:prob_unknown}. In order to evaluate purely the detection of unrecognised faces, we constrained the model to a single context ($C=1$) and set aside the label model ($\mathcal{L} = \emptyset$).
This task is closely related to outlier/anomaly detection. In particular, our proposed approach mirrors one of its common formulations, involving a mixture of a `normal' distribution, typically fitted to some training data, and a flatter `anomalous' distribution\footnote{The predictive distribution of $\*x_n$ for new identities is a wide Student's $t$.} \cite[\S 7.1.3]{Chandola2009}.
We selected the 19 celebrities with at least 40 pictures available in LFW and randomly split them into two groups: 10 known and 9 unknown people. We used 27 images of each of the \emph{known} people as training data and a disjoint test set of 13 images of each of the \emph{known} and \emph{unknown} people. We therefore have a binary classification setting with well-balanced classes at test time. Here, we ran our Gibbs sampler for 500 steps, discarding the first 100 burn-in iterations and thinning by a factor of 10, resulting in 320 pooled samples.
\begin{figure}[t]
\centering
\subfloat[][Association matrix, counting agreements in the MAP identity predictions (including \emph{unknown}). Ticks delimit ground-truth identities.]
{\includegraphics[scale=.44]{fig/similarity_matrix.pdf}\label{fig:assoc_matrix}}
\hfill
\subfloat[][ROC analysis for unknown person detection compared with baselines. AUC is shown with median and 50\% and 95\% HPD intervals.]
{\includegraphics[scale=.44]{fig/unknown_roc.pdf}\label{fig:unknown_roc}}
\caption{Results of the unknown person detection experiment on test images}
\end{figure}
In \cref{fig:assoc_matrix}, we visualise the agreements between maximum \textit{a posteriori} (MAP) identity predictions for test images:
\begin{equation}
\hat{z}_n = \arg\max_i p\CondBrackets{z_n=i}{\*x_n, \*z, \*c^*, \boldsymbol{\pi}_0, \boldsymbol{\theta}^*} \,,
\end{equation}
where $i$ ranges from $1$ to $I+1$, the latter indicating an \emph{unknown} identity, absent from the training set, and $n$ indexes the test instances. Despite occasional ambiguous cases, the proposed model seems able to consistently group together all unknown faces, while successfully distinguishing between known identities.
As a simple baseline detector for comparison, we consider a threshold on the distance to the nearest neighbour (NN) in the face feature space \cite[\S 5.1]{Chandola2009}. We also evaluate the decision function of a one-class SVM \cite{Scholkopf2001}, using an RBF kernel with $\gamma = 10$, chosen via leave-one-person-out cross-validation on the training set (roughly equivalent to thresholding the training data's kernel density estimate with bandwidth $1/\sqrt{2\gamma} \approx 0.22$). We compare the effectiveness of both detection approaches using ROC curve analysis.
\Cref{fig:unknown_roc} shows that, while all methods are highly effective at detecting unknown faces, scoring $95\%+$ AUC, ours consistently outperforms, by a small margin, both the NN baseline and the purpose-designed one-class SVM. Taking the MAP prediction, our model achieves $[92.3\%, 94.3\%]$ detection accuracy.
\subsection{Experiment 2: Identity Discovery}
We then investigate the clustering properties of the model in a purely unsupervised setting, when only context is provided. We evaluate the consistency of the estimated partitions of images into identities with the ground truth in terms of the adjusted Rand index \cite{Rand1971,Hubert1985}.
Using simulations, besides having an endless source of data with ground-truth context and identity labels, we have full control over several important aspects of the experimental setup, such as sequence lengths, rates of encounters, numbers of distinct contexts and people, and the amount of provided labels. Below we describe the simulation algorithm used in our experiments and illustrated in \cref{fig:frame_simulation}.
In our experiments we aim to simulate two important aspects of real-world identity recognition settings:
\begin{enumerate}
\item \emph{Context}: knowing the context (e.g.~location or time) makes it more likely for us to observe a particular subset of people; and
\item \emph{Temporal consistency}: identities do not appear and disappear at random, but instead tend to remain present for longer stretches of time.
\end{enumerate}
To reproduce contexts, we simulate a single session of a user meeting new people. To this end, we first create a number of fixed contexts and then assign identities uniformly at random to each context. For these experiments, we defined three contexts: `home', `work' and `gym'. At any time, the user knows their own context and transitions between contexts over time: independently at each frame, the user may switch context with a small probability.
To simulate temporal consistency, each person in the current context enters and leaves the camera frame as an independent binary Markov chain. As shown in \cref{fig:frame_simulation} this naturally produces grouped observations. The image that is observed for each `detected' face is sampled from the person's pictures available in the database. We sample these images without replacement and in cycles, to avoid observing the same image consecutively.
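A minimal sketch of this simulator is given below; all rate parameters are illustrative placeholders rather than the values used in our experiments.

```python
import numpy as np

def simulate_frames(n_frames=200, n_contexts=3, n_people=15,
                    p_switch=0.05, p_enter=0.2, p_leave=0.2, seed=0):
    """Sketch of the Experiment-2 simulation: a user moves between fixed
    contexts, and each person assigned to the current context enters/leaves
    the camera frame as an independent binary Markov chain."""
    rng = np.random.default_rng(seed)
    context_of = rng.integers(n_contexts, size=n_people)  # fixed assignment
    context = int(rng.integers(n_contexts))               # user's context
    present = np.zeros(n_people, dtype=bool)
    contexts, frames = [], []
    for _ in range(n_frames):
        if rng.random() < p_switch:                # occasional context switch
            context = int(rng.integers(n_contexts))
        for i in range(n_people):
            if context_of[i] != context:
                present[i] = False                 # not in the current context
            elif present[i]:
                present[i] = rng.random() >= p_leave   # may leave the frame
            else:
                present[i] = rng.random() < p_enter    # may enter the frame
        contexts.append(context)
        frames.append(np.flatnonzero(present).tolist())
    return contexts, frames, context_of
```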
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{fig/frame_identities.pdf}
\caption{The simulation used in Experiment 2, showing identities coming in and out of the camera frame. Identities are shown grouped by their context (far right), and shading indicates identities present in the user's current context.}
\label{fig:frame_simulation}
\end{figure}
For this set of experiments, we consider three practical scenarios:
\begin{itemize}
\item \emph{Online:} data is processed on a frame-by-frame basis, i.e.~ we extend the training set after each frame and run the Gibbs sampler for 10 full iterations;
\item \emph{Batch:} same as above, but we enqueue data for 20 frames before extending the training set and updating the model for 200 steps;
\item \emph{Offline:} we assume the entire sequence is available at once and iterate for 1000 steps.
\end{itemize}
In the interest of fairness, the number of steps for each protocol was selected to give them roughly the same overall computation budget (ca.~200\,000 frame-wise steps). In addition, we study the impact on recognition performance of disabling the context model, by setting $C=1$ and $c^*_m=1, \forall m$.
\begin{figure}[t]
\centering
\subfloat[][With contexts]
{\includegraphics[width=.49\linewidth]{fig/fixed-contexts.pdf}\label{fig:fixed_contexts}}
\hfill
\subfloat[][Without contexts]
{\includegraphics[width=.49\linewidth]{fig/no-contexts.pdf}\label{fig:no_contexts}}
\caption{Identity clustering consistency. Markers on the horizontal axis (\protect\tikz{\protect\node[star,fill=red!80!black,star point ratio=2.25,inner sep=1pt] {};}) indicate when new people are met for the first time.}
\label{fig:discovery_results}
\end{figure}
We show the results of this experiment in \cref{fig:discovery_results}. As expected, the problem grows more challenging as more identities are met over time, and clustering performance tends to decrease. Another general observation is that online processing produced much lower variance than batch or offline processing in both cases. The incremental availability of training data therefore seems to lead to more coherent states of the model.
Now, comparing \cref{fig:fixed_contexts,fig:no_contexts}, it is evident that context-awareness not only reduces variance but also shows marginal improvements over the context-oblivious variant. Thus, without hurting recognition performance, the addition of a context model enables the \emph{prediction} of context at test time, which may be useful for downstream user-experience systems.
\subsection{Experiment 3: Semi-Supervised Labelling}
In our final set of experiments, we aimed to validate the application of the proposed label model for semi-supervised learning with sparse labels.
In the context of face identification, we may define three groups of people:
\begin{itemize}
\item \emph{Acquainted:} known identity with known name
\item \emph{Familiar:} known identity with unknown name
\item \emph{Stranger:} unknown identity
\end{itemize}
We thus selected the 34 LFW celebrities with more than 30 pictures, and split them roughly equally into these three categories at random. From the \emph{acquainted} and \emph{familiar} groups, we randomly picked 15 of their images for training and 15 for testing, and we used 15 pictures of each \emph{stranger} at test time only. We evaluated the label prediction accuracy as we varied the number of labelled training images provided for each acquaintance, from 1 to 15.
For baseline comparison, we evaluate nearest-neighbour classification (NN) and label propagation (LP) \cite{Zhu2002}, a similarity graph-based semi-supervised algorithm. We computed the LP edge weights with the same kernel as the SVM in \cref{sec:eval_unknown}. Recall that the face embedding network was trained with a triplet loss to explicitly optimise Euclidean distances for classification \cite{Amos2016}. As both NN and LP are distance-based, they are therefore expected to hold an advantage over our model for classifying labelled identities.
\begin{figure}[t]
\centering
\subfloat[][Acquaintances]
{\includegraphics[width=.49\linewidth]{fig/ssl_acqu.pdf}\label{fig:ssl_acqu}}
\hfill
\subfloat[][Familiar and strangers]
{\includegraphics[width=.49\linewidth]{fig/ssl_fami_stra.pdf}\label{fig:ssl_fami_stra}}
\caption{Label prediction accuracy. Note that NN and LP effectively have null accuracy for the \emph{familiar} and \emph{strangers} groups, as they cannot predict `unknown'.}
\label{fig:ssl_results}
\end{figure}
\Cref{fig:ssl_acqu} shows the label prediction results for the labelled identities (acquaintances).
In this setting, NN and LP performed nearly identically and slightly better than ours, likely due to the favourable embedding structure.
Moreover, all methods predictably become more accurate as more supervision is introduced in the training data.
More importantly, the key distinctive capabilities of our model are demonstrated in \cref{fig:ssl_fami_stra}. As already discussed in \cref{sec:eval_unknown}, the proposed model is capable of detecting complete strangers, and here we see that it correctly predicts that their name is unknown. Furthermore, our model can acknowledge that familiar faces belong to different people, whose names may not be known. Neither of these functionalities is provided by the baselines, as they are limited to the closed-set identification task.
\section{Conclusion}
In this work, we introduced a fully Bayesian treatment of the face identification problem. Each component of our proposed approach was motivated from human intuition about face recognition and tagging in daily social interactions. Our principled identity model can accommodate an unbounded population of identities, accounting for context-specific probabilities of meeting them.
We demonstrated that the proposed identity model can accurately detect when a face is unfamiliar, and is able to incrementally learn to differentiate between new people as they are met in a streaming data scenario. Lastly, we verified that our approach to dealing with sparse name annotations can handle not only acquaintances, whose names are known, but also familiar faces and complete strangers in a unified manner---a functionality unavailable in conventional (semi-) supervised identification methods.
Here we considered a fully supervised context structure. As mentioned in \cref{sec:related_work}, one could imagine an unsupervised approach involving global visual or non-visual signals to drive context inference (e.g.~ global image features, time or GPS coordinates), in addition to extensions to the face model with individual context information (e.g.~ clothing, speech). Yet another interesting research direction is to explicitly consider time dependence, e.g.~ by endowing the sequence of latent contexts with a hidden Markov model-like structure \cite{Torralba2003}.
\subsubsection*{Acknowledgement.}
This work was partly supported by CAPES, Brazil (BEX 1500/2015-05).
\bibliographystyle{splncs04}
\section{Random Measure Interpretation}\label{app:random_measure}
While the exposition in the main text considers the explicit representation of the nonparametric model in terms of weights ($\boldsymbol{\pi}_0$ and $(\boldsymbol{\pi}_c)_{c=1}^C$) and atom locations ($(\theta^*_i, y^*_i)_{i=1}^\infty$), here we also provide the interpretation in terms of random measures:
\begin{align}
H_\mathrm{Y} &\sim \operatorname{DP}(\lambda, L) \\
G_0 \mathbin{|} H_\mathrm{Y} &\sim \operatorname{DP}(\alpha_0, H_\mathrm{X} \otimes H_\mathrm{Y}) \\
G_c \mathbin{|} G_0 &\sim \operatorname{DP}(\alpha_c, G_0) \,, & c &= 1,\dots,C \label{eq:rm1} \\
\boldsymbol{\omega} &\sim \operatorname{Dir}(\boldsymbol{\gamma}) \label{eq:rm2} \\
c^*_m \mathbin{|} \boldsymbol{\omega} &\sim \operatorname{Cat}(\boldsymbol{\omega}) \,, & m &= 1,\dots,M \label{eq:rm3} \\
(\theta_n, \tilde{y}_n) \mathbin{|} f_n = m, \*c^*, (G_c)_c
&\sim G_{c^*_m} \,, & n &= 1,\dots,N \label{eq:rm4} \\
\*x_n \mathbin{|} \theta_n &\sim F_\mathrm{X}(\theta_n) \,, & n &= 1,\dots,N \\
y_n \mathbin{|} \tilde{y}_n, H_\mathrm{Y}
&\sim F_\mathrm{Y}(\tilde{y}_n) \,,& n &\in \mathcal{L} \,.
\end{align}
Note that, under this perspective,
\[
G_0 = \sum_{i=1}^\infty \pi_{0i} \delta_{(\theta^*_i, y^*_i)} \quad \text{and} \quad
G_c = \sum_{i=1}^\infty \pi_{ci} \delta_{(\theta^*_i, y^*_i)} \,.
\]
Now, if we let $C \to \infty$ as mentioned in footnote \labelcref{foot:unbounded_contexts}, assuming that $\forall c, {\alpha_c = \alpha}$, and $\boldsymbol{\gamma} = (\tfrac{\gamma_0}{C}, \dots, \tfrac{\gamma_0}{C})$, we obtain the following nested-hierarchical Dirichlet process:
\begin{align}
Q \mathbin{|} G_0 &\sim \operatorname{DP}(\gamma_0, \operatorname{DP}(\alpha, G_0)) \\
G_n \mathbin{|} Q &\sim Q \,, & n &= 1,\dots,N \\
(\theta_n, \tilde{y}_n) \mathbin{|} G_n
&\sim G_n \,, & n &= 1,\dots,N \,,
\end{align}
replacing \crefrange{eq:rm1}{eq:rm4}.
\section{Label Likelihood}\label{app:label_lik}
Given the label prior, $H_\mathrm{Y}$, we can formulate the following likelihood model:
\begin{equation}\label{eq:label_lik_unobs}
F_\mathrm{Y}(\ell \mathbin{|} y^*_i) = \begin{cases}
1-\varepsilon \,, & \ell = y^*_i \\
\varepsilon \frac{H_\mathrm{Y}(\ell)}{1-H_\mathrm{Y}(y^*_i)} \,, & \ell \neq y^*_i
\end{cases} \,.
\end{equation}
Note that \cref{eq:label_lik_unobs} depends on the unobserved label prior $H_\mathrm{Y}$. Fortunately, we are able to marginalise over $H_\mathrm{Y}$ to obtain the following convenient result, given $\ell \neq y^*_i$:
\begin{equation}\label{eq:label_expec_prob_wrong}
\mathbb{E}\expecfences*{\frac{H_\mathrm{Y}(\ell)}{1-H_\mathrm{Y}(y^*_i)}}{\*y^*}
= \frac{\widehat{\LabelPrior}(\ell \mathbin{|} \*y^*)}{1-\widehat{\LabelPrior}(y^*_i \mathbin{|} \*y^*)} \,,
\end{equation}
where $\widehat{\LabelPrior}$ is defined as in Eq.\ (14) (main paper). This straightforward equivalence arises from the fact that posterior weights in a DP follow a Dirichlet distribution and are therefore neutral: after removing one weight, the proportions between the remaining ones are independent of its value, and they simply follow a Dirichlet distribution with that component discarded.
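For concreteness, the neutrality property invoked here is the following standard fact about finite Dirichlet vectors (which carries over to the Dirichlet-distributed posterior weights of the DP): if $(\pi_1, \dots, \pi_K) \sim \operatorname{Dir}(a_1, \dots, a_K)$, then for $j \neq i$,

```latex
\mathbb{E}\left[\frac{\pi_j}{1-\pi_i}\right]
  = \mathbb{E}\left[\frac{\pi_j}{\sum_{k \neq i} \pi_k}\right]
  = \frac{a_j}{\sum_{k \neq i} a_k} \,,
```

since the renormalised subvector $(\pi_k / (1-\pi_i))_{k \neq i}$ is itself Dirichlet with parameters $(a_k)_{k \neq i}$, independently of $\pi_i$.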
We can then formulate an alternative likelihood, which depends on $\*y^*$:
\begin{equation}
\widehat{\LabelLik}(\ell \mathbin{|} y^*_i; \*y^*) = \begin{cases}
1-\varepsilon \,, & \ell = y^*_i \\
\varepsilon \frac{\widehat{\LabelPrior}(\ell \mathbin{|} \*y^*)}{1-\widehat{\LabelPrior}(y^*_i \mathbin{|} \*y^*)} \,, & \ell \neq y^*_i
\end{cases} \,.
\end{equation}
Although the marginalisation in \cref{eq:label_expec_prob_wrong} breaks the conditional independence of the true component labels, it gives us a simple, tractable form for the likelihoods of observed labels.
The simpler case of uniform label noise, discussed in \cite{Frenay2014}, could not easily be extended to our context with infinite support, as this would result in an improper likelihood $F_\mathrm{Y}$.
\section{Gibbs Sampler Conditionals}\label{app:gibbs}
The Gibbs sampler targets the joint posterior
\[
p\CondBrackets{\*z, \*y^*, \boldsymbol{\theta}^*, \boldsymbol{\pi}}{\*X, \*y_\mathcal{L}, \*c^*} \,,
\]
whose conditional distributions we derive below.
\subsection{Global Weights}
As suggested in \cite{Teh2006}, we augment our Markov chain state with the weights of the global DP $G_0$, such that the context DPs $(G_c)_c$ become conditionally independent and can be sampled in parallel:
\begin{equation}
\boldsymbol{\pi}_0 = (\pi_{01}, \dots, \pi_{0I}, \pi_0') \mathbin{|} \*T \sim \operatorname{Dir}(T_{\cdot 1}, \dots, T_{\cdot I}, \alpha_0) \,,
\end{equation}
where $I$ is the current number of distinct identities, $\pi_0'$ is the weight of $G_0$'s base measure ($\pi_0' = \sum_{i=I+1}^\infty \pi_{0i}$) and $T_{\cdot i} = \sum_{c=1}^C T_{ci}$ are auxiliary variables counting the total number of `tables' (context-wise clusters) having `dish' (global cluster) $i$, in the Chinese restaurant analogy \cite{Teh2006}.
Finally, to sample the table counts $\*T$ conditioned on the global weights $\boldsymbol{\pi}$ and identity and context assignments $\*z$ and $\*c^*$, we use a similar scheme to the one presented in \cite{Dai2015}:
\begin{equation}
T_{ci} = \sum_{n=1}^{N_{ci}} \indicator*{u_n \leq \frac{\alpha_c \pi_{0i}}{\alpha_c \pi_{0i} + n}} \,,
\end{equation}
where $(u_n)_{n=1}^{N_{ci}}$ are uniformly sampled from $[0,1]$.
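The auxiliary-uniform scheme above can be sketched in a few lines; this is an illustrative transcription of the displayed formula, with our own variable names.

```python
import numpy as np

def sample_table_count(n_ci, alpha_c, pi_0i, rng):
    """Sample T_ci: one Bernoulli per customer n = 1..N_ci, with success
    probability alpha_c * pi_0i / (alpha_c * pi_0i + n), as in the text."""
    n = np.arange(1, n_ci + 1)
    p = alpha_c * pi_0i / (alpha_c * pi_0i + n)
    return int((rng.random(n_ci) <= p).sum())
```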
\subsection{Identity Assignments}
For the unlabelled instances, we have
\begin{equation}\label{eq:identity_gibbs}
p\CondBrackets{z_n}{\*X, \*y_\mathcal{L}, \*z_{-n}, \*c^*, \*y^*, \boldsymbol{\theta}^*, \boldsymbol{\pi}_0} \propto \begin{cases}
F_\mathrm{X}(\*x_n \mathbin{|} \theta^*_i) \, p\CondBrackets{z_n=i}{\*z_{-n}, \*c^*, \boldsymbol{\pi}_0} \,, \\
\widetilde{\ObsLik}(\*x_n) \, p\CondBrackets{z_n \text{ new}}{\*z_{-n}, \*c^*, \boldsymbol{\pi}_0} \,,
\end{cases}
\end{equation}
where $\widetilde{\ObsLik}(\*x) = \int F_\mathrm{X}(\*x \mathbin{|} \theta) H_\mathrm{X}(\theta) \,\mathrm{d}\theta$, the prior predictive distribution of the observations. The Chinese restaurant franchise conditionals $p\CondBrackets{z_n}{\*z_{-n}, \*c^*, \boldsymbol{\pi}_0}$ are given by \cite{Teh2006}
\begin{equation}\label{eq:crf_conditional}
p\CondBrackets{z_n = i}{\*z_{-n}, \*c^*, \boldsymbol{\pi}_0} \propto \begin{cases}
N_{c_n i}^{-n} + \alpha_{c_n} \pi_{0i} \,, & N_{c_n i}^{-n} > 0 \\
\alpha_{c_n} \pi_0' \,, & i \text{ new}
\end{cases} \,,
\end{equation}
where $N_{ci} = |\{n : c_n=c \wedge z_n=i\}|$, i.e.~ the number of samples in context $c$ assigned to cluster $i$.
The global weights $\boldsymbol{\pi}$ are updated whenever an instance gets assigned to a new cluster, by splitting $\pi_0'$ according to the stick-breaking process: sample ${\beta \sim \operatorname{Beta}(1, \alpha_0)}$, then set ${\pi_{0,I+1} \gets \beta \pi_0'}$ and ${\pi_0' \gets (1-\beta) \pi_0'}$ \cite{Teh2006}.
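These two updates, the CRF assignment probabilities of \cref{eq:crf_conditional} and the stick-breaking split of $\pi_0'$, can be sketched as follows; variable names are illustrative, not from the paper.

```python
import numpy as np

def crf_probs(counts_c, alpha_c, pi0, pi0_rest):
    """Unnormalised CRF probabilities for z_n: counts_c[i] = N_{c_n i}^{-n};
    the appended last entry corresponds to opening a new cluster."""
    p = counts_c + alpha_c * pi0             # existing clusters
    return np.append(p, alpha_c * pi0_rest)  # new cluster

def stick_break_new(pi0, pi0_rest, alpha0, rng):
    """Split pi0' when a new cluster is created: beta ~ Beta(1, alpha0),
    pi_{0,I+1} <- beta * pi0', pi0' <- (1 - beta) * pi0'."""
    beta = rng.beta(1.0, alpha0)
    return np.append(pi0, beta * pi0_rest), (1.0 - beta) * pi0_rest
```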
For $n \in \mathcal{L}$, there is an additional term accounting for the likelihood of the observed label:
\begin{multline}
p\CondBrackets{z_n}{\*X, \*y_\mathcal{L}, \*z_{-n}, \*c^*, \*y^*, \boldsymbol{\theta}^*, \boldsymbol{\pi}_0} \\
\propto F_\mathrm{X}(\*x_n \mathbin{|} \theta^*_{z_n}) \, \widehat{\LabelLik}(y_n \mathbin{|} y^*_{z_n}; \*y^*) \, p\CondBrackets{z_n}{\*z_{-n}, \*c^*, \boldsymbol{\pi}_0} \,.
\end{multline}
\subsection{Contexts}
\begin{equation}\label{eq:context_gibbs}
p\CondBrackets{c^*_m}{\*X, \*y_\mathcal{L}, \*z, \*c^*_{-m}, \*y^*, \boldsymbol{\theta}^*, \boldsymbol{\pi}_0}
\propto p\CondBrackets{\*z_{\mathcal{F}_m}}{\*z_{-\mathcal{F}_m}, \*c^*, \boldsymbol{\pi}_0} \, p\CondBrackets{c^*_m}{\*c^*_{-m}} \,.
\end{equation}
The context posterior predictive distribution is
\begin{equation}
p\CondBrackets{c^*_m = c}{\*c^*_{-m}} \propto \frac{\gamma_0}{C} + M_c^{-m} \,,
\end{equation}
where $M_c^{-m}$ is the number of frames assigned to context $c$, excluding frame $m$.
The conditional distribution for the identities in frame $m$ can be computed via sequential application of \cref{eq:crf_conditional}:
\begin{equation}
p\CondBrackets{\*z_{\mathcal{F}_m}}{\*z_{-\mathcal{F}_m}, \*c^*, \boldsymbol{\pi}_0} = \prod_{r=1}^{|\mathcal{F}_m|} p\CondBrackets{z_{\mathcal{F}_m}^{(r)}}{\*z_{\mathcal{F}_m}^{(<r)}, \*z_{-\mathcal{F}_m}, \*c^*, \boldsymbol{\pi}_0} \,,
\end{equation}
where $r$ indexes observations within each single frame. Note that, due to exchangeability of the HDP, the order of iteration of $r$ is inconsequential.
\subsection{Labels}
Let $J_\ell^{-i} = |\{j : y^*_j = \ell \wedge j \neq i\}|$, the number of identities with label $\ell$ excluding identity $i$, and $\mathcal{L}^{(i)} = \{n \in \mathcal{L} : z_n = i\}$, the indices of labelled observations assigned to identity $i$. We can then write the Gibbs identity label predictive as
\begin{equation}
\widehat{\LabelPrior}(y^*_i \mathbin{|} \*y^*_{-i}) = \frac{1}{\lambda + I - 1} \begin{cases}
\lambda L(\ell) + J_\ell^{-i} \,, & y^*_i = \ell \in \mathcal{Y} \\
\lambda (1 - L(\mathcal{Y})) \,, & y^*_i \notin \mathcal{Y}
\end{cases} \,,
\end{equation}
where $\mathcal{Y}$ is the set of all known labels, whether allocated to components or not. Additionally, recall that the label likelihood is
\begin{equation}
\widehat{\LabelLik}(y \mathbin{|} \ell; \*y^*) = (1 - \varepsilon)^{\indicator{y = \ell}} \left[ \varepsilon \frac{\widehat{\LabelPrior}(y \mathbin{|} \*y^*)}{1 - \widehat{\LabelPrior}(\ell \mathbin{|} \*y^*)} \right]^{\indicator{y \neq \ell}} \,.
\end{equation}
The probability of assigning a label $\ell$ to identity $i$, given the remaining identity labels, can be computed as
\begin{equation}
\begin{split}
\hspace{2em}&\hspace{-2em} p\CondBrackets{y^*_i = \ell}{\*y_\mathcal{L}, \*z_\mathcal{L}, \*y^*_{-i}}
\propto \widehat{\LabelPrior}(\ell \mathbin{|} \*y^*_{-i}) \prod_{n \in \mathcal{L}^{(i)}} \widehat{\LabelLik}(y_n \mathbin{|} \ell; \*y^*) \\
&\propto \frac{\lambda L(\ell) + J_\ell^{-i}}{\lambda + I - 1}
(1 - \varepsilon)^{|\mathcal{L}_\ell^{(i)}|}
\prod_{k \in \mathcal{Y} \setminus \{\ell\}} \left[ \frac{\varepsilon (\lambda L(k) + J_k)}{\lambda + I - (\lambda L(\ell) + J_\ell)} \right]^{|\mathcal{L}_k^{(i)}|} \,,
\end{split}
\end{equation}
where $\mathcal{L}_\ell^{(i)} = \{n \in \mathcal{L}^{(i)} : y_n = \ell\}$.
First, let us consider the probability of assigning a \emph{known} label to identity $i$:
\begin{align*}
\hspace{2em}&\hspace{-2em} p\CondBrackets{y^*_i = \ell \in \mathcal{Y}}{\*y_\mathcal{L}, \*z_\mathcal{L}, \*y^*_{-i}} \\
&\propto \frac{\lambda L(\ell) + J_\ell^{-i}}{\lambda + I - 1}
(1 - \varepsilon)^{|\mathcal{L}_\ell^{(i)}|}
\left[ \frac{\varepsilon (\lambda L(\ell) + J_\ell)}{\lambda + I - (\lambda L(\ell) + J_\ell)} \right]^{-|\mathcal{L}_\ell^{(i)}|} \\
& \qquad \times \prod_{k \in \mathcal{Y}} \left[ \frac{\varepsilon (\lambda L(k) + J_k)}{\lambda + I - (\lambda L(\ell) + J_\ell)} \right]^{|\mathcal{L}_k^{(i)}|} \\
&\approx \frac{\lambda L(\ell) + J_\ell^{-i}}{\lambda + I - 1}
(1 - \varepsilon)^{|\mathcal{L}_\ell^{(i)}|}
\left[ \frac{\varepsilon (\lambda L(\ell) + J_\ell)}{\lambda + I - J_\ell} \right]^{-|\mathcal{L}_\ell^{(i)}|}
\prod_{k \in \mathcal{Y}} \left[ \frac{\varepsilon (\lambda L(k) + J_k)}{\lambda + I - J_\ell} \right]^{|\mathcal{L}_k^{(i)}|} \\
&= \frac{\lambda L(\ell) + J_\ell^{-i}}{\lambda + I - 1}
\left[ \frac{(1 - \varepsilon)(\lambda + I - J_\ell)}{\varepsilon (\lambda L(\ell) + J_\ell)} \right]^{|\mathcal{L}_\ell^{(i)}|}
\frac{\prod_{k \in \mathcal{Y}} [\varepsilon (\lambda L(k) + J_k)]^{|\mathcal{L}_k^{(i)}|}}{(\lambda + I - J_\ell)^{|\mathcal{L}^{(i)}|}} \\
&\propto \frac{\lambda L(\ell) + J_\ell^{-i}}{(\lambda + I - J_\ell)^{|\mathcal{L}^{(i)}|}}
\left[ \frac{(1 - \varepsilon) (\lambda + I - J_\ell)}{\varepsilon (\lambda L(\ell) + J_\ell)} \right]^{|\mathcal{L}_\ell^{(i)}|} \,,
\addtocounter{equation}{1}\tag{\theequation}\label{eq:prob_label_known}
\end{align*}
where the approximation assumes that $\lambda L(\ell) \ll \lambda + I - J_\ell$, $\forall \ell$, which is generally the case for sensible choices of $\lambda$ and $L$.
We can analogously estimate the probability of assigning an \emph{unknown} label to an identity as follows:
\begin{align*}
p\CondBrackets{y^*_i \notin \mathcal{Y}}{\*y_\mathcal{L}, \*z_\mathcal{L}, \*y^*_{-i}} &= \sum_{\ell \notin \mathcal{Y}} p\CondBrackets{y^*_i = \ell}{\*y_\mathcal{L}, \*z_\mathcal{L}, \*y^*_{-i}} \\
&\propto \sum_{\ell \notin \mathcal{Y}} \frac{\lambda L(\ell)}{\lambda + I - 1}
\prod_{k \in \mathcal{Y}} \left[ \frac{\varepsilon (\lambda L(k) + J_k)}{\lambda + I - \lambda L(\ell)} \right]^{|\mathcal{L}_k^{(i)}|} \\
&\approx \sum_{\ell \notin \mathcal{Y}} \frac{\lambda L(\ell)}{\lambda + I - 1}
\prod_{k \in \mathcal{Y}} \left[ \frac{\varepsilon (\lambda L(k) + J_k)}{\lambda + I} \right]^{|\mathcal{L}_k^{(i)}|} \\
&= \frac{\lambda (1 - L(\mathcal{Y}))}{\lambda + I - 1}
\prod_{k \in \mathcal{Y}} \left[ \frac{\varepsilon (\lambda L(k) + J_k)}{\lambda + I} \right]^{|\mathcal{L}_k^{(i)}|} \\
&\propto \frac{\lambda (1 - L(\mathcal{Y}))}{(\lambda + I)^{|\mathcal{L}^{(i)}|}} \,,
\addtocounter{equation}{1}\tag{\theequation}\label{eq:prob_label_unknown}
\end{align*}
noting that $J_\ell = J_\ell^{-i} = |\mathcal{L}_\ell^{(i)}| = 0$ for $\ell \notin \mathcal{Y}$ and using a similar approximation as in \cref{eq:prob_label_known}.
Finally, combining \cref{eq:prob_label_known,eq:prob_label_unknown}, we can summarise
\begin{multline}
p\CondBrackets{y^*_i}{\*X, \*y_\mathcal{L}, \*z, \*c^*, \*y^*_{-i}, \boldsymbol{\theta}^*, \boldsymbol{\pi}_0} \\
\overset{\sim}{\propto} \begin{cases}
\dfrac{\lambda L(\ell) + J_\ell^{-i}}{(\lambda + I - J_\ell)^{|\mathcal{L}^{(i)}|}}
\left[ \dfrac{(1 - \varepsilon) (\lambda + I - J_\ell)}{\varepsilon (\lambda L(\ell) + J_\ell)} \right]^{|\mathcal{L}_\ell^{(i)}|},
& y^*_i = \ell \in \mathcal{Y} \\[2.5ex]
\dfrac{\lambda (1 - L(\mathcal{Y}))}{(\lambda + I)^{|\mathcal{L}^{(i)}|}},
& y^*_i \notin \mathcal{Y}
\end{cases} \,,
\end{multline}
where $\overset{\sim}{\propto}$ means \emph{approximately proportional to}.
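As an illustration, the case expression above can be evaluated in log-space roughly as follows. For simplicity this sketch assumes identity $i$ contributes no label of its own (so $J_\ell \approx J_\ell^{-i}$); all argument names are ours, not the paper's.

```python
import math

def label_log_scores(known_labels, J, counts_i, n_labelled_i,
                     lam, L, L_unseen_mass, I, eps):
    """Approximate unnormalised log-probabilities for y*_i.
    J[l]: identities labelled l (here taken equal to J_l^{-i});
    counts_i[l] = |L^(i)_l|; n_labelled_i = |L^(i)|;
    L_unseen_mass = 1 - L(known label set)."""
    scores = {}
    for l in known_labels:
        a = lam * L[l] + J[l]
        scores[l] = (math.log(a)                                    # prior-predictive term
                     - n_labelled_i * math.log(lam + I - J[l])
                     + counts_i.get(l, 0)
                     * math.log((1 - eps) * (lam + I - J[l]) / (eps * a)))
    scores["<unknown>"] = (math.log(lam * L_unseen_mass)            # unknown-label case
                           - n_labelled_i * math.log(lam + I))
    return scores
```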
\subsection{Face Feature Parameters}
\begin{equation}
p\CondBrackets{\theta^*_i}{\*X, \*y_\mathcal{L}, \*z, \*c^*, \*y^*, \boldsymbol{\theta}^*_{-i}, \boldsymbol{\pi}_0}
\propto H_\mathrm{X}(\theta^*_i) \prod_{\mathclap{n:z_n=i}} F_\mathrm{X}(\*x_n \mathbin{|} \theta^*_i ) \,,
\end{equation}
which will be analytically tractable if $F_\mathrm{X}$ and $H_\mathrm{X}$ are a conjugate pair.
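For instance, if one assumes an isotropic Gaussian likelihood $F_\mathrm{X} = \mathcal{N}(\theta, \sigma^2 I)$ with a Gaussian prior $H_\mathrm{X} = \mathcal{N}(\mu_0, \sigma_0^2 I)$ (one possible conjugate choice, not prescribed here), the conditional has the familiar closed form:

```python
import numpy as np

def gaussian_posterior(x_cluster, prior_mean, prior_var, obs_var):
    """Posterior over theta*_i when F_X = N(theta, obs_var * I) and
    H_X = N(prior_mean, prior_var * I): an assumed conjugate pair."""
    x = np.atleast_2d(x_cluster)
    n = x.shape[0]
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)     # precisions add
    post_mean = post_var * (prior_mean / prior_var + x.sum(axis=0) / obs_var)
    return post_mean, post_var
```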
% arXiv:1703.07868
\section{Introduction and the main result}
We begin with some notation. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and
let $(\mathbf{B}, \| \cdot \| )$ be a real separable Banach space
equipped with its Borel $\sigma$-algebra $\mathcal{B}$
($=$ the $\sigma$-algebra generated by the class of open subsets of
$\mathbf{B}$ determined by $\|\cdot\|$). A {\bf B}-valued random variable
$X$ is defined as a measurable function from $(\Omega, \mathcal{F})$ into $(\mathbf {B}, \mathcal{B})$.
Throughout this note, let $\{R, R_{n}; ~n \geq 1\}$ be a Rademacher sequence; i.e., $\{R, R_{n}; ~n \geq 1\}$
is a sequence of independent and identically distributed (i.i.d.) real-valued random variables with
$\mathbb{P}(R = -1) = \mathbb{P}(R = 1) = 1/2$.
Probability inequalities are essential for establishing virtually all
probability limit theorems and for advancing statistical theory, and
they are also of intrinsic interest. For example, some invaluable and
celebrated classical inequalities are those of L\'{e}vy, Ottaviani, Kahane,
Hoffmann-J{\o}rgensen (see, e.g., L\'{e}vy (1937), Chow and
Teicher (1997, p. 75), Kahane (1968), and Hoffmann-J{\o}rgensen (1974),
respectively), etc. This note is devoted to establishing the following
probability inequality which is a comparison theorem for sums of independent
$\mathbf{B}$--valued random variables.
\vskip 0.3cm
\begin{theorem}
Let $\varphi(\cdot)$ and $\psi(\cdot)$ be two continuous and increasing functions
defined on $[0, \infty)$ such that $\varphi(0) = \psi(0) = 0$ and
\begin{equation}
\lim_{t \rightarrow \infty} \varphi(t) = \infty
~~\mbox{and}~~\frac{\psi(\cdot)}{\varphi(\cdot)} ~\mbox{is a nondecreasing function on}~[0, \infty).
\end{equation}
Here we define $\frac{\psi(0)}{\varphi(0)} = \lim_{t \rightarrow 0^{+}} \frac{\psi(t)}{\varphi(t)}$.
For $n \geq 1$, set $a_{n} = \varphi(n)$ and $b_{n} = \psi(n)$. Then we have:
{\bf (i)}~~Let $\{x_{n};~ n \geq 1\}$ be a {\bf B}-valued sequence such that $\|x_{n}\| \leq b_{n}, ~n \geq 1$.
Then for every $n \geq 1$ and all $t \geq 0$,
\begin{equation}
\mathbb{P}\left(\left\|\sum_{i=1}^{n} R_{i}x_{i} \right\| > t b_{n} \right)
\leq 2 \mathbb{P} \left(\left\|\sum_{i=1}^{n} R_{i}
\varphi\left(\psi^{-1}(\|x_{i}\|)\right) \frac{x_{i}}{\|x_{i}\|} \right\| > t a_{n} \right).
\end{equation}
Here $\displaystyle \varphi\left(\psi^{-1}(\|0\|)\right) \frac{0}{\|0\|} \stackrel{\Delta}{=} 0$ since
$\displaystyle \lim_{x \rightarrow 0} \varphi\left(\psi^{-1}(\|x\|)\right) \frac{x}{\|x\|} = 0$.
{\bf (ii)}~~If $\{V_{n};~n \geq 1 \}$ is a sequence of independent
and symmetric {\bf B}-valued random variables, then for every
$n \geq 1$ and all $t \geq 0$,
\begin{equation}
\mathbb{P}\left(\left\|\sum_{i=1}^{n} V_{i} \right\| > t b_{n} \right)
\leq 4 \mathbb{P} \left(\left\|\sum_{i=1}^{n} \varphi\left(\psi^{-1}(\|V_{i}\|)\right)
\frac{V_{i}}{\|V_{i}\|} \right\| > t a_{n} \right)
+ \sum_{i=1}^{n}\mathbb{P}\left(\|V_{i}\| > b_{n} \right).
\end{equation}
\end{theorem}
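As a concrete illustration of Part {\bf (i)} (our example, with one admissible choice of the two functions), take $\varphi(t) = t$ and $\psi(t) = t^{2}$, so that $\psi/\varphi$ is nondecreasing on $[0, \infty)$, $a_{n} = n$, $b_{n} = n^{2}$, and $\varphi\left(\psi^{-1}(s)\right) = \sqrt{s}$; then (1.2) reads

```latex
\mathbb{P}\left(\left\|\sum_{i=1}^{n} R_{i}x_{i} \right\| > t n^{2} \right)
\leq 2\, \mathbb{P} \left(\left\|\sum_{i=1}^{n} R_{i}\,
\sqrt{\|x_{i}\|}\, \frac{x_{i}}{\|x_{i}\|} \right\| > t n \right)
```

for any sequence with $\|x_{i}\| \leq i^{2}$.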
\vskip 0.3cm
The proof of Theorem 1.1 is given in Section 2. An application of Theorem 1.1 is presented in Section 3.
The application provides what we refer to as a comparison theorem for the weak law of large numbers (WLLN)
for i.i.d. ${\bf B}$-valued random variables.
\section{Proof of Theorem 1.1}
To establish Theorem 1.1, we use the following result which is the second part of Theorem 4.4 of Ledoux and Talagrand (1991).
\begin{lemma}
Let $\{x_{n};~ n \geq 1\}$ be a {\bf B}-valued sequence and let $\{\alpha_{n}; ~n \geq 1 \}$ be a real-valued
sequence such that $\sup_{n \geq 1}|\alpha_{n}| \leq 1$. Then we have, for every $n \geq 1$ and all $t \geq 0$,
\[
\mathbb{P} \left(\left\|\sum_{i=1}^{n} \alpha_{i} R_{i} x_{i} \right\| > t \right)
\leq 2 \mathbb{P} \left(\left\|\sum_{i=1}^{n} R_{i} x_{i} \right\| > t \right).
\]
\end{lemma}
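As a quick numerical sanity check (ours, not part of the original argument), one can verify the contraction inequality of Lemma 2.1 empirically on the real line, which is itself a separable Banach space:

```python
import numpy as np

def tail_prob(alphas, xs, t, n_trials=100_000, seed=0):
    """Empirical P(|sum_i alpha_i R_i x_i| > t) for real-valued x_i,
    with R_i i.i.d. Rademacher signs."""
    rng = np.random.default_rng(seed)
    r = rng.choice([-1.0, 1.0], size=(n_trials, len(xs)))
    s = r @ (np.asarray(alphas) * np.asarray(xs))
    return float((np.abs(s) > t).mean())

# Contraction check: |alpha_i| <= 1, so the lemma predicts lhs <= 2 * rhs.
xs = np.ones(10)
lhs = tail_prob(np.full(10, 0.5), xs, t=4.0)
rhs = tail_prob(np.ones(10), xs, t=4.0)
```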
\vskip 0.3cm
\noindent {\it Proof of Theorem 1.1}~~Part {\bf (i)}. Clearly we have, for every
$n \geq 1$ and all $t \geq 0$,
\[
\begin{array}{lll}
\mbox{$\displaystyle
\mathbb{P}\left(\left\|\sum_{i=1}^{n} R_{i}x_{i} \right\| > t b_{n} \right)$}
&=& \mbox{$\displaystyle
\mathbb{P}\left(\left\|\sum_{i=1}^{n} \left(\frac{a_{n}}{b_{n}}
\cdot \frac{\psi\left(\psi^{-1}(\|x_{i}\|) \right)}{\varphi\left(\psi^{-1}(\|x_{i}\|) \right)}
\right) R_{i} \frac{\varphi\left(\psi^{-1}(\|x_{i}\|)\right)}{\psi\left(\psi^{-1}(\|x_{i}\|) \right)} x_{i}
\right\| > t a_{n} \right)$}\\
&&\\
&=& \mbox{$\displaystyle \mathbb{P}\left(\left\|\sum_{i=1}^{n} \left(\frac{\varphi(n)}{\psi(n)}
\cdot \frac{\psi\left(\psi^{-1}(\|x_{i}\|) \right)}{\varphi\left(\psi^{-1}(\|x_{i}\|) \right)}
\right) R_{i} \varphi\left(\psi^{-1}(\|x_{i}\|)\right) \frac{x_{i}}{\|x_{i}\|}
\right\| > t a_{n} \right)$}.
\end{array}
\]
Since $\varphi(\cdot)$ and $\psi(\cdot)$ are two continuous and increasing functions defined on
$[0, \infty)$ satisfying $\varphi(0) = \psi(0) = 0$ and (1.1), we see that $\psi^{-1}(\cdot)$
is also a continuous and increasing function defined on
$[0, \infty)$ such that $\psi^{-1}(0) = 0$, $\lim_{t \rightarrow \infty} \psi^{-1}(t) = \infty$, and
\[
0 \leq \frac{\psi\left(\psi^{-1}(t)\right)}{\varphi\left(\psi^{-1}(t)\right)}
\leq \frac{\psi(n)}{\varphi(n)} = \frac{b_{n}}{a_{n}}~~\mbox{whenever}~~
0 \leq t \leq \psi(n) = b_{n}.
\]
Note that $\|x_{n}\| \leq b_{n} = \psi(n)$, $n \geq 1$.
We thus conclude that, for every $n \geq 1$,
\[
0 \leq \frac{\psi\left(\psi^{-1}(\|x_{i}\|) \right)}{\varphi\left(\psi^{-1}(\|x_{i}\|) \right)}
\leq \frac{\psi(n)}{\varphi(n)} ~~\mbox{for}~ i = 1, 2, ..., n
\]
and hence that, for every $n \geq 1$,
\[
0 \leq \frac{\varphi(n)}{\psi(n)}
\cdot \frac{\psi\left(\psi^{-1}(\|x_{i}\|) \right)}{\varphi\left(\psi^{-1}(\|x_{i}\|) \right)}
\leq 1~~\mbox{for}~ i = 1, 2, ..., n.
\]
By applying Lemma 2.1 we thus have, for every $n \geq 1$ and all $t \geq 0$,
\[
\begin{array}{ll}
& \mbox{$\displaystyle
\mathbb{P}\left(\left\|\sum_{i=1}^{n} \left(\frac{\varphi(n)}{\psi(n)}
\cdot \frac{\psi\left(\psi^{-1}(\|x_{i}\|) \right)}{\varphi\left(\psi^{-1}(\|x_{i}\|) \right)}
\right) R_{i} \varphi\left(\psi^{-1}(\|x_{i}\|)\right) \frac{x_{i}}{\|x_{i}\|}
\right\| > t a_{n} \right)$}\\
&\\
& \mbox{$\displaystyle
\leq 2 \mathbb{P} \left(\left\|\sum_{i=1}^{n} R_{i}
\varphi\left(\psi^{-1}(\|x_{i}\|)\right) \frac{x_{i}}{\|x_{i}\|} \right\| > t a_{n} \right)$}
\end{array}
\]
proving Part {\bf (i)}.
Part {\bf (ii)}. For every $n \geq 1$, write
\[
V_{n,i} = V_{i}I\left\{\|V_{i} \| \leq b_{n} \right\},
~~T_{i} = \varphi\left(\psi^{-1}(\|V_{i}\|)\right)\frac{V_{i}}{\|V_{i}\|},
~~T_{n,i} = \varphi\left(\psi^{-1}(\|V_{n,i}\|)\right)\frac{V_{n,i}}{\|V_{n,i}\|}, ~~i = 1, ..., n.
\]
Clearly we have, for all $t \geq 0$,
\begin{equation}
\mathbb{P}\left(\left\|\sum_{i=1}^{n} V_{i} \right\| > t b_{n} \right)
\leq \mathbb{P} \left( \left\|\sum_{i=1}^{n} V_{n,i} \right\| > t b_{n} \right)
+ \sum_{i=1}^{n}\mathbb{P}\left(\|V_{i}\| > b_{n} \right).
\end{equation}
Note that
\[
\left\{\|V_{i} \| \leq b_{n} \right\} =
\left\{\psi^{-1}(\|V_{i} \|) \leq n \right\} =
\left\{\varphi\left(\psi^{-1}(\|V_{i} \|) \right) \leq a_{n} \right\}
= \left\{\|T_{i} \| \leq a_{n} \right\}, ~~i = 1, ..., n.
\]
Thus it is easy to see that
\[
T_{n,i} = T_{i} I\left\{\|T_{i} \| \leq a_{n} \right\}, ~~i = 1, ..., n.
\]
Since $\{V_{i};~i \geq 1 \}$ is a sequence of independent
and symmetric {\bf B}-valued random variables, $\{V_{n,i};~i = 1, ..., n \}$, $\{T_{i};~i = 1, ..., n \}$, and
$\{T_{n,i};~i = 1, ..., n \}$ are finite sequences of independent and symmetric {\bf B}-valued random variables.
Let $\{R, R_{n}; ~n \geq 1\}$ be a Rademacher sequence which is independent of $\{V_{n};~n \geq 1 \}$.
Then $\left\{R_{i}V_{n,i}; ~i = 1, ..., n \right\}$ has the same distribution as
$\left\{V_{n,i}; ~i = 1, ..., n \right\}$ in ${\bf B}^{n}$ and
$\left\{R_{i}T_{n,i};~i = 1, ..., n \right\}$ has the same distribution as
$\left\{T_{n, i};~ i = 1, ..., n \right\}$ in ${\bf B}^{n}$. Since $\|V_{n,i}\| \leq b_{n}, ~i = 1, ..., n$,
by applying (1.2), we have, for all $t \geq 0$,
\begin{equation}
\begin{array}{lll}
\mbox{$\displaystyle
\mathbb{P} \left( \left\|\sum_{i=1}^{n} V_{n,i} \right\| > t b_{n} \right)$}
&=& \mbox{$\displaystyle
\mathbb{P} \left( \left\|\sum_{i=1}^{n} R_{i} V_{n,i} \right\| > t b_{n} \right)$}\\
&&\\
&=&
\mbox{$\displaystyle
\mathbb{E}\left(\mathbb{P}\left(\left. \left\|\sum_{i=1}^{n} R_{i} V_{n,i} \right\| > t b_{n}
\right| V_{1}, ..., V_{n} \right) \right)$}\\
&&\\
&\leq&
\mbox{$\displaystyle
2 \mathbb{E}\left(\mathbb{P}\left(\left. \left\|\sum_{i=1}^{n} R_{i} T_{n,i} \right\| > t a_{n}
\right| V_{1}, ..., V_{n} \right) \right)$}\\
&&\\
&=&
\mbox{$\displaystyle
2 \mathbb{P} \left( \left\|\sum_{i=1}^{n} R_{i} T_{n,i} \right\| > t a_{n} \right)$}\\
&&\\
&=&
\mbox{$\displaystyle
2 \mathbb{P} \left( \left\|\sum_{i=1}^{n} T_{n,i} \right\| > t a_{n} \right)$}.
\end{array}
\end{equation}
For all $n \geq 1$, since $\{T_{i};~ i = 1, ..., n \}$ is a finite sequence of independent
and symmetric {\bf B}-valued random variables, it follows that
$\left\{T_{i}I\left\{\|T_{i}\| \leq a_{n} \right\} - T_{i}I\left\{\|T_{i}\| > a_{n} \right\}; ~i = 1, ..., n \right\}$
has the same distribution as $\{T_{i};~i = 1, ..., n \}$ in ${\bf B}^{n}$.
Note that
\[
\sum_{i=1}^{n} T_{n,i} = \frac{\sum_{i=1}^{n}T_{i} + \sum_{i=1}^{n} \left(T_{i}I\left\{\|T_{i}\| \leq a_{n} \right\}
- T_{i}I\left\{\|T_{i}\| > a_{n} \right\} \right)}{2}, ~ n \geq 1.
\]
We thus have, for every $n \geq 1$ and all $t \geq 0$,
\begin{equation}
\begin{array}{lll}
\mbox{$\displaystyle
\mathbb{P} \left( \left\|\sum_{i=1}^{n} T_{n,i} \right\| > t a_{n} \right)$}
&\leq& \mbox{$\displaystyle
\mathbb{P} \left( \left\|\sum_{i=1}^{n} T_{i} \right\| > t a_{n} \right)$}\\
&&\\
&& \mbox{$\displaystyle
+ \mathbb{P} \left( \left\|\sum_{i=1}^{n} \left(T_{i}I\left\{\|T_{i}\| \leq a_{n} \right\}
- T_{i}I\left\{\|T_{i}\| > a_{n} \right\} \right) \right\| > t a_{n} \right)$}\\
&&\\
&=& \mbox{$\displaystyle
2 \mathbb{P} \left( \left\|\sum_{i=1}^{n} T_{i} \right\| > t a_{n} \right).$}
\end{array}
\end{equation}
Now we can see that (1.3) follows from (2.1), (2.2), and (2.3). ~$\Box$
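As a quick illustrative check (not part of the proof), the splitting identity $\sum_{i=1}^{n} T_{n,i} = \tfrac{1}{2}\big(\sum_{i=1}^{n}T_{i} + \sum_{i=1}^{n}(T_{i}I\{\|T_{i}\| \leq a_{n}\} - T_{i}I\{\|T_{i}\| > a_{n}\})\big)$ used above can be verified numerically for arbitrary vectors; the truncation level and dimension below are hypothetical sample values:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_cauchy(size=(8, 3))   # n = 8 sample vectors in R^3
a_n = 2.0                              # illustrative truncation level

norms = np.linalg.norm(T, axis=1, keepdims=True)
T_le = np.where(norms <= a_n, T, 0.0)  # T_i I{||T_i|| <= a_n} = T_{n,i}
T_gt = np.where(norms > a_n, T, 0.0)   # T_i I{||T_i|| > a_n}

lhs = T_le.sum(axis=0)
rhs = (T.sum(axis=0) + (T_le - T_gt).sum(axis=0)) / 2.0
assert np.allclose(lhs, rhs)           # identity holds since T_i = T_le + T_gt
print("identity verified")
```

The identity is pure algebra: each $T_i$ decomposes into its truncated and tail parts, and averaging the full sum with the signed sum cancels the tails.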
\section{An application}
As an application of Theorem 1.1, in this section we will establish what we call
a comparison theorem for the WLLN for i.i.d. {\bf B}-valued random variables.
Theorem 3.1 is new even when $\mathbf{B} = \mathbb{R}$.
\vskip 0.3cm
\begin{theorem}
{\rm{(Comparison theorem for the WLLN).}}
Let $(\mathbf{B}, \|\cdot\|)$ be a real separable Banach space.
Let $\{a_{n}; n \geq 1\}$ and $\{b_{n}; n \geq 1\}$ be increasing
sequences of positive real numbers such that
\begin{equation}
\lim_{n \rightarrow \infty} a_{n} = \infty~ \mbox{and}~ \left\{b_{n}/a_{n}; ~n \geq 1 \right\}~
\mbox{is a nondecreasing sequence}.
\end{equation}
Suppose that, for every symmetric sequence $\{X, X_{n}; ~n \geq 1 \}$ of i.i.d. {\bf B}-valued random variables,
\begin{equation}
\frac{S_{n}}{a_{n}} \rightarrow_{\mathbb{P}} 0 ~~\mbox{if and only if}~~\lim_{n \rightarrow \infty}
n \mathbb{P}(\|X\| > a_{n}) = 0.
\end{equation}
Here and below $S_{n} = \sum_{i=1}^{n} X_{i},~n \geq 1$. Then, for every sequence
$\{X, X_{n}; ~n \geq 1 \}$ of i.i.d. {\bf B}-valued random variables, we have that
\begin{equation}
\frac{S_{n}- \gamma_{n}}{b_{n}}
\rightarrow_{\mathbb{P}} 0 ~~\mbox{or}~~\limsup_{n \rightarrow \infty}
\mathbb{P} \left(\frac{\left\|S_{n}- \gamma_{n} \right\|}{b_{n}}
> \lambda \right) > 0 ~~\forall~\lambda > 0
\end{equation}
according as
\begin{equation}
\lim_{n \rightarrow \infty}
n \mathbb{P}(\|X\| > b_{n}) = 0~~\mbox{or}~~\limsup_{n \rightarrow \infty}
n \mathbb{P}(\|X\| > b_{n}) >0.
\end{equation}
Here and below $\gamma_{n} = n \mathbb{E}\left(XI\{\|X\| \leq b_{n} \} \right)$, $n \geq 1$.
\end{theorem}
\vskip 0.2cm
\begin{remark}
Under the assumptions of Theorem 3.1, we conclude that, for every
sequence $\{X, X_{n}; ~n \geq 1 \}$ of i.i.d. {\bf B}-valued random variables,
\[
\frac{S_{n} - \gamma_{n}}{b_{n}} \rightarrow_{\mathbb{P}} 0 ~~\mbox{if and only if}~~
\lim_{n \rightarrow \infty}
n \mathbb{P}(\|X\| > b_{n}) = 0,
\]
\[
\limsup_{n \rightarrow \infty}
\mathbb{P} \left(\frac{\left\|S_{n}- \gamma_{n} \right\|}{b_{n}}
> \lambda \right) > 0 ~~\forall~\lambda > 0 ~~\mbox{if and only if}~~
\limsup_{n \rightarrow \infty}
n \mathbb{P}(\|X\| > b_{n}) > 0.
\]
Hence
\[
\frac{S_{n} - \gamma_{n}}{b_{n}} \nrightarrow_{\mathbb{P}} x ~~\forall ~x \in \mathbf{B}\backslash \{0\}.
\]
\end{remark}
\vskip 0.2cm
Let $0 < p \leq 2$. Then $\mathbf{B}$ is said to be of {\it stable type $p$} if
\[
\sum_{n=1}^{\infty} \Theta_{n}v_{n} ~~\mbox{converges a.s. whenever}~~
\{v_{n}: ~n \geq 1\} \subseteq \mathbf{B} ~~\mbox{with}~~
\sum_{n=1}^{\infty} \|v_{n}\|^{p} < \infty,
\]
where $\{\Theta_{n}; ~n \geq 1 \}$ is a sequence of i.i.d. stable random variables
each with characteristic function $\exp \left\{-|t|^{p}\right\}, ~- \infty < t <
\infty$. A remarkable characterization of stable type $p$ Banach spaces via the WLLN
was provided by Marcus and Woyczy\'{n}ski (1979) who showed that, for given $1 \leq p < 2$, the following two
statements are equivalent:
\begin{align*}
& {\bf (i)} \quad \mbox{The Banach space $\mathbf{B}$ is of
stable type $p$.}\\
& {\bf (ii)} \quad \mbox{For every symmetric sequence $\{X, X_{n}; ~n \geq 1 \}$
of i.i.d. {\bf B}-valued variables},
\end{align*}
\[
\frac{S_{n}}{n^{1/p}} \rightarrow_{\mathbb{P}} 0~~\mbox{if
and only if}~~\lim_{n \rightarrow \infty} n \mathbb{P}\left(\|X\| > n^{1/p}\right) = 0.
\]
Combining Theorem 3.1 and the above characterization of stable type $p$ Banach spaces,
we immediately obtain the following two results.
\vskip 0.2cm
\begin{corollary}
Let $1 \leq p < 2$ and let $\{a_{n}; n \geq 1\}$ be an increasing sequence of positive
real numbers such that
\[
\lim_{n \rightarrow \infty} a_{n} = \infty~ \mbox{and}~ \left\{n^{1/p}/a_{n}; ~n \geq 1 \right\}~
\mbox{is a nondecreasing sequence}.
\]
Let $(\mathbf{B}, \|\cdot\|)$ be a real separable Banach space such that, for every symmetric sequence
$\{X, X_{n}; ~n \geq 1 \}$ of i.i.d. {\bf B}-valued random variables, (3.2) holds.
Then the Banach space {\bf B} is of stable type $p$.
\end{corollary}
\vskip 0.2cm
\begin{corollary}
Let $(\mathbf{B}, \|\cdot\|)$ be a real separable Banach space.
Let $1 \leq p < 2$ and let $\{b_{n}; n \geq 1\}$ be a sequence of positive
real numbers such that
\[
\left\{b_{n}/n^{1/p}; ~n \geq 1 \right\}~\mbox{is a nondecreasing sequence}.
\]
If $\mathbf{B}$ is of stable type $p$, then for every sequence $\{X, X_{n}; ~n \geq 1 \}$
of i.i.d. {\bf B}-valued random variables, (3.3) and (3.4) are equivalent.
\end{corollary}
\vskip 0.2cm
\begin{corollary}
Let $\left \{X, X_{n};~ n \geq 1 \right \}$ be a sequence of i.i.d. real-valued random variables
and let $\left \{b_{n};~n \geq 1 \right \}$ be a sequence of positive real numbers such that $b_{n}/n^{1/p}$
is nondecreasing for some $p \in [1, 2)$. Set $S_{n} = \sum_{i=1}^{n} X_{i}, ~n \geq 1$. Then
\[
\frac{S_{n}- n \mathbb{E}(X I\{|X| \leq b_{n} \})}{b_{n}}
\rightarrow_{\mathbb{P}} 0 ~~\mbox{or}~~\limsup_{n \rightarrow \infty}
\mathbb{P} \left(\frac{\left|S_{n}- n \mathbb{E}(X I\{|X| \leq b_{n} \}) \right|}{b_{n}}
> \lambda \right) > 0 ~~\forall~\lambda > 0
\]
according as
\[
\lim_{n \rightarrow \infty}
n \mathbb{P}(|X| > b_{n}) = 0~~\mbox{or}~~\limsup_{n \rightarrow \infty}
n \mathbb{P}(|X| > b_{n}) >0.
\]
\end{corollary}
\vskip 0.2cm
\noindent {\it Proof}.~~It is well known that the real line $\mathbb{R}$ is of stable type $p$ for all
$p \in [1, 2)$ and so the corollary follows immediately from Corollary 3.2. ~$\Box$
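The dichotomy of Corollary 3.3 can be illustrated numerically (an illustration only, not part of the proof; sample sizes are arbitrary). Take $b_n = n$ (so $p = 1$): for standard normal summands, $n\,\mathbb{P}(|X| > n) \to 0$ and indeed $S_n/n \to_{\mathbb{P}} 0$, while for standard Cauchy summands, $n\,\mathbb{P}(|X| > n) \to 2/\pi > 0$ and, since $S_n/n$ is again standard Cauchy, $\mathbb{P}(|S_n|/n > 1) = 1/2$ for every $n$. In both cases $\gamma_n = 0$ by symmetry.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, lam = 1000, 4000, 1.0  # illustrative sample sizes

# Normal case: n P(|X| > n) -> 0, so S_n / n ->_P 0.
frac_normal = np.mean(np.abs(rng.standard_normal((trials, n)).sum(axis=1)) / n > lam)

# Cauchy case: n P(|X| > n) -> 2/pi > 0; S_n / n is again standard Cauchy,
# so P(|S_n| / n > 1) = 1/2 for every n and no WLLN can hold.
frac_cauchy = np.mean(np.abs(rng.standard_cauchy((trials, n)).sum(axis=1)) / n > lam)

print(frac_normal, frac_cauchy)  # frac_normal is ~0, frac_cauchy is ~0.5
```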
\vskip 0.2cm
\begin{remark}
Corollary 3.3 is an improved version of Theorem 1 (ii) of Klass and Teicher (1977) wherein
$\left \{b_{n}/n;~ n \geq 1 \right \}$ is nondecreasing. Theorem 1 of Klass and Teicher (1977)
may be regarded as a WLLN analogue of Feller's (1946) extension of the Marcinkiewicz--Zygmund SLLN
(see, e.g., Chow and Teicher (1997, p. 125)) to more general norming sequences.
\end{remark}
\vskip 0.3cm
To prove Theorem 3.1, we use the following two preliminary lemmas. The second lemma is due to
Li, Liang, and Rosalsky (2017).
\vskip 0.2cm
\begin{lemma}
Let $\left \{a_{n};~n \geq 1 \right \}$ and $\left \{b_{n};~ n \geq 1 \right \}$ be increasing sequences of
positive real numbers satisfying (3.1). Then there exist two continuous and increasing functions $\varphi(\cdot)$ and $\psi(\cdot)$
defined on $[0, \infty)$ such that (1.1) holds and
\begin{equation}
\varphi(0) = \psi(0) = 0, ~\varphi(n) = a_{n}, ~\psi(n) = b_{n}, ~ n \geq 1.
\end{equation}
\end{lemma}
\noindent {\it Proof}.~~Let $a_{0} = b_{0} = 0$. Let
\[
\varphi(t) = a_{n-1} + \left(a_{n} - a_{n-1} \right) (t - n + 1), ~n - 1 \leq t < n, ~n \geq 1
\]
and
\[
\psi(t) = b_{n-1} + \left(b_{n} - b_{n-1} \right) (t - n + 1), ~n - 1 \leq t < n, ~n \geq 1.
\]
Clearly, $\varphi(\cdot)$ and $\psi(\cdot)$ are two continuous and increasing functions
defined on $[0, \infty)$ such that (3.5) holds. We now verify that (1.1) holds with
the chosen $\varphi(\cdot)$ and $\psi(\cdot)$. Note that (3.1) implies that,
for $n-1 < t < n$ and $n \geq 1$,
\[
\left(\frac{\psi(t)}{\varphi(t)} \right)^{\prime} =
\frac{\psi^{\prime}(t) \varphi(t) - \psi(t) \varphi^{\prime}(t)}{\varphi^{2}(t)}
= \frac{a_{n-1}b_{n} - b_{n-1}a_{n}}{\varphi^{2}(t)}
= \frac{a_{n-1}a_{n} \left(\frac{b_{n}}{a_{n}} - \frac{b_{n-1}}{a_{n-1}} \right)}{\varphi^{2}(t)}
\geq 0,
\]
where $b_{0}/a_{0} \stackrel{\Delta}{=} b_{1}/a_{1}$. Thus (1.1) follows. ~$\Box$
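The interpolants constructed in the proof are easy to realize concretely. The following sketch (illustrative only; the sequences $a_n = \sqrt{n}$ and $b_n = n$ are hypothetical examples satisfying (3.1)) builds the piecewise-linear $\varphi$ and $\psi$ and checks that $\psi/\varphi$ is nondecreasing, which is the content of (1.1):

```python
import numpy as np

N = 50
n = np.arange(N + 1)
a = np.sqrt(n)                   # a_0 = 0, a_n = sqrt(n), increasing to infinity
b = n.astype(float)              # b_0 = 0, b_n = n; b_n / a_n = sqrt(n) nondecreasing

t = np.linspace(0.01, N, 5000)
phi = np.interp(t, n, a)         # piecewise-linear phi with phi(n) = a_n
psi = np.interp(t, n, b)         # piecewise-linear psi with psi(n) = b_n

ratio = psi / phi
assert np.all(np.diff(ratio) >= -1e-12)  # psi/phi is nondecreasing, i.e., (1.1)

# psi is increasing, so psi^{-1} is obtained by swapping the interpolation roles:
x = 7.3
assert abs(np.interp(np.interp(x, b, n.astype(float)), n, b) - x) < 1e-12
print("(1.1) verified for the sample sequences")
```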
\vskip 0.2cm
\begin{lemma}
{\rm (Corollary 1.3 of Li, Liang, and Rosalsky (2017))}
Let $\{X, X_{n}; ~n \geq 1\}$ be a sequence of i.i.d.
{\bf B}-valued random variables. Let $\{X_{n}^{\prime};~n \geq 1 \}$ be an
independent copy of $\{X_{n};~n \geq 1 \}$. Write $S_{n} = \sum_{i=1}^{n} X_{i}$,
$S_{n}^{\prime} = \sum_{i=1}^{n} X_{i}^{\prime}$, $n \geq 1$. Let $\{b_{n}; n \geq 1\}$
be an increasing sequence of positive real numbers such that
$\lim_{n \rightarrow \infty} b_{n} = \infty$. Then we have
\[
\frac{S_{n}- n \mathbb{E}\left(XI\{\|X\| \leq b_{n} \} \right)}{b_{n}} \rightarrow_{\mathbb{P}} 0
\]
if and only if
\[
\frac{S_{n} - S_{n}^{\prime}}{b_{n}} \rightarrow_{\mathbb{P}} 0.
\]
\end{lemma}
\vskip 0.3cm
With these preliminaries in hand, we can now prove Theorem 3.1.
\vskip 0.3cm
\noindent {\it Proof of Theorem 3.1}.~~To establish the conclusion of Theorem 3.1,

it suffices to show that, for every sequence $\{X, X_{n}; ~n \geq 1 \}$
of i.i.d. {\bf B}-valued random variables, the following three statements are equivalent:
\begin{equation}
\frac{S_{n}- \gamma_{n}}{b_{n}}
\rightarrow_{\mathbb{P}} 0,
\end{equation}
\begin{equation}
\lim_{n \rightarrow \infty} \mathbb{P}
\left( \frac{\left\|S_{n}- \gamma_{n} \right\|}{b_{n}} > \lambda \right)
= 0 ~~\mbox{for some constant}~\lambda \in (0, \infty),
\end{equation}
\begin{equation}
\lim_{n \rightarrow \infty}
n \mathbb{P}(\|X\| > b_{n}) = 0.
\end{equation}
Here and below $\gamma_{n} = n \mathbb{E}\left(XI\{\|X\| \leq b_{n} \} \right)$, $n \geq 1$.
Let $\left \{X^{\prime}, X^{\prime}_{n};~ n \geq 1 \right \}$ be an independent copy of
$\left \{X, X_{n};~ n \geq 1 \right \}$ and set $S^{\prime}_{n} = \sum_{i=1}^{n} X^{\prime}_{i}, ~n \geq 1$.
Since (3.7) obviously follows from (3.6), it suffices to establish the implications ``(3.6) $\Rightarrow$ (3.8)",
``(3.8) $\Rightarrow$ (3.6)", and ``(3.7) $\Rightarrow$ (3.6)".
It follows from (3.6) that
\[
\frac{S_{n} - S_{n}^{\prime}}{b_{n}} \rightarrow_{\mathbb{P}} 0
\]
and hence, by the remarkable L\'{e}vy inequality in a Banach space setting
(see, e.g., Proposition 2.3 of Ledoux and Talagrand (1991)), we have, for every $n \geq 1$ and all $t \geq 0$,
\begin{equation}
\mathbb{P} \left(\frac{\max_{1 \leq i \leq n}\|X_{i} - X_{i}^{\prime} \|}{b_{n}} > t \right)
\leq 2
\mathbb{P} \left(\frac{\left \| S_{n} - S_{n}^{\prime} \right \|}{b_{n}} > t \right) \rightarrow 0
~~\mbox{as}~n \rightarrow \infty.
\end{equation}
Note that $\{X - X^{\prime}, X_{n} - X_{n}^{\prime}; ~n \geq 1 \}$ is a sequence of i.i.d. {\bf B}-valued random
variables. Thus (3.9) implies that
\[
\mathbb{P} \left(\frac{\max_{1 \leq i \leq n}\|X_{i} - X_{i}^{\prime} \|}{b_{n}} > t \right) =
1 - \left(1 - \mathbb{P} \left(\frac{\|X - X^{\prime}\|}{b_{n}} > t \right) \right)^{n}
\rightarrow 0 ~~\mbox{as}~~n \rightarrow \infty ~~\forall~t \geq 0
\]
which is equivalent to
\[
n \mathbb{P} \left(\frac{\|X - X^{\prime}\|}{b_{n}} > t \right) \rightarrow 0 ~~\mbox{as}~~n \rightarrow
\infty ~~ \forall ~t > 0
\]
and hence (3.8) holds since
\[
\left \{\|X^{\prime} \| \leq b_{n}/2,~\|X\| > b_{n} \right \}
\subseteq \left \{\|X - X^{\prime} \| > b_{n}/2 \right \}
~~\mbox{and}~~\lim_{n \rightarrow \infty} \mathbb{P} \left(\|X^{\prime} \| \leq b_{n}/2 \right) = 1
\]
ensures that for all large $n$,
\[
\mathbb{P} \left (\|X\| > b_{n} \right) \leq 2 \mathbb{P} \left( \|X - X^{\prime} \| > b_{n}/2 \right).
\]
Thus we see that (3.6) implies (3.8).
We now show that (3.8) implies (3.6). Suppose that (3.8) holds. Set
\[
\tilde{X} = \frac{X - X^{\prime}}{2}, ~~\tilde{X}_{n} = \frac{X_{n} - X^{\prime}_{n}}{2}, ~~n \geq 1.
\]
Clearly,
\[
\mathbb{P} (\|\tilde{X} \| > t) \leq \mathbb{P} (\|X\| > t) + \mathbb{P} (\|X^{\prime}\| > t)
= 2 \mathbb{P} (\|X\| > t) ~~\forall ~t > 0.
\]
Thus $\{\tilde{X}, \tilde{X}_{n}; ~n \geq 1 \}$ is a sequence of i.i.d.
symmetric {\bf B}-valued random variables such that
\begin{equation}
\lim_{n \rightarrow \infty} n \mathbb{P} \left(\|\tilde{X}\| > b_{n} \right) = 0.
\end{equation}
Since $\{a_{n}; n \geq 1\}$ and $\{b_{n}; n \geq 1\}$ are increasing
sequences of positive real numbers satisfying (3.1), by Lemma 3.1 there exist
two continuous and increasing functions $\varphi(\cdot)$ and $\psi(\cdot)$
defined on $[0, \infty)$ such that both (1.1) and (3.5) hold. Write
\[
Y = \varphi\left(\psi^{-1}(\|\tilde{X}\|)\right)
\frac{\tilde{X}}{\|\tilde{X}\|}, ~ Y_{n} = \varphi\left(\psi^{-1}(\|\tilde{X}_{n}\|)\right)
\frac{\tilde{X}_{n}}{\|\tilde{X}_{n}\|}, ~n \geq 1.
\]
It is easy to see that
\[
\mathbb{P}\left(\|Y\| > a_{n} \right) =
\mathbb{P}\left( \left\|\varphi\left(\psi^{-1}(\|\tilde{X}\|)\right)
\frac{\tilde{X}}{\|\tilde{X}\|} \right \| > \varphi(n) \right) =
\mathbb{P} \left(\|\tilde{X} \| > b_{n} \right), ~n \geq 1.
\]
It thus follows from (3.10) that $\{Y, Y_{n}; ~n \geq 1 \}$
is a sequence of i.i.d. symmetric {\bf B}-valued random variables such that
\[
\lim_{n \rightarrow \infty} n \mathbb{P}\left(\|Y\| > a_{n} \right) = 0
\]
and hence by (3.2)
\begin{equation}
\frac{\sum_{i=1}^{n} Y_{i}}{a_{n}} \rightarrow_{\mathbb{P}} 0.
\end{equation}
By Theorem 1.1 (ii) together with (3.11) and (3.10), we have, for every $\epsilon > 0$,
\[
\begin{array}{lll}
\mbox{$\displaystyle
\mathbb{P}\left(\left\|\sum_{i=1}^{n} \tilde{X}_{i} \right\| > \epsilon b_{n} \right)$}
& \leq &
\mbox{$\displaystyle
4 \mathbb{P} \left(\left\|\sum_{i=1}^{n} \varphi\left(\psi^{-1}(\|\tilde{X}_{i}\|)\right)
\frac{\tilde{X}_{i}}{\|\tilde{X}_{i}\|} \right\| > \epsilon a_{n} \right)
+ \sum_{i=1}^{n}\mathbb{P}\left(\|\tilde{X}_{i}\| > b_{n} \right) $}\\
&&\\
&=&
\mbox{$\displaystyle
4 \mathbb{P} \left(\left\|\sum_{i=1}^{n} Y_{i} \right\| > \epsilon a_{n} \right)
+ n \mathbb{P} \left(\|\tilde{X}\| > b_{n} \right)$} \\
&&\\
& \rightarrow &
\mbox{$\displaystyle 0 ~~\mbox{as}~ n \rightarrow \infty $}
\end{array}
\]
and hence
\[
\frac{S_{n} - S_{n}^{\prime}}{2b_{n}} = \frac{\sum_{i=1}^{n} \tilde{X}_{i}}{b_{n}}
\rightarrow_{\mathbb{P}} 0.
\]
We thus conclude that
\[
\frac{S_{n} - S_{n}^{\prime}}{b_{n}} \rightarrow_{\mathbb{P}} 0.
\]
By Lemma 3.2, (3.6) follows.
It remains to show that (3.7) implies (3.6). Let $\{X, X_{n}; ~n \geq 1 \}$
be a sequence of i.i.d. {\bf B}-valued random variables satisfying (3.7).
By the L\'{e}vy inequality in a Banach space setting, we have that for all $n \geq 1$,
\[
\mathbb{P} \left( \frac{\max_{1 \leq i \leq n} \left \|X_{i} - X_{i}^{\prime} \right\|}{b_{n}}
> 2 \lambda \right)
\leq 2 \mathbb{P} \left( \frac{\left \|S_{n} - S_{n}^{\prime} \right\|}{b_{n}}
> 2 \lambda \right)
\leq 4 \mathbb{P} \left( \frac{\left \|S_{n} \right\|}{b_{n}}
> \lambda \right) ~~\forall~\lambda > 0.
\]
Then it follows from (3.7) that
\[
\lim_{n \rightarrow \infty }
n \mathbb{P} \left( \frac{\|X - X^{\prime}\|}{b_{n}} > 2 \lambda \right)= 0; ~~\mbox{i.e.,}~
\lim_{n \rightarrow \infty }
n \mathbb{P} \left( \left\|\frac{X - X^{\prime}}{2 \lambda} \right\| > b_{n} \right)= 0 ~~\forall ~\lambda > 0.
\]
That is, (3.8) holds with $X$ replaced by the symmetric random variable $(X - X^{\prime})/(2\lambda)$.
Since (3.6) and (3.8) are equivalent, we conclude that
\[
\frac{\sum_{i=1}^{n} \frac{X_{i} - X_{i}^{\prime}}{2 \lambda}}{b_{n}} \rightarrow_{\mathbb{P}} 0;
~~\mbox{i.e.,}~ \left(\frac{1}{2 \lambda} \right) \frac{S_{n} - S_{n}^{\prime}}{b_{n}} \rightarrow_{\mathbb{P}} 0.
\]
Thus
\[
\frac{S_{n} - S_{n}^{\prime}}{b_{n}} \rightarrow_{\mathbb{P}} 0
\]
which, by Lemma 3.2, implies (3.6). ~$\Box$
\vskip 0.5cm
\noindent
{\bf Acknowledgments}\\
\noindent The research of Deli Li was partially supported by a grant from the Natural Sciences and
Engineering Research Council of Canada (grant \#: RGPIN-2014-05428) and the research of Han-Ying Liang
was partially supported by the National Natural Science Foundation of China (grant \#: 11271286).
\vskip 0.5cm
{\bf References}
\begin{enumerate}
\item Chow, Y.S., and Teicher, H. 1997. {\it Probability Theory:
Independence, Interchangeability, Martingales, 3rd ed.}
Springer-Verlag, New York.
\item Feller, W. 1946. A limit theorem for random variables with infinite
moments. {\it Amer. J. Math.} {\bf 68}:257-262.
\item Hoffmann-J{\o}rgensen, J. 1974. Sums of independent Banach space
valued random variables. {\it Studia Mathematica} {\bf 52}:159-186.
\item Kahane, J.-P. 1985. {\it Some Random Series of Functions, 2nd ed.} Heath Math. Monographs,
1968. Cambridge Univ. Press.
\item Klass, M., and Teicher, H. 1977. Iterated logarithm laws for asymmetric random variables
barely with or without finite mean. {\it Ann. Probab.} {\bf 5}:861-874.
\item Ledoux, M., and Talagrand, M. 1991. {\it Probability in Banach
Spaces: Isoperimetry and Processes.} Springer-Verlag, Berlin.
\item L\'{e}vy, P. 1937. {\it Th\'{e}orie de L'addition des Variables
Al\'{e}atoires.} Gauthier-Villars, Paris.
\item Li, D., Liang, H.-Y., and Rosalsky, A. 2017. A note on symmetrization procedures
for the laws of large numbers. {\it Statist. Probab. Lett.} {\bf 121}:136-142.
\item Marcus, M. B., and Woyczy\'{n}ski, W. A. 1979. Stable measures and
central limit theorems in spaces of stable type. {\it Trans. Amer. Math.
Soc.} {\bf 251}:71-102.
\end{enumerate}
\end{document}
\section{\label{sec:level1}Introduction}
Active particles like bacteria, animals or artificial micro-swimmers \cite{wadaPRL99,rappelPRL83,vicsekPRE74,sumpter} are able to transform different forms of energy into self-propelled directed motion \cite{Marchetti.RevModPhys.85,BDLR2016rmp}. They use various energy sources to drive some internal motor mechanism and represent out-of-equilibrium systems driven by a continuous energy flow. Artificial micro-swimmers, for instance, turn chemical energy \cite{howsePRL99} or radiation like light \cite{PalacciScience,jiangPRL105} or ultrasound \cite{WangACSNano2012} into an actively driven, self-propelled motion.
Non-equilibrium systems that are composed of a large number of active particles can show fascinating collective phenomena. In particular, short- and long-range interactions between individual particles result in alignment mechanisms that can cause directional ordering (so-called polar ordering) and synchronization of the motion of self-propelled particles \cite{uchidaPRL106,golestanianSoftMatter7}. The resulting collective modes of motion are often referred to as swarming \cite{Marchetti.RevModPhys.85}. Also, vibrated granular media in confined geometries are employed as good model systems for certain aspects of collective behavior of active particles \cite{GranularExcitonsNature1996,ArTs2003pre,WHDL2013prl,NaMR2006jsme}.
\lo{Depending on the particular interactions between the particles, their density and the strength of driving (activity), one observes different regimes of clustering, ordering and motion that one may, in analogy to equilibrium behavior, call gas, liquid, liquid-crystalline and crystalline states \cite{BDLR2016rmp,MaVC2018arpc}. Much recent attention has focused on an actively driven condensation phenomenon, the motility-induced phase separation between a gaseous and a liquid state that is purely due to self-propulsion \cite{Ginot2015prx,SSWK2015prl,CaTa2015arcmp}. However, for certain particle interactions and/or at quite high densities, active particles can also form crystalline ordered states, in particular, resting \cite{Thar2002,Thar2005} or traveling \cite{PalacciScience,theurkauff2012prl,LibchaberPRL2015,ginot2018aggregation} patches with nearly crystalline order \cite{ToTR2005ap}. These ``active crystals'' \cite{MenzelLoewen,MenzelOhtaLoewenPhysRevE.89} (called ``flying crystals'' in \cite{ToTR2005ap} and ``living crystals'' in \cite{PalacciScience,MSAC2013prl,BDLR2016rmp}) have properties that differ from those of passive crystalline clusters \cite{speckPRL112,speckPRL110}. The activity due to self-propulsion can change the critical temperature and density at which crystallization sets in. Moreover, it can induce organized translational and rotational motion \cite{theurkauff2012prl,ReBH2013pre,Ginot2015prx,ginot2018aggregation}.
Many particle-based models are studied that show resting, traveling and rotating, active, crystalline and amorphous clusters \cite{EbEr2003pspi,REES2008epjt,MSAC2013prl,NKEG2014prl} as well as cluster-crystals \cite{Menz2013jpm,DeLH2017njp}. For instance, a systematic study of the interplay of a short-range attraction and self-propulsion in Brownian dynamics simulations shows that clusters form at low activity (due to attraction) as well as at high activity (motility-induced) with a homogeneous active fluid phase in between \cite{ReBH2013pre}.}
\lo{There exist many continuum models for active matter \cite{ToTR2005ap,Marchetti.RevModPhys.85,Menz2015prspl,RKBH2018pre}; an important example is the Toner-Tu model of swarming \cite{ToTu1995prl,TonerTu}. It represents a generalization of the compressible Navier-Stokes equations of hydrodynamics to systems without Galilei invariance, i.e., with preferred velocities.
Recently, a simple active Phase-Field-Crystal model (aPFC) has been proposed that describes transitions between the liquid state and resting and traveling crystalline states \cite{MenzelLoewen}.
It combines elements of the Toner-Tu theory and the (passive) Phase-Field-Crystal model (PFC), an intensively studied microscopic continuum model for the dynamics of crystallization processes on diffusive time scales \cite{EmmerichPFC}.}
The Phase-Field-Crystal model was introduced by Elder and coworkers \cite{ElderGrantPRL88} and is applied to passive colloidal particles but is also used for atomic systems \cite{TGTP2009prl,ERKS2012prl}. Mathematically, it corresponds to the conserved Swift-Hohenberg equation (cSH) \cite{TARG2013pre}, i.e., the counterpart with conserved dynamics (i.e., of the form of a continuity equation) of the Swift-Hohenberg (SH) equation that represents non-conserved dynamics \cite{EGUW2018springer}. The latter is the standard equation for pattern formation close to the onset of a monotonic short-wave instability in systems without a conservation law, e.g., a Turing instability in reaction-diffusion systems or the onset of convection in a B\'enard system \cite{CrossHohenberg}. The cSH equation was first derived as the equation governing the evolution of binary fluid convection between thermally insulating boundaries \cite{KnoblochPRA1989}; in the PFC context, recent derivations from classical Dynamical Density Functional Theory (DDFT) of colloidal crystallization can be found in Refs.~\cite{EmmerichPFC,ARTK2012pre}. In the course of the derivation, the one-particle density of DDFT is shifted and scaled to obtain the order parameter field of the PFC model. For brevity, in the following we refer to it as ``density''. Note that both the SH and PFC models represent gradient dynamics on the same class of energy functionals \cite{EGUW2018springer}. However, in the active PFC model the coupling between density and polarization (quantified by the coupling or activity parameter) breaks the gradient dynamics structure, thereby allowing for sustained motion. Note that non-variational amendments of the standard non-conserved SH equation are also studied and can also show traveling states, though with different onset behavior \cite{KoTl2007c,HoKn2011pre,BuDa2012sjads}.
Up to now the active Phase-Field-Crystal model has mainly been employed to study the linear stability of the liquid state with respect to the development of resting and traveling crystalline patterns and in the study of domain-filling resting and traveling crystals by direct time simulations \cite{MenzelLoewen,MenzelOhtaLoewenPhysRevE.89,ChGT2016el,PVWL2018pre}.
The main purpose of the present work is to investigate resting and traveling, periodic and localized states and the related transitions as described by the active Phase-Field-Crystal model.
\lo{Our aim is to present a detailed analysis of the underlying bifurcation structure that can serve as reference for future similar analyses of other models describing active crystals. This shall allow one to develop a clearer understanding of observed multistabilities of states, hysteresis effects and critical threshold states for the occurrence of qualitative changes.
Here, a particular focus is on the transitions from resting to traveling states that} will turn out to occur at drift-pitchfork and drift-transcritical bifurcations. Drift-pitchfork bifurcations are widely studied in the literature and occur in many systems \cite{FaDT1991jpi,GGGC1991pra}. This includes the onset of motion of self-aggregating membrane channels \cite{LeNH2006prl}, drifting liquid column arrays \cite{BrFL2001el}, chemically-driven running droplets \cite{JoBT2005epje} and traveling localized states in reaction-diffusion systems~\cite{SOBP_PRL97,PiPRL01,driftbif_gurevich}. The onset of motion for localized structures is studied, for instance, in Refs.~\cite{krischerPRL73,Or-Gui.PhysRevE.57,baerPRE64,akhmedievPRE53} while Refs.~\cite{MenzelOhtaLoewenPhysRevE.89,OSIPOV1996,PVWL2018pre} focus on domain-filling patterns.
In the PFC and aPFC models, spatially localized states correspond to finite crystalline patches (i.e., patches of periodic states) that coexist with a liquid background (i.e., a homogeneous state). A great variety of resting localized states has been analyzed for the PFC model in Ref.~\cite{TARG2013pre}, where detailed bifurcation diagrams are given in the case of one spatial dimension \lo{(1d), while the two- (2d) and three-dimensional (3d) cases are investigated via direct numerical simulations. An example of a bifurcation diagram in 2d is given in \cite{EGUW2018springer}.} We expect such resting localized states (i.e., resting crystalline patches) to exist also for the aPFC model, at least at small values of the activity parameter, \lo{similar to the clusters observed at small activity in \cite{ReBH2013pre}.} Increasing activity brings the system further out of equilibrium and we expect that the localized states begin to travel. However, we also expect that activity might destroy the crystalline patches.
In general, localized states are experimentally observed and modeled in various areas of biology, chemistry and physics \cite{MathBio,BioPatterns,coulletPRL84,ChemWaves,BurkeKnoblochLSgenSHe}. Examples range from localized patches of vegetation patterns \cite{MERON2004}, local arrangements of free-surface spikes of magnetic fluids closely below the onset of the Rosenzweig instability \cite{richterPRL94} and localized spot patterns in nonlinear optical systems \cite{SchaepersPRL2000} to oscillating localized states (oscillons) in vibrated layers of colloidal suspensions \cite{LiouPRL1999}.
In the context of solidification described by PFC models, localized states are observed in and near the thermodynamic coexistence region of the liquid and crystal states. Crystalline patches of various sizes and symmetries can coexist with a liquid environment, depending on control parameters such as the mean density and the undercooling \cite{RATK2012pre,TARG2013pre, EGUW2018springer}. For instance, when increasing the mean density, the crystals are enlarged as further density peaks (or ``bumps'', or ``spots'') are added at their borders. Ultimately, the whole finite domain is filled and the branches of localized states terminate on the branch of space-filling periodic states. Within their existence region, the localized states form ``snaking'' branches in the bifurcation diagram \cite{BurkeKnoblochSnakingChaos2007,SandstedeSnakes}. An important difference between conserved systems like the PFC model and non-conserved systems like the SH model is that the respective snaking curves of localized states are slanted \cite{BoCR2008pre,Dawe2008sjads,LoBK2011jfm,PACF2017prf} and straight \cite{K_IMA16,BurkeKnoblochSnakingChaos2007,ALBK2010sjads,LSAC2008sjads}, respectively. For an extensive discussion of this point see the conclusion of Ref.~\cite{TARG2013pre}. \lo{Note that besides mass conservation also boundary conditions can have an influence on the type of snaking \cite{kozyreffPRL2009}.}
Here, we use the aPFC model to explore how slanted snaking of localized states, as a characteristic feature of pattern-forming systems with a conserved quantity, is amended by activity. This includes the questions of when and how resting localized states start to travel and of whether and how they are destroyed by activity. Our work is organized as follows:
Section~\ref{sec:level1mod} introduces the model, its analytical and numerical treatment, while section~\ref{sec:level1lin} analyzes the linear stability of the uniform state (liquid state)
and discusses the different types of dispersion relations. Then, sections~\ref{sec:level1per} and~\ref{sec:level1loc} employ numerical continuation techniques to determine
bifurcation diagrams for resting and traveling periodic states (crystal) and localized states (crystallites coexisting with liquid), respectively, employing the mean density and activity parameter as main control parameters. Section~\ref{sec:level1drift} analyzes the condition for the onset of motion of crystallites. Finally, section~\ref{sec:level1con} concludes and gives an outlook.
\section{\label{sec:level1mod}The model}
\subsection{\label{sec:level2}Governing equations}
The local state variables of the aPFC model as introduced in Ref.~\cite{MenzelLoewen} are the scalar order parameter field $\psi(\mathbf{r},t)$, $\mathbf{r}\in \Omega \subset \mathbb{R}^\mathrm{n}$ (called in the following ``density'') where $\Omega$ denotes the considered domain, and the vectorial order parameter field $\mathbf{\ensuremath{P}}(\mathbf{r},t)$ (called in the following ``polar ordering'') that describes the local strength and direction of the active drive. The field $\psi(\mathbf{r},t)$ is conserved, i.e., $\int_\Omega\psi\, \mathrm{d^n r}$ is constant, and specifies the modulation about the mean density $\bar{\psi}$ that itself encodes the deviation from the critical point \cite{EmmerichPFC}. The field $\mathbf{\ensuremath{P}}(\mathbf{r},t)$ is non-conserved.
The uncoupled dynamics of $\psi(\mathbf{r},t)$ and $\mathbf{\ensuremath{P}}(\mathbf{r},t)$ corresponds to a purely conserved and a mixed non-conserved and conserved gradient dynamics on an underlying free energy functional $\mathcal{F}[\psi,\vec{P}]$, respectively. The functional contains no terms mixing the two fields and the coupling is purely non-variational, i.e., it cannot be written as a gradient dynamics. The coupling is introduced in both equations in the simplest nontrivial form allowed for by the tensorial character of the fields that keeps the conserved character of the $\psi$-dynamics, i.e., the evolution of $\psi$ follows a continuity equation $\partial_t\psi=-\nabla\cdot\vec{j}$ where $\vec{j}$ is a flux. The non-dimensional evolution equations are \cite{MenzelLoewen}
\begin{align}
\partial_{t}\psi &= \nabla^{2}\frac{\delta\mathcal{F}}{\delta\psi}-v_{0}\nabla\cdot\mathbf{P},\\
\partial_{t}\mathbf{P} &= \nabla^{2}\frac{\delta\mathcal{F}}{\delta\mathbf{P}}-D_{\mathrm{r}}\frac{\delta\mathcal{F}}{\delta\mathbf{P}}-v_{0}\nabla\psi
\label{eq:gov}
\end{align}
where $v_0$ is the coupling strength, also called activity parameter or velocity of self-propulsion. Physically speaking, $\mathbf{\ensuremath{P}}$ is subject to translational and rotational diffusion with $D_{\mathrm{r}}$ being the rotational diffusion constant. The functional $\mathcal{F}[\psi,\vec{P}]$ is the sum of the standard phase-field-crystal functional $\mathcal{F}_{\mathrm{pfc}}[\psi]$ \cite{ElderGrantPRL88, ElderGrantPRE70, EmmerichPFC} and an orientational part $\mathcal{F}_{\mathbf{P}}[\vec{P}]$
\begin{equation}
\mathcal{F}=\mathcal{F}_{\mathrm{pfc}}+\mathcal{F}_{\mathbf{P}}
\end{equation}
with
\begin{equation}
\mathcal{F}_{\mathrm{pfc}}[\psi] = \int \mathrm{d^nr}\left\{ \frac{1}{2}\psi\left[\epsilon+\left(1+\nabla^{2}\right)^{2}\right]\psi+\frac{1}{4}(\psi+\bar{\psi})^{4}\right\}
\label{eq:functional}
\end{equation}
and
\begin{equation}
\mathcal{F}_{\mathbf{P}}[\vec{P}]=\int \mathrm{d^nr} \left(\tfrac{C_1}{2}\mathbf{P}^{2}+\tfrac{C_2}{4}\mathbf{P}^{4}\right).
\label{eq:functionalP}
\end{equation}
The functional (\ref{eq:functional}) encodes the phase transition between liquid and crystal state \cite{EmmerichPFC}. It consists of a negative interfacial energy density ($\sim|\nabla\psi|^2$) that favors the creation of interfaces, a bulk energy density, and a stabilizing stiffness term ($\sim(\Delta\psi)^2$), as can be seen by partial integration. The parameter $\epsilon$ encodes temperature: negative values correspond to an undercooling of the liquid phase and result in solid (periodic) states for suitable mean densities $\bar{\psi}$, whereas positive values result in a liquid (homogeneous) phase. The functional (\ref{eq:functionalP}) with $C_1<0$ and $C_2>0$ allows for spontaneous polarization (pitchfork bifurcation at $C_1=0$). However, in most of our work we avoid spontaneous polarization by setting $C_1>0$ and $C_2=0$, as also done in most of the analysis of Refs.~\cite{MenzelLoewen,MenzelOhtaLoewenPhysRevE.89,ChGT2016el}. With $C_1>0$, diffusion reduces the polarization.
Determining the variations of Eqs.~(\ref{eq:functional}) and (\ref{eq:functionalP}) and introducing them in the governing equations~(\ref{eq:gov}) we obtain the kinetic equations
\begin{align}
\partial_{t}\psi &= \nabla^{2}\left\{\left[\epsilon+\left(1+\nabla^{2}\right)^{2}\right]\psi+\left(\bar{\psi}+\psi\right)^{3}\right\}-v_{0}\nabla\mathbf{\cdot P}, \label{eq:dtpsi} \\
\partial_{t}\mathbf{P} &= C_1\nabla^{2}\mathbf{P} - D_{\mathrm{r}}C_1\mathbf{P}-v_{0}\nabla\psi.
\label{eq:dtP}
\end{align}
In the following we study resting and traveling solutions of these equations in the spatially one-dimensional case with a special emphasis on the onset of motion.
Then $\vec{P}$ reduces to a scalar $P$ that indicates the strength and the sense of direction of motion.
%
\subsection{\label{sec:level2}Steady and stationary states}
To investigate steady and stationary states (where the latter are steady states in some comoving frame that moves with velocity $c$) we consider Eqs.~(\ref{eq:dtpsi}) and (\ref{eq:dtP}) with $\partial_{t}\psi=c\partial_x\psi$ and $\partial_{t}P=c\partial_x P$. Hence, positive velocities $c$ correspond to a propagation to the left. Then Eq.~(\ref{eq:dtpsi}) can be integrated once and we obtain the coupled fifth- and second-order ordinary differential equations
\begin{align}
0 =& \partial_{x}\left\{\left[\epsilon+\left(1+\partial_{xx}\right)^{2}\right]\psi+\left(\bar{\psi}+\psi\right)^{3}\right\}-v_{0}P \nonumber \\
&-c \psi-J, \label{eq:steadystatePSI}\\
0 =& C_1 \partial_{xx}P-D_{\mathrm{r}}C_1 P - v_{0}\partial_{x}\psi-c\partial_{x}P \label{eq:steadystateP}
\end{align}
where the integration constant $J$ represents a flux. We emphasize that the velocity $c$ is equal to zero for resting states. For traveling states it is a nonlinear eigenvalue that has to be determined along with the solution profile.
Besides the trivial steady state $(\psi=0, P=0)$ there exist spatially-modulated states $(\psi=\psi(x), P=P(x))$ that solve Eqs.~(\ref{eq:steadystatePSI}) and (\ref{eq:steadystateP}). We will determine their bifurcation diagrams employing continuation techniques (see next section). In the treated special case of $C_2=0$ [cf.~Eq.~(\ref{eq:dtP})], for periodic states one may integrate the linear Eq.~(\ref{eq:steadystateP}) over one period $\ell$ and finds $\int_\ell dx\,P(x)=0$. As $\int_\ell dx\,\psi(x)=0$ by definition, Eq.~(\ref{eq:steadystatePSI}) then implies $J=0$. Note that, as $\psi(x)$ is the deviation from the mean $\bar\psi$, for $J=0$ the flux of material is given by $c\bar\psi$. Note that the system is invariant under the transformation $(\psi,P,x,c)\to (\psi,-P,-x,-c)$. In the case of $\bar\psi=0$, also the symmetry $(\psi,P,x,c)\to (-\psi,-P,x,c)$ holds.
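For completeness, the two integrations behind $J=0$ can be written out explicitly; the following sketch merely restates the argument above in equation form, using only the periodicity of the profiles. Integrating Eq.~(\ref{eq:steadystateP}) over one period, all total derivatives of periodic functions drop out,
```latex
0 = \int_\ell \mathrm{d}x\,\bigl(C_1\partial_{xx}P - D_\mathrm{r}C_1 P
    - v_0\partial_x\psi - c\,\partial_x P\bigr)
  = -D_\mathrm{r}C_1 \int_\ell \mathrm{d}x\,P
  \quad\Rightarrow\quad \int_\ell \mathrm{d}x\,P = 0,
```
and integrating Eq.~(\ref{eq:steadystatePSI}), where the $\partial_x\{\cdots\}$ term vanishes for the same reason and $\int_\ell \mathrm{d}x\,\psi=0$ by definition,
```latex
0 = \int_\ell \mathrm{d}x\,\bigl(\partial_x\{\cdots\} - v_0 P - c\,\psi - J\bigr)
  = -J\,\ell
  \quad\Rightarrow\quad J = 0.
```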
\subsection{\label{sec:level2}Numerical approach}
We employ numerical path-continuation techniques \cite{DWCD2014ccp,KrauskopfOsingaGalan-Vioque2007,Kuznetsov2010,EGUW2018springer} bundled in the package auto07p \cite{DoKK1991ijbc,Doedelauto2012} to determine steady ($c=0$) and stationary ($c\neq0$) periodic and localized solutions of Eqs.~(\ref{eq:steadystatePSI}) and (\ref{eq:steadystateP}) on a domain of size $L$. The techniques allow one to follow branches of solutions in parameter space, detect bifurcations, switch branches and in turn follow the bifurcating branches. The pseudo-arclength continuation implemented in auto07p is also able to follow branches when they fold back at saddle-node bifurcations allowing one to determine the entire bifurcation diagram. In the literature the method is extensively applied to the SH equation \cite{BuKn2006pre,MaBK2010pd,BuDa2012sjads} and PFC-type models \cite{Thie2010jpcm,TARG2013pre,RATK2012pre}. To our knowledge, continuation has not yet been applied to the aPFC model.
To do so, our system of Eqs.~(\ref{eq:steadystatePSI}) and (\ref{eq:steadystateP}) is transformed into a seven-dimensional dynamical system (with $x$ as the independent variable and seven periodic boundary conditions). A phase condition that breaks translational invariance and a constraint that controls the volume are included as integral conditions (cf.~Refs.~\cite{cenosTutorial, EGUW2018springer} for examples of using such conditions for several related equations). This implies that in each continuation run, beside the main control parameter, two further parameters have to be adapted (in other words, they represent nonlinear eigenvalues of the problem). Here, we use either the mean density $\bar{\psi}$ or the activity $v_{0}$ as main control parameter while velocity $c$ and flux $J$ are adapted.
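As an illustration, the reformulation as a first-order system can be sketched as follows. This is a minimal sketch with our own variable names, not the actual auto07p problem file, which additionally carries the phase and volume integral conditions:

```python
import numpy as np

def rhs(u, psibar, eps, C1, Dr, v0, c, J):
    """du/dx for u = (psi, psi', psi'', psi''', psi'''', P, P'),
    obtained by solving Eqs. (steadystatePSI)-(steadystateP) for the
    highest derivatives psi''''' and P''."""
    psi, dpsi, d2psi, d3psi, d4psi, P, dP = u
    # psi''''' from 0 = d/dx{[eps+(1+dxx)^2]psi + (psibar+psi)^3}
    #                   - v0 P - c psi - J,
    # using [eps+(1+dxx)^2]psi = (eps+1)psi + 2 psi'' + psi''''
    d5psi = (-(eps + 1.0)*dpsi - 2.0*d3psi
             - 3.0*(psibar + psi)**2*dpsi + v0*P + c*psi + J)
    # P'' from 0 = C1 P'' - Dr C1 P - v0 psi' - c P'
    d2P = (Dr*C1*P + v0*dpsi + c*dP) / C1
    return np.array([dpsi, d2psi, d3psi, d4psi, d5psi, dP, d2P])
```

The trivial liquid state $u=0$ with $J=0$ is a fixed point of this system, which serves as a quick consistency check of the reformulation.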
The resulting bifurcation diagrams are given in terms of the $L^2$-norm of the solution array that we use as main solution measure.
It is defined by
\begin{equation}
||\underline{\psi},\underline{P}||_2=\sqrt{\frac{1}{L} \int_0^L \sum_{i=1}^7 a_i^2 \mathrm{d}x}
\end{equation}
where the $a_i$ stand for the elements of the solution array $(\underline{\psi},\underline{P})=(\psi, \partial_{x}\psi, \partial^2_{x}\psi, \partial^3_{x}\psi, \partial^4_{x}\psi, P, \partial_{x} P)$.
In addition, we perform direct numerical simulations (DNS) employing a pseudo-spectral method. Starting from a homogeneous state with a small random perturbation, Eqs.~(\ref{eq:dtpsi}) and (\ref{eq:dtP}) are integrated forward in time via a semi-implicit Euler method, while spatial derivatives are calculated in Fourier space and nonlinearities in real space.
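A minimal 1d sketch of such a scheme is given below; the parameter and discretization values are our own choices for illustration, not necessarily those used for the figures. The linear terms are treated implicitly in Fourier space while the cubic nonlinearity and the active coupling are treated explicitly (the $P$ update uses the already updated $\psi$, one of several possible semi-implicit variants):

```python
import numpy as np

# Illustrative parameters (our choice): linearly unstable liquid at
# psibar = -0.5 with activity above the onset of motion.
eps, C1, Dr, v0, psibar = -1.5, 0.1, 0.5, 0.5, -0.5
L, N, dt, steps = 100.0, 256, 0.01, 10000

k = 2.0*np.pi*np.fft.rfftfreq(N, d=L/N)

rng = np.random.default_rng(1)
psi = 1e-3*rng.standard_normal(N)
psi -= psi.mean()                        # conserved modulation, zero mean
P = np.zeros(N)

Lpsi = -k**2 * (eps + (1.0 - k**2)**2)   # linear part of the psi equation
LP = -(C1*k**2 + Dr*C1)                  # linear part of the P equation

for _ in range(steps):
    psih, Ph = np.fft.rfft(psi), np.fft.rfft(P)
    nlh = np.fft.rfft((psibar + psi)**3)   # cubic term in real space
    # implicit linear terms, explicit nonlinearity and active coupling
    psih = (psih + dt*(-k**2*nlh - 1j*k*v0*Ph)) / (1.0 - dt*Lpsi)
    Ph = (Ph + dt*(-1j*k*v0*psih)) / (1.0 - dt*LP)
    psi, P = np.fft.irfft(psih, N), np.fft.irfft(Ph, N)
```

Note that the $k=0$ mode of $\psi$ is untouched by the update, so the mean density stays conserved to machine precision.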
\section{\label{sec:level1lin}Liquid state and its linear stability}
\begin{figure*}
\includegraphics{Figure1.eps}
\caption{\label{fig:phasediagram} \lo{(a) Morphological phase diagram of the active PFC model in the 1d case in the plane spanned by activity $v_{0}$ and mean concentration $\bar{\psi}$ as obtained by linear and nonlinear analysis. The remaining parameters are $\epsilon=-1.5$, $C_{1}=0.1$, $C_{2}=0.0$ and $D_\mathrm{r}=0.5$. Labels ``I'' to ``VI'' in (a) indicate parameters for which the real part of the dispersion relation $\lambda(k)$ is shown in (b) with solid [dashed] lines corresponding to real [complex] eigenvalues. In (a) gray shading indicates the linearly unstable region where $\mathrm{Re}(\lambda(k))>0$ for a band of wavenumbers $k$. There, periodic (crystalline) patterns are formed. The analytically obtained curved solid and horizontal dashed black lines indicate the onset of the monotonic and oscillatory finite wavelength instability, respectively. For the coinciding red lines the critical wavenumber is approximated as $k_c\approx 1$. The gray-shaded region of the linearly unstable homogeneous (liquid) phase is separated by the vertical dashed blue line into regions where stable resting (light gray) and stable traveling (dark gray) crystals are found in the fully nonlinear regime. The thin dotted black and dot-dashed red lines indicate changes in the primary bifurcation behavior and indicate where the (then unstable) resting crystals cease to exist (see main text).}}
\end{figure*}
The trivial solution of a PFC model is the homogeneous state that represents the liquid phase where on diffusive time scales the probability of finding a particle is uniform in space. In analogy, we also call the
homogeneous state $(\psi_0,\mathbf{P}_{0})=(0,\mathbf{0})$ of the present aPFC model ``liquid phase''. Although it exists at all parameter values, for $\epsilon<0$ it is stable only at high $|\bar{\psi}|$ and at lower $|\bar{\psi}|$ becomes unstable w.r.t.\ coupled density and polarization fluctuations. However, in the context of colloidal particles the region $\bar{\psi}>0$ is unphysical \cite{TARG2013pre} and we focus on $\bar{\psi}<0$, where the liquid state is stable at low values of $\bar{\psi}$ (high $|\bar{\psi}|$) while the crystalline state is found at high $\bar{\psi}$ (low $|\bar{\psi}|$).
To determine the linear stability of the homogeneous state, Eqs.~(\ref{eq:dtpsi}) and (\ref{eq:dtP}) are linearized in small perturbations $(\delta\psi,\delta\mathbf{P})$ about \lo{$(0,\mathbf{0})$} yielding
\begin{align}
\partial_{t}\delta\psi &= \nabla^{2}\left(\epsilon+3\bar{\psi}^{2}+\left(1+\nabla^{2}\right)^{2}\right)\delta\psi-v_{0}\nabla \cdot \delta\mathbf{P}, \label{eq:dtdeltaPSI_trivial}\\
\partial_{t}\delta\mathbf{P} &= \nabla^{2}\left(C_{1}\delta\mathbf{P}\right)-D_{\mathrm{r}}C_{1}\delta\mathbf{P}-v_{0}\nabla\delta\psi \label{eq:dtdeltaP_trivial}.
\end{align}
We restrict our analysis to one spatial dimension, expand the spatial dependence of the perturbation into decoupled harmonic modes and, in consequence, use the exponential ansatz $\delta \psi (x,t), \delta P(x,t) \propto \exp(ikx+\lambda t)$
in Eqs.~(\ref{eq:dtdeltaPSI_trivial}) and (\ref{eq:dtdeltaP_trivial}) to obtain the eigenvalues
\begin{equation}
\lambda_\pm = \frac{1}{2}\left(L_1(k)+L_2(k)\right)\pm \frac{1}{2} \sqrt{\left(L_1(k)-L_2(k)\right)^2-4 v_0^2 k^2} \label{eq:eigenvalues}
\end{equation}
where
\begin{align}
L_1(k)&= -k^2\left( \epsilon + 3 \bar{\psi}^2 + \left( 1-k^2 \right)^2 \right)\\
L_2(k)&= -k^2 C_1 -D_\mathrm{r}C_1.
\end{align}
We investigate the stability of \lo{$(\psi_0,P_{0})=(0,0)$} in the $(\bar{\psi},v_{0})$-plane and determine the boundary where the largest real part of an eigenvalue $\lambda$ crosses zero at a finite critical wavenumber $k_c$, i.e., where a maximum of the dispersion relation $\mathrm{Re}(\lambda(k))$ touches zero. This can occur either with a zero or with a finite imaginary part, corresponding to unstable modes that result in the development of a resting or a traveling crystalline state (i.e., spatially-periodic solution), respectively. Setting $\lambda=0$ and substituting $k^{2}=z$ gives a cubic equation for $z$. Using Cardano's method and accounting for the desired number of roots, we find analytical expressions for the stability boundaries in both cases.
The results are presented in Fig.~\ref{fig:phasediagram}(a). The white area at low $\bar{\psi}$ corresponds to a linearly stable liquid phase, whereas the gray shading marks regions where the liquid phase is unstable w.r.t.\ spatially periodic perturbations. The dashed horizontal line (red and black) \lo{at $\bar{\psi}\approx-0.67$} separates the linearly stable liquid phase and a traveling crystal. It is independent of activity $v_0$, as can be seen from a closer look at Eq.~(\ref{eq:eigenvalues}): there, $v_0$ only appears in the (then negative) discriminant and therefore only influences $\mathrm{Im}(\lambda)$, i.e., the drift velocity $c$ of the perturbation modes. The upwards curved black line that separates white and light gray regions at low activity indicates the stability border of the liquid phase related to a purely real eigenvalue, i.e., a monotonic instability.
Alternatively to Cardano's method, the critical wavenumber can be approximated by $k_c\approx 1$ as used in Ref.~\cite{ChGT2016el}. This approximation gives the red lines in Fig.~\ref{fig:phasediagram}(a). The resulting stability border cannot be distinguished by eye from the exact results.
Corresponding dispersion relations are displayed in Fig.~\ref{fig:phasediagram}(b), showing $\mathrm{Re}(\lambda)$ of the \lo{leading two eigenvalues} with solid [dashed] lines for real [complex] eigenvalues. The roman numbering corresponds to labels in the stability diagram~\ref{fig:phasediagram}(a). \lo{Case~I shows a dominant (i.e.~at the maximum) instability mode that is real (i.e.~monotonic) and likely results in a resting crystal.} However, with increasing activity $v_0$ the ``bubble'' of real eigenvalues around the maximum shrinks. At the codimension-2 point (case~II) this bubble shrinks to zero and the \lo{marginally stable eigenvalue at the maximum becomes complex. Case~III then shows a dominant mode that is complex (i.e.~oscillatory) and likely results in crystallization into a traveling crystal. Cases~IV to VI give further qualitatively different dispersion relations. In particular, points~V and VI illustrate the important change in the character of the dominant mode at $k\approx1$ from monotonic to oscillatory. Case~IV is located on the thin dotted black line in Fig.~\ref{fig:phasediagram}(a) that marks where the minimum of $\mathrm{Re}(\lambda)$ touches zero. The dot-dashed red line is the corresponding approximation obtained by assuming $k_\mathrm{min}=1$. Crossing this line does not influence the linear stability but changes the number of expected primary bifurcations. Accordingly, in Fig.~\ref{fig:crystal_v0} below (that represents a horizontal cut through Fig.~\ref{fig:phasediagram}(a) at $\bar{\psi}=-0.5$), at $v_0\approx0.34$ the branch of the (then unstable) resting crystals ends in a subcritical bifurcation.}
As discussed above, the two phase boundaries in Fig.~\ref{fig:phasediagram}(a) between the liquid phase and, respectively, \lo{stable resting and stable traveling crystals} collide in point~II. From there, the boundary between fully nonlinear resting and traveling crystals continues nearly vertically upwards (blue dashed line). In the nonlinear regime, this separating line cannot be determined by the present linear considerations and is obtained by numerical continuation. The resulting dashed blue line marks the onset of crystal motion and confirms Ref.~\cite{ChGT2016el}, where a similar straight line in a different parameter plane was deduced from direct time simulations. \lo{Note that to the right of the vertical line there is a region where resting crystals still exist as unstable steady states.}
\lo{Comparing the velocity $c_\mathrm{lin}$ of the dominant linear mode and the fully nonlinearly determined drift velocity $c$ allows us to assess how well the linear analysis performs. Fig.~\ref{fig:im_vs_c}(a) shows that close to but above the liquid-solid boundary at $\bar{\psi}=-0.67$ (Fig.~\ref{fig:phasediagram}), the linear (dashed black line) and the fully nonlinear results (dot-dashed orange line) coincide in the onset of motion and in the drift velocity over the entire $v_0$-range. However, in the nonlinear regime at $\bar{\psi}=-0.5$, Fig.~\ref{fig:im_vs_c}(b) shows that there is a considerable offset in the onset of motion. Yet, at high activities $v_0$ the linear and nonlinear velocities still converge. The nonlinear drift velocity $c$ corresponds to the branch of traveling crystals shown in Fig.~\ref{fig:crystal_v0} in the next section.}
\begin{figure}
\includegraphics{Figure2.eps}
\caption{ \label{fig:im_vs_c} \lo{Velocity $c_\mathrm{lin}$ of the dominant linear mode (black dashed line) and drift velocity $c$ (orange dot-dashed) of fully nonlinear moving crystals in dependence of activity $v_0$ for (a) $\bar{\psi}=-0.67$ and (b) $\bar{\psi}=-0.5$. Remaining parameters as in Fig.~\ref{fig:phasediagram}. The eigenvalues are obtained from the linear stability analysis of the homogeneous state. The velocity $c$ of the fully nonlinear traveling crystals is determined by numerical continuation. The speed of linear modes $c_\mathrm{lin}$ corresponds to Im$(\lambda)/k$, i.e., $c_\mathrm{lin} =\mathrm{Im}(\lambda)$ for $k=1$.
(a) In the linear regime close to the onset of crystallization $c_\mathrm{lin}$ and $c$ coincide. (b) Deep in the unstable regime, $c_\mathrm{lin}$ does not provide a suitable approximation for the onset of motion of the crystal. However, $c_\mathrm{lin}$ and $c$ approach each other at high $v_0$.}}
\end{figure}
\section{Crystalline states}
\label{sec:level1per}
\begin{figure}
\includegraphics{Figure3.eps}
\caption{\label{fig:crystal_v0}Resting and traveling crystals as a function of activity $v_0$ in the one-dimensional aPFC model. (a) The solution profiles of the periodic crystalline states are characterized by the $L^2$-norms of $\psi$, \lo{$||\psi||_2=\sqrt{\frac{1}{L} \int_0^L \psi^2 \mathrm{d}x}$}, and $P$ (inset). Branches of resting structures are shown in dashed gray, while traveling crystals are in dot-dashed orange. At a critical value of $v_0\approx0.15$, the resting crystal is destabilized and starts to move. The spatial periodicity remains unchanged. (b) depicts sections (three periods $\ell$) of the profiles of the structures at the points indicated by roman numbers in (a). Crystals I and II are close to the onset of motion. Profile III shows an active crystal at a high activity of $v_0=10.0$, beyond the range of (a). The drift velocity $c$ of the moving crystals increases monotonically with $v_0$ as \lo{shown in Fig.~\ref{fig:im_vs_c}(b). Note that the phase difference} between $\psi$ and $P$ changes when varying $v_0$, highlighted by vertical lines. $\bar{\psi}=-0.5, L=100$, remaining parameters are as in Fig.~\ref{fig:phasediagram}.}
\end{figure}
In the standard PFC model (Eq.~(\ref{eq:steadystatePSI}) with $v_0=0$), at sufficient distance from the critical point ($\epsilon$ sufficiently negative or $|\bar{\psi}|$ sufficiently low) the transition from the liquid state (homogeneous solution) to a crystalline state (periodic solution) corresponds to a first-order liquid-solid phase transition with a parameter region, limited by the binodal lines, where the two states coexist \cite{TARG2013pre}. As $\psi$ is a conserved quantity, this does not automatically imply that one has a subcritical bifurcation from the homogeneous to the periodic solution branch. For a detailed discussion of this intricate point see the conclusion of Ref.~\cite{TARG2013pre}.
Here, as the aPFC model is non-variational, the transition between the states no longer corresponds to a thermodynamic phase transition, i.e., arguments based on the free energy no longer hold. Furthermore, now also the activity $v_0$ may be used to induce the transition. In particular, for the parameters of Fig.~\ref{fig:phasediagram}, at $\bar{\psi}$ approximately between $-0.71$ and $-0.67$, increasing $v_0$ beyond the solid line melts the resting crystal.
More striking is the behavior at higher densities (in Fig.~\ref{fig:phasediagram}(a), for $\bar{\psi}$ above $-0.67$). As illustrated in the bifurcation diagram Fig.~\ref{fig:crystal_v0}, there, increasing $v_0$ does not destroy the resting crystal but results in the onset of motion at a critical activity $v_c\approx0.15$ (corresponding to the vertical dashed line in Fig.~\ref{fig:phasediagram}(a)), \lo{i.e., in a transition from a stable resting to a stable traveling crystal.}
Specifically, for the resting crystals Fig.~\ref{fig:crystal_v0}(a) shows that with increasing activity the norm of $\psi$ monotonically decreases while, in contrast, the amplitude of the polarization field (see inset) first increases from zero (at $v_0=0$) until at some $v_0=v_c$ its norm equals that of $\psi$. There the branch of traveling crystals bifurcates and the resting crystals become unstable and ultimately cease to exist \lo{(after further undergoing a Hopf bifurcation)} at about $v_0=0.34$, where the branch ends in a subcritical pitchfork bifurcation on the branch of homogeneous states. \lo{As mentioned in section~\ref{sec:level1lin}, this bifurcation corresponds to point~IV in Fig.~\ref{fig:phasediagram}. There, a double real eigenvalue of the linear stability problem of the liquid state crosses zero, indicating a bifurcation of the uniform state. The mentioned unstable steady and oscillatory states will be discussed elsewhere.}
At $v_c$, a drift-pitchfork bifurcation \cite{Friedrich2005} occurs, i.e., a real eigenvalue crosses zero (see stability analysis in section~\ref{subsec:linstab}) and two branches of moving periodic states (i.e., traveling crystals) emerge from the branch of resting crystals. An analytical condition for the drift bifurcations is derived in section~\ref{sec:level1drift}.
The two bifurcating branches with the same norm are related by the symmetry $(\psi,P,x,c)\to (\psi,-P,-x,-c)$ and the velocity close to the bifurcation is $c\propto(v_0-v_c)^{1/2}$. The individual solutions on the emerging branches no longer have the symmetry $(\psi,P,x)\to (\psi,-P,-x)$ of the resting crystal states (i.e., zero crossings of $P$ no longer coincide with the positions of the peak maxima of $\psi$). Instead, for the traveling crystals the individually practically unchanged $\psi(x)$ and $P(x)$ profiles are shifted w.r.t.\ each other. The profiles keep their spatial periodicity and always move with a constant drift velocity. This velocity and the size of the phase shift between the $\psi$ and $P$ profiles increase monotonically with $v_0>v_c$, also far away from the bifurcation. Indeed, for $v_0\gg1$ one finds $c\approx v_0$ and $\psi(x) \approx P(x)$. Typical density and polarization profiles are given in Fig.~\ref{fig:crystal_v0}(b).
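The square-root law for the velocity follows from the generic normal form of a drift-pitchfork bifurcation; the coefficients $a,b>0$ below are not computed here and merely illustrate the scaling:
```latex
\dot c = a\,(v_0 - v_c)\,c - b\,c^{3}, \qquad a,b>0,
\quad\Rightarrow\quad
c = \pm\sqrt{a/b}\,(v_0 - v_c)^{1/2}
\quad \text{for } v_0 > v_c .
```
The two signs correspond to the two counter-propagating branches related by $(\psi,P,x,c)\to(\psi,-P,-x,-c)$.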
\section{\label{sec:level1loc}Localized states}
As for the passive PFC model, where the described phase transition between the liquid and the crystal state is of first order for sufficiently negative $\epsilon$, one finds that in the transition region patches of the liquid and the crystal state may coexist. In the PFC model this corresponds to the existence of a broad variety of spatially localized states (or crystallites) that in 1d were numerically analyzed in Refs.~\cite{RATK2012pre,TARG2013pre} (for selected 2d results see \cite{EGUW2018springer}). Next we systematically explore how the bifurcation structure of these crystallites is modified by activity, employing Eqs.~(\ref{eq:steadystatePSI}) and (\ref{eq:steadystateP}). We investigate if and to what extent the phenomenon of slanted homoclinic snaking \cite{SandstedeSnakes} is changed by finite values of activity. Do traveling localized states arise due to self-propulsion? Can motion also be induced by changes in the mean concentration?
Following former works, we classify the localized states according to their spatial symmetry \cite{BurkeKnoblochLSgenSHe,TARG2013pre} and their drift velocity \cite{MenzelLoewen}. There are two kinds of resting localized states (RLS) that have a parity (left-right) symmetry in the $\psi$-component and an inversion-symmetric polarization: $(\psi(x),P(x))=(\psi(-x),-P(-x))$. The symmetric localized patches can either have a $\psi$-peak or $\psi$-trough at the center, resulting in an odd or even number of peaks, respectively. We call them ``odd states'' ($\mathrm{RLS_{\mathrm{odd}}}$) and ``even states'' ($\mathrm{RLS_{\mathrm{even}}}$). Beside spatially symmetric states, resting asymmetric localized states exist that have no parity symmetry. We refer to them as $\mathrm{RLS_{\mathrm{asym}}}$. In the PFC model, the RLS states form an intricate tilted snakes-and-ladders structure \cite{TARG2013pre}.
Traveling localized states have a nonzero drift velocity and are called TLS. For TLS, the above symmetries in $\psi$ and $P$ are not preserved.
\begin{figure}
\includegraphics{Figure4.eps}
\caption{\label{fig:snaking} Slanted homoclinic snaking of resting symmetric steady states (drift velocity $c=0$). Shown is the $L^2$-norm of the steady states in dependence of the mean concentration $\bar{\psi}$. The active drive is fixed at $v_0=0.16475$. The steady localized states bifurcate subcritically from the periodic solution with $n=16$ peaks (dashed gray line). The light (dark) blue line represents resting localized structures with a peak (trough) in the middle, $\mathrm{RLS_{\mathrm{odd}}}$ ($\mathrm{RLS_{\mathrm{even}}}$). Both lines ultimately terminate on the $n=16$ periodic state. Beside the spatially extended crystal with $n=16$ peaks, there are solutions with $n=15$ and $n=17$ peaks (dashed green lines). Remaining parameters as in Fig.~\ref{fig:crystal_v0}.}
\end{figure}
\begin{figure}
\includegraphics{Figure5.eps}
\caption{\label{fig:firstladder}Resting and traveling localized states as a function of the mean concentration $\bar{\psi}$. The localized states are created in a subcritical bifurcation and branch off from the $n=16$ periodic solution branch (dashed gray, more periodic branches in dashed green). Light and dark blue lines: $\mathrm{RLS_{\mathrm{odd}}}$ and $\mathrm{RLS_{\mathrm{even}}}$. The ladder branch (dashed black line) corresponding to asymmetric states connects the two symmetric RLS. Beside snaking branches and the ladder rungs, we find traveling localized states (TLS, dot-dashed orange) that arise due to activity. Remaining parameters as in Fig.~\ref{fig:snaking}.}
\end{figure}
\begin{figure}
\includegraphics{Figure6.eps}
\caption{\label{fig:snake-ladder}Tilted snakes-and-ladders structure for finite active drive. The light (dark) blue line represents odd (even) symmetric localized structures. The dashed black lines correspond to asymmetric localized states. Because of the active drive above $v_c$ there exist traveling states (TLS, dot-dashed orange line) that emerge in various drift bifurcations. The shown branches of TLS have 5, 6 and 7 peaks in $\psi$. Remaining parameters as in Fig.~\ref{fig:snaking}.}
\end{figure}
\begin{figure}
\includegraphics{Figure7.eps}
\caption{\label{fig:solutions}Typical density and polarization profiles of localized states for $\bar{\psi}=-0.75$ and various values of activity $v_0$ (rounded value given in each panel). Blue colors indicate symmetric RLS. Two states with an odd number of peaks are followed by an even RLS (top, from left to right). An asymmetric resting state is plotted in black. The profiles in red are traveling localized states. Their profile is slightly asymmetric, too. Note that the integral over $\psi$ vanishes, as it only describes the modulation around $\bar{\psi}$. Remaining parameters as in Fig.~\ref{fig:crystal_v0}.}
\end{figure}
\subsection{Bifurcation diagrams}
Figure \ref{fig:snaking} gives the bifurcation diagram for periodic and localized states of the aPFC model for fixed finite activity $v_0\approx0.16>v_c$ employing the mean density $\bar\psi$ as control parameter. It illustrates the main characteristics of the resting crystallites (steady localized states) and their snaking path towards a spatially extended crystal that fills the whole considered domain. The appearance of the bifurcation diagram is similar to the one obtained for the conserved Swift-Hohenberg equation (passive PFC) \cite{TARG2013pre}, note, in particular, the slanted snaking that also occurs for other systems with conserved quantities \cite{BoCR2008pre,Dawe2008sjads,LoBK2011jfm,TARG2013pre}. The liquid state with solution measure $||\underline{\psi},\underline{P}||_2=0$ is destabilized when $\bar{\psi}$ is increased above a critical mean concentration $\bar\psi_c\approx-0.66$\lo{, coinciding with point II in Fig.~\ref{fig:phasediagram}(a)}. For the employed domain size of $L=100$, three branches of periodic states bifurcate from the uniform state. The dashed gray and dashed green lines correspond to periodic structures with $n=15,16$ and 17 $\psi$-peaks. Slightly beyond the primary bifurcation, the periodic state with $n=16$ is destabilized and two branches (light and dark blue) emerge in a subcritical secondary bifurcation. Fig.~\ref{fig:firstladder} gives a zoom of this region. The two branches correspond to the resting odd and even localized states, respectively. Both branches undergo a series of saddle-node bifurcations where their stabilities change (cf.~Fig.~\ref{fig:snake-ladder} and subsection~\ref{subsec:linstab}). The odd and the even branch of symmetric RLS are connected by many branches of asymmetric RLS that are given in Figs.~\ref{fig:firstladder} and \ref{fig:snake-ladder} as dashed black lines, but are not included in Fig.~\ref{fig:snaking}.
Each pair of saddle-node bifurcations adds a couple of peaks to the localized crystalline patch that, in consequence, enlarges until ultimately the whole domain is filled with the crystalline state and the branches of localized states terminate on the $n=16$ branch of periodic states.
Because of the conserved character of the density $\psi$, the density of the coexisting uniform state is not constant but changes with the increasing size of the crystalline patch. This results in the slanted snaking structure, i.e., the loci of subsequent saddle-node bifurcations do not form straight vertical lines in Fig.~\ref{fig:snaking} but drift towards larger $\bar\psi$. Increasing the domain size adds more ``undulations'' to the slanted snaking structure and the relative tilt between subsequent saddle-node bifurcations becomes smaller, however, without changing the overall slant.
A qualitatively new feature of the solution structure of the aPFC model are the branches of traveling localized states (TLS) shown as dot-dashed orange lines in Figs.~\ref{fig:firstladder} and \ref{fig:snake-ladder}. The TLS drift with a constant velocity $c$. Their $\psi$ profiles look quite similar to those of the RLS, though the left-right symmetry is broken. Crossing the onset of motion, the $P$ profile loses its inversion symmetry and approaches the phase and shape of $\psi$. Typical profiles of RLS and TLS are presented in Fig.~\ref{fig:solutions}. The branches of TLS bifurcate in drift-transcritical bifurcations from the branches of asymmetric RLS and in drift-pitchfork bifurcations from the branches of symmetric RLS. An analytical condition for the detection of the drift bifurcations is derived in section~\ref{sec:level1drift}. This criterion holds for both types of drift bifurcations.
The branches of TLS connect the snaking branches of symmetric RLS like rungs. They may connect two sub-branches of the same symmetry like the two lower orange branches in Fig.~\ref{fig:snake-ladder} as well as branches of $\mathrm{RLS_{\mathrm{odd}}}$ and $\mathrm{RLS_{\mathrm{even}}}$ like the orange branch with the highest norm in Fig.~\ref{fig:snake-ladder}. TLS of small extension (one or two peaks, i.e., the ones in Fig.~\ref{fig:firstladder}) exist in a broad range of mean density $\bar{\psi}$. Because of their similar profiles, the norm of RLS and TLS is almost equal and the branches seem to nearly coincide in the lower part of Fig.~\ref{fig:firstladder}.
\begin{figure*}
\includegraphics{Figure8.eps}
\caption{\label{fig:bifv0} Bifurcation diagram of resting and traveling localized states giving the
$L^2$-norm as a function of the active drive $v_0$. The mean concentration is fixed at $\bar{\psi}=-0.75$. Resting solutions are indicated by blue (left-right symmetric states) and dashed black (asymmetric states) lines, moving states are dot-dashed red and orange. The traveling single peak exists up to high values of $v_0\approx1.6$. The remaining parameters are $\epsilon=-1.5$, $C_1=0.1$ and $D_\mathrm{r}=0.5$. Inset: Velocity $|c|$ of the traveling single peak as a function of $v_0$. At a critical value of $v_0=v_c$ (vertical line) the transition from a resting to a traveling linearly stable state occurs. Black dots (red dashed lines) give the results of direct numerical simulations (numerical continuation). The moving state corresponds to the long finger in the large panel. Its upper half is stable (right orange branch in inset). $v_c$ in the main plot and the vertical black line in the inset mark the onset of motion as calculated semi-analytically for the single peak (cf.~section \ref{sec:level1drift}).}
\end{figure*}
Similar to the case of periodic states, for RLS an increase of the activity $v_0$ at fixed $\bar\psi$ may also result in a transition to TLS. Fig.~\ref{fig:bifv0} gives a typical example of a bifurcation diagram for $\bar{\psi}=-0.75$. The threshold value for the onset of motion differs slightly among the various RLS (inset of Fig.~\ref{fig:bifv0}). All discussed TLS have density and polarization profiles that are steady in the corresponding comoving frames.
Recall that the onset of motion coincides with a symmetry breaking related to a phase shift between the density and the polar ordering profiles. The density peaks are shifted away from the zeros of $P$, resulting in a nonzero value when integrating $\psi$ times $P$ over the width of a peak. Above the critical activity the left-right symmetry of the density profile is also broken, as is the inversion symmetry of the polarization. As described above and shown in Fig.~\ref{fig:solutions}, at large $v_0$ the $P$ profile approaches the position and shape of $\psi$. In fact, the norms of $\psi$ and $P$ are equal for traveling structures.
Besides path continuation we also employ direct time simulations of Eqs.~(\ref{eq:dtpsi}) and (\ref{eq:dtP}) to investigate the TLS. In particular, we track the traveling single density peak over time and determine its velocity. This confirms the continuation results, as shown in the inset of Fig.~\ref{fig:bifv0}. The two orange dot-dashed lines in the inset correspond to the long nose of the traveling single peak in the main panel. The upper branch of this nose is stable, losing its stability at the fold at $v_0 \approx 1.6$. The lower branch is unstable and corresponds to the left orange branch in the inset. Its onset of motion lies at a slightly smaller value of $v_0$ than that of the stable one. For the particular value of the mean concentration $\bar{\psi}$ shown in Fig.~\ref{fig:bifv0}, localized states consisting of more than one peak appear to exist only in a fairly narrow range of $v_0$ around $v_c$. The dot-dashed red line in Fig.~\ref{fig:bifv0}, which corresponds to broader TLS with a few peaks, wiggles about an almost vertical line before terminating on the blue branch of four connected resting peaks. The region of existence of the TLS is studied via fold continuation in the next section. Note that the velocities of all these different traveling structures are very similar.
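The peak-tracking step can be sketched in a few lines of Python. The following minimal example (the synthetic traveling bump, grid resolution, and function names are illustrative assumptions, not the actual numerical setup behind Fig.~\ref{fig:bifv0}) locates the density maximum in each snapshot, unwraps jumps across the periodic boundary, and fits a straight line to the positions to obtain $c$:

```python
import numpy as np

def track_velocity(snapshots, times, L):
    """Estimate the drift velocity c from the peak positions of a series of
    density snapshots on a periodic domain of length L.

    snapshots: array of shape (n_times, N), one profile psi(x, t_i) per row.
    """
    N = snapshots.shape[1]
    x = np.arange(N) * L / N
    # peak position in each snapshot (quantized to the grid spacing)
    pos = x[np.argmax(snapshots, axis=1)]
    # remove artificial jumps when the peak crosses the periodic boundary
    pos = np.unwrap(pos, period=L)
    # least-squares fit of position vs. time gives the velocity
    return np.polyfit(times, pos, 1)[0]

# synthetic check: a bump traveling at c = 0.37 on a domain of length 10
L, N, c_true = 10.0, 200, 0.37
x = np.arange(N) * L / N
times = np.arange(10.0)
snaps = np.array([np.exp(-(((x - 2.0 - c_true * t + L / 2) % L) - L / 2)**2)
                  for t in times])
c_est = track_velocity(snaps, times, L)
```

The accuracy of the fitted velocity is limited by the grid quantization of the argmax position; a subgrid interpolation of the maximum would reduce this error.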
\subsection{Fold continuation}
\begin{figure*}
\includegraphics{Figure9.eps}
\caption{\label{fig:fold-cont}(a) Two parameter continuation of the loci of the drift bifurcations (blue solid lines) and of the saddle-node bifurcations (orange dot-dashed lines) of the one- and two-peak TLS. (b) and (c) Corresponding one parameter bifurcation diagrams at fixed values of $\bar{\psi}$ marked by gray horizontal lines in (a). Blue branches correspond to RLS, and dot-dashed orange branches correspond to TLS. The drift bifurcations are marked by circles, and the saddle-node bifurcations are indicated by orange symbols. For increasing mean concentrations the interval of $v_0$ in which moving LS exist (onset of motion up to fold) grows and ultimately the activity value at the fold diverges, i.e., TLS exist for arbitrarily high activities.}
\end{figure*}
A two-parameter continuation allows one to track the loci of various bifurcation points in a two-parameter plane \cite{DoKK1991ijbc}. Here, we follow, in the plane spanned by the activity $v_0$ and the mean density $\bar\psi$, the loci of (i) the saddle-node bifurcations that mark the points where stable and unstable one-peak and two-peak TLS annihilate and (ii) the drift bifurcations where TLS emerge from RLS. This allows us to determine the region of existence of these localized states in the $(v_{0},\bar{\psi})$-plane.
The result is displayed in Fig.~\ref{fig:fold-cont}(a), where drift and saddle-node bifurcations are marked by blue solid lines and orange dot-dashed lines, respectively. The plot has to be interpreted carefully as the various bifurcations can be located on different branches in the bifurcation diagrams. To facilitate this, we have marked the two values $\bar{\psi}=-0.71$ and $\bar{\psi}=-0.78$ by horizontal gray lines and provide the corresponding one-parameter bifurcation diagrams as Fig.~\ref{fig:fold-cont}(b) and (c) (also cf.~Fig.~\ref{fig:bifv0}), where the bifurcation points are highlighted by symbols that also mark the fold continuation lines in (a).
Fig.~\ref{fig:fold-cont} proves that traveling localized states are generic solutions of the active PFC model as they occur in an extended region of the parameter plane.
In fact, the values of $v_0$ at the saddle-node bifurcations that limit their existence diverge at $\bar{\psi}=-0.74$ and $\bar{\psi}=-0.69$ for one- and two-peak TLS, respectively. We numerically follow their position up to $v_0\gtrsim10^3$. Note that for $\bar{\psi}=-0.71$ the fold of the one-peak TLS has already moved far outside of the displayed $v_0$-interval. At this $\bar{\psi}$, the two-peak TLS exists up to $v_0\approx0.38$ while at $\bar{\psi}=-0.78$ its range of existence is smaller. All drift bifurcations are quite close to $v_0=0.15$ with only small variations between different localized states and with $\bar{\psi}$. This makes an interpretation of the corresponding diagram region challenging.
Roughly speaking, one-peak [two-peak] TLS exist in the lower part of Fig.~\ref{fig:fold-cont}(a) in the area between the nearly vertical blue solid lines and the dot-dashed line marked by the filled circle [square] while in the upper part of Fig.~\ref{fig:fold-cont}(a) they exist in the area between the dot-dashed line marked by the filled triangle and the one marked by the filled circle [square]. Remember that in (b) the filled circle has left the displayed interval of $v_0$. The uppermost unmarked dot-dashed line in Fig.~\ref{fig:fold-cont}(a) is related to three-peak TLS and will be further discussed elsewhere.
\subsection{\label{subsec:linstab}Linear stability}
\begin{figure}
\includegraphics{Figure10.eps}
\caption{\label{fig:EW} Black lines: Real part of eigenvalues obtained from numerical LSA with a finite difference method. The black dashed [dotted] lines indicate complex [real] eigenvalues. Orange line: Real eigenvalue from continuation. As expected, two neutrally stable modes with Re($\lambda$) = 0 are found (translation mode and volume mode). One mode is destabilized at $v_c \approx 0.161$, the detected onset of motion. At $v_c$ the mode coincides with the spatial derivative of the localized state and corresponds to a translation.}
\end{figure}
\begin{figure*}
\includegraphics{Figure11.eps}
\caption{\label{fig:stability}Linear stability of localized states. Light and dark blue lines: $\mathrm{RLS_{\mathrm{odd}}}$ and $\mathrm{RLS_{\mathrm{even}}}$. Dashed black: Asymmetric RLS. Dot-dashed orange: TLS. Stable steady states are indicated by $-$ signs and plotted as heavy lines. For unstable states, the number of $+$ signs gives the number of unstable eigenmodes with $\mathrm{Re}(\lambda)>0$. (a) Continuation in $v_0$ at $\bar{\psi}=-0.75$. Symmetric RLS lose their stability in drift bifurcations at $v_0\approx0.16$ and TLS become stable. Asymmetric RLS are always unstable. (b) Continuation in $\bar{\psi}$ at $v_0=0.16475$. LS are created in a subcritical bifurcation, branching off from the periodic branch (dashed gray, more periodic branches in dashed green). Note that the cut through (a) at the $v_0$ value of (b) and the cut through (b) at the $\bar{\psi}$ value of (a) correspond to each other.}
\end{figure*}
Up to here we have discussed bifurcation diagrams and the existence of solutions. Although general knowledge about bifurcations allows one to develop quite a good idea of the stability of the various solutions, it is important to determine the linear stability explicitly. The obtained detailed information then permits us to predict which states can persist in experiments or direct numerical simulations (the linearly stable states) and which states may only appear as (possibly long-lived) transients. The latter are the unstable states that represent saddles in function space: they may first attract the time evolution and then repel it along well-defined directions given by the eigenfunctions of the unstable eigenvalues.
For the analysis, Eqs.~(\ref{eq:dtpsi}) and (\ref{eq:dtP}) are linearized in small perturbations $\delta\psi$ and $\delta P$ about a one-dimensional steady state $\left(\psi_{0}(x),P_{0}(x)\right)^T$ to obtain
\begin{align}
\partial_{t}\delta\psi &= \partial_{xx}\left(\epsilon+3(\bar{\psi}+\psi_{\mathrm{0}})^{2}+\left(1+\partial_{xx}\right)^{2}\right)\delta\psi-v_{0}\partial_{x}\delta P, \label{eq: lin dt psi} \\
\partial_{t}\delta P &= \left(\partial_{xx}-D_{\mathrm{r}}\right)\left(C_{1}+3\,C_{2}P_{0}^{2}\right)\delta P-v_{0}\partial_{x}\delta\psi. \label{eq: lin dt P}
\end{align}
In the case of uniformly moving states $\left(\psi_{0},P_{0}\right)^T=\left(\psi_{0}(x+ct),P_{0}(x+ct)\right)^T$, the comoving-frame terms $c\,\partial_{x}\delta\psi$ and $c\,\partial_{x}\delta P$ are added to the respective right-hand sides.
Assuming exponential growth of the perturbation, i.e., $\delta\psi=\psi_1\exp(\lambda t)$ and $\delta P=P_1\exp(\lambda t)$ we have to solve
the linear eigenvalue problem:
\begin{align}
\mathcal{L}\left(\psi_{0},P_{0}\right)\left(\begin{array}{c}
\psi_1\\
P_1
\end{array}\right)=\lambda\left(\begin{array}{c}
\psi_1\\
P_1 \label{eq:linprob}
\end{array}\right),
\end{align}
where the linear operator $\mathcal{L}\left(\psi_{0},P_{0}\right)$ is defined by the right-hand side of Eqs.~(\ref{eq: lin dt psi}) and (\ref{eq: lin dt P}) [it is explicitly given below in Eq.~(\ref{eq: linoperator})].
An analytical solution of the linear problem is out of reach, because already the steady states $\psi_{0}(x)$ and $P_{0}(x)$ are only known from numerical continuation. Instead, we discretize the steady states on an equidistant spatial grid, i.e., employ a finite difference method to transform (\ref{eq:linprob}) into a standard linear algebraic eigenvalue problem that we solve employing standard numerical routines.
The black lines in Fig.~\ref{fig:EW} give an example of a calculated eigenvalue spectrum as a function of the activity. Shown are the real parts of the ten leading eigenvalues for the branch of one-peak RLS that in Fig.~\ref{fig:stability} is stable at $v_0 =0.1$. The dotted lines indicate purely real eigenvalues whereas the three dashed lines indicate complex eigenvalues. The largest eigenvalue is real and crosses zero at a critical activity of $v_c\approx0.161$ where the drift-pitchfork bifurcation occurs, as discussed in detail in section~\ref{sec:level1drift}. The obtained $v_c$ agrees well with the value at which the numerical continuation of the one-peak TLS detects the drift-pitchfork bifurcation (as a fold). Note that in the discretized eigenvalue problem the zero crossing has to be obtained by extrapolation, since the relevant eigenvalue 'interacts' with one of the two zero eigenvalues, thereby 'blurring' the crossing. This is related to level repulsion or avoided crossing (von Neumann-Wigner theorem~\cite{NWT1929}). To circumvent the blurred zero crossing, we also solve Eq.~(\ref{eq:linprob}) by numerical continuation \cite{cenosTutorialLindrop}. The eigenvalue obtained in this way is given by the orange line in Fig.~\ref{fig:EW}. It confirms the finite-difference calculations and perfectly matches $v_c$.
Two zero eigenvalues exist for all $v_0$ and represent neutrally stable modes related to the symmetries of the model. Since Eqs.~(\ref{eq:dtpsi}) and (\ref{eq:dtP}) are invariant with respect to translation, one of them is the neutral translation mode, often called the Goldstone mode of translational symmetry. In addition, an infinitesimal change in the mean concentration $\bar{\psi}$ also results in another solution of the equations, i.e., the second zero eigenvalue belongs to a neutral volume mode, the Goldstone mode of the symmetry with respect to mass change.
Computing the destabilized eigenfunction shows that at $v_c$ it matches the spatial derivative of the investigated localized peak. The derivative corresponds to an infinitesimal shift of the position of the peak and, therefore, to the Goldstone mode of translational symmetry. This indicates that the onset of motion is indeed due to a drift bifurcation.
A typical result of a systematic stability analysis is shown in Fig.~\ref{fig:stability}, where (a) represents an enlargement of a part of the bifurcation diagram in Fig.~\ref{fig:bifv0} and (b) shows the lowest part of the snakes-and-ladders structure. The branches of linearly stable and unstable states are indicated by ``-'' and ``+'' signs, respectively. The number of ``+'' signs gives the number of unstable eigenmodes.
Linearly stable states are represented by heavy lines, indicating that in (a) in the considered parameter range one has stable one- and two-peak RLS and TLS with regions of multistability of (i) one- and two-peak RLS at low activity, (ii) one- and two-peak TLS at slightly larger activity and in between (iii) a very small region where one-peak TLS and two-peak RLS are both linearly stable. In the considered case all the eigenvalues that cross the imaginary axis are real, although stable complex eigenvalues do occur (see dashed lines in Fig.~\ref{fig:EW}). Note that Fig.~\ref{fig:stability}(a) shows more bifurcations than are followed in Fig.~\ref{fig:fold-cont}(a).
Studying Figs.~\ref{fig:stability}(b) and \ref{fig:snake-ladder} in detail, one finds that, despite the similar shape of the snake and ladder, the stability of the RLS differs from the one found for the PFC model \cite{TARG2013pre}: there, the symmetric RLS change their stability as the branches snake along, while the asymmetric RLS are always unstable. Here, in contrast, the stable symmetric RLS are destabilized already before the saddle-node bifurcation is reached, namely where the TLS emerge at the drift-pitchfork bifurcation, i.e., their range of linear stability is diminished. Since $v_0=0.16475>v_c$ in (b), most of the resting branches are unstable. At a drift-transcritical bifurcation the asymmetric RLS also acquire an additional unstable mode as compared to the PFC case. For activities below $v_c$ the picture is very similar to the one of the passive PFC model.
\begin{figure}
\includegraphics{Figure12.eps}
\caption{\label{fig:transcritical_bif}Drift-transcritical bifurcation. Enlargement of Fig.~\ref{fig:stability}(a). The asymmetric RLS (dashed black, +) acquires an additional unstable mode (++) in a drift-transcritical bifurcation. The moving double bump (dot-dashed red line) changes its stability in the transcritical bifurcation and at the nearby fold. All shown branches are linearly unstable.}
\end{figure}
Figure \ref{fig:transcritical_bif} enlarges a detail of Fig.~\ref{fig:stability}(a): the drift-transcritical bifurcation, where moving states branch off the asymmetric resting state composed of two density peaks of different height. As the resting state itself is already asymmetric, the two sub-branches emerging at the drift bifurcation are not related to each other by symmetry but intrinsically differ. Hence, in this case the creation of the TLS corresponds to a drift-transcritical bifurcation, different from the drift-pitchfork bifurcations in which the symmetric RLS lose their stability. The transcritical bifurcation does not coincide with the fold of the (red) TLS branch, whose stability changes twice close to the drift bifurcation. Accordingly, in Fig.~\ref{fig:stability}(a) the two sub-branches of TLS seem to have the same stability before and after crossing the resting asymmetric state. There is another drift-transcritical bifurcation on the asymmetric branch in Fig.~\ref{fig:stability}(b).
\section{\label{sec:level1drift}Onset of Motion - the Drift Instability}
Next we discuss the numerically found drift bifurcations in more detail and derive a simple analytic condition that allows one to detect drift bifurcations for a class of models that includes the aPFC model. The analytical criterion for the onset of motion is valid for both the encountered drift-pitchfork and drift-transcritical bifurcations.
\subsection{Velocity expansion}
We consider the one-dimensional version of the model (\ref{eq:dtpsi}) and (\ref{eq:dtP}) in a comoving frame with coordinate $x'=x+ct$, time $t$ and velocity $c$. We use $\left(\psi_{0}(x),P_{0}(x)\right)^T$ to denote a steady solution, i.e., with $c=0$. Assuming there are only small corrections
$(\tilde{\psi}_i,\tilde{P}_i)^T$ to the steady state
when changing parameters close to the drift bifurcation, we introduce a velocity expansion
\begin{align}
\psi &= \psi_{0}(x)+c\left[\tilde{\psi}_{1}(x)+c\tilde{\psi}_{2}(x)+c^{2}\tilde{\psi}_{3}(x)+\ldots\right],\\
P &= P_{0}(x)+c\left[\tilde{P}_{1}(x)+c\tilde{P}_{2}(x)+c^{2}\tilde{P}_{3}(x)+\ldots\right].\nonumber
\end{align}
Inserting the expansions (up to order $c^{2}$) into the dynamic equations (Eqs.~(\ref{eq:dtpsi}) and (\ref{eq:dtP}))
leads to
\begin{widetext}
\begin{align}
c\,\partial_{x}\left(\psi_{0}+c\tilde{\psi}_{1}+c^{2}\tilde{\psi}_{2}\right) =& \partial_{xx}\left[\left(\epsilon+\left(1+\partial_{xx}\right)^{2}\right)\left(\psi_{0}+c\tilde{\psi}_{1}+c^{2}\tilde{\psi}_{2}\right)+\left(\bar{\psi}+\psi_{0}+c\tilde{\psi}_{1}+c^{2}\tilde{\psi}_{2}\right)^{3}\right]\nonumber \\
& -v_{0}\partial_{x}\left(P_{0}+c\tilde{P}_{1}+c^{2}\tilde{P}_{2}\right)\\
c\,\partial_{x}\left(P_{0}+c\tilde{P}_{1}+c^{2}\tilde{P}_{2}\right) =& \left(\partial_{xx}-D_{\mathrm{r}}\right)\left[C_{1}\left(P_{0}+c\tilde{P}_{1}+c^{2}\tilde{P}_{2}\right)+C_{2}\left(P_{0}+c\tilde{P}_{1}+c^{2}\tilde{P}_{2}\right)^{3}\right]\nonumber \\
& -v_{0}\partial_{x}\left(\psi_{0}+c\tilde{\psi}_{1}+c^{2}\tilde{\psi}_{2}\right).\nonumber
\end{align}
\end{widetext}
\newpage{}
By equating coefficients of $c^{n}$, we find for $c^{0}$
\begin{align}
0 &= \partial_{xx}\left[\left(\epsilon+\left(1+\partial_{xx}\right)^{2}\right)\psi_{0}+\left(\bar{\psi}+\psi_{0}\right)^{3}\right] -v_{0}\partial_{x}P_{0}\\
0 &= \left(\partial_{xx}-D_{\mathrm{r}}\right)\left(C_{1}P_{0}+C_{2}P_{0}^{3}\right)-v_{0}\partial_{x}\psi_{0},\nonumber
\end{align}
i.e., we recover the equations for the resting base state. To linear order in $c$ we obtain
\begin{align}
\partial_{x}\psi_{0} =& \partial_{xx}\left[\left(\epsilon+\left(1+\partial_{xx}\right)^{2}\right)\tilde{\psi}_{1}+3\left(\bar{\psi}+\psi_{0}\right)^{2}\tilde{\psi}_{1}\right]\nonumber\\
& -v_{0}\partial_{x}\tilde{P}_{1} \label{eq:linear in c-1}\\
\partial_{x}P_{0} =& \left(\partial_{xx}-D_{\mathrm{r}}\right)\left(C_{1}\tilde{P}_{1}+3\,C_{2}P_{0}^{2}\tilde{P}_{1}\right)-v_{0}\partial_{x}\tilde{\psi}_{1},
\nonumber
\end{align}
i.e., a linear system for $\tilde{\psi}_{1}$ and $\tilde{P}_{1}$. We write Eqs.~(\ref{eq:linear in c-1}) in matrix form
\begin{align}
\partial_{x}\left(\begin{array}{c}
\psi_{0}\\
P_{0}
\end{array}\right) &= \mathcal{L}(\psi_{0},P_{0})\left(\begin{array}{c}
\tilde{\psi}_{1}\\
\tilde{P}_{1}
\end{array}\right)\label{eq:phi0=00003DL'phi1-1}
\end{align}
with the same linear operator $\mathcal{L}$ already employed in (\ref{eq:linprob}):
\begin{widetext}
\begin{equation}
\mathcal{L}(\psi_{0}(x),P_{0}(x))=\left(\begin{array}{cc}
\partial_{xx}\left[\left(\epsilon+\left(1+\partial_{xx}\right)^{2}\right)+3\left(\bar{\psi}+\psi_{0}(x)\right)^{2}\right] & -v_{0}\partial_{x}\\
-v_{0}\partial_{x} & \left(\partial_{xx}-D_{\mathrm{r}}\right)\left(C_{1}+3\,C_{2}P_{0}(x)^{2}\right)
\end{array}\right)
\label{eq: linoperator}.
\end{equation}
\end{widetext}
In the following, we focus again on the case of a linear equation for $P$ without spontaneous polarization, $C_2=0$. We notice that the top left component of (\ref{eq: linoperator})
\begin{align}
L_{11}(x) &= \partial_{xx}\left[\left(\epsilon+\left(1+\partial_{xx}\right)^{2}\right)+3\left(\bar{\psi}+\psi_{0}(x)\right)^{2}\right]\nonumber \\
&= \partial_{xx}\, L_{\mathrm{SH}}(\psi_{0}(x))
\end{align}
is the product of a Laplacian (due to mass conservation) and the linearized operator from a Swift-Hohenberg equation with cubic nonlinearity. This fact will turn out to be very helpful when forming the adjoint operator $\mathcal{L}^{\dagger}$.
\subsection{Translational symmetry and Goldstone modes}
\lo{Adding the first spatial derivative of the base state to the state itself corresponds to a small shift in its position. Since the aPFC model is translationally invariant,
\begin{align}
\partial_{x}\left(\begin{array}{c}
\psi_{0}\\
P_{0}
\end{array}\right) &= \left(\begin{array}{c}
\mathcal{G}_{1}\\
\mathcal{G}_{2}
\end{array}\right)\equiv\left(\begin{array}{c}
\psi_{\mathcal{G}}\\
P_{\mathcal{G}}
\end{array}\right)
\end{align}
can be identified as a neutral eigenfunction with eigenvalue zero, often referred to as the Goldstone mode $\mathcal{G}$ of the translational symmetry. Thus,
\begin{equation}
\mathcal{L}\,\partial_{x}\left(\begin{array}{c}
\psi_{0}\\
P_{0}
\end{array}\right)=\mathcal{L} \, \mathcal{G}=\mathbf{0}.
\label{eq:goldtrans}
\end{equation}}
A typical destabilization occurs when the real part of an eigenvalue crosses zero as parameters of the system are changed. Here we consider the case that the imaginary part equals zero as well, so that at the bifurcation the corresponding eigenfunction of $\mathcal{L}$ can be expressed as a linear combination of the Goldstone modes. The second Goldstone mode mentioned in Section~\ref{subsec:linstab}, the volume mode, does not take part in the drift bifurcation. At the bifurcation point a real eigenvalue crosses the imaginary axis, i.e., an additional neutral mode exists. As a consequence, the null space of the linear operator is no longer spanned by proper eigenfunctions alone and has to be supplemented by a generalized neutral eigenfunction \cite{driftbif_gurevich}. This function is called the propagator mode $\mathcal{P}$, defined by
\begin{equation}
\mathcal{L}\mathcal{P}=\mathcal{G}.\label{eq:LP=00003DG propagator mode-1}
\end{equation}
It is exactly the occurrence of $\mathcal{P}$ that marks the destabilization, i.e., the onset of motion. Using the Fredholm alternative \cite{evans}, one finds that Eq.~(\ref{eq:LP=00003DG propagator mode-1}) can be solved iff
\begin{equation}
\langle\mathcal{G}^{\dagger}|\mathcal{G}\rangle=0,\label{eq:<G+,G>=00003D0}
\end{equation}
where $\mathcal{G}^{\dagger}$ is the neutral eigenfunction of the adjoint operator $\mathcal{L}^{\dagger}$ with the same spatial symmetry as $\mathcal{G}$. The scalar product $\langle\cdot|\cdot\rangle$ is defined as a full spatial integration over the considered domain. The parameter values for which Eq.~(\ref{eq:<G+,G>=00003D0}) is fulfilled correspond to the bifurcation point.
\subsection{The adjoint linearized operator}
Let $\mathcal{G}^{\dagger}$ be the adjoint neutral eigenfunction,
i.e.,
\begin{equation}
\mathcal{L}^{\dagger}\,\mathcal{G}^{\dagger}=\mathbf{0}.
\label{eq:anef}
\end{equation}
Equation~(\ref{eq:phi0=00003DL'phi1-1}) corresponds to
\begin{equation}
\mathcal{G} = \mathcal{L}\left(\begin{array}{c}
\tilde{\psi}_{1}\\
\tilde{P}_{1}
\end{array}\right)
\end{equation}
showing that $(\tilde{\psi}_{1},\tilde{P}_{1})^T$ is a generalized neutral eigenfunction $\mathcal{P}$. To find $\mathcal{G}^{\dagger}=(\psi_{\mathcal{G}}^{\dagger},P_{\mathcal{G}}^{\dagger})^T$ we determine the adjoint operator
\begin{align}
\mathcal{L}^{\dagger} &= \left(\begin{array}{cc}
L_{\mathrm{SH}}\,\partial_{xx} & v_{0}\partial_{x}\\
v_{0}\partial_{x} & C_{1}\left(\partial_{xx}-D_{\mathrm{r}}\right)
\end{array}\right)
\end{align}
using $(AB)^{\dagger}=B^{\dagger}A^{\dagger}$, the self-adjointness of $\partial_{xx}$
and $L_{\mathrm{SH}}$, the relation $\partial_{x}^{\dagger}=-\partial_{x}$,
and ($v_{0}, C_{1}, D_{\mathrm{r}})\in\mathbb{R}$.
\subsection{Determining the adjoint eigenfunctions}
\lo{
Componentwise the adjoint problem reads
\begin{align}
0 &= L_{\mathrm{SH}}\,\partial_{xx}\psi_{\mathcal{G}}^{\dagger}+ v_0\partial_xP_{\mathcal{G}}^{\dagger} \label{eq:adjoint1}\\
0 &= v_0\partial_x\psi_{\mathcal{G}}^{\dagger} + C_1\left(\partial_{xx}-D_{\mathrm{r}}\right)P_{\mathcal{G}}^{\dagger}\label{eq:adjoint2}
\end{align}
Comparing Eq.~(\ref{eq:adjoint1}) to the steady state equation for $\psi$ (\ref{eq:steadystatePSI}) with $J=0$ and $c=0$ and employing a simple chain rule
\begin{align}
0 &= \partial_x \frac{\delta\mathcal{F}}{\delta\psi}(\psi_0)-v_0P_0 \\
&= L_{\mathrm{SH}}\partial_x\psi_0-v_0P_0
\end{align}
suggests
\begin{align}
\partial_{xx}\psi_{\mathcal{G}}^{\dagger} &= \partial_{x}\psi_{0},\\
\partial_{x}P_{\mathcal{G}}^{\dagger} &= -P_{0}
\end{align}
Integrating yields
\begin{align}
\psi_{\mathcal{G}}^{\dagger}(x)&=\int_{0}^{x}\left(\psi_{0}(x')+\mathrm{C}\right)\mathrm{d}x'+\mathrm{D},\\
P_{\mathcal{G}}^{\dagger}(x)&=-\int_{0}^{x}P_{0}(x')\mathrm{d}x'+\mathrm{F}
\label{eq:integratedtwice}
\end{align}
with constants C, D, F. Eq.~(\ref{eq:adjoint2}) is consistent with this neutral adjoint eigenfunction. Substituting gives
\begin{align}
v_0\psi_0-C_1(\partial_{xx}-D_\mathrm{r})\int P_{0}(x)\mathrm{d}x=\mathrm{const.}
\end{align}
which is true as can be seen by integrating the steady state equation for $P$, Eq.~(\ref{eq:steadystateP}).
}
\begin{figure}
\includegraphics{Figure13.eps}
\caption{\label{fig:fredholm}Onset of motion. (a) $L^2$-Norm of steady states in dependence of $v_0$ for fixed $\bar{\psi}=-0.75$. The blue branch corresponds to a RLS with one bump. The RLS is destabilized at $v_c$ and starts to travel with drift velocity $c$. The traveling odd LS is indicated by the dashed orange branch. (b) Solvability condition Eq.~(\ref{eq:normdiff}) $||\psi_{0}||^2-||P_{0}||^2$ (blue) of the RLS and velocity $|c|$ (dashed orange line) of TLS vs. activity $v_0$ showing perfect agreement of the two approaches.}
\end{figure}
\subsection{Solvability condition}
Collecting all the results, the solvability condition (\ref{eq:<G+,G>=00003D0}) reads
\begin{align}
\langle\mathcal{G}^{\dagger}|\mathcal{G}\rangle &= \langle\psi_{\mathcal{G}}^{\dagger}|\psi_{\mathcal{G}}\rangle+\langle P_{\mathcal{G}}^{\dagger}|P_{\mathcal{G}}\rangle\nonumber \\
&= \langle\int_{0}^{x}\left(\psi_{0}(x')+\mathrm{C}\right)\mathrm{d}x'+\mathrm{D}|\partial_{x}\psi_{0}\rangle \nonumber\\
&\;\;-\; \langle\int_{0}^{x}P_{0}(x')\,\mathrm{d}x'-\mathrm{F}|\partial_{x}P_{0}\rangle \\
&= -\langle\partial_{x}\left(\int_{0}^{x}\left(\psi_{0}(x')+\mathrm{C}\right)\mathrm{d}x'+\mathrm{D}\right)|\psi_{0}\rangle\nonumber \\
&\;\;+\; \langle\partial_{x}\left(\int_{0}^{x}P_{0}(x')\,\mathrm{d}x'-\mathrm{F}\right)|P_{0}\rangle \\
&= -\langle\psi_{0}|\psi_{0}\rangle+\langle P_{0}|P_{0}\rangle=0. \\
\Leftrightarrow0 &= ||\psi_{0}||^{2}-||P_{0}||^{2}\label{eq:normdiff},
\end{align}
where we have employed partial integration and used $\langle\mathrm{C}|\psi_{0}\rangle=\mathrm{C}\int_{0}^{L}\psi_{0}\,\mathrm{d}x=0$, since $\psi$ is the modulation around the fixed mean density. The same holds for the integral over $P_0$, as explained in Section~\ref{sec:level2}. For all TLS we have found that the onset of motion perfectly matches the zero crossing of $||\psi_{0}||^2-||P_{0}||^2$.
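Evaluating this criterion along a numerically continued branch is a simple post-processing step. The following Python sketch (function names and the synthetic branch data are illustrative assumptions) computes the norm difference on a periodic grid and locates its zero crossing by linear interpolation between samples:

```python
import numpy as np

def norm_difference(psi0, P0, dx):
    """Solvability condition ||psi0||^2 - ||P0||^2 on a periodic grid."""
    return dx * (np.sum(psi0**2) - np.sum(P0**2))

def onset_from_branch(v0s, diffs):
    """Zero crossings of the norm difference sampled along a branch of
    resting states, located by linear interpolation between samples."""
    idx = np.where(np.sign(diffs[:-1]) * np.sign(diffs[1:]) < 0)[0]
    return [v0s[i] - diffs[i] * (v0s[i + 1] - v0s[i]) / (diffs[i + 1] - diffs[i])
            for i in idx]

# illustration: a synthetic branch whose norm difference vanishes at v0 = 0.161
v0s = np.linspace(0.10, 0.20, 11)
diffs = v0s - 0.161        # stand-in for ||psi0||^2 - ||P0||^2 along the branch
onsets = onset_from_branch(v0s, diffs)   # one crossing, close to v0 = 0.161
```

In the actual computation the profiles $\psi_0$ and $P_0$ at each continuation step would replace the synthetic data.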
A particular example is given in Fig.~\ref{fig:fredholm}. Panel (a) shows a part of the bifurcation diagram of Fig.~\ref{fig:bifv0}. The solid blue branch corresponds to the RLS with a single density peak that loses its stability in a drift-pitchfork bifurcation at $v_c$. The emerging traveling bump (upper dot-dashed orange line) is linearly stable (cf.~Fig.~\ref{fig:stability}(a); the lower orange branch is unstable). In the lower panel, Fig.~\ref{fig:fredholm}(b), we plot the difference of the squared norms as employed in Eq.~(\ref{eq:normdiff}). In addition, we also display the velocity of the emerging TLS (dot-dashed orange). The two zero crossings of $||\psi_{0}||^2-||P_{0}||^2$ occur at exactly the same values of $v_0$ as the onsets of motion. The second root is due to the lower unstable branch of TLS that bifurcates at a slightly lower activity. Notice that the criterion for the onset of motion, Eq.~(\ref{eq:normdiff}), also holds for the drift-transcritical bifurcation.
\section{\label{sec:level1con}Discussion and conclusions}
We have studied in some detail the bifurcation structure of the active Phase-Field-Crystal model in the one-dimensional case. \lo{After discussing the linear stability of the liquid (homogeneous) state with respect to monotonic and oscillatory modes, we have briefly discussed the existence and stability of stable domain-filling resting and traveling crystalline (periodic) structures. Note that we have not systematically studied unstable domain-filling periodic structures. Our main focus has been on crystallites (crystals of finite extension) that correspond to stable and unstable localized states of different symmetries. We have analyzed how the classical slanted snakes-and-ladders structure (slanted homoclinic snaking) known from the Phase-Field-Crystal model \cite{TARG2013pre} is amended by activity. In particular,} we have shown that upon increasing the activity one finds a critical value for the onset of motion of the various localized states and of the domain-filling crystal. Using the mean concentration $\bar\psi$ as control parameter, we have found that traveling states emerge either through drift-pitchfork bifurcations of the resting parity (left-right) symmetric localized states or through drift-transcritical bifurcations of the resting asymmetric localized states that form the rungs of the snakes-and-ladders bifurcation structure. At the studied parameter values these traveling localized states always occur within the $\bar\psi$-range limited by the snaking branches of resting localized states.
Note that this onset behavior differs from the case of the non-variational Swift-Hohenberg equation studied in Ref.~\cite{HoKn2011pre}. There, at any value of the driving parameter in front of the non-variational term, all asymmetric states drift, and the original pitchfork bifurcations of the variational system either split into two saddle-node bifurcations or become drift-pitchfork bifurcations. Here, in contrast, the coupling of the two fields allows for resting asymmetric states even at finite activity, and moving states emerge through drift bifurcations that are not present (in any form) in the variational limit.
The second investigated main control parameter has been the activity. Here, the general tendency is that an increase in activity suppresses the resting localized and periodic states that ultimately annihilate in saddle-node bifurcations at critical activities that are of a similar magnitude for all studied states. \lo{In other words activity ultimately melts all resting crystalline structures as the driving force overcomes the attractive forces that stabilize the equilibrium crystals and crystallites that exist in the reference system without activity. This corresponds to the melting of equilibrium clusters by activity observed in the Brownian dynamics simulations of Ref.~\cite{ReBH2013pre} for self-propelled particles with short-range attraction.}
However, at activities below this melting point, most branches of resting states show drift bifurcations where branches of traveling states emerge that may exist in a small range of activity or even extend towards infinite activity, as we have shown by numerical two-parameter continuation of the relevant bifurcations. In other words, depending on parameters, although activity may melt traveling crystallites, there are extended parameter regions where this is not the case. \lo{In fact, we have found that although a high activity melts most traveling localized states, i.e., traveling crystalline patches, this is not the case for traveling periodic states, i.e., traveling domain-filling crystals. They can be driven at arbitrarily high activity and then show high velocities. We believe that this is most likely because the periodicity of the domain-filling crystals is fixed, while the traveling localized states naturally adapt their peak spacing; this additional degree of freedom could make them less stable. Note that the found crystallites are unrelated to the motility-induced clusters discussed, e.g., in \cite{Ginot2015prx,SSWK2015prl,CaTa2015arcmp}. The latter effect has not yet been found in an active PFC model, as such models are mainly employed to study how equilibrium crystallization is amended by activity. It should be further investigated whether the model may also describe motility-induced clustering, especially when allowing for spontaneous polarization ($C_2\neq0$).}
Furthermore, we have investigated the region of existence of traveling localized states and have shown that they are generic solutions for extended regions of the plane spanned by mean concentration and activity. Whereas extended traveling localized states of three or more peaks quickly vanish into the homogeneous background, narrow localized states (one and two density peaks) can be driven at quite high activities where they reach high velocities. This does not seem to be the case in the \lo{non-variational} systems studied in \cite{HoKn2011pre,BuDa2012sjads}. Therefore, a comparative study of the present system, the systems studied in \cite{HoKn2011pre,BuDa2012sjads} and the ones reviewed and discussed in \cite{KoTl2007c} would be beneficial.
A further focus has been the onset of motion, which occurs at a critical activity that depends only slightly on the particular localized state. We have considered drift instabilities for the system of two coupled equations, where one represents a mass-conserving dynamics of a density-like quantity and the second is a linear equation for the polarization. The non-variational coupling of the two equations is also linear. Under these conditions we have derived a general criterion for the onset of motion. Namely, the zero crossing of the difference
of the squared norms of the two steady fields ($||\psi_{0}||^2-||P_{0}||^2$) marks the onset of motion for all localized and extended crystalline states. The criterion holds for both types of drift instabilities that occur in the aPFC model, drift-pitchfork and drift-transcritical bifurcations, and may be used to determine the critical strength of activity that is needed for collective traveling states. Note that the criterion also applies to other models of active media that fulfill the described conditions; this will be discussed elsewhere. What needs further clarification is whether such a simple criterion can also be derived for more complicated active models that more faithfully capture specific properties of the experimental systems.
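In practice, the criterion can be evaluated along a numerical sweep in activity. The following Python sketch (our own illustration, not code from the study) computes the discrete norm difference $||\psi_{0}||^2-||P_{0}||^2$ for steady fields sampled on a uniform grid and locates its zero crossing by linear interpolation between sweep points:

```python
def norm_sq(field, dx):
    """Discrete L2 norm squared: ||f||^2 ~ sum_i f_i^2 * dx."""
    return sum(v * v for v in field) * dx

def drift_onset(activities, psi_fields, p_fields, dx):
    """Scan a sweep in activity for the zero crossing of
    ||psi_0||^2 - ||P_0||^2 (the proposed onset-of-motion criterion);
    returns the interpolated critical activity, or None if no crossing."""
    diffs = [norm_sq(psi, dx) - norm_sq(p, dx)
             for psi, p in zip(psi_fields, p_fields)]
    for i in range(len(diffs) - 1):
        if diffs[i] * diffs[i + 1] <= 0 and diffs[i] != diffs[i + 1]:
            # linear interpolation between the two bracketing sweep points
            t = diffs[i] / (diffs[i] - diffs[i + 1])
            return activities[i] + t * (activities[i + 1] - activities[i])
    return None
```

Here the steady fields would come from a continuation run; the sketch only shows the bookkeeping around the criterion itself.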
Finally, we highlight a number of questions that merit further investigation. \lo{Here, our main aim has been to establish a first overview of the rather involved overall bifurcation structure
that is related to the onset of motion in continuum models of active crystals. Although we have focused on one-dimensional systems, we believe} that most of the obtained results will hold for two- or even three-dimensional systems. There, however, the picture is complicated by the possible occurrence of various pattern types; compare, for instance, the differences found in the classical non-conserved Swift-Hohenberg model \cite{BuKn2006pre,ALBK2010sjads,LSAC2008sjads}.
Having established the existence of the various traveling and resting localized states, it will be interesting to study their interactions (in analogy to section~IV of Ref.~\cite{HoKn2011pre}), and to obtain more detailed information about their regions of existence, multistability and instabilities. As experimental studies often focus on the collective behavior of many interacting clusters \cite{theurkauff2012prl,Ginot2015prx,ginot2018aggregation}, it should be investigated whether it is possible to derive statistical models from single-cluster bifurcation studies such as the present one. Such a methodology has recently been presented for ensembles of sliding drops \cite{WTEG2017prl}.
\lo{We hope that the present study will serve as a reference for similar analyses of more detailed models of active crystals; here, for instance, we have focused on a rather simple coupling of concentration and polarization and have excluded spontaneous polarization. The obtained results regarding the onset of motion should also be compared to related results on the bifurcation structure of other models of active matter. This will allow one to develop a clearer general understanding of the observed multistabilities of states, hysteresis effects and thresholds where qualitative changes occur.}
\begin{acknowledgments}
We acknowledge support through the doctoral school ``Active living fluids'' funded by the German-French University (Grant No. CDFA-01-14).
LO wishes to thank the foundation ``Studienstiftung des deutschen Volkes'' for financial support, Johannes Kirchner for fruitful discussions and Fenna Stegemerten and Tobias Frohoff-H\"ulsmann for their detailed feedback on the manuscript.
\end{acknowledgments}
1909.08928
\section{Introduction}
\label{introduction}
Data centres support the provision of core Internet services, and it is therefore crucial to have in place data transport mechanisms that ensure high performance for the diverse set of supported services. Data centres consist of a large number of commodity servers and switches; they provide multiple paths among (possibly multi-homed) servers, very large aggregate bandwidth, and very low-latency communication with shallow buffers at the switches.
\noindent\textbf{One-to-many and many-to-one communication.} \black{Modern data centres support a plethora of services that produce one-to-many and many-to-one traffic workloads. Distributed storage systems, such as GFS/HDFS~\cite{GFS, HDFSlink} and Ceph \cite{Ceph}, replicate data blocks across the data centre (with or without daisy chaining\footnote{\url{https://patents.google.com/patent/US20140215257}}). Partition-aggregate \cite{MapReduce,SparkRDD}, streaming telemetry~\cite{sflow-streaming,gangliaDistributed}, distributed messaging~\cite{Akka,JGroups}, publish-subscribe systems \cite{ApacheKafka, GooglePubSub}, high frequency trading~\cite{tradingex1, tradingex2} and replicated state machines~\cite{statemachines1, statemachines2} also produce similar workloads. Multicast has already been deployed in data centres (e.g. to support virtualised workloads \cite{virtualisedNet1} and financial services~\cite{tradingex3}). With the advent of P4, multicasting in data centres is becoming practical \cite{ElmoMulticast}. As a result, much research on scalable network-layer multicasting in data centres has recently emerged \cite{rfcMulticast2, reliablemulticast, DualStructure, ScalingIP,infocom-multicast}, including approaches for optimising multicast flows in reconfigurable data centre networks \cite{SplitCast} and programming interfaces for applications requesting data multicast \cite{republic}.}
Existing data centre transport protocols are suboptimal in terms of network and server utilisation for these workloads. One-to-many data transport is implemented through multi-unicasting or, for distributed storage, daisy chaining. As a result, copies of the same data are transmitted multiple times, wasting network bandwidth and creating hotspots that severely impair the performance of short, latency-sensitive flows. In many application scenarios, multiple copies of the same data can be found in the network at the same time (e.g. in replicated distributed storage), but only one replica server is used to fetch it. Fetching data, in parallel, from all available replica servers (many-to-one data transport) would provide significant benefits in terms of eliminating hotspots and naturally balancing load among servers.
These performance limitations are illustrated in Figure \ref{tcpndp}, where we plot the application goodput for TCP and NDP (Novel Datacenter transport Protocol) \cite{NDP} in a distributed storage scenario with $1$ and $3$ replicas. When a single replica is stored in the data centre, NDP performs very well, as also demonstrated in \cite{NDP}. TCP performs poorly\footnote{It is well-established that TCP is ill-suited for meeting throughput and latency requirements of applications in data centre networks, therefore we will be using NDP and PIAS~\cite{PIAS} as the baseline protocols in this paper.}. On the other hand, when three replicas are stored in the network, both NDP and TCP perform poorly in both write and read workloads. Writing data involves either multi-unicasting replicas to all three servers (blue and green lines in Figure \ref{tcpndp}a) or daisy chaining replica servers (black line); although daisy chaining performs better, avoiding the bottleneck at the client's uplink, both approaches consume excessive bandwidth by moving multiple copies of the same block through the data centre. Fetching a data block from a single server when it is also stored in two other servers creates hotspots at servers' uplinks, due to collisions from randomly selecting a replica server for each read request (see black and purple lines in Figure \ref{tcpndp}b).
\begin{figure}[!h]
\setlength{\belowcaptionskip}{-3pt}
\centering
\subcaptionbox{One-to-many (write)}[.48\linewidth][c]{%
\includegraphics[scale=0.22]{fig1a-v1multicast-ndp-tcp.pdf}}\quad
\subcaptionbox{Many-to-one (read)}[.48\linewidth][c]{%
\includegraphics[scale=0.22]{fig1b-v1mulstisource-ndp-tcp.pdf}}\quad
\caption{Goodput in a 250-server FatTree topology with 1~Gbps links \& 10$\mu$s link delay. Background traffic is present to simulate congestion. Results are for 10,000 (a) write and (b) read block requests (2MB each). Each I/O request is `assigned' to a host in the network, which is selected uniformly at random and acts as the client. Requests' arrival times follow a Poisson process with arrival rate $\lambda=1000$. Replica selection and placement is based on HDFS' default policy.}
\label{tcpndp}
\vspace{-2mm}
\end{figure}
\noindent\textbf{Long and short flows. }Modern cloud applications commonly have strict latency requirements \cite{DCTCP,minimizingfct,RepFlow,OneMoreQueue,HOMA,pfabric}. At the same time, background services require high network utilisation \cite{infocom-morteza, Improving-Datacenter-MPTCP, packet-spraying, Hedera}. A plethora of mechanisms and protocols have been proposed to date to provide data centre applications with efficient access to network resources, by exploiting support for multiple equal-cost paths between any two servers \cite{Improving-Datacenter-MPTCP,NDP,packet-spraying,FMTCP2015} and hardware capable of low-latency communication \cite{HOMA,AUTO,pHost-CoNEXT-2015}, and by eliminating Incast \cite{TCP-Incast-2012, LTTP, Jiang} and Outcast \cite{TCP-Outcast}. Recent proposals commonly focus on a single dimension of the otherwise complex problem space; e.g. TIMELY~\cite{TIMELY}, DCQCN~\cite{DCQCN-RDMA}, QJUMP~\cite{qjump} and RDMA over Converged Ethernet v2 \cite{rdma} focus on low-latency communication but do not support multi-path routing. Other approaches \cite{Hedera, packet-spraying} provide excellent performance for long flows but perform poorly for short flows \cite{Improving-Datacenter-MPTCP,infocom-morteza}. None of these protocols supports efficient one-to-many and many-to-one communication.
\noindent\textbf{Contribution. }In this paper we propose SCDP\footnote{\black{SCDP builds on our early work on integrating fountain coding in data transport protocols \cite{polyraptor, Trevi, scdp-arxiv}. In \cite{Trevi} we motivated the need for a novel data transport mechanism to efficiently support one-to-many and many-to-one communication and argued that rateless codes is the way forward in doing so. In \cite{polyraptor}, we introduced an early version of SCDP to the research community.}}, a general-purpose data transport protocol for data centres that, unlike any other protocol proposed to date, supports efficient one-to-many and many-to-one communication. This, in turn, results in significantly better overall network utilisation, minimising hotspots and providing more resources to long and short unicast flows. At the same time, SCDP supports fast completion of latency-sensitive flows and consistently high-bandwidth communication for long flows. SCDP eliminates Incast and Outcast. All these are made possible by integrating RaptorQ codes \cite{RFC-6330-RQ, RaptorQ-book} with receiver-driven data transport \cite{NDP, HOMA}, in-network packet trimming \cite{cuttingPayload, NDP} and Multi-Level Feedback Queuing (MLFQ) \cite{PIAS}.
\noindent\textbf{SCDP performance overview. }We found that SCDP improves goodput performance by up to $\sim$50\% compared to NDP and $\sim$60\% compared to PIAS for different application workloads involving one-to-many and many-to-one communication (\textsection\ref{goodput-performance}). Equally importantly, it reduces the average flow completion time (FCT) for short flows by up to $\sim$45\% compared to NDP and $\sim$70\% compared to PIAS under two realistic data centre traffic workloads (\textsection\ref{realistic-workloads}). For short flows, decoding latency is minimised by the combination of the systematic nature of RaptorQ codes and MLFQ; even in a $70\%$ loaded network, decoding was needed for only $9.6\%$ of short flows. This percentage was less than $1\%$ in a $50\%$ loaded network (\textsection\ref{network-overhead}). The network overhead induced by RaptorQ codes is negligible compared to the benefits of supporting one-to-many and many-to-one communication; only 1\% network overhead was introduced when the network was very heavily congested (\textsection\ref{unnecessary-overhead}). RaptorQ codes have been shown to perform exceptionally well in terms of encoding/decoding rates, even on a single core. We therefore expect that with hardware offloading, in combination with SCDP's block pipelining mechanism {(\textsection\ref{pipelining})}, the required computational overhead will not be significant.
\vspace{-1mm}
\section{RaptorQ Encoding and Decoding}
\label{raptorQ}
\noindent\textbf{Encoding. }RaptorQ codes are \emph{rateless} and \emph{systematic}. The input to the encoder is one or more \emph{source blocks}; for each one of these source blocks, the encoder creates a potentially very large number of \emph{encoding symbols} (rateless coding). All $K$ source symbols (i.e. the original fragments of a source block) are amongst the set of encoding symbols (systematic coding). All other symbols are called \emph{repair} symbols. Senders initially send source symbols, followed by repair symbols, if needed.
\noindent\textbf{Decoding. }A source block can be decoded after receiving a number of symbols that must be equal to or larger than the number of source symbols; all symbols contribute to the decoding process equally. In a lossless communication scenario, decoding is not required, because all source symbols are available (systematic coding).
\noindent\textbf{Performance. } In the absence of loss, RaptorQ codes do not incur any network or computational overhead. The trade-off associated with RaptorQ codes when loss occurs involves (1) some minimal network overhead to enable successful decoding of the original fragments and (2) computational overhead for decoding the received symbols into the original fragments. RaptorQ codes behave exceptionally well in both respects. With two extra encoding symbols (compared to the number of original fragments), the decoding failure probability is in the order of $10^{-6}$. It is important to note that a decoding failure is not fatal; instead, one or more additional encoding symbols can be requested in order to ensure that decoding is successful~\cite{RaptorQ-book}. The time complexity of RaptorQ encoding and decoding is linear in the number of source symbols. RaptorQ codes perform excellently for all block sizes, including very small ones, which is very important for building a general-purpose data transport protocol that can efficiently handle a diverse set of workloads. In \cite{CodornicesRqNew, LiquidCloudStorage}, the authors report encoding and decoding speeds of over 10 Gbps using a RaptorQ software prototype running on a single core. With hardware offloading, RaptorQ codes would be able to support data transport at line speeds in modern data centre deployments. On top of that, multiple blocks can be decoded in parallel, independently of each other (e.g. on different cores). Decoding small source blocks is even faster, as reported in~\cite{CodornicesRqNew}. Decoding performance depends neither on the order in which symbols arrive nor on which particular symbols are received.
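To give a feel for the numbers, the following Python sketch models the overhead/reliability trade-off. The exponential failure model (roughly $10^{-2}$, $10^{-4}$ and $10^{-6}$ for zero, one and two extra symbols) is our simplifying assumption based on the figures quoted above, not a formula from the RaptorQ specification:

```python
def decode_failure_prob(overhead):
    """Assumed model of RaptorQ decoding failure: roughly two orders of
    magnitude per extra symbol beyond the k source symbols, i.e. ~1e-2
    with zero overhead, ~1e-4 with one extra symbol, ~1e-6 with two."""
    return 10.0 ** (-2 * (overhead + 1))

def symbols_to_collect(k, target_failure):
    """Smallest number of symbols a receiver should collect so that the
    modelled decoding failure probability drops below target_failure."""
    overhead = 0
    while decode_failure_prob(overhead) > target_failure:
        overhead += 1
    return k + overhead
```

For a block of 1000 source symbols, two extra symbols amount to only 0.2\% network overhead under this model, which matches the paper's point that the overhead is negligible.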
\noindent\textbf{Example. }Before explaining how RaptorQ codes are integrated in SCDP, we present a simple example of point-to-point communication between two hosts, illustrated in Figure \ref{RQ-block-diagram}\footnote{Note that Figure \ref{RQ-block-diagram} does not illustrate SCDP's underlying mechanisms. The design of SCDP is discussed extensively in Section \ref{design}.}. On the sender side, a single source block is passed to the encoder, which fragments it into $8$ equal-sized source symbols $S\textsubscript{1},S\textsubscript{2}, ..., S\textsubscript{8}$. The encoder uses the source symbols to generate repair symbols $S\textsubscript{a},S\textsubscript{b},S\textsubscript{c}$ (here, the decision to encode $3$ repair symbols is arbitrary). Encoding symbols are transmitted to the network, along with the respective encoding symbol identifiers (ESI) and source block numbers (SBN) \cite{RFC-6330-RQ}. As shown in Figure \ref{RQ-block-diagram}, symbols $S\textsubscript{4}$ and $S\textsubscript{b}$ are lost. Symbols take different paths in the network, but this is transparent to the receiver, which only needs to collect a sufficient number of encoding symbols (source and/or repair). The receiver can receive symbols from multiple senders and over different network interfaces. In this example, the receiver attempts to decode the original source block upon receiving $9$ symbols, i.e. with one extra symbol, which constitutes network overhead (as shown in Figure \ref{RQ-block-diagram}). Decoding is successful and the source block is passed to the receiver application. As mentioned above, if no loss had occurred, there would be no need for decoding and the data would have been passed directly to the application.
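The receiver-side bookkeeping in this example can be mimicked with a toy Python sketch. This is purely illustrative: no actual RaptorQ encoding or decoding is performed; the symbol names and loss pattern follow the example, and the receiver merely counts held symbols and checks whether any source symbol is missing (the systematic property):

```python
K = 8  # number of source symbols in the block (S1..S8 in the example)

def receive(symbols, k=K):
    """Toy receiver bookkeeping (not a RaptorQ decoder): returns None while
    fewer than k symbols are held; otherwise returns whether decoding is
    needed, i.e. whether at least one source symbol is missing."""
    held = set(symbols)
    if len(held) < k:
        return None  # keep waiting for more symbols
    # source symbols are "S1".."S8"; repair symbols are "Sa", "Sb", "Sc"
    sources = {s for s in held if s[1:].isdigit()}
    return len(sources) < k

# sender emits 8 source symbols followed by 3 repair symbols
sent = [f"S{i}" for i in range(1, K + 1)] + ["Sa", "Sb", "Sc"]
# S4 and Sb are lost in the network, so 9 symbols arrive
received = [s for s in sent if s not in {"S4", "Sb"}]
```

With the 9 received symbols the session completes, but decoding is required because source symbol S4 was lost; in the lossless case all 8 source symbols arrive and no decoding is needed.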
\begin{figure}[t]
\centering
\includegraphics[scale=0.073]{fig2-rq-block-diagram.pdf}\quad
\caption{RaptorQ-based communication}
\label{RQ-block-diagram}
\vspace{-4mm}
\end{figure}
\noindent \textbf{Erasure coding in data transport. } There is a long and interesting trail of research on integrating erasure coding into data transport protocols. SCDP is unique compared to all these works, efficiently supporting one-to-many and many-to-one data transport sessions for distributed storage and numerous other workloads prevalent in modern data centres, without sacrificing performance for traditional short and long flows. In ~\cite{end-to-end-coding}, the authors explore the advantages and challenges of integrating end-to-end coding into TCP. Corrective~\cite{Reducing-Web-Latency} employs coding for faster loss recovery, but it can only recover from a single packet loss per window, as its coding redundancy is fixed. FMTCP~\cite{FMTCP2015} employs fountain coding to improve the performance of MPTCP~\cite{Improving-Datacenter-MPTCP} by recovering data over multiple subflows. LTTP~\cite{LTTP} is a UDP-based transport protocol that uses fountain codes to mitigate Incast in data centres. CAPS~\cite{CAPS-coding} deals with out-of-order data by applying forward error correction to short flows, in order to reduce their flow completion time, and employs ECMP to achieve high throughput for long flows. RC-UDP~\cite{RCUDP} is a rateless-coding data transport protocol that enables reliable data transfer over high-bandwidth networks. It uses block-by-block flow control, where the sender keeps sending encoded symbols until the receiver sends an acknowledgement indicating successful decoding. PPUSH~\cite{scdpcitedpaper} is a multi-source data delivery protocol that employs RaptorQ codes for sending multiple flows in parallel using all available replicas.
\section{The Case for RaptorQ Coding in Data Transport for Data Centre Networks}
\label{case-for-rq}
The starting point in designing SCDP, which is also the key differentiator to the rest of the literature, is its efficient handling of one-to-many and many-to-one communication, without sacrificing performance for traditional unicast flows.
\noindent\black{\textbf{One-to-many communication. }None of the existing data transport protocols for data centres can support communication beyond traditional unicast flows, even if network-level multicasting were deployed in the network. Congestion control in reliable multicasting is a challenging problem, and traditional sender-driven, reliable multicasting approaches (e.g. as in~\cite{RFC2362, RizzoSigcomm}) would suffer from Incast \cite{TCP-Incast-2012}, from lack of support for multipath routing and multi-homed servers, and from their inability to spray packets in the network. A receiver-driven approach would be more suitable. However, extending approaches such as NDP \cite{NDP} or Homa \cite{HOMA} is far from trivial, as this would entail complications with flow control when losses occur, because lost packets must be retransmitted. Senders would have to maintain state, enqueuing incoming pull requests by multiple receivers while waiting to multicast a new packet or retransmit a lost packet. Equally importantly, the slowest receiver would slow down all other receivers.}\footnote{\black{How existing protocols for data centres could be extended to support one-to-many and many-to-one communication is beyond the scope of this paper.}}
\black{With RaptorQ codes and receiver-driven flow control, one-to-many communication is simple and efficient: a sender multicasts a new symbol after receiving a \textit{pull} request from all receivers (see Section \ref{multicast} for a detailed description). A sender does not need to remember which symbols it has sent as there is no notion of retransmission. Instead, it only needs to count the number of pending pull requests from each receiver so it can `clock' symbol sending. A receiver can decode the original data and complete the session after it receives the necessary amount of symbols (see Section \ref{raptorQ}), independently of other receivers that may be behind in terms of receiving symbols because of network congestion (e.g. when they are connected to a congested ToR switch).}
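A minimal sketch of this pull-clocked sending logic in Python follows. The class and its methods are our own illustration of the idea, not SCDP's actual implementation; in particular, the initial window of $w$ symbols is modelled as initial per-receiver credit:

```python
class MulticastSender:
    """Sketch of pull-clocked one-to-many sending: after an initial window
    of w symbols, a fresh symbol is multicast only once every receiver has
    an outstanding pull request. The sender keeps no per-symbol state for
    retransmission, only a credit counter per receiver."""
    def __init__(self, receivers, w):
        self.credit = {r: w for r in receivers}  # pending pulls per receiver
        self.next_esi = 0                        # next encoding symbol id

    def on_pull(self, receiver):
        """A pull request from a receiver grants one more symbol of credit."""
        self.credit[receiver] += 1

    def try_multicast(self):
        """Multicast the next symbol if every receiver has credit; return
        its ESI, or None if some receiver has not pulled yet."""
        if all(c > 0 for c in self.credit.values()):
            for r in self.credit:
                self.credit[r] -= 1
            esi, self.next_esi = self.next_esi, self.next_esi + 1
            return esi
        return None
```

Note that a receiver that has already collected enough symbols simply stops pulling and completes; in this sketch the sender would then stall, whereas SCDP's session teardown (the \emph{fin} handling) removes completed receivers.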
\noindent\black{\textbf{Many-to-one communication. }Existing protocols do not and could not support many-to-one communication in a way that benefits the overall performance. Even if senders were instructed to only send a subset of the original data fragments (emulating many-to-one communication), a congested or slow server would always be the bottleneck for the whole session.}
\black{With RaptorQ codes, each sender contributes as much as it can, given the current conditions in terms of network congestion and local load. The rateless nature of RaptorQ codes enables receivers to successfully decode a source block regardless of which server sent the symbols. The only requirement is to receive the required number of symbols (see Section \ref{raptorQ}). This is a unique characteristic of SCDP (see Section \ref{multi-source}), which `bypasses' network hotspots by having non-congested servers contribute more symbols to the receiver. Crucially, this is done without any central coordination.}
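This load-balancing effect can be illustrated with a toy Python model (our simplifying assumption, not SCDP's mechanism): each replica server delivers symbols at a per-round rate reflecting its current load, and the session completes once any $k$ symbols have arrived, so less-congested servers naturally contribute more:

```python
def many_to_one(k, rates):
    """Toy model of many-to-one transport: in each round, replica server s
    delivers up to rates[s] symbols; the receiver completes once any k
    symbols have arrived, regardless of which server sent them.
    Returns the number of symbols contributed by each server."""
    received = 0
    contrib = {s: 0 for s in rates}
    while received < k:
        for server, rate in rates.items():
            take = min(rate, k - received)  # never collect more than needed
            contrib[server] += take
            received += take
            if received >= k:
                break
    return contrib
```

For example, with $k=10$ and one server three times faster than the other, the fast server ends up contributing 8 of the 10 symbols without any coordination.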
\noindent\black{\textbf{Flow completion time and goodput. }SCDP's benefits discussed above do not come at a cost for traditional unicast flows. This is due to the combination of the systematic nature of RaptorQ codes, MLFQ, and packet trimming. More specifically, FCT for short flows is very small, unaffected by the introduction of coding because senders first send the original data fragments (systematic coding) with the highest priority, minimising loss for them. As a result, decoding is very rarely required for short flows. SCDP performs exceptionally well also for long flows despite the fact that (the otherwise efficient) decoding is needed more often. This is done by employing pipelining of source blocks, which alleviates the decoding overhead for large data blocks and maximises application goodput (see Section \ref{pipelining}). In combination with receiver-driven flow control and packet trimming, SCDP eliminates Incast and Outcast, playing well with switches' shallow buffers}.
\noindent\black{\textbf{Network utilisation. }SCDP ensures high network utilisation for all communication modes; with RaptorQ coding there is no notion of ordering, as all symbols contribute to the decoding (if needed) of source data. As a result, symbols can be sprayed in the network through all available paths maximising utilisation and minimising the formation of hotspots. At the same time, receivers can receive symbols from different interfaces naturally enabling multi-homed topologies (e.g. \cite{BCube,Jellyfish}).}
\section{SCDP Design}
\label{design}
\black{In this section, we present SCDP's design; we define SCDP's packet types and the adopted switch model. We then describe all of SCDP's supported communication modes, and how we maximise goodput and minimise flow completion time (FCT) for long and short flows, respectively.}
\vspace{-3mm}
\subsection{Packet Types}
\label{packet-types}
\black{SCDP's packet format is shown in Figure \ref{pkt-types}. Port numbers are used to identify a transport session. The type field (\textsc{typ} in Figure \ref{pkt-types}) is used to denote one of the three SCDP packet types; \textit{symbol}, \textit{header} and \textit{pull} (denoted as \textsc{smbl}, \textsc{hdr} and \textsc{pull}, respectively, in Algorithms~\ref{sender-alg} and \ref{receiver-alg}). The priority field (\textsc{pri} in Figure \ref{pkt-types}) is set by the sender and is used by MLFQ (see Section \ref{service-model}).}
A \emph{symbol} packet carries in its payload one MTU-sized source or repair symbol. The source block number (SBN) identifies the source block the carried symbol belongs to. The encoding symbol identifier (ESI) identifies the symbol within the stream of source and repair symbols for the specific source block \cite{RFC-6330-RQ}. A sender initiates a transport session by pushing an initial window of symbols with the \emph{syn} flag set, for the first source block. These symbol packets also carry a number of \emph{options}: the \emph{transfer mode} (\textsc{m} in Figure \ref{pkt-types}) can be unicast, many-to-one or one-to-many. The rest of the options are used to define the total length of the session (\textsc{f} in Figure \ref{pkt-types}), number of source blocks (\textsc{z} in Figure \ref{pkt-types}) and the symbol size (\textsc{t} in Figure \ref{pkt-types}). The source block size $K$ is derived from these options as described in RaptorQ RFC~\cite{RFC-6330-RQ}. We adopt the notation used in this RFC\cite{RFC-6330-RQ}.
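For concreteness, the header fields described above can be collected in a Python dataclass. The field names and types below are illustrative assumptions inferred from Figure \ref{pkt-types} and the surrounding text, not SCDP's wire format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScdpHeader:
    """Illustrative model of an SCDP packet header (assumed layout)."""
    src_port: int
    dst_port: int
    typ: str            # "SMBL", "HDR" or "PULL"
    pri: int            # MLFQ priority, set by the sender
    sbn: int            # source block number
    esi: int = 0        # encoding symbol identifier (symbol/header packets)
    seq_num: int = 0    # sequence number (pull packets)
    syn: bool = False   # set on the initial window of symbol packets
    fin: bool = False   # set on the last pull request of a session
    # options carried on syn packets:
    mode: Optional[str] = None        # "unicast", "one-to-many", "many-to-one"
    total_len: Optional[int] = None   # F: total length of the session
    num_blocks: Optional[int] = None  # Z: number of source blocks
    symbol_size: Optional[int] = None # T: symbol size
```

A pull packet would then be, e.g., `ScdpHeader(src_port, dst_port, "PULL", 0, sbn, seq_num=n)`, with the payload-related fields unused.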
\emph{Header} packets are trimmed versions of symbol packets. Upon receiving a symbol packet that cannot be buffered, a network switch trims its payload and forwards the header, with the highest priority. Header packets are used to ensure that a window (\textit{w}) of symbol packets is always in-flight.
A \emph{pull} packet is sent by a receiver to request a symbol. Its sequence number indicates how many symbols of the specified source block to send, in case pull requests get reordered. Multiple symbol packets may be sent in response to a single pull request, as described in Section \ref{unicast}. The \emph{fin} flag is used to identify the last pull request; upon receiving such a pull request, a sender sends the last symbol packet for this SCDP session.
\vspace{-2mm}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth,height=0.15\textheight]{fig3-packets.pdf}
\caption{\textcolor{black}{SCDP packet format}}
\label{pkt-types}
\end{center}
\vspace{-5mm}
\end{figure}
\vspace{-3.5mm}
\subsection{Switch Service Model}
\label{service-model}
SCDP relies on network switching functionality that is either readily available in today's data centre networks \cite{HOMA} or is expected to be \cite{NDP} when P4 switches are widely deployed. SCDP does not require any more switch functionality than NDP~\cite{NDP}\footnote{\black{As reported in~\cite{NDPRethinking}, there is ongoing work by switch vendors to implement the NDP switch. Moreover, a smartNIC implementation of the NDP end-host stack is also ongoing. This is very promising for the deployability of next-generation protocols, including SCDP, in the real-world.}}, Homa \cite{HOMA}, QJUMP~\cite{qjump}, or PIAS~\cite{PIAS} do.
\noindent\textit{Priority scheduling and packet trimming.} In order to support latency-sensitive flows, we employ MLFQ \cite{PIAS} and packet trimming \cite{cuttingPayload}. We assume that network switches support a small number of queues with respective priority levels. The top-priority queue is only used for header and pull packets. This is crucial for swiftly providing feedback to receivers about loss. Given that both types of packets are very small, it is extremely unlikely that the respective queue fills up and that such packets are dropped\footnote{SCDP receivers employ a simple timeout mechanism, as in \cite{NDP}, to recover from the unlikely losses of pull and header packets.}. The rest of the queues are small and buffer symbol packets. Switches perform weighted round-robin scheduling between the top-priority (header/pull) queue and the symbol packet queues. This guards against a congestion-collapse situation, where a switch forwards only trimmed headers because all symbol packets have been trimmed. When a data packet is to be transmitted, the switch selects the head packet from the highest-priority, non-empty queue.
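The trimming and priority behaviour can be sketched as follows. This is a simplified Python model (our illustration): packets are plain dictionaries, and the weighted round-robin between the control queue and the symbol queues is omitted for brevity:

```python
from collections import deque

class TrimmingSwitch:
    """Simplified model of the assumed switch service: one top-priority
    queue for small header/pull packets plus a few shallow symbol queues.
    A symbol packet arriving at a full queue is trimmed to a header and
    enqueued at top priority, so the receiver learns of the loss quickly."""
    def __init__(self, num_symbol_queues=4, depth=8):
        self.ctrl = deque()  # header and pull packets
        self.data = [deque() for _ in range(num_symbol_queues)]
        self.depth = depth   # shallow per-queue buffer

    def enqueue(self, pkt):
        if pkt["type"] in ("HDR", "PULL"):
            self.ctrl.append(pkt)
        elif len(self.data[pkt["pri"]]) < self.depth:
            self.data[pkt["pri"]].append(pkt)
        else:
            # queue full: trim the payload, forward the header at top priority
            self.ctrl.append({"type": "HDR", "pri": 0, "esi": pkt["esi"]})

    def dequeue(self):
        """Serve the control queue first, then the highest-priority
        non-empty symbol queue (round-robin weighting omitted)."""
        if self.ctrl:
            return self.ctrl.popleft()
        for q in self.data:
            if q:
                return q.popleft()
        return None
```

A real implementation would interleave the control and symbol queues with weighted round-robin, as described above, rather than always serving control packets first.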
\noindent\textit{Multipath routing.} SCDP packets are sprayed to all available equal-cost paths to the destination\footnote{In SCDP's one-to-many transfer mode there are multiple destinations.} in the network. SCDP relies on ECMP and spraying could be done either by using randomised source ports \cite{infocom-morteza}, or the ESI of symbol and header packets and the sequence number of pull packets.
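A per-symbol path choice along these lines might look as follows; the hash function and key layout are our own assumptions for illustration, since ECMP implementations differ:

```python
import zlib

def pick_path(src, dst, src_port, dst_port, esi, num_paths):
    """Sketch of ESI-based packet spraying: hashing the flow identifier
    together with the symbol's ESI spreads consecutive symbols of one
    session across all equal-cost paths, while any given symbol always
    hashes to the same path."""
    key = f"{src}|{dst}|{src_port}|{dst_port}|{esi}".encode()
    return zlib.crc32(key) % num_paths
```

Because symbols carry no ordering requirement, spraying them this way needs no resequencing at the receiver, unlike per-packet spraying under TCP.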
\begin{algorithm}[t]
\setlength{\textfloatsep}{0pt}
\scriptsize
\DontPrintSemicolon
\SetNoFillComment
\SetKwFunction{FInit}{initSession()}
\SetKwFunction{FCreateHeader}{createHeader(ss, syn, fin, type)}
\SetKwFunction{FRecv}{onReceivePullRequest(pullReq)}
\SetKwFunction{FCreate}{createPacket(ss)}
\SetKwFunction{FSendNet}{sendPacket(header,symbol)}
\SetKwProg{Fn}{Function}{}{}
SessionState $ss$\;
\Fn{\FInit}{
{// initialise $ss$: $SBN$, $srcPort$, $dstPort$, $type$, $options$}\\
$ss.ESI \leftarrow 0 $\;
$ss.expectedPullSeqNum \leftarrow 0 $\;
$ss.numSentSymbols \leftarrow 0 $\;
\While{$ss.numSentSymbols < w$} {
$hdr \leftarrow createHeader(ss, true, false, SMBL)$\;
$symbol \leftarrow getNextSymbol(ss)$ \textit{// source or repair symbol}\;
$sendPacket(hdr,symbol)$ \textit{// send to network}\;
$ss.ESI \leftarrow ss.ESI + 1 $\;
$ss.numSentSymbols \leftarrow ss.numSentSymbols + 1$\;
}
}
\Fn{\FRecv}{
$gap \leftarrow pullReq.seqNum $ - $ ss.expectedPullSeqNum $ \textit{// $\leq 0$ for already-served requests}\;
\While{$gap > 0$} {
$hdr \leftarrow createHeader(ss, false, false, SMBL)$\;
$symbol \leftarrow getNextSymbol(ss)$ \textit{// source or repair symbol}\;
$sendPacket(hdr,symbol)$ \textit{// send to network}\;
$ss.ESI \leftarrow ss.ESI + 1 $\;
$gap \leftarrow gap - 1 $\;
}
$ss.expectedPullSeqNum \leftarrow max(ss.expectedPullSeqNum, pullReq.seqNum)$\;
\If{$\textit{pullReq.fin} == true$}
{
\textit{// session will be completed and \;}
\textit{// garbage collected soon (in a timeout) \;}
$ss.toBeGarbageCollected \leftarrow true$ \;
}
}
\Fn{\FCreateHeader}{
$ hdr \leftarrow createHeader(ss)$ \textit{// sets port numbers}\;
$ hdr.\{SBN, syn, fin\} \leftarrow (ss.SBN, syn, fin)$\;
$ hdr.\{typ,pri, opts\} \leftarrow (type, getMLFQPriority(), ss.opts)$\;
\If{$\textit{type} == SMBL$}{$ hdr.\{ESI\} \leftarrow ss.ESI$\;}
\If{$\textit{type} == PULL$}{$ hdr.\{seqNum\} \leftarrow ss.seqNum$\;}
}
\caption{\textcolor{black}{SCDP Sender}}
\label{sender-alg}
\vspace{-1mm}
\end{algorithm}
\begin{algorithm}[t]
\scriptsize
\DontPrintSemicolon
\SetNoFillComment
\SetKwFunction{FInit}{initSession(packet)}
\SetKwFunction{FSend}{sendSymbol()}
\SetKwFunction{FGetHeader}{getHeaderInfo(pkt.hdr)}
\SetKwFunction{FGetHeaderTemp}{getHeaderInfo(hdr)}
\SetKwFunction{FRecvPacket}{onReceivePacket(pkt)}
\SetKwFunction{FRecvSymbol}{processSymbol(symbol)}
\SetKwFunction{FRecvSymbolTemp}{processSymbol(pkt.payload)}
\SetKwFunction{FRecvTrim}{processHeader(header)}
\SetKwFunction{FDecodeSB}{decodeSrcBlock()}
\SetKwFunction{FAddPull}{addPullRequest()}
\SetKwProg{Fn}{Function}{}{}
\SetKwComment{tcp}{\small // }{}%
\SetCommentSty{small}
SessionState $ss$\;
$ss.established\leftarrow false$\;
\Fn{\FRecvPacket}{
$type, syn \leftarrow $ \FGetHeader \;
\If{$syn == true$ \&\& $ss.established == false$}{
$ ss.\{established, requestMoreSymbols\} \leftarrow (true, true)$\;
$ ss.\{seqNum, numRcvdSymbols, overhead\} \leftarrow (0, 0, 0)$\;
$ss.K \leftarrow calcKFromOpts(ss.opts)$ \;\textit{// $K$ is derived from the header options as in RaptorQ RFC~\cite{RFC-6330-RQ}}
}
\If{$type == SMBL $}{
\FRecvSymbolTemp
}
\If{$type == HDR $}{
\FRecvTrim
}
}
\Fn{\FRecvSymbol}{
$ss.storeSymbol(symbol)$\;
$ss.numRcvdSymbols \leftarrow ss.numRcvdSymbols + 1 $\;
\eIf {$ss.numRcvdSymbols == ss.K$ \&\& $ss.overhead==0$}{
$ss.skipDecoding \leftarrow true$\;
$ss.requestMoreSymbols \leftarrow false$\;
$ss.deliverSBN() $ \textit{ // deliver to application layer}
}
{\If{$ss.numRcvdSymbols == ss.K+ss.overhead$}{\FDecodeSB}}
\If{$ss.numRcvdSymbols == ss.K + ss.overhead - 1$}{$ss.Fin \leftarrow true$}
\If{$ss.requestMoreSymbols == true$}{\FAddPull}
}
\Fn{\FRecvTrim}{
$ss.overhead \leftarrow 2$\;
\FAddPull
}
\Fn{\FGetHeaderTemp}{
$ (ss.SBN, ss.ESI, ss.opts) \leftarrow hdr.\{SBN,ESI, opts\} $\;
$ (type, syn) \leftarrow hdr.\{typ, syn\} $\;
\textbf{return} $(type, syn)$\;
}
\Fn{\FAddPull}{
$ss.seqNum \leftarrow ss.seqNum+1 $\;
$pullReq \leftarrow createHeader(ss, false, ss.Fin, PULL)$\;
{// $createHeader$ is defined in Algorithm~\ref{sender-alg} }\\
$enqueuePullRequest(pullReq)$\;
}
\Fn{\FDecodeSB}{
$success \leftarrow ss.decode()$\;
\eIf{$success == true$}{
$ss.requestMoreSymbols \leftarrow false$\;
$ss.deliverSBN()$ \textit{ // deliver to application layer}\;
}
{
$ss.overhead\leftarrow ss.overhead + 1$ \textit{// very rare}\;
}
}
\caption{\textcolor{black}{SCDP Receiver}}
\label{receiver-alg}
\vspace{-1mm}
\end{algorithm}
\vspace{-4mm}
\subsection{Unicast Transport Sessions}
\label{unicast}
A sender implicitly opens a unicast SCDP transport session by pushing an initial window \black{of $w$ (\textit{syn}-enabled) symbol packets tagged with the highest priority (Lines $2 - 12$ in Algorithm~\ref{sender-alg}\footnote{\black{For clarity, Algorithms \ref{sender-alg} and \ref{receiver-alg} illustrate a slightly simplified version of SCDP for unicast data transport for a single source block without pipelining.}})}. Senders tag outgoing symbol packets with a priority value, which is used by the switches when scheduling their transmission (\textsection\ref{service-model}). The priority of outgoing symbol packets is gradually degraded when specific thresholds are reached; these thresholds can be calculated as in PIAS \cite{PIAS} or AuTO \cite{AUTO} (\black{Line 30 in Algorithm~\ref{sender-alg}}). The receiver establishes a new session upon receiving the first symbol that carries the $syn$ flag (\black{Lines $5 - 10$ in Algorithm~\ref{receiver-alg}}). After receiving the initial window of packets, the receiver takes control of the flow of incoming packets by pacing pull requests to the sender (\black{Lines 33 and 37 in Algorithm~\ref{receiver-alg}}). A pull request carries a sequence number, which is auto-incremented for each incoming symbol packet (\black{Line $43$ in Algorithm~\ref{receiver-alg}}). The sender keeps track of the sequence number of the last pull request and, upon receiving a new pull request, sends one or more packets to fill the gap between the sequence numbers of the last and the current request (\black{Lines $14 - 26$ in Algorithm~\ref{sender-alg}}). Such gaps may appear when pull requests are reordered due to packet spraying. Senders ignore pull requests with sequence numbers that have already been `served'; i.e. requests to which they have previously responded.
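The sender-side gap-filling rule can be sketched as follows. This is a minimal illustration of the behaviour described above, assuming the sender tracks the highest pull sequence number it has served; the names \texttt{last\_served} and \texttt{next\_esi} are ours:

```python
def on_pull_request(state, pull_seq_num, send_symbol):
    """Sketch of the gap-filling rule: one fresh symbol is sent for every
    not-yet-served pull sequence number up to pull_seq_num; reordered or
    duplicate pulls (seqNum <= last_served) are ignored."""
    gap = pull_seq_num - state["last_served"]
    for _ in range(max(gap, 0)):
        send_symbol(state["next_esi"])   # every symbol is fresh and useful
        state["next_esi"] += 1
    state["last_served"] = max(state["last_served"], pull_seq_num)
```

Because every symbol carries a new ESI, a pull that was filled on behalf of a reordered predecessor never causes a retransmission of an already-delivered packet.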
Receivers maintain a single queue of pull requests for all active transport sessions. The objective of flow control is to keep the receiver's incoming link as fully utilised as possible at all times; this dictates the pace at which receivers send pull requests to the different senders. Receivers buffer encoding symbols along with their ESI and SBN and recover a source block upon receiving either $K$ source symbols (\black{Lines $20 - 24$ in Algorithm~\ref{receiver-alg}}), where $K$ is the total number of source symbols, or, when loss occurs, $K+o$ source and repair symbols, \black{where $o$ is the induced network overhead in number of symbols (Lines $25 - 27$ in Algorithm~\ref{receiver-alg}). As discussed in Section \ref{raptorQ}, RaptorQ codes perform exceptionally well in terms of decoding failure probability; with $o = 2$, which is the value we have chosen for SCDP, decoding failure is very rare (on the order of $10^{-6}$) and, when it happens, the penalty is one RTT for requesting one more symbol plus the extra latency of attempting decoding twice. Decoding failure with $o = 3$ would be extremely unlikely.}\\
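The receiver's symbol-count target can be expressed compactly; a minimal sketch, assuming the receiver tracks whether loss has been observed ($o = 2$ is the overhead value quoted above):

```python
def symbols_still_needed(num_rcvd, K, loss_observed, o=2):
    """Receiver-side target from the text: K symbols when no loss has been
    observed (no decoding needed), K + o symbols otherwise."""
    target = K if not loss_observed else K + o
    return max(target - num_rcvd, 0)
```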
The receiver sets the \textit{fin} flag in the pull request for the last symbol (a source or repair symbol at that point) that it sends to the sender. Note that this may not actually be the last request that the receiver sends, because the symbol packet sent in response to that request may get trimmed. All pull requests for the last required symbol (not a specific one) are sent with the \textit{fin} flag on (\black{Lines $29 - 31$ in Algorithm~\ref{receiver-alg}}). The sender responds to fin-enabled pull requests by sending the next symbol in the potentially very large stream of source and repair symbols, with the highest priority. It finally releases the transport session only after a time period that ensures that the last prioritised symbol packet was not trimmed (\black{Lines $22 - 26$ in Algorithm~\ref{sender-alg}}). This time period is very short; in the very unlikely case that the prioritised symbol packet is trimmed, the respective header would be prioritised along with the pull packet subsequently sent by the receiver.
\vspace{-5mm}
\subsection{One-to-many Transport Sessions}
\label{multicast}
One-to-many transport sessions exploit support for network-layer multicast (e.g. with \cite{ElmoMulticast,rfcMulticast2,reliablemulticast, infocom-multicast, DualStructure, ScalingIP}) and coordination at the application layer; for example, in a distributed storage scenario, multicast groups could be pre-established for different replica server groups or set up on demand by a metadata storage server. This would eliminate the associated latency overhead for establishing multicast groups on the fly and is practical for other data centre multicast workloads, such as streaming telemetry \cite{sflow-streaming,gangliaDistributed} and distributed messaging \cite{Akka,JGroups}, where destination servers are known at deployment time. With recent advances in scalable data centre multicasting, a very large number of multicast groups can be deployed with manageable overhead in terms of switch state and packet size. For example, Elmo \cite{ElmoMulticast} encodes multicast group information inside packets, therefore minimising the need to store state at the network switches. With small group sizes, as in the common data centre use cases mentioned above, Elmo can support an extremely large number of groups, which can be encoded directly in packets, eliminating any maintenance overhead associated with churn in the multicast state. ``In a three-tier data centre topology with 27K hosts, Elmo supports a million multicast groups using a 325-byte packet header, requiring as few as 1.1K multicast group-table entries on average in leaf switches, with a traffic overhead as low as 5\% over ideal multicast'' \cite{ElmoMulticast}.
As with unicast transport sessions, an SCDP sender initially pushes \black{$w$ (\textit{syn}-enabled) symbol packets tagged with the highest priority}. Receivers then request more symbols by sending respective pull packets. The sender sends a new symbol packet only after receiving a request from all receivers within the same multicast group. \black{In Algorithm \ref{sender-alg}, this would only require a simple extension where the sender counts the number of pending pull requests from each receiver (not shown in order to maintain clarity). Receivers queue and pace pull packets as in the unicast transport mode depicted in Algorithm \ref{receiver-alg}. Network hotspots (e.g. when incoming symbols are frequently trimmed at the ToR switch) can prevent specific receivers from receiving symbols as fast as other receivers of the same one-to-many session do.} The rateless property of RaptorQ codes is ideal for such situations; within a single transport session, receivers may receive different sets of symbols, but they will all decode the original source block as long as the required number of symbols is collected, regardless of which symbols they missed (see Section \ref{raptorQ}). \black{Receivers successfully decode the original data as soon as they receive the necessary number of symbols and are not slowed down by receivers that are behind a hotspot. This is an important property for applications that only require a specific subset of receivers (e.g. some form of quorum) to receive the data before notifying a user or some other service.}
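The extension mentioned above, where the sender releases a new symbol only once every receiver in the group has requested one, reduces to taking the minimum pending-pull count across the group. A one-line sketch (our own formulation, not the authors' code):

```python
def multicast_pulls_to_serve(pending_pulls):
    """Number of new symbols a one-to-many sender may release: the sender
    waits for a pull from *all* receivers, so it is bounded by the
    receiver with the fewest pending pull requests."""
    return min(pending_pulls.values()) if pending_pulls else 0
```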
\black{Some receivers may end up receiving more symbols than would be required to decode the original source block. This is unnecessary network overhead induced by SCDP but, in Section \ref{unnecessary-overhead}, we show that, even under severe congestion, SCDP performs significantly better than NDP by exploiting the support for network-layer multicast. Dealing with receivers that are extremely slow or unresponsive is an important problem. We argue that this is a policy issue and should be handled at the application rather than the data transport layer. For example, the data transport protocol could notify the application of a straggler server (e.g. in a high-performance, user-space stack deployment), which, in turn, could either ignore the notification and leave the data transport session unchanged or update the multicast group used by the data transport layer. Different applications may have different requirements and consistency constraints related to dealing with unresponsive servers. Exploring such policies is outside the scope of this work.}
\vspace{-2.3mm}
\subsection{Many-to-one Transport Sessions}
\label{multi-source}
Many-to-one data transport is a generalisation of the unicast transport discussed in Section \ref{unicast}. \black{Each sender $i$ pushes an initial window $w_i$ of (\textit{syn}-enabled) symbol packets to the receiver, as shown in Algorithm \ref{sender-alg} (Lines $2 - 13$)}. These packets are tagged with the highest priority and may contain source or repair symbols. The total number of initially pushed symbol packets $w_{total}=\sum_{i=1}^{n_s}w_i$, where $n_s$ is the total number of senders, is selected to be larger than the initial window $w$ used in unicast transport sessions\footnote{We assume that the value of $w$ is decided at the application layer.}. This is to enable natural load balancing in the data centre in the presence of slow senders or hotspots in the network. In that case, SCDP ensures that a subset of senders (e.g. $2$ out of $3$ in a 3-replica scenario) can still fill the receiver's downstream link. In Section \ref{window-size-eval}, we show that initial window sizes that are greater than $10$ symbol packets result in the same (high) goodput performance. A large initial window would inevitably result in more trimmed symbol packets, which however would not affect short flows that are prioritised over longer multi-source sessions. As discussed in Section \ref{raptorQ}, RaptorQ codes are rateless and all symbols contribute to the decoding process, therefore the receiver is agnostic to their origin. As a result, efficient data transport can be achieved by partitioning the potentially large stream of source and repair symbols amongst all senders, so that each one produces unique symbols. This can be done through coordination at the application layer or through randomisation. \black{Receivers behave as shown in Algorithm \ref{receiver-alg}.}
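One simple way to partition the symbol stream so that each sender produces unique symbols is a modulo split of the ESI space. This is an illustrative scheme of our own; any disjoint partition agreed at the application layer would serve equally well:

```python
def sender_esi_stream(sender_id, num_senders, count, start=0):
    """Illustrative partitioning of the (conceptually unbounded) symbol
    stream: sender i emits the ESIs congruent to i modulo the number of
    senders, so no two senders ever produce the same symbol."""
    esi = start + sender_id
    out = []
    for _ in range(count):
        out.append(esi)
        esi += num_senders
    return out

# The union of all senders' streams covers every ESI exactly once:
streams = [sender_esi_stream(i, 3, 4) for i in range(3)]
```

Because the receiver only counts distinct symbols, any such partition yields the natural load balancing described above: a slow sender simply contributes fewer symbols.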
\vspace{-2mm}
\subsection{Maximising Goodput for Long Flows - Block Pipelining}
\label{pipelining}
With RaptorQ codes, if loss occurs, the receiver must perform decoding on the collected source and repair symbols (\textsection\ref{raptorQ}). This induces latency before the data can become available to the application. For large source blocks, SCDP masks this latency by splitting the large source block into many smaller blocks, instead of encoding and decoding the whole block. The smaller blocks are then pipelined over a single SCDP session. With pipelining, a receiver decodes each smaller block while receiving symbol packets for the next one, effectively masking the latency induced by decoding, except for the last source block. The latency for decoding this last smaller block is considerably smaller compared to decoding the whole block at once. For short, latency-sensitive flows, decoding latency could be a serious issue, but SCDP strives to eliminate losses, resulting in fast, decoding-free completion of short flows (see the following section).
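The block-splitting step can be sketched as follows; an illustrative helper of our own, with the per-block size of $100$ MTU-sized symbols taken from the evaluation setup later in the paper:

```python
def split_into_blocks(total_symbols, block_symbols=100):
    """Split a large source block into smaller blocks that are encoded,
    sent, and decoded independently, so decoding of block i overlaps with
    receiving block i+1 (only the final block's decoding is exposed)."""
    blocks, start = [], 0
    while start < total_symbols:
        end = min(start + block_symbols, total_symbols)
        blocks.append((start, end))  # [start, end) symbol range of one block
        start = end
    return blocks
```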
\vspace{-1.2mm}
\subsection{Minimising Completion Time for Short Flows}
\label{latency-overhead-tricks}
SCDP ensures that a window of $w$ symbol packets is in flight throughout the lifetime of a transport session. The window decreases by one symbol packet for each of the last $w$ symbol packets that the sender sends. As long as no loss occurs (loss is detected by the receiver through the reception of a trimmed header), a receiver sends $K - w$ pull requests in total, where $K$ is the number of source symbols (or original fragments) and $w$ is the size of the initial window. For every received trimmed header (i.e. observed loss), the receiver sends a pull request, and, subsequently, the sender sends a new symbol, which equally contributes to the decoding of the source block. This ensures that SCDP does not induce any unnecessary overhead; i.e. symbol packets that are redundant in decoding the source block. The target for the total number of received symbols also changes when loss is detected. Initially, all receivers aim at receiving $K$ source symbols. Upon receiving the first trimmed header, \black{the target changes to $K+o$ (where $o$ is the overhead discussed in Section \ref{unicast}), which ensures that decoding failure is extremely unlikely to occur (see Section \ref{raptorQ}).}
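The pull-budget arithmetic above is simple enough to state directly: with no loss the receiver sends $K - w$ pulls (the initial window of $w$ packets needs no pull), plus exactly one pull per trimmed header:

```python
def total_pull_requests(K, w, trimmed_headers):
    """Pull budget from the paragraph above: K - w pulls in the loss-free
    case, plus one extra pull per observed trimmed header. Every pulled
    symbol is fresh, so no redundant symbol is ever requested."""
    return (K - w) + trimmed_headers
```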
By prioritising earlier packets of a session over later ones through MLFQ, SCDP minimises loss for short flows. This has an extremely important corollary in terms of SCDP's computational cost; no decoding is required for the great majority of short flows, therefore completion times are almost always near-optimal. We evaluate this aspect of SCDP's design in Section \ref{network-overhead}. Note that for all supported types of communication, encoding latency can be masked either (1) by pre-encoding a number of repair symbols or (2) by generating repair symbols while sending source and previously generated repair symbols. The latter is possible due to the systematic nature of RaptorQ coding that enables senders to begin transmission before generating any repair symbols, by sending the original data fragments (i.e. source symbols).
\begin{figure}[t]
\setlength{\belowcaptionskip}{-4pt}
\centering
\includegraphics[scale=0.364]{fig4-workloads.pdf}\quad
\caption{Read/write workloads and replica placement policy used in evaluation. In our simulations, the selection of remote racks to store data blocks is random and racks in different pods can be selected (i.e. core switches are involved).}
\label{modern-workloads}
\vspace{-3mm}
\end{figure}
\begin{figure*}[t]
\subcaptionbox{rs = 1MB, $\lambda$ = 2000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig5a-v3-one-to-many-scdp-ndp-pias-1MB-2000.pdf}}\quad
\subcaptionbox{rs = 1MB, $\lambda$ = 4000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig5b-v3-one-to-many-scdp-ndp-pias-1MB-4000.pdf}}\quad
\subcaptionbox{rs = 4MB, $\lambda$ = 2000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig5c-v3-one-to-many-scdp-ndp-pias-4MB-2000.pdf}}\quad
\subcaptionbox{rs = 4MB, $\lambda$ = 4000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig5d-v3-one-to-many-scdp-ndp-pias-4MB-4000.pdf}}\quad
\caption{Performance comparison for SCDP, NDP and PIAS - write I/O with $3$ replicas (one-to-many)}
\label{3replica-multicast}
\vspace{-5mm}
\end{figure*}
\begin{figure*}[t]
\subcaptionbox{rs = 1MB, $\lambda$ = 2000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig6a-v3-many-to-one-scdp-ndp-pias-1MB-2000.pdf}}\quad
\subcaptionbox{rs = 1MB, $\lambda$ = 4000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig6b-v3-many-to-one-scdp-ndp-pias-1MB-4000.pdf}}\quad
\subcaptionbox{rs = 4MB, $\lambda$ = 2000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig6c-v3-many-to-one-scdp-ndp-pias-4MB-2000.pdf}}\quad
\subcaptionbox{rs = 4MB, $\lambda$ = 4000}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig6d-v3-many-to-one-scdp-ndp-pias-4MB-4000.pdf}}\quad
\caption{Performance comparison for SCDP, NDP and PIAS - read I/O with $3$ replicas (many-to-one)}
\label{3replica-multisource}
\vspace{-5mm}
\end{figure*}
\section{Experimental Evaluation}
\label{sec:evaluation}
We have extensively evaluated SCDP's performance through large scale, packet-level simulations and compared it to the state-of-the-art. To do so, we have developed OMNeT++ models for SCDP, NDP, PIAS, the respective switch service models, including MLFQ, and network-layer multicast \cite{multicastfattree}\footnote{Some of the models used in this paper have been published at the OMNeT++ Community Summit \cite{omnetpp-ndp-model}. More introductory details can be found in~\cite{moThesis}.}.
\noindent\black{\textbf{Simulation setup.} For our experimentation we have used a $250$-server FatTree topology with $25$ core switches and $5$ aggregation switches in each pod ($50$ aggregation switches in total). This is a typical size for a simulated data centre topology, also used in the evaluation of recently proposed protocols \cite{HOMA,PIAS,pHost-CoNEXT-2015, pfabric}. The default values for the link capacity, link delay and switch buffer size are 1 Gbps, $10\mu$s and $20$ packets, respectively. We have run each simulation $5$ times with different seeds and report average (with $95\%$ confidence intervals) or aggregate values.}
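The topology numbers above are consistent with a standard $k=10$ fat-tree; the following sketch (our own sanity check, not part of the simulator) reproduces them from the usual $k$-ary fat-tree formulas:

```python
def fat_tree(k):
    """Standard k-ary fat-tree counts: k pods, k/2 aggregation (and k/2
    edge) switches per pod, (k/2)^2 core switches, k^3/4 servers.
    k = 10 reproduces the simulated topology above."""
    return {
        "pods": k,
        "servers": k**3 // 4,
        "core": (k // 2) ** 2,
        "agg_per_pod": k // 2,
        "agg_total": k * (k // 2),
    }
```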
\noindent\black{\textbf{Multi-Level Feedback Queuing.} For protocols that rely on MLFQ, the switch buffer is allocated to $5$ packet queues with different scheduling priorities. The thresholds for demoting the priority for a specific session are statically assigned to 10KB, 100KB, 1MB and 10MB, respectively. In a real-world deployment these would be set dynamically, e.g. as in AuTO \cite{AUTO}.} \black{In the following, we briefly discuss details specific to the developed protocol models.}
\noindent\black{\textbf{SCDP.} We have implemented SCDP in full, as described in Section \ref{design}. For the MLFQ mechanism, the top priority queue is for pull and header packets, which are very small.
We model the decoding latency based on the results reported in~\cite{CodornicesRqNew}, by fitting the worst-case decoding latencies for different numbers of source symbols $K$ to a polynomial function. When calculating the completion time or goodput for a given SCDP session, we use the fitted model to extrapolate a decoding latency for the last block in the pipeline, and add it to the total time. We do not model the encoding latency as this can be easily masked by either (i) pre-computing repair symbols or (ii) encoding repair symbols while sending source symbols given that RaptorQ codes are systematic. The size of an encoding symbol (source or repair) is $1500$ bytes (i.e. one MTU). Unless otherwise stated, the initial window $w$ for one-to-one and one-to-many sessions is set to $12$ symbol packets. For many-to-one sessions $w$ is set to $6$ symbol packets per sender. For all experiments we set the block size for pipelining to $100$ MTU-sized symbols.}
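The decoding-latency accounting can be sketched as below. The polynomial degree and coefficients here are placeholders for illustration, not the values fitted from \cite{CodornicesRqNew}:

```python
def decoding_latency_ms(K, coeffs=(2.0e-5, 1.5e-3, 0.05)):
    """Stand-in for the fitted model: worst-case decoding latency as a
    quadratic in K. The coefficients (a, b, c) are placeholders, not the
    fitted values."""
    a, b, c = coeffs
    return a * K * K + b * K + c

def session_completion_time(transfer_time_ms, last_block_K):
    """Total time = transfer time + decoding latency of the final block
    only, since decoding of earlier blocks is masked by pipelining."""
    return transfer_time_ms + decoding_latency_ms(last_block_K)
```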
\noindent\black{\textbf{NDP}~\cite{NDP} is a receiver-driven, unicast data transport protocol. A sender initiates a flow by sending an initial window of data at line rate, as in SCDP. The receiver then pulls packets from the sender by sending pull requests. If a switch queue overflows, the packet data is trimmed and the header is priority-forwarded. The receiver adds a pull packet for each received data or header packet; pull packets are then paced from a single pull queue shared by all applications, based on the receiver's downlink rate. In our NDP model, the initial window value is set to 12 packets and all packets are $1500$ bytes long (i.e. one MTU). It has been shown (e.g. in \cite{Aeolus20}, \cite{AMRT20}, \cite{polo20} and \cite{opera20}) that NDP outperforms other modern data transport protocols (e.g. Homa~\cite{HOMA} and pHost~\cite{pHost-CoNEXT-2015}), therefore it constitutes a good baseline for our experimental evaluation.}
\noindent\black{\textbf{NDP+} is a simple extension of NDP that uses MLFQ and is included here to understand how MLFQ affects the performance of SCDP in relation to NDP. Note that results for NDP+ are not included in the plots to maintain clarity, but are reported when appropriate. We use the same priority demoting thresholds for NDP+ as in SCDP. Packets are set to be MTU-sized.}
\noindent\black{\textbf{PIAS}~\cite{PIAS} is a flow scheduling mechanism that leverages MLFQ and employs DCTCP \cite{DCTCP} for end-to-end data transport, which relies on Explicit Congestion Notification (ECN) in the network to provide multi-bit feedback to end hosts. For uniformity, we use the same priority demoting thresholds for PIAS as in SCDP and NDP+. Packets are set to be MTU-sized.}
\vspace{-3mm}
\subsection{Goodput for One-to-Many and Many-To-One Sessions}
\label{goodput-performance}
In this section we measure the application goodput for SCDP, NDP, NDP+ and PIAS in a distributed storage setup with $3$ replicas (as depicted in Figure \ref{modern-workloads}). The setup involves many-to-one and one-to-many communication. In each run, we simulate $2000$ transport sessions (or I/O requests at the storage layer) with sizes of either 1MB or 4MB (denoted as \emph{rs} in the figures). Transport session arrival times follow a Poisson process \black{with rate $\lambda$}; we have used different $\lambda$ values ($2000$ and $4000$) to assess the performance of the studied protocols under different loads. Each I/O request is `assigned' to a host in the network (denoted as $C_i$ in Figure \ref{modern-workloads}), which is selected uniformly at random and acts as the client. Replica selection and placement are based on HDFS's default policy. More specifically, we assume that clients are not data nodes themselves, therefore a data block is first placed on a randomly selected node (denoted as $R_i$ in Figure \ref{modern-workloads}). One replica is then stored on a node in a remote rack, and the last replica on a different node in the same remote rack. A client reads a block from a server located in the same rack or, if no replica is stored in the same rack, from a randomly selected one. In order to simulate congestion in the core of the network, $30\%$ of the nodes run background long flows, the scheduling of which is based on a permutation traffic matrix.
\noindent\textbf{One-to-many transport sessions. } We evaluate SCDP's performance in one-to-many traffic workloads and assess how it benefits from the underlying support for network-layer multicast, compared to NDP, NDP+ and PIAS. One-to-many communication with these protocols is implemented through (1) multi-unicasting data to multiple recipients (Figure \ref{modern-workloads}a) or (2) daisy-chaining the transmission of replicas through the respective servers (Figure \ref{modern-workloads}b). In daisy-chaining, each replica starts transmitting the data to the next replica server (according to HDFS's placement policy), as soon as it starts receiving data from another replica server. Daisy-chaining eliminates the bottleneck at the client's uplink. We measure the overall goodput from the time the client initiates the transmission until the last server receives the whole data. The results for various loads and I/O request sizes are shown in Figure \ref{3replica-multicast}. In all figures, flows are ranked according to the measured goodput (shown on the y axis). SCDP, with its natural load balancing and the support of multicast (Figure \ref{modern-workloads}c), significantly outperforms NDP and PIAS even when daisy-chaining is used for replicating data. Daisy-chaining is effective compared to multi-unicasting when the network is not heavily loaded. With SCDP, around $50\%$ of the sessions experience goodput that is over $90\%$ of the available bandwidth for 1MB sessions and $\lambda = 2000$. The remaining $50\%$ sessions still get a goodput performance over $60\%$ of the available bandwidth. When the network load is heavier, daisy-chaining does not provide any significant benefits over multi-unicasting because data needs to be moved in the data centre multiple times and congestion gets severe. 
For $\lambda = 4000$ and 4MB sessions, NDP's and PIAS' performance is significantly worse for most sessions, whereas SCDP still offers an acceptable transport service to all sessions. SCDP fully exploits the support for network-layer multicasting, providing superior performance to all storage clients because the required network bandwidth is minimised. Minimising the bandwidth requirements of one-to-many flows, which are extremely common in the data centre, frees up capacity for regular short and long flows. For the experimental setup with the heaviest network load ($\lambda = 4000$ and 4MB sessions), we have measured the average goodput for SCDP background traffic to be 0.408 Gbps, compared to 0.252 Gbps and 0.182 Gbps for the NDP and PIAS experiments, respectively\footnote{This improvement for background flows comes despite them running at the lowest possible priority and spanning the whole duration of the simulation.}. This is 15.6\% of the available bandwidth freed up for regular unicast flows. We evaluate the positive effect that SCDP has with respect to network hotspots in Section \ref{minimising-hotspots}.
\textcolor{black}{NDP+ is on average $14\%$ better than NDP and $21\%$ worse than SCDP, in terms of measured goodput. This reinforces our argument that the performance gains in one-to-many communication are mostly due to exploiting the supported network-level multicast. PIAS performs worse than SCDP, NDP and NDP+ because it relies on DCTCP for data transport and, as a result, suffers from the limitations of a single-path protocol (i.e. lack of support for multi-path transport and packet spraying).}
\noindent\textbf{Many-to-one transport sessions. }In the many-to-one scenario, clients read previously stored data from the network. SCDP naturally balances this load according to servers' capacity and network congestion, as discussed in Section \ref{multi-source} (see Figure \ref{modern-workloads}e). With NDP and PIAS, clients read data either from a replica server located in the same rack or, if there is no replica stored in the same rack, from a randomly selected server. We simulate both a single-block (see Figure \ref{modern-workloads}d) and a multi-block request workload. The latter enables parallelisation at the application layer (e.g. the read-ahead optimisation where a client reads multiple consecutive blocks under the assumption that they will soon be requested). Here, we simulate a $3$-block read-ahead policy and measure the overall goodput from the time the I/O request is issued until all $3$ blocks are fetched. To make the results as comparable to each other as possible, for the $3$-block setup we use blocks whose size is one third of the size used in the single-block scenario (as reported in Figure \ref{3replica-multisource}). We do not include multi-block results for SCDP as they are almost identical to the single-block case, confirming the argument that it naturally distributes the load without any application-layer parallelisation. In Figure \ref{3replica-multisource} we observe that SCDP significantly outperforms NDP and PIAS for all different request sizes and $\lambda$ values. Even under heavy load, SCDP provides acceptable performance to all transport sessions. This is the result of (1) the natural and dynamic load balancing provided to SCDP's many-to-one sessions and (2) MLFQ; long background flows run at the lowest priority to boost the performance of shorter flows. Around $82\%$ of the sessions experience goodput that is above $90\%$ of the available bandwidth for 1MB sessions and $\lambda = 2000$.
In contrast, NDP and PIAS offer this good performance to only $10\%$ and $23\%$ of the sessions, respectively. For $\lambda = 4000$ and 4MB sessions, NDP's and PIAS' performance is significantly worse for most sessions, whereas SCDP still offers good performance to all sessions. Notably, the performance difference between SCDP and both NDP and PIAS increases with the congestion in the network, with SCDP being able to provide acceptable levels of performance where NDP and PIAS would not (e.g. in the presence of hotspots or in over-subscribed networks). \black{NDP+ outperforms both NDP and PIAS, providing on average a $7\%$ goodput improvement over NDP and PIAS. This shows that only a small part of SCDP's performance gains over NDP comes from MLFQ. The key differentiator is the natural load balancing that is enabled by RaptorQ codes; a congested server will not slow down the session because the rest of the senders will contribute most of the needed source and repair symbols. It is worth noting that PIAS shows better performance for some sessions compared to NDP (for $\lambda = 2000$ and 1MB sessions). The benefit becomes clearer for larger flows, at the highest inter-arrival rate. However, this does not come for free; the cost is borne by background traffic. For the experimental setup with $\lambda = 4000$ and 4MB sessions, we have measured the average goodput for NDP background traffic (not shown in the figures) to be 0.342 Gbps, compared to 0.152 Gbps for the respective PIAS experiment.}
\begin{figure*}[t]
\centering
\subcaptionbox{ (0, 100KB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig7a-part1-websearch-type1-pias.pdf}}\quad
\subcaptionbox{ (0, 100KB] 99th percentile}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig7b-part2-websearch-type1-pias.pdf}}\quad
\subcaptionbox{ (100KB, 1MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig7c-part3-websearch-type1-pias.pdf}}\quad
\subcaptionbox{ (1MB, 10MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig7d-websearch-type1.pdf}}\quad
\caption{Web search workload with unicast flows as background traffic}
\label{fct-ws-workloads-type-1}
\vspace{-3mm}
\end{figure*}
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-3pt}
\subcaptionbox{ (0, 100KB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig8a-part1-datamin-type1-pias.pdf}}\quad
\subcaptionbox{(0, 100KB] 99th percentile}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig8b-part2-datamin-type1-pias.pdf}}\quad
\subcaptionbox{ (100KB, 1MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig8c-part3-datamin-type1-pias.pdf}}\quad
\subcaptionbox{ (1MB, 10MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig8d-datamin-type1.pdf}}\quad
\caption{Data mining workload with unicast flows as background traffic}
\label{fct-dm-workloads-type-1}
\vspace{-3mm}
\end{figure*}
\begin{figure}[htp]
\setlength{\belowcaptionskip}{-3pt}
\centering
\subcaptionbox{(0, 100KB]}[.48\linewidth][c]{%
\includegraphics[width=0.9\linewidth,height=0.12\textheight]{fig9a-10gbps.pdf}}\quad
\subcaptionbox{(100KB, 1MB]}[.48\linewidth][c]{%
\includegraphics[width=0.9\linewidth,height=0.12\textheight]{fig9b-10gbps.pdf}}\quad
\caption{Web search workload with 10Gbps links}
\label{highspeedlinks}
\vspace{-5.5mm}
\end{figure}
\vspace{-4mm}
\subsection{Performance Benchmarking with Realistic Workloads}
\label{realistic-workloads}
SCDP is designed to be a general-purpose transport protocol for data centres; it is therefore crucial that it provides high performance for all supported transport modes and traffic workloads. In this section, we use realistic workloads reported by data centre operators to evaluate SCDP's applicability and effectiveness beyond one-to-many and many-to-one sessions. Here, we consider two typical services: \textit{web search} and \textit{data mining} \cite{VL2,DCTCP}. The respective flow size distributions are shown in Table \ref{table-fct-workloads}. They are both heavy-tailed; i.e. a small fraction of long flows contributes most of the traffic. We have chosen the workloads to cover a wide range of average flow sizes, from $64$KB to $7.4$MB. \black{We simulate six target loads of background traffic ($0.3$, $0.4$, $0.5$, $0.6$, $0.7$ and $0.8$).} We generate $20000$ transport sessions whose arrivals follow a Poisson process with $\lambda = 2500$. In Figures \ref{fct-ws-workloads-type-1}a, \ref{fct-ws-workloads-type-1}c, \ref{fct-dm-workloads-type-1}a and \ref{fct-dm-workloads-type-1}c, we report the average flow completion time (FCT) of flows with sizes in (0, 1MB]. For the shortest flows ((0, 100KB]) we also report the 99th percentile of the measured FCTs (Figures \ref{fct-ws-workloads-type-1}b and \ref{fct-dm-workloads-type-1}b). Finally, Figures \ref{fct-ws-workloads-type-1}d and \ref{fct-dm-workloads-type-1}d illustrate the measured goodput for flows with sizes in (1MB, 10MB] ($4000$ and $1500$ flows in the web search and data mining workloads, respectively), for a load value of $0.8$.
\begin{table}[ht!]
\scriptsize
\centering
\begin{tabular}{c|c|c|c|c|c|}
\cline{2-6}
& \begin{tabular}[c]{@{}c@{}}0 - \\ 10KB\end{tabular} & \begin{tabular}[c]{@{}c@{}}10KB - \\ 100KB\end{tabular} & \begin{tabular}[c]{@{}c@{}}100KB -\\ 1MB\end{tabular} & \begin{tabular}[c]{@{}c@{}}1MB -\\ 10MB\end{tabular} & \begin{tabular}[c]{@{}c@{}}Average \\ flow size\end{tabular} \\ \hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Web \\ Search \cite{DCTCP}\end{tabular}} & 19\% & 43\% & 18\% & 20\% & 1.6MB \\ \hline
\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Data \\ Mining \cite{VL2}\end{tabular}} & 78\% & 5\% & 8\% & 9\% & 7.4MB \\ \hline
\end{tabular}
\caption{Flow size distribution of realistic workloads}
\label{table-fct-workloads}
\end{table}
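The workload generation described above (Poisson session arrivals, flow sizes drawn from the bucketed distribution of Table \ref{table-fct-workloads}) can be sketched as a simplified toy; sampling uniformly \emph{within} a bucket is our assumption, not a detail reported by the operators.

```python
import random

# Hypothetical sketch of the workload generator (not our OMNeT++ setup):
# Poisson arrivals with rate lam mean exponential inter-arrival gaps;
# flow sizes follow the web-search bucket shares (19/43/18/20 %).
def generate_sessions(n, lam, seed=0):
    rng = random.Random(seed)
    buckets = [(0.19, 1.0, 10e3),    # 0 - 10KB   : 19%
               (0.43, 10e3, 100e3),  # 10 - 100KB : 43%
               (0.18, 100e3, 1e6),   # 100KB - 1MB: 18%
               (0.20, 1e6, 10e6)]    # 1 - 10MB   : 20%
    t, sessions = 0.0, []
    for _ in range(n):
        t += rng.expovariate(lam)        # exponential gap
        u, acc = rng.random(), 0.0
        for share, lo, hi in buckets:    # pick a bucket by its share
            acc += share
            if u <= acc:
                break
        # loop variables persist: the last bucket acts as a catch-all
        sessions.append((t, rng.uniform(lo, hi)))
    return sessions
```

The uniform within-bucket model is crude but preserves the heavy tail: the top bucket holds 20\% of the flows yet dominates the total bytes.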
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-3pt}
\centering
\subcaptionbox{ (0, 100KB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig10a-part1-websearch-type2-pias.pdf}}\quad
\subcaptionbox{ (0, 100KB] 99th percentile}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig10b-part2-websearch-type2-pias.pdf}}\quad
\subcaptionbox{ (100KB, 1MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig10c-part3-websearch-type2-pias.pdf}}\quad
\subcaptionbox{ (1MB, 10MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig10d-websearch-type2.pdf}}\quad
\caption{Web search workload with a mixture of one-to-many and many-to-one sessions as background traffic}
\label{fct-ws-workloads-type-2}
\vspace{-3mm}
\end{figure*}
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-2pt}
\subcaptionbox{ (0, 100KB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig11a-part1-datamin-type2-pias.pdf}}\quad
\subcaptionbox{(0, 100KB] 99th percentile}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig11b-part2-datamin-type2-pias.pdf}}\quad
\subcaptionbox{ (100KB, 1MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig11c-part3-datamin-type2-pias.pdf}}\quad
\subcaptionbox{ (1MB, 10MB]}[.231\linewidth][c]{%
\includegraphics[scale=0.2]{fig11d-datamin-type2.pdf}}\quad
\caption{Data mining workload with a mixture of one-to-many and many-to-one sessions as background traffic}
\label{fct-dm-workloads-type-2}
\vspace{-3mm}
\end{figure*}
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-3pt}
\centering
\subcaptionbox{Incast: goodput comparison}[.23\linewidth][c]{%
\includegraphics[scale=0.2]{fig12a-incastv2.pdf}}\quad
\subcaptionbox{Incast: FCT with 70 senders}[.23\linewidth][c]{%
\includegraphics[scale=0.2]{fig12b-fct-incast.pdf}}\quad
\subcaptionbox{Outcast: setup}[.19\linewidth][c]{%
\includegraphics[scale=0.17]{fig12c-outcast-exp.pdf}}\quad
\subcaptionbox{Outcast: goodput}[.23\linewidth][c]{%
\includegraphics[scale=0.2]{fig12d-outcast.pdf}}\quad
\caption{Incast and Outcast evaluation}
\label{incast-outcast-fig}
\vspace{-3mm}
\end{figure*}
SCDP performs better in all scenarios due to the decoding-free completion of (almost all) short flows and the supported MLFQ. Note that when loss occurs, SCDP sessions must exchange $2$ additional symbols; they also pay the `decoding latency' price. For very short flows, the 99th percentile FCT is close to the average one for all loads, which indicates that this rarely happens. We quantify how often this overhead and the associated decoding latency are incurred in Section~\ref{network-overhead}. For higher loads, NDP falls further behind SCDP because it lacks MLFQ support, which results in the trimming of more packets belonging to short flows.
Note that the FCT of short flows in web search is larger than in data mining. This is mainly because the percentage of long flows in the former workload is larger than in the latter, resulting in a higher overall load (for all fixed loads of background traffic). A key message here is that SCDP provides significantly better tail performance for short flows compared to NDP and PIAS, especially as the network load increases, despite the (very unlikely) potential for decoding and network overhead. For flows with sizes in (1MB, 10MB], we observe that goodput with SCDP is better than with NDP and PIAS; tail performance is also better.\\
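The statistics reported in the figures (average and 99th-percentile FCT) can be computed from a list of measured completion times; the sketch below uses the generic nearest-rank percentile and is not our measurement harness.

```python
import math

def fct_stats(fcts):
    """Average and 99th-percentile flow completion time.
    Uses the nearest-rank definition: the value at rank ceil(0.99 * n)."""
    s = sorted(fcts)
    avg = sum(s) / len(s)
    p99 = s[math.ceil(0.99 * len(s)) - 1]  # 1-based rank -> 0-based index
    return avg, p99

# 100 samples 1..100 ms: average 50.5, 99th percentile 99.
avg, p99 = fct_stats(list(range(1, 101)))
```

A tail percentile close to the average, as observed for SCDP's short flows, indicates that almost no flow pays the loss/decoding penalty.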
\textcolor{black}{For all realistic workloads, NDP+ performance is better than NDP and on par with SCDP when the load is not very high. NDP+ performs slightly better than SCDP when the load is very high, due to SCDP’s induced decoding latency when loss occurs. For example, for $0$-$100$KB flows and 0.8 load, the average FCT for SCDP is 0.31ms and 0.292ms for NDP+. This reinforces that the latency penalty due to decoding is almost negligible because MLFQ prevents losses for very short flows.}
It is worth noting here that in our experiments, NDP outperforms PIAS. This contradicts the results presented in \cite{HOMA} for the same workload. We believe that this happens because the experimental setup in \cite{HOMA} is such that packet losses for short messages are very rare (if not non-existent) due to large buffers (in contrast to the, admittedly more realistic, experimental setup in this paper); short flows can therefore use the highest priority queue and complete quickly without any losses. This is also mentioned in \cite{Aeolus20}, where it is stated that ``one possible reason is that Homa assumes infinite switch buffers in their simulations. In contrast, in our simulations, we allocate 500KB buffer for each switch port''. We have reproduced the experiment in \cite{HOMA} using our OMNeT++ models by allocating a very large buffer to all queues, eliminating losses for short flows. In this setup we observed that the average FCT for PIAS, when the network load is 0.8, drops from 0.43ms with a 100KB switch buffer (Figure~\ref{fct-ws-workloads-type-1}a) to 0.32ms with a very large buffer. This is indeed better than the average FCT observed for NDP. It is, however, worth noting that this improvement does not come for free; the average goodput for longer flows drops from $\sim$0.7Gbps (Figure~\ref{fct-ws-workloads-type-1}d) to $\sim$0.5Gbps. In other words, by eliminating loss for short flows, loss becomes more frequent in the lower priority queues occupied by packets belonging to longer flows. In general, we argue that PIAS performs worse than NDP because (1) it relies on DCTCP for data transport and, as a result, suffers from the limitations of a single-path protocol (i.e. lack of support for multi-path transport and packet spraying); (2) connection establishment requires a three-way handshake and senders start with a small window, both of which can severely hurt FCTs for short flows; and (3) buffer occupancy in NDP is significantly lower than in PIAS \cite{NDP}, which also affects performance for short flows.
\vspace{-3mm}
\subsection{Experimentation with 10Gbps Links}
\label{10-links}
Our decision to use 1Gbps links was solely driven by the very expensive nature of simulations, in terms of computational and memory resources. OMNeT++ is a packet-level simulator which means that by increasing the supported link rates by one order of magnitude (or more), the number of ‘live’ packets in the simulated network would dramatically increase, requiring extremely large amounts of memory and processing power to store and process all simulated packets. We are confident that our results are representative of SCDP's general behaviour and performance, compared to the state of the art. There are two aspects of SCDP that would need to be considered when deployed in faster networks; (1) the decoding latency would be more prominent in FCTs of short flows, because the actual data transmission would be faster; (2) the value of the initial window would need to be larger, in order for receivers to be able to run their links at capacity. We have performed experimentation to explore these two aspects; (1) regarding decoding latency, we have experimented with short flows in the context of the ‘web search’ workload in a simulated network with 10Gbps links and two different network loads (0.5 and 0.9). The results are shown in Figure \ref{highspeedlinks}. We observe that the flow completion times for both SCDP and NDP are (as expected) roughly an order of magnitude smaller compared to the respective results in Figures \ref{fct-ws-workloads-type-1}a and \ref{fct-ws-workloads-type-1}c. SCDP still performs better compared to NDP despite the fact that the decoding latency is now more prominent in the flow completion time. When the network load is very high, the gap between SCDP and NDP is at its smallest, because losses (trimmed packets) and therefore decoding are more frequent. It is worth pointing out that, in this experiment, we have not changed our underlying model for decoding latency, which is based on the results presented in~\cite{CodornicesRqNew}. 
In~\cite{CodornicesRqNew}, receivers were able to decode (roughly) at 1.3Gbps. However, in \cite{LiquidCloudStorage}, the authors report substantially higher decoding throughputs (up to 10 Gbps), which provides confidence that, in combination with SCDP’s pipelining mechanism, decoding will not be a bottleneck. Future hardware offloading approaches could potentially render decoding of small blocks negligible. The issue of selecting the value of the initial window in a 10Gbps setup is discussed in Section \ref{window-size-eval}.
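A back-of-the-envelope check makes the point above concrete: the share of a short flow's completion time spent decoding grows with link rate, because transmission time shrinks while decode throughput stays fixed. The numbers below are illustrative assumptions (a 1.3Gbps decoder, as in the figure quoted above), not measurements.

```python
# Fraction of a flow's completion time attributable to decoding, under a
# naive serial model (transmit fully, then decode). SCDP's pipelining
# overlaps the two, so this is an upper bound on the decode share.
def decode_share(size_bits, link_bps, decode_bps):
    tx = size_bits / link_bps      # transmission time
    dec = size_bits / decode_bps   # decoding time
    return dec / (tx + dec)

# 100KB flow, 1.3Gbps decoder:
share_1g  = decode_share(8 * 100e3, 1e9,  1.3e9)   # ~0.43 at 1Gbps
share_10g = decode_share(8 * 100e3, 10e9, 1.3e9)   # ~0.88 at 10Gbps
```

This is why the gap between SCDP and NDP narrows at 10Gbps under very high load, and why the higher decode throughputs of \cite{LiquidCloudStorage} matter.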
\vspace{-3mm}
\subsection{Minimising Hotspots in the Network}
\label{minimising-hotspots}
SCDP increases network utilisation by exploiting support for network-layer multicasting and enabling load balancing when data is fetched simultaneously from multiple servers, as demonstrated in Section \ref{goodput-performance}. This, in turn, makes space in the network for regular short and long flows. In this section, we evaluate this performance benefit. We use as background traffic a 50\%/50\% mixture of write and read I/O requests (4MB each) that produce one-to-many and many-to-one traffic, respectively. We repeat the experiment of the previous section and evaluate the performance benefits of SCDP over NDP and PIAS with respect to minimising hotspots and maximising network utilisation for regular short and long flows.
In Figures \ref{fct-ws-workloads-type-2}a and \ref{fct-ws-workloads-type-2}c, we observe that SCDP's performance is almost identical to that reported in Figures \ref{fct-ws-workloads-type-1}a and \ref{fct-ws-workloads-type-1}c (and similarly between Figure \ref{fct-dm-workloads-type-1} and Figure \ref{fct-dm-workloads-type-2}). In contrast, NDP's and PIAS' performance deteriorates significantly because the background traffic requires more bandwidth (one-to-many) and results in hotspots at servers' uplinks (many-to-one). Tail performance for SCDP gets only marginally worse (the 99th percentile increases from 0.277ms to 0.287ms for the web search workload at load 0.5), whereas NDP's and PIAS' performance gets significantly worse (the 99th percentile increases from 0.306ms to 0.381ms for NDP and from 0.386ms to 0.48ms for PIAS at load 0.5). The observed behaviour is more pronounced in the web search workload which, as described in the previous section, results in higher overall network utilisation compared to the data mining workload.
\begin{figure*}[tb]
\setlength{\belowcaptionskip}{-6pt}
\minipage{0.5\textwidth}
\centering
\subcaptionbox{Goodput performance}[.42\linewidth][c]{%
\includegraphics[scale=0.22]{fig13a-iw-scdp-1t1.pdf}}\quad\quad
\subcaptionbox{\# trimmed pkts}[.42\linewidth][c]{%
\includegraphics[scale=0.22]{fig13b-IW-err.pdf}}\quad
\caption{The effect of the initial window size}
\label{IWeffect}
\endminipage\hfill
\minipage{0.25\textwidth}
\centering
\includegraphics[scale=0.22]{fig14-dec.pdf}\quad
\quad
\caption{\# decoded sessions}
\label{eval-overhead-sessions}
\endminipage\hfill
\minipage{0.25\textwidth}
\centering
\includegraphics[scale=0.22]{fig15-iw-speeds.pdf}
\caption{Varying \textit{w} \& link rates}
\label{iw-speeds}
\endminipage\hfill
\end{figure*}
\begin{figure*}[t]
\setlength{\belowcaptionskip}{-4pt}
\minipage{0.75\textwidth}
\centering
\subcaptionbox{One-to-many - $1$MB}[.3\linewidth][c]{%
\includegraphics[width=1\linewidth]{fig16a-multicast-overhead1.pdf}}\quad
\subcaptionbox{One-to-many - $3$MB}[.3\linewidth][c]{%
\includegraphics[width=1\linewidth]{fig16b-multicast-overhead2.pdf}}\quad
\subcaptionbox{Goodput - $\lambda = 4000$}[.3\linewidth][c]{%
\includegraphics[width=1\linewidth]{fig16c-lamda4000.pdf}}\quad
\caption{Unnecessary network overhead in one-to-many sessions}
\vspace{-2mm}
\label{eval-unnecessary-overhead}
\endminipage\hfill
\minipage{0.23\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{fig17-fairness.pdf}
\caption{Convergence test}
\label{fairness}
\endminipage\hfill
\vspace{-2mm}
\end{figure*}
\vspace{-3mm}
\subsection{Eliminating Incast and Outcast}
\label{incast-section}
SCDP eliminates Incast by integrating packet trimming and not relying on retransmissions of lost packets due to the rateless nature of RaptorQ codes. We simulated Incast by having multiple senders (ranging from $1$ to $70$) sending blocks of data ($70$KB and $256$KB, each, in two separate experiments) to a single receiver. All sessions were synchronised and background traffic was present to simulate congestion. Figure \ref{incast-outcast-fig}a illustrates the measured aggregated goodput for all SCDP, NDP and TCP flows. As expected, TCP's performance collapses when the number of senders increases. SCDP performs slightly better compared to NDP even when a large number of servers send data to the receiver at the same time. This is attributed to the decoding-free completion of these flows, in combination with the packet trimming and the lack of retransmissions for SCDP. Figure \ref{incast-outcast-fig}b shows the CDF of the FCTs in the presence of Incast with 70 senders. We observe that for the vast majority of transport sessions, SCDP provides superior performance compared to NDP.\\
SCDP eliminates Outcast by employing receiver-driven flow control and packet trimming, which prevent port blackout. We have simulated a classic Outcast scenario, where two receivers connected to the same ToR switch receive traffic from senders located in the same pod (2 flows crossing 4 hops) and in different pods (12 flows crossing 6 hops), respectively. The flow size is 200KB and all flows start at the same time. This is illustrated in Figure \ref{incast-outcast-fig}c. Here, the bottleneck link lies between the aggregate switch and the ToR switch, which is different from the Incast setup. Figure \ref{incast-outcast-fig}d shows the aggregate goodput for the two groups of flows, for SCDP and TCP. TCP Outcast manifests itself through (1) unfair sharing of the bottleneck bandwidth (around 113 and 274 Mbps for the two groups of flows, respectively) and (2) suboptimal overall performance (around 0.387 Gbps). SCDP eliminates Outcast, as the bottleneck is shared fairly between the two groups of flows (around 460 and 435 Mbps, respectively, with overall goodput around 0.9 Gbps).
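The fairness claim above can be quantified with Jain's fairness index applied to the reported per-group goodputs; the computation below uses the numbers from the text.

```python
# Jain's fairness index: 1.0 means a perfectly fair share, 1/n means one
# party takes everything.
def jain_index(xs):
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

# Goodputs (Mbps) of the two flow groups, as reported above:
tcp_fairness  = jain_index([113, 274])   # ~0.85: TCP Outcast is unfair
scdp_fairness = jain_index([460, 435])   # ~1.00: near-perfect sharing
```

The index also captures the efficiency gap: SCDP's groups jointly use roughly 0.9 Gbps of the bottleneck versus TCP's 0.387 Gbps.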
\subsection{The effect of the initial window size}
\label{window-size-eval}
A key parameter of SCDP is the \black{initial window $w$ of symbol packets that a sender pushes to the network. The window is maintained throughout the lifetime of a session and is only decreased for the last $w$ pull packets}. Here, we evaluate the effect that the initial window has on the performance of SCDP. The experimental setup is as described in Section \ref{goodput-performance}, with $1.5$MB unicast sessions (we evaluated one-to-many and many-to-one sessions as well, which showed similar results to the unicast sessions). In Figure \ref{IWeffect}a, we observe that for very small values of the initial window, goodput is very low and the receiver's downlink is underutilised. As the window increases, utilisation approaches the maximum available link capacity (for $12$ symbol packets).
For larger values of the initial window (up to 24 symbol packets), the measured goodput is consistently high (i.e. the downlink runs at full capacity). Increasing the window inevitably leads to more trimmed packets due to the added network load when pushing symbol packets. This is illustrated in Figure \ref{IWeffect}b, where the average number of trimmed packets for session sizes of $1.5$MB grows from $13$ for an initial window of $12$ to $32$ trimmed packets for an initial window of $20$. We can therefore conclude that there is a relatively wide range of window values for which performance is consistently high. To further explore this point, we repeated the same experiment with the initial window set to $52$ packets; in Figure \ref{IWeffect}a, we observe that goodput deteriorates significantly. This is because the initial `push' phase results in severe congestion and loss, which, in turn, results in (1) significant network overhead induced by the large number of trimmed packets (39 packets on average for each SCDP session) that are forwarded with priority over all other symbol packets; (2) decoding latency being induced in a larger number of SCDP sessions; and (3) large batches of pull requests that potentially block pull requests belonging to other sessions.
We also explore the effect of the initial window value with different link rates (otherwise keeping the experimental setup unchanged). In Figure~\ref{iw-speeds}, we clearly observe that, as the supported link rate increases, the value of the initial window must also be increased in order to fully utilise the receivers’ downlink (12 symbols for 1Gbps link, 24 symbols for 5Gbps link and 32 symbols for 10Gbps).
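The measured sweet spots are consistent with a bandwidth-delay-product (BDP) intuition: the initial window should cover roughly one base RTT worth of symbols. The sketch below is our own reasoning, not SCDP's window-selection rule, and the $\sim$144$\mu$s effective RTT is an assumed value chosen to match the 1Gbps data point; note that the measured windows (12/24/32) grow sub-linearly with link rate, so a single fixed RTT does not explain all three points.

```python
import math

# BDP-based initial window sketch (hypothetical rule, assumed RTT):
# enough in-flight symbols to keep the downlink busy for one RTT.
def initial_window(link_bps, rtt_s, symbol_bytes=1500):
    bdp_bytes = link_bps * rtt_s / 8
    return math.ceil(bdp_bytes / symbol_bytes)

# With an assumed ~144us effective base RTT, the 1Gbps measurement
# (12 symbols) is reproduced; the 5/10Gbps values are empirical.
w_1g = initial_window(1e9, 144e-6)
```

In faster networks queueing delay shrinks, which is one plausible reason the required window grows more slowly than the raw link rate.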
\vspace{-3.5mm}
\subsection{Network Overhead and Induced Decoding Latency}
\label{network-overhead}
SCDP provides zero-overhead data transport when no loss occurs. Otherwise, \black{an overhead $o$ of $2$ extra symbols (compared to the number of original fragments $K$)} is required by the decoder to decode the source block (with extremely high probability). Additionally, the required decoding induces latency in receiving the original source block. Short flows in data centres are commonly latency-sensitive, so SCDP must be able to provide decoding-free completion of such flows. To assess the efficacy of our MLFQ-based approach, we measure the number of unicast flows that suffer symbol packet loss for different network loads ranging from $0.5$ to $0.7$. For each network load, we examine different $\lambda$ values for the Poisson inter-arrival rate of the studied short flows ($150$KB). In each simulation, we generate $5000$ sessions with the respective $\lambda$ value as their inter-arrival rate. In Figure \ref{eval-overhead-sessions}, we observe that for load values of $0.5$ and $0.6$, the fraction of short flows that would require decoding and $2$ extra symbol packets is very small (0.44\% and 1.2\% of the flows, respectively, when $\lambda$ = 8000), rendering the respective overhead negligible.
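The fixed $o=2$ overhead reflects a decode-failure model commonly quoted in the RaptorQ literature (an approximation we adopt for illustration, not a result of this paper): failure probability of roughly $10^{-2}$ with exactly $K$ received symbols, dropping about two orders of magnitude per extra symbol.

```python
# Commonly quoted RaptorQ decode-failure approximation (assumption, not
# from this paper): P_fail ~ 10^(-2 * (o + 1)) with o extra symbols.
def fail_prob(extra_symbols):
    return 10 ** (-2 * (extra_symbols + 1))

# With o = 2, failure probability is ~1e-6: "extremely high probability"
# of decoding, at a small relative cost. For a 150KB flow with 1500-byte
# symbols (K = 100), the 2 repair symbols amount to 2% extra traffic,
# and only for the small fraction of flows that suffer loss at all.
relative_overhead = 2 / 100
```

Combined with the measured loss rates above (0.44--1.2\% of short flows), the expected extra traffic is on the order of $0.01 \times 0.02$, i.e. a few hundredths of a percent.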
\vspace{-3mm}
\subsection{Overhead in One-to-Many Sessions}
\label{unnecessary-overhead}
In Section \ref{multicast}, we identified a limitation of SCDP with respect to unnecessary network overhead which may occur in one-to-many transport sessions in the presence of congestion. This is due to receivers getting behind with the reception of symbols. Consequently, up-to-date receivers will be receiving more symbols than what they actually need. In order to evaluate the extent of this limitation we set up a similar experiment to the one presented in Section \ref{goodput-performance}. Figures \ref{eval-unnecessary-overhead}a and \ref{eval-unnecessary-overhead}b depict the CCDF of the number of symbols that were sent unnecessarily for different values of $\lambda$, and session sizes. We observe that as the network load increases, the number of sessions that induce unnecessary network overhead increases. It is important to note that, even when this happens, the measured goodput for SCDP is significantly better than that of NDP. Figure \ref{eval-unnecessary-overhead}c illustrates the measured goodput for the examined session sizes and highest network load ($\lambda = 4000$). Clearly, SCDP significantly outperforms NDP despite the potential for some unnecessary network overhead. The benefit of exploiting network-layer multicast makes this potential overhead negligible.
\vspace{-4mm}
\subsection{Resource Sharing}
\label{fairness-evaluation}
SCDP achieves excellent fairness due to the following design principles: (1) receivers pull symbol packets from one or more senders in the data centre at a pace that matches their downlink bandwidth. Given that servers are uniformly connected to the network with respect to link speeds, SCDP enables fair sharing of the network among servers. (2) A receiver pulls symbol packets for each SCDP session on a round-robin basis. As a result, SCDP enables fair sharing of its downlink among all transport sessions running at a specific receiver. It would be straightforward to support priority scheduling at the receiver. (3) SCDP employs MLFQ in the network. Obviously, this prioritisation scheme provides fairness between competing flows only within the same priority level. In Figure \ref{fairness} we report goodput results with respect to the convergence behaviour of 5 SCDP unicast sessions that start sequentially at $2$-second intervals, each lasting $18$ seconds, from 5 sending servers to the same receiving server under the same ToR switch. SCDP performs as well as DCTCP in that respect \cite{DCTCP}. Clearly, flows acquire a fair share of the available bandwidth very quickly. Each incoming flow is initially prioritised over the ongoing flows (MLFQ) but, given the reported time scales, this cannot be shown in Figure \ref{fairness}. We have repeated this experiment with a larger number of flows and found that SCDP converges quickly, with all flows achieving their fair share.
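Design principle (2) above, the receiver-side round-robin pull scheduler, can be sketched as follows; this is a minimal toy (our own names and structure), not the SCDP implementation.

```python
from collections import deque

# Minimal round-robin pull scheduler sketch: each active session gets one
# pull slot per cycle of the receiver's downlink; sessions with no
# outstanding pulls drop out of the rotation.
def pull_order(sessions, slots):
    """sessions: {session_id: outstanding pull count}.
    Returns the sequence of session ids that pulls are issued for."""
    q = deque(sessions)          # rotation in insertion order
    order = []
    while q and len(order) < slots:
        sid = q.popleft()
        if sessions[sid] > 0:
            sessions[sid] -= 1
            order.append(sid)
            q.append(sid)        # stays in the rotation
    return order

# Two sessions with equal demand alternate strictly:
seq = pull_order({"a": 3, "b": 3}, 6)   # ['a','b','a','b','a','b']
```

Priority scheduling, as noted above, would only require replacing the FIFO rotation with a priority queue over sessions.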
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed SCDP, a general-purpose transport protocol for data centres that is the first to exploit network-layer multicast in the data centre and balance load across senders in many-to-one communication, while performing at least as well as the state of the art with respect to goodput and flow completion time for long and short unicast flows, respectively. Supporting one-to-many and many-to-one application workloads is very important given how extremely common they are in modern data centres \cite{ElmoMulticast}. SCDP achieves this remarkable combination by integrating systematic rateless coding with receiver-driven flow control, packet trimming and in-network priority scheduling.
RaptorQ codes incur some minimal network overhead, but only when loss occurs in the network; our experimental evaluation showed that this is negligible compared to the significant performance benefits of supporting one-to-many and many-to-one workloads. RaptorQ codes also incur computational overhead and associated latency when loss occurs. However, we showed that this is rare for short flows because of MLFQ. For long flows, block pipelining alleviates the problem by splitting large blocks into smaller ones and decoding each of these smaller blocks while retrieving the next one. As a result, latency is incurred only for the last smaller block. RaptorQ codes have been shown to perform at line speeds even on a single core; we expect that with hardware offloading the overall overhead will not be significant.
\black{As part of our future work, we aim at developing an SCDP prototype (in-kernel and/or using user-space network stack) and exploring its performance with real application workloads. We will also explore machine learning-based approaches for setting the initial window on a per-flow basis. More specifically, we will investigate the applicability of Reinforcement Learning in updating the initial window value for new or existing flows based on the (partially) observable state of the network (e.g. as \cite{sigcomm-rl-2020} performs congestion control). A key argument in this paper was that RaptorQ coding should be the centrepiece of the data transport mechanism, in order to enable a unified approach for efficiently dealing with all supported communication modes. As part of our future work, we will investigate this argument further by developing extensions of existing unicast data centre protocols (e.g. \cite{NDP}) that can handle one-to-many and many-to-one data transport and compare their performance with SCDP.}
\bibliographystyle{IEEEtran}
\section{Introduction}
In non-relativistic topological matter, an effectively quasi-relativistic description of low-energy quasiparticles with a linear spectrum may emerge \cite{Volovik2003, Horava05}. In particular, in three spatial dimensions, at a generic (two-fold) degenerate fermion band crossing at momentum $\mathbf{p}_W$, the Hamiltonian is of the Weyl form \cite{Herring1937, Abrikosov1971, NielsenNinomiya83, Volovik2003}
\begin{align}
H_{W} = \sigma^a e^{i}_a (p- p_W)_i + \cdots \label{eq:WeylHamiltonian}
\end{align}
where $e^{i}_a = \partial_{p_i} H(\mathbf{p})\vert_{p_W}$ are the linear coefficients of the Hermitean Pauli matrices $\sigma^a$, $a=1,2,3$, close to the Weyl node(s) at $\mathbf{p}_W$. The net chirality $\sum_{\{\mathbf{p}_W\}} \textrm{sgn}(\det e^{i}_a)$ vanishes \cite{NielsenNinomiya83}. For slowly varying parameters in the operator $\sigma^ae^{\mu}_a i \partial_{\mu}\equiv i\partial_t -H_W$, the semi-classical fields $e^{\mu}_a(x) = \{e^0_{a}, e^i_a\}$ are promoted to background spacetime tetrad fields, with dimensions of unity for the temporal components $e_a^0$ and of velocity for the spatial $e^i_a$. At the level of the Hamiltonian, the shift of the Weyl node $\mathbf{p}_W$ acts as an emergent (axial) gauge field, with emergent Lorentz symmetry to linear order; the linear expansion \eqref{eq:WeylHamiltonian} at $\vek{p}_W$ is, however, valid only at much lower scales. If the fermions are charged, they can in addition couple to the electromagnetic vector potential via minimal coupling.
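As a simple illustration of the vanishing net chirality (our example, not taken from the text): consider an isotropic pair of nodes with opposite tetrad orientation.

```latex
% Illustrative example: a left/right pair of isotropic Weyl nodes with
% e^i_a = \pm c\,\delta^i_a gives
\begin{align*}
H_{\pm} = \pm c\,\boldsymbol{\sigma}\cdot(\mathbf{p}-\mathbf{p}_{W}^{\pm}),
\qquad
\det e^{i}_a = \pm c^{3},
\end{align*}
% so the two nodes carry chirality sgn(det e^i_a) = \pm 1 and the net
% chirality is +1 - 1 = 0, consistent with the Nielsen--Ninomiya theorem.
```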
These background fields imply the chiral anomaly for the low-energy massless quasiparticles. For the applications of the chiral anomaly in Weyl semimetal and Weyl superfluids/superconductors, see e.g. \cite{NielsenNinomiya83, Volovik1986a, Volovik2003, ZyuzinBurkov12}. In particular, the non-trivial coordinate dependence (torsion) related to the tetrads $e_a^{\mu}(x)$ in \eqref{eq:WeylHamiltonian} can lead to the gravitational Nieh-Yan (NY) anomaly
\cite{NiehYan1982a,NiehYan1982b,Nieh2007,Yajima96,ChandiaZanelli97a,ChandiaZanelli97b,ChandiaZanelli98, Parrikar2014, FerreirosEtAl19,Nissinen2019}. Nevertheless, the NY contribution to the anomaly has remained contentious and subtle due to the presence of a dimensionful ultraviolet (UV) parameter $\Lambda$, with canonical dimensions of momentum, as required by the canonical dimensions of $e^a_{\mu}$. See, however, Ref. \onlinecite{Nissinen2019} and the discussion below.
Here we discuss the temperature corrections to the gravitational NY anomaly and the different finite-temperature terms in the hydrodynamic momentum transport of chiral Weyl superfluids. The results apply to chiral superconductors as well, once the electromagnetic potential is added, $\vek{v}_s \to \vek{v}_s-e\vek{A}/m$, in some convenient gauge. We show that these corrections to the momentum currents include, among various other similar terms, a term originating from the torsional Nieh-Yan anomaly. For all such low-energy temperature corrections proportional to $T^2$ in the free energy, the prefactors of the corresponding terms are dimensionless, i.e. they do not depend on the details of the microscopic physics, but are fully determined by geometry, topology and the number of effective fermionic species \cite{VolovikZelnikov2003,Volovik2003}. Specifically, we compare the finite-temperature corrections to the lowest-order gradient terms in the free energy of the chiral superfluid, and identify the contribution from the chiral Nieh-Yan anomaly in the low-energy quasirelativistic Weyl superfluid, which leads us to conjecture that the finite-temperature NY anomaly term can be similarly universal in general Weyl systems. Finally, we compare these results to relativistic Weyl fermions with positive and negative branches at zero momentum.
\section{Torsional anomaly}
For spacetimes with torsion (and curvature), Nieh and Yan \cite{NiehYan1982a,NiehYan1982b,Nieh2007} introduced the 4-dimensional invariant
\begin{equation}
N=\mathcal{T}^a \wedge \mathcal{T}_a - e^a \wedge e^b\wedge R_{ab} \,
\label{N}
\end{equation}
where $e^a = e^a_{\mu} dx^{\mu}$ is the local tetrad 1-form field, while $\mathcal{T}^a = de^a + \omega^a_{\ b} \wedge e^b$ and $R^a_{\ b}=d\omega^a_{\ b} + \omega^a_{\ c} \wedge \omega^c_{\ b}$ are the torsion and curvature 2-forms, in terms of the tetrad and the spin connection $\omega^a_{\ b} = \omega^a_{\mu b} dx^{\mu}$. As usual, the spacetime metric follows as $g_{\mu\nu} = e^{a}_{\mu} e^{b}_{\nu}\eta_{ab}$, where $\eta_{ab}$ is the local orthonormal (Lorentz) metric. This invariant can be written, using the associated Bianchi identities, as
\begin{equation}
N=dQ \,\,, \,\, Q=e^a\wedge {\cal T}_a \,.
\label{N2}
\end{equation}
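For completeness, the exactness relation \eqref{N2} follows in a few lines from the Bianchi identity $d\mathcal{T}^a + \omega^a_{\ b}\wedge \mathcal{T}^b = R^a_{\ b}\wedge e^b$:
\begin{align*}
dQ &= de^a \wedge \mathcal{T}_a - e^a \wedge d\mathcal{T}_a \\
&= \left(\mathcal{T}^a - \omega^a_{\ b}\wedge e^b\right)\wedge \mathcal{T}_a - e^a \wedge \left(R_{a}{}^{b}\wedge e_b - \omega_{a}{}^{b}\wedge \mathcal{T}_b\right) \\
&= \mathcal{T}^a \wedge \mathcal{T}_a - e^a\wedge e^b \wedge R_{ab} = N \,,
\end{align*}
where the spin-connection terms cancel by the antisymmetry $\omega_{ab}=-\omega_{ba}$.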
$N$ is a locally exact 4-form, independent of $\textrm{tr} (R \wedge R)$ and the dual of the scalar curvature $\sqrt{g}\mathcal{R}$ in the presence of non-zero torsion. It can be associated with a difference of two topological terms, albeit in terms of an embedding into five dimensions \cite{ChandiaZanelli97a,ChandiaZanelli97b,ChandiaZanelli98}. In terms of four-dimensional chiral fermions on such a spacetime, it has been suggested that this invariant contributes to the axial anomaly, i.e. the anomalous production of the chiral current:
\begin{equation}
\partial_\mu (e j_5^\mu) = \frac{\Lambda^2}{4\pi^2} N({\bf r},t) \,, \label{eq:NYterm}
\end{equation}
where $e=\det e^a_{\mu}$ and $e j^{\mu}_5$ is the axial current (pseudotensor) density and the non-universal parameter $\Lambda$ has dimension of relativistic momentum (mass) $[\Lambda]=[1/L] = [M]$ and is determined by some ultraviolet (UV) energy scale.
Given the anomaly term \eqref{eq:NYterm}, there have been several attempts to consider the Nieh-Yan anomaly in condensed matter systems with Weyl fermions like \eqref{eq:WeylHamiltonian}, see e.g. \cite{Parrikar2014, SunWan2014, FerreirosEtAl19, Nissinen2019}. However, in non-relativistic systems the relativistic high-energy cut-off $\Lambda$ is not a well-defined parameter and, moreover, can be anisotropic. The complete UV theory is, of course, non-Lorentz invariant, and the linear, quasirelativistic Weyl regime is valid at much lower scales. Moreover, the anomalous hydrodynamics of superfluid $^3$He at zero temperature suggests that the chiral anomaly is either completely exhausted by the emergent axial gauge field corresponding to the shift of the node or, conversely, by the gravitational NY anomaly term arising from the tetrad and spin connection for local Lorentz invariance along the uniaxial symmetry direction.
It was recently shown in Ref. \onlinecite{Nissinen2019} that the low-energy theory satisfies the symmetries and conservation laws related to an emergent quasirelativistic spacetime with torsion, and that $\Lambda$ is determined from the UV scale where the linear Weyl approximation breaks down, as dictated by the underlying $p$-wave BCS Fermi superfluid. Here we will consider the temperature corrections to the anomaly and the hydrodynamic free energy in the chiral Weyl superfluid, with the expectation that $\Lambda = T$, $k_B=1$, with some dimensionless prefactor $\gamma$, in units where the tetrads have canonical dimensions of velocity, or the relevant ``speed of light'' is set to unity.
In terms of effective low-energy actions, the fully relativistic analogs work unambiguously only for terms in the effective action with dimensionless coefficients. Perhaps the best-known example is the 2+1-dimensional topological Chern-Simons (CS) term describing the quantum Hall effect. Gravitational Chern-Simons terms can be similarly quantized in terms of the chiral central charge, which is related to thermal transport and the boundary conformal field theory \cite{Volovik90, ReadGreen01}. The CS action was recently generalized to 3+1d (and higher odd space dimensions) crystalline topological insulators, using so-called dimensionful elasticity tetrads $E$ with dimension $[E]=[1/L]=[M]$. Topological polarization terms can also be written down \cite{SongEtAl2019} by dimensional descent. The ensuing higher dimensional Chern-Simons and polarization terms are expressed as the mixed responses $E\wedge A\wedge d A$ and $E \wedge E \wedge dA$ with quantized dimensionless coefficients \cite{NissinenVolovik2018b, NissinenVolovik2018, SongEtAl2019}.
Another such example is the temperature correction to curvature effects, with $\delta S_{\rm eff} = \int T^2{\cal R}$ in the low-energy action \cite{VolovikZelnikov2003}. This represents the analog of the gravitational coupling (inverse Newton constant) in the low-energy action, where the curvature scalar ${\cal R}$ is the effective scalar spacetime curvature. Since $[T]^2[{\cal R}]=[M]^4$, the coefficient of this term is dimensionless and can be given in terms of universal constants: it is fully determined by the number of the fermionic and bosonic species in the effective theory on a flat background, and thus works both in relativistic and non-relativistic systems \cite{VolovikZelnikov2003}. The same universal behavior takes place for the terms describing the chiral magnetic and chiral vortical effects in the Weyl superfluid $^3$He-A, where the coefficients are dimensionless \cite{VolovikVilenkin2000,Volovik2003, BasarEtAl14}. Similarly, it has been observed that the coefficient of the $\textrm{tr}(R\wedge R)$ gravitational anomaly in chiral Weyl systems affects the thermal transport coefficients in flat space \cite{LandsteinerEtAl11, LoganayagamSurowka12, JensenEtAl13, Landsteiner2014, Lucas2016, GoothEtAl17, StoneKim18}. These coefficients are fundamental, being determined by the underlying degrees of freedom in addition to symmetry, topology and geometry. From this perspective especially, since the NY form is second order in derivatives and can be computed from linear response, our findings are very interesting and warrant further research.
Our goal in the present paper is to separate different $T^2$ contributions in the hydrodynamic free-energy of Weyl superfluids in order to identify the terms responsible for different relativistic phenomena, including a term from the thermal Nieh-Yan anomaly, as well as gravitational terms of the form $\int T^2({\cal R} + \mu^2)$, where $\mu$ is the chiral chemical potential $\mu \ll T$.
\section{Temperature correction to the Nieh-Yan term} \label{sec:TempNY}
The relativistic zero-temperature anomaly term in the axial current production $\Lambda^2({\cal T}^a \wedge {\cal T}_a - e^a\wedge e^b\wedge R_{ab})$ is still not confirmed in general, see however Ref. \onlinecite{Nissinen2019}. On one hand, the UV cut-off parameter $\Lambda$ is not well-defined in relativistic field theory with fundamental chiral fermions. On the other hand, such a cut-off is not in general available in non-relativistic matter with quasi-relativistic low-energy chiral fermions and can be anisotropic \cite{Nissinen2019} or even zero. However, a term of the form $\gamma T^2({\cal T}^a\wedge {\cal T}_a - e^a\wedge e^b\wedge R_{ab})$ has the proper dimensionality $[M]^{4}$, and its prefactor $\gamma$ could be a universal constant in canonical units, being expressed via some invariant related to the degrees of freedom.
For concreteness, we focus on the finite temperature Nieh-Yan anomaly in chiral $p$-wave Weyl superfluid (such as $^3$He-A) with
\begin{equation}
\partial_\mu (ej_5^\mu) = \gamma T^2 N({\bf r},t) \,,
\label{T2NiehYan}
\end{equation}
and check whether the dimensionless parameter $\gamma$ can be universal. We now use the result obtained by Khaidukov and Zubkov \cite{Zubkov2018} and Imaki and Yamamoto \cite{Imaki2019} for the finite temperature contribution to the chiral current. For a single (complex) chiral fermion, one has for the chiral current
\begin{equation}
j_5^k= - \frac{T^2}{24} \epsilon^{kij}\mathcal{T}^0_{ij} \,.
\label{j5}
\end{equation}
We assume that this current can be covariantly generalized to the 4-current:
\begin{equation}
e j^\mu_5= - \frac{T^2}{24} \epsilon^{\mu\nu\alpha\beta} e_{\nu a}\mathcal{T}^a_{\alpha\beta} \,.
\label{j5general}
\end{equation}
Then one obtains the divergence
\begin{equation}
\partial_{\mu} (e j^\mu_5)= - \frac{T^2}{48} \epsilon^{\mu\nu\alpha\beta} \mathcal{T}_{a\mu \nu}\mathcal{T}^a_{\alpha\beta}
\,.
\label{j5nonconservation}
\end{equation}
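Indeed, assuming a vanishing spin connection and constant temperature for simplicity, Eq. \eqref{j5nonconservation} follows directly from the ansatz \eqref{j5general}:
\begin{equation*}
\partial_\mu (e j^\mu_5) = - \frac{T^2}{24} \epsilon^{\mu\nu\alpha\beta} \left(\partial_\mu e_{\nu a}\right) \mathcal{T}^a_{\alpha\beta}
= - \frac{T^2}{48} \epsilon^{\mu\nu\alpha\beta} \mathcal{T}_{a\mu\nu}\mathcal{T}^a_{\alpha\beta} \,,
\end{equation*}
since $\epsilon^{\mu\nu\alpha\beta}\partial_\mu \mathcal{T}^a_{\alpha\beta}=0$ by the symmetry of second derivatives and $\epsilon^{\mu\nu\alpha\beta}\partial_\mu e_{\nu a} = \frac{1}{2}\epsilon^{\mu\nu\alpha\beta}\mathcal{T}_{a\mu\nu}$ when $\omega^a_{\ b}=0$.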
In the presence of curvature $R(\omega)$, this becomes the temperature correction to the full Nieh-Yan term in Eq. (\ref{T2NiehYan}), where now the non-universal cut-off $\Lambda$ is substituted by the well defined temperature $T$, and the dimensionless parameter $\gamma=1/12$:
\begin{equation}
\partial_\mu (e j^\mu_5) =- \frac{T^2}{12} N({\bf r},t) \,.
\label{j5nonconservationR}
\end{equation}
Note that it is possible that the local relativistic (Tolman) temperature $T=T_0/\vert e^0_t \vert$ enters the local anomaly, while the constant $T_0$ is the global equilibrium temperature of the condensed matter system \cite{VolovikZelnikov2003}. In \eqref{eq:WeylHamiltonian} we have simply $e^0_t = -1$.
\section{From relativistic physics to chiral Weyl superfluid} \label{sec:superfluid_LL}
In the presence of a finite Weyl node $p_{W}\neq 0$, the chiral anomaly for the chiral current leads to the anomalous production of the linear momentum \cite{Volovik2003, Nissinen2019}. Even though chiral current is not well-defined at high-energy, the spectral flow of chiral quasiparticles is accompanied by the spectral flow of the linear momentum ${\bf p}_W$ of the Weyl point. In $^3$He-A there are two (spin-degenerate) Weyl points with opposite chirality and opposite momenta, ${\bf p}_{W\pm}=\pm p_F{\hat{\bf l}}$, where $\hat{\bf l}$ is the unit vector of the orbital
momentum of the superfluid. In particular, the anomalous production of the quasiparticle densities $n_{\pm}$ at the nodes of the two opposite Weyl points sums up, leading to
\begin{equation}
\dot {\bf P}_{\rm anom} = -p_F \hat{\bf l}\, (\partial_t n_5),
\label{anomalousProductionP}
\end{equation}
where $n_5=n_+-n_-$ is the chiral density. The corresponding quasirelativistic momentum density of the Weyl fermions, valid in the vicinity of the node, is \cite{Nissinen2019}
\begin{equation}
{\bf P}_{\textrm{NY-node}} =- p_F \hat{\bf l} e j^0_5\,.
\label{NYmomentum}
\end{equation}
Thus Eq. \eqref{j5nonconservation} gives the temperature correction to this
anomalous momentum production, leading to a mass current due to the thermal Nieh-Yan anomaly. Next we discuss this in detail for the quasiparticles and the superfluid vacuum in the non-relativistic chiral Weyl $p$-wave superfluid, using a Landau level model for the currents \cite{Volovik85,Nissinen2019}. For relativistic Weyl fermions, see \cite{Stone2019}. For considerations of similar temperature effects in chiral Weyl superfluids, see Ref. \cite{KobayashiEtAl18}.
\subsection{Nieh-Yan term from the hydrodynamics of chiral Weyl superfluid}
It is known that the hydrodynamics of chiral gapless $^3$He-A experiences momentum anomalies related to the spectral flow through the Weyl points. Let us express Eqs. \eqref{anomalousProductionP}, \eqref{NYmomentum} in terms of the hydrodynamic variables and quasiparticles of the chiral superfluid. The Weyl fermions arise from the BdG Hamiltonian close to the nodes,
\begin{align}
H_{\rm BdG}(-i \partial) = \left( \begin{matrix} \epsilon(-i\partial) & \frac{1}{2i}\{\partial_i,\Delta^i \} \\ \frac{1}{2i}\{\partial_i,\Delta^{*i} \} & -\epsilon(i \vek{\partial}) \end{matrix}\right)
\end{align}
Here $\epsilon(\mathbf{p}) = \frac{p^2-p_F^2}{2m^*}$ is the normal state dispersion minus the Fermi level $\mu_F$; $\Delta_i = c_\bot(\hat{\bf m}+i\hat{\bf n})$ is the order parameter in the $p+ip$ chiral superfluid;
$\hat{\bf l}=\hat{\bf m}\times\hat{\bf n}$ is the unit vector in the direction of the orbital angular momentum of Cooper pairs; anisotropic node velocities are $c_\parallel=v_F$ and $c_\bot =\Delta_0/p_F$, constituting effective speeds of light in Weyl equation along $\hat{\bf l}$
and in transverse directions, respectively. In the weak coupling BCS theory $c_\bot \ll c_\parallel$; in $^3$He-A their ratio is of order $10^{-3}$. In a chiral superconductor we in addition perform the minimal coupling $\epsilon(-i\partial_i) \to \epsilon(i D_i) +eA_0$, where $D_i=\partial_i-eA_i$ is the gauge covariant derivative and $A_\mu$ is the electromagnetic potential. Equivalently, this amounts to $\mathbf{v}_s \to \mathbf{v}_s-\frac{e\vek{A}}{m}$ in the free energy. In what follows we ignore the superfluid velocity (giving rise to a spin connection in addition to the tetrads) in the anomaly considerations. Then the only hydrodynamic variable appearing in the torsion is the unit vector of the orbital momentum $\hat{\bf l}$ \cite{Nissinen2019}. Our results can afterwards be generalized to include the superfluid velocity appearing in the free energy.
The Weyl nodes are at $\mathbf{p}_{W\pm} = \pm p_F \unitvec{l}$. Expanding the Hamiltonian as $H_{\rm BdG} \simeq \sigma^a e^i_a (\hat{p}-p_W)_i$, the vierbein ${\bf e}^i_a$ in the vicinity of the Weyl point takes the form:
\begin{eqnarray}
e^{\mu}_a = \{e^t_{a},{\bf e}^i_a\} &=&
\,\left(\begin{array}{cc}1 & 0\\
0 & c_\bot \hat{\bf m} \\ 0 & c_\bot \hat{\bf n} \\ 0 &c_\parallel \hat{\bf l}\end{array} \right) , \quad a = 0,1,2,3.
\end{eqnarray}
For the inverse vierbein ${\bf e}^i_a {\bf e}^a_j = \delta^i_j$ we have
\begin{eqnarray}
e^a_{\mu} = \{e^a_t, {\bf e}^a_i \} &=&
\,\left(\begin{array}{cccc}1 & 0 & 0&0 \\
0 & \frac{1}{c_\bot}\hat{\bf m} & \frac{1}{c_\bot}\hat{\bf n} & \frac{1}{c_\parallel}\hat{\bf l}\end{array} \right). \label{Eu}
\end{eqnarray}
In order to compute the thermal contribution from the quasiparticles in the presence of torsion, we assume that the tetrad gives rise to a constant torsion via $\unitvec{m} = \unitvec{x}$, $\unitvec{n} = \unitvec{y} - T_B x \unitvec{z}$, where the constant $T_B$ is a small perturbation \cite{Volovik85, Nissinen2019}. In this case, the 3d spectrum organizes into one-dimensional states on 2d Landau levels (LLs) \cite{Volovik85,Volovik2003,Parrikar2014, Stone2019, Laurila20}, where the relevant spatial torsion
\begin{align}
\frac{1}{2}\epsilon^{ijk}\mathcal{T}^3_{jk} e_3^i = c_{\parallel} \unitvec{l}\cdot ( \nabla \times \frac{\unitvec{l}}{c_{\parallel}}) \equiv T_B,
\end{align}
where $T_B = \unitvec{l}\cdot\nabla\times\unitvec{l} = \mathcal{T}^z_{xy}$ is playing the role of an effective magnetic field. Only the gapless lowest LLs are relevant, while the gapped levels, with gap $\sim c_\perp \sqrt{p_FT_B}$, are particle-hole symmetric and cancel out \cite{Volovik85, BalatskiiEtAl86}. We approximate $\epsilon(\mathbf{p}) = \frac{p^2-p_F^2}{2m} \simeq \frac{p_z^2-p_F^2}{2m} = \epsilon(p_z)$, strictly valid when $p_{\bot} \ll m c_{\perp}$, which coincides with the linear Weyl regime $\epsilon_{p_z} = \pm v_F(p_z-p_F)$. However, we can fix the anisotropic dispersion to $\epsilon(p_z)$ for all momenta to obtain a convenient UV-complete model, for which the following analysis for the total vacuum current is valid, as long as the correct cutoff for the linear Weyl regime is maintained for $^3$He-A, see Appendix \ref{Appendix2}. The dispersion of the lowest LL becomes $E_{n=0} = -\textrm{sgn}(p_z T_B)\epsilon(p_z)$, while the density of states per lowest LL per momentum $p_z$ becomes $N_{\rm LL}(p_z) = \frac{\abs{p_z T_B}}{4\pi^2}$. The lowest LL is particle-like (hole-like) for $p_W = \mp p_F\unitvec{l}$. For more on this non-relativistic model with torsion, see \cite{Volovik85, BalatskiiEtAl86,Nissinen2019, Laurila20}.
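As a consistency check, to linear order in $T_B$ the perturbed triad gives $\unitvec{l} = \unitvec{m}\times\unitvec{n} = \unitvec{z} + T_B x\, \unitvec{y}$, so that
\begin{equation*}
\nabla\times\unitvec{l} = T_B\, \unitvec{z} \,, \qquad \unitvec{l}\cdot(\nabla\times\unitvec{l}) = T_B + O(T_B^2) \,,
\end{equation*}
confirming that this texture indeed produces a constant torsional magnetic field $T_B$.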
The total (chiral) quasiparticle current along $\unitvec{l}=\frac{e^i_3}{v_F}\approx \unitvec{z}$ becomes
\begin{align}
\mathbf{P}^{\rm qp}\cdot\unitvec{l} &= -2 \int_{0}^{\infty} N_{\rm LL}(p_z) dp_z~ p_z n_{F}(\epsilon_{p_z}-\mu_F) \nonumber\\
&=-\left(\frac{p_F^3}{6\pi^2}+\frac{p_F T^2}{6 c_{\bot}^2} \right) \unitvec{l}\cdot\nabla\times\unitvec{l} \label{eq:LL_current} \\
&= \vek{j}^{\rm vac}_{\rm anom\parallel} + \vek{j}^{\rm qp}_{\rm anom\parallel}(T) \nonumber.
\end{align}
where $n_{F}(x) =(e^{\beta x} +1)^{-1}$ is the quasiparticle distribution function and a factor of two comes from the spin degeneracy. In order to compute the integral,
we have used $T\ll\Delta_0 \ll \mu_F$ as well as the linear expansion $\epsilon(\Delta p_z) \approx v_F(\Delta p_z c_{\perp}/v_F) \approx T$ with anisotropic momentum scaling, where close to the nodes $\Delta p_z \sim \frac{T}{\Delta_0}p_F = \frac{T}{c_{\perp}}$ for both $T, \epsilon \ll mc_\perp^2$, the cutoff of the linear Weyl regime in $^3$He-A. See Appendix \ref{Appendix2} and \cite{Nissinen2019,Laurila20} for more details.
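At $T=0$, the lowest-LL integral reduces to the vacuum contribution,
\begin{equation*}
-2\int_0^{p_F} \frac{p_z T_B}{4\pi^2}\, p_z\, dp_z = -\frac{p_F^3}{6\pi^2}\, T_B \,,
\end{equation*}
reproducing the first term in Eq. \eqref{eq:LL_current}; the $T^2$ term arises from the thermal smearing of the Fermi step within the linear Weyl regime, as detailed in Appendix \ref{Appendix2}.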
From the above we conclude that $\vek{j}^{\rm vac}_{\rm anom}$ is the anomalous superfluid current from filled quasiparticle states, see Eq. \eqref{eq:j-current} below,
\begin{align}
\vek{j}^{\rm vac}_{\rm anom\parallel} = -\frac{p_F^3}{6\pi^2}\unitvec{l}(\unitvec{l}\cdot \nabla \times \unitvec{l}) = -\frac{C_0}{2}\unitvec{l}(\unitvec{l}\cdot \nabla \times \unitvec{l}), \label{eq:vacuum_j}
\end{align}
whereas the finite temperature contribution to the quasiparticle momentum is
\begin{align}
\vek{j}^{\rm qp}_{\rm anom\parallel}(T) = -\frac{p_F T^2}{6 c_{\perp}^2}\unitvec{l}(\unitvec{l}\cdot \nabla \times \unitvec{l}) \label{eq:Tcorrection}.
\end{align}
and arises due to the thermal normal component close to the nodes. These contributions to the anomalous vacuum current from the perspective of the superfluid are analyzed in the next subsection.
The relativistic anomaly results in Sec. \ref{sec:TempNY} and the Landau level argument for the chiral superfluid suggest in addition the following temperature correction to the anomalous momentum in $^3$He-A, in the vicinity of the node, from the thermal Nieh-Yan anomaly \cite{Nissinen2019}:
\begin{align}
{\bf P}_{\textrm{NY-node}}(T) =-p_F \hat{\bf l}\, n_5(T)= \frac{p_F T^2 }{12 c_\bot^2}\hat{\bf l}(\hat{\bf l}\cdot(\nabla\times \hat{\bf l})).
\label{deltaPanomalous}
\end{align}
With the Landau level approximation, the quasiparticle density $n_5(T)$ between the two Weyl nodes is computed from a similar integral as Eq. \eqref{eq:LL_current},
\begin{align}
n_5(T) &= -2 \int_{0}^{\infty} N_{\rm LL}(p_z) dp_z \bigg[n_F(\epsilon_{p_z}-\mu_F) -\Theta(\mu-\epsilon_{p_z})\bigg] \nonumber\\
&= -\frac{T_B}{2\pi^2} \int_0^{\infty} dx ~x n_{F}(x) \nonumber\\
&= - \frac{T^2}{12c_{\perp}^2} (\unitvec{l}\cdot\nabla\times\unitvec{l}). \label{eq:LL_density}
\end{align}
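Here the final line uses the standard Fermi integral
\begin{equation*}
\int_0^{\infty} dx\, \frac{x}{e^{x/T}+1} = \frac{\pi^2 T^2}{12} \,,
\end{equation*}
together with the spin degeneracy factor of two and the anisotropic rescaling of the nodal momenta $\Delta p_z \sim T/c_{\perp}$ discussed below Eq. \eqref{eq:LL_current}.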
Similar to the total current, this is the contribution from the linear spectrum close to the nodes with $\Delta p_{\parallel}\sim(\frac{c_{\perp}}{c_{\parallel}})\frac{T}{c_{\bot}}$ due to thermal fluctuations, which renormalizes the velocity coefficient and requires the cutoff of the Weyl regime relevant for the full dispersion, as in $^3$He-A. The thermal fluctuations for the superfluid are discussed in Appendix \ref{Appendix1}.
What has been calculated above is actually the temperature dependence of the anomalous torsional conductivities of the superfluid, i.e. the chiral momentum and number densities in response to $T_B$, see e.g. \cite{Landsteiner2014}. Note in particular that the leading torsional contribution from the NY anomaly in Eqs. \eqref{eq:LL_current}, \eqref{eq:LL_density} from thermal fluctuations is suppressed by the factor $c_{\bot}^{-2}$, in contrast to the antisymmetric torsion $S = \frac{1}{2}\epsilon^{ijk} e^3_i \mathcal{T}^3_{jk} = \frac{T_B}{c_\parallel^2}$ from \eqref{Eu}, as was found in Ref. \cite{Nissinen2019} without the Landau level approximation to the anomaly. It is also intriguing to see that the non-relativistic thermal integrals in Eqs. \eqref{eq:LL_current},\eqref{eq:LL_density} with explicit non-relativistic vacuum regularization due to the filled quasiparticle states coincide with similar expressions for relativistic Weyl fermions with positive and negative branches \cite{LoganayagamSurowka12, JensenEtAl13, BasarEtAl14, Landsteiner2014, Zubkov2018, Imaki2019}, see Appendix \ref{Appendix3}.
\subsection{Anomalous vacuum current}
The hydrodynamic anomaly in momentum conservation arises between the quasiparticles and vacuum mass current $\vek{P} = \vek{j}$ of the superfluid, which at $T=0$ has the following general form \cite{Cross1975, VollhardtWolfle, Volovik2003}:
\begin{align}
\mathbf{j} = \rho \mathbf{v}_s + \frac{1}{4m} \nabla \times (\rho \unitvec{l}) - \frac{C_0}{2} \unitvec{l} (\unitvec{l} \cdot (\nabla \times \unitvec{l}) )\,,
\label{eq:j-current}
\end{align}
where the last term is anomalous with the parameter $C_0$ from the combined orbital-gauge symmetry of the superfluid \cite{VolovikMineev81}, that fully determines the axial anomaly in the system due to the Weyl quasiparticles of the superfluid \cite{Volovik2003}
\begin{align}
C_0(T=0) = p_F^3/3\pi^2 = \rho,
\end{align}
where we ignore corrections of the order of $(\Delta_0^2/E_F^2) = (c_{\perp}/c_{\parallel})^2 \ll 1$ at zero temperature to the density from pairing ($\rho-C_0 \sim \rho (\Delta_0^2/E_F^2)$). Notably, this term exists only on the weak coupling side of the topological BEC-BCS Lifshitz transition, where the pair of the Weyl points with ${\bf p}_{W\pm}=\pm p_F{\hat{\bf l}}$ appears in the quasiparticle spectrum \cite{VolovikMineev81, Volovik2003}.
Here we are interested in the temperature correction to the anomaly, which may come from the Nieh-Yan term.
The extension of the anomalous current to nonzero temperature gives
\begin{equation}
{\bf P}_{\rm anom}(T) = -\frac{1}{2}C_0(T) \hat{\bf l}(\hat{\bf l}\cdot(\nabla\times \hat{\bf l}))
\,,
\label{Cross}
\end{equation}
where according to Cross \cite{Cross1975,VollhardtWolfle}, the anomalous parameter $C_0(T)$
has the following temperature dependence at low $T\ll T_c$:
\begin{equation}
C_0(T) =C_0(0) - T^2 \frac{p_F}{6 c_\bot^2} \left(1+ \frac{m^*}{m} \right)\,.
\label{C}
\end{equation}
Here $m^*$ is the effective mass of quasiparticles in the normal Fermi liquid, which differs from the bare mass $m$ of the $^3$He atom due to the Fermi liquid corrections. In Eqs. \eqref{eq:vacuum_j} and \eqref{eq:Tcorrection} we have neglected the Fermi-liquid corrections due to interactions: an additional factor $\frac{1}{2}(\frac{m^*}{m}-1)(\rho_n^{(0)}/\rho)$ arises from the reduced quasiparticle momentum flow, due to Galilean invariance. The current becomes \cite{Cross1975}
\begin{align}
\vek{j}_{\rm anom} = \frac{1}{1+\frac{1}{3} F^s_1(\rho^{(0)}_{n\parallel}/\rho)} \frac{m^*}{m} \vek{j}^{(0)}_{\rm anom} \label{eq:backflow}
\end{align}
where the Landau parameter $\frac{1}{3}F_1^s = \frac{m^*}{m}-1$ and $\rho_{n\parallel, \perp}^{(0)}(T)$ are the bare thermal quasiparticle densities without Fermi-liquid corrections along and perpendicular to $\unitvec{l}$. They are given by (see Appendix \ref{Appendix1})
\begin{align}
\rho^{(0)}_{n\parallel} = \frac{\pi^2 T^2}{\Delta_0^2}\rho, \quad \rho^{(0)}_{n\bot} = \frac{7\pi^4T^4}{15 \Delta_0^4}\rho, \label{eq:rhoT}
\end{align}
where by Galilean invariance, $\rho\delta_{ij} = \rho_{s ij} + \rho_{n ij}$ at all temperatures. From Eqs. \eqref{eq:backflow}, \eqref{eq:rhoT} we gather
\begin{align}
\frac{C_0(T)}{2} = \frac{p_F^3}{6\pi^2} - \frac{p_F T^2}{6c_{\bot}^2} -\frac{p_F T^2}{12 c_{\perp}^2}\left(\frac{m^*}{m}-1\right) + O(T^4), \label{eq:NY_Fermi_corrections}
\end{align}
which is the Fermi-liquid corrected result Eq. \eqref{eq:backflow} and corresponds to the result \eqref{eq:Tcorrection} from the reduction of superfluid density when the Fermi-liquid corrections are ignored.
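This is consistent with Cross's result \eqref{C}: dividing Eq. \eqref{C} by two and regrouping the terms,
\begin{equation*}
\frac{C_0(T)}{2} = \frac{p_F^3}{6\pi^2} - \frac{p_F T^2}{12 c_\bot^2}\left(1+\frac{m^*}{m}\right)
= \frac{p_F^3}{6\pi^2} - \frac{p_F T^2}{6 c_\bot^2} - \frac{p_F T^2}{12 c_\bot^2}\left(\frac{m^*}{m}-1\right),
\end{equation*}
which reproduces Eq. \eqref{eq:NY_Fermi_corrections} term by term.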
We summarize these findings as follows. While Eq. \eqref{Cross} with the temperature corrections of Eq. \eqref{C} looks similar to Eq. \eqref{deltaPanomalous}, $\vek{P}_{\rm anom}(T)$ represents the consistent-anomaly vacuum momentum density from all filled states, which depends on the non-fundamental parameters $m^*$ and $m$ via the Fermi-liquid corrections, while we expect that the $T^2$-contribution to the (covariant) NY anomaly arises from contributions close to the linear Weyl nodes and should contain fundamental and universal prefactors.
Concerning the vacuum momentum current of the superfluid, the reason is that there are several different $T^2$ contributions to the current and free energy in the chiral superfluid, and they correspond to phenomena with different origins and scales. Although of similar form in terms of the low-energy Goldstone variables of the superfluid, they can be expressed in relativistic form with fundamental parameters only when carefully keeping track of each individual contribution to avoid double counting. In particular, in the next section we shall see that the non-fundamental parameters $m^*$ and $m$ do not enter the free energy or the final results, when expressed in terms of the correct relativistic variables valid in the quasirelativistic low-energy theory.
\section{Relativistic corrections to free-energy}
Let us consider the $T^2$-corrections in the free energy which are second order in derivatives containing combinations of $(\hat{\bf l}\cdot {\bf v}_s)$ and $(\hat{\bf l}\cdot (\nabla\times\hat{\bf l}))$, neglecting all higher order $O(T^4, \partial^3)$ terms. These terms can be distributed into three groups, which have different dependence on $m^*$ and $m$:
\begin{align}
F&=F_1+F_2+F_3 \,,
\label{123}
\\
F_1[\unitvec{l},\vek{v}_s]&=\frac{p_F}{12} \frac{T^2}{c_\perp^2}(\hat{\bf l}\cdot {\bf v}_s)(\hat{\bf l}\cdot (\nabla\times\hat{\bf l})),
\label{1}
\\
F_2[\unitvec{l},\vek{v}_s]&=- \frac{p_Fm^*}{96m^2} \frac{T^2}{c_\perp^2}
\left(4m(\hat{\bf l}\cdot {\bf v}_s)-(\hat{\bf l}\cdot (\nabla\times\hat{\bf l}))\right)^2,
\label{2}
\\
F_3[\unitvec{l}]&=- \frac{p_F}{288m^*} \frac{T^2}{c_\perp^2}(\hat{\bf l}\cdot (\nabla\times\hat{\bf l}))^2,
\label{3}
\end{align}
The relativistic form of each of these free-energy contributions arises separately as follows.
The term $F_3$ in Eq. \eqref{3} describes the universal temperature correction to the Newton gravitational coupling, which depends on the number of fermionic species \cite{VolovikZelnikov2003}:
\begin{equation}
F_3[\mathcal{R},T]=- \frac{v_F}{288} \frac{T^2}{c_\perp^2}(\hat{\bf l}\cdot (\nabla\times\hat{\bf l}))^2
=\frac{T^2}{144}\sqrt{-g} \cal{R}\,.
\label{NewtonG}
\end{equation}
Note that, being expressed in terms of the scalar curvature and the metric determinant $\sqrt{-g} =1/(v_Fc_\perp^2)$, with $v_F = p_F/m^*$, this term becomes universal: it does not contain the microscopic parameters $p_F$, $m$ and $m^*$ of the system. The prefactor is fully determined by the number of fermionic species.
The term $F_2$ in Eq. \eqref{2} is expressed in terms of the combination $\unitvec{l}\cdot \vek{v} = \hat{\bf l}\cdot {\bf v}_s-\frac{1}{4m}\hat{\bf l}\cdot (\nabla\times\hat{\bf l})$, proportional to the ground state current which does not receive corrections due to Galilean invariance. Here the velocity ${\bf v}={\bf j}/\rho$ is the velocity of the ``total quantum vacuum", where ${\bf j}\equiv {\bf j}(T=0)$ is the total vacuum current in Eq. \eqref{eq:j-current} at $T=0$.
We conclude that the $F_2$ contribution gives the temperature $T$ and chemical potential $\mu$ correction to the free energy of the gas of chiral fermionic particles in the limit $|\mu| \ll T$ (see Eqs. (9.12) and (10.42) in Ref. \cite{Volovik2003}):
\begin{eqnarray}
F_2[\mu,T] =- \frac{p_Fm^*}{96m^2} \frac{T^2}{c_\perp^2}
\left(4m\,\hat{\bf l}\cdot {\bf v}_s-\hat{\bf l}\cdot (\nabla\times\hat{\bf l})\right)^2
=\nonumber
\\
=-\frac{T^2}{6} \sqrt{-g} \mu^2 \,,
\label{A0}
\end{eqnarray}
where the chiral chemical potential of the superfluid is determined by the Doppler shift
\begin{equation}
\mu_R=-\mu_L =p_F {\bf v}\cdot \hat{\bf l}= p_F \hat{\bf l}\cdot {\bf v}_s-(p_F/4m)\hat{\bf l}\cdot (\nabla\times\hat{\bf l})
\,.
\label{A0mu}
\end{equation}
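Indeed, squaring Eq. \eqref{A0mu} and using $\sqrt{-g}=1/(v_F c_\perp^2)$ with $v_F = p_F/m^*$,
\begin{equation*}
-\frac{T^2}{6}\sqrt{-g}\,\mu_R^2
= -\frac{T^2}{6}\,\frac{1}{v_F c_\perp^2}\,\frac{p_F^2}{16 m^2}
\left(4m\,\hat{\bf l}\cdot {\bf v}_s-\hat{\bf l}\cdot (\nabla\times\hat{\bf l})\right)^2
= -\frac{p_Fm^*}{96m^2} \frac{T^2}{c_\perp^2}
\left(4m\,\hat{\bf l}\cdot {\bf v}_s-\hat{\bf l}\cdot (\nabla\times\hat{\bf l})\right)^2 ,
\end{equation*}
which is precisely $F_2$ in Eq. \eqref{2}.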
The final form of Eq. \eqref{A0} does not contain microscopic parameters. Eq. \eqref{A0} also gives rise to the mass of the ``photon'' in $^3$He-A \cite{Volovik1998}. In principle, the chiral chemical potential may also serve as the $\Lambda$ parameter in the Nieh-Yan term, see however Appendix \ref{Appendix3}.
Finally, $F_1$ in Eq. \eqref{1} is the term in the free energy that gives rise to the vacuum current, without any factors of $\frac{m^*}{m}$, with the contribution from the thermal Nieh-Yan anomaly in Eqs. \eqref{deltaPanomalous}, \eqref{C}, \eqref{eq:NY_Fermi_corrections}:
\begin{align}
{\bf P}_{\rm anom}(T) &= \frac{\delta F_1}{\delta {\bf v}_s}=\frac{p_F}{12} \frac{T^2}{c_\perp^2}\hat{\bf l}(\hat{\bf l}\cdot (\nabla\times\hat{\bf l})) \label{J}
\\
&\simeq -p_F n_{5}(T)\hat{\bf l}, \nonumber
\end{align}
where the last equality was derived in Eq. \eqref{eq:LL_density}. We stress however that $\vek{P}_{\rm anom}(T)$ represents the total anomalous superfluid current, whereas the right-hand side is equal to the quasiparticle momentum density due to thermal fluctuations close to the node.
\section{Conclusions}
We discussed the possibility of thermal Nieh-Yan anomaly where the role of the non-universal dimensionful UV cut-off $\Lambda$ is played by the temperature IR scale, and the dimensionless prefactor $\gamma$ in the anomaly is universal. We identified a contribution from this anomaly in the known low-temperature corrections of non-relativistic chiral $p$-wave Weyl superfluid (or superconductor). In this system, the anomaly results from thermal effects of the linear Weyl spectrum at finite momentum in the presence of an explicit vacuum of filled quasiparticles, although the end results are similar to relativistic fermions when interpreted carefully in terms of the anisotropy and cutoff of the quasirelativistic Weyl regime.
What we calculated, via the anisotropic Landau level model with non-relativistic symmetries in Sec. \ref{sec:superfluid_LL}, is actually the temperature dependence of the anomalous torsional conductivities $\sigma_{\mathcal{T}^a}$ of the quasiparticle system, i.e. the chiral momentum and number densities (at the nodes at finite momenta $\pm p_F$) in response to the spacelike torsion $T_B= \frac{1}{2}\epsilon^{ijk}e^3_i \mathcal{T}^3_{jk}$. Namely, for example,
\begin{align}
\langle P^a \rangle =\langle e^a_i T^{0i} \rangle = e^a_i \frac{\sigma_{\mathcal{T}^b}(T)}{2} \epsilon^{0ijk} T^b_{jk} = \delta^{a3}\sigma_{T_B} T_B,
\end{align}
$P^i = T^{0i}$, where $T^{\mu\nu}$ is the stress tensor; these responses are of second order in derivatives and can be calculated in linear response, e.g. via Kubo formulas, to the background tetrads \cite{LandsteinerEtAl11, Landsteiner2014, BradlynRead15, Gromov15}. A similar momentum anomaly in an anisotropic system with non-relativistic symmetries was also considered in \cite{Copetti20}. For the zero temperature case, see \cite{Nissinen2019,Laurila20} as well as \cite{Stone2019, Stone2019b} for a relativistic model related to Weyl semimetals. We, however, stress that the universal gravitational NY anomaly and thermal physics we discussed arise in flat space from the geometric background fields in the low-energy quasiparticle Hamiltonian. These tetrads arise universally in all Weyl systems \eqref{eq:WeylHamiltonian} and couple to the momentum \cite{Parrikar2014, ShapourianEtAl15} as in gravity. The connection of our results and the relation of the gravitational NY anomaly, with the coefficient $\gamma$, to thermal transport in Weyl systems should be further elucidated \cite{Luttinger64, LandsteinerEtAl11, LoganayagamSurowka12, JensenEtAl13, Lucas2016, GoothEtAl17, KobayashiEtAl18}.
Detailed consideration of the temperature dependent anomaly terms in the hydrodynamics of the non-relativistic $p$-wave chiral superfluid with quasirelativistic Weyl fermions demonstrates that in the hydrodynamics of this liquid there are several $T^2$ terms, which can be assigned to different emergent relativistic phenomena, both anomalous and non-anomalous. In particular, we identified and discussed the term in the vacuum momentum corresponding to the (consistent) thermal Nieh-Yan anomaly. As expected, this term originates from thermal fluctuations close to the linear nodes with emergent quasirelativistic torsion with anisotropy. Note that in terms of the superfluid and Weyl fermions, the $T=0$ anomalous vacuum contribution to the current can be assigned only to a non-local action \cite{Volovik1986c}. We showed how the various $T^2$ low-temperature corrections can be written as low-energy relativistic terms with dimensionless prefactors, which do not seem to depend on microscopic physics but are fully determined by geometry, topology and the number of fermionic and bosonic quantum fields. Detailed comparison of the finite temperature superfluid hydrodynamics with Fermi-liquid corrections to the anomalous quasiparticle axial current production in the presence of arbitrary textures and superfluid velocity remains to be carried out \cite{Volovik85, Combescot86}. See however Ref. \onlinecite{Nissinen2019} for the zero temperature case.
\emph{Note added:} After the initial submission of this manuscript as a preprint, arXiv:1909.08936v1, with the predicted $T^2$-contribution to the NY anomaly, Eqs. \eqref{j5nonconservationR}, \eqref{J}, the recent preprints \cite{Stone2019, Stone2019b, Ojanen2019} discussing related torsional anomaly phenomena at finite temperatures appeared. General aspects of the temperature anomaly in Weyl materials were further discussed in the short paper \cite{NissinenVolovik2019}. In particular, the result Eq. \eqref{j5nonconservationR} has been confirmed in Ref. \onlinecite{Stone2019} by a direct calculation of the spectral flow of relativistic Landau levels in the presence of a constant torsional magnetic field $T^3_{\mu\nu}$ \cite{Volovik85, BalatskiiEtAl86, Parrikar2014} at finite temperature. Here, similar computations for the non-relativistic Weyl superfluid in Eqs. \eqref{eq:LL_current} and \eqref{eq:LL_density} give corresponding results. While the current manuscript was being finalized, the paper \cite{Imaki20} appeared, discussing the anomaly for relativistic fermions at finite temperature and chemical potential.
\emph{Acknowledgements:} GEV thanks Mike Zubkov for discussions. JN thanks Z.-M. Huang for correspondence and T. Ojanen for discussions.
This work has been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 694248).
1406.5838
\section{Introduction}
There exist different entropic inequalities for composite quantum systems \cite{NielsonCH,LiebRuskai,PetzNielson,CorlenLieb2007,20pr,2014,fromRusk,Ruskai}. Recently it was shown \cite{OlgaVova,OlgaVova1,RitaPS2014,maps,OlgaVovaVIminkovski} that the inequalities known for composite systems, including the subadditivity and strong subadditivity conditions, can be generalized to noncomposite systems such as a single qudit. The aim of this work is to use the approach developed in \cite{OlgaVova,OlgaVova1,RitaPS2014,maps,OlgaVovaVIminkovski} to obtain new entropic inequalities for single-qudit states on the basis of known properties of the relative entropy of composite systems, such as its monotonicity. We obtain a generic new matrix inequality for Hermitian matrices. In the case where the matrices coincide with the two density matrices of bipartite-system states, the new inequality coincides with the monotonicity property of the relative entropy, but it is also valid for an arbitrary pair of density matrices. A bound for the distance between two arbitrary density matrices is obtained and expressed in terms of the matrix elements of these matrices. The paper is organized as follows. In Sec. 2 the properties of the relative entropy are considered. Examples of qutrit and $4\times4$ density matrices are presented in Sec. 3. Conclusions and perspectives are given in Sec. 4.
\section{Relative entropy}
The relative entropy between density matrices $\rho$ and $\sigma$ is defined as
\begin{equation}\label{NEADM1}
S(\rho||\sigma)=\mbox{Tr}(\rho\ln\rho-\rho\ln\sigma),
\end{equation}
where $\rho$ and $\sigma$ are density matrices of quantum states. If, moreover, $\rho$ and $\sigma$ are density matrices of bipartite-system states, i.e. $\rho\mapsto\rho(1,2)$ and $\sigma\mapsto\sigma(1,2)$, one has the inequality
\begin{equation}\label{NEADM2}
\mbox{Tr}\left(\rho(1,2)\ln\rho(1,2)-\rho(1,2)\ln\sigma(1,2)\right)\geq\mbox{Tr}\left(\rho(1)\ln\rho(1)-\rho(1)\ln\sigma(1)\right),
\end{equation}
where
\[\rho(1)=\mbox{Tr}_2\,\rho(1,2),\quad \sigma(1)=\mbox{Tr}_2\,\sigma(1,2).\]
This inequality reflects the monotonicity property of relative entropy.
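As a quick numerical illustration (ours, not part of the original derivation), the monotonicity inequality under partial trace can be checked for random full-rank two-qubit density matrices, using SciPy's matrix logarithm:

```python
import numpy as np
from scipy.linalg import logm

def rel_ent(rho, sigma):
    """S(rho||sigma) = Tr(rho ln rho - rho ln sigma)."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

def trace_out_2(rho, d1=2, d2=2):
    """rho(1) = Tr_2 rho(1,2) for a (d1*d2)-dimensional bipartite state."""
    return np.trace(rho.reshape(d1, d2, d1, d2), axis1=1, axis2=3)

def random_density(n, rng):
    """Random full-rank density matrix via the Wishart construction."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    m = a @ a.conj().T
    return m / np.trace(m)

rng = np.random.default_rng(0)
rho12, sigma12 = random_density(4, rng), random_density(4, rng)
S_full = rel_ent(rho12, sigma12)
S_red = rel_ent(trace_out_2(rho12), trace_out_2(sigma12))
assert S_full >= S_red - 1e-9   # monotonicity of the relative entropy
assert S_red >= -1e-9           # nonnegativity of the relative entropy
```

The Wishart sampling is only a convenient way to obtain generic full-rank states; any pair of density matrices with full support would do.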
An analogous inequality holds
\begin{equation}\label{NEADM3}
\mbox{Tr}\left(\rho(1,2)\ln\rho(1,2)-\rho(1,2)\ln\sigma(1,2)\right)\geq
\mbox{Tr}\left(\rho(2)\ln\rho(2)-\rho(2)\ln\sigma(2)\right),
\end{equation}
where
\[\rho(2)=\mbox{Tr}_1\,\rho(1,2),\quad \sigma(2)=\mbox{Tr}_1\,\sigma(1,2).\]
Now we extend these inequalities to arbitrary matrices. For example, consider a density $N\times N$ matrix $\rho$ in block form
\[\rho=\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\]
and two $N\times N$-matrices
\[\rho_A=\left(\begin{array}{cc}
A&0\\0&0\end{array}\right),\quad \rho_D=\left(\begin{array}{cc}
D&0\\0&0\end{array}\right).\]
Let us construct the positive map $\rho\mapsto\rho_1$, where
\[\rho_1=\rho_A+\rho_D\equiv\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right).\]
The block $D$ is an $m\times m$ matrix, i.e. the block $A$ is an $(N-m)\times(N-m)$ matrix, and $m\leq N-m$.
The following matrix inequality holds for the density matrix
$\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)$ given in block form
\[\mbox{Tr}\left[\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right)\ln\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right)-\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right)\ln\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\right]\geq0.\]
Another inequality for this matrix reads
\[\mbox{Tr}\left[\left(\begin{array}{ccc}
\mbox{Tr}\,A&\mbox{Tr}\,B&0\\\mbox{Tr}\,C&\mbox{Tr}\,D&0\\0&0&0\end{array}\right)\ln\left(\begin{array}{ccc}
\mbox{Tr}\,A&\mbox{Tr}\,B&0\\\mbox{Tr}\,C&\mbox{Tr}\,D&0\\0&0&0\end{array}\right)-\left(\begin{array}{ccc}
\mbox{Tr}\,A&\mbox{Tr}\,B&0\\\mbox{Tr}\,C&\mbox{Tr}\,D&0\\0&0&0\end{array}\right)\ln\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\right]\geq0.\]
This inequality means the nonnegativity of the relative entropy of the initial density matrix $\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)$ and its ``portraits'' obtained by applying the two positive maps
\[\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\mapsto\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right), \quad\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\mapsto\left(\begin{array}{ccc}
\mbox{Tr}\,A&\mbox{Tr}\,B&0\\\mbox{Tr}\,C&\mbox{Tr}\,D&0\\0&0&0\end{array}\right).\]
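Both nonnegativity inequalities can be verified numerically by evaluating the relative entropy on the support of each portrait (the nonzero block); the sketch below is our illustration for $N=4$, $m=2$:

```python
import numpy as np
from scipy.linalg import logm

def random_density(n, rng):
    """Random full-rank density matrix."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    m = a @ a.conj().T
    return m / np.trace(m)

rng = np.random.default_rng(0)
rho = random_density(4, rng)            # N = 4, blocks of size m = 2
L = logm(rho)
A, B = rho[:2, :2], rho[:2, 2:]
C, D = rho[2:, :2], rho[2:, 2:]

# first portrait rho_1 = diag(A + D, 0): S(rho_1||rho) restricted to its support
S1 = np.trace((A + D) @ (logm(A + D) - L[:2, :2])).real

# second portrait: the 2x2 matrix of block traces, embedded in the corner
P = np.array([[np.trace(A), np.trace(B)],
              [np.trace(C), np.trace(D)]])
S2 = np.trace(P @ (logm(P) - L[:2, :2])).real

assert S1 >= -1e-8 and S2 >= -1e-8      # both relative entropies are nonnegative
```

Since each portrait is again a unit-trace nonnegative matrix supported inside the support of $\rho$, the nonnegativity follows from Klein's inequality, which is what the assertions check.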
The new matrix inequality connected with the monotonicity property of the relative entropy of two density matrices
\[\rho=\left(\begin{array}{cc}
A&B\\C&D\end{array}\right),\quad \sigma=\left(\begin{array}{cc}
a&b\\c&d\end{array}\right)\]
is given by the relation
\[\mbox{Tr}\left[\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\ln\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)-\left(\begin{array}{cc}
A&B\\C&D\end{array}\right)\ln\left(\begin{array}{cc}
a&b\\c&d\end{array}\right)\right]\]
\[\geq\mbox{Tr}\left[\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right)\ln\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right)-\left(\begin{array}{cc}
A+D&0\\0&0\end{array}\right)\ln\left(\begin{array}{cc}
a+d&0\\0&0\end{array}\right)\right].\]
This inequality holds for any two nonnegative matrices with $\mbox{Tr}\,\rho=\mbox{Tr}\,\sigma=1$.
\section{Qutrit and qudit examples}
The example of qutrit density matrices $\rho$ and $\sigma$ provides the new inequality
\[
\mbox{Tr}\left[\left(
\begin{array}{ccc}
\rho_{11} & \rho_{12} & \rho_{13} \\
\rho_{21} & \rho_{22} & \rho_{23} \\
\rho_{31} & \rho_{32} & \rho_{33} \\
\end{array}
\right)\left[\ln \left(
\begin{array}{ccc}
\rho_{11} & \rho_{12} & \rho_{13} \\
\rho_{21} & \rho_{22} & \rho_{23} \\
\rho_{31} & \rho_{32} & \rho_{33} \\
\end{array}
\right)- \ln \left(
\begin{array}{ccc}
\sigma_{11} &\sigma_{12} &\sigma_{13} \\
\sigma_{21} &\sigma_{22} &\sigma_{23} \\
\sigma_{31} &\sigma_{32} &\sigma_{33} \\
\end{array}\right) \right]\right]\geq\]
\[\mbox{Tr}\left[\left(
\begin{array}{cc}
\rho_{11}+\rho_{33} & \rho_{12} \\
\rho_{21} & \rho_{22}\end{array}\right)\left[\ln\left(
\begin{array}{cc}
\rho_{11}+\rho_{33} & \rho_{12} \\
\rho_{21} & \rho_{22}\end{array}\right)-\ln\left(
\begin{array}{cc}
\sigma_{11}+\sigma_{33} & \sigma_{12} \\
\sigma_{21} & \sigma_{22}\end{array}\right)\right]\right].\]
The relative entropy of two states is a known nonnegative characteristic of the distance between the states. The new inequality above gives a bound for this distance, equal to the relative entropy of two ``qubit'' states. The bound equals the maximal ``qubit'' distance obtained over all permutations of the indices 1, 2, 3 in the qutrit states. Thus the problem of a distance bound for qudits, expressed in terms of the relative entropy, is reduced to the problem of the distance between the ``qubit portraits'' \cite{Vovf,Lupo} of the qudit states. This property is analogous to the property of entanglement of composite qudit-system states, which can be characterized by the entanglement of the qubit portrait of these states. One can write a chain of such inequalities for arbitrary density matrices $\rho$ and $\sigma$. We demonstrate such a chain for density $4\times4$ matrices $\rho$ and $\sigma$, which can be associated either with a two-qubit state or with the states of a qudit with $j=3/2$. One has the chain of inequalities
\[
\mbox{Tr}
\left\{
\left(\begin{array}{cccc}
\rho_{11}&\rho_{12}&\rho_{13}&\rho_{14}\\
\rho_{21}&\rho_{22}&\rho_{23}&\rho_{24}\\
\rho_{31}&\rho_{32}&\rho_{33}&\rho_{34}\\
\rho_{41}&\rho_{42}&\rho_{43}&\rho_{44}
\end{array}\right)
\left[
\ln\left(\begin{array}{cccc}
\rho_{11}&\rho_{12}&\rho_{13}&\rho_{14}\\
\rho_{21}&\rho_{22}&\rho_{23}&\rho_{24}\\
\rho_{31}&\rho_{32}&\rho_{33}&\rho_{34}\\
\rho_{41}&\rho_{42}&\rho_{43}&\rho_{44}
\end{array}\right)
-\ln\left(\begin{array}{cccc}
\sigma_{11}&\sigma_{12}&\sigma_{13}&\sigma_{14}\\
\sigma_{21}&\sigma_{22}&\sigma_{23}&\sigma_{24}\\
\sigma_{31}&\sigma_{32}&\sigma_{33}&\sigma_{34}\\
\sigma_{41}&\sigma_{42}&\sigma_{43}&\sigma_{44}
\end{array}\right)
\right]
\right\}
\geq
\]
\[\mbox{Tr}
\left\{
\left(\begin{array}{ccc}
\rho_{11}+\rho_{44}&\rho_{12}&\rho_{13}\\
\rho_{21}&\rho_{22}&\rho_{23}\\
\rho_{31}&\rho_{32}&\rho_{33}
\end{array}\right)
\left[
\ln\left(\begin{array}{ccc}
\rho_{11}+\rho_{44}&\rho_{12}&\rho_{13}\\
\rho_{21}&\rho_{22}&\rho_{23}\\
\rho_{31}&\rho_{32}&\rho_{33}
\end{array}\right)-
\ln\left(\begin{array}{ccc}
\sigma_{11}+\sigma_{44}&\sigma_{12}&\sigma_{13}\\
\sigma_{21}&\sigma_{22}&\sigma_{23}\\
\sigma_{31}&\sigma_{32}&\sigma_{33}
\end{array}\right)
\right]
\right\} \geq
\]
\[
\mbox{Tr}
\left\{
\left(\begin{array}{cc}
\rho_{11}+\rho_{33}+\rho_{44} & \rho_{12} \\
\rho_{21} & \rho_{22}\end{array}\right)
\left[
\ln \left(\begin{array}{cc}
\rho_{11}+\rho_{33}+\rho_{44} & \rho_{12} \\
\rho_{21} & \rho_{22}\end{array}\right)
-\ln\left(\begin{array}{cc}
\sigma_{11}+\sigma_{33}+\sigma_{44} & \sigma_{12} \\
\sigma_{21} & \sigma_{22}\end{array}\right)
\right]
\right\}
\geq 0.
\]
Thus the distance between two-qubit or $j=3/2$ qudit states has a bound determined by the relative entropy of ``qutrit'' states, which in turn has a bound determined by the relative entropy of ``qubit'' states. The obtained inequalities are valid for arbitrary density matrices, including the states of noncomposite systems such as a single qudit.
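The chain can be verified numerically; in the sketch below (our illustration) the portrait map, which folds the last diagonal entry into the $(1,1)$ position and drops the last row and column, is applied twice to random full-rank $4\times4$ density matrices:

```python
import numpy as np
from scipy.linalg import logm

def rel_ent(rho, sigma):
    """S(rho||sigma) for full-rank density matrices."""
    return np.trace(rho @ (logm(rho) - logm(sigma))).real

def portrait(rho):
    """Drop the last row/column, folding its diagonal entry into (0, 0)."""
    out = rho[:-1, :-1].copy()
    out[0, 0] += rho[-1, -1]
    return out

def random_density(n, rng):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    m = a @ a.conj().T
    return m / np.trace(m)

rng = np.random.default_rng(0)
rho4, sigma4 = random_density(4, rng), random_density(4, rng)
S4 = rel_ent(rho4, sigma4)                                        # 4x4 level
S3 = rel_ent(portrait(rho4), portrait(sigma4))                    # "qutrit" level
S2 = rel_ent(portrait(portrait(rho4)), portrait(portrait(sigma4)))  # "qubit" level
assert S4 >= S3 - 1e-9 and S3 >= S2 - 1e-9 and S2 >= -1e-9
```

The portrait map admits a Kraus representation (a corner embedding plus the rank-one operator moving the last diagonal entry to the corner), so it is completely positive and trace preserving, and the chain follows from the monotonicity of the relative entropy under such maps.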
For a given finite density matrix $\rho$ and an arbitrary Hermitian $N\times N$ matrix $B$ one can get the inequality
\[ 0\leq\exp\left[-\mbox{Tr}(\rho\ln\rho)\right]\leq\left[(\mbox{Tr}e^{-B})(\mbox{Tr}e^B)\right]^{1/2}.\]
For $B=\rho\ln\rho$ one has:
\[\exp\left[-\mbox{Tr}(\rho\ln\rho)\right]\leq\{[\mbox{Tr}\exp(-\rho\ln\rho)][\mbox{Tr}\exp(\rho\ln\rho)]\}^{1/2}.\]
If $b_k$ $(k=1,2,\ldots,N)$ are the eigenvalues of the matrix $B$, the inequality means that there exists a bound for the sum
\[\sum_{k=1}^N\sum_{j=1}^N\exp(b_k-b_j)\geq N^2.\]
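The bound follows because the double sum factorizes as $(\sum_k e^{b_k})(\sum_j e^{-b_j})$, which is at least $N^2$ by the Cauchy--Schwarz inequality; a quick numerical check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
for N in (2, 5, 20):
    b = rng.normal(size=N)
    # sum_{k,j} exp(b_k - b_j) = (sum_k e^{b_k}) * (sum_j e^{-b_j}) >= N^2
    total = np.exp(np.subtract.outer(b, b)).sum()
    assert total >= N ** 2
```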
For $N=2$ the inequality provides the obvious relation
\[ 2[1+\cosh(b_1-b_2)]\geq 4,\]
which is saturated for $b_1=b_2$. One can also check that for any set of real numbers $b_k$, $k=1,2,\ldots,N$, and nonnegative numbers $w_k$ such that $\sum_k w_k=1$, the following inequality holds
\[ \ln\sum_{k=1}^N\exp(-b_k)+\sum_{k=1}^N(w_k\ln w_k+w_kb_k)\geq0.\]
Thus for $N$ real numbers $b_k$ and a probability vector $\vec w=\{w_k\}$ one has a generic inequality. If the qudit state is associated with a spin tomogram \cite{DodPLA,OlgaJETP,OlgaBregence}, or a probability vector $\vec w(u)$ depending on the unitary group element $u$, the obtained inequality for $b_k=-w_k(u)$ provides an uncertainty relation for the measurable probabilities $w_k(u)$.
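This is the Gibbs variational inequality, saturated by the Gibbs distribution $w_k=e^{-b_k}/\sum_j e^{-b_j}$; a short numerical check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
b = rng.normal(size=N)
w = rng.random(N)
w /= w.sum()                                  # probability vector

# ln sum_k exp(-b_k) + sum_k (w_k ln w_k + w_k b_k) >= 0
lhs = np.log(np.exp(-b).sum()) + np.sum(w * np.log(w) + w * b)
assert lhs >= -1e-12

# equality for the Gibbs distribution w_k = exp(-b_k) / sum_j exp(-b_j)
g = np.exp(-b) / np.exp(-b).sum()
eq = np.log(np.exp(-b).sum()) + np.sum(g * np.log(g) + g * b)
assert abs(eq) < 1e-12
```

The left-hand side is the Kullback--Leibler divergence between $\vec w$ and the Gibbs distribution, which is why it is nonnegative and vanishes exactly at the Gibbs point.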
\section*{Conclusion}
To conclude, we formulate the main results of our work. Using the ``portrait'' approach \cite{Vovf,Lupo} to the density matrices of single-qudit states, an analog of the monotonicity property of the relative entropy, known for bipartite quantum systems, was constructed for systems without subsystems. New matrix inequalities for qutrit density matrices were obtained. A chain of matrix inequalities for single-qudit states, generalizing the chain of analogous inequalities for multipartite qudit states, was demonstrated on the example of the qudit $j=3/2$ system and the two-qubit system. A generic inequality for $N$ real numbers and a probability $N$-vector was obtained, as well as an ``uncertainty relation'' for the spin tomogram. Possible applications of the obtained inequalities to the analysis of quantum correlations in the states of noncomposite systems will be considered in a future publication.
1408.3248
\section{Introduction}
\label{Sec:Introduction}\glsresetall
\vspace{-1ex}
Wireless localization is a fairly mature area of research, with a vast literature \cite{Macagnano2012, Yanying2009, Hui2007}.
It is therefore paradoxical that despite the formidable effort put into the problem, wireless positioning is still shy of its potential as a truly ubiquitous technology \cite{Harrop2012, Wirola2010, Hui2007, Harrop2013}.
Ubiquity requires the technology to be available in every environment, and it is well known that wireless localization systems are still inaccurate and unreliable in places such as urban canyons and indoors, which are characterized by rich multipath and scarcity of \ac{LOS} conditions.
Furthermore, compared to the quality and omnipresence of satellite- and cellular-based systems in open outdoor spaces, indoor positioning solutions \cite{Convergence2012, Dresden2013, Medina2013} are still relatively fragile, under-deployed and unconsolidated.
One explanation for this discrepancy is that the literature has provided a large number of building blocks to solve parts of the problem, which for one reason or another still do not come together harmoniously to provide a comprehensive solution.
To qualify the latter statement, consider the specific case of \ac{AoA} or \ac{DoA} positioning.
A good number of AoA-based localization algorithms exist \cite{Niculescu2003, Rong2006, Kegen2007, Azzouzi2011}, along with an even wider body of literature on AoA estimation \cite{Schmidt1986, Barabell1983, FuLi1993, Gershman1999, tuncer2009}.
Of particular relevance is the fact that simultaneous estimation of the AoA of multiple signals/sources is relatively easy to perform, which is of fundamental importance to reduce latency in indoor applications where the concentration of users is typically large.
Yet, AoA-based indoor positioning is not common today because: $a$) AoA-based localization algorithms are highly susceptible to \ac{NLOS} conditions, such that accurate and robust AoA input is needed; and $b$) accurate and robust AoA estimation requires expensive multi-antenna systems and high computational capabilities, which are incompatible with typical indoor requirements of small, low-cost, low-power devices \cite{Wirola2010, Harrop2013}.
On the other extreme of the technological spectrum are proximity-based (in particular RFID) approaches \cite{Hui2007, Ristic2006, Shigeng2010}, which do satisfy the latter requirements, but at the expense of accuracy, and therefore also failed to penetrate the general market.
The limitations of the AoA- and proximity-based approaches partially explain the predominance of range-based indoor localization systems proposed by academia \cite{Zhang2005, Hui2007, Shouhong2009, Yanying2009, Moragrega2010, Macagnano2012, JunlinYan2013}.
Indeed, various accurate and robust distance-based localization algorithms exist, and distance estimates are relatively inexpensive to obtain from radio signals -- via \ac{RSSI}, \ac{ToA} or \ac{PDoA} methods -- without requiring multiple antennas or significant additional RF circuitry.
But again the deployment of this technology falls short of its potential, arguably because ranging quality is severely degraded by interference, so that range-based positioning systems must carefully schedule the collection of ranging information, leading to low refresh rates and high communication costs.
The above rationale points to a curious predicament.
On the one hand, many excellent multipoint \ac{AoA} estimation algorithms exist \cite{Schmidt1986, Barabell1983, FuLi1993, Gershman1999, tuncer2009}, which however are not typically utilised for indoor positioning as multi-antenna systems are too expensive.
On the other hand, many excellent distance-based localization algorithms exist \cite{Zhang2005, Hui2007, Shouhong2009, Yanying2009, Moragrega2010, Macagnano2012, JunlinYan2013}, which however can only be effectively employed for indoor positioning if ranging information can be collected efficiently from multiple sources so as to reduce latency.
The work presented in this article is a proposal to solve the aforementioned impasse.
Specifically, we offer a solution to the multipoint ranging problem based on the same superresolution techniques typically used for AoA estimation.
As shall be explained, however, in this context the ability to handle multiple sources when employing superresolution methods does not stem from the separability of signals through the eigen-properties of mixed covariance matrices, but rather from a robustness to sampling sparsity which, interestingly, is not always enjoyed by such methods in the multi-antenna setting.
This feature suggests that the collection of input data can be optimized by designing such sampling sparsity according to Golomb rulers \cite{Dewdney1985, Rankin1993, Soliday1995, CottaCONSTRAINTS2007}, which however must maintain mutual orthogonality.
The latter is achieved by a new genetic algorithm -- designed under the inspiration of the behaviour of prides of lions -- which enables the construction of multiple orthogonal and equivalent\footnote{Equivalence will be defined more rigorously according to two different criteria.} Golomb rulers.
The performance of the new algorithm to construct Golomb rulers is compared against the state of the art, and shown thereby to outperform all alternatives we could find.
Furthermore, an original \acp{CRLB} analysis of the new strategy is performed, which indicates that in addition to the advantage of enabling simultaneous multipoint ranging, the overall solution achieves remarkable gain in accuracy over current methods.
In summary, our contributions are as follows:
\begin{itemize}
\item[1)] A new multipoint ranging algorithm obtained by adapting superresolution techniques for \ac{ToA} \cite{RainerHach2005, Baba2011, Myungkyun2010, JianXing2007} and \ac{PDoA} \cite{Scherhaufl2013, Povalac2011, Ahmad2006} ranging, under a unified mathematical framework;
\item[2)] A new genetic algorithm that outperforms the best known alternative and enables the construction of multiple orthogonal sets of Golomb rulers of equivalent properties;
\item[3)] A complete \acp{CRLB} analysis of the resulting method, which validates its advantages.
\end{itemize}
\section{Super-resolution {ToA} and {PDoA} Ranging}
\label{Sec:Prelim}
There are three basic methods to estimate the distance between a pair of wireless devices using their signals: \ac{RSSI}, \ac{ToA} and \ac{PDoA}.
Amongst these alternatives, \ac{RSSI}-ranging is known to be the least accurate and least robust \cite{Chandrasekaran2009, Elnahrawy2004}.
In fact, after some early attention due mostly to its inherent low-power potential \cite{Qianqian2011,Keping2013}, \ac{RSSI}-ranging has since lost appeal thanks to the emergence of low-power physical layer standards such as 802.15.4g \cite{IEEE802.11-2011} and 802.11ac \cite{Szulakiewicz2012}, which facilitate the implementation of low-power \ac{ToA} and \ac{PDoA} ranging mechanisms.
In light of the above, we shall focus hereafter on the latter two forms of ranging.
\subsection{ToA-based Two-Way Ranging}
\label{Subsubsec:Twr}
Consider the problem of estimating the distance $d$ between a reference node (anchor) $A$ and a target node $T$ based on ToA measurements.
Using the standard two-way ranging technique \cite{RainerHach2005, Baba2011, Myungkyun2010, JianXing2007}, and assuming that the procedure is executed not a single but multiple times, the $k$-th distance estimate $\hat{d}_k$ of $d$ is computed by
\begin{equation}
\label{Eq:distancetoa}
\hat{d}_k = \Big[\big(\tau_{_{\scriptstyle\textup{RX}:k}} - \tau_{_{\scriptstyle\textup{TX}:k}}\big) - k\cdot\tau_{_{\scriptstyle T}}\Big]\cdot\dfrac{c}{2}
\end{equation}
where $c$ is the speed of light; $\tau_{_{\scriptstyle\textup{TX}:k}}$ and $\tau_{_{\scriptstyle\textup{RX}:k}}$ are respectively the time stamps of the $k$-th packet at transmission and reception back at the anchor; and $\tau_{_{\scriptstyle T}}$ is a fixed and known waiting period observed by the target, for reasons that are beyond\footnote{For instance, $\tau_{_{\scriptstyle T}}$ may be imposed by the frame structure of the underlying communication system.} the ranging process itself.
Since $\tau_{_{\scriptstyle T}}$ is known \emph{a priori} by the anchor, it serves no mathematical purpose and therefore can be assumed to be zero\footnote{Strictly speaking, $\tau_{_{\scriptstyle T}}$ could also be considered a source of ranging errors, since it is subject to jitter (imperfect time-keeping). In practice, however, jitter errors are several orders of magnitude below the timing errors involved in measuring $\tau_{_{\scriptstyle\textup{RX}:k}}$, and therefore can be effectively ignored.} without loss of generality (w.l.g.).
Similarly, before the $k$-th ranging cycle the anchor may in practice hold for a (possibly unequal) waiting period $\tau_{_{\scriptstyle\! A:(k-1)}}$, which however can also be normalized to zero, w.l.g.
Referring to Figure \ref{Fig:ToA_Top}, and considering the latter assumptions on $\tau_{_{\scriptstyle T}}$ and $\tau_{_{\scriptstyle\! A:i}}$ for $i=\{1,\cdots,k-1\}$, equation \eqref{Eq:distancetoa} can then be rewritten as
\begin{equation}
\label{Eq:Mul_two_way_rang}
\hat{d}_k = \left[\underbrace{\big(\tau_{_{\scriptstyle\textup{RX}:k}} -\tau_{_{\scriptstyle\textup{TX}:1}}\big)}_{\Delta \tau_k } - k\cdot\!\!\!\!\!\!\Dcancelto[0]{$\tau_{_{\scriptstyle T}}$} - \sum_{i=1}^{k-1}\!\!\!\!\Dcancelto[0]{$\tau_{_{\scriptstyle\! A:i}}$}\!\right]\cdot\dfrac{c}{2 k} \equiv \Delta \tau_k \cdot\dfrac{c}{2 k}.
\end{equation}
One way to interpret the model described by equation \eqref{Eq:Mul_two_way_rang} is that in a \ac{ToA}-based \ac{TWR} scheme with multiple ranging cycles, the time-difference measurement $\Delta \tau_k$ obtained at the $k$-th cycle has a linear functional relationship with the cycle index $k$, with the proportionality factor determined by the distance $d$ between the target and the anchor, $i.e.$,
\begin{equation}
\label{Eq:TimeDifferenceDistance}
\Delta \tau_k = \omega_d k, \quad \mbox{with} \quad \omega_d = \frac{2d}{c}.
\end{equation}
The convenience of this interpretation of \ac{ToA}-based \ac{TWR} will soon become evident.
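A minimal numerical sketch of the multi-cycle \ac{TWR} model of equation \eqref{Eq:Mul_two_way_rang} is given below; this is our illustration only, and the zero-mean Gaussian timestamp error is an assumed noise model rather than part of the protocol:

```python
import numpy as np

C = 299_792_458.0                      # speed of light [m/s]

def twr_estimates(d, K, sigma_ns, rng):
    """k-th two-way-ranging estimate d_hat_k = dtau_k * c / (2k),
    with tau_T and tau_A normalized to zero and a Gaussian timestamp
    error of sigma_ns nanoseconds (assumed noise model)."""
    k = np.arange(1, K + 1)
    dtau = 2.0 * k * d / C + rng.normal(0.0, sigma_ns * 1e-9, K)
    return dtau * C / (2.0 * k)

rng = np.random.default_rng(0)
d_hat = twr_estimates(10.0, K=64, sigma_ns=1.0, rng=rng)
# a fixed timing error is divided by k, so later cycles are sharper:
# the k-th estimate has standard deviation sigma_ns * c / (2k)
```

The sketch makes the linear relationship $\Delta\tau_k=\omega_d k$ concrete: the accumulated round-trip delay grows linearly in the cycle index, while any fixed timing error is scaled down by $1/k$.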
\subsection{{PDoA}-based Continuous Wave Radar Ranging}
\label{Subsubsec:DFRR}
Consider the problem of estimating the distance $d$ between a reference node (anchor) $A$ and a target node $T$ based on the phases of the signals exchanged between the devices.
One possible mechanism, as illustrated in figure \ref{Fig:PDoA_Top}, is that the anchor $A$ emits a continuous sinusoidal wave of frequency $f$ with a known phase $\varphi_{_{\textup{TX}}}$ and the
target $T$ acts as an active reflector, such that $A$ can measure the phase $\varphi_{_{\textup{RX}}}$ of the returned signal \cite{Scherhaufl2013, Povalac2011, Ahmad2006}.
In this case, the roundtrip distance $2d$ and the phases $\varphi_{_{\textup{TX}}}$ and $\varphi_{_{\textup{RX}}}$ are related by
\begin{equation}
\label{Eq:CWRR}
\varphi = \varphi_{_{\textup{RX}}} - \varphi_{_{\textup{TX}}} = \dfrac{4 \pi d}{c} f - 2\pi N,
\end{equation}
where $N$ is the integer number of complete cycles of the sinusoidal over the distance $2d$.
Obviously the distance $d$ cannot be estimated directly based on equation \eqref{Eq:CWRR} since the quantity $N$ is unknown.
However, taking the derivative of equation \eqref{Eq:CWRR} with respect to $f$ one obtains
\begin{equation}
\label{Eq:CWRRDerivative}
\dfrac{\d \varphi}{\d f} = \dfrac{4 \pi d}{c}.
\end{equation}
Let there be a set of equi-spaced frequencies $\mathbb{F}=\{f_0,\cdots,f_K\}$ such that $\Delta f = f_{k+1} - f_k$ for all $0\leq k < K$,
and assume the roundtrip phases $\varphi_k$ for all $f_k$ are measured.
Then, thanks to the linear relationship between $f$ and $d$ described by equation \eqref{Eq:CWRRDerivative}, it follows that
\begin{equation}
\label{Eq:PhaseDifferenceDistance}
\Delta \varphi_k = \omega_d k, \quad \mbox{with} \quad \omega_d = \frac{4\pi\Delta f d }{c},
\end{equation}
where $\Delta \varphi_k \triangleq \varphi_k - \varphi_0$ for all $1\leq k < K$.
Comparing equations \eqref{Eq:TimeDifferenceDistance} and \eqref{Eq:PhaseDifferenceDistance}, we conclude that both the \ac{ToA}-based \ac{TWR} and the \ac{PDoA}-based \ac{CWRR} methods are mathematically equivalent, in the sense that the measured quantities, respectively $\Delta \tau_k$ and $\Delta \varphi_k$, have a linear relationship with a counter $k$, governed by a slope coefficient $\omega_d$ that is directly and unequivocally related to the desired information $d$.
In light of the models described above, we shall consider for simplicity that we are able to measure quantities $\Delta_k$, such that
\begin{equation}
\label{Eq:GenericRanging}
\Delta_k = \omega_d \cdot k,
\end{equation}
where $\omega_d$ is a coefficient with a constant relationship with $d$.
Notice that trivially due to the linearity of this relationship, we have, for any pair of integers $(k,q)$, with $k > q$,
\begin{equation}
\label{Eq:GenericRangingDifference}
\Delta_k - \Delta_q = \omega_d \cdot (k - q) = \Delta_{k - q}.
\end{equation}
This simple property has a remarkable consequence.
Indeed, consider an ascending sequence of non-negative integers $\mathcal{N}= \{n_1,\cdots,n_K\}$ and the associated set of input measurements $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{N} = \{\Delta_{n_1},\cdots,\Delta_{n_K}\}$.
By virtue of equation \eqref{Eq:GenericRangingDifference}, the set $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{N}$ can be expanded into $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{V} =
\{\Delta_{n_2}\!-\Delta_{n_1},\cdots,\Delta_{n_K}\!-\Delta_{n_1},\cdots,\Delta_{n_K}\!-\Delta_{n_{K-1}}\} = \{\Delta_{n_2-n_1},\cdots,\Delta_{n_K - n_{K-1}}\} = \{\Delta_{\nu_1},\cdots,\Delta_{\nu_M}\}$, where the cardinality $M$ of $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{V}$ is obviously upper bounded by $M \leq \frac{K(K-1)}{2}$.
Other than the much larger cardinality, the sequences $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{V}$ and $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{N}$ have, as far as the purpose of distance estimation is concerned, fundamentally the same nature, since both carry samples of the quantities $\Delta_k$.
In other words, the model described in subsection \ref{Subsubsec:DFRR} allows for large input sets of cardinality $M$ to be obtained from a significantly smaller number $K$ of actual measurements, by carefully designing the feedback intervals or the carrier frequencies required to perform ranging estimates.
Furthermore, the linearity between the measured quantities $\Delta_k$ and the corresponding indexes $k$ is so that such design can be considered directly in terms of the relationship between the integer sequences $\mathcal{N} \to \mathcal{V}$.
Sparse sequences $\mathcal{N}$ that generate optimally expanded equivalents $\mathcal{V}$ are known as \emph{Golomb rulers} and their design under the constraints of our problem is the subject in Section \ref{Sec:Gol_Sr_R}.
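The expansion $\mathcal{N}\to\mathcal{V}$ is easy to compute; the sketch below (our illustration) builds the pairwise-difference set and checks it on the perfect Golomb ruler $\{0,1,4,6\}$, whose $K=4$ marks already yield $K(K-1)/2=6$ distinct lags:

```python
from itertools import combinations

def expand(seq):
    """V: all pairwise differences n_j - n_i (j > i) of an ascending sequence N."""
    return sorted({b - a for a, b in combinations(seq, 2)})

# {0,1,4,6} is a perfect Golomb ruler: every lag 1..6 appears exactly once
assert expand([0, 1, 4, 6]) == [1, 2, 3, 4, 5, 6]
# the sparse sequence {1,3,6,7} expands to the same dense lag set
assert expand([1, 3, 6, 7]) == [1, 2, 3, 4, 5, 6]
```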
Here, however, let us proceed by demonstrating how the aforementioned model enables the straightforward application of superresolution algorithms for \ac{ToA} and \ac{PDoA} ranging.
\vspace{-2ex}
\subsection{Multi-point Ranging via Super-resolution Algorithms}
\label{Subsec:S_PDB_R}
Straightforwardly, assume that a set of input measurements $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{N}$ is collected, from which the associated expanded set $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{V}$ is constructed and consider the corresponding complex vector
\vspace{-3ex}
\begin{equation}
\label{Eq:Steerin_vect}
\mathbf{x} = [e^{j\Delta_{\nu_1}},e^{j\Delta_{\nu_2}},\cdots,e^{j\Delta_{\nu_M}}]^\textup{T} \equiv [e^{j\omega_d},e^{j{\nu_2}\omega_d},\cdots,e^{j{\nu_M}\omega_d}]^\textup{T},
\vspace{-1ex}
\end{equation}
where $^\textup{T}$ denotes transposition and we have normalised ${\nu_1}=1$, without loss of generality.
One can immediately recognize from equation \eqref{Eq:Steerin_vect} the similarity between the vector $\mathbf{x}$ and the steering vector of a linear antenna array \cite{Schmidt1986, Barabell1983, tuncer2009}, with inter-element spacings governed by $\scaleobj{1.3}{\mathbbold{\Delta}}_\mathcal{V}$.
An estimate of the parameter of interest $\omega_d$ can therefore be recovered from the covariance matrix $\mathbf{R}_\mathbf{x} \triangleq \mathbb{E}[\mathbf{x}\cdot\mathbf{x}^\textup{H}]$.
Specifically, under the assumption that each measurement $\Delta_{\nu_m}$ is subject to independent and identically distributed (iid) white noise with variance $\sigma^2$, the covariance matrix $\mathbf{R}_\mathbf{x}$ can be eigen-decomposed to
\vspace{-1ex}
\begin{equation}
\mathbf{R}_\mathbf{x} = \mathbf{U}\cdot\boldsymbol{\Lambda}\cdot\mathbf{U}^\textup{H},
\vspace{-1ex}
\end{equation}
with
\vspace{-1ex}
\begin{equation}
\mathbf{U} =
\begin{pmat}[{|c}]
\mathbf{u}_\mathbf{x} & \mathbf{U}_0,\cr
\end{pmat}
\quad \mbox{and} \quad
\boldsymbol{\Lambda} =
\begin{pmat}[{|c}]
1 + \sigma^2 & \boldsymbol{0}\cr
\-
\boldsymbol{0} & \sigma^2 \mathbf{I}\cr
\end{pmat},
\end{equation}
where $\mathbf{U}_0$ is the $M$-by-$(M\!-\!1)$ basis of the noise subspace of $\mathbf{R}_\mathbf{x}$.
Given the above properties, many superresolution algorithms can be employed to obtain ranging estimates from \ac{ToA} and \ac{PDoA} measurements \cite{Rubsamen2009, Xiong2012, Abdalla2013,Faye2013}.
Since our focus in this article is to demonstrate such possibility, discuss the resulting opportunities to optimize resources, and analyze the corresponding implications on the achievable ranging accuracies, we shall limit ourselves to two explicit classical examples, for the sake of clarity.
One way to obtain an estimate $\hat{\omega}_d$ of $\omega_d$ is via the classic spectral \ac{MUSIC} algorithm \cite{Schmidt1986, Gershman1999, Stoica1990}, where a search for the smallest vector projection onto the noise subspace of $\mathbf{R}_\mathbf{x}$ is conducted, namely
\begin{equation}
\label{Eq:MUSIC}
\hat{\omega}_d = {\rm arg}\max\limits_{\omega_d} \frac{1}{\|\mathbf{e}^\textup{H}\cdot \mathbf{U}_0\|^2}
\quad \mbox{with} \quad \mathbf{e}\triangleq [e^{j\omega_d},e^{j{\nu_2}\omega_d},\cdots,e^{j{\nu_M}\omega_d}]^\textup{T}.
\end{equation}
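For concreteness, a minimal single-source sketch of the spectral search in equation \eqref{Eq:MUSIC} follows; this is our illustration only, and the iid Gaussian phase noise on the measurements is an assumed model:

```python
import numpy as np

def music_omega(snapshots, nus, grid):
    """Spectral-MUSIC search: maximize 1/||e^H U0||^2 over the omega grid."""
    M, T = snapshots.shape
    R = snapshots @ snapshots.conj().T / T          # sample covariance matrix
    _, U = np.linalg.eigh(R)                        # eigenvalues in ascending order
    U0 = U[:, :M - 1]                               # noise subspace (single source)
    spectrum = [1.0 / np.linalg.norm(np.exp(1j * nus * w).conj() @ U0) ** 2
                for w in grid]
    return grid[int(np.argmax(spectrum))]

rng = np.random.default_rng(0)
nus = np.arange(1, 7)                               # expanded lag set {1,...,6}
w_true, T = 0.8, 200
phase_noise = rng.normal(0.0, 0.05, (len(nus), T))
X = np.exp(1j * (np.outer(nus, np.full(T, w_true)) + phase_noise))
grid = np.linspace(0.01, 3.0, 3000)
w_hat = music_omega(X, nus, grid)
```

Under this phase-noise model the ensemble covariance is a rank-one signal term plus a scaled identity, so the noise subspace is orthogonal to the steering vector at $\omega_d$ and the pseudo-spectrum peaks near the true value.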
Alternatively, $\hat{\omega}_d$ can be obtained using the root \ac{MUSIC} algorithm \cite{Barabell1983, tuncer2009, ElKassis2010}, which makes use of the fact that the projection square norm $\|\mathbf{e}^\textup{H}\cdot \mathbf{U}_0\|^2$ defines an equivalent polynomial in $\mathbb{C}$ with coefficients fully determined by the Grammian matrix of the null subspace of $\mathbf{R}_\mathbf{x}$.
Specifically, define the auxiliary variable $z \triangleq e^{j\omega}$ such that $\mathbf{e} = [z,z^{\nu_2},\cdots,z^{\nu_M}]^\textup{T}$, and the two zero-padded vectors
$\mathbf{e}_{_\textup{L}} = [z^{-1},0,\cdots,0,z^{-\nu_2},0,\cdots,0,z^{-\nu_3},0,\cdots,\cdots,0,z^{-\nu_M}]$ and
$\mathbf{e}_{_\textup{R}} = [z,0,\cdots,0,z^{\nu_2},0,\cdots,0,z^{\nu_3},0,\cdots,\cdots,0,z^{\nu_M}]$.
Then we may write
\begin{eqnarray}
\label{Eq:MUSICPolynomial}
P(z) \hspace{-3ex}&& = \|\mathbf{e}^\textup{H}\cdot \mathbf{U}_0\|^2 = \mathbf{e}_{_\textup{L}}\cdot \mathbf{G}\cdot \mathbf{e}_{_\textup{R}}^\textup{T} \equiv \sum_{\nu=0}^{2\nu_M-2} {\rm tr}(\mathbf{G};\nu)\cdot z^{\nu},
\end{eqnarray}
where the last equivalence sign alludes to the multiplication by $z^{\nu_M}$ required to take the algebraic function into a polynomial; $\mathbf{G}$ is a Gramian matrix constructed by zero-padding the matrix $\mathbf{U}_0\cdot\mathbf{U}_0^\textup{H}$, such that the $(m,\ell)$-th element of $\mathbf{U}_0\cdot\mathbf{U}_0^\textup{H}$ is the $(\nu_m,\nu_\ell)$-th element of $\mathbf{G}$; and ${\rm tr}(\mathbf{G};\nu)$ denotes the $\nu$-th trace of the matrix $\mathbf{G}$ -- $i.e.$, the sum of the elements along its $\nu$-th diagonal, counting from the bottom-left to the upper-right corner.
The estimate $\hat{\omega}_d$ can then be obtained by finding the only unit-norm root of $P(z)$, $i.e.$,
\begin{equation}
\label{Eq:RootMUSIC}
\hat{\omega}_d = {\rm arg}\, {\rm sol}\, \Big\{P(z) = 0 \;\Big|\; |z| = 1\Big\}.
\end{equation}
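As an illustration of the spectral search in equation \eqref{Eq:MUSIC}, the sketch below estimates $\omega_d$ from noisy snapshots collected over a non-uniform measure set; the snapshot model, noise level, and grid resolution are illustrative assumptions of ours rather than part of the algorithm itself.

```python
import numpy as np

def music_omega(X, nu, grid):
    """Spectral MUSIC over a non-uniform 'array' with positions nu.
    X: M x T matrix of snapshots; returns the grid point maximising
    the pseudospectrum 1 / ||e(omega)^H U0||^2."""
    M, T = X.shape
    R = (X @ X.conj().T) / T                 # sample covariance
    _, U = np.linalg.eigh(R)                 # eigenvalues in ascending order
    U0 = U[:, :M - 1]                        # noise subspace (single source)
    nu = np.asarray(nu)
    best_om, best_val = grid[0], -np.inf
    for om in grid:
        e = np.exp(1j * om * nu)             # steering vector e(omega)
        val = 1.0 / np.linalg.norm(e.conj() @ U0) ** 2
        if val > best_val:
            best_om, best_val = om, val
    return best_om

# toy setup: omega_d = 0.7 rad, measures of the Golomb ruler {0,1,4,6}
rng = np.random.default_rng(0)
nu, omega_d, T = [1, 2, 3, 4, 5, 6], 0.7, 200
phases = rng.uniform(0.0, 2 * np.pi, T)      # unknown source phase per snapshot
X = np.exp(1j * (omega_d * np.outer(nu, np.ones(T)) + phases))
X = X + 0.05 * (rng.standard_normal((6, T)) + 1j * rng.standard_normal((6, T)))
est = music_omega(X, nu, np.linspace(0.0, np.pi, 2000))
print(est)
```

Note that, since the $\nu_m$ here are linear in the parameter, the peak within $[0,\pi]$ is unique, in line with the unambiguity property discussed below.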
Whatever the specific method used to extract the distance information (embedded in $\hat{\omega}_d$) from the vectors constructed as shown in equation \eqref{Eq:Steerin_vect}, the following properties apply to the superresolution algorithms described above.
\begin{itemize}
\item Superposability: Thanks to the expansions $\mathcal{N} \to \mathcal{V}$, measurement intervals/frequencies corresponding to multiple sources can be superposed without harm.
To exemplify, consider two sources $A$ and $B$ whose measurements are jointly collected according to the sequence $\mathcal{N} = \{1,3,4,5,6,7,8,10\}$, but such that the sources $A$ and $B$ are only active according to the orthogonal sequences $\mathcal{N}_A = \{1,3,6,7\}$ and
$\mathcal{N}_B = \{4,5,8,10\}$.
The samples in $\mathcal{N}_A$ can, however, be transformed into the sequence $\mathcal{V}_A = \{3-1,6-1,7-1,6-3,7-3,7-6\} \equiv \{1,2,3,4,5,6\}$, which contains $6$ samples.
Furthermore and likewise, $\mathcal{N}_B \to \mathcal{V}_B = \{5-4,8-4,10-4,8-5,10-5,10-8\}\equiv \{1,2,3,4,5,6\}$.
In other words, out of only 8 jointly collected samples, 6 \ac{ToA} or \ac{PDoA} (equivalent) measurements from each source are obtained, without interference.
\item Unambiguity: In the case of \ac{AoA} estimation using antenna arrays, the elements of the steering vectors are complex numbers whose arguments are \emph{periodic functions} of the desired parameter, which in turn gives rise to aliasing (ambiguity) of multiple parameter values that lead to the same set of measurements \cite{Song2002, Byungwoo2004, Keller2006, Tayem2012}.
In contrast, in the context hereby the quantities $\Delta_k$ are \emph{linear functions} of the desired parameter $d$, such that no such ambiguity occurs.
\item Separability: Thanks to both properties above, superresolution ranging can be carried out without interference using orthogonal non-uniform sample vectors, each processed by a separate estimator.
Consequently, issues such as correlation amongst multiple signals, which commonly affect superresolution algorithms \cite{Chongying2007, Ariananda2012, Weiziu1991, Wang2002, Tayem2012}, do not exist in the context hereby.
In other words, the application of superresolution algorithms to multipoint ranging is more closely related to Pisarenko's original harmonic decomposition algorithm \cite{Pisarenko1973} than to derivative methods such as \ac{MUSIC}.
\end{itemize}
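The superposability example above can be verified in a couple of lines; the helper below simply recomputes the measure sets of the two activation sequences:

```python
def measures(marks):
    """All pairwise differences n_k - n_l for k > l, sorted."""
    return sorted(b - a for i, a in enumerate(marks) for b in marks[i + 1:])

N_A = [1, 3, 6, 7]
N_B = [4, 5, 8, 10]
print(measures(N_A))  # -> [1, 2, 3, 4, 5, 6]
print(measures(N_B))  # -> [1, 2, 3, 4, 5, 6]
```

Both sources thus yield the full set of six equivalent measurements from only four samples each, with no shared sampling instants.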
\section{Optimization of ToA and PDoA Range Sampling via Golomb Rulers}
\label{Sec:Gol_Sr_R}
Under the mathematical model described in Section \ref{Sec:Prelim}, the optimization of ranging resources amounts to allocating ranging cycles (\ac{ToA}) or frequency pairs (\ac{PDoA}) to multiple sources, which is directly related to the problem of designing Golomb rulers \cite{Rankin1993}.
Golomb rulers are sets of integer numbers that generate, by means of the difference amongst their elements, larger sets of integers, without repetition.
The problem was first studied independently by Sidon \cite{Sidon1932} and Babcock \cite{Babcock1953}, but these special sets are named after Solomon W. Golomb \cite{Golomb1977} as he was the first to popularize their application in engineering.
Before we discuss the design of Golomb rulers for the specific application of interest, it will prove useful to briefly review some of their basic characteristics and features.
\vspace{-3ex}
\subsection{Basic Characteristics and Features of Golomb Rulers}
\label{Subsec:Gol_Bas}
Consider a set of ordered, non-negative integer numbers $\mathcal{N} = \{n_1, n_2, \cdots,n_K\}$, with $n_1 = 0$ and $n_K = N$, wlg\footnote{Since Golomb rulers are invariant to translation, we consider without loss of generality, that the first element is $0$ and the last is $N$. That is slightly different from the representation adopted in subsection \ref{Subsec:S_PDB_R}, but will prove convenient hereafter.}.
This set has cardinality (or \emph{order}) $K$, and it will prove convenient to define the \emph{length} of
the set by its largest element $N$.
Next, consider the corresponding set $\mathcal{V}$ of all possible pairwise differences
\begin{equation}
\label{Eq:Golomb}
\nu_{k\ell} = n_k - n_\ell \quad (1 \leq \ell < k \leq K).
\end{equation}
If the differences $\nu_{k\ell}$ are such that $\nu_{k\ell} = \nu_{pq}$ if and only if (iff) $k=p$ and $\ell=q$, then the set
$\mathcal{N}$ is known as a \emph{Golomb ruler}.
Such sets are thought of as \emph{rulers}, as their elements can be understood as \emph{marks} of a ruler, which can thus \emph{measure} only the lengths indicated by any pair of marks.
In analogy to the latter, we henceforth refer to the set $\mathcal{V}$ as the \emph{measures} set.
It follows from the definition that the number of distinct lengths that can be measured by a Golomb ruler -- in other words, the order of $\mathcal{V}$ -- is equal to $\frac{K(K-1)}{2}$.
The first key feature of a Golomb ruler is therefore that if $\mathcal{N}$ has order $K$, then $\mathcal{V}$ has order $\frac{K(K-1)}{2}$.
A simple example of a Golomb ruler is $\mathcal{N} = \{0,1,4,6\}$, which generates the measures set $\mathcal{V}=\{1,2,3,4,5,6\}$.
In this particular example, $\mathcal{V}$ is \emph{complete}, as it contains all positive integers up to its length, so that the Golomb ruler of order $4$ is said to be \emph{perfect}.
In other words, a perfect ruler allows for \emph{all lengths} to be measured, up to the length of the ruler itself.
Unfortunately, no perfect Golomb ruler exists \cite{Dewdney1985} for $K > 4$.
It is therefore typical to focus on designing rulers that retain another feature of the order-4 Golomb ruler, namely, its compactness or \emph{optimality} in the following senses: $a$) no ruler shorter than $N=6$ can exist that yields $\frac{K(K-1)}{2}=6$ distinct measures; and $b$) no further marks can be added to the ruler without adding redundancy.
In general, these two distinct optimality criteria are defined as
\begin{itemize}
\item[$a$)] \emph{Length optimality}: Given a certain order $K$, the ruler's length $N$ is \emph{minimal};
\item[$b$)] \emph{Density optimality}: Given a certain length $N$, the ruler's order $K$ is \emph{maximal}.
\end{itemize}
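The definitions above translate directly into a short computational check; the helper names `is_golomb` and `is_perfect` are ours, introduced for illustration only:

```python
def is_golomb(marks):
    """A mark set is a Golomb ruler iff all pairwise differences are distinct."""
    diffs = [b - a for i, a in enumerate(marks) for b in marks[i + 1:]]
    return len(diffs) == len(set(diffs))

def is_perfect(marks):
    """Perfect: a Golomb ruler whose measures cover 1..N completely."""
    diffs = {b - a for i, a in enumerate(marks) for b in marks[i + 1:]}
    return is_golomb(marks) and diffs == set(range(1, marks[-1] + 1))

print(is_perfect([0, 1, 4, 6]))  # -> True  (measures {1,...,6})
print(is_golomb([0, 1, 2, 4]))   # -> False (e.g., length 2 is measured twice)
```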
The design of optimum Golomb rulers of higher orders is an NP-hard problem \cite{Apostolos2002, CottaCONSTRAINTS2007, Meyer2009}.
To illustrate the computational challenge involved, the Distributed.net project \cite{distributed2013}, one of the largest distributed computing efforts in the world, has since the year 2000 dedicated a large share of its computing power to finding optimum Golomb rulers of various sizes.
The project took 4 years to compute the optimal Golomb ruler of order 24, and is expected to take 7 years to complete the search for the optimal order-27 ruler!
\subsection{Genetic Algorithm to Design Orthogonal Golomb Rulers}
\label{Subsec:design_golomb_rulers}
Although a few systematic algorithms to generate Golomb rulers do exist \cite{Konstantinos2009},
none of the methods discovered so far is capable of producing rulers that adhere to a specific optimality criterion.
This, allied with the NP-hardness of the problem, makes efficient heuristic techniques the primary method to design Golomb rulers.
Indeed, the optimum rulers of orders 24 to 26 found by the Distributed.net were all obtained using heuristic methods \cite{Konstantinos2009, wiki:Golomb2013}.
Notice moreover that the optimality criteria described above are not necessarily sufficient to satisfy the needs of specific applications.
Due to the aforementioned reasons, heuristic techniques such as constraint programming \cite{Boese1994}, local search \cite{CottaCONSTRAINTS2007}, and evolutionary or genetic algorithms \cite{CarlosCotta2004, Ayari2010} are the standard approach to design Golomb rulers with specific features.
In the context of this article, our interest is to design \emph{orthogonal} Golomb rulers (so as to enable multipoint ranging), that also come as close as possible to satisfying the length and density optimality criteria described in subsection \ref{Subsec:Gol_Bas} (so as to optimise resources).
The orthogonality requirement adds the demand that rulers be designed out of a \underline{predefined} set of available integers $\mathcal{W}$, which to the best of our knowledge is an \emph{unsolved} problem.
In the next subsection we therefore describe a new genetic algorithm to design the required rulers.
The algorithm is a modified version of the technique first proposed in \cite{Soliday1995}, inspired by the behaviour of wild animals that live in small groups, such as \emph{prides of lions}, and incorporates the following components.
\subsubsection{Representation}
\label{Subsubsec:Repre}
Following the framework proposed in \cite{Soliday1995}, Golomb rulers will be represented not by their marks, but by the differences of \emph{consecutive marks}.
That is, let $\mathcal{N} = \{n_1, n_2, \cdots,n_K\}$.
Then this set will be represented by $\mathcal{S} = \{s_1, \cdots, s_{K-1}\}$, where
\begin{equation}
\label{Eq:Repre}
s_i = n_{i+1} - n_i \quad \forall \; i \in \{1,\cdots,K-1\}.
\end{equation}
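For concreteness, the conversion between the marks of a ruler and the segment representation of equation \eqref{Eq:Repre} can be sketched as:

```python
def marks_to_segments(marks):
    """S_p = {s_i} with s_i = n_{i+1} - n_i."""
    return [b - a for a, b in zip(marks, marks[1:])]

def segments_to_marks(segments, start=0):
    """Inverse map: cumulative sums recover the marks (up to translation)."""
    marks = [start]
    for s in segments:
        marks.append(marks[-1] + s)
    return marks

print(marks_to_segments([0, 1, 4, 6]))  # -> [1, 3, 2]
print(segments_to_marks([1, 3, 2]))     # -> [0, 1, 4, 6]
```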
\subsubsection{Initial Population}
\label{Subsubsec:Ini_Pop}
In order to initialize the genetic algorithm an \emph{initial population} of segment sets is needed.
Let $s_{\max}$ be a design parameter describing the largest possible segment in the desired rulers, and consider the \emph{primary set} of segments $\mathcal{S}^* \triangleq \{1,2,\cdots,s_{\max}\}$.
Then, each member $\mathcal{S}_p$ of the initial population is given by a $(K\!-\!1)$-truncation of a uniform random permutation of $\mathcal{S}^*$.
Notice that $s_{\max}$ must be larger than the order $K$ of the desired rulers, and that the larger the difference $s_{\max} - K$, the larger the degrees of freedom available to construct suitable rulers.
An initial population $\mathbb{P}$ of cardinality $P$ can then be defined as a set of $P$ distinct segment sets $\mathcal{S}_p$, that is, $\mathbb{P} \triangleq \{\mathcal{S}_p\}_{p=1}^P$, with $\mathcal{S}_p \neq \mathcal{S}_q$ for all pairs $(p,q)$.
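A possible implementation of this initialization step is sketched below; enforcing distinctness of the population members by rejection is our own choice, and the function assumes $s_{\max} \geq K-1$:

```python
import random

def initial_population(K, s_max, P, rng=random):
    """P distinct segment sets, each a (K-1)-truncation of a uniform
    random permutation of the primary set {1, ..., s_max}."""
    pop = set()
    while len(pop) < P:
        perm = rng.sample(range(1, s_max + 1), s_max)  # random permutation
        pop.add(tuple(perm[:K - 1]))                   # (K-1)-truncation
    return [list(s) for s in pop]

pop = initial_population(K=5, s_max=10, P=4, rng=random.Random(0))
print(len(pop))  # -> 4 distinct candidate segment sets
```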
\subsubsection{Fitness Function}
\label{Subsubsec:Eva}
Once an initial population $\mathbb{P}$ is selected, each candidate ruler $\mathcal{S}_p$ is evaluated according to a fitness function designed to capture how closely it approaches the prescribed features of the desired rulers.
Specifically, in the application of interest Golomb rulers must have length as small as possible for a given order (see optimality criteria in subsection \ref{Subsec:Gol_Bas}); must have all marks belonging to a certain set of admissible marks $\mathcal{W}$; and must have no repeated measures (by definition).
In order to define a suitable fitness function with basis on these criteria, let us denote the set of marks and the measure set corresponding to $\mathcal{S}_p$ respectively by $\mathcal{N}_p$ and $\mathcal{V}_p$.
Next, let $N_p$ and $F_p$ respectively denote the length and the minimum number of marks\footnote{Notice that in order to count $F_p$, all shifts of $\mathcal{N}_p$ within the range $[\min(\mathcal{W}),\max(\mathcal{W})]$ must be considered.} in $\mathcal{N}_p$ that are not in $\mathcal{W}$.
Finally, let $R_p$ be the number of repeated elements in $\mathcal{V}_p$.
Then, the fitness function is defined as
\begin{equation}
\label{Eq:Fitness}
f(\mathcal{S}_p) = N_p \times (R_p + F_p + 1).
\end{equation}
Notice that since randomly selected candidate rulers $\mathcal{S}_p$ are by construction suboptimal, $N_p \geq N$ for all $p$.
Furthermore, the sum $R_p+F_p$ is a non-negative integer, assuming the value $0$ only when no repetitions occur in
$\mathcal{V}_p$ and no marks outside $\mathcal{W}$ can be found in $\mathcal{N}_p$, \emph{simultaneously}.
In other words, the \emph{minimum} value of the fitness function is exactly $N$ and is achieved if and only if the respective candidate is indeed a Golomb ruler satisfying all the conditions required.
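A direct transcription of the fitness function in equation \eqref{Eq:Fitness} is sketched below; the shift search implements the footnoted requirement that all translations of $\mathcal{N}_p$ within $[\min(\mathcal{W}),\max(\mathcal{W})]$ be considered when counting $F_p$:

```python
def fitness(segments, W):
    """f(S_p) = N_p * (R_p + F_p + 1), with W the set of admissible marks."""
    marks = [0]
    for s in segments:
        marks.append(marks[-1] + s)
    N_p = marks[-1]                                    # ruler length
    diffs = [b - a for i, a in enumerate(marks) for b in marks[i + 1:]]
    R_p = len(diffs) - len(set(diffs))                 # repeated measures
    shifts = range(min(W), max(W) - N_p + 1)           # admissible translations
    F_p = min((sum((m + t) not in W for m in marks) for t in shifts),
              default=len(marks))                      # fewest marks outside W
    return N_p * (R_p + F_p + 1)

W = set(range(0, 20))
print(fitness([1, 3, 2], W))  # -> 6: the ruler {0,1,4,6} has R_p = F_p = 0
print(fitness([1, 1, 2], W))  # -> 12: {0,1,2,4} has two repeated measures
```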
\subsubsection{Mutations}
\label{Subsubsec:Mut}
Although the fitness function has the desired property of being minimized only at optimum choices of $\mathcal{S}_p$, the underlying optimization procedure is not analytical, but combinatorial, due to the discreteness of the optimisation space (specifically, the space of all sets of segment sequences with $K-1$ elements).
Therefore, in order to optimize $f(\mathcal{S}_p)$ one needs to search the vicinity of $\mathcal{S}_p$, which is achieved by performing \emph{mutations} over the latter.
There are two distinct types of elementary mutations that can be considered: \emph{transmutation} and \emph{permutation}.
The first refers to the case where one element of $\mathcal{S}_p$ is changed to another value\footnote{Since a segment of length 1 is always required in a Golomb ruler \cite{Boese1994}, $s_i=1$ is never subjected to transmutation.}, while the second refers to a permutation between two segments.
Both types of mutation have similar effects on all the quantities $N_p$, $R_p$, and $F_p$.
But since a candidate sequence $\mathcal{S}_p$ is by definition already a Golomb ruler if $R_p = 0$, mutation is applied to $\mathcal{S}_p$ only if $R_p>0$.
And in that case, only one of the two types of elementary mutations is applied, randomly and with equal probability.
The elementary mutation operator will be hereafter denoted $\mathscr{M}(\cdot)$, and a version of $\mathcal{S}_p$ subjected to a single elementary mutation is denoted $\mathcal{S}^\dagger_p$, such that we may write $\mathcal{S}^\dagger_p = \mathscr{M}(\mathcal{S}_p)$.
A sequence $\mathcal{S}_p$ is replaced by $\mathcal{S}^\dagger_p$ if and only if $f(\mathcal{S}^\dagger_p) < f(\mathcal{S}_p)$.
The mutation step is repeated for every $p$ until an improved replacement of $\mathcal{S}_p$ is found.
The mutation procedure is further iterated over the population $\mathbb{P}$ until at least one candidate sequence $\mathcal{S}_p$ is Golomb, $i.e.$, $R_p = 0$.
If no ruler can be found out of the initial population after a certain number of mutation iterations, the algorithm is restarted with an increased primary set $\mathcal{S}^* \triangleq \{1,2,\cdots,s_{\max}+1\}$.
This process is repeated until a mutated population $\mathbb{P}^\dagger$ is found, which contains at least one Golomb ruler.
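The elementary mutation operator $\mathscr{M}(\cdot)$ can be sketched as below; excluding the mandatory unit segment from transmutation follows the footnote above, while drawing the transmuted value uniformly from $\{2,\ldots,s_{\max}\}$ is a simplifying assumption of ours:

```python
import random

def mutate(segments, s_max, rng=random):
    """One elementary mutation: transmutation or permutation, each with
    probability 1/2 (assumes at least two segments, one of them equal to 1)."""
    s = list(segments)
    if rng.random() < 0.5:
        idx = rng.choice([i for i, v in enumerate(s) if v != 1])
        s[idx] = rng.randint(2, s_max)       # transmutation (never s_i = 1)
    else:
        i, j = rng.sample(range(len(s)), 2)  # permutation of two segments
        s[i], s[j] = s[j], s[i]
    return s

mutant = mutate([1, 3, 2, 7], s_max=10, rng=random.Random(5))
print(len(mutant))  # -> 4: mutations preserve the number of segments
```

In the full algorithm, the mutant would replace its parent only if it attains a strictly lower fitness score.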
\subsubsection{Selection}
\label{Subsubsec:Selection}
As a result of the mutation process described above, $\mathbb{P}^\dagger$ certainly contains one or more Golomb rulers.
Such rulers, however, may still violate the prescribed set of admissible marks $\mathcal{W}$ -- that is, may still have $F_p > 0$ -- and may not have the shortest length desired -- $i.e.$, $N_p > N$.
The optimized Golomb ruler will be obtained via the \emph{evolutionary} process to be described in the sequel, which in turn requires the classification of the rulers in the population according to their function.
Specifically, the sequence $\mathcal{S}_p$ with $R_p = 0$ and the smallest score $f(\mathcal{S}_p)$ will be hereafter referred to as the \emph{dominant male} sequence and denoted $\mathcal{S}_{\male}$.
In other words, defining $\mathbb{P}_{\!\male}^\dagger = \{\mathcal{S}_p| R_p = 0\}$, we have
\begin{equation}
\mathcal{S}_{\male} = \{\mathcal{S}_p \in \mathbb{P}_{\male}^\dagger | f(\mathcal{S}_p) < f(\mathcal{S}_q)\;\forall\; q \neq p\}.
\end{equation}
In turn, all the other remaining sequences will be designated as \emph{female} sequences.
We shall therefore denote\footnote{Notice that this implies that sequences in $\mathbb{P}_{\!\male}^\dagger$ which do not have the smallest score are thereafter relabelled ``female''.} $\mathbb{P}_{\female}^\dagger \triangleq \mathbb{P}^\dagger \setminus \mathcal{S}_{\male}$.
\subsubsection{Evolution}
\label{Subsubsec:Cross}
The evolution of the sequences occurs based on the Darwinian principle of variation via reproduction and selection by survival of the fittest.
Here, reproduction refers to the construction of new sequences via random crossover between the male sequence and any of the female ones, where crossover amounts to the swap of a block of adjacent ``genes'' from $\mathcal{S}_{\male}$ and $\mathcal{S}_{\female}$.
Let us denote the crossover operator as $\mathscr{C}(\cdot,\cdot)$, such that a child of $\mathcal{S}_{\male}$ and the $i$-th female $\mathcal{S}_{\female:i}$ in the population, generated via a single elementary crossover, can be described as $\mathscr{C}(\mathcal{S}_{\male},\mathcal{S}_{\female:i})$.
Then, the population evolves according to the following behaviour:
\begin{itemize}
\item The dominant male reproduces with all females generating the children $\mathscr{C}(\mathcal{S}_{\male},\mathcal{S}_{\female:i})$;
\item If there are any children with no repetition ($R_i=0$) and with fitness function lower than that of
$\mathcal{S}_{\male}$, then the child with the lowest score amongst those takes the place of the dominant male, that is
{\small
\begin{equation}
\mathcal{S}_{\male} \leftarrow \{\mathscr{C}(\mathcal{S}_{\male},\mathcal{S}_{\female:i}) | R_i = 0,\; f(\mathscr{C}(\mathcal{S}_{\male},\mathcal{S}_{\female:i})) < f(\mathcal{S}_{\male}) \;\text{and}\;f(\mathscr{C}(\mathcal{S}_{\male},\mathcal{S}_{\female:i})) < f(\mathscr{C}(\mathcal{S}_{\male},\mathcal{S}_{\female:j})) \;\forall\; j\neq i \};
\end{equation}}
\item All other sequences are considered female, and out of the original females and their children, only the best $P-1$ sequences, $i.e.$, the ones with the lowest scores, remain in $\mathbb{P}^\dagger$.
\end{itemize}
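The reproduction step, a single elementary crossover $\mathscr{C}(\cdot,\cdot)$, admits the following minimal sketch; drawing the block boundaries uniformly at random is our own illustrative choice:

```python
import random

def crossover(male, female, rng=random):
    """The child takes a block of adjacent 'genes' (segments) from the
    female parent and the remainder from the male parent."""
    i, j = sorted(rng.sample(range(len(male) + 1), 2))
    return male[:i] + female[i:j] + male[j:]

male, female = [1, 3, 2, 7], [4, 1, 6, 2]
child = crossover(male, female, random.Random(2))
print(len(child))  # -> 4: the child has the same number of segments
```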
A pseudo-code of the genetic algorithm described above is given in Appendix \ref{Sec:PseudoCode}.
Due to the ``pride of lions'' evolutionary approach employed in the proposed algorithm, convergence to desired rulers is significantly faster than that achieved with the
``giant octopus\footnote{It is known that both the female and the male Pacific giant octopus perish shortly after the hatching of their eggs \cite{howstuffworks:Octopus2012}.}'' approach taken in \cite{Soliday1995}, where both parent sequences are destroyed during the crossover process.
To illustrate the latter, consider the results shown in Table \ref{table:RelError}, which compares the
average relative errors $\eta $ associated with Golomb rulers obtained with Soliday's algorithm \cite{Soliday1995} and the method proposed above, with
\begin{equation}
\eta \triangleq \mathbb{E}\left[\frac{N - N_\textup{opt}}{N_\textup{opt}}\right],
\end{equation}
where $N_\textup{opt}$ is the length of the shortest-possible (optimal) ruler with the same cardinality.
\begin{table}[H]
\centering
\caption{Comparison of Average Relative Error of Golomb Rulers}
\label{table:RelError}
\vspace{-3ex}
{\footnotesize
\hfill{}
\begin{tabular}{| c | c | c | c | c| }
\hline
$K$ & $N_\textup{opt}$ & {Soliday} \cite{Soliday1995} & {Proposed} ($P=2$) & {Proposed} ($P=4$)\\ \hline \hline
5 & 11 & 0.0\% & 0.0\% & 0.0\% \\ \hline
6 & 17 & 0.0\% & 0.0\% & 0.0\% \\ \hline
7 & 25 & 0.0\% & 0.0\% & 0.0\% \\ \hline
8 & 34 & 2.94\% & 0.0\% & 0.0\% \\ \hline
9 & 44 & 0.0\% & 4.6\% & 0.0\% \\ \hline
10 & 55 & 12.7\% & 12.7\% & 9.1\% \\ \hline
11 & 72 & 9.7\% & 11.1\% & 8.33\% \\ \hline
12 & 85 & 21.2\% & 16.5\% & 14.1\% \\ \hline
13 & 106 & 17.0\% & 17.0\% & 15.1\% \\ \hline
14 & 127 & 32.3\% & 23.6\% & 17.3\% \\ \hline
15 & 151 & 36.4\% & 26.5\% & 19.9\% \\ \hline
\end{tabular}}
\hfill{}
\vspace{-3ex}
\end{table}
It is found that even if the population considered during the evolution process is kept to a minimum, replacing parents only by better offspring tends to improve results as $K$ grows.
More importantly, a substantial and consistent improvement is achieved if $P>2$, such that the best (male) ruler can ``reproduce'' with multiple females.
Thanks to the modified fitness function (see equation \eqref{Eq:Fitness} compared to \cite[Eq. (4)]{Soliday1995}), which not only includes a direct term ($i.e.$, $F_p$) to account for the utilisation of forbidden marks, but also is minimized only when sequences are in fact Golomb rulers, the algorithm here proposed is capable of generating any desired number of orthogonal Golomb rulers, provided that $s_{\max}$ is sufficiently large.
This is achieved by subsequent executions of the algorithm, each time with $\mathcal{W}$ reduced by the marks of the rulers already generated.
There are, furthermore, two distinguished ways the resulting Golomb rulers can be grouped together.
One possibility is to group the rulers such that all have the same length $N$, even if with different numbers of marks.
This approach is motivated by the fact that the corresponding array-like vectors (see equation \eqref{Eq:Steerin_vect}) will have the same aperture, which in turn is directly related to the
accuracy of the corresponding distance estimation via superresolution algorithms.
This choice is referred to as Equivalent\footnote{As shall be demonstrated in Section \ref{Sec:CrlbPerf_Anal}, unequal Golomb rulers with the same $K$ and $N$ may still have different \acp{CRLB}.} Ranging Quality (ERQ) grouping.
Another possibility, however, is to group the Golomb rulers with the same cardinality $K$.
This grouping approach is motivated by the fact that, in the context hereby, each mark in the ruler corresponds to a measurement that is taken, and is therefore referred to as Fair Resource Allocation (FRA).
Examples of Golomb rulers obtained with the algorithm described above and grouped according to the ERQ and FRA criteria are listed in Table \ref{table:GolRul}.
It can be observed that, as desired, no two identical numbers can be found in two different rulers within the same group.
It follows that all the rulers of each group can be superimposed without interference and within a maximally compact span\footnote{If a conventional design were employed, the alternative would be to shift each ruler by the length of the latter!}.
To clarify, thanks to the rulers displayed in Table \ref{table:GolRul}, within a block of no more than $100$ cycles/frequencies, multipoint ranging between a source and 5 different anchors can be carried out by taking only $50$ \ac{ToA}/\ac{PDoA} measurements.
Furthermore, this can be achieved either with equivalent ranging quality using the group of ERQ rulers, or with fairly allocated resources using the group of FRA rulers, respectively.
\begin{table}[H]
\center
\caption{Examples of Golomb Rulers with ERQ and FRA Designs.}
\label{table:GolRul}
{\small
\hfill{}
\begin{tabular}{|c|l|c:c||c|l|c:c|}
\hline
$K$ &\multicolumn{1}{c|}{{Equal Ranging Quality}}& $N$& $M$ & $K$ &\multicolumn{1}{c|}{{Fair Resource Allocation}}& $N$& $M$ \\
\hline
\hline
9 & 0,1,7,10,30,41,45,63,87 &87 &36 & 10 & 0,1,16,21,24,49,63,75,81,85 &85 &45 \\
\hline
9 & 2,3,6,32,37,49,56,76,89 &87 &36 & 10 & 2,3,11,32,45,56,60,72,78,92 &90 &45 \\
\hline
10 & 4,5,16,20,33,42,52,66,73,91 &87 &45 & 10 & 5,9,15,29,42,51,68,80,91,96 &91 &45 \\
\hline
11 & 8,9,18,21,38,46,53,72,77,93,95 &87 &55 & 10 & 6,13,17,19,33,43,61,62,84,93 &87 &45 \\
\hline
11 & 12,13,17,25,31,47,68,70,79,96,99 &87 &55 & 10 & 12,14,22,27,28,46,66,73,77,94 &82 &45 \\
\hline
\end{tabular}}
\hfill{}
\vspace{-3ex}
\end{table}
\section{Error Analysis and Comparisons}
\label{Sec:CrlbPerf_Anal}
In this section we analyse the performance of the multipoint ranging approach described above, both with \ac{PDoA} and \ac{ToA} measurements.
To this end, we first derive the Fisher Information Matrices and associated Cramer-Rao Lower Bounds (CRLB) corresponding to the algorithms and later offer comparisons with simulated results.
Since related material on \ac{ToA} can be found more easily \cite{Tao2008,Kaune2012}, we shall consider first the \ac{PDoA} case and offer only a synthesis of the \ac{ToA} counterpart.
\vspace{-2ex}
\subsection{Phase-Difference of Arrival}
\label{Subsubsection:PDoA}
Start by recognising that phase difference measurements subject to errors are circular random variables.
The Central Limit Theorem (CLT) over circular domains establishes that the most entropic ($i.e.$, least assuming) model for circular variables with known mean and variance is the von Mises or Tikhonov distribution \cite{Abreu2008}.
We assume, therefore, that phase measurements are modeled as
\begin{equation}
\label{Eq:Tikhonov_angles}
\hat{\Delta}\varphi \sim P_\mathcal{T}(x;\Delta\varphi,\kappa)
\end{equation}
with
\begin{equation}
\label{Eq:Tikhonov_pdf1}
P_\mathcal{T}(x;\Delta\varphi,\kappa) \triangleq \dfrac{1}{2\pi I_0(\kappa)}\cdot \exp(\kappa\cos(x - \Delta \varphi)),\quad -\pi \leq x \leq \pi,
\end{equation}
where $I_n(\kappa)$ is the $n$-th order modified Bessel function of the first kind and $\kappa$ is a shape parameter which, in the case of phase estimates, is in fact given by the signal-to-noise ratio (SNR) of the input signals, and which relates to the error variance by
\begin{equation}
\sigma^2_{\Delta\varphi} = {1 - \frac{I_1(\kappa)}{I_0(\kappa)}}
\mathrel{\mathop{\kern0pt\xrightarrow{\hspace*{3em}}}\limits_{\kappa >> 1}} {\frac{2}{2\kappa + 1}} \approx \frac{1}{\kappa}.
\end{equation}
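The relation between the shape parameter and the circular variance can be checked numerically; the sketch below draws von Mises (Tikhonov) variates and compares the empirical circular variance against $1 - I_1(\kappa)/I_0(\kappa)$, computing the Bessel functions from their integral representation (the sample size and $\kappa$ are arbitrary choices of ours):

```python
import numpy as np

def bessel_I(n, kappa, steps=100_000):
    """I_n(kappa) = (1/pi) * integral_0^pi cos(n t) exp(kappa cos t) dt,
    evaluated with a midpoint rule."""
    theta = (np.arange(steps) + 0.5) * (np.pi / steps)
    return float(np.mean(np.cos(n * theta) * np.exp(kappa * np.cos(theta))))

kappa = 10.0
rng = np.random.default_rng(0)
samples = rng.vonmises(0.0, kappa, size=200_000)          # Tikhonov variates
var_empirical = 1.0 - abs(np.mean(np.exp(1j * samples)))  # circular variance
var_theory = 1.0 - bessel_I(1, kappa) / bessel_I(0, kappa)
print(abs(var_empirical - var_theory) < 0.005)  # the two agree closely
```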
Consider then that a set of $K$ independent measurements $\{\Delta \varphi_k\}_{k\in\mathcal{N}}$ is collected according to a Golomb ruler $\mathcal{N}$, such that the samples can be expanded into an augmented set of $M$ samples $\{\Delta \varphi_m\}_{m\in\mathcal{V}}$, with
\begin{equation}
\label{Eq:DeltaPhiGolomb}
\Delta \varphi_m = \Delta \varphi_k - \Delta \varphi_\ell = \omega_d (k - \ell) = \omega_d \nu_m, \quad \mbox{for}\quad k > \ell\quad \mbox{and}\quad (k,\ell)\to m,
\end{equation}
where each index $m$ corresponds to a pair $(k,\ell)$ with $k>\ell$ and ascending differences\footnote{Notice that this is ensured without ambiguity thanks to the fact that $\mathcal{N}$ is a Golomb ruler.}, and we commit a slight abuse of notation compared to equation \eqref{Eq:PhaseDifferenceDistance}, since $\nu_m$ is a positive integer obtained from the difference $k-\ell$, such that $\nu_m \neq m$.
At this point it is worthy of mention that although the expanded samples $\Delta \varphi_m$ are actually differences of phase differences, these quantities not only preserve the linear relationship with the parameter of interest but also their independence.
As a result of the double-differences, however, the error variance of $\Delta \varphi_m$ from equation \eqref{Eq:DeltaPhiGolomb} is twice that of $\Delta \varphi_k$ from equation \eqref{Eq:PhaseDifferenceDistance}, so that the effective SNR is halved.
In light of the asymptotic relationship between the variance and the shape parameter, it follows that the $\kappa$ associated with the $\Delta \varphi_m$'s is half that of the original measurements.
Using the model above, and incorporating the optimised sampling via Golomb ruler, the likelihood function associated with $M$ independent measurements as per equation \eqref{Eq:PhaseDifferenceDistance} becomes,
\begin{equation}
\label{Eq:Likelihood}
L_\mathcal{T}(\hat{d};\Delta f,\kappa) = \prod_{m = 1}^{M} P_\mathcal{T}(x;\Delta\varphi_m ,\kappa/2) = \displaystyle \dfrac{1}{(2\pi I_0(\kappa/2))^M} \prod_{m = 1}^{M}
\exp\left[\frac{\kappa}{2}\cos\left(\frac{4\pi\Delta f}{c} \nu_m\cdot(\hat{d} - d)\right)\right],
\end{equation}
where $\nu_m \in \mathcal{V}$ and we have slightly modified the notation in order to emphasise the quantity and parameter of interest $d$.
For future convenience, let us define $\alpha = \frac{4\pi\Delta f}{c}$. Then the associated log-likelihood function is
\begin{equation}
\label{Eq:Loglikelihood}
\ln L_\mathcal{T}(\hat{d};\Delta f,\kappa) = -M \ln {2\pi I_0(\kappa/2)} + \frac{\kappa}{2} \sum_{m=1}^{M}
\cos\left(\alpha\cdot \nu_m\cdot (\hat{d} - d)\right),
\end{equation}
and its Hessian becomes
\begin{equation}
\label{Eq:Hess_loglikehood}
\dfrac{\partial ^2 \ln L_\mathcal{T}(\hat{d};\Delta f,\kappa)}{\partial \hat{d}^2} =
- \frac{\alpha^2\kappa}{2}\sum_{m=1}^{M} \nu_m^2\cos\left(\alpha\cdot \nu_m\cdot(\hat{d} - d)\right).
\end{equation}
The Fisher Information is the negated expectation of the Hessian, thus,
\begin{equation}
J(\mathcal{V};\Delta f,\kappa) = -\mathbb{E} \left[\dfrac{\partial ^2 \ln L_\mathcal{T}(\hat{d};\Delta f,\kappa)}{\partial \hat{d}^2}\right] = \frac{\alpha^2\kappa}{2}\sum_{m=1}^{M} \nu_m^2\mathbb{E}\left[\cos\left(\alpha\cdot \nu_m\cdot(\hat{d} - d)\right)\right],
\end{equation}
where the notation alludes to the fact that the key input determining the Fisher Information is the set of measures $\mathcal{V}=\{\nu_1,\cdots,\nu_M\}$.
Next, recognise that each term $\alpha\cdot \nu_m\cdot(\hat{d} - d)$ is in fact a centralized circular variate with the same distribution $P_\mathcal{T}(x;0,\kappa/2)$, regardless of $m$.
Then, substituting $\alpha\cdot \nu_m\cdot(\hat{d} - d)$ with $\theta$, we obtain
\begin{align}
J(\mathcal{V};\Delta f,\kappa) & = \frac{\alpha^2\kappa}{2}
\sum_{m=1}^{M} \nu_m^2\mathbb{E}\left[\cos\theta\right] = \frac{\alpha^2\kappa}{2} \sum_{m=1}^{M} \dfrac{\nu_m^2}{I_0(\kappa/2)}\underbrace{\frac{1}{\pi}\int\limits_{0}^{\pi} \cos\theta \exp\left(\frac{\kappa}{2}\cos \theta\right) \,\d \theta}_{I_1(\kappa/2)}\nonumber\\[-4ex]
& = \frac{\alpha^2\kappa}{2} \dfrac{I_1(\kappa/2)}{I_0(\kappa/2)}\sum_{m=1}^{M} \nu_m^2,
\end{align}
where the integration limits in the integral above follow from the evenness of the function $\cos(\theta)\exp(\frac{\kappa}{2}\cos\theta)$, and the last equality results from the integral solution found in \cite[Eq. 9.6.19, pp. 376]{MyListOfPapers:Abramowitz1965}.
Since the above Fisher Information is a scalar, the CRLB is obtained directly by taking its inverse, $i.e.$,
\begin{equation}
\label{Eq:Crlbrang1}
\text{CRLB}_\textup{PDoA}(\mathcal{V};\Delta f,\kappa) = \frac{1}{J(\mathcal{V};\Delta f,\kappa)}.
\end{equation}
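For numerical evaluation, the bound in equation \eqref{Eq:Crlbrang1} can be coded directly; the frequency step, $\kappa$, and ruler below are illustrative values of our choosing:

```python
import numpy as np

def bessel_ratio(z, steps=100_000):
    """I_1(z)/I_0(z) via the integral form of the modified Bessel functions."""
    theta = (np.arange(steps) + 0.5) * (np.pi / steps)
    w = np.exp(z * np.cos(theta))
    return float(np.mean(np.cos(theta) * w) / np.mean(w))

def crlb_pdoa(measures, delta_f, kappa, c=3e8):
    """CRLB for PDoA ranging over the measure set V (inverse of J)."""
    alpha = 4 * np.pi * delta_f / c
    J = (alpha ** 2 * kappa / 2) * bessel_ratio(kappa / 2) * sum(v ** 2 for v in measures)
    return 1.0 / J

# illustrative numbers: Delta f = 1 MHz, kappa = 100, ruler {0,1,4,6} expanded
sigma_d = float(np.sqrt(crlb_pdoa([1, 2, 3, 4, 5, 6], 1e6, 100.0)))
print(sigma_d)
```

With these particular numbers the bound corresponds to a ranging standard deviation of a few tens of centimetres.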
Before proceeding to the \ac{ToA} case, some discussion on the analytical results offered above is in order.
First, let us emphasise that given a set of phase difference measurements $\{\Delta\varphi_{n_k}\}_{k=1}^{K}$, with $n_k\in \mathcal{N}$, one always has the \emph{option} of either exploiting the properties of the Golomb ruler $\mathcal{N}$ to expand to a set of measurements $\{\Delta\varphi_{\nu_m}\}_{m=1}^{M}$, or not.
In case such option is \emph{not} adopted, the associated Fisher Information and CRLB can obviously be obtained exactly as done above, but with $\kappa$ replacing $\kappa/2$ and $\mathcal{N}$ replacing $\mathcal{V}$.
That is,
\begin{equation}
\label{Eq:CrlbrangNotExpanded}
J(\mathcal{N};\Delta f,\kappa) = {\alpha^2\kappa} \dfrac{I_1(\kappa)}{I_0(\kappa)}\sum_{k=1}^{K} n_k^2 \quad \Longleftrightarrow \quad \text{CRLB}_\textup{PDoA}(\mathcal{N};\Delta f,\kappa) = \frac{1}{J(\mathcal{N};\Delta f,\kappa)}.
\end{equation}
Comparing these expressions, it can be readily seen that the choice of adopting the Golomb approach on the one hand subjects the resulting double-phase-differences to twice the noise, but on the other hand expands the number of terms in the summation.
In principle, the optimum choice between these options therefore depends on the ruler $\mathcal{N}$ and its order $K$, and the associated $\mathcal{V}$ and $M$, as well as $\kappa$.
As can be seen in Figure \ref{Fig:crlb_kappa}, for instance, the ruler $\mathcal{N}=\{0,1,4,6\}$ yields superior results compared to its associated measure set $\mathcal{V}=\{1,2,3,4,5,6\}$, because the loss of 3dB (implied by $\kappa \to \kappa/2$) incurred by the latter is not compensated by the increase in the sum of squares achieved by using $\mathcal{V}$ instead of $\mathcal{N}$.
For larger rulers, however, the advantage of expanding the rulers quickly becomes significant, thanks to the quadratic growth of $M$ with respect to $K$.
A ruler of order 6, $e.g.$, $\mathcal{N}=\{0,1,4,10,12,17\}$, already achieves better performance when expanded into
$\mathcal{V}=\{1,\cdots,17\}$ than otherwise, for $\sigma_{\Delta_\varphi} \leq 0.22$.
Likewise, the expanded version of the order-10 ruler $\mathcal{N}=\{0,1,16,21,24,49,63,75,81,85\}$ is superior up to $\sigma_{\Delta_\varphi} \leq 0.65$ -- which incidentally defines essentially the entire range of interest --
and finally the expanded ruler of order-20 is always superior, for any $\sigma_{\Delta_\varphi}$.
In summary, it can be said that applying the Golomb expansion leads to superior results, as long as the ruler is large enough and $\sigma_{\Delta_\varphi}$ is in the region of interest.
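The trade-off above can be checked numerically. The sketch below (illustrative Python; the modified Bessel functions are implemented via their power series, and $\alpha$ is left as a free constant since it cancels in the comparison) evaluates the Fisher Information of the expanded and non-expanded options for the two rulers discussed in the text, in the low-noise (large $\kappa$) regime:

```python
import math

def bessel_i(n, x, terms=300):
    # Modified Bessel function of the first kind I_n(x), via its power series.
    term = (x / 2.0) ** n / math.factorial(n)
    total = term
    for k in range(1, terms):
        term *= (x / 2.0) ** 2 / (k * (k + n))
        total += term
    return total

def fisher_pdoa(marks, kappa, alpha=1.0):
    # J = alpha^2 * kappa * I1(kappa)/I0(kappa) * sum of squared marks.
    ratio = bessel_i(1, kappa) / bessel_i(0, kappa)
    return alpha ** 2 * kappa * ratio * sum(m ** 2 for m in marks)

kappa = 100.0  # low-noise regime
# Order-4 ruler: using N directly beats expanding to V (the 3 dB loss dominates).
print(fisher_pdoa([0, 1, 4, 6], kappa) >
      fisher_pdoa(range(1, 7), kappa / 2))          # True
# Order-6 ruler: the expanded measure set wins.
print(fisher_pdoa(range(1, 18), kappa / 2) >
      fisher_pdoa([0, 1, 4, 10, 12, 17], kappa))    # True
```

For the order-4 ruler the relevant sums of squares are $53$ versus $91$, which does not compensate the halving of $\kappa$, whereas for the order-6 ruler they are $550$ versus $1785$, which does.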
\subsection{Time of Arrival}
\label{Subsubsection:ToA}
Due to the similarity of the \ac{ToA} and \ac{PDoA} ranging models described in Section \ref{Sec:Prelim}, the Fisher Information and CRLB for \ac{ToA}-ranging with Golomb rulers are very similar to those given above for the \ac{PDoA} case.
For the sake of brevity, we therefore offer here only a succinct derivation.
Assuming that the errors on the time-of-arrival estimates are Gaussian-distributed, we have
\begin{equation}
\label{Eq:time_arrival}
\hat{\Delta}\tau \sim P_\mathcal{G}(x;\Delta\tau,\sigma_{\Delta\tau}^2) = \dfrac{1}{\sqrt{2\pi} \sigma_{\Delta\tau}}
\exp\left(-\frac{(x - \Delta\tau)^2}{2\sigma_{\Delta\tau}^2}\right),
\end{equation}
such that the likelihood function, the log-likelihood function, its Hessian and the Fisher Information, already considering the expansion $\mathcal{N}\to\mathcal{V}\Rightarrow \sigma_{\Delta\tau}^2 \to 2\sigma_{\Delta\tau}^2$ and emphasising the quantities of interest, become
\begin{eqnarray}
\label{Eq:LikelihoodGaussian}
&L_\mathcal{G}(\hat{d};\sigma^2_{\Delta\tau}) = \prod_{m = 1}^{M} P_\mathcal{G}(\hat{d};\Delta\tau_m,2\sigma_{\Delta\tau}^2) =
\displaystyle \dfrac{1}{(4\pi\sigma_{\Delta\tau}^2)^{M/2}} \prod_{m = 1}^{M}
\exp\left(-\frac{\nu_m^2}{c^2\sigma_{\Delta\tau}^2}(\hat{d} - d)^2\right),&\nonumber\\
&\ln L_\mathcal{G}(\hat{d};\sigma^2_{\Delta\tau}) = -\frac{M}{2} \ln 4\pi\sigma^2_{\Delta\tau} - \dfrac{1}{c^2\sigma^2_{\Delta\tau}}\displaystyle\sum_{m=1}^{M} \nu_m^2 (\hat{d}-d)^2,&\nonumber\\
&\dfrac{\partial ^2 \ln L_\mathcal{G}(\hat{d};\sigma^2_{\Delta\tau})}{\partial \hat{d}^2} = -\dfrac{2}{c^2\sigma^2_{\Delta\tau}}\displaystyle\sum_{m=1}^{M} \nu_m^2 \quad\Longrightarrow\quad J(\mathcal{V};\sigma^2_{\Delta \tau}) = \dfrac{2}{c^2\sigma^2_{\Delta\tau}}\displaystyle\sum_{m=1}^{M} \nu_m^2.& \label{Eq:FisherToAGolomb}
\end{eqnarray}
As discussed above, if the measurements taken at the Golomb marks are instead used without taking their differences, the associated noise process has half the variance, such that
\begin{equation}
J(\mathcal{N};\sigma^2_{\Delta \tau}) = \dfrac{4}{c^2\sigma^2_{\Delta\tau}}\displaystyle\sum_{k=1}^{K} n_k^2.\label{Eq:FisherToANoGolomb}
\end{equation}
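As a sketch (with illustrative parameter values, and with the non-expanded sum taken over the ruler marks $n_k$, mirroring the \ac{PDoA} case), the two \ac{ToA} Fisher Informations can be compared numerically. Note that, unlike in the \ac{PDoA} case, the Gaussian model makes the comparison independent of the noise level:

```python
# Illustrative comparison of the ToA Fisher Informations J(V) and J(N):
# the expanded double differences carry twice the noise variance, hence
# the factor 2 instead of 4.
C = 299_792_458.0  # speed of light in m/s

def fisher_toa(marks, sigma_tau, expanded):
    factor = 2.0 if expanded else 4.0
    return factor * sum(m ** 2 for m in marks) / (C ** 2 * sigma_tau ** 2)

ruler = [0, 1, 4, 10, 12, 17]   # order-6 Golomb ruler from the text
measure_set = range(1, 18)      # expanded measure set {1, ..., 17}
sigma = 1e-9                    # 1 ns timing noise (arbitrary choice)
j_v = fisher_toa(measure_set, sigma, expanded=True)
j_n = fisher_toa(ruler, sigma, expanded=False)
# The ratio is 2*1785 versus 4*550, independent of sigma.
print(j_v > j_n)   # True: expansion wins for this ruler
```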
\subsection{Simulations and Comparison Results}
\label{Sec:Results}
Let us finally study the performance of the proposed multipoint ranging technique by means of simulations and comparisons with the corresponding CRLBs derived above.
For the sake of brevity, we will consider only \ac{PDoA} ranging as all results obtained with the \ac{ToA} approach are equivalent.
First, consider Figure \ref{Fig:Stra_musrmus_freqkappa}, where the performances of two classic superresolution algorithms -- namely the MUSIC and Root-MUSIC algorithms briefly described in Subsection \ref{Subsec:S_PDB_R} -- are compared against the CRLB derived in Subsection \ref{Subsubsection:PDoA}.
Plots are shown both as a function of $K$ for various $\sigma_{\Delta\varphi}$ and vice-versa, and for the sake of having a practical reference, we also include results obtained by simply averaging the distance estimates corresponding to all independent samples.
We emphasise that in this figure \underline{no Golomb ruler is used}.
Instead, a sequence of $K$ consecutive samples is collected for each range estimate, as typically assumed in existing work \cite{Chien-Sheng2012, Junyang2012}.
One fact learned from these plots -- particularly visible in Figure \ref{Fig:Stra_musrmus_freq} -- is that without the efficient use of samples made possible by the Golomb ruler approach here proposed, superresolution algorithms require a large number of samples in order to reach the CRLB, which is a problem since energy consumption and latency are directly related to the number of samples collected.
Another fact of relevance that can be learned, however, is that although superresolution methods do improve on a ``naive'' average-based estimator, that gain in itself is not that significant unless the number of samples $K$ is rather large.
This is highlighted in Figure \ref{Fig:Stra_musrmus_kappa}, where it is seen that with $K=10$, the simple average-based algorithm has essentially the same performance as MUSIC.
The results above emphasize the significance of our contribution, by demonstrating that the efficient utilisation of samples is fundamental to reap the true potential performance of superresolution algorithms.
This is further illustrated in Figure \ref{Fig:Gol_rmus_freq}, where it can be seen that, thanks to the Golomb sampling, superresolution algorithms come much closer to the CRLB even with a relatively small number of samples.
Considered in conjunction with the results of Figure \ref{Fig:crlb_kappa}, it can generally be said that a Golomb-optimised scheme with a total of 10 samples, taken at frequencies corresponding to a suitably designed Golomb ruler $\mathcal{N}$ expanded into the associated measure set $\mathcal{V}$, followed by MUSIC estimation, is an excellent choice for \ac{PDoA} ranging.
In fact, as illustrated by Table \ref{table:GolRul}, such a choice also allows for an easy design of various orthogonal Golomb rulers, such that multipoint ranging can be efficiently performed.
But since in this case a choice needs to be made between the ERQ and FRA ruler allocation approaches, a fair question in this context is how the corresponding choices perform.
This is addressed in Figure \ref{Fig:Multipoint}, where the average performances of ERQ and FRA multipoint ranging schemes employing the rulers shown in Table \ref{table:GolRul} are compared against the corresponding CRLBs.
The figure shows that in fact both approaches have similar performances relative to one another and relative to the CRLBs.
\section{Conclusions}
\label{Sec:Conclusion}
We offered an efficient and accurate solution to the multipoint ranging problem, based on an adaptation of superresolution techniques, with optimised sampling.
Specifically, using as examples the specific cases of \ac{ToA} and \ac{PDoA}, unified under the same mathematical framework, we constructed a variation of the MUSIC and Root-MUSIC algorithms to perform distance estimation over sparse sample sets determined by Golomb rulers.
The design of the mutually orthogonal sets of Golomb rulers required by the proposed method -- a problem that finds no solution in the current literature -- was shown to be achievable via a new genetic algorithm, which was also shown to outperform the best known alternative when used to generate optimal rulers.
A \ac{CRLB} analysis of the overall optimised multipoint ranging solution was performed which, when compared against simulated results, quantified the substantial gains achieved by the proposed technique.
\section{Acknowledgements}
This work has been performed within the framework of the FP7 European Union Project BUTLER (grant no. 287901).
\newpage
\appendices
\section{}
\label{Sec:PseudoCode}
\vspace{-1ex}
\begin{algorithm}[H]
{\small
\caption{- Golomb Ruler Generation Algorithm}\label{gen_algo}
\begin{algorithmic}
\State $\mathcal{W} \longleftarrow$ Set of forbidden marks (given)
\vspace{-1ex}
\State $K \longleftarrow$ Desired order of the ruler (given)
\vspace{-1ex}
\State $C \longleftarrow$ Maximum number of mutations (given)
\vspace{-1ex}
\State $G \longleftarrow$ Maximum number of generations (given)
\vspace{-1ex}
\State $s_{\textup{max}}:= K-1$
\vspace{-1ex}
\While{$\nexists\; \mathcal{S}_p | f(\mathcal{S}_p) = K\frac{(K-1)}{2}$}
\vspace{-1ex}
\State $\mathcal{S}^* := \{1,2,\cdots,s_{\textup{max}}\}$
\vspace{-1ex}
\For{$p:= 1 \to P$}
\vspace{-1ex}
\State count $\leftarrow 0$
\vspace{-1.5ex}
\State $\mathcal{S}_p \leftarrow$ randomly select $K-1$ elements of $\mathcal{S}^*$
\vspace{-1.5ex}
\State $\mathcal{S}_p \leftarrow$ randomly permute the elements of $\mathcal{S}_p$
%
\vspace{-1ex}
\While{count $\leq C$}
\vspace{-1ex}
\State $\mathcal{S}^\dagger_p = \mathscr{M}(\mathcal{S}_p)$.
\vspace{-1ex}
%
\If{$f(\mathcal{S}^\dagger_p) < f(\mathcal{S}_p)$}
\vspace{-1ex}
\State $\mathcal{S}_p \leftarrow \mathcal{S}^\dagger_p$
\vspace{1ex}
\EndIf{\bf end if}
\vspace{-1.5ex}
\State count $\leftarrow$ count + $1$
\vspace{1ex}
\EndWhile{\bf end while}
\vspace{1ex}
\EndFor{\bf end for}
\vspace{-1.5ex}
\State $\mathbb{P}^\dagger = \{\mathcal{S}_p\}^{P}_{p=1}$
%
\vspace{-1ex}
\If{$\nexists\; \mathcal{S}_p | R_p=0$}
\vspace{-1ex}
\State $s_{\textup{max}} \leftarrow s_{\textup{max}}+1$
\vspace{-1.5ex}
\State restart
\vspace{-1.5ex}
\Else
\vspace{-1.5ex}
%
\State $\mathbb{P}_{\!\male}^\dagger \leftarrow \{\mathcal{S}_p| R_p = 0\}$
\vspace{-1ex}
\State $\mathcal{S}_{\male} =
\{\mathcal{S}_p \in \mathbb{P}_{\male}^\dagger | f(\mathcal{S}_p) <f(\mathcal{S}_q)\;\forall\; q \neq p\}$
\vspace{-1ex}
\State $\mathbb{P}_{\female}^\dagger \leftarrow \mathbb{P}^\dagger \setminus \mathcal{S}_{\male}$
\vspace{1ex}
\EndIf{\bf end if}
\vspace{-1.5ex}
%
\State count $\leftarrow$ 0
\vspace{-1.5ex}
\While{count $<$ $G$ {\bf and} $\nexists\; \mathcal{S}_p | f(\mathcal{S}_p) = K\frac{(K-1)}{2}$}
\vspace{-1ex}
\For{$p := 1 \to P-1$}
\vspace{-1ex}
\State $\mathcal{S}^\dagger_p \leftarrow \mathscr{C}(\mathcal{S}_p,\mathcal{S}_{\male})$
\vspace{-1ex}
\If{$f(\mathcal{S}^\dagger_p) < f(\mathcal{S}_p)$}
\vspace{-1ex}
\State $\mathcal{S}_p \leftarrow \mathcal{S}^\dagger_p$
\vspace{-1ex}
\ElsIf{$f(\mathcal{S}^\dagger_p) < f(\mathcal{S}_{\male})$}
\vspace{-1ex}
\State $\mathcal{S}_{\male} \leftarrow \mathcal{S}^\dagger_p$
\vspace{1ex}
\EndIf{\bf end if}
\vspace{1ex}
\EndFor{\bf end for}
\vspace{1ex}
\EndWhile{\bf end while}
\vspace{1ex}
\EndWhile{\bf end while}
\vspace{-1ex}
\Function{Fitness Function}{}
\vspace{-1ex}
\State $N_p \leftarrow$ length of $\mathcal{N}_p$ associated to $\mathcal{S}_p$ (input)
\vspace{-1ex}
\State $R_p \leftarrow$ number of repeated elements in $\mathcal{S}_p$ (input)
\vspace{-1ex}
\State $F_p \leftarrow$ number of forbidden marks in $\mathcal{N}_p$ (input)
\vspace{-1ex}
\State $f(\mathcal{S}_p) \leftarrow N_p \times (R_p + F_p + 1)$.
\vspace{-1ex}
\State \Return $f(\mathcal{S}_p)$
\vspace{0.5ex}
\EndFunction{\bf end function}
\end{algorithmic}
}
\end{algorithm}
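For concreteness, the fitness function above can be sketched in Python, following the pseudocode literally: a candidate $\mathcal{S}_p$ is a list of segment lengths, the marks of $\mathcal{N}_p$ are its cumulative sums, $R_p$ counts repeated segments and $F_p$ counts forbidden marks. The names and the forbidden-set encoding are illustrative only.

```python
from itertools import accumulate

def fitness(segments, forbidden=frozenset()):
    # Marks of the ruler N_p associated with the candidate S_p.
    marks = [0] + list(accumulate(segments))
    length = marks[-1]                               # N_p: ruler length
    repeats = len(segments) - len(set(segments))     # R_p: repeated segments
    hits = sum(1 for m in marks if m in forbidden)   # F_p: forbidden marks
    return length * (repeats + hits + 1)

print(fitness([1, 3, 2]))                 # order-4 ruler {0,1,4,6} -> 6
print(fitness([1, 1, 2]))                 # repeated segment is penalised -> 8
print(fitness([1, 3, 2], forbidden={4}))  # forbidden mark 4 is penalised -> 12
```

For a repetition-free, forbidden-mark-free candidate the fitness reduces to the ruler length; for the order-4 example above this equals $K(K-1)/2 = 6$, matching the termination condition of the pseudocode.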
\newpage
\section{Introduction}
\label{sec:intro}
The charged particles produced in high energy particle collisions are the result of hard and soft interactions.
The hard processes are well described by perturbative Quantum Chromodynamics while the soft processes, which occur at low momentum and are the bulk of the interactions, are non-perturbative and, therefore, difficult to describe.
This necessitates the use of effective models to characterize these processes.
The models must be verified by (and possibly tuned to) experimental results.
Therefore, characterization of the properties of the distributions of the produced particles is essential for understanding the soft processes involved in the collisions; these are also important for understanding the hard processes, as they affect the underlying event.
In this paper we focus on the phenomenon called forward-backward particle multiplicity correlations (or forward-backward correlations for short)~\cite{Sjostrand:1987su} to shed light on these soft processes.
Forward-backward correlations measure the correlation strength between the number of particles produced in regions located in opposite hemispheres separated by the plane perpendicular to the beam axis intersecting the collision point.
The regions are typically equidistant (angularly) from the plane perpendicular to the beam axis and probe the forward and backward rapidities where most of the particle production is expected.
This measurement has the advantage that it is mostly influenced by the dynamics of the collision rather than the following hadronization processes~\cite{Hwa:2007sq}.
Models implement the underlying processes in these collisions in different ways.
In Pythia, three main processes exist which affect forward-backward correlations~\cite{Sjostrand:1987su,Wraight:2011ej}.
The first process comprises hard scatterings which generally produce forward-backward correlations limited to small angular separations.
The second process is initial state radiation, the emission of gluons at early times during the interaction, which generally causes forward-backward correlations with larger angular separations.
The third process is multiple parton interactions which is an effective many-body QCD interaction that causes forward-backward correlations with the largest angular separations.
Various tunes of Pythia arise with different contributions from these processes to particle production~\cite{Skands:2009zm}.
To investigate which tune more accurately describes reality, one needs to either measure forward-backward correlations with large angular separations (where the net effect of the different contributions is most pronounced) or with high accuracy and precision.
Large angular separations are often beyond the design of experiments.
High accuracy and precision require advanced techniques to ensure minimal detector bias and are investigated here.
While different measures exist for characterizing forward-backward correlations, in this paper we focus only on the Pearson correlation factor, which we denote as $b$.
This correlation factor is defined as:
\begin{IEEEeqnarray}{rCl}
b &\equiv& \textrm{Cor}(N_f,N_b)=\frac{\textrm{Cov}(N_f,N_b)}{\sqrt{\textrm{Var}(N_f) \cdot \textrm{Var}(N_b)}} \nonumber \\
&=& \frac{\langle N_fN_b \rangle - \langle N_f \rangle\langle N_b \rangle}
{\sqrt{(\langle N_f^2 \rangle - \langle N_f \rangle^2) \cdot (\langle N_b^2 \rangle - \langle N_b \rangle^2)}}
\label{eq:b}
\end{IEEEeqnarray}
where $N_f$ and $N_b$ are the number of particles produced in the regions in the forward and backward hemispheres, respectively.
One important property of the Pearson correlation factor is that it is a bound quantity.
It can be shown that $-1 \leq b \leq 1$~\cite{soegaardPhD} and that $b$ does not scale with the multiplicity of the event.
This property arises from the denominator of $b$, which is the square root of the product of the forward and backward multiplicity variances.
\begin{figure}
\includegraphics[width=0.32\textwidth]{corr_1_0_dist.pdf}
\includegraphics[width=0.32\textwidth]{corr_0_6_dist.pdf}
\includegraphics[width=0.32\textwidth]{corr_0_0_dist.pdf}
\caption{The figures depict three sets of forward-backward multiplicity pairs with $b = 1, 0.6, \textrm{and } 0$ from left to right.
The variances (used in the denominator of $b$) are the same for $N_f$ and $N_b$ in all three cases.
This demonstrates that the correlation information is essentially contained in the covariance.}
\label{fig:CorrPlots}
\end{figure}
The correlation factor can be interpreted geometrically as how well the set of number pairs describes a line when plotted in a two-dimensional figure.
This is demonstrated in Fig.~\ref{fig:CorrPlots}.
The intercept and the slope of the line are irrelevant to the value of $b$~\cite{soegaardPhD}. This can likewise be demonstrated by the fact that
\begin{IEEEeqnarray}{rCl}
\textrm{Cor}(\alpha X + \beta, \gamma Y + \nu) &=& \textrm{Cor}(X,Y), \ \textrm{if } \alpha \gamma > 0
\end{IEEEeqnarray}
where $\alpha$, $\beta$, $\gamma$, and $\nu$ are constants. If $\alpha \gamma < 0$, the correlation factor switches sign.
If the slope in Fig.~\ref{fig:CorrPlots} is negative, the corresponding correlation factor is also negative and the quantities are said to be anti-correlated.
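These properties are easy to verify numerically. The toy model below (illustrative: continuous variables sharing a common Gaussian source stand in for the multiplicities) computes $b$ from the five event-averaged quantities of Eq.~(\ref{eq:b}) and checks both the affine invariance and the sign flip:

```python
import random

def corr(xs, ys):
    # b from the five event-averaged quantities in Eq. (eq:b).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    mxy = sum(x * y for x, y in zip(xs, ys)) / n
    mx2 = sum(x * x for x in xs) / n
    my2 = sum(y * y for y in ys) / n
    return (mxy - mx * my) / ((mx2 - mx ** 2) * (my2 - my ** 2)) ** 0.5

random.seed(1)
common = [random.gauss(0, 1) for _ in range(5000)]  # shared source of correlation
nf = [c + random.gauss(0, 1) for c in common]
nb = [c + random.gauss(0, 1) for c in common]
b = corr(nf, nb)  # close to 0.5 for this model (Cov = 1, Var = 2)
# Positive affine maps leave b unchanged; a sign flip negates it.
assert abs(b - corr([2 * x + 3 for x in nf], [5 * y - 1 for y in nb])) < 1e-9
assert abs(b + corr([-x for x in nf], nb)) < 1e-9
```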
While Eq.~(\ref{eq:b}) shows that only five quantities ($\langle N_f \rangle$, $\langle N_b \rangle$, $\langle N_fN_b \rangle$, $\langle N_f^2 \rangle$ and $\langle N_b^2 \rangle$) are necessary to calculate the correlation factor, the measurement is often not trivial to perform for many detector types.
Any observable will be altered by the environment surrounding the collision in the experiment.
Secondary particle production and partial detector acceptance and inefficiency will influence the measurement.
Directly evaluating this influence in a model independent way is challenging for correlation measurements~\cite{Ravan:2013lwa}.
This is especially evident when evaluating the variances in the denominator of $b$ when partial acceptance exists. The correlation between the measured and unmeasured regions requires more sophisticated techniques if the gaps in the acceptance are significant.
While the effect of secondary particle production is beyond the scope of this paper (but could be the subject of a subsequent paper), the effect of detector inefficiency and partial detector acceptance is examined.
The influence on the measured correlation strength is quantified and a means to account for these effects is provided.
The method is verified through studies using simulations.
\section{Measuring the Correlation Factor}
\label{sec:measuringb}
While forward-backward correlations can be measured in both collider and fixed target experiments, the investigation here is done for collider experiments.
The space surrounding the collision is divided into a forward hemisphere and a backward hemisphere separated by the plane perpendicular to the beam axis intersecting the collision point.
The hemisphere where $\theta<\pi/2$ is usually termed forward, and the hemisphere where $\theta>\pi/2$ is usually termed backward, where the reference direction at $\theta=0$ is defined by the experiment.
Forward-backward multiplicity correlations are usually measured between bins of equal width (in $\eta$) spanning the entire azimuth.
Correlations between bins where only part of the azimuthal angle is taken into account (twist correlations) can also be measured \cite{Wraight:2011ej}.
While these twist correlations are not directly computed in this paper, they require merely a subset of the information necessary to analyze the full azimuth and, therefore, the techniques presented here could be used with minor modifications to measure twist correlations.
The centers of the two bins (in $\eta$) are likewise usually equidistant from $\eta=0$.
In this paper we call such a pair of geometrical regions a {\it forward-backward bin}.
The analysis is carried out by determining the number of particles present in each geometrical region event-by-event.
From these particle multiplicities the necessary five quantities are calculated for each event.
These five values are then averaged over all events and the correlation factor is calculated.
Figure~\ref{fig:FwdBwdBins} shows an example of how the forward-backward bins are defined.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{FB_DetectorRegions_Cyl.pdf}
\hspace{3ex}
\includegraphics[width=0.50\textwidth]{FB_bins.pdf}
\caption{(Color online) Left: The detector is schematically divided into two halves along the line at $\theta=\pi/2$.
Each half then consists of solid angles which span a ``small'' polar angle and the full azimuthal angle.
A forward-backward bin consists of two regions where their centers are equidistant from $\theta=\pi/2$.
The regions are often termed forward when $\theta<\pi/2$ and backward when $\theta>\pi/2$.
Each forward-backward bin is represented by a different color here.
Right: The detector regions have been mapped to a two dimensional figure.
The horizontal axis is the pseudorapidity, which is a function of the polar angle, $\theta$.
Note that the mapping is not to scale and equal size polar angle bins on the left do not correspond to equal size pseudorapidity bins on the right.}
\label{fig:FwdBwdBins}
\end{figure}
\subsection{The Effect of Efficiency}
\label{sec:derivation}
It is common, either by design or due to malfunction, that detectors do not register all particles impinging on them.
Full hermeticity does not usually exist either.
In both cases, the result is that fewer particles are detected than were actually produced in the collision.
This alters the value of an observable.
First order observables, like the average number of produced particles, can account for this in a straightforward manner, since the value scales with the efficiency or acceptance.
For higher order observables, the effect of efficiency or acceptance becomes more complex.
To study the effect of efficiency, a statistical approach is taken.
In the case of forward-backward correlations, a joint probability distribution for the produced primary particles, $P^P(N^P_f,N^P_b)$, contains the physics information one wants to measure.
The joint probability distribution is normalized such that
\begin{IEEEeqnarray}{rcl}
\sum_{N^P_f=0}^\infty \sum_{N^P_b=0}^\infty P^P(N^P_f,N^P_b) &=& 1
\label{eq:normP}
\end{IEEEeqnarray}
A moment generating function can be defined from this whose derivatives evaluated at $t_f = 0$ and $t_b = 0$ produce all of the desired moments.
\begin{IEEEeqnarray}{rCl}
\textrm{mgf}^P(t_f,t_b) &\equiv& \sum_{N^P_f=0}^\infty \sum_{N^P_b=0}^\infty P^P(N^P_f,N^P_b) e^{N^P_f t_f + N^P_b t_b}
\label{eq:mgfP}
\end{IEEEeqnarray}
From the moment generating function, the cumulant generating function is defined as:
\begin{IEEEeqnarray}{rCl}
\textrm{cgf}^P(t_f,t_b) &\equiv& \ln \left[\textrm{mgf}^P(t_f,t_b)\right]
\end{IEEEeqnarray}
where derivatives of $\textrm{cgf}^P$ evaluated at $t_f = 0$ and $t_b = 0$ produce the quantities desired to compute the correlation factor (and many more cumulants with further derivatives).
For the purpose of this paper, the first two cumulants (the mean and the covariance) are important.
\begin{IEEEeqnarray}{rCl}
\frac{\partial\textrm{cgf}^P}{\partial t_r}(0,0) &\equiv& \textrm{cgf}^P_r(0,0) = \langle N_r \rangle, \ \textrm{where } r = f \textrm{ or } b
\label{eq:firstcumulant} \\
\frac{\partial^2\textrm{cgf}^P}{\partial t_{r_1} \partial t_{r_2}}(0,0) &\equiv& \textrm{cgf}^P_{r_1 r_2}(0,0) = \textrm{Cov}(N^P_{r_1}, N^P_{r_2}), \ \textrm{where } r_1,r_2 = f \textrm{ or } b
\label{eq:secondcumulant}
\end{IEEEeqnarray}
In Eq.~(\ref{eq:firstcumulant}), $r$ stands for a ``region'' that could be forward or backward.
In Eq.~(\ref{eq:secondcumulant}), $r_1$ and $r_2$ stand for ``region 1'' and ``region 2'', respectively, and can independently be forward or backward.
In the case where $r_1 = r_2 \textrm{ (} = r \textrm{)}$, the covariance becomes the variance, such that
\begin{IEEEeqnarray}{rCl}
\textrm{Cov}(N^P_r, N^P_r) &=& \textrm{Var}(N^P_r)
\end{IEEEeqnarray}
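As a quick numerical sanity check (illustrative; the Binomial toy distribution and the finite-difference step are arbitrary choices), derivatives of an empirical cumulant generating function at $t = 0$ indeed reproduce the mean and the variance:

```python
import math, random

random.seed(7)
# Toy sample: Binomial(10, 0.3), so mean = 3 and variance = 2.1.
sample = [sum(random.random() < 0.3 for _ in range(10)) for _ in range(100_000)]

def cgf(t):
    # Empirical cumulant generating function, ln(mgf).
    return math.log(sum(math.exp(n * t) for n in sample) / len(sample))

h = 1e-3
mean = (cgf(h) - cgf(-h)) / (2 * h)            # first cumulant
var = (cgf(h) - 2 * cgf(0) + cgf(-h)) / h ** 2  # second cumulant
print(mean, var)  # close to 3 and 2.1, up to sampling fluctuations
```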
We now consider the case where a uniform detection efficiency exists over the whole forward and backward regions ($\varepsilon_f$ and $\varepsilon_b$ respectively).
Perfect detection efficiency is defined to have a value of 1 while a completely dead region would have a value of 0.
The restriction of uniformity is not realistic, but is instructive for an initial investigation where the efficiency will be taken to be the average detection efficiency in the region.
Equation~(\ref{eq:normP}) is then modified as follows to account for these efficiencies in the forward and backward regions.
\begin{IEEEeqnarray}{rCl}
\sum_{N^P_f=0}^\infty \sum_{N^P_b=0}^\infty P^P(N^P_f,N^P_b) \left(\varepsilon_f+(1-\varepsilon_f)\right)^{N^P_f} \left(\varepsilon_b+(1-\varepsilon_b)\right)^{N^P_b} &=& 1
\label{eq:normE}
\end{IEEEeqnarray}
One can now arrive at the moment generating function for the detected particles ($\textrm{mgf}^D$).
Since one particle is detected with the probability $\varepsilon_r$, one applies the term $e^{1 \cdot t_r}$ to the $\varepsilon_r$ terms.
Likewise, one applies $e^{0 \cdot t_r}=1$ to the $(1-\varepsilon_r)$ terms, since no particle is detected with this probability.
The resulting term, $\varepsilon_r e^{t_r}+(1-\varepsilon_r)$, is actually the moment generating function for a specific particle to be found in the region with probability $\varepsilon_r$, which we term $\textrm{mgf}^E (t_r;\varepsilon_r)$.
The corresponding cumulant generating function is then $\textrm{cgf}^E (t_r;\varepsilon_r) \equiv \ln \left[ \textrm{mgf}^E (t_r;\varepsilon_r) \right]$.
The moment generating function for the distribution of detected particles then becomes:
\begin{IEEEeqnarray}{rCl}
\textrm{mgf}^D(t_f, t_b) &=& \sum_{N^P_f=0}^\infty \sum_{N^P_b=0}^\infty P^P(N^P_f,N^P_b)\left[\varepsilon_f e^{t_f}+(1-\varepsilon_f)\right]^{N^P_f} \left[\varepsilon_b e^{t_b}+(1-\varepsilon_b)\right]^{N^P_b} \nonumber \\
&=& \sum_{N^P_f=0}^\infty \sum_{N^P_b=0}^\infty P^P(N^P_f,N^P_b) e^{N^P_f \textrm{cgf}^E(t_f;\varepsilon_f) + N^P_b \textrm{cgf}^E(t_b;\varepsilon_b)}
\label{eq:mgfD}
\end{IEEEeqnarray}
Comparing Eq.~(\ref{eq:mgfD}) to Eq.~(\ref{eq:mgfP}) shows that the effect of detection efficiency is merely a substitution of the variables in the moment generating function of primary particles, namely $t_r \rightarrow \textrm{cgf}^E (t_r;\varepsilon_r)$.
The final equation relating the cumulant generating function of detected particles to the cumulant generating function of primary particles is then found by:
\begin{IEEEeqnarray}{rCl}
\textrm{mgf}^D (t_f, t_b) &=& \textrm{mgf}^P \left(\textrm{cgf}^E (t_f;\varepsilon_f), \textrm{cgf}^E (t_b;\varepsilon_b)\right) \Rightarrow \nonumber \\
\textrm{cgf}^D (t_f, t_b) &=& \textrm{cgf}^P \left(\textrm{cgf}^E (t_f;\varepsilon_f), \textrm{cgf}^E (t_b;\varepsilon_b)\right)
\label{eq:cfg_detected}
\end{IEEEeqnarray}
One should note that Eq.~(\ref{eq:cfg_detected}) can be generalized to allow one to evaluate the effect of acceptance or efficiency on any order correlation.
The $n$-region equivalent of Eq.~(\ref{eq:cfg_detected}) is:
\begin{IEEEeqnarray}{rCl}
\textrm{mgf}^D (t_1, \cdots, t_n) &=& \textrm{mgf}^P \left(\textrm{cgf}^E (t_1;\varepsilon_1), \cdots, \textrm{cgf}^E (t_n;\varepsilon_n)\right) \Rightarrow \nonumber \\
\textrm{cgf}^D (t_1, \cdots, t_n) &=& \textrm{cgf}^P \left(\textrm{cgf}^E (t_1;\varepsilon_1), \cdots, \textrm{cgf}^E (t_n;\varepsilon_n)\right)
\label{eq:cfg_detected_general}
\end{IEEEeqnarray}
Derivatives of Eq.~(\ref{eq:cfg_detected_general}) evaluated at $t_1, \cdots, t_n = 0$ reveal the effect of acceptance or efficiency on the desired moment or cumulant relative to the moments or cumulants of the primary distribution.
One could use this information (as will be done here for the variance and covariance) to account for these effects in higher order correlations.
The cumulants of the distribution of detected particles can now be calculated by differentiating Eq.~(\ref{eq:cfg_detected}) and evaluating the results at $t_f=0$ and $t_b=0$.
The first derivative gives the average number of found particles in a region.
\begin{IEEEeqnarray}{rCl}
\left.\frac{\partial \textrm{cgf}^D}{\partial t_r}\right|_{t_f,t_b=0} &=& \textrm{cgf}^P_r \left(\textrm{cgf}^E (0;\varepsilon_f), \textrm{cgf}^E (0;\varepsilon_b)\right) \cdot \left. \frac{\textrm{d} \left[\textrm{cgf}^E (t_r;\varepsilon_r)\right]}{\textrm{d} t_r} \right|_{t_r=0} \nonumber \\
&=& \textrm{cgf}^P_r \left(0, 0\right) \cdot \left. \frac{\varepsilon_r e^{t_r}}{\varepsilon_r e^{t_r} + (1 - \varepsilon_r)} \right|_{t_r=0} \Rightarrow \nonumber \\
\langle N^D_r \rangle &=& \langle N^P_r \rangle \cdot \varepsilon_r
\label{eq:DetectedMean}
\end{IEEEeqnarray}
The result in Eq.~(\ref{eq:DetectedMean}) is expected, since it is intuitive that the mean value of the distribution scales with the probability that any given particle is detected.
The variances or the covariance (given by the second derivative), however, yield a more complicated result.
\begin{IEEEeqnarray}{rCl}
\left. \frac{\partial^2 \textrm{cgf}^D}{\partial t_{r_1} \partial t_{r_2}} \right|_{t_f,t_b=0} &=&
\textrm{cgf}^P_{r_1 r_2} \left(\textrm{cgf}^E(0;\varepsilon_f), \textrm{cgf}^E(0;\varepsilon_b)\right)
\cdot \left. \frac{\textrm{d}\left[\textrm{cgf}^E \left(t_{r_1};\varepsilon_{r_1}\right) \right]}{\textrm{d} t_{r_1}} \right|_{t_{r_1}=0}
\cdot \left. \frac{\textrm{d}\left[\textrm{cgf}^E \left(t_{r_2};\varepsilon_{r_2}\right) \right]}{\textrm{d} t_{r_2}} \right|_{t_{r_2}=0} \nonumber \\
&& + \delta_{r_1 r_2} \cdot \textrm{cgf}^P_{r_1} \left(\textrm{cgf}^E(0;\varepsilon_f), \textrm{cgf}^E(0;\varepsilon_b)\right)
\cdot \left.\frac{\textrm{d}^2\left[\textrm{cgf}^E(t_{r_1};\varepsilon_{r_1})\right]}{\textrm{d} t^2_{r_1}}\right|_{t_{r_1}=0} \nonumber \\
&=& \textrm{cgf}^P_{r_1 r_2} \left(0,0\right)
\cdot \left. \frac{\varepsilon_{r_1} e^{t_{r_1}}}{\varepsilon_{r_1} e^{t_{r_1}} + (1 - \varepsilon_{r_1})} \right|_{t_{r_1}=0}
\cdot \left. \frac{\varepsilon_{r_2} e^{t_{r_2}}}{\varepsilon_{r_2} e^{t_{r_2}} + (1 - \varepsilon_{r_2})} \right|_{t_{r_2}=0} \nonumber \\
&& + \delta_{r_1 r_2} \cdot \textrm{cgf}^P_{r_1} \left(0,0\right)
\cdot \left. \frac{\left[ \varepsilon_{r_1} e^{t_{r_1}} + \left(1 - \varepsilon_{r_1}\right) \right] \cdot \varepsilon_{r_1} e^{t_{r_1}} - \left( \varepsilon_{r_1} e^{t_{r_1}} \right)^2}{\left[\varepsilon_{r_1} e^{t_{r_1}} + \left(1 - \varepsilon_{r_1}\right)\right]^2} \right|_{t_{r_1}=0} \Rightarrow \nonumber \\
\textrm{Cov}(N^D_{r_1},N^D_{r_2}) &=&
\textrm{Cov}(N^P_{r_1},N^P_{r_2}) \cdot \varepsilon_{r_1} \varepsilon_{r_2}
+ \delta_{r_1 r_2} \cdot \langle N^P_{r_1} \rangle \cdot \varepsilon_{r_1} \left( 1 - \varepsilon_{r_1} \right)
\label{eq:DetectedCovVar}
\end{IEEEeqnarray}
This result shows that a special case exists for the variance where the differentiation is performed twice with respect to the same variable and the Kronecker delta ($\delta_{r_1 r_2}$) evaluates to 1.
The final expressions for the covariance and the variances of the distribution of detected particles are:
\begin{IEEEeqnarray}{rCl}
\textrm{Cov}(N^D_f,N^D_b) &=& \textrm{Cov}(N^P_f,N^P_b) \cdot \varepsilon_f \varepsilon_b \label{eq:DetectedCov}\\
\textrm{Var}(N^D_r) &=& \textrm{Var}(N^P_r) \cdot \varepsilon^2_r + \langle N^P_r \rangle \cdot \varepsilon_r \left( 1 - \varepsilon_r \right) \label{eq:DetectedVar}
\end{IEEEeqnarray}
Equation~(\ref{eq:DetectedVar}) shows that the detected variance has an additional dependence, beyond the variance of the primary produced particles and the efficiency, on the mean number of particles produced in the region, which the covariance does not possess.
Equations~(\ref{eq:DetectedMean}), (\ref{eq:DetectedCov}), and (\ref{eq:DetectedVar}) can be inverted to obtain the cumulants of the distribution of the primary particles from the detected quantities:
\begin{IEEEeqnarray}{rCl}
\langle N^P_r \rangle &=& \frac{\langle N^D_r \rangle}{\varepsilon_r} \label{eq:CorrectedMean} \\
\textrm{Cov}(N^P_f,N^P_b) &=& \frac{\textrm{Cov}(N^D_f,N^D_b)}{\varepsilon_f \varepsilon_b} \label{eq:CorrectedCov} \\
\textrm{Var}(N^P_r) &=& \frac{\textrm{Var}(N^D_r) - \langle N^D_r \rangle \cdot \left( 1 - \varepsilon_r \right)}{\varepsilon^2_r} \label{eq:CorrectedVar}
\end{IEEEeqnarray}
From these expressions, the correlation factor in the case of an imperfect detector (with an efficiency less than 1) is derived as:
\begin{IEEEeqnarray}{rCl}
b &=& \frac{\textrm{Cov}(N^P_f,N^P_b)}
{\sqrt{\textrm{Var}(N^P_f) \cdot
\textrm{Var}(N^P_b)}} \nonumber \\
&=& \frac{\frac{\textrm{Cov}(N^D_f,N^D_b)}{\varepsilon_f \varepsilon_b}}
{\sqrt{\frac{\textrm{Var}(N^D_f) - \langle N^D_f \rangle \cdot \left( 1 - \varepsilon_f \right)}{\varepsilon^2_f}}
\sqrt{\frac{\textrm{Var}(N^D_b) - \langle N^D_b \rangle \cdot \left( 1 - \varepsilon_b \right)}{\varepsilon^2_b}}} \nonumber \\
&=& \frac{\textrm{Cov}(N^D_f,N^D_b)}
{\sqrt{\textrm{Var}(N^D_f) - \langle N^D_f \rangle \cdot \left( 1 - \varepsilon_f \right)}
\sqrt{\textrm{Var}(N^D_b) - \langle N^D_b \rangle \cdot \left( 1 - \varepsilon_b \right)}}
\label{eq:BasicAccCorrection}
\end{IEEEeqnarray}
While the overall multiplicative efficiency factors in the covariance and variance terms cancel when the correlation factor is calculated, Eq.~(\ref{eq:BasicAccCorrection}) shows that the additive terms, proportional to the mean number of particles detected in each region, remain and must be evaluated whenever an inefficiency exists.
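To make Eq.~(\ref{eq:BasicAccCorrection}) concrete, the following sketch (illustrative only; all numbers are invented) forward-maps a hypothetical set of primary cumulants to detected ones with Eqs.~(\ref{eq:DetectedMean})--(\ref{eq:DetectedVar}) and checks that the corrected correlation factor recovers the primary one exactly:

```python
from math import sqrt

def b_corrected(cov_d, var_f_d, mean_f_d, eps_f, var_b_d, mean_b_d, eps_b):
    # Eq. (BasicAccCorrection): primary correlation factor from detected
    # quantities and uniform per-region efficiencies
    return cov_d / (sqrt(var_f_d - mean_f_d * (1.0 - eps_f))
                    * sqrt(var_b_d - mean_b_d * (1.0 - eps_b)))

# Hypothetical primary cumulants (same in both regions) and efficiencies
mean_p, var_p, cov_p = 7.0, 7.0, 4.0
eps_f, eps_b = 0.7, 0.6
b_true = cov_p / var_p                       # sqrt(var_p * var_p) = var_p

# Forward-map to detected quantities, Eqs. (DetectedMean)-(DetectedVar)
mean_f_d, mean_b_d = mean_p * eps_f, mean_p * eps_b
cov_d = cov_p * eps_f * eps_b
var_f_d = var_p * eps_f ** 2 + mean_p * eps_f * (1 - eps_f)
var_b_d = var_p * eps_b ** 2 + mean_p * eps_b * (1 - eps_b)

b_est = b_corrected(cov_d, var_f_d, mean_f_d, eps_f,
                    var_b_d, mean_b_d, eps_b)   # recovers b_true = 4/7
```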
The result in Eq.~(\ref{eq:BasicAccCorrection}) assumes that the detection efficiencies, $\varepsilon_f$ and $\varepsilon_b$, are the same for all particles in their respective regions.
This assumption is valid when the efficiency varies little or not at all over the region; significant efficiency variations within the region, however, will affect a correlation measurement.
The most extreme variation exists when a fraction of the region has no detection efficiency and the rest has perfect detection efficiency, which could be the case when the acceptance of the detector does not cover the whole region (in azimuth for instance).
Additionally, a non-uniform distribution of particles (termed ``event shape'') in the region will affect the measurement when the efficiency varies.
In this case, when the particle multiplicity density is higher in the active region relative to the dead region, the effective efficiency is higher.
The opposite is true when the particle multiplicity density is lower in the active region relative to the dead region.
The net effect does not necessarily cancel out on average over many events when performing correlation measurements.
The effect of efficiency variations and event shape is analyzed in section~\ref{sec:eventshape} using the framework developed so far, and their impact on the correlation measurements is examined in section~\ref{sec:verification}.
\subsection{Accounting for Azimuthal Event Shape}
\label{sec:eventshape}
The effect of the event shape (in the presence of an inefficiency) can be reduced if one can select regions of the detector where the particle multiplicity density gradient is small or the efficiency is constant over the region.
This generally occurs when smaller regions of the detector are used.
We first consider the case where, event-by-event, a non-uniform azimuthal event shape exists for the produced particles, which is, however, uniform on average over many events.
The solution is then to segment the $\eta$ regions, studied in section~\ref{sec:derivation}, further into $\varphi$ segments.
The particle multiplicity of these sub-regions will be denoted with an extra subscript (for example, $N^P_{f,1}$ for the primary multiplicity in the first $\varphi$ segment of the forward region), where the second subscript is a value between $1$ and $m_\varphi$ (the number of $\varphi$ segments).
The results in Eqs.~(\ref{eq:DetectedMean}) and (\ref{eq:DetectedCovVar}) make no assumption about the type of segmentation and therefore also hold for these sub-regions.
The generalization to these sub-regions is
\begin{IEEEeqnarray}{rCl}
\langle N^D_{r,i_\varphi} \rangle &=& \langle N^P_{r,i_\varphi} \rangle \cdot \varepsilon_{r,i_\varphi} \label{eq:DetectedMeanSeg} \\
\textrm{Cov}(N^D_{r_1,i_\varphi},N^D_{r_2,j_\varphi}) &=&
\textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,j_\varphi}) \cdot \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,j_\varphi} \nonumber\\
&& + \delta_{r_1 r_2} \cdot \delta_{i_\varphi j_\varphi} \cdot \langle N^P_{r_1,i_\varphi} \rangle \cdot \varepsilon_{r_1,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\varphi} \right)
\label{eq:DetectedCovVarSeg}
\end{IEEEeqnarray}
where $1 \leq i_\varphi \leq m_\varphi$ and $1 \leq j_\varphi \leq m_\varphi$.
The relationship between the full-region and sub-region quantities (for primary particles) can be derived directly.
For the mean, this is
\begin{IEEEeqnarray}{rCl}
\langle N^P_r \rangle = \langle \sum_{i_\varphi=1}^{m_\varphi} N^P_{r,i_\varphi} \rangle = \sum_{i_\varphi=1}^{m_\varphi} \langle N^P_{r,i_\varphi} \rangle
\label{eq:SumOfMeans}
\end{IEEEeqnarray}
which shows that the mean of the region is simply the sum of the means of its sub-regions.
For the covariance, this is
\begin{IEEEeqnarray}{rCl}
\textrm{Cov}(N^P_{r_1},N^P_{r_2}) &=& \langle N^P_{r_1} N^P_{r_2} \rangle - \langle N^P_{r_1} \rangle \langle N^P_{r_2} \rangle \nonumber \\
&=& \langle \sum_{i_\varphi=1}^{m_\varphi} N^P_{r_1,i_\varphi} \sum_{j_\varphi=1}^{m_\varphi} N^P_{r_2,j_\varphi} \rangle - \langle \sum_{i_\varphi=1}^{m_\varphi} N^P_{r_1,i_\varphi} \rangle \langle \sum_{j_\varphi=1}^{m_\varphi} N^P_{r_2,j_\varphi} \rangle \nonumber \\
&=& \sum_{i_\varphi=1}^{m_\varphi} \sum_{j_\varphi=1}^{m_\varphi} \langle N^P_{r_1,i_\varphi} N^P_{r_2,j_\varphi} \rangle - \sum_{i_\varphi=1}^{m_\varphi} \sum_{j_\varphi=1}^{m_\varphi} \langle N^P_{r_1,i_\varphi} \rangle \langle N^P_{r_2,j_\varphi} \rangle \nonumber \\
&=& \sum_{i_\varphi=1}^{m_\varphi} \sum_{j_\varphi=1}^{m_\varphi} \left( \langle N^P_{r_1,i_\varphi} N^P_{r_2,j_\varphi} \rangle - \langle N^P_{r_1,i_\varphi} \rangle \langle N^P_{r_2,j_\varphi} \rangle \right) \nonumber \\
&=& \sum_{i_\varphi=1}^{m_\varphi} \sum_{j_\varphi=1}^{m_\varphi} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,j_\varphi})
\label{eq:SumOfCovariances}
\end{IEEEeqnarray}
which is the sum of the covariances of each sub-region to every other sub-region.
One should note that Eqs.~(\ref{eq:SumOfMeans}) and (\ref{eq:SumOfCovariances}) apply also to the detected means and covariances.
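Because the (sample) covariance is bilinear, the decomposition in Eq.~(\ref{eq:SumOfCovariances}) holds exactly for any event sample. A short self-contained check with invented toy data:

```python
import random

rng = random.Random(1)
m, n_ev = 3, 200                 # segments per region, toy events
# Per-event, per-segment multiplicities for two regions (toy data)
seg1 = [[rng.randint(0, 10) for _ in range(m)] for _ in range(n_ev)]
seg2 = [[rng.randint(0, 10) for _ in range(m)] for _ in range(n_ev)]

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

# Covariance of the full-region multiplicities ...
lhs = cov([sum(ev) for ev in seg1], [sum(ev) for ev in seg2])
# ... equals the sum over all segment-pair covariances, Eq. (SumOfCovariances)
rhs = sum(cov([ev[i] for ev in seg1], [ev[j] for ev in seg2])
          for i in range(m) for j in range(m))
```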
To account for acceptance and efficiency, rotational invariance is exploited.
One would expect, for example, that the mean number of primary particles produced at a certain pseudorapidity and at a certain azimuthal angle would be independent of the azimuthal angle (and only dependent on the azimuthal range of the measurement).
To use this in practice, we will impose the restriction that each $\eta$ region is {\it equally} divided into $m_\varphi$ azimuthal segments that span $2 \pi / m_\varphi$.
With this restriction, many of the measurements are redundant.
For the mean number of primary particles, this means that the value at each angle can be replaced by the average.
\begin{IEEEeqnarray}{rCl}
\langle N^P_{r,i_\varphi} \rangle = \frac{\sum_{j_\varphi=1}^{m_\varphi}\langle N^P_{r,j_\varphi}\rangle}{m_\varphi}, \textrm{ where } 1 \leq i_\varphi \leq m_\varphi \label{eq:meanrotinv}
\end{IEEEeqnarray}
Using Eqs.~(\ref{eq:DetectedMeanSeg}) and (\ref{eq:meanrotinv}) one can derive the (expected) relationship between the mean number of primary particles and the detected quantities.
\begin{IEEEeqnarray}{rCl}
\langle N^D_{r} \rangle &=&
\sum_{i_\varphi=1}^{m_\varphi}\langle N^D_{r,i_\varphi}\rangle =
\sum_{i_\varphi=1}^{m_\varphi}\langle N^P_{r,i_\varphi}\rangle\varepsilon_{r,i_\varphi} =
\sum_{i_\varphi=1}^{m_\varphi}\left(\frac{\sum_{j_\varphi=1}^{m_\varphi}\langle N^P_{r,j_\varphi}\rangle}{m_\varphi}\right)\varepsilon_{r,i_\varphi} \nonumber \\
&=& \left(\frac{\sum_{j_\varphi=1}^{m_\varphi}\langle N^P_{r,j_\varphi}\rangle}{m_\varphi}\right) \cdot \sum_{i_\varphi=1}^{m_\varphi}\varepsilon_{r,i_\varphi} =
\frac{\langle N^P_r\rangle}{m_\varphi} \cdot \sum_{i_\varphi=1}^{m_\varphi}\varepsilon_{r,i_\varphi} \nonumber \\
\Rightarrow \langle N^P_r \rangle &=&
m_\varphi \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi}\langle N^D_{r,i_\varphi}\rangle}{\sum_{i_\varphi=1}^{m_\varphi}\varepsilon_{r,i_\varphi}}
\label{eq:FinalMeanFormula}
\end{IEEEeqnarray}
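Equation~(\ref{eq:FinalMeanFormula}) can be sketched in a few lines (efficiencies invented for illustration); by rotational invariance the primary mean is the same, $\mu$, in each of the $m_\varphi$ equal segments, and the formula recovers $m_\varphi \mu$ exactly:

```python
def corrected_mean_segmented(mean_d, eps):
    # Eq. (FinalMeanFormula): primary mean of a region from the detected
    # per-segment means and the per-segment efficiencies
    m = len(eps)
    return m * sum(mean_d) / sum(eps)

mu = 2.5                                  # primary mean per segment (invented)
eps = [0.9, 0.4, 0.0, 0.7, 0.8]           # one segment fully dead
mean_d = [mu * e for e in eps]            # Eq. (DetectedMeanSeg)
n_p = corrected_mean_segmented(mean_d, eps)   # = m_phi * mu = 12.5
```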
\begin{figure}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=0.9\textwidth]{invariant_twist_sep1.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=0.9\textwidth]{invariant_twist_sep2.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=0.9\textwidth]{invariant_twist_sep3.pdf}
\end{minipage}
\caption{(Color online) The figures depict sets of correlations between different $\varphi$ regions.
Note that the solid arrow and dashed arrow can (in general) point to different $\eta$ regions.
From left to right, the plots show the correlations between regions shifted by 1, 2, and 3 $\varphi$ segments.
In general this shift can be any value between 0 and the number of $\varphi$ segments minus 1.
For each shift, the correlation between the regions should be the same (for primary particles) independent of the average $\varphi$ angle of the correlated regions.
This produces redundant measures of the same correlation.
A set of redundant measures is called an ``invariant twist''.}
\label{fig:InvariantTwists}
\end{figure}
Equation~(\ref{eq:FinalMeanFormula}) is simple because all quantities in the sum are equivalent (due to rotational invariance).
Rotational invariance can be applied to the expression for the covariance where one expects the covariance between any two segments with equal $\varphi$ displacement to be equivalent (shown in Fig.~\ref{fig:InvariantTwists}).
To do this, Eq.~(\ref{eq:SumOfCovariances}) must be rewritten to group these quantities.
\begin{IEEEeqnarray}{rCl}
\textrm{Cov}(N^P_{r_1},N^P_{r_2}) &=& \sum_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi}) \nonumber \\
&& + \sum_{s=1}^{m_\varphi-1} \left\{
\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi+s})
\vphantom{+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,m_\varphi+i_\varphi-s},N^P_{r_2,i_\varphi})}
\right. \nonumber \\
&& \hphantom{+ \sum_{s=1}^{m_\varphi-1} \left\{
\vphantom{\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi+s})
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,m_\varphi+i_\varphi-s},N^P_{r_2,i_\varphi})}
\right.}
\left.
\vphantom{\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi+s})}
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,m_\varphi+i_\varphi-s},N^P_{r_2,i_\varphi})
\right\}
\label{eq:SumOfCovariancesTwists}
\end{IEEEeqnarray}
The first sum in Eq.~(\ref{eq:SumOfCovariancesTwists}) correlates all regions with the same $\varphi$.
The terms within the braces in Eq.~(\ref{eq:SumOfCovariancesTwists}) correlate regions shifted by $s$ segments in $\varphi$ (these correspond to twist correlations).
By rotational invariance, every term in the first sum must be the same (on average), as must every term within the braces (for each value of $s$).
Each of these terms can be analyzed individually to see how they relate to the detected quantities.
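A quick check (illustrative) that the regrouping in Eq.~(\ref{eq:SumOfCovariancesTwists}) partitions the $m_\varphi^2$ segment pairs by their $\varphi$ shift $s$, counting each pair exactly once:

```python
m = 5  # number of phi segments (invented for the check)

# Group the segment pairs exactly as in the equation: s = 0 collects the
# equal-phi pairs; each s >= 1 collects the pairs shifted by s segments,
# including the wrap-around pairs from the second inner sum.
groups = {0: [(i, i) for i in range(1, m + 1)]}
for s in range(1, m):
    groups[s] = ([(i, i + s) for i in range(1, m - s + 1)]
                 + [(m + i - s, i) for i in range(1, s + 1)])

all_pairs = [p for g in groups.values() for p in g]
```

Each group holds exactly $m_\varphi$ pairs, all with the same cyclic shift $s$, which is what makes the rotational-invariance averaging possible.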
We first analyze the terms inside the braces of Eq.~(\ref{eq:SumOfCovariancesTwists}), written for the detected rather than the primary quantities.
Applying rotational invariance to each twisted quantity yields the following result.
\begin{IEEEeqnarray}{l}
\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^D_{r_1,i_\varphi},N^D_{r_2,i_\varphi+s})
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^D_{r_1,m_\varphi+i_\varphi-s},N^D_{r_2,i_\varphi}) = \nonumber \\
\phantom{\sum_{i_\varphi=1}^{m_\varphi-s} =} \sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi+s}) \cdot \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi+s} \nonumber \\
\phantom{\sum_{i_\varphi=1}^{m_\varphi-s} =} + \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,m_\varphi+i_\varphi-s},N^P_{r_2,i_\varphi}) \cdot \varepsilon_{r_1,m_\varphi+i_\varphi-s} \varepsilon_{r_2,i_\varphi} \nonumber \\
\phantom{\sum_{i_\varphi=1}^{m_\varphi-s}} = \frac{1}{m_\varphi} \left( \sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi+s})
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,m_\varphi+i_\varphi-s},N^P_{r_2,i_\varphi}) \right) \nonumber \\
\phantom{\sum_{i_\varphi=1}^{m_\varphi-s} =} \cdot \left( \sum_{i_\varphi=1}^{m_\varphi-s} \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi+s}
+ \sum_{i_\varphi=1}^{s} \varepsilon_{r_1,m_\varphi+i_\varphi-s} \varepsilon_{r_2,i_\varphi} \right)
\label{eq:twistsum}
\end{IEEEeqnarray}
Equation~(\ref{eq:twistsum}) uses the result in Eq.~(\ref{eq:DetectedCovVarSeg}) to relate the detected quantities to the primary quantities.
In the case considered here (where $s \geq 1$), the second term of Eq.~(\ref{eq:DetectedCovVarSeg}) always vanishes, because the two correlated segments never share the same $\varphi$.
Equation~(\ref{eq:twistsum}) can be inverted to allow one to compute the sum of invariant twisted covariances for primary particles from detected values.
\begin{IEEEeqnarray}{l}
\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi+s})
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,m_\varphi+i_\varphi-s},N^P_{r_2,i_\varphi}) = \nonumber \\
\phantom{\sum_{i_\varphi=1}^{m_\varphi-s}} m_\varphi \cdot \frac{ \sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^D_{r_1,i_\varphi},N^D_{r_2,i_\varphi+s})
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^D_{r_1,m_\varphi+i_\varphi-s},N^D_{r_2,i_\varphi})}
{\sum_{i_\varphi=1}^{m_\varphi-s} \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi+s}
+ \sum_{i_\varphi=1}^{s} \varepsilon_{r_1,m_\varphi+i_\varphi-s} \varepsilon_{r_2,i_\varphi}}
\label{eq:twistsuminverted}
\end{IEEEeqnarray}
The same analysis can be performed on the first term in Eq.~(\ref{eq:SumOfCovariancesTwists}), but now, when invoking Eq.~(\ref{eq:DetectedCovVarSeg}), the second term must be kept, as it does not vanish when $r_1 = r_2$ (i.e., when calculating a variance).
\begin{IEEEeqnarray}{rCl}
\sum_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^D_{r_1,i_\varphi},N^D_{r_2,i_\varphi}) &=&
\sum_{i_\varphi=1}^{m_\varphi} \left( \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi}) \cdot \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi}
\vphantom{+ \delta_{r_1 r_2} \cdot \langle N^P_{r_1,i_\varphi} \rangle \cdot \varepsilon_{r_1,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\varphi} \right)} \right. \nonumber \\
&& \hphantom{\sum_{i_\varphi=1}^{m_\varphi} \left( \vphantom{\textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi}) \cdot \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi}
+ \delta_{r_1 r_2} \cdot \langle N^P_{r_1,i_\varphi} \rangle \cdot \varepsilon_{r_1,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\varphi} \right)} \right.}
\left. \vphantom{\textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi}) \cdot \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi}}
+ \delta_{r_1 r_2} \cdot \langle N^P_{r_1,i_\varphi} \rangle \cdot \varepsilon_{r_1,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\varphi} \right) \right) \nonumber \\
&=& \frac{\sum_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi})}{m_\varphi} \cdot \sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi} \nonumber \\
&& + \delta_{r_1 r_2} \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi} \langle N^P_{r_1,i_\varphi} \rangle}{m_\varphi} \cdot \sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\varphi} \right)
\label{eq:nontwistsum}
\end{IEEEeqnarray}
Equation~(\ref{eq:nontwistsum}) can similarly be inverted to compute the sum of the non-twisted portion of Eq.~(\ref{eq:SumOfCovariancesTwists}) for primary particles:
\begin{IEEEeqnarray}{rCl}
\sum_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^P_{r_1,i_\varphi},N^P_{r_2,i_\varphi}) &=&
m_\varphi \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^D_{r_1,i_\varphi},N^D_{r_2,i_\varphi})}{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi}} \nonumber \\
&& - \delta_{r_1 r_2} \cdot m_\varphi \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\varphi} \right)}{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon^2_{r_1,i_\varphi}} \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi} \langle N^D_{r_1,i_\varphi} \rangle}{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi}}
\label{eq:nontwistsuminverted}
\end{IEEEeqnarray}
where Eq.~(\ref{eq:FinalMeanFormula}) was used to relate the mean number of primary particles to the mean number of detected particles.
The final expression for the covariance of primary particles is obtained by inserting Eqs.~(\ref{eq:twistsuminverted}) and (\ref{eq:nontwistsuminverted}) into Eq.~(\ref{eq:SumOfCovariancesTwists}):
\begin{IEEEeqnarray}{l}
\textrm{Cov}(N^P_{r_1},N^P_{r_2}) = \nonumber \\
\phantom{\textrm{Cov}} m_\varphi \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^D_{r_1,i_\varphi},N^D_{r_2,i_\varphi})}{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi}} \nonumber \\
\phantom{\textrm{Cov}} + m_\varphi \cdot \sum_{s=1}^{m_\varphi-1} \left\{ \frac{ \sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^D_{r_1,i_\varphi},N^D_{r_2,i_\varphi+s})
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^D_{r_1,m_\varphi+i_\varphi-s},N^D_{r_2,i_\varphi})}
{\sum_{i_\varphi=1}^{m_\varphi-s} \varepsilon_{r_1,i_\varphi} \varepsilon_{r_2,i_\varphi+s}
+ \sum_{i_\varphi=1}^{s} \varepsilon_{r_1,m_\varphi+i_\varphi-s} \varepsilon_{r_2,i_\varphi}} \right\} \nonumber \\
\phantom{\textrm{Cov}} - \delta_{r_1 r_2} \cdot m_\varphi \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\varphi} \right)}{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon^2_{r_1,i_\varphi}} \cdot \frac{\sum_{i_\varphi=1}^{m_\varphi} \langle N^D_{r_1,i_\varphi} \rangle}{\sum_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\varphi}}
\label{eq:FinalCovFormula}
\end{IEEEeqnarray}
While the result using Eq.~(\ref{eq:FinalCovFormula}) must deviate from the result obtained from the distribution of primary particles (due to the imperfect detector response resulting in partial information loss), tests show a vast improvement over using Eqs.~(\ref{eq:CorrectedCov}) and (\ref{eq:CorrectedVar}).
Results using Eq.~(\ref{eq:FinalCovFormula}) often agree within statistical error with the results obtained from the primary distribution as will be shown in section~\ref{sec:verification}.
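A compact sketch of Eq.~(\ref{eq:FinalCovFormula}) is given below (efficiencies and covariances are invented). Modular indexing collects the pairs shifted by $s$ segments, which is equivalent to the two explicit sums of the equation; the check forward-maps rotationally invariant primary covariances through Eq.~(\ref{eq:DetectedCovVarSeg}) and verifies exact recovery:

```python
def corrected_cov_segmented(cov_d, eps1, eps2, mean_d1=None, same_region=False):
    # Sketch of Eq. (FinalCovFormula).  cov_d[i][j] is the detected covariance
    # between phi segment i of region 1 and phi segment j of region 2; pairs
    # shifted by s segments are collected with modular indexing.
    m = len(eps1)
    total = 0.0
    for s in range(m):                    # s = 0 is the non-twisted group
        num = sum(cov_d[i][(i + s) % m] for i in range(m))
        den = sum(eps1[i] * eps2[(i + s) % m] for i in range(m))
        total += m * num / den
    if same_region:                       # the delta_{r1 r2} term (variances)
        total -= (m * sum(e * (1.0 - e) for e in eps1)
                  / sum(e * e for e in eps1)
                  * sum(mean_d1) / sum(eps1))
    return total

# Check: rotationally invariant primary covariances C[s] (invented), arbitrary
# efficiencies, detected values built with Eq. (DetectedCovVarSeg).
m = 4
C = [2.0, 1.0, 0.5, 0.8]                  # primary Cov for each phi shift s
eps1 = [0.9, 0.5, 0.7, 0.8]
eps2 = [0.6, 0.85, 0.4, 0.75]
cov_d = [[C[(j - i) % m] * eps1[i] * eps2[j] for j in range(m)]
         for i in range(m)]
cov_p = corrected_cov_segmented(cov_d, eps1, eps2)      # = m * sum(C) = 17.2

# Variance case (r1 = r2): include the binomial-thinning term.
mu = 3.0                                  # primary mean per segment (invented)
Cv = [2.0, 1.0, 0.5, 1.0]                 # symmetric: Cv[s] == Cv[m - s]
var_d = [[Cv[(j - i) % m] * eps1[i] * eps1[j]
          + (mu * eps1[i] * (1.0 - eps1[i]) if i == j else 0.0)
          for j in range(m)] for i in range(m)]
mean_d1 = [mu * e for e in eps1]          # Eq. (DetectedMeanSeg)
var_p = corrected_cov_segmented(var_d, eps1, eps1, mean_d1, True)  # = 18.0
```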
\begin{figure}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=0.9\textwidth]{5segs_60peracc_adjdead.pdf}
\vspace*{-6mm}
\hphantom{\hspace{\textwidth}}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$s$ & 0 & 1 & 2 & 3 & 4\\
\hline
$N_{meas}$ & 3 & 2 & 1 & 1 & 2\\
\hline
scale factor & $\frac{5}{3}$ & $\frac{5}{2}$ & $\frac{5}{1}$ & $\frac{5}{1}$ & $\frac{5}{2}$\\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=0.9\textwidth]{5segs_60peracc_nonadjdead.pdf}
\vspace*{-6mm}
\hphantom{\hspace{\textwidth}}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$s$ & 0 & 1 & 2 & 3 & 4\\
\hline
$N_{meas}$ & 3 & 1 & 2 & 2 & 1\\
\hline
scale factor & $\frac{5}{3}$ & $\frac{5}{1}$ & $\frac{5}{2}$ & $\frac{5}{2}$ & $\frac{5}{1}$\\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\includegraphics[width=0.9\textwidth]{5segs_40peracc_adjlive.pdf}
\vspace*{-6mm}
\hphantom{\hspace{\textwidth}}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$s$ & 0 & 1 & 2 & 3 & 4\\
\hline
$N_{meas}$ & 2 & 1 & 0 & 0 & 1\\
\hline
scale factor & $\frac{5}{2}$ & $\frac{5}{1}$ & $\frac{5}{0}$ & $\frac{5}{0}$ & $\frac{5}{1}$\\
\hline
\end{tabular}
\end{minipage}
\caption{(Color online) The figures depict sets of dead regions (colored gray) for a detector that has 5 $\varphi$ segments.
If one assumes the same dead regions apply to both the forward and backward pseudorapidity regions (as an example), one can compute the number of times each invariant twist is measured ($N_{meas}$) for each configuration with a shift of $s$ segments and compute a scale factor that must be applied to give the correct contribution to the variance or covariance.
For the same total dead area one can still obtain different variances or covariances, because the configuration of dead regions affects the measurement through different scale factors applied to each invariant twist (shown in the left and middle panes).
The right pane shows that with less than 50\% acceptance one lacks a measure of at least one invariant twist, which makes the measurement impossible with this method.
\label{fig:DeadConfig}
\end{figure}
If one considers the situation where each region has either full or zero acceptance, the denominators in the first two terms of Eq.~(\ref{eq:FinalCovFormula}) count the number of times the twisted (or non-twisted) quantities are measured.
Dividing the number of $\varphi$ segments by that count gives the factor by which the quantity must be scaled up to yield the appropriate contribution to the covariance between the two regions over the full $2\pi$ in azimuth. This is shown in Fig.~\ref{fig:DeadConfig}.
One should note that this method is limited to cases where more than 50\% of the acceptance is present in each of the two regions being correlated.
If this requirement is not satisfied, one or more of the denominators summing over multiplications of efficiency factors in Eqs.~(\ref{eq:twistsuminverted}) and (\ref{eq:nontwistsuminverted}) will be 0.
This is a direct result of applying only rotational invariance to arrive at Eq.~(\ref{eq:FinalCovFormula}).
One may be able to lift this constraint by assuming that the invariant twists for primary particles shifted by $s$ segments are the same (on average) as those shifted by $m_\varphi-s$ segments.
Applying this symmetry was, however, not investigated further in this paper.
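With binary (full-or-zero) per-segment acceptance, the efficiency sums in the denominators of Eq.~(\ref{eq:FinalCovFormula}) reduce to the counts $N_{meas}$ of Fig.~\ref{fig:DeadConfig}. The dead-segment patterns below are assumptions consistent with the three panels (5 segments, same pattern in both regions); they reproduce the tabulated counts:

```python
def n_meas(eps, s):
    # With 0/1 efficiencies, the denominator of Eq. (FinalCovFormula) for
    # shift s counts how often that invariant twist is measured.
    m = len(eps)
    return sum(eps[i] * eps[(i + s) % m] for i in range(m))

adjacent_dead    = [1, 1, 1, 0, 0]  # 60% acceptance, dead segments adjacent
nonadjacent_dead = [1, 1, 0, 1, 0]  # 60% acceptance, dead segments separated
adjacent_live    = [1, 1, 0, 0, 0]  # 40% acceptance: twists s = 2, 3 unmeasured

counts = {name: [n_meas(e, s) for s in range(5)]
          for name, e in [("adj_dead", adjacent_dead),
                          ("nonadj_dead", nonadjacent_dead),
                          ("adj_live", adjacent_live)]}
```

The zero counts for the 40\%-acceptance pattern correspond to the $\frac{5}{0}$ entries in the right pane, i.e., the shifts for which the correction is undefined.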
\subsection{Including an $\eta$ Dependent Efficiency}
\label{sec:etagrad}
If the efficiency additionally depends on $\eta$, the calculation of the correlation factor will also be affected.
In this case, the solution (if possible) is again to segment the detector (this time along $\eta$).
No redundancy necessarily exists in $\eta$ though, so one must be able to measure the variance of these sub-regions and the covariance of each sub-region to every other sub-region to accurately compute the correlation factor.
This means that each individual $\eta$ sub-region must have greater than 50\% acceptance.
The resulting equations are quite similar to those found in section~\ref{sec:eventshape}.
To specify the sub-region, a further subscript must be added to the primary and measured multiplicity to specify which $\eta$ and $\varphi$ segment is being referred to.
In extending Eq.~(\ref{eq:FinalMeanFormula}), the region is divided into $m_\eta$ $\eta$ segments.
This produces the following result:
\begin{IEEEeqnarray}{rCl}
\langle N^P_r \rangle &=& m_\varphi \cdot \sum_{i_\eta=1}^{m_\eta} \frac{\sum_{i_\varphi=1}^{m_\varphi}\langle N^D_{r,i_\eta,i_\varphi}\rangle}{\sum_{i_\varphi=1}^{m_\varphi}\varepsilon_{r,i_\eta,i_\varphi}}
\label{eq:FinalMeanFormulaEta}
\end{IEEEeqnarray}
For the covariance and the variances, region 1 and 2 will be segmented further into $m_{\eta,1}$ and $m_{\eta,2}$ $\eta$ segments, respectively.
This results in Eq.~(\ref{eq:DetectedCovVarSeg}) becoming
\begin{IEEEeqnarray}{rCl}
\textrm{Cov}(N^D_{r_1,i_\eta,i_\varphi},N^D_{r_2,j_\eta,j_\varphi}) &=&
\textrm{Cov}(N^P_{r_1,i_\eta,i_\varphi},N^P_{r_2,j_\eta,j_\varphi}) \cdot \varepsilon_{r_1,i_\eta,i_\varphi} \varepsilon_{r_2,j_\eta,j_\varphi} \nonumber\\
&& + \delta_{r_1 r_2} \cdot \delta_{i_\eta j_\eta} \cdot \delta_{i_\varphi j_\varphi} \cdot \langle N^P_{r_1,i_\eta,i_\varphi} \rangle \cdot \varepsilon_{r_1,i_\eta,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\eta,i_\varphi} \right)
\label{eq:DetectedCovVarSegEta}
\end{IEEEeqnarray}
and Eq.~(\ref{eq:SumOfCovariancesTwists}) correspondingly becomes
\begin{IEEEeqnarray}{rCl}
\textrm{Cov}(N^P_{r_1},N^P_{r_2}) &=& \sum_{i_\eta=1}^{m_{\eta,1}} \sum_{j_\eta=1}^{m_{\eta,2}} \sum_{i_\varphi=1}^{m_\varphi} \sum_{j_\varphi=1}^{m_\varphi} \textrm{Cov}(N^P_{r_1,i_\eta,i_\varphi},N^P_{r_2,j_\eta,j_\varphi}) \nonumber \\
&=& \sum_{i_\eta=1}^{m_{\eta,1}} \sum_{j_\eta=1}^{m_{\eta,2}} \sum_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^P_{r_1,i_\eta,i_\varphi},N^P_{r_2,j_\eta,i_\varphi}) \nonumber \\
&& + \sum_{i_\eta=1}^{m_{\eta,1}} \sum_{j_\eta=1}^{m_{\eta,2}} \sum_{s=1}^{m_\varphi-1} \left\{
\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\eta,i_\varphi},N^P_{r_2,j_\eta,i_\varphi+s})
\vphantom{+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,i_\eta,m_\varphi+i_\varphi-s},N^P_{r_2,j_\eta,i_\varphi})}
\right. \nonumber \\
&& \hphantom{+ \sum_{i_\eta=1}^{m_{\eta,1}} \sum_{j_\eta=1}^{m_{\eta,2}} \sum_{s=1}^{m_\varphi-1} \left\{
\vphantom{\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\eta,i_\varphi},N^P_{r_2,j_\eta,i_\varphi+s})
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,i_\eta,m_\varphi+i_\varphi-s},N^P_{r_2,j_\eta,i_\varphi})}
\right.}
\left.
\vphantom{\sum_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^P_{r_1,i_\eta,i_\varphi},N^P_{r_2,j_\eta,i_\varphi+s})}
+ \sum_{i_\varphi=1}^{s} \textrm{Cov}(N^P_{r_1,i_\eta,m_\varphi+i_\varphi-s},N^P_{r_2,j_\eta,i_\varphi})
\right\}
\label{eq:SumOfCovariancesTwistsEta}
\end{IEEEeqnarray}
Equations~(\ref{eq:twistsuminverted}) and (\ref{eq:nontwistsuminverted}) apply to each $\eta$ segment pair and, therefore, the final formula incorporating both a $\varphi$ and $\eta$ efficiency gradient is the following:
\begin{IEEEeqnarray}{l}
\textrm{Cov}(N^P_{r_1},N^P_{r_2}) = \nonumber \\
\phantom{\textrm{Cov}} m_\varphi \cdot \sum_{i_\eta=1}^{m_{\eta,1}} \sum_{j_\eta=1}^{m_{\eta,2}} \frac{\sum\limits_{i_\varphi=1}^{m_\varphi} \textrm{Cov}(N^D_{r_1,i_\eta,i_\varphi},N^D_{r_2,j_\eta,i_\varphi})}{\sum\limits_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\eta,i_\varphi} \varepsilon_{r_2,j_\eta,i_\varphi}} \nonumber \\
\phantom{\textrm{Cov}} + m_\varphi \cdot \sum_{i_\eta=1}^{m_{\eta,1}} \sum_{j_\eta=1}^{m_{\eta,2}} \sum_{s=1}^{m_\varphi-1} \left\{ \frac{ \sum\limits_{i_\varphi=1}^{m_\varphi-s} \textrm{Cov}(N^D_{r_1,i_\eta,i_\varphi},N^D_{r_2,j_\eta,i_\varphi+s})
+ \sum\limits_{i_\varphi=1}^{s} \textrm{Cov}(N^D_{r_1,i_\eta,m_\varphi+i_\varphi-s},N^D_{r_2,j_\eta,i_\varphi})}
{\sum\limits_{i_\varphi=1}^{m_\varphi-s} \varepsilon_{r_1,i_\eta,i_\varphi} \varepsilon_{r_2,j_\eta,i_\varphi+s}
+ \sum\limits_{i_\varphi=1}^{s} \varepsilon_{r_1,i_\eta,m_\varphi+i_\varphi-s} \varepsilon_{r_2,j_\eta,i_\varphi}} \right\} \nonumber \\
\phantom{\textrm{Cov}} - \delta_{r_1 r_2} \cdot m_\varphi \cdot \sum_{i_\eta=1}^{m_{\eta,1}} \frac{\sum\limits_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\eta,i_\varphi} \left( 1 - \varepsilon_{r_1,i_\eta,i_\varphi} \right)}{\sum\limits_{i_\varphi=1}^{m_\varphi} \varepsilon^2_{r_1,i_\eta,i_\varphi}} \cdot \frac{\sum\limits_{i_\varphi=1}^{m_\varphi} \langle N^D_{r_1,i_\eta,i_\varphi} \rangle}{\sum\limits_{i_\varphi=1}^{m_\varphi} \varepsilon_{r_1,i_\eta,i_\varphi}}
\label{eq:FinalCovFormulaEta}
\end{IEEEeqnarray}
Equation~(\ref{eq:FinalCovFormulaEta}) can also be used when the $\eta$ bin width for the final desired measurement is larger than the $\eta$ segmentation of the detector.
If the detector additionally has inactive channels that do not cover the full desired $\eta$ bin width, one can account for this using Eq.~(\ref{eq:FinalCovFormulaEta}) to further reduce measurement bias.
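Equation~(\ref{eq:FinalCovFormulaEta}) can likewise be sketched in code (all numbers invented; modular indexing over $\varphi$ replaces the two explicit inner sums), with the per-shift grouping applied for every pair of $\eta$ segments:

```python
def corrected_cov_eta_phi(cov_d, eps1, eps2, mean_d1=None, same_region=False):
    # Sketch of Eq. (FinalCovFormulaEta).  cov_d[ie][ip][je][jp] is the
    # detected covariance between eta segment ie / phi segment ip of region 1
    # and (je, jp) of region 2.
    m_eta1, m_phi = len(eps1), len(eps1[0])
    m_eta2 = len(eps2)
    total = 0.0
    for ie in range(m_eta1):
        for je in range(m_eta2):
            for s in range(m_phi):        # s = 0 is the non-twisted group
                num = sum(cov_d[ie][i][je][(i + s) % m_phi]
                          for i in range(m_phi))
                den = sum(eps1[ie][i] * eps2[je][(i + s) % m_phi]
                          for i in range(m_phi))
                total += m_phi * num / den
    if same_region:                       # delta_{r1 r2} term per eta segment
        for ie in range(m_eta1):
            total -= (m_phi
                      * sum(e * (1.0 - e) for e in eps1[ie])
                      / sum(e * e for e in eps1[ie])
                      * sum(mean_d1[ie]) / sum(eps1[ie]))
    return total

# Check with primary covariances P[ie][je][s] that depend only on the phi
# shift s (rotational invariance) and invented per-segment efficiencies.
m_phi = 3
P = [[[1.0, 0.5, 0.2], [0.8, 0.4, 0.1]],
     [[0.6, 0.3, 0.9], [0.7, 0.2, 0.5]]]
eps1 = [[0.9, 0.5, 0.7], [0.6, 0.8, 0.4]]
eps2 = [[0.75, 0.45, 0.85], [0.3, 0.95, 0.55]]
cov_d = [[[[P[ie][je][(jp - ip) % m_phi] * eps1[ie][ip] * eps2[je][jp]
            for jp in range(m_phi)] for je in range(2)]
          for ip in range(m_phi)] for ie in range(2)]
cov_p = corrected_cov_eta_phi(cov_d, eps1, eps2)  # = m_phi * sum of all P = 18.6
```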
\section{Verification}
\label{sec:verification}
The validity of the developed method is verified through studies using simulations of proton-proton collisions.
The event generator used here is Pythia 6.4~\cite{Sjostrand:2006za}.
It was chosen because many pre-configured tunes exist that predict substantially different values for various observables and, specifically in this case, for forward-backward correlations.
The properties of the tunes can be found in~\cite{Skands:2010ak}.
The tunes used for this study are Perugia3, Perugia0, and DW.
The DW tune produces quite different correlation factors when compared to the other two tunes (see Fig.~\ref{fig:PrimComp}) due to the significantly different relative contributions of initial state radiation compared to multiple parton interactions.
One should note that the bin width in $\eta$, $\Delta_{\textrm{bin}}$, affects the value of $b$ with $b \rightarrow 0$ as $\Delta_{\textrm{bin}} \rightarrow 0$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.32\textwidth]{PrimComp_b_Ref_EtaRp4d00_EtaB04.pdf}
\includegraphics[width=0.32\textwidth]{PrimComp_b_Ref_EtaRp4d00_EtaB08.pdf}
\includegraphics[width=0.32\textwidth]{PrimComp_b_Ref_EtaRp4d00_EtaB16.pdf}
\caption{(Color online) Correlation factors ($b$) obtained using the particle multiplicities computed at $\eta = \frac{\Delta\eta}{2}$ and $\eta = -\frac{\Delta\eta}{2}$ from the Pythia6 event-generator.
Three different tunes were selected: Perugia3, Perugia0, and DW.
Three different bin widths, $\Delta_{\textrm{bin}}$, in $\eta$ were used. Left: $\Delta_{\textrm{bin}}=1$. Middle: $\Delta_{\textrm{bin}}=0.5$. Right: $\Delta_{\textrm{bin}}=0.25$.}
\label{fig:PrimComp}
\end{center}
\end{figure}
The most common method of accounting for a detector effect is to use simulated data to evaluate a quantity both with and without detector effects included.
The ratio is then used as a correction to the actual measured value, which manifestly includes detector effects.
The validity of that method must, however, be assessed to establish if any residual dependence on the parameters of the generator exists.
As an example, the primary particles that were used to produce Fig.~\ref{fig:PrimComp} were subjected to a uniform contiguous acceptance hole of 40\% in $\varphi$ for all $\eta$ bins.
The detected correlation factors found with each tune were then corrected using the ratio of the true to detected factors found with the other tunes.
The result is shown in Fig.~\ref{fig:SimCorr}.
Deviations from the original primary correlation factors of up to 8\% are found in this case.
The deviations clearly show residual generator dependencies and biases.
Furthermore, real data could disagree even further with the tune used for the correction and could, therefore, produce even larger biases.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.32\textwidth]{NaiveCorrection_b_Phi_Sym_EtaRp4d00_EtaB04_PhiS10_Accp0d60.pdf}
\includegraphics[width=0.32\textwidth]{NaiveCorrection_b_Phi_Sym_EtaRp4d00_EtaB08_PhiS10_Accp0d60.pdf}
\includegraphics[width=0.32\textwidth]{NaiveCorrection_b_Phi_Sym_EtaRp4d00_EtaB16_PhiS10_Accp0d60.pdf}
\caption{(Color online) Correlation factors ($b$) obtained when only 60\% of the total acceptance in $\varphi$ is available, identically over all $\eta$, and the correlation factor found with one Pythia6 tune is corrected by the ratio of the primary to found correlation factor obtained with another tune.
Three different bin widths, $\Delta_{\textrm{bin}}$, in $\eta$ are shown. Left: $\Delta_{\textrm{bin}}=1$. Middle: $\Delta_{\textrm{bin}}=0.5$. Right: $\Delta_{\textrm{bin}}=0.25$.
The bottom part of each figure shows the ratio of the corrected correlation factor to the actual primary correlation factor for the tune that the data was generated from.}
\label{fig:SimCorr}
\end{center}
\end{figure}
In the following examples, the simulation-independent method developed in the previous section is used to evaluate the correlation factors.
Unless otherwise specified, the primary Pythia tune used in the examples is Perugia3.
Additionally, for all plots shown in the rest of this paper, the bin width in $\eta$ is $\Delta_{\textrm{bin}}=0.5$ unless explicitly stated otherwise.
\subsection{Reduced Acceptance}
\label{sec:reducedacc}
The initial study involves the reduction of the acceptance of each $\eta$ bin.
Two examples are studied: a simple case, where inactive regions have identical $\varphi$ locations in all $\eta$ bins, and a realistic case, where inactive regions have been placed randomly into each $\eta$ bin.
In both cases, geometrical areas are chosen to be inactive with respect to particle detection, meaning that any particle with a momentum vector pointing toward an inactive region is excluded from the detected quantities.
\subsubsection{The Simple Case}
\label{sec:simplecase}
Four simple examples are investigated in this section.
The inactive areas are chosen such that they begin at $\varphi=0$ and extend to $n\cdot\frac{2\pi}{10}$ where $n=1,2,3,\textrm{ and } 4$.
This results in geometric acceptances for each $\eta$ bin of 90\%, 80\%, 70\%, and 60\%.
The acceptance maps are shown in Fig.~\ref{fig:AccMapSym}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.4\textwidth]{AccMap_P3_Sym_EtaRp4d00_Accp0d90.pdf}
\hspace{4ex}
\includegraphics[width=0.4\textwidth]{AccMap_P3_Sym_EtaRp4d00_Accp0d80.pdf}
\includegraphics[width=0.4\textwidth]{AccMap_P3_Sym_EtaRp4d00_Accp0d70.pdf}
\hspace{4ex}
\includegraphics[width=0.4\textwidth]{AccMap_P3_Sym_EtaRp4d00_Accp0d60.pdf}
\caption{(Color online) The four simple examples of the acceptance maps of the $\eta$ bins.
The four panes show the inactive regions with acceptances of 90\%, 80\%, 70\%, and 60\%.}
\label{fig:AccMapSym}
\end{center}
\end{figure}
Regardless of the cause, undetected particles will result in a loss of information and will affect the measured correlation factor.
We would intuitively expect that the correlation factors are attenuated when the efficiency of a bin is less than 1.
Equation~(\ref{eq:BasicAccCorrection}) demonstrates this.
This effect is illustrated in the left pane of Fig.~\ref{fig:VarAccP3Simp}.
The correlation factor at the event-generator level is shown in black, while the other colors denote the cases of reduced acceptance.
The graph shows that the attenuation grows as the size of the inactive areas increases.
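The attenuation can be reproduced with a toy Monte Carlo. The model below (a common Poisson source feeding two Poisson bins, binomially thinned by the acceptance) is our own minimal sketch, not the Pythia setup used in the text:

```python
import math, random

# Toy Monte Carlo (our own minimal model, not the paper's Pythia setup)
# showing how a reduced acceptance attenuates the forward-backward
# correlation factor b = Cov(Nf, Nb) / sqrt(Var(Nf) Var(Nb)).
random.seed(3)

def poisson(lam):                      # Knuth's algorithm, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def thin(n, eps):                      # binomial acceptance: keep each particle w.p. eps
    return sum(random.random() < eps for _ in range(n))

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / math.sqrt(vx * vy)

lam_c, lam_i, eps, n_ev = 4.0, 4.0, 0.6, 100_000
true_f, true_b, meas_f, meas_b = [], [], [], []
for _ in range(n_ev):
    c = poisson(lam_c)                 # common source -> correlated F/B bins
    Nf, Nb = c + poisson(lam_i), c + poisson(lam_i)
    true_f.append(Nf); true_b.append(Nb)
    meas_f.append(thin(Nf, eps)); meas_b.append(thin(Nb, eps))

b_true, b_meas = corr(true_f, true_b), corr(meas_f, meas_b)
print(f"b_true = {b_true:.3f}, b_meas = {b_meas:.3f}")   # b_meas < b_true
```

For this particular model one can show analytically that the measured correlation factor is attenuated by exactly the acceptance $\varepsilon$, consistent with the qualitative behavior seen in the figure.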
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{VariedAcc_P3_b_Phi_Det_Sym_EtaRp4d00_EtaB08_PhiS01.pdf}
\includegraphics[width=0.49\textwidth]{VariedAccOnePhiSegment_P3_b_Phi_Corr_Sym_EtaRp4d00_EtaB08.pdf}
\caption{(Color online) Left: Attenuation of measured correlation factors as a result of decreased $\varphi$ acceptance in the $\eta$ bins (see Fig.~\ref{fig:AccMapSym}).
Right: The obtained correlation factors computed using Eq.~(\ref{eq:BasicAccCorrection}), which does not exploit $\varphi$ segmentation.
Both: The bottom parts of the figures show the ratio of the obtained correlation factors to the primary correlation factors.
The simple assumption of uniform detection probability reduces the discrepancy from the primary correlation factor by more than a factor of 3, but still leaves substantial discrepancies between the obtained correlation factors and those from the generator output.}
\label{fig:VarAccP3Simp}
\end{center}
\end{figure}
To illustrate the necessity of segmentation, the results are first computed without any segmentation (using Eq.~(\ref{eq:BasicAccCorrection})).
The results are shown in the right pane of Fig.~\ref{fig:VarAccP3Simp}.
Although the computed correlation factors now lie within 10\% of the primary correlation factors, the discrepancy is still sizable.
To further reduce this discrepancy, one can divide the $\eta$ bins into segments of equal size in $\varphi$ and use Eq.~(\ref{eq:FinalCovFormula}) to calculate the correlation factors.
In the left pane of Fig.~\ref{fig:VarAccCorrP3Simp}, this has been done using 10 $\varphi$ segments.
Even though up to 40\% of each bin is inactive, so that as few as 60\% of the particles are detected, the obtained correlation factors now agree with the primary values to within a few per mill.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{VariedAcc_P3_b_Phi_Corr_Sym_EtaRp4d00_EtaB08_PhiS10.pdf}
\includegraphics[width=0.49\textwidth]{VariedPhiSegs_P3_b_Phi_Corr_Sym_EtaRp4d00_EtaB08.pdf}
\caption{(Color online) Left: Correlation factors computed using 10 $\varphi$ segments to account for the azimuthal event shape.
Right: Correlation factors obtained with 70\% acceptance and a varied number of $\varphi$ segments in each $\eta$ bin.
Note that there is no improvement when increasing from 10 to 20 $\varphi$ segments, because both segmentations produce the same result analytically for the acceptance map applied here.
Both: The bottom parts of the figures show the ratio of the obtained correlation factors to the primary correlation factors.
Note that the scale in the bottom part of the left figure has a much smaller range than that from the previous plots with no $\varphi$ segmentation.}
\label{fig:VarAccCorrP3Simp}
\end{center}
\end{figure}
The chosen number of segments in the analysis influences the accuracy of the result.
This is shown in the right pane of Fig.~\ref{fig:VarAccCorrP3Simp} where the correlation factor has been computed using 1, 5, 10, and 20 $\varphi$ segments.
Using one $\varphi$ segment produces the same result as in the right pane of Fig.~\ref{fig:VarAccP3Simp}, while choosing more segments improves the result, up to 10 segments.
The results for 10 and 20 segments are identical.
This holds because every adjacent pair of acceptance values in $\varphi$ is the same and, therefore, the 20 segment version of Eq.~(\ref{eq:FinalCovFormula}) simplifies identically to the 10 segment version of that equation.
If one had, for instance, the same acceptance value for every $\varphi$ segment in an $\eta$ bin, Eq.~(\ref{eq:FinalCovFormula}) would identically simplify to Eq.~(\ref{eq:CorrectedCov}) or (\ref{eq:CorrectedVar}) depending on whether it corresponded to a covariance or a variance.
In the example in the right pane of Fig.~\ref{fig:VarAccCorrP3Simp}, 10 segments are enough to ensure segments of equal size, while also ensuring that every segment has a detection efficiency of either 1 or 0.
This is not the case when the correlation factor is computed using 5 segments: one or more segments then have an average efficiency of 0.5.
This makes the 5 segment case less accurate, because the assumption of uniform efficiency within each segment is violated.
This study shows that, while finer segmentation can produce more accurate results, there may be a limit beyond which no further accuracy is gained.
In fact, if acceptance is the only effect being accounted for, the segmentation used in the analysis should, if possible, be just fine enough that all segments have an efficiency of either 1 or 0, since this minimizes the amount of information that must be stored to perform the measurement.
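For a segment with uniform efficiency, the standard binomial-thinning relations invert the measured moments back to the primary ones. The sketch below uses these textbook relations as a hedged stand-in for Eqs.~(\ref{eq:CorrectedCov}) and (\ref{eq:CorrectedVar}), which are not reproduced here; the event model is again our own toy choice:

```python
import math, random

# Sketch of correcting the moments of uniformly thinned bins back to the
# primary ones via the standard binomial-thinning relations
#   Cov(N_f, N_b) = Cov(n_f, n_b) / (eps_f * eps_b),
#   Var(N)        = (Var(n) - (1 - eps) * E[n]) / eps**2,
# used as a hedged stand-in for the formulas referenced in the text;
# the event model (common Poisson source) is our own toy choice.
random.seed(4)

def poisson(lam):                        # Knuth's algorithm
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

eps, lam_c, lam_i, n_ev = 0.6, 4.0, 4.0, 100_000
f_meas, b_meas = [], []
for _ in range(n_ev):
    c = poisson(lam_c)                   # primary b = lam_c/(lam_c+lam_i) = 0.5
    Nf, Nb = c + poisson(lam_i), c + poisson(lam_i)
    f_meas.append(sum(random.random() < eps for _ in range(Nf)))
    b_meas.append(sum(random.random() < eps for _ in range(Nb)))

mf, mb = sum(f_meas) / n_ev, sum(b_meas) / n_ev
cov = sum((x - mf) * (y - mb) for x, y in zip(f_meas, b_meas)) / n_ev
vf = sum((x - mf) ** 2 for x in f_meas) / n_ev
vb = sum((y - mb) ** 2 for y in b_meas) / n_ev
cov_c = cov / eps**2                     # corrected covariance (eps_f = eps_b = eps)
vf_c = (vf - (1 - eps) * mf) / eps**2    # corrected variances
vb_c = (vb - (1 - eps) * mb) / eps**2
b_corr = cov_c / math.sqrt(vf_c * vb_c)
print(f"corrected correlation factor: {b_corr:.3f}")
```

In this toy model the corrected correlation factor recovers the primary value of 0.5 to within statistical fluctuations.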
\subsubsection{A Realistic Case}
\label{sec:realisticcase}
The simple test shown in section~\ref{sec:simplecase} demonstrates the general effect of reduced acceptance.
Realistic detector acceptances, however, lack that simplicity.
To test the method more generally, 20 inactive regions were placed randomly over the analysis region.
The only restriction on the randomness was that every $\eta$ bin must retain more than 50\% acceptance, to ensure that the correlation factor can be calculated with this method.
The resulting acceptance map is shown in the left pane of Fig.~\ref{fig:AccMapRandomResult}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{AccMap_P3_EtaRp4d00_EtaB16_PhiS20_Accp0d40.pdf}
\includegraphics[width=0.49\textwidth]{RandomHoles_P3_b_Corr_EtaRp4d00_EtaB08.pdf}
\caption{(Color online) Left: Acceptance map with 20 randomly placed inactive regions.
The individual regions span 1 unit in $\eta$ and $\frac{2\pi}{10}$ in azimuth.
Right: The correlation factors found after accounting for randomly placed inactive regions using different $\varphi$ segmentations.
The bottom part shows the ratio of the obtained correlation factors to the primary correlation factors.}
\label{fig:AccMapRandomResult}
\end{center}
\end{figure}
The right pane of Fig.~\ref{fig:AccMapRandomResult} shows the result of the analysis with different numbers of $\varphi$ segments.
When the acceptance varies from one $\eta$ bin to another, structure appears in the correlation factors obtained with no $\varphi$ segmentation that is not present in the simple case of section~\ref{sec:simplecase}.
Including $\varphi$ segmentation minimizes this effect.
Increasing the number of $\varphi$ segments to 10 gives the same accuracy as seen in the simple case.
Also as in the simple case, increasing the segmentation beyond 10 $\varphi$ segments in these examples does not produce a more accurate measurement.
\subsection{Efficiency}
\label{sec:efficiency}
In this section we address the case where the detection efficiency can have any value between 0 and 1.
This is in contrast to the previous cases where the detection efficiency was 1 for active regions and 0 for inactive regions.
This case is quite realistic for most detectors since perfect detection efficiency is never achieved.
A continuous efficiency gradient (in either $\varphi$ or $\eta$) is applied to the primary particles from the generator.
\subsubsection{$\varphi$ Dependent Efficiency}
\label{sec:phiefficiency}
To study the effect of a $\varphi$ efficiency gradient, a sine function of the form $\varepsilon(\varphi)=0.6\sin(\varphi/2)+0.2$ is imposed such that the range of efficiency values is $0.2 \leq \varepsilon \leq 0.8$ for $0 \leq \varphi \leq 2\pi$.
The resulting efficiency map is shown in the left pane of Fig.~\ref{fig:PhiEffGradMapResult}.
Note that, due to binning, the values portrayed in the figure show the average efficiency of the detection regions and not the continuous distribution which is actually imposed on the particles.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{EffGradMap_P3_Phi_EtaRp4d00_EtaB16_PhiS20_Accp0d80.pdf}
\includegraphics[width=0.49\textwidth]{EffGrad_P3_b_Phi_Corr_EtaRp4d00_EtaB08_Accp0d20.pdf}
\caption{(Color online) Left: Efficiency map resulting from a sine function of the form $\varepsilon(\varphi)=0.6\sin(\varphi/2)+0.2$.
The efficiencies take on values in the range $0.2 \leq \varepsilon \leq 0.8$ with the azimuthal angular range of $0 \leq \varphi \leq 2\pi$.
Note that the colors indicate the average efficiency within the bins and not the values from the efficiency function itself.
Right: Resultant correlation factors obtained using Eq.~(\ref{eq:FinalCovFormula}) with efficiency values extracted from the shown efficiency map using different azimuthal segmentations.
The bottom part shows the ratio of the obtained correlation factors to the primary correlation factors.}
\label{fig:PhiEffGradMapResult}
\end{center}
\end{figure}
The results from applying a continuous efficiency gradient in $\varphi$ are shown in the right pane of Fig.~\ref{fig:PhiEffGradMapResult}.
In principle, the accuracy can always be improved by increasing the number of segments, because the gradient never vanishes.
In practice, one must choose the number of segments according to the desired accuracy and the available statistics.
In this analysis, an accuracy of better than 1\% is already achieved using 5 $\varphi$ segments.
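Since the gradient is given in closed form, the segment-averaged efficiencies that enter the correction can be computed analytically (the antiderivative of $\sin(\varphi/2)$ is $-2\cos(\varphi/2)$); a minimal sketch of our own:

```python
import math

# Segment-averaged efficiencies for the azimuthal gradient
# eps(phi) = 0.6*sin(phi/2) + 0.2 used in this section, computed
# analytically from the antiderivative of sin(phi/2).
def seg_avg(a, b):
    # average of eps(phi) over [a, b]
    return 0.2 + 1.2 * (math.cos(a / 2) - math.cos(b / 2)) / (b - a)

for n_seg in (1, 5, 10):
    edges = [2 * math.pi * i / n_seg for i in range(n_seg + 1)]
    avgs = [seg_avg(a, b) for a, b in zip(edges, edges[1:])]
    print(n_seg, [f"{v:.3f}" for v in avgs])
```

With a single segment one obtains the global average efficiency of about 0.582; finer segmentations resolve the gradient between roughly 0.29 and 0.79, which is why a handful of segments already captures most of the azimuthal structure.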
\subsubsection{$\eta$ Dependent Efficiency}
\label{sec:etaefficiency}
To study the effect of an $\eta$ efficiency gradient, a sine function of the form $\varepsilon(\eta)=0.6\sin((\eta/4+1)\cdot\pi/2)+0.2$ is imposed such that the range of efficiency values is again $0.2 \leq \varepsilon \leq 0.8$ for $-4 \leq \eta \leq 4$.
The efficiency map for this gradient is shown in the left pane of Fig.~\ref{fig:EtaEffGradMapResult}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.49\textwidth]{EffGradMap_P3_Eta_EtaRp4d00_EtaB08_PhiS01_Accp0d80.pdf}
\includegraphics[width=0.49\textwidth]{EfGrad_P3_b_Eta_Corr_EtaRp4d00_EtaB08_Accp0d20.pdf}
\caption{(Color online) Left: Efficiency map resulting from a sine function of the form $\varepsilon(\eta)=0.6\sin((\eta/4+1)\cdot\pi/2)+0.2$.
The efficiencies take on values in the range $0.2 \leq \varepsilon \leq 0.8$ with the pseudorapidity range of $-4 \leq \eta \leq 4$.
The colors indicate the average efficiency within the bins and not the values from the efficiency function itself.
Right: Resultant correlation factors obtained using Eq.~(\ref{eq:FinalCovFormulaEta}) with efficiency values extracted from the shown efficiency map using different pseudorapidity segmentations for each measured point.
The bottom part shows the ratio of the obtained correlation factors to the primary correlation factors.}
\label{fig:EtaEffGradMapResult}
\end{center}
\end{figure}
The results from applying a continuous efficiency gradient in $\eta$ are shown in the right pane of Fig.~\ref{fig:EtaEffGradMapResult}.
Again, in principle, the accuracy can always be improved by increasing the number of segments, because the gradient never vanishes.
However, while one does see an improvement when increasing the $\eta$ segmentation from 1 to 5 segments per $\eta$ bin, virtually no further improvement is seen when continuing to 10 segments.
\subsection{Comparison between Different Tunes}
\label{sec:GenComp}
The need for the accuracy achieved with this method can be demonstrated by comparing the results from the different generators.
Figure~\ref{fig:VariedGenerator} shows the results using the same particles that produced the curves in Fig.~\ref{fig:PrimComp} with the 60\% simple acceptance configuration in each $\eta$ bin applied.
The bins were divided into 10 azimuthal segments and Eq.~(\ref{eq:FinalCovFormula}) was used to obtain the results.
For all results, the method has been applied with no simulation input, using only the knowledge of the acceptance.
The discrepancies from the true values show no particularly different behavior for any specific generator (tune).
For the vast majority of points, for all bin widths, the accuracy of the obtained values is within 1\%.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.32\textwidth]{AccGenComp_b_Phi_Sym_EtaRp4d00_EtaB04_PhiS10_Accp0d60.pdf}
\includegraphics[width=0.32\textwidth]{AccGenComp_b_Phi_Sym_EtaRp4d00_EtaB08_PhiS10_Accp0d60.pdf}
\includegraphics[width=0.32\textwidth]{AccGenComp_b_Phi_Sym_EtaRp4d00_EtaB16_PhiS10_Accp0d60.pdf}
\caption{(Color online) Resultant correlation factors obtained using different tunes of Pythia and three different $\eta$ bin widths ($\Delta_{\textrm{bin}} = 1, 0.5,\textrm{ and } 0.25$).
Data from each tune are subjected to the same geometrical acceptance (60\%).
The bottom part of the figures shows the ratio of the obtained correlation factor to the primary value for that tune (shown in Fig.~\ref{fig:PrimComp}).
The figures show no significant dependence of the deviations on the tune.}
\label{fig:VariedGenerator}
\end{center}
\end{figure}
Figure~\ref{fig:VariedGenerator} shows that the method accurately reproduces the curves of Fig.~\ref{fig:PrimComp}.
For small $\Delta\eta$ (which many detectors possess), one must achieve high accuracy and precision to distinguish between different tunes and, consequently, the relative strengths of the underlying physical processes.
The methods presented here allow one to make correlation measurements with high enough accuracy and precision to achieve this goal.
\section{Conclusions}
\label{sec:conclusion}
The effect of reduced acceptance and imperfect detection efficiency on forward-backward correlations has been derived using a statistical approach.
No assumptions about the distribution of primary particles were made; the derived results are therefore valid for physical data as well as for the simulated data studied here.
Furthermore, a framework to evaluate the effect of detector acceptance and efficiency on multiplicity correlations of any order has been established.
If the acceptance and the efficiency are well determined, the method can evaluate forward-backward correlations very accurately, depending on the capabilities of the detector (segmentation), as long as the inactive regions cover less than 50\% of every $\eta$ bin.
The segmentation used in the analysis must be chosen with some care: the number of segments should be large enough to ensure nearly constant efficiency within each segment, while balancing against the storage required for recording the necessary information for the analysis.
The presented method allows one to achieve high accuracy for computing multiplicity correlations necessary to distinguish between the underlying processes governing particle production in the collision.
The framework could be further used to investigate higher order multiplicity correlations that could put additional constraints on models.
To further gain the power to distinguish between the underlying processes, these correlation measurements must also be performed accurately over large $\Delta\eta$.
This often requires using detectors which have little ability to reject secondary particles (which this paper has not investigated).
Extending this framework to deal with this effect would provide a powerful tool in the analysis of correlations over wide $\eta$ ranges.
\section*{Acknowledgments}
We would like to thank the Danish National Research Foundation (DNRF), the Danish Natural Science Research Council (FNU), and the Villum Foundation for their financial support of this research.
1702.07926
\section{Introduction}
It is well known that the manifestation of the dynamical aspects of quantum chaos is possible only within its characteristic timescales, the Heisenberg time in the regular case and the logarithmic timescale in the chaotic case, within which typical phenomena admitting a semiclassical description, such as relaxation and exponential sensitivity, are possible \cite{Ber89,Cas95,Gut90,Haa01,Sto99}.
The logarithmic timescale determines the time interval in which the wave--packet motion is as random as the classical trajectory, spreading over the whole phase space \cite{Hel89}. It should be noted that some authors consider the logarithmic timescale a satisfactory resolution of the apparent contradiction between the Correspondence Principle and the quantum-to-classical transition in chaotic phenomena \cite{Cas95}.
Concerning chaotic dynamics in quantum systems, the Kolmogorov--Sinai entropy (KS--entropy) \cite{Lic92,Tab79,Wal82} has proven to be one of the most used indicators. The main reason is that the behavior of chaotic systems with continuous spectrum can be modeled by discretized models such that the KS--entropies of the continuous and the discrete systems coincide over a certain time range. Taking into account the graininess of the quantum phase space due to the Uncertainty Principle, noncommutative quantum extensions of the KS--entropy can be found \cite{Ben04,Ben05,Cri93,Cri94,Fal03}. Thus, the issue of graininess is intimately related to the quantum chaos timescales \cite{Eng97,Gomez14,Ike93,Lan07}.
To complete this picture, it should be mentioned that a relevant property of dynamical systems is ergodicity, i.e. when the subsets of phase space have a correlation decay such that any two subsets are statistically independent ``in time average" for large times. This property, assumed as a hypothesis in thermodynamics and in ensemble theory \cite{Hua87,Pat72}, underlies statistical mechanics by allowing the approach to equilibrium by means of densities that are uniformly distributed in phase space. In this sense, in previous works \cite{Cas09,Gom15} quantum extensions of the ergodic property were studied, from which characterizations of the chaotic behavior of the Casati--Prosen model \cite{Cas05} and of the phase transitions of the kicked rotator \cite{Gom14} were obtained.
The main goal of this paper is to exploit the graininess of the quantum phase space and the properties of the KS--entropy in an ergodic dynamics in order to get an estimation of the logarithmic timescale in the semiclassical limit.
The paper is organized as follows. In Section 2 we give the preliminaries and present an estimation of the KS--entropy for an ergodic dynamics. In Section 3 we show that this estimation can be extended to the classical analogue of a quantum system in the semiclassical limit. From this estimation and a time rescaled KS--entropy of the classical analogue, we obtain the logarithmic timescale. Section 4 is devoted to a discussion of the results and their physical relevance.
Finally, in Section 5 we draw some conclusions, and future research directions are outlined.
\section{Preliminaries}
The definitions, concepts and theorems given in this Section are extracted from Ref. \cite{Wal82}.
\subsection{Kolmogorov--Sinai entropy}
We recall the definition of the KS--entropy within the standard framework of measure theory.
Consider a dynamical system given by $(\Gamma, \Sigma, \mu, \{T_t\}_{t\in J})$, where $\Gamma$ is the phase space, $\Sigma$ is a $\sigma$-algebra, $\mu:\Sigma \rightarrow [0,1]$ is a normalized measure and $\{T_t\}_{t\in J}$ is a semigroup of measure--preserving transformations. For instance, $T_t$ could be the classical Liouville transformation or the corresponding classical transformation associated to the quantum Schr\"{o}dinger transformation. $J$ is usually $\mathbb{R}$ for continuous dynamical systems and $\mathbb{Z}$ for discrete ones.
Let us divide the phase space $\Gamma$ into a partition $Q$ of $m$ small cells $A_{i}$ of measure $\mu(A_{i})$. The entropy of $Q$ is defined as
\begin{equation}\label{entropy partition}
H(Q)=-\sum_{i=1}^{m}\mu(A_{i})\log\mu(A_{i}).
\end{equation}
Now, given two partitions $Q_1$ and $Q_2$ we can obtain the partition $Q_1\vee Q_2=\{a_i\cap b_j: a_i\in Q_1, b_j\in Q_2\}$, i.e. $Q_1\vee Q_2$ is a common refinement of $Q_1$ and $Q_2$.
In particular, from $Q$ we can obtain the refined partition $\vee_{j=0}^{n}T^{-j}Q$, where $T^{-j}$ is the inverse of $T_{j}$ (i.e. $T^{-j}=T_{j}^{-1}$) and $T^{-j}Q=\{T^{-j}a:a\in Q\}$.
From this, the KS--entropy $h_{KS}$ of the dynamical system is defined as
\begin{equation}\label{KS-entropy}
h_{KS}=\sup_{Q}\{\lim_{n\rightarrow\infty}\frac{1}{n}H(\vee_{j=0}^{n}T^{-j}Q)\}
\end{equation}
where the supremum is taken over all measurable initial partitions $Q$ of $\Gamma$. In addition, the Brudno theorem expresses that the KS--entropy measures the average unpredictability of the information contained in all possible trajectories in phase space. Furthermore, the Pesin theorem relates the KS--entropy to the exponential instability of motion given by the Lyapunov exponents. Thus, from the Pesin theorem it follows that $h_{KS}>0$ is a sufficient condition for chaotic motion \cite{Lic92,Tab79}.
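As a numerical illustration of the definitions \eqref{entropy partition} and \eqref{KS-entropy} (our own sketch, not part of the original analysis), the block entropy per step of the binary partition under the doubling map $T(x)=2x \bmod 1$ converges to its KS--entropy $\log 2$:

```python
import math, random

# Estimate the KS-entropy of the doubling map T(x) = 2x mod 1
# (a standard example with h_KS = log 2) from the block entropy
# H(Q_k)/k of the binary partition Q = {[0,1/2), [1/2,1)}.
random.seed(0)
k = 10              # block length (number of refinement steps)
N = 200_000         # number of sampled initial conditions
counts = {}
for _ in range(N):
    x = random.random()
    word = 0
    for _ in range(k):                # record the itinerary through Q
        word = (word << 1) | (x >= 0.5)
        x = (2.0 * x) % 1.0
    counts[word] = counts.get(word, 0) + 1

H = -sum(c / N * math.log(c / N) for c in counts.values())
h_estimate = H / k                    # per-step entropy -> h_KS as k grows
print(f"h_KS estimate: {h_estimate:.4f}  (log 2 = {math.log(2):.4f})")
```

Here the supremum over partitions is not taken: the binary partition is already generating for this map, so its block entropy rate suffices.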
\subsection{Time rescaled KS--entropy}
By taking $(\Gamma, \Sigma, \mu, \{T_t\}_{t\in J})$ as the classical analogue of a quantum system and considering the timescale $\tau$ within which the quantum and classical descriptions coincide, the definition \eqref{KS-entropy} can be expressed as
\begin{equation}\label{KS-entropy1}
h_{KS}=\sup_{Q}\{\lim_{n\rightarrow\infty}\frac{1}{n\tau}H(\vee_{j=0}^{n}T^{-j\tau}Q)\}
\end{equation}
Now since $T^{-j\tau}=(T_{\tau})^{-j}$ one can recast \eqref{KS-entropy1} as
\begin{equation}\label{KS-entropy2}
h_{KS}=\frac{1}{\tau}\sup_{Q}\{\lim_{n\rightarrow\infty}\frac{1}{n}H(\vee_{j=0}^{n}(T_{\tau})^{-j}Q)\} \nonumber
\end{equation}
Finally, from this equation one can express $h_{KS}$ as
\begin{equation}\label{KS-entropy3}
h_{KS}=\frac{1}{\tau}h_{KS}^{(\tau)} \ \ \ , \ \ \ h_{KS}^{(\tau)}=\sup_{Q}\{\lim_{n\rightarrow\infty}\frac{1}{n}H(\vee_{j=0}^{n}(T_{\tau})^{-j}Q)\}
\end{equation}
The main role of the time rescaled KS--entropy $h_{KS}^{(\tau)}$ is that it allows one to introduce the timescale $\tau$ as a parameter. This concept will be an important ingredient for obtaining the logarithmic timescale.
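A quick numerical check of \eqref{KS-entropy3} (our own toy illustration, taking $\tau$ as an integer number of steps of the doubling map, for which $T_\tau=T^\tau$ is multiplication by $2^\tau$ mod 1):

```python
import math, random

# Sketch of the relation h_KS = h_KS^(tau) / tau of Eq. (KS-entropy3),
# illustrated with the doubling map T(x) = 2x mod 1 and the time-tau map
# T_tau = T^tau (tau an integer number of steps; our own toy choice).
# For T_tau a generating partition is the 2^tau equal subintervals of [0,1).
random.seed(2)
tau, k, N = 2, 5, 200_000
m = 2 ** tau                          # symbols per step of T_tau
counts = {}
for _ in range(N):
    x = random.random()
    word = 0
    for _ in range(k):
        word = word * m + int(m * x)  # symbol = cell of the finer partition
        x = (m * x) % 1.0             # one step of T_tau = T^tau
    counts[word] = counts.get(word, 0) + 1

h_tau = -sum(c / N * math.log(c / N) for c in counts.values()) / k
print(f"h^(tau) = {h_tau:.3f},  h^(tau)/tau = {h_tau/tau:.3f},  log 2 = {math.log(2):.3f}")
```

The estimated $h_{KS}^{(\tau)}$ is close to $\tau\log 2$, so dividing by $\tau$ recovers $h_{KS}=\log 2$, as the rescaling prescribes.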
\subsection{Ergodicity}
In dynamical systems theory, the correlation decay of ergodic systems is one of the most important properties for the validity of the statistical description of the dynamics, because different regions of phase space become statistically independent ``in time average" when they are sufficiently separated in time. More precisely, if a dynamical system $(\Gamma,\Sigma,\mu,\{T_t\})$ is ergodic, then the correlations between two arbitrary sets $A,B\subseteq \Gamma$ that are sufficiently separated in time satisfy
\begin{eqnarray}\label{ergodic}
\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}C(T_tA,B)\,dt=0 \ \ \ \ , \ \ \ \ \textrm{for all} \ A,B\subseteq\Gamma \nonumber
\end{eqnarray}
where $C(T_tA,B)=\mu(T_tA \cap B)-\mu(A)\mu(B)$ is the correlation between $T_tA$ and $B$, with $T_tA$ the set $A$ at time $t$. This equation expresses the so-called \emph{ergodicity property}, which guarantees the equality between the time average and the space average of any function along the trajectories of the dynamical system. The ergodicity property is satisfied by all chaotic systems, such as chaotic billiards and chaotic maps, including systems described by ensemble theory. Furthermore, the calculation of the KS--entropy is intimately related to the ergodicity property, as we shall see.
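The time-averaged correlation decay can be checked numerically. The sketch below uses Arnold's cat map and a test set of our own choosing, purely as an illustration:

```python
import random

# Monte Carlo illustration of the ergodicity property (time-averaged decay
# of correlations) for Arnold's cat map T(x,y) = (2x+y, x+y) mod 1, an
# invertible, measure-preserving, ergodic map; the set A below is our own
# illustrative choice.
random.seed(1)
N = 100_000
pts = [(random.random(), random.random()) for _ in range(N)]

def T_inv(p):
    # inverse cat map: matrix [[1, -1], [-1, 2]], taken mod 1
    x, y = p
    return ((x - y) % 1.0, (-x + 2.0 * y) % 1.0)

in_A = lambda p: p[0] < 0.5 and p[1] < 0.5   # A = B = [0,1/2) x [0,1/2)
muA = 0.25
z_in_B = [in_A(p) for p in pts]

w = pts[:]                    # w_i = T^{-t} z_i, so z_i in T_t A  iff  w_i in A
corr = []
for t in range(1, 21):
    w = [T_inv(p) for p in w]
    joint = sum(1 for p, b in zip(w, z_in_B) if b and in_A(p)) / N
    corr.append(joint - muA * muA)   # C(T_t A, B) = mu(T_t A ∩ B) - mu(A) mu(B)

time_avg = sum(corr) / len(corr)
print(f"C at t=1: {corr[0]:+.4f}, at t=20: {corr[-1]:+.4f}, time average: {time_avg:+.4f}")
```

The correlation is essentially zero after a few iterations, so its time average vanishes up to Monte Carlo noise, as the ergodicity property requires.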
\subsection{Some methods for calculating the Kolmogorov--Sinai entropy}
In order to calculate the Kolmogorov--Sinai entropy $h_{KS}$, the concept of generator is of fundamental importance. A countable partition $\widetilde{Q}=\{a_1,a_2,\ldots,a_i,\ldots\}$ of $\Gamma$ is called a \emph{generator} of $\Gamma$ for an invertible measure--preserving transformation $T$ if
\begin{eqnarray}\label{generator}
\vee_{n=-\infty}^{\infty}T^{n}\widetilde{Q}=\Sigma
\end{eqnarray}
This equation expresses that the entire $\sigma$--algebra $\Sigma$ can be generated by means of countable intersections of the form \\
\noindent $\ldots \cap T^{-2}a_{k_{-2}}\cap T^{-1}a_{k_{-1}} \cap a_{k_{0}} \cap T^{1}a_{k_{1}} \cap T^{2}a_{k_{2}} \cap \ldots$ where $a_{k_j}\in \widetilde{Q}$ for all $j\in\mathbb{Z}$. It can be proved that if $\widetilde{Q}$ is a generator and $H(\widetilde{Q})<\infty$ then
\begin{eqnarray}\label{generator2}
h_{KS}=\lim_{n\rightarrow\infty}\frac{1}{n}H(\vee_{j=0}^{n}T^{-j}\widetilde{Q}) \nonumber
\end{eqnarray}
which reduces the problem of taking the supremum in the formula for $h_{KS}$ to that of finding a generator $\widetilde{Q}$. In practice, even having found a generator, the calculation of $H(\vee_{j=0}^{n}T^{-j}\widetilde{Q})$ turns out to be a difficult task, due to the large number of subsets of the partition $\vee_{j=0}^{n}T^{-j}\widetilde{Q}$ as $n$ increases. However, a good estimation of $h_{KS}$ can be made by means of the existence of finite generators. This is the content of the following theorem.
\begin{theorem}\label{estimation KS}(Estimation of the KS--entropy by means of finite generators)
\noindent If $(\Gamma, \Sigma, \mu, \{T_t\}_{t\in J})$ is an ergodic dynamical system and $T=T_{1}$ is an invertible measure--preserving transformation with $h_{KS}<\infty$, then $T$ has a finite generator $\widetilde{Q}$
\begin{eqnarray}\label{generator3}
\widetilde{Q}=\{a_1,a_2,\ldots,a_n\}
\end{eqnarray}
such that
\begin{eqnarray}\label{generator4}
e^{h_{KS}}\leq n \leq e^{h_{KS}}+1
\end{eqnarray}
\end{theorem}
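A minimal check of the bound \eqref{generator4} (our own illustration): for a given entropy value, the bound pins the generator size down to at most two admissible integers.

```python
import math

# Illustration of the finite-generator bound e^{h_KS} <= n <= e^{h_KS} + 1,
# listing the admissible generator sizes n for a few sample entropy values.
# (The baker's transformation, with h_KS = log 2, is a standard example:
# its two-element partition into left and right halves is a generator,
# so n = 2 saturates the lower bound.)
def admissible_sizes(h, n_max=20, tol=1e-9):
    lo, hi = math.exp(h), math.exp(h) + 1
    return [n for n in range(1, n_max) if lo - tol <= n <= hi + tol]

for h in (math.log(2), 1.0, math.log(5)):
    print(f"h_KS = {h:.3f} -> admissible n: {admissible_sizes(h)}")
```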
\section{Logarithmic timescale in an ergodic dynamics in the semiclassical limit}
With the help of the Theorem \ref{estimation KS} and taking into account the graininess of the quantum phase space, one can obtain a semiclassical version of the Theorem \ref{estimation KS} from which the logarithmic timescale can be deduced straightforwardly. We begin by describing the natural graininess of the quantum phase space.
\subsection{A quantum generator in the quantum phase space}
\begin{figure}[th]
\begin{center}
\includegraphics[width=8cm]{discreteness2.png}
\end{center}
\caption{Bounded motion and graininess in quantum phase space. In the semiclassical limit $q\gg1$ the region $R$ that the classical analogue occupies has a volume that is approximately the sum of the volumes of the rigid boxes $\Delta q\Delta p$ contained in $R$. The region $\Sigma$ corresponding to the rigid boxes that intersect the frontier of $R$ can be neglected in the limit $q\gg1$.}
\label{fig1}
\end{figure}
In quantum mechanics, the Uncertainty Principle leads to a granulated phase space composed of rigid boxes of minimal volume $\Delta q\Delta p=\hbar^{D}$ (saturating the uncertainty relation $\Delta q\Delta p\geq\hbar^{D}$), where $2D$ is the dimension of the phase space. This is the so-called \emph{graininess} of the quantum phase space. In a typical chaotic dynamics the motion in phase space $\Gamma$ is bounded, the system occupying a finite region $R$ of volume $\textrm{vol}(R)$. In turn, in the semiclassical limit $q=\frac{\textrm{vol}(R)}{\hbar^{D}}\gg1$ the value of $\textrm{vol}(R)$ can be approximated by the sum of the volumes of all the rigid boxes $\Delta q\Delta p$ contained in $R$.
Let us denote these boxes by $c_1,c_2,\ldots,c_n$. In this situation the region $\Sigma$ corresponding to the rigid boxes that intersect the frontier of $R$ can be neglected. An illustration for $D=1$ is shown in Fig. 1.
Now, since no subset of the grained phase space can have a volume smaller than $\Delta q\Delta p$, it follows that
\begin{eqnarray}\label{generator6}
\widetilde{Q}=\{c_1,c_2,\ldots,c_n\}
\end{eqnarray}
is the unique generator of $R$, which we will call the \emph{quantum generator}. Moreover, any $\sigma$--algebra in the quantum phase space can only be composed of unions of the rigid boxes $c_1,c_2,\ldots,c_n$.
\subsection{Estimation of the logarithmic timescale in the semiclassical limit}
In order to obtain a semiclassical version of Theorem \ref{estimation KS}, we consider a quantum system having a classical analogue $(\Gamma,\mu,\Sigma,\{T_t\})$ provided with a finely grained phase space $\Gamma$ in the semiclassical limit $q\gg1$, as is shown in Fig. 1.
Also, the partition $\widetilde{Q}$ of \eqref{generator6} is the quantum generator of the region $R$ occupied by the classical analogue.
Let us assume that the dynamics in phase space is ergodic\footnote{Note that the condition of invertibility is guaranteed since the equations of motion in classical mechanics are time-reversible.}. We then arrive at the main contribution of this work, established by means of the following result.
\begin{theorem}\label{estimation logarithmic}(Estimation of the time rescaled KS--entropy of the classical analogue)
\noindent Assume one has a quantum system having a classical analogue $(\Gamma, \Sigma, \mu, \{T_t\}_{t\in J})$ that occupies a region $R$ of a discretized quantum phase space of dimension $2D$. If $T=T_{\tau}$ is an ergodic and invertible measure--preserving transformation and $h_{KS}^{(\tau)}<\infty$ is the time rescaled KS--entropy of the classical analogue, then in the semiclassical limit $q\gg1$ one has
\begin{eqnarray}\label{generator7}
e^{h_{KS}^{(\tau)}}\leq n \leq e^{h_{KS}^{(\tau)}}+1
\end{eqnarray}
where $n=\frac{\textrm{vol}(R)}{\hbar^{D}}$ is the quasiclassical parameter $q$ and $\widetilde{Q}=\{c_1,c_2,\ldots,c_n\}$ is the quantum generator.
\end{theorem}
\begin{proof}
It is clear that the partition $\widetilde{Q}=\{c_1,c_2,\ldots,c_n\}$ of eq. \eqref{generator6} is the only quantum generator in the quantum phase space, and since $\textrm{vol}(R)$ is $n$ times the volume $\hbar^{D}$ of each rigid box contained in $R$, one obtains $\textrm{vol}(R)=n\hbar^{D}$. The result then follows by applying Theorem \ref{estimation KS} to the classical analogue in the semiclassical limit.
\end{proof}
\noindent From the equation \eqref{generator7} one has
\begin{eqnarray}\label{generator9}
\tau h_{KS}\leq \log q \leq \log (e^{\tau h_{KS}}+1) \nonumber
\end{eqnarray}
from which it follows that
\begin{eqnarray}\label{generator10}
\tau \leq \frac{\log q}{h_{KS}} \leq \frac{\log (e^{\tau h_{KS}}+1)}{h_{KS}} \nonumber
\end{eqnarray}
Now, assuming a chaotic motion of the classical analogue by means of the condition $h_{KS}>0$, one can make the approximation $e^{\tau h_{KS}}+1\approx e^{\tau h_{KS}}$. Inserting this into the last inequality, one obtains
\begin{eqnarray}\label{generator11}
\tau = \frac{\log q}{h_{KS}} \ \ \ \ \ \textrm{with} \ \ \ \ \ q=\frac{\textrm{vol}(R)}{\hbar^{D}}
\end{eqnarray}
which is precisely the logarithmic timescale.
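Equation \eqref{generator11} can be evaluated for illustrative numbers (arbitrary sample values of our own, not tied to a particular system), making explicit that $\tau$ grows only logarithmically as $\hbar\to 0$:

```python
import math

# Illustrative evaluation of the logarithmic timescale tau = log(q) / h_KS
# of Eq. (generator11); all numbers below are arbitrary sample values.
h_KS = 0.5                 # KS-entropy of the classical analogue (arbitrary units)
vol_R = 1.0                # phase-space volume occupied by the system, D = 1
taus = []
for hbar_eff in (1e-2, 1e-4, 1e-8):    # deeper and deeper semiclassical limit
    q = vol_R / hbar_eff               # quasiclassical parameter q = vol(R)/hbar^D
    taus.append(math.log(q) / h_KS)
    print(f"hbar = {hbar_eff:.0e} -> q = {q:.0e}, tau = {taus[-1]:.2f}")
```

Squaring $q$ only doubles $\tau$: the timescale grows logarithmically as $\hbar$ decreases, which is the hallmark of the chaotic case.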
\section{Physical relevance}
Here we discuss the physical relevance of the results obtained, in the light of quantum chaos theory.
Several previous works, based on the quantum dynamics of observable values \cite{Ber78}, on quantization by means of symmetric and ordered expansions \cite{Ang03}, and on wave--packet spreading along hyperbolic trajectories \cite{Sch12}, among others, show that a unified scenario for characterizing the quantum chaos timescales is still absent.
Furthermore, the mathematical structure used in most of these approaches makes it difficult to visualize intuitively the quantum and classical elements involved, and in some cases the results are restricted to special initial conditions \cite{Ang03}.
Nevertheless, we can point out the aspects of our contribution that agree with some standard approaches. Below we quote some results from the literature and discuss them from the point of view of the present work.
\begin{itemize}
\item \emph{The timescale $\tau_{\hbar}$ may be one of the universal and fundamental characteristics of quantum chaos accessible to experimental observation \cite{Ber78,Cas95}. In fact, the existence of $\tau_{\hbar}$
\begin{eqnarray}\label{universal timescale}
\tau_{\hbar}=C_1\log\left(\frac{C_2}{\hbar}\right)
\end{eqnarray}
where $C_{1}$ and $C_{2}$ are constants, has been observed and discussed in detail for some typical models of quantum chaos \cite{Ber92,Zas81}.}
\vspace{0.1cm}
The relation \eqref{universal timescale} arises as a mathematical consequence of Theorem \ref{estimation logarithmic} for a phase space of arbitrary dimension $2D$. In fact, from \eqref{generator11} one obtains $C_1=\frac{1}{h_{KS}}$ and $C_2=\textrm{vol}(R)$, the volume occupied by the system along the dynamics.
\vspace{0.1cm}
\item \emph{Every classical structure with a phase--space
area smaller than Planck's constant $\hbar$ has no quantum correspondence. Only
the classical structures extending in phase space over scales larger than Planck's
constant are susceptible to emerge out of quantum--mechanical waves \cite{Gas14}.}
\vspace{0.1cm}
From Theorem \ref{estimation logarithmic} one can see that the classical structure of the KS--entropy estimation (Theorem \ref{estimation KS}) emerges in terms of the quasiclassical parameter $q$ in the semiclassical limit, as expressed in \eqref{generator7}.
\vspace{0.1cm}
\item \emph{If strong chaos occurs in the classical limit, then for a rather short time $\tau_{\hbar}$ the wave--packet spreads over the phase volume:
$\Delta I=\hbar\exp(\lambda \tau_{\hbar})$ where $\lambda$ is the characteristic Lyapunov exponent. Therefore, for the time--scale $\tau_{\hbar}$, one has: $\tau_{\hbar}\sim \lambda^{-1}\ln (\Delta I/\hbar)=(\ln \kappa)/\lambda$, where $\kappa$ is of the order of the number of quanta of characteristic wave packet width \cite{Cas95}.}
\vspace{0.1cm}
In fact, from \eqref{generator7} with $D=1$ it follows that for the case of the wave--packet one has: $h_{KS}^{(\tau)}=\tau_{\hbar}\lambda$, $h_{KS}=\lambda$ (Pesin theorem), and $\kappa=q=\frac{\textrm{vol}(R)}{\hbar}$. In this way, the number of quanta of the characteristic wave packet width is equal to the number $n$ of members of the quantum generator given by \eqref{generator6}.
\end{itemize}
\begin{figure}[th]
\begin{center}
\includegraphics[width=14cm]{timescale.png}
\end{center}
\caption{A schematic picture of Theorem \ref{estimation logarithmic}, showing the elements needed to obtain the logarithmic timescale.}
\label{fig2}
\end{figure}
A panoramic overview of the content of Theorem \ref{estimation logarithmic} is shown in Fig. 2.
\section{Conclusions}
We have presented, in the semiclassical limit, an estimation of the logarithmic timescale for a quantum system having a classical analogue provided with an ergodic dynamics in its phase space. The three ingredients we used were: 1) the fine granularity of the quantum phase space in the semiclassical limit, 2) the existence of an estimation of the KS--entropy in terms of finite generators of the region that the system occupies in phase space, and 3) a time rescaling of the KS--entropy that allows the characteristic timescale to be introduced as a parameter.
In summary, our contribution is three--fold. First, the logarithmic timescale arises, in the semiclassical limit, as a formal result of ergodic theory applied to a discretized quantum phase space of a classical analogue with ergodic dynamics, thus providing a theoretical bridge between ergodic theory and the graininess of the quantum phase space.
Second, Theorem \ref{estimation logarithmic} makes the simultaneous interplay between quantum dynamics and classical instability in phase space more visible and rigorous. In fact, the quasiclassical parameter $q$ is expressed as the number of members of the quantum generator of the region that the classical analogue occupies in phase space.
Third, Theorem \ref{estimation logarithmic} can be regarded as a mathematical proof of the existence of the logarithmic timescale when the dynamics of the classical analogue is chaotic, i.e. when the KS--entropy is positive. However, since the quasiclassical parameter $q$ and the KS--entropy are system--specific, the parameters of the logarithmic timescale must be determined by experimental observation in each example.
One final remark. It is pertinent to point out that, in addition to Theorem \ref{estimation KS}, the techniques employed (i.e. the existence of a single quantum generator of the quantum phase space and the time rescaling property of the KS--entropy) can be used to extend other results of ergodic theory semiclassically.
\section*{Acknowledgments}
This work was partially supported by CONICET and Universidad Nacional de La Plata, Argentina.
\section{Introduction}
\label{intro}
Electroencephalography (EEG) has applications in many fields, spanning from basic neuroscientific research to clinical domains. However, despite the technological advances in recording precision, the full potential of EEG is currently not being exploited. One possible way to do so is to use computational models in order to integrate findings from electrophysiology, network-level models (the level of neuroimaging), and behavior \citep{franceschiello2018neuromathematical, franceschiello2019geometrical}.
A model is defined in terms of a set of equations which describe the relationships between variables. Importantly, models exist for different spatial scales \citep{varela2001brainweb, deco2008dynamic}, spanning from the single cell spike train up to macroscopic oscillations. The equations are used to simulate how each variable changes over time, or, in rare cases, to find analytical solutions for the relationships among the variables. The dynamics of the resulting time series are also influenced by a set of parameters, which can either be estimated from available data - for example, a model which simulates the firing of a certain neuron type could contain a time constant estimated from recordings on that type of neuron in rodents - or varied systematically in an exploratory manner. The goal is to produce time series of variables that can be compared to real data. In particular, one can simulate perturbations to brain activity, be it sensory stimulation, a therapeutic intervention like DBS or a drug, or a structural change due to the onset of a pathology, like neurodegeneration or a lesion, and predict the resulting alterations in neural and clinical data.
An important application of EEG models is in the clinical domain. Psychiatric and neurological disorders impact a growing portion of the population, both as patients and caregivers, and with an enormous cost - both economic and humanitarian - to healthcare systems worldwide \citep{steel2014global, vigo2016estimating, feigin2019global}. One of the main obstacles in advancing patient care is the lack of individualized diagnosis, prognosis, and treatment planning \citep{wium2017personalized}. Computational models can be adapted to the individual by setting their parameters according to available data (i.e. either setting a parameter directly, if it is measurable, or looking for the parameter value which results in time series whose dynamics match recorded data). The adjusted parameter(s) can then be related to clinical markers, symptoms, and behavior, making it possible, for example, to discriminate between pathologies. Using models in this personalized manner could provide additional diagnostic features in the form of model parameters and model output, eventually assisting clinicians in diagnosis and treatment planning.
Another obstacle is a general lack of scientific knowledge of disease mechanisms, including the mechanisms by which therapies exert their effect. As an example, deep brain stimulation (DBS) is a highly effective treatment for advanced Parkinsonism, in which electric pulses are delivered directly to certain deep brain structures via permanently implanted electrodes. Yet, it is largely unknown how exactly the applied stimulation manages to suppress motor symptoms such as tremor \citep{chiken2016mechanism}. This is also because the way in which motor symptoms result from the degradation of dopaminergic neurons in the substantia nigra is not fully understood \citep{mcgregor2019circuit}. Besides animal models - which have their own ethical issues - \emph{in silico} models are an indispensable tool for understanding brain disorders. Combining data available from a patient or group of patients with knowledge and hypotheses about mechanisms, a model can be generated which can help test these hypotheses.
Last but not least, models are much cheaper than animal testing or clinical trials. While models will not replace these approaches - at least not in the foreseeable future - they could help to formulate more specific hypotheses and thus, lead to smaller-scale experiments.
Collecting invasive data is not generally possible in humans. EEG is an extremely versatile technology which allows non-invasive recording of neural activity in behaving humans. EEG is a cheap and portable technology, particularly compared to (f)MRI and MEG. Apart from these cost-efficiency considerations, EEG, like MEG, is a direct measure of the electromagnetic fields generated by the brain, and allows millisecond-precision recordings, thus giving access to rich aspects of brain function which can inform models in a way that e.g. fMRI cannot (see section~\ref{sec:eeg} for more details). In general, using different complementary sources of data to construct and validate a model will lead to better model predictions, as each recording technique has its own strengths and weaknesses, and a multimodal approach can balance them.
In our opinion, there are mostly two reasons why EEG has not been used more extensively in modeling studies, and particularly in a clinical context. First, there are numerous technical problems which make the processing and interpretation of EEG data challenging. EEG - like MEG - is measured on the scalp, and the problem of projecting this 2D-space into the 3D-brain space arises \citep{michel2019eeg}. While multiple solutions exist for this inverse problem, it is unclear which one is the best and under which circumstances \citep{hassan2014eeg, mahjoory2017consistency, hedrich2017comparison}. EEG data require extensive preprocessing, e.g. removal of artifacts due to movements, eye blinks, etc., but these steps are far from being standardized, and many options exist. The recently started EEG-BIDS effort \citep{pernet2019eeg} is a step in the direction of standardizing EEG data and should facilitate, alongside the much larger amount of publicly available data, studies that systematically evaluate the impact of preprocessing steps and compare source reconstruction algorithms. As the interest in EEG rises, the need to resolve these issues will trigger larger efforts that will benefit the entire community.
The second obstacle to a more routine usage of computational models in EEG research, which we hope to address in this review, is that such models usually require an understanding of the mathematics involved, if only to be able to choose the model that is useful for the desired application.
Moreover, neither variables nor parameters are always clearly related to quantities which can be measured in a clinical or experimental context, and more generally, models need to be set up in such a way that they meet existing clinical demands or research questions.
The contribution of this paper is threefold. First, this article summarizes computational approaches at different spatial scales in EEG, targeting non-expert readers. To the best of our knowledge, this paper represents the first review on this topic. Second, we will point out several ways in which computational models integrate EEG recordings, by using biologically relevant variables. Third, we discuss the clinical applications of computational models in EEG which have been developed. The field is expanding rapidly and contains promising advancements from both research and clinical standpoints. We believe that this overview will make the field accessible to a broad audience, and indicate the next steps required to push modeling of EEG forward.
\section{Electroencephalography}
\label{sec:eeg}
EEG is a non-invasive neuroimaging technique that measures the electrical activity of the brain \citep{biasiucci2019electroencephalography}. EEG recordings have been a driver of research and clinical applications in neuroscience and neurology for nearly a century. EEG relies on the placement of electrodes on the person's scalp, measuring the postsynaptic potentials of pyramidal neurons \citep{tivadar2019primer,da2013eeg}. EEG does not directly measure the action potentials of neurons, though there are some indications of high-frequency oscillations being linked to spiking activity \citep{telenczuk2011high}. The neurotransmitter release generated by action potentials, whether excitatory or inhibitory, results in local currents at the apical dendrites that in turn lead to current sources and sinks in the extracellular space around the dendritic arbor (i.e. postsynaptic potentials, see Figure \ref{fig:eeg_and_comp_model}, bottom right block). EEG is generated by the local field potential (LFP), a signal that reflects summed synaptic activity of local populations of neurons. In the neocortex, pyramidal neurons are generally organized perpendicularly to the cortical surface, with apical dendrites toward the pial surface and axons pointing inferiorly towards the grey-white matter border. This alignment leads to the electrical fields of many neurons being summed up to generate a signal that is measurable at the scalp \citep{tivadar2019primer}. Importantly, individual neurons of these populations need to be (nearly) synchronously active to be detectable by EEG.\\
\indent As mentioned above, the electrical activity of the brain is recorded by means of electrodes, made of conductive materials, placed at the scalp. The propagation of electrical fields takes place due to the conductive properties of brain and head tissues, a phenomenon known as volume conduction \citep{kajikawa2011local}. The electrodes are connected to an amplifier which boosts the signal. Due to the biophysical nature of what is measured, i.e. a voltage - the potential difference that drives charges from one site to the other - EEG records differential measurements between an electrode at a specific position on the scalp and a reference site. Common analyses in EEG are the study of local phenomena such as peaks at specific latencies or scalp sites (event-related potentials, ERPs), or the study of topography, i.e. the shape of the electric field at the scalp, which represents a global brain signature \citep{murray2008topographic}. EEG is known for its high temporal resolution. Its biggest pitfalls, on the other hand, are the low spatial resolution and signal-to-noise ratio. A clear and exhaustive walkthrough of these topics, as well as an overview of the strengths and pitfalls of using EEG, can be found in \citet{tivadar2019primer} and, for non-experts, in \citet{biasiucci2019electroencephalography}.\\
\indent Despite being a measurement of scalp activity, EEG can reveal the underlying neurophysiological mechanisms of the brain, which is what classifies it as a brain imaging tool. The estimation of the loci of the active sources generating the brain activity recorded at the scalp is called source reconstruction \citep{michel2004eeg}. However, these loci can belong to areas not necessarily below the considered electrode, a pitfall caused by volume conduction. Source reconstruction is a mathematically ill-posed inverse problem, as the solution is not unique. However, the addition of biophysical constraints to the inverse problem allows a solution to be retrieved, an approach which has been validated by means of intracranial recordings \citep{michel2012towards}. Having obtained the source activity, one can estimate the functional connectivity between the sources, i.e. the statistical dependencies between brain areas, assumed to indicate their interactions (see also table~\ref{tab:words}). This can then be complemented with neuroanatomical/structural connectivity (table~\ref{tab:words}), which estimates white matter connections between brain areas. \\
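By way of illustration, one common linear approach to the inverse problem is the regularized minimum-norm estimate. The sketch below uses a random matrix as a stand-in for the lead-field (gain) matrix $L$ of a head model; the dimensions, noise level, and regularization constant are all arbitrary placeholders, so this is a toy example rather than a description of any specific source-reconstruction package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 32 scalp sensors, 200 cortical sources
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))  # stand-in lead-field matrix

# Simulate a sparse ground-truth source vector and its noisy scalp projection
x_true = np.zeros(n_sources)
x_true[[10, 50]] = [1.0, -0.5]
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Regularized minimum-norm estimate:
#   x_hat = L^T (L L^T + lam * I)^{-1} y
lam = 1.0
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# The estimate reproduces the sensor data up to regularization error
print(np.linalg.norm(L @ x_hat - y) / np.linalg.norm(y))
```

The regularization term $\lambda I$ encodes the biophysical prior (here simply "small source norm"); richer priors, such as depth weighting or anatomical constraints, modify this term.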
\indent Computational models stand at the interface between the physiology of neurons at different scales (single neuron, population, macro-scale) and perceptual behavior. EEG would greatly benefit from the integration of \textit{in-silico} simulations, as computational models could complement both the neurophysiological and behavioral interpretations of EEG recordings. In the following sections, we will discuss different types of computational models, i.e. the different scales at which the neural activity is simulated, how such models can be integrated in the analysis of EEG signals, and how such models have been used in new clinical applications.
\section{Different types of computational models for EEG}
Computational models for EEG can be classified straightforwardly according to the scale of neurophysiological activity they integrate. In particular, we can distinguish three types of models (Figure~\ref{fig:model_scales} A):
\begin{enumerate}
\item microscopic models on the level of single cells and micro-circuits;
\item mesoscopic models on the level of neural masses and neural fields;
\item macroscopic models taking into account the connectome/white matter.
\end{enumerate}
The integration of computational models has greatly expanded the range of EEG applications, for both research and clinical purposes.
\begin{figure*}
\centering
\includegraphics[width = 1\linewidth]{Figures/Figure20200722.png}
\caption{Electrophysiology of neural activity and EEG at different scales.}
\label{fig:eeg_and_comp_model}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width = 1\linewidth]{Figures/schematic.png}
\caption{\textbf{A} - Illustration of computational models at the three scales treated here. \emph{Microscopic scale}: Simple example of two ($i=1,2$) leaky integrate-and-fire (LIF) neurons coupled together, a pyramidal neuron making an excitatory synapse to the interneuron, which in turn makes an inhibitory synapse to the pyramidal cell. This minimal circuit implements feedback inhibition, as the pyramidal cell, when activated, will excite the interneuron, which in turn will inhibit it. In the equation, $V_i$ is the membrane potential of each of the two cells $i = 1,2$; $V_L$ is the leak, or resting potential of the cells; $R$ is a constant corresponding to the membrane resistance; $I_i$ is the synaptic input that each cell receives from the other, and possibly background input; $\tau$ is the time constant determining how quickly $V_i$ decays. The model is simulated by setting a firing threshold; when it is reached, a spike is recorded and $V_i$ is reset to $V_L$. \emph{Mesoscopic scale}: The Wilson-Cowan model, in which an excitatory ($E$) and an inhibitory ($I$) population are coupled together. The mean field equations describe the mean activity of a large number of neurons. $f_E$ and $f_I$ are sigmoid transfer functions whose values indicate how many neurons in the population reach firing threshold, and $h_E$/$h_I$ are external inputs like background noise. $w_{EE}$ and $w_{II}$ are constants corresponding to the strength of self-excitation/inhibition, and $w_{EI}$ and $w_{IE}$ the strength of synaptic coupling between populations. \emph{Macroscopic scale}: In order to simulate long-range interactions between cortical and even subcortical areas, brain network models couple together many mesoscopic (``local") models using the connection weights defined in the empirical structural connectivity matrix $C$. 
The example equation defines the Kuramoto model, in which the phase $\theta_n$ of each node $n$ is used as a summary of its oscillatory activity around its natural frequency $\omega$. Each node's phase depends on the phases of connected nodes $p$ taking into account the time delay $\tau_{np}$, defined by the distances between nodes $n$ and $p$. $k$ is a global scaling parameter controlling the strength of internode connections. \textbf{B} - Illustration of a typical modeling approach at the macroscopic scale. Activity is simulated for each node using the defined macroscopic model, e.g. the Kuramoto model from panel A, right. The feature of interest is then computed from this activity. Shown here is the functional connectivity, e.g., phase locking values between nodes (table~\ref{tab:words}). This can then be compared to the empirical functional connectivity matrix computed in exactly the same way from experimental data, e.g. by correlating the entries of the matrix. The model fit can be determined depending on parameters of the model, e.g. the scaling parameter $k$ or the unit speed, here indicated with ``tau".}
\label{fig:model_scales}
\end{figure*}
\subsection{Computational models for EEG on the level of single cells and microcircuits}
The purpose of this level of modeling is to address the origin of the EEG signal by investigating the relationship between its features and electrophysiological mechanisms (Figure~\ref{fig:eeg_and_comp_model}, column A) with the tools of computational neuroscience. As detailed above, the EEG signal recorded from the scalp is the result of the spatial integration of the potential fluctuations in the extracellular medium. The EEG signal is mainly caused by the local field potential (LFP), while LFP is mainly driven by synaptic activity \citep{Logothetis3963, Buszaki2012} and volume conduction \citep{kajikawa2011local}. From the experimental standpoint, local network activity is usually measured as LFP (mainly \emph{in vivo} - and rarely \emph{in vitro} - animal data). By virtue of superposition, fluctuations in the LFP, and EEG more generally, are signatures of correlated neural activity \citep{pesaran2018investigating}. Cellular and microcircuit modeling are thus aimed at understanding the neurophysiological underpinnings of these correlations and the role played by cell types, connectivity and other properties in shaping the collective activity of neurons.
A primary goal of EEG modeling at the microscopic scale is on the one hand to predict the EEG signal generated by the summation of local dynamics on the microscopic scale and, on the other hand, to reconstruct the microscopic neural activity underlying the observed EEG. The first goal is far from being achieved, and the second is ill-posed due to the number of possible circuit and cellular combinations at the source level leading to similar EEGs. Implicit to these goals is to understand how features of neural circuits, such as the architecture, synapses and cell types, contribute to the generation of electromagnetic fields and their properties in a bottom-up fashion. Despite key insights, many shortcomings limit the interpretability of microcircuit models and the establishment of a one-to-one correspondence with EEG data. For instance, the contribution of spiking activity and correlated cellular fluctuations to LFPs and EEG power spectra remains unclear. Most microcircuit models characterize the net local network activity - used as a proxy for EEG - using the average firing rate or via the mean somatic membrane potential taken amongst populations of cells (of various types). Other studies have used a heuristic approach and approximated the EEG signal as a linear combination of somatic membrane potentials with random coefficients to account for both conduction effects and observational noise \citep{herrmann2016shaping, lefebvre2017stochastic}. As such, microcircuit model predictions and experimental data cannot always be compared directly.
Cellular multicompartmental models, which oftentimes take cellular morphology and spatial configuration into consideration, are based on the celebrated Hodgkin-Huxley equations, which describe the temporal evolution of ionic flux across neuronal membranes (see \citet{catterall2012hodgkin}, for a recent review). Such conductance-based models, which possess explicit and spatially distributed representations for cellular potentials, facilitate the prediction and/or comparison with LFP recordings. In contrast, single compartment models are difficult to interpret: while more abstract single compartment models such as Poisson neurons or integrate-and-fire models (Figure~\ref{fig:model_scales} A, left) are often used for their relative tractability and computational efficiency to construct more elaborate microcircuit models, they generally lack the neurophysiological richness to estimate EEG traces. Despite this, several computational advancements in recent years investigated how networks of integrate-and-fire neurons generate LFPs, clarifying the microscopic dynamics reflected in the EEG signal \citep{mazzoni2015PCB, mazzoni2008encoding, MAZZONI2010956, MAZZONI20112, deco_dynamic_2008, Buehlmann2008, Barbieri14589}. Such approaches have been used to understand the formation of correlated activity patterns in the hippocampus (e.g. oscillations), and their associated spectral fingerprints in the LFP \citep{chatzikalymniou2018deciphering}. Furthermore, a broad range of works modeled the origin of the local field potential and how it diffuses via volume conduction to generate the EEG signal \citep{hindriks2017linear, linden2011modeling, maki2019biophysical, skaar2019estimation, Gaute2013, telen2017, bedard2009}.
The key missing element for understanding the link between spiking network activity, LFP, and EEG signal, is the functional and spatial architecture of the networks. In particular, there are two open challenges. The first is to understand how the network connectivity affects the model dynamics that generate the LFP, and the second is to clarify how the spatial arrangement and morphology of neurons affect LFP diffusion \citep{mazzoni2015PCB}.
From this perspective, models of pyramidal cell dynamics and circuits should guide the interpretation of the EEG signal. For example, Destexhe and colleagues recently addressed the long-debated issue of the relative contribution of inhibitory and excitatory signals to the extracellular signal \citep{telenczuk2019modeling}, suggesting that the main source of the EEG signal may stem from inhibitory - rather than excitatory - inputs to pyramidal cells. A recent spiking network model \citep{saponati2019integrate} incorporates the modular architecture of the thalamus, in which subnetworks connect to different parts of the cortex \citep{barardi2016}. This model was used to show how the propagation of activity from the thalamus shapes gamma oscillations in the cortex.
Computational models at the level of single cells and microcircuits have also been instrumental in elucidating the mechanisms underlying multiple EEG phenomena. For instance, such models were used to better understand EEG rhythm changes observed before, during and after anesthesia, using spiking network models \citep{Kopell2008, Kopell2012} and/or cortical micro-circuit models \citep{hutt2018suppression}. Some of these models have been extended to account for the effect of thalamocortical dynamics on EEG oscillations \citep{ching2010thalamocortical, hutt2018suppression}, highlighting the key role played by the thalamus on shaping EEG dynamics. In addition, microcircuit models have been used to understand the EEG response of cortical networks to non-invasive brain stimulation (e.g. TACS, TMS), especially in regard to the interaction between endogenous EEG oscillatory activity and stimulation patterns \citep{herrmann2016shaping}, in which thalamic interactions were found to play an important role \citep{lefebvre2017stochastic}.
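As a toy illustration of this modeling scale, the two-cell LIF feedback-inhibition circuit of Figure~\ref{fig:model_scales}A (left) can be integrated with a forward-Euler scheme. All parameter values below are hypothetical and chosen only so that both cells are active, and synapses are reduced to instantaneous postsynaptic-potential jumps, a deliberate oversimplification of real synaptic dynamics:

```python
import numpy as np

dt, T = 0.1, 500.0                            # time step and duration (ms)
tau, V_L, V_th, R = 20.0, -65.0, -50.0, 1.0   # membrane params (ms, mV, mV, MOhm)

V = np.array([V_L, V_L])       # V[0]: pyramidal cell, V[1]: interneuron
I_bg = np.array([20.0, 0.0])   # constant background drive to the pyramidal cell only
w_EI, w_IE = 16.0, -5.0        # PSP jumps (mV): pyr->int (exc.), int->pyr (inh.)
spikes = [[], []]

for k in range(int(T / dt)):
    # Leaky integrate-and-fire update: tau dV/dt = -(V - V_L) + R * I
    V += dt / tau * (-(V - V_L) + R * I_bg)
    for i in (0, 1):
        if V[i] >= V_th:                  # threshold crossing: record spike, reset
            spikes[i].append(k * dt)
            V[i] = V_L
            # Feedback inhibition: a pyramidal spike excites the interneuron,
            # an interneuron spike inhibits the pyramidal cell
            V[1 - i] += w_EI if i == 0 else w_IE

print(len(spikes[0]), len(spikes[1]))
```

With this drive, the pyramidal cell fires regularly, each spike triggers the interneuron, and the resulting inhibitory kick delays the next pyramidal spike, which is the feedback-inhibition motif in its simplest form.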
\subsection{Computational models for EEG on the level of neural masses and neural fields} \label{subsec:neural_masses}
In this section we discuss models of population dynamics and how they could determine specific features of the electrical activity recorded by EEG (Figure~\ref{fig:eeg_and_comp_model}, column B). Mean field models describe the average activity of a large population of neurons by modeling how the population - as a whole - transforms its input currents into an average output firing rate (Figure~\ref{fig:model_scales} A, middle; for details on how networks of spiking neurons are reduced to mean field formulations, see \citep{wong2006recurrent,deco2013resting,coombes2019next,byrne2020next}). If we consider a population to be a small portion of the cortex containing pyramidal cells, the average activity modelled by the mean field can be understood as the LFP. Two types of models can be distinguished: neural mass models, where variables are a function of time only, and neural field models, where variables are functions of time and space. In this sense, neural field models can be seen as an extension of neural mass models, by taking into account the continuous shape of cortical tissue and the spatial distribution of neurons. These models allow for the description of local lateral inhibition as well as local axonal delays \citep{hutt2003pattern,atay2006neural}. An important application of neural field models is found in phenomenological models of visual hallucinations \citep{ermentrout1979mathematical,bressloff2001geometric}, and they have been used to model sleep and anaesthesia \citep{steyn1999theoretical,bojak2005modeling}. Future applications may also involve both neural mass and neural field models to describe different cortical structures, similarly to the multiscale approach proposed in \citet{cattani2016hybrid}.
The most popular model on this mesoscopic scale was first described by Wilson \& Cowan \citep{wilson1973mathematical,cowan2016wilson} (Figure~\ref{fig:model_scales} A, middle), and all mean field models can be seen as deriving from this form. It consists of an inhibitory and an excitatory population, where usually, for the purpose of EEG, it is assumed that the excitatory population models pyramidal neurons while the inhibitory population takes the role of interneurons. A variant of this model was described in Jansen \& Rit \citep{jansen1995electroencephalogram} and goes back to the ``lumped parameter" model by Lopes da Silva \citep{da1974model}. It uses three distinct populations, i.e. a population of excitatory interneurons in addition to the two populations already mentioned. The reason this model has been popular in EEG modeling is that it accounts for the observation that inhibitory and excitatory synapses tend to deliver inputs to different parts of the pyramidal cell body \citep{sotero2007realistically}. In addition, thalamocortical loops are thought to greatly contribute to the generation of oscillations observed in the cortex \citep{steriade1993thalamocortical}, and an important class of neural field models deals with these loops and their dependency on external stimuli \citep{robinson2001prediction, robinson_dynamics_2002}.
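To make this concrete, the Wilson-Cowan equations sketched in Figure~\ref{fig:model_scales}A (middle) can be integrated with a simple forward-Euler scheme. The parameter values below are placeholders loosely adapted from the classic Wilson-Cowan literature, not estimates for any particular EEG dataset; depending on such values the system settles to a fixed point or a limit cycle:

```python
import numpy as np

def S(x, a, theta):
    """Sigmoid transfer function (the f_E / f_I of the mean field equations)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

# Illustrative (hypothetical) coupling weights, sigmoid shapes, and inputs
w_EE, w_EI, w_IE, w_II = 16.0, 12.0, 15.0, 3.0
a_E, th_E, a_I, th_I = 1.3, 4.0, 2.0, 3.7
h_E, h_I = 1.25, 0.0          # external inputs to the E and I populations
tau_E, tau_I = 10.0, 10.0     # time constants (ms)

dt, steps = 0.1, 5000
E, I = 0.1, 0.05
E_trace = []
for _ in range(steps):
    dE = (-E + S(w_EE * E - w_EI * I + h_E, a_E, th_E)) / tau_E
    dI = (-I + S(w_IE * E - w_II * I + h_I, a_I, th_I)) / tau_I
    E, I = E + dt * dE, I + dt * dI
    E_trace.append(E)

E_trace = np.array(E_trace)
# Population activity is a fraction of active neurons, bounded in [0, 1]
print(E_trace.min(), E_trace.max())
```

In an EEG context, the excitatory trace `E_trace` (or a weighted combination of $E$ and $I$) would serve as the proxy for the local field potential of the modeled patch of cortex.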
The dynamical behavior of models can be manipulated to simulate different phenomena by varying their parameters. For example, the coupling parameters that determine the strength and speed of feedback-inhibition and feedforward-excitation can be varied (parameters $w_{IE}$ and $w_{EI}$ in Figure~\ref{fig:model_scales} A, middle), both within and between populations. Also it is possible to modify time constants (which govern the decay of activity in the local populations) or the strength of background noise. Changing these parameters \emph{in silico} can be interpreted biologically. For example, in \citet{bojak2005modeling}, the authors describe how a modified neural field model reproduces EEG spectra recorded during anaesthesia. The strength of inputs from the thalamus to the cortical neural populations was varied within a biologically plausible range.
Neural mass and neural field models are able to reproduce a range of dynamical behaviors that are observed in EEG, like oscillations in typical EEG frequency bands \citep{david2003neural}, phase-amplitude-coupling \citep{onslow2014canonical,sotero2016topology}, and evoked responses \citep{jansen1993neurophysiologically,jansen1995electroencephalogram,david2005modelling}, and they allow modeling of the EEG spectrum \citep{david2003neural,bojak2005modeling,moran2007neural}.
By coupling together more than one model/set of populations, one can start investigating the effect that delays have on neural activity \citep{jirsa1996field}. In fact, Jansen \& Rit \citep{jansen1995electroencephalogram} coupled together two neural mass models in order to simulate the effect of interactions between cortical columns on their activity.
Often, activity simulated by mean field models is assumed to be related to local field potentials \citep{liley2002spatially}. However, models are usually set up such that the local field potential derives directly from the mean firing rate. In this way, an important aspect that underlies the EEG signal is neglected, namely, the synchrony (coherence) of the firing within a neural population (as opposed to synchrony between populations, which can be studied using e.g. instantaneous phase differences \citep{breakspear2004novel}). Phenomena such as event-related synchronization and desynchronization result from a change in this synchrony rather than from a change in firing rate. Recent models \citep{byrne2017mean,byrne2020next} therefore propose a link between the firing rate and the Kuramoto order parameter, which is a measure of how dispersed firing is within a population.
\subsection{Macroscopic computational models for EEG taking into account the connectome}\label{subsec:macro}
In this section, we review existing literature on macroscopic computational models that take into account the connectome and discuss their potential to reveal the generative mechanisms of the macroscopic brain activity patterns detected with EEG and MEG (Figure~\ref{fig:eeg_and_comp_model}, column C). We will use the term ``brain network models'' (BNM) in order to clearly distinguish this framework from other approaches to whole-brain modeling \citep{breakspear2017dynamic}, e.g. using neural field models \citep{jirsa1996field,robinson1997propagation,coombes2007modeling} or expansions of the thalamocortical models discussed above \citep{robinson2001prediction,freyer2011biophysical}. We will also leave aside the large body of literature on dynamic causal modeling (DCM) \citep{kiebel2008dynamic,pinotsis2012dynamic}, as this deserves a more detailed review than the scope of this paper can provide.
\paragraph{Brain network models.} In recent years, the interest in the human connectome has experienced a boom, creating the prolific and successful field of ``connectomics''. In the framework of connectomics, the brain is conceptualized as a network made up of nodes and edges. Each node represents a brain region, and nodes are coupled together according to a weighted matrix representing the wiring structure of the brain (Figure~\ref{fig:model_scales} A, right). This so-called structural connectivity matrix (SC) is derived from white matter fiber bundles which connect distant brain regions \citep{behrens2003non,zhang2010noninvasive,hagmann2008mapping,sepulcre2010organization,wedeen2012geometric} and are measured using diffusion weighted magnetic resonance imaging (dMRI) (table~\ref{tab:words}). The set of all fiber bundles is called the connectome \citep{sporns2011human}. By coupling brain regions together according to the weights in the SC, the activity generated in each region depends also on the activity propagated from other regions along the connections given by the SC.
BNMs are used to study the role of structural connectivity in shaping brain activity patterns. Because this is a complex problem that involves the entire brain, it is important to find a balance between realism and reduction, so that useful predictions can be made.
In practice, a common simplification is to assume that all brain regions are largely identical in their dynamical properties \citep{passingham2002anatomical}. This \emph{reductionist} approach keeps the number of parameters at a manageable level and still allows one to investigate how collective phenomena emerge from the \emph{realistic} connectivity between nodes.
In other words, BNMs do not necessarily aim at maximizing the fit to the empirically recorded brain signals. Rather, the goal is to reproduce specific temporal, spatial or spectral features of the empirical data emerging at the macroscopic scale whose underlying mechanisms remain unclear (Figure~\ref{fig:model_scales} B).
\paragraph{Choice of local model.} In mathematical terms, brain activity is simulated according to a system of coupled differential equations. The activity of each node is described by a mean-field model, such as the ones described in section \ref{subsec:neural_masses}, and coupling between the mean field models is parametrized by the empirical SC (Figure~\ref{fig:eeg_and_comp_model} A, right).
Importantly, the type of mean-field model used at the local level must be selected according to the hypothesis being tested. For example, BNMs have proved to be a powerful tool to elucidate the non-linear link between the brain's structural wiring and the functional patterns of brain activity captured with resting-state functional magnetic resonance imaging (rsfMRI) \citep{deco_identification_2014, deco_resting_2013, honey_predicting_2009, deco2009key, cabral2011role}. However, oscillations in frequency ranges important for M/EEG (2-100 Hz) are often neglected in studies aiming at reproducing correlated fluctuations on the slow time scale of the fMRI signal. Thus, despite the insights gained by BNMs to understand rsfMRI signal dynamics, the same models do not necessarily serve to understand M/EEG signals and vice-versa \citep{cabral2017functional}.
In \citet{cabral2014exploring}, the local model employed includes a mechanism for the generation of collective oscillatory signals in order to address oscillatory components of M/EEG. To model brain-wide interactions between local nodes oscillating around a given natural frequency (in this case, 40 Hz, in the gamma frequency range), the Kuramoto model \citep{kuramoto2003chemical,yeung1999time} was extended to incorporate realistic brain connectivity (SC) and time delays determined by the lengths of the fibers in the SC (see also \citet{finger2016modeling}; Figure~\ref{fig:model_scales} A, right). This model shows how, for a specific range of parameters, groups of nodes (communities) can temporarily synchronize at community-specific lower frequencies, obeying universal rules that govern the behaviour of coupled oscillators with time delays. Thus, the model proposed a mechanism that explains how slow global rhythms in the alpha- and beta-range emerge from interactions of fast local (gamma) oscillations generated by neuronal networks.
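A delay-coupled Kuramoto model of this kind can be sketched as follows. This is a toy illustration, not a reimplementation of \citet{cabral2014exploring}: the coupling matrix, fiber lengths, conduction speed and all numerical values are placeholders of ours.

```python
import numpy as np

def kuramoto_delays(C, D, f=40.0, K=5.0, v=10.0, dt=1e-4, T=1.0, seed=0):
    """Euler integration of a Kuramoto model with distance-based delays.

    C : NxN coupling matrix (e.g. derived from the SC weights)
    D : NxN fiber-length matrix in mm; delays are D / v
    f : intrinsic node frequency in Hz (gamma range, as in the text)
    v : conduction speed in m/s (illustrative value)
    All parameter values are illustrative, not fitted to data.
    """
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    omega = 2 * np.pi * f
    # convert distances (mm) to delays in integration steps
    delays = np.round(D / (v * 1000.0) / dt).astype(int)
    n_hist = delays.max() + 1          # history needed for delayed terms
    n = int(round(T / dt))
    theta = np.zeros((n + n_hist, N))
    theta[:n_hist] = rng.uniform(0, 2 * np.pi, (n_hist, N))  # random history
    for t in range(n_hist, n_hist + n - 1):
        coupling = np.zeros(N)
        for i in range(N):
            # theta_j(t - tau_ij) for each source node j
            delayed = theta[t - delays[i], np.arange(N)]
            coupling[i] = np.sum(C[i] * np.sin(delayed - theta[t, i]))
        theta[t + 1] = theta[t] + dt * (omega + (K / N) * coupling)
    return theta[n_hist:]
```

In this formulation, the interplay between the global coupling $K$ and the delays is what allows communities of fast oscillators to synchronize at slower collective frequencies.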
In contrast, \citet{deco2019brain} used a mean field model \citep{wilson1973mathematical, brunel2001effects, deco2012ongoing, deco2014local}, which was tuned not to exhibit intrinsic oscillations. Because the brain could thus be considered as being in a noisy, low-activity state, the number of parameters was sufficiently reduced to investigate how activation patterns change over time on different time scales. Considering time scales ranging from those typical of M/EEG (tens to several hundred ms) to those typical of fMRI (1-3 seconds), the authors asked whether there is a time scale at which brain dynamics are particularly rich. They found that this was the case at a resolution of 200 ms, where both the number of co-activation patterns and the richness of their dynamics were maximal compared to other resolutions.
\paragraph{Emerging class of harmonics-based models.}
Although both the described BNMs as well as DCM (dynamic causal modeling) have a long history of success in modeling brain activity patterns, they are high-dimensional and usually require local oscillators governed by region-specific or spatially-varying model parameters. While this imbues such models with rich features capable of recreating complex behavior, they are challenging for some clinical applications where a small set of global features might be desired to assess the effect of disease on network activity. Therefore, some laboratories have recently focused on low-dimensional processes involving diffusion or random walks (table~\ref{tab:words}) on the structural graph (table~\ref{tab:words}) instead of mean-field models, providing a simpler means of simulating functional connectivity (FC). These simpler models were able to match or exceed the predictive power of complex neural mass models or DCMs in predicting empirical FC \citep{abdelnour2014network}. Higher-order walks on graphs have also been quite successful; typically, these methods involve a series expansion of the graph adjacency or Laplacian matrices \citep{Meier2016, Becker2018} (table~\ref{tab:words}). Not surprisingly, the diffusion and series expansion methods are closely related, and most of these approaches may be interpreted as special cases of each other, as demonstrated elegantly in recent studies \citep{robinson2016eigenmodes, deslauriers2020, tewarie2020}.
Whether using graph diffusion or series expansion, these models of spread naturally employ the so-called eigenmodes, or harmonics, of the graph adjacency or Laplacian matrix. Hence these methods were generalized to yield spectral graph models whereby, e.g., Laplacian harmonics were sufficient to reproduce empirical FC using only a few eigenmodes \citep{Atasoy2016, Abdelnour2018}. The Laplacian matrix in particular has a long history in graph modeling, and its eigenmodes form an orthonormal basis of the network and can thus represent arbitrary patterns on the network \citep{Stewart1999}. Such spectral graph models are computationally attractive due to their low dimensionality and more interpretable analytical solutions. The SC's Laplacian eigenmodes may be thought of as the substrate on which functional patterns of the brain are established via a process of network transmission \citep{Abdelnour2018, Atasoy2016, robinson2016eigenmodes, preti_decoupling_2019, glomb2020connectome}. These models were strikingly successful in replicating canonical functional networks, which are stable large-scale circuits made up of functionally distinct brain regions distributed across the cortex that were extracted by clustering a large fMRI dataset \citep{Yeo2011}.
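As a minimal illustration of this class of models, the sketch below builds the symmetric normalized Laplacian of an SC matrix, extracts its eigenmodes, and forms a diffusion-style FC prediction from them. It is a toy version in the spirit of the network-diffusion approach \citep{abdelnour2014network}, not an exact reimplementation; the function names and the free parameter are ours.

```python
import numpy as np

def laplacian_eigenmodes(SC):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}
    of a structural connectome, and its eigenmodes (harmonics) sorted
    by increasing eigenvalue."""
    deg = SC.sum(axis=1)
    d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))   # guard against isolated nodes
    L = np.eye(len(SC)) - (d[:, None] * SC * d[None, :])
    evals, evecs = np.linalg.eigh(L)            # L is symmetric
    return evals, evecs

def diffusion_fc(SC, beta_t=1.0):
    """Diffusion-style FC prediction built from the Laplacian eigenmodes:
    FC ~ exp(-beta*t*L). beta_t is a single free global parameter
    (illustrative, to be fitted to data in a real application)."""
    evals, U = laplacian_eigenmodes(SC)
    return U @ np.diag(np.exp(-beta_t * evals)) @ U.T
```

Note how low eigenmodes (small eigenvalues) decay most slowly under the exponential, which is why a few of them can dominate the predicted FC.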
While spectral graph models have demonstrated the ability to capture essential steady-state, stationary characteristics of real brain activity, they are limited to modeling passive spread without oscillatory behavior. Hence they may not suitably accommodate the larger repertoire of dynamically-varying microstates or rich power spectra at higher frequencies typically observed in EEG or MEG. Capturing this rich repertoire would require a full accounting of axonal propagation delays as well as local neural population dynamics within graph models, as previously advocated \citep{cabral2011role}. Band-specific MEG resting-state networks were successfully modeled with a combination of delayed neural mass models and eigenmodes of the structural network \citep{tewarie2019spatially}, suggesting that delayed interactions in the brain's network give rise to functional patterns constrained by structural eigenmodes. Recently, another effort was undertaken to characterise wide-band brain activity using graph harmonics in closed form (i.e. requiring no time-domain simulations), a rarity in the field of computational neuroscience \citep{Raj2020}. This ``spectral graph model'' of brain activity produced realistic power spectra that successfully predicted both the spatial and the temporal properties of empirical MEG recordings \citep{Raj2020}. Intriguingly, the model has very few (six) parameters, all of which are global and not dependent on local oscillations. This method therefore exemplifies the power of graph methods in reproducing a complex and rich repertoire of brain activity, while keeping to a parsimonious approach that does not require the kinds of high-dimensional and non-linear oscillatory models that have traditionally held sway.
\section{Applications of computational models of EEG}
Network oscillations, captured through EEG, are thought to be relevant for brain functions, such as cognition, memory, perception and consciousness \citep{ward_synchronous_2003}. Local brain regions produce oscillatory activity that propagates through the network to other brain regions. Alterations of oscillatory activity can be a sign of a brain disorder, and they are thought to be due to changes at the level of tissue and local/global connectivity. Due to its ability to capture such oscillatory activity, EEG is commonly used in research and clinical fields to study the neurophysiological bases of brain disorders, helping diagnosis and treatment \citep{iv_handbook_2014}. Physiologically and theoretically inspired computational models are able to reproduce EEG signals, offering a unique tool which complements experimental approaches. The application of computational models reveals disease mechanisms, helps test new clinical hypotheses, and allows exploring new surgical strategies \emph{in silico}. This section presents computational models of EEG that have been employed to study different states of consciousness - wakefulness, deep sleep, anesthesia, and disorders of consciousness - as well as diseases such as neuropsychiatric disorders and epilepsy.
\subsection{States of consciousness}\label{subsec:DoC}
A variety of models have been employed to investigate the brain dynamics in different physiological brain states, such as wakefulness and deep sleep (non-rapid eye movement, NREM) \citep{hill_modeling_2005,cona_thalamo-cortical_2014,robinson_dynamics_2002,roberts_corticothalamic_2012}, and pharmacological conditions, such as anesthesia \citep{ching_modeling_2014,ching_thalamocortical_2010,sheeba_neuronal_2008,hutt_effects_2010,liley_propofol_2010}. Other modeling approaches seek to elucidate the neurophysiological mechanisms underlying the presence or the absence of consciousness in wakefulness, NREM sleep and anesthesia, and they have crucial implications for the study of disorders of consciousness (DoC). DoC refer to a class of clinical conditions that may follow a severe brain injury (hypoxic/ischemic or traumatic brain injury) and include coma, vegetative state or unresponsive wakefulness syndrome (VS/UWS), and minimally conscious state (MCS). Coma has been defined as a state of unresponsiveness characterized by the absence of arousal (patients lie with their eyes closed) and, hence, of awareness. VS/UWS denotes a condition of wakefulness with reflex movements and without behavioural signs of awareness, while patients in MCS show unequivocal signs of interaction with the environment.
The current gold standard for clinical assessment of consciousness relies on the Coma Recovery Scale Revised \citep{giacino2004jfk}, which scores the ability of patients to behaviourally respond to sensory stimuli or commands. However, behavioral-based clinical diagnoses can lead to misclassification of MCS as VS/UWS because some patients may regain consciousness without recovering their ability to understand, move and communicate \citep{childs_accuracy_1993,andrews_misdiagnosis_1996,schnakers_diagnostic_2009}. A great effort has been devoted to develop advanced imaging and neurophysiological techniques for assessing covert consciousness and to improve diagnostic and prognostic accuracy \citep{edlow_early_2017,bodart_measures_2017,stender_diagnostic_2014,bruno_unresponsive_2011,owen_detecting_2008,stender_minimal_2016}. A novel neurophysiological approach to unravel the capacity of the brain to sustain consciousness exploits Transcranial Magnetic Stimulation (TMS) in combination with EEG \citep{rosanova_sleep-like_2018, casarotto_stratification_2016}. Specifically, the EEG response evoked by TMS in \emph{conscious} subjects exhibits complex patterns of activation resulting from preserved cortical interactions. In contrast, when \emph{unconscious} patients are stimulated with TMS, the evoked-response shows a local pattern of activation, similar to the one observed in healthy controls during NREM sleep and anesthesia.
The perturbational complexity index (PCI) \citep{casali_theoretically_2013} captures the dynamical complexity of TMS-evoked EEG potentials by means of the Lempel-Ziv compression algorithm, showing high values (low compressibility) for complex chains of activation typical of the awake state, and low values (high compressibility) for stereotypical patterns of activation typical of sleep and anesthesia. PCI has been validated on a benchmark population of 150 conscious and unconscious controls and tested on 81 severely brain-injured patients \citep{casarotto_stratification_2016}, showing an unprecedented high sensitivity (94.7\%) in discriminating conscious from unconscious states.
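The compression step at the heart of the PCI can be illustrated with a simple Lempel-Ziv-style phrase count on a binarized sequence. This toy sketch only captures the principle (stereotypical sequences compress well and score low, complex sequences score high); the actual PCI additionally involves source modeling, statistical binarization of the TMS-evoked responses, and a normalization step \citep{casali_theoretically_2013}.

```python
def lempel_ziv_complexity(s):
    """Count the distinct phrases in a simple left-to-right Lempel-Ziv
    parsing of a binary string: the current phrase is extended until it
    has not been seen before, then stored and reset. Fewer phrases mean
    higher compressibility (more stereotypical activity)."""
    phrases = set()
    phrase = ""
    for bit in s:
        phrase += bit
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    if phrase:               # a trailing, already-seen phrase adds nothing
        phrases.add(phrase)
    return len(phrases)
```

For instance, a constant sequence yields fewer phrases than an irregular one of the same length, mirroring the low PCI values seen in sleep and anesthesia versus the high values in wakefulness.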
A recently published modeling approach \citep{bensaid_coalia_2019} investigates the physiological mechanisms underlying the generation of complex or stereotypical TMS-evoked EEG responses. The proposed brain network model, named COALIA, describes local dynamics as neural masses \citep{wendling_epileptic_2002} that include populations of pyramidal neurons and three different types of interneurons. Each neural mass describes the local field activity of one of 66 cortical brain regions \citep{desikan_automated_2006}. Neural masses are connected with each other through long-range white matter fibers as described above (section~\ref{subsec:macro}). EEG signals are then simulated as neural mass activity. A systematic comparison of the complexity of simulated and real TMS-evoked EEG potentials through PCI suggested that the rhythmically patterned thalamocortical activity, typical of sleep, plays a key role in disrupting the complex patterns of activation evoked by TMS \citep{bensaid_coalia_2019}. Indeed, this rhythmical thalamocortical activity results in inhibition within the cortex that prevents information from propagating from one brain region to another, and thus disrupts functional integration, i.e. the ability of the brain to integrate information originating from different brain regions or groups of brain regions \citep{tononi_consciousness_1998}. Functional integration is necessary, along with functional segregation, i.e. the ability of brain regions or groups of brain regions to fulfill a certain function distinct from other areas of the brain \citep{lord_understanding_2017}, to generate complex time-varying patterns of coordinated cortical activity that are typical of the awake brain, and thought to sustain consciousness and cognitive functions \citep{casali_theoretically_2013,demertzi_human_2019}.
\subsection{Neuropsychiatric disorders}
Disruption of the balance between integration and segregation, which is fundamental for consciousness as mentioned in section~\ref{subsec:DoC}, has been linked
also to several neuropsychiatric disorders as a result of altered structural and functional connectivity \citep{bassett_human_2009,fornito_connectomics_2015, menon_large-scale_2011, deco_rethinking_2015}. Among neuropsychiatric disorders, as reviewed in \citet{lord_understanding_2017}, Alzheimer's disease is characterized by a decrease in long-range functional connectivity, directly affecting integration between functional modules of the brain \citep{stam_small-world_2007,sanz-arigita_loss_2010}. Schizophrenia has been linked to a ``subtle randomization'' of global functional connectivity, such that the so-called ``small-world'' character of the network is disrupted \citep{alexander-bloch_disrupted_2010,lynall_functional_2010}; a small-world network is characterized by short path lengths and strong modularity, network properties that are thought to promote information processing in the brain \citep{bassett2006small} (but see \citet{hilgetag2016brain}). Loss of integration has also been observed in schizophrenia \citep{damaraju_dynamic_2014}.
As explained in section~\ref{subsec:macro}, whole-brain computational models provide insights into how anatomical connections shape and constrain functional connectivity \citep{deco_identification_2014,deco_resting_2013,honey_predicting_2009}. Using BNMs, Cabral and colleagues have shown that the alterations reported in schizophrenia \citep{lynall_functional_2010} can be explained by a decrease in connectivity between brain areas, occurring either at the local or global level and encompassing either axonal or synaptic mechanisms, hence reinforcing the idea of schizophrenia being the behavioural consequence of a multitude of causes disrupting connectivity between brain areas \citep{cabral2012functional, cabral2012modeling}.
However, these models have focused on reproducing fMRI findings and are yet to be extended to address alterations in EEG spectral signatures in schizophrenia, namely increased EEG gamma-band power and decreased alpha power \citep{uhlhaas2013high}, which, following previous modeling insights \citep{cabral2014exploring}, may arise from reduced coupling between local gamma-band oscillators. Furthermore, BNMs can be employed to test how clinical interventions may help to re-establish healthy network properties such as balance between integration and segregation or small-worldness \citep{deco2018perturbation, deco2019awakening}.
\subsection{Epilepsy}\label{sec:epilepsy}
Models have been employed to study pathological alterations of network oscillatory activity related to many diseases, including epilepsy \citep{stefanescu_computational_2012, wendling_neurocomputational_2005, lytton_computer_2008, holt_computational_2013}. Epilepsy is a complex disease which impacts 1\% of the world population and is drug resistant in approximately 30\% of cases. Due to its intrinsic complexity, epilepsy research has strongly benefited, and will do so even more in the future, from an \emph{in silico} environment where hypotheses about brain mechanisms of epileptic seizures can be tested in order to guide strategies for surgical, pharmacological and electrical stimulation techniques.
Focal epilepsy is a prototypical example of a disease that involves both local tissue and network properties. Focal epilepsy occurs when seizures originate in one or multiple sites, so-called epileptogenic zones (EZ), before recruiting close and distal non-epileptogenic areas pertaining to the pathological network. Patients with a history of drug-resistant focal epilepsy are candidates for surgery which targets epileptogenic areas and/or critical nodes presumably involved in the epileptogenic network. Successful outcomes of these procedures critically rely on the ability of clinicians to precisely identify the EZ.
A promising modeling approach aims at studying focal epilepsy through a single-subject virtual brain \citep{proix_individual_2017, terry_seizure_2012, hutchings_predicting_2015, bansal_personalized_2018, soltesz_computational_2011}, bringing together the description of how seizures start and end (seizure onset and offset, respectively) at a local level (through neural mass models) \citep{robinson_dynamics_2002, wendling_epileptic_2002, lopes_da_silva_epilepsies_2003, breakspear_unifying_2006, jirsa_nature_2014} with individual brain connectivity derived from dMRI data. In this personalized approach, a patient's brain is virtually reconstructed, such that systematic testing of many surgical scenarios is possible. The individual virtual brain approach provides clinicians with additional information, helping them to identify locations which are responsible for starting or propagating the seizure and whose removal would therefore lead to the patient being seizure-free while avoiding functional side effects of removing brain regions and connections \citep{olmi_controlling_2019,proix_individual_2017}.
\begin{table*}
\caption{Some terminology used in this paper.}
\centering
\begin{tabular}{p{0.25\textwidth}|p{0.7\textwidth}}
Functional connectivity & (FC) Statistical dependencies between time series recorded from different brain regions or simulated at different nodes. Such dependencies are taken to indicate a functional relatedness of the brain regions/nodes. Many measures are available, for example correlation between amplitude envelopes, phase locking value, imaginary coherence, etc. See for example \citet{colclough2016reliable} for an overview. Note that FC does not establish a causal relationship \citep{friston2011functional}. \\
\\
Structural connectivity & (SC) Also known as neuroanatomical, anatomical, or white matter connectivity. Diffusion-weighted MRI (dMRI) is able to measure the diffusion of water through brain tissue \citep{basser1994mr}. As water diffuses preferably \emph{along} axons rather than across their walls, the orientation of large fiber bundles can be inferred from dMRI via algorithms known as fiber tracking \citep{jones2010challenges}. Note that SC does not take into account local anatomical connections made within the gray matter, and that fiber counts or densities do not allow drawing conclusions about the weight of a connection \citep{jeurissen2019diffusion}. Furthermore, fiber tracking algorithms are unable to resolve ambiguities introduced by crossing fibers, and it is difficult to track long fibers. \\
\\
Graph & A brain network model, which consists of nodes and edges (Figure~\ref{fig:model_scales} A, right), can be formalized as a graph \citep{bassett2017network, sporns2018graph}. This can be visualized using so-called adjacency matrices, which contain a weight for the edge between each pair of nodes (Figure~\ref{fig:model_scales} B). In this sense, both FC and SC matrices are adjacency matrices. This formalization opens up the analysis of brain networks to the tools of graph analysis. These tools allow for example the characterization of the graph/network using many different quantitative measures \citep{rubinov2010complex}, partitioning the graph/network into subnetworks or modules \citep{bassett2017network, donetti2004detecting}, or classifying nodes depending on their role in the network \citep{hagmann2008mapping}.\\
\\
Random walk & A random walk is a random process taking place on the graph in which a ``walker'' is initiated at a node and proceeds to another node following existing edges. Edges are selected by the walker with a probability proportional to their weight. Such a simulation can be used to approximate the dynamics of spreading activation, and enables the researcher to approximate for example the probability that activity will spread from node $i$ to node $j$ given the edges that exist between them, or the time that it will take for activity to spread from node $i$ to node $j$. \\
\\
Laplacian & The Laplace operator is ubiquitous in many physical systems and is used to describe standing waves, diffusion, heat dispersion, and many other phenomena. For a network, the Laplacian is obtained directly from the adjacency matrix (see above). An intuitive interpretation is that it describes the ``flow'' of activity along the edges. \\
\\
Eigenmodes & Many physical systems that consist of interacting elements can vibrate at certain frequencies, for example the string of a violin or the vibrating sheets of Chladni \citep{chladni1802akustik}. Each system has its own set of frequencies at which it can vibrate, determined for example by its shape. Mathematically, these eigenmodes are obtained via eigendecomposition of the Laplacian (see above). \\
\end{tabular}
\label{tab:words}
\end{table*}
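The random-walk process described in table~\ref{tab:words} can be sketched in a few lines. The functions below, including the Monte-Carlo estimate of a spreading probability, are illustrative helpers of our own, not standard library routines.

```python
import random

def random_walk(adj, start, n_steps, seed=0):
    """Weighted random walk on a graph given as an adjacency (weight)
    matrix: at each step the walker moves along an edge with probability
    proportional to its weight. Returns the sequence of visited nodes."""
    rng = random.Random(seed)
    node = start
    path = [node]
    for _ in range(n_steps):
        weights = adj[node]
        total = sum(weights)
        if total == 0:              # dead end: the walker stays put
            path.append(node)
            continue
        r = rng.random() * total    # weighted choice among outgoing edges
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r < acc:
                node = j
                break
        path.append(node)
    return path

def hitting_probability(adj, source, target, n_steps=20, n_walks=2000):
    """Monte-Carlo estimate of the probability that activity spreads from
    `source` to `target` within `n_steps` steps (illustrative use)."""
    hits = 0
    for k in range(n_walks):
        path = random_walk(adj, source, n_steps, seed=k)
        hits += target in path
    return hits / n_walks
```

Recording, instead, the first step at which the target appears in each walk would give the hitting-time estimate mentioned in the table.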
\section{Discussion}
In this paper we introduced different computational model types and their application to EEG, using a simple classification by spatial scale. Clearly not all models in the literature would necessarily belong to one category, but we believe this taxonomy can provide an entry point for non-experts. The main motivation behind this review was to identify obstacles that stand in the way of applying EEG modeling in both a research and clinical context, and to point out future directions that could remove these obstacles.
We have pointed out several recent efforts that have begun to more closely align models and experimental findings. Such integration of theory and experiment guarantees the use of biologically relevant measures within computational models of EEG, a crucial element if one wishes to use EEG models together with acquired data. For example, recent microcircuit models address the gap between theory and experiment by linking average firing rate - a measure of population activity preferred by the modeling community - and local field potential (LFP) - a measure that is generally thought to be a good proxy of the EEG signal \citep{saponati2019integrate}; recent mean field models explicitly include the contribution of neural synchronization to the LFP \citep{byrne2020next}, thereby integrating experimental knowledge about how the EEG signal is generated \citep{da2013eeg}; brain network models explore the contribution of empirically measured connectomes to macroscopic brain activity \citep{cabral2014exploring}; and applications of computational models already exist that use clinical measures to study e.g. coma \citep{bensaid_coalia_2019}, epilepsy \citep{olmi_controlling_2019,proix_individual_2017}, and neuropsychiatric disorders \citep{spiegler_selective_2016, kunze_transcranial_2016}. Furthermore, some modeling approaches focus on providing a simple model for large-scale dynamics, making results more interpretable both from a theoretical and clinical standpoint \citep{Abdelnour2018,Raj2020}.
We have reviewed computational models on three spatial scales (Figure~\ref{fig:model_scales} A). Each scale models qualitatively different biological processes which can be measured using distinct recording techniques \citep{varela2001brainweb} (Figure~\ref{fig:eeg_and_comp_model}). While EEG records activity at the macro-scale, mechanisms at each scale have an impact on the EEG signal and should therefore inform its interpretation. Ideally, scales should be combined to provide a complete picture of the neural mechanisms underlying EEG activity, something that has started to be explored, for example, in the simulation platform The Virtual Brain (TVB) \citep{sanz2013virtual, falcon2016new} or in studies showing the theoretical relationship between spiking networks and mean field formulations \citep{wong2006recurrent, deco2013resting, coombes2019next, byrne2020next}. Using models in this hierarchical manner is the only way to disentangle different contributions to the EEG signal without using invasive techniques, i.e., to distinguish neural signals \citep{michel2012towards, seeber2019subcortical}, volume conduction \citep{hindriks2017linear, linden2011modeling, maki2019biophysical, skaar2019estimation, Gaute2013, telen2017, bedard2009}, and noise. Furthermore, brain disorders can impact brain structure and function on any scale. Using models on multiple scales is necessary if one wishes to understand how pathological changes manifest in clinically measurable EEG signals. Such an understanding would also make it possible to use EEG to evaluate clinical interventions that affect the micro- or mesoscale (e.g., drugs).
Models can thus play an important role as a ``bridge'' that connects different fields. In translational applications, knowledge from basic research can be integrated into a model and the model can be designed in such a way that it is useful for a clinical application. An example of a successful ``bridge'' is the case of Brain Computer Interfaces. In order to realize multi-scale models, researchers working on animal recordings and researchers focusing on non-invasive recordings in humans have to come together with modeling experts that can incorporate findings from both fields in a model.
As an outlook, EEG modeling could play an important part in future endeavors towards precision medicine, or ``personal health". Individual brain models could be used to integrate different sources of data (EEG, fMRI, ECG, etc.) in a ``virtual patient". This could complement data-driven approaches like connectome fingerprinting, in which individuals are identified by their individual connectome \citep{finn2015functional, pallares2018extracting, abbas2020geff}. The ultimate goal would be to use this virtual patient to tailor diagnosis and therapies to the needs of the patient \citep{wium2017personalized}, reducing the economic burden and the patient discomfort of clinical analyses and hospitalisation.
\begin{acknowledgements}
The authors would like to thank Ana Hernando Ariza for the first image contained in this paper and for the creativity, time, and effort she put into transforming the important messages of this article into graphics. The authors are particularly grateful to Prof. J\'{e}r\'{e}mie Lefebvre and Prof. Micah M. Murray for their contributions, guidance and mentoring during the preparation of this article.
\end{acknowledgements}
\section*{Funding}
K.G. was funded by Schweizerischer Nationalfonds zur F\"{o}rderung der Wissenschaftlichen Forschung Award ID: 170873. J.C. was funded by the Portuguese Foundation for Science and Technology (FCT) CEECIND/03325/2017 and by projects UID/Multi/50026 (FCT), NORTE-01-0145-FEDER- 000013 and NORTE-01-0145-FEDER-000023. A.C. was supported by the Tiny Blue Dot Foundation. A.M. was supported by PREVIEW - Bando salute 2018 - Tuscany Regional Government. A.R. was supported by NIH grants R01NS092802, RF1AG062196 and R56AG064873. B.F. received financial support for this work by the Fondation Asile des aveugles (grant number 232933), a grantor advised by Carigest SA (number 232920).
\section*{Conflict of interest}
The authors declare that they have no conflict of interest.
\section*{Authors' contributions}
K.G. and B.F. conceived the review. J.C. and A.R. contributed to section 2.2. A.M. contributed to section 2.1. A.C. entirely conceived section 3. K.G. and B.F. outlined all remaining contents and all authors contributed to the final draft and review.
\bibliographystyle{spbasic}
\section{Introduction}
Unlike conventional cameras recording intensity frames at fixed time intervals, event cameras sample light based on scene dynamics by asynchronously measuring per-pixel brightness\footnote{Defined as the logarithm of the pixel intensity, i.e., $L\doteq \log(I)$.} changes at the time they occur \cite{gallego2019event}. This results in streams of sparse events encoding the polarity of the perceived changes. Because of this paradigm shift, event cameras offer several advantages over their frame-based counterparts, namely low power consumption, high dynamic range (HDR), low latency and high temporal resolution.
Despite the advantages, the novel output format of event cameras poses new challenges in terms of algorithm design. Unless working with spiking neural networks \cite{paredes2020unsupervised}, events are usually converted into intermediate representations that facilitate the extraction of information \cite{gallego2019event}. Among others, intensity frames are an example of a powerful representation since they allow the evaluation of the appearance of a visual scene, thus bridging the gap between event cameras and the existing frame-based computer vision literature \cite{rebecq2019events, rebecq2019high}. For this reason, there has been a significant research drive to develop new methods to reconstruct images from events with similar statistics to those captured by standard cameras.
\begin{figure}[!t]
\centering
\includegraphics[width=0.485
\textwidth]{figures/tikz/overview2.pdf}
\caption{Overview of the proposed framework. Our model is trained in a self-supervised fashion to perform optical flow estimation and image reconstruction from event data using the contrast maximization proxy loss and the event-based photometric constancy, respectively. Colored reverse arrows indicate error propagation for each loss.}
\label{fig:overview}
\end{figure}
Recent work has mostly approached this problem from a machine learning perspective. With their E2VID artificial neural network, Rebecq \textit{et al.}\ \cite{rebecq2019events, rebecq2019high} were the first to show that learning-based methods trained to maximize perceptual similarity via supervised learning outperform hand-crafted techniques by a large margin in terms of image quality. Later, Scheerlinck \textit{et al.}\ \cite{scheerlinck2020fast} achieved high speed inference with FireNet, a simplified model of E2VID. Despite the high levels of accuracy reported, these architectures were trained with large sets of synthetic data from event camera simulators \cite{rebecq2018esim}, which adds extra complexity to the reconstruction problem due to the \textit{simulator-to-reality gap}. In fact, Stoffregen, Scheerlinck \textit{et al.}\ \cite{stoffregen2020train} recently showed that if the statistics of the synthetic training datasets do not closely resemble those seen during inference, image quality degrades and the generalizability of these architectures remains limited.
In this work, we return to the theoretical basics of event cameras to relax the dependency of learning-based reconstruction methods on ground-truth and synthetic data. Specifically, we introduce the self-supervised learning (SSL) framework in \figrefede{fig:overview}, which consists of two artificial neural networks, \textit{FlowNet} and \textit{ReconNet}, for optical flow estimation and image reconstruction, respectively. FlowNet is trained through the contrast maximization proxy loss from Zhu \textit{et al.}\ \cite{zhu2019unsupervised}, while ReconNet makes use of the flow-intensity relation in the \textit{event-based photometric constancy} \cite{gallego2015event} to reconstruct the frames that best satisfy the input events and the estimated flow. Using our method, we retrain several networks from the image reconstruction \cite{rebecq2019events, scheerlinck2020fast} and optical flow \cite{zhu2018ev} literature. In terms of accuracy, results show that the reconstructed images are in line with those generated by most learning-based approaches despite the lack of ground-truth data during training. Additionally, we propose \textit{FireFlowNet}, a lightweight architecture for optical flow estimation that, inspired by \cite{scheerlinck2020fast}, achieves high-speed inference with only a minor drop in performance.
In summary, this paper contains \textit{two main contributions}. First, a novel SSL framework to train artificial neural networks to perform event-based image reconstruction that, with the aid of optical flow, does not require ground truth of any kind and can learn directly on real event data. Second, we introduce FireFlowNet: a novel, lightweight neural network architecture that performs fast optical flow estimation from events. We validate our self-supervised method and optical flow network through extensive quantitative and qualitative evaluations on multiple datasets.
\section{Related Work}
Early methods for image reconstruction from event data approached the problem through the \textit{photometric constancy}: each event provides one equation relating the intensity gradient and the optical flow \cite{gallego2015event}. Kim \textit{et al.}\ \cite{kim2008simultaneous} were the first in the field, developing an Extended Kalman Filter that, under rotational and static-scene assumptions, reconstructs a gradient image that is later transformed into the intensity space via Poisson integration. They later extended this approach to 6-degrees-of-freedom camera motion \cite{kim2016real}. Under the same assumptions, Cook \textit{et al.}\ \cite{cook2011interacting} simultaneously recovered intensity images, optical flow, and angular velocity through a bio-inspired network of interconnected, interacting maps. Bardow \textit{et al.}\ \cite{bardow2016simultaneous} developed a variational energy minimization framework to simultaneously estimate optical flow and intensity from sliding windows of events, relaxing the static-scene assumption for the first time.
Instead of relying on the photometric constancy, several approaches based on direct event integration have been proposed, which do not assume scene structure or motion dynamics. Reinbacher \textit{et al.}\ \cite{reinbacher2016real} formulated intensity reconstruction as an energy minimization problem via direct integration with periodic manifold regularization. Scheerlinck \textit{et al.}\ \cite{Scheerlinck18accv} achieved computationally efficient reconstruction by filtering events with a high-pass filter prior to integration.
Several machine learning approaches have also been proposed. Training generative adversarial networks with real grayscale frames was proposed by Wang \textit{et al.}\ \cite{wang2019event} and Pini \textit{et al.}\ \cite{pini2019learn}. However, Rebecq \textit{et al.}\ \cite{rebecq2019events, rebecq2019high} showed that training in a supervised fashion with a large synthetic dataset allowed for higher quality reconstructions with their \textit{E2VID} architecture. Focused on computational efficiency, Scheerlinck \textit{et al.}\ \cite{scheerlinck2020fast} managed to significantly reduce E2VID complexity with \textit{FireNet}, with only a minor drop in accuracy. Inspired by these works, Choi \textit{et al.}\ \cite{choi2020learning} and Wang \textit{et al.}\ \cite{wang2020eventsr} recently proposed hybrid approaches that incorporate super resolution aspects in the training process and architecture design to improve image quality. Lastly, Stoffregen, Scheerlinck \textit{et al.}\ \cite{stoffregen2020train} recently highlighted that, when training with ground truth, the statistics of the training dataset play a major role in the reconstruction quality. They showed that a slight change in the training statistics of E2VID leads to significant improvements across multiple datasets.
Our proposed SSL framework (see \figrefede{fig:overview}) is based on the event-based photometric constancy used by early reconstruction methods. Similarly to Bardow \textit{et al.}\ \cite{bardow2016simultaneous}, we simultaneously estimate intensity and optical flow from the input events. However, instead of relying on a joint optimization scheme, we achieve this via two independent neural networks that only share information during training. Further, we reconstruct intensity directly from the photometric constancy, instead of from an oversimplified model of the event camera. This approach allows us, for the first time, to relax the strong dependency of learning-based approaches on ground-truth and synthetic data.
\section{Method}\label{sec:method}
An event camera consists of an array of independent pixels that respond to changes in the brightness signal $L(t)$, and transmit these changes through streams of sparse and asynchronous events \cite{lichtsteiner2008128}. For an ideal camera, an event $\boldsymbol{e}_i=(\boldsymbol{x}_i,t_i,p_i)$ is triggered at pixel $\boldsymbol{x}_i=(x_i,y_i)^T$ and time $t_i$ whenever the brightness change since the last event at that pixel reaches a contrast sensitivity threshold $C$. Therefore, the brightness increment occurred in a time window $\Delta t_k$ is encoded in the event data via pixel-wise accumulation:
\begin{align}\label{eqn:deltaevents2}
\Delta L_k(\boldsymbol{x}) = \sum_{\boldsymbol{e}_i\in \Delta t_k} p_i C
\end{align}
where $C>0$, and the polarity $p_i\in\{+,-\}$ encodes the sign of the brightness change.
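As a concrete illustration (a minimal NumPy sketch, not the authors' implementation; all array names are ours), the pixel-wise accumulation above amounts to scattering signed polarities into an image:

```python
import numpy as np

def brightness_increment(xs, ys, ps, C, shape):
    """Accumulate events of one time window into the brightness-increment
    image Delta L(x) = sum_i p_i * C.  xs, ys: pixel coordinates of the
    events; ps: polarities in {+1, -1}; C: contrast sensitivity threshold;
    shape: (H, W) of the sensor."""
    dL = np.zeros(shape, dtype=np.float64)
    # np.add.at performs unbuffered addition, so repeated events at the
    # same pixel accumulate correctly.
    np.add.at(dL, (ys, xs), ps * C)
    return dL
```

Two events of opposite polarity at the same pixel cancel, reflecting that only the net brightness change over the window is encoded.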
As in \cite{gallego2015event}, under the assumptions of Lambertian surfaces, constant illumination and small $\Delta t$, we can linearize \eqnref{eqn:deltaevents2} to obtain the event-based photometric constancy:
\begin{equation}\label{eqn:deltamodel}
\Delta L_k(\boldsymbol{x})\approx -\nabla L_{k-1}(\boldsymbol{x})\cdot \boldsymbol{u}_k(\boldsymbol{x})\Delta t_k
\end{equation}
which encodes that events are caused by the spatial gradients of the brightness signal, $\nabla L=(\partial_x L, \partial_y L)^T$, moving with optical flow $\boldsymbol{u}=(u, v)^T$. The dot product conveys that no events are generated if the flow vector is parallel to an edge ($\boldsymbol{u}\bot\nabla L$), while they are generated at the highest rate if perpendicular to it ($\boldsymbol{u}\parallel\nabla L$). Thus, events are caused by the projection of the optical flow vector onto the $\nabla L$ direction.
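A toy numeric check of this dot-product geometry (illustrative values only, not from the paper):

```python
import numpy as np

# Flow parallel to an edge (perpendicular to the gradient) yields no
# brightness change (no events); flow across the edge yields the
# largest change, per Delta L ~ -grad(L) . u * dt.
grad_L = np.array([1.0, 0.0])           # vertical edge: gradient along x
u_along_edge = np.array([0.0, 2.0])     # motion parallel to the edge
u_across_edge = np.array([2.0, 0.0])    # motion perpendicular to the edge
dt = 0.01
dL_along = -grad_L @ u_along_edge * dt    # zero: no events generated
dL_across = -grad_L @ u_across_edge * dt  # negative: events generated
```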
\subsection{Overview}
Our goal is to learn, in an SSL fashion, to transform a continuous stream of events into a sequence of intensity images $\smash{\{\hat{I}_k\}}$. To achieve this, we propose the pipeline in \figrefede{fig:overview} in which two neural networks are jointly trained. On the one hand, \textit{FlowNet} is a convolutional network that learns to estimate optical flow by compensating for the motion blur in the input events. On the other hand, \textit{ReconNet} is a recurrent convolutional network that learns to perform image reconstruction through the event-based photometric constancy.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.875
\textwidth]{figures/tikz/contancy.pdf}
\caption{Brightness reconstruction via the event-based photometric constancy formulation proposed in this work. The most recent event-based optical flow estimate from FlowNet, $\hat{\boldsymbol{u}}_k$, is used to (i) warp the input events, (ii) warp the spatial gradients of the last reconstructed image $\hat{L}_{k-1}$, and (iii) form the dot product with the warped gradients. The predicted brightness increment image $\Delta \hat{L}_k^{*}$ is compared to that obtained with the deblurred (and averaged) input events, $\Delta L_k^{*}$, and the error is propagated backwards towards ReconNet to improve reconstruction accuracy.}
\label{fig:constancy}
\end{figure*}
\subsection{Input Event Representation}\label{sec:voxel}
As proposed in \cite{zhu2019unsupervised}, the input to both our networks is a voxel grid $E_k$ with $B$ temporal bins that gets populated with consecutive, non-overlapping partitions of the event stream $\boldsymbol{\varepsilon}_k\doteq\{\boldsymbol{e}_i\}_{i=0}^{N-1}$, each containing a fixed number of events, $N$. For each partition, every event (with index $i$) distributes its polarity $p_i$ to the two closest bins according to:
\begin{align}
E(\boldsymbol{x}_i, t_b) &= \sum_i p_i \kappa(t_b-t_{i}^{*}\left(B-1\right))\label{eqn:inputone}\\
\kappa(a) &= \max(0, 1-|a|)\\
t_{i}^{*} &= \frac{\left(t_i-t_{0}^{k}\right)}{\left(t_{N-1}^{k}-t_{0}^{k}\right)}\label{eqn:inputtwo}
\end{align}
where $b$ is the bin index, and $t_{i}^{*}\in\left[0,1\right]$ denotes the normalized event timestamp. This representation adaptively normalizes the temporal dimension of the input depending on the timestamps of each partition of events.
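The voxel-grid construction above can be sketched as follows (a minimal NumPy version under our own naming; each event spreads its polarity to the two nearest temporal bins with the bilinear kernel $\kappa(a)=\max(0,1-|a|)$):

```python
import numpy as np

def voxel_grid(xs, ys, ts, ps, B, shape):
    """Build a B-bin voxel grid from one partition of events.
    xs, ys: pixel coordinates; ts: timestamps; ps: polarities in {+1,-1};
    shape: (H, W).  Timestamps are normalized per partition to [0, 1]."""
    t_star = (ts - ts[0]) / (ts[-1] - ts[0])   # normalized timestamps
    E = np.zeros((B,) + shape)
    for b in range(B):
        # bilinear temporal kernel: weight of each event for bin b
        w = np.maximum(0.0, 1.0 - np.abs(b - t_star * (B - 1)))
        np.add.at(E[b], (ys, xs), ps * w)
    return E
```

An event with $t^{*}=0.25$ and $B=5$ lands entirely in bin 1, since $t^{*}(B-1)=1$; intermediate timestamps split their polarity between two adjacent bins.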
\subsection{Optical Flow via Contrast Maximization}\label{sec:flow}
We aim to learn to reconstruct $L$ through the photometric constancy in \eqnref{eqn:deltamodel}, which, besides the spatial and temporal derivatives of the brightness itself, also depends on the optical flow $\boldsymbol{u}$. One could use ground-truth optical flow to solve this ill-posed problem. However, due to the limited availability of event-camera datasets with accurate ground-truth data, we opt for training our FlowNet to perform flow estimation in a self-supervised manner, using the contrast maximization proxy loss for motion compensation \cite{gallego2018unifying}.
A partition of events is said to be blurry whenever there is a spatiotemporal misalignment among its events, i.e., events generated by the same portion of a moving edge are captured with different timestamps and pixel locations. The idea behind the motion compensation framework \cite{gallego2018unifying} is that accurate optical flow can be retrieved by finding the motion model of each event that best deblurs $\boldsymbol{\varepsilon}_k$. Knowing the per-pixel optical flow, the events can be propagated to a reference time $t_{\text{ref}}$ through:
\begin{align}\label{eqn:warp}
\boldsymbol{x}'_i = \boldsymbol{x}_i + (t_{\text{ref}} - t_i)\boldsymbol{u}(\boldsymbol{x}_i)
\end{align}
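In code, this propagation is one displacement per coordinate (a sketch assuming the flow is given as a dense per-pixel $(H, W, 2)$ array of $(u, v)$ displacements per unit time; names are illustrative):

```python
import numpy as np

def warp_events(xs, ys, ts, flow, t_ref):
    """Warp events to the reference time along per-pixel optical flow:
    x' = x + (t_ref - t_i) * u(x_i).  Returns sub-pixel coordinates."""
    u = flow[ys, xs, 0]   # flow sampled at each event's pixel
    v = flow[ys, xs, 1]
    xw = xs + (t_ref - ts) * u
    yw = ys + (t_ref - ts) * v
    return xw, yw
```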
In this work, we adopt the deblurring quality measure proposed by Mitrokhin \textit{et al.}\ \cite{mitrokhin2018event} and later refined by Zhu \textit{et al.}\ \cite{zhu2019unsupervised}: the per-pixel and per-polarity average timestamp of the resulting image of warped events (IWE), $H$. The lower this metric, the better the deblurring. As in \cite{zhu2019unsupervised}, we generate an image of the average (normalized) timestamp at each pixel for each polarity $p'$ via bilinear interpolation:
\begin{align}\label{eqn:flowloss}
\begin{aligned}
T_{p'}(\boldsymbol{x}{;}\boldsymbol{u} |t_{\text{ref}}^{*}) &= \frac{\sum_{j} \kappa(x - x'_{j})\kappa(y - y'_{j})t_{j}^{*}}{\sum_{j} \kappa(x - x'_{j})\kappa(y - y'_{j})+\epsilon}\\
j = \{i \mid p_{i}=&p'\}, \hspace{15pt}p'\in\{+,-\}, \hspace{15pt} \epsilon\approx 0
\end{aligned}
\end{align}
and minimize the sum of the squared images resulting from warping the events forward and backward to prevent scaling issues during backpropagation:
\begin{align}
\mathcal{L}_{\text{contrast}}(t_{\text{ref}}^{*}) &= \sum_{\boldsymbol{x}}T_{+}(\boldsymbol{x}{;}\boldsymbol{u} |t_{\text{ref}}^{*})^2+T_{-}(\boldsymbol{x}{;}\boldsymbol{u} |t_{\text{ref}}^{*})^2\\
\mathcal{L}_{\text{contrast}} &= \mathcal{L}_{\text{contrast}}(1) + \mathcal{L}_{\text{contrast}}(0)
\end{align}
The total loss used to train FlowNet is then given by:
\begin{equation}\label{eqn:totalflow}
\mathcal{L}_{\text{FlowNet}} = \mathcal{L}_{\text{contrast}} + \lambda_1 \mathcal{L}_{\text{smooth}}
\end{equation}
where $\mathcal{L}_{\text{smooth}}$ is a Charbonnier smoothness prior \cite{charbonnier1994two}, and $\lambda_1$ is a scalar balancing the effect of the two losses. Note that, since $\mathcal{L}_{\text{contrast}}$ does not propagate the error back to pixels without events, we mask FlowNet's output so that null optical flow vectors are returned at these pixel locations.
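For intuition, the per-polarity average-timestamp image $T_{p'}$ can be sketched as follows (a minimal NumPy version assuming events of a single polarity already warped to $t_{\text{ref}}$; the contrast objective then sums $T_{+}^2 + T_{-}^2$ over pixels; all names are ours, not the authors' code):

```python
import numpy as np

def avg_timestamp_image(xw, yw, t_star, H, W, eps=1e-9):
    """Per-pixel average normalized timestamp of warped events (one
    polarity), with bilinear kernels kappa(a) = max(0, 1 - |a|) applied
    to the sub-pixel warped coordinates (xw, yw)."""
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    x0 = np.floor(xw).astype(int)
    y0 = np.floor(yw).astype(int)
    for dx in (0, 1):          # two nearest pixels in x
        for dy in (0, 1):      # two nearest pixels in y
            xi, yi = x0 + dx, y0 + dy
            w = np.maximum(0, 1 - np.abs(xi - xw)) \
                * np.maximum(0, 1 - np.abs(yi - yw))
            ok = (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H)
            np.add.at(num, (yi[ok], xi[ok]), (w * t_star)[ok])
            np.add.at(den, (yi[ok], xi[ok]), w[ok])
    return num / (den + eps)
```

The better the flow deblurs the events, the earlier (lower) the average timestamps at the warped locations, which is why minimizing the squared image rewards motion-compensating flow.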
\subsection{Reconstruction via Photometric Constancy}\label{sec:recons}
We formulate the SSL reconstruction problem from an image registration perspective \cite{lucas1981iterative} via brightness increment images. Specifically, we propose to use the difference between the reference increment image $\Delta L$ (event integration, \eqnref{eqn:deltaevents2}) and the predicted $\Delta \hat{L}$ (photometric constancy, \eqnref{eqn:deltamodel}) to reconstruct the brightness signal that best explains the input events, assuming known error-free optical flow. This reconstructed brightness is denoted by $\hat{L}$. FlowNet predictions are used in the computation of $\Delta \hat{L}$, and as registration parameters to warp both increment images to a common temporal frame (indicated by the superscript $\text{}^*$). A schematic of the proposed formulation is shown in \figrefede{fig:constancy}.
To minimize motion blur in the reconstructed frames, instead of directly integrating the input events, we define the reference brightness increment $\Delta L^{*}$ via the per-pixel and per-polarity average number of warped events:
\begin{align}
\Delta L^{*}(\boldsymbol{x}{;}\boldsymbol{u})\doteq C\left(G_{+}(\boldsymbol{x}{;}\boldsymbol{u}|1)-G_{-}(\boldsymbol{x}{;}\boldsymbol{u}|1)\right)\\
\begin{aligned}
G_{p'}(\boldsymbol{x}{;}\boldsymbol{u}|t_{\text{ref}}^{*}) &= \frac{H_{p'}(\boldsymbol{x}{;}\boldsymbol{u}|t_{\text{ref}}^{*})}{P_{p'}(\boldsymbol{x}{;}\boldsymbol{u}|t_{\text{ref}}^{*}) + \epsilon}\\
\end{aligned}\hspace{25pt}
\end{align}
where $P$ is a two-channel image containing the number of pixel locations from which the IWE $H$ receives events in the event warping process (see \secrefede{sec:flow}). Therefore, $\Delta L^{*}$ is a deblurred representation of the contrast change encoded in the input events. An ablation study on the impact of event deblurring prior to event integration can be found in the supplementary material.
On the other hand, we adapt the event-based photometric constancy in \eqnref{eqn:deltamodel} and compute $\Delta \hat{L}$ by warping the spatial gradients of the last reconstructed image to the current time instance via spatial transformers \cite{jaderberg2015spatial}:
\begin{align}\label{eqn:pred}
\Delta \hat{L}^{*}(\boldsymbol{x}{;}\boldsymbol{u})\doteq -\mathcal{W}_{k-1}^{k}(\nabla \hat{L}_{k-1}(\boldsymbol{x}))\cdot \hat{\boldsymbol{u}}_k(\boldsymbol{x})
\end{align}
where $\mathcal{W}_{k-1}^{k}$ is the warping function of the optical flow $\hat{\boldsymbol{u}}_k$.
Following a maximum likelihood approach \cite{lichtsteiner2008128, gehrig2020eklt}, we define the photometric reconstruction loss as the squared $L_2$ norm of the difference of the warped brightness increments:
\begin{align}\label{eqn:reconloss}
\mathcal{L}_{\text{PE}} &= \norm{\Delta L^{*}(\boldsymbol{x}{;}\boldsymbol{u}) - \Delta \hat{L}^{*}(\boldsymbol{x}{;}\boldsymbol{u})}_{2}^{2}
\end{align}
where, besides $\hat{L}$, the contrast threshold $C$ is the only remaining unknown. To relax the dependency on this parameter, our ReconNet uses linear activation in its last layer instead of the frequently used sigmoid function \cite{rebecq2019high, scheerlinck2020fast}. The resulting unbounded brightness estimate is first transformed into the intensity space through $\hat{I}_k=\exp(\hat{L}_k)$, and then linearly normalized to get the final reconstruction $\smash{\hat{I}_k^f}$:
\begin{align}
\begin{aligned}
\hat{I}_k^f &= \frac{\hat{I}_k - m}{M - m}
\end{aligned}
\end{align}
where $m$ and $M$ are the $1\%$ and $99\%$ percentiles of $\hat{I}_k$, and $\hat{I}_k^f$ is clipped to the range $\left[0,1\right]$. This min/max normalization allows the use of any value of $C$ for training as long as the ratio of positive and negative contrast thresholds resembles that of the evaluation sequences. We assume that most event-camera datasets were recorded with $C_{+}/C_{-}\approx 1$, and set both thresholds to $1$.
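A minimal sketch of this output normalization (assuming a NumPy array holds the unbounded log-brightness estimate; not the authors' implementation):

```python
import numpy as np

def normalize_reconstruction(L_hat):
    """Map the unbounded brightness estimate to a displayable frame:
    exponentiate to intensity, rescale by the 1%/99% percentiles (m, M),
    and clip to [0, 1]."""
    I = np.exp(L_hat)                               # log-brightness -> intensity
    m, M = np.percentile(I, 1), np.percentile(I, 99)
    return np.clip((I - m) / (M - m), 0.0, 1.0)
```

Using robust percentiles rather than the raw min/max keeps a few outlier pixels from compressing the contrast of the whole frame.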
On its own, \eqnref{eqn:reconloss} is not sufficient for the reconstruction of temporally consistent images. Because of the dot product in \eqnref{eqn:pred}, the absence of input events can be ambiguously understood as a lack of apparent motion, a lack of spatial image gradients, or both. To address this issue, we introduce an explicit temporal consistency loss based on the frame-based formulation of the photometric constancy \cite{jason2016back}. In essence, we define the temporal loss as the photometric error between two successive reconstructed frames:
\begin{align}
\begin{aligned}
\mathcal{L}_{\text{TC}} = &\norm{\hat{L}_{k} - \mathcal{W}_{k-1}^{k}(\hat{L}_{k-1})}_{2}^{2}
\end{aligned}
\end{align}
The total loss used to train ReconNet is then given by:
\begin{align}
\mathcal{L}_{\text{ReconNet}} = \sum_{k=0}^{S} \mathcal{L}_{\text{PE}} + \lambda_2 \sum_{k=S_0}^{S} \mathcal{L}_{\text{TC}} + \lambda_3 \sum_{k=0}^{S} \mathcal{L}_{\text{TV}}
\end{align}
where $S$ denotes the number of steps we unroll the recurrent network for during training, $\mathcal{L}_{\text{TV}}$ is a smoothness total-variation constraint \cite{rudin1992nonlinear}, and $\lambda_2$ and $\lambda_3$ are scalars balancing the effect of the three losses.
\begin{figure}[!t]
\centering
\includegraphics[width=0.475
\textwidth]{figures/tikz/networks2.pdf}
\vspace{-32.5pt}
\begin{center}
\line(1,0){240}
\end{center}
\vspace{-1pt}
\includegraphics[width=0.475
\textwidth]{figures/tikz/networks.pdf}
\caption{Neural networks evaluated in this work.
}
\label{fig:nets}
\end{figure}
\subsection{Network Architectures}
We evaluate the two main trends in network design for event cameras when trained with our SSL framework. The evaluated architectures are shown in \figrefede{fig:nets}.
\noindent\textbf{FlowNet: FireFlowNet.} FireFlowNet is our proposed lightweight architecture for fast optical flow estimation. Inspired by FireNet \cite{scheerlinck2020fast}, the network consists of three encoder layers that perform single-strided convolutions, two residual blocks \cite{he2016deep}, and a final prediction layer that performs pointwise (i.e., $1\times 1$) convolutions with two output channels. All layers have 32 output channels and use $3\times 3$ kernels and ReLU activations, except for the final layer, which uses a tanh activation. A comparison of the key architectural differences between our FireFlowNet and the current state-of-the-art is shown in \tabrefede{tab:flownet}.
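As a sanity check, the parameter count of the architecture just described can be tallied by hand. The breakdown below assumes a 5-channel voxel-grid input ($B=5$) and a bias per output channel in every layer; both are our assumptions for illustration:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a k x k convolution with biases."""
    return c_in * c_out * k * k + c_out

total = (
    conv_params(5, 32, 3)         # encoder 1: voxel grid -> 32 channels
    + 2 * conv_params(32, 32, 3)  # encoders 2-3
    + 4 * conv_params(32, 32, 3)  # two residual blocks, 2 convs each
    + conv_params(32, 2, 1)       # 1x1 flow prediction layer, 2 outputs
)
print(total)  # 57026, i.e., ~57.03k parameters
```

Under these assumptions the tally reproduces the 57.03k figure reported for FireFlowNet in the table below.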
\noindent\textbf{FlowNet: EV-FlowNet \cite{zhu2018ev}.} The input voxel grid $\boldsymbol{E}_k$ is passed through four strided convolutional layers with output channels doubling after each layer, starting from $64$. The resulting activations are then passed through two residual blocks \cite{he2016deep} and four decoder layers that perform bilinear upsampling followed by convolution. After each decoder, there is a (concatenated) skip connection from the corresponding encoder, as well as another pointwise convolution to produce a lower-scale flow estimate, which is then concatenated with the activations of the previous decoder. The $\mathcal{L}_{\text{FlowNet}}$ loss (see \eqnref{eqn:totalflow}) is applied to each intermediate flow estimate via flow upsampling. All layers use $3\times 3$ convolutional kernels and ReLU activations, except for the flow prediction layers, which use tanh activations.
\begin{table}[!t]
\caption{Main architectural differences between our FireFlowNet and EV-FlowNet \cite{zhu2018ev}. FireFlowNet has $250\times$ fewer parameters, consuming only $0.41\%$ of the memory.}
\label{tab:flownet}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{lcc}
\thickhline
& EV-FlowNet \cite{zhu2018ev} & FireFlowNet (Ours)\\\hline
No. params. (k) & 14130.28 & 57.03 \\
Memory (MB) & 53.90 & 0.22 \\
Downsampling & Yes & No \\
\thickhline
\end{tabular}}
\end{table}
\begin{table*}[!t]
\caption{Quantitative evaluation of our FlowNet architectures on the MVSEC dataset \cite{zhu2018multivehicle}. For each sequence, we report the AEE (lower is better, $\downarrow$) in pixels and the percentage of points with endpoint error greater than 3 pixels, $\%_{\text{Outlier}}$ ($\downarrow$). Best in bold, runner up underlined.
}
\label{tab:flowevluation}
\centering
\resizebox{0.85\linewidth}{!}{%
{\renewcommand{\arraystretch}{1.1}
\begin{tabular}{lccccccccccc}
\thickhline
& \multicolumn{2}{c}{outdoor\_day1}&& \multicolumn{2}{c}{indoor\_flying1} && \multicolumn{2}{c}{indoor\_flying2} && \multicolumn{2}{c}{indoor\_flying3}\\\cline{2-3}\cline{5-6}\cline{8-9}\cline{11-12}
& AEE & $\%_{\text{Outlier}}$&& AEE & $\%_{\text{Outlier}}$&& AEE & $\%_{\text{Outlier}}$&& AEE & $\%_{\text{Outlier}}$\\\hline
EV-FlowNet$\text{}_{\text{GT-SIM}}$ \cite{stoffregen2020train} & 0.68 & 1.0 && \textbf{0.56} & \underline{1.0} && \textbf{0.66} & \textbf{1.0} && \textbf{0.59} & \textbf{1.0} \\
EV-FlowNet$\text{}_{\text{FW-MVSEC}}$ \cite{zhu2018ev} & \underline{0.49} & \underline{0.2}&& 1.03 & 2.2 && 1.72 & 15.1 && 1.53 & 11.9 \\
EV-FlowNet$\text{}_{\text{EW-MVSEC}}$ \cite{zhu2019unsupervised} & \textbf{0.32} & \textbf{0.0}&& \underline{0.58} & \textbf{0.0} && \underline{1.02} & \underline{4.0} && \underline{0.87} & \underline{3.0} \\
EV-FlowNet$\text{}_{\text{EW-DR}}$ (Ours) & 0.92 & 5.4&& 0.79 & 1.2 && 1.40 & 10.9 && 1.18 & 7.4 \\
FireFlowNet$\text{}_{\text{EW-DR}}$ (Ours) & 1.06 & 6.6 && 0.97 & 2.6 && 1.67 & 15.3 && 1.43 & 11.0 \\
\thickhline
\end{tabular}}}
\end{table*}
\noindent\textbf{ReconNet: FireNet \cite{scheerlinck2020fast}.} The same architecture as FireFlowNet, except that the second and third encoders are recurrent ConvGRU layers \cite{ballas2015delving}. As in \cite{scheerlinck2020fast}, each layer has 16 output channels, but we use a linear activation in the final layer.
\noindent\textbf{ReconNet: E2VID \cite{rebecq2019high}.} The input voxel grid $\boldsymbol{E}_k$ is passed through a convolutional head layer, three recurrent encoders performing strided convolution followed by ConvLSTM \cite{xingjian2015convolutional}, two residual blocks \cite{he2016deep}, three decoder layers that perform bilinear upsampling followed by convolution, and a final pointwise convolutional prediction layer. There are (element-wise sum) skip connections between symmetric encoder and decoder layers, and the number of output channels in the head layer is 32, doubling after each encoder. Head, encoder, and decoder layers use $5\times 5$ kernels, while the remaining layers use $3\times 3$. All layers use ReLU activations except for the final prediction layer, which uses a linear activation.
\section{Experiments}
We train our networks on the indoor forward-facing sequences from the UZH-FPV Drone Racing Dataset (DR) \cite{delmerico2019we}, which is characterized by a much wider distribution of optical flow vectors than other datasets, such as MVSEC \cite{zhu2018multivehicle}, the Event-Camera Dataset (ECD) \cite{mueggler2017event}, or the High Quality Frames (HQF) dataset \cite{stoffregen2020train}. Our training sequences consist of approximately 15 minutes of event data recorded with a racing quadrotor flying aggressive six-degree-of-freedom trajectories. We split these recordings and generate 440 $128\times 128$ (randomly cropped) sequences of 2 seconds each, and use them for training with $B=5$. We further augment this data using random horizontal, vertical, and polarity flips, as well as artificial pauses of the input event stream (i.e., forward passes with a null input voxel). For training, we fixed the number of input events per pixel to $0.3$.
\begin{table}[!t]
\caption{Computational cost evaluation of our FireFlowNet against EV-FlowNet \cite{zhu2018ev}. We report inference time on GPU and the floating point operations (FLOPs) per forward-pass at common sensor resolutions. We used a single NVIDIA GeForce GTX 1080 Ti GPU for all experiments.
}
\label{tab:flowmetrics}
\centering
\resizebox{\linewidth}{!}{%
{\renewcommand{\arraystretch}{1.1}
\begin{tabular}{lccccc}
\thickhline
& \multicolumn{2}{c}{GPU (ms)}&& \multicolumn{2}{c}{FLOPs (G)}\\\cline{2-3}\cline{5-6}
& EV-FlowNet & FireFlowNet && EV-FlowNet & FireFlowNet \\\hline
$240\times 180$ & 4.33 & \textbf{1.97}&& 8.91 & \textbf{2.47} \\
$346\times 260$ & 7.05 & \textbf{3.81}&& 18.60 & \textbf{5.14} \\
$640\times 480$ & 17.04 & \textbf{12.55}&& 61.47 & \textbf{17.59} \\
$1280\times 720$ & 49.32 & \textbf{34.24}&& 184.41 & \textbf{52.67} \\
\thickhline
\end{tabular}}}
\end{table}
Our framework is implemented in PyTorch\footnote{The project's code and additional qualitative results can be found at \href{http://mavlab.tudelft.nl/ssl_e2v/}{http://mavlab.tudelft.nl/ssl\_e2v/}.}. We use the Adam optimizer \cite{kingma2014adam} and a learning rate of $0.0001$ for both networks, and train with a batch size of 1 for 120 epochs. We empirically set the weights for each loss to $\{\lambda_1, \lambda_2, \lambda_3\} =\{1.0, 0.1, 0.05\}$, ReconNet's unrolling $S$ to 20 steps, and $S_0$ to $10$ steps.
\subsection{Optical Flow Evaluation}
To validate FireFlowNet as a lightweight alternative to the current state-of-the-art in event-based optical flow estimation, we evaluated both of our FlowNet architectures on the indoor\_flying and outdoor\_day sequences from the MVSEC dataset \cite{zhu2018multivehicle} with the ground-truth data provided by Zhu \textit{et al.}\ \cite{zhu2018ev}. Optical flow predictions were generated at each grayscale frame timestamp, and scaled to be the displacement between two successive frames.
Quantitative results are presented in \tabrefede{tab:flowevluation}. We use the average endpoint error (AEE) and the percentage of points with endpoint error greater than 3 pixels to compare our FlowNet architectures against three EV-FlowNet models from the literature: two of them trained with frame-warping (FW) \cite{zhu2018ev} and event-warping (EW) \cite{zhu2019unsupervised} SSL proxy losses on MVSEC \cite{zhu2018multivehicle}, and one trained with synthetic ground-truth data (GT) \cite{stoffregen2020train}. For our networks, the number of input events per pixel was set to $0.3$. Error metrics were only acquired over pixels with valid ground-truth data and at least one event; for comparison, we used the quantitative results reported in \cite{zhu2019unsupervised,stoffregen2020train}.
From \tabrefede{tab:flowevluation}, the first noticeable aspect is the accuracy gap between EV-FlowNet$\text{}_{\text{GT-SIM}}$ and the rest of the networks. Training with ground-truth dense optical flow entails a certain ability to resolve the aperture problem \cite{de2020neural} that most SSL approaches lack. Regarding the latter, our EV-FlowNet performs consistently better than EV-FlowNet$\text{}_{\text{FW-MVSEC}}$ in all sequences except for outdoor\_day1, but underperforms EV-FlowNet$\text{}_{\text{EW-MVSEC}}$ despite using the same architecture and training procedure. We believe this is mostly due to the different training datasets and the fact that we did not fine-tune the number of input events for this evaluation. Further, note that these literature architectures were trained on a very similar driving sequence from MVSEC, while our training data is much more diverse in terms of optical flow vectors \cite{delmerico2019we}.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.975
\textwidth]{figures/tikz/qualitative.pdf}
\caption{Qualitative comparison of our method with the state-of-the-art E2VID+ and FireNet+ architectures \cite{stoffregen2020train} on sequences from the ECD \cite{mueggler2017event} and HQF \cite{stoffregen2020train} datasets. Local histogram equalization is not used for this comparison.}
\label{fig:qualitativerecon}
\end{figure*}
Using our EV-FlowNet as reference, \tabrefede{tab:flowevluation} shows that the proposed FireFlowNet achieves comparable accuracy despite the significant reduction in model complexity, albeit with a slight performance drop. This drop is likely due to the narrow receptive field of the architecture, which entails limitations due to the aperture problem. Regarding computational cost, \tabrefede{tab:flowmetrics} shows that FireFlowNet runs ${\sim}1.3$-$2.2$ times faster than EV-FlowNet on GPU, requiring less than ${\sim}30\%$ of the FLOPs per forward pass.
\begin{table}[!t]
\caption{Quantitative evaluation of our FlowNet architectures on the ECD \cite{mueggler2017event} and HQF \cite{stoffregen2020train} datasets. For each dataset, we report the mean FWL \cite{stoffregen2020train} (higher is better, $\uparrow$). Best in bold, runner up underlined.
}
\label{tab:flow}
\centering
\resizebox{0.75\linewidth}{!}{%
{\renewcommand{\arraystretch}{1.1}
\begin{tabular}{lcc}
\thickhline
& \multicolumn{1}{c}{ECD$^*$}&\multicolumn{1}{c}{HQF}\\\hline
EV-FlowNet$_\text{FW-MVSEC}$ \cite{zhu2018ev} & 1.36 & 1.25\\
EV-FlowNet$_\text{GT-SIM}$ \cite{stoffregen2020train} & \textbf{1.51} & 1.39\\
EV-FlowNet$_\text{EW-DR}$ (Ours) & 1.31 & \underline{1.51}\\
FireFlowNet$_\text{EW-DR}$ (Ours) & \underline{1.39} & \textbf{1.58}\\
\thickhline
\multicolumn{3}{l}{\small $^*$Sequence cuts in the supplementary material.}
\end{tabular}}}
\end{table}
For completeness, we also evaluate our FlowNet architectures on the ECD \cite{mueggler2017event} and HQF \cite{stoffregen2020train} datasets via the Flow Warp Loss (FWL) \cite{stoffregen2020train}. This metric, which does not require ground-truth data, measures the sharpness of the IWE in relation to that of the original partition of events. Similarly to \cite{stoffregen2020train}, we set the number of input events to 50k for all sequences in this evaluation\footnote{Note that the formulation of the FWL metric is sensitive to the number of input events \cite{stoffregen2020train}.}. \tabrefede{tab:flow} shows that both our FlowNet architectures, which are specifically trained to perform event deblurring (see \secrefede{sec:flow}), are in line with or outperform the state-of-the-art EV-FlowNet trained with either frames \cite{zhu2018ev} or synthetic ground truth \cite{stoffregen2020train} according to this metric. More interestingly, FireFlowNet outperforms our EV-FlowNet in both datasets. A qualitative evaluation of our FlowNet architectures can be found in the supplementary material.
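To make the metric concrete, the FWL can be sketched as the variance ratio between the flow-warped IWE and the identically accumulated image of unwarped events. The sketch below uses nearest-pixel accumulation for brevity, whereas the original formulation \cite{stoffregen2020train} draws events bilinearly, so this is only an illustrative approximation:

```python
import numpy as np

def iwe(xs, ys, ts, flows, t_ref, shape):
    # Warp each event to t_ref along its per-event flow (px/s) and
    # accumulate an event-count image with nearest-pixel rounding.
    xw = np.round(xs + (t_ref - ts) * flows[:, 0]).astype(int)
    yw = np.round(ys + (t_ref - ts) * flows[:, 1]).astype(int)
    img = np.zeros(shape)
    inb = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
    np.add.at(img, (yw[inb], xw[inb]), 1.0)
    return img

def fwl(xs, ys, ts, flows, shape, t_ref=0.0):
    # FWL > 1 means warping with the predicted flow sharpens the event image.
    warped = iwe(xs, ys, ts, flows, t_ref, shape)
    unwarped = iwe(xs, ys, ts, np.zeros_like(flows), t_ref, shape)
    return float(warped.var() / unwarped.var())
```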
\begin{figure*}[!t]
\centering
\includegraphics[width=0.985
\textwidth]{figures/tikz/rebecq.pdf}
\caption{Qualitative results of our E2VID$_\text{E}$ on sequences from the High Speed and HDR Dataset \cite{rebecq2019high}.}
\label{fig:rebecq}
\end{figure*}
\subsection{Reconstruction Evaluation}
We evaluated the accuracy of our ReconNet architectures against the DAVIS240C \cite{brandli2014240} frames from the ECD \cite{mueggler2017event} and HQF \cite{stoffregen2020train} datasets, and compared their performance to state-of-the-art image reconstruction networks trained with ground-truth supervision: E2VID \cite{rebecq2019high}, FireNet \cite{scheerlinck2020fast}, E2VID+ \cite{stoffregen2020train}, and FireNet+ \cite{stoffregen2020train}. Super-resolution and adversarial methods are not considered in this comparison. We used the results and code provided by Stoffregen, Scheerlinck \textit{et al.}\ \cite{stoffregen2020train} for the quantitative and qualitative evaluations. The subscripts F and E indicate whether our networks were trained together with FireFlowNet or EV-FlowNet.
\begin{table}[!t]
\caption{Quantitative evaluation of our ReconNet architectures on the ECD \cite{mueggler2017event} and HQF \cite{stoffregen2020train} datasets. For each dataset, we report the mean MSE ($\downarrow$), SSIM \cite{wang2004image} ($\uparrow$) and LPIPS \cite{zhang2018unreasonable} ($\downarrow$). Best in bold; runner up underlined.
}
\label{tab:reconstruction}
\centering
\resizebox{\linewidth}{!}{%
{\renewcommand{\arraystretch}{1.1}
\begin{tabular}{lccccccc}
\thickhline
& \multicolumn{3}{c}{ECD$^*$}& &\multicolumn{3}{c}{HQF}\\\cline{2-4}\cline{6-8}
& MSE & SSIM & LPIPS && MSE & SSIM & LPIPS \\\hline
E2VID \cite{rebecq2019high} & 0.08 & 0.54 & 0.37 && 0.14 & 0.46 & 0.45 \\
FireNet \cite{scheerlinck2020fast} & 0.06 & \underline{0.57} & \underline{0.29} && 0.07 & \underline{0.48} & 0.42 \\
E2VID+ \cite{stoffregen2020train} & \textbf{0.04} & \textbf{0.60} & \textbf{0.27} && \textbf{0.03} & \textbf{0.57} & \textbf{0.26} \\
FireNet+ \cite{stoffregen2020train} & \underline{0.06} & 0.51 & 0.32 && \underline{0.05} & 0.47 & \underline{0.36} \\
E2VID$\text{}_{\text{F}}$ (Ours) & 0.07 & 0.52 & 0.38 && 0.07 & 0.44 & 0.47 \\
E2VID$\text{}_{\text{E}}$ (Ours) & 0.06 & 0.55 & 0.37 && 0.06 & 0.48 & 0.47 \\
FireNet$\text{}_{\text{F}}$ (Ours) & 0.06 & 0.52 & 0.38 && 0.06 & 0.46 & 0.47 \\
FireNet$\text{}_{\text{E}}$ (Ours) & 0.06 & 0.51 & 0.41 && 0.06 & 0.46 & 0.51 \\
\thickhline
\multicolumn{8}{l}{\small $^*$Sequence cuts in the supplementary material.}
\end{tabular}}}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=0.475
\textwidth]{figures/tikz/limits.pdf}
\caption{Common failure cases of our SSL framework, namely motion blur in case of suboptimal optical flow estimation (left), ghosting artifacts in large texture-less regions (center), and inconsistent reconstructions due to the lack of information about the initial brightness $L_0$ (right).}
\label{fig:limitations}
\end{figure}
For all methods, reconstructions were generated at each DAVIS frame timestamp. We first applied local histogram equalization \cite{yadav2014contrast} to both frames, and then computed mean squared error (MSE), structural similarity (SSIM) \cite{wang2004image}, and perceptual similarity (LPIPS) \cite{zhang2018unreasonable}. Only for this evaluation, instead of using a fixed number of input events, we used all the events \textit{in between DAVIS frames}, thus generating image sets with the same number of frames as the ground truth. Quantitative results are presented in \tabrefede{tab:reconstruction}, and are supported by qualitative results in \figrefedetwo{fig:qualitativerecon}{fig:rebecq}. Additional results can be found in the supplementary material.
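As a sketch of this protocol, a global histogram equalization step followed by the MSE computation could look as follows (we use global equalization as a simplified stand-in for the local variant \cite{yadav2014contrast} used in the paper; SSIM and LPIPS come from their reference implementations \cite{wang2004image,zhang2018unreasonable}):

```python
import numpy as np

def equalize(img):
    # Global histogram equalization of a uint8 image to [0, 1]
    # (a simplified, global stand-in for local histogram equalization).
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-12)
    return cdf[img]  # map each pixel through the normalized CDF

def mse(a, b):
    # Mean squared error between two equalized images.
    return float(np.mean((a - b) ** 2))
```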
Despite not using any ground-truth data during training, results show that our method is in line with the state-of-the-art in terms of reconstruction accuracy. Quantitatively, the error metrics of all our ReconNet architectures closely resemble the results obtained with the original E2VID and FireNet, but the accuracy gap increases when compared against these same networks trained with the refined data augmentation mechanisms from Stoffregen, Scheerlinck \textit{et al.}\ \cite{stoffregen2020train}. This gap is particularly notable in the LPIPS metric because these literature networks are specifically trained to maximize perceptual similarity to ground-truth frames. On the other hand, there is no major quantitative difference between the evaluated versions of ReconNet, regardless of their architecture or the accompanying flow network.
Qualitative results confirm that our method reconstructs high quality HDR images. However, it is possible to identify several differences with respect to the state-of-the-art. Firstly, our images appear less sharp. Our architectures learn to correlate the spatial gradients of the estimated brightness $\hat{L}$ to the averaged IWE (see \secrefede{sec:recons}). This entails that the reconstructed images are affected by the accuracy of the optical flow. Suboptimal optical flow estimations lead to imperfect event deblurring during training, which in turn is reflected in the reconstructed images as motion blur. Note that this blur diminishes when using an appropriate fixed number of input events for each sequence. Secondly, the dynamic range of the images differs. State-of-the-art methods learn to map the input events into bounded estimates of $\hat{L}$
via supervised learning. In contrast, our brightness estimate is unbounded, and normalization is used to encode this signal as bounded images. Besides this, there is no significant difference between the evaluated ReconNet versions, despite the limited smoothing capabilities of FireNet. Lastly, although our method does not suffer from the stretch marks mostly present in FireNet+ images, it is characterized by three common failure cases. As shown in \figrefede{fig:limitations}, these are: (i) the aforementioned motion blur, (ii) ``ghosting'' artifacts in large texture-less regions due to limited extrapolation of edge information, and (iii) inconsistent reconstructions due to the lack of information about the initial brightness $L_0$.
\section{Conclusion}
In this paper, we went back to the basics of event cameras and presented the first self-supervised learning-based approach to event-based image reconstruction, which does not rely on any ground-truth or synthetic data during training. Instead, our SSL method makes use of the flow-intensity relation exploited by early methods to reconstruct the frames that best satisfy the input events and the estimated optical flow. Results confirm that our method performs almost as well as the state-of-the-art, but that the reconstructed images are characterized by several artifacts that need to be addressed in future work. Additionally, we presented FireFlowNet: a fast, lightweight neural network that performs event-based optical flow estimation. We believe this work shows the exciting potential of SSL to advance the research on image reconstruction from event data, and it opens up avenues for further improvement by leveraging the great amount of unlabeled event data available. Moreover, we have proposed a general self-supervised learning framework that can be extended in multiple ways via more sophisticated reconstruction losses and other event-based optical flow algorithms.
{\small
\bibliographystyle{IEEEtran}
\input{cvpr.bbl}
}
\clearpage
\section{Introduction}
Following the seminal work of Barnsley \cite{MF1}, Navascu\'es \cite{M2,M1} studied the approximation of functions using their fractal counterparts, termed $\alpha$-fractal functions. In the same vein,
Verma and Massopust \cite{VM} recently introduced the notion of dimension preserving approximation. We use $\dim$ and $Gr(f)$ respectively to represent the fractal dimension and the graph of a function $f$. \par
Various notions of fractal dimension are available, but we consider only those suitable for this article, namely the Hausdorff dimension, the box dimension, and the packing dimension, defined for nonempty subsets of $\R^n$, $n\in \N$, and denoted by $\dim_H,~\dim_B$ and $\dim_P$ respectively.
For details on these fractal dimensions, the reader is referred to, for instance, \cite{Fal,PM1}.
The following relations hold between them (see \cite{Fal}):
\[
\dim_H F \leq \underline{\dim}_B F \leq \overline{\dim}_B F
\]
and
\[
\dim_H F \leq \dim_P F \leq \overline{\dim}_B F.
\]
\par
The class of all real-valued continuous functions on $ \rectangle:=I \times J$ is denoted by $ \cC\big(\rectangle\big)$, where $I=[a,b ]$ and $J=[c,d] .$
For a bivariate function $f$, we denote the derivative of $(k,l)$-th order by $D^{(k,l)}f$, that is, $D^{(k,l)}f:= \dfrac{\partial^{k+l}f}{\partial x^k \partial y^l}$. Let
$$\mathcal{C}^{m,n}(\rectangle)= \{f: \rectangle \to \mathbb{R}; ~ D^{(k,l)}f \in \cC\big(\rectangle\big),~~ \forall~~ 0\le k \le m, ~0\le l\le n \}.$$
If $D^{(m,n)}f(\boldsymbol{x}) \ge 0, ~\forall~ \boldsymbol{x} \in \rectangle,$ then we say that the function $f$ is $(m,n)$-convex.
Let $g\in \cC\big(\rectangle\big)$ be such that $ \dim (Gr(g)) > 2$; we refer to \cite{Shen} for the existence of such functions. The function $f:\rectangle \to \mathbb{R}$ defined by $f(x,y) :=\int\limits_{a}^{x}\int\limits_{c}^{y} g(t,s)dt ds$ satisfies the following:
\[
\dim (Gr(f)) =2\quad\text{and}\quad\dim Gr(D^{(1,1)}f) =\dim (Gr(g)) > 2,
\]
where $\dim$ denotes a fractal dimension.
Recall that the tensor product Bernstein polynomial on $\rectangle$ is defined as:
$$B_{m,n}(f)(x,y)= \sum_{i=0}^m \sum_{j=0}^n f\Big(a+\frac{i(b-a)}{m}, c+\frac{j(d-c)}{n}\Big) {m \choose i} {n \choose j} \frac{(x-a)^i (b-x)^{m-i}}{(b-a)^m} \frac{(y-c)^j (d-y)^{n-j}}{(d-c)^n}.$$
If we approximate a function $f \in \mathcal{C}^{k,l}(\rectangle)$ by $B_{m,n}(f)$, then (see \cite{Gal} for several properties of Bernstein polynomials) the following hold:
\begin{itemize}
\item $B_{m,n}(f) \to f$ uniformly on $ \rectangle.$
\item $\Big(D^{(k,l)}(B_{m,n}(f))\Big) \to D^{(k,l)}f$ uniformly on $ \rectangle.$
\item Since $B_{m,n}(f)$ and $D^{(k,l)}(B_{m,n}(f))$ are polynomials, we have $\dim\Big(Gr\big(D^{(k,l)}(B_{m,n}(f))\big)\Big)=\dim(Gr(B_{m,n}(f)))=\dim(Gr(f))=2.$
\end{itemize}
From the above items, one may conclude that the approximation by Bernstein polynomials maintains the smoothness of a function but not (necessarily) the dimensions of its partial derivatives.
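For illustration, the tensor-product Bernstein polynomial above can be evaluated directly (a straightforward implementation with the basis normalized by $(b-a)^m$ and $(d-c)^n$; the second check relies on the classical identity $B_m(t^2)(x)=x^2+x(1-x)/m$ on $[0,1]$):

```python
from math import comb

def bernstein_2d(f, m, n, x, y, a=0.0, b=1.0, c=0.0, d=1.0):
    # Tensor-product Bernstein polynomial B_{m,n}(f)(x, y) on [a,b] x [c,d].
    total = 0.0
    for i in range(m + 1):
        bx = comb(m, i) * (x - a)**i * (b - x)**(m - i) / (b - a)**m
        for j in range(n + 1):
            by = comb(n, j) * (y - c)**j * (d - y)**(n - j) / (d - c)**n
            total += f(a + i * (b - a) / m, c + j * (d - c) / n) * bx * by
    return total
```

Increasing $m$ and $n$ drives $B_{m,n}(f)$ uniformly to $f$, in line with the first item above.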
The present paper explores the approximation perspective relative to fractal dimension of a function and its partial derivatives.
The paper is structured as follows. In Section 1, we give a brief introduction and some preliminaries needed for the paper. In Section 2, we prove some results regarding dimension preserving approximation. In Section 3, we define some multi-valued mappings with the help of bivariate $\alpha$-fractal functions, and establish some of their properties.
\section{Dimension preserving approximation of bivariate functions}
Firstly, we mention the following result required for our paper:
\begin{lemma}[\cite{VM}, Lemma $3.1$]\label{lipdim}
Let $A \subset \mathbb{R}^m $ and $f,g:A \rightarrow \mathbb{R}^n$ be continuous functions. Then,
\[
\dim_H (Gr(f+g)) = \dim_H (Gr(g))\quad\text{and}\quad\dim_P (Gr(f+g)) = \dim_P (Gr(g))
\]
provided that $f$ is a Lipschitz function.
\end{lemma}
\begin{remark}
Note that the above lemma is also true for box dimensions.
\end{remark}
Let us denote the class of $Y$-valued Lipschitz functions on $X$ by $\mathcal{L}ip (X,Y),$ where $(X,d_X)$ is a compact metric space and $(Y,\|.\|_Y)$ is a normed linear space. Note that this space is a dense subset of $\mathcal{C}(X,Y)$ with respect to the supremum norm.
In view of the Lipschitz invariance property of dimension, one may conclude that the upcoming theorem holds for all the aforementioned dimensions.
\begin{theorem}\label{densethm}
Let $\dim(X) \leq \beta \leq \dim(X)+\dim(Y)$. Then the set $\mathcal{S}_{\beta}:=\{f\in \mathcal{C}(X,Y): \dim (Gr(f)) = \beta\}$ is dense in $\mathcal{C}(X,Y).$
\end{theorem}
\begin{proof}
Let $f\in \mathcal{C}(X,Y)$ and $\epsilon>0.$ Using the density of $\mathcal{L}ip(X,Y)$ in $\mathcal{C}(X,Y)$, there exists $g$ in $\mathcal{L}ip (X,Y)$ such that $$ \|f-g\|_{\infty,Y} < \frac{\epsilon}{2}.$$ Further, we consider a non-vanishing function $h \in \mathcal{S}_{\beta}.$ Let $h_*= g +\frac{\epsilon}{2\|h\|_{\infty,Y}}h,$ which immediately gives $$\|g-h_*\|_{\infty,Y} \le \frac{\epsilon}{2}.$$ This together with Lemma \ref{lipdim} implies that $\dim(Gr(h_*))=\dim(Gr(h))=\beta.$ Hence, we have $h_* \in \mathcal{S}_{\beta}$ and $$ \|f-h_*\|_{\infty,Y} \le \|f-g\|_{\infty,Y} + \|g-h_*\|_{\infty,Y} < \epsilon .$$ Thus, the proof of the theorem is complete.
\end{proof}
To the best of our knowledge, the univariate version of the next theorem is well known; however, we could not find a proof in the bivariate setting. Hence, we provide a detailed proof.
\begin{theorem}\label{Rudinthm}
Let $\big(f_k\big)$ be a sequence of differentiable functions on $\rectangle$. Assume that for some $(x_0,y_0) \in \rectangle,$ the sequences $\big(f_{k}(x_{0},.)\big)$ and $\big(f_{k}(.,y_0)\big)$ converge uniformly on $ [c, d]$ and $[a,b]$ respectively. If $(D^{(1,1)}f _k)$ converges uniformly on $\rectangle,$ then $\big(f_k\big)$ converges uniformly on $\rectangle$ to a function $f$, and
$$D^{(1,1)}f(\boldsymbol{x})=\lim_{k \to \infty }D^{(1,1)} f_k(\boldsymbol{x}),$$ for every $\boldsymbol{x} \in \rectangle.$
\end{theorem}
\begin{proof}
Let $\epsilon>0$. Since $(D^{(1,1)}f _k)$ converges uniformly, there exists $N_1 \in \mathbb{N}$ such that
$$|D^{(1,1)} f_{k}(\boldsymbol{x})-D^{(1,1)} f_{m}(\boldsymbol{x})| < \frac{\epsilon}{4(b-a)(d-c)}, ~~\forall~\boldsymbol{x} \in \rectangle, ~k,m \ge N_1.$$
By the mean-value theorem, see, for instance, \cite[Theorem $9.40$]{Rudin}, we have
\begin{equation}\label{visha2}
\begin{aligned}
&\big|f_{k}(x+h,y+w)-f_{m}(x+h, y+w)-f_{k}(x+h,y)+f_{m}(x+h, y)-f_{k}(x,y+w)+f_{m}(x, y+w)\\ & +f_{k}(x,y)-f_{m}(x, y)\big| \\= ~ & hw~ \big|D^{(1,1)}(f_k-f_{m})(t,s)\big| \\ \leq ~ & hw \max_{(t,s)\in \rectangle}\big|D^{(1,1)}f_k(t,s)- D^{(1,1)}f_{m}(t,s)\big| \\ \leq ~ & \frac{\epsilon}{4(b-a)(d-c)}hw \\ \leq ~ & \frac{\epsilon}{4},
\end{aligned}
\end{equation}
for all increments $h,w>0$ with $(x+h,y+w) \in \rectangle$ and some intermediate point $(t,s)$ given by the mean-value theorem; the last step uses $hw \le (b-a)(d-c)$.
By the hypothesis for $(x_{0},y_{0}) \in \rectangle,$
one can choose $ N_0 ~(>N_1)~\in \mathbb{N}$ such that
$$|f_k(x_{0},y)-f_{m}(x_{0},y)|< \frac{\epsilon}{4} ~~\forall~ k,m\geq N_0$$
and
$$|f_k(x,y_{0})-f_{m}(x,y_{0})| < \frac{\epsilon}{4} ~~ \forall~ k,m\geq N_0.$$
Now, using the above estimates and (\ref{visha2}), we have
\begin{equation*}
\begin{aligned}
|f_k(x,y)-f_{m}(x,y)| \leq& \frac{\epsilon}{4}+|f_k(x,y_{0})-f_{m}(x,y_{0})|+|f_k(x_{0},y)-f_{m}(x_{0},y)|\\&+|f_k(x_{0},y_{0})-f_{m}(x_{0},y_{0})| \\ < & \frac{\epsilon}{4}+\frac{\epsilon}{4}+\frac{\epsilon}{4}+\frac{\epsilon}{4}\\ = & \epsilon,
\end{aligned}
\end{equation*}
for every $(x,y) \in \rectangle$ and $k,m\geq N_0.$ This immediately confirms the uniform convergence of $(f_k).$ The remaining part follows by routine calculations and is hence omitted.
\end{proof}
\begin{lemma}\label{newlem2}
Let $f:I \rightarrow \mathbb{R} $ be a Lipschitz map and $g:J \rightarrow \mathbb{R}$ be a continuous function. If $h: \rectangle \to \mathbb{R}$ is defined by $$h(x,y)=f(x)+g(y),$$ then $$\dim_H (Gr(h)) = \dim_H (Gr(g))+1.$$
\end{lemma}
\begin{proof}
The proof follows by defining a bi-Lipschitz mapping from $Gr(h)$ to the set $\{(x,y,g(y)):x \in I,~ y \in J \}.$
\end{proof}
Here, let us recall some dimensional results for univariate functions. Mauldin and Williams \cite{RD} considered the following class of functions:
$$W_{b}(x):=\sum_{n=-\infty}^{\infty}b^{-\alpha n}[\phi(b^{n}x+\theta_{n})-\phi({\theta_{n}})],$$ where the $\theta_{n}$ are arbitrary real numbers, $\phi$ is a periodic function with period one, $ b > 1,$ and $0<\alpha<1.$ They showed that for a large enough $b$ there exists a constant $C>0$ such that $\dim_H (Gr(W_{b}))$ is bounded below by $2-\alpha-(C/\ln b).$
Further, significant progress in the dimension theory of functions was contributed by Shen \cite{Shen} for the following class of functions:
$$f^{\phi}_{\lambda,b}(x):=\sum_{n=0}^{\infty}\lambda^{n} \phi(b^{n}x)$$
where $b\geq 2$ and $\phi$ is a real-valued, $\mathbb{Z}$-periodic, non-constant, $C^{2}$-function defined on $\mathbb{R}$. He proved that there exists a constant $K_{0}$ depending on $\phi$ and $b$ such that if $1< \lambda b <K_{0}$ then $$\dim_H (Gr(f^{\phi}_{\lambda,b}))= 2+ \frac{\log\lambda}{\log b}.$$
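Since $0<\lambda<1$, the series is dominated by the geometric series $\sum_n \lambda^{n}$ and hence converges uniformly; a truncated numerical sketch (with the illustrative choice $\phi(x)=\cos(2\pi x)$, which is $\mathbb{Z}$-periodic, non-constant and $C^{2}$) is:

```python
import numpy as np

def f_phi(x, lam, b, n_terms=60):
    # Truncated Shen-type series with phi(x) = cos(2*pi*x);
    # the tail after n_terms is bounded by lam**n_terms / (1 - lam).
    n = np.arange(n_terms, dtype=np.float64)
    return float(np.sum(lam**n * np.cos(2.0 * np.pi * b**n * x)))

lam, b = 0.5, 3.0                        # lam * b = 1.5 > 1
dim_pred = 2 + np.log(lam) / np.log(b)   # Shen's Hausdorff-dimension formula
```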
For $f \in \mathcal{C}^{1,1}(\rectangle),$ we get $\dim(Gr(f))=2.$ However, no conclusion can be drawn for the dimensions of its partial derivatives. This is evident from the following example: let $W:I \to \mathbb{R}$ be a Weierstrass-type nowhere differentiable continuous function as in \cite{Shen} with $1 \le \dim( Gr(W))\le 2$. Now, we define $h:\rectangle \to \mathbb{R}$ by $$h(x,y)= W(x)+y.$$ Here, by Lemma \ref{newlem2}, we obtain $2 \le \dim( Gr(h))=\dim( Gr(W))+1 \le 3.$ Then for the function $f$ defined by $$f(x,y):=\int\limits_{a}^{x} \int\limits_{c}^{y}h(t,s)dtds,$$ we have $\dim (Gr(f)) =2$ and $2 \le \dim (Gr(D^{(1,1)}f)) =\dim (Gr(h))\le 3.$
\begin{theorem}\label{mainthm}
Let $f \in \mathcal{C}^{1,1}(\rectangle)$ such that $\dim (Gr(D^{(1,1)}f)) =\beta$ for some $2 \le \beta \le 3.$ Then we have a sequence $(f_k)$ in $\mathcal{C}^{1,1}(\rectangle)$ such that $\dim (Gr(D^{(1,1)} f_k)) =\beta$ and $f_k \to f$ uniformly on $\rectangle.$
\end{theorem}
\begin{proof}
In view of Theorem \ref{densethm}, there exists a sequence $(g_k)$ in $ \mathcal{C}(\rectangle)$ such that $\dim (Gr(g_k)) =\beta$ and $g_k \to D^{(1,1)}f$ uniformly on $\rectangle.$ Further, let us consider a function $f_k: \rectangle \to \mathbb{R}$ defined by $$f_k(x,y):=\int\limits_{a}^{x}\int\limits_{c}^{y} g_k(t,s)dtds.$$ Then $D^{(1,1)}f_k=g_k$ and $(D^{(1,1)}f_k) \to D^{(1,1)}f$ uniformly. Next, note that $f_k(a,y)=0$ for all $y \in J$ and $f_k(x,c)=0$ for all $x \in I$, so the convergence hypothesis of Theorem \ref{Rudinthm} holds trivially at $(a,c)$. Now, Theorem \ref{Rudinthm} completes the proof.
\end{proof}
\begin{theorem}
Let $f \in \mathcal{C}(\rectangle)$ with $f(\boldsymbol{x}) \ge 0 ~\forall~\boldsymbol{x} \in \rectangle.$ Then, for a given $\epsilon >0,$ there exists $g \in \mathcal{S}_{\beta}$ satisfying the following: $$g(\boldsymbol{x}) \ge 0 ~\forall~\boldsymbol{x} \in \rectangle ~\text{and}~ \|f-g\|_{\infty} < \epsilon.$$
\end{theorem}
\begin{proof}
Let $\epsilon >0.$ Theorem \ref{densethm} yields an element $h \in \mathcal{S}_{\beta}$ such that $$ \|f-h\|_{\infty} < \frac{\epsilon}{2}.$$ We define $$g(\boldsymbol{x}):=h(\boldsymbol{x})+ \frac{\epsilon}{2}, ~\forall~ \boldsymbol{x} \in \rectangle.$$ Then, by Lemma \ref{lipdim}, $g \in \mathcal{S}_{\beta},$ and by routine calculations, we get $$g(\boldsymbol{x})=h(\boldsymbol{x})-f(\boldsymbol{x})+f(\boldsymbol{x})+\frac{\epsilon}{2} \ge -\|f-h\|_{\infty} +f(\boldsymbol{x})+ \frac{\epsilon}{2}> f(\boldsymbol{x}) \ge 0.$$
Furthermore, one has $$\|f- g\|_{\infty} \le \|f- h\|_{\infty} +\|h- g\|_{\infty} < \epsilon,$$ hence the proof.
\end{proof}
\begin{theorem}
Let $f:\rectangle \to \mathbb{R}$ be an $(m,n)$-convex function such that $f(a,y)=f(x,c)=0, ~\forall~x \in I, ~y \in J.$ Then for $\epsilon >0,$ there exists an $(m,n)$-convex function $g$ such that $D^{(m,n)}g \in \mathcal{S}_{\beta}$ and $\|f-g\|_{\infty} < \epsilon.$
\end{theorem}
\begin{proof}
Let $\epsilon >0.$ Since $f$ is $(m,n)$-convex, $D^{(m,n)}f \ge 0$ on $\rectangle$; hence, by the nonnegativity-preserving theorem above, there exists a nonnegative $h \in \mathcal{S}_{\beta}$ such that $\|D^{(m,n)}f-h\|_{\infty} < \frac{\epsilon}{(b-a)^m(d-c)^n}.$ The nonnegativity of $h$ ensures that the function $g$ constructed below is indeed $(m,n)$-convex, since $D^{(m,n)}g=h \ge 0.$
By choosing $$g(x,y):= \int_{a}^{x} \int_c^y \dots \int_{a}^{x_{m-1}} \int_{c}^{y_{n-1}} h(x_m,y_n)dx_{m}dy_{n}\dots dx_1 dy_1,$$ we have
\[
\|f-g\|_{\infty} = \sup_{(x,y) \in \rectangle} \Big| f(x,y)- \int_{a}^{x} \int_c^y \dots \int_{a}^{x_{m-1}} \int_{c}^{y_{n-1}} h(x_m,y_n)dx_{m}dy_{n}\dots dx_1 dy_1\Big| < \epsilon,
\]
proving the assertion.
\end{proof}
\begin{theorem}\label{BSOSA}
Let $f \in \mathcal{C}(\rectangle).$ Then, for $\epsilon >0$ there exists $g \in \mathcal{S}_{\beta}$ such that $$g(\boldsymbol{x}) \le f(\boldsymbol{x})~ \forall ~\boldsymbol{x} \in \rectangle~ \text{and}~ \|f-g\|_{\infty} < \epsilon.$$
\end{theorem}
\begin{proof}
Since $f \in \mathcal{C}(\rectangle)$ and $\epsilon >0$,
Theorem \ref{densethm} generates a member $h \in \mathcal{S}_{\beta}$ such that $$\|f-h\|_{\infty} < \frac{\epsilon}{2}.$$
Choose $g(\boldsymbol{x}) :=h(\boldsymbol{x})- \frac{\epsilon}{2}, ~~~\forall~~\boldsymbol{x} \in \rectangle.$ Then, by Lemma \ref{lipdim}, $g \in \mathcal{S}_{\beta},$ and $$g(\boldsymbol{x})=h(\boldsymbol{x})-f(\boldsymbol{x})+f(\boldsymbol{x})-\frac{\epsilon}{2} \le \|f-h\|_{\infty} +f(\boldsymbol{x})- \frac{\epsilon}{2} < f(\boldsymbol{x}).$$ Furthermore, $$\|f- g\|_{\infty} \le \|f- h\|_{\infty} +\|h- g\|_{\infty} < \epsilon,$$
establishing the proof.
\end{proof}
Now, we aim to show the existence of a best one-sided approximation. Let $\beta \in [2, 3],$ and define
$$\mathcal{C}_{\beta}(\rectangle) := \{ f \in \mathcal{C}(\rectangle) : \overline{\dim}_B (Gr(f)) \le \beta\}.$$ In view of \cite[Proposition $3.4$]{Fraser}, recall that $\mathcal{C}_{\beta}(\rectangle)$ is a normed linear space.
Let $\{g_1,g_2,\dots,g_n\}$ be a linearly independent subset of $\mathcal{C}_{\beta}(\rectangle).$ Further, for a bounded below and Lebesgue integrable function $f: \rectangle \rightarrow \mathbb{R}$, we define $$\mathcal{Y}_{n}^{\beta}(f):= \Big\{h \in span\{g_1,g_2, \dots, g_n\}: h (\boldsymbol{x}) \le f(\boldsymbol{x}) ~\forall ~\boldsymbol{x} \in \rectangle \Big\}.$$ Theorem \ref{BSOSA} guarantees the nonemptiness of $\mathcal{Y}_{n}^{\beta}(f).$
A function $h_f \in \mathcal{Y}_{n}^{\beta}(f)$ is said to be a best one-sided approximation from below to $f$ on $\rectangle$ if $$ \int_{\rectangle} h_f(\boldsymbol{x})~ d\boldsymbol{x} = \sup \Big\{\int_{\rectangle } h(\boldsymbol{x})~ d\boldsymbol{x}: h \in \mathcal{Y}_{n}^{\beta}(f) \Big\}.$$
In a similar way, we define best one-sided approximations from above. We state the next theorem for one-sided approximation from below; a similar result can be proved for one-sided approximation from above, see, for instance, \cite{Devore,VV2}.
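In the simplest instance $n=1$ with $g_1\equiv 1$, the constraint $h\le f$ forces $h=c$ with $c\le \inf f$, and maximizing $\int_{\rectangle}h$ gives $c=\inf f$; a grid-based numerical sketch (our own toy example) is:

```python
import numpy as np

# Best one-sided approximation from below by constants (n = 1, g_1 = 1):
# maximizing the integral of h = c * g_1 subject to h <= f forces c = min f.
xs = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(xs, xs, indexing="ij")
f = X**2 + Y**2 + 1.0          # sample function on the unit square

c_best = f.min()               # the optimal constant lying below f
integral = c_best * 1.0        # the domain [0, 1]^2 has area 1
```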
\begin{theorem}
For a bounded below and integrable function $f: \rectangle \rightarrow \mathbb{R}$, there exists a best one-sided approximant from below to $f$ on $\rectangle$ in $\mathcal{Y}_{n}^{\beta}(f)$.
\end{theorem}
\begin{proof}
Let $(h_m)$ be a sequence in $\mathcal{Y}_{n}^{\beta}(f)$ such that
\begin{equation}\label{osidedeqn}
\int_{\rectangle} h_m(\boldsymbol{x})~ d\boldsymbol{x} \to A ~~~\text{as}~~m \to \infty,
\end{equation}
where $A = \sup \Big\{\int_{\rectangle } h(\boldsymbol{x})~ d\boldsymbol{x}: h \in \mathcal{Y}_{n}^{\beta}(f) \Big\}.$
With an appropriate constant $M_*> 0,$ we have
\begin{equation*}
\begin{aligned}
\int_{\rectangle} |h_m(\boldsymbol{x})|~ d\boldsymbol{x} \le& \int_{\rectangle} \Big|h_m(\boldsymbol{x})- \frac{A}{(b-a)(d-c)}\Big|~ d\boldsymbol{x}\\ & + \int_{\rectangle} \frac{A}{(b-a)(d-c)}~ d\boldsymbol{x} \le M_* ,
\end{aligned}
\end{equation*}
where $I=[a,b]$ and $J=[c,d].$
Since $\mathcal{Y}_{n}^{\beta}(f)$ is a subset of a finite-dimensional linear space, the closed ball of radius $M_*$ in $\mathcal{Y}_{n}^{\beta}(f)$ is compact. Therefore, there exist a subsequence $ (h_{m_k})$ and a function $h \in span\{g_1,g_2,\dots,g_n\}$ such that the sequence $(h_{m_k})$ converges to $h$ in $\mathcal{L}^1(\rectangle).$ Recall the basic functional analysis fact that all norms on a finite-dimensional linear space are equivalent. Hence, from the finite-dimensionality of $span\{g_1,g_2,\dots,g_n\}$, it follows that the sequence $(h_{m_k})$ also converges to $h$ uniformly.
Further, since $ h_m(\boldsymbol{x}) \le f(\boldsymbol{x}), ~\forall ~\boldsymbol{x} \in \rectangle,$ and $h_{m_k} \to h$ uniformly, we get $h(\boldsymbol{x}) \le f(\boldsymbol{x}), ~~ \forall~ \boldsymbol{x} \in \rectangle.$ Thus, $h \in \mathcal{Y}_{n}^{\beta}(f).$ Now, by (\ref{osidedeqn}), we have $$\int_{\rectangle}h(\boldsymbol{x})~ d\boldsymbol{x}=\lim_{k \to \infty} \int_{\rectangle} h_{m_k}(\boldsymbol{x})~ d\boldsymbol{x} = A,$$
completing the task.
\end{proof}
\subsection{Construction of dimension preserving approximants}
Hutchinson \cite{H} first hinted at the generation of parameterized fractal curves. In \cite{MF1}, Barnsley introduced Fractal Interpolation Functions (FIFs) via Iterated Function Systems (IFSs). It is important to choose the IFS appropriately so that its attractor is the graph of a continuous function, called a FIF. We refer the reader to \cite{MF1} for more details regarding the construction of FIFs.
Computation of dimensions of fractal functions has been an integral part of fractal geometry. In \cite{MF1}, Barnsley proved estimates for the Hausdorff dimension of an affine FIF, and Falconer established a similar result in \cite{Falc2}. Barnsley and his collaborators \cite{MF4,MF6,Hardin} computed the box dimension of classes of affine FIFs. In \cite{MF4}, FIFs generated by bilinear maps have been studied. In \cite{HM}, a formula for the box dimension of FIFs from $\R^n$ to $\R^m$ was proved. A particular case of FIFs given by Navascu\'es \cite{M2}, namely the (univariate) \emph{$\alpha$-fractal function}, has proven very useful in approximation theory and operator theory. Using a series expansion, the box dimension of the (univariate) $\alpha$-fractal function is estimated in \cite{VV3}.
\par
Let us recall the construction of the bivariate $\alpha$-fractal function on rectangular grids introduced in \cite{VV1}, which was influenced by Ruan and Xu \cite{Ruan}.\\
Let $x_0=a,~x_N=b,~y_0=c,~y_M=d,$ and $f \in \mathcal{C}(\rectangle).$ Let us denote $\Sigma_k=\{1,2,\dots,k\},$ $ \Sigma_{k,0}=\{0,1,\dots,k \},$ $\partial \Sigma_{k,0}=\{0,k\} $ and int$\Sigma_{k,0}=\{1,2,\dots,k-1\}.$ Further, a net $\Delta$ on $\rectangle$ is defined as follows:
$$ \Delta:=\{(x_i,y_j):i \in \Sigma_{N,0},~j \in \Sigma_{M,0}~ \text{and}~ x_0<x_1<\dots<x_N; ~y_0<y_1<\dots<y_M\}.$$
For each $i \in \Sigma_N$ and $j \in \Sigma_M$, let us define $I_i=[x_{i-1},x_i],~J_j=[y_{j-1},y_j]$ and $\rectangle_{ij}:=I_i \times J_j.$ For each $i \in \Sigma_N,$ we define contraction mappings $u_i:I \rightarrow I_i$ such that
$$ u_i(x_0)=x_{i-1}, ~~ u_i(x_N)=x_i, ~~\text{if $i$ is odd}, ~~\text{and}
~ u_i(x_0)=x_i,~~ u_i(x_N)=x_{i-1},~~ \text{if $i$ is even.}$$
Similarly, for each $j \in \Sigma_M,$ we define $v_j:J \rightarrow J_j,$ and set $Q_{ij}(\boldsymbol{x}):= (u_i^{-1}(x),v_j^{-1}(y)),$ where $\boldsymbol{x}=(x,y) \in \rectangle_{ij}.$
Let $\alpha \in \mathcal{C}(\rectangle)$ be such that $\|\alpha\|_{\infty}<1.$ Assume further that $s \in \mathcal{C}(\rectangle)$ satisfies $s(x_i,y_j)=f(x_i,y_j)$ for all $i \in \partial \Sigma_{N,0}, j \in \partial \Sigma_{M,0}.$ By \cite[Theorem $3.4$]{VV2}, we have a unique function $f^{\alpha}_{\Delta,s} \in \mathcal{C}(\rectangle)$, termed the $\alpha$-fractal function, such that
\begin{equation*}
f^{\alpha}_{\Delta,s}(\boldsymbol{x})= f(\boldsymbol{x})+\alpha(\boldsymbol{x})~ f^{\alpha}_{\Delta,s}\big(Q_{ij}(\boldsymbol{x})\big)- \alpha(\boldsymbol{x})~s\big(Q_{ij}(\boldsymbol{x})\big),
\end{equation*}
for $\boldsymbol{x} \in \rectangle_{ij},~ (i,j) \in \Sigma_N \times \Sigma_M.$
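Since $\|\alpha\|_{\infty}<1$, the displayed self-referential equation is a fixed-point equation for a contraction, so $f^{\alpha}_{\Delta,s}$ can be computed by Banach iteration. Below is a discrete sketch (our own toy setup) on $[0,1]^2$ with $N=M=2$, a uniform net, constant $\alpha$, and $s\equiv 0$, which agrees with the chosen $f$ at the four corners; the dyadic grid resolution is picked so that each $Q_{ij}$ maps grid points to grid points:

```python
import numpy as np

K = 32                       # grid step 1/(2K) on [0, 1]
idx = np.arange(2 * K + 1)
# Index form of Q for the N = M = 2 uniform partition with orientation flips:
# u_1^{-1}(x) = 2x on [0, 1/2] and u_2^{-1}(x) = 2 - 2x on [1/2, 1].
Q = np.where(idx <= K, 2 * idx, 2 * (2 * K - idx))

xs = idx / (2.0 * K)
X, Y = np.meshgrid(xs, xs, indexing="ij")
F = X * (1 - X) + Y * (1 - Y)    # seed f; vanishes at the four corners
S = np.zeros_like(F)             # s = 0 agrees with f at the corners
alpha = 0.3                      # constant scaling, |alpha| < 1

G = F.copy()
for _ in range(200):             # fixed-point iteration of the RB operator
    G = F + alpha * (G[np.ix_(Q, Q)] - S[np.ix_(Q, Q)])
```

At the fixed point, the iterate interpolates $f$ on the net $\Delta$ while deviating from it elsewhere, which is the fractal perturbation.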
\begin{note}
In this note, we recall Theorem $5.16$ in \cite{VV2}.
With the metric $$d_{\rectangle}(\boldsymbol{x},\boldsymbol{y}):=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2},~~\text{where}~\boldsymbol{x}=(x_1,x_2),~ \boldsymbol{y}=(y_1,y_2),$$ we consider $f$ and $s$ such that
\begin{equation}\label{Hypo}
\begin{aligned}
& |f(\boldsymbol{x}) -f(\boldsymbol{y})| \le K_f d_{\rectangle}(\boldsymbol{x},\boldsymbol{y})^{\sigma},\\&
|s(\boldsymbol{x}) -s(\boldsymbol{y})| \le K_s d_{\rectangle}(\boldsymbol{x},\boldsymbol{y})^{\sigma},
\end{aligned}
\end{equation}
for every $\boldsymbol{x},\boldsymbol{y} \in \rectangle,$ and for fixed $K_f, K_s > 0.$ Assume that for some $k_f>0, \delta_0 >0$ the following holds: for each $\boldsymbol{x} \in \rectangle $ and $ 0< \delta <\delta_0$ there exists $\boldsymbol{y}$ such that $d_{\rectangle}(\boldsymbol{x},\boldsymbol{y}) \le \delta$ and
\begin{equation} \label{HCeq2}
|f(\boldsymbol{x})-f(\boldsymbol{y})| \ge k_fd_{\rectangle}(\boldsymbol{x},\boldsymbol{y})^{\sigma}.
\end{equation}
Furthermore, we suppose that $N=M,$ that $x_i-x_{i-1} = \frac{1}{N}$ and $y_j-y_{j-1} =\frac{1}{M}$ for all $i \in \Sigma_N,~j \in \Sigma_M,$ and that the scaling function $\alpha$ is constant.\\
If $ |\alpha|< \min\Big\{\frac{1}{M},\frac{k_f}{(K_{f^\alpha}+K_s)M^{\sigma}}\Big\},$ then $\dim_B\big(Gr(f^{\alpha})\big) = 3 - \sigma.$
\end{note}
\begin{remark}
With the assumptions in the above note, one may construct dimension preserving approximants for a given function, see, for instance, \cite[Theorem $3.16$]{VM}.
\end{remark}
Navascu\'es \cite{M1} developed the notion of the (univariate) $\alpha$-fractal function via the so-called (univariate) fractal operator. In \cite{VV1,VV2}, her collaborators extended some of her results to the bivariate setting. Putting $L= B_{m,n}$ in \cite[Theorem $3.1$]{VV1}, we obtain a unique function $f^{\alpha}_{\Delta,B_{m,n}} \in \mathcal{C}(\rectangle)$ such that
\begin{equation}\label{Fnleq1}
f^{\alpha}_{\Delta,B_{m,n}}(\boldsymbol{x})= f(\boldsymbol{x})+\alpha(\boldsymbol{x})~ f^{\alpha}_{\Delta,B_{m,n}}\big(Q_{ij}(\boldsymbol{x})\big)- \alpha(\boldsymbol{x})~B_{m,n}(f)\big(Q_{ij}(\boldsymbol{x})\big),
\end{equation}
for $\boldsymbol{x} \in \rectangle_{ij},~ (i,j) \in \Sigma_N \times \Sigma_M.$
Following the work of \cite{VV1}, we define a single-valued fractal operator
$\mathcal{F}^\alpha_{m,n}: \mathcal{C}(\rectangle) \to \mathcal{C}(\rectangle)$ by $$\mathcal{F}_{m,n}^\alpha(f) =f^{\alpha}_{\Delta,B_{m,n}}.$$
In \cite{VV1}, several operator-theoretic results for the fractal operator are obtained. We recall that $\mathcal{F}^\alpha_{m,n}$ is a bounded linear operator, see, for instance, \cite[Theorem $3.2$]{VV1}.
\begin{lemma}[\cite{CC}, Lemma $1$]
Let $(X,\|.\|)$ be a Banach space, $T: X \to X$ be a linear operator. Suppose there exist constants $\lambda_1, \lambda_2 \in [0,1)$ such that
$$ \|Tx-x\| \le \lambda_1 \|x\| + \lambda_2 \|Tx\|, \quad \forall~~ x \in X.$$
Then $T$ is a topological isomorphism, and
$$\frac{1-\lambda_2}{1+\lambda_1} \|x\| \le \| T^{-1}x\| \le \frac{1+\lambda_2}{1-\lambda_1} \|x\|,\quad \forall~~x \in X.$$
\end{lemma}
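The lemma admits a quick finite-dimensional sanity check. In the sketch below (our own illustration, not from \cite{CC}), we take a hypothetical $T = I + E$ on $\mathbb{R}^6$ with $\|E\|=0.2$; the hypothesis then holds with $\lambda_1 = \|E\| = 0.2$ and $\lambda_2 = 0$, and the two-sided bound on $\|T^{-1}x\|$ can be verified on random vectors.

```python
import numpy as np

# Numerical illustration of the lemma for T = I + E, ||E|| = 0.2:
# ||Tx - x|| = ||Ex|| <= 0.2 ||x||, i.e. lambda1 = 0.2, lambda2 = 0.
rng = np.random.default_rng(0)
n = 6
E = rng.standard_normal((n, n))
E *= 0.2 / np.linalg.norm(E, 2)          # rescale to spectral norm 0.2
T = np.eye(n) + E
lam1, lam2 = 0.2, 0.0
Tinv = np.linalg.inv(T)                  # invertible by the Neumann series

lo = (1 - lam2) / (1 + lam1)             # lemma's lower bound on ||T^-1 x||/||x||
hi = (1 + lam2) / (1 - lam1)             # lemma's upper bound
xs = rng.standard_normal((100, n))
ratios = np.linalg.norm(xs @ Tinv.T, axis=1) / np.linalg.norm(xs, axis=1)
```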
\begin{note}\label{use7}
Recall the explicit form of the bivariate Bernstein operator on $\rectangle = [a,b] \times [c,d]$:
\begin{equation*}
\begin{split}
B_{m,n}(f)(\boldsymbol{x})= &\frac{1}{(b-a)^m(d-c)^n}\sum_{i=0}^m \sum_{j=0}^n {m \choose i} {n \choose j}(x-a)^i (b-x)^{m-i}\\& (y-c)^j (d-y)^{n-j}f\Big(a+\frac{i(b-a)}{m}, c+\frac{j(d-c)}{n}\Big).
\end{split}
\end{equation*}
Choosing $f=1,$ we have
\begin{equation*}
\begin{split}
B_{m,n}1(\boldsymbol{x}) & = \frac{1}{(b-a)^m(d-c)^n}\sum_{i=0}^m \sum_{j=0}^n {m \choose i} {n \choose j}(x-a)^i (b-x)^{m-i}(y-c)^j (d-y)^{n-j}\\ &=\frac{1}{(b-a)^m(d-c)^n}\sum_{i=0}^m {m \choose i}(x-a)^i (b-x)^{m-i}\sum_{j=0}^n {n \choose j}(y-c)^j (d-y)^{n-j}\\ & =\frac{1}{(b-a)^m(d-c)^n}\sum_{i=0}^m {m \choose i}(x-a)^i (b-x)^{m-i}(y-c +d-y)^{n}\\ & =\frac{1}{(b-a)^m(d-c)^n}(x-a+b-x)^m (y-c +d-y)^{n} \\ & =1.
\end{split}
\end{equation*}
This implies that $\|B_{m,n}\| \ge 1.$
Now, for every $f \in \mathcal{C}(\rectangle)$ we get
\begin{equation*}
\begin{split}
|B_{m,n}(f)(\boldsymbol{x})| & \le \frac{\|f\|_{\infty}}{(b-a)^m(d-c)^n}\sum_{i=0}^m \sum_{j=0}^n {m \choose i} {n \choose j}(x-a)^i (b-x)^{m-i}(y-c)^j (d-y)^{n-j}\\ & = \|f\|_{\infty},
\end{split}
\end{equation*}
which produces $\|B_{m,n}\| \le 1.$ Therefore, we have $\|B_{m,n}\|=1.$
\end{note}
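The computation in the note can be checked numerically. The sketch below is illustrative (we take $[a,b]\times[c,d]=[0,1]^2$, so the prefactor equals $1$): it evaluates $B_{m,n}$ directly from the formula and confirms $B_{m,n}1 = 1$ and $|B_{m,n}(f)| \le \|f\|_\infty$ at sample points.

```python
import numpy as np
from math import comb

# Bivariate Bernstein operator on [0,1]^2, straight from the formula.
def bernstein2(f, m, n, x, y):
    val = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            w = (comb(m, i) * comb(n, j)
                 * x**i * (1 - x)**(m - i)
                 * y**j * (1 - y)**(n - j))
            val += w * f(i / m, j / n)
    return val

# B_{m,n}(1) = 1 (partition of unity) and |B_{m,n}(f)| <= ||f||_inf,
# hence ||B_{m,n}|| = 1.
pts = [(0.1, 0.2), (0.5, 0.9), (0.77, 0.33)]
ones = [bernstein2(lambda u, v: 1.0, 4, 3, x, y) for x, y in pts]
g = lambda u, v: np.sin(3 * u) * np.cos(2 * v)        # ||g||_inf <= 1
vals = [abs(bernstein2(g, 6, 5, x, y)) for x, y in pts]
```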
\begin{theorem}\label{thmtopiso}
The fractal operator $\mathcal{F}^\alpha_{m,n}:
\mathcal{C}(\rectangle) \to \mathcal{C}(\rectangle)$ is a topological isomorphism.
\end{theorem}
\begin{proof}
Using Equation (\ref{Fnleq1}) and Note \ref{use7}, one gets
\begin{equation*}
\begin{split}
\big \| f - \mathcal{F}^\alpha_{m,n}(f)\big\|_\infty \le~ \|\alpha\|_\infty \big\|\mathcal{F}_{m,n}^\alpha(f) - B_{m,n}f \big\|_\infty \le ~\|\alpha\|_\infty \big\|\mathcal{F}_{m,n}^\alpha(f)\big\|_\infty+ \|\alpha\|_\infty \|f\|_\infty ,
\end{split}
\end{equation*}
where we used $\|B_{m,n}f\|_{\infty} \le \|f\|_{\infty}$ from Note \ref{use7}. Since $\|\alpha\|_\infty < 1$, the previous lemma, applied with $\lambda_1 = \lambda_2 = \|\alpha\|_\infty,$ yields that the fractal operator $\mathcal{F}_{m,n}^\alpha$ is a topological isomorphism.
\end{proof}
\begin{remark}
The above theorem strengthens item $4$ of \cite[Theorem $3.2$]{VV1}. To be precise, item $4$ states that $\mathcal{F}_{m,n}^\alpha$ is a topological isomorphism if $ \|\alpha\|_\infty < \big(1+\|I - B_{m,n}\|\big)^{-1},$ which is more restrictive than the standing assumption of the above theorem, namely $\|\alpha\|_\infty < 1.$
\end{remark}
\begin{theorem}
Let $f \in \mathcal{C}(\rectangle)$ be such that $f(\boldsymbol{x}) \geq 0,~\forall ~\boldsymbol{x} \in \rectangle.$ Then for $\epsilon >0,$ and for $\alpha \in \mathcal{C}(\rectangle)$ satisfying $\|\alpha \|_{\infty} < 1,$ we have an $\alpha$-fractal function $ g_{\Delta,B_{m,n}}^{\alpha}$ satisfying $$ g_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x}) \geq 0, ~~\forall ~\boldsymbol{x} \in \rectangle~~\text{and}~ \|f- g_{\Delta,B_{m,n}}^{\alpha}\|_{\infty} <\epsilon.$$
\end{theorem}
\begin{proof}
Note that the Bernstein operator $B_{m,n}$ fixes the constant function $1$, that is, $B_{m,n}(1)=1,$ where $ 1(\boldsymbol{x})=1 $ on $\rectangle.$ Consider $ \alpha \in \mathcal{C}(\rectangle)$ such that $\|\alpha\|_{\infty} < 1.$ From Equation \ref{Fnleq1}, we deduce
$$ \|g_{\Delta,B_{m,n}}^{\alpha}- g\|_{\infty} \leq \|\alpha \|_{\infty}\|g_{\Delta,B_{m,n}}^{\alpha}- B_{m,n}g\|_{\infty}, ~\forall~ g \in \mathcal{C}(\rectangle).$$ Choosing $g=1$, the above inequality gives $$ \|1_{\Delta,B_{m,n}}^{\alpha}- 1\|_{\infty} \leq \|\alpha \|_{\infty}\|1_{\Delta,B_{m,n}}^{\alpha}- 1\|_{\infty},$$ which, since $\|\alpha\|_{\infty} < 1,$ yields $\|1_{\Delta,B_{m,n}}^{\alpha}- 1\|_{\infty} = 0.$ Therefore, $1_{\Delta,B_{m,n}}^\alpha=1$, that is, $\mathcal{F}_{m,n}^{\alpha}(1)=1.$\\
Let $\epsilon >0,$ $\alpha \in \mathcal{C}(\rectangle)$ with $\|\alpha\|_{\infty} < 1,$ and $f \in \mathcal{C}(\rectangle).$ By Theorem \ref{densethm}, there exists a function $h_{\Delta,B_{m,n}}^\alpha$ such that $$ \|f- h_{\Delta,B_{m,n}}^{\alpha}\|_{\infty} <\frac{\epsilon}{2}, ~ \text{ where}~ \mathcal{F}_{m,n}^{\alpha}(h)=h_{\Delta,B_{m,n}}^{\alpha}.$$ Define
$ g_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x})=h_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x})+ \frac{\epsilon}{2} $ for all $\boldsymbol{x} \in \rectangle.$ Since $\mathcal{F}_{m,n}^{\alpha}(1)=1,$ $$ g_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x})=h_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x})+ \frac{\epsilon}{2}1(\boldsymbol{x}) = h_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x})+ \frac{\epsilon}{2}1^{\alpha}(\boldsymbol{x}).$$
Further, since $\mathcal{F}_{m,n}^{\alpha}$ is a linear operator $$ g_{\Delta,B_{m,n}}^{\alpha} = h_{\Delta,B_{m,n}}^{\alpha}+ \frac{\epsilon}{2}1^{\alpha}= \mathcal{F}_{m,n}^{\alpha}(h+\frac{\epsilon}{2} 1).$$
Moreover,
\begin{equation*}
\begin{aligned}
g_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x}) & = h_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x})+ \frac{\epsilon}{2}\\ & = h_{\Delta,B_{m,n}}^{\alpha}(\boldsymbol{x})+ \frac{\epsilon}{2}-f(\boldsymbol{x})+f(\boldsymbol{x}) \\ & \geq f(\boldsymbol{x})+ \frac{\epsilon}{2} - \| h_{\Delta,B_{m,n}}^{\alpha}- f \|_{\infty}\\& \geq 0.
\end{aligned}
\end{equation*}
Further, we get
\begin{equation*}
\begin{aligned}
\| f-g_{\Delta,B_{m,n}}^{\alpha} \|_{\infty} & \leq \| f-h_{\Delta,B_{m,n}}^{\alpha} \|_{\infty}+\| h_{\Delta,B_{m,n}}^{\alpha}-g_{\Delta,B_{m,n}}^{\alpha} \|_{\infty}\\&< \frac{\epsilon}{2}+\frac{\epsilon}{2} \\&= \epsilon,
\end{aligned}
\end{equation*}
completing the proof.
\end{proof}
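The key step of the proof, shifting an $\epsilon/2$-approximant up by $\epsilon/2$, can be illustrated numerically. In the sketch below, $h$ is an arbitrary stand-in for the fractal approximant $h_{\Delta,B_{m,n}}^{\alpha}$ of the theorem.

```python
import numpy as np

# If h approximates a nonnegative f within eps/2, then g = h + eps/2
# is nonnegative and still within eps of f (the proof's shift trick).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
f = np.abs(np.sin(4 * x))                               # f >= 0
eps = 0.1
h = f + 0.99 * rng.uniform(-eps / 2, eps / 2, x.size)   # ||f - h|| < eps/2
g = h + eps / 2                                          # shifted approximant
```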
\section{Some multi-valued mappings}
First, we collect some definitions and related results which will be used in this section.
\begin{definition}[\cite{Aubin}]
Let $(X,\|.\|_X)$ and $(Y,\|.\|_Y)$ be normed linear spaces. For a multi-valued (set-valued) mapping $T: X \rightrightarrows Y$, the
domain of $T$ is defined by
$\text{Dom}(T):= \{x \in X: T(x) \neq \emptyset\}.$ Then $T: X \rightrightarrows Y$ is
\begin{enumerate}
\item \emph{convex} if $$\lambda T(x_1)+(1-\lambda)T(x_2) \subseteq T\big(\lambda x_1+(1-\lambda)x_2\big), ~\forall ~x_1, x_2 \in \text{Dom}(T),~~\lambda \in [0,1].$$
\item \emph{process} if $$\lambda T(x)= T(\lambda x),~ \forall~x \in X,~\lambda > 0, ~\text{and}~ 0 \in T(0).$$
\item \emph{linear} if $$\beta T(x_1)+ \gamma T(x_2) \subseteq T\big(\beta x_1+\gamma x_2\big), ~\forall ~ x_1, x_2 \in \text{Dom}(T),~\beta, \gamma \in \mathbb{R}.$$
\item \emph{closed} if the graph of $T$ defined by $Gr(T):= \big\{(x,y)\in X \times Y: y \in T(x) \big\}$ is closed.
\item \emph{Lipschitz} if
$$T(x_1) \subseteq T(x_2) + l \|x_1-x_2\|_X ~U_Y,~\forall~x_1, x_2 \in \text{Dom}(T), ~\text{for some constant}~ l>0,$$
where $U_Y=\{y\in Y: \|y\|_Y\le 1\}$.
\item \emph{lower semicontinuous} at $x\in X$ if, for every open set $U$ in $Y$ satisfying $ U \cap T(x) \neq \emptyset,$ there exists a $\delta > 0$ such that $$U \cap T(x') \neq \emptyset ~\text{ whenever}~ \|x-x'\|_X < \delta.$$
\end{enumerate}
\end{definition}
Note that the above definitions are also applicable in metric spaces with obvious modifications, see, for instance, \cite{Aubin}.
\begin{theorem}[\cite{DS}, Corollary $1.4$] \label{Multhm2}
Let $T: \text{Dom}(T)=X \rightrightarrows Y$ be linear such that $T(0)= \{0\}.$ Then, $T$ is single-valued.
\end{theorem}
\begin{theorem}[\cite{DS}, Corollary $2.1$]\label{Multhm2a}
Let $T: \text{Dom}(T)=X \rightrightarrows Y$ be such that $T(x_0)$ is singleton for some $x_0\in X.$ Then the following are equivalent:
\begin{itemize}
\item $T$ is single-valued and affine.
\item $T$ is convex.
\end{itemize}
\end{theorem}
Our work in this part is partly motivated by \cite{VV3}.
\begin{theorem}\label{Multhm3}
The multi-valued mapping $ \mathcal{W}^\alpha_{\Delta}: \mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle)$ defined by $$\mathcal{W}^\alpha_{\Delta}(f) =\{f^{\alpha}_{\Delta,B_{m,n}}: m,~n \in \mathbb{N} \}$$ is a Lipschitz process.
\end{theorem}
\begin{proof}
Using the linearity of $\mathcal{F}_{m,n}^\alpha,$ we have
\begin{equation*}
\begin{aligned}
\mathcal{W}^\alpha_{\Delta} (\lambda f) = \{(\lambda f)^{\alpha}_{\Delta,B_{m,n}}: m,~n \in \mathbb{N} \}
= \lambda \mathcal{W}^\alpha_{\Delta}(f),~ \forall~f \in \mathcal{C}(\rectangle),~\lambda > 0.
\end{aligned}
\end{equation*}
Again by linearity of $\mathcal{F}_{m,n}^\alpha,$ it is plain that $\mathcal{W}^\alpha_{\Delta} (0) = \{0\}.$ Therefore, $\mathcal{W}^\alpha_{\Delta}$ is a process.
\par
Let $f,g \in \mathcal{C}(\rectangle).$
Applying Equation (\ref{Fnleq1}), we have
\begin{equation*}
\begin{aligned}
\big|f^{\alpha}_{\Delta,B_{m,n}}(\boldsymbol{x})- g^{\alpha}_{\Delta,B_{m,n}}(\boldsymbol{x})\big| \le & ~ \|f- g\|_{\infty}+\|\alpha\|_{\infty}\|f^{\alpha}_{\Delta,B_{m,n}}- g^{\alpha}_{\Delta,B_{m,n}}\|_{\infty}\\&+\|\alpha\|_{\infty} \|B_{m,n}(g)- B_{m,n}(f)\|_{\infty},
\end{aligned}
\end{equation*}
for any $\boldsymbol{x} \in \rectangle.$
Further, we deduce
$$\|f^{\alpha}_{\Delta,B_{m,n}}- g^{\alpha}_{\Delta,B_{m,n}}\|_{\infty} \le \frac{1+\|\alpha\|_{\infty}\|B_{m,n}\|}{1- \|\alpha\|_{\infty}} \|f-g\|_{\infty}.$$
Using $\|B_{m,n} \| = 1,$
$$\|f^{\alpha}_{\Delta,B_{m,n}}- g^{\alpha}_{\Delta,B_{m,n}}\|_{\infty} \le \frac{1+\|\alpha \|_{\infty}}{1- \|\alpha \|_{\infty}} \|f-g\|_{\infty}.$$
Consequently, we have
$$ \mathcal{W}^\alpha_{\Delta}(g) \subseteq \mathcal{W}^\alpha_{\Delta}(f) +\dfrac{1+ \|\alpha \|_{\infty}}{1-\|\alpha\|_{\infty}}~ \|f-g\|_{\infty}U_{\mathcal{C}(\rectangle)},$$
proving the Lipschitz property of $\mathcal{W}^\alpha_{\Delta}$ and completing the proof.
\end{proof}
\begin{remark}
For the multivalued mapping $\mathcal{W}^\alpha_{\Delta}$, let us first note the following:
\begin{enumerate}
\item By the linearity of $\mathcal{F}^\alpha_{m,n},$ we have $\mathcal{W}^\alpha_{\Delta} (0) = \{0\}.$
\item If $\alpha \neq 0$ and $(m,n) \neq (k,l),$ then $f_{\Delta,B_{m,n}}^\alpha \neq f_{\Delta,B_{k,l}}^\alpha;$ hence $ \mathcal{W}^\alpha_{\Delta}: \mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle)$ is not single-valued.
\end{enumerate}
In view of the above items, Theorems \ref{Multhm2}-\ref{Multhm2a} imply that the mapping $ \mathcal{W}^\alpha_{\Delta}: \mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle )$ is not convex.
\end{remark}
\begin{theorem}
For a fixed net $\triangle$ and $m,n \in \mathbb{N},$ the multivalued mapping $\mathcal{T}^{\Delta}_{m,n}: \mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle )$ defined by $$ \mathcal{T}^{\Delta}_{m,n}(f) =\{f^{\alpha}_{\triangle,B_{m,n}}: \alpha \in \mathcal{C}(\rectangle ) ~\text{such that}~\|\alpha\|_{\infty} < 1 \}$$ is a process.
\end{theorem}
\begin{proof}
Let $f \in \mathcal{C}(\rectangle)$ and $\lambda > 0.$ Then
\begin{equation*}
\begin{aligned}
\lambda \mathcal{T}^{\Delta}_{m,n}(f)=&\lambda\{f^{\alpha}:\alpha \in \mathcal{C}(\rectangle ) ~\text{such that}~\|\alpha\|_{\infty} < 1 \}\\
=&\{ \lambda f^{\alpha}:\alpha \in \mathcal{C}(\rectangle ) ~\text{such that}~\|\alpha\|_{\infty} < 1 \}\\
=& \mathcal{T}^{\Delta}_{m,n}(\lambda f).
\end{aligned}
\end{equation*}
Moreover, using the linearity of the fractal operator, we have $f^{\alpha}=0$ whenever $f=0.$ That is, $0 \in \mathcal{T}^{\Delta}_{m,n}(0).$
Therefore, $\mathcal{T}^{\Delta}_{m,n}$ is a process.
\end{proof}
\begin{remark}
One may see that $\mathcal{T}^{\Delta}_{m,n}$ is not convex through the following computation. Let $f, g \in \mathcal{C}(\rectangle).$ Then
\begin{equation*}
\begin{aligned}
\mathcal{T}^{\Delta}_{m,n}(f+g)=&\{( f + g )^{\alpha}:\|\alpha\|_{\infty} < 1\}\\
=&\{ f^{\alpha}+g^{\alpha}:\|\alpha\|_{\infty} < 1 \}\\
\subseteq &\{ f^{\alpha}+g^{\beta}:\|\alpha\|_{\infty} < 1,\|\beta\|_{\infty} < 1 \}\\
=&\{f^{\alpha}:\|\alpha\|_{\infty} < 1 \}+\{g^{\beta}:\|\beta\|_{\infty} < 1 \}\\
\subseteq & \mathcal{T}^{\Delta}_{m,n}(f)+\mathcal{T}^{\Delta}_{m,n}(g).
\end{aligned}
\end{equation*}
\end{remark}
\begin{theorem}
For a fixed net $\triangle$ and $m,n \in \mathbb{N},$ the multivalued mapping $\mathcal{T}^{\Delta}_{m,n}: \mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle)$ defined by $$ \mathcal{T}^{\Delta}_{m,n}(f) =\{f^{\alpha}_{\triangle,B_{m,n}}: \|\alpha\|_{\infty} \le q < 1 \}, $$ satisfies the following: $$ \|\mathcal{T}^{\Delta}_{m,n}\| \le 1 + \frac{q}{1-q}\|Id -B_{m,n} \|.$$
\end{theorem}
\begin{proof}
We have
\begin{equation*}
\begin{aligned}
\|\mathcal{T}^{\Delta}_{m,n}\|=& \sup_{f \neq 0} \frac{d\big(0, \mathcal{T}^{\Delta}_{m,n}(f)\big)}{\|f\|_{\infty}}\\
=& \sup_{f \neq 0}\inf_{f^{\alpha} \in \mathcal{T}^{\Delta}_{m,n}(f)} \frac{\| f^{\alpha}\|_{\infty}}{\|f\|_{\infty}}\\
\le & \sup_{f \neq 0} \Big(1+ \frac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}}\|Id - B_{m,n}\| \Big)\\
\le & 1+ \frac{q}{1-q}\|Id - B_{m,n}\|,
\end{aligned}
\end{equation*}
hence the proof.
\end{proof}
\begin{theorem}
For a fixed net $\triangle$ and $m,n \in \mathbb{N},$ the multivalued mapping $\mathcal{T}^{\Delta}_{m,n}:\mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle)$ defined by $$ \mathcal{T}^{\Delta}_{m,n}(f) =\{f^{\alpha}_{\triangle,B_{m,n}}: \|\alpha \|_{\infty} < 1 \}$$ is lower semicontinuous.
\end{theorem}
\begin{proof}
Let $f \in \mathcal{C}(\rectangle),$ let $f^{\alpha} \in \mathcal{T}^{\Delta}_{m,n}(f),$ and let $(f_k)$ be a sequence in $\mathcal{C}(\rectangle)$ such that $f_k \to f.$ Since the fractal operator is continuous, we have $f^{\alpha}_k \rightarrow f^{\alpha}.$ It is clear that $f^{\alpha}_k \in \mathcal{T}^{\Delta}_{m,n}(f_k).$ Therefore, the result follows.
\end{proof}
\begin{theorem}
Let $\triangle$ be a net of $\rectangle$ and $m,n \in \mathbb{N}.$ The multi-valued mapping $\mathcal{T}^{\Delta}_{m,n}:\mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle)$ defined by $$ \mathcal{T}^{\Delta}_{m,n}(f) =\{f^{\alpha}_{\triangle,B_{m,n}}: \|\alpha\|_{\infty} \le q < 1 \}, $$ is Lipschitz.
\end{theorem}
\begin{proof}
Let $f,g \in \mathcal{C}(\rectangle).$
Equation (\ref{Fnleq1}) yields
\begin{equation*}
\begin{aligned}
\big|f^{\alpha}_{\triangle,B_{m,n}}(\boldsymbol{x})- g^{\alpha}_{\triangle,B_{m,n}}(\boldsymbol{x})\big| \le & \|f- g\|_{\infty}+\|\alpha\|_{\infty} \|f^{\alpha}_{\triangle,B_{m,n}}- g^{\alpha}_{\triangle,B_{m,n}}\|_{\infty}\\ & + \|\alpha\|_{\infty} \|B_{m,n}g- B_{m,n}f\|_{\infty},
\end{aligned}
\end{equation*}
for every $\boldsymbol{x} \in \rectangle.$
Further, we deduce
$$\|f^{\alpha}_{\triangle,B_{m,n}}- g^{\alpha}_{\triangle,B_{m,n}}\|_{\infty} \le \frac{1+\|\alpha\|_{\infty}\|B_{m,n}\|}{1- \|\alpha\|_{\infty}} \|f-g\|_{\infty}.$$
Since $\|\alpha\|_{\infty} \le q $ and $\|B_{m,n}\|=1,$ we get
$$\|f^{\alpha}_{\triangle,B_{m,n}}- g^{\alpha}_{\triangle,B_{m,n}}\|_{\infty} \le \frac{1+q}{1- q} \|f-g\|_{\infty}.$$
Choosing $l= \frac{1+ q}{1-q}, $ we have
$$ \mathcal{T}^{\Delta}_{m,n}(g) \subset \mathcal{T}^{\Delta}_{m,n}(f) +l~ \|f-g\|_{\infty} U_{\mathcal{C}(\rectangle)},$$
proving the assertion.
\end{proof}
\begin{theorem}
For a fixed admissible scale function $\alpha$ and $m,n \in \mathbb{N},$ the multivalued mapping $\mathcal{V}^{\alpha}_{m,n}:\mathcal{C}(\rectangle) \rightrightarrows \mathcal{C}(\rectangle)$ defined by $$ \mathcal{V}^{\alpha}_{m,n}(f) =\{f^{\alpha}_{\triangle,B_{m,n}}: \text{all possible nets}~ \triangle \}$$ is a process.
\end{theorem}
\begin{proof}
Let $f \in \mathcal{C}(\rectangle)$ and $\lambda > 0,$ then
\begin{equation*}
\begin{aligned}
\lambda \mathcal{V}^{\alpha}_{m,n}(f)= & \lambda\{f^{\alpha}_{\triangle,B_{m,n}}:\text{all possible nets}~ \triangle \}\\
=&\{ \lambda f^{\alpha}_{\triangle,B_{m,n}}:\text{all possible nets}~ \triangle \}\\
=&\{(\lambda f)^{\alpha}_{\triangle,B_{m,n}}:\text{all possible nets}~ \triangle \}\\
=& \mathcal{V}^{\alpha}_{m,n}(\lambda f).
\end{aligned}
\end{equation*}
The third equality follows from the fact that the fractal operator $ \mathcal{F}^{\alpha}_{m,n}$ is a linear operator.
Moreover, using linearity of the fractal operator, we have $f^{\alpha}_{\triangle,B_{m,n}}=0,$ whenever $f=0.$ That is, $0 \in \mathcal{V}^{\alpha}_{m,n}(0).$
Therefore, $\mathcal{V}^{\alpha}_{m,n}$ is a process.
\end{proof}
\begin{theorem}
For a fixed admissible scale function $\alpha$ and $m,n \in \mathbb{N},$ the multivalued mapping $\mathcal{V}^{\alpha}_{m,n}$ is lower semicontinuous.
\end{theorem}
\begin{proof}
Let $f \in \mathcal{C}(\rectangle),$ let $f^{\alpha}_{\triangle,B_{m,n}} \in \mathcal{V}^{\alpha}_{m,n}(f),$ and let $(f_k)$ be a sequence converging to $f$ in $\mathcal{C}(\rectangle).$ Since the fractal operator is continuous, we have $(f_k)^{\alpha}_{\triangle,B_{m,n}} \rightarrow f^{\alpha}_{\triangle,B_{m,n}}.$ By the definition of $\mathcal{V}^{\alpha}_{m,n},$ $(f_k)^{\alpha}_{\triangle,B_{m,n}} \in \mathcal{V}^{\alpha}_{m,n}(f_k).$ Hence, the lower semicontinuity of $\mathcal{V}^{\alpha}_{m,n}$ follows.
\end{proof}
\begin{theorem}
The multi-valued mapping $\Phi:[\dim(X),\dim(X)+\dim(Y)] \rightrightarrows \mathcal{C}(X,Y)$ defined by
\[
\Phi(\beta) :=\{f \in \mathcal{C}(X,Y): \dim (Gr(f)) =\beta \}
\]
is lower semicontinuous.
\end{theorem}
\begin{proof}
Let $U$ be an open set of $\mathcal{C}(X,Y).$
In the light of Theorem \ref{densethm}, that is, $\Phi(\alpha)=\mathcal{S}_{\alpha}$ is a dense subset of $\mathcal{C}(X,Y)$, we obtain
\[
\Phi(\alpha) \cap U \ne \emptyset, ~\forall ~ \alpha \in [\dim(X),\dim(X)+\dim(Y)].
\]
Now, by the very definition of lower semicontinuity, the result follows.
\end{proof}
\begin{remark}
Note that the multivalued mapping $\Phi$ is not closed. To see this, let $f \in \mathcal{C}(X,Y)$ with $\dim(Gr(f)) > \dim(X),$ and consider a sequence of Lipschitz functions $(f_k)$ converging to $f$ uniformly. Then $\dim(Gr(f_k))= \dim(X)$ for every $k.$ Hence $\big(\dim(X), f_k\big) \in Gr(\Phi)$ and $\big(\dim(X),f_k\big) \to \big(\dim(X),f\big)$ as $k \to \infty,$ while $\big(\dim(X),f\big) \notin Gr(\Phi)$ since $\dim(Gr(f)) > \dim(X).$ Therefore, $Gr(\Phi)$ is not closed.
\end{remark}
\section{Conclusion}
This paper developed a newly defined notion of constrained approximation, termed dimension preserving approximation, for bivariate functions. The latter part of the paper introduced some multi-valued operators associated with bivariate $\alpha$-fractal functions. The notion of dimension preserving approximation is new and demands further development. In particular, dimension preserving approximation of set-valued mappings may be one of our future investigations.
\bibliographystyle{amsplain}
2111.05485
\section{Conclusion}
We present a structure-based feature point extraction method and use thin-plate spline interpolation to achieve multi-modal forearm registration. We verified the registration accuracy and its robustness to axial rotation of the forearm, and gave the relationship between the axial rotation angle of the forearm and the feature peak value. With further acceleration, such as a C++ implementation, the algorithm has the potential for application in time-sensitive clinical environments, such as forearm tendon or nerve repair surgery.
\clearpage
\section{Discussion}
In this paper, we proposed a structure-based method to extract the key points of the forearm and provided a framework for multi-modal forearm registration. The framework was tested on a dataset containing 360° axial rotation of the forearm and evaluated using the DC, JC, HD, ASD, and ASSD indicators. In addition, the FID was used to measure registration similarity. Finally, the relationship between the rotation angle of the forearm and the peak value of the feature curve was given.
\par
\begin{figure}[hbp]
\begin{center}
\includegraphics[width=13cm]{picture/figure_10.pdf}
\captionv{12}{Short title - can be blank}
{\textbf{Changes in the projected area of the palm.} This picture shows the change of the forearm binary graph when the forearm axis rotates from 0 degree to 90 degrees. The red box indicates the difference in the change of the palm as it rotates.
\label{figure_10}
}
\end{center}
\end{figure}
The structure-based feature point extraction method performs better than other traditional feature-extraction-based registration methods on the multi-modal forearm registration dataset. First, we compared the registration accuracy of our algorithm with other classic registration algorithms, as shown in Table \ref{tab_methods_eval}. The results show that the structure-based feature extraction method achieves higher DC and JC scores and lower HD, ASD, and ASSD distances. Second, the registration effect of our algorithm was verified on images in which the forearm was rotated through 360 degrees. The FID was used to evaluate registration similarity, and the Euclidean distance of the feature points was used to evaluate the accuracy of feature point extraction. As shown in Figure \ref{figure_7}, the structure-based feature point extraction algorithm has better robustness. Finally, when we drew the feature curve of each image, we observed that the first peak of the feature curve has a clear correlation with the rotation angle of the forearm, as shown in Figure \ref{figure_9}. This is because when the forearm rotates axially, its projected area in the normal direction changes; this change is most obvious in the palm of the hand, as shown in Figure \ref{figure_10}. Therefore, the peak value of the feature curve of the structure-based feature extraction method can be mapped to the rotation angle of the forearm.
\par
As shown in Figure \ref{figure_6}, the difficulty of the forearm registration dataset is that the texture of the fixed image is relatively simple, while the texture of the moving image is more complicated. It is difficult to complete multi-modal forearm registration with registration algorithms based on generic feature extraction and matching. The structure-based feature point extraction method can quickly locate the wrist position from the forearm structure, producing matching feature point pairs. However, the position of the elbow feature point cannot be directly located by the feature curve, and must rely on skin color segmentation to locate the forearm boundary.
\par
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{picture/figure_11.pdf}
\captionv{12}{Short title - can be blank}
{\textbf{Real forearm registration result.} The left part is the forearm images of different people, and the right part is the result of the registration of the FAM-TPS algorithm we proposed.
\label{figure_11}
}
\end{center}
\end{figure}
We did not consider the correspondence of feature points between consecutive video frames in this paper. Therefore, for video registration, we treat each frame as an independent image, which sacrifices processing speed. In the future, we will study feature point tracking based on temporal information: once the feature point positions are determined in the first frame of a video, the image processing of subsequent frames can be omitted and feature point extraction becomes a tracking problem, which can improve registration speed. In addition, the non-rigid registration in our algorithm uses thin-plate spline interpolation. Due to the lack of registered and labeled samples, it was difficult for us to train a neural network in our experiments. In the future, we intend to study unsupervised learning registration methods based on structural information to further improve registration accuracy.
\par
This research mainly focuses on the registration of multi-modal forearm images. Through our proposed structure-based feature point extraction method and thin-plate spline interpolation, a high level of registration accuracy can be achieved. Future work will collect datasets of other limbs and study more robust structural feature extraction methods. At the same time, the algorithm will be further accelerated through a C++ implementation to meet the clinical needs of repairing forearm tendons or nerves.
\clearpage
\section{Experiments}
Several experiments were conducted to demonstrate the capabilities of FAM in terms of its robustness and general applicability to medical forearm image registration. We implemented several classic descriptor extraction methods and registered forearm images under the same framework. First, we use DC (Dice coefficient), JC (Jaccard coefficient), HD (Hausdorff distance), ASD (average surface distance), and ASSD (average symmetric surface distance) to evaluate the registration effect of different methods. Second, using the forearm image group rotated 360 degrees axially, the Euclidean distance between the projected feature points and the ground truth was used to test the accuracy of the matching points, and the FID between the images before and after registration was used to verify the accuracy of the registration results. Finally, when the forearm was rotated in the axial direction, experimental results validated that the feature peak value and the rotation angle of the forearm have an obvious corresponding relationship.
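For reference, the overlap and distance indicators can be sketched as follows (an illustrative implementation, not the evaluation code used in our experiments): DC and JC are computed from binary segmentation masks, and HD from boundary point sets.

```python
import numpy as np

def dice_jaccard(a, b):
    # Dice and Jaccard coefficients of two binary masks
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dc = 2.0 * inter / (a.sum() + b.sum())
    jc = inter / np.logical_or(a, b).sum()
    return dc, jc

def hausdorff(A, B):
    # symmetric Hausdorff distance between two 2-D point sets
    d = np.sqrt(np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
dc, jc = dice_jaccard(a, b)    # overlap of 1 pixel: DC = 2/3, JC = 1/2
hd = hausdorff(np.array([[0.0, 0.0]]), np.array([[3.0, 4.0]]))  # 5.0
```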
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=13cm]{picture/figure_6.pdf}
\captionv{12}{Short title - can be blank}
{\textbf{Results of forearm registration with different rotation angles.} From left to right, the fixed image, moving image, FAM rigid registration image, and FAM-TPS deformable registration image are shown in the forearm registration process. From top to bottom, it represents the image group where the forearm axis rotates 0-180°, sampling every 30 degrees. The red rectangle in the third column represents the edge defect of rigid registration. The registered image is a weighted superposition of the fixed image and the transformed floating image (the weight of fixed image is 0.4, the weight of moving image is 0.6).
\label{figure_6}
}
\end{center}
\end{figure}
\subsection{Registration methods comparison experiment}
To demonstrate the superiority of our proposed algorithm, all registration methods compared with FAM were tested on the forearm registration dataset in the same registration framework. Registration methods based on image descriptors such as SIFT, BRISK\cite{leutenegger2011brisk}, SURF, AKAZE\cite{alcantarilla2011fast}, and ORB\cite{rublee2011orb} were used for comparison. The DC, JC, HD, ASD, and ASSD indicators were used to evaluate the registered images; the results are shown in Table \ref{tab_methods_eval}.
\par
The registration framework first extracts the feature points of the fixed image and the moving image through the different methods and matches them with the FLANN matcher. Then, it calculates the affine transformation matrix between the two images and completes the image registration through affine transformation. In terms of parameter settings, the SIFT and SURF algorithms use a KD-tree for nearest-neighbor matching, while the ORB, BRISK, and AKAZE algorithms use LSH, both with 50 search recursions. When using the extracted matching point pairs to calculate the affine transformation matrix of the two images, the RANSAC method is used to eliminate wrong matches, with the threshold parameter set to 4.
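The last two steps of this pipeline can be sketched as follows (illustrative code, not our implementation; in practice OpenCV's estimator would be used): a least-squares $2\times 3$ affine fit combined with a minimal RANSAC loop using the 4-pixel threshold mentioned above.

```python
import numpy as np

def fit_affine(src, dst):
    # least-squares 2x3 affine A with dst ~ [x, y, 1] @ A.T
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

def ransac_affine(src, dst, thresh=4.0, iters=200, seed=0):
    # minimal RANSAC: sample 3 matches, fit, count inliers, refit on best set
    rng = np.random.default_rng(seed)
    X = np.hstack([src, np.ones((len(src), 1))])
    best_A, best_count = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        A = fit_affine(src[idx], dst[idx])
        resid = np.linalg.norm(X @ A.T - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best_A = fit_affine(src[inliers], dst[inliers])
    return best_A
```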
\begin{table}[htbp]
\begin{center}
\captionv{10}{}{Overall registration result for the evaluated methods.
Seven descriptor extraction methods were tested, including the two methods we proposed. Five evaluation criteria are used to evaluate the accuracy of the registration results of these methods. For DC and JC indicators, larger is better. For HD, ASD, and ASSD indicators, smaller is better. Bold font indicates the best result.
\label{tab_methods_eval}
\vspace*{2ex}
}
\begin{tabular}{llllll}
\hline\noalign{\smallskip}
\multicolumn{1}{l}{} &\multicolumn{5}{l}{Evaluation Standards} \\
\hline\noalign{\smallskip}
Methods & $DC\uparrow$ & $JC\uparrow$ & $HD\downarrow$ & $ASD\downarrow$ & $ASSD\downarrow$\\
\hline\noalign{\smallskip}
SIFT & \makecell[r]{0.47} & \makecell[r]{0.37} & \makecell[r]{485.21} & \makecell[r]{146.93} & \makecell[r]{155.11}\\
BRISK & \makecell[r]{0.31} & \makecell[r]{0.18} & \makecell[r]{515.37} & \makecell[r]{160.60} & \makecell[r]{188.74}\\
SURF & \makecell[r]{0.50} & \makecell[r]{0.38} & \makecell[r]{447.64} & \makecell[r]{196.74} & \makecell[r]{123.58}\\
AKAZE & \makecell[r]{0.56} & \makecell[r]{0.45} & \makecell[r]{447.38} & \makecell[r]{71.93} & \makecell[r]{90.83}\\
ORB & \makecell[r]{0.28} & \makecell[r]{0.16} & \makecell[r]{516.71} & \makecell[r]{230.29} & \makecell[r]{235.96}\\
FAM(Ours) & \makecell[r]{0.987} & \makecell[r]{0.974} & \makecell[r]{\textbf{387.21}} & \makecell[r]{1.22} & \makecell[r]{4.02}\\
FAM-tps(Ours) & \makecell[r]{\textbf{0.991}} & \makecell[r]{\textbf{0.982}} & \makecell[r]{395.69} & \makecell[r]{\textbf{0.958}} & \makecell[r]{\textbf{3.11}}\\
\hline
\end{tabular}
\end{center}
\end{table}
\par
It can be observed from the table that our method outperforms the other descriptor-based registration algorithms on all indicators. At the same time, based on the matching points extracted by our FAM method, the registration accuracy can be further improved by thin-plate spline interpolation.
\subsection{Axial rotation registration}
The forearm axial rotation dataset contains fixed and moving images in which the forearm rotates 360 degrees around its axis, with one image pair every 5 degrees, for a total of 72 image pairs. We conducted a registration experiment on this dataset and tested the registration results of FAM and FAM-TPS, as shown in Figure \ref{figure_6}. The affine transformation matrix calculated by FAM can register the fixed image with the moving image, but gaps exist at the edges of the forearm. To solve this problem, we use the feature point pairs extracted by FAM as control points for thin-plate spline interpolation. The registration effect of FAM-TPS is better than that of FAM, and the gaps at the edges of the image are eliminated. In addition, FAM-TPS was also tested on the forearms of different people. As shown in Figure \ref{figure_11}, our algorithm can still register successfully even when the shapes of the forearms are quite different.
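The thin-plate spline step can be sketched as follows (a minimal illustrative implementation, not the code used in our experiments): given the matched control points, the standard TPS linear system is solved per output coordinate, and the resulting warp interpolates the control points exactly.

```python
import numpy as np

def _U(r2):
    # TPS radial kernel U(r) = r^2 log r, computed from squared distances
    with np.errstate(divide="ignore", invalid="ignore"):
        val = 0.5 * r2 * np.log(r2)
    return np.where(r2 > 0.0, val, 0.0)

def tps_fit(src, dst):
    # solve the standard TPS linear system for 2-D control point pairs
    n = len(src)
    K = _U(np.sum((src[:, None] - src[None, :]) ** 2, axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, rhs)          # (n+3, 2) coefficients

def tps_apply(coef, src, pts):
    # evaluate the fitted warp at arbitrary points
    K = _U(np.sum((pts[:, None] - src[None, :]) ** 2, axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ coef[:-3] + P @ coef[-3:]
```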
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{picture/figure_7.pdf}
\captionv{12}{FID registration evaluation}
{\textbf{The FID registration evaluation result of the forearm.} The figure shows the registration accuracy curve over one full axial revolution of the forearm. The red solid line is the FID curve obtained by FAM rigid registration and the red dashed line its average; the blue solid line is the FID curve obtained by FAM-TPS deformable registration and the blue dashed line its average.
\label{figure_7}
}
\end{center}
\end{figure}
\par
FID and Euclidean distance are used as indicators to evaluate the registration accuracy of the FAM and FAM-TPS algorithms for forearms at different rotation angles. FID is a distance index that measures the similarity of two images. From Figure \ref{figure_7}, it can be observed that the average FID of FAM-TPS, which introduces thin-plate spline interpolation, is slightly larger. This is because FID is used in this experiment to judge the similarity between the registered fused image and the fixed image: even when the spatial gap between the affine-transformed moving image and the fixed image is smaller, the FID can be larger.
\begin{figure}[hbp]
\begin{center}
\includegraphics[width=10cm]{picture/figure_8.pdf}
\captionv{12}{ED registration evaluation}
{\textbf{The ED registration evaluation result of the forearm.} The figure shows the structure-based feature point extraction accuracy of the forearm. The abscissa represents the axial rotation angle of the forearm, and the ordinate represents the Euclidean distance between the affine-transformation projection points and the ground-truth feature points.
\label{figure_8}
}
\end{center}
\end{figure}
\par
In addition, we apply the affine transformation to the 20 feature points of the moving image extracted by FAM, project them into the fixed image, and compute the Euclidean distance to the ground truth to evaluate the matching accuracy of FAM on forearm images. As shown in Figure \ref{figure_8}, when the forearm rotates axially, the projection error of the projection matrix calculated by FAM is very small, with an average within 7.5 pixels (the width of the image is 1680 pixels).
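The evaluation step above can be sketched as follows. This is a minimal numpy-only illustration (the function name and the $2\times 3$ matrix layout are our assumptions, not from the paper): project moving-image points with the affine matrix and report the mean Euclidean distance to the ground-truth positions.

```python
import numpy as np

def projection_error(H, moving_pts, fixed_pts):
    """Mean Euclidean distance between affine-projected moving-image
    feature points and their ground-truth positions in the fixed image.
    H is a 2x3 affine matrix; points are (N, 2) arrays."""
    proj = moving_pts @ H[:, :2].T + H[:, 2]  # apply the affine transform
    return float(np.linalg.norm(proj - fixed_pts, axis=1).mean())
```

With an identity rotation and a pure translation, the error equals the translation length, which makes the metric easy to sanity-check.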
\subsection{Relationship between peak value and rotation angle}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{picture/figure_9.pdf}
\captionv{12}{Peak value vs. rotation angle}
{\textbf{Change of peak of forearm feature curve.} This figure shows the relationship between the forearm axial rotation angle and the peak value of the feature curve. Curves of different colors correspond to forearm images rotated between 0 and 90°.
\label{figure_9}
}
\end{center}
\end{figure}
The relationship between the peak value of the feature curve extracted by FAM and the axial rotation angle of the forearm is shown in Figure \ref{figure_9}. There is a clear correlation between the peak value of the feature curve and the rotation angle when the forearm rotates between 0 and 90 degrees: the larger the rotation angle, the smaller the peak value. The approximate rotation angle of the forearm can therefore be estimated from the extracted feature curve. In addition, by detecting the orientation of the thumb, it can be determined whether the axial rotation angle of the forearm lies in 0-90 degrees or 90-180 degrees.
\clearpage
\section{Introduction}
Soft tissue injuries are widespread in sports, particularly in soccer\cite{ekstrand2011epidemiology}, rugby\cite{lopez2012profile}, basketball\cite{borowski2008epidemiology}, and track and field\cite{jacobsson2012prevalence}. The mechanism of injury may be direct, indirect, or mixed trauma\cite{lopez2012profile,borowski2008epidemiology} and can result in a disability that requires surgery to repair. Preoperative planning is required for some critical conditions, such as tendon rupture\cite{burnham2011technique}. Using the forearm as an example, augmented reality (AR) technology based on image registration can project a digital anatomical model onto the forearm image, which facilitates localization of the injury and formulation of the surgical plan. Image registration lies at the core of AR, aligning the virtual scene with reality. As a result, registration accuracy is crucial for surgical planning, which has led to a variety of image registration approaches.
\par
Image registration is the process of transforming different image datasets into one coordinate system with matched imaging contents, and it has significant applications in medical image processing. For non-rigid objects, however, deformable multi-modal registration methods must be introduced to eliminate the residual misalignment that rigid registration algorithms cannot avoid. A deformable registration strategy usually consists of two sequential steps: a globally aligned affine transformation followed by a deformable transformation. We concentrate on the first step, for which we design a feature point extraction and matching method based on the forearm structure.
\par
An affine transformation requires calculating the affine transformation matrix between the two paired images, which means finding at least four pairs of matching points between them. An image matching framework usually consists of three major parts: feature detection, feature description, and matching. For image matching, finding an appropriate feature descriptor is the most important and challenging step, and descriptors based on various features have emerged in recent years. Gradient statistics are often used to form floating-point descriptors such as the histogram of oriented gradients (HOG)\cite{dalal2005histograms}, as in SIFT\cite{lowe1999object,lowe2004distinctive}. In SIFT, feature scale and orientation are determined by DoG computation and by the largest bin in a histogram of gradient orientations over a local circular region around the detected keypoint, thus achieving scale and rotation invariance. Another representative descriptor, SURF\cite{bay2006surf}, accelerates SIFT by using the responses of Haar wavelets to approximate the gradient computation. However, these traditional algorithms often fail when the source and target images differ substantially. This limitation has prompted investigations into learning-based descriptors, which have recently become dominant owing to their data-driven nature and promising performance. Existing learning-based methods take two forms, namely metric learning\cite{weinberger2009distance,zagoruyko2015learning,han2015matchnet,kedem2012non,wang2017deep} and descriptor learning\cite{salti2015learning,balntas2016pn,zhang2017learning,mishchuk2017working,wei2018kernelized,he2018local,tian2019sosnet,luo2019contextdesc}, according to the output of the learned descriptors.
\par
Despite the existence of quite accurate feature point extraction algorithms, they still suffer from a lack of robustness, which is critical for AR. The robustness of multi-modal forearm registration is therefore the emphasis of this work. The inconsistency of the textural appearance of multi-modal forearm images and the registration stability under forearm axial rotation are the two key issues. Hence, we propose a forearm feature representation curve (FFRC) based on the structural features of the forearm. This curve is not only unaffected by textural features but can also represent the axial rotation angle of the forearm. Moreover, we design a forearm registration framework (FAM and FAM-TPS) based on FFRC and use a variety of indicators to verify its stability.
\section{Methods}
The forearm image is defined as the fixed image in our registration framework, as shown in Figure \ref{figure_1}, while the digital anatomical model is defined as the floating image. Skin color extraction and morphological methods are then used to extract their binary maps. After computing the main direction and rotating the paired binary maps to the horizontal, we use the forearm feature representation curve to extract their matching feature points. Finally, the feature points are used for deformable registration, including affine transformation and thin-plate spline methods.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=15cm]{picture/figure_1.pdf}
\captionv{12}{Registration framework}
{\textbf{The registration framework based on FFRC.} This figure depicts a multi-modal registration framework based on FFRC. From left to right, extract the mask of the forearm, the forearm representation curve, the feature point matching, and the deformable registration.\label{figure_1}
}
\end{center}
\end{figure}
\subsection{Skin Color Feature Extraction}
Skin color feature extraction obtains the mask of the forearm region from the image, which provides a basis for extracting feature points of the forearm. The forearm region is segmented using the YCrCb color space and OTSU threshold segmentation.
\par
The YCrCb color space is a typical color model for skin color detection, where Y represents brightness, Cr the red component of the light source, and Cb the blue component. Human skin color occupies a limited portion of the Cr channel in YCrCb space. We therefore extract the Cr component from the image and apply OTSU threshold segmentation, which yields a well-segmented skin color area. The process of skin color extraction can be represented as:
\begin{equation}
S(\mathbf{F_o})=OTSU(G(\mathbf{F_o(Cr)})),
\end{equation}
where $\mathbf{F_o}$ denotes the original image of the forearm, $S(\cdot)$ denotes the skin color feature extraction result, $\mathbf{F_o(Cr)}$ denotes the Cr component of the forearm image, $G(\cdot)$ denotes a $5\times 5$ Gaussian filter, and $OTSU(\cdot)$ denotes the adaptive thresholding method based on maximum between-class variance.
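The $OTSU(\cdot)$ step can be sketched in a few lines. The following is a numpy-only illustration under our own assumptions (the Gaussian pre-smoothing $G(\cdot)$ is omitted, and the function names are ours); in practice one would typically call OpenCV's \texttt{cv2.threshold} with \texttt{THRESH\_OTSU} on the Cr channel.

```python
import numpy as np

def otsu_threshold(channel):
    """Threshold maximizing the between-class variance (the OTSU(.) step)
    for an 8-bit single-channel image."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability up to level k
    mu = np.cumsum(prob * np.arange(256))   # cumulative intensity mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # empty classes contribute nothing
    return int(np.argmax(sigma_b))

def skin_mask(cr):
    """S(F_o): binarize the (already smoothed) Cr channel with Otsu."""
    return np.where(cr > otsu_threshold(cr), 255, 0).astype(np.uint8)
```

On a bimodal Cr histogram (background vs. skin), the returned threshold falls in the gap between the two modes, so the mask separates the forearm from the background.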
\par
The extraction result is shown in Figure \ref{figure_3}. By comparing the forearm image and its extracted mask, we can observe that the algorithm is able to extract the forearm from the background.
\subsection{Binary Map Principal Direction Correction}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{picture/figure_2.pdf}
\captionv{12}{Main direction extraction}
{\textbf{Extraction of main direction for the forearm.} This picture shows the extraction process of the main direction of the binary image (for visual display, the forearm is rendered in black). The picture on the right shows a rotation-axis angle between 0-90°, and the picture on the left an angle between 90-180°. The line segment AB represents the length of the binary image projected onto the axis of rotation.\label{figure_2}
}
\end{center}
\end{figure}
The main direction of the binary map denotes the angle between the trend direction of the pixel distribution in the image and the X-axis. Once the main direction of the binary map containing the forearm is determined, the binary image can be rotated to be parallel to the X-axis. This operation accelerates the extraction of forearm feature points described in the following subsection.
\par
In this paper, the main direction is limited to the range $[0,180^{\circ})$. As shown in Figure \ref{figure_2}, we rotate a straight line through this range and, for each angle, calculate the length of the projection segment $AB$ (Eq.~\ref{equ:AB}) formed by projecting the points of interest in the image onto this axis of rotation.
\begin{equation}
\begin{array}{l}
A={Min}\left(\left|P_{r}\right|\right) \\
B={Max}\left(\left|P_{r}\right|\right)
\end{array}
\label{equ:AB}
\end{equation}
where $P_r$ denotes the projection points of the binary map on the rotation axis, and $|\cdot|$ denotes the distance from a projection point to the origin.
\par
Since the rotation axis passes through the origin, its equation can be written as $y=kx$ (except at $90^{\circ}$), so the projection coordinates of image points onto the rotation axis are easily calculated. The length of the projection segment reaches its maximum at a particular angle during the rotation, and this angle is the main direction. When calculating the length of the projection segment, if the rotation angle of the axis is greater than 90 degrees, we translate the image along the X-axis to ensure that the projection segment always stays above the Y-axis.
\par
Simultaneously, to speed up this phase, we use a method that is easier to compute at the cost of some precision. Since the forearm can be approximately regarded as a rigid body, after obtaining its binary map we extract the minimum circumscribed rectangle of the forearm and directly treat the rotation angle of this rectangle as the main direction of the binary map.
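As a rough, numpy-only stand-in for this step (the minimum circumscribed rectangle would normally come from \texttt{cv2.minAreaRect}), the main direction of an elongated binary mask can also be approximated by the leading principal axis of the foreground pixel coordinates. This PCA variant is our own illustrative substitute, not the paper's method.

```python
import numpy as np

def main_direction(mask):
    """Approximate main direction (degrees, in [0, 180)) of the foreground:
    the leading principal axis of the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                 # center the coordinates
    cov = pts.T @ pts / len(pts)            # 2x2 coordinate covariance
    w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
    vx, vy = v[:, -1]                       # leading eigenvector
    return np.degrees(np.arctan2(vy, vx)) % 180.0
```

For a thin elongated blob the leading eigenvector points along the forearm, so the returned angle matches the rotating-projection definition up to discretization error.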
\subsection{Forearm Feature Representation Curve}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=15cm]{picture/figure_3.pdf}
\captionv{12}{Forearm feature representation curve}
{\textbf{Forearm Feature Representation Curve.} The figure on the left shows the forearm feature curve extracted by the structure-based feature extraction method. The blue curve represents the original curve, and the red curve represents the Kalman filtered curve. The image on the right shows the result of skin color extraction.\label{figure_3}
}
\end{center}
\end{figure}
After skin color extraction and main direction correction of the binary image, we obtain a forearm mask parallel to the X-axis of the image. We count the number of pixels with value 255 in each column of the image and construct a one-dimensional curve $C_o$, represented by the blue curve in Figure \ref{figure_3}. $C_o$ can be expressed as:
\begin{equation}
\begin{array}{l}
C_{o}(i)=\sum_{j=0}^{h}\left(S^{\prime}\left(\mathbf{F_{o}}\right)_{x=i}^{y=j}\right), 0 \leq i \leq w \\
\end{array}
\end{equation}
where $\mathbf{F_o}$ denotes the original image of the forearm, $S^{\prime}\left(\mathbf{F_{o}}\right)$ is given in Eq.(\ref{equ:S(F)_prime}), $w$ denotes the width of $\mathbf{F_o}$, and $h$ denotes its height.
\begin{equation}
S^{\prime}\left(F_{o}\right)=\left\{\begin{array}{ll}
1, & \text{if } S\left(\mathbf{F_{o}}\right)>0 \\
0, & \text{if } S\left(\mathbf{F_{o}}\right)=0
\end{array}\right.
\label{equ:S(F)_prime}
\end{equation}
It can be seen that the location of the wrist corresponds to the position of a valley in this feature curve $C_o$. The blue curve is unsmooth and noisy due to burrs on the edge of the forearm mask. Therefore, a Kalman filter is applied to the original curve $C_o$ to obtain the filtered curve $C_k$.
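The curve construction and smoothing can be sketched as follows. The column-sum step matches the definition of $C_o$ above; for the smoothing, we use a one-dimensional Kalman filter with a random-walk state model, where the noise levels \texttt{q} and \texttt{r} are assumed values, not taken from the paper.

```python
import numpy as np

def feature_curve(mask):
    """C_o(i): count of foreground (255) pixels in each column of the mask."""
    return (mask > 0).sum(axis=0).astype(float)

def kalman_smooth(c, q=0.01, r=4.0):
    """1-D Kalman filter with a random-walk state model, producing the
    smoothed curve C_k. q = process noise, r = measurement noise."""
    x, p = c[0], 1.0
    out = np.empty_like(np.asarray(c, dtype=float))
    for i, z in enumerate(c):
        p += q                     # predict: state uncertainty grows
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with measurement z
        p *= (1.0 - k)
        out[i] = x
    return out
```

On a noisy column-count curve, the filtered output varies much more slowly than the raw curve, which makes the valley at the wrist easier to locate.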
\par
The troughs of the curve can be filtered and scored using Eq.(\ref{equ:P_w}), which yields the trough corresponding to the X-axis location of the wrist, $P_x^w$, based on the forearm feature representation curve.
\begin{equation}
P_{x}^{w}=\underset{t_{j}}{\operatorname{Argmax}}\left(\sum_{i=1}^{L} \operatorname{sign}\left(C\left(t_{j}-\frac{L}{2}+i\right)-C\left(t_{j}\right)\right)\right), \quad j=1,2,\ldots,n
\label{equ:P_w}
\end{equation}
where $P_{x}^{w}$ denotes the x coordinate of the wrist key point, $\operatorname{sign}(\cdot)$ is defined in Eq.(\ref{equ:sign}), $C(\cdot)$ denotes the feature value in $C_k$, $t_j$ denotes the x coordinate of the $j$-th valley point, and $L$ denotes a hyperparameter related to the image width.
\begin{equation}
\operatorname{sign}(x)=\left\{\begin{array}{ll}
1, & x>0 \\
0, & x \leq 0
\end{array}\right.
\label{equ:sign}
\end{equation}
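Eq.(\ref{equ:P_w}) can be implemented in a few lines: for each candidate valley $t_j$, count how many of the $L$ samples in the surrounding window lie strictly above $C(t_j)$, and pick the valley with the highest count. The function name and candidate-valley input are our assumptions.

```python
def wrist_x(curve, valleys, L):
    """Among candidate valley columns t_j, pick the one whose surrounding
    window of L samples lies most consistently above C(t_j)."""
    def score(t):
        n = 0
        for i in range(1, L + 1):
            j = t - L // 2 + i
            if 0 <= j < len(curve) and curve[j] - curve[t] > 0:  # sign(.) = 1
                n += 1
        return n
    return max(valleys, key=score)
```

A sharp, well-isolated valley scores close to $L$, while a flat plateau scores low because its neighbors are not strictly above the candidate point, which is how the true wrist trough wins over noise dips.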
Concurrently, the point of the forearm mask most distal from the wrist point is taken as another feature point ${P_x}^n$. We uniformly interpolate and sample between the two locations based on the structure of the forearm, but at this point we only have the column coordinates of these points. Since the forearm mask is parallel to the X-axis of the image, we can directly obtain the coordinates of the upper and lower edge points from the change in pixel value, as shown in Figure \ref{figure_4}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{picture/figure_4.pdf}
\captionv{12}{Feature point search scheme}
{\textbf{Feature point search scheme.} This picture shows how to find the upper and lower edges of the forearm through the index of the feature point. The red dotted line represents the search path.
\label{figure_4}
}
\end{center}
\end{figure}
These extracted forearm boundary points are regarded as feature points of the forearm image. Note that the actual forearm length is the same across different imaging modalities for one person, so these feature points are in a one-to-one correspondence in practice. This completes the extraction and matching of feature points.
\subsection{Deformable Registration Framework}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=12cm]{picture/figure_5.pdf}
\captionv{12}{Non-rigid registration}
{\textbf{Non-rigid registration.} This picture shows the method of selecting control points for thin-plate spline interpolation. The red cross indicates the control point, and the green cross indicates the target position.
\label{figure_5}
}
\end{center}
\end{figure}
For deformable registration, the matrix $H$ of the affine transformation between the two images is calculated from the matching points extracted by FFRC, so floating images can be mapped onto fixed images by the affine transformation. The result of rigid registration is shown in Figure \ref{figure_6}. Subsequently, we use these matching points as the control points of the thin-plate spline interpolation for deformable registration and obtain the final deformed registration result. The selection of control points is shown in Figure \ref{figure_5}: following the physiological structure of the forearm, the red points are regarded as control points and the corresponding green points as target positions. It can be observed that the anatomical image of the forearm fits the forearm image well. The specific evaluation criteria are explained in the experimental section.
\newpage
\section*{Appendix}
\section*{References}
\addcontentsline{toc}{section}{\numberline{}References}
\vspace*{-20mm}
\input{./latex_template_MedPhys_2018.bbl}
\bibliographystyle{./medphy.bst}
\end{document}
1603.03850
\section{introduction}
Homological algebra is at the root of modern techniques in many areas of mathematics including commutative and noncommutative algebra, algebraic geometry, algebraic topology and representation theory. Not only do all these areas make use of homological methods, but homological algebra serves as a common language, making interactions between these areas possible and fruitful. A relative version of homological algebra is the area called Gorenstein homological algebra. This newer area started in the late 60's when Auslander introduced a class of finitely generated modules that have a complete resolution. Auslander used these modules to define the notion of the G-dimension of a finite module over a commutative noetherian local ring. Auslander and Bridger then extended the definition to two-sided noetherian rings (1969). The area really took off in the mid 90's, with the introduction of the Gorenstein (projective and injective) modules by Enochs and Jenda (\cite{enochs:95:gorenstein}). Avramov, Buchweitz, Martsinkovsky, and Reiten proved that if the ring $R$ is both right and left noetherian and $G$ is a finitely generated Gorenstein projective module, then Enochs and Jenda's definition agrees with Auslander and Bridger's notion of a module of G-dimension zero.
The Gorenstein flat modules were introduced by Enochs, Jenda and Torrecillas as another extension of Auslander's Gorenstein dimension.\\
The Gorenstein homological methods have proved to be very useful in characterizing various classes of rings. Also, methods and results from Gorenstein homological algebra have successfully been used in algebraic geometry, as well as in representation theory. But the main problem in using the Gorenstein homological methods is that they can only be applied when the corresponding Gorenstein resolutions exist. So the main open problems in this area concern identifying the type of rings over which Gorenstein homological algebra works. Of course one hopes that this is the case for any ring. But so far only the existence of the Gorenstein flat resolutions was proved over arbitrary rings (in \cite{yang:14:gorflat}, 2014). The existence of the Gorenstein projective resolutions and the existence of the Gorenstein injective resolutions are still open problems. And they have been studied intensively in recent years (see for example \cite{CH}, \cite{enochs:12:gorenstein.injective.covers}, \cite{EJL}, \cite{iacob:15:gor.flat.proj}, \cite{holm:05:gor.dim}, \cite{iacob:15:gor.flat}, \cite{murfet:11:gor.proj}).
The Gorenstein (projective, injective, flat) modules are defined in terms of totally acyclic complexes. We recall that an acyclic complex $P$ of projective $R$-modules ($R$ is an arbitrary ring) is called \emph{totally acyclic} if the complex $Hom(P,Q)$ is still exact for any projective module $Q$. A totally acyclic complex of injective modules is defined dually. And an exact complex $F$ of flat left $R$-modules is said to be \emph{$F$-totally acyclic} if $I \otimes F$ is exact for any injective right $R$-module $I$. A module $M$ is Gorenstein injective if and only if it is a cycle of a totally acyclic complex of injective modules. Dually, a module $G$ is Gorenstein projective if it is a cycle of a totally acyclic complex of projective modules. And a Gorenstein flat module is a cycle of an F-totally acyclic complex of flat modules.\\
An Iwanaga-Gorenstein ring (\cite{iwanaga:79:gor} and \cite{iwanaga:80:gor}) is a two-sided noetherian ring $R$ that has finite self-injective dimension on both sides. It is known that a commutative Gorenstein ring of finite Krull dimension is Iwanaga-Gorenstein. Over an Iwanaga-Gorenstein ring the exact complexes of projective (injective, flat) modules have some very nice homological properties. More precisely, over an Iwanaga-Gorenstein ring every acyclic complex of projective (injective) modules is totally acyclic, and every acyclic complex of flat modules is F-totally acyclic. So over such a ring the class of Gorenstein projective (injective, flat) modules coincides with that of the cycles of acyclic complexes of projective (injective, flat, respectively) modules.\\
It is a natural question to consider whether or not these conditions actually characterize Gorenstein rings, or more generally whether or not it is possible to characterize Gorenstein rings in terms of acyclic complexes of (Gorenstein) injectives, (Gorenstein) projectives and (Gorenstein) flats. This is one of the main goals of this paper. In the commutative case we encompass and extend recent results by Murfet and Salarian in \cite{murfet:11:gor.proj} and by Iyengar and Krause in \cite{IyengarKrause}.
We give equivalent characterizations of the condition that every acyclic complex of injective (flat, projective respectively) modules is totally acyclic. These characterizations involve $\widetilde{\mathcal{A}}$ complexes, as well as dg-$\mathcal{A}$ complexes, so we first recall the following definitions.\\
\begin{definition}
Let $\mathcal{A}$ be a class of $R$-modules. An acyclic complex $X$ is in $\widetilde{\mathcal{A}}$ if $Z_j(X) \in \mathcal{A}$ for all integers $j$.
\end{definition}
\begin{definition}
Let $(\mathcal{A}, \mathcal{B})$ be a cotorsion pair in $R$-Mod. A complex $Y$ is a dg-$\mathcal{A}$ complex if each $Y_n \in \mathcal{A}$ and if $\mathrm{Ext}^1(Y, U)=0$ for any complex $U$ in $\widetilde{\mathcal{B}}$.
\end{definition}
Throughout the paper we use $\mathcal{GI}$ to denote the class of Gorenstein injective modules, $\mathcal{GP}$ for the class of Gorenstein projective modules, and $\mathcal{GF}$ for that of Gorenstein flat modules.\\
We prove first (Proposition 3) that, over any ring $R$, every acyclic complex of projective modules is totally acyclic if and only if the cycles of every acyclic complex of Gorenstein projective modules are Gorenstein projective. Proposition 4 shows that the analogous result for injective and Gorenstein injective modules also holds over any ring $R$. And Proposition 5 proves that when the ring $R$ is GF-closed the analogous result for flat and Gorenstein flat modules also holds.\\
We then prove (Theorem 2) that when $R$ is a left noetherian ring, a third equivalent condition can be added to those in Proposition 4. More precisely we have:\\
\textbf{Theorem 2.} Let $R$ be a left noetherian ring. The following are equivalent:\\
1. Every acyclic complex of injective left $R$-modules is totally acyclic.\\
2. Every acyclic complex of Gorenstein injective left $R$-modules is in $\widetilde{\mathcal{GI}}$.\\
3. Every complex of Gorenstein injective left $R$-modules is dg-Gorenstein injective. \\%$dg (GI) = dw (GI)$.\\
Then, using Proposition 5, we prove the following result:\\
\textbf{Theorem 3.} Let $R$ be a left coherent ring. The following are equivalent:\\
1. Every acyclic complex of flat right $R$-modules is F-totally acyclic. \\
2. Every acyclic complex of Gorenstein flat right $R$-modules is in $\widetilde{\mathcal{GF}}$.\\
3. Every complex of Gorenstein flat right $R$-modules is dg-Gorenstein flat.
We also prove (Theorem 4) that when moreover $R$ is left coherent and right $n$-perfect (that is, every flat right $R$-module has finite projective dimension $\leq n$), then statements 1, 2, 3 in Theorem 3 are also equivalent to the following:\\
4. Every acyclic complex of projective right $R$-modules is totally acyclic.\\
5. Every acyclic complex of Gorenstein projective right $R$-modules is in $\widetilde{\mathcal{GP}}$.\\
6. Every complex of Gorenstein projective right $R$-modules is dg-Gorenstein projective.\\
We recall that a commutative noetherian ring $R$ is called Gorenstein if for each prime (resp., maximal) ideal $p$, $\mathrm{inj\,dim}_{R_p} R_p < \infty$ (Bass, \cite{Bass}, Section 1). We prove:\\
\textbf{Corollary 1} Let $R$ be a commutative noetherian ring. Then the following statements are equivalent:\\
1. $R$ is a Gorenstein ring.\\
2. Every acyclic complex of flat $R$-modules is F-totally acyclic. \\
3. Every acyclic complex of Gorenstein flat $R$-modules is in $\widetilde{\mathcal{GF}}$.\\
4. Every complex of Gorenstein flat $R$-modules is dg-Gorenstein flat.
If moreover $R$ has finite Krull dimension then Theorems 2 and 4 give that the following are equivalent (Corollary 2):\\
1. $R$ is an Iwanaga-Gorenstein ring.\\
2. Every acyclic complex of injective $R$-modules is totally acyclic.\\
3. Every acyclic complex of flat $R$-modules is F-totally acyclic. \\
4. Every acyclic complex of Gorenstein flat $R$-modules is in $\widetilde{\mathcal{GF}}$.\\
5. Every acyclic complex of Gorenstein injective $R$-modules is in $\widetilde{\mathcal{GI}}$.\\
6. Every complex of Gorenstein injective $R$-modules is dg-Gorenstein injective. \\%$dg (GI) = dw (GI)$.\\
7. Every complex of Gorenstein flat $R$-modules is dg-Gorenstein flat.\\
8. Every acyclic complex of projective $R$-modules is totally acyclic.\\
9. Every acyclic complex of Gorenstein projective $R$-modules is in $\widetilde{\mathcal{GP}}$.\\
10. Every complex of Gorenstein projective $R$-modules is dg-Gorenstein projective.
Our Corollary 2 improves on results by Iyengar and Krause (\cite{IyengarKrause}) and by Murfet and Salarian (\cite{murfet:11:gor.proj}). Iyengar and Krause proved that for a commutative noetherian ring $R$ with a dualizing complex, the class of acyclic complexes of injectives coincides with that of totally acyclic complexes of injectives if and only if $R$ is Gorenstein. Murfet and Salarian then removed the dualizing complex hypothesis and characterized Gorenstein rings in terms of totally acyclic complexes of projectives. We add further equivalent characterizations, still under the assumption that $R$ is commutative noetherian of finite Krull dimension.
We then consider a two-sided noetherian ring $R$ such that every acyclic complex of injective modules is totally acyclic. We prove (Proposition 7) that if furthermore $R$ satisfies the Auslander condition and has finite finitistic flat dimension, then every injective $R$-module has finite flat dimension. We use this result to prove the following characterization of Iwanaga-Gorenstein rings (Theorem 7):\\
Let $R$ be a two-sided noetherian ring of finite finitistic flat dimension that satisfies the Auslander condition. Then the following are equivalent:\\
1. $R$ is Iwanaga-Gorenstein.\\
2. Every acyclic complex of injective left $R$-modules is totally acyclic and every acyclic complex of injective right $R$-modules is totally acyclic.
\section{preliminaries}
Throughout the paper $R$ will denote an associative ring with unit (not necessarily commutative). The category of left $R$-modules will be denoted by $R\textrm{-Mod}$, and the category of unbounded complexes of left $R$-modules will be denoted by $Ch(R)$. Modules are, unless otherwise explicitly stated, left modules.
We recall that an acyclic complex of projective modules $P = \ldots \rightarrow P_1 \rightarrow P_0 \rightarrow P_{-1} \rightarrow \ldots $ is said to be \emph{totally acyclic} if for any projective $R$-module $Q$, the complex $\ldots \rightarrow Hom(P_{-1}, Q) \rightarrow Hom(P_0, Q) \rightarrow Hom(P_1, Q) \rightarrow \ldots$ is still acyclic.\\
A module $G$ is Gorenstein projective if it is a cycle of such a totally acyclic complex of projective modules ($G = Z_j(P)$, for some integer $j$).\\
Dually, an acyclic complex of injective modules $I= \ldots \rightarrow I_1 \rightarrow I_0 \rightarrow I_{-1} \rightarrow \ldots $ is said to be \emph{totally acyclic} if for any injective $R$-module $E$, the complex $\ldots \rightarrow Hom(E, I_1) \rightarrow Hom(E, I_0) \rightarrow Hom(E, I_{-1}) \rightarrow \ldots$ is still acyclic.\\
A module $M$ is Gorenstein injective if it is a cycle in a totally acyclic complex of injective modules ($M = Z_j(I)$ for some integer $j$).\\
We also recall that an acyclic complex of flat left $R$-modules, $F = \ldots \rightarrow F_1 \rightarrow F_0 \rightarrow F_{-1} \rightarrow \ldots $ is said to be \emph{F-totally acyclic} if for any injective right $R$-module $I$, the complex $\ldots \rightarrow I \otimes F_1 \rightarrow I \otimes F_0 \rightarrow I \otimes F_{-1} \rightarrow \ldots$ is still acyclic.\\
A module $H$ is Gorenstein flat if it is a cycle in an F-totally acyclic complex of flat modules ($H = Z_j(F)$ for some integer $j$).\\
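For a concrete instance of these notions (a standard textbook example, not taken from this paper), consider $R = \mathbb{Z}/4\mathbb{Z}$ and the complex in which every differential is multiplication by $2$:

```latex
P \;=\; \cdots \xrightarrow{\cdot 2} \mathbb{Z}/4\mathbb{Z}
\xrightarrow{\cdot 2} \mathbb{Z}/4\mathbb{Z}
\xrightarrow{\cdot 2} \mathbb{Z}/4\mathbb{Z}
\xrightarrow{\cdot 2} \cdots
```

Here $\mathrm{ker}(\cdot 2) = 2\mathbb{Z}/4\mathbb{Z} = \mathrm{im}(\cdot 2)$, so $P$ is acyclic with $Z_j(P) \cong \mathbb{Z}/2\mathbb{Z}$ for all $j$; applying $Hom(-, \mathbb{Z}/4\mathbb{Z})$ returns a complex of the same form, so $P$ is totally acyclic and $\mathbb{Z}/2\mathbb{Z}$ is Gorenstein projective over $\mathbb{Z}/4\mathbb{Z}$. This is consistent with the discussion above, since $\mathbb{Z}/4\mathbb{Z}$ is quasi-Frobenius and hence Iwanaga-Gorenstein.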
A pair of classes $(\mathcal{A}, \mathcal{B})$ in an abelian category $\mathcal D$ is called a {\it cotorsion pair} if $\mathcal{B}=\mathcal{A}^{\perp}$ and $\mathcal{A}=^{\perp}\!\!\! \mathcal{B}$, where for a given class of modules $\mathcal C$, the right orthogonal class $\mathcal C^{\perp}$ is defined to be the class of objects $Y$ in $\mathcal D$ such that $\mathrm{Ext}^1(C,Y)=0$ for all $C\in \mathcal{C}$. Similarly, we define the left orthogonal class $^{\perp}{\mathcal C}$. The cotorsion pair is called {\it hereditary} if $\mathrm{Ext}^i(A,B)=0$ for all $A\in\mathcal A$, $B\in \mathcal B$, and $i\geq 1$. The cotorsion pair is called {\it complete} if for each object $M$ in $\mathcal D$ there exist short exact sequences $0\to M\to B\to A\to 0$ and $0\to B'\to A'\to M\to 0$ with $A,A'\in \mathcal A$ and $B,B'\in \mathcal B$.
Following Gillespie \cite[Definition 3.3 and Proposition 3.6]{Gill04} there are four classes of complexes in $Ch(R)$ that are associated with a cotorsion pair $(\mathcal{A}, \mathcal{B})$ in $R$-Mod:\\
1. An acyclic complex $X$ is an $\mathcal{A}$-complex if $Z_j(X) \in \mathcal{A}$ for all integers $j$. We denote by $\widetilde{\mathcal{A}}$ the class of all acyclic $\mathcal A$-complexes.\\
2. An acyclic complex $U$ is a $\mathcal{B}$-complex if $Z_j(U) \in \mathcal{B}$ for all integers $j$. We denote by $\widetilde{\mathcal{B}}$ the class of all acyclic $\mathcal B$-complexes.\\
3. A complex $Y$ is a dg-$\mathcal{A}$ complex if each $Y_n \in \mathcal{A}$ and every map $Y\to U$ is null-homotopic for every complex $U\in \widetilde{\mathcal{B}}$. We denote by $dg (\mathcal{A})$ the class of all dg-$\mathcal{A}$ complexes. \\
4. A complex $W$ is a dg-$\mathcal{B}$ complex if each $W_n \in \mathcal{B}$ and every map $V\to W$ is null-homotopic for every complex $V\in \widetilde{\mathcal{A}}$. We denote by $dg (\mathcal{B})$ the class of all dg-$\mathcal{B}$ complexes.
Yang and Liu showed in \cite[Theorem 3.5]{yang:11:cotorsion} that when $(\mathcal{A}, \mathcal{B})$ is a complete hereditary cotorsion pair in $R$-Mod, the pairs $(dg (\mathcal{A}), \widetilde{\mathcal{B}})$ and $(\widetilde{\mathcal{A}}, dg (\mathcal{B}))$ are complete (and hereditary) cotorsion pairs. Moreover, by Gillespie \cite[Theorem 3.12]{Gill04}, we have that $\widetilde{\mathcal{A}}=dg (\mathcal{A})\cap \mathcal E$ and $\widetilde{\mathcal{B}}=dg (\mathcal{B})\cap \mathcal E$ (where $\mathcal E$ is the class of all acyclic complexes). For example, from the (complete and hereditary) cotorsion pairs $(Proj,R\textrm{-Mod})$ and $(R\textrm{-Mod},Inj)$ one obtains the standard (complete and hereditary) cotorsion pairs $(\mathcal E,dg(Inj))$ and $(dg(Proj),\mathcal E)$. Over a left noetherian ring $R$, the pair $(^\bot \mathcal{GI}, \mathcal{GI})$ is a complete hereditary cotorsion pair. This is essentially due to Krause in \cite[Theorem 7.12]{Krause} (see Enochs and Iacob \cite[Corollary 1]{enochs:12:gorenstein.injective.covers} for a precise formulation). Therefore $(dg (^\bot \mathcal {GI}), \widetilde{\mathcal {GI}})$ is a complete cotorsion pair in $Ch(R)$. \\ We recall that a ring $R$ is right $n$-perfect if each flat right $R$-module has projective dimension at most $n$. If $R$ is left coherent and right $n$-perfect, then the pair $(\mathcal{GP}, \mathcal{GP}^\bot)$ is also a complete hereditary cotorsion pair in the category of right $R$-modules (see Bravo, Gillespie and Hovey \cite[Proposition 8.10]{gillespie:14:stable} or Estrada, Iacob and Odaba\c s\i\ \cite[Proposition 7]{iacob:15:gor.flat.proj}). Hence the pair $(\widetilde{\mathcal{GP}}, dg (\mathcal{GP}^\bot))$ is a complete cotorsion pair. \\
Finally, if $R$ is a left coherent ring, then $(\mathcal{GF},\mathcal{GF}^{\perp})$ is a complete hereditary cotorsion pair (where $\mathcal{GF}$ is the class of Gorenstein flat right $R$-modules). This is due to Enochs, Jenda and L\'opez-Ramos in \cite{EJL}. The class $\mathcal{GF}^{\perp}$ is known as the class of {\it Gorenstein cotorsion modules} and it is usually denoted by $\mathcal{GC}$. So the pair
$(\widetilde{\mathcal{GF}}, dg (\mathcal{GC}))$ is a complete cotorsion pair.
We also recall that given a class of modules $\mathcal{A}$, we denote by $dw(\mathcal{A})$ the class of complexes $X$ such that each component $X_n$ is in $\mathcal{A}$.\\
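For a complete hereditary cotorsion pair $(\mathcal{A}, \mathcal{B})$, these classes of complexes are related (immediately from the definitions, together with \cite[Theorem 3.12]{Gill04}) by
$$\widetilde{\mathcal{A}} = dg(\mathcal{A}) \cap \mathcal{E} \subseteq dg(\mathcal{A}) \subseteq dw(\mathcal{A}),$$
and similarly $\widetilde{\mathcal{B}} = dg(\mathcal{B}) \cap \mathcal{E} \subseteq dg(\mathcal{B}) \subseteq dw(\mathcal{B})$. In particular, to show that a complex with components in $\mathcal{B}$ is a dg-$\mathcal{B}$ complex, only the orthogonality condition needs to be checked, the componentwise condition being automatic.\\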
\section{$\mathcal{A}$-periodic modules}
We prove in this section (Proposition 3) that over any ring $R$ the following statements are equivalent:\\
1. Every acyclic complex of projective modules is totally acyclic.\\
2. Every acyclic complex of Gorenstein projective modules is in $\widetilde{\mathcal{GP}}$.\\
Proposition 4 shows that the analogous result for injective and Gorenstein injective modules also holds over any ring $R$, and Proposition 5 proves that when the ring $R$ is GF-closed the analogous result for flat and Gorenstein flat modules holds as well.\\
We will work in a more general setting. Let $\mathcal{A}$ be a class of modules that is closed under isomorphisms.
\begin{definition}
A module $M$ is called an $\mathcal{A}$-periodic module if there exists a short exact sequence $0\rightarrow M\rightarrow A\rightarrow M\rightarrow 0$ with $A\in\mathcal{A}$.
\end{definition}
Assume that $\mathcal{A}$ is closed under direct sums. Then it is easy to see that the class of $\mathcal{A}$-periodic modules is closed under direct sums. Let $A = \ldots \rightarrow A_{n+1} \rightarrow A_n \rightarrow A_{n-1} \rightarrow \ldots $ be an acyclic complex. Then, following Fu and Herzog (\cite{FH}), the averaging complex of $A$ is the complex $$\oplus_{n\in \mathbb{Z}}\Sigma^nA=\cdots\to \oplus A_{n}\to \oplus A_n\to \oplus A_{n}\to\cdots$$ associated to $A$, defined as the coproduct of all the iterated suspensions and desuspensions of $A$. It is clear that the cycle of the averaging complex is an $\mathcal{A}$-periodic module, and every cycle of the complex $A$ is a direct summand of the cycle of the averaging complex. This trick was first used by Christensen and Holm in \cite[Proposition 7.6]{CH}, and later by Fu and Herzog in \cite{FH}, to show that a result of Neeman \cite[Theorem 8.6 and Remark 2.15]{neeman} (every acyclic complex of projectives with flat cycles is contractible) can be deduced from a result proved by Benson and Goodearl in \cite[Theorem 2.5]{BG}.
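To make the periodicity explicit: with the convention $Z_j(X)=\mathrm{Ker}(X_j\rightarrow X_{j-1})$, the averaging complex has the same cycles in every degree, namely $Z_j(\oplus_{n\in \mathbb{Z}}\Sigma^nA)\cong \oplus_{n\in\mathbb{Z}}Z_n(A)=:Z$ for all $j$, and, since $A$ is acyclic, the averaging complex splices into a short exact sequence
$$0\rightarrow Z\rightarrow \bigoplus_{n\in\mathbb{Z}}A_n\rightarrow Z\rightarrow 0.$$
Thus $Z$ is an $\mathcal{A}$-periodic module whenever $\mathcal{A}$ is closed under direct sums, and each $Z_n(A)$ is visibly a direct summand of $Z$.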
Now let $\mathcal{B}$ be a class of modules which is closed under isomorphisms and direct summands. The above trick gives the following easy observation:\\
\begin{proposition}\label{eq.periodic} Assume that $\mathcal{A}$ is closed under direct sums.
The following are equivalent:\\
1. The cycles of every acyclic $dw \mathcal{A}$-complex belong to $\mathcal{B}$.\\
2. Every $\mathcal{A}$-periodic module belongs to $\mathcal{B}$.
\end{proposition}
\begin{proof} 1. $\Rightarrow$ 2. Let $M$ be an $\mathcal{A}$-periodic module. Then there is a short exact sequence $0\to M\to A\to M\to 0$ with $A\in\mathcal{A}$. Splicing this sequence with itself yields an acyclic complex $\cdots\to A\to A\to A\to\cdots$ such that $M$ is a cycle of this complex. Then $M$ belongs to $\mathcal{B}$ by (1).\\
2. $\Rightarrow$ 1. Let $A=\cdots\to A_{n+1}\to A_{n}\to A_{n-1}\to\cdots$ be an acyclic $dw \mathcal{A}$-complex and take the averaging complex $\oplus_{n\in\mathbb{Z}}\Sigma^n A=\cdots\to \oplus A_{n}\to \oplus A_n\to \oplus A_{n}\to\cdots$ of $A$. Then one can see that every cycle of the averaging complex is an $\mathcal{A}$-periodic module, and hence belongs to $\mathcal{B}$. But every cycle of the complex $A$ is a direct summand of the cycle of the averaging complex, and hence belongs to $\mathcal{B}$, since $\mathcal{B}$ is closed under direct summands.
\end{proof}
If the class $\mathcal{A}$ is closed under direct products, then for a complex $A = \ldots \rightarrow A_{n+1} \rightarrow A_n \rightarrow A_{n-1} \rightarrow \ldots$ one can use the complex $$\prod_{n\in \mathbb{Z}}\Sigma^nA=\cdots\to \prod A_{n}\to \prod A_n\to \prod A_{n}\to\cdots$$ instead of the averaging complex of $A$: every cycle of $A$ is still a direct summand of the cycle of the complex $\prod_{n\in \mathbb{Z}}\Sigma^nA$. So we also have:
\begin{proposition}\label{eq.periodic1} Assume that $\mathcal{A}$ is closed under direct products.
The following are equivalent:\\
1. The cycles of every acyclic $dw \mathcal{A}$-complex belong to $\mathcal{B}$.\\
2. Every $\mathcal{A}$-periodic module belongs to $\mathcal{B}$.
\end{proposition}
Next let $\mathcal{B}$ be a class of modules closed under isomorphisms, direct summands and extensions, and assume that both $\mathcal{A}$ and $\mathcal{B}$ are closed under direct sums (resp., direct products). Moreover, we assume that $\mathcal{A} \subseteq \mathcal{B}$ and that every module in $\mathcal{B}$ appears as a cycle of some acyclic $dw \mathcal{A}$-complex. Then we have the following main theorem of this section.
\begin{theorem}The following are equivalent:\\
1. The cycles of every acyclic $dw \mathcal{A}$-complex belong to $\mathcal{B}$.\\
2. The cycles of every acyclic $dw \mathcal{B}$-complex belong to $\mathcal{B}$.\\
3. Every $\mathcal{A}$-periodic module belongs to $\mathcal{B}$.\\
4. Every $\mathcal{B}$-periodic module belongs to $\mathcal{B}$.\\
\end{theorem}
\begin{proof} Note that $1. \Leftrightarrow 3.$ and $2. \Leftrightarrow 4.$ follow from Proposition 1 or Proposition 2, and $4.\Rightarrow 3.$ is trivial since in this case $\mathcal{A}\subseteq\mathcal{B}$. So we only need to show $3. \Rightarrow 4.$ By our assumptions on $\mathcal{B}$ and (3), we see that $\mathcal{B}$ coincides with the class of cycles of acyclic $dw \mathcal{A}$-complexes.
Let $K$ be a $\mathcal{B}$-periodic module. Then there is an exact sequence $0\to K\to G\to K\to 0$ with $G\in\mathcal{B}$. Since $G$ belongs to $\mathcal{B}$, there is an exact sequence $0\to G_1\to E_0\to E^0\to G^1\to 0$ such that $G$ is the image of $E_0\to E^0$, $E_0$ and $E^0$ belong to $\mathcal{A}$, and $G_1$ and $G^1$ belong to $\mathcal{B}$. Composing the epimorphism $E_0\to G$ with the epimorphism $G\to K$, we get the following commutative diagram
$$\xymatrix{~&0\ar[d]&0\ar[d]&~&~\\
~&G_1\ar[d]\ar@{=}[r]&G_1\ar[d]&~&~\\
0\ar[r]&X_0\ar[r]\ar[d]&E_0\ar[r]\ar[d]&K\ar[r]\ar@{=}[d]&0\\
0\ar[r]&K\ar[r]\ar[d]&G\ar[r]\ar[d]&K\ar[r]&0\\
~&0&0&~&~}.$$
Similarly, we have a commutative diagram:\\
$$\xymatrix{~&~&0\ar[d]&0\ar[d]&~\\
0\ar[r]&K\ar[r]\ar@{=}[d]&G\ar[r]\ar[d]&K\ar[r]\ar[d]&0\\
0\ar[r]&K\ar[r]&E^0\ar[r]\ar[d]&X^0\ar[r]\ar[d]&0\\
~&~&G^1\ar[d]\ar@{=}[r]&G^1\ar[d]&~\\
~&~&0&0&~}.$$
The above two diagrams give us the following commutative diagram\\
$$\xymatrix{~&E_0\ar[rr]\ar[rd]&&E^0\ar[rd]&~\\
X_0\ar[ru]\ar[rd]&&K\ar[ru]\ar[rd]&&X^0\\
~&0\ar[ru]&&0\ar[ru]&~}$$
Now taking the pushout of the monomorphisms $K\to G$ and $K\to X^0$, we get the following commutative diagram\\
$$\xymatrix{~&0\ar[d]&0\ar[d]&~&~\\
0\ar[r]&K\ar[r]\ar[d]&X^0\ar[r]\ar[d]&G^1\ar[r]\ar@{=}[d]&0\\
0\ar[r]&G\ar[r]\ar[d]&Y^0\ar[r]\ar[d]&G^1\ar[r]&0\\
~&K\ar[d]\ar@{=}[r]&K\ar[d]&~&~\\
~&0&0&~&~}.$$
Since $\mathcal{B}$ is closed under extensions, and $G$ and $G^1$ belong to $\mathcal{B}$, we have $Y^0 \in\mathcal{B}$. Then there is an exact sequence $0\to Y^0\to E^1\to Y^1\to 0$ with $E^1\in\mathcal{A}$ and $Y^1\in\mathcal{B}$. Now composing the two monomorphisms $X^0\to Y^0$ and $Y^0\to E^1$, we get the following two commutative diagrams:
$$\xymatrix{~&~&0\ar[d]&0\ar[d]&~\\
0\ar[r]&X^0\ar[r]\ar@{=}[d]&Y^0\ar[r]\ar[d]&K\ar[r]\ar[d]&0\\
0\ar[r]&X^0\ar[r]&E^1\ar[r]\ar[d]&X^1\ar[r]\ar[d]&0\\
~&~&Y^1\ar[d]\ar@{=}[r]&Y^1\ar[d]&~\\
~&~&0&0&~}$$
and
$$\xymatrix{~&E_0\ar[rr]\ar[rd]&&E^0\ar[rd]\ar[rr]&&E^1\ar[rd]\\
X_0\ar[ru]\ar[rd]&&K\ar[ru]\ar[rd]&&X^0\ar[ru]\ar[rd]&&X^1\\
~&0\ar[ru]&&0\ar[ru]&~}.$$ Continuing this procedure, we get the following exact sequence
$$E_0\to E^0\to E^1\to\cdots .$$ Using the sequence $0\to G_1\to X_0\to K\to 0$ and an argument dual to the one above, we finally get an acyclic $dw \mathcal{A}$-complex
$$\cdots\to E_1\to E_0\to E^0\to E^1\to\cdots$$ with $K$ a cycle of this complex, so $K \in \mathcal{B}$, as desired.
\end{proof}
As applications, we obtain the results mentioned at the beginning of this section.\\
\begin{proposition}The following are equivalent:\\
1. Every acyclic complex of projective modules is totally acyclic;\\
2. The cycles of every acyclic complex of Gorenstein projective modules are Gorenstein projective.
\end{proposition}
\begin{proof}
Let $\mathcal{A}$ be the class of projective modules, and let $\mathcal{B}$ be the class of Gorenstein projective modules. Then $\mathcal{A}$ and $\mathcal{B}$ are closed under direct sums, direct summands, and extensions, and every Gorenstein projective module is, by definition, a cycle of an acyclic $dw \mathcal{A}$-complex. Note that $2 \Rightarrow 1$ is trivial. For $1 \Rightarrow 2$: if every acyclic complex of projective modules is totally acyclic, then every cycle of an acyclic complex of projective modules is Gorenstein projective. Therefore by Theorem 1, the cycles of every acyclic complex of Gorenstein projective modules are Gorenstein projective.
\end{proof}
If $\mathcal{A}$ is the class of injective modules, and $\mathcal{B}$ is the class of Gorenstein injective modules, then by an argument similar to the one above, we have the following result.
\begin{proposition}The following are equivalent:\\
1. Every acyclic complex of injective modules is totally acyclic.\\
2. The cycles of every acyclic complex of Gorenstein injective modules are Gorenstein injective.
\end{proposition}
Recall that a ring $R$ is GF-closed if the class of Gorenstein flat modules is closed under extensions. \\
If $\mathcal{A}$ is the class of flat modules, and $\mathcal{B}$ is the class of Gorenstein flat modules, then over a GF-closed ring $R$ we have the following result.
\begin{proposition}Let $R$ be a GF-closed ring. The following are equivalent:\\
1. Every acyclic complex of flat modules is F-totally acyclic.\\
2. The cycles of every acyclic complex of Gorenstein flat modules are Gorenstein flat.
\end{proposition}
\section{Totally acyclic complexes of injective, projective, and flat modules}
Theorem 2 below shows that when $R$ is a left noetherian ring, we can add a third equivalent statement to those in Proposition 4. The proof uses the fact that over such a ring $(^\bot \mathcal{GI}, \mathcal{GI})$ is a complete hereditary cotorsion pair, and therefore we have that the pair $(dg (^\bot \mathcal{GI}), \widetilde{\mathcal{GI}})$ is a complete cotorsion pair in $Ch(R)$.
\begin{theorem}
Let $R$ be a left noetherian ring. The following are equivalent:\\
1. Every acyclic complex of injective $R$-modules is totally acyclic.\\
2. Every acyclic complex of Gorenstein injective $R$-modules is in $\widetilde{\mathcal{GI}}$.\\
3. Every complex of Gorenstein injective $R$-modules is a dg-Gorenstein injective complex. \\% $dg (GI) = dw (GI)$.\\
\end{theorem}
\begin{proof}
1. $\Leftrightarrow$ 2. By Proposition 4.\\
2. $\Rightarrow$ 3. Let $X$ be a complex of Gorenstein injective $R$-modules. Since $(\mathcal{E}, dg (Inj))$ is a complete cotorsion pair, there is an exact sequence $0 \rightarrow A \rightarrow B \rightarrow X \rightarrow 0$ with $A$ a DG-injective complex and with $B$ an acyclic complex. Then for each $n$ there is an exact sequence $0 \rightarrow A_n \rightarrow B_n \rightarrow X_n \rightarrow 0$ with $A_n$ injective and with $X_n$ Gorenstein injective. It follows that each $B_n$ is Gorenstein injective. So $B$ is an acyclic complex of Gorenstein injective modules; by (2), $B$ is in $\widetilde{\mathcal{GI}}$, and therefore in $dg (\mathcal{GI})$. \\
Let $Y \in \widetilde{^\bot \mathcal{GI}}$. The exact sequence $0 \rightarrow A \rightarrow B \rightarrow X \rightarrow 0$ gives an exact sequence $0 =\mathrm{Ext}^1(Y, B) \rightarrow \mathrm{Ext}^1(Y,X) \rightarrow \mathrm{Ext}^2(Y, A)=0$ (since $Y$ is acyclic and $A$ is a DG-injective complex). It follows that $\mathrm{Ext}^1(Y,X)=0$ for any $Y \in \widetilde{^\bot \mathcal{GI}}$, so $X \in dg (\mathcal{GI})$. So we have that $dw (\mathcal{GI}) \subseteq dg (\mathcal{GI})$. The other inclusion always holds, thus $dg (\mathcal{GI}) = dw (\mathcal{GI})$.\\
3. $\Rightarrow$ 1. Let $X$ be an acyclic complex of injective $R$-modules. In particular, $X \in dw (\mathcal{GI})$ and by (3), $X$ is in $dg (\mathcal{GI})$. Since $X \in dg (\mathcal{GI})$ and $X$ is acyclic, it follows that $X \in \widetilde{\mathcal{GI}}$, and therefore $Z_n(X) \in \mathcal{GI}$ for all $n$. Thus $X$ is a totally acyclic complex.
\end{proof}
We recall that any left coherent ring is right GF-closed (see Bennis \cite[Proposition 2.2(1)]{B}).\\ The counterpart of Theorem 2 for flat and Gorenstein flat modules is the following:\\
\begin{theorem}
Let $R$ be a left coherent ring. Then the following are equivalent.\\
1. Every acyclic complex of flat right $R$-modules is F-totally acyclic. \\
2. Every acyclic complex of Gorenstein flat right $R$-modules is in $\widetilde{\mathcal{GF}}$.\\
3. Every complex of Gorenstein flat right $R$-modules is a dg-Gorenstein flat complex.
\end{theorem}
\begin{proof}
1. $\Leftrightarrow$ 2. by Proposition 5.\\
2. $\Rightarrow$ 3. Let $X$ be a complex of Gorenstein flat right $R$-modules. Since $(dg(Proj), \mathcal{E})$ is a complete cotorsion pair, there exists an exact sequence $0 \rightarrow X \rightarrow C \rightarrow D \rightarrow 0$ with $D \in dg(Proj)$ and with $C$ an acyclic complex. Then for each $n$ we have an exact sequence $0 \rightarrow X_n \rightarrow C_n \rightarrow D_n \rightarrow 0$ with $D_n$ projective (hence Gorenstein flat) and with $X_n$ a Gorenstein flat right $R$-module. Since $R$ is right GF-closed, it follows that each $C_n$ is a Gorenstein flat right $R$-module. Thus $C$ is an acyclic complex of Gorenstein flat right $R$-modules, so by (2), $C$ is in $\widetilde{\mathcal{GF}}$.\\
Let $A \in \widetilde{\mathcal{GC}}$. The exact sequence $0 \rightarrow X \rightarrow C \rightarrow D \rightarrow 0$ gives an exact sequence $0 = \mathrm{Ext}^1 (C,A) \rightarrow \mathrm{Ext}^1(X,A) \rightarrow\mathrm{Ext}^2(D,A) = 0$ (since $A$ is acyclic and $D$ is DG-projective). It follows that $\mathrm{Ext}^1(X,A) = 0$ for any $A \in \widetilde{\mathcal{GC}}$, so $X \in dg (\mathcal{GF})$.\\
3. $\Rightarrow$ 1. Let $Y$ be an acyclic complex of flat right $R$-modules. By (3), $Y \in dg (\mathcal{GF})$. Since $Y$ is also acyclic it follows that $Y$ is in $\widetilde{\mathcal{GF}}$. Therefore $Z_n(Y)$ is a Gorenstein flat right $R$-module for each $n$. So $Y$ is F-totally acyclic.
\end{proof}
Using Theorem 3 we obtain the following characterization of commutative Gorenstein rings.\\
\begin{corollary}
Let $R$ be a commutative noetherian ring. The following are equivalent:\\
1. $R$ is Gorenstein.\\
2. Every acyclic complex of flat $R$-modules is F-totally acyclic.\\
3. Every acyclic complex of Gorenstein flat $R$-modules is in $\widetilde{\mathcal{GF}}$.\\
4. Every complex of Gorenstein flat $R$-modules is dg-Gorenstein flat.
\end{corollary}
\begin{proof}
1. $\Leftrightarrow$ 2. by Murfet and Salarian \cite[Theorem 4.27]{murfet:11:gor.proj}.\par\noindent
By Theorem 3, (2), (3) and (4) are equivalent.
\end{proof}
We show that if $R$ is a left coherent and right $n$-perfect ring, then the equivalent characterizations from Theorem 3 can be extended to include the analogous results for Gorenstein projective modules. The proof uses the fact that over such a ring $R$ the pair $(\mathcal{GP}, \mathcal{GP}^\bot)$ is a complete hereditary cotorsion pair. As noted in Section 2, this gives a complete cotorsion pair, $(dg(\mathcal{GP}), \widetilde{\mathcal{GP}^\bot})$, in the category of complexes of right $R$-modules.
\begin{theorem}
Let $R$ be a left coherent and right $n$-perfect ring. The following statements are equivalent:\\
1. Every acyclic complex of flat right $R$-modules is F-totally acyclic. \\
2. Every acyclic complex of Gorenstein flat right $R$-modules is in $\widetilde{\mathcal{GF}}$.\\
3. Every complex of Gorenstein flat right $R$-modules is a dg-Gorenstein flat complex.\\
4. Every acyclic complex of projective right $R$-modules is totally acyclic.\\
5. Every acyclic complex of Gorenstein projective right $R$-modules is in $\widetilde{\mathcal{GP}}$.\\
6. Every complex of Gorenstein projective right $R$-modules is a dg-Gorenstein projective complex.
\end{theorem}
\begin{proof}
By Theorem 3, statements (1), (2) and (3) are equivalent, and by Proposition 3, statements (4) and (5) are equivalent.\\
2 $\Rightarrow$ 5. Let $X$ be an acyclic complex of Gorenstein projective right $R$-modules. Since the ring is left coherent and right $n$-perfect, by Christensen, Frankild and Holm \cite[Proposition 3.7]{christensen:06:ongorenstein}, every Gorenstein projective right $R$-module is Gorenstein flat. So $X$ is an acyclic complex of Gorenstein flat right $R$-modules, and by (2), $X \in \widetilde{\mathcal{GF}}$. Then, for each $j$, $Z_j(X) \in \mathcal{GF}$. By \cite[Proposition 5]{iacob:15:gor.flat.proj}, $G.p.d. Z_j(X) \le n$. Since we have an exact sequence $0 \rightarrow Z_{n+j}(X) \rightarrow X_{n+j-1} \rightarrow X_{n+j-2} \rightarrow \ldots \rightarrow X_{j+1} \rightarrow Z_j(X) \rightarrow 0$ with $G.p.d. Z_j(X) \le n$ and all the $X_i$'s Gorenstein projective right $R$-modules, it follows that $Z_{n+j}(X)$ is Gorenstein projective for all integers $j$. Replacing $j$ with $j-n$, we obtain that $Z_j(X)$ is Gorenstein projective for all $j$.\\
5. $\Rightarrow$ 6. Let $X$ be a complex of Gorenstein projective right $R$-modules. Since $(dg(Proj), \mathcal{E})$ is a complete cotorsion pair, there exists an exact sequence $0 \rightarrow X \rightarrow C \rightarrow D \rightarrow 0$ with $C$ acyclic and with $D$ a DG-projective complex. For each $n$, the exact sequence $0 \rightarrow X_n \rightarrow C_n \rightarrow D_n \rightarrow 0$ with both $X_n$ and $D_n$ Gorenstein projective right $R$-modules gives that each $C_n$ is Gorenstein projective. Thus $C$ is an acyclic complex of Gorenstein projective right $R$-modules, so by (5), $C$ is in $\widetilde{\mathcal{GP}}$. \\ Let $A \in \widetilde{\mathcal{GP}^\bot}$. The exact sequence $0 \rightarrow X \rightarrow C \rightarrow D \rightarrow 0$ gives an exact sequence $0 = \mathrm{Ext}^1(C,A) \rightarrow \mathrm{Ext}^1(X,A) \rightarrow \mathrm{Ext}^2(D,A) =0$ (since $D$ is DG-projective, and $A$ is acyclic).\\
Since $\mathrm{Ext}^1(X,A)=0$ for any $A \in \widetilde{\mathcal{GP}^\bot}$, and $(dg(\mathcal{GP}), \widetilde{\mathcal{GP}^\bot})$ is a cotorsion pair it follows that $X$ is a dg-Gorenstein projective complex.\\
6 $\Rightarrow$ 2. Let $X$ be an acyclic complex of Gorenstein flat right $R$-modules. Consider a partial projective resolution of $X$:\\
$0 \rightarrow Y \rightarrow P_{n-1} \rightarrow \ldots \rightarrow P_0 \rightarrow X \rightarrow 0$. Since $R$ is right $n$-perfect and each $X_j$ is Gorenstein flat, by \cite[Proposition 5]{iacob:15:gor.flat.proj} we have $G.p.d. X_j \le n$ for each $j$, so each $Y_j$ is Gorenstein projective. Then $Y$ is an acyclic complex of Gorenstein projective right $R$-modules, so, by (6), $Y$ is a dg-Gorenstein projective complex; being acyclic, $Y$ is in $\widetilde{\mathcal{GP}}$. Therefore $Z_j(Y) \in \mathcal{GP}$ for all $j$, so the exact sequence $0 \rightarrow Z_j(Y) \rightarrow Z_j(P_{n-1}) \rightarrow \ldots \rightarrow Z_j(P_0) \rightarrow Z_j(X) \rightarrow 0$ gives that $G.p.d. Z_j(X) \le n$ for all $j$. By \cite[Proposition 3.7]{christensen:06:ongorenstein} we have $G.f.d. Z_j(X) \le G.p.d. Z_j(X) \le n$. The exact sequence $0 \rightarrow Z_{j+n}(X) \rightarrow {X}_{j+n-1} \rightarrow \ldots \rightarrow {X}_{j+1} \rightarrow Z_j(X) \rightarrow 0$ with all $X_i$ Gorenstein flat and with $G.f.d. Z_j(X) \le n$ gives that $Z_{j+n}(X) \in \mathcal{GF}$ for all $j$. Replacing $j$ with $j-n$, we obtain that $Z_j(X)$ is Gorenstein flat for all integers $j$. So $X \in \widetilde{\mathcal{GF}}$.\\
\end{proof}
Using Theorem 2 and Theorem 4 we obtain the following:\\
\begin{corollary}
Let $R$ be a commutative noetherian ring of finite Krull dimension (for instance if $R$ has a dualizing complex). The following are equivalent:\\
1. $R$ is an Iwanaga-Gorenstein ring.\\
2. Every acyclic complex of injective modules is totally acyclic.\\
3. Every acyclic complex of flat $R$-modules is F-totally acyclic. \\
4. Every acyclic complex of Gorenstein flat $R$-modules is in $\widetilde{\mathcal{GF}}$.\\
5. Every acyclic complex of Gorenstein injective $R$-modules is in $\widetilde{\mathcal{GI}}$.\\
6. Every complex of Gorenstein injective $R$-modules is dg-Gorenstein injective. \\%$dg (GI) = dw (GI)$.\\
7. Every complex of Gorenstein flat $R$-modules is dg-Gorenstein flat.\\
8. Every acyclic complex of projective $R$-modules is totally acyclic.\\
9. Every acyclic complex of Gorenstein projective $R$-modules is in $\widetilde{\mathcal{GP}}$.\\
10. Every complex of Gorenstein projective $R$-modules is dg-Gorenstein projective.
\end{corollary}
\begin{proof}
1 $\Rightarrow$ 2 follows from \cite[Theorem 10.1.13(1)]{enochs:00:relative}.\par\noindent
By Theorem 2, 2 $\Leftrightarrow$ 5 $\Leftrightarrow$ 6.\\
2. $\Rightarrow$ 3. Let $F$ be an acyclic complex of flat modules. Then $F^+$ is an acyclic complex of injective modules. By hypothesis $F^+$ is totally acyclic.
This means that $Hom(I,F^+)$ is acyclic for every injective module $I$. But $Hom(I,F^+)\simeq (I\otimes F)^+$. So $(I\otimes F)^+$ is acyclic, which implies that $I\otimes F$ is acyclic, for every injective $I$. That is, $F$ is F-totally acyclic.\par\noindent
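For the reader's convenience, the isomorphism invoked above is the usual Hom-tensor adjunction for character modules: writing $(-)^+ = Hom_{\mathbb{Z}}(-, \mathbb{Q}/\mathbb{Z})$, we have
$$Hom(I, F^+) = Hom\big(I, Hom_{\mathbb{Z}}(F, \mathbb{Q}/\mathbb{Z})\big) \cong Hom_{\mathbb{Z}}(I\otimes F, \mathbb{Q}/\mathbb{Z}) = (I\otimes F)^+,$$
and a complex $C$ is acyclic if and only if $C^+$ is acyclic, since $\mathbb{Q}/\mathbb{Z}$ is an injective cogenerator of the category of abelian groups.\par\noindent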
By Theorem 4, we have that 3 $\Leftrightarrow$ 4 $\Leftrightarrow$ 7 $\Leftrightarrow$ 8 $\Leftrightarrow$ 9 $\Leftrightarrow$ 10.\\
3 $\Rightarrow$ 1. By Murfet and Salarian \cite[Theorem 4.27]{murfet:11:gor.proj}, the ring $R$ is Gorenstein. Since $R$ has finite Krull dimension, it follows that $inj.dim_R R < \infty$ (see for example \cite{Bass}, Section 1). So $R$ is an Iwanaga-Gorenstein ring.
\end{proof}
One of the main open problems in Gorenstein homological algebra is: ``What is the most general type of ring over which the class of Gorenstein injective modules is (pre)covering (respectively, (pre)enveloping)?''. We give a sufficient condition for $\mathcal{GI}$ to be both covering and enveloping.\\
We will use the following.\\
\begin{proposition}
Let $R$ be a two sided noetherian ring such that every acyclic complex of injective $R$-modules is totally acyclic. Then the character module of any Gorenstein injective $R$-module is a Gorenstein flat right $R$-module.
\end{proposition}
\begin{proof}
Let $_RG$ be a Gorenstein injective module. Then there exists an acyclic complex of injective $R$-modules $I = \ldots \rightarrow I_1 \rightarrow I_0 \rightarrow I_{-1} \rightarrow \ldots $ with $G = Z_0(I)$. Then $I^{++}$ is an acyclic complex of injective $R$-modules, so by hypothesis, $I^{++}$ is totally acyclic. Therefore $G^{++} = Z_0(I^{++})$ is Gorenstein injective. Since $(G^+)^+$ is Gorenstein injective, it follows that $G^+$ is Gorenstein flat (by Holm \cite[Theorem 3.6]{holm:05:gor.dim}).
\end{proof}
\begin{theorem}
Let $R$ be a two sided noetherian ring such that every acyclic complex of injective $R$-modules is totally acyclic. Then the class of Gorenstein injective modules is both covering and enveloping in $R\textrm{-Mod}$.
\end{theorem}
\begin{proof}
By Proposition 6, over such a ring $R$ the character module of any Gorenstein injective $R$-module is Gorenstein flat. By Iacob \cite[Theorems 3 and 5]{iacob:15:gor.inj}, the class of Gorenstein injective $R$-modules is both covering and enveloping.
\end{proof}
\begin{theorem}
Let $R$ be a two sided noetherian ring such that every acyclic complex of injective $R$-modules is totally acyclic. Then the class of Gorenstein flat right $R$-modules is preenveloping in the category of right $R$-modules.
\end{theorem}
\begin{proof}
Since over such a ring the character modules of Gorenstein injective modules are Gorenstein flat the result follows from Iacob \cite[Theorem 1]{iacob:15:gor.flat}.
\end{proof}
\section{Rings that satisfy the Auslander condition}
We recall that Bass proved that a commutative noetherian ring $R$ is an Iwanaga-Gorenstein ring if and only if the flat dimension of the $i$th term in a minimal injective resolution of $R$ is at most $i-1$, for all $i \ge 1$. In the non-commutative case, Auslander proved that this condition is left-right symmetric (see Fossum, Griffith and Reiten \cite[Theorem 3.7]{fossum:75:auslander}). In this case the ring is said to satisfy the \emph{Auslander condition}.\\
In \cite{huang:14:auslander} Huang introduces the notion of modules satisfying the Auslander condition.
We recall the definition (\cite{huang:14:auslander}): given a left noetherian ring $R$, a left $R$-module $M$ is said to satisfy the Auslander
condition if the flat dimension of the $i$th term in the minimal injective
resolution of $M$ is at most $i-1$ for any $i \ge 1$.\\
We also recall the following:\\
\textbf{Theorem} (part of \cite[Theorem 1.3]{huang:14:auslander}) If $R$ is a left noetherian ring then the following are
equivalent:\\
1. $_RR$ satisfies the Auslander condition.\\
2. $fd_R E^0(M) \le fd_RM$ for any $_RM$, where $E^0(M)$ is the injective envelope of $M$.\\
If moreover $R$ is left and right noetherian then the statements above are also equivalent to:\\
3. The opposite side versions of (1) and (2).\\
We recall that a ring $R$ has {\it finite finitistic flat dimension} if the supremum of the flat dimensions of the modules of finite flat dimension is finite. In the following we prove (Proposition 7) that if $R$ is two sided noetherian of finite finitistic flat dimension, such that $R$ satisfies the Auslander condition and every acyclic complex of injective $R$-modules is totally acyclic, then every injective $R$-module has finite flat dimension.\\
We recall that a module $M$ is \emph{strongly cotorsion} if $\mathrm{Ext}^1(F,M)=0$ for any module $F$ of finite flat dimension. By Yan \cite[Theorem 2.5 and Proposition 2.14]{yan:10:cotorsion}, if $R$ has finite finitistic flat dimension then $(\mathcal{F}, \mathcal{SC})$ is a complete hereditary cotorsion pair (where $\mathcal{F}$ denotes the class of modules of finite flat dimension and $\mathcal{SC}$ is the class of strongly cotorsion modules). We use this result to prove (Theorem 7) that if every acyclic complex of injective left $R$-modules is totally acyclic and every acyclic complex of injective right $R$-modules is totally acyclic then $R$ is an Iwanaga-Gorenstein ring.\\
We start with the following result:\\
\begin{lemma}
If $R$ is a two sided noetherian ring such that every acyclic complex of injective modules is totally acyclic, then any Gorenstein injective module is strongly cotorsion.
\end{lemma}
\begin{proof}
By Proposition 6, the character module of any Gorenstein injective $R$-module is a Gorenstein flat right $R$-module. By Iacob \cite[Lemma 2]{iacob:15:gor.inj}, we have that $K \in {}^\bot \mathcal{GI}$ if and only if $K^+$ is a Gorenstein cotorsion right $R$-module. Since for any flat module $K$ the character module $K^+$ is an injective (hence Gorenstein cotorsion) right $R$-module, it follows that any flat module is in $^\bot \mathcal{GI}$.\\
Let $C$ be a module of finite flat dimension. Then there is an exact sequence $0 \rightarrow F_n \rightarrow \ldots \rightarrow F_0 \rightarrow C \rightarrow 0$ with each $F_j$ flat. Let $G$ be a Gorenstein injective $R$-module. Then by the above $\mathrm{Ext}^l(C,G)=0$ for all $l \ge n+1$.\\
Also, there is an exact sequence $0 \rightarrow G_n \rightarrow E_{n-1} \rightarrow \ldots \rightarrow E_0 \rightarrow G \rightarrow 0$ with each $E_j$ injective and with all $\mathrm{Ker}(E_j \rightarrow E_{j-1})$ Gorenstein injective. Breaking it into short exact sequences $0\rightarrow K_{j+1}\rightarrow E_j\rightarrow K_j\rightarrow 0$ (with $K_0=G$ and $K_n=G_n$) and using that each $E_j$ is injective, dimension shifting gives $$\mathrm{Ext}^1(C,G)\simeq \mathrm{Ext}^2(C,K_1)\simeq \cdots \simeq \mathrm{Ext}^{n+1}(C,G_n)=0,$$ where the vanishing holds by the above, since $G_n$ is Gorenstein injective. So $G$ is strongly cotorsion.
\end{proof}
\begin{lemma}
Let $R$ be a two sided noetherian ring that satisfies the Auslander condition and such that every acyclic complex of injective $R$-modules is totally acyclic. Then every strongly cotorsion module has Gorenstein injective dimension $\le 1$.
\end{lemma}
\begin{proof}
Let $M$ be a strongly cotorsion module. Consider an exact sequence $0 \rightarrow M \rightarrow A \rightarrow L \rightarrow 0$ with $A$ injective. Since both $A$ and $M$ are strongly cotorsion it follows that $L$ is also strongly cotorsion. Since $R$ satisfies the Auslander condition, the injective envelope of $_R R$ is flat, and it follows (from Enochs and Huang \cite[Theorem 4.4(5)]{enochs:12:huang}) that the injective cover $I_0\to L$ is surjective. By Wakamatsu's lemma (\cite[Corollary 7.2.3]{enochs:00:relative}), $J_0=\mathrm{Ker}(I_0\to L)\in Inj^\bot$. Hence we have the short exact sequence $0 \rightarrow J_0 \rightarrow I_0 \rightarrow L \rightarrow 0$ with $I_0$ injective and $J_0 \in Inj^\bot$. Since $A$ is injective and $I_0 \rightarrow L$ is an injective precover, there is a commutative diagram:\\
\[
\begin{diagram}
\node{0}\arrow{e}\node{M}\arrow{s,r}{u}\arrow{e}\node{A}\arrow{s,r}{v}\arrow{e,t}{f}\node{L}\arrow{s,=}\arrow{e}\node{0}\\
\node{0}\arrow{e}\node{J_0}\arrow{e}\node{I_0}\arrow{e,t}{g}\node{L}\arrow{e}\node{0}
\end{diagram}
\]
So we have an exact sequence: $0 \rightarrow M \rightarrow J_0 \oplus A \rightarrow I_0 \rightarrow 0$ with both $M$ and $I_0$ strongly cotorsion modules. It follows that $J_0$ is strongly cotorsion. Then, by the same reasoning as above, there exists an exact sequence $0 \rightarrow J_1 \rightarrow I_1 \rightarrow J_0 \rightarrow 0$ with $I_1$ an injective module and with $J_1$ in $Inj^\bot$. In fact, since $J_0 \in Inj^\bot$, we have that $\mathrm{Ext}^1 (E, J_0) = \mathrm{Ext}^2(E, J_1)=0$ for any injective $R$-module $E$. \\
We show that $J_1$ is a strongly cotorsion module.\\
Let $F$ be a module of finite flat dimension. Consider the exact sequence $0 \rightarrow F \rightarrow E \rightarrow D \rightarrow 0$ with $E$ the injective envelope of $F$. Since $R$ satisfies the Auslander condition, $E$ has finite flat dimension. It follows that $D$ is also a module of finite flat dimension, so $\mathrm{Ext}^1(D, J_0)=0$. The exact sequence $0 \rightarrow F \rightarrow E \rightarrow D \rightarrow 0$ gives a long exact sequence $$0 = \mathrm{Ext}^1(E,J_1) \rightarrow {\rm Ext}^1(F,J_1) \rightarrow \mathrm{Ext}^2(D,J_1) \rightarrow \mathrm{Ext}^2(E, J_1)=0.$$ So $\mathrm{Ext}^1(F,J_1) \simeq \mathrm{Ext}^2(D,J_1)$.\\
Also, the exact sequence $0 \rightarrow J_1 \rightarrow I_1 \rightarrow J_0 \rightarrow 0$ gives the exact sequence: $0 =\mathrm{Ext}^1(D, J_0) \rightarrow \mathrm{Ext}^2(D, J_1) \rightarrow \mathrm{Ext}^2(D,I_1) = 0$. Thus $\mathrm{Ext}^2(D, J_1)=0$, and by the above, $\mathrm{Ext}^1(F,J_1)=0$ for any $R$-module $F$ of finite flat dimension. So $J_1$ is a strongly cotorsion module, and therefore its injective cover is a surjective map. Continuing this process, we obtain an acyclic left injective resolution of $L$: $\ldots \rightarrow I_2 \rightarrow I_1 \rightarrow I_0 \rightarrow L \rightarrow 0$. Pasting it together with a right injective resolution of $L$, we obtain an acyclic complex of injective modules: $\ldots \rightarrow I_2 \rightarrow I_1 \rightarrow I_0 \rightarrow E^0 \rightarrow E^1 \rightarrow \ldots $. By hypothesis, this is a totally acyclic complex. So $L$ is Gorenstein injective. Then the exact sequence $0 \rightarrow M \rightarrow A \rightarrow L \rightarrow 0$ with both $A$ and $L$ Gorenstein injective modules gives that $G.i.d. M \le 1$.\\
\end{proof}
\begin{lemma}
Let $R$ be a two-sided noetherian ring that satisfies the Auslander condition. If $V$ is a strongly cotorsion module of finite flat dimension, then $V$ is injective.
\end{lemma}
\begin{proof}
Consider the exact sequence $0 \rightarrow V \rightarrow \mathcal{E}(V) \rightarrow W \rightarrow 0$ with $\mathcal{E}(V)$ the injective envelope of $V$. Since $R$ satisfies the Auslander condition, $f.d. (\mathcal{E}(V)) < \infty$. It follows that $W$ also has finite flat dimension. Since $V$ is strongly cotorsion, $\mathrm{Ext}^1 (W,V)=0$. So the sequence splits, and therefore $\mathcal{E}(V) \simeq V \oplus W$. Thus $V$ is an injective module.
\end{proof}
\begin{proposition}
Let $R$ be a two-sided noetherian ring of finite finitistic flat dimension that satisfies the Auslander condition. If moreover every acyclic complex of injective $R$-modules is totally acyclic, then every strongly cotorsion $R$-module is Gorenstein injective.
\end{proposition}
\begin{proof}
Let $M$ be a strongly cotorsion $R$-module. Then its flat cover is injective (by \cite[Theorem 4.4(5)]{enochs:12:huang}), and therefore there exists an exact sequence $0 \rightarrow J \rightarrow I \rightarrow M \rightarrow 0$ with $I$ injective and with $J \in Inj^\bot$. Since $(\mathcal{F}, \mathcal{SC})$ is a complete cotorsion pair there is also an exact sequence $0 \rightarrow J \rightarrow U \rightarrow V \rightarrow 0$ with $U$ strongly cotorsion and with $V$ of finite flat dimension. \\
Since $I$ is an injective module we have a commutative diagram\\
\[
\begin{diagram}
\node{0}\arrow{e}\node{J}\arrow{s,=}\arrow{e} \node{U}\arrow{s} \arrow{e} \node{V}\arrow{s}\arrow{e}\node{0}\\
\node{0}\arrow{e}\node{J}\arrow{e}\node{I}\arrow{e}\node{M}\arrow{e}\node{0}
\end{diagram}
\]
and therefore an exact sequence $0 \rightarrow U \rightarrow I \oplus V \rightarrow M \rightarrow 0$.\\
Both $M$ and $U$ are strongly cotorsion, so $V$ is also strongly cotorsion. But $V$ has finite flat dimension. So by Lemma 3, $V$ is injective.\\
And by Lemma 2, $G.i.d. U \le 1$. The exact sequence $0 \rightarrow U \rightarrow I \oplus V \rightarrow M \rightarrow 0$ with $I \oplus V$ injective and with $G.i.d. U \le 1$ gives that $M$ is Gorenstein injective.
\end{proof}
\begin{corollary}
Let $R$ be a two-sided noetherian ring of finite finitistic flat dimension that satisfies the Auslander condition. If moreover every acyclic complex of injective $R$-modules is totally acyclic, then the class of strongly cotorsion $R$-modules coincides with that of the Gorenstein injective modules.
\end{corollary}
\begin{proof}
This follows from Lemma 1 and Proposition 7.
\end{proof}
We can prove now:\\
\begin{proposition}
Let $R$ be a two-sided noetherian ring of finite finitistic flat dimension such that $R$ satisfies the Auslander condition and every acyclic complex of injective $R$-modules is totally acyclic. Then every injective $R$-module has finite flat dimension.
\end{proposition}
\begin{proof}
By Corollary 3 above we have that $\mathcal{GI} = \mathcal{SC}$ in this case. It follows that $^\bot \mathcal{GI} = \mathcal{F}$ with $\mathcal{F}$ the class of modules of finite flat dimension. Since the class of injective modules is contained in $^\bot \mathcal{GI}$, we have that $Inj \subseteq \mathcal{F}$.
\end{proof}
We can give now the following characterization of noncommutative Iwanaga-Gorenstein rings:\\
\begin{theorem}
Let $R$ be a two-sided noetherian ring of finite finitistic flat dimension that satisfies the Auslander condition. The following are equivalent:\\
1. $R$ is an Iwanaga-Gorenstein ring.\\
2. Every acyclic complex of injective left $R$-modules is totally acyclic and every acyclic complex of injective right $R$-modules is totally acyclic.
\end{theorem}
\begin{proof}
1 $\Rightarrow$ 2. is known (\cite[Theorem 10.1.13]{enochs:00:relative}).\\
2 $\Rightarrow$ 1. Let $F$ be a flat left $R$-module. Then $F^+$ is an injective right $R$-module. By Proposition 7, $f.d. F^+ < \infty$. Therefore $inj.dim. F^{++} < \infty$. Since $R$ is left noetherian and $F \subseteq F^{++}$ is a pure submodule, it follows that $inj.dim. F \le inj.dim. F^{++} < \infty$ (\cite[Lemma 9.1.5]{enochs:00:relative}). In particular, for $F = R$ we obtain that $inj.dim._R R < \infty$.\\
Since $R$ is left and right noetherian and $inj.dim._R R < \infty$, it follows (\cite[Proposition 9.1.6]{enochs:00:relative}) that $inj.dim. R_R < \infty$. Thus $R$ is an Iwanaga-Gorenstein ring.
\end{proof}
\bibliographystyle{plain}
\section{Supporting information}
\subsection{I. Plasmon modes supported by the double layer}
The plasmon spectra are obtained from a self-consistent solution of Poisson's equation
\begin{multline}
\label{Poisson}
-q^2 \delta \varphi(z) + \frac{\partial^2 \delta \varphi (z)}{\partial z^2} = \\
-\frac{4\pi}{\kappa}\left[ \delta Q_{t}\delta(z-d/2) + \delta Q_{b}\delta(z+d/2) \right],
\end{multline}
the continuity equations
\begin{equation}
-i\omega\delta Q_{t,b}+ i {\bf q}{\delta {\bf j}_{t,b}}= \mp \delta J_{\rm tun},
\end{equation}
and the linear-response relation between current density and electric field, $\delta {\bf j}_{t,b}= \sigma_\parallel({\bf q},\omega) \delta{\bf E}_{t,b}$, $\delta J_{\rm tun} = G_\bot({\bf q},\omega) (\delta\varphi_{t} - \delta\varphi_{b} )$. Here ${\bf q}$ is the two-dimensional plasmon wave vector, $d$ is the distance between the layers, $\kappa$ is the background dielectric permittivity, $\delta Q_{t}$ and $\delta Q_{b}$ are the small-signal variations of the charge density in the top and bottom layers, respectively, $\sigma_\parallel$ and $G_\bot$ are the in-plane and tunnel conductivities (note that the dimensionalities of these quantities are different), and the indices $t$ and $b$ distinguish the quantities corresponding to the top and bottom layers. In the absence of built-in voltage, due to the electron-hole symmetry, the charge densities in the layers are equal in modulus and opposite in sign; moreover, the layer conductivities are equal. This allows us to seek solutions of Eq.~(\ref{Poisson}) that are symmetric and antisymmetric with respect to $z$. A straightforward calculation leads to the following dispersions~\cite{Voltage_controlled,Hwang_PRB_2GL}
\begin{equation}
\label{AntiSymmetric-disp}
1+\frac{2\pi i q}{\omega \kappa}\left[\sigma_\parallel ({\bf q},\omega) + \frac{2G_\bot ({\bf q},\omega )}{q^2}\right]\left( 1-e^{-qd} \right)=0
\end{equation}
for the antisymmetric (acoustic) mode, and
\begin{equation}
\label{Symmetric-disp}
1+\frac{2\pi i q}{\omega \kappa}\sigma_\parallel ({\bf q},\omega) \left( 1+e^{-qd} \right)=0
\end{equation}
for the symmetric (optical) mode.
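The algebra leading to the antisymmetric-mode dispersion can be verified symbolically. The sketch below is a check under the stated assumptions, not part of the original derivation: it imposes the antisymmetric ansatz $\delta Q_t = -\delta Q_b$, builds the top-layer potential from the Coulomb kernel of the two charged planes, and confirms that the charge-balance condition reduces to the quoted dispersion function; all symbol names are illustrative.

```python
# Symbolic check of the antisymmetric (acoustic) mode dispersion.
import sympy as sp

q, w, kappa, d = sp.symbols('q omega kappa d', positive=True)
sigma, G, dQ = sp.symbols('sigma G deltaQ')

# Top-layer potential of two delta-layers with dQ_b = -dQ_t (antisymmetric mode)
phi_t = (2*sp.pi/(kappa*q))*dQ*(1 - sp.exp(-q*d))

# Charge balance on the top layer:
#   -i*omega*dQ + i q.dj + dJ_tun = 0, with i q.dj = sigma*q**2*phi_t
#   and dJ_tun = G*(phi_t - phi_b) = 2*G*phi_t
balance = -sp.I*w*dQ + (sigma*q**2 + 2*G)*phi_t

# Dividing by -i*omega*dQ must reproduce the quoted dispersion function
lhs = sp.simplify(balance/(-sp.I*w*dQ))
quoted = 1 + (2*sp.pi*sp.I*q/(w*kappa))*(sigma + 2*G/q**2)*(1 - sp.exp(-q*d))
assert sp.simplify(lhs - quoted) == 0
```

The symmetric mode follows the same steps with $\delta Q_t = \delta Q_b$, in which case the tunnel term drops out and the factor $(1-e^{-qd})$ is replaced by $(1+e^{-qd})$.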
\begin{figure}[ht]
\includegraphics[width=0.9\linewidth]{Plasmon_spectra.eps}
\caption{
\label{PS}
Spectra of acoustic and optical plasmons supported by the double graphene layer calculated for the following parameters: Fermi energy $\varepsilon_F = 100$ meV, temperature $T=300$ K, insulator thickness $d=3$ nm, dielectric constant $\kappa = 5$.
}
\end{figure}
Both equations (\ref{AntiSymmetric-disp}) and (\ref{Symmetric-disp}) describe the zeros of the generalized polarizability of the double-layer structure:
\begin{multline}
\tilde\epsilon({\bf q},\omega) = \left\{1+\frac{2\pi i q}{\omega \kappa}\left[\sigma_\parallel ({\bf q},\omega) + \frac{2G_\bot ({\bf q},\omega )}{q^2}\right]\left( 1-e^{-qd} \right)\right\}\\
\left\{1+\frac{2\pi i q}{\omega \kappa}\sigma_\parallel ({\bf q},\omega) \left( 1+e^{-qd} \right)\right\}.
\end{multline}
The imaginary part of the inverse generalized polarizability is the spectral function of the surface plasmons,
\begin{equation}
{\cal S}({\bf q},\omega) = {\rm Im}\tilde\epsilon^{-1}({\bf q},\omega),
\end{equation}
the positions of its peaks determine the SP spectra, its sign determines whether the excitations are amplified or damped, and the width of the peaks determines the magnitude of plasmon damping or gain. As the generalized polarizability factorizes into two terms whose zeros yield the dispersions of the acoustic and optical modes, the spectral function ${\cal S}({\bf q},\omega)$ can also be represented as a product of the acoustic and optical plasmons' spectral functions:
\begin{equation}
{\cal S}({\bf q},\omega) = {\cal S}_{\rm ac}({\bf q},\omega){\cal S}_{\rm opt}({\bf q},\omega).
\end{equation}
The spectral functions of acoustic and optical SPs are depicted in Fig.~\ref{PS} for highly doped ($\varepsilon_F=100$ meV) closely located graphene layers ($d=3$ nm).
It is possible to write down analytical approximations to the plasmon spectra in the absence of tunneling. Being interested in the long-wavelength limit, $q d \ll 1$, we use the expansions $1-e^{-qd}\approx qd$ and $1+e^{-qd} \approx 2$. In the long-wavelength limit, the conductivity is essentially classical; moreover, the interband transitions do not affect the low-energy part of the spectra. With these assumptions, we use the following (collisionless) approximation for the conductivity, which follows from the solution of the kinetic equation:
\begin{equation}
{{\sigma }_{\bf{q}\omega }}=i g \frac{{e^2}}{\hbar }\frac{{\tilde{\varepsilon}}_F}{2\pi \hbar}\frac{\omega }{q^2v_0^2}\left[ \frac{\omega }{\sqrt{{\omega^2}- q^2 v_0^2}}-1 \right],
\end{equation}
where $\tilde\varepsilon_F = T \ln(1+e^{\varepsilon_F/T})$. Equation~(\ref{AntiSymmetric-disp}) admits an analytical solution $\omega(q)$ with a sound-like dispersion
\begin{equation}
\omega_- = v_0 \frac{1 + 4 \alpha_c q_F d}{\sqrt{1 + 8 \alpha_c q_F d}} q.
\end{equation}
Here, we have introduced the Fermi wave vector $q_F = \tilde\varepsilon_F/\hbar v_0$ and the coupling constant $\alpha_c = e^2/\hbar\kappa v_0$. The velocity of the acoustic mode always exceeds the Fermi velocity; thus, Landau damping is avoided. The dispersion equation for the optical mode $\omega_+(q)$ is cubic; however, in the long-wavelength limit the spatial dispersion of the conductivity can be neglected, as the phase velocity of this mode significantly exceeds the Fermi velocity. The approximate relation for $\omega_+(q)$ has the following form
\begin{equation}
\omega_+ \approx v_0 \sqrt{4 \alpha_c q q_F}.
\end{equation}
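The two limiting dispersions can be evaluated numerically for the parameters of Fig.~\ref{PS}. In the sketch below, $\hbar v_0 \approx 658$ meV$\cdot$nm and $e^2 \approx 1440$ meV$\cdot$nm (Gaussian units) are standard graphene numbers; the check confirms that the acoustic branch indeed stays above $v_0 q$, so Landau damping is avoided.

```python
import numpy as np

hbar_v0 = 658.2          # meV*nm, with v0 = 1e6 m/s (graphene Fermi velocity)
T, eps_F = 25.85, 100.0  # meV (T = 300 K and Fermi energy of Fig. S1)
d, kappa = 3.0, 5.0      # nm, background dielectric constant

eps_F_t = T*np.log1p(np.exp(eps_F/T))   # effective Fermi energy eps_F~
q_F = eps_F_t/hbar_v0                   # Fermi wave vector, 1/nm
alpha_c = (1440.0/hbar_v0)/kappa        # e^2/(hbar*kappa*v0), dimensionless

q = np.linspace(1e-3, 0.1, 200)         # 1/nm, long-wavelength range q*d << 1
w_minus = hbar_v0*q*(1 + 4*alpha_c*q_F*d)/np.sqrt(1 + 8*alpha_c*q_F*d)  # meV
w_plus = hbar_v0*np.sqrt(4*alpha_c*q*q_F)                               # meV

# acoustic mode stays above hbar*v0*q -> no Landau damping
assert np.all(w_minus > hbar_v0*q)
```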
\begin{figure}[ht]
\includegraphics[width=0.9\linewidth]{Shape_function.eps}
\caption{\label{Shape} Dimensionless electric potential (normalized by its on-plane value) in the acoustic and optical modes calculated for the double layer structure with $d=2.5$ nm and wave vector corresponding to $\hbar q v_0 = 100$ meV.
}
\end{figure}
In the subsequent calculations we shall also require the spatial dependence of the plasmon potential in the acoustic mode, which can be obtained from (\ref{Poisson}). It is convenient to present it as
\begin{equation}
\label{Plasm-potential}
\delta\varphi(z) = \delta\varphi_0 s(z),
\end{equation}
where $\delta\varphi_0$ is the electric potential on the top layer, and $s(z)$ is the dimensionless 'shape function' having the following form
\begin{equation}
\label{Shape-function}
s\left( z \right)=\left\{ \begin{aligned}
& {{e}^{q\left( z+d/2 \right)}},\,\,z<-d/2, \\
& -\frac{\sinh \left( qz \right)}{\sinh \left( qd/2 \right)},\,\,\left| z \right|<d/2, \\
& -{{e}^{-q\left( z-d/2 \right)}},\,\,z>d/2. \\
\end{aligned} \right.
\end{equation}
The spatial dependence of the shape functions for acoustic and optical modes is shown in Fig.~\ref{Shape}.
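As a consistency check, the shape function can be implemented directly; the sketch below takes the $z<-d/2$ branch as the decaying exponential $e^{q(z+d/2)}$ (so that the potential decays away from the layers, as required by the Poisson equation) and verifies the pinned on-layer values and continuity at $z = \pm d/2$. The parameter values roughly match Fig.~\ref{Shape}.

```python
import numpy as np

def shape(z, q, d):
    """Acoustic-mode shape function s(z); the z < -d/2 branch is the
    decaying exponential exp(+q(z+d/2))."""
    z = np.asarray(z, dtype=float)
    return np.where(z < -d/2, np.exp(q*(z + d/2)),
           np.where(z > d/2, -np.exp(-q*(z - d/2)),
                    -np.sinh(q*z)/np.sinh(q*d/2)))

q, d = 0.15, 2.5   # 1/nm and nm; hbar*q*v0 ~ 100 meV as in the figure
for z0, val in [(-d/2, 1.0), (d/2, -1.0)]:
    assert abs(shape(z0, q, d) - val) < 1e-12                # on-layer values
    assert abs(shape(z0 - 1e-9, q, d) - shape(z0 + 1e-9, q, d)) < 1e-6
```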
\subsection{II. Electron states in tunnel-coupled layers}
The tight-binding Hamiltonian of the tunnel-coupled graphene layers in the absence of the propagating plasmon [${\hat H}_0$ in Eq.~(\ref{Hamiltonian})] consists of the blocks describing the isolated graphene layers ${\hat H}_{G\pm}$ and the block describing tunnel hopping $\hat{\mathcal T}$. Such a description of the electron states is common for graphene bilayers with a possible interlayer twist~\cite{Twisted_GBL}. In more comprehensive theories, the $\hat{\mathcal T}$-matrix is affected by the band structure of the dielectric layer~\cite{Brey_PRA}. Here, for the sake of analytical tractability, we choose the tunneling matrix in its simplest form, applicable to the AA-stacked perfectly aligned graphene bilayer, $\hat{\mathcal T} = \Omega \hat I$, where $\Omega$ can be interpreted as the tunnel hopping frequency.
To estimate its value, we switch for a while from the tight binding to the continuum description of electron states in the $z$-direction. We model each graphene layer with a delta-well~\cite{SP-lasing}
\begin{equation}
U_{t,b}(z) = 2\sqrt{\frac{\hbar^2 U_b}{2m^*}} \delta (z - z_{t,b}),
\end{equation}
where the potential strength is chosen to reproduce the correct value of the electron work function $U_b$ from graphene to the surrounding dielectric, and $m^*$ is the effective electron mass in the dielectric. The effective Schr\"odinger equation in the presence of a voltage bias $\Delta/e$ between the graphene layers takes on the following form
\begin{multline}
-\frac{\hbar^2}{2m^*}\frac{\partial^2\Psi(z)}{\partial z^2} + \left[ U_t(z) + U_b(z) + U_F(z)\right]\Psi(z) = E \Psi(z),
\end{multline}
where $U_F$ is the potential energy created by the applied field
\begin{equation}
U_F\left( z \right)=\frac{\Delta}{2} \left\{ \begin{aligned}
& 1,\,\,z<-d/2, \\
& 2 z/d,\,\,\left| z \right|<d/2, \\
& -1,\,\,z>d/2. \\
\end{aligned} \right.
\end{equation}
The solutions of the effective Schr\"odinger equation are decaying exponentials at $|z| > d/2$, and a linear combination of Airy functions in the middle region $|z| < d/2$:
\begin{equation}
\Psi_M(z) = C {\rm Ai} \left(-z/a + \varepsilon \right) + D {\rm Bi} \left(-z/a + \varepsilon \right),
\end{equation}
where $\varepsilon= 2m^* |E| a^2/\hbar^2 $ is the dimensionless energy and $a = (\hbar^2 d/ 2 m^*\Delta)^{1/3}$ is the effective length in the electric field. A straightforward matching of the wave functions at the graphene layers yields the dispersion equation
\begin{widetext}
\begin{equation}
\label{Energy_spectrum}
\det \left( \begin{matrix}
{e^{-k_1 d/2}} & -\text{Ai}\left( d/2a+\varepsilon \right) & -\text{Bi}\left( d/2a+\varepsilon \right) & 0 \\
\left(2 k_b - k_1 \right){{e}^{-{k_1}d/2}} & -\frac{1}{a}\text{Ai}'\left( d/2a+\varepsilon \right) & -\frac{1}{a}\text{Bi}'\left( d/2a+\varepsilon \right) & 0 \\
0 & -\text{Ai}\left( -d/2a+\varepsilon \right) & -\text{Bi}\left( -d/2a+\varepsilon \right) & {{e}^{-{{k}_{2}}d/2}} \\
0 & -\frac{1}{a}\text{Ai}'\left( -d/2a+\varepsilon \right) & -\frac{1}{a}\text{Bi}'\left( -d/2a+\varepsilon \right) & \left( 2k_b - k_2 \right){e^{-k_2 d/2}} \\
\end{matrix} \right)=0,
\end{equation}
\end{widetext}
where $k_b = \sqrt{2m^* U_b/\hbar^2}$ is the decay constant of the bound state wave function in a single delta-well, $k_1 =\sqrt{2m^* (E + \Delta/2)/\hbar^2}$, $k_2 =\sqrt{2m^* (E - \Delta/2)/\hbar^2}$. Equation (\ref{Energy_spectrum}) yields two energy levels $E_l$ ($l = \pm 1$) which can be found only numerically (see Fig.~\ref{WF}A). The respective wave functions are shown in Fig.~\ref{WF}B, at strong bias they are {\it almost} the wave functions localized on the different layers (see the discussion below). Despite the complexity of Eq.~(\ref{Energy_spectrum}), the dependence of $E_l$ on the energy separation between layers $\Delta$ can be accurately modelled by
\begin{equation}
\label{Energy_spectrum_typical}
E_l (\Delta) = -U_b + \frac{l}{2} \sqrt{\left(E_{+1,\Delta=0} - E_{-1,\Delta=0}\right)^2 + \Delta^2}.
\end{equation}
The energy spectrum (\ref{Energy_spectrum_typical}) is typical for the tunnel coupled quantum wells~\cite{vasko_book}; the same functional dependence of energy levels on $\Delta$ is naturally obtained by diagonalizing the block Hamiltonian (\ref{Hamiltonian}),
\begin{equation}
\label{Energy_spectrum_block}
E_l (\Delta) = -U_b + l \sqrt{\Omega^2+ \frac{\Delta^2}{4}}.
\end{equation}
This allows us to estimate the tunnel coupling $\Omega$ as half the energy splitting of the states in the double-layer well in the absence of applied bias:
\begin{equation}
\Omega=\frac{1}{2}\left[E_{+1,\Delta=0} - E_{-1,\Delta=0} \right].
\end{equation}
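The two-level behavior of Eq.~(\ref{Energy_spectrum_block}) is easy to exercise numerically. The sketch below uses illustrative parameter values (the $\hbar\Omega \approx 10$ meV quoted later in the text; $U_b$ is an assumed work function) and checks that $\Omega$ equals half the zero-bias splitting and that at strong bias the levels approach the layer-localized values $-U_b \mp \Delta/2$.

```python
import numpy as np

# Two-level model of the tunnel-coupled wells:
#   E_l(Delta) = -U_b + l*sqrt(Omega^2 + Delta^2/4),  l = +/-1
U_b, Omega = 4000.0, 10.0   # meV; illustrative values, not fitted

def levels(Delta):
    r = np.sqrt(Omega**2 + Delta**2/4)
    return -U_b - r, -U_b + r   # (l = -1, l = +1)

# Omega is half the zero-bias splitting
E_m0, E_p0 = levels(0.0)
assert abs((E_p0 - E_m0)/2 - Omega) < 1e-12

# strong bias Delta >> Omega: levels approach -U_b -/+ Delta/2
E_m, E_p = levels(200.0)
assert abs(E_p - (-U_b + 100.0)) < 1.0
assert abs(E_m - (-U_b - 100.0)) < 1.0
```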
\begin{figure}[ht]
\includegraphics[width=0.9\linewidth]{Wave_Functions.eps}
\caption{\label{WF} Energy levels (A) and wave functions (B) of the tunnel-coupled graphene layers calculated for $\Delta=200$ meV and $2.5$ nm WS$_2$ as a tunnel barrier. Solid lines in (B) show the wave functions corresponding to $l=+1$ (red) and $l=-1$ (blue), while the dashed lines show the wave functions of the top and bottom layers obtained as a linear combination (\ref{LK}) of the eigen functions.
}
\end{figure}
The $l$-index governs the $z$-localization of an electron in the biased double quantum well. At large bias $\Delta \gg \Omega$, the delta-wells interact weakly; thus $l = +1$ corresponds to the state localized almost completely in the top layer, and $l = -1$ corresponds to the electron in the bottom layer. The wave functions corresponding to a relatively strong bias $\Delta = 200$ meV are shown in Fig.~\ref{WF}. It is simple to relate the true eigenfunctions $\Psi_+(z)$ and $\Psi_-(z)$ to the functions localized on the top and bottom layers, $\Psi_t(z)$ and $\Psi_b(z)$:
\begin{gather}
\label{LK}
\Psi_b = \cos\alpha \Psi_- + \sin\alpha \Psi_+,\\
\Psi_t = -\sin\alpha \Psi_- + \cos\alpha \Psi_+,
\end{gather}
where
\begin{equation}
\cos\alpha = \frac{2\Omega}{\sqrt{(2\Omega)^2 + (\Delta - \tilde\Delta)^2}}.
\end{equation}
At small bias $\Delta \ll \Omega$ the wave function of $l = +1$ is odd and that of $l = -1$ is even.
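The transformation (\ref{LK}) is an orthogonal rotation, so it preserves the orthonormality of the basis; the sketch below checks this for an assumed detuning $\Delta - \tilde\Delta$ (the values are illustrative).

```python
import numpy as np

# Rotation between eigenstates (psi_-, psi_+) and layer states (psi_b, psi_t).
Omega, detuning = 10.0, 190.0   # meV: 2*Omega and Delta - tilde(Delta), assumed
c = 2*Omega/np.hypot(2*Omega, detuning)   # cos(alpha) as in the text
s = np.sqrt(1 - c**2)                     # sin(alpha)

R = np.array([[c, s],                     # psi_b =  cos(a) psi_- + sin(a) psi_+
              [-s, c]])                   # psi_t = -sin(a) psi_- + cos(a) psi_+

# an orthogonal rotation keeps the layer states orthonormal
assert np.allclose(R @ R.T, np.eye(2))
# strong detuning: cos(alpha) -> 0, i.e. layer states ~ eigenstates
assert c < 0.2
```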
\subsection{III. Electron-plasmon interaction and solution of the quantum Liouville equation}
The presence of plasmon propagating along the double graphene layer results in an additional potential energy of electron
\begin{equation}
\label{Plasm_energy}
\delta \hat V ({\bf r},t) = e \delta\varphi(z) e^{i(q x-\omega t)},
\end{equation}
where we assume the direction of plasmon propagation to be along the $x$-axis; the dependence of the potential on the $z$-coordinate is given by Eqs.~(\ref{Plasm-potential}) and (\ref{Shape-function}). The additional terms in the Hamiltonian due to the vector potential are negligible, since the speed of light substantially exceeds the plasmon velocity.
\begin{figure}[ht]
\includegraphics[width=0.9\linewidth]{Overlap_Factors.eps}
\caption{\label{OF} Dependence of the overlap factors $S_{++}$ and $S_\pm$ of $\hat H_0$-eigenfunctions and dimensionless plasmon potential $s(z)$ calculated for the WS$_2$ (2.5 nm) dielectric layer.
}
\end{figure}
With our choice of the tight-binding basis functions as those localized on a definite layer and a definite lattice site, we would require 16 matrix elements of the potential energy (\ref{Plasm_energy}) connecting those basis states. However, it is more convenient to work out the matrix elements of (\ref{Plasm_energy}) connecting the {\it eigen} states of the Hamiltonian (\ref{Hamiltonian}). The good quantum numbers of these states are the in-plane momentum ${\bf p}$, the band index $s = \pm 1$ ($+1$ for the conduction band and $-1$ for the valence band), and the $l$-index discussed above. The respective matrix elements are
\begin{multline}
\bra{{\bf p}sl}\delta \hat V \ket{{\bf p}'s'l'} = \\
\delta_{{\bf p},{\bf p}'-{\bf q}} u^{ss'}_{{\bf pp}'} e \delta\varphi_0 \int_{-\infty}^{\infty}{\Psi^*_l(z) s(z) \Psi_{l'}(z)\,dz}.
\end{multline}
We introduce the shorthand notations for the overlap factors of dimensionless plasmon potential and eigen functions of coupled layers
\begin{gather}
S_{++} = \int_{-\infty}^{\infty}{\Psi^*_{+1}(z) s(z) \Psi_{+1}(z)\,dz},\\
S_{\pm} = \int_{-\infty}^{\infty}{\Psi^*_{+1}(z) s(z) \Psi_{-1}(z)\,dz},
\end{gather}
and, obviously, $S_{--} = -S_{++}$, $S_{\pm} = S_{\mp}$.
The dependence of the overlap factors $S_{++}$ and $S_{\pm}$ on the interlayer potential drop $\Delta$ is shown in Fig.~\ref{OF}. We note that these overlap factors depend weakly on the plasmon wave vector $q$ as long as it is much smaller than the electron wave function decay constant $k_b$. In this approximation, one can set $s(z)\approx 2z/d$ for $|z|<d/2$, $s(z)\approx 1$ at $z < - d/2$, and $s(z)\approx -1$ at $z > d/2$.
Having obtained the matrix elements of the electron-plasmon interaction, we pass to the solution of the quantum Liouville equation for the electron density matrix $\hat \rho$. In the linear response, the latter is decomposed as $\hat \rho = \hat \rho^{(0)} + \delta \hat\rho$, where $\delta \hat\rho$ emerges due to the plasmon field. This component of the density matrix is found from
\begin{equation}
\label{Neumann}
i\hbar\frac{\partial \delta \hat\rho}{\partial t} = [{\hat H}_0, \delta \hat\rho] + [\delta \hat V,\hat\rho^{(0)}].
\end{equation}
Considering the harmonic time dependence, Eq.~(\ref{Neumann}) is solved exactly (non-perturbatively) in the diagonal basis of $\hat H_0$. In this basis, the commutator
\begin{equation}
[{\hat H}_0,\delta \hat\rho]_{\alpha \beta} = (\varepsilon_\alpha - \varepsilon_\beta)\delta \rho_{\alpha \beta},
\end{equation}
where $\alpha$ and $\beta$ run over good quantum numbers ${\bf p}$, $s$ and $l$. Thus, one readily writes down the solution
\begin{equation}
\delta\rho_{\alpha\beta} = \frac{\left[\delta \hat V , \hat\rho^{(0)} \right]_{\alpha\beta}}{\hbar\omega + i\delta - (\varepsilon_\alpha - \varepsilon_\beta) }.
\end{equation}
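That this expression indeed solves the linearized Liouville equation can be checked directly with matrices: the sketch below builds a random Hermitian perturbation over a diagonal $\hat H_0$ (all numbers are arbitrary) and verifies that $\delta\hat\rho$ satisfies $(\omega + i\delta)\,\delta\hat\rho = [\hat H_0, \delta\hat\rho] + [\delta\hat V, \hat\rho^{(0)}]$ element by element.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
eps = np.array([0.0, 0.3, 0.7, 1.2])        # H0 eigenvalues, arbitrary units
H0 = np.diag(eps)
f = 1/(np.exp((eps - 0.5)/0.1) + 1)         # equilibrium occupations
rho0 = np.diag(f)

V = rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))
V = (V + V.conj().T)/2                      # Hermitian perturbation

w, delta = 0.9, 1e-3                        # frequency and infinitesimal
comm = V @ rho0 - rho0 @ V                  # [V, rho0]
denom = (w + 1j*delta) - (eps[:, None] - eps[None, :])
drho = comm/denom                           # the quoted solution

# drho solves (w + i*delta) drho = [H0, drho] + [V, rho0]
lhs = (w + 1j*delta)*drho
rhs = H0 @ drho - drho @ H0 + comm
assert np.allclose(lhs, rhs)
```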
The first-order correction $\delta \hat\rho$ is now expressed through the density matrix in the absence of the plasmon field, $\hat\rho^{(0)}$. A particular choice of $\hat\rho^{(0)}$ requires the solution of the kinetic equation in the voltage-biased tunnel-coupled layers; however, in several limiting cases the situation is greatly simplified~\cite{Kazarinov_Suris}. If the tunneling rate $\Omega$ is smaller than the electron energy relaxation rate $\nu_{\varepsilon}$ (e.g., due to phonons and carrier-carrier scattering), a quasi-equilibrium distribution function is established in each individual layer. In this situation, $\hat\rho^{(0)}$ is diagonal in the basis formed by the wave functions localized on the top and bottom layers, its elements being the respective Fermi distribution functions. In the other limiting case, when tunneling is stronger than scattering ($\Omega \gg \nu_\varepsilon$), the electron is 'collectivized' by the two layers, and the density matrix $\hat\rho^{(0)}$ is approximately diagonal in the basis of ${\hat H}_0$-eigenstates. For the parameters used in our calculations, $\hbar\Omega\approx 10$ meV exceeds the relaxation rate $\hbar \nu\approx 1$ meV, and the latter limiting case is justified. Setting $\rho^{(0)}_{\alpha\beta} = f_\alpha \delta_{\alpha\beta}$, where $f$ is the Fermi distribution function, we find
\begin{multline}
\bra{{\bf p},s,l}{\delta \hat \rho}\ket{{\bf p}'s'l'} = \\
= \delta\varphi_0 S_{ll'} u^{ss'}_{{\bf pp}'} \frac{f^{s'l'}_{{\bf p}'} - f^{sl}_{\bf p}}{\hbar \omega + i\delta - (\varepsilon^{sl}_{\bf p} - \varepsilon^{s'l'}_{{\bf p}'}) }.
\end{multline}
We note that a different choice of the zero-order density matrix also leads to the emergence of the negative tunnel conductivity, with a larger coefficient in front of $G_\bot$.
The subsequent calculation of the in-plane and tunnel conductivities is based on the following relations. From the charge conservation on the top layer one has
\begin{multline}
\label{Continuity-2}
\frac{\partial \delta Q_t}{\partial t} = - {\bf \nabla} {\delta \bf j} - \delta J_{\rm tun} = \\
q^2\left[\sigma_\parallel({\bf q},\omega) + 2\frac{G_\bot({\bf q},\omega)}{q^2}\right]\delta\varphi_0.
\end{multline}
On the other hand, the time derivative of the charge density can be obtained by statistical averaging of the operator
\begin{equation}
\frac{\partial Q_{\alpha\beta}}{\partial t} = \frac{i}{\hbar} Q_{\alpha \beta} (\varepsilon_\alpha - \varepsilon_\beta),
\end{equation}
where, as before, the indices $\alpha$ and $\beta$ run over in-plane momentum $\bf p$, band index $s$, and $z$-localization index $l$. The rule of statistical averaging of $\partial\hat Q/\partial t$ in extended form reads
\begin{multline}
\label{Averaging}
\frac{\partial \delta Q_t}{\partial t} = {\rm Tr}\frac{\partial \hat Q_t}{\partial t}\delta \hat\rho=\\
-\frac{i}{\hbar}\sum\limits_{{\bf p}ss'll'}{\bra{{\bf p}_+ s' l' }\hat Q\ket{{\bf p}_- s l}\bra{{\bf p}_- s l} \delta\hat\rho \ket{{\bf p}_+ s' l'} ( \varepsilon^{sl}_{\bf p_-} - \varepsilon^{s'l'}_{\bf p_+} ) }.
\end{multline}
Due to the linear dependence of $\delta\hat\rho$ on the plasmon potential amplitude $\delta\varphi_0$, the average time derivative of the charge density in Eq.~(\ref{Averaging}) is also a linear function of $\delta\varphi_0$. The proportionality coefficient, according to Eq.~(\ref{Continuity-2}), is the sought-for combination of conductivities $q^2 \sigma_\parallel + 2 G_\bot$. The terms in the sum, Eq.~(\ref{Averaging}), with non-equal $l$-indices are related to the tunnel conductivity, and those with equal $l$-indices -- to the in-plane conductivity.
Actually, the distinction between the in-plane and tunnel conductivities is meaningful only in the case of weak coupling (or strong bias $\Delta \gg \Omega$). In this case $S_{++} \cos\theta_M \rightarrow 1 $, and Eq.~(\ref{In-plane}) yields the conductivity of a single graphene layer~\cite{Falkovsky-Varlamov}. In the same limit, the tunnel conductivity, Eq.~(\ref{Out-plane}), possesses a small prefactor $S_{\pm} \sin\theta_M \propto e^{-2 k_b d}$, where $k_b$ is the decay constant of the electron wave function. In the opposite case of weak bias $\Omega \gtrsim \Delta$, the notions of in-plane and tunnel conductivities lose their meaning, as the states of the individual layers are highly mixed~\cite{DasSarma-PRL-tunnel-plasmon}. Ultimately, at zero bias, $\sigma_\parallel$ vanishes, which reflects the impossibility of electron transitions between states with the same $z$-symmetry under a perturbation odd in $z$.
However, even in the case of strong bias $\Delta \gg \Omega$, the in-plane conductivity of tunnel-coupled layers responding to the plasmon field is renormalized compared to its value for a single isolated layer in uniform field $\sigma_0$, namely
\begin{equation}
\sigma_\parallel = S_{++} \cos\theta_M \sigma_0.
\end{equation}
The factor $S_{++}<1$ comes from the broadening of the electron cloud beyond a single layer and the non-uniformity of the plasmon field. Loosely speaking, a part of the electron wave function feels a reduced magnitude of the plasmon field ${\cal E}_\parallel$ outside of the graphene layer. The factor $\cos\theta_M < 1$ comes from the mixing of the electron states of the individual layers forming the state with a definite value of $l$.
\subsection{IV. Analytical approximations to the in-plane and tunnel conductivity}
Despite the complex structure of Eqs.~(\ref{In-plane}) and (\ref{Out-plane}), several analytical approximations can be made in the frequency range of interest $\hbar \omega < 2\varepsilon_F$, where the plasmons are weakly damped -- at least for the real part of the conductivity, which determines absorption or gain. For brevity, in this section we work in 'god-given units' $\hbar = v_0 \equiv 1$. We start with the evaluation of the in-plane interband conductivity associated with the electron transitions from the valence band to the conduction band
\begin{multline}
{\rm Re} {\sigma_0}^{v\rightarrow c}( {\bf q},\omega )=-i g \frac{e^2}{\omega}\times\\
\sum\limits_{\bf{p}}{ {| {\bf v}_{{\bf p}{\bf p}'}^{vc} |^2} [f_{{\bf p}_-}^v-f_{{\bf p}_-}^c] \delta(\omega - \varepsilon_{\bf p_+} - \varepsilon_{\bf p_-})}.
\end{multline}
Here ${\bf v}^{vc}_{{\bf p}_+{\bf p}_-}$ is the interband matrix element of the velocity operator in graphene, $\hat{\bf v} = {\boldsymbol \sigma}$, and $\varepsilon_{\bf p} = p$ is the dispersion law. Knowing the eigenfunctions of the graphene Hamiltonian $\hat{H}_G = {\boldsymbol \sigma } {\bf p}$,
\begin{equation}
\ket{ s {\bf p}} =\frac{1}{\sqrt{2}}\left(
\begin{aligned}
& {{e}^{-i{{\theta }_{\bf{p}}}/2}} \\
& {s{e}^{i{{\theta }_{\bf{p}}}/2}} \\
\end{aligned} \right) e^{i {\bf p r}},
\end{equation}
one readily finds $\bra{c {\bf p}_-} \hat{v}_x \ket{v {\bf p}_+} = i \sin\left[(\theta_{{\bf p}+} + \theta_{{\bf p}-})/2\right]$. The subsequent calculations are conveniently performed in the elliptic coordinates
\begin{equation}
{\bf p} =\frac{q}{2}\left\{ \cosh u\cos v, \sinh u \sin v \right\}.
\end{equation}
In these coordinates, $|{\bf p}_\pm| = (q/2)[\cosh u \pm \cos v]$ and $|\bra{c {\bf p}_-} \hat{v}_x \ket{v {\bf p}_+}|^2\, dp_x\, dp_y = (q^2/4) \cosh^2 u\, \sin^2 v \, du\, dv$. This leads us to
\begin{multline}
\label{Inter-exact}
{\rm{Re}}\sigma^{v\rightarrow c}_0({\bf q},\omega) = \frac{e^2}{2\pi}\frac{\omega}{\sqrt{\omega^2 - q^2}}\int\limits_0^\pi dv \sin^2v \times \\
\left\{f_0\left[-\frac{\omega}{2} + \frac{q}{2} \cos v \right] - f_0\left[\frac{\omega}{2} + \frac{q}{2} \cos v \right] \right\}.
\end{multline}
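The elliptic-coordinate identities used in this derivation can be verified symbolically. The sketch below assumes ${\bf p}_\pm = {\bf p} \pm ({\bf q}/2)\hat{\bf x}$ with the plasmon wave vector along $x$, and checks both $|{\bf p}_\pm| = (q/2)(\cosh u \pm \cos v)$ and the Jacobian $dp_x\, dp_y = (q^2/4)(\cosh^2 u - \cos^2 v)\, du\, dv$.

```python
import sympy as sp

u, v, q = sp.symbols('u v q', positive=True)
px = (q/2)*sp.cosh(u)*sp.cos(v)
py = (q/2)*sp.sinh(u)*sp.sin(v)

# |p +/- (q/2) e_x|^2 = [(q/2)(cosh u +/- cos v)]^2
for sgn in (1, -1):
    mod2 = (px + sgn*q/2)**2 + py**2
    target = ((q/2)*(sp.cosh(u) + sgn*sp.cos(v)))**2
    assert sp.simplify(sp.expand((mod2 - target).rewrite(sp.exp))) == 0

# Jacobian of (u, v) -> (px, py)
J = sp.Matrix([[px.diff(u), px.diff(v)],
               [py.diff(u), py.diff(v)]]).det()
target_J = (q**2/4)*(sp.cosh(u)**2 - sp.cos(v)**2)
assert sp.simplify(sp.expand((J - target_J).rewrite(sp.exp))) == 0
```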
To proceed further, we note that in the domain of interest $\omega > q$ one always has $ q \cos v < \omega$. Due to this fact, the difference of distribution functions is a smooth function of $v$, while the prefactor $\sin^2v$ varies strongly. This allows us to integrate $\sin^2v$ exactly, and replace the difference of distribution functions with its angular average. This leads us to
\begin{multline}
\label{Inter-approx2}
{\rm{Re}}\sigma^{v \rightarrow c}_{0}({\bf q},\omega) \approx \frac{e^2}{4} \frac{T \omega}{q} \chi(q,\omega) \times\\
\ln\frac{\cosh \frac{\varepsilon_F}{T} + \cosh \frac{\omega + q}{2T}}{\cosh \frac{\varepsilon_F}{T} + \cosh \frac{\omega - q}{2T}},
\end{multline}
where we have introduced a resonant factor
\begin{equation}
\chi(q,\omega) = \frac{\theta(\omega)}{{\sqrt{\omega^2 - q^2}}}.
\end{equation}
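The quality of the angular-averaging step can be assessed numerically. In the sketch below, the common prefactor $e^2 \omega\, \chi(q,\omega)$ of the exact and approximate expressions is dropped, so only the angular integral $(1/2\pi)\int_0^\pi dv\,\sin^2 v\, [\ldots]$ is compared against the closed-form logarithm; the parameter values are illustrative and satisfy $\omega > q$.

```python
import numpy as np

T, eps_F = 25.85, 100.0           # meV (T = 300 K)
w, q = 150.0, 30.0                # hbar*omega and hbar*v0*q in meV, w > q

f0 = lambda e: 1.0/(1.0 + np.exp((e - eps_F)/T))   # Fermi function
v = np.linspace(0.0, np.pi, 20001)
df = f0(-w/2 + (q/2)*np.cos(v)) - f0(w/2 + (q/2)*np.cos(v))

# exact angular integral (trapezoidal rule), common prefactor dropped
y = np.sin(v)**2*df
exact = (1/(2*np.pi))*np.sum((y[1:] + y[:-1])/2*np.diff(v))

# closed-form approximation, same prefactor dropped
approx = (T/(4*q))*np.log(
    (np.cosh(eps_F/T) + np.cosh((w + q)/(2*T))) /
    (np.cosh(eps_F/T) + np.cosh((w - q)/(2*T))))

assert abs(exact - approx)/approx < 0.05   # agreement at the few-percent level
```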
Clearly, the neglect of spatial dispersion in the case of acoustic SPs with velocity slightly exceeding the Fermi velocity results in an underestimation of the real part of the interband conductivity and, hence, of the damping.
Similar approximations can be made to evaluate the interlayer interband conductivity; the only difference is that electrons in different layers have different chemical potentials. We present these results without derivation:
\begin{widetext}
\begin{gather}
\frac{2G_\bot^{v \rightarrow c}}{q^2} = -e^2 \frac{T\omega}{2 q}
\left\{\chi(q,\tilde\Delta - \omega) \ln\displaystyle{\frac{\cosh\frac{q + eV-\omega}{4T}}{\cosh\frac{q - eV + \omega}{4T}}} - \chi(q,\tilde\Delta + \omega) \ln\displaystyle{\frac{\cosh\frac{q + eV + \omega}{4T}}{\cosh\frac{q - eV - \omega}{4T}}} \right\},\\
\frac{2G_\bot^{c \rightarrow v}}{q^2} = -e^2\frac{T\omega}{2 q}
\left\{\chi(q, \omega - \tilde\Delta)
\ln\displaystyle{\frac{\cosh\frac{q + eV - \omega}{4T}}{\cosh\frac{q - eV +\omega}{4T}}} - \chi(q,- \tilde\Delta - \omega) \ln\displaystyle{\frac{\cosh\frac{q + eV + \omega}{4T}}{\cosh\frac{q - eV -\omega}{4T}}} \right\}.
\end{gather}
\end{widetext}
We now pass to the in-plane conductivity associated with intraband transitions. Here, we can restrict ourselves to the classical description of the electron motion, justified at frequencies $\omega \ll \varepsilon_F$ and wave vectors $q \ll q_F$ -- otherwise, strong interband SP damping takes place. One could, in principle, work out the terms with $s=s'$ and $l=l'$ in Eq.~(\ref{In-plane}); however, the accurate inclusion of carrier scattering in such equations is challenging. Instead, we use the kinetic equation to evaluate $\sigma_\parallel^{c\rightarrow c}$, since this formalism allows for a consistent inclusion of carrier scattering. One should, however, keep in mind that in the non-local case $q\neq 0$ a simple $\tau_p$-approximation does not conserve the number of particles. A particle-conserving account of collisions is achieved with the Bhatnagar-Gross-Krook collision integral~\cite{BGK-collisions} in the right-hand side of the kinetic equation,
\begin{multline}
\label{Kinetic-BGK}
-i\omega \delta f ({\bf p}) +i{\bf q v} \delta f ( {\bf p} ) + i e {\bf q v}{\delta\varphi}\frac{\partial {f_0}}{\partial \varepsilon }=\\
-\nu \left[ \delta f ( {\bf p} ) + \frac{d{{\varepsilon }_{F}}}{dn}\frac{\partial {f_0}}{\partial \varepsilon }\delta {n} \right].
\end{multline}
Here $\delta f ({\bf p})$ is the sought field-dependent correction to the equilibrium electron distribution function $f_0$, $\delta n$ is the corresponding correction to the electron density, ${\bf v} = {\bf p}/p$ is the quasi-particle velocity, and $\nu$ is the electron collision frequency, which is assumed to be energy-independent. The current density associated with the distribution function $\delta f ({\bf p})$ reads:
\begin{equation}
\label{Current_BGK}
\delta {\bf j} = - e g \sum_{\bf p}{
{\bf v} \frac{df_0}{d\varepsilon}
\frac{i e {\bf qv} \delta\varphi - i \nu (d\varepsilon_F/dn)\delta n }{\omega + i \nu - {\bf q v}}
}.
\end{equation}
Recalling the continuity relation between the small-signal variations of density and current, $\omega \delta n= {\bf q} \cdot \delta {\bf j}$, and evaluating the integrals in Eq.~(\ref{Current_BGK}), we find the in-plane intraband conductivity:
\begin{equation}
\label{Sigma_intra}
\sigma_{\rm{intra}}({\bf q},\omega) = \frac{i g e^2 \tilde\varepsilon_F}{(2\pi)^2q}\displaystyle{\frac{J_2(\frac{\omega + i \nu}{q})}{1 - \frac{i\nu}{2\pi\omega}J_1(\frac{\omega + i \nu}{q})}},
\end{equation}
where
\begin{equation}
J_n(x) = \int_0^{2\pi}{\frac{\cos^n\theta d\theta}{x - \cos\theta}}.
\end{equation}
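For a real argument $x>1$, both integrals $J_n$ admit elementary closed forms, which provide a convenient check of any implementation; in Eq.~(\ref{Sigma_intra}) the argument $(\omega+i\nu)/q$ is complex, and the same expressions extend by analytic continuation with the appropriate branch of the square root. The sketch below verifies only the real case:

```python
import math

def J_num(n, x, m=20000):
    """Numerical J_n(x) = ∫_0^{2π} cos^n θ / (x − cos θ) dθ, real x > 1 (Simpson rule)."""
    h = 2*math.pi/m
    def f(t): return math.cos(t)**n / (x - math.cos(t))
    s = f(0.0) + f(2*math.pi) + sum((4 if k % 2 else 2)*f(k*h) for k in range(1, m))
    return s*h/3.0

def J1_exact(x):
    # ∫ dθ/(x−cosθ) = 2π/√(x²−1), hence J_1 = 2π [ x/√(x²−1) − 1 ]
    return 2*math.pi*(x/math.sqrt(x*x - 1) - 1)

def J2_exact(x):
    # cos²θ/(x−cosθ) = −(x+cosθ) + x²/(x−cosθ), hence J_2 = 2π [ x²/√(x²−1) − x ]
    return 2*math.pi*(x*x/math.sqrt(x*x - 1) - x)
```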
Similar to the real part of the interband conductivity, the intraband absorption is generally larger in the non-local case $q\neq 0$ than in the local one. This difference is illustrated in Fig.~\ref{Net_Sigma}, where the local ($q=0$) expressions and the non-local expressions evaluated at the acoustic plasmon dispersion ($q = \omega /s$) are compared. This result agrees with recent measurements of the plasmon propagation length in graphene on hBN: the local Drude formula underestimated the plasmon damping, and accounting for non-locality was crucial to explain the experimental data~\cite{Principi_plasmon_loss_hBN}.
\begin{figure}[ht]
\includegraphics[width=0.9\linewidth]{ReSigmaNet.eps}
\caption{\label{Net_Sigma} Comparison of the real parts of the interband (red) and intraband (blue) conductivities of a single graphene layer evaluated in the local limit (dashed) and at finite wave vector corresponding to the acoustic SP dispersion $q = \omega /s$ (solid). The parameters used in the calculation are $\varepsilon_F = 100$ meV, $T=300$ K, $s=1.2v_0$. Acoustic phonons are considered as the main carrier relaxation mechanism.
}
\end{figure}
Finally, we provide analytical estimates for the intraband tunnel conductivity, which mainly governs the tunneling effects on the plasmon dispersion. After passing to the elliptic coordinates in the terms with $l\neq l'$ and $s=s'$, one readily finds
\begin{multline}
{\rm Re}\frac{2G^{c\rightarrow c}_\bot}{q^2} = - e^2 S_{\pm}\sin\theta_M\frac{\omega}{2\pi}\\
\left\{\psi(q,\omega - \tilde\Delta) I \left(\frac{q}{2T},\frac{eV-\omega}{2T}\right) - \psi(q,\omega + \tilde\Delta) I \left(\frac{q}{2T},\frac{eV + \omega}{2T}\right) \right\},
\end{multline}
where we have introduced the resonant factor associated with the intraband interlayer transitions
\begin{equation}
\psi(q,\omega) = \frac{1}{\sqrt{q^2 - \omega^2}},
\end{equation}
and an auxiliary dimensionless integral
\begin{equation}
I(\alpha,\beta) = \int\limits_1^\infty{dt\sqrt{t^2 - 1}\left[F(\alpha t - \beta) - F(\alpha t+\beta)\right]},
\end{equation}
where $F(\zeta) =(1+e^\zeta)^{-1}$ is the dimensionless Fermi function.
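The auxiliary integral $I(\alpha,\beta)$ has no elementary closed form, but its integrand decays as $e^{-\alpha t}$, so it is cheap to evaluate numerically. A minimal sketch (pure Python; the truncation of the upper limit and the grid size are assumptions adequate for $\alpha \gtrsim 0.1$):

```python
import math

def F(z):
    """Dimensionless Fermi function F(ζ) = 1/(1 + e^ζ), overflow-safe for large ζ."""
    if z > 50.0:
        return math.exp(-z)
    return 1.0 / (1.0 + math.exp(z))

def I_ab(alpha, beta, t_max=60.0, n=60000):
    """I(α,β) = ∫_1^∞ dt √(t²−1) [F(αt−β) − F(αt+β)], trapezoid rule, truncated at t_max."""
    h = (t_max - 1.0) / n
    def g(t):
        return math.sqrt(max(t*t - 1.0, 0.0)) * (F(alpha*t - beta) - F(alpha*t + beta))
    s = 0.5*(g(1.0) + g(t_max)) + sum(g(1.0 + k*h) for k in range(1, n))
    return s*h
```

Since $F$ is monotonically decreasing, the integrand is non-negative and $I$ grows with $\beta$ (i.e., with the bias $eV \pm \omega$).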
The collinear tunneling singularities are smeared in the presence of carrier scattering. To account for the latter, we replace the delta-peaked spectral functions of individual particles in the expressions for conductivity (\ref{In-plane}) and (\ref{Out-plane}) with Lorentz functions using the following rule
\begin{multline}
\label{Broadening}
\sum_{\bf p}{\frac{1}{\omega+i\delta - (\varepsilon^{sl}_{\bf p} - \varepsilon^{s'l'}_{\bf p'})}} = \\
\int{d\varepsilon d\varepsilon' \sum_{\bf p}{\frac{ \delta(\varepsilon - \varepsilon^{sl}_{\bf p}) \delta(\varepsilon' - \varepsilon^{s'l'}_{\bf p'}) }{\omega + i \delta - (\varepsilon - \varepsilon')}}} \Rightarrow \\
\frac{1}{(2\pi)^2}
\int{
d\varepsilon d\varepsilon'
\sum_{\bf p}{
\frac
{{\cal A}_{sl}({\bf p},\varepsilon) {\cal A}_{s'l'}({\bf p}',\varepsilon') }
{\omega + i \delta - (\varepsilon - \varepsilon')}
}
}.
\end{multline}
The spectral function is given by
\begin{equation}
{\cal A}_{sl}({\bf p},\varepsilon) = \frac{2 \Sigma''_{sl}({\bf p}, \varepsilon)}{[\varepsilon - \varepsilon^{sl}_{\bf p}]^2 + [\Sigma''_{sl}({\bf p}, \varepsilon)]^2},
\end{equation}
where we have taken the imaginary part of the self-energy as half the electron-phonon collision frequency evaluated at the Fermi surface~\cite{Vasko-Ryzhii}
\begin{equation}
2\Sigma''_{sl}({\bf p}, \varepsilon) = \left.\frac{\varepsilon}{T}\frac{D^2 T^2}{2\rho s^2 v_0^2}\right|_{\varepsilon = \varepsilon_F}.
\end{equation}
The approximation (\ref{Broadening}) corresponds to neglecting vertex corrections in the current-current correlator represented by the bubble diagram. Moreover, interlayer electron-phonon interactions are neglected. While these assumptions can hardly be rigorously justified, they do not affect much the calculated plasmon spectral functions and dispersions, because the plasmon spectra do not enter the domain of singular conductivity. Nevertheless, a more consistent account of scattering may lead to new results. As an example, Kazarinov and Suris have shown that the interference of scattering events in different layers can lead to a substantial decrease in the effective collision frequency governing the width of resonances in the dynamic tunnel conductivity~\cite{Kazarinov_Suris}. This effective collision frequency can be much smaller than the transport collision frequency and, a fortiori, the relaxation frequency. The extension of their results to the case of plasmon-assisted tunneling will be the subject of future work.
|
1603.03769
|
\section{Introduction}
\label{Introduction}
Thyroid cancer is one of the most common cancers, and its reported incidence has increased with the development of biomedical imaging modalities \cite{Wartofsky2010}.
There are several kinds of thyroid cancer, and most of them can be accurately diagnosed by using ultrasound tomography and fine needle aspiration cytology to examine the cellular structures \cite{Levi2013}.
Nevertheless, the diagnosis of follicular thyroid carcinoma remains difficult,
because neither ultrasound tomography nor fine needle aspiration cytology can distinguish malignant follicular tumors from benign ones.
Hence, another imaging modality is essential for the comprehensive diagnosis of thyroid cancer.
Diffuse optical tomography (DOT) has the potential to diagnose accurately and non-invasively whether a tumor is benign or malignant, based on differences in optical properties \cite{Yamada2014, Gibson2005}.
The optical properties (e.g., the absorption and scattering coefficients and the anisotropic factor) characterize
the strength of absorption and scattering of light by a turbid medium such as biological tissue.
Unlike a conventional image reconstruction algorithm for X-ray computed tomography, which essentially measures line integrals of the attenuation coefficients and solves a set of integral equations for them,
DOT needs an inversion process that reconstructs a tomographic image of the optical properties inside the tissue from a mathematical model describing the diffusive nature of photon migration in scattering media \cite{Arridge1999}.
Hence, obtaining a high quality DOT image requires both an accurate photon migration model and an efficient scheme for solving it.
Two types of governing equations for photon migration have mainly been used:
the radiative transport equation (RTE) and the diffusion equation (DE).
The RTE can accurately describe photon migration over wide ranges of time and length scales,
and numerical schemes for solving the RTE have been developed extensively using the finite-difference \cite{Klose2002, Klose2005}, finite-element \cite{Abdoulaev2003}, and finite-volume methods \cite{Marin2014}.
However, the computational loads for solving the RTE are heavy, because the RTE is an integro-differential equation with many independent variables.
For this reason, the RTE has usually been applied to small-sized media such as human fingers and rats.
Meanwhile, the DE can be applied to large-sized media (e.g., the human brain) \cite{Schweiger1993, Gao2002, Okawa2011}, because the diffusion approximation reduces the number of independent variables and hence the computational loads.
However, it is well understood that the DE is valid only in spatial and temporal regions where photons have undergone a sufficient number of scattering events.
In non-scattering or void regions, the DE is invalid and fails to describe photon migration \cite{Yoo1990, Hielsher1998, Yuan2009}.
The human neck is a large-sized and inhomogeneous medium consisting of the trachea,
arteries, veins, muscles, bones, adipose tissue, and so on.
The trachea is a void region where photons travel straight without being absorbed and scattered.
Also, the arteries and veins are highly absorbing regions, where photons are totally absorbed before undergoing a number of scattering events sufficient for the DE to be valid.
Considering the above-mentioned characteristics of the human neck,
an efficient scheme for solving the RTE is still in need of further development.
This paper develops a numerical scheme for solving the RTE using the finite-difference and discrete-ordinate methods
(the 3rd order upwind scheme and the 4th order Runge-Kutta method).
First, the validity of the developed scheme is confirmed for homogeneous media by comparison with analytical solutions.
After the validation, the developed scheme is applied to the investigation of photon migration in the human neck.
The following section provides an explanation of the numerical method based on the RTE.
Section 3 provides the numerical results for a 2D homogeneous square medium and an inhomogeneous medium modeled from an MR image of the human neck.
Finally, conclusions are given in section 4.
\section{Numerical method}
\subsection{Radiative transport equation and numerical scheme}
2D photon migration in turbid media such as biological tissues is accurately formulated by the RTE \cite{Chandra1960},
\begin{equation}
\left[ \frac{\partial}{v\partial t}+\bm{\Omega}\cdot \nabla+\mu_a(\bm{r})+\mu_s(\bm{r})\right]
I(\bm{r}, \bm{\Omega}, t)
= \mu_s(\bm{r}) \int_{2\pi} d\bm{\Omega}' P(\bm{r}, \bm{\Omega} \cdot \bm{\Omega}')I(\bm{r}, \bm{\Omega}', t) + q(\bm{r}, \bm{\Omega}, t),
\label{eq:2_1_1}
\end{equation}
where $I(\bm{r}, \bm{\Omega}, t)$ [W/cm rad] represents the intensity, which describes the photon energy flow
as a function of position, $\bm{r}=(x, y)$ [cm], angular direction,
$\bm{\Omega}=(\Omega_x, \Omega_y)=(\cos \theta, \sin \theta)$ with angle, $\theta$ [rad], and time, $t$ [ps].
$\mu_a(\bm{r})$ [1/cm] and $\mu_s(\bm{r})$ [1/cm] are the absorption and scattering coefficients, respectively,
$v$ [cm/ps] is the velocity of light in the turbid medium, $P(\bm{r}, \bm{\Omega} \cdot \bm{\Omega}')$ [1/rad] is the scattering phase function with $\bm{\Omega}$ and $\bm{\Omega}'$ denoting the scattered and incident directions, respectively, and $q(\bm{r}, \bm{\Omega}, t)\; \rm{[W/cm^2 \;rad]}$ is a source.
In this study, $P(\bm{r}, \bm{\Omega} \cdot \bm{\Omega}')$ is given by the Henyey-Greenstein function \cite{Henyey1941} in two dimensions \cite{Heino2003},
\begin{equation}
P(\bm{r}, \bm{\Omega} \cdot \bm{\Omega}')= \frac{1}{2 \pi} \frac{1-\{g(\bm{r})\}^2}{1+
\{ g(\bm{r}) \}^2-2g(\bm{r}) \bm{\Omega} \cdot \bm{\Omega}'},
\label{eq:2_1_2}
\end{equation}
where $g(\bm{r})$ represents the anisotropic factor defined as the average cosine of $P(\bm{r}, \bm{\Omega} \cdot \bm{\Omega}')$.
For simplicity, $q(\bm{r}, \bm{\Omega}, t)$ is given by an isotropic delta function.
The boundary condition under refractive-index mismatching is employed with the reflectivity, $R(n, \bm{\Omega} \cdot \hat{\bm{e}}_n(\bm{r}_b))$, calculated from Fresnel's and Snell's laws, where $n$ denotes the refractive index of the medium and $\hat{\bm{e}}_n(\bm{r}_b)$ denotes the outward normal vector at the boundary position, $\bm{r}_b$ \cite{Klose2005}.
In this study, the RTE is numerically solved based on the finite-difference and discrete-ordinate methods.
For numerical discretization, $x$, $y$, $\theta$, and $t$ are divided into $x_i=i \Delta x$ $(i \in \{0, \;\cdots, \;N_x\})$, $y_j=j \Delta y$ $(j \in \{0, \;\cdots, \;N_y\})$,
$\theta_k=k \Delta \theta$ $(k \in \{0, \;\cdots, \;N_{\theta} \})$, and $t_m=m \Delta t$
$(m \in \{0, \;\cdots, \;N_t\})$ with constant step sizes of
$\Delta x$, $\Delta y$, $\Delta \theta$, and $\Delta t$, respectively,
and numbers of grid nodes and timesteps, $N_x$, $N_y$, $N_{\theta}$, and $N_t$, respectively.
In the same manner, $\bm{\Omega}$ is discretized as $(\Omega_{kx}, \Omega_{ky})=(\cos \theta_k, \sin \theta_k)$,
and $I(\bm{r}, \bm{\Omega}, t)$ at the grid node and timestep is expressed as $I_{ijk}^m$.
For the finite-difference discrete-ordinate methods, previous papers mostly employ the 1st order scheme:
the 1st order upwind (1UW) scheme for $\bm{r}$, the extended trapezoidal rule for $\bm{\Omega}$,
and the forward Euler method (FE) for $t$ \cite{Fujii2014JQSRT, Klose1999}.
The 1st order scheme is stable and accurate if the spatial and temporal step sizes, $\Delta x$ and $\Delta t$, are sufficiently fine, but the fine step sizes result in heavy computational loads.
To reduce the computational loads, this study employs the 3rd order upwind (3UW) scheme and 4th order Runge-Kutta (4RK) method.
In the 3UW, the advection term in the $x$-direction, $\Omega_x \partial I (\bm{r}, \bm{\Omega})/\partial x$, is discretized as
\begin{equation}
\Omega_x \partial I (\bm{r}, \bm{\Omega})/\partial x \sim
\begin{cases}
\frac{\Omega_{kx}}{6 \Delta x} \left[ 2 I_{i+1 j k}+3 I_{ijk}-6I_{i-1jk}+I_{i-2jk} \right]& \Omega_{kx} \geq 0 \\
\frac{\Omega_{kx}}{6 \Delta x} \left[ - I_{i+2 j k}+6 I_{i+1jk}-3I_{ijk}-2I_{i-1jk} \right] & \Omega_{kx} < 0
\end{cases},
\label{eq:2_1_3}
\end{equation}
for the internal nodes.
For the grids near the boundary, the 1UW scheme is employed.
The advection term in the $y$-direction, $\Omega_y \partial I (\bm{r}, \bm{\Omega})/\partial y$, is also discretized in the same manner as in the $x$-direction.
The 3UW is slightly less stable than the 1st order scheme,
and oscillations appear at early arrival times and at positions very close to the source.
In this study, the medium is so large that the oscillations are negligibly small.
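The accuracy gain of the 3UW stencil can be checked directly on a smooth test function: the $\Omega_{kx}\geq 0$ branch of Eq.~(\ref{eq:2_1_3}) reproduces the first derivative with an $O(\Delta x^3)$ error, so halving $\Delta x$ should shrink the error roughly eightfold. A sketch (the test function and step sizes are illustrative):

```python
import math

def d3uw(f, x, h):
    """Third-order upwind approximation of f'(x) for positive advection velocity,
    i.e. the Ω_kx ≥ 0 branch of the 3UW stencil with Ω_kx = 1."""
    return (2*f(x + h) + 3*f(x) - 6*f(x - h) + f(x - 2*h)) / (6*h)

# Convergence check on a smooth profile: error ~ h³ f''''(x)/12
f, df = math.sin, math.cos
x0 = 1.0
e1 = abs(d3uw(f, x0, 0.10) - df(x0))
e2 = abs(d3uw(f, x0, 0.05) - df(x0))
```

The observed error ratio $e_1/e_2 \approx 8$ confirms the third-order convergence claimed for the scheme.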
Based on the extended trapezoidal rule, the scattering integral is calculated as
\begin{equation}
\int_{2\pi} d\bm{\Omega}' P(\bm{r}, \bm{\Omega}\cdot\bm{\Omega}')I(\bm{r}, \bm{\Omega}') \sim
\sum_{k'=1}^{N_{\theta}} w_{k'}f_{k'}P_{kk'} I_{ijk'},
\label{eq:2_1_4}
\end{equation}
where $w_{k'}=2\pi/N_{\theta}$ is a weight and $f_{k'}$ is a renormalizing factor.
To satisfy the normalization of the scattering phase function, $\int_{2\pi} d\bm{\Omega}' P=1$,
$f_{k'}$ is formulated with a slight modification of Liu's renormalization \cite{Liu2002},
$1/f_{k'}=\sum_{k''=1}^{N_{\theta}} w_{k''} P_{k'k''}$.
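The renormalization can be illustrated in a few lines: on the uniform periodic angular grid the discretized phase function matrix is circulant, so the factor is the same for every direction, and after renormalization every discrete row integrates exactly to one. A sketch (grid size and $g$ are illustrative):

```python
import math

def hg2d(g, cospsi):
    """2D Henyey-Greenstein phase function, Eq. (2.1.2)."""
    return (1.0/(2.0*math.pi)) * (1.0 - g*g) / (1.0 + g*g - 2.0*g*cospsi)

N, g = 48, 0.9
w = 2.0*math.pi/N                          # equal trapezoidal weights on the periodic grid
theta = [k*w for k in range(N)]
P = [[hg2d(g, math.cos(theta[k] - theta[kp])) for kp in range(N)] for k in range(N)]

# Renormalizing factors 1/f_k = Σ_{k'} w P_{kk'}; identical for all k by symmetry
f = [1.0/sum(w*P[k][kp] for kp in range(N)) for k in range(N)]

# After renormalization, each discrete row of the phase function sums to 1
row = [sum(w*f[kp]*P[k][kp] for kp in range(N)) for k in range(N)]
```

Without the factor, the sharply peaked $g=0.9$ kernel is undersampled by the coarse angular grid and the discrete normalization is violated; the renormalization restores it exactly.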
In matrix notation, the RTE discretized in all variables except the time, $t$, is given as
an algebraic equation,
\begin{equation}
\frac{\partial}{v\partial t}\bm{I} + \bm{A}\bm{I}=\bm{P}\bm{I}+\bm{Q},
\label{eq:2_1_5}
\end{equation}
where the matrix $\bm{A}$ represents the second, third, and fourth terms on the left-hand side of Eq. (\ref{eq:2_1_1}),
the matrix $\bm{P}$ represents the scattering integral (Eq. (\ref{eq:2_1_4})) multiplied by $\mu_s(\bm{r})$,
and the vectors $\bm{I}$ and $\bm{Q}$ are the intensity and the source, respectively.
Based on the 4RK, the temporal change in the intensity is calculated as
\begin{equation}
\bm{I}^{m+1} \sim \bm{I}^{m}+\frac{1}{6}(\bm{k}_1+2\bm{k}_2+2\bm{k}_3+\bm{k}_4),
\label{eq:2_1_6}
\end{equation}
where $\bm{k}_l$($l=1$, 2, 3, 4) are given by the following equations,
\begin{eqnarray}
\bm{k}_1 &=&v \Delta t (-\bm{A}+\bm{P})\bm{I}^{m}, \nonumber \\
\bm{k}_2 &=&v \Delta t (-\bm{A}+\bm{P})(\bm{I}^{m}+0.5\bm{k}_1), \nonumber \\
\bm{k}_3 &=&v \Delta t (-\bm{A}+\bm{P})(\bm{I}^{m}+0.5\bm{k}_2), \nonumber \\
\bm{k}_4 &=&v \Delta t (-\bm{A}+\bm{P})(\bm{I}^{m}+\bm{k}_3).
\label{eq:2_1_7}
\end{eqnarray}
Because the 4RK is an explicit method, the time step should satisfy the Courant-Friedrichs-Lewy (CFL) condition, $\Delta t \le \Delta x/(2v)$, for stable solutions \cite{Klose1999, Guo2001}.
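The update of Eqs.~(\ref{eq:2_1_6})-(\ref{eq:2_1_7}) can be sketched for a generic right-hand side; a scalar test problem then verifies the expected fourth-order convergence. The numerical values of $v$ and $\Delta x$ in the CFL comment are illustrative, not taken from the text:

```python
import math

def rk4_step(rhs, I, dt):
    """One step of Eqs. (2.1.6)-(2.1.7) for dI/(v dt) folded into rhs: I' = rhs(I)."""
    k1 = dt * rhs(I)
    k2 = dt * rhs(I + 0.5*k1)
    k3 = dt * rhs(I + 0.5*k2)
    k4 = dt * rhs(I + k3)
    return I + (k1 + 2*k2 + 2*k3 + k4) / 6.0

# CFL bound for the explicit scheme (illustrative v [cm/ps] and dx [cm]):
v, dx = 0.0214, 0.02
dt_max = dx / (2.0 * v)

# Scalar convergence check: dI/dt = -I, exact solution e^{-t}
def solve(n):
    I, dt = 1.0, 1.0/n
    for _ in range(n):
        I = rk4_step(lambda y: -y, I, dt)
    return I

e1 = abs(solve(10) - math.exp(-1))
e2 = abs(solve(20) - math.exp(-1))
```

Halving $\Delta t$ reduces the error by a factor close to $2^4 = 16$, as expected for a fourth-order method.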
In this study, parallel computation is implemented on a 24-thread computer
(Intel Xeon X5690 @ 3.47 GHz) by using OpenMP,
a portable shared-memory parallel programming scheme.
\subsection{Geometry, optical properties, and step sizes in discretization}
To construct the human neck model, this study uses an MR image of the human neck, as shown in Fig. \ref{fig1} (a).
Segmentation of the MR image is performed, and the primary organs in the human neck, such as the common carotid artery, internal jugular vein, trachea, spinal cord, and spine, are extracted as shown in Fig. \ref{fig1} (b).
Other tissues, such as muscle, bone, and adipose tissue, are collected into a homogeneous background.
Then, the irregular boundaries of the organs and background in the image are mapped onto regular grids for the numerical calculations.
At the surface of the human neck, Fresnel's law is employed for the boundary condition, which describes the reflection and refraction at the boundary due to the difference in refractive index between the medium and the surrounding air.
Meanwhile, we assume that the refractive index inside the medium is homogeneous,
so neither reflection nor refraction takes place at the interfaces between different organs, and no boundary condition is used at the interfaces.
The optical properties of the organs and background tissues in the human neck in the near-infrared wavelength range are listed in Table \ref{tab:1}, referring to \cite{Bashkatov2011, Dehaes2011},
and the properties of the thyroid gland are considered to be the same as those of the background.
Strictly speaking, the arteries have optical properties different from those of the veins, but
it is simply assumed that both have the same optical properties.
Except for the trachea, the tissues are assumed to have a constant value of $g=0.9$, corresponding to highly forward-peaked scattering.
Appropriate values of the step sizes depend on the optical properties:
$\Delta x$ and $\Delta t$ are inversely proportional to $\mu'_s=\mu_s(1-g)$ and $\mu_a$, and
$\Delta \theta$ should be smaller as $g$ approaches one.
In each case, we set appropriate values of the step sizes.
\begin{figure}[tb]
\centering
(a)
\hspace{-0.9cm}
\mbox{\raisebox{0.2cm}{\includegraphics[scale=0.24]{mri_neck_resize_backalpha.eps}}}
\hspace{0.2cm}
(b)
\mbox{\raisebox{0.0cm}{\includegraphics[scale=0.35]{humanneck_grid_withText-crop.eps}}}
\hspace{-1.0cm}
\caption{(a) MR image of the human neck, (b) primary organs after the segmentation}
\label{fig1}
\end{figure}
\begin{table}
\centering
\caption{Optical properties of the primary organs and background in the human neck \cite{Bashkatov2011, Dehaes2011}}
\label{tab:1}
\begin{tabular}{lccc}
\hline
& $\mu_a$[1/cm]
& $\mu_s$[1/cm]
& $g$\\
\hline
Artery and vein & 4.76 & 675 & 0.9 \\
Trachea & 0.0001 & 0.0001 & 0.0 \\
Spinal cord & 0.17 & 882 & 0.9 \\
Spine & 0.25 & 148 & 0.9 \\
Background & 0.30 & 80.0 & 0.9 \\
\hline
\end{tabular}
\end{table}
\section{Numerical results}
\subsection{Validation of the developed numerical scheme}
Before discussing the results of photon migration in the human neck model,
the developed numerical scheme is validated for the case of a homogeneous 2D square medium with a side of 4.0 cm by comparison with analytical solutions for infinite media.
When the source is located inside the medium far from the boundary,
the numerical solutions of the intensities at positions far from the boundary are only weakly influenced by the boundary.
Then, comparisons of the numerical solutions for finite media with the analytical solutions for infinite media are reasonable.
In this subsection, an isotropic impulse point source is incident at ($x$, $y$) = (1.5 cm, 2.0 cm), and the intensities at (2.5 cm, 2.0 cm) are calculated.
At first, the case of isotropic scattering ($g=0.0$) is examined because
the exact analytical solution of the RTE has been given by Paasschens \cite{Paasschens1997} as follows,
\begin{equation}
\Phi_{ER}(\rho, t)
=v\frac{\delta(vt-\rho)}{2\pi \rho}e^{-\mu_t v t}
+\frac{v \mu_s}{2\pi} \frac{\exp(\mu_s \sqrt{(vt)^2-\rho^2})}{\sqrt{(vt)^2-\rho^2}}e^{-\mu_t v t}\Theta(vt-\rho),
\label{eq:3_1}
\end{equation}
where $\Phi$ is the fluence rate defined as $\int_{2\pi} d\bm{\Omega} I$, $\Theta$ is the Heaviside step function, $\mu_t=\mu_a+\mu_s$, and $\rho$ is the distance from the source.
Also, the analytical solution of the DE \cite{Chandra1943}, $\Phi_{DE}$, is given as,
\begin{equation}
\Phi_{DE}(\rho, t) = \frac{1}{4\pi D v t} e^{-\mu_a v t} e^{-\frac{\rho^2}{4Dvt}},
\label{eq:3_2}
\end{equation}
where the diffusion coefficient $D$ is given as $[2(1-g)\mu_s]^{-1}$ for the 2D time-dependent case \cite{Furutsu1994, Pierrat2006}.
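Both benchmark solutions are straightforward to implement. The sketch below (pure Python) codes the diffuse part of Eq.~(\ref{eq:3_1}) with the ballistic delta term omitted, and Eq.~(\ref{eq:3_2}) written with the prefactor $1/(4\pi D v t)$ for dimensional consistency with its exponent; the exponentials are combined to avoid overflow, and the speed $v \approx c/n$ with $n=1.4$ is an illustrative assumption:

```python
import math

def phi_er(rho, t, v, mua, mus):
    """Diffuse part of the 2D Paasschens solution, Eq. (3.1); ballistic delta term omitted.
    Zero before the ballistic front vt = rho (Heaviside factor)."""
    mut = mua + mus
    if v*t <= rho:
        return 0.0
    root = math.sqrt((v*t)**2 - rho**2)
    # exponents combined: exp(mus*root) * exp(-mut*v*t) can each over/underflow separately
    return v*mus/(2.0*math.pi) * math.exp(mus*root - mut*v*t) / root

def phi_de(rho, t, v, mua, mus, g=0.0):
    """2D diffusion solution, Eq. (3.2), with D = [2(1-g) mus]^{-1}."""
    D = 1.0 / (2.0*(1.0 - g)*mus)
    return 1.0/(4.0*math.pi*D*v*t) * math.exp(-mua*v*t - rho**2/(4.0*D*v*t))

# Background optical properties from Table 1; v ≈ c/1.4 is an assumption
v, mua, mus = 0.0214, 0.30, 80.0   # cm/ps, 1/cm, 1/cm
```

Note that $\Phi_{DE}$ is already nonzero before the ballistic front arrives, reflecting the effectively infinite signal speed of the DE, whereas $\Phi_{ER}$ vanishes identically for $vt < \rho$.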
The absorption and scattering coefficients are given the same values as those of either
(a) the background: $\mu_a=0.30$ $\rm cm^{-1}$ and $\mu_s=80.0$ $\rm cm^{-1}$,
or (b) the artery: $\mu_a=4.76$ $\rm cm^{-1}$ and $\mu_s=67.5$ $\rm cm^{-1}$ (equal to the artery's reduced scattering coefficient $\mu'_s$), as listed in Table \ref{tab:1}.
The former and latter cases correspond to weakly and strongly absorbing media, respectively.
The step sizes are given as (a) $\Delta x=\Delta y=0.02$ cm, $\Delta \theta=0.13$ rad, and $\Delta t=0.3$ ps, and (b) $\Delta x=\Delta y=0.005$ cm, $\Delta \theta=0.13$ rad, and $\Delta t=0.07$ ps, respectively.
Figure \ref{fig2} shows the time-resolved profiles of $\Phi$ calculated numerically and analytically at $\rho=1.0$ cm.
As shown in Fig. \ref{fig2} (a), the numerical solution of the RTE based on the 3UW+4RK agrees very well with the analytical solutions in the case of the weakly absorbing medium.
Meanwhile, the numerical solution based on the 1UW+FE deviates from the analytical solutions.
This result indicates that for the 1st order scheme the current values of $\Delta x$ and $\Delta t$ are too coarse, and smaller values are necessary to obtain good agreement with the analytical solutions.
Note that $\Phi_{DE}$ is in good agreement with $\Phi_{ER}$ and the numerical solution based on the higher order scheme.
This is because $\rho$ is sufficiently large for the DE to hold.
The results of the case of the strongly absorbing medium are shown in Fig. \ref{fig2} (b), where $\Phi$ has sharp peaks at earlier times than in Fig. \ref{fig2} (a) due to stronger absorption.
Similarly to the case of the weakly absorbing medium (Fig. \ref{fig2} (a)),
the numerical solution based on the 3UW+4RK agrees very well with $\Phi_{ER}$,
while the solution based on the 1UW+FE deviates from $\Phi_{ER}$.
It is seen that the curve of $\Phi_{DE}$ has a similar shape to that of $\Phi_{ER}$ even for the strongly absorbing medium,
although the peak time and width of the curve of $\Phi_{DE}$ are slightly different from those of $\Phi_{ER}$.
\begin{figure}[tb]
\centering
(a)
\includegraphics[scale=0.365]{Back_iso.eps}
\hspace{1.0 cm}
(b)
\includegraphics[scale=0.36]{Art_iso.eps}
\caption{Normalized time-resolved profiles of $\Phi$ for the RTE and the DE in the case of the isotropic scattering ($g=0.0$) at $\rho=1.0$ cm: pluses and circles represent the numerical solution based on the 1UW+FE, and the 3UW+4RK, respectively, and red and blue solid curves the analytical solutions, $\Phi_{ER}$ and $\Phi_{DE}$, respectively.
The optical properties are given as (a) $\mu_a=0.30\;\rm cm^{-1}$, $\mu_s=80.0\;\rm cm^{-1}$, and $g=0.0$ and (b) $\mu_a=4.76\;\rm cm^{-1}$, $\mu_s=67.5\;\rm cm^{-1}$, and $g=0.0$. }
\label{fig2}
\end{figure}
In the case of anisotropic scattering media ($g\neq0$), a derivation of exact analytical solutions of the RTE is still a difficult problem
even in infinite homogeneous media.
In this study, an approximate RTE solution, $\Phi_{AR}$, proposed by Martelli \cite{Martelli2007}, is used for comparison with the numerical solutions.
The basic assumption for $\Phi_{AR}$ is that $\mu_s$ and $\mu_t$ in Eq. (\ref{eq:3_1}) can be replaced by
$\mu'_s=\mu_s(1-g)$ and $\mu'_t=\mu'_s+\mu_a$, respectively.
Again, two sets of the optical properties of the medium are considered: (a) $\mu_a=0.30$ $\rm cm^{-1}$, $\mu_s=80.0$ $\rm cm^{-1}$, and $g=0.9$ of the background,
and (b) $\mu_a=4.76$ $\rm cm^{-1}$, $\mu_s=675$ $\rm cm^{-1}$, and $g=0.9$ of the artery.
Other conditions are the same as those in the case with the isotropic scattering media.
As shown in Fig. \ref{fig3} (a), in the case of weak absorption the numerical solution of the RTE based on the 3UW+4RK agrees well with $\Phi_{AR}$.
In detail, however, a small difference between them is seen during the period from about 100 ps to 400 ps.
Even when the step sizes are made finer, the numerical solution is found to be unchanged.
Thus, this small difference can be attributed to the error in $\Phi_{AR}$ due to the approximation.
In Fig. \ref{fig3} (a), the deviations of the numerical solution based on the 1UW+FE are small compared with those in the case of isotropic scattering.
Because the appropriate value of $\Delta x$ is inversely proportional to $\mu'_s$,
it is much larger in the anisotropic scattering case with $\mu'_s=8.0\; {\rm cm^{-1}}$ than in the isotropic scattering case with $\mu'_s=80\; {\rm cm^{-1}}$.
This is why the 1UW+FE performs better in the anisotropic scattering case.
Figure \ref{fig3} (a) shows that $\Phi_{DE}$ is in agreement with the approximate RTE solution at the peak time ($t \sim$100 ps) and the decay period ($t>100$ ps).
At early times ($t<100$ ps), meanwhile, $\Phi_{DE}$ deviates from the approximate RTE solution.
This deviation comes from the difference in the speed of light between the RTE and the DE:
in the RTE the speed of light is $v$, while in the DE it is effectively infinite \cite{Guo2001}.
The strongly absorbing and anisotropic scattering case is shown in Fig. \ref{fig3} (b).
The curves of $\Phi$ in the case of anisotropic scattering are the same as those in the case of isotropic scattering (Fig. \ref{fig2} (b)).
This is presumably because the curves are mostly determined by $\mu'_s$, and the values of $\mu'_s$ are the same in the two cases.
From the results shown in Figs. \ref{fig2} and \ref{fig3},
the developed scheme is validated.
\begin{figure}[tb]
\centering
(a)
\includegraphics[scale=0.32]{Back_g09.eps}
\hspace{1.0 cm}
(b)
\includegraphics[scale=0.36]{Art_g09.eps}
\caption{Anisotropic scattering case ($g=0.9$) of $\Phi$ at $\rho=1.0$ cm: a red solid curve represents the approximate solution of the RTE, $\Phi_{AR}$. (a) $\mu_a=0.30\;\rm cm^{-1}$, $\mu_s=80.0\;\rm cm^{-1}$, and $g=0.9$ and (b) $\mu_a=4.76\;\rm cm^{-1}$, $\mu_s=675\;\rm cm^{-1}$, and $g=0.9$. Other details are the same as in Fig. \ref{fig2}. }
\label{fig3}
\end{figure}
\subsection{Photon migration in the human neck model}
The developed numerical scheme of the 3UW+4RK is used to describe photon migration in the human neck model shown in Fig. \ref{fig1} (b).
Figure \ref{fig4} shows the snapshots of $\Phi$-distributions at fixed times in the human neck model, where light is incident on the front surface of the human neck at the position of ($x$, $y$) = (3.5 cm, 6.3 cm).
As listed in Table \ref{tab:1}, the trachea is a void region indicated by a white solid curve in each figure of Fig. \ref{fig4}.
Therefore, at the early time of 46.2 ps shown in Fig. \ref{fig4} (a),
photons propagate without being scattered or absorbed inside the trachea.
At the time of 79.9 ps shown in Fig. \ref{fig4} (b), the photons reach the trachea boundary far from the source position,
and migrate diffusively in the background tissue where they are scattered in all directions,
and some portion of the scattered photons return back to the trachea as the diffusive reflection.
Then, the reflected photons travel back to the trachea boundary near the source position as shown in Fig. \ref{fig4} (c),
repeat diffusive reflections back and forth inside the trachea, and
gradually migrate into the tissue outside the trachea as diffusive propagation as shown in Fig. \ref{fig4} (d) to (f).
It looks as if the void region of the trachea acts as a light source.
Note that the magnitudes of $\Phi$ indicated by the color bars decay rapidly as time passes and photons migrate into the tissue.
Also note that photons are strongly absorbed by the veins and arteries, which have large $\mu_a$ and appear as dark blue spots in all of Fig. \ref{fig4} (a) to (f).
Thus, few photons reach the backside of the human neck.
In fact, it is confirmed that at the backside position,
($x$, $y$) = (3.5 cm, 0.0 cm),
the magnitude of $\Phi$ is kept at less than $10^{-10}$ during the time period from 0 ps to 800 ps.
These results suggest that near the light source, the accurate scheme for solving the RTE is necessary, while far from the light source, the approximate and efficient scheme is good enough to describe photon migration in the human neck model.
\begin{figure}[tb]
\centering
\subfigure[46.2 ps]{%
\includegraphics[scale=0.26]{46ps_mri.eps}}
\subfigure[79.9 ps]{%
\includegraphics[scale=0.26]{80ps_mri.eps}}
\subfigure[163.9 ps]{%
\includegraphics[scale=0.26]{164ps_mri.eps}}\\%--
\subfigure[285.8 ps]{%
\includegraphics[scale=0.26]{286ps_mri.eps}}
\subfigure[512.8 ps]{%
\includegraphics[scale=0.26]{513ps_mri.eps}}
\subfigure[580.0 ps]{%
\includegraphics[scale=0.26]{580_mri.eps}}
\caption{Spatial distributions of $\Phi$ calculated from the RTE in the human neck model (Fig. \ref{fig1} (b)) at fixed times. Unit of the color bar is [W/cm]. The light source is represented by a red arrow in (a), and the boundary of the trachea is denoted by a white solid curve in each figure. }
\label{fig4}
\end{figure}
\section{Conclusions}
To describe photon migration in the human neck,
we have developed an accurate and efficient numerical scheme for solving the RTE, based on the third-order upwind and fourth-order Runge--Kutta methods, for the purpose of comprehensive diagnosis of thyroid cancer by optical imaging.
We have confirmed the validity of the developed scheme by comparison with the analytical solutions in homogeneous media.
Numerical simulations of photon migration in the human neck model have shown a complicated pattern of photon migration,
arising from multiple diffusive reflections near the trachea.
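The spatial discretization named above can be illustrated on the 1D linear advection equation, which stands in for the streaming term of the RTE. This is only a minimal sketch under illustrative assumptions (grid size, pulse shape, and CFL number are not taken from the paper): it combines a third-order upwind-biased difference for $u_x$ with the classical fourth-order Runge--Kutta time stepper on a periodic domain.

```python
import numpy as np

def step_rk4(u, c, dx, dt):
    """One RK4 step of u_t + c u_x = 0 with a third-order
    upwind-biased difference for u_x (c > 0, periodic domain)."""
    def rhs(u):
        # u_x ~ (u[i-2] - 6 u[i-1] + 3 u[i] + 2 u[i+1]) / (6 dx)
        ux = (np.roll(u, 2) - 6*np.roll(u, 1) + 3*u + 2*np.roll(u, -1)) / (6*dx)
        return -c * ux
    k1 = rhs(u)
    k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2)
    k4 = rhs(u + dt*k3)
    return u + dt*(k1 + 2*k2 + 2*k3 + k4)/6

# advect a Gaussian pulse one full period around the unit domain
n, c = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.4*dx
u0 = np.exp(-200*(x - 0.5)**2)
u = u0.copy()
for _ in range(int(round(1.0/(c*dt)))):
    u = step_rk4(u, c, dx, dt)
# after one full period the pulse returns close to its initial shape
```

Because the stencil coefficients sum to zero, the scheme conserves the discrete integral of $u$ exactly, which is a convenient sanity check on an implementation.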
\section*{Acknowledgements}
This work is supported in part by Grants-in-Aid for Regional R\&D Proposal-Based
Program from Northern Advancement Center for Science \& Technology of
Hokkaido Japan, and the Japan Science and Technology Agency.
We would like to express appreciation to Dr. Kazumichi Kobayashi for fruitful discussions.
2004.13086
\section{Introduction}
The problem of Boolean matrix multiplication has a wide range of
fundamental applications, for instance, in graph algorithms (see,
e.g., \cite{Z02}). The Boolean product of two Boolean $n\times n$
matrices can be computed using $O(n^3)$ Boolean AND and OR binary
operations, directly from its definition. This is optimal if only these two
types of operations are allowed \cite{MG76,Pa75,P75}. If the one-argument NOT
operation is also allowed, then the product can be computed using
$O(n^{\omega})$ operations, where $\omega$ stands for the exponent of
so-called fast arithmetic matrix multiplication. The
$O(n^{\omega})$ bound follows by a straightforward reduction of
Boolean matrix multiplication to the arithmetic one for $0-1$ matrices.
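The reduction is a one-liner: compute the arithmetic product of the $0$--$1$ matrices and map nonzero entries to $1$. A sketch (NumPy's `@` stands in for a fast arithmetic multiplication routine; a Strassen-like algorithm would be substituted for a true $O(n^{\omega})$ bound):

```python
import numpy as np

def boolean_product(A, B):
    """Boolean product of 0-1 matrices via the arithmetic product.

    Entry (i,j) of the Boolean product is OR_k (A[i,k] AND B[k,j]);
    over the integers this holds iff sum_k A[i,k]*B[k,j] > 0.
    """
    return (A.astype(int) @ B.astype(int) > 0).astype(int)

A = np.array([[1, 0], [0, 1]])
B = np.array([[0, 1], [1, 1]])
print(boolean_product(A, B))  # here A is the identity, so the result equals B
```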
Already five decades ago,
Strassen presented an algorithm for arithmetic matrix multiplication
departing from the standard method and
using only $O(n^{2.8074})$ multiplications, additions and subtractions \cite{S69}.
The subsequent series of improvements of the exponent
$\omega$ of fast matrix multiplication culminated
in the recent results of
Le Gall and Vassilevska Williams showing
$\omega < 2.373$
\cite{LG14,Vassilevska12}. On the other hand, Alman
et al. have recently shown that there is an $\ell > 2$
such that one cannot derive
an upper bound on $\omega$ better than $\ell$
by using the known approaches \cite{AV18}.
Also, the substantially sub-cubic algebraic algorithms
for matrix multiplication have very large overhead
which makes them hardly practical. Unfortunately,
no so-called combinatorial (i.e., with small constants)
algorithm for Boolean matrix multiplication
running in $O(n^{3-\epsilon})$ time, for a fixed $\epsilon > 0,$
is known \cite{BW12,VWW}.
The fastest known combinatorial algorithm
for Boolean matrix multiplication due to Yu runs in
$O(n^3\,\mathrm{poly}(\log \log n)/\log^4 n)$ time \cite{Yu15}.
Hence, a natural question arises about the complexity of
Boolean matrix multiplication in different models
of computation, possibly more powerful or unconventional.
To begin with, if one uses huge integers, requiring
$O(n\log n)$ bits, then the Boolean product of two $n\times n$ Boolean
matrices can be computed in $O(n^2)$ steps on a RAM, as observed in \cite{PS16}.
Recently, also a nondeterministic algorithm for $n\times n$ matrix
multiplication using $O(n^2)$ arithmetic operations has been presented
by Korec and Wiedermann in \cite{KW14}. It results from a
derandomization of Freivalds' $O(n^2)$-time randomized algorithm for
integer matrix product verification \cite{F77}. Simply, the algorithm first
guesses the product matrix and then verifies its correctness. Again, the
derandomization involves huge numbers requiring $O(n)$ times more bits
than the input numbers \cite{KW14}. More recently, Wiedermann has
presented two further, slightly slower nondeterministic algorithms for
matrix multiplication, both running in $O(n^2\log n)$ time and relying
on the derandomization of Freivalds' algorithm. The first runs on a
real RAM, the second on a unit-cost RAM using only integers of size
proportional to that of the largest entry in the input matrices
\cite{W14}. On the other hand, no quantum algorithm for Boolean matrix
product faster than those based on fast algebraic matrix
multiplication in the standard computational model has been
devised so far \cite{LeGallisaac12}.
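Freivalds' verification step, which the nondeterministic algorithms above derandomize, fits in a few lines. The following sketch works over the integers; the round count is an illustrative parameter (each round catches a wrong product with probability at least $1/2$):

```python
import random

def freivalds_verify(A, B, C, rounds=20):
    """Freivalds' randomized check that C = A*B over the integers.

    Each round picks a random 0/1 vector r and tests A(Br) == Cr using
    only O(n^2) arithmetic operations (three matrix-vector products).
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br  = [sum(B[i][j] * r[j]  for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr  = [sum(C[i][j] * r[j]  for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # correct product: always accepted
print(freivalds_verify(A, B, C))
```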
From the perspective of modern electronic computing, mechanical
computing seems not only old-fashioned but even unconventional. The
history of mechanical devices assisting computing stretches several
thousand years back; e.g., an early form of the abacus was used as
early as 2700--2300 B.C. in Babylonia \cite{Wik}. For several
thousand years, mechanical devices assisted astronomical and
navigational calculations. The slide rule has been used since 1620, and
mechanical calculators were intensively developed in the 17th, 18th and 19th
centuries (e.g., by Pascal, Leibniz and Babbage).
In this paper, we study the problem of determining the Boolean product
of two $n\times n$ Boolean matrices in an unconventional computational
model allowing for mechanical operations. We show that the Boolean
product of an $n\times n$ Boolean matrix with an $n$-dimensional
Boolean column vector can be computed using $O(n)$ operations. Hence,
we can infer that $O(n^2)$ operations are sufficient to compute the
Boolean product of two $n\times n$ Boolean matrices in this model.
For smaller matrices the mechanical operations can be performed even
manually, while for larger ones the operations can be automated
using electric power. Our result demonstrates that already ancient
civilizations had sufficient technology to perform relatively fast
matrix multiplication of moderately large Boolean matrices.
To the best of our knowledge, there is no
prior mechanical approach to (Boolean) matrix multiplication.
\subsection{Basic ideas}
Our mechanical algorithm for Boolean matrix multiplication computes
the Boolean product of the two input $n\times n$ Boolean matrices by
repetitive multiplication of the first matrix with consecutive columns
of the second matrix. While computing the Boolean product of a fixed
Boolean $n\times n$ matrix with a Boolean column vector one encounters
two major bottlenecks that prohibit obtaining a substantially
sub-quadratic in $n$ number of operations in the standard
computational model (solely poly-logarithmic speedups are known after
preprocessing \cite{RW}). Note that only those columns $j$ of the
fixed matrix for which the $j$-th coordinate of the input column vector is
$1$ can affect the output product vector. The first bottleneck is the
selection of the aforementioned columns (alternatively, activating
them and deactivating the remaining ones). The other bottleneck is the
computation of the disjunction of entries belonging to active columns
for each row of the fixed matrix. We design a special purpose
mechanical processor for computing the Boolean product of a Boolean
$n\times n$ matrix with a Boolean $n$-dimensional column vector that
solves each of the two bottleneck problems using $O(n)$ mechanical
operations.
The matrix-vector processor (MVP) also needs to perform a
number of other operations, e.g., reading the input vector or the
input matrix or outputting the product vector etc., that can be
implemented purely mechanically (if one insists on that) in many ways.
We leave here the details to the reader.
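Stated as ordinary sequential code rather than mechanics, the two bottleneck operations amount to the following sketch (the function name `matvec` is ours; the point is that column activation and the row-wise disjunction are the only nontrivial steps):

```python
def matvec(A, V):
    """Boolean matrix-vector product, phrased as the two MVP steps.

    Step 1 (Requirement 4): column j is active iff V[j] = 1.
    Step 2 (Requirement 5): output[i] = 1 iff row i of A has a 1
    in some active column.
    """
    n = len(V)
    active = [j for j in range(n) if V[j] == 1]
    return [int(any(A[i][j] == 1 for j in active)) for i in range(n)]

A = [[1, 0, 1],
     [0, 1, 0],
     [0, 0, 0]]
print(matvec(A, [1, 0, 0]))  # only column 0 is active: [1, 0, 0]
```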
\section{Requirements on a processor for matrix-vector product}
We shall use a special purpose processor for computing
the Boolean product of a Boolean $n\times n$ matrix with a Boolean
$n$-dimensional column vector. We shall term such a processor MVP for short. In
particular, an MVP has an input $n \times n$ array part, an input
$n$-dimensional vector part, and an output $n$-dimensional vector
part, see Fig. 3.
In this section, we shall provide solely general requirements
that an MVP should satisfy. In Section 4, we shall outline
implementations of MVP including mechanical operations that allow for
fulfillment of these requirements. The list of requirements is as follows.
\begin{enumerate}
\item A Boolean $n\times n$ matrix can be read into the MVP input
array using $O(n^2)$ operations.
\item A Boolean $n$-dimensional vector can be read into
the MVP input vector using $O(n)$ operations.
\item The MVP output vector can be reported using $O(n)$ operations.
\item All $j$-th columns of the MVP input array can be
appropriately activated/deactivated (if necessary) such that afterwards
the $j$-th column is active if and only if the $j$-th
coordinate of the MVP input vector is set to $1$,
using totally $O(n)$ operations.
\item For all $i$-th rows of the MVP input array that
include at least one $1$ belonging to an active
column of the array the $i$-th coordinate
of the MVP output vector can be set to $1$ while
all other coordinates of the output vector
can be set to $0$ using
totally $O(n)$ operations.
\item The MVP output
vector can be reset to an initial state using
$O(n)$ operations.
\end{enumerate}
\section{The algorithms for Boolean matrix-vector and matrix-matrix products}
In this section, we present two simple mechanical algorithms for
Boolean matrix-vector product and Boolean matrix-matrix product,
respectively. Both are based on the use of an MVP. It will be
convenient to refer to the first algorithm as a functional procedure
$Matvec(A,V)$ in order to facilitate its repetitive use in the second
algorithm.
\begin{figure}[t]
\centering
\fbox{\begin{minipage}[t]{0.90\textwidth}
\begin{algorithmic}[1]
\REQUIRE a Boolean $n\times n$ matrix $A$ and
a Boolean $n$-dimensional column vector $V$
in the MVP input array and the MVP input vector,
respectively.
\ENSURE the Boolean column vector product of
$A$ and $V.$
\FOR {$j = 1$ {\bf to} $n$}
\STATE {\bf if} $V[j]=1$ and the $j$-th
column of the MVP input array is not active {\bf then} activate the $j$-th
column
\STATE {\bf if} $V[j]=0$ and the $j$-th
column of the MVP input array is active {\bf then} deactivate the $j$-th
column
\ENDFOR
\FOR {$i = 1$ {\bf to} $n$}
\STATE {\bf if} the $i$-th row of the MVP input array includes
at least one $1$ that belongs to an active
column of the array {\bf then} set the $i$-th
coordinate of the MVP output vector to $1$
{\bf else} set this coordinate of the MVP output vector to $0$
\ENDFOR
\RETURN the output vector
\end{algorithmic}
\end{minipage}}
\caption{The functional procedure Matvec(A,V)}
\label{fig: algo1}
\end{figure}
\begin{lemma}\label{lem: matvec}
The procedure Matvec(A,V) computes the
Boolean column vector product of $A$
and $V$ correctly, using $O(n)$ operations.
\end{lemma}
\begin{proof}
The correctness of the procedure follows from
the following two facts.
\begin{enumerate}
\item
The $i$-th coordinate of the MVP output vector is set
to $1$ if and only if the $i$-th row of
the MVP input array includes at least one $1$
belonging to an active column of the array.
\item The $j$-th
column of the MVP input array is active if and only
if the $j$-th coordinate of the MVP input vector
is set to $1.$
\end{enumerate}
The upper bound on the number of operations
necessary to implement $Matvec(A,V)$ follows from
the requirements on an MVP (see the previous section).
In particular the upper bound on the total
number of operations necessary to implement
the first loop follows from Requirement 4 while that for
the second loop from Requirement 5.
\qed
\end{proof}
By repetitively applying $Matvec(A,B[*,j])$
to consecutive columns $B[*,j]$ of the
second input Boolean matrix $B,$ we obtain our
algorithm (Algorithm 1) for Boolean matrix-matrix product.
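In software terms, Algorithm 1 is just this repeated matrix-vector loop. A self-contained sketch (plain Python; the inner `matvec` mirrors the procedure $Matvec(A,V)$, and the product is assembled column by column):

```python
def matvec(A, V):
    # Boolean product of matrix A with column vector V: row-wise OR
    # over the columns selected by the 1-entries of V.
    n = len(V)
    return [int(any(A[i][j] and V[j] for j in range(n))) for i in range(n)]

def bool_matmul(A, B):
    """Algorithm 1: multiply A by each column B[*,j] of B in turn."""
    n = len(B)
    cols = [matvec(A, [B[k][j] for k in range(n)]) for j in range(n)]
    # cols[j] is the j-th column of the product; transpose back to rows.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [0, 1]]
print(bool_matmul(A, B))  # B is the identity, so the product equals A
```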
\begin{figure}[t]
\centering
\fbox{\begin{minipage}[t]{0.90\textwidth}
\begin{algorithmic}[1]
\REQUIRE Boolean $n\times n$ matrices $A$ and
$B.$
\ENSURE the Boolean matrix product of
$A$ and $B$
\STATE read the matrix $A$ into the MVP input array
\FOR {$j = 1$ {\bf to} $n$}
\STATE read the $j$-th column $B[*,j]$ of $B$ into the MVP input vector
\STATE {\bf output} $Matvec(A,B[*,j])$
\STATE reset the MVP output vector to its initial state
\ENDFOR
\end{algorithmic}
\end{minipage}}
\caption{The mechanical algorithm for Boolean matrix multiplication (Algorithm 1)}
\label{fig: algo2}
\end{figure}
\begin{theorem}
Algorithm 1 computes the Boolean product of the
matrices $A$ and $B$ correctly, using
$O(n^2)$ operations.
\end{theorem}
\begin{proof}
The correctness of the algorithm follows from the fact
that it computes the consecutive columns of
the Boolean product of $A$ and $B$ correctly
by Lemma \ref{lem: matvec}.
The $n$ calls of the procedure $Matvec$ require
totally $O(n^2)$ operations by Lemma \ref{lem: matvec}.
Reading the matrix $A$ in the input array
requires $O(n^2)$ operations by Requirement 1 on an MVP.
Reading the consecutive columns of the matrix $B$
into the MVP input vector
totally requires $O(n^2)$ operations by Requirement 2.
Finally, resetting the MVP output vectors requires
$O(n)$ operations by Requirement 6.
\qed
\end{proof}
\begin{remark}
Observe that all the requirements on an MVP except
the bottleneck ones, 4 and 5, can be easily
realized up to logarithmic factors
in such standard sequential computational models
as RAM, Turing machine or Boolean circuits. Hence,
if the bottleneck requirements could be
realized in substantially subquadratic
in $n$ time
(even allowing for a subcubic in $n$ preprocessing of the matrix)
by combinatorial algorithms in the aforementioned standard models, it
would yield substantially subcubic in $n$ combinatorial
algorithms for Boolean matrix multiplication, which would
be a breakthrough (cf. \cite{Yu15,RW}).
\end{remark}
\section{A mechanical MVP}
In this section, we outline an idea for the implementation of an MVP with
the help of mechanical operations.
Each column of the MVP input array is modelled by an axis with
attached equal-size segments representing consecutive column
entries. Each such segment is either aligned to the axis, which is
interpreted as the corresponding entry being set to $0,$ or set into
a position perpendicular to the axis, which is interpreted as the
corresponding entry being set to $1.$ See Fig. 4
for an example. If the column
modelled by the axis is not active,
the perpendicular segments are placed horizontally. If the column
should be activated then the
axis rotates $90$ degrees
so all
the attached perpendicular segments take vertical positions under the axis.
Symmetrically, if the column is active and it should be deactivated
the axis modelling it is rotated in the reverse direction
by $90$ degrees. Such an
activation/deactivation of a column is counted as a single operation. There are many
possible ways to implement the rotation of the axis by $90$ degrees
that we omit here. The whole process of activating/deactivating (if necessary)
the $j$-th columns of the
input array such that
afterwards the $j$-th column is active
if and only if the $j$-th coordinate of the MVP input vector is
set to $1$
requires at most $n$ operations of column activations/deactivations and
scanning the representation of the input MVP vector. For instance, we
may assume that the representation of the input vector is placed
perpendicularly to the column axes close to their start-points so the
whole process of activating/deactivating columns requires $O(n)$
walking and scanning steps besides the $O(n)$ activation/deactivation
operations. In this way, Requirement 4 can be fulfilled.
In order to fulfill Requirement 5 for each row of the MVP input array,
we place a ladder under the segments attached to the axes
corresponding to the row, in the initial state when no column is
active. The ladders are placed perpendicularly to the axes,
see Fig. 5, 6.
They can
have some wheels to facilitate their horizontal movements in the
direction perpendicular to the axes.
We shall call the rectangular space between two successive ladder steps
and the arms of the ladder an opening. Each opening of the ladder
lies exactly under one of the segments attached
to an axis in the initial state when no
column is active. Importantly, the placements and sizes of the axes,
ladders, segments etc. are so chosen that the rotations of the axes
with attached perpendicular segments are not obstructed by the
ladders. If a segment is placed vertically after the rotation, then it
passes through the corresponding ladder opening and sticks out slightly
beneath it, see Fig. 5.
A representation of the MVP output vector is placed along
the rightmost axis. Initially, all coordinates of the output vector are
set to $1.$ In order to determine the $i$-th coordinate of the output
vector, the ladder corresponding to the $i$-th row of the input array
is moved from the left to the right at most by the length of one
opening, see Fig. 6.
If no perpendicular segment sticks through any of its
openings such a movement by the length of one opening is possible and
the ladder hits the representation of the output vector in the section
corresponding to its $i$-th coordinate switching its state from $1$ to
$0.$ (Again, there are many possible technical ways of representing
the MVP output vector and implementing such a switch that we omit
here.) Otherwise, such a full movement is not possible and the state
of the section remains set to $1.$ In effect, the $i$-th coordinate of
the output vector is set to $1$ if and only if the $i$-th row of the
input array includes at least one $1$ belonging to an active column of
the array. Each such movement of the ladder, together with the possible
triggering of the switch of the state of the corresponding section, is
counted as, say, $O(1)$ operations. Hence, setting all the coordinates of the output
vector requires $O(n)$ operations and Requirement 5 is fulfilled.
To read a Boolean $n\times n$ matrix $A$ into the MVP input array
one can traverse the array for example in a column-wise
snake fashion and pull out segments corresponding to
the $1$ entries of the matrix $A$ that are
aligned to their axis and push back the pulled out segments
that correspond to the $0$ entries of the matrix $A.$
Thus, we may assume that the reading of a Boolean $n\times n$
matrix in the input array
requires $O(n^2)$ operations.
Similarly, reading a Boolean $n$-dimensional vector
into the MVP input vector as well as outputting
the MVP output vector
require $O(n)$ operations.
Finally, to reset the MVP output vector, we need to
pull back the $n$ ladders to the left to their
initial positions and switch all the $0$ coordinates
of the output vector back to $1$ using $O(n)$
operations. It follows that the remaining requirements can be
fulfilled in this way.
\begin{figure}\label{fig: parts}
\begin{center}
\includegraphics[height=3.5cm]{figmek1}
\end{center}
\caption{The basic parts of the mechanical MVP}
\end{figure}
\begin{figure}\label{fig: axis}
\begin{center}
\includegraphics[height=4cm]{figmek3}
\end{center}
\caption{(a) An axis modeling a not active column
with entries $1,\ 1,\ 0, \ 1.$
(b) In order to activate the column the axis
rotates by 90 degrees.}
\end{figure}
\begin{figure}\label{fig: stics}
\begin{center}
\includegraphics[height=2cm]{figmek4}
\end{center}
\caption{A vertical segment attached to an axis
representing a $1$ entry of the active column
modelled by the axis sticks through an opening
of a ladder. The segment blocks the full movement
of the ladder by the length of one opening to the right.
In result, the coordinate of the MVP output vector
remains set to $1.$}
\end{figure}
\begin{figure}\label{fig: array}
\begin{center}
\includegraphics[height=6cm]{figmek2}
\end{center}
\caption{(a) The MVP input array with
columns set to $1101,$ $0100,$ $1001,$
and $0101,$ respectively. The columns
are not active and the ladders are in their
initial positions.
(b) The input array after the activation
of the first and third columns
and the movement of the ladders to the right. }
\end{figure}
\subsection{A parallel mechanical MVP}
In fact, the activations/deactivations of the appropriate columns of
the MVP input array as well as setting the values of the coordinates
of the MVP output vector can be done in parallel.
It is not difficult to design a slightly more complex mechanical MVP where
an appropriate representation of the MVP input vector is moved towards
the start-points of the axes and the $1$s in the representation hit
the start-points, triggering the activation of the corresponding columns
of the input array. More exactly, if the $j$-th coordinate of the
input vector is set to $1$ then the representation of this $1$ hits the
start-point of the axis modelling the $j$-th column, causing its
activation. Hence, the activations of all the columns can be done in
one step of the movement of the representation of the input vector.
In order to fulfill Requirement 4, we need to deactivate all columns
first before the activation movement. The deactivation of all the
columns can be obtained by the reverse movement of the representation
of the previous input vector.
As for setting the output vector, i.e., Requirement 5, all the $n$
movements of the ladders to the right can be done
independently of each other in one parallel step. (Still, if one
would like to count the ladder movements as a single operation, one
could elevate slightly the whole left part of the processor to cause
sliding the ladders to the right at most by one opening length etc.) The
same holds for the reverse movements of the ladders back to the left,
i.e., resetting the output vector. Finally, the reading of an input
vector into the MVP input vector can be also done in a single
parallel step using $O(n)$ operations
while the reading of the input matrix into the input array
can be done in $O(n)$ parallel steps, each using $O(n)$ operations.
Hence, we obtain the following lemma and theorem.
\begin{lemma}
By using the parallel mechanical MVP,
the procedure $Matvec(A,V)$ can be implemented
in $O(1)$ parallel steps using $O(n)$ operations.
\end{lemma}
\begin{theorem}
By using the parallel mechanical MVP,
the Boolean product of two Boolean $n\times n$
matrices can be computed
in $O(n)$ parallel steps, each using $O(n)$ operations.
\end{theorem}
\section{An alternative implementation of MVP}
\begin{figure}\label{fig: wall}
\begin{center}
\includegraphics[height=6cm]{figmek5}
\end{center}
\caption{(a) A thin wall modeling a not active column
of the MVP input array
with entries $1,\ 1,\ 0, \ 1.$
To activate the column the wall has to be moved down.
(b) The second and fourth columns of the MVP input array are active.}
\end{figure}
Here, we shall outline an alternative implementation
of an MVP using mechanical and light operations.
Each column of the MVP input $n\times n$ array is modelled
by a thin movable wall vertically divided into $2n$
sections. Each even numbered section has an always open
round window of the same size in its middle. The $(2k-1)$-st
section models the $k$-th entry
of the column in the input array. It has also
a window of the same shape and size in the middle.
The window can be either open or closed. The former is
interpreted as setting the $k$-th entry in the column
to $0$ while the latter as setting this entry to $1.$
Now, the activation of a column in the input array
consists in shifting the corresponding wall by
the length of a single section down.
See Fig. 7
for an example.
The deactivation of the column is obtained
by the reverse move of the wall up.
Both moves are counted as single operations.
Hence, arguing similarly as in the previous section,
we infer that Requirement 4 can be fulfilled.
To fulfill Requirement 5, lights are placed in front of even sections
of the first wall in its initial deactivated position.
See Fig. 7.
The distance
between two consecutive walls is very small. The $i$-th light can
be observed on the other side at the last wall through the windows of
walls modelling deactivated columns and the open windows of walls
modelling active columns if and only if the $i$-th row of the input
array does not contain any $1$ belonging to an active column. Thus, in
case the light is observed the $i$-th coordinate of the MVP output
vector is set to $0$ and otherwise it is set to $1.$ Since the
observation of a light is counted as a single operation,
Requirement 5 can also be fulfilled. The fulfillment of the remaining
MVP requirements does not require any involved ideas and can be achieved
in many ways, so we omit the discussion.
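The logic of this light-based readout (not the mechanics) can be captured in a few lines; a sketch, with `matvec_light` as our naming: the $i$-th output coordinate is $0$ exactly when the $i$-th light shines through every active wall.

```python
def matvec_light(A, V):
    """Wall-and-light MVP readout.

    The i-th light is visible at the last wall iff every active
    column (wall shifted down by one section) has an open window,
    i.e. a 0 entry, in row i; a visible light means output[i] = 0.
    """
    n = len(V)
    out = []
    for i in range(n):
        light_visible = all(A[i][j] == 0 for j in range(n) if V[j] == 1)
        out.append(0 if light_visible else 1)
    return out

A = [[1, 0],
     [0, 0]]
print(matvec_light(A, [1, 1]))  # row 0 blocks the light, row 1 does not: [1, 0]
```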
\section{Final remarks}
The
mechanical operations used in our method
like turning or shifting mechanical units
are very slow compared to the electronic ones.
Also, it would be technically hard to apply the aforementioned
mechanical operations
to very large matrices. For these reasons, in the range of matrix
sizes feasible for the mechanical operations, Boolean matrix
multiplication can be performed much faster by electronic computers,
even if they run the straightforward cubic-time algorithm.
Still, it seems of at least theoretical interest that in the realm
of mechanical computations one can design an algorithm for
Boolean matrix multiplication using a quadratic number of operations
in contrast to the straightforward one using a cubic number
of operations.
The two ideas behind our mechanical algorithm for
Boolean matrix multiplication are switching on/off
all entries of an array column in a single mechanical step
and computing the disjunction of switched on entries
of an array row in a single mechanical step. It would be interesting to
study the complexity of other matrix/graph problems
assuming that the aforementioned operations can be performed
in single steps.
\subsection*{Acknowledgments}
The research has been
supported in part by
Swedish Research Council grant 621-2017-03750.
1402.4395
\section{Introduction}
Wavelength multi/demultiplexers are central components in optical telecommunication networks, and they have been the subject of intense research and development since the advent of wavelength-division multiplexing (WDM) in the early 90s \cite{brackett}. Components for these networks are subject to demanding requirements, both in terms of performance and manufacturing. While performance depends on the particular component, all need to provide stable operation. In terms of manufacturing requirements, a reproducible and mass fabrication process is mandatory. Hence, photonic integration is usually the basis for large-count WDM multiplexers. The cost of an integrated circuit is fundamentally related to its footprint \cite{kirchain_nphot, pennings_jstqe1996}, and it has a direct impact on the economies of scale for manufacturing, where, in general, more devices per wafer, i.e. more compact devices, are desired.
Amongst the different implementations for multiplexers, the Arrayed Waveguide Grating (AWG) \cite{smit, dragone} is one of the few that aligns with the previous statements. Traditionally manufactured in Silica on Silicon integration technology \cite{dragone_ptl1991}, it finds room in nearly all the relevant material platforms, such as Indium Phosphide, Silicon on Insulator, Silicon and Silicon Nitride (for a summary see \cite{munoz_icton2013}). The physical layout of an AWG consists of the combination of waveguides and slab couplers \cite{smit, dragone}. In its most common shape, two slab couplers with input/output waveguides are inter-connected by a set of waveguides, usually referred to as arrayed waveguides (AWs). Consecutive waveguides in the array differ in length by a constant amount, which imposes a wavelength-dependent linear phase front on the signal fed from the first slab coupler. This linear phase front, in combination with the second slab coupler, enables the spatial separation of different wavelengths into different outputs.
In terms of footprint, other integrated multiplexer implementations, such as the Echelle Diffraction Grating (EDG), achieve considerable size reduction compared to the AWG \cite{lycett}. The layout of an EDG includes a single slab coupler, with input/output waveguides on one side, and a reflective grating on the opposite end. It is a so-called reflective multi/demultiplexer. One issue with EDGs is to maximize the reflection at the grating, in order to minimize the overall insertion losses, an issue which is otherwise not present in a regular AWG. Different approaches exist to increase the reflectivity of the grating in an EDG, the most employed being the deposition of metal layers at the edge of the grating \cite{feng}, or the addition to the grating of other structures such as Bragg reflectors \cite{ryckeboer}. While the former supplies broadband reflectors, it requires resorting to additional fabrication steps. Conversely, Bragg reflectors can be manufactured in the same steps as the EDG, but it is well known that their reflection bandwidth is inversely proportional to their strength \cite{pruessner}.
Similarly, AWG layouts with reflective structures midway in the array, i.e. reflective AWGs (R-AWGs), are possible as well. They can have a footprint that is ideally half that of a regular AWG, and closer to that of an EDG. In this case, the signals travelling in the arrayed waveguides arrive at the reflectors and are bounced back to the (single) slab coupler. Although the functionality is the same as in the case of a regular AWG, some additional design considerations are required \cite{peralta}. The reflectors can be implemented in similar ways to the ones for the EDGs, and the literature shows solutions such as reflective coatings on a facet of the chip where the arrayed waveguides end \cite{inoue, soole}, photonic crystals \cite{dai}, external reflectors \cite{peralta2} and even Bragg reflectors \cite{okamoto_dbr} at the end of the arrayed waveguides.
A common issue of all the described approaches for the reflector is that broadband full reflectivity requires additional fabrication steps, and therefore increases the final cost of the multiplexer. In this paper we propose a novel configuration for a R-AWG, where the well-known Sagnac Loop Reflectors (SLRs) are used as reflective elements at the end of the arrayed waveguides. A SLR is composed of an optical coupler with two output waveguides that are connected to each other, forming a loop. These reflectors are broadband, can supply total reflection, and can be fabricated in the same lithographic process as the rest of the AWG. Moreover, the reflection of a SLR depends on the coupling constant of the coupler; hence, it can be different for each of the waveguides in the array. The modification of the field pattern in the arrayed waveguides of an AWG allows for spectral response shaping, as for example a box-like transfer function \cite{okamoto} and multi-channel coherent operation \cite{doerr}, amongst others.
The paper is structured as follows. In Section \ref{sec:model}, the theoretical equations describing the full field (amplitude and phase) transfer function of the R-AWG are developed, following the model in \cite{pascual}. The equations are then particularized for two cases: firstly, the case for which all the SLRs have total reflection, and the AWG response obtained is Gaussian; secondly, the case where the reflectivity of each SLR is designed in such a way that the field profile in the array is a sinc function, as in \cite{okamoto}, to obtain a flat-top response at the output. In Section \ref{sec:sims}, the equations are used to design and simulate a R-AWG in Silicon-on-Insulator (SOI) technology. Finally, Section \ref{sec:conclusion} presents the summary and outlook.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{srawg.eps}
\caption{R-AWG schematic view. Abbreviations: $PS$ phase shifter, $K$ coupling constant, $x_i$ (i=0,1,2,3) are reference coordinates and $i_{j}$, $o_{j}$ are input and output waveguides, respectively.}
\label{fig:srawg}
\end{figure}
\section{\label{sec:model}R-AWG theoretical model}
The schematic view of a reflective AWG (R-AWG) shown in Fig.~\ref{fig:srawg} is used as reference for the equations in this section. As in a regular AWG, the layout consists of a group of input and output waveguides connected to one side of the (single) slab coupler. Each of the arrayed waveguides, which are connected to the opposite side of the slab coupler, is terminated with a SLR. The lengths of consecutive waveguides in the array differ by a constant amount \cite{smit}. The layout in Fig.~\ref{fig:srawg} includes a phase shifter (PS) section in between the waveguides in the array and the SLRs, the purpose of which will be detailed later on.
Hence, this configuration allows for adjusting independently the field amplitude and phase in each AW. Though the operation is similar to that of a regular AWG, it is summarized here for completeness. The field introduced through an input waveguide is diffracted in the slab coupler, and collected by the arrayed waveguides on the opposite side. Then the light travels forth and back through each individual and independent AW, PS and SLR. The reflection (amplitude and phase) in each SLR can be different, depending on the coupling constant of its coupler. The overall reflected field reaching back to the slab coupler is then diffracted towards the input/output waveguides. The phase relations between AWs determine the R-AWG behaviour. In the simplest case, a constant phase difference between consecutive propagation paths in the array will spatially separate the different wavelengths on the input/output side of the slab coupler.
\subsection{Elements}
Though the SLR is well known, further details are given here for completeness; a reference layout is shown in Fig.~\ref{fig:sagnac_1coupler}.
\begin{figure}
\centering
\subfloat[] {
\label{fig:sagnac_1coupler}
\includegraphics[width=0.35\textwidth]{sagnac.eps}}
\hspace{0.1\textwidth}
\subfloat[] {
\label{fig:sagnac_2couplers}
\includegraphics[width=0.35\textwidth]{sagnac_2c.eps}}
\caption{Sagnac Loop Reflector (a) and SLR analysis as two serial couplers (b). Abbreviations: $i$ and $o$ stand for input and output waveguides, respectively. $K$ stands for coupling constant and $L$ stands for loop length.}
\label{fig:sagnac}
\end{figure}
The transfer matrix of the coupler in the SLR can be expressed as:
\begin{equation}
\begin{pmatrix} o_{0} \\ o_{1} \end{pmatrix} = \begin{pmatrix} \sqrt{1-k} & j\sqrt{k} \\ j\sqrt{k} & \sqrt{1-k} \end{pmatrix} \begin{pmatrix} i_{0} \\ i_{1} \end{pmatrix}
\end{equation}
Hence, it is possible to analyze the SLR as two couplers connected in series, as shown in Fig.~\ref{fig:sagnac_2couplers}. For the case where only the input $i_{0}$ is used, the transfer functions are:
\begin{equation}
o'_{0} = 2j\sqrt{\left( 1-k \right) k} e^{-j\beta L}i_{0}
\end{equation}
\begin{equation}
o'_{1} = \left( 1-2k \right) e^{-j \beta L}i_{0}
\end{equation}
which reproduce the well-known total reflection back to $i_{0}$ when the coupling constant is set to $k = 0.5$. Note that the equations include a phase change due to the length of the loop, $L$. For coupling constants $k$ other than 0.5, the reflected power will be less than 100\%.
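These relations can be checked numerically. The following sketch, under assumed values for $\beta$ and the loop length (the latter taken from the design example in Section \ref{sec:sims}), verifies power conservation of the lossless model and the total reflection at $k=0.5$:

```python
import numpy as np

# Numerical check of the SLR transfer functions: power conservation of the
# lossless model and total reflection at k = 0.5. The values of beta and the
# loop length L are illustrative (n_c = 2.67 and L from the design example).
def slr_response(k, beta, L):
    """Reflected (o0') and transmitted (o1') amplitudes for input i0."""
    phase = np.exp(-1j * beta * L)            # propagation around the loop
    r = 2j * np.sqrt((1 - k) * k) * phase     # reflection back to i0
    t = (1 - 2 * k) * phase                   # transmission to i1
    return r, t

beta = 2 * np.pi * 2.67 / 1.55e-6    # assumed effective index at 1550 nm
L = 31.4e-6                          # SLR loop length of the design example

for k in (0.1, 0.3, 0.5):
    r, t = slr_response(k, beta, L)
    assert abs(abs(r) ** 2 + abs(t) ** 2 - 1.0) < 1e-12   # lossless coupler

r, t = slr_response(0.5, beta, L)
assert abs(abs(r) ** 2 - 1.0) < 1e-12 and abs(t) < 1e-12  # total reflection
```

Sweeping $k$ in this model gives the reflected power $4(1-k)k$ used later to shape the field in the array.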
The coupler for the SLR can be implemented in multiple ways: directional coupler (DC) \cite{saleh}, wavelength insensitive coupler (WINC) \cite{jinguji} or multimode interference coupler (MMI) \cite{soldano}. The reflectors for the proposed R-AWG need to be broadband, i.e. the coupling constant needs to be constant over a wide range of wavelengths; therefore, only the WINC and the MMI satisfy this condition. Moreover, footprint considerations lead to the selection of the MMI over the WINC, since the latter is in general larger than the former. Finally, MMIs can be designed to have an arbitrary coupling constant, as described in \cite{bachmann, besse}. Different coupling constants may ultimately result in different MMI lengths/phase shifts.
The purpose of the formerly introduced PS sections is to compensate the phase imbalances between AWs due to the different phase shifts/coupling constants/reflections of the SLRs in the different AWs. As with the coupler, the phase shifter is required to be broadband. This is possible by means of regular straight waveguides, or by tapered waveguides as in \cite{jeong}.
\subsection{Principle of operation}
The principle of operation of the AWG requires that the phase shift ($\Delta \phi$) between two consecutive AWs is an integer number ($m$) of times $2\pi$.
In the most general case of Fig.~\ref{fig:srawg}, the total phase shift the light undergoes in each AW can be given by:
\begin{equation}
\phi_{i} = \phi_{w,i} + \phi_{PS,i} + \phi_{SLR,i}
\end{equation}
where $i$ is the number of the waveguide and subscripts $w$, $PS$ and $SLR$ stand for waveguide, Phase Shifter and Sagnac Loop Reflector, respectively. Consider the field in an input waveguide, placed for simplicity at the center of the input side of the slab coupler, and approximated by the following normalized Gaussian function:
\begin{equation}
b_{i} \left( x_{0} \right) = \sqrt[4]{\frac{2}{\pi \omega_{i}^{2}}} e^{- \left( \frac{x_{0}}{\omega_{i}} \right)^{2}}
\end{equation}
where $\omega_{i}$ is the mode field radius and $x_{0}$ the spatial coordinate at the input plane. This field is radiated to the AW side of the slab coupler, where the light spatial distribution can be obtained as the spatial Fourier transform of the input profile, using the paraxial approximation \cite{goodman}:
\begin{equation}
B_{i} \left( x_{1} \right) = \mathcal{F} \left. \left\lbrace b_{i} \left( x_{0} \right) \right\rbrace \right|_{u=\frac{x_{1}}{\alpha}} = \sqrt[4]{2\pi \frac{\omega_{i}^{2}}{\alpha^{2}}} e^{- \left( \pi \omega_{i} \left( \frac{x_{1}}{\alpha} \right) \right)^{2}}
\end{equation}
where $u$ is the spatial frequency domain variable of the Fourier transform and $\alpha$ is the equivalent to the wavelength focal length product in Fourier optics propagation, expressed as:
\begin{equation}
\alpha = \frac{cL_{f}}{n_{s}\nu}
\end{equation}
where $L_{f}$ is the slab length, $n_{s}$ the effective index of the slab coupler mode, $\nu$ the frequency and $c$ the speed of light in vacuum. The total field distribution for an arbitrary number of AWs illuminated in this way, placed at the $x_{1}$ plane, is:
\begin{equation}
f_{1} \left( x_{1} \right) = \sqrt[4]{2\pi \omega_{g}^{2}} \sum_{r} B_{i} \left( r d_{\omega} \right) b_{g} \left( x_{1}-rd_{\omega} \right)
\end{equation}
where $r$ is the AW number, $d_{\omega}$ is the AW spacing, $\omega_{g}$ is the mode field radius and $b_{g} \left( x \right)$ is the field profile of the AWs. Particularizing for a set of an arbitrary number $N$ of waveguides:
\begin{equation} \label{eq:f1}
f_{1} \left( x_{1} \right) = \sqrt[4]{2\pi \omega_{g}^2} \left[ \prod \left( \frac{x_{1}}{Nd_{\omega}} \right) B_{i} \left( x_{1} \right) \sum_{r=-\infty}^{+\infty} \delta \left( x_{1} - rd_{\omega} \right) \right] \otimes b_{g} \left( x_{1} \right)
\end{equation}
with $\otimes$ denoting convolution, and $\prod \left( x_{1} / Nd_{\omega} \right)$ being a truncation function defined as:
\begin{equation}
\prod \left( \frac{x_{1}}{Nd_{\omega}} \right) = \left\{ \begin{array}{ll}
1, & \mbox{if $\left| x_{1} \right| \leq \frac{Nd_{\omega}}{2}$}\\
0, & \mbox{otherwise} \end{array} \right.
\end{equation}
Let the length of waveguide number $r$ be given by:
\begin{equation}
l_{r} = \frac{l_{0}}{2} + \frac{\Delta l}{2} \left( r + \frac{N}{2} \right)
\end{equation}
where $l_{0}/2$ is the (base) length of the shortest waveguide. The incremental length between consecutive arrayed waveguides ($\Delta l/2$) is set such that $\Delta l$ is an integer multiple $m$, known as the AWG grating order, of the central design wavelength $\lambda_{0}$ inside the waveguide:
\begin{equation}
\Delta l = \frac{m\lambda_{0}}{n_{c}}
\end{equation}
where $n_{c}$ is the effective index of the waveguides. This value of $\Delta l$ ensures that the lightwave from a central input waveguide will be focused on a central output waveguide in a regular AWG for $\lambda_0$. The phase shift introduced by waveguide $r$ will be:
\begin{equation}
\Delta \phi_{r} = \beta l_{r} = 2 \pi \frac{n_{c}}{c} \nu l_{r}
\end{equation}
where $\beta$ is the propagation constant of the (single) mode in the waveguide. The latter term is introduced in Eq. (\ref{eq:f1}) to obtain:
\begin{equation} \label{eq:f1p}
f'_{1} \left( x_{1}, \nu \right) = \sqrt[4]{2\pi \omega_{g}^2} \left[ \prod \left( \frac{x_{1}}{Nd_{\omega}} \right) B_{i} \left( x_{1} \right) \phi \left( x_{1}, \nu \right) \sum_{r=-\infty}^{+\infty} \delta \left( x_{1} - rd_{\omega} \right) \right] \otimes b_{g} \left( x_{1} \right)
\end{equation}
where $\phi \left( x_{1}, \nu \right)$ is defined as:
\begin{equation}
\phi \left( x_{1}, \nu \right) = \psi \left( \nu \right) e^{-j\pi m \frac{\nu}{\nu_{0}} \frac{x_{1}}{d_{\omega}}}
\end{equation}
\begin{equation}
\psi \left( \nu \right) = e^{-j 2 \pi \nu \left( \frac{n_{c} l_{0}}{2c} + \frac{mN}{4\nu_{0}} \right)}
\end{equation}
Eq. (\ref{eq:f1p}) shows the field at the input of the phase shifters. Both the PS and the SLR will introduce an additional phase shift, and the SLR an amplitude change, hence:
\begin{equation}
\begin{aligned}
f''_{1} \left( x_{1}, \nu \right) = \sqrt[4]{2\pi \omega_{g}^2} \Biggl[ \prod \left( \frac{x_{1}}{Nd_{\omega}} \right) & B_{i} \left( x_{1} \right) \phi \left( x_{1}, \nu \right) \\
& \sum_{r=-\infty}^{+\infty} \delta \left( x_{1} - rd_{\omega} \right) e^{-j \psi_{PS,r} \left( \nu \right)} j A_{r} e^{-j\beta l_{SLR,r}} \Biggr] \otimes b_{g} \left( x_{1} \right)
\end{aligned}
\end{equation}
where $A_{r}$ is the SLR amplitude term given by:
\begin{equation}
A_{r} = 2 \sqrt{\left( 1-k_{r} \right) k_{r}}
\end{equation}
and $\psi_{PS,r} \left( \nu \right)$ is the phase shift introduced by the PS, $k_{r}$ is the SLR coupling constant and $l_{SLR,r}$ is the length of the loop waveguide within the SLR. The reflected field from the SLRs at the plane $x_{2}$, which is the same as the plane $x_{1}$ in a R-AWG, is given by:
\begin{equation} \label{eq:f2}
\begin{aligned}
f_{2} \left( x_{2}, \nu \right) = \sqrt[4]{2\pi \omega_{g}^2} \Biggl[ \prod \left( \frac{x_{2}}{Nd_{\omega}} \right) &B_{i} \left( x_{2} \right) \phi' \left( x_{2}, \nu \right) \\
&\sum_{r=-\infty}^{+\infty} \delta \left( x_{2} - rd_{\omega} \right) e^{-2j \psi_{PS,r} \left( \nu \right)} j A_{r} e^{-j\beta l_{SLR,r}} \Biggr] \otimes b_{g} \left( x_{2} \right)
\end{aligned}
\end{equation}
where now the phase term $\phi' \left( x_{2}, \nu \right)$ is:
\begin{equation}
\phi' \left( x_{2}, \nu \right) = \psi' \left( \nu \right) e^{-j2\pi m \frac{\nu}{\nu_{0}} \frac{x_{2}}{d_{\omega}}}
\end{equation}
\begin{equation}
\psi' \left( \nu \right) = e^{-j 2 \pi \nu \left( \frac{n_{c} l_{0}}{c} + \frac{mN}{2\nu_{0}} \right)}
\end{equation}
The field at the plane $x_{3}$ (which is the same as $x_{0}$ in a R-AWG) can be calculated using the spatial Fourier transform as:
\begin{equation} \label{eq:f3}
f_{3} \left( x_{3}, \nu \right) = \mathcal{F} \left. \left\lbrace f_{2} \left( x_{2}, \nu \right) \right\rbrace \right|_{u=\frac{x_{3}}{\alpha}}
\end{equation}
Contrary to our previous model in \cite{pascual}, where a closed analytical solution for the field at the output plane was derived, no straightforward closed analytical solution is possible in the general case, due to the arbitrary phase shift in each AW. Nonetheless, the previous equation is the basis for the particular cases derived in the next paragraphs. In any case, the frequency response at output waveguide $q$ can be calculated through the following overlap integral:
\begin{equation}
\label{eq:tq}
t_{q} \left( x_{3} \right) = \int_{-\infty}^{+\infty} f_{3} \left( x_{3},\nu \right) b_{0} \left( x_{3}- qd_{o} \right) \mathrm{d} x_{3}
\end{equation}
where $d_{o}$ is the spacing between, and $b_0\left(x_3\right)$ is the field profile of, the output waveguides.
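The overlap integral of Eq. (\ref{eq:tq}) can be sketched numerically for Gaussian modes; the mode radius and output spacing below are illustrative assumptions, and for two identical normalized Gaussians offset by $d$ the overlap is $e^{-d^{2}/(2\omega^{2})}$:

```python
import numpy as np

# Overlap of a focused Gaussian field with Gaussian output-waveguide modes.
# Mode radius w and output spacing d_o are assumed illustrative values.
w = 1.0e-6
d_o = 2.0e-6
x3 = np.linspace(-20e-6, 20e-6, 8001)
dx = x3[1] - x3[0]

def mode(center):
    """Normalized Gaussian output-waveguide mode centred at `center`."""
    return (2 / (np.pi * w**2)) ** 0.25 * np.exp(-(((x3 - center) / w) ** 2))

f3 = mode(0.0)                       # focused field, ideally centred on output q = 0
t = [np.sum(f3 * mode(q * d_o)) * dx for q in range(3)]

assert abs(t[0] - 1.0) < 1e-6        # aligned output collects (nearly) all the field
assert t[0] > t[1] > t[2]            # neighbouring outputs receive progressively less
```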
\subsubsection{\label{subsec:gaussian}Gaussian spectral response}
The basic case of the R-AWG concept introduced in this paper is when all the SLRs are identical and provide total reflection, i.e. coupling constant $k=0.5$. Since the SLRs are ideally identical, no phase shifters are required in this configuration. The layout for this configuration is the same as in Fig.~\ref{fig:srawg}, but with the phase shifters removed; all the SLRs are identical, and the lengths of consecutive AWs differ by an incremental length $\Delta l/2$.
The equations can be particularized to this case, i.e. Eq. (\ref{eq:f2}) can be rewritten as:
\begin{equation} \label{eq:f2_sinsinc}
f_{2} \left( x_{2}, \nu \right) = j \sqrt[4]{2\pi \omega_{g}^2} e^{-j \beta l_{SLR}} \left[ \prod \left( \frac{x_{2}}{Nd_{\omega}} \right) B_{i} \left( x_{2} \right) \phi' \left( x_{2}, \nu \right) \sum_{r=-\infty}^{+\infty} \delta \left( x_{2} - rd_{\omega} \right) \right] \otimes b_{g} \left( x_{2} \right)
\end{equation}
and the field at the output plane ($x_{3}$) described using Eq. (\ref{eq:f3}) as:
\begin{equation}
\begin{aligned}
f_{3} \left( x_{3}, \nu \right) = j \sqrt[4]{2\pi \frac{\omega_{g}^{2}}{\alpha^{2}}} B_{g} \left( x_{3} \right) e^{-j \beta l_{SLR}} \Biggl[ b_{i} \left( x_{3} \right) &\otimes \mathrm{sinc} \left( N d_{\omega} \frac{x_{3}}{\alpha} \right) \\
& \otimes \Phi \left( x_{3}, \nu \right) \otimes \sum_{r=-\infty}^{+\infty} \delta \left( x_{3} -r\frac{\alpha}{d_{\omega}} \right) \Biggr]
\end{aligned}
\end{equation}
where the different terms therein are given by:
\begin{equation}
b_{i} \left( x_{3} \right) = \mathcal{F} \left. \left\lbrace B_{i} \left( x_{2} \right) \right\rbrace \right|_{u=\frac{x_{3}}{\alpha}}
\end{equation}
\begin{equation}
B_{g} \left( x_{3} \right) = \mathcal{F} \left. \left\lbrace b_{g} \left( x_{2} \right) \right\rbrace \right|_{u=\frac{x_{3}}{\alpha}}
\end{equation}
\begin{equation}
\Phi \left( x_{3}, \nu \right) = \mathcal{F} \left. \left\lbrace \phi' \left( x_{2}, \nu \right) \right\rbrace \right|_{u=\frac{x_{3}}{\alpha}} = \psi' \left( \nu \right) \delta \left( x_{3} + \frac{\alpha m}{d_{\omega} \nu_{0}} \nu \right)
\end{equation}
and therefore,
\begin{equation} \label{eq:f3_sinsinc}
f_{3} \left( x_{3}, \nu \right) = j \sqrt[4]{2\pi \frac{\omega_{g}^{2}}{\alpha^{2}}} B_{g} \left( x_{3} \right) e^{-j \beta l_{SLR}} \psi' \left( \nu \right) \sum_{r=-\infty}^{+\infty} f_{M} \left( x_{3} -r\frac{\alpha}{d_{\omega}} + \frac{\nu}{\gamma} \right)
\end{equation}
where $\gamma = d_{\omega} \nu_{0} / \left( \alpha m \right)$, following the convolution with $\Phi \left( x_{3}, \nu \right)$, and $f_{M} \left( x_{3} \right)$ is defined as:
\begin{equation}
f_{M} \left( x_{3} \right) = \mathrm{sinc} \left( N d_{\omega} \frac{x_{3}}{\alpha} \right) \otimes b_{i} \left( x_{3} \right)
\end{equation}
Recall that during the first steps in the derivation of the model we assumed, for simplicity, that the input waveguide is centered with respect to the slab coupler. Nonetheless, Eq. (\ref{eq:f3_sinsinc}) can be readily extended to account for the other input positions, i.e. for the field at input waveguide number $p$, the input field can be expressed as follows:
\begin{equation} \label{eq:bip}
b_{i,p} \left( x_{0} \right) = \sqrt[4]{\frac{2}{\pi \omega_{i}^{2}}} e^{- \left( \frac{x_{0}-pd_{i}}{\omega_{i}} \right)^{2}} = b_{i} \left( x_{0} - pd_{i} \right)
\end{equation}
where $d_{i}$ is the distance between the input waveguides. Then, rewriting Eq. (\ref{eq:f3_sinsinc}) using (\ref{eq:bip}):
\begin{equation} \label{eq:f3_p}
f_{3,p} \left( x_{3}, \nu \right) = j \sqrt[4]{2\pi \frac{\omega_{g}^{2}}{\alpha^{2}}} B_{g} \left( x_{3} \right) e^{-j \beta l_{SLR}} \psi' \left( \nu \right) \sum_{r=-\infty}^{+\infty} f_{M} \left( x_{3} + pd_{i}-r\frac{\alpha}{d_{\omega}} + \frac{\nu}{\gamma} \right)
\end{equation}
In conclusion, the functionality of the R-AWG and the AWG can be described by similar formulations \cite{pascual}. One important difference in the case of a R-AWG is the positioning of the input/output waveguides, which has implications for the selection of the central design wavelength; however, this can be readily accounted for during design, as described in \cite{peralta}. The dispersion angle ($\theta$) with respect to the center of the slab coupler is given by \cite{smit}:
\begin{equation}
\label{eq:theta}
\theta = \arcsin \left( \frac{\beta \Delta l - m2\pi}{\beta_{S} d_{\omega}} \right)
\end{equation}
where $\beta$ and $\beta_{S}$ are the propagation constants of the AW and slab modes, respectively, and $d_{\omega}$ is the spacing between AWs.
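The dispersion angle of Eq. (\ref{eq:theta}) can be evaluated numerically; in the sketch below the effective indices follow the design example, while the grating order and AW spacing are assumptions, and the wavelength dependence of the indices is neglected:

```python
import math

# Dispersion angle theta(lambda): zero at the design wavelength and with
# opposite signs on either side of it. m and d_w are assumed values.
lam0 = 1.55e-6
n_c, n_s = 2.67, 2.83      # effective indices (AW / slab) of the design example
m = 54                     # assumed grating order
d_w = 1.5e-6               # assumed AW spacing at the slab interface
dl = m * lam0 / n_c        # incremental length, Delta_l = m*lambda_0/n_c

def theta(lam):
    beta = 2 * math.pi * n_c / lam        # AW propagation constant
    beta_s = 2 * math.pi * n_s / lam      # slab propagation constant
    return math.asin((beta * dl - 2 * math.pi * m) / (beta_s * d_w))

assert abs(theta(lam0)) < 1e-9                        # lambda_0 exits on axis
assert theta(lam0 - 1e-9) > 0 > theta(lam0 + 1e-9)    # dispersion around lambda_0
```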
For the positioning of the input/output waveguides in the R-AWG, the wavelength routing properties of the AWG need to be observed \cite{takahashi}. Let $\lambda_{p,q}$ be the wavelength routed from input $p$ to output $q$. Changing the input position, for instance to $p-p'$, will route the same $\lambda_{p,q}$ to output $q+q'$, with $p'=q'$, provided the positions of the inputs/outputs correspond to the same wavelength displacement given by the derivative of Eq. (\ref{eq:theta}) (see \cite{smit}). Fig.~\ref{fig:srawg} shows a layout for N inputs and M outputs, accounting for these routing properties. The central input waveguide $p=0$ is placed a certain distance to the left of the center of the slab. Therefore, the central output waveguide $q=0$ needs to be placed the same distance to the right of the center.
\subsubsection{Flat spectral response} \label{subsubsec:flat}
There are different techniques to flatten the spectral response of an AWG, amongst them the use of parabolic waveguide horns \cite{okamoto_horn}, MMIs \cite{pascual_mmi} and interferometers \cite{doerr_mzi} at the input/output waveguides. Another technique proposes the modification of the amplitude and phase in the AWs to obtain a sinc field profile \cite{okamoto}. The latter builds upon the signal theory duality between the fields at both sides of the slab coupler, related through the (spatial) Fourier transform. To obtain a box-like field pattern at the output side of the slab coupler, through the diffracted (i.e. Fourier transformed) field, a sinc distribution is required in the AWs \cite{okamoto_book, okamoto}. As mentioned in the introduction, the R-AWG concept proposed in this paper allows for the modification of the phase front by means of the phase shifters, while the amplitude can be adjusted by means of the SLRs. Recall that the Fourier transform of a $\Pi$ function is:
\begin{equation}
\mathcal{F} \left. \left\lbrace \prod \left( \frac{x}{A} \right) \right\rbrace \right|_{u=\frac{y}{\alpha}} = A \textrm{sinc} \left( A \frac{y}{\alpha} \right)
\end{equation}
where $A$ is the rectangle width, and $x$, $y$ the spatial variables. Therefore, the field at the plane $x_{2}$ will be modified to adjust it to a sinc function, as described in \cite{okamoto_book}. In our formulation the adjustment can be incorporated through the terms in Eq. (\ref{eq:f2}), to be precise $2j \sqrt{\left(1-k_{r} \right) k_{r}} e^{-j2\psi_{PS,r} \left( \nu \right)} e^{-j \beta l_{SLR,r} \left( \nu \right)}$, from which the following amplitude and phase conditions are derived to turn the input far-field Gaussian profile $B_{i} \left( x_{2} \right)$ into a sinc function. The amplitude condition is written as follows:
\begin{equation} \label{eq:amp_condition}
B_{i} \left( r d_{w} \right) \sqrt{\left(1-k_{r} \right) k_{r}} = \left| \mathrm{sinc} \left( a \frac{rd_{\omega}}{\alpha} \right) \right|
\end{equation}
where $a$ is the width of the rectangular function that will be obtained through the Fourier transform, as detailed below. In addition, the following phase condition is required to turn the all-positive values of the input far-field Gaussian into negative ones ($\pi$ shift) where needed:
\begin{equation} \label{eq:phase_condition}
2 \psi_{PS,r} \left( \nu \right) + \beta l_{SLR,r} = \left\{ \begin{array}{lll}
0, & \mbox{if $\frac{2n\alpha}{a} \leq \left| rd_{\omega} \right| \leq \frac{\left(2n+1\right)\alpha}{a}$} & \mbox{with n=0,1,...}\\
\pi, & \mbox{otherwise} \end{array} \right.
\end{equation}
Under these conditions, the sinc field profile for the AWs can be introduced in Eq. (\ref{eq:f2}) for $x_{2}$, resulting in:
\begin{equation} \label{eq:f2_sinc}
f_{2} \left( x_{2}, \nu \right) = j \sqrt[4]{2\pi \omega_{g}^2} \left[ \prod \left( \frac{x_{2}}{Nd_{\omega}} \right) B_{i} \left( x_{2} \right) \phi' \left( x_{2}, \nu \right) \sum_{r=-\infty}^{+\infty} \delta \left( x_{2} - rd_{\omega} \right) \mathrm{sinc} \left( a \frac{rd_{\omega}}{\alpha} \right) \right] \otimes b_{g} \left( x_{2} \right)
\end{equation}
Finally, the field at the plane $x_{3}$ calculated through the spatial Fourier transform of Eq. (\ref{eq:f2_sinc}) is given by:
\begin{equation}
f_{3,p} \left( x_{3}, \nu \right) = j \sqrt[4]{2\pi \frac{\omega_{g}^{2}}{\alpha^{2}}} B_{g} \left( x_{3} \right) \psi' \left( \nu \right) \sum_{r=-\infty}^{+\infty} f'_{M} \left( x_{3} + pd_{i}-r\frac{\alpha}{d_{\omega}} + \frac{\nu}{\gamma} \right)
\end{equation}
where $f'_{M} \left( x_{3} \right)$ is in this case:
\begin{equation} \label{eq:fmp}
f'_{M} \left( x_{3} \right) = \mathrm{sinc} \left( N d_{\omega} \frac{x_{3}}{\alpha} \right) \otimes \frac{\alpha}{a} \prod \left( \frac{x_{3}}{a} \right)
\end{equation}
An important aspect for the design of flat-top AWGs following this technique is the number of zeros of the sinc field distribution across the AWs. Eq. (\ref{eq:fmp}) describes the field shape at the output plane, the second term in the equation being the rectangular profile obtained from the sinc field distribution in the AWs. Fig.~\ref{fig:rect_sinc} illustrates the relation between the positions of the sinc zeros and the width of the obtained dual rectangular function.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{rect_sinc.eps}
\caption{Rectangular and sinc functions.}
\label{fig:rect_sinc}
\end{figure}
In general, there is a trade-off between the desired channel flatness and the acceptable channel width increase, which is inversely proportional to the number of zeros of the sinc field distribution in the AWs. Furthermore, the use of a more ``compressed'' sinc will reduce the amplitude of the obtained dual rectangular function, i.e. the AWG will have a flatter response, but with more peak insertion loss.
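The shape predicted by Eq. (\ref{eq:fmp}) can be illustrated numerically: convolving the truncation sinc set by the $N$ AWs with the target rectangle yields a flat-top profile with ripple and smoothed edges. $N$, $L_{f}$, $n_{s}$ and $a$ follow the design example of the next section; the AW spacing is an assumption:

```python
import numpy as np

# Convolution of the array-truncation sinc with a rectangle of width a,
# as in Eq. (fmp). The absolute scale is irrelevant; only ratios are checked.
N, d_w = 57, 1.5e-6
lam0, L_f, n_s = 1.55e-6, 217.37e-6, 2.83
alpha = lam0 * L_f / n_s
a = 3.5e-6

x3 = np.linspace(-15e-6, 15e-6, 6001)
dx = x3[1] - x3[0]
array_sinc = np.sinc(N * d_w * x3 / alpha)        # FT of the array truncation
rect = np.where(np.abs(x3) <= a / 2, 1.0, 0.0)    # target rectangle of width a
fM = np.convolve(array_sinc, rect, mode="same") * dx * alpha / a

centre = fM[np.abs(x3) < 0.15 * a]
edge = fM[np.argmin(np.abs(x3 - 2 * a))]
assert np.ptp(centre) / centre.max() < 0.1        # flat centre with small ripple
assert abs(edge) < 0.05 * centre.max()            # strong suppression off-band
```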
\section{\label{sec:sims}Simulations}
In this section, design cases with transfer function simulations using the equations above (similar to previously validated models \cite{munoz_validation, leijtens}) are presented. Although SLRs can be implemented in nearly all integration technologies \cite{munoz_icton2013}, the footprint advantage will only hold in those where the confinement in the optical waveguides is strong, i.e. where the bend radius can be small. Amongst the different waveguide technologies available, the smallest bend radius is obtained in Silicon-on-Insulator (it can be less than 5 $\mu$m). Therefore, the case provided in the following is for SOI technology, with a 220-nm-thick Si guiding layer on a SiO$_{2}$ substrate with no cladding. The effective indices, calculated using commercial software, are 2.67 in the arrayed waveguides ($n_{c}$) (waveguides 0.8 $\mu$m wide to minimize phase errors, see \cite{bogaerts}) and 2.83 in the slab coupler ($n_{s}$). The R-AWG parameters are the following: the center wavelength is 1550 nm, with 6 channels, a channel spacing of 1.6~nm and a FSR of 19.2~nm. The calculated focal length is 217.37~$\mu$m, the incremental length between AWs is 31.38~$\mu$m and the number of AWs is 57. The bend radius was set to 5 $\mu$m, and the SLR loop length was set to a circumference of that radius, 31.4~$\mu$m. The design makes use of a single input waveguide $i_0$ placed at the center position of the slab coupler input/output side. Consequently, the output waveguides are divided into two halves, each on a different side of the input waveguide. This will result in a wavelength displacement of half a channel (0.8 nm) in the output spectra, with respect to the design wavelength \cite{takahashi}. The motivation to use this special input/output configuration is the fact that a Rowland mounting is used as the input/output plane.
In case the input waveguide is displaced from the center, additional fine-tuning techniques are required to compensate for the non-uniformities that arise, for example the modification of the angle and position of the waveguides, as described in \cite{beelen_patent}.
\subsection{Gaussian and flattened responses}
The first set of simulations is for a Gaussian device as described in Section \ref{subsec:gaussian}. The response was calculated without phase shifters and assuming all SLRs have an ideally identical coupling constant $k=0.5$, i.e. total reflection. Fig.~\ref{fig:rawg_sinsinc_aw} shows the Gaussian field distribution at the plane $x_{2}$, obtained as the summation of all the AW contributions through Eq. (\ref{eq:f2_sinsinc}). The corresponding end-to-end transfer function for this R-AWG is depicted in Fig.~\ref{fig:rawg_sinsinc_t}.
Note the simulations show losses of approximately 1~dB for the central channel and lower than 2~dB for the side channels. We did not include the propagation loss in the waveguides for SOI (typically around 4 dB/cm) or other detrimental effects such as fabrication imperfections. The actual peak insertion loss of a regular SOI AWG can be as low as 4-5 dB. From the simulation, the 1-dB, 3-dB and 20-dB bandwidths are 0.37~nm, 0.65~nm and 1.64~nm, respectively.
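Bandwidth figures like these can be read off a simulated channel programmatically. The sketch below does so for an ideal Gaussian passband with the simulated 3-dB width; the Gaussian stand-in is an assumption, not the actual simulated curve:

```python
import numpy as np

# Read the 1/3/20-dB bandwidths off a channel response. An ideal Gaussian
# passband with the simulated 3-dB width (0.65 nm) is used as a stand-in.
lam = np.linspace(1548e-9, 1552e-9, 40001)
lam_c, bw3 = 1550e-9, 0.65e-9
t_db = -3.0 * ((lam - lam_c) / (bw3 / 2)) ** 2    # Gaussian channel in dB

def bandwidth(drop_db):
    """Full width of the channel at `drop_db` below the peak."""
    inside = lam[t_db >= -drop_db]
    return inside[-1] - inside[0]

bw1, bw20 = bandwidth(1.0), bandwidth(20.0)
# for a Gaussian passband the width scales with the square root of the dB drop
assert abs(bw1 / bw3 - (1 / 3) ** 0.5) < 0.01
assert abs(bw20 / bw3 - (20 / 3) ** 0.5) < 0.01
```

The simulated values follow this square-root scaling only approximately, since the actual channel shape deviates from a pure Gaussian in the wings.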
\begin{figure}
\centering
\subfloat[]{
\label{fig:rawg_sinsinc_aw}
\includegraphics[width=0.48\textwidth]{field_aw_sinsinc.eps}}
\subfloat[]{
\label{fig:rawg_sinsinc_t}
\includegraphics[width=0.48\textwidth]{t_output_sinsinc.eps}}
\caption{Gaussian R-AWG simulation with 1 input and 6 outputs. (a) Field at the arrayed waveguides. (b) Transfer function from $i_{0}$ to the output waveguides.}
\label{fig:rawg_7io_sim}
\end{figure}
The second set of simulations employs the same physical parameters, but for a R-AWG with a sinc field distribution in the AWs, resulting in a flattened spectral response. A sinc profile with $a = 3.5~\mu$m is incorporated. From this distribution, the required coupling constant $k_{r}$ for each SLR is calculated using Eq. (\ref{eq:amp_condition}). Note the use of a different coupler in each AW may introduce a different phase shift in each arm \cite{besse}, as mentioned in the introduction. For simplicity, this phase shift has not been included in Eq. (\ref{eq:phase_condition}), since it can be compensated through the phase shifters. Fig.~\ref{fig:field_aw_sinc35} shows the field distribution at the plane $x_{2}$ (blue trace), this field being the summation of all the AW contributions. On the same figure, the sinc function applied is shown as a green line. Moreover, the secondary axis shows, as red crosses, the required coupling constant for each SLR to obtain the sinc profile.
\begin{figure}
\centering
\subfloat[]{
\label{fig:field_aw_sinc35}
\includegraphics[width=0.48\textwidth]{field_aw_sinc35.eps}}
\subfloat[]{
\label{fig:t_output_sinc35}
\includegraphics[width=0.48\textwidth]{t_output_sinc35.eps}}
\caption{Flat-top R-AWG using a sinc field distribution at the arrayed waveguides. (a) Field at the arrayed waveguides (blue), the sinc profile applied (green) and SLR coupling constant $k_{r}$ in each arm of the array (red crosses). (b) Transfer function from $i_0$ to the output waveguides. (Both for a sinc distribution with parameter $a=3.5~\mu$m).}
\label{fig:t_output}
\end{figure}
As described in the previous section, to obtain a wider rectangular function, a more compressed sinc function at the AWs is required. However, widening comes at the expense of increased channel insertion loss. This can also be understood by comparing Fig.~\ref{fig:rawg_sinsinc_aw} and Fig.~\ref{fig:field_aw_sinc35}, from which it is clear that the sinc field distribution is attained in part by modifying the amplitude of the original Gaussian field distribution with partial reflectors, i.e. some signal is lost. The transfer function for the flat-top R-AWG is shown in Fig.~\ref{fig:t_output_sinc35}. The flat spectral response and the increased insertion losses are clearly noticeable. The obtained losses in this case are 5.6~dB and 6.7~dB for the central and side channels, respectively. The bandwidths at the points of interest are in this case 1.01~nm, 1.36~nm and 2.37~nm, for a 1-dB, 3-dB and 20-dB fall from the channel center, respectively. As expected, an increase in the channel bandwidth is attained at the expense of higher insertion losses.
\begin{figure}
\centering
\subfloat[]{
\label{fig:field_all}
\includegraphics[width=0.5\textwidth]{field_all.eps}}
\subfloat[]{
\label{fig:k_all}
\includegraphics[width=0.5\textwidth]{k_all.eps}}
\caption{(a) Field at the AWs when different profiles have been applied. (b) Required coupling factor parameter at the SLRs when applying each profile.}
\label{fig:field_k}
\end{figure}
\subsection{Arbitrary spectral responses}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{fosN_all.eps}
\caption{Field focused at the output plane when using the central wavelength ($\lambda_{0}$) for each different profile applied at the AWs.}
\label{fig:fosN_all}
\end{figure}
Although the semi-analytical model in the previous sections was only derived for the Gaussian and flattened response cases, in principle it is possible to apply any desired field distribution to the AWs, which will result in different spectral responses. In this subsection we present several field distributions and the corresponding spectral responses, which we take from well-known Fourier transform pairs. To be precise, we targeted triangular, decaying exponential, truncated cosine and Lorentzian spectral responses. Fig.~\ref{fig:field_k}\subref{fig:field_all} shows the required AW field distributions, i.e. at plane $x_{2}$. Note the legend labels refer to the target transform pair, not the actual function employed in the field distribution for the AWs. Detailed expressions can be found elsewhere, as for instance in \cite{ft_pairs}.
Similar to the previously shown case of the flattened response (sinc distribution at the AWs), the required coupling factors $k$ to be applied in each AW are shown in Fig.~\ref{fig:field_k}\subref{fig:k_all}. Note there is no plotted value for the Gaussian case, since all the SLRs use $k=0.5$ for full reflection. An additional important remark concerns how the regular Gaussian field distribution entering the array is transformed into the targeted one. In principle, some field distributions can have amplitudes higher than those of the Gaussian for some of the waveguides in the array. This would require amplification, which is not supported by the proposed SLR-based layout. Therefore, the targeted profile needs to be inscribed under the starting Gaussian profile; the amplitude in each AW is reduced to fit the profile inside the Gaussian, at the cost of additional insertion loss. This is visible in Fig.~\ref{fig:field_k}\subref{fig:field_all}, where all the field profiles at the AWs have amplitude levels below the starting Gaussian distribution.
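For illustration, the per-arm amplitude control can be translated into coupler settings. The sketch below assumes the standard lossless Sagnac-loop amplitude reflectivity $|r| = 2\sqrt{K(1-K)}$ for a coupler with power coupling ratio $K$ (maximal, $|r|=1$, at $K=0.5$, consistent with the full-reflection Gaussian case above); the 21-waveguide array and triangular target are illustrative, not taken from the designs in the text.

```python
import numpy as np

def coupling_for_amplitude(target, gaussian):
    """Power coupling ratio K of each SLR coupler so that the reflected
    amplitude equals `target`, given the incoming `gaussian` amplitude.
    Assumes a lossless Sagnac loop: |r| = 2*sqrt(K*(1-K)), maximal at K = 0.5."""
    r = np.asarray(target) / np.asarray(gaussian)  # required amplitude ratio
    if np.any(r > 1):
        raise ValueError("target profile must be inscribed under the Gaussian")
    # Invert 2*sqrt(K*(1-K)) = r on the branch K <= 0.5
    return (1.0 - np.sqrt(1.0 - r**2)) / 2.0

# Illustrative 21-waveguide array: Gaussian input, triangular target profile
x = np.linspace(-1, 1, 21)
gauss = np.exp(-2 * x**2)
tri = np.clip(1 - np.abs(x), 0, None)
K = coupling_for_amplitude(tri, gauss)
```

Since $|r| \le 1$, any target amplitude above the incoming Gaussian is unreachable without gain, which is exactly why the profile must be inscribed under the Gaussian at the cost of insertion loss.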
\begin{figure}
\centering
\includegraphics[height=0.4\textheight]{t2_all_lineal.eps}
\caption{Transfer function (linear) in one output waveguide for each different profile applied: (a) Gaussian, (b) rectangular, (c) triangular, (d) decaying exponential, (e) truncated cosine and (f) Lorentzian functions.}
\label{fig:t2_all_lineal}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=0.4\textheight]{t2_all.eps}
\caption{Transfer function (logarithmic) in one output waveguide for each different profile applied: (a) Gaussian, (b) rectangular, (c) triangular, (d) decaying exponential, (e) truncated cosine and (f) Lorentzian functions.}
\label{fig:t2_all}
\end{figure}
The flat spectral response case developed in Section \ref{subsubsec:flat} shows that the shape of the field at the output plane is given by Eq.~(\ref{eq:fmp}). For all the targeted spectral responses, this far field at the output plane is plotted in Fig.~\ref{fig:fosN_all} for $\lambda_{0}$. The far field is not exactly the Fourier transform pair of the AW field distribution in Fig.~\ref{fig:field_k}\subref{fig:field_all}. As expressed in Eq.~(\ref{eq:fmp}), the field profiles at $x_2$ have a finite extension (i.e. a finite number of AWs is employed); therefore, the field profiles are truncated, and the far field is in fact the convolution between a sinc function (the Fourier transform of the truncation function of the array) and the Fourier transform of the profile applied at the AWs. Among all the curves in Fig.~\ref{fig:fosN_all}, the triangular function case (red line) illustrates this fact most clearly. Ideally, for an infinite (impractical) number of AWs, one would expect a perfect (sharp) triangular shape, but in practice the truncation by a finite number of waveguides results in some smoothing of the curves. This is similar to the well-known problem of approximating a function by a Fourier series with a finite number of terms.
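This truncation smoothing is easy to reproduce numerically. The sketch below uses illustrative parameters only: a sinc$^2$ AW profile (whose ideal Fourier pair is a sharp triangle) and a zero-padded DFT standing in for the far-field integral, measuring how far the truncated-array far field deviates from the ideal triangle.

```python
import numpy as np

def truncation_error(n_aw, n_pad=4096):
    """Sup-norm gap between the far field produced by a sinc^2 AW profile
    truncated to n_aw waveguides and its ideal (infinite-array) Fourier
    pair, a sharp triangle."""
    m = np.arange(n_aw) - (n_aw - 1) / 2
    field = np.sinc(m / 8) ** 2                  # sinc^2 <-> triangular spectrum
    spec = np.abs(np.fft.fftshift(np.fft.fft(field, n_pad)))
    spec /= spec.max()
    f = (np.arange(n_pad) - n_pad // 2) / n_pad  # cycles per waveguide pitch
    ideal = np.clip(1 - 8 * np.abs(f), 0, None)  # triangle of half-width 1/8
    return np.max(np.abs(spec - ideal))
```

Increasing the number of waveguides shrinks the deviation; the residual rounding of the triangle's apex and base corners is the smoothing discussed above.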
In addition to this intrinsic smoothing, the corresponding end-to-end transfer function for the R-AWG involves the convolution integral between the (already smoothed) far field at the output plane and the mode of the output waveguide, as described by Eq.~(\ref{eq:tq}). The transfer functions are depicted in Fig.~\ref{fig:t2_all_lineal} and Fig.~\ref{fig:t2_all}, in linear and logarithmic units respectively, for one output waveguide, in this case o$_{3}$, placed at a distance of 2.24~$\mu$m from the slab center.
\section{\label{sec:conclusion}Conclusion and outlook}
This paper proposes a novel type of reflective Arrayed Waveguide Grating (R-AWG) that makes use of a configuration based on phase shifters and Sagnac Loop Reflectors (SLRs), built from an optical coupler with looped-back waveguides. The layout enables the control of the field amplitude and phase per arm in the array, through the combination of phase shifters and SLRs whose reflectivity is set by the coupling constant of the optical coupler. A theoretical model for the analysis and design of the device has been provided, both for the Gaussian and for the flattened response cases, the latter achieved by adjusting the AW field distribution to a sinc function by means of the SLRs and phase shifters. The model was used to design and simulate Silicon-on-Insulator implementations using typical waveguide cross-sections for this technology. This was presented for the Gaussian and flattened spectral response cases, as well as for different field distributions in the AWG arms that result, in principle, in arbitrarily customizable spectral responses.
We believe AW profiles can be found that pre-equalize the intrinsic spectral smoothing described above, in order to obtain spectral responses closer to the target. The processing of photonic signals in the wavelength domain, i.e. multi-wavelength spectral filtering/shaping, is just one of the possible applications of this versatile and novel R-AWG layout. We envisage more applications for which the field distribution in the AWs is determinant, such as the use of AWGs for pulse rate multiplication, where the envelope of the train of pulses generated by the AWG is directly dictated by the field distribution in the arms \cite{leaird}.
\section*{Acknowledgment}
The authors acknowledge financial support by the Spanish MICINN project TEC2010-21337, acronym ATOMIC, the MINECO project TEC2013-42332-P, acronym PIC4ESP, project FEDER UPVOV 10-3E-492 and project FEDER UPVOV 08-3E-008. B. Gargallo acknowledges financial support through FPI grant BES-2011-046100. The authors thank J.S. Fandi\~no for helpful discussions.
1903.08247
\section{Introduction}
We consider the average-case complexity of counting $k$-cliques in $s$-uniform Erd\H{o}s-R\'{e}nyi hypergraphs $G(n, c, s)$, where every $s$-subset of the $n$ vertices is a hyperedge independently with probability $c$. Our main result is a reduction for counting $k$-cliques on worst-case hypergraphs given a blackbox algorithm solving the problem on $G(n, c, s)$ with low error probability. Our approach is closely related to the recent work \cite{goldreich2018counting}, which showed a worst-case to average-case reduction for counting cliques for a particular efficiently-samplable distribution on graphs.
Our reduction yields two different sets of average-case lower bounds for counting $k$-cliques in graphs sampled from the natural distribution $G(n, c, s)$ in the dense and sparse cases of $c = \Theta(1)$ and $c = \Theta(n^{-\alpha})$, with tradeoffs between runtime and $c$.
We also show that these average-case lower bounds often match algorithmic upper bounds.
The complexity of clique problems on Erd\H{o}s-R\'{e}nyi random graphs has become a central topic in average-case complexity, discrete probability and high-dimensional statistics. A body of work has analyzed algorithms for finding large cliques in Erd\H{o}s-R\'{e}nyi graphs\footnote{In both ordinary Erd\H{o}s-R\'{e}nyi graphs and the planted clique model.} \cite{kuvcera1995expected,alon1998finding, feige2000finding, mcsherry2001spectral, feige2010finding, ames2011nuclear, dekel2014finding, deshpande2015finding, chen2016statistical}, and hardness results have been shown for
greedy algorithms \cite{karp1976, grimmett1975colouring, jerrum1992large, mcdiarmid1984colouring, pittel1982probable}, local algorithms \cite{gamarnik2014limits, coja2015independent, rahman2017local}, query models \cite{feige2018finding}, bounded-depth circuits \cite{rossman2008constant}, monotone circuits \cite{rossman2010monotone}, low-degree sum of squares (SOS) relaxations \cite{barak2016nearly}, statistical query algorithms \cite{feldman2013statistical}, and resolution \cite{atserias2018clique}. The hardness of clique problems on Erd\H{o}s-R\'enyi graphs has been used as an average-case assumption in cryptography \cite{juels2000hiding} and to show information-computation gaps in a variety of statistical problems \cite{berthet2013complexity, koiran2014hidden, chen2015incoherence, hajek2015computational, ma2015computational, brennan2018reducibility, brennan2019universality, brennan2019optimal}.
All of the above lower bounds for clique problems on Erd\H{o}s-R\'{e}nyi random graphs are against restricted classes of algorithms. One reason for this is that there are general obstacles to basing average-case complexity on worst-case complexity. For example, natural approaches to polynomial-time worst-case to average-case reductions for NP-complete problems fail unless coNP $\subseteq$ NP/poly \cite{feigenbaum1993random, bogdanov2006worst, bogdanov2006average}. The objective of this work is to show that this worst-case characterization of average-case complexity is possible in a fine-grained sense for the natural problem of counting $k$-cliques in $s$-uniform Erd\H{o}s-R\'{e}nyi hypergraphs $G(n, c, s)$ with edge density $c$.
A motivating recent work by Goldreich and Rothblum \cite{goldreich2018counting} also considered worst-case to average-case reductions for $k$-clique counting. They provided such a reduction mapping to an efficiently sampleable distribution on graphs with a high min-entropy of $\tilde{\Omega}(n^2)$. In contrast to \cite{goldreich2018counting}, our objectives are to: (1) map precisely to the natural distribution $G(n, c, s)$ for different edge densities $c$, including $c = \Theta(1)$ and the sparse case $c = \Theta(n^{-\alpha})$; and (2) to characterize the tradeoff between the time-complexity of counting $k$-cliques in $G(n, c, s)$ and the sparsity parameter $\alpha$. Achieving this requires new ingredients for the self-reducibility of counting $k$-cliques as a low-degree polynomial and a tight analysis of random biased binary expansions over $\mathbb{F}_p$ with finite Fourier analysis.
However, our techniques also come at the cost of requiring a low error probability ($1/\text{polylog}(n)$ in the dense case and $1/\poly(n)$ in the sparse case) for the average-case blackbox solving $k$-clique counting on $G(n, c, s)$. This is in contrast to \cite{goldreich2018counting}, where a very high error probability of $1 - 1/\text{polylog}(n)$ is tolerated. It remains an interesting open problem to extend our results for $G(n, c, s)$ to tolerate higher error blackboxes. This error tolerance and open problem are discussed further in Sections \ref{subsec:reductionthmstatements} and \ref{sec:openproblems}, and how our techniques relate to those in \cite{goldreich2018counting} is discussed in Sections \ref{subsec:wcacoverview} and \ref{sec:averagecasereductionproof}. As a step towards increasing the allowed blackbox error, we also give a variant of our reduction for computing the \emph{parity} of the $k$-clique count that only requires a constant bound on the error probability (for each fixed $k$) of the blackbox algorithm solving the problem on $G(n, c, s)$ when $c = 1/2$. We now give an overview of our contributions.
\subsection{Overview of Main Results}
We provide two complementary main results on the fine-grained average-case complexity of counting $k$-cliques in $G(n, c, s)$. The precise formulations of the problems we consider are in Section \ref{sec:worstcasehardnessconjectures}.
\paragraph{Worst-case to average-case reduction.} We give a worst-case to average-case reduction from counting $k$-cliques in worst-case $s$-uniform hypergraphs to counting $k$-cliques in hypergraphs drawn from $G(n, c, s)$. The key guarantees of this reduction are summarized in the following simplified version of our main theorem.
\begin{theorem}[Simplified Main Result]
If $2 \leq s \le k$ are constant integers and $c = c(n)$ satisfies $0 < c \le 1 - \Omega(1)$, then there is a parameter $\Upsilon_{\#} = c^{-\binom{k}{s}} (\log n)^{O(1)}$ such that the following holds. If there is a randomized algorithm counting $k$-cliques in time $O(n^t)$ with error probability less than $1/\Upsilon_{\#}$ on hypergraphs drawn from $G(n, c, s)$, then there is a randomized algorithm counting $k$-cliques on worst-case $s$-uniform hypergraphs with error probability less than $1/3$ running in time $O\left(\Upsilon_{\#} \cdot n^{\max\{t, s\}} \right)$.
\end{theorem}
We discuss the necessity of the error tolerance and the multiplicative slowdown in our worst-case to average-case reduction in Section \ref{subsec:reductionthmstatements}. This result has a number of consequences for basing the average-case fine-grained complexity of $k$-clique counting over Erd\H{o}s-R\'{e}nyi hypergraphs on its worst-case complexity, which we now overview.
Counting $k$-cliques in worst-case hypergraphs is known to take $n^{\Omega(k)}$ time for randomized algorithms assuming the randomized Exponential Time Hypothesis (rETH)\footnote{rETH asserts that any randomized algorithm takes at least $2^{c n}$ time to solve 3-SAT in the worst-case, for some constant $c > 0$.} if $k$ does not grow with $n$ \cite{chen2006strong,calabro2008complexity}. The best known worst-case algorithms up to subpolynomial factors are the $O\left(n^{\omega \lceil k/3 \rceil}\right)$ time algorithm of \cite{nevsetvril1985complexity} in the graph case of $s = 2$ and exhaustive $O(n^k)$ time search on worst-case hypergraphs with $s \ge 3$. Here, $\omega \leq 2.373$ denotes the matrix multiplication constant. Our reduction is the first worst-case to average-case reduction to Erd\H{o}s-R\'{e}nyi hypergraphs. It has different implications for the cases of dense and sparse hypergraphs because of the factor $\Upsilon_{\#}$, as described next.
\begin{enumerate}
\item \textit{Dense Erd\H{o}s-R\'{e}nyi graphs and hypergraphs.} When $k$ and $c$ are constant, our reduction constructs an efficient $k$-clique counting algorithm that succeeds on a worst-case input hypergraph with high probability, using $\polylog(n)$ queries to an average-case oracle that correctly counts $k$-cliques on a $1 - 1/\polylog(n)$ fraction of Erd\H{o}s-R\'enyi hypergraphs drawn from $G(n,c,s)$. This essentially shows that the complexity of $k$-clique counting in the worst case matches that on dense Erd\H{o}s-R\'{e}nyi hypergraphs. More precisely, $k$-clique counting on $G(n, c, s)$ with $k, c$ and $s$ constant must take $\tilde{\Omega}\left( n^{\omega \lceil k/3 \rceil} \right)$ time when $s = 2$ and $\tilde{\Omega}(n^k)$ time when $s \ge 3$, unless there are faster worst-case algorithms. Furthermore, our reduction shows that it is rETH-hard to count $k$-cliques in $n^{o(k)}$ time on $G(n, c, s)$ with $k, c$ and $s$ constant.
\item \textit{Sparse Erd\H{o}s-R\'{e}nyi graphs and hypergraphs.} Our reduction also applies with a different multiplicative slowdown and error tolerance to the sparse case of $c = \Theta(n^{-\alpha})$, where the fine-grained complexity of $k$-clique counting on $G(n, c, s)$ is very different than on worst-case inputs. Our reduction implies fine-grained lower bounds of $\tilde{\Omega}\left(n^{\omega \lceil k/3 \rceil - \alpha \binom{k}{2}} \right)$ when $s = 2$ and $\tilde{\Omega}\left(n^{k - \alpha \binom{k}{s}} \right)$ when $s \ge 3$ for inputs drawn from $G(n, c, s)$, unless there are faster worst-case algorithms. We remark that in the hypergraph case of $s \ge 3$, this lower bound matches the expectation of the quantity being counted, the number of $k$-cliques in $G(n, c, s)$, up to $\polylog(n)$ factors.\footnote{For the sub-class of algorithms that enumerate $k$-cliques one by one, the $k$-clique count is a trivial lower bound on the runtime. Our general lower bound matches this heuristic lower bound.}
\end{enumerate}
Precise statements of our results can be found in Section \ref{subsec:reductionthmstatements}. For simplicity, our results should be interpreted as applying to algorithms that succeed with probability $1 - (\log n)^{-\omega(1)}$ in the dense case and $1 - n^{-\omega(1)}$ in the sparse case.
We also give a second worst-case to average-case reduction for computing the parity of the number of $k$-cliques which has a weaker requirement of $1 - \Theta_{k, s}(1)$ on the error probability for the blackbox solving the problem on $G(n, c, s)$ in the dense case of $c = 1/2$. We provide an overview of our multi-step worst-case to average-case reduction in Section \ref{subsec:wcacoverview}. The steps are described in detail in Section \ref{sec:averagecasereductionproof}.
\paragraph{Algorithms for $k$-clique counting on $G(n, c, s)$.} We also analyze several natural algorithms for counting $k$-cliques in sparse Erd\H{o}s-R\'{e}nyi hypergraphs. These include an extension of the natural greedy algorithm mentioned previously from $k$-\textsc{clique} to counting $k$-cliques, a modification to this algorithm using the matrix multiplication step of \cite{nevsetvril1985complexity} and an iterative algorithm achieving nearly identical guarantees. These algorithms count $k$-cliques in $G(n, c, s)$ when $c = \Theta(n^{-\alpha})$ with several different runtimes, the best of which are as follows:
\begin{itemize}
\item $\tilde{O}\left( n^{k + 1 - \alpha \binom{k}{s}} \right)$ if $s \ge 3$ and $k < \tau + 1$;
\item $\tilde{O}\left( n^{\tau + 2 - \alpha \binom{\tau + 1}{s}} \right)$ if $s \ge 3$ and $\tau + 1 \le k \le \kappa + 1$; and
\item $\tilde{O}\left( n^{\omega \lceil k/3 \rceil + \omega - \omega \alpha \binom{\lceil k/3 \rceil}{2}} \right)$ if $s = 2$ and $k \le \kappa + 1$.
\end{itemize}
Here, $\tau$ and $\kappa$ are the largest positive integers satisfying $\alpha \binom{\tau}{s - 1} < 1$ and $\alpha \binom{\kappa}{s - 1} < s$, respectively. The thresholds $\kappa$ and $\tau$ have natural interpretations as roughly the clique number and the most frequent clique size in a hypergraph drawn from $G(n, c, s)$, respectively. Throughout, we restrict our attention to $k$ with $k \le \kappa + 1$, since the probability that the largest clique in $G$ has size $\omega(G) > \kappa + 1$ is $1/\text{poly}(n)$.
The threshold $\tau + 1$ also has a natural interpretation as the $k$-clique percolation threshold \cite{derenyi2005clique, palla2007critical, dorogovtsev2008critical}, which roughly corresponds to the largest value of $k$ at which a local search algorithm can explore all the cliques in the hypergraph starting from any given clique. In particular, in the graph case of $s = 2$, $\tau + 1$ is the largest integer $k$ such that $\alpha < \frac{1}{k - 1}$, which is exactly the $k$-clique percolation threshold described below. Given a hypergraph $G$, define two $k$-cliques of $G$ to be adjacent if they share $(k - 1)$ of their $k$ vertices. This induces a hypergraph $G_k$ on the set of $k$-cliques. For graphs $G$ drawn from $G(n, c)$, \cite{derenyi2005clique} introduced the $k$-clique percolation threshold of $c = \frac{1}{k - 1} \cdot n^{-\frac{1}{k - 1}}$, above which a giant component emerges in $G_k$. This threshold and extensions were rigorously established in \cite{bollobas2009clique}. Following the same heuristic as in \cite{derenyi2005clique}, our threshold $\tau + 1$ is a natural extension of the $k$-clique percolation threshold to the hypergraph case of $s \ge 3$.
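The two thresholds can be computed directly from their definitions; the sketch below is a literal translation (here $\tau + 1$ is the $k$-clique percolation threshold discussed above, and $\kappa + 1$ is roughly the clique number).

```python
from math import comb

def thresholds(alpha, s):
    """tau:   largest positive integer with alpha * C(tau,   s-1) < 1.
    kappa: largest positive integer with alpha * C(kappa, s-1) < s.
    tau + 1 is the k-clique percolation threshold; kappa + 1 is roughly
    the clique number of G(n, Theta(n^-alpha), s)."""
    tau, t = 0, 1
    while alpha * comb(t, s - 1) < 1:
        tau, t = t, t + 1
    kappa, t = 0, 1
    while alpha * comb(t, s - 1) < s:
        kappa, t = t, t + 1
    return tau, kappa
```

For the graph case $s = 2$ this recovers $\tau + 1$ as the largest $k$ with $\alpha < \frac{1}{k-1}$, and $\kappa + 1 \approx 2\alpha^{-1} + 1$ as in the comparison of bounds below.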
\begin{figure*}[t!]
\centering
\begin{tikzpicture}[scale=0.06]
\tikzstyle{every node}=[font=\footnotesize]
\def0{0}
\def108{108}
\def0{0}
\def60{60}
\fill [green!20, domain=0:100, variable=\x]
(0, 0)
-- plot ({\x}, {2.373*\x/3 - 2.373*(2/99)*(\x/3)*(\x/3 - 1)/2})
-- (100, 58) -- (0, 58) -- (0, 0);
\fill [gray!20, domain=0:100, variable=\x]
(0, 0)
-- plot ({\x}, {2.373*\x/3 - 2.373*(2/99)*(\x/3)*(\x/3 - 1)/2})
-- (100, 0) -- (0, 0);
\fill [blue!20, domain=0:79.35, variable=\x]
(0, 0)
-- plot ({\x}, {2.373*\x/3 - (2/99)*(\x)*(\x - 1)/2})
-- (0, 0);
\node at (20, 60) [below] {Graphs ($s = 2$)};
\node at (12, 40) {feasible};
\node at (40, 5) {infeasible};
\node at (90, 40) {open};
\node at (70, 15) {$\frac{\omega k}{3} - \alpha \binom{k}{2}$};
\node [rotate=28] at (50, 38) {$\frac{\omega k}{3} - \frac{\omega \alpha}{9} \binom{k}{2}$};
\draw[->] (0,0) -- (108,0) node[right] {$k$};
\draw[->] (0,0) -- (0,60) node[above] {$\log_n T$};
\draw[domain=0:100,smooth,variable=\x,blue] plot ({\x}, {2.373*\x/3 - 2.373*(2/99)*(\x/3)*(\x/3 - 1)/2});
\draw[domain=0:79.35,smooth,variable=\x,blue] plot ({\x}, {2.373*\x/3 - (2/99)*(\x)*(\x - 1)/2});
\node at (98, 0) [below] {$\omega(G)$};
\end{tikzpicture}
\begin{tikzpicture}[scale=0.06]
\tikzstyle{every node}=[font=\footnotesize]
\def0{0}
\def108{108}
\def0{0}
\def60{60}
\fill [green!20, domain=0:61.981, variable=\x]
(0, 58) -- (0, 0)
-- plot ({\x}, {\x - (\x/97)*((\x - 1)/96)*((\x - 2)/95)*(\x - 3)})
-- (98, 46.866)
-- (98, 58)
-- (0, 58);
\fill [blue!20, domain=0:98, variable=\x]
(0, 0)
-- plot ({\x}, {\x - (\x/97)*((\x - 1)/96)*((\x - 2)/95)*(\x - 3)})
-- (0, 0);
\fill [gray!20, domain=61.981:98, variable=\x]
plot ({\x}, {\x - (\x/97)*((\x - 1)/96)*((\x - 2)/95)*(\x - 3)})
-- (98, 46.866)
-- (61.981, 46.866);
\draw[->] (0,0) -- (108,0) node[right] {$k$};
\draw[->] (0,0) -- (0,60) node[above] {$\log_n T$};
\draw[dashed] (61.981, 0) -- (61.981, 46.866);
\node at (61.981, 0) [below] {$k$-clique percolation};
\node at (96, 0) [below] {$\omega(G)$};
\node at (26, 60) [below] {Hypergraphs ($s \ge 3$)};
\node at (12, 40) {feasible};
\node at (35, 5) {infeasible};
\node at (90, 40) {open};
\node [rotate=45] at (20, 25) {$k - \alpha \binom{k}{s}$};
\node at (80, 51) {$\tau + 1 - \alpha \binom{\tau + 1}{s}$};
\draw[blue] (61.981, 46.866) -- (98, 46.866);
\draw[domain=0:98,smooth,variable=\x,blue] plot ({\x}, {\x - (\x/97)*((\x - 1)/96)*((\x - 2)/95)*(\x - 3)});
\end{tikzpicture}
\caption{Comparison of our algorithms and average-case lower bounds for counting $k$-cliques in sparse Erd\H{o}s-R\'{e}nyi Hypergraphs $G(n, c, s)$ with $c = \Theta(n^{-\alpha})$. Green denotes runtimes $T$ feasible for each $k$, blue denotes $T$ infeasible given that the best known worst-case algorithms are optimal and gray denotes $T$ for which the complexity of counting $k$-cliques is open after this work. The left plot shows the graph case of $s = 2$ and the right plot shows the hypergraph case of $s \ge 3$. For simplicity, all quantities shown are up to constant $O_{k, \alpha}(1)$ additive error.}
\label{fig:upperlowerbds}
\end{figure*}
\paragraph{Comparing our upper and lower bounds.} A comparison of our algorithmic guarantees and average-case lower bounds based on the best known worst-case algorithms for counting $k$-cliques is shown in Figure \ref{fig:upperlowerbds}.
\begin{enumerate}
\item \textit{Graph Case $(s = 2)$.} In the graph case, our lower and upper bounds have the same form and show that the exponent in the optimal running time is $\frac{\omega k}{3} - C \alpha \binom{k}{2} + O_{k, \alpha}(1)$ where $\frac{\omega}{9} \le C \le 1$ as long as $k \le \kappa + 1 = 2\alpha^{-1} + 1$. As shown in Figure \ref{fig:upperlowerbds}, our upper and lower bounds approach each other for $k$ small relative to $\kappa + 1$.
\item \textit{Hypergraph Case $(s \ge 3)$.} In the hypergraph case of $s \ge 3$, the exponents in our lower and upper bounds are nearly identical at $k - \alpha \binom{k}{s} + O_{k, \alpha}(1)$ up to the $k$-clique percolation threshold. After this threshold, our lower bounds slowly deteriorate relative to our algorithms until they become trivial at the clique number of $G$ by $k = \kappa + 1$.
\end{enumerate}
Because we consider sparse Erd\H{o}s-R\'{e}nyi hypergraphs, for each $n$, $k$ and $s$ we actually have an entire family of problems parametrized by the edge probability $c$, and the behavior changes as a function of $c$. This is the first worst-case to average-case hardness result we are aware of for which the complexity of the same problem over worst-case versus average-case inputs is completely different and can be sharply characterized over the whole range of $c$, starting from the same assumption.
It is surprising that our worst-case to average-case reduction techniques -- which range from the self-reducibility of polynomials to random binary expansions -- together yield tight lower bounds matching our algorithms in the hypergraph case.
Two interesting problems left open by our work are to show average-case lower bounds with an improved constant $C$ in the graph case and to show tight average-case lower bounds beyond the $k$-clique percolation threshold in the case $s \ge 3$. These, other open problems and some extensions of our methods are discussed in Section \ref{sec:openproblems}.
\subsection{Overview of Reduction Techniques}\label{subsec:wcacoverview}
For clarity of exposition, in this section we will restrict our discussion to the graph case $s = 2$, as well as the case of constant $k$. A key step of our worst-case to average-case reduction uses the random self-reducibility of multivariate low-degree polynomials -- i.e., evaluating a polynomial on any worst-case input can be efficiently reduced to evaluating it on several random inputs. This result follows from a line of work \cite{lipton1989new,feigenbaum1993random,gemmell1991self,gemmell1992highly} that provides a method to efficiently compute a polynomial $P : \FF^N \to \FF$ of degree $d \leq |\FF|/20$ on any worst-case input $x \in \FF^N$, given an oracle $\tilde{P} : \FF^N \to \FF$ that agrees with $P$ on a $\frac{1}{2} + \frac{1}{\poly(N)}$ fraction of inputs. Thus, for any low-degree polynomial over a large enough finite field, evaluating the polynomial on a random element in the finite field is roughly as hard as evaluating the polynomial on any adversarially chosen input.
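A minimal sketch of this line-restriction argument, using a toy low-degree polynomial over $\FF_p$ in place of the clique-count polynomial and an error-free oracle (the hypothetical polynomial and parameters are ours, for illustration only):

```python
import random

p = 101  # prime field size, much larger than the polynomial degree

def P(x):
    """Toy degree-3 polynomial F_p^3 -> F_p standing in for the clique-count
    polynomial (hypothetical example, not the one from the reduction)."""
    return (x[0] * x[1] * x[2] + 5 * x[1] + 7) % p

def eval_on_worst_case(x, oracle, d=3):
    """Evaluate a degree-d polynomial at an arbitrary x while only querying
    `oracle` at points that are individually uniform in F_p^N: restrict to
    the random line x + t*r, interpolate the univariate q(t) = P(x + t*r)
    from d+1 values, and return q(0) = P(x)."""
    r = [random.randrange(p) for _ in x]
    ts = random.sample(range(1, p), d + 1)  # distinct nonzero parameters
    pts = [(t, oracle([(xi + t * ri) % p for xi, ri in zip(x, r)])) for t in ts]
    # Lagrange interpolation of q at t = 0
    q0 = 0
    for ti, yi in pts:
        num = den = 1
        for tj, _ in pts:
            if tj != ti:
                num = num * (-tj) % p
                den = den * (ti - tj) % p
        q0 = (q0 + yi * num * pow(den, p - 2, p)) % p
    return q0

assert eval_on_worst_case([3, 1, 4], P) == P([3, 1, 4])
```

With an oracle that errs on some inputs, plain interpolation is replaced by querying more points and running Reed-Solomon decoding such as Berlekamp-Welch.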
\paragraph{Random self-reducibility for counting $k$-cliques.} With the random self-reducibility of polynomials in mind, a natural approach is to express the number of $k$-cliques in a graph as a low-degree polynomial of the $n \times n$ adjacency matrix $A$
$$P(A) = \sum_{\substack{S \subset [n] \\ |S| = k}} \Big(\prod_{i < j \in S} A_{ij}\Big).$$
This polynomial has been used in a number of papers, including by Goldreich and Rothblum \cite{goldreich2018counting} to construct a distribution on dense graphs for which counting $k$-cliques is provably hard on average. However, their techniques are primarily focused on the error-probability requirement for the average-case blackbox. As a result, the distribution they obtain is far from Erd\H{o}s-R\'enyi, and their approach does not yield tight bounds for sparse graphs.
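Concretely, evaluated on a 0/1 adjacency matrix the polynomial reads off the $k$-clique count; a direct (exponential-in-$k$, but faithful) transcription:

```python
from itertools import combinations
from math import prod

def clique_count_poly(A, k):
    """P(A) = sum over k-subsets S of prod_{i<j in S} A[i][j]; on a 0/1
    adjacency matrix this is exactly the number of k-cliques."""
    return sum(
        prod(A[i][j] for i, j in combinations(S, 2))
        for S in combinations(range(len(A)), k)
    )

# Example graph (illustrative): K4 plus one isolated vertex
A = [[int(i != j and i < 4 and j < 4) for j in range(5)] for i in range(5)]
```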
The significant obstacle that arises in applying the random self-reducibility of $P$ is that one needs to work over a large enough finite field $\FF_p$, so evaluating $P$ on worst-case graph inputs in $\{0,1\}^{\binom{n}{2}}$
only reduces to evaluating $P$ on uniformly random inputs in $\FF_p^{\binom{n}{2}}$.
In order to further reduce to evaluating $P$ on graphs, given a random input $A \in \FF_p^{\binom{n}{2}}$ \cite{goldreich2018counting} uses several gadgets (including replacing vertices by independent sets and taking disjoint unions of graphs) in order to create a larger unweighted random graph $A'$ whose $k$-clique count is equal to $k! \cdot P(A)\pmod{p}$ for appropriate $p$. However, any nontrivial gadget-based reduction seems to have little hope of arriving at something close to the Erd\H{o}s-R\'enyi distribution, because gadgets inherently create non-uniform structure.
\paragraph{Reducing to $k$-partite graphs.} We instead consider a different polynomial for graphs on $nk$ vertices with $nk \times nk$ adjacency matrix $A$,
$$P'(A) = \sum_{v_1 \in [n]} \sum_{v_2 \in [2n] \sm [n]} \dots \sum_{v_k \in [kn] \sm [(k-1)n]} \left(\prod_{1 \leq i < j \leq k} A_{v_i v_j}\right).$$ The polynomial $P'$ correctly counts the number of $k$-cliques if $A$ is $k$-partite with vertex $k$-partition $[n] \sqcup ([2n] \sm [n]) \sqcup \dots \sqcup ([kn] \sm [(k-1)n])$. We first reduce clique-counting in the worst case to computing $P'$ in the worst case; this is a simple step, because it is a purely worst-case reduction.
Next, we construct a recursive counting procedure that reduces evaluating $P'$ on Erd\H{o}s-R\'enyi graphs to counting $k$-cliques in Erd\H{o}s-R\'enyi graphs. Therefore, it suffices to prove that if evaluating $P'$ is hard in the worst case, then evaluating $P'$ on Erd\H{o}s-R\'enyi graphs is also hard.
Applying the Chinese Remainder theorem as well as the random self-reducibility of polynomials, computing $P'$ on worst-case inputs in $\{0,1\}^{\binom{nk}{2}}$ reduces to computing $P'$ on several uniformly random inputs in $\FF_p^{\binom{nk}{2}}$, for several different primes $p$ each on the order of $\Theta(\log n)$. The main question is: how can one evaluate $P'$ on inputs $X \sim \mathrm{Unif}[\FF_p^{\binom{nk}{2}}]$ using an algorithm that evaluates $P'$ on $G(n,c,2)$ Erd\H{o}s-R\'enyi graphs (i.e., inputs $Y \sim \Ber(c)^{\otimes \binom{nk}{2}}$)?
\paragraph{Eliminating weights with random sparse binary expansions.} We solve this by decomposing the random weighted graph $X \sim \mathrm{Unif}[\FF_p^{\binom{nk}{2}}]$ into a weighted sum of graphs $Y^{(0)},\ldots,Y^{(t)} \in \{0,1\}^{\binom{nk}{2}}$ such that each $Y^{(i)}$ is close to Erd\H{o}s-R\'enyi $G(n,c,2)$. Specifically, this additive decomposition satisfies $X \equiv \sum_{i=0}^t 2^i Y^{(i)} \pmod{p}$, i.e., we can write $X$ as a binary expansion modulo $p$ of Erd\H{o}s-R\'enyi graphs. Importantly, in Section \ref{sec:randombinaryexpansions} we derive near-optimal bounds on $t$ and prove that we can take $t$ to be quite small, growing only as $\poly(c^{-1}(1-c)^{-1} \log(p))$. This technique seems likely to have applications elsewhere. For the unbiased case of $c = 1/2$, a version of this binary expansions technique appeared previously in \cite{goldreich2017worst}.
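The near-uniformity of such binary expansions modulo $p$ can be checked exactly for small parameters. The toy computation below (unbiased case $c = 1/2$ by default; exact enumeration over all $2^{t+1}$ bit patterns, so only tiny $p$ and $t$ are feasible) shows the total-variation distance to $\mathrm{Unif}(\FF_p)$ decaying rapidly in $t$:

```python
from itertools import product

def tv_from_uniform(p, t, c=0.5):
    """Exact total-variation distance between Unif(Z_p) and the law of
    sum_{i=0}^{t} 2^i * Y_i (mod p), with Y_i i.i.d. Bernoulli(c),
    computed by enumerating all 2^(t+1) bit patterns."""
    probs = [0.0] * p
    for bits in product((0, 1), repeat=t + 1):
        pr = 1.0
        for b in bits:
            pr *= c if b else 1.0 - c
        probs[sum(b << i for i, b in enumerate(bits)) % p] += pr
    return 0.5 * sum(abs(q - 1.0 / p) for q in probs)
```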
Now, using the binary expansion decomposition of $X$, we algebraically manipulate $P'$ as follows:
\begin{align*}P'(X) &= \sum_{v_1 \in [n]} \sum_{v_2 \in [2n] \sm [n]} \dots \sum_{v_k \in [kn] \sm [(k-1)n]} \prod_{1 \leq i < j \leq k} \left(\sum_{l \in \{0,\ldots,t\}} 2^l \cdot Y^{(l)}_{v_i v_j}\right) \\ &= \sum_{f \in \{0,\ldots,t\}^{\binom{k}{2}}} \left(\prod_{1 \leq i < j \leq k} 2^{f_{ij}}\right) \\
&\quad \quad \quad \quad \times \left(\sum_{v_1 \in [n]} \sum_{v_2 \in [2n] \sm [n]} \dots \sum_{v_k \in [kn] \sm [(k-1)n]} \prod_{1 \leq i < j \leq k} Y^{(f_{ij})}_{v_iv_j} \right)
\\&= \sum_{f \in \{0,\ldots,t\}^{\binom{k}{2}}} \left(\prod_{1 \leq i < j \leq k} 2^{f_{ij}}\right) P'\left(Y^{(f)}\right).\end{align*}
Here $Y^{(f)}$ is the $nk$-vertex graph with entries given by $Y^{(f_{\bar{a}\bar{b}})}_{ab}$ for $1\leq a< b\leq nk$, where $\bar{a} = \ceil{a/n}$ and $\bar b = \ceil{b/n}$.
We thus reduce the computation of $P'(X)$ to the computation of a weighted sum of $\poly(c^{-1}(1-c)^{-1} \log(n))^{\binom{k}{2}}$ different evaluations of $P'$ at graphs close in total variation to $G(n,c,2)$. This concludes our reduction.\footnote{If we had instead worked with $P$, then this argument would fail. The argument uses the $k$-partiteness structure of $P'$ as follows: for every pair of vertices $a,b \in [nk]$ and $f \in \{0,\ldots,t\}^{\binom{k}{2}}$, the term $Y_{ab}^{(f_{ij})}$ appearing in the sum is uniquely determined by $a \in [in] \sm [(i-1)n]$ and $b \in [jn] \sm [(j-1)n]$. So given $f$ we can define a graph $Y^{(f)}$ uniquely. On the other hand, running the same argument with the polynomial $P$, the term $Y_{ab}^{(f_{ij})}$ for many different $i,j$ would appear in the sum, and there is no way to uniquely define a graph $Y^{(f)}$.}
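The displayed identity can be verified mechanically on toy parameters. The sketch below uses $n = 2$, $k = 3$, $t = 1$, $p = 11$, with arbitrary 0/1 matrices standing in for the graphs $Y^{(i)}$; distributional aspects are ignored, only the algebra is checked.

```python
from itertools import product
import random

def Pprime(A, n, k, p):
    """P'(A) mod p: choose one vertex per consecutive block of size n in
    [n*k] and multiply the C(k,2) entries connecting the chosen vertices."""
    total = 0
    for vs in product(*(range(b * n, (b + 1) * n) for b in range(k))):
        term = 1
        for a in range(k):
            for b in range(a + 1, k):
                term = term * A[vs[a]][vs[b]] % p
        total = (total + term) % p
    return total

random.seed(0)
n, k, t, p = 2, 3, 1, 11
N = n * k
# Arbitrary 0/1 matrices standing in for Y^(0), Y^(1); X = Y0 + 2*Y1 mod p
Y = [[[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
     for _ in range(t + 1)]
X = [[(Y[0][a][b] + 2 * Y[1][a][b]) % p for b in range(N)] for a in range(N)]

lhs = Pprime(X, n, k, p)

# Right-hand side: one term per labeling f of the C(k,2) block pairs
pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
rhs = 0
for f in product(range(t + 1), repeat=len(pairs)):
    level = dict(zip(pairs, f))
    Yf = [[Y[level[tuple(sorted((a // n, b // n)))]][a][b]
           if a // n != b // n else 0  # within-block entries never read by P'
           for b in range(N)] for a in range(N)]
    rhs = (rhs + pow(2, sum(f), p) * Pprime(Yf, n, k, p)) % p
```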
We remark that an important difference between our reduction and the reduction in \cite{goldreich2018counting} is the number and structure of the calls to the average-case blackbox. Our reduction requires many successful calls to the blackbox in order to obtain a single correct evaluation of the polynomial $P'(A)$, which is where our low error probability requirement comes from. The gadgets in \cite{goldreich2018counting} are specifically designed to only require a single successful call to obtain a single correct evaluation of $P(A)$. Thus, even given a blackbox with a constant error probability, the Berlekamp-Welch algorithm can recover $P(A)$ in the case of \cite{goldreich2018counting}.
We also give a different worst-case to average-case reduction for determining the parity of the number of $k$-cliques in Erd\H{o}s-R\'{e}nyi hypergraphs, as discussed in Sections \ref{subsec:reductionthmstatements} and \ref{sec:averagecasereductionproof}.
\subsection{Related Work on Worst-Case to Average-Case Reductions}
The random self-reducibility of low-degree polynomials serves as the basis for several worst-case to average-case reductions found in the literature. One of the first applications of this method was to prove that the permanent is hard to evaluate on random inputs, even with polynomially-small probability of success, unless $\mathsf{P^{\#P}} = \mathsf{BPP}$ \cite{sudan1997decoding,cai1999hardness}. (Under the slightly stronger assumption that $\mathsf{P^{\#P}} \neq \mathsf{AM}$, and with different techniques, \cite{feige1992hardness} proved that computing the permanent on large finite fields is hard even with exponentially small success probability.) Recently, \cite{ball2017average} used the polynomial random self-reducibility result in the fine-grained setting in order to construct polynomials that are hard to evaluate on most inputs, assuming fine-grained hardness conjectures for problems such as \textsc{3-SUM}, \textsc{Orthogonal-Vectors}, and/or \textsc{All-Pairs-Shortest-Paths}. The random self-reducibility of polynomials was also used by Gamarnik \cite{gamarnik2018computing} in order to prove that exactly computing the partition function of the Sherrington-Kirkpatrick model in statistical physics is hard on average.
If a problem is random self-reducible, then random instances of the problem are essentially as hard as worst-case instances, and therefore one may generate a hard instance of the problem by simply generating a random instance. Because of this, random self-reducibility plays an important role in cryptography: it allows one to base cryptographic security on random instances of a problem, which can generally be generated efficiently. A prominent example of a random self-reducible problem with applications to cryptography is the problem of finding a short vector in a lattice. In a seminal paper, Ajtai \cite{ajtai1996generating} gave a worst-case to average-case reduction for this short-vector problem. His ideas were subsequently applied to prove the average-case hardness of the Learning with Errors (LWE) problem, which underlies lattice cryptography \cite{ajtai1996generating,regev2009lattices}. A good survey covering worst-case to average-case reductions in lattice cryptography is \cite{regev2010learning}.
There are known restrictions on problems that are self-reducible. For example, non-adaptive worst-case to average-case reductions for $\mathsf{NP}$-complete problems fail unless $\mathsf{coNP} \subseteq \mathsf{NP/poly}$ \cite{feigenbaum1993random, bogdanov2006worst, bogdanov2006average}.
\subsection{Notation and Preliminaries}
An $s$-uniform hypergraph $G = (V(G), E(G))$ consists of a vertex set $V(G)$ and a hyperedge set $E(G) \subseteq \binom{V(G)}{s}$. A $k$-clique $C$ in $G$ is a subset of vertices $C \subset V(G)$ of size $|C| = k$ such that all of the possible hyperedges between the vertices are present in the hypergraph: $\binom{C}{s} \subseteq E(G)$. We write $\cl_k(G)$ to denote the set of $k$-cliques of the hypergraph $G$. One samples from the Erd\H{o}s-R\'enyi distribution $G(n,c,s)$ by independently including each of the $\binom{n}{s}$ hyperedges with probability $c$.
We denote the law of a random variable $X$ by $\cL(X)$. We use $T(A,n)$ to denote the worst-case run-time of an algorithm $A$ on inputs of size parametrized by $n$. We work in the Word RAM model of computation, where the words have $O(\log n)$ bits. All algorithms in this paper are randomized, and each (possibly biased) coin flip incurs constant computational cost.
\section{Problem Formulations and Average-Case Lower Bounds}
\label{sec:overview}
\subsection{Clique Problems and Worst-Case Fine-Grained Conjectures} \label{sec:worstcaseproblemslist}
\label{sec:worstcasehardnessconjectures}
In this section, we formally define the problems we consider and the worst-case fine-grained complexity conjectures on which our average-case lower bounds are based. We focus on the following computational problems.
\begin{definition}
\textsc{\#$(k,s)$-clique} denotes the problem of counting the number of $k$-cliques in an $s$-uniform hypergraph $G$.
\end{definition}
\begin{definition}
\textsc{Parity-$(k,s)$-clique} denotes the problem of determining the parity of the number of $k$-cliques in an $s$-uniform hypergraph $G$.
\end{definition}
\begin{definition}
\textsc{Decide-$(k,s)$-clique} denotes the problem of deciding whether or not an $s$-uniform hypergraph $G$ contains a $k$-clique.
\end{definition}
Both \textsc{\#$(k,s)$-clique} and \textsc{Decide-$(k,s)$-clique} are fundamental problems that have long been studied in computational complexity theory and are conjectured to be computationally hard in the worst-case setting. When $k$ is allowed to be an unbounded input to the problem, \textsc{Decide-$(k,s)$-clique} is known to be NP-complete \cite{karp1972reducibility} and \textsc{\#$(k,s)$-clique} is known to be \#P-complete \cite{valiant1979complexity}. In this work, we consider the fine-grained complexity of these problems, where $k$ either can be viewed as a constant or a very slow-growing parameter compared to the number $n$ of vertices of the hypergraph. In this context, \textsc{Parity-$(k,s)$-clique} can be interpreted as an intermediate problem between the other two clique problems that we consider. The worst-case reduction from \textsc{Parity-$(k,s)$-clique} to \textsc{\#$(k,s)$-clique} is immediate. As we show in Appendix \ref{sec:decidetoparityreduction}, in the worst-case setting, \textsc{Decide-$(k,s)$-clique} also reduces to \textsc{Parity-$(k,s)$-clique} with a multiplicative overhead of $O(k2^k)$ time.
When $k$ is a constant, the trivial brute-force search algorithms for these problems are efficient in the sense that they take polynomial time. However, these algorithms do not remain efficient under the lens of fine-grained complexity since brute-force search requires $\Theta(n^k)$ time, which can grow significantly as $k$ grows. In the hypergraph case of $s \ge 3$, no algorithm taking time $O(n^{k-\eps})$ on any of these problems is known, including for \textsc{Decide-$(k,s)$-clique} \cite{yuster2006finding}. In the graph case of $s = 2$, the fastest known algorithms for all of these problems take $\Theta(n^{\omega \ceil{k /3}})$ time, where $2 \leq \omega < 2.4$ is the fast matrix multiplication constant \cite{itai1978finding,nevsetvril1985complexity}. Since this is the state of the art, one may conjecture that \textsc{Decide-$(k,s)$-clique} and \textsc{\#$(k,s)$-clique} take $n^{\Omega(k)}$ time in the worst case.
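For reference, the $\Theta(n^k)$ brute-force baseline discussed above can be sketched in a few lines of Python (a minimal illustration with our own function name and input encoding, not an optimized implementation):

```python
from itertools import combinations

def count_cliques(n, edges, k, s):
    """Brute-force #(k,s)-clique: test every k-subset of [n].

    edges is a set of frozensets, each of size s.  There are
    Theta(n^k) k-subsets for constant k, matching the trivial bound."""
    return sum(1 for C in combinations(range(n), k)
               if all(frozenset(S) in edges for S in combinations(C, s)))

# Complete 3-uniform hypergraph on 5 vertices: every 4-subset is a clique.
E = {frozenset(S) for S in combinations(range(5), 3)}
print(count_cliques(5, E, 4, 3))  # C(5,4) = 5
```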
Supporting this conjecture, Razborov \cite{razborov1985lower} proves that monotone circuits require $\tilde{\Omega}(n^k)$ operations to solve \textsc{Decide-$(k,2)$-clique} in the case of constant $k$. Monotone circuit lower bounds are also known in the case when $k = k(n)$ grows with $n$ \cite{alon1987monotone,amano2005superpolynomial}. In \cite{downey1995fixed}, \textsc{Decide-$(k,2)$-clique} is shown to be $\mathsf{W[1]}$-hard. In other words, this shows that if \textsc{Decide-$(k,2)$-clique} is fixed-parameter tractable -- admits an algorithm taking time $f(k) \cdot \poly(n)$ -- then any algorithm in the parametrized complexity class $\mathsf{W[1]}$ is also fixed-parameter tractable. This provides further evidence that \textsc{Decide-$(k,2)$-clique} is intractable for large $k$. Finally, \cite{chen2006strong} shows that solving \textsc{Decide-$(k,2)$-clique} in $n^{o(k)}$ time is ETH-hard for constant $k$\footnote{These hardness results also apply to \textsc{Decide-$(k,s)$-clique} for $s \ge 3$ since there is a reduction from \textsc{Decide-$(k,2)$-clique} to \textsc{Decide-$(k,s)$-clique} in $n^s$ time. The reduction proceeds by starting with a graph $G$ and constructing an $s$-uniform hypergraph $G'$ that contains an $s$-hyperedge for every $s$-clique in $G$. The $k$-cliques of $G$ and $G'$ are in bijection. This construction also reduces \textsc{\#$(k,2)$-clique} to \textsc{\#$(k,s)$-clique}.}. We therefore conjecture that the $k$-clique problems take $n^{\Omega(k)}$ time on worst-case inputs when $k$ is constant, as formalized below.
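The graph-to-hypergraph reduction described in the footnote can be sketched as follows (our own Python illustration; the function name is hypothetical). It builds the $s$-uniform hypergraph whose hyperedges are exactly the $s$-cliques of the input graph:

```python
from itertools import combinations

def to_hypergraph(n, edges, s):
    """Build the s-uniform hypergraph G' with one hyperedge per s-clique
    of the input graph G (given as a set of frozenset edges).  This takes
    O(n^s) time for constant s, and for k >= s the k-cliques of G and G'
    are in bijection."""
    return {frozenset(S) for S in combinations(range(n), s)
            if all(frozenset(e) in edges for e in combinations(S, 2))}

# A 5-cycle plus the chord {0, 2}: the only triangle is {0, 1, 2}.
G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]}
print(sorted(map(sorted, to_hypergraph(5, G, 3))))  # [[0, 1, 2]]
```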
\begin{conjecture}[Worst-case hardness of \textsc{\#$(k,s)$-clique}] \label{conj:weakworstcasecounting} Let $k$ be constant. Any randomized algorithm $A$ for \textsc{\#$(k,s)$-clique} with error probability less than $1/3$ takes time at least $n^{\Omega(k)}$ in the worst case for hypergraphs on $n$ vertices.
\end{conjecture}
\begin{conjecture}[Worst-case hardness of \textsc{Parity-$(k,s)$-clique}] \label{conj:weakworstcaseparity} Let $k$ be constant. Any randomized algorithm $A$ for \textsc{Parity-$(k,s)$-clique} with error probability less than $1/3$ takes time at least $n^{\Omega(k)}$ in the worst case for hypergraphs on $n$ vertices.
\end{conjecture}
\begin{conjecture}[Worst-case hardness of \textsc{Decide-$(k,s)$-clique}] \label{conj:weakworstcasedeciding} Let $k$ be constant. Any randomized algorithm $A$ for \textsc{Decide-$(k,s)$-clique} with error probability less than $1/3$ takes time at least $n^{\Omega(k)}$ in the worst case for hypergraphs on $n$ vertices.
\end{conjecture}
The conjectures are listed in order of increasing strength. Since Conjecture \ref{conj:weakworstcasedeciding} is implied by rETH, they all follow from rETH. We also formulate a stronger version of the clique-counting hardness conjecture, which asserts that the current best known algorithms for $k$-clique counting are optimal.
\begin{conjecture}[Strong worst-case hardness of \textsc{\#$(k,s)$-clique}]\label{conj:strongworstcasecounting}
Let $k$ be constant. Any randomized algorithm $A$ for \textsc{\#$(k,s)$-clique} with error probability less than $1/3$ takes time $\tilde{\Omega}(n^{\omega \ceil{k/3}})$ in the worst case if $s = 2$ and $\tilde{\Omega}(n^k)$ in the worst case if $s \geq 3$.
\end{conjecture}
\subsection{Average-Case Lower Bounds for Counting $k$-Cliques in $G(n, c, s)$}
\label{subsec:reductionthmstatements} \label{subsec:timeslowdownnecessity}
Our first main result is a worst-case to average-case reduction solving either \textsc{\#$(k,s)$-clique} or \textsc{Parity-$(k,s)$-clique} on worst-case hypergraphs given a blackbox solving the problem on {\em most} Erd\H{o}s-R\'enyi hypergraphs drawn from $G(n, c, s)$. We discuss the error tolerance over the sampled Erd\H{o}s-R\'enyi hypergraphs, as well as the multiplicative overhead in our reduction, below. These results show that solving the $k$-clique problems on Erd\H{o}s-R\'enyi hypergraphs $G(n,c,s)$ is as hard as solving them on worst-case hypergraphs, for certain choices of $k$, $c$, and $s$. Therefore the worst-case hardness assumptions, Conjectures \ref{conj:weakworstcasecounting}, \ref{conj:weakworstcaseparity} and \ref{conj:strongworstcasecounting}, imply average-case hardness on Erd\H{o}s-R\'enyi hypergraphs for \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique}.
\begin{theorem}[Worst-case to average-case reduction for \textsc{\#$(k,s)$-clique}] \label{thm:averagecasehardnesscounting}
There is an absolute constant $C > 0$ such that if we define
$$\Upsilon_{\#}(n,c,s,k) \triangleq \left(C (c^{-1}(1-c)^{-1}) (s\log k + s\log \log n) (\log n)\right)^{\binom{k}{s}}$$
then the following statement holds. Let $A$ be a randomized algorithm for \textsc{\#$(k,s)$-clique} with error probability less than $1/\Upsilon_{\#}$ on hypergraphs drawn from $G(n,c,s)$. Then there exists an algorithm $B$ for \textsc{\#$(k,s)$-clique} that has error probability less than $1/3$ on any hypergraph, such that
$$T(B,n) \leq (\log n) \cdot \Upsilon_{\#} \cdot \left(T(A,nk) + (nk)^s \right),$$ where $T(\cA,\ell)$ denotes the runtime of algorithm $\cA$ on $\ell$-vertex hypergraphs.
\end{theorem}
For \textsc{Parity-$(k,s)$-clique} we also give an alternative reduction with an improved reduction time and error tolerance in the dense case when $c = 1/2$.
\begin{theorem}[Worst-case to average-case reduction for \textsc{Parity-$(k,s)$-clique}] \label{thm:averagecasehardnessparity}
We have that:
\begin{enumerate}
\item There is an absolute constant $C > 0$ such that if we define
$$\Upsilon_{P,1}(n,c,s,k) \triangleq \left(C (c^{-1}(1-c)^{-1}) (s\log k)\left(s\log n + \binom{k}{s} \log \log \binom{k}{s}\right)\right)^{\binom{k}{s}}$$
then the following statement holds. Let $A$ be a randomized algorithm for \textsc{Parity-$(k,s)$-clique} with error probability less than $1/\Upsilon_{P,1}$ on hypergraphs drawn from $G(n,c,s)$. Then there exists an algorithm $B$ for \textsc{Parity-$(k,s)$-clique} that has error probability less than $1/3$ on any hypergraph, such that
$$T(B,n) \leq \Upsilon_{P,1} \cdot \left(T(A,nk) + (nk)^s\right)$$
\item There is an absolute constant $C > 0$ such that if we define
$$\Upsilon_{P,2}(s,k) \triangleq \left(C s \log k\right)^{\binom{k}{s}}$$
then the following statement holds. Let $A$ be a randomized algorithm for \textsc{Parity-$(k,s)$-clique} with error probability less than $1/\Upsilon_{P,2}$ on hypergraphs drawn from $G(n,1/2,s)$. Then there exists an algorithm $B$ for \textsc{Parity-$(k,s)$-clique} that has error probability less than $1/3$ on any hypergraph, such that
$$T(B,n) \leq \Upsilon_{P,2} \cdot \left(T(A,nk) + (nk)^s\right)$$
\end{enumerate}
\end{theorem}
Our worst-case to average-case reductions yield the following fine-grained average-case lower bounds for $k$-clique counting and parity on Erd\H{o}s-R\'enyi hypergraphs based on Conjectures \ref{conj:weakworstcasecounting} and \ref{conj:strongworstcasecounting}. We separate these lower bounds into the two cases of dense and sparse Erd\H{o}s-R\'enyi hypergraphs. We remark that, for all constants $k$, an error probability of less than $(\log n)^{-\omega(1)}$ suffices in the dense case and error probability less than $n^{-\omega(1)}$ suffices in the sparse case.
\begin{corollary}[Average-case hardness of \textsc{\#$(k,s)$-clique} on dense $G(n,c,s)$]\label{cor:averagecasecountingdense} If $k,c,\eps > 0$ are constant, then we have that
\begin{enumerate}
\item Assuming Conjecture \ref{conj:weakworstcasecounting}, any algorithm $A$ for \textsc{\#$(k,s)$-clique} that has error probability less than $(\log n)^{-\binom{k}{s} - \eps}$ on Erd\H{o}s-R\'enyi hypergraphs drawn from $G(n,c,s)$ must have runtime at least $T(A, n) \ge n^{\Omega(k)}$.
\item Assuming Conjecture \ref{conj:strongworstcasecounting}, any algorithm $A$ for \textsc{\#$(k,s)$-clique} that has error probability less than $(\log n)^{-\binom{k}{s} - \eps}$ on Erd\H{o}s-R\'enyi hypergraphs drawn from $G(n,c,s)$ must have runtime at least $T(A, n) \ge \tilde{\Omega}\left(n^{\omega\ceil{k/3}}\right)$ if $s = 2$ and $T(A, n) \ge \tilde{\Omega}(n^{k})$ if $s \geq 3$.
\end{enumerate}
\end{corollary}
\begin{corollary}[Average-case hardness of \textsc{\#$(k,s)$-clique} on sparse $G(n,c,s)$]\label{cor:averagecasecountingsparse}
Let $k, \alpha, \eps > 0$ be constants and $c = \Theta(n^{-\alpha})$. Assuming Conjecture \ref{conj:strongworstcasecounting}, any algorithm $A$ for \textsc{\#$(k,s)$-clique} that has error probability less than $n^{-\alpha \binom{k}{s} - \eps}$ on Erd\H{o}s-R\'enyi hypergraphs drawn from $G(n,c,s)$ must have runtime at least $T(A, n) \ge \tilde{\Omega}\left(n^{\omega\ceil{k/3} - \alpha\binom{k}{s}}\right)$ if $s = 2$ and $T(A, n) \ge \tilde{\Omega}\left(n^{k - \alpha\binom{k}{s}}\right)$ if $s \geq 3$.
\end{corollary}
We remark that Conjecture \ref{conj:weakworstcasecounting} implies there is a constant $C > 0$ such that a version of Corollary \ref{cor:averagecasecountingsparse} holds with the weaker conclusion $T(A, n) \ge n^{\Omega(k)}$ for all $\alpha \le C \binom{k}{s}$. For \textsc{Parity-$(k,s)$-clique}, we consider here the implications of Theorem \ref{thm:averagecasehardnessparity} only for $c = 1/2$, since this is the setting in which we obtain substantially different lower bounds than for \textsc{\#$(k,s)$-clique}. As shown, an error probability of $o(1)$ on $G(n,1/2,s)$ hypergraphs suffices for our reduction to succeed.
\begin{corollary}[Average-case hardness of \textsc{Parity-$(k,s)$-clique} on $G(n,1/2,s)$]\label{cor:averagecaseparityhalf}
Let $k$ be constant. Assuming Conjecture \ref{conj:weakworstcaseparity}, there is a small enough constant $\eps \triangleq \eps(k,s)$ such that if any algorithm $A$ for \textsc{Parity-$(k,s)$-clique} has error less than $\eps$ on $G(n,1/2,s)$ then $A$ must have runtime at least $T(A, n) \ge n^{\Omega(k)}$.
\end{corollary}
We remark on one subtlety of our setup in the sparse case. Especially in our algorithms section, we generally restrict our attention to $c = \Theta(n^{-\alpha})$ satisfying $\alpha \le k\binom{k}{s}^{-1} = s\binom{k}{s - 1}^{-1}$, which is necessary for the expected number of $k$-cliques in $G(n, c, s)$ to not tend to zero. However, even when this expectation is decaying, the problem \textsc{\#$(k,s)$-clique} as we formulate it is still nontrivial. The simple algorithm that always outputs zero fails with a polynomially small probability, which does not appear to meet the $1/\Upsilon_{\#}$ requirement in our worst-case to average-case reduction. A simple analysis of this error probability can be found in Lemma \ref{lem:cliquenumber}. Note that even when $\alpha > s\binom{k}{s - 1}^{-1}$, $\textsc{greedy-random-sampling}$ and its derivative algorithms in Section \ref{sec:algs} still have guarantees and succeed with probability $1 - n^{-\omega(1)}$. We now discuss the multiplicative overhead and error tolerance in our worst-case to average-case reduction for \textsc{\#$(k,s)$-clique}.
\paragraph{Discussion of the Multiplicative Slowdown $\Upsilon_{\#}$} In the sparse case of $c = \Theta(n^{-\alpha})$, our algorithmic upper bounds in Section \ref{sec:algs} imply lower bounds on the multiplicative overhead factor $\Upsilon_{\#}$ in Theorem \ref{thm:averagecasehardnesscounting}. In the hypergraph case of $s \ge 3$ and below the $k$-clique percolation threshold, it must follow that the overhead is at least $\Upsilon_{\#} = \tilde{\Omega}\left( n^{\alpha \binom{k}{s}} \right) = \tilde{\Omega}\left(c^{-\binom{k}{s}} \right)$. Otherwise, our algorithms combined with our worst-case to average-case reduction would contradict Conjecture \ref{conj:strongworstcasecounting}. Up to $\polylog(n)$ factors, this exactly matches the $\Upsilon_{\#}$ from our reduction. In the graph case of $s = 2$, it similarly must follow that the overhead is at least $\Upsilon_{\#} = \tilde{\Omega}\left( n^{\frac{\omega \alpha}{9} \binom{k}{s}} \right) = \tilde{\Omega}\left(c^{-\frac{\omega}{9}\binom{k}{s}} \right)$ to not contradict Conjecture \ref{conj:strongworstcasecounting}. This matches the $\Upsilon_{\#}$ from our reduction up to a constant factor in the exponent.
\paragraph{Discussion of the Error Tolerance $1/\Upsilon_{\#}$} Notice that our worst-case to average-case reductions in Theorems \ref{thm:averagecasehardnesscounting} and \ref{thm:averagecasehardnessparity} require that the error of the average-case blackbox on Erd\H{o}s-R\'enyi hypergraphs go to zero as $k$ goes to infinity. This error tolerance requirement is unavoidable. When $k = \omega(\log n)$ in the dense Erd\H{o}s-R\'enyi graph case of $G(n, 1/2)$, there is a $k$-clique with at most $\binom{n}{k} 2^{-\binom{k}{2}} = o(1)$ probability by a union bound on $k$-subsets of vertices. So in this regime clique-counting on $G(n,1/2)$ with constant error probability is not hard: the algorithm that always outputs zero achieves $o(1)$ average-case error.
If $k \triangleq 3 \log_2 n$, then the probability of a $k$-clique on $G(n,1/2)$ is less than $\binom{n}{k} 2^{-\binom{k}{2}} \leq 2^{-k^2/6}$. So average-case $k$-clique counting is not hard with error more than $2^{-k^2/6}$. On the other hand, our \textsc{\#$(k,2)$-clique} reduction works with average-case error less than $1/\Upsilon_{\#} = 2^{-\Omega(k^2 \log \log n)}$. And our \textsc{Parity-$(k,2)$-clique} reduction is more lenient, requiring error only less than $2^{-\Omega(k^2 \log \log \log n)}$. Thus, the error bounds required by our reductions are quite close to the $2^{-k^2/6}$ error bound that is absolutely necessary for any reduction in this regime.
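The union-bound arithmetic above is easy to verify numerically; the following short check (our own illustration, using a concrete $n$ with $k = 3\log_2 n$ an integer) confirms that the bound falls below the $2^{-k^2/6}$ threshold:

```python
from math import comb

# For n = 16 we have k = 3 * log2(n) = 12.  The union bound
# C(n, k) * 2^(-C(k, 2)) on the probability that G(n, 1/2) contains a
# k-clique is comfortably below the 2^(-k^2/6) threshold from the text.
n, k = 16, 12
union_bound = comb(n, k) * 2.0 ** (-comb(k, 2))
threshold = 2.0 ** (-(k * k) / 6)
print(union_bound < threshold)  # True
```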
In the regime where $k = O(1)$ is constant and on $G(n, 1/2)$, our \textsc{Parity-$(k,2)$-clique} reduction only requires a small constant probability of error and our \textsc{\#$(k,2)$-clique} reduction requires less than a $1/\polylog(n)$ probability of error. We leave it as an intriguing open problem whether the error tolerance of our reductions can be improved in this regime.
Finally, we remark that the error tolerance of the reduction must depend on $c$. The probability that a $G(n,c)$ graph contains a $k$-clique is less than $(nc^{(k-1)/2})^{k}$. For example, if $c = 1/n$ then the probability that there exists a $k$-clique is less than $n^{-\Omega(k^2)}$. As a result, no worst-case to average-case reduction can tolerate average-case error more than $n^{-O(k^2)}$ on $G(n, 1/n)$ graphs. And therefore our reductions for \textsc{\#$(k,2)$-clique} and for \textsc{Parity-$(k,2)$-clique} are close to optimal when $c = 1/n$, because our error tolerance scales as $n^{-O(k^2 \log \log n)}$.
\section{Worst-Case to Average-Case Reduction for $G(n, c, s)$}
\label{sec:averagecasereductionproof}
In this section, we give our main worst-case to average-case reduction that transforms a blackbox solving \textsc{\#$(k,s)$-clique} on $G(n, c, s)$ into a blackbox solving \textsc{\#$(k,s)$-clique} on a worst-case input hypergraph. This also yields a worst-case to average-case reduction for \textsc{Parity-$(k,s)$-clique} and proves Theorems \ref{thm:averagecasehardnesscounting} and \ref{thm:averagecasehardnessparity}. The reduction involves the following five main steps, the details of which are in Sections \ref{sec:kpartite} to \ref{sec:erdosrenyi}.
\begin{enumerate}
\item Reduce \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} on general worst-case hypergraphs to the worst-case problems with inputs that are $k$-partite hypergraphs with $k$ parts of equal size.
\item Reduce the worst-case problem on $k$-partite hypergraphs to the problem of computing a low-degree polynomial $P_{n,k,s}$ on $N \triangleq N(n,k,s)$ variables over a small finite field $\FF$.
\item Reduce the problem of computing $P_{n,k,s}$ on worst-case inputs to computing $P_{n,k,s}$ on random inputs in $\FF^N$.
\item Reduce the problem of computing $P_{n,k,s}$ on random inputs in $\FF^N$ to computing $P_{n,k,s}$ on random inputs in $\{0,1\}^N$. This corresponds to \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} on $k$-partite Erd\H{o}s-R\'enyi hypergraphs.
\item Reduce the resulting average-case variants of \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} on $k$-partite Erd\H{o}s-R\'enyi hypergraphs to non-$k$-partite Erd\H{o}s-R\'enyi hypergraphs.
\end{enumerate}
These steps are combined in Section \ref{sec:proofsofmain} to complete the proofs of Theorems \ref{thm:averagecasehardnesscounting} and \ref{thm:averagecasehardnessparity}. Before proceeding to our worst-case to average-case reduction, we establish some definitions and notation, and also give pseudocode for the counting reduction in Figure \ref{fig:pseudocodecounting} -- the parity reduction is similar.
The intermediate steps of our reduction crucially make use of $k$-partite hypergraphs with $k$ parts of equal size, defined below.
\begin{definition}[$k$-Partite Hypergraphs] \label{def:kpartiteness}
Given a $s$-uniform hypergraph $G$ on $nk$ vertices with vertex set $V(G) = [n] \times [k]$, define the vertex labelling
$$L : (i,j) \in [n] \times [k] \mapsto j \in [k]$$
If for all $e = \{u_1,\ldots,u_s\} \in E(G)$, the labels $L(u_1), L(u_2), \dots, L(u_s)$ are distinct, then we say that $G$ is $k$-partite with $k$ parts of equal size $n$.
\end{definition}
In our reduction, it suffices to consider only $k$-partite hypergraphs with $k$ parts of equal size. For ease of notation, our $k$-partite hypergraphs will always have $nk$ vertices and vertex set $[n] \times [k]$. In particular, the edge set of a $k$-partite $s$-uniform hypergraph is an arbitrary subset of
$$E(G) \subseteq \left\{\{u_1,\ldots,u_s\} \subset V(G) : L(u_1),\ldots,L(u_s) \text{ are distinct} \right\}$$
Taking edge indicators yields that the $k$-partite hypergraphs on $nk$ vertices we consider are in bijection with $\{0,1\}^N$, where $N \triangleq N(n,k,s) = \binom{k}{s} n^s$ is the size of the set of permitted hyperedges. Thus we will refer to elements $x \in \{0,1\}^N$ and $k$-partite $s$-uniform hypergraphs on $nk$ vertices interchangeably. This definition also extends to Erd\H{o}s-R\'enyi hypergraphs.
\begin{definition}[$k$-Partite Erd\H{o}s-R\'enyi Hypergraphs] \label{def:erdosrenyikpartite}
The $k$-partite $s$-uniform Erd\H{o}s-R\'enyi hypergraph $G(nk,c,s,k)$ is a distribution over hypergraphs on $nk$ vertices with vertex set $V(G) = [n] \times [k]$. A sample from $G(nk,c,s,k)$ is obtained by independently including each hyperedge $e = \{u_1,\ldots,u_s\}$ with $L(u_1), L(u_2), \dots, L(u_s)$ distinct, each with probability $c$.
\end{definition}
Viewing hypergraphs as elements of $\{0,1\}^N$, it follows that $G(nk,c,s,k)$ corresponds to the product distribution $\Ber(c)^{\otimes N}$.
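The correspondence between $k$-partite hypergraphs, bit vectors in $\{0,1\}^N$, and the product distribution $\Ber(c)^{\otimes N}$ can be sketched as follows (a minimal Python illustration; the enumeration order and function names are our own choices):

```python
from itertools import combinations, product
import random

def permitted_hyperedges(n, k, s):
    """All label-respecting hyperedges over vertex set [n] x [k], where
    vertex (v, j) has label L((v, j)) = j.  Fixing this enumeration
    order gives the bijection with {0,1}^N for N = C(k, s) * n^s."""
    return [tuple((verts[t], labels[t]) for t in range(s))
            for labels in combinations(range(k), s)
            for verts in product(range(n), repeat=s)]

def sample_kpartite_er(n, k, s, c, seed=0):
    """Sample G(nk, c, s, k) as the product distribution Ber(c)^N."""
    rng = random.Random(seed)
    N = len(permitted_hyperedges(n, k, s))
    return [1 if rng.random() < c else 0 for _ in range(N)]

E = permitted_hyperedges(2, 3, 2)
print(len(E))  # N = C(3, 2) * 2^2 = 12
x = sample_kpartite_er(2, 3, 2, 0.5)
print(len(x))  # 12
```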
\begin{figure}[t!]
\centering
\begin{algbox}
\textbf{Algorithm} \textsc{To-ER-\#}$(G,k,A,c)$
\vspace{2mm}
\textit{Inputs}: $s$-uniform hypergraph $G$ with vertex set $[n]$, parameters $k$, $c$, algorithm $A$ for \textsc{\#$(k,s)$-clique} on Erd\H{o}s-R\'enyi hypergraphs with density $c$.
\begin{enumerate}
\item Construct an $s$-uniform hypergraph $G'$ on vertex set $[n] \times [k]$ by defining
\begin{align*}
E(G') &= \Big\{\{(v_1,t_1),(v_2,t_2),\dots,(v_s,t_s)\} \\
&\quad \quad \quad \quad \quad \quad \quad : \{v_1,\ldots,v_s\} \in E(G) \text{ and } \substack{1 \leq v_1 < v_2 < \cdots < v_s \leq n
\\ \\ 1 \leq t_1 < t_2 < \cdots < t_s \leq k} \Big\}.
\end{align*}
Since $G'$ is $k$-partite, view it as an indicator vector of edges $G' \in \{0,1\}^N$ for $N := N(n,k,s) = \binom{k}{s} n^s$.
\item Find the first $T$ primes $12\binom{k}{s} < p_1 < \dots < p_T$ such that $\prod_{i=1}^T p_i > n^k$.
\item Define $L : (a,b) \in [n] \times [k] \mapsto b \in [k]$, and $$P_{n,k,s}(x) = \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G') \\ L(u_i) = i \ \forall i}} \prod_{\substack{S \subseteq [k] \\ |S| = s}} x_{u_S}$$
For each $1 \leq t \leq T$, compute $P_{n,k,s}(G') \pmod{p_t}$, as follows:
\begin{enumerate}
\item[(1)] Use the procedure of \cite{gemmell1992highly} in order to reduce the computation of $P_{n,k,s}(G') \pmod{p_t}$ to the computation of $P_{n,k,s}$ on $M = 12 \binom{k}{s}$ distinct inputs $x_1,\ldots,x_M \sim \mathrm{Unif}[\FF_{p_t}^N]$.
\item[(2)] For each $1 \leq m \leq M$, compute $P_{n,k,s}(x_m) \pmod{p_t}$ as follows:
\begin{enumerate}
\item[(i)] Use the rejection sampling procedure of Lemma \ref{lem:samplefrommodp} in order to sample $(Y^{(0)},\ldots,Y^{(B)})$ close to $(\Ber(c)^{\otimes N})^{\otimes B}$ in total variation distance, such that $x_m \equiv \sum_{b=0}^{B} 2^b \cdot Y^{(b)} \pmod{p_t}$. It suffices to take $B = \Theta(c^{-1}(1-c)^{-1} s (\log n)(\log p_t))$.
\item[(ii)] For each function $a : \binom{[k]}{s} \to \{0,\ldots,B\}$, define $Y^{(a)}_S = Y^{(a(L(S)))}_S$ for all hyperedges $S \in [N]$. Note that for each $a$, the corresponding $Y^{(a)}$ is approximately distributed as $\Ber(c)^{\otimes N}$. Use algorithm $A$ and the recursive counting procedure of Lemma \ref{lem:erkpartitetoergeneralreductioncounting} in order to compute $P_{n,k,s}(Y^{(a)})$ for each $a$.
\item[(iii)] Set $P_{n,k,s}(G') \leftarrow \sum_{a : \binom{[k]}{s} \to \{0,\ldots,B\}} 2^{|a|_1} \cdot P_{n,k,s}(Y^{(a)})$.
\end{enumerate}
\end{enumerate}
\item Since $0 \leq P_{n,k,s}(G') \leq n^k$, use Chinese remaindering and the computations of $P_{n,k,s}(G') \pmod{p_i}$ in order to
calculate and output $P_{n,k,s}(G')$.
\end{enumerate}
\vspace{1mm}
\end{algbox}
\caption{Reduction $\textsc{To-ER-\#}$ for showing computational lower bounds for average-case \textsc{\#$(k,s)$-clique} on Erd\H{o}s-R\'enyi $G(n,c,s)$ hypergraphs based on the worst-case hardness of \textsc{\#$(k,s)$-clique}.}
\label{fig:pseudocodecounting}
\end{figure}
\subsection{Worst-Case Reduction to $k$-Partite Hypergraphs}
\label{sec:kpartite}
In the following lemma, we prove that the worst-case complexity of \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} are nearly unaffected when we restrict the inputs to be worst-case $k$-partite hypergraphs. This step is important, because the special structure of $k$-partite hypergraphs will simplify future steps in our reduction.
\begin{lemma}\label{lem:worstcasehardnesskpartite}
Let $A$ be an algorithm for \textsc{\#$(k,s)$-clique} such that $A$ has error probability less than $1/3$ on any $k$-partite hypergraph $G$ on $nk$ vertices. Then there is an algorithm $B$ for \textsc{\#$(k,s)$-clique} with error probability less than $1/3$ on any hypergraph $G$, such that $T(B,n) \leq T(A,n) + O(k^s n^s)$. Furthermore, the same result holds for \textsc{Parity-$(k,s)$-clique} in place of \textsc{\#$(k,s)$-clique}.
\end{lemma}
\begin{proof}
Let $G$ be an $s$-uniform hypergraph on $n$ vertices. Construct the $s$-uniform hypergraph $G'$ on the vertex set $V(G') = [n] \times [k]$ with edge set
$$E(G') = \left\{\{(v_1,t_1),(v_2,t_2),\dots,(v_s,t_s)\} : \{v_1,\ldots,v_s\} \in E(G) \text{ and } \substack{1 \leq v_1 < v_2 < \cdots < v_s \leq n
\\ \\ 1 \leq t_1 < t_2 < \cdots < t_s \leq k}\right\}$$
The hypergraph $G'$ can be constructed in $O(k^s n^s)$ time. Note that $G'$ is $k$-partite with the vertex partition $L : (i,j) \in [n] \times [k] \mapsto j \in [k]$. There is also a bijective correspondence between $k$-cliques in $G'$ and $k$-cliques in $G$ given by
$$\{v_1,v_2,\ldots,v_k\} \mapsto \{(v_1,1),(v_2, 2),\ldots,(v_k,k)\}$$
where $v_1 < v_2 < \dots < v_k$. Thus, the $k$-partite $s$-uniform hypergraph $G'$ on $nk$ vertices has exactly the same number of $k$-cliques as $G$. It suffices to run $A$ on $G'$ and to return its output.
\end{proof}
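The blow-up construction in the proof, together with the clique-count preservation it guarantees, can be sketched in a few lines of Python (our own illustration; the function names are hypothetical):

```python
from itertools import combinations

def blow_up(n, edges, k, s):
    """Construction from the proof: G' on [n] x [k] has hyperedge
    {(v_1, t_1), ..., (v_s, t_s)} whenever {v_1, ..., v_s} is in E(G)
    with v_1 < ... < v_s and t_1 < ... < t_s.  G' is k-partite and has
    exactly as many k-cliques as G."""
    out = set()
    for e in edges:
        vs = sorted(e)
        for ts in combinations(range(k), s):
            out.add(frozenset(zip(vs, ts)))
    return out

def count_cliques(vertices, edges, k, s):
    return sum(1 for C in combinations(sorted(vertices), k)
               if all(frozenset(S) in edges for S in combinations(C, s)))

# Triangle on 3 vertices (s = 2, k = 3): one 3-clique before and after.
G = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2)]}
Gp = blow_up(3, G, 3, 2)
V = {(i, j) for i in range(3) for j in range(3)}
print(count_cliques(range(3), G, 3, 2), count_cliques(V, Gp, 3, 2))  # 1 1
```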
A corollary to Lemma \ref{lem:worstcasehardnesskpartite} is that any worst-case hardness for \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} on general $s$-uniform hypergraphs immediately transfers to the $k$-partite case. For instance, the lower bounds of Conjectures \ref{conj:weakworstcasecounting}, \ref{conj:weakworstcaseparity}, and \ref{conj:strongworstcasecounting} imply corresponding lower bounds in the $k$-partite case. Going forward in our worst-case to average-case reduction, we may restrict our attention to $k$-partite hypergraphs without loss of generality.
\subsection{Counting $k$-Cliques as a Low-Degree Polynomial}
A key step in our worst-case to average-case reduction is to express the number of $k$-cliques as a low-degree polynomial in the adjacency matrix. As mentioned in the introduction, a similar step -- but without the $k$-partiteness constraint -- appears in the worst-case to average-case reduction of Goldreich and Rothblum \cite{goldreich2018counting}.
Let $\mathcal{E} \subset \binom{V(G)}{s}$ be the set of possible hyperedges that respect the $k$-partition: i.e., $\mathcal{E} = \{A \in \binom{V(G)}{s} : |L(A)| = s\}$. Let $N \triangleq N(n,k,s) = |\mathcal{E}|$ and identify $\mathcal{E}$ with $[N]$ through a bijection $\pi : [N] \to \mathcal{E}$. To simplify the notation, we will omit the map $\pi$ in the proof, and simply treat $[N]$ and $\mathcal{E}$ as the same set. Thus, each $x \in \{0, 1\}^N$ corresponds to a $k$-partite hypergraph where $x_A$ is the indicator that $A \in \mathcal{E}$ is an edge in the hypergraph. The number of $k$-cliques of a $k$-partite hypergraph $x \in \{0,1\}^N$ is a degree-$D$ polynomial $P_{n,k,s} : \{0,1\}^N \to \ZZ$ where $D \triangleq D(k,s) = \binom{k}{s}$:
\begin{equation}\label{eq:cliquecountpolynomialkpartite}
P_{n,k,s}(x) = \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall i \ L(u_i) = i}} \prod_{\substack{S \subset [k] \\ |S| = s}} x_{u_S}
\end{equation}
For any finite field $\FF$, this equation defines $P_{n,k,s}$ as a polynomial over that finite field. For clarity, we write this polynomial over $\FF$ as $P_{n,k,s,\FF} : \FF^N \to \FF$. Observe that for any hypergraph $x \in \{0,1\}^N$, we have that
$$P_{n,k,s,\FF}(x) = P_{n,k,s}(x) \pmod{\mathrm{char}(\FF)}$$
where $\mathrm{char}(\FF)$ is the characteristic of the finite field. We now reduce computing \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} on a $k$-partite hypergraph $x \in \{0,1\}^N$ to computing $P_{n,k,s,\FF}(x)$ for appropriate finite fields $\FF$. This is formalized in the following two propositions.
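To make the definition in Equation \eqref{eq:cliquecountpolynomialkpartite} concrete, the following Python sketch evaluates $P_{n,k,s}$ by brute force. The representation (a dictionary mapping each label-respecting $s$-set of vertices to its weight, with vertex $(i,j)$ denoting the $i$-th vertex of part $j$) and the function name are our own illustrative choices.

```python
from itertools import combinations, product

def clique_polynomial(x, n, k, s, modulus=None):
    """Brute-force evaluation of P_{n,k,s}(x): sum, over all choices of
    one vertex per part, of the product of the weights of the binom(k,s)
    hyperedges spanned by the chosen vertices."""
    total = 0
    for choice in product(range(n), repeat=k):  # u_j = (choice[j], j)
        term = 1
        for S in combinations(range(k), s):
            edge = frozenset((choice[j], j) for j in S)
            term *= x.get(edge, 0)
        total += term
    return total % modulus if modulus is not None else total
```

With all weights equal to $1$, every choice of one vertex per part spans a clique, so the evaluation returns $n^k$.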
\begin{proposition}\label{prop:countingchineseremaindertheorem}
Let $x \in \{0,1\}^N$ denote an $s$-uniform hypergraph that is $k$-partite with vertex labelling $L$. Let $p_1,p_2,\ldots,p_t$ be $t$ distinct primes such that $\prod_{i} p_i > n^k$. Then, solving \textsc{\#$(k,s)$-clique} reduces to computing $P_{n,k,s,\FF_{p_i}}(x)$ for all $i \in [t]$, plus $O(k \log n)$ additive computational overhead. Moreover, computing $P_{n,k,s,\FF_{p_i}}(x)$ for all $i \in [t]$ reduces to computing \textsc{\#$(k,s)$-clique}, plus $O(t k \log n)$ computational overhead.
\end{proposition}
\begin{proof}
Note that $P_{n,k,s}(x) \leq n^k$ since there are at most $n^k$ cliques in the hypergraph. So the claim follows from the Chinese Remainder Theorem and the fact that for any $i \in [t]$, it holds that $P_{n,k,s,\FF_{p_i}}(x) \equiv P_{n,k,s}(x) \pmod{p_i}$.
\end{proof}
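Proposition \ref{prop:countingchineseremaindertheorem} uses the constructive form of the Chinese Remainder Theorem. A minimal Python sketch of the reconstruction step (the function name is ours; \texttt{pow(Mi, -1, p)} computes a modular inverse and requires Python 3.8+):

```python
from math import prod

def crt_reconstruct(residues, primes):
    """Recover the unique integer 0 <= N < prod(primes) with
    N = r_i (mod p_i) for each i, via the standard CRT formula."""
    M = prod(primes)
    total = 0
    for r, p in zip(residues, primes):
        Mi = M // p                       # product of the other primes
        total += r * Mi * pow(Mi, -1, p)  # inverse of Mi modulo p
    return total % M
```

In the proposition, $N = P_{n,k,s}(x) \le n^k$, so any collection of distinct primes whose product exceeds $n^k$ suffices.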
\begin{proposition}\label{prop:parityequivchartwoevaluation}
Let $\FF$ be a finite field of characteristic $2$. Let $x \in \{0,1\}^N$ be an $s$-uniform hypergraph that is $k$-partite with vertex labelling $L$. Then solving \textsc{Parity-$(k,s)$-clique} for $x$ is equivalent to computing $P_{n,k,s,\FF}(x)$.
\end{proposition}
\begin{proof}
This is immediate from $P_{n,k,s,\FF}(x) \equiv P_{n,k,s}(x) \pmod{\mathrm{char}(\FF)}$.
\end{proof}
\subsection{Random Self-Reducibility: Reducing to Random Inputs in $\FF^N$}
Expressing the number and parity of cliques as low-degree polynomials allows us to perform a key step in the reduction: because polynomials over finite fields are random self-reducible, we can reduce computing $P_{n,k,s,\FF}$ on worst-case inputs to computing $P_{n,k,s,\FF}$ on several uniformly random inputs in $\FF^N$.
The following well-known lemma states the random self-reducibility of low-degree polynomials. The lemma first appeared in \cite{gemmell1992highly}. We follow the proof of \cite{ball2017average} in order to present the lemma with explicit guarantees on the running time of the reduction.
\begin{lemma}[Theorem 4 of \cite{gemmell1992highly}] \label{lem:reedsolomon} Let $\FF$ be a finite field with $|\FF| = q$ elements.
Let $N,D > 0$. Suppose $9 < D < q/12$. Let $f : \FF^N \to \FF$ be a polynomial of degree at most $D$. If there is an algorithm $A$ running in time $T(A,N)$ such that $$\PP_{x \sim \mathrm{Unif}\left[\FF^N\right]} [A(x) = f(x)] > 2/3,$$ then there is an algorithm $B$ running in time $O((N+D)D^2 \log^2 q + T(A,N) \cdot D)$ such that for {\em any} $x \in \FF^N$, it holds that $\PP[B(x) = f(x)] > 2/3$.
\end{lemma}
For completeness, we provide a proof of this lemma in Appendix \ref{app:reedsolomonproof}. Lemma \ref{lem:reedsolomon} implies that if we can efficiently compute $P_{n,k,s,\FF}$ on at least a 2/3 fraction of randomly chosen inputs in $\FF^N$, then we can efficiently compute the polynomial $P_{n,k,s,\FF}$ over a worst-case input in $\FF^N$.
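To illustrate the mechanism behind Lemma \ref{lem:reedsolomon}, the following Python sketch evaluates a degree-$D$ polynomial at a worst-case point by querying an oracle only at uniformly random points along a random line, then interpolating. This simplified version assumes the oracle answers all $D+1$ queries correctly; tolerating a constant error rate, as in the lemma, additionally requires querying a random curve and decoding, which we omit. The function names are ours.

```python
import random

def lagrange_at_zero(points, p):
    """Evaluate at 0 the unique polynomial of degree < len(points)
    through the given (i, y_i) pairs, over F_p."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def evaluate_via_random_line(oracle, x, deg, p):
    """Compute f(x) for a degree-`deg` polynomial f over F_p^n while
    querying `oracle` only at uniformly random points x + i*y."""
    n = len(x)
    y = [random.randrange(p) for _ in range(n)]
    # g(i) := f(x + i*y) is univariate of degree <= deg in i, so
    # deg + 1 evaluations determine it; g(0) = f(x).
    pts = [(i, oracle([(x[j] + i * y[j]) % p for j in range(n)]))
           for i in range(1, deg + 2)]
    return lagrange_at_zero(pts, p)
```

The key point, as in the lemma, is that each individual query $x + iy$ is uniformly distributed in $\FF_p^N$, even though the queries are correlated.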
\subsection{Reduction to Evaluating the Polynomial on $G(nk,c,s,k)$}
So far, we have reduced worst-case clique-counting over unweighted hypergraphs to the average-case problem of computing $P_{n,k,s,\FF}$ over $k$-partite hypergraphs with random edge weights in $\FF$. It remains to reduce from computing $P_{n,k,s,\FF}$ on inputs $x \sim \text{Unif}\left[\FF^N\right]$ to computing it on random hypergraphs, which correspond to inputs $x \in \{0,1\}^N$ with i.i.d. $\Ber(c)$ entries. Since $\{0, 1\}^N$ is an exponentially small subset of $\FF^N$ if $|\FF| > 2$, the random weighted and unweighted hypergraph problems are very different. In this section, we carry out this reduction using two different arguments for \textsc{Parity-$(k,s)$-clique} and \textsc{\#$(k, s)$-clique}. The latter reduction is based on the total variation convergence of random binary expansion modulo $p$ to $\text{Unif}[\FF_p]$ and related algorithmic corollaries from Section \ref{sec:randombinaryexpansions}.
We first present the reduction that will be applied in the case of \textsc{Parity-$(k,s)$-clique}; recall that $D = \binom{k}{s}$ is the degree of $P_{n,k,s}$. The following lemma is used only in that case:
\begin{lemma}\label{lem:fpttofp}
Let $p$ be prime and $t \ge 1$. Suppose $A$ is an algorithm that computes $P_{n,k,s,\FF_p}(y)$ with error probability less than $\delta \triangleq \delta(n)$ for $y \sim \textnormal{Unif}\left[ \FF_p^N \right]$ in time $T(A, n)$. Then there is an algorithm $B$ that computes $P_{n,k,s,\FF_{p^t}}(x)$ with error probability less than $t^D \cdot \delta$ for $x \sim \textnormal{Unif}\left[ \FF_{p^t}^N \right]$ in time $T(B,n) = O\left(N t^4 (\log p)^3 + t^D \cdot T(A,n)\right)$.
\end{lemma}
\begin{proof}
We give a reduction computing $P_{n,k,s,\FF_{p^t}}(x)$ where $x \sim \textnormal{Unif}\left[ \FF_{p^t}^N \right]$ given blackbox access to $A$. Let $\beta$ be such that $\beta, \beta^p, \beta^{p^2}, \ldots,\beta^{p^{t-1}} \in \FF_{p^t}$ forms a normal basis for $\FF_{p^t}$ over $\FF_p$. Now for each $i \in [N]$, compute the basis expansion
$$x_i = x_i^{(0)} \beta + x_i^{(1)} \beta^p + \dots + x_i^{(t-1)} \beta^{p^{t-1}}.$$
One can find a generator $\beta$ of a normal basis of $\FF_{p^t}$ over $\FF_p$ in time $O((t^2 + \log p)(t \log p)^2)$ by Bach et al. \cite{bach1993factor}. Computing $x^{(0)},\ldots,x^{(t-1)}$ then takes time $O(N t^3 (\log p)^3)$, because $N$ applications of Gaussian elimination each take at most $O(t^3)$ operations over $\FF_p$.\footnote{For a good survey on normal bases, we recommend \cite{gao1993normal}.} Note that since $x$ is uniformly distributed and $\beta, \beta^p, \ldots,\beta^{p^{t-1}}$ form a basis, it follows that $x^{(0)},x^{(1)},\ldots,x^{(t-1)}$ are distributed i.i.d.\ according to $\text{Unif}\left[\FF_p^N\right]$.
Given a coloring of the hyperedges $b : [N] \to \{0,1,\ldots,t-1\}$, define $x^{(b)} \in \FF_p^N$ as $x_i^{(b)} = x_i^{(b(i))}$ for all $i \in [N]$. Observe that for any fixed coloring $b$, the vector $x^{(b)}$ is uniform in $\FF_p^N$.
In our proof, for every map $a : \binom{[k]}{s} \to \{0, 1, \ldots,t-1\}$, we construct a coloring $a \circ L : [N] \to \{0,\ldots,t-1\}$ of the hyperedges $[N]$ using the $k$-partiteness of the hypergraph. Given a hyperedge $W = \{w_1,\ldots,w_s\} \in \mathcal{E} = [N]$, we have that $L(W) \in \binom{[k]}{s}$ by the $k$-partiteness of the hypergraph, and hence the color $(a \circ L)(W) \triangleq a(L(W))$ is well-defined. As above, for any fixed $a$, the vector $x^{(a \circ L)}$ is uniform in $\FF_p^N$.
We now manipulate $P_{n,k,s,\FF_{p^t}}$. First we write each entry $x_{u_S}$ in the normal basis, and then we redistribute terms to write $P_{n,k,s,\FF_{p^t}}$ as a weighted sum of clique-counts modulo $p$:
\allowdisplaybreaks
\begin{align*}
P_{n,k,s,\FF_{p^t}}(x) &= \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall j \ L(u_j) = j}} \prod_{S \in \binom{[k]}{s}} x_{u_S} \\
&= \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall j \ L(u_j) = j}} \prod_{S \in \binom{[k]}{s}} \left(\sum_{i=0}^{t-1} x_{u_S}^{(i)} \beta^{p^i}\right) \\
&= \sum_{ a : {[k] \choose s} \to \{0,\ldots,t-1\}} \left(\sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall i \ L(u_i) = i}} \prod_{S \in \binom{[k]}{s}} \left( x_{u_S}^{(a(S))} \beta^{p^{a(S)}}\right)\right) \\
&= \sum_{ a : {[k] \choose s} \to \{0,\ldots,t-1\}} \left(\prod_{S \in \binom{[k]}{s}} \beta^{p^{a(S)}}\right) \left(\sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall i \ L(u_i) = i}} \prod_{S \in \binom{[k]}{s}} x_{u_S}^{(a(S))}\right) \\
&= \sum_{ a : {[k] \choose s} \to \{0,\ldots,t-1\}} \left(\prod_{S \in \binom{[k]}{s}} \beta^{p^{a(S)}}\right) P_{n,k,s,\FF_p}\left(x^{(a \circ L)}\right)
\end{align*}
Since $x^{(a \circ L)} \sim \text{Unif}\left[ \FF_p^N \right]$ for each fixed map $a$, computing $P_{n,k,s,\FF_{p^t}}(x)$ reduces to evaluating $P_{n,k,s,\FF_p}$ on $t^D$ uniformly random inputs in $\FF_p^N$ and outputting a weighted sum of the evaluations. The error probability is bounded by a union bound.
\end{proof}
We now give the reduction to evaluating $P_{n, k, s}$ on random hypergraphs drawn from $G(nk, c, s, k)$ in the case of \textsc{\#$(k, s)$-clique}. One of the main lemmas driving the reduction is the following.
\begin{lemma} \label{lem:packedbinaryexpansions}
There is an absolute constant $K$ such that the following holds. For any $\eps > 0$, $c \in (0,1)$, prime $p > 2$, and $t \ge K \cdot c^{-1} (1-c)^{-1} \log (p/\eps^2) \log p$, there is an $O(pt \log(p/\eps))$-time algorithm that, given $x \in [p]$, samples random variables $X_0,\ldots,X_{t-1} \in \{0,1\}$ satisfying $\sum_{i=0}^{t-1} 2^i \cdot X_i \equiv x \pmod{p}\mbox{ almost surely.}$ Moreover, if $x$ is chosen uniformly at random from $[p]$ then $d_{\text{TV}}(\cL(X_0,\ldots,X_{t-1}),\Ber(c)^{\otimes t}) \leq \eps.$
\end{lemma}
The proof of Lemma~\ref{lem:packedbinaryexpansions} is deferred to Section~\ref{sec:randombinaryexpansions}. It is a central ingredient in the \textsc{\#$(k,s)$-clique} reduction and will be used through the following lemma.
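To convey the algorithmic content of Lemma~\ref{lem:packedbinaryexpansions} before its proof, here is one way to sample bits from $\Ber(c)^{\otimes t}$ conditioned on a prescribed residue, via dynamic programming over residues; it uses $O(pt)$ arithmetic operations, consistent with the stated bound. This is an illustrative sketch with our own naming, not necessarily the construction analyzed in Section~\ref{sec:randombinaryexpansions}.

```python
import random

def sample_conditioned_bits(x, p, t, c):
    """Sample (X_0, ..., X_{t-1}), X_i ~ Ber(c) independent, conditioned
    on sum_i 2^i * X_i = x (mod p), by a backward DP over residues."""
    # w[i][r] = P[bits i..t-1 contribute residue r mod p]
    w = [[0.0] * p for _ in range(t + 1)]
    w[t][0] = 1.0
    for i in range(t - 1, -1, -1):
        pw = pow(2, i, p)
        for r in range(p):
            w[i][r] = (1 - c) * w[i + 1][r] + c * w[i + 1][(r - pw) % p]
    bits, r = [], x % p
    for i in range(t):
        pw = pow(2, i, p)
        p1 = c * w[i + 1][(r - pw) % p]   # mass of continuations if X_i = 1
        p0 = (1 - c) * w[i + 1][r]        # mass of continuations if X_i = 0
        if random.random() * (p0 + p1) < p1:
            bits.append(1)
            r = (r - pw) % p
        else:
            bits.append(0)
    return bits
```

When $x$ is uniform in $[p]$, the output is a mixture over residues of these conditional laws, so its distance to $\Ber(c)^{\otimes t}$ is governed by how close $\sum_i 2^i Z_i \bmod p$ is to uniform, exactly the quantity controlled in Section~\ref{sec:randombinaryexpansions}.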
\begin{lemma}\label{lem:fptogncsk}
Let $p$ be prime and let $c = c(n), \gamma = \gamma(n) \in (0,1)$. Suppose that $A$ is an algorithm that computes $P_{n,k,s,\FF_p}(y)$ with error probability less than $\delta \triangleq \delta(n)$ when $y \in \{0,1\}^N$ is drawn from $G(nk,c,s,k)$. Then, for some $t = O(c^{-1}(1-c)^{-1} \log(Np/\gamma) \log p)$, there is an algorithm $B$ that evaluates $P_{n,k,s,\FF_p}(x)$ with error probability at most $\gamma + t^D\cdot\delta$ when $x\sim \textnormal{Unif}\left[\FF_p^N\right]$ in time upper bounded by $T(B,n) = O\left(N p t \log(Np/\gamma) + t^D \cdot T(A,n)\right)$.
\end{lemma}
\begin{proof}
We give a reduction computing $P_{n,k,s,\FF_p}(x)$ where $x \sim \textnormal{Unif}\left[ \FF_{p}^N \right]$ given blackbox access to $A$. We first handle the case in which $p > 2$. For each $j \in [N]$, apply the algorithm from Lemma \ref{lem:packedbinaryexpansions} to sample $x_j^{(0)}, x_j^{(1)}, \ldots,x_j^{(t-1)} \in \{0,1\}$ satisfying
$$d_{\text{TV}}\left(\cL(x_j^{(0)},\ldots,x_j^{(t-1)}), \Ber(c)^{\otimes t}\right) \le \eps \triangleq \gamma/N \quad \text{and} \quad \sum_{i=0}^{t-1} 2^i x_j^{(i)} \equiv x_j \pmod{p}$$
By Lemma \ref{lem:packedbinaryexpansions}, we may choose $t = O(c^{-1}(1-c)^{-1} \log(Np/\gamma) \log p)$ and this sampling can be carried out in $O(Npt \log(Np/\gamma))$ time. By the total variation bound, for each $j$ we may couple $(x_j^{(0)},\ldots,x_j^{(t-1)})$ with $(Z_j^{(0)},\ldots,Z_j^{(t-1)}) \sim \Ber(c)^{\otimes t}$, so that $\PP[x_j^{(i)} = Z_j^{(i)}\ \forall i,j] \geq 1 - \gamma$. Moreover, $x_j^{(i)}$ is independent of $x_l^{(i')}$ whenever $j \neq l$, so we may choose the $Z$ so that $Z_j^{(i)}$ is independent of $Z_l^{(i')}$ whenever $j \neq l$.
As in the proof of Lemma \ref{lem:fpttofp}, given any coloring $b : [N] \to \{0,\ldots,t-1\}$, we define $Z^{(b)} \in \{0,1\}^N$ by $Z_j^{(b)} = Z_j^{(b(j))}$, for all $j \in [N]$. We also note that for any fixed $b$, the entries $Z_1^{(b)},\ldots,Z_N^{(b)}$ are independent and distributed as $\Ber(c)$. Therefore,
$$Z^{(b)} \sim G(nk,c,s,k)$$
Now compute the following quantity, writing each entry of $Z$ as a binary expansion and redistributing terms similarly to the calculations in Lemma \ref{lem:fpttofp}. We are working in $\FF_p$ so the equalities hold modulo $p$:\allowdisplaybreaks
\begin{align*}\tilde{P}_{n,k,s,\FF_{p}}(Z) &\triangleq \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall j \ L(u_j) = j}} \prod_{S \in \binom{[k]}{s}} \left(\sum_{i=0}^{t-1} 2^i \cdot Z_{u_S}^{(i)}\right) \\
&= \sum_{ a : {[k] \choose s} \to \{0,\ldots,t-1\}} \left(\sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall i \ L(u_i) = i}} \prod_{S \in \binom{[k]}{s}} \left( 2^{a(S)} \cdot Z_{u_S}^{(a(S))}\right)\right) \\
&= \sum_{ a : {[k] \choose s} \to \{0,\ldots,t-1\}} \left(\prod_{S \in \binom{[k]}{s}} 2^{a(S)}\right) \left(\sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall i \ L(u_i) = i}} \prod_{S \in \binom{[k]}{s}} Z_{u_S}^{(a(S))}\right) \\ &= \sum_{ a : {[k] \choose s} \to \{0,\ldots,t-1\}} \left(\prod_{S \in \binom{[k]}{s}} 2^{a(S)}\right) P_{n,k,s,\FF_p}(Z^{(a \circ L)}).\end{align*}
We may use algorithm $A$ to evaluate the $t^D$ values of $P_{n,k,s,\FF_p}(Z^{(a \circ L)})$, with error probability at most $t^D \cdot \delta$ by a union bound. Computing $\tilde{P}_{n,k,s,\FF_p}(Z)$ then reduces to computing a weighted sum over the $t^D$ evaluations. Conditioned on the event that $x_j^{(i)} = Z_j^{(i)}$ for all $i,j$, we have $P_{n,k,s,\FF_p}(x) = \tilde{P}_{n,k,s,\FF_p}(Z)$, because \begin{align*}P_{n,k,s,\FF_p}(x) &= \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall j \ L(u_j) = j}} \prod_{S \in \binom{[k]}{s}} x_{u_S} \\
&= \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall j \ L(u_j) = j}} \prod_{S \in \binom{[k]}{s}} \left(\sum_{i=0}^{t-1} 2^i \cdot x_{u_S}^{(i)}\right) \\
&= \sum_{\substack{\{u_1,\ldots,u_k\} \subset V(G) \\ \forall j \ L(u_j) = j}} \prod_{S \in \binom{[k]}{s}} \left(\sum_{i=0}^{t-1} 2^i \cdot Z_{u_S}^{(i)}\right) = \tilde{P}_{n,k,s,\FF_p}(Z).\end{align*}
Since $\PP[x_j^{(i)} = Z_j^{(i)}\ \forall i,j] \geq 1 - \gamma$, by a union bound with the error in calculation we have computed $P_{n,k,s,\FF_p}(x)$ with probability of error $\leq \gamma + t^D \cdot \delta$. The claim follows for the case $p > 2$.
If $p = 2$, then the proof is almost identical, except that since $2 \equiv 0 \pmod{2}$, we may no longer use the result on random binary expansions of Lemma \ref{lem:packedbinaryexpansions}. In this case, for each $j \in [N]$ we sample $x_j^{(0)},\ldots,x_j^{(t-1)} \in \{0,1\}$ such that $d_{\text{TV}}(\cL(x_j^{(0)},\ldots,x_j^{(t-1)}), \Ber(c)^{\otimes t}) \leq \eps \triangleq \gamma/N$, and so that $$\sum_{i=0}^{t-1} x_j^{(i)} \equiv x_j \pmod{2}.$$ By Lemma \ref{lem:samplefrommodtwo} (deferred but analogous to Lemma~\ref{lem:packedbinaryexpansions}), we may choose $t = O(c^{-1}(1-c)^{-1}\log(N/\gamma))$, and we may sample in time $O(Nt \log(N/\gamma))$. Again, we couple the $x_j^{(i)}$ variables with variables $Z_j^{(i)} \sim \Ber(c)$ such that the event $E$ that $x_j^{(i)} = Z_j^{(i)}$ for all $i,j$ has probability $\PP[E] \geq 1 - \gamma$ and such that $Z_j^{(i)}$ is independent of $Z_l^{(i')}$ whenever $j \neq l$. By a similar, and simpler, calculation to the one for the case $p > 2$, we have that $\tilde{P}_{n,k,s,\FF_2}(Z) = P_{n,k,s,\FF_2}(x)$ conditioned on $E$, where\allowdisplaybreaks
\begin{align*} \tilde{P}_{n,k,s,\FF_{2}}(Z) \triangleq \sum_{ a : {[k] \choose s} \to \{0,\ldots,t-1\}} P_{n,k,s,\FF_2}(Z^{(a \circ L)}).\end{align*}
This can be calculated using the algorithm $A$ similarly to the $p > 2$ case, because each $Z^{(a \circ L)}$ is distributed as $G(nk,c,s,k)$.
\end{proof}
\subsection{Reduction to Counting $k$-Cliques in $G(n,c,s)$}
\label{sec:erdosrenyi}
So far, we have reduced \textsc{Parity-$(k,s)$-clique} and \textsc{\#$(k, s)$-clique} for worst-case input hypergraphs to average-case inputs drawn from the $k$-partite Erd\H{o}s-R\'{e}nyi distribution $G(nk,c,s,k)$. We now carry out the final step of the reduction, showing that \textsc{Parity-$(k,s)$-clique} and \textsc{\#$(k, s)$-clique} on inputs drawn from $G(nk,c,s,k)$ reduce to inputs drawn from the non-$k$-partite Erd\H{o}s-R\'{e}nyi distribution $G(n,c,s)$. Recall that a hypergraph $G$ drawn from $G(nk,c,s,k)$ has vertex set $V(G) = [n] \times [k]$ and vertex partition given by the labels $L : (i,j) \in [n] \times [k] \mapsto j \in [k]$.
\begin{lemma}\label{lem:erkpartitetoergeneralreductioncounting}
Let $\delta = \delta(n) \in (0,1)$ be a non-increasing function of $n$ and let $c = c(n) \in (0,1)$. Suppose that $A$ is a randomized algorithm for \textsc{\#$(k,s)$-clique} such that for any $n$, $A$ has error probability less than $\delta(n)$ on hypergraphs drawn from $G(n,c,s)$ in $T(A, n)$ time. Then there exists an algorithm $B$ solving \textsc{\#$(k,s)$-clique} that has error probability less than $2^k \cdot \delta(n)$ on hypergraphs drawn from $G(nk,c,s,k)$ and that runs in $T(B,n) = O\left(2^k \cdot T(A,nk) + k^s n^s + s^2 k^3 2^k \log^2(nk) \right)$ time.
\end{lemma}
\begin{proof}
It suffices to count the number of $k$-cliques in $G \sim G(nk,c,s,k)$ given blackbox access to $A$. Construct the hypergraph $H$ over the same vertex set $V(H) = [n] \times [k]$ by starting with $G$ and adding every edge $e = \{v_1, v_2, \ldots, v_s\} \in \binom{[n] \times [k]}{s}$ satisfying the condition $|\{L(v_1), \ldots, L(v_s)\}| < s$ independently with probability $c$. In other words, we independently add, with probability $c$ each, the edges containing two vertices from the same part of $G$. It follows that $H$ is distributed according to $G(nk,c,s)$. More generally, for every $S \subset [k]$, $H_S$ is distributed according to $G(|L^{-1}(S)|,c,s)$, where $H_S$ is the restriction of $H$ to the vertices $L^{-1}(S) \subset V(H)$ with labels in $S$. Note that $H$ can be constructed in $O(k^s n^s)$ time.
Now observe that for each $S \neq \emptyset$, it holds that $n \le |L^{-1}(S)| \le nk$ and the algorithm $A$ succeeds on each $H_S$ with probability at least $1 - \delta(n)$. By a union bound, we may compute the number of $k$-cliques $|\cl_k(H_S)|$ in $H_S$ for all $S \subset [k]$ with error probability less than $2^k \cdot \delta(n)$. Note that this can be done in $O\left(2^k \cdot T(A,nk)\right)$ time. From these counts $|\cl_k(H_S)|$, we now inductively compute
$$t_d \triangleq |\{S \in \cl_k(H) : |L(S)| = d\}|$$
for each $d \in [k]$. Note that $t_0 = 0$ in the base case $d = 0$. Given $t_0, t_1, \ldots, t_d$, the next count $t_{d+1}$ can be expressed by inclusion-exclusion as
\begin{align*}
t_{d+1} &= \sum_{T \subset [k], |T| = d+1} |\{S \in \cl_k(H) : L(S) = T\}| \\
&= \sum_{T \subset [k], |T| = d+1} \left(|\cl_k(H_T)| - \sum_{i=0}^{d} \sum_{U \subset T, |U| = i} |\{S \in \cl_k(H) : L(S) = U\}|\right) \\
&= \left(\sum_{T \subset [k], |T| = d+1} |\cl_k(H_T)|\right) - \sum_{i=0}^{d} \binom{k-i}{d+1-i} |\{S \in \cl_k(H) : |L(S)| = i\}| \\
&= \sum_{T \subset [k], |T| = d+1} |\cl_k(H_T)| - \sum_{i=0}^d \binom{k-i}{d+1-i} t_i
\end{align*}
After $O(k2^k)$ operations, this recursion yields the number of $k$-cliques $t_k = |\{S \in \cl_k(H) : |L(S)| = k\}| = |\cl_k(G)|$ in the original $k$-partite hypergraph $G$. The integers manipulated always have magnitude at most $2^k \binom{nk}{k}$, so each arithmetic operation takes $O((ks \log(nk))^2)$ time.
\end{proof}
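The recursion above is mechanical to implement; the following Python sketch checks it against brute force in the graph case $s = 2$ (the representation and names are our own):

```python
from itertools import combinations
from math import comb

def count_cliques(vertices, edges, k):
    """Brute-force number of k-cliques among `vertices` (s = 2 case)."""
    return sum(all(frozenset(e) in edges for e in combinations(C, 2))
               for C in combinations(vertices, k))

def partite_clique_count(n, k, label, edges):
    """Recover t_k, the number of k-cliques using all k labels, from the
    clique counts of the restrictions H_T, via the recursion
    t_d = sum_{|T|=d} |cl_k(H_T)| - sum_{i<d} C(k-i, d-i) t_i."""
    t = [0] * (k + 1)   # t[d] = #k-cliques whose label set has size d
    for d in range(1, k + 1):
        total = 0
        for T in combinations(range(k), d):
            verts = [v for v in range(n * k) if label[v] in T]
            total += count_cliques(verts, edges, k)
        t[d] = total - sum(comb(k - i, d - i) * t[i] for i in range(d))
    return t[k]
```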
Repeating the same proof over $\FF_2$ yields an analogue of Lemma \ref{lem:erkpartitetoergeneralreductioncounting} for \textsc{Parity-$(k,s)$-clique}, as stated below.
\begin{lemma}\label{lem:erkpartitetoergeneralreductionparity}
Lemma \ref{lem:erkpartitetoergeneralreductioncounting} holds when \textsc{\#$(k,s)$-clique} is replaced by \textsc{Parity-$(k,s)$-clique}.
\end{lemma}
\subsection{Proofs of Theorems \ref{thm:averagecasehardnesscounting} and \ref{thm:averagecasehardnessparity}}
\label{sec:proofsofmain}
We now combine Steps 1-5 formally in order to prove Theorems \ref{thm:averagecasehardnesscounting} and \ref{thm:averagecasehardnessparity}.
\begin{proof}[Proof of Theorem \ref{thm:averagecasehardnesscounting}]
Our goal is to construct an algorithm $B$ solving \textsc{\#$(k,s)$-clique} with error probability $< 1/3$ on any $s$-uniform hypergraph $x$. We are given an algorithm $A$ that solves \textsc{\#$(k,s)$-clique} with probability of error $< 1/\Upsilon_{\#}$ on hypergraphs drawn from $G(n,c,s)$. We will construct the following intermediate algorithms in our reduction:
\begin{itemize} \item Algorithm $A_0$ that solves \textsc{\#$(k,s)$-clique} with error probability $< 1/3$ for any worst-case $k$-partite hypergraph.
\item Algorithm $A_1(x,p)$ that computes $P_{n,k,s,\FF_p}(x)$ for any $x \in \FF_p^N$ and for any prime $p$ such that $12 \binom{k}{s} < p < 10 \log n^k$, with worst-case error probability $< 1/3$.
\item Algorithm $A_2(y,p)$ for primes $12 \binom{k}{s} < p < 10 \log n^k$ computing $P_{n,k,s,\FF_p}(y)$ on inputs $y \sim \mathrm{Unif}[\FF_p^N]$ with error probability $< 1/3$.
\item Algorithm $A_3(z)$ that computes $P_{n,k,s}(z)$ on inputs $z \sim G(nk,c,s,k)$ with error probability $< \delta$. (The required value of $\delta$ will be determined later on.)
\end{itemize}
We construct algorithm $B$ from $A_0$, $A_0$ from $A_1$, $A_1$ from $A_2$, $A_2$ from $A_3$, and $A_3$ from $A$.
\\
\textit{1. Reduce to computing \textsc{\#$(k,s)$-clique} for $k$-partite hypergraphs.} We use Lemma \ref{lem:worstcasehardnesskpartite} to construct $B$ from $A_0$, such that $B$ runs in time $$T(B,n) = T(A_0,n) + O((nk)^s).$$
\textit{2. Reduce to computing $P_{n,k,s,\FF_p}$ on worst-case inputs.} We use Proposition \ref{prop:countingchineseremaindertheorem} to construct $A_0$ from $A_1$ such that $A_0$ runs in time $$T(A_0,n) \leq O(T(A_1,n) \cdot \log n^k + (\log n^k)^2).$$ The algorithm $A_0$ starts by using a sieve to find the first $T$ primes $12\binom{k}{s} < p_1 < \dots < p_T$ such that $\prod_{i=1}^T p_i > n^k$. Notice that $p_T \leq 10 \log n^k$, so this step takes time $O((\log n^k)^2)$. Then, given a $k$-partite hypergraph $x \in \{0,1\}^N$, the algorithm $A_0$ computes $P_{n,k,s}(x)$ by computing $P_{n,k,s,\FF_{p_i}}(x)$ for all $p_i$, reducing the error of $A_1$ by repetition and majority vote. Since $T = O((\log n^k) / (\log \log n^k))$, we only need to repeat $O(\log \log n^k)$ times per prime; this yields a total slowdown factor of $O(\log n^k)$. Once we have computed $P_{n,k,s}(x)$, we recall that it equals the number of $k$-cliques in $x$.
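The repetition-and-majority-vote amplification invoked here is standard; for concreteness, a minimal Python sketch (the interface is our own illustration):

```python
import random
from collections import Counter

def majority_boost(run_once, repetitions):
    """Amplify a randomized computation that returns the correct value
    with probability > 1/2 by a majority vote over independent runs;
    the error probability decays exponentially in `repetitions`."""
    votes = Counter(run_once() for _ in range(repetitions))
    return votes.most_common(1)[0][0]
```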
\textit{3. Reduce to computing $P_{n,k,s,\FF_p}$ on random inputs in $\FF_p^N$.} We use Lemma \ref{lem:reedsolomon} to construct $A_1$ from $A_2$ such that $A_1$ runs in time
\begin{align*}
T(A_1,n) &= O((N+D)D^2 \log^2p + D \cdot T(A_2,n)) \\
&= O(n^s \binom{k}{s}^3 \log^2 \log n^k + \binom{k}{s} \cdot T(A_2,n)).
\end{align*}
\textit{4. Reduce to computing $P_{n,k,s}$ on random inputs in $\{0,1\}^N$.} We use Lemma \ref{lem:fptogncsk} to construct $A_2$ from $A_3$ such that $A_2$ runs in time $$T(A_2,n) = O(Npt \log (Np) + t^{\binom{k}{s}} \cdot T(A_3,n)),$$ for some $t = O(c^{-1}(1-c)^{-1} s (\log n) (\log p)).$ For this step, we require the error probability $\delta$ of algorithm $A_3(z)$ on inputs $z \sim G(nk,c,s,k)$ to be at most $1/(4t^D) = 1 /(4 t^{\binom{k}{s}})$.
\textit{5. Reduce to computing \textsc{\#$(k,s)$-clique} for $G(n,c,s)$ hypergraphs.}
We use Lemma \ref{lem:erkpartitetoergeneralreductioncounting} to construct $A_3$ from $A$ such that $A_3$ runs in time $$T(A_3,n) = O((nk)^s + s^2 k^3 2^k\log(nk) + 2^k \cdot T(A,nk)),$$ and such that $A_3$ has error probability at most $\delta < 2^k / \Upsilon_{\#}$.
\\
As in the theorem statement, let $\Upsilon_{\#}(n,c,s,k) \triangleq (C (c^{-1}(1-c)^{-1}) s (\log n)(\log k + \log \log n))^{\binom{k}{s}}$, where $C > 0$ is a large constant to be determined. If we take $C$ large enough, then $4 t^{\binom{k}{s}} \cdot 2^k \leq \Upsilon_{\#}$. In this case, the error $\delta$ of $A_3$ will be at most $1/(4 t^{\binom{k}{s}})$, which is what we needed for the fourth step. Putting the runtime bounds together,
\begin{align*}
T(B,n) &= O\Big((nk)^s + (\log n^k)^2 \\
&\quad \quad + (\log n^k) \cdot \Big(n^s t k \binom{k}{s}^3 (\log n)^2 + \binom{k}{s} \cdot (4t)^{\binom{k}{s}} \cdot (T(A,nk) + (nk)^s)\Big)\Big) \\
&= O\Big(n^s k^3 \binom{k}{s}^3 (c^{-1}(1-c)^{-1})(\log k + \log \log n)\log^4 n \\
&\quad \quad + (\log n) \cdot \Upsilon_{\#} \cdot (T(A,nk) + (nk)^s)\Big),
\end{align*}
if we choose $C > 0$ large enough. Hence, the second term dominates and $$T(B,n)= O((\log n) \cdot \Upsilon_{\#} \cdot (T(A,nk) + (nk)^s)),$$ as $\binom{k}{s} \geq 3$ without loss of generality.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:averagecasehardnessparity}]
The proof of item 1 of Theorem \ref{thm:averagecasehardnessparity} is analogous to the proof of Theorem \ref{thm:averagecasehardnesscounting}, except that it does not use the Chinese remainder theorem. Moreover, special care is needed in order to ensure that the field $\FF$ over which we compute the polynomial $P_{n,k,s,\FF}$ in the intermediate steps is large enough that we may use the random self-reducibility of polynomials.
Our goal is to construct an algorithm $B$ that solves \textsc{Parity-$(k,s)$-clique} with error probability $< 1/3$ on any $s$-uniform hypergraph $x$. We are given an algorithm $A$ that solves \textsc{Parity-$(k,s)$-clique} with probability of error $< 1/\Upsilon_{P,1}$ on hypergraphs drawn from $G(n,c,s)$. We will construct the following intermediate algorithms in our reduction:
\begin{itemize} \item Algorithm $A_0$ that solves \textsc{Parity-$(k,s)$-clique} with error probability $< 1/3$ for any worst-case $k$-partite hypergraph.
\item Algorithm $A_1(w)$ that computes $P_{n,k,s,\FF_{2^{\kappa}}}(w)$ on inputs $w \sim \mathrm{Unif}[\FF_{2^{\kappa}}^N]$ for $\kappa = \ceil{\log_2 (12 \binom{k}{s})}$, with error probability $< 1/3$.
\item Algorithm $A_2(y)$ that computes $P_{n,k,s,\FF_2}(y)$ on inputs $y \sim \mathrm{Unif}[\FF_2^N]$ with error probability $< \delta_2$. (The required value of $\delta_2$ will be determined later on.)
\item Algorithm $A_3(z)$ that computes $P_{n,k,s,\FF_2}(z)$ on inputs $z \sim G(nk,c,s,k)$ with error probability $< \delta_3$. (The required value of $\delta_3$ will be determined later on.)
\end{itemize}
We construct algorithm $B$ from $A_0$, $A_0$ from $A_1$, $A_1$ from $A_2$, $A_2$ from $A_3$, and $A_3$ from $A$.
\\
\textit{1. Reduce to computing \textsc{Parity-$(k,s)$-clique} for $k$-partite hypergraphs.} We use Lemma \ref{lem:worstcasehardnesskpartite} to construct $B$ from $A_0$, such that $B$ runs in time $$T(B,n) = T(A_0,n) + O((nk)^s).$$
\textit{2. Reduce to computing $P_{n,k,s,\FF_{2^{\kappa}}}$ on random inputs in $\FF_{2^{\kappa}}^N$.} Note that by Proposition \ref{prop:parityequivchartwoevaluation}, if we can compute $P_{n,k,s,\FF_{2^{\kappa}}}$ for worst-case inputs, then we can solve \textsc{Parity-$(k,s)$-clique}. We use Lemma \ref{lem:reedsolomon} to construct $A_0$ from $A_1$ such that $A_0$ runs in time $$T(A_0,n) = O(\kappa^2 (N+D)D^2 + D \cdot T(A_1,n)) = O(n^s \binom{k}{s}^3 \kappa^2 + \binom{k}{s} \cdot T(A_1,n)).$$
\textit{3. Reduce to computing $P_{n,k,s,\FF_2}$ on random inputs in $\FF_2^N$.} We use Lemma \ref{lem:fpttofp} to construct $A_1$ from $A_2$ such that $A_1$ runs in time $$T(A_1,n) \leq O(N\kappa^4 + \kappa^{\binom{k}{s}} \cdot T(A_2,n)),$$ and has error probability at most $\delta_2 \cdot \kappa^{\binom{k}{s}}$ on random inputs $w \sim \mathrm{Unif}[\FF_{2^{\kappa}}^N]$. Thus, $A_2$ must have error probability at most $\delta_2 < 1/(3\kappa^{\binom{k}{s}})$ on random inputs in $y \sim \mathrm{Unif}[\FF_2^N]$ for this step of the reduction to work.
\textit{4. Reduce to computing $P_{n,k,s,\FF_2}$ on random inputs in $\{0,1\}^N$.} We use Lemma \ref{lem:fptogncsk} to construct $A_2$ from $A_3$ such that $A_2$ runs in time $$T(A_2,n) = O(N t \log (N/\gamma) + t^{\binom{k}{s}} \cdot T(A_3,n)),$$ for some $t = O(c^{-1}(1-c)^{-1} (s \log(n) + \log(1/\gamma))).$ The error probability of $A_2$ on random inputs $y \sim \mathrm{Unif}[\FF_2^N]$ will be at most $\gamma + \delta_3 \cdot t^{\binom{k}{s}}$. Since we require error probability at most $\delta_2 \leq 1/(3\kappa^{\binom{k}{s}})$ for algorithm $A_2$, we set $\gamma = 1/(10 \kappa^{\binom{k}{s}})$ and require $\delta_3 \leq 1/(10 (t \kappa)^{\binom{k}{s}})$, which is sufficient. For this choice of $\gamma$, we have $t = O(c^{-1}(1-c)^{-1} (s \log(n) + \binom{k}{s} \log \log \binom{k}{s}))$.
\textit{5. Reduce to computing \textsc{Parity-$(k,s)$-clique} for $G(n,c,s)$ hypergraphs.}
We use Lemma \ref{lem:erkpartitetoergeneralreductionparity} to construct $A_3$ from $A$ such that $A_3$ runs in time $$T(A_3,n) = O((nk)^s + s^2 k^3 2^k \log(nk) + 2^k \cdot T(A,nk)),$$ and such that $A_3$ has error probability at most $\delta_3 < 2^k / \Upsilon_{P,1}$.
\\
As in the theorem statement, let
$$\Upsilon_{P,1}(n,c,s,k) \triangleq \left(C (c^{-1}(1-c)^{-1}) s (\log k)\left(s\log n + \binom{k}{s} \left(\log \log \binom{k}{s}\right)\right)\right)^{\binom{k}{s}}$$ for some large enough constant $C$.
If we take $C$ large enough, then $(\kappa t)^{\binom{k}{s}} \leq \frac{1}{10} \cdot 2^{-k} \cdot \Upsilon_{P,1}$, as desired. In this case, the error of $A_1$ on uniformly random inputs will be at most $1/3$, which is what we needed. Putting the runtime bounds together,
\begin{align*}
T(B,n) &= O\Big(n^s \binom{k}{s}^2 \log^2 \kappa + n^s \binom{k}{s}^2 \kappa^{\binom{k}{s}} t \log\left(n^s \kappa^{\binom{k}{s}}\right) \\
&\quad \quad + n^s \binom{k}{s}^2 \kappa^4 + \binom{k}{s} \cdot (4\kappa t)^{\binom{k}{s}} \cdot (T(A,nk) + (nk)^s) \Big) \\
&= O(n^s \binom{k}{s}^2(t k \kappa^{\binom{k}{s}}\log^2 \kappa + \kappa^4) + \Upsilon_{P,1} \cdot (T(A,nk) + (nk)^s)),
\end{align*}
if we choose $C > 0$ large enough. Since $\binom{k}{s} \geq 3$ without loss of generality, the second term dominates and $$T(B,n) = O(\Upsilon_{P,1} \cdot (T(A,nk) + (nk)^s)).$$
For item 2 of the theorem, we restrict the inputs to come from $G(n,1/2,s)$, and we achieve a better error tolerance because algorithm $A_3$ is the same as $A_2$. This means that we may skip step 4 of the proof of item 1. In particular, we only need $\delta_3 = \delta_2 \leq 1 / (3 \kappa^{\binom{k}{s}})$. So algorithm $A$ only needs to have error $< 1 / \Upsilon_{P,2}$, for $\Upsilon_{P,2}(k,s) \triangleq (C s \log k)^{\binom{k}{s}}$. It is not hard to see that, skipping step 4, the algorithm $B$ that we construct takes time $T(B,n) = O(\Upsilon_{P,2} \cdot (T(A,nk) + (nk)^s))$.
\end{proof}
\section{Random Binary Expansions Modulo $p$}
\label{sec:randombinaryexpansions}
In this section, we consider the distributions of random binary expansions of the form
$$Z_t \cdot 2^t + Z_{t - 1} \cdot 2^{t - 1} + \cdots + Z_0 \pmod{p}$$
for some prime $p$ and independent, possibly biased, Bernoulli random variables $Z_i \in \{0, 1\}$. We show that for $t$ polylogarithmic in $p$, these distributions become close to uniformly distributed over $\mathbb{F}_p$, essentially regardless of the biases of the $Z_i$. This is then used to go in the other direction, producing approximately independent Bernoulli variables that are the binary expansion of a number with a given residue. The special case of this argument in which the Bernoulli variables are unbiased has already appeared in an earlier work by Goldreich and Rothblum \cite{goldreich2017worst}. In that case, the proof of correctness is much simpler, because the Fourier-analytic tools used below can be avoided.
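The convergence asserted here is easy to observe numerically: the law of $\sum_i Z_i 2^i \bmod p$ can be computed exactly by dynamic programming over residues. A small Python check (names are our own):

```python
def residue_distribution(p, t, c):
    """Exact distribution of Z_{t-1} 2^{t-1} + ... + Z_0 mod p,
    where the Z_i are independent Ber(c)."""
    dist = [0.0] * p
    dist[0] = 1.0
    for i in range(t):
        pw = pow(2, i, p)
        new = [0.0] * p
        for r in range(p):
            new[r] += (1 - c) * dist[r]          # Z_i = 0
            new[(r + pw) % p] += c * dist[r]     # Z_i = 1
        dist = new
    return dist

def tv_to_uniform(p, t, c):
    """Total variation distance of the residue distribution to uniform."""
    return 0.5 * sum(abs(q - 1.0 / p) for q in residue_distribution(p, t, c))
```

For instance, with $p = 7$ and $c = 0.3$, the distance is at least $3/14$ at $t = 2$ (three residues are unreachable) but negligible by $t = 60$.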
For $p > 2$, the main result of the section is the following slightly more general restatement of Lemma~\ref{lem:packedbinaryexpansions}. It implies that we can efficiently sample biased binary expansions, conditioned on the expansion being equivalent to some $x$ modulo $p$.
\begin{lemma}[Restatement of Lemma~\ref{lem:packedbinaryexpansions}] \label{lem:packedbinaryexpansionsrestated}
There is an absolute constant $K$ such that the following holds. Let $p > 2$ be prime, $t \geq K \cdot c^{-1} (1-c)^{-1} \log (p/\epsilon^2) \log p$, $c \le q_0, q_1, \dots, q_t \le 1 - c$ for some $c \in (0, 1/2]$ and $\eps > 0$. Let $Z_i \sim \Ber(q_i)$ be independent. Then there is an $O(pt \log(p/\eps))$-time algorithm that given $x \in [p]$ samples $X_0,\ldots,X_{t} \in \{0,1\}$ satisfying $\sum_{i=0}^{t} 2^i \cdot X_i \equiv x \pmod{p}$ almost surely. Moreover, if $x$ is chosen uniformly in $[p]$, then $d_{\text{TV}}(\cL(X_0,\ldots,X_t),\cL(Z_0,\ldots,Z_t)) \leq \eps$.
\end{lemma}
Our argument uses finite Fourier analysis on $\mathbb{F}_p$. Given a function $f : \FF_p \to \RR$, define its Fourier transform to be $\hat{f} : \FF_p \to \CC$, where $\hat{f}(t) = \sum_{x=0}^{p-1} f(x) \omega^{tx}$ and $\omega = e^{2\pi i / p}$. In this section, we endow $\mathbb{F}_p$ with the total ordering of $\{0, 1, \dots, p - 1\}$ as elements of $\mathbb{Z}$. Given a set $S$, let $2S = \{ 2s : s \in S\}$. We begin with a simple claim showing that sufficiently long geometric progressions with ratio $2$ in $\mathbb{F}_p$ contain an element in the middle third $[p/3, 2p/3]$ of the residues modulo $p$.
\begin{claim} \label{claim:geoprog}
Suppose that $a_1,\ldots,a_k \in \mathbb{F}_p$ is a sequence with $a_1 \neq 0$ and $a_{i+1} = 2a_i$ for each $1 \leq i \leq k-1$. Then if $k \ge 1 + \log_2(p/3)$, there is some $j$ with $\frac{p}{3} \le a_j \le \frac{2p}{3}$.
\end{claim}
\begin{proof}
Let $S = \{x \in \FF_p : x < p/3\}$ and $T = \{x \in \FF_p : x > 2p/3\}$. Observe that $2S \cap T = \emptyset$ and $S \cap 2T = \emptyset$, which implies that there is no $i$ such that one of $a_i$ and $a_{i+1}$ lies in $S$ while the other lies in $T$. Therefore if $(a_1, a_2, \dots, a_k)$ contains elements of both $S$ and $T$, there must be some $j$ with $a_j \in (S \cup T)^C$ and the claim follows. It thus suffices to show that $(a_1, a_2, \dots, a_k)$ cannot be entirely contained in one of $S$ or $T$. First consider the case that it is contained in $S$. Define the sequence $(a_1', a_2', \dots, a_k')$ of integers by $a'_{i+1} = 2a'_i$ for each $1 \le i \le k - 1$, where $a_1' \in [1, p/3)$ is such that $a_1' \equiv a_1 \pmod{p}$. It follows that $a_i' \equiv a_i \pmod{p}$ for each $i$ and $a_k' \ge 2^{k - 1} \ge p/3$. Now consider the smallest $j$ with $a_j' \ge p/3$, noting that $j \ge 2$ since $a_1' < p/3$. By the minimality of $j$, we have $a'_{j-1} < p/3$ and hence $a'_j = 2a'_{j-1} < 2p/3$. Since $a'_j \in [p/3, 2p/3) \subseteq [0, p)$, it follows that $a_j = a'_j$ and thus $p/3 \le a_j \le 2p/3$, contradicting the assumption that $a_j \in S$. If the sequence is contained in $T$, then $(-a_1,-a_2,\ldots,-a_k)$ is contained in $S$, and applying the same argument to this sequence proves the claim.
\end{proof}
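Claim~\ref{claim:geoprog} is easy to sanity-check by brute force. The following Python sketch, with the illustrative prime $p = 101$, verifies that every nonzero starting value reaches the middle third within $\lceil 1 + \log_2(p/3) \rceil$ doubling steps.

```python
import math

def hits_middle_third(p: int, a1: int, k: int) -> bool:
    """Check whether a1, 2*a1, ..., 2^(k-1)*a1 (mod p) contains an
    element of the middle third [p/3, 2p/3]."""
    a = a1 % p
    for _ in range(k):
        if p / 3 <= a <= 2 * p / 3:
            return True
        a = (2 * a) % p
    return False

p = 101
k = math.ceil(1 + math.log2(p / 3))  # progression length guaranteed by the claim
assert all(hits_middle_third(p, a1, k) for a1 in range(1, p))
```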
We now bound the total variation distance between the distribution of random binary expansions modulo $p$ and the uniform distribution. In Appendix~\ref{app:binarytightness}, we show that Lemma~\ref{lem:tvislowfourier} is tight assuming there are infinitely many Mersenne primes.
\begin{lemma} \label{lem:tvislowfourier}
There is an absolute constant $K > 0$ such that the following holds. Let $p > 2$ be a prime, let $c, \epsilon \in (0, 1/2]$ be arbitrary and suppose that $c \le q_0, q_1, \dots, q_t \le 1 - c$. Then if $t \ge K \cdot c^{-1} (1-c)^{-1} \log (p/\epsilon^2) \log p$ and $Z_i \sim \Ber(q_i)$ are independent, the distribution of $S = \sum_{i=0}^t Z_i \cdot 2^i \pmod{p}$ is within $\epsilon$ total variation distance of the uniform distribution on $\FF_p$.
\end{lemma}
\begin{proof}
Let $f : \FF_p \to \RR$ be the probability mass function of $\sum_{i=0}^t 2^i Z_i \pmod{p}$. By definition, we have that
$$f(x) = \sum_{z \in \{0, 1\}^{t+1}} \left( \prod_{i = 0}^t q_i^{z_i} (1 - q_i)^{1 - z_i} \right) \mathbf{1} \left\{ \sum_{i = 0}^t z_i \cdot 2^i \equiv x \pmod{p} \right\}$$
Now observe that $\hat{f}(s)$ is given by
$$\hat{f}(s) = \sum_{x=0}^{p-1} f(x) \omega^{sx} = \prod_{i = 0}^{t} \left( 1 - q_i + q_i \cdot \omega^{2^i \cdot s} \right)$$
The last equality follows directly from expanding the product and the definition of $f(x)$. Note that the constant function $\mathbf{1}$ has Fourier transform $p \cdot \mathbf{1}_{\{ s = 0\}}$. By Cauchy-Schwarz and Parseval's theorem, we have that
\begin{align*}
4 \cdot d_{\text{TV}}\left( \mathcal{L}(S), \text{Unif}[\mathbb{F}_p] \right)^2 &= \| f - p^{-1} \cdot \mathbf{1} \|_1^2 \le p \cdot \| f - p^{-1} \cdot \mathbf{1} \|_2^2 = \| \hat{f} - \mathbf{1}_{\{ s = 0\}} \|_2^2 \\
&= \sum_{s \neq 0} \prod_{i = 0}^{t} \left| 1 - q_i + q_i \cdot \omega^{2^i \cdot s} \right|^2
\end{align*}
where $\mathcal{L}(S)$ denotes the law of $S$. Note that $|1 - q + q \cdot \omega^a| \le 1$ by the triangle inequality for all $a \in \mathbb{F}_p$ and $q \in (0, 1)$. Furthermore, if $a \in \mathbb{F}_p$ is such that $p/3 \le a \le 2p/3$ and $q \in [c, 1 - c]$, then we have that
\begin{align*}
\left|1 - q + q \cdot \omega^a\right|^2 &= (1 - q)^2 + q^2 + 2q(1 - q) \cos(2\pi a/p) \\
&= 1 - 2q(1 - q) \left( 1 - \cos(2\pi a /p) \right) \\
&\le 1 - 2c(1 - c) \left( 1 - \cos(4\pi/3) \right) \\
&= 1 - 3c(1 - c)
\end{align*}
since $\cos(x)$ is maximized at the endpoints on the interval $x \in [2\pi/3, 4\pi/3]$ and $q(1 - q)$ is minimized at the endpoints on the interval $[c, 1 - c]$. Now suppose that $t$ is such that
$$t \ge \left\lceil \frac{\log(4\epsilon^2/p)}{\log(1 - 3c(1 - c))} \right\rceil \cdot \left\lceil 1 + \log_2(p/3) \right\rceil = \Theta\left( c^{-1} (1-c)^{-1} \log (p/\epsilon^2) \log p \right)$$
Fix some $s \in \mathbb{F}_p$ with $s \neq 0$. By Claim \ref{claim:geoprog}, any $\left\lceil 1 + \log_2(p/3) \right\rceil$ consecutive terms of the sequence $s, 2s, \dots, 2^t s \in \mathbb{F}_p$ contain an element between $p/3$ and $2p/3$. Therefore this sequence contains at least $m = \left\lceil \frac{\log(4\epsilon^2/p)}{\log(1 - 3c(1 - c))} \right\rceil$ such terms, which implies that
$$\prod_{i = 0}^{t} \left| 1 - q_i + q_i \cdot \omega^{2^i \cdot s} \right|^2 \le \left( 1 - 3c(1 - c) \right)^{m} \le \frac{4\epsilon^2}{p}$$
by the inequality above and the fact that each term in this product is at most $1$. Since this holds for each $s \neq 0$, it now follows that
$$4 \cdot d_{\text{TV}}\left( \mathcal{L}(S), \text{Unif}[\mathbb{F}_p] \right)^2 \le \sum_{s \neq 0} \prod_{i = 0}^{t} \left| 1 - q_i + q_i \cdot \omega^{2^i \cdot s} \right|^2 < 4\epsilon^2$$
and thus $d_{\text{TV}}\left( \mathcal{L}(S), \text{Unif}[\mathbb{F}_p] \right) < \epsilon$, proving the lemma.
\end{proof}
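The convergence in Lemma~\ref{lem:tvislowfourier} can be observed numerically: the exact law of $S$ is computable by dynamic programming over residues. A minimal sketch, with an illustrative small prime and biases bounded away from $0$ and $1$:

```python
def tv_from_uniform(p, qs):
    """Total variation distance between the law of sum_i 2^i * Z_i (mod p),
    for independent Z_i ~ Ber(qs[i]), and the uniform distribution on F_p.
    The exact law is computed by dynamic programming over residues."""
    f = [0.0] * p
    f[0] = 1.0  # law of the empty sum
    pow2 = 1    # 2^i mod p
    for q in qs:
        g = [0.0] * p
        for x in range(p):
            g[x] += (1 - q) * f[x]         # Z_i = 0
            g[(x + pow2) % p] += q * f[x]  # Z_i = 1
        f = g
        pow2 = (2 * pow2) % p
    return 0.5 * sum(abs(fx - 1.0 / p) for fx in f)

# Biases alternating between 0.3 and 0.6 (so c = 0.3), p = 31:
qs = [0.3 if i % 2 == 0 else 0.6 for i in range(200)]
assert tv_from_uniform(31, qs) < 1e-6                          # long expansion: nearly uniform
assert tv_from_uniform(31, qs) < tv_from_uniform(31, qs[:20])  # distance decays with t
```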
Using the lemma, we can sample $(Z_0,\ldots,Z_t)$ conditioned on $S \equiv x \pmod{p}$:
\begin{lemma}\label{lem:samplefrommodp}
Let $p > 2$ be prime. Suppose that $c \le q_0, q_1, \dots, q_t \le 1 - c$ for some $c \in (0, 1/2]$ and that $Z_i \sim \Ber(q_i)$ are independent. Let $Y = \sum_{i=0}^t Z_i \cdot 2^i$ and for each $x \in \FF_p$, let $Y_x \sim \cL(Y | Y \equiv x \pmod{p})$. Consider $Y_R$, where $R$ is chosen uniformly at random with $R \sim \textnormal{Unif}[\FF_p]$. If $S = Y \pmod{p}$ and $\Delta = d_{\text{TV}}\left(\mathcal{L}(S), \textnormal{Unif}[\FF_p]\right) < p^{-1}$, then it holds that
\begin{enumerate}
\item Given $x \in \FF_p$, we may sample $\cL(Y_x)$ within $\delta$ total variation distance in $O\left(\frac{t\log(1/\delta)}{p^{-1} - \Delta}\right)$ time.
\item $d_{\text{TV}}(\cL(Y), \cL(Y_R)) \le \Delta$.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that the map $x \mapsto Y_x$ defines a Markov transition sending $S$ to $Y$ and $R$ to $Y_R$. The data-processing inequality yields $d_{\text{TV}}(\cL(Y), \cL(Y_R)) \le d_{\text{TV}}(\mathcal{L}(S), \mathcal{L}(R)) = \Delta$, implying the second item.
The first item can be achieved by rejection sampling from the distribution $\mathcal{L}(Y)$ until receiving an element congruent to $x$ modulo $p$ or reaching the cutoff of
$$m = \left\lceil \frac{\log \delta}{\log(1 - p^{-1} + \Delta)} \right\rceil = O\left(\frac{\log(1/\delta)}{p^{-1} - \Delta} \right)$$
rounds. Each sample from $\mathcal{L}(Y)$ can be obtained in $O(t)$ time by sampling $Z_0, Z_1, \dots, Z_t$ and forming the number $Y$ with binary digits $Z_t, Z_{t-1}, \dots, Z_0$. If we receive a sample by the $m$th round, then it is exactly distributed according to the conditional distribution $\cL(Y_x) = \cL(Y | Y \equiv x \pmod{p})$. Therefore the total variation distance between the output of this algorithm and $\cL(Y_x)$ is upper bounded by the probability that the rejection sampling scheme fails to output a sample. Now note that the probability that a sample is output in a single round is
$$\mathbb{P}[S = x] \ge p^{-1} - d_{\text{TV}}\left(\mathcal{L}(S), \textnormal{Unif}[\FF_p]\right) = p^{-1} - \Delta$$
by the definition of total variation. By the independence of sampling in different rounds, the probability that no sample is output is at most
$$\left(1 - \mathbb{P}[S = x] \right)^m \le \left( 1 - p^{-1} + \Delta \right)^m \le \delta$$
which completes the proof of the first item.
\end{proof}
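The rejection sampler in the proof of the first item can be sketched as follows; the prime, biases, and round cutoff below are illustrative.

```python
import random

def sample_Y_given_residue(p, qs, x, m, rng):
    """Rejection sampler for L(Y | Y = x (mod p)), where Y = sum_i 2^i * Z_i
    with independent Z_i ~ Ber(qs[i]). At most m rounds are attempted; None
    is returned on failure, an event of probability at most (1 - 1/p + Delta)^m."""
    for _ in range(m):
        zs = [1 if rng.random() < q else 0 for q in qs]
        if sum(z << i for i, z in enumerate(zs)) % p == x:
            return zs
    return None

# Illustrative parameters: p = 5, forty Ber(0.4) bits, target residue 3.
zs = sample_Y_given_residue(5, [0.4] * 40, 3, 200, random.Random(0))
assert zs is not None
assert sum(z << i for i, z in enumerate(zs)) % 5 == 3
```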
Lemma~\ref{lem:packedbinaryexpansionsrestated} now follows by combining Lemmas~\ref{lem:tvislowfourier} and \ref{lem:samplefrommodp}:
\begin{proof}[Proof of Lemma~\ref{lem:packedbinaryexpansionsrestated}]
Let $\eps' = \eps/(2p)$. Given $x$, we approximately sample $Y_x$ up to $\eps'$ total variation error as in Lemma~\ref{lem:samplefrommodp}. Then we return the $(X_0,\ldots,X_t)$ which are uniquely determined by $Y_x = \sum_{i=0}^t 2^i X_i$.
We now prove the correctness of the algorithm. By construction, $Y_x \equiv x \pmod{p}$. By Lemma~\ref{lem:tvislowfourier}, when $t \geq K \cdot c^{-1} (1-c)^{-1} \log (p/\epsilon^2) \log p$ for a large enough constant $K$, we have $d_{TV}(\cL(S), \mathrm{Unif}[\FF_p]) \leq \eps'$. So by the first item of Lemma~\ref{lem:samplefrommodp}, we may sample from $\mathcal{L}(Y_x)$ within $\eps'$ total variation distance in $O(pt \log (p/\eps))$ time. Finally, by the second item of Lemma~\ref{lem:samplefrommodp}, we also have $d_{TV}(\mathcal{L}(Y), \mathcal{L}(Y_R)) \leq \eps'$ when $x = R$ is chosen uniformly at random in $[p]$. So overall, we have $$d_{TV}(\mathcal{L}(X_0,\ldots,X_t),\mathcal{L}(Z_0,\ldots,Z_t)) \leq d_{TV}(\mathcal{L}(Y_R),\mathcal{L}(Y)) + \eps' \leq 2\eps' \leq \eps$$
which concludes the proof of the lemma.
\end{proof}
We conclude with a sampling result analogous to Lemma \ref{lem:packedbinaryexpansionsrestated}, but for $p = 2$.
\begin{lemma}\label{lem:samplefrommodtwo}
There is a constant $K > 0$ such that the following holds. Let $\eps > 0$, $t \geq Kc^{-1}(1-c)^{-1} \log (1/\eps)$ and $c \le q_0, q_1, \dots, q_t \le 1 - c$ for some $c \in (0, 1/2]$, and let $Z_i \sim \Ber(q_i)$ be independent. Given a random variable $R \sim \textnormal{Unif}[\FF_2]$, in $O(t \log(1/\eps))$ time one may sample $X_0,\ldots,X_t$ supported on $\{0,1\}$ such that $R \equiv \sum_{i=0}^t X_i \pmod{2}$ and $d_{\text{TV}}(\cL(X_0,\ldots,X_t), \cL(Z_0,\ldots,Z_t)) < \eps$.
\end{lemma}
\begin{proof}
By induction on $t$, one may show that
$$\PP\left[\sum_{i=0}^t Z_i \equiv 0 \pmod{2}\right] = \frac{1}{2} + \frac{\prod_{i=0}^t (1-2q_i)}{2}$$
If $t$ satisfies the bound $t \geq \ceil{\log(\eps/2)/\log(|1-2c|)} + 1 = O(c^{-1}(1-c)^{-1} \log(1/\eps))$, which is guaranteed by the hypothesis of the lemma for a large enough constant $K$, it holds that $d_{\text{TV}}(\cL(\sum_{i=0}^t Z_i \bmod 2), \cL(R)) \leq \eps/2$. Sample the distribution
$$X \sim \cL\left(Z \, \Big| \, \sum_{i=0}^t Z_i \equiv R \pmod{2}\right)$$
within $\eps/2$ total variation distance by rejection sampling. This takes $O(t \log(1/\eps))$ time, because it consists of at most $O(\log(1/\eps))$ rounds of sampling fresh copies of $Z_i \sim \Ber(q_i)$ for all $i \in \{0,\ldots,t\}$ and checking whether $\sum_{i=0}^t Z_i \equiv R \pmod{2}$. By the triangle inequality, it suffices to show that $d_{\text{TV}}(\cL(X),\cL(Z)) \leq \eps/2$. This holds by the data-processing inequality, since $R$ is uniform and hence within $\eps/2$ total variation distance of $\sum_{i=0}^t Z_i \pmod{2}$.
\end{proof}
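As a quick sanity check of the parity identity above, one can enumerate all $2^{t+1}$ outcomes for a small $t$; the biases below are arbitrary illustrative values.

```python
from itertools import product

def parity_zero_prob(qs):
    """P[sum_i Z_i = 0 (mod 2)] for independent Z_i ~ Ber(qs[i]),
    by brute-force enumeration of all outcomes."""
    total = 0.0
    for zs in product([0, 1], repeat=len(qs)):
        if sum(zs) % 2 == 0:
            pr = 1.0
            for z, q in zip(zs, qs):
                pr *= q if z == 1 else 1 - q
            total += pr
    return total

qs = [0.2, 0.5, 0.7, 0.45]
prod_term = 1.0
for q in qs:
    prod_term *= 1 - 2 * q
# Closed form: 1/2 + (1/2) * prod_i (1 - 2*q_i)
assert abs(parity_zero_prob(qs) - (0.5 + 0.5 * prod_term)) < 1e-12
```

Note that any single $q_i = 1/2$ zeroes the product, making the parity exactly uniform.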
\section{Algorithms for Counting $k$-Cliques in $G(n, c, s)$}\label{sec:algs}
In this section, we consider several natural algorithms for counting $k$-cliques in $G(n, c, s)$ with $c = \Theta(n^{-\alpha})$ for some $\alpha \in (0, 1)$. The main objective of this section is to show that, when $k$ and $s$ are constant, these algorithms all run faster than all known algorithms for $\#(k, s)$-\textsc{clique} on worst-case hypergraphs and nearly match the lower bounds from our reduction for certain $k$, $c$ and $s$. This demonstrates that the average-case complexity of $\#(k, s)$-\textsc{clique} on Erd\H{o}s-R\'{e}nyi hypergraphs is intrinsically different from its worst-case complexity. As discussed in Section \ref{subsec:timeslowdownnecessity}, this also shows the necessity of a slowdown term comparable to $\Upsilon_{\#}$ in our worst-case to average-case reduction for $\#(k, s)$-\textsc{clique}. We begin with a randomized sampling-based algorithm for counting $k$-cliques in $G(n, c, s)$, extending well-known greedy heuristics for finding $k$-cliques in random graphs. We then present an improvement to this algorithm in the graph case and a deterministic alternative.
\subsection{Greedy Random Sampling}
\label{subsec:greedysampling}
In this section, we consider a natural greedy algorithm $\textsc{greedy-random-sampling}$ for counting $k$-cliques in an $s$-uniform hypergraph $G \sim G(n, c, s)$ with $c = \Theta(n^{-\alpha})$. Given a subset of vertices $A \subseteq [n]$ of $G$, define $\textsc{cn}_G(A)$ to be
$$\textsc{cn}_G(A) = \left\{ v \in V(G) \backslash A : B \cup \{ v \} \in E(G) \text{ for all } (s - 1)\text{-subsets } B \subseteq A \right\}$$
or, in other words, the set of common neighbors of $A$. The algorithm $\textsc{greedy-random-sampling}$ maintains a set $S$ of $k$-subsets of $[n]$ and for $T$ iterations does the following:
\begin{enumerate}
\item Sample distinct starting vertices $v_1, v_2, \dots, v_{s-1}$ uniformly at random and proceed to sample the remaining vertices $v_s, v_{s+1}, \dots, v_k$ iteratively so that $v_{i+1}$ is chosen uniformly at random from $\textsc{cn}_G(v_1, v_2, \dots, v_i)$ if it is nonempty.
\item If $k$ vertices $\{v_1, v_2, \dots, v_k\}$ are chosen then add $\{v_1, v_2, \dots, v_k\}$ to $S$ if it is not already in $S$.
\end{enumerate}
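The two steps above can be sketched in Python as follows, with hyperedges represented as frozensets and a brute-force count used as ground truth; the parameters are illustrative and far from the asymptotic regime analyzed below.

```python
import random
from itertools import combinations

def common_neighbors(edges, n, A, s):
    """cn_G(A): vertices v with B + {v} in E(G) for every (s-1)-subset B of A."""
    return [v for v in range(n) if v not in A
            and all(frozenset(B) | {v} in edges
                    for B in combinations(A, s - 1))]

def greedy_random_sampling(n, s, edges, k, T, rng):
    """Grow a clique one vertex at a time over T independent trials,
    collecting the distinct k-cliques that are reached."""
    S = set()
    for _ in range(T):
        A = rng.sample(range(n), s - 1)       # distinct starting vertices
        while len(A) < k:
            cn = common_neighbors(edges, n, A, s)
            if not cn:
                break                          # dead end: discard this trial
            A.append(rng.choice(cn))
        else:
            S.add(frozenset(A))                # all k vertices were chosen
    return S

rng = random.Random(1)
n, s, k = 12, 2, 3
edges = {frozenset(e) for e in combinations(range(n), s) if rng.random() < 0.5}
truth = {frozenset(C) for C in combinations(range(n), k)
         if all(frozenset(e) in edges for e in combinations(C, s))}
found = greedy_random_sampling(n, s, edges, k, T=20000, rng=rng)
assert found == truth  # with T this large, every k-clique is found
```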
This algorithm is an extension of the classical greedy algorithm for finding cliques of size $\log_2 n$ in $G(n, 1/2)$ in \cite{karp1976, grimmett1975colouring}, the Metropolis process examined in \cite{jerrum1992large} and the greedy procedure solving $k$-\textsc{clique} on $G(n, c)$ with $c = \Theta\left(n^{-2/(k-1)}\right)$ discussed by Rossman in \cite{rossman2016lower}. These and other natural polynomial-time search algorithms fail to find cliques of size $(1 + \epsilon) \log_2 n$ in $G(n, 1/2)$, even though its clique number is approximately $2 \log_2 n$ with high probability \cite{mcdiarmid1984colouring, pittel1982probable}. Our algorithm $\textsc{greedy-random-sampling}$ extends this greedy algorithm to count $k$-cliques in $G(n, c, s)$. In our analysis, we will see a phase transition in the behavior of this algorithm at $k = \tau$ for some $\tau$ smaller than the clique number of $G(n, c, s)$. This is analogous to the breakdown of the natural greedy algorithm at cliques of size $\log_2 n$ on $G(n, 1/2)$.
Before analyzing $\textsc{greedy-random-sampling}$, we state a simple classical lemma counting the number of $k$-cliques in $G(n, c, s)$. This lemma follows from linearity of expectation and Markov's inequality. Its proof is included in Appendix \ref{sec:cliquecounts} for completeness.
\begin{lemma} \label{lem:cliquenumber}
For fixed $\alpha \in (0, 1)$ and $s$, let $\kappa \ge s$ be the largest positive integer satisfying $\alpha \binom{\kappa}{s - 1} < s$. If $G \sim G(n, c, s)$ where $c = O(n^{-\alpha})$, then $\mathbb{E}[|\cl_k(G)|] = \binom{n}{k} c^{\binom{k}{s}}$ and $\omega(G) \le \kappa + 1 + t$ with probability at least $1 - O\left(n^{-\alpha t(1 - s^{-1}) \binom{\kappa + 2}{s - 1}}\right)$ for any fixed nonnegative integer $t$.
\end{lemma}
In particular, this implies that the clique number of $G(n, c, s)$ is typically at most $(s! \alpha^{-1} )^{\frac{1}{s-1}} + s - 1$. In the graph case of $s = 2$, this simplifies to $1 + 2\alpha^{-1}$. In the next subsection, we give upper bounds on the number of iterations $T$ causing all $k$-cliques in $G$ to end up in $S$ and analyze the runtime of the algorithm. The subsequent subsection improves the runtime of $\textsc{greedy-random-sampling}$ for graphs when $s = 2$ through a matrix multiplication post-processing step. The last subsection gives an alternative deterministic algorithm with a similar performance to $\textsc{greedy-random-sampling}$.
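The threshold $\kappa$ from Lemma~\ref{lem:cliquenumber}, along with the analogous threshold $\tau$ (the largest integer with $\alpha \binom{\tau}{s-1} < 1$) appearing in the analysis below, is straightforward to compute. A small sketch with the illustrative values $s = 2$ and $\alpha = 1/2$:

```python
from math import comb

def largest_satisfying(alpha, bound, r):
    """Largest integer m >= r with alpha * C(m, r) < bound, found by
    scanning upward (C(m, r) is increasing in m)."""
    m = r
    while alpha * comb(m + 1, r) < bound:
        m += 1
    return m

# Illustrative values: graph case s = 2 with alpha = 1/2.
alpha, s = 0.5, 2
kappa = largest_satisfying(alpha, s, s - 1)  # largest kappa with alpha*C(kappa, s-1) < s
tau = largest_satisfying(alpha, 1, s - 1)    # largest tau with alpha*C(tau, s-1) < 1
assert (kappa, tau) == (3, 1)
# Consistency check: the clique-number bound (s!/alpha)^(1/(s-1)) + s - 1
# equals 1 + 2/alpha = 5 here, matching kappa + 2.
```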
\subsection{Sample Complexity and Runtime of Greedy Random Sampling}
In this section, we analyze the runtime of $\textsc{greedy-random-sampling}$ and give upper bounds on the number of iterations $T$ needed for the algorithm to terminate with $S = \cl_k(G)$. The dynamic set $S$ needs to support search and insertion of $k$-cliques. Consider labelling the vertices of $G$ with elements of $[n]$ and storing the elements of $S$ in a balanced binary search tree sorted according to the lexicographic order on $[n]^k$. Search and insertion can each be carried out in $O(\log |\cl_k(G)|) = O(k \log n)$ time. Each iteration of $\textsc{greedy-random-sampling}$ therefore takes $O(n + k \log n) = O(n)$ time as long as $k = O(1)$. Outputting $|S|$ in $\textsc{greedy-random-sampling}$ thus yields an $O(nT)$ time algorithm for $\#(k, s)$-\textsc{clique} on $G(n, c, s)$ that succeeds with high probability.
The following theorem provides upper bounds on the minimum number of iterations $T$ needed for this algorithm to terminate with $S = \cl_k(G)$ and therefore solve $\#(k, s)$-\textsc{clique}. Its proof is deferred to Appendix \ref{sec:grs-analysis}.
\begin{theorem} \label{thm:greedysample}
Let $k$ and $s$ be constants and $c = \Theta(n^{-\alpha})$ for some $\alpha \in (0, 1)$. Let $\tau$ be the largest integer satisfying $\alpha \binom{\tau}{s - 1} < 1$ and suppose that
$$T \ge \left\{ \begin{matrix} 2n^{\tau + 1} c^{\binom{\tau + 1}{s}} (\log n)^{3(k - \tau) (1 + \epsilon)} & \text{if } k \ge \tau + 1 \\ 2n^{k} c^{\binom{k}{s}} (\log n)^{1 + \epsilon} & \text{if } k < \tau + 1 \end{matrix} \right.$$
for some $\epsilon > 0$. Then $\textsc{greedy-random-sampling}$ run with $T$ iterations terminates with $S = \cl_k(G)$ with probability $1 -n^{-\omega(1)}$ over the random bits of the algorithm $\textsc{greedy-random-sampling}$ and with probability $1 - n^{-\omega(1)}$ over the choice of random hypergraph $G \sim G(n, c, s)$.
\end{theorem}
Implementing $S$ as a balanced binary search tree and outputting $|S|$ in \textsc{greedy-random-sampling} yields the following algorithmic upper bounds for $\#(k, s)$-\textsc{clique} with inputs sampled from $G(n, c, s)$.
\begin{corollary}
Suppose that $k$ and $s$ are constants and $c = \Theta(n^{-\alpha})$ for some $\alpha \in (0, 1)$. Let $\tau$ be the largest integer satisfying $\alpha \binom{\tau}{s - 1} < 1$. Then it follows that
\begin{enumerate}
\item If $k \ge \tau + 1$, there is an $\tilde{O}\left(n^{\tau+2 - \alpha \binom{\tau + 1}{s}}\right)$ time randomized algorithm solving $\#(k, s)$-\textsc{clique} on inputs sampled from $G(n, c, s)$ with probability at least $1 - n^{-\omega(1)}$.
\item If $k < \tau + 1$, there is an $\tilde{O}\left(n^{k + 1 - \alpha \binom{k}{s}}\right)$ time randomized algorithm solving $\#(k, s)$-\textsc{clique} on inputs sampled from $G(n, c, s)$ with probability at least $1 - n^{-\omega(1)}$.
\end{enumerate}
\end{corollary}
By Lemma \ref{lem:cliquenumber}, the hypergraph $G \sim G(n, c, s)$ has clique number $\omega(G) \le \kappa + 2$ with probability $1 - 1/\text{poly}(n)$, where $\kappa \ge s$ is the largest positive integer satisfying $\alpha \binom{\kappa}{s - 1} < s$. In particular, when $k > \kappa + 2$ in the theorem above, the algorithm outputting zero succeeds with probability $1 - 1/\text{poly}(n)$ and $\#(k, s)$-\textsc{clique} is trivial. For there to typically be a nonzero number of $k$-cliques in $G(n, c, s)$, it should hold that $0 < \alpha \le s \binom{k - 1}{s - 1}^{-1}$. In the graph case of $s = 2$, this simplifies to the familiar condition that $0 < \alpha \le \frac{2}{k - 1}$. We also remark that when $k < \tau + 1$, the runtime of this algorithm is an $\tilde{O}(n)$ factor off from the expectation of the quantity being counted, the number of $k$-cliques in $G \sim G(n, c, s)$.
\subsection{Post-Processing with Matrix Multiplication}
In this section, we improve the runtime of $\textsc{greedy-random-sampling}$ as an algorithm for $\#(k, s)$-\textsc{clique} in the graph case of $s = 2$. The improvement comes from the matrix multiplication step of Ne\u{s}et\u{r}il and Poljak from their $O\left(n^{\omega \lfloor k/3 \rfloor + (k \bmod 3)}\right)$ time worst-case algorithm for $\#(k, 2)$-\textsc{clique} \cite{nevsetvril1985complexity}. Our improved runtime for the algorithm $\textsc{greedy-random-sampling}$ is stated in the following theorem.
\begin{theorem} \label{thm:matrixaug}
Suppose that $k > 2$ is a fixed positive integer and $c = \Theta(n^{-\alpha})$ where $0 < \alpha \le \frac{2}{k - 1}$ is also fixed. Then there is a randomized algorithm solving $\#(k, 2)$-\textsc{clique} on inputs sampled from $G(n, c)$ with probability $1 - n^{-\omega(1)}$ that runs in $\tilde{O}\left( n^{\omega \lceil k/3 \rceil + \omega - \omega \alpha \binom{\lceil k/3 \rceil}{2}} \right)$ time.
\end{theorem}
\begin{proof}
Label the vertices of an input graph $G \sim G(n, c)$ with the elements of $[n]$. Consider the following application of $\textsc{greedy-random-sampling}$ with post-processing:
\begin{enumerate}
\item Run $\textsc{greedy-random-sampling}$ to compute the two sets of cliques $S_1 = \cl_{\lfloor k/3 \rfloor}(G)$ and $S_2 = \cl_{\lceil k/3 \rceil}(G)$ with the number of iterations $T$ as given in Theorem \ref{thm:greedysample}.
\item Construct the matrix $M_1 \in \{0, 1\}^{|S_1| \times |S_1|}$ with rows and columns indexed by the elements of $S_1$ such that $(M_1)_{A, B} = 1$ for $A, B \in S_1$ if $A \cup B$ forms a clique of $G$ and all labels in $A$ are strictly less than all labels in $B$.
\item Construct the matrix $M_2 \in \{0, 1\}^{|S_1| \times |S_2|}$ with rows indexed by the elements of $S_1$ and columns indexed by the elements of $S_2$ such that $(M_2)_{A, B} = 1$ for $A \in S_1$ and $B \in S_2$ under the same rule that $A \cup B$ forms a clique of $G$ and all labels in $A$ are strictly less than all labels in $B$. Construct the matrix $M_3$ with rows and columns indexed by $S_2$ analogously.
\item Compute the matrix product
$$M_P = \left\{ \begin{matrix} M_1^2 &\text{if } k \equiv 0 \pmod{3} \\ M_1 M_2 &\text{if } k \equiv 1 \pmod{3} \\ M_2 M_3 &\text{if } k \equiv 2 \pmod{3} \end{matrix} \right.$$
\item Output the sum of entries
$$\sum_{(A, B) \in \mathcal{S}} (M_P)_{A, B}$$
where $\mathcal{S}$ is the support of $M_1$ if $k \equiv 0 \pmod{3}$ and $\mathcal{S}$ is the support of $M_2$ if $k \not \equiv 0 \pmod{3}$.
\end{enumerate}
We will show that this algorithm solves $\#(k, 2)$-\textsc{clique} with probability $1 - n^{-\omega(1)}$ when $k \equiv 1 \pmod{3}$. The cases when $k \equiv 0, 2 \pmod{3}$ follow from a nearly identical argument. By Theorem \ref{thm:greedysample}, the first step applying $\textsc{greedy-random-sampling}$ succeeds with probability $1 - n^{-\omega(1)}$. Note that $(M_P)_{A, B}$ counts the number of $\lfloor k/3 \rfloor$-cliques $C$ in $G$ such that the labels of $C$ are strictly greater than those of $A$ and less than those of $B$ and such that $A \cup C$ and $C \cup B$ are both cliques. If it further holds that $(M_2)_{A, B} = 1$, then $A \cup B$ is a clique and $A \cup B \cup C$ is also a clique. Therefore the sum output by the algorithm exactly counts the number of triples $(A, B, C)$ such that $A \cup B \cup C$ is a clique, $|A| = |C| = \lfloor k/3 \rfloor$, $|B| = \lceil k/3 \rceil$ and the labels of $C$ are greater than those of $A$ and less than those of $B$. Observe that any clique $\mathcal{C} \in \cl_k(G)$ is counted in this sum exactly once by the triple $(A, B, C)$ where $A$ consists of the lowest $\lfloor k/3 \rfloor$ labels in $\mathcal{C}$, $B$ consists of the highest $\lceil k/3 \rceil$ labels in $\mathcal{C}$ and $C$ contains the remaining vertices of $\mathcal{C}$. Therefore this algorithm solves $\#(k, 2)$-\textsc{clique} as long as Step 1 succeeds.
It suffices to analyze the additional runtime incurred by this post-processing. Observe that the number of cliques output by a call to $\textsc{greedy-random-sampling}$ with $T$ iterations is at most $T$. Also note that if $\alpha \le \frac{2}{k - 1}$, then $\tau \ge \lfloor \frac{k}{2} \rfloor - 1$. If $k \ge 3$, then it follows that $\tau +1 \ge \lfloor \frac{k}{2} \rfloor \ge \lceil \frac{k}{3} \rceil$. It follows by Theorem \ref{thm:greedysample} that $\max\{ |S_1|, |S_2| \} = \tilde{O}\left( n^{\lceil k/3 \rceil + 1 - \alpha \binom{\lceil k/3 \rceil}{2}} \right)$. Note that computing the matrix $M_P$ takes $\tilde{O}\left( \max\{|S_1|, |S_2|\}^\omega \right) = \tilde{O}\left( n^{\omega \lceil k/3 \rceil + \omega - \omega \alpha \binom{\lceil k/3 \rceil}{2}} \right)$ time. Now observe that all other steps of the algorithm run in $\tilde{O}\left( n^{2\lceil k/3 \rceil + 2 - 2\alpha \binom{\lceil k/3 \rceil}{2}} \right)$ time, which completes the proof of the theorem since the matrix multiplication constant satisfies $\omega \ge 2$.
\end{proof}
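Only the counting logic of the post-processing (Steps 2--5) is illustrated in the sketch below, for the case $k \equiv 1 \pmod{3}$. For brevity, the clique lists $S_1$ and $S_2$ are built by brute force rather than by $\textsc{greedy-random-sampling}$, and the matrix product is computed naively rather than with fast matrix multiplication.

```python
import random
from itertools import combinations

def count_k_cliques_via_mm(n, edges, k):
    """Count k-cliques of a graph for k = 1 (mod 3) by splitting each clique
    into label-ordered blocks A, C, B of sizes k//3, k//3, k//3 + 1 and
    summing (M1 @ M2)_{A,B} over the support of M2."""
    assert k % 3 == 1
    a = k // 3
    def is_clique(C):
        return all(frozenset(e) in edges for e in combinations(C, 2))
    S1 = [C for C in combinations(range(n), a) if is_clique(C)]
    S2 = [C for C in combinations(range(n), a + 1) if is_clique(C)]
    def joined(A, B):
        # 1 iff all labels of A are below those of B and A + B is a clique
        return int(max(A) < min(B) and
                   all(frozenset((u, v)) in edges for u in A for v in B))
    M1 = [[joined(A, C) for C in S1] for A in S1]
    M2 = [[joined(C, B) for B in S2] for C in S1]
    return sum(sum(M1[i][l] * M2[l][j] for l in range(len(S1)))
               for i, A in enumerate(S1)
               for j, B in enumerate(S2)
               if joined(A, B))  # (A, B) in the support of M2

rng = random.Random(2)
n, k = 10, 4
edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < 0.6}
brute = sum(1 for C in combinations(range(n), k)
            if all(frozenset(e) in edges for e in combinations(C, 2)))
assert count_k_cliques_via_mm(n, edges, k) == brute
```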
We remark that for simplicity, we have ignored minor improvements in the runtime that can be achieved by more carefully analyzing Step 4 in terms of rectangular matrix multiplication constants if $k \not\equiv 0 \pmod{3}$. Note that the proof above implicitly used a weak large deviations bound on $|\cl_k(G)|$. More precisely, it used the fact that if $\textsc{greedy-random-sampling}$ with $T$ iterations succeeds, then $|\cl_k(G)| \le T$. Theorem \ref{thm:greedysample} thus implies that $|\cl_k(G)|$ is upper bounded by the minimal settings of $T$ in the theorem statement with probability $1 - n^{-\omega(1)}$ over $G \sim G(n, c, s)$.
When $k \le \tau + 1$, these upper bounds are a $\text{polylog}(n)$ factor from the expectation of $|\cl_k(G)|$. While this was sufficient in the proof of Theorem \ref{thm:matrixaug}, stronger upper bounds will be needed in the next subsection to analyze our deterministic iterative algorithm. The upper tails of $|\cl_k(G)|$ and more generally of the counts of small subhypergraphs in $G(n, c, s)$ have been studied extensively in the literature. We refer to \cite{vu2001large, janson2002infamous, janson2004upper, demarco2012tight} for a survey of the area and recent results. Given a hypergraph $H$, let $N(n, m, H)$ denote the largest number of copies of $H$ that can be constructed in an $s$-uniform hypergraph with at most $n$ vertices and $m$ hyperedges. Define the quantity
$$M_H(n, c) = \max \left\{ m \le \binom{n}{s} : N(n, m, H') \le n^{|V(H')|} c^{|E(H')|} \text{ for all } H' \subseteq H \right\}$$
The following large deviations result from \cite{dudek2010subhypergraph} generalizes a graph large deviations bound of \cite{janson2004upper} to hypergraphs.
\begin{theorem}[Theorem 4.1 from \cite{dudek2010subhypergraph}] \label{thm:largedev}
For every $s$-uniform hypergraph $H$ and every fixed $\epsilon > 0$, there is a constant $C(\epsilon, H)$ such that for all $n \ge |V(H)|$ and $c \in (0, 1)$, it holds that
$$\mathbb{P}\left[ X_H \ge (1 + \epsilon) \mathbb{E}[X_H] \right] \le \exp\left( - C(\epsilon, H) \cdot M_H(n, c) \right)$$
where $X_H$ is the number of copies of $H$ in $G \sim G(n, c, s)$.
\end{theorem}
Proposition 4.3 in \cite{dudek2010subhypergraph} shows that if $H$ is a $d$-regular $s$-uniform hypergraph and $c \ge n^{-s/d}$ then $M_H(n, c) = \Theta(n^s c^d)$. This implies that
$$\mathbb{P}\left[ |\cl_k(G)| \ge (1 + \epsilon) \binom{n}{k} c^{\binom{k}{s}} \right] \le \exp\left(-C'(\epsilon) \cdot n^s c^{\binom{k - 1}{s - 1}} \right)$$
as long as $c \ge n^{-s!(k - s)!/(k - 1)!}$. This provides strong bounds on the upper tails of $|\cl_k(G)|$ that will be useful in the next subsection.
\subsection{Deterministic Iterative Algorithm for Counting in $G(n, c, s)$}
In this section, we present an alternative deterministic algorithm $\textsc{it-gen-cliques}$ achieving a similar runtime to $\textsc{greedy-random-sampling}$. Although they have very different analyses, the algorithm $\textsc{it-gen-cliques}$ can be viewed as a deterministic analogue of $\textsc{greedy-random-sampling}$: both construct cliques one vertex at a time. The algorithm $\textsc{it-gen-cliques}$ takes in cutoffs $C_{s-1}, C_s, \dots, C_k$ and generates sets $S_{s-1}, S_s, \dots, S_k$ as follows:
\begin{enumerate}
\item Initialize $S_{s - 1}$ to be the set of all $(s - 1)$-subsets of $[n]$.
\item Given the set $S_i$, for each vertex $v \in [n]$, iterate through all subsets $A \in S_i$ and add $A \cup \{v \}$ to $S_{i + 1}$ if $A \cup \{v \}$ is a clique and $v$ is larger than the labels of all of the vertices in $A$. Stop if ever $|S_{i+1}| \ge C_{i+1}$.
\item Stop once $S_k$ has been generated and output $S_k$.
\end{enumerate}
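The three steps above can be sketched in Python as follows, checked against a brute-force count on an illustrative small instance; the cutoffs are set large enough that the algorithm never aborts.

```python
import random
from itertools import combinations

def it_gen_cliques(n, s, edges, k, cutoffs):
    """Deterministic clique generation: S_{s-1} is all (s-1)-subsets of [n];
    S_{i+1} extends each A in S_i by a vertex v of larger label whenever
    A + {v} is a clique. Returns None if a level exceeds its cutoff."""
    S = {frozenset(A) for A in combinations(range(n), s - 1)}
    for i in range(s - 1, k):
        nxt = set()
        for A in S:
            for v in range(max(A) + 1, n):
                if all(frozenset(B) | {v} in edges
                       for B in combinations(A, s - 1)):
                    nxt.add(A | {v})
                    if len(nxt) >= cutoffs[i + 1]:
                        return None  # cutoff C_{i+1} exceeded
        S = nxt
    return S

rng = random.Random(3)
n, s, k = 10, 2, 4
edges = {frozenset(e) for e in combinations(range(n), s) if rng.random() < 0.5}
brute = {frozenset(C) for C in combinations(range(n), k)
         if all(frozenset(e) in edges for e in combinations(C, s))}
assert it_gen_cliques(n, s, edges, k, {i: 10**6 for i in range(s, k + 1)}) == brute
```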
Suppose that $C_t$ are chosen to be any high probability upper bounds on the number of $t$-cliques in $G \sim G(n, c, s)$ such as the bounds in Theorem \ref{thm:largedev}. Then we have the following guarantees for the algorithm $\textsc{it-gen-cliques}$.
\begin{theorem}
Suppose that $k$ and $s$ are constants and $c = \Theta(n^{-\alpha})$ for some $\alpha \in (0, 1)$. Let $\tau$ be the largest integer satisfying $\alpha \binom{\tau}{s - 1} < 1$ and $C_t = 2n^t c^{\binom{t}{s}}$ for each $s \le t \le k$. Then $\textsc{it-gen-cliques}$ with the cutoffs $C_t$ outputs $S_k = \cl_k(G)$ with probability $1 - n^{-\omega(1)}$ where
\begin{enumerate}
\item The runtime of $\textsc{it-gen-cliques}$ is $O\left(n^{\tau+2 - \alpha \binom{\tau + 1}{s}}\right)$ if $k \ge \tau + 2$.
\item The runtime of $\textsc{it-gen-cliques}$ is $O\left(n^{k - \alpha \binom{k - 1}{s}}\right)$ if $k < \tau + 2$.
\end{enumerate}
\end{theorem}
\begin{proof}
We first show that $S_k = \cl_k(G)$ with probability $1 - n^{-\omega(1)}$ in the algorithm $\textsc{it-gen-cliques}$. By a union bound and Theorem \ref{thm:largedev}, it follows that $|\cl_t(G)| < C_t$ for each $s \le t \le k$ with probability at least $1 - (k - s + 1) n^{-\omega(1)}$. The following simple induction argument shows that $S_t = \cl_t(G)$ for each $s - 1 \le t \le k$ conditioned on this event. Note that $\cl_{s - 1}(G)$ is by definition the set of all $(s - 1)$-subsets of $[n]$ and thus $S_{s - 1} = \cl_{s - 1}(G)$. If $S_t = \cl_t(G)$, then each $(t + 1)$-clique $\mathcal{C}$ of $G$ is added exactly once to $S_{t + 1}$ as $A \cup \{ v \}$ where $v$ is the vertex of $\mathcal{C}$ with the largest label and $A = \mathcal{C} \backslash \{v \} \in \cl_t(G)$ are the remaining vertices. Now note that the runtime of $\textsc{it-gen-cliques}$ is
$$O\left( \sum_{t = s - 1}^{k - 1} nC_t \right) = O\left( \max_{s - 1 \le t \le k - 1} \left( nC_t \right) \right) = \left\{ \begin{matrix} O\left(n^{\tau+2 - \alpha \binom{\tau + 1}{s}}\right) & \text{if } k \ge \tau + 2 \\ O\left(n^{k - \alpha \binom{k - 1}{s}}\right) & \text{if } k < \tau + 2 \end{matrix} \right.$$
since $k = O(1)$. To see the second inequality, note that $\log_n (C_{t + 1}/C_t) = 1 - \alpha \binom{t}{s - 1}$. This implies that $C_{t + 1} > C_t$ if $t \le \tau$ and $C_t$ is maximized on $s \le t \le k$ when $t = \tau + 1$. This completes the proof of the theorem.
\end{proof}
We remark that in the case of $k < \tau + 1$, $\textsc{it-gen-cliques}$ attains a small runtime improvement over $\textsc{greedy-random-sampling}$. However, the algorithm $\textsc{greedy-random-sampling}$ can be modified to match this runtime up to a $\text{polylog}(n)$ factor by instead generating the $(k - 1)$-cliques of $G$ and applying the last step of $\textsc{it-gen-cliques}$ to generate the $k$-cliques of $G$. We also remark that $\textsc{it-gen-cliques}$ can be used instead of $\textsc{greedy-random-sampling}$ in Step 1 of the algorithm in Theorem \ref{thm:matrixaug}, yielding a nearly identical runtime of $\tilde{O}\left( n^{\omega \lceil k/3 \rceil - \omega \alpha \binom{\lceil k/3 \rceil - 1}{2}} \right)$ for $\#(k, 2)$-\textsc{clique} on inputs sampled from $G(n, c)$.
\section{Extensions and Open Problems}\label{sec:extensions}
\label{sec:openproblems}
In this section, we outline several extensions of our methods and problems left open after our work.
\paragraph{Improved Average-Case Lower Bounds.} A natural question is whether tight average-case lower bounds for $\#(k, s)$-\textsc{clique} can be shown above the $k$-clique percolation threshold when $s \ge 3$, and whether the constant $C$ in the exponent of our lower bounds for the graph case of $s = 2$ can be improved from $1$ to $\omega/9$.
\paragraph{Raising Error Tolerance for Average-Case Hardness.}
A natural question is if the error tolerance of the worst-case to average-case reductions in Theorems \ref{thm:averagecasehardnesscounting} and \ref{thm:averagecasehardnessparity} can be increased. We remarked in the introduction that for certain choices of $k$, the error tolerance cannot be significantly increased -- for example, when $k = 3 \log_2 n$, the trivial algorithm that outputs $0$ on any graph has subpolynomial error on graphs drawn from $G(n,1/2)$, but is useless for reductions from worst-case graphs. Nevertheless, for other regimes of $k$, such as when $k = O(1)$ is constant, counting $k$-cliques with error probability less than $1/4$ on graphs drawn from $G(n,1/2)$ appears to be nontrivial. It is an open problem to prove hardness for such a regime. In general, one could hope to understand the tight tradeoffs between computation time, error tolerance, $k$, $c$, and $s$ for $k$-clique-counting on $G(n,c,s)$.
\paragraph{Hardness of Approximating Clique Counts.}
Another interesting question is if it is hard to approximate the $k$-clique counts, within some additive error $\epsilon$, of hypergraphs drawn from $G(n,c,s)$. Since the number of $k$-cliques in $G(n,c,s)$ concentrates around the mean $\mu \approx c^{\binom{k}{s}} n^k$ with standard deviation $\sigma$, one would have to choose $\epsilon \ll \sigma$ for approximation to be hard.
\paragraph{Inhomogeneous Erd\H{o}s-R\'enyi Hypergraphs.}
Consider an inhomogeneous Erd\H{o}s-R\'enyi hypergraph model, where each hyperedge $e$ is independently chosen to be in the hypergraph with probability $c(e)$.
Also suppose that we may bound $c(e)$ uniformly away from $0$ and $1$ (that is, $c(e) \in [c, 1-c]$ for all possible hyperedges $e$ and for some constant $c$). We would like to prove that \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} are hard on average for inhomogeneous Erd\H{o}s-R\'enyi hypergraphs.
Unfortunately, this does not follow directly from our proof techniques because step 5 in the proof of Theorems \ref{thm:averagecasehardnesscounting} and \ref{thm:averagecasehardnessparity} breaks down due to the inhomogeneity of the model. Nevertheless, steps 1-4 still hold, and therefore we can show that \textsc{\#$(k,s)$-clique} and \textsc{Parity-$(k,s)$-clique} are average-case hard for $k$-partite inhomogeneous Erd\H{o}s-R\'enyi hypergraphs -- when only the edges $e$ that respect the $k$-partition are chosen to be in the hypergraph with inhomogeneous edge-dependent probability $c(e) \in [c, 1-c]$.
\paragraph{General Subgraph Counts.} Given a hypergraph $H$ on $k$ vertices, let \textsc{$H$-counting} be the problem of counting the number of occurrences (as an induced subgraph) of $H$ in a hypergraph $G$. Can one show that \textsc{$H$-counting} in the worst case reduces to \textsc{$H$-counting} in the average case on Erd\H{o}s-R\'enyi hypergraphs?
Our reduction (Theorem \ref{thm:averagecasehardnesscounting}) applies to the special case when $H$ is a clique. Unfortunately, the proof of Theorem \ref{thm:averagecasehardnesscounting} breaks down when counting copies of a general hypergraph $H$. First, the reductions to and from $k$-partite hypergraphs (steps 1 and 5) no longer work, because $H$ contains non-edges, and therefore there may be a copy of $H$ that contains more than one vertex in a given part of the $k$-partition. In order to remedy this, we could consider the modification \textsc{$H$-counting}$'$ of the \textsc{$H$-counting} problem that respects $k$-partite structure, by only counting the copies of $H$ in a $k$-partite hypergraph $G$, such that the $k$ vertices of the copy of $H$ lie in the $k$ different parts of the vertex partition of $G$. For this modified problem, the strategy of our reduction still fails -- this time at Step 4, because the polynomial that counts copies of $H$ in $G$ is not homogeneous. Indeed, for clique-counting, Step 4 of the reduction uses the fact that the variables of the clique-counting polynomial can be split up into $\binom{k}{s}$ groups, such that each monomial contains exactly one variable from each group.
\printbibliography
\begin{appendix}
\section{Reduction from \textsc{Decide-$(k,s)$-clique} to \textsc{Parity-$(k,s)$-clique}}\label{sec:decidetoparityreduction}
The following is a precise statement and proof of the reduction from \textsc{Decide-$(k,s)$-clique} to \textsc{Parity-$(k,s)$-clique} claimed in Section \ref{sec:worstcasehardnessconjectures}.
\begin{lemma}\label{lem:decidetoparityreduction}
Given an algorithm $A$ for \textsc{Parity-$(k,s)$-clique} with error probability $< 1/3$ on any $s$-uniform hypergraph $G$, there is an algorithm $B$ that runs in time $O(k 2^k |A|)$ and solves \textsc{Decide-$(k,s)$-clique} with error $< 1/3$ on any $s$-uniform hypergraph $G$.
\end{lemma}
\begin{proof}
Let $\mathrm{cl}_k(G)$ denote the set of $k$-cliques in hypergraph $G = (V,E)$. Consider the polynomial $$P_G(x_V) = \sum_{S \in \mathrm{cl}_k(G)} \prod_{v \in S} x_v \pmod{2},$$ over the finite field $\FF_2$. If $G$ has a $k$-clique at vertices $S \subset V$, then $P_G$ is nonzero, because $P_G(1_S) = 1$. If $G$ has no $k$-clique, then $P_G$ is zero. Therefore, deciding whether $G$ has a $k$-clique reduces to testing whether or not $P_G$ is identically zero. $P_G$ is of degree at most $k$, so if $P_G$ is nonzero on at least one input, then it is nonzero on at least a $2^{-k}$ fraction of inputs. One way to see this is that if we evaluate $P_G$ at all points $a \in \{0,1\}^m$, the result is a non-zero Reed-Muller codeword in $RM(k,m)$. Since the distance of the $RM(k,m)$ code is $2^{m-k}$, and the block-length is $2^m$, the claim follows \cite{muller1954application}. We therefore evaluate $P_G$ at $c \cdot 2^k$ independent random inputs for some large enough $c > 0$, accept if any of the evaluations returns 1, and reject if all of the evaluations return 0. Each evaluation corresponds to calculating \textsc{Parity-$(k,s)$-clique} on a hypergraph $G'$ formed from $G$ by removing each vertex independently with probability $1/2$. As usual, we reduce the error of $A$ by running the algorithm $O(k)$ times for each evaluation and taking the majority vote.
\end{proof}
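The reduction above can be sketched concretely for graphs ($s = 2$), with a brute-force clique-parity routine standing in for the oracle $A$. This is an illustrative sketch under our own naming; the number of trials is a fixed constant here rather than the $c \cdot 2^k$ of the proof.

```python
import random
from itertools import combinations

def parity_k_cliques(vertices, edges, k):
    """Brute-force stand-in for the Parity-(k,s)-clique oracle A (here s = 2)."""
    edge_set = set(frozenset(e) for e in edges)
    count = sum(
        1 for S in combinations(sorted(vertices), k)
        if all(frozenset(p) in edge_set for p in combinations(S, 2))
    )
    return count % 2

def decide_k_clique(n, edges, k, trials=200, seed=0):
    """Test whether P_G is identically zero over F_2 by evaluating it at
    random 0/1 points, i.e. computing the clique parity of random
    vertex-induced subgraphs."""
    rng = random.Random(seed)
    for _ in range(trials):
        kept = [v for v in range(n) if rng.random() < 0.5]
        sub_edges = [e for e in edges if set(e) <= set(kept)]
        if parity_k_cliques(kept, sub_edges, k) == 1:
            return True    # P_G is nonzero, so G contains a k-clique
    return False           # all evaluations zero: almost surely no k-clique
```

On a graph containing a triangle the sketch reports \texttt{True} for $k = 3$; on a triangle-free graph every evaluation is zero and it reports \texttt{False}.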
\section{Proof of Lemma \ref{lem:reedsolomon}}\label{app:reedsolomonproof}
We restate and prove Lemma \ref{lem:reedsolomon}.
\begin{lemma}[Theorem 4 of \cite{gemmell1992highly}] Let $\FF$ be a finite field with $|\FF| = q$ elements.
Let $N,D > 0$. Suppose $9 < D < q/12$. Let $f : \FF^N \to \FF$ be a polynomial of degree at most $D$. If there is an algorithm $A$ running in time $T(A,N)$ such that $$\PP_{x \sim \mathrm{Unif}[\FF^N]} [A(x) = f(x)] > 2/3,$$ then there is an algorithm $B$ running in time $O((N+D)D^2 \log^2 q + T(A,N) \cdot D)$ such that for {\em any} $x \in \FF^N$, $$\PP[B(x) = f(x)] > 2/3.$$
\end{lemma}
\begin{proof}
Our proof of the lemma is based on the proof that appears in \cite{ball2017average}. The only difference is that in \cite{ball2017average}, the lemma is stated only for finite fields whose size is a prime. Suppose we wish to calculate $f(x)$ for $x \in \FF^N$. In order to do this, choose $y_1,y_2 \stackrel{i.i.d}{\sim} \mathrm{Unif}[\FF^N]$, and define the polynomial curve $g(t) = x + t y_1 + t^2 y_2$ where $t \in \FF$. We evaluate $A(g(t))$ at $m$ different values $t_1,\ldots,t_m \in \FF$. This takes $O(m N D \log^2 q + m \cdot T(A,N))$ time. Suppose that we have the guarantee that at most $(m-2D)/2$ of these evaluations are incorrect. Then, since $f(g(t))$ is a univariate polynomial of degree at most $2D$, we may use Berlekamp-Welch to recover $f(g(0)) = f(x)$ in $O(m^3)$ arithmetic operations over $\FF$, each of which takes $O(\log^2 q)$ time. Since $g(t_i)$ and $g(t_j)$ are pairwise independent and uniform in $\FF^N$ for any distinct $t_i,t_j \neq 0$, by the second-moment method, with probability $> 2/3$, at most $(m-2D)/2$ evaluations of $A(g(t))$ will be incorrect if we take $m = 12D$.
\end{proof}
\section{Tightness of Bounds in Section~\ref{sec:randombinaryexpansions}}\label{app:binarytightness}
In this appendix, we briefly discuss the tightness of the bounds on $t$ in Lemma~\ref{lem:tvislowfourier} and how the case of $c = 1/2$ differs from $c \neq 1/2$. Note that if $q_i = 1/2$ for each $i$, then $\sum_{i = 0}^t Z_i \cdot 2^i$ is uniformly distributed on $\{0, 1, \dots, 2^{t + 1} - 1 \}$. It follows that
$$d_{\text{TV}}\left( \mathcal{L}(S), \text{Unif}[\mathbb{F}_p] \right) = \sum_{x \in \mathbb{F}_p} \left| p^{-1} - \mathbb{P}[S = x] \right|_+ = \frac{a(p - a)}{2^{t+1}p} \le \frac{p}{2^{t+1}}$$
if $0 \le a \le p - 1$ is such that $2^{t+1} \equiv a \pmod{p}$. Here, $| x |_+$ denotes the positive part $\max(x, 0)$. Therefore $S$ is within total variation distance $1/\text{poly}(p)$ of $\text{Unif}[\mathbb{F}_p]$ if $t = \Omega(\log p)$. However, note that for $c$ constant and $\epsilon = 1/\text{poly}(p)$, our lemma requires that $t = \Omega(\log^2 p)$. This raises the question: is the additional factor of $\log p$ necessary or an artifact of our analysis? We answer this question with an example suggesting that the extra $\log p$ factor is in fact necessary and that the case $c = 1/2$ is special.
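The displayed identity for $c = 1/2$ can be checked exactly with rational arithmetic. The sketch below (our own sanity check, not part of the proof) computes the total variation distance directly from the residue counts and compares it to the closed form $a(p-a)/(2^{t+1}p)$.

```python
from fractions import Fraction

def tv_uniform_power_of_two_mod_p(t, p):
    """Exact TV distance between (Unif{0, ..., 2^(t+1)-1} mod p) and
    Unif[F_p], i.e. the q_i = 1/2 case of S reduced modulo p."""
    N = 2 ** (t + 1)
    # residue x occurs ceil(N/p) times if x < N mod p, else floor(N/p) times
    probs = [Fraction(N // p + (1 if x < N % p else 0), N) for x in range(p)]
    return sum(max(Fraction(1, p) - q, Fraction(0)) for q in probs)

def tv_closed_form(t, p):
    """Closed form a(p - a) / (2^(t+1) p) with a = 2^(t+1) mod p."""
    a = 2 ** (t + 1) % p
    return Fraction(a * (p - a), 2 ** (t + 1) * p)
```

For instance, with $p = 7$ and $t = 4$ both expressions give $3/56$.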
Suppose that $p$ is a Mersenne prime with $p = 2^r - 1$ for some prime $r$ and for simplicity, take $q_i = 1/3$ for each $i$. Observe by the triangle inequality that
$$\left|\hat{f}(1)\right| = \left| \sum_{x \in \mathbb{F}_p} \left(f(x) - p^{-1}\right) \cdot \omega^x \right| \le \left\| f - p^{-1} \cdot \mathbf{1} \right\|_1 = 2 \cdot d_{\text{TV}}\left( \mathcal{L}(S), \text{Unif}[\mathbb{F}_p] \right)$$
Now suppose that $t = ar - 1$ for some positive integer $a$. As shown in the lemma, we have
$$\left|\hat{f}(1)\right|^2 = \prod_{i = 0}^t \left| \frac{2}{3} + \frac{1}{3} \cdot \omega^{2^i} \right|^2 = \left[ \prod_{i = 0}^{r - 1}\left( \frac{5}{9} + \frac{4}{9} \cdot \cos\left(\frac{2\pi}{p} \cdot 2^i \right) \right) \right]^a$$
where the second equality is due to the fact that the sequence $2^i$ has period $r$ modulo $p$. Now observe that since $\frac{5}{9} + \frac{4}{9} \cdot \cos(x) \ge e^{-x^2}$, we have that
$$\prod_{i = 0}^{r - 1}\left( \frac{5}{9} + \frac{4}{9} \cdot \cos\left(\frac{2\pi}{p} \cdot 2^i \right) \right) \ge \exp\left( - \frac{4\pi^2}{p^2} \sum_{i = 0}^{r - 1} 2^{2i} \right) = \exp\left( - \frac{4\pi^2}{p^2} \cdot \frac{2^{2r} - 1}{3} \right) = \Omega(1)$$
which implies that $a$ should be $\Omega(r)$ for $\hat{f}(1)$ to be polynomially small in $p$. Thus the extra $\log p$ factor is necessary in this case and our analysis is tight. Note that in the special case of $c = 1/2$, the factors in the expressions for $\hat{f}(s)$ are of the form $\frac{1}{2} + \frac{1}{2} \cdot \omega^{2^i \cdot s}$, which can be arbitrarily close to zero. We remark that the construction, as stated, relies on there being infinitely many Mersenne primes. Nevertheless, it suggests that the extra $\log p$ factor is necessary in general. Furthermore, similar examples can be produced for $p$ that are not Mersenne, as long as the order of $2$ modulo $p$ is relatively small.
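Both the periodicity step and the lower bound on the single-period product can be verified numerically, e.g. for the Mersenne prime $p = 127 = 2^7 - 1$. This is a sanity-check sketch with our own function names.

```python
import cmath
import math

def fourier_coeff_sq(p, t):
    """|f_hat(1)|^2 = prod_{i=0}^{t} |2/3 + (1/3) omega^{2^i}|^2 with
    q_i = 1/3, where omega = exp(2*pi*1j/p)."""
    prod = 1.0 + 0j
    for i in range(t + 1):
        prod *= 2 / 3 + (1 / 3) * cmath.exp(2j * math.pi * pow(2, i, p) / p)
    return abs(prod) ** 2

def period_product(p, r):
    """One period of the real product:
    prod_{i=0}^{r-1} (5/9 + (4/9) cos(2 pi 2^i / p))."""
    out = 1.0
    for i in range(r):
        out *= 5 / 9 + (4 / 9) * math.cos(2 * math.pi * pow(2, i, p) / p)
    return out
```

Since $2^i$ has period $r$ modulo $p$, taking $t = 2r - 1$ (i.e. $a = 2$) makes $|\hat{f}(1)|^2$ equal to the square of one period's product, and that product stays above the $\exp\left(-\frac{4\pi^2}{p^2}\cdot\frac{2^{2r}-1}{3}\right)$ bound from the display.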
\section{Clique Counts in Sparse Erd\H{o}s-R\'{e}nyi Hypergraphs}
\label{sec:cliquecounts}
We prove the following classical lemma from Section \ref{subsec:greedysampling}.
\begin{lemma}
For fixed $\alpha \in (0, 1)$ and $s$, let $\kappa \ge s$ be the largest positive integer satisfying $\alpha \binom{\kappa}{s - 1} < s$. If $G \sim G(n, c, s)$ where $c = O(n^{-\alpha})$, then $\mathbb{E}[|\cl_k(G)|] = \binom{n}{k} c^{\binom{k}{s}}$ and $\omega(G) \le \kappa + 1 + t$ with probability at least $1 - O\left(n^{-\alpha t(1 - s^{-1}) \binom{\kappa + 2}{s - 1}}\right)$ for any fixed nonnegative integer $t$.
\end{lemma}
\begin{proof}
Let $C > 0$ be such that $c \le Cn^{-\alpha}$ for sufficiently large $n$. For any given set $\{ v_1, v_2, \dots, v_k\}$ of $k$ vertices in $[n]$, the probability that all hyperedges are present among $\{ v_1, v_2, \dots, v_k\}$ and thus these vertices form a $k$-clique in $G$ is $c^{\binom{k}{s}}$. Linearity of expectation implies that the expected number of $k$-cliques is $\mathbb{E}[|\cl_k(G)|] = \binom{n}{k} c^{\binom{k}{s}}$. Now consider taking $k = \kappa + 2 + t$ and note that
\begin{align*}
\mathbb{E}[|\cl_k(G)|] &= \binom{n}{k} c^{\binom{k}{s}} \\
&\le n^k c^{\binom{k}{s}} \le C^{\binom{k}{s}} \cdot \exp\left( \left( 1 - \frac{\alpha}{s} \binom{k - 1}{s - 1} \right) k \log n \right) \\
&\le C^{\binom{k}{s}} \cdot \exp\left( \left( 1 - \frac{\alpha}{s} \binom{\kappa + 1}{s - 1} \right) k \log n - \frac{\alpha}{s} \cdot t \binom{\kappa + 1}{s - 2} k \log n \right) \\
&\le C^{\binom{k}{s}} n^{-\alpha t(1 - s^{-1}) \binom{\kappa + 2}{s - 1}}
\end{align*}
since $k \ge \kappa + 2$ and $\binom{\kappa + 1 + t}{s - 1} \ge \binom{\kappa + 1}{s - 1} + t \binom{\kappa + 1}{s - 2}$ by iteratively applying Pascal's identity. Observe that $\kappa = O(1)$ and thus $C^{\binom{k}{s}} = O(1)$. Now by Markov's inequality, it follows that $\mathbb{P}[\omega(G) \ge k] = \mathbb{P}[|\cl_k(G)| \ge 1] \le \mathbb{E}[|\cl_k(G)|]$, completing the proof of the lemma.
\end{proof}
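The expectation identity $\mathbb{E}[|\cl_k(G)|] = \binom{n}{k} c^{\binom{k}{s}}$ can be sanity-checked by brute-force Monte Carlo for small graphs ($s = 2$). The sketch below is illustrative only; it samples $G(n, c)$ directly and compares the empirical mean clique count to the formula.

```python
import random
from itertools import combinations
from math import comb

def count_k_cliques(n, edges, k):
    es = set(edges)
    return sum(
        1 for S in combinations(range(n), k)
        if all((a, b) in es for a, b in combinations(S, 2))
    )

def mean_clique_count(n, c, k, trials, seed=0):
    """Monte Carlo estimate of E|cl_k(G)| over G ~ G(n, c) (s = 2)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        edges = [(a, b) for a, b in combinations(range(n), 2)
                 if rng.random() < c]
        total += count_k_cliques(n, edges, k)
    return total / trials

# linearity of expectation: E|cl_k(G)| = C(n, k) * c^C(k, 2)
expected = comb(8, 3) * 0.5 ** comb(3, 2)   # = 7.0
```

With $n = 8$, $k = 3$, $c = 1/2$, the formula gives exactly $7$, and a few thousand trials land close to it.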
\section{Analysis of Greedy Random Sampling}
\label{sec:grs-analysis}
This section is devoted to proving Theorem \ref{thm:greedysample}, which is restated below for convenience.
\begin{theorem}
Let $k$ and $s$ be constants and $c = \Theta(n^{-\alpha})$ for some $\alpha \in (0, 1)$. Let $\tau$ be the largest integer satisfying $\alpha \binom{\tau}{s - 1} < 1$ and suppose that
$$T \ge \left\{ \begin{matrix} 2n^{\tau + 1} c^{\binom{\tau + 1}{s}} (\log n)^{3(k - \tau) (1 + \epsilon)} & \text{if } k \ge \tau + 1 \\ 2n^{k} c^{\binom{k}{s}} (\log n)^{1 + \epsilon} & \text{if } k < \tau + 1 \end{matrix} \right.$$
for some $\epsilon > 0$. Then $\textsc{greedy-random-sampling}$ run with $T$ iterations terminates with $S = \cl_k(G)$ with probability $1 -n^{-\omega(1)}$ over the random bits of the algorithm $\textsc{greedy-random-sampling}$ and with probability $1 - n^{-\omega(1)}$ over the choice of random hypergraph $G \sim G(n, c, s)$.
\end{theorem}
\begin{proof}
We first consider the case where $k \ge \tau + 1$. Fix some $\epsilon > 0$ and let $v = (v_1, v_2, \dots, v_k)$ be an ordered tuple of distinct vertices in $[n]$. Define the random variable
$$Z_v = n(n-1)\cdots (n - s + 2) \prod_{i = s - 1}^{k-1} \left| \textsc{cn}_G(v_1, v_2, \dots, v_i) \right|$$
The key property of $Z_v$ is that, in each iteration of $\textsc{greedy-random-sampling}$, the probability that the $k$ vertices $v_1, v_2, \dots, v_k$ are chosen in that order is exactly $1/Z_v$. The proof of this theorem will proceed by establishing upper bounds on $Z_v$ that hold for all $k$-cliques $v$ with high probability over the randomness of $G$, which will yield a bound on the number of iterations $T$ needed to exhaust all such $k$-cliques in $G$.
Consider the following event over the sampling $G \sim G(n, c, s)$
$$A_v = \left\{ Z_v \ge 2n^{\tau + 1} c^{\binom{\tau + 1}{s}} (\log n)^{3(k - 1 - \tau) (1 + \epsilon)} \quad \text{and} \quad \{ v_1, v_2, \dots, v_k \} \in \cl_k(G) \right\}$$
We now proceed to bound the probability of $A_v$ through simple Chernoff and union bounds over $G$. In the next part of the argument, we condition on the event that $\{ v_1, v_2, \dots, v_k \}$ forms a clique in $G$. For each $i \in \{s - 1, s, \dots, k - 1\}$, let $Y_{v, i}$ be the number of common neighbors of $v_1, v_2, \dots, v_i$ in $V(G) \backslash \{v_1, v_2, \dots, v_k\}$. Note that $Y_{v, i} \sim \text{Bin}\left(n - k, c^{\binom{i}{s - 1}}\right)$ and that $\left| \textsc{cn}_G(v_1, v_2, \dots, v_i) \right| = k - i + Y_{v, i}$. The standard Chernoff bound for the binomial distribution implies that for all $\delta_i > 0$,
\begin{align*}
&\mathbb{P}\left[ \left| \textsc{cn}_G(v_1, v_2, \dots, v_i) \right| \ge k - i + (1 + \delta_i) (n - k) c^{\binom{i}{s - 1}} \right] \\
&\quad \quad \le \exp \left( - \frac{\delta_i^2}{2 + \delta_i} \cdot (n - k) c^{\binom{i}{s - 1}} \right)
\end{align*}
Now define $\kappa_i$ to be
$$\kappa_i = (n - k)^{-1} c^{- \binom{i}{s - 1}} \cdot (\log n)^{1 + \epsilon}$$
for each $i \in \{s-1, s, \dots, k - 1\}$. Let $\delta_i = \sqrt{\kappa_i}$ if $i \le \tau$ and $\delta_i = \kappa_i$ if $i > \tau$. Note that for sufficiently large $n$, $\delta_i < 1$ if $i \le \tau$ and $\delta_i \ge 1$ if $i > \tau$. These choices of $\delta_i$ ensure that the Chernoff upper bounds above are each at most $\exp\left( - \frac{1}{3} (\log n)^{1 + \epsilon} \right)$ for each $i$. A union bound implies that with probability at least $1 - k\exp\left( - \frac{1}{3} (\log n)^{1 + \epsilon} \right)$, it holds that
$$\left| \textsc{cn}_G(v_1, v_2, \dots, v_i) \right| < k - i + (1 + \delta_i) (n - k) c^{\binom{i}{s - 1}} < (1 + 2\delta_i) (n-k) c^{\binom{i}{s - 1}}$$
for all $i$ and sufficiently large $n$. Here, we used the fact that $\delta_i (n - k) c^{\binom{i}{s - 1}} = \omega(1)$ for all $i$ by construction and $k = O(1)$. Observe that $(1 + 2\delta_i) (n-k) c^{\binom{i}{s - 1}} \le 3(\log n)^{1 + \epsilon}$ for all $i \ge \tau + 1$. These inequalities imply that
\begin{align*}
\log Z_v &< \log n^{s - 1} + \sum_{i = s - 1}^\tau \log \left( (1 + 2\delta_i) (n-k) c^{\binom{i}{s - 1}} \right) + 3(k - 1 - \tau) (1 + \epsilon) \log \log n\\
&< \log n^{\tau + 1} + (\log c) \sum_{i = s - 1}^\tau \binom{i}{s - 1} \\
&\quad \quad + \sum_{i = s - 1}^\tau \log (1 + 2\delta_i) + 3(k - 1 - \tau) (1 + \epsilon) \log \log n \\
&\le \log \left( n^{\tau + 1} c^{\binom{\tau + 1}{s}} \right) + 3(k - 1 - \tau) (1 + \epsilon) \log \log n + 2 \sum_{i = s - 1}^\tau \delta_i \\
&\le \log \left( n^{\tau + 1} c^{\binom{\tau + 1}{s}} \right) + 3(k - 1 - \tau) (1 + \epsilon) \log \log n + o(1)
\end{align*}
The last inequality holds since $\tau = O(1)$ and since $\delta_i \lesssim (\log n)^{\frac{1}{2} + \frac{\epsilon}{2}} n^{-\frac{1}{2} + \frac{1}{2}\alpha \binom{\tau}{s - 1}} = o(1)$ for all $i \le \tau$ because of the definition that $\alpha \binom{\tau}{s - 1} < 1$. In summary, we have shown that for sufficiently large $n$
\begin{align*}
&\mathbb{P}\left[ Z_v \ge 2n^{\tau + 1} c^{\binom{\tau + 1}{s}} (\log n)^{3(k - 1 - \tau) (1 + \epsilon)} \, \Big| \, \{ v_1, v_2, \dots, v_k \} \in \cl_k(G) \right] \\
&\quad \quad \le k\exp\left( - \frac{1}{3} (\log n)^{1 + \epsilon} \right) = n^{-\omega(1)}
\end{align*}
for any $k$-tuple of vertices $v = (v_1, v_2, \dots, v_k)$. Since $\mathbb{P}\left[ \{ v_1, v_2, \dots, v_k \} \in \cl_k(G) \right] = c^{\binom{k}{s}}$, we have that $\mathbb{P}[A_v] \le c^{\binom{k}{s}} n^{-\omega(1)} = n^{-\omega(1)}$ for each $k$-tuple $v$. Now consider the event
\begin{align*}
B &= \Big\{ Z_v < 2n^{\tau + 1} c^{\binom{\tau + 1}{s}} (\log n)^{3(k - 1 - \tau) (1 + \epsilon)} \text{ for all } v \\
&\quad \quad \quad \quad \quad \quad \text{ such that } \{ v_1, v_2, \dots, v_k \} \in \cl_k(G) \Big\}
\end{align*}
Note that $\overline{B} = \bigcup_{k\text{-tuples } v} A_v$ and a union bound implies that $\mathbb{P}[B] \ge 1 - \sum_{v} \mathbb{P}[A_v] \ge 1 - n^k \cdot n^{-\omega(1)} = 1 - n^{-\omega(1)}$ since there are fewer than $n^k$ $k$-tuples $v$.
We now show that as long as $B$ holds over the random choice of $G$, the algorithm $\textsc{greedy-random-sampling}$ terminates with $S = \cl_k(G)$ with probability $1 - n^{-\omega(1)}$ over the random bits of $\textsc{greedy-random-sampling}$, which completes the proof of the theorem in the case $k \ge \tau + 1$. In the next part of the argument, we consider $G$ conditioned on the event $B$. Fix some ordering $v = (v_1, v_2, \dots, v_k)$ of some $k$-clique $\mathcal{C} = \{ v_1, v_2, \dots, v_k\}$ in $G$. Recall that in any one of the $T$ iterations of $\textsc{greedy-random-sampling}$, the probability that the $k$ vertices $v_1, v_2, \dots, v_k$ are chosen in that order is exactly $1/Z_v$. Since the $T$ iterations of $\textsc{greedy-random-sampling}$ are independent, we have that
$$\mathbb{P}\left[v \text{ is never chosen in a round} \right] = \left( 1 - \frac{1}{Z_v} \right)^T \le \exp\left( - \frac{T}{Z_v} \right) = n^{-\omega(1)}$$
since $T$ is chosen so that $T \ge Z_v (\log n)^{3(1+\epsilon)}$ for all $k$-tuples $v$, given the event $B$. Since there are at most $n^k$ possible $v$, a union bound implies that every such $v$ is chosen in a round of $\textsc{greedy-random-sampling}$ with probability at least $1 - n^{k} \cdot n^{- \omega(1)} = 1 - n^{-\omega(1)}$ over the random bits of the algorithm. In this case, $S = \cl_k(G)$ after the $T$ rounds of $\textsc{greedy-random-sampling}$. This completes the proof of the theorem in the case $k \ge \tau + 1$.
We now handle the case $k < \tau + 1$ through a nearly identical argument. Define $\kappa_i$ as in the previous case and set $\delta_i = \sqrt{\kappa_i}$ for all $i \in \{s - 1, s, \dots, k - 1\}$. By the same argument, for each $k$-tuple $v$ we have with probability $1 - n^{-\omega(1)}$ over the choice of $G$ that
\begin{align*}
\log Z_v &< \log n^{s - 1} + \sum_{i = s - 1}^{k - 1} \log \left( (1 + 2\delta_i) (n-k) c^{\binom{i}{s - 1}} \right) \\
&< \log n^k + (\log c) \sum_{i = s - 1}^{k - 1} \binom{i}{s - 1} + 2\sum_{i = s - 1}^{k - 1} \delta_i \\
&= \log \left( n^k c^{\binom{k}{s}} \right) + o(1)
\end{align*}
where again $\delta_i \lesssim (\log n)^{\frac{1}{2} + \frac{\epsilon}{2}} n^{-\frac{1}{2} + \frac{1}{2}\alpha \binom{\tau}{s - 1}} = o(1)$ for all $i \le k - 1 < \tau$. Define the event
$$B' = \left\{ Z_v < 2n^k c^{\binom{k}{s}} (\log n)^{1 + \epsilon} \text{ for all } v \text{ such that } \{ v_1, v_2, \dots, v_k \} \in \cl_k(G) \right\}$$
Note that $T$ is such that $T \ge Z_v (\log n)^{1 + \epsilon}$ for all $v$ if $B'$ holds. Now repeating the rest of the argument from the $k \ge \tau + 1$ case shows that $\mathbb{P}[B'] \ge 1 - n^{-\omega(1)}$ and that $\textsc{greedy-random-sampling}$ terminates with $S = \cl_k(G)$ with probability $1 - n^{-\omega(1)}$ over its random bits if $G$ is such that $B'$ holds. This completes the proof of the theorem.
\end{proof}
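For concreteness, one possible reading of $\textsc{greedy-random-sampling}$ for $s = 2$ is sketched below: each round grows a tuple by repeatedly picking a uniform common neighbor of the vertices chosen so far, and every completed round yields a $k$-clique. This is our illustrative reconstruction, not the paper's exact pseudocode (in particular, the paper's common-neighbor sets also count the clique vertices themselves).

```python
import random

def greedy_random_sampling(n, edges, k, T, seed=0):
    """Illustrative reconstruction for s = 2: each of T rounds starts at a
    uniform vertex and repeatedly picks a uniform common neighbor; rounds
    that reach k vertices contribute a k-clique to S."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    S = set()
    for _ in range(T):
        v = rng.randrange(n)
        chosen = [v]
        cn = set(adj[v])                 # common neighbors of chosen vertices
        while len(chosen) < k and cn:
            u = rng.choice(sorted(cn))   # uniform over current common neighbors
            chosen.append(u)
            cn &= adj[u]
        if len(chosen) == k:             # completed tuples are k-cliques
            S.add(tuple(sorted(chosen)))
    return S
```

On a graph whose only triangle is $\{0, 1, 2\}$, enough rounds recover $S = \cl_3(G)$ with overwhelming probability, mirroring the role of the per-round probability $1/Z_v$ in the analysis above.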
\end{appendix}
\end{document}
\section{Introduction}
This short document gives a contrived example of how to format the authors' information for {\it IJCAI--19 Proceedings}.
\section{Author names}
Each author name must be followed by:
\begin{itemize}
\item A newline {\tt \textbackslash{}\textbackslash{}} command for the last author.
\item An {\tt \textbackslash{}And} command for the second to last author.
\item An {\tt \textbackslash{}and} command for the other authors.
\end{itemize}
\section{Affiliations}
After all authors, start the affiliations section by using the {\tt \textbackslash{}affiliations} command.
Each affiliation must be terminated by a newline {\tt \textbackslash{}\textbackslash{}} command. Make sure that you include the newline on the last affiliation too.
\section{Mapping authors to affiliations}
In some scenarios, the affiliation of each author is clear without any further indication (\emph{e.g.}, all authors share the same affiliation, or each author has a single, distinct affiliation). In these situations you don't need to do anything special.
In more complex scenarios you will have to clearly indicate the affiliation(s) for each author. This is done by using numeric math superscripts {\tt \$\{\^{}$i,j, \ldots$\}\$}. You must use numbers, not symbols, because those are reserved for footnotes in this section (should you need them). Check the authors definition in this example for reference.
\section{Emails}
This section is optional, and can be omitted entirely if you prefer. If you want to include e-mails, you should either include all authors' e-mails or just the contact author(s)' ones.
Start the e-mails section with the {\tt \textbackslash{}emails} command. After that, write all emails you want to include separated by a comma and a space, following the same order used for the authors (\emph{i.e.}, the first e-mail should correspond to the first author, the second e-mail to the second author and so on).
You may ``contract'' consecutive e-mails on the same domain as shown in this example (write the users' part within curly brackets, followed by the domain name). Only e-mails of the exact same domain may be contracted. For instance, contracting ``person@example.com'' and ``other@test.example.com'' is not allowed because the domains are different.
\end{document}
\section{Introduction}
Question Generation (QG) aims to generate natural and human-like questions from a range of data sources, such as image \cite{Mostafazadeh2016GeneratingNQ}, knowledge base \cite{Serban2016GeneratingFQ,Su2016OnGC}, and free text \cite{Du2017LearningTA}. Besides for constructing SQuAD-like dataset \cite{Rajpurkar2016SQuAD10}, QG is also helpful for the intelligent tutor system: The instructor can actively ask the learner questions according to reading comprehension materials \cite{Heilman2010GoodQS} or particular knowledge \cite{Danon2017ASA}.
In this paper, we focus on QG for reading comprehension text. For example, Figure \ref{figure:SQuAD_demo} gives three questions from SQuAD; our goal is to generate such questions.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{figures/example.pdf}
\caption{\label{figure:SQuAD_demo}Example questions from SQuAD. The answers of Q1 and Q2 are facts described in the sentences, so they are easy to answer, but it is not straightforward to answer Q3.}
\end{figure}
QG for reading comprehension is a challenging task because the generation should not only follow the syntactic structure of questions, but also ask questions to the point, i.e., having a specified aspect as its answer. Some template-based approaches \cite{Vanderwende2007AnsweringAQ,Heilman2010GoodQS} were proposed initially, where well-designed rules and heavy human labor are required for declarative-to-interrogative sentence transformation.
With the rise of data-driven learning approach and sequence to sequence (seq2seq) framework, some researchers formulated QG as a seq2seq problem \cite{Du2017LearningTA}: The question is regarded as the decoding target from the encoded information of its corresponding input sentence.
However, different from existing seq2seq learning tasks such as machine translation and summarization, which can be loosely regarded as learning a one-to-one mapping, in question generation different aspects of the given descriptive sentence can be asked about, and hence the generated questions could be significantly different.
Several recent works tried to tackle this problem by incorporating the answer information to indicate what to ask about, which helps the models generate more accurate questions \cite{Song2018LeveragingCI,Zhou2017NeuralQG}.
In our work, we also focus on the answer-aware QG problem, which assumes the answer is given. Similar problems have been addressed in, e.g., \cite{Zhao2018ParagraphlevelNQ,Sun2018AnswerfocusedAP}.
In this paper, we investigate a new setting of QG, namely \textbf{D}ifficulty controllable \textbf{Q}uestion \textbf{G}eneration (\textbf{DQG}). In this setting, given a sentence in the reading comprehension paragraph, the text fragments (i.e., answers) that we want to ask questions about, and the specified difficulty levels, a framework needs to generate questions that are asked about the specified answers and satisfy the difficulty levels as much as possible. For example, given the sentence S3 and the answer ``the electric guitar'' in Figure \ref{figure:SQuAD_demo}, the system should be capable of asking both a hard question like Q3 and an easy one such as ``What is often emphasised as a rhythm instrument?''.
DQG has rich application scenarios. For instance, when instructors prepare learning materials for students, they may want to balance the numbers of hard questions and easy questions.
Besides, the generated questions can be used to test how a QA system works for questions with diverse difficulty levels.
Generating questions with designated difficulty levels is a more challenging task. First, no existing large-scale QA dataset has difficulty labels for questions to train a reliable neural network model. Second, for a single sentence and answer pair, we want to generate questions with diverse difficulty levels. However, the current datasets like SQuAD only have one given question for each sentence and answer pair. Finally, there is no metric to evaluate the difficulty of questions.
To overcome the first issue, we prepare a dataset of reading comprehension questions with difficulty labels. Specifically, we design a method to automatically label SQuAD questions with multiple difficulty levels, and obtain 76K questions with difficulty labels.
To overcome the second issue, we propose a framework that can learn to generate questions complying with the specified difficulty levels by exploring the following intuitions. To answer a SQuAD question, one needs to locate a text fragment in the input sentence as its answer. Thus, if a question has more hints that can help locate the answer fragment, it would be easier to answer. For the examples in Figure \ref{figure:SQuAD_demo}, the hint ``atomic number'' in Q1 is very helpful, because, in the corresponding sentence, it is just next to the answer ``8'', while for Q3, the hint ``instrument'' is far from the answer ``The electric guitar''.
The second intuition is inspired by the recent research on style-guided text generation, which incorporates a latent style representation (e.g., sentiment label or review rating score) as an input of the generator \cite{DBLP:conf/nips/ShenLBJ17,quase}. Similarly, performing difficulty control can be regarded as a problem of sentence generation towards a specified attribute or style. On top of the typical seq2seq architecture, our framework has two tailor-made designs to explore the above intuitions: (1) Position embeddings are learned to capture the proximity hint of the answer in the input sentence;
(2) Global difficulty variables are learned to control the overall ``difficulty'' of the questions. For the last issue, we propose to employ existing reading comprehension (RC) systems to evaluate the difficulty of generated questions. Intuitively, questions that cannot be answered by RC systems are more difficult than those answered correctly.
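To make the proximity-hint intuition concrete, the following sketch computes, for each token, its distance to the answer span; a position-embedding layer would then embed these distances. The function and its exact bucketing are illustrative assumptions on our part, not the paper's implementation.

```python
def answer_proximity_hints(tokens, answer_start, answer_end):
    """Distance of each token to the answer span [answer_start, answer_end]
    (0 inside the span). Hint words with small distances, like "atomic
    number" next to the answer "8", make a question easier to answer."""
    dist = []
    for i in range(len(tokens)):
        if answer_start <= i <= answer_end:
            dist.append(0)                    # token inside the answer
        elif i < answer_start:
            dist.append(answer_start - i)     # tokens before the answer
        else:
            dist.append(i - answer_end)       # tokens after the answer
    return dist
```

For a 6-token sentence whose answer spans tokens 1--2, the feature vector is $[1, 0, 0, 1, 2, 3]$.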
In the quantitative evaluation, we compare our DQG model with state-of-the-art models and ablation baselines. The results show that our model not only generates questions of better quality under the metrics like BLEU and ROUGE, but also has the capability of generating questions complying with the specified difficulty labels. The manual evaluation finds that the language quality of our generated questions is better, and our model can indeed control the question difficulty.
\section{Task Definition}
In the DQG task, our goal is to generate SQuAD-like questions of diverse difficulty levels for a given sentence. Note that the answers of SQuAD questions are text spans in the input sentence, which makes them significantly different from RACE questions \cite{lai2017large} such as ``What do you learn from the story?''. Considering their different emphases, SQuAD questions are more suitable for our task, since the difficulty of RACE questions mostly comes from understanding the story, not from the way the question is asked.
Thereby, we assume that the answers for asking questions are given, and they appear as text fragments in the input sentences by following the paradigm of SQuAD.
We propose an end-to-end framework to handle DQG. Formally, let $a$ denote the answer for asking question, let $s$ denote the sentence containing $a$ from a reading comprehension paragraph.
Given $a$, $s$, and a specified difficulty level $d$ as input, the DQG task is to generate a question $q$ which has $a$ as its answer, and meanwhile should have $d$ as its difficulty level.
\section{The Protocol of Difficulty Labeling}
SQuAD \cite{Rajpurkar2016SQuAD10} is a reading comprehension dataset containing 100,000+ questions on Wikipedia articles. The answer of each question is a text fragment from the corresponding input passage.
We employ SQuAD questions to prepare our experimental dataset.
The difficulty level is a subjective notion and can be addressed in many ways, e.g., syntax complexity, coreference resolution and elaboration \cite{Sugawara2017EvaluationMF}.
To avoid the ambiguity of ``question difficulty'' in this preliminary study, we design the following automatic labeling protocol and study the correlation between the automatically labelled difficulty and human judgment.
We first define two difficulty levels, \textit{Hard} and \textit{Easy}, in this preliminary dataset for the sake of simplicity and practicality. We employ two RC systems, namely R-Net \cite{NETWORKS2017RnetMR}~\footnote{\scriptsize{\url{https://github.com/HKUST-KnowComp/R-Net}}} and BiDAF \cite{Seo2016BidirectionalAF}~\footnote{\scriptsize{\url{https://github.com/allenai/bi-att-flow}}}, to automatically assess the difficulty of the questions. The \textit{labeling protocol} is: A question would be labelled with \textit{Easy} if both R-Net and BiDAF answer it correctly under the exact match metric, and labelled with \textit{Hard} if both systems fail to answer it. The remaining questions are eliminated for suppressing the ambiguity.
Note that we cannot directly employ the original data split of SQuAD to train a single model of R-Net or BiDAF and use it to assess all questions. Such assessment is not appropriate, because models overfit their training questions and would label them all as easy. To avoid this problem, we re-split the original SQuAD questions into 9 splits and adopt a 9-fold strategy. To label a given split (the current split), 7 of the remaining splits are used as the training data and the last one as the validation data; the trained model then assesses the difficulty of the questions in the current split. This strategy guarantees that a model is never exposed to the questions it labels.
Finally, we obtain 44,723 easy questions and 31,332 hard questions.
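The per-question decision of the labeling protocol can be sketched as below. This is a simplified illustration: the exact-match test here only lowercases and trims, whereas the official SQuAD metric also normalizes articles and punctuation, and the function name is hypothetical.

```python
def label_question(rnet_pred: str, bidaf_pred: str, gold: str):
    """Label a question by the exact-match behaviour of two RC systems.

    Returns "Easy" if both systems answer correctly, "Hard" if both
    fail, and None when the two systems disagree (such questions are
    eliminated to suppress ambiguity).
    """
    rnet_ok = rnet_pred.strip().lower() == gold.strip().lower()
    bidaf_ok = bidaf_pred.strip().lower() == gold.strip().lower()
    if rnet_ok and bidaf_ok:
        return "Easy"
    if not rnet_ok and not bidaf_ok:
        return "Hard"
    return None  # ambiguous: dropped from the dataset
```

In the 9-fold procedure, this function is applied to each held-out split using predictions from models trained on the other splits.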
To verify the soundness of our labeling protocol, we evaluate its consistency with human judgment.
We sample 100 \textit{Easy} questions and 100 \textit{Hard} questions, and hire 3 annotators to rate the difficulty of all these questions on a 1--3 scale (3 for the most difficult). The average difficulty rating is 1.90 for the \textit{Easy} questions and 2.52 for the \textit{Hard} ones.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{figures/framework.pdf}
\caption{Overview of our DQG framework \textit{(better viewed in color)}}
\label{figure:pipeline}
\end{figure}
\section{Framework Description}
Given an input sentence $s=({w_1}, {w_2}, ..., {w_m})$, a text fragment $a$ in $s$, and a difficulty level $d$, our task is to generate a question $q$, which is asked with $s$ as its background information, takes $a$ as its answer, and has $d$ as its difficulty.
The architecture of our difficulty-controllable question generator is depicted in Figure \ref{figure:pipeline}. The encoder takes two types of inputs, namely, the word embeddings and the relative position embeddings (capturing the proximity hints) of sentence words (including the answer words). Bidirectional LSTMs are employed to encode the input into contextualized representations. Besides two standard elements, namely attention and copy, the decoder contains a special initialization to control the difficulty of the generated question. Specifically, we map the difficulty label $d$ into a global difficulty variable with a lookup table, and combine the variable with the last hidden state of the encoder to initialize the decoder.
\subsection{Exploring Proximity Hints}
Recall our first intuition that proximity hints are helpful for answering SQuAD-like questions. Before introducing our design for implementing this intuition, we quantitatively verify it with some statistics. Specifically, we examine the average distance to the answer fragment of those nonstop question words that also appear in the input sentence. For example, for Q1 in Figure \ref{figure:SQuAD_demo} and its corresponding input sentence ``Oxygen is a chemical element with symbol O and atomic number 8'', we calculate the word-level average distance of the words ``atomic'', ``number'', ``element'', and ``oxygen'' to the answer ``8''. The statistics are given in Table~\ref{tab:distance_stat}.
For comparison, the bottom row gives the average distance of all nonstop sentence words to the answer. Counting only the nonstop question words, we find that their distance to the answer fragment is much smaller than that of the sentence words, namely 8.43 vs. 11.20. We call this \textit{Question Word Proximity Hint} (\textbf{QWPH}). More importantly, the distance for hard questions is significantly larger than that for easy questions, namely 9.71 vs. 7.67, which verifies our intuition that if a question has more obvious proximity hints (i.e., it contains more words that are near the answer in the corresponding sentence), it is easier to solve. We therefore model QWPH for easy questions and hard questions separately, and call this \textit{Difficulty Level Proximity Hint} (\textbf{DLPH}).
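The distance statistic can be computed as in the following sketch. The stopword list here is a tiny illustrative subset, tokenization is plain whitespace splitting, and the answer span is given as inclusive token indices; names are hypothetical.

```python
STOPWORDS = {"is", "a", "with", "and", "the", "of", "what"}

def avg_question_word_distance(sentence, question, answer_start, answer_end):
    """Average token distance to the answer span for the nonstop
    question words that also appear in the sentence."""
    s_toks = [t.lower() for t in sentence.split()]
    q_words = {t.lower() for t in question.split()} - STOPWORDS
    dists = []
    for i, tok in enumerate(s_toks):
        if tok in q_words:
            # distance to the nearest edge of the (inclusive) answer span
            if i < answer_start:
                dists.append(answer_start - i)
            elif i > answer_end:
                dists.append(i - answer_end)
    return sum(dists) / len(dists) if dists else None
```

For the oxygen example above, the overlapping words ``oxygen'', ``element'', ``atomic'', and ``number'' sit 11, 7, 2, and 1 tokens from the answer ``8''.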
To implement the QWPH intuition, our model learns a lookup table which maps the distance of each sentence word to the answer fragment, i.e., $0$ (for answer words), 1, 2, etc., into a position embedding: $(\mathbf{p}_0, \mathbf{p}_{1}, \mathbf{p}_{2}, ..., \mathbf{p}_{L})$, where $\mathbf{p}_i\in{\mathbb{R}^{d_p}}$ and $d_p$ is the dimension. $L$ is the maximum distance we consider.
Different from QWPH that is difficulty agnostic, the DLPH intuition additionally explores the information of question difficulty levels. Therefore, we define two lookup tables: $(\mathbf{p}_0^e, \mathbf{p}_{1}^e, \mathbf{p}_{2}^e, ..., \mathbf{p}_{L}^e)$ for the Easy label, and $(\mathbf{p}_0^h, \mathbf{p}_{1}^h, \mathbf{p}_{2}^h, ..., \mathbf{p}_{L}^h)$ for the Hard label.
Note that the above position embeddings not only carry the information of sentence word position, but also let our model know which aspect (i.e., answer) to ask with the embeddings of position 0.
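The mapping from token positions to the indices looked up in the embedding table(s) can be sketched as follows; the function name is hypothetical, and the clipping at $L$ matches the maximum distance described above.

```python
def relative_positions(n_tokens, ans_start, ans_end, L=20):
    """Map each sentence position to a clipped distance-to-answer index.

    Index 0 marks answer tokens; indices 1..L encode the distance of a
    context token to the nearest answer edge, clipped at L.  The model
    looks these indices up in a learned embedding table (a single table
    for QWPH, or one table per difficulty label for DLPH).
    """
    idxs = []
    for i in range(n_tokens):
        if ans_start <= i <= ans_end:
            d = 0                      # inside the answer span
        elif i < ans_start:
            d = ans_start - i          # to the left of the answer
        else:
            d = i - ans_end            # to the right of the answer
        idxs.append(min(d, L))
    return idxs
```

For DLPH, the same indices are used but the embedding table is chosen by the difficulty label ($\mathbf{p}^e_\ast$ or $\mathbf{p}^h_\ast$).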
\begin{table}[t!]
\small
\centering
{\small
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l|ccc}
\Xhline{2\arrayrulewidth}
\hline
& {~Easy} & {~Hard} & {~All} \\
\hline
{Avg. distance of question words} & {~~7.67} & {~~9.71} & {~~8.43} \\
\hline
{Avg. distance of all sentence words} & {11.23} & {11.16} & {11.20} \\
\hline
\Xhline{2\arrayrulewidth}
\end{tabular}}
}
\caption{Distance statistics for non-stop words}
\label{tab:distance_stat}
\end{table}
\subsection{Characteristic-rich Encoder}
The characteristic-rich encoder incorporates several features into a contextualized representation.
An embedding lookup table first maps the tokens in the sentence into dense vectors $(\mathbf{w}_1,$ $\mathbf{w}_2, ..., \mathbf{w}_m)$, where $\mathbf{w}_i\in{\mathbb{R}^{d_w}}$.
For each word we then concatenate its word embedding and position embedding (proximity hints) to derive a characteristic-rich embedding: $\mathbf{x}=[\mathbf{w};\mathbf{p}]$.
We use bidirectional LSTMs to encode the sequence $(\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_m)$ to get a contextualized representation for each token:
\begin{equation}\label{eqn.enc}
\overrightarrow{\mathbf{h}}_i = \overrightarrow{\text{{LSTM}}}(\overrightarrow{\mathbf{h}}_{i-1}, \mathbf{x}_i),~ \overleftarrow{\mathbf{h}}_i = \overleftarrow{\text{{LSTM}}}(\overleftarrow{\mathbf{h}}_{i+1}, \mathbf{x}_i), \nonumber
\end{equation}
where $\overrightarrow{\mathbf{h}}_i$ and $\overleftarrow{\mathbf{h}}_i$ are the hidden states at the $i$-th time step of the forward and the backward LSTMs.
We concatenate them together as $\mathbf{h}_i=[\overrightarrow{\mathbf{h}}_i;\overleftarrow{\mathbf{h}}_i]$.
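The bidirectional encoding pattern can be illustrated with a toy scalar recurrence standing in for the LSTM cells; the weights here are fixed constants rather than learned parameters, purely for illustration.

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=0.5):
    # toy scalar recurrence standing in for an LSTM cell
    return math.tanh(w_h * h + w_x * x)

def bidirectional_encode(xs):
    """Run a forward and a backward scan over the inputs and pair the
    states, mirroring h_i = [forward h_i ; backward h_i]."""
    fwd, h = [], 0.0
    for x in xs:
        h = rnn_step(h, x)
        fwd.append(h)
    bwd, h = [], 0.0
    for x in reversed(xs):
        h = rnn_step(h, x)
        bwd.append(h)
    bwd.reverse()  # realign with the original token order
    return list(zip(fwd, bwd))
```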
\subsection{Difficulty-controllable Decoder}
We use another LSTM as the decoder to generate the question. We employ the difficulty label $d$ to initialize the hidden state of the decoder.
During the decoding, we incorporate the attention and copy mechanisms to enhance the performance.
\paragraph{Global Difficulty Control. }
We regard the generation of difficulty-controllable questions as a problem of sentence generation towards a specified style, i.e., easy or hard.
To do so, we introduce a global difficulty variable to control the generation. We follow the recent works for the task of style transfer that apply the control variable globally, i.e., using the style variable to initialize the decoder \cite{quase}.
Specifically, for the specified difficulty level $d$, we first map it to its corresponding difficulty variable $\mathbf{d}\in{\mathbb{R}^{d_d}}$, where $d_d$ is the dimension of a difficulty variable. Then we use the concatenation of $\mathbf{d}$ with the final hidden state $\mathbf h_m$ of the encoder to initialize the decoder hidden state $\mathbf{u}_0 = [\mathbf h_m;\mathbf{d}]$.
Note that in the training stage, we feed the model the ground truth difficulty labels, while in the testing stage, our model can take any specified difficulty labels, i.e., difficulty-controllable, for question generation.
We also tried variants that add this variable at other places in the model, such as every encoder or decoder input, but these variants do not work.
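The decoder initialization $\mathbf{u}_0 = [\mathbf h_m;\mathbf{d}]$ can be sketched as below. The lookup-table values and dimensions are placeholders (in the real model the difficulty variables are learned, with $d_d = 10$), and the names are hypothetical.

```python
# placeholder difficulty variables; learned vectors in the real model
DIFFICULTY_TABLE = {
    "Easy": [0.1] * 10,
    "Hard": [-0.1] * 10,
}

def init_decoder_state(h_m, difficulty):
    """u_0 = [h_m ; d]: concatenate the encoder's final hidden state
    with the global difficulty variable for the requested label."""
    return list(h_m) + DIFFICULTY_TABLE[difficulty]
```

At test time the caller may pass any label, which is what makes the generation difficulty-controllable.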
\paragraph{Decoder with Attention \& Copy. }
The decoder predicts the word probability distribution at each decoding timestep to generate the question.
At the \textit{t}-th timestep, it reads the word embedding $\mathbf{w}_{t}$ and the hidden state $\mathbf{u}_{t-1}$ of the previous timestep to generate the current hidden state $\mathbf u_t = \text{LSTM}(\mathbf u_{t-1}, \mathbf w_{t})$.
Then the decoder employs the attention mechanism \cite{Luong2015EffectiveAT,Jiani18,Wang2019TitleguidedEF} and copy mechanism \cite{See2017GetTT} to generate the question by copying words in the sentence or generating words from a predefined vocabulary.
\section{Experiments}
\subsection{Experimental Settings}
\paragraph{Dataset.}
Our prepared dataset is split according to articles of the SQuAD data, and Table \ref{tab:stat} provides the detailed statistics.
Across the training, validation and test sets, the splitting ratio is around 7:1:1, and the easy sample ratio is around 58\% for all three.
\begin{table}[!t]
\small
\centering
{\small
\resizebox{0.7\columnwidth}{!}{
\begin{tabular}{l|ccc}
\Xhline{2\arrayrulewidth}
\hline
& {Train} & {Dev} & {Test} \\
\hline
{\# easy questions} & {34,813} & {4,973} & {4,937} \\
\hline
{\# hard questions} & {24,317} & {3,573} & {3,442} \\
\hline
{Easy ratio} & {58.88\%} & {58.19\%} & {58.92\%} \\
\hline
\Xhline{2\arrayrulewidth}
\end{tabular}}
}
\caption{The statistics of our dataset}
\label{tab:stat}
\end{table}
\paragraph{Baselines and Ablation Tests.}
We only employ neural network based methods as our baselines, since they perform better than non-neural methods as shown in recent works \cite{Du2017LearningTA,Zhou2017NeuralQG}.
The first baseline models the question generation as a seq2seq problem incorporating the attention mechanism, and we refer to it as \textbf{L2A} \cite{Du2017LearningTA}. The second baseline
\textbf{Ans} adds answer indicator embedding to the seq2seq model, similar to \cite{Zhou2017NeuralQG,Kumar2018AutomatingRC}.
Two ablations that only employ the question word proximity hint or the difficulty level proximity hint are referred to as \textbf{QWPH} and \textbf{DLPH}.
Moreover, we examine the effectiveness of the global difficulty control (\textbf{GDC}) combined with QWPH and DLPH, referred to as \textbf{QWPH-GDC} and \textbf{DLPH-GDC}. All these methods are enhanced with the \textit{copy} mechanism.
\paragraph{Model Details and Parameter Settings.}
The embedding dimensions for the position embedding and the global difficulty variable, i.e. $d_p$ and $d_d$, are set to 50 and 10 respectively. We use the maximum relative distance $L=20$ in the position embedding.
We adopt teacher-forcing in the encoder-decoder training and use the ground truth difficulty labels.
For testing, we select the model with the lowest perplexity and employ beam search with size 3 for question generation.
All important hyper-parameters, such as $d_p$ and $d_d$, are selected on the validation dataset.
\begin{table}[!t]
\small
\centering
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{l | c c c c | c c c c}
\Xhline{3\arrayrulewidth}
& \multicolumn{4}{c|}{\textbf{Easy} Questions Set} & \multicolumn{4}{c}{\textbf{Hard} Questions Set} \\
& \multicolumn{2}{c}{R-Net} & \multicolumn{2}{c|}{BiDAF} & \multicolumn{2}{c}{R-Net} & \multicolumn{2}{c}{BiDAF} \\
& \texttt{EM} & \texttt{F1} & \texttt{EM} & \texttt{F1} & \texttt{EM} & \texttt{F1} & \texttt{EM} & \texttt{F1} \\
\hline
\hline
Ans & 82.16 & 87.22 & 75.43 & 83.17 & 34.15 & 60.07 & 29.36 & 55.89 \\
QWPH & 82.66 & 87.37 & 76.10 & 83.90 & 33.35 & 59.50 & 28.40 & 55.21 \\
QWPH-GDC & 84.35 & 88.86 & 77.23 & 84.78 & 31.60 & 57.88 & 26.68 & 54.31 \\
DLPH & 85.49 & 89.50 & 78.35 & 85.34 & 28.05 & 54.21 & 24.89 & 51.25 \\
DLPH-GDC & \textbf{85.82} & \textbf{89.69} & \textbf{79.09} & \textbf{85.72} & \textbf{26.71} & \textbf{53.40} & \textbf{24.47} & \textbf{51.20} \\
\Xhline{3\arrayrulewidth}
\end{tabular}}
\caption{Difficulty of the generated questions, measured with R-Net and BiDAF. For easy questions, a higher score indicates better difficulty control, while for hard questions, a lower score indicates better control}
\label{tab:generation-result-auto}
\end{table}
\begin{table}[!t]
\centering
\resizebox{0.95\columnwidth}{!}{
\begin{tabular}{l | c c c c | c c c c}
\Xhline{3\arrayrulewidth}
& \multicolumn{4}{c|}{\textbf{Easy} Questions Set} & \multicolumn{4}{c}{\textbf{Hard} Questions Set} \\
& \multicolumn{2}{c}{R-Net} & \multicolumn{2}{c|}{BiDAF} & \multicolumn{2}{c}{R-Net} & \multicolumn{2}{c}{BiDAF} \\
& \texttt{EM} & \texttt{F1} & \texttt{EM} & \texttt{F1} & \texttt{EM} & \texttt{F1} & \texttt{EM} & \texttt{F1} \\
\hline
\hline
QWPH-GDC & 7.41 & 5.72 & 7.13 & 5.88 & 6.45 & 5.47 & 6.13 & 5.10 \\
DLPH & 12.41 & 9.51 & 11.28 & 8.49 & 12.01 & 10.45 & 10.51 & 9.37 \\
DLPH-GDC & \textbf{12.91} & \textbf{9.95} & \textbf{12.40} & \textbf{9.23} & \textbf{12.68} & \textbf{10.76} & \textbf{11.22} & \textbf{9.97} \\
\Xhline{3\arrayrulewidth}
\end{tabular}}
\caption{The results of controlling difficulty, measured with R-Net and BiDAF. The scores are the performance gaps between questions generated with the original difficulty labels and questions generated with the reversed difficulty labels}
\label{tab:generation-gap-auto}
\end{table}
\subsection{Difficulty Control Results}
We run R-Net and BiDAF to assess the difficulty of our generated hard and easy questions. Here the R-Net and BiDAF systems are trained using the same train/validation splits as shown in Table \ref{tab:stat}, and we report their performance under the standard reading comprehension measures for SQuAD questions, i.e., Exact Match (\textbf{EM}) and macro-averaged F1 score (\textbf{F1}), on the easy and hard question sets respectively.
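The two measures can be sketched as below. This simplified version lowercases and splits on whitespace; the official SQuAD evaluation script additionally strips articles and punctuation before comparing.

```python
from collections import Counter

def exact_match(pred, gold):
    """1 if the prediction matches the gold answer exactly, else 0."""
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    """Harmonic mean of token-overlap precision and recall."""
    p, g = pred.lower().split(), gold.lower().split()
    common = Counter(p) & Counter(g)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```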
For all experiments, we firstly show the performance of difficulty-controllable question generation by feeding ground truth difficulty labels, then we feed the reverse difficulty labels to demonstrate our model can \textit{\textbf{control}} the difficulty of generated questions.
Recall that the generated questions can be split into an easy set and a hard set according to the difficulty labels. We evaluate the generated questions from the perspective that a reading comprehension system (e.g., R-Net or BiDAF) should perform better on the generated easy set and worse on the hard set. If a pipeline does not use the answer information, its generated questions are likely not about the answers, so neither BiDAF nor R-Net can work well on either easy or hard questions. Therefore, we exclude L2A here.
\begin{table}[!t]
\centering
\resizebox{0.8\columnwidth}{!}
{\small
\begin{tabular}{l| c c c| c c c}
\Xhline{3\arrayrulewidth}
\multirow{2}{*}{\textbf{}} & \multicolumn{3}{c|}{\textbf{Easy} Question Set} & \multicolumn{3}{c}{\textbf{Hard} Question Set} \\ \cline{2-7}
& \texttt{F} & \texttt{D} & \texttt{R} & \texttt{F} & \texttt{D} & \texttt{R} \\ \hline
Ans & 2.91 & 2.02 & 0.74 & 2.87 & 2.12 & 0.58 \\
DLPH-GDC & 2.94 & 1.84 & 0.76 & 2.87 & 2.26 & 0.64 \\
\Xhline{3\arrayrulewidth}
\end{tabular}}
\caption{Human evaluation results for generated questions. \texttt{Fluency(F)} and \texttt{Difficulty(D)} take values from \{1, 2, 3\} (3 means the top fluency or difficulty), while \texttt{Relevance(R)} takes a binary value, i.e., 1 or 0}
\label{tab:manual}
\end{table}
As shown in Table \ref{tab:generation-result-auto}, for the easy set, the questions generated by the methods using the difficulty label ``Easy'' are easier to answer. Specifically, compared with Ans and QWPH, which cannot control the difficulty, QWPH-GDC, DLPH, and DLPH-GDC generate easier questions, showing that they have the capability of generating difficulty-controllable questions. An immediate concern is that a model could simply produce trivial questions by having them contain the answer words. In fact, our models do not exhibit this behaviour, because it would increase the training loss. To further verify this, we calculate the occurrence rate of answer words in the generated questions: only 0.09\% of answer words appear in the questions generated by our models.
For the hard set, we can draw the same conclusion by keeping in mind that a lower score indicates the corresponding method performs better in generating difficulty-controllable questions. (Note that questions irrelevant to the answer can also yield lower scores, and we have more discussion about this issue in Section \ref{sec:human} for the human evaluation.)
This observation shows that incorporating the difficulty information locally by the two position embeddings or globally by the difficulty-controlled initialization indeed guides the generator to generate easier or harder questions.
Comparing DLPH and QWPH-GDC, we find that the local difficulty control by the position embedding is more effective. DLPH-GDC performs the best by combining the local and global difficulty control signals.
Moreover, we find that QWPH achieves slightly better performance than the Ans baseline.
A large performance gap between QWPH-GDC and QWPH again validates the effectiveness of the global difficulty control.
Meanwhile, the improvement from QWPH to DLPH shows that the local difficulty level proximity hint emphasizes the question difficulty at each time step, leading to better performance.
Another way to validate our model is to test whether it can \textit{\textbf{control}} the difficulty when fed the reversed difficulty labels.
For example, for a question in the easy set, if we feed the ``Hard'' label together with the input sentence and answer of this question into our model, we expect the generated question should be harder than feeding the ``Easy'' label.
Concretely, if a method has a better capability of controlling the difficulty, the performance gap of a reading comprehension system between the questions generated with the true labels and those generated with the reversed labels should be larger. The results of this experiment are given in Table \ref{tab:generation-gap-auto}.
We only compare models which have difficulty control capability.
The model combining local and global difficulty signals, i.e., DLPH-GDC, achieves the largest gap, which again shows that: (1) DLPH-GDC has the strongest capability of generating difficulty-controllable questions; (2) The local difficulty control (i.e. DLPH) is more effective than the global (i.e. QWPH-GDC).
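The gap statistic can be sketched as below. The sign convention (flipping on the hard set so that a larger value always means stronger control) is our assumption for illustration, and the names are hypothetical.

```python
def control_gap(em_true_label, em_reversed_label, set_type):
    """Gap in RC performance between questions generated with the
    original difficulty label and with the reversed label.

    On the easy set the true ("Easy") label should yield higher EM
    than the reversed ("Hard") label; on the hard set the opposite
    holds, so the sign is flipped to keep
    "larger gap = stronger difficulty control" on both sets.
    """
    gap = em_true_label - em_reversed_label
    return gap if set_type == "Easy" else -gap
```

Usage with hypothetical EM scores (not the table's values): `control_gap(85.0, 78.0, "Easy")` and `control_gap(27.0, 38.0, "Hard")` both report positive gaps for a model with difficulty control.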
\subsection{Manual Evaluation} \label{sec:human}
We hire 3 annotators to rate the model-generated questions. We randomly sample 100 questions with ``Easy'' labels and 100 with ``Hard'' labels from the test set, and let each annotator annotate these 200 cases.
During the annotation, each data point contains a sentence, an answer, and the questions generated by different models, without showing the difficulty labels.
We consider three metrics: \texttt{Fluency(F)}, \texttt{Difficulty(D)} and \texttt{Relevance(R)}. The annotators are first asked to read the generated questions to evaluate their grammatical correctness and fluency. Then, all annotators are required to rate the difficulty of each generated question by considering the corresponding sentence and answer. Finally, for relevance, we ask the annotators to judge if the question is asking about the answer.
\texttt{Fluency} and \texttt{Difficulty} take values from \{1, 2, 3\} (3 means the top fluency or difficulty), while \texttt{Relevance} takes a binary value (1 or 0).
Table \ref{tab:manual} shows the results of the manual evaluation. We compare our best model DLPH-GDC with the Ans baseline. We separate the \texttt{Easy} questions and \texttt{Hard} questions for statistics.
For both question sets, both models achieve high scores on \texttt{Fluency}, owing to the strong language modeling capability of neural models.
For \texttt{Difficulty}, we can find that DLPH-GDC can generate easier or harder questions than Ans by feeding the true difficulty labels.
Another observation is that, for the Ans baseline, the questions generated in the \texttt{Easy} set are easier than those in the \texttt{Hard} set, which validates our difficulty labelling protocol from another perspective. Note that SQuAD-like questions are not really difficult for human beings; therefore, the difference in \texttt{Difficulty} values between the easy set and the hard set is not large.
Furthermore, we observe that our model generates more relevant questions than the Ans baseline. The reason could be that our position embedding not only tells where the answer words are, but also indicates the distance of the context words to the answer. Thus, it provides more information to the model for asking to-the-point questions. Ans only differentiates answer tokens from non-answer tokens, and treats all non-answer tokens equally.
Recall our concern regarding Table \ref{tab:generation-result-auto} that the hard questions generated by our difficulty-controlling models, say DLPH-GDC, may simply be irrelevant to the answer, which would make DLPH-GDC achieve lower EM/F1 scores than the Ans baseline. By comparing the \texttt{Relevance} scores in Table \ref{tab:manual} and the EM/F1 scores in Table \ref{tab:generation-result-auto} for the hard question set, we find that the questions generated by DLPH-GDC are both more relevant (Table \ref{tab:manual}) and more difficult (Tables \ref{tab:generation-result-auto} and \ref{tab:manual}) than those generated by the Ans baseline. This observation resolves the irrelevance concern and supports the conclusion that DLPH-GDC does generate more difficult yet relevant questions which can fail the two RC pipelines.
\subsection{\mbox{Automatic Evaluation of Question Quality}}
Here we evaluate the similarity of generated questions with the ground truth. Since our dataset is not parallel (i.e., for a sentence and answer pair, our dataset only has one question with the ``easy'' or ``hard'' label), here we only evaluate the question quality by feeding the ground truth difficulty labels.
We employ BLEU (B), METEOR (MET), and ROUGE-L (R-L) scores, following \cite{Du2017LearningTA}. BLEU evaluates the average n-gram precision against a set of reference sentences, with a brevity penalty for overly short candidates. ROUGE-L is commonly employed to evaluate recall based on the longest common subsequence.
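The core quantity behind BLEU-$n$, the clipped (modified) n-gram precision, can be sketched as follows; full BLEU additionally combines $n=1..4$ with a brevity penalty.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: each candidate n-gram counts at most
    as often as it appears in the reference."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0
```

The clipping is what stops a degenerate candidate such as ``the the the'' from scoring perfect unigram precision against ``the cat''.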
\begin{table}[t]
\small
\centering
\resizebox{0.80\columnwidth}{!}{
\begin{tabular}{@{}l@{~} | @{~}c@{~} @{~}c@{~} @{~}c@{~} @{~}c@{~} @{~}c@{~} @{~}c@{~} }
\Xhline{3\arrayrulewidth}
\hline
& B1 & B2 & B3 & B4 & MET & R-L \\
\hline
\hline
L2A & 36.01 & 21.61 & 14.97 & 10.88 & 15.99 & 38.06 \\
Ans & 43.51 & 29.06 & 21.35 & 16.22 & 20.53 & 45.66 \\
QWPH & 43.75 & 29.28 & 21.61 & 16.46 & 20.70 & 46.02 \\
\cline{1-7}
QWPH-GDC & 43.99 & 29.60 & 21.86 & 16.63 & 20.87 & 46.26 \\
DLPH & 44.11 & 29.64 & 21.89 & 16.68 & 20.94 & 46.22 \\
DLPH-GDC & 43.85 & 29.48 & 21.77 & 16.56 & 20.79 & 46.16 \\
\Xhline{3\arrayrulewidth}
\end{tabular}}
\caption{Automatic evaluation for question quality}
\label{tab:generation-result-sim}
\end{table}
Table \ref{tab:generation-result-sim} shows the quality of generated questions.
Comparing the first three methods, we find that the answer and position information helps considerably in asking to-the-point questions, i.e., questions more similar to the ground truth. Moreover, QWPH performs better than Ans, indicating that further distinguishing the distances of the non-answer words to the answer provides richer information for the model to generate better questions.
The results in the lower half show that, given the ground truth difficulty labels, these three methods with the capability of difficulty control are better than the first three methods.
These three models achieve comparable performance, and DLPH-GDC sacrifices a little in N-gram based performance here while achieving the best difficulty control capability (refer to Tables \ref{tab:generation-result-auto} \& \ref{tab:generation-gap-auto}).
\subsection{Case Study}
Figure \ref{figure:casestudy} provides some examples of generated questions (with answers marked in red). The number after each model name is the average distance to the answer fragment of the nonstop words shared by the question and the input sentence. This average distance corresponds well to our intuition about proximity hints: compared with the questions generated by the Ans baseline, our model gives more hints (shorter distance) when asking easier questions and fewer hints (longer distance) when asking harder questions.
For the first example, we observe that the ground truth question written by a human is quite easy, just replacing the answer ``bodhi'' with ``what''. Among the three systems, Ans asks a question that is not about the answer, while both DLPH-GDC and DLPH-GDC (reverse) are able to generate to-the-point questions. Specifically, given the ``Easy'' label, DLPH-GDC tends to use more words from the input sentence, while DLPH-GDC (reverse) uses fewer, and its generated question is relatively difficult.
For the second example, we find that our system is also applicable to questions with the ``Hard'' label.
\section{Related Work}\label{sec.lit_review}
In this section, we primarily review question generation (QG) works on free text.
\citet{Vanderwende2007AnsweringAQ} proposed this task;
later on, several rule-based approaches were developed, which manually design question templates and transform declarative sentences into interrogative questions \cite{Mazidi2014LinguisticCI,Labutov2015DeepQW,Lindberg2013GeneratingNL,Heilman2010GoodQS}.
These rule-based approaches need extensive human labor to design question templates, and usually rely on human annotators to evaluate the generated questions.
\citet{Du2017LearningTA} proposed the first automatic QG framework. They view QG as a seq2seq learning problem to learn the mapping between sentences and questions in reading comprehension.
Moreover, the procedure of QG from a sentence is not a one-to-one mapping, because given a sentence, different questions can be asked from different aspects. As \citet{Du2017LearningTA} mentioned, in their dataset, each sentence corresponds to 1.4 questions on average.
Seq2seq learning may not perform well for learning such a one-to-many mapping.
Some recent works attempt to solve this issue by assuming the aspect has been already known when asking a question \cite{Zhou2017NeuralQG,Yuan2017MachineCB} or can be detected with a third-party pipeline \cite{Du2018HarvestingPQ}.
This assumption makes sense, because when asking questions, humans usually first read the sentence to decide which aspect to ask about.
In this paper, we explore another important dimension in QG, i.e., generating questions with controllable difficulty, that has never been studied before.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{figures/case.pdf}
\caption{Example questions (with answers marked in red). The human question for Input 2 uses some information (``hard rock'') in preceding sentences which are not shown here}
\label{figure:casestudy}
\end{figure}
\section{Conclusions}
In this paper, we present a novel setting, namely difficulty-controllable question generation for reading comprehension, which to the best of our knowledge has never been studied before.
We propose an end-to-end approach to learn the question generation with designated difficulty levels.
We also prepared the first dataset for this task, and extensive experiments show that our framework can solve this task reasonably well.
One interesting future direction is to explore generating multiple questions for different aspects in one sentence \cite{Gao2019GeneratingDF}.
\section*{Acknowledgments}
This work is supported by the Research Grants Council of the Hong Kong
Special Administrative Region, China (No. CUHK 14208815 and No. CUHK
14210717 of the General Research Fund).
We thank Department of Computer Science and Engineering, The Chinese University of Hong Kong for the conference grant support.
We would like to thank Jianan Wang for her efforts in the preliminary investigation.
\bibliographystyle{named}
\section{Introduction}
The application of quantum theory to gravity is pursued using a number of different approaches (see e.g. \cite{Duff:2012sqa} for a recent survey). These can be broadly divided into two -- those that are ``background dependent" and those that are not \cite{Isham:1993ji}. The term refers to what structures in the classical theory are to be held fixed in the passage to quantum theory. The canonical quantization approach formulated by DeWitt \cite{DeWitt:1967yk} is considered to be the defining case of a background independent approach to quantum gravity; this is also the paper where the very first quantization of the FLRW model was described.
The canonical quantization program naturally divides into two distinct approaches. These are referred to as (i) Dirac quantization, where the Hamiltonian constraint is imposed as an operator condition on wave function(al)s, and (ii) reduced phase space quantization, where time and spatial coordinate gauges are fixed in the classical theory before proceeding to quantization. It is the former that leads to the Wheeler-DeWitt equation. Solutions in either case are referred to as ``wave functions of the Universe."
In its more recent incarnations, the Dirac quantization condition is approached via a path integral as in the Hartle-Hawking method \cite{Hartle:1983ai}, or by imposing the condition directly as in Loop Quantum Gravity \cite{Ashtekar:2004eh,Thiemann:2007pyv}. In reduced phase space quantization, a phase space variable is first selected as a clock. Its conjugate variable provides the physical non-vanishing Hamiltonian. Quantization then proceeds as in conventional quantum theory with a time-dependent Schr\"odinger equation (or path integral). This division has led to much debate about the role of time in quantum gravity at both the philosophical and physical levels, and questions about the equivalence of the two methods \cite{Hartle:1984ut,Schleich:1990gd, Giesel:2017mfc}.
Because of the difficulty in solving the Wheeler-DeWitt equation in the former case and the time-dependent Schr\"odinger equation in the latter case, nearly all concrete calculations are restricted to either homogeneous cosmological models or to inhomogeneous perturbations of these models. Examples of early work on such models include Refs. \cite{Misner:1969hg, Blyth:1975is}. The most recent works are in the framework of Loop Quantum Cosmology (LQC) \cite{Agullo:2016tjh}, and a revisit of the Hartle-Hawking prescription via a Lorentzian path integral \cite{Feldbrugge:2017kzv,DiazDorronsoro:2017hti}.
In this note we revisit the flat, homogeneous and isotropic cosmology with dust and a cosmological constant $\Lambda$. This remains the typical model to consider since current observations suggest that our Universe is modelled well by an FLRW cosmology with zero spatial curvature and a very small positive cosmological constant $\Lambda \sim 3 \times 10^{-122} l_P^{-2}$ \cite{Ade:2015xua}. We study the model using the reduced phase space method in the dust time gauge \cite{Brown:1994py,Husain:2011tk,Husain:2011tm,Giesel:2012rb,Ali:2015ftw}; a recent study via Dirac-Wheeler-DeWitt quantization appears in \cite{Maeda:2015fna}. In the context of matter time gauges, there are also several studies using scalar field time in quantum gravity and cosmology; a representative selection is \cite{Rovelli:1993bm,Feinberg:1995tf,Nakonieczna:2015dza, Assanioussi:2017tql}.
In the Arnowitt-Deser-Misner (ADM) canonical formalism, we show that the dust time gauge leads to a surprising result: after a canonical transformation, the corresponding physical Hamiltonian becomes exactly that of a simple harmonic oscillator, with frequency determined by $\sqrt{\Lambda}$. The corresponding quantum theory is therefore immediate.
For $\Lambda<0$ the potential is that of the usual oscillator, whereas for $\Lambda >0$ it is the inverted oscillator. The former case describes Universes either as stationary states or as wave packets that expand and contract ad infinitum. The latter case has only scattering solutions, which give Universes with a single bounce. Depending on the choice of canonical parametrization, the oscillator is on either the half-line or the full line. All cases give singularity avoidance, for all choices of self-adjoint extensions of the Hamiltonian. Our work also exhibits one of the situations where Dirac and reduced phase space quantization give similar results for a particular choice of operator ordering in the Wheeler-DeWitt equation.
We begin by reviewing the general formalism for the dust time gauge, followed by its application to cosmology in the following sections.
\section{Dust time gauge}
The model we consider is general relativity coupled to a pressureless dust field $T$. The action
\begin{equation}
S= \int d^4x ~ \sqrt{-g}R - \int d^4x ~ \dfrac{1}{2} M \sqrt{-g} \left( g^{ab} \partial_aT \partial_bT + 1 \right)
\end{equation}
where $g_{ab}$ is the 4-metric, $R$ is the 4-Ricci scalar, and $M$ is the dust energy density, leads to the canonical ADM action
\begin{equation}
S= \int d^3x\ dt \ \left(\pi^{ab} \dot{q}_{ab} + p_T \dot{T} - N{\cal H} - N^a {\cal C}_a \right) ,
\end{equation}
where
\begin{subequations}
\begin{eqnarray}
{\cal H} &\equiv& {\cal H}_G + {\cal H}_D \nonumber\\
&=&
\frac{1}{\sqrt{q}} (\pi^{ab}\pi_{ab} - \frac{1}{2} \pi^2 ) + \sqrt{q} (\Lambda - {}^{(3)} \!R) \nonumber\\
&& + \ \text{sgn}(M)\ p_T \sqrt{1+q^{ab}\partial_aT \partial_bT}, \\
{\cal C}_a &\equiv& -D_b \pi^b_{\ a} + p_T \partial_a T,
\end{eqnarray}
\end{subequations}
$q_{ab}$ is the 3-metric, $\pi^{ab}$ is its conjugate momentum, $p_T$ is the dust conjugate momentum, $N$ is the lapse, $N^a$ is the shift, and the metric is of the ADM form
\begin{equation}
ds^2 = -N^2dt^2 + (dx^a + N^a dt)(dx^b + N^b dt)q_{ab}. \label{admm}
\end{equation}
We define the canonical dust time gauge by
\begin{equation}
T=\epsilon t
\end{equation}
with $\epsilon= \pm 1$. The requirement that the gauge be preserved in time gives
\begin{equation}
\dot{T} = \epsilon = \Big\{ T, \int d^3x\ N {\cal H} \Big\}\Big|_{T=t} = \text{sgn}(M) N. \label{gauge-pres}
\end{equation}
The physical Hamiltonian $\mathcal{H}_{\text{p}}$ is obtained by substituting the gauge into the dust symplectic term in the canonical action, which identifies
$\mathcal{H}_{\text{p}} \equiv -\epsilon p_T$. Solving the Hamiltonian constraint
\begin{equation}
{\cal H}_G + \text{sgn}(M) p_T = 0
\end{equation}
then identifies the physical Hamiltonian
\begin{equation}
\mathcal{H}_{\text{p}} = -\epsilon p_T = \epsilon\ \text{sgn}(M) {\cal H}_G = N {\cal H}_G \ ,
\end{equation}
using (\ref{gauge-pres}) for the last equality. It is also useful to note, using $p_T = \sqrt{q}\ \dot{T}M/N$ and (\ref{gauge-pres}), the relation
\begin{equation}
p_T = \epsilon \sqrt{q}\ \frac{M}{N} = \epsilon \ \sqrt{q}\ \frac{\text{sgn}(M)}{N}\ |M| = \sqrt{q}\ |M| \, ,
\end{equation}
which shows that $p_T > 0$ for $M\ne 0$, and
\begin{equation}
\mathcal{H}_{\text{p}} = -\epsilon \sqrt{q}\ |M| = N {\cal H}_G .
\end{equation}
Thus the requirement that the dust Hamiltonian satisfy ${\cal H}_D = \text{sgn}(M)p_T\ge 0$ implies $\text{sgn}(M)=+1$, since $p_T= \sqrt{q}\ |M| \ge 0$. This means that the dust field satisfies the weak energy condition. With this choice (\ref{gauge-pres}) gives $N=\epsilon$. In the following we make the choice $N=\epsilon = -1$ which gives the manifestly positive physical Hamiltonian density
\begin{equation}
\mathcal{H}_{\text{p}} = \sqrt{q}\ |M| = - {\cal H}_G \ge 0. \label{Hp2}
\end{equation}
\subsection{Application to cosmology}
Let us now consider the reduction of the dust time gauge theory to homogeneous and isotropic cosmology. This is obtained by setting
\begin{eqnarray}
q_{ab} &=& a^2(t) e_{ab}\nonumber \\
\pi^{ab} &=& \frac{p_a(t)}{6a(t)} e^{ab},
\end{eqnarray}
where $e_{ab} = \text{diag}(1,1,1)$ is a fiducial flat metric. The reduced phase space coordinates are $(a, p_a)$, and we take $a \in (0,\infty)$ and $p_a \in \mathbb{R}$ as the definition of this parametrization (since we must have $\text{det}(q_{ab}) = a^3 > 0$).
The physical Hamiltonian (\ref{Hp2}) for the flat case then becomes
\begin{equation}
\mathcal{H}_{\text{p}} = \frac{p_a^2}{24a} - \Lambda a^3.
\end{equation}
To briefly recap, this FLRW model started with a four-dimensional phase space, that of the dust field and the scale factor. After fixing the time gauge and solving the Hamiltonian constraint, the reduced phase space becomes two-dimensional, with canonical coordinates $(a,p_a)$. This is unlike the vacuum de Sitter model (see e.g.~\cite{Halliwell:1988ik}), which actually has no physical degrees of freedom; the physical meaning of ``wave functions of the Universe" without additional degrees of freedom is therefore unclear.
Let us now note the canonical transformation
\begin{equation}
p= \frac{p_a}{\sqrt{12a}},\ \ \ \ x = \frac{4}{\sqrt{3}} a^{3/2} \label{canon}
\end{equation}
which, together with the rescaling $\Lambda \longrightarrow 3\Lambda/8$, transforms the Hamiltonian to
\begin{equation}
\mathcal{H}_{\text{p}} = \frac{1}{2}\left({p^2} - \Lambda x^2 \right). \label{Hpfrw}
\end{equation}
There are thus three cases of interest: $\Lambda=0$ is a free particle, $\Lambda<0$ is the oscillator and $\Lambda>0$ is the inverted oscillator.
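As a sanity check, the substitution (\ref{canon}) can be verified symbolically. The following sympy sketch (ours, not part of the derivation above) substitutes the transformation into $\mathcal{H}_{\text{p}}$ and reads off the coefficient of $x^2$:

```python
# Verify the canonical transformation (p_a, a) -> (p, x) applied to
# H_p = p_a^2/(24 a) - Lambda a^3.  The coefficient multiplying x^2
# is an output of the computation, not an input.
import sympy as sp

a, p, x, Lam = sp.symbols('a p x Lambda', positive=True)

pa = sp.sqrt(12*a) * p                               # inverse of p = p_a/sqrt(12 a)
H = pa**2/(24*a) - Lam*a**3
H = H.subs(a, (sp.sqrt(3)*x/4)**sp.Rational(2, 3))   # inverse of x = 4 a^{3/2}/sqrt(3)
print(sp.simplify(H))                                # = (p^2 - (3*Lambda/8) x^2)/2
```

The kinetic term comes out as $p^2/2$ and the potential as $-\tfrac{1}{2}(3\Lambda/8)x^2$, i.e. the oscillator form with a rescaled cosmological constant.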
\section{Quantization and wave functions of the Universe}
This section consists of two parts where we describe quantization in the dust time gauge for two choices of the configuration space. These lead to quantum theories on either the half-line or the full line. In the former case there is a one-parameter family of self-adjoint extensions of the physical Hamiltonian.
\subsection{Quantization on the half-line}
The classical theory is on the half-line, $x\in (0,\infty)$, so the obvious choice for the Hilbert space is $L^2(\mathbb{R}^+,dx)$. In this space it is known that Hamiltonians of the form $p^2 + V(x)$ have self-adjoint extensions. Specifically, it is readily checked that the physical Hamiltonian (\ref{Hpfrw}) is symmetric in the usual representation $\hat{p} \rightarrow -i \partial_x$, i.e. that $(\psi, \widehat{\mathcal{H}_{\text{p}}} \phi) = (\widehat{\mathcal{H}_{\text{p}}} \psi, \phi)$, provided $\displaystyle \lim_{x\rightarrow \infty} \phi=0$ and
\begin{equation}
\lim_{x\rightarrow 0}\left[\psi^* \phi' - \phi \psi^{* \prime} \right] = 0.
\end{equation}
This gives the boundary condition $\phi'(0) = \alpha \phi(0), \ \alpha \in \mathbb{R} $. Thus there is a one-parameter ($\alpha$) family of self-adjoint extensions of $\widehat{\mathcal{H}_{\text{p}}}$ on the half-line, with domain specified by
\begin{equation}
\mathbb{H}_\alpha = \left\{ \phi \in L^2(\mathbb{R}^+,dx) \Big| \lim_{x\rightarrow 0} (\ln \phi)' = \alpha \in \mathbb{R} \right\}.
\end{equation}
We are interested in solving the time-dependent Schr\"odinger equation,
\begin{equation}
i \frac{\partial }{\partial t} \phi(x,t) = -\frac{1}{2} \frac{\partial^2}{\partial x^2} \phi(x,t) - \frac{1}{2} \Lambda x^2 \phi(x,t),
\end{equation}
with the boundary condition mentioned above. (In this equation all variables are dimensionless, or equivalently, written in Planck units.)
\noindent \underbar{$\Lambda=0$}: There are two types of elementary solutions. The first are the ingoing and outgoing waves of fixed energy (in the dust time gauge) satisfying the above boundary condition,
\begin{equation}
\phi_{\alpha k}(x,t) = e^{-ik^2t/2} \left[e^{ikx} - \left(\frac{\alpha-ik}{\alpha+ik}\right) e^{-ikx} \right]
\end{equation}
Normalizable wave functions are constructed in the usual manner as
\begin{equation}
\psi_\alpha(x,t) = \int_{-\infty}^\infty dk \ f(k) \phi_{\alpha k}(x,t)
\end{equation}
All such solutions describe Universes with singularity avoidance and a bounce at the origin with a phase shift given by
$\alpha$.
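The Robin boundary condition can be confirmed directly on these modes; the sympy sketch below (ours, for illustration) checks $\phi'(0) = \alpha\,\phi(0)$ for $\phi_{\alpha k}$:

```python
# Check that the Lambda = 0 modes phi_{alpha k} satisfy phi'(0) = alpha phi(0).
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, alpha = sp.symbols('k alpha', positive=True)

phi = sp.exp(-sp.I*k**2*t/2) * (sp.exp(sp.I*k*x)
        - (alpha - sp.I*k)/(alpha + sp.I*k) * sp.exp(-sp.I*k*x))

bc = sp.simplify(sp.diff(phi, x).subs(x, 0) - alpha*phi.subs(x, 0))
print(bc)  # 0
```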
The second type of solution is a bound state,
\begin{equation}
\phi(x,t) = e^{i\kappa^2t/2}\ e^{-\kappa x}, \ \ \ \kappa>0
\end{equation}
This corresponds to $\alpha = -\kappa$, a choice permitted by the boundary conditions. The Universe this describes is ruled out by experiment, since $ \langle a^{3/2}\rangle \sim \langle x \rangle = (2\kappa)^{-1}$, which has the interpretation of an emergent flat spacetime from the expectation value of the metric.
\medskip
\noindent \underbar{$\Lambda <0$}: This is the oscillator on the half-line with the boundary condition $\psi'(0) - \alpha \psi(0)=0$. With $\Lambda = -1/l^2$ and $\zeta=t/l$, the propagator on $\mathbb{R}$ is the standard oscillator result,
\begin{eqnarray}
K(x,\zeta; x',0) &=& \sqrt{\frac{1}{2\pi i l \sin{\zeta}}} \nonumber\\
&&\times \exp\left\{ \frac{i[(x^2 +x'^2)\cos \zeta -2xx' ]}{2 l \sin \zeta} \right\}.\label{prop}
\end{eqnarray}
For the half-line problem at hand, given initial data $\psi(x,0)=f(x)$ for $x>0$, the solution with the required boundary condition at $x=0$ may be obtained by extending the given initial data $f(x)$ on $\mathbb{R}^+$ to the region $x<0$, such that
\begin{equation}
f'(x)-\alpha f(x) = - \left( f'(-x) - \alpha f(-x)\right) \label{f}, \ \ \ x<0,
\end{equation}
i.e. imposing antisymmetry on the boundary condition function. Solving this equation gives the required extension
\begin{eqnarray}
f_L(x) & \equiv& e^{\alpha x}\int_x^0 du\ e^{-\alpha u} \left[ f'(-u) - \alpha f(-u) \right] \nonumber\\
&& +\ e^{\alpha x} f(0), \ \ \ x<0,
\end{eqnarray}
where the integration constant is chosen such that $f_L(0) = f(0)$.
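As an illustration, the defining property of the extension can be checked numerically. The sketch below uses the Gaussian data of Fig.~\ref{fig1} and the value $\alpha = 1$ (both choices are ours, made for concreteness):

```python
# Numerically verify f_L'(x) - alpha f_L(x) = -( f'(-x) - alpha f(-x) )
# for x < 0, and that f_L matches f at the origin.
import math
from scipy.integrate import quad

alpha = 1.0
f  = lambda x: math.exp(-(x - 3.0)**2)        # Gaussian data of Fig. 1 (unnormalized)
fp = lambda x: -2.0*(x - 3.0)*f(x)            # f'(x)

def f_left(x):                                # the extension f_L, valid for x <= 0
    g = lambda u: math.exp(-alpha*u)*(fp(-u) - alpha*f(-u))
    val, _ = quad(g, x, 0.0)
    return math.exp(alpha*x)*(val + f(0.0))

x0, h = -0.7, 1e-5
lhs = (f_left(x0 + h) - f_left(x0 - h))/(2*h) - alpha*f_left(x0)
rhs = -(fp(-x0) - alpha*f(-x0))
print(abs(lhs - rhs))   # small (finite-difference + quadrature error only)
```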
Convolving the data so extended with the full-line propagator (\ref{prop}) then gives the solution
\begin{eqnarray}
\psi(x,\zeta) &=& \int_{-\infty}^0 dx' \ K(x,\zeta;x',0)\ f_L(x') \nonumber\\
&&+ \int_0^\infty dx' \ K(x,\zeta;x',0)\ f(x'), \ \ x>0.
\end{eqnarray}
It is straightforward to construct explicit examples of such solutions; all describe Universes that expand out to a maximum size, re-collapse, and bounce again. This is of course expected since wave packets are confined in the half-oscillator potential. Fig.~\ref{fig1} shows the dynamics of a representative Gaussian wave function with $\Lambda=-1$ and $\alpha=1$. The asymmetric bounce is evident, and the second and fourth frames demonstrate the multiple-bounce feature.
\begin{figure}
\includegraphics[width=\linewidth]{Fig1.png}
\caption{Snapshots of $|\psi(x,t)|^2$ with the initial data $f(x) = \frac{e^{-(x-3)^2}}{\sqrt[4]{\pi/2}}$, and parameters $\Lambda = -1$ and $\alpha = 1.0$. The Universe moves toward the origin $(t=0.1 - 1.5)$, expands asymmetrically $(t=2.9)$, and contracts again $(t=4.8)$. The profiles at $t=1.5$ and $t= 4.8$ are nearly identical. }
\label{fig1}
\end{figure}
\smallskip
\noindent \underbar{$\Lambda >0$:} The Hamiltonian is not bounded below. However the unitary evolution operator is still well defined since the Hamiltonian has self-adjoint extensions. The propagator on $\mathbb{R}$ is obtained by the replacement $l\rightarrow il$ to give
\begin{eqnarray}
\bar{K}(x,\zeta; x',0) &=& \sqrt{\frac{1}{2\pi il\sinh{\zeta}}} \nonumber\\
&&\times \exp\left\{ \frac{i[(x^2 +x'^2)\cosh \zeta -2xx' ]}{2l\sinh \zeta} \right\}.\label{prop2}
\end{eqnarray}
Solutions of the time-dependent Schr\"odinger equation with the boundary condition $\phi'(0)-\alpha \phi(0)=0$ are found in the same way as above, by extending the initial data function to $x<0$. It is evident that the propagator is damped for large times $\zeta$ due to the prefactor. However, for the very small $\Lambda$ that is experimentally observed, the decay time would be very large. (It is useful to note that the issue of convergence of the Euclidean functional integral for the inverted oscillator was studied in \cite{Carreau:1990is}, where it is shown that the integral for the propagator converges if the propagation time is bounded by a factor of the oscillator frequency.) Fig.~\ref{fig2} shows the propagation of the same initial Gaussian wave packet as in Fig.~\ref{fig1}, but now for positive $\Lambda$. The wave packet moves outward and spreads rapidly.
%
\begin{figure}
\includegraphics[width = \linewidth]{Fig2.png}
\caption{Snapshots of $|\psi(x,t)|^2$ with the initial data $f(x) = \frac{e^{-(x-3)^2}}{\sqrt[4]{\pi/2}}$, and parameters $\Lambda = 1$ and $\alpha = 1.0$. The initial wave packet travels outwards and spreads. }
\label{fig2}
\end{figure}
\subsection{Quantization on $\mathbb{R}$ }
In the above we started with the standard canonical parametrization for the FLRW cosmology which led to the oscillator on the half-line.
There is an alternative parametrization that directly gives the oscillator on the real line after a rescaling of variables. This is
\begin{eqnarray}
q_{ab} &=& A^{4/3}(t) e_{ab} \nonumber\\
\pi^{ab} &=& \frac{1}{4 A^{1/3}(t)} \ P_A(t) e^{ab},
\end{eqnarray}
where the phase space $(A,P_A)$ is now $\mathbb{R}^2$.
In this parametrization there is an exact Lorentzian ``Hartle-Hawking'' wave function, which is the amplitude to create the Universe from nothing, albeit in the dust time gauge. This is obtained from (\ref{prop2}):
\begin{equation}
\Psi_{HH} \equiv\bar{K}(A,\zeta; 0,0) = \sqrt{\frac{1}{2\pi i l \sinh{\zeta}}}
\exp\left( -\frac{ iA^2}{2l\tanh \zeta} \right),\label{prop3}
\end{equation}
where $A^4= \text{det}(q_{ab})\equiv q$, and since we are now on the full line, $A\in \mathbb{R}$. This expression is just the inverted-oscillator propagator on the real line for $\Lambda = 1/l^2$, with $A_0=\zeta_0=0$.
For large times $\zeta=t/l$ this is
\begin{equation}
\bar{K}(A,\zeta; 0,0) \longrightarrow \frac{1}{\sqrt{\pi i l}} \exp\left(-\frac{ i\sqrt{q} + t}{2l }\right) . \label{prop3larget}
\end{equation}
This is oscillatory in 3-volume, and decays exponentially in time $t$.
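The large-$\zeta$ form can be checked numerically; the sketch below (ours) compares (\ref{prop3}) with (\ref{prop3larget}) for the illustrative values $A = l = 1$:

```python
# Compare the exact amplitude K_bar(A, zeta; 0, 0) with its large-zeta
# asymptotic form exp(-(i A^2 + t)/(2l))/sqrt(pi i l), where t = l*zeta.
import cmath, math

A, l = 1.0, 1.0

def K_exact(zeta):
    pref = cmath.sqrt(1.0/(2.0*math.pi*1j*l*math.sinh(zeta)))
    return pref*cmath.exp(-1j*A**2/(2.0*l*math.tanh(zeta)))

def K_large(zeta):
    t = l*zeta
    return cmath.sqrt(1.0/(math.pi*1j*l))*cmath.exp(-(1j*A**2 + t)/(2.0*l))

zeta = 20.0
print(abs(K_exact(zeta) - K_large(zeta)))   # negligible for zeta >> 1
```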
\section{Discussion}
The basic result in this note is that in general relativity coupled to pressureless dust in the dust time gauge, the FLRW model with a cosmological constant has a physical Hamiltonian that is exactly that of a harmonic oscillator with frequency determined by $\sqrt{\Lambda}$. The Hamiltonian has a one-parameter ($\alpha$) set of self-adjoint extensions, and explicit solutions of the time-dependent Schr\"odinger equation are readily constructed. All cases give singularity avoidance, which here means that wave functions describing the Universe bounce at small spatial volume for any value of $\alpha$, regardless of whether the configuration space is the half line or the full line.
It is interesting to compare these results with those obtained in LQC \cite{Agullo:2016tjh} using the connection-triad variables. There the $\Lambda =0$ case was studied with scalar field time, where the form of the Hamiltonian is such that wave function dynamics requires numerical study. It was subsequently studied in dust time \cite{Husain:2011tm}. In both these cases the Hamiltonian is essentially self-adjoint. In our case the bounce occurs for all self-adjoint extensions, and can be asymmetric in the sense that there is a phase shift at the bounce determined by $\alpha$. Only the $\alpha=0$ case gives a symmetric bounce.
For comparison with Dirac quantization, the corresponding quantum theory also resembles the oscillator, but only for the Laplace-Beltrami operator ordering in the kinetic term in the Wheeler-DeWitt operator \cite{Maeda:2015fna}; this work (which was pointed out to us after the present work was posted to the arXiv) considered only $\Lambda=1$, and did not address the most general self-adjoint extension with Robin boundary conditions. Nevertheless, it is one of the few cases where it seems possible to rigorously establish equivalence between Dirac and reduced phase space quantizations. It would be interesting to study this issue for full quantum gravity with dust time \cite{Husain:2011tk}.
Our considerations and results are entirely in the Lorentzian theory, and as such may be compared with similar models that invoke the Hartle-Hawking prescription in Lorentzian time, in particular the recent debate concerning integration contours for the propagator \cite{Feldbrugge:2017kzv,DiazDorronsoro:2017hti}. The latter work reports a suppression factor $\exp(-\Lambda l_p^2)$ in the propagator for the no-boundary wave function of the Universe in the semiclassical approximation. We find a similar result, but our state is {\it exact} (i.e., not just a semiclassical approximation), and also has explicit (dust) time dependence: eqn.\ (\ref{prop3larget}) has a suppression factor $\exp(-t/2l)$. For the currently observed value of $\Lambda$, $l \sim 10^{60}\, l_p$, so the characteristic decay time is $\sim 10^{60}$ Planck times, which is close to the age of the Universe.
The model with spatial curvature $k\ne 0$ and additional matter fields such as the minimally coupled scalar field is not exactly solvable. The physical Hamiltonian for this case in the dust time gauge (after the canonical transformation (\ref{canon}) ) is
\begin{equation}
\mathcal{H}_{\text{p}}^k =\frac{1}{2}\left({p^2} - \Lambda x^2 \right) + k x^{2/3} + \frac{p_\phi^2}{2 x^2} + x^2 V(\phi).
\end{equation}
Gravitational perturbations can be added in a similar way. Models such as this demonstrate that it is useful to consider matter time gauges in the cosmological setting.
Lastly the $\Lambda<0$ case may be of interest in the context of the AdS/CFT conjecture and holography. Specifically the idea of using matter (or other) time gauge in the bulk might provide a useful mechanism to probe bulk dynamics and the holographic signatures of resolved singularities in such settings \cite{Bodendorfer:2018dgg}, something which appears so far to be largely unexplored.
\medskip
\noindent {\bf Acknowledgements} This work was supported by the Natural Science and Engineering Research Council of Canada. This work was initiated while V. H. was visiting the Perimeter Institute. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of
Ontario through the Ministry of Research and Innovation.
1807.03761
\section{Introduction}
In this paper, we prove several theorems about the number of integral points on elliptic curves over ${\mathbb Q}$. We bound the number of integral points using only the rank of the elliptic curve and the higher order divisors of its discriminant, and, using this bound, we show that the second moment of the number of integral points on elliptic curves over ${\mathbb Q}$ is bounded. To our knowledge, this is the first instance of an unconditional bound on higher moments for arithmetic data on elliptic curves.
We first give an explicit upper bound on the number of integral points on affine integral Weierstrass models of elliptic curves over ${\mathbb Q}$, depending only on the rank and the number of square divisors of the discriminant of the curve.
\begin{theorem}\label{thm:the bound on the number of integral points}
Let $A,B\in {\mathbb Z}$ be such that $\Delta_{A,B} := -16(4A^3 + 27B^2)\neq 0$. Let $E_{A,B}$ be the elliptic curve given by $y^2 = x^3 + Ax + B$. Then
$$\# E_{A,B}({\mathbb Z}) \ll 2^{\mathop{\mathrm{rank}}{E_{A,B}({\mathbb Q})}} \prod_{p^2 \mid \Delta_{A,B}} \left( 4 \floor{\frac{v_p(\Delta_{A,B})}{2}} + 1 \right).$$
In fact, the factor in the product may be taken to be the smaller of $4 \floor{\frac{v_p(\Delta_{A,B})}{2}} + 1$ and $2 \cdot 10^7$.
\end{theorem}
Here $v_p$ denotes the $p$-adic valuation for a prime $p$, and $E_{A,B}({\mathbb Z})$ is the set of integer solutions $\{(x,y)\in {\mathbb Z}^2 : y^2 = x^3 + Ax + B\}$. By $f\ll g$ we mean that there is a positive absolute constant $c > 0$ such that $|f|\leq c\cdot |g|$. Mordell was the first to prove the finiteness of the number of integral points on an elliptic curve (basically by the invariant-theoretic method we employ in this paper), a theorem generalized by Siegel to all curves of genus $g\geq 1$.
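To illustrate the shape of the bound, the following sketch evaluates the product over $p^2 \mid \Delta_{A,B}$ for a sample curve; the helper name `bound_factor` is ours, not from the text:

```python
# Evaluate Delta_{A,B} and the product over primes with p^2 | Delta appearing
# in the theorem, for the sample curve y^2 = x^3 - 2x + 1.
from sympy import factorint

def bound_factor(A, B):
    Delta = -16*(4*A**3 + 27*B**2)
    prod = 1
    for p, v in factorint(abs(Delta)).items():
        if v >= 2:                       # only primes with p^2 | Delta contribute
            prod *= min(4*(v // 2) + 1, 2*10**7)
    return Delta, prod

print(bound_factor(-2, 1))  # (80, 9): Delta = 2^4 * 5, so the p=2 factor is 4*2+1 = 9
```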
\begin{remark}\label{rmk:rightafterthemaintheorem}
If $v_p(\Delta_{A,B}) = 2$ or $3$, the factor in Theorem \ref{thm:the bound on the number of integral points} for $p$ may be improved to $4$ (rather than $5$). This results from a slightly more careful analysis of the $p$-adic argument at the end of Bombieri--Schmidt \cite{bombierischmidt}; see Remark \ref{rmk:fourtotwo}.
\end{remark}
Previous upper bounds on the number of integral points on elliptic curves have similar shapes but are not strong enough for our theorems on moments. For example, Helfgott and Venkatesh \cite{helfgottvenkatesh} show that, for any $E_{A,B}$,
$$\# E_{A,B}({\mathbb Z}) \ll O(1)^{\omega(\Delta_{A,B})}\cdot (\log \left| \Delta_{A,B} \right|)^2\cdot 1.33^{\mathop{\mathrm{rank}}{E_{A,B}({\mathbb Q})}}$$
where $\omega(n)$ denotes the number of distinct prime factors of $n$. For minimal short Weierstrass curves $E_{A,B}$ (i.e., such that there does not exist a prime $p$ with $p^4 \mid A$ and $p^6 \mid B$), Silverman \cite{silverman-quantSiegel} shows that
$$\# E_{A,B}({\mathbb Z}) \ll O(1)^{\mathop{\mathrm{rank}}{E_{A,B}({\mathbb Q})}+\omega_{\text{ss}}(\Delta_{A,B})}$$
where $\omega_{\text{ss}}(\Delta_{A,B})$ denotes the number of primes of semistable bad reduction and the $O(1)$ is on the order of $10^{10}$. (Since we have control on the average size of $n$-Selmer groups only for small $n$, and thus control on the average size of $n^{\mathop{\mathrm{rank}}{E_{A,B}({\mathbb Q})}}$ only for small $n$, this bound is unsuitable for our application.)
The strongest bound for minimal short Weierstrass curves is by Hindry and Silverman \cite{hindrysilverman}:
\begin{equation} \label{eq:hindrysilverman}
\# E_{A,B}({\mathbb Z}) \ll O(1)^{\mathop{\mathrm{rank}}{E_{A,B}({\mathbb Q})}+\sigma_{A,B}}
\end{equation}
where $\sigma_{A,B} := \frac{\log \left| \Delta_{A,B} \right|}{\log N_{A,B}}$ is the Szpiro ratio of $E_{A,B}$ (here $N_{A,B}$ denotes the conductor of $E_{A,B}$). Since the ABC conjecture implies that the Szpiro ratio is at most $6 + o(1)$, the Hindry--Silverman bound \eqref{eq:hindrysilverman} implies that, conditional on ABC and uniform boundedness of ranks, the number of integral points is uniformly bounded. (In fact, all one needs is Lang's conjecture that $\hat{h}(P)\gg h(E_{A,B})$ for non-torsion $P\in E_{A,B}({\mathbb Q})$.)\footnote{Alternatively, Abramovich \cite{abramovich} has shown that the Lang--Vojta conjecture for varieties of log general type implies uniform boundedness of the number of $S$-integral points on a stably minimal model of an elliptic curve.}
\vspace{\baselineskip}
We use Theorem \ref{thm:the bound on the number of integral points} to prove that the second moment of the number of integral points on elliptic curves over ${\mathbb Q}$ is bounded. In particular, we consider the family $\mathcal{F}_{\mathrm{univ}}$ of all integral Weierstrass models
$$y^2 = x^3 + A x + B$$
of elliptic curves over ${\mathbb Q}$, where $A, B \in {\mathbb Z}$ with $\Delta_{A,B} \neq 0$, and order this family by {\em naive height} $$H(E_{A,B}) = H(A,B) := \max(4|A|^3, 27B^2).$$ Not only do we prove that the second moment of the number of integral points in this family is bounded, we obtain the following slightly stronger result:
\begin{theorem}\label{thm:the bound on the moments}
If $0 < s < \log_2{5} = 2.3219\dotso$, we have
\begin{equation} \label{eq:momentbounded}
\mathrm{Avg}(\left|E_{A,B}({\mathbb Z})\right|^s)\ll_s 1
\end{equation}
where the average is taken over all elliptic curves in $\mathcal{F}_{\mathrm{univ}}$ ordered by naive height.
\end{theorem}
More precisely, let
$$\mathcal{F}_{\mathrm{univ}}^{\leq T} := \{(A,B) : \Delta_{A,B} \neq 0, H(A,B) \leq T \}$$
parametrize all integral Weierstrass models $E_{A,B}$ of elliptic curves with naive height up to $T$.
Then there exists a constant $C_s$, depending only on $s$, such that
$$\limsup_{T\to\infty} \frac{\sum\limits_{(A,B)\in \mathcal{F}_{\mathrm{univ}}^{\leq T}} \left|E_{A,B}({\mathbb Z})\right|^s}{\left|\mathcal{F}_{\mathrm{univ}}^{\leq T}\right|} < C_s.$$
In fact, we may take $C_s$ to be $O(1)^{2^{20 (\log_2{5} - s)^{-1}}}$.
One expects that elliptic curves should have no ``unexpected points" on average, i.e., that all these moments should be $0$ (since we are not counting the point at infinity).
In \cite{levent-intpts}, it is proved that \eqref{eq:momentbounded} holds for $0 < s < \log_3 5 = 1.4649\ldots$ (and thus by taking $s = 1$, that the average number of integral points is bounded, a result also proved by D.~Kim \cite{dohyeongkim}).
\begin{remark}
A related but different question is to show that most elliptic curves have very few integral points; perhaps the strongest known result in this direction is that $80\%$ of curves in $\mathcal{F}_{\mathrm{univ}}$ have at most $2$ integral points (by combining the fact that $100\%$ of rank $1$ curves in $\mathcal{F}_{\mathrm{univ}}$ have at most $2$ points \cite[Lemma 20]{levent-intpts} with Bhargava--Shankar's result \cite{arulmanjul-5Sel} that at least $80\%$ of curves in $\mathcal{F}_{\mathrm{univ}}$ have rank $0$ or $1$). Note that these bounds do not imply that the average number of integral points is bounded, since it is a priori possible that there is some small exceptional subset in which the curves have an enormous number of points.
\end{remark}
\begin{remark}
Theorem \ref{thm:the bound on the moments} gives a bound on $\mathrm{Avg}(\left|E_{A,B}({\mathbb Z})\right|^s)$ for $s < \log_2{5}$ by using Bhargava--Shankar's bound on the average size of $5$-Selmer groups over this family \cite{arulmanjul-5Sel}. If a bound on the average size of $n$-Selmer groups over this family were known, the same argument would yield a similar bound for all $s < \log_2(n)$.
\end{remark}
\subsection*{Method of proof}
Theorem \ref{thm:the bound on the number of integral points} follows from studying a bijection first observed by Mordell between integral points on an integral Weierstrass model $E_{A,B}$ of an elliptic curve and binary quartics of the form $X^4 + 6 c X^2 Y^2 + 8 d X Y^3 + e Y^4$ with $c, d, e \in {\mathbb Z}$ and invariants $I = -48A$ and $J = -1728B$. The natural map taking the integral point to an element of the $2$-Selmer group of the elliptic curve translates precisely to taking the corresponding binary quartic to its ${\rm PGL}_2({\mathbb Q})$-equivalence class. By working explicitly (and using results of Bombieri--Schmidt \cite{bombierischmidt} and Evertse \cite{evertse} on Thue equations), we bound the size of a fibre by
$$\ll \prod_{p^2\mid \Delta_{A,B}} \min\left\{ 4 \floor{\frac{v_p(\Delta_{A,B})}{2}} + 1,\, 2 \cdot 10^7\right\}.$$
The image lies in $E_{A,B}({\mathbb Q})/2E_{A,B}({\mathbb Q})$, whose size is at most $4 \cdot 2^{\mathop{\mathrm{rank}}{E_{A,B}({\mathbb Q})}}$, giving the theorem.
Theorem \ref{thm:the bound on the moments} follows fairly straightforwardly from Theorem \ref{thm:the bound on the number of integral points}, H{\"o}lder's inequality, standard analytic techniques, and knowledge of bounds on the average sizes of $5$-Selmer groups in this family from Bhargava--Shankar \cite{arulmanjul-5Sel}.
We also obtain similar bounds on the second moments of the number of integral points on elliptic curves in so-called {\em large} families (see Definition \ref{def:largefamily}), for which average sizes of $5$-Selmer groups are bounded by \cite{arulmanjul-5Sel}. For example, for the family $\mathcal{F}_{\mathrm{min}}$ of {\em minimal} Weierstrass models (the subset of $\mathcal{F}_{\mathrm{univ}}$ where there does not exist a prime $p$ with $p^4 \mid A$ and $p^6 \mid B$), or for the family ${\mathcal F}_{\mathrm{ss}}$ of all semistable elliptic curves, we obtain the same result as in Theorem \ref{thm:the bound on the moments}, when the curves are ordered by naive height.
For certain other families of elliptic curves, e.g., the family
$$\mathcal{F}_1 := \{y^2 + d_3 y = x^3 + d_2 x^2 + d_4 x : d_2, d_3, d_4 \in {\mathbb Z},\, \Delta \neq 0\}$$
of elliptic curves in Weierstrass form with a marked point at $(0,0)$, ordered by an analogous notion of height, we also find that the average (and the $s$-moments for $0 < s < \log_2 3$) of the number of integral points is bounded. These types of results follow from the same techniques as for Theorem \ref{thm:the bound on the moments} and bounds on the average $3$-Selmer group size from \cite{cofreecounting}.
\subsection*{Acknowledgments} We thank Manjul Bhargava, Arul Shankar, and Joe Silverman for helpful comments and conversations. LA and WH were supported by the NSF GRFP and NSF grant DMS-1701437, respectively.
\section{Binary quartic forms and integral points on elliptic curves}
\subsection{Preliminaries on binary quartic forms}
Given a binary quartic form
\begin{equation} \label{eq:bq}
f(X,Y) = a X^4 + b X^3 Y + c X^2 Y^2 + d X Y^3 + e Y^4
\end{equation}
with coefficients in ${\mathbb Q}$, the group ${\rm SL}_2({\mathbb Q})$ naturally acts by linear substitutions of the variables, i.e., for $g \in {\rm SL}_2({\mathbb Q})$, one has
\begin{equation} \label{eq:SL2action}
g \cdot f(X,Y) = f((X,Y) \cdot g).
\end{equation}
There exist degree $2$ and $3$ polynomial invariants $I$ and $J$ that generate the ${\rm SL}_2({\mathbb Q})$-invariant ring
as a polynomial ring. The standard normalizations of $I$ and $J$ are as follows:
\begin{align*}
I &= 12 a e - 3 b d + c^2, \\
J &= 72 a c e - 27 a d^2 - 27 b^2 e + 9 b c d - 2 c^3.
\end{align*}
The discriminant $\Delta(f) = \frac{1}{27}(4I^3-J^2)$ of $f$ is a polynomial invariant with integer coefficients. It is well known that if $\Delta(f)$ is nonzero, then the double cover $Z^2 = f(X,Y)$ of ${\mathbb P}^1$ is a genus one curve with Jacobian isomorphic to the elliptic curve
$$E: y^2 = x^3 - \frac{I}{3} x - \frac{J}{27}.$$
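These normalizations can be verified symbolically; the sympy sketch below (ours) checks that $\frac{1}{27}(4I^3 - J^2)$ agrees with the discriminant of $f(X,1)$ for generic coefficients:

```python
# Verify Delta(f) = (4 I^3 - J^2)/27 against the discriminant of the quartic.
import sympy as sp

a, b, c, d, e, X = sp.symbols('a b c d e X')

I = 12*a*e - 3*b*d + c**2
J = 72*a*c*e - 27*a*d**2 - 27*b**2*e + 9*b*c*d - 2*c**3
Delta = (4*I**3 - J**2)/27

f = a*X**4 + b*X**3 + c*X**2 + d*X + e
print(sp.expand(Delta - sp.discriminant(f, X)))  # 0
```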
Conversely, over any field $K$, a smooth genus one curve with a rational degree $2$ divisor or line bundle (thereby giving a degree $2$ map to ${\mathbb P}^1$) has a model of the form $Z^2 = f(X,Y)$ for a binary quartic form $f$ over $K$.
We say that a binary quartic form \eqref{eq:bq} is {\em integral} if $a, b, c, d, e \in {\mathbb Z}$ and {\em integer-matrix} if additionally $4$ divides $b$ and $d$ and $6$ divides $c$. Both conditions are preserved by the action of ${\rm SL}_2({\mathbb Z})$. For an integer-matrix binary quartic form $f$, there are polynomial invariants $I'(f), J'(f)$ with {\em integral} coefficients such that $12 I' = I$ and $432 J' = J$, so the elliptic curve associated to $f$ is isomorphic to
\begin{equation} \label{eq:ECwithIJprime}
y^2 = x^3 - 4I' x - 16 J'.
\end{equation}
In the sequel, we will mostly work with binary quartics of a special type, so we name them as follows:
\begin{definition}
We say a binary quartic form \eqref{eq:bq} is {\em flattened} if it is integral and monic with no $X^3 Y$-coefficient, i.e., if $a = 0$, $b = 1$, and $c, d, e \in {\mathbb Z}$.
\end{definition}
\subsection{Mordell's construction} \label{sec:Mordell}
In \cite[Chapter 25]{mordell-diophantine}, Mordell shows that, given an integral affine Weierstrass model of an elliptic curve $y^2 = x^3 + Ax + B$ with an integral point, there exist an integer-matrix binary quartic form $f(X,Y)$ and $p,q \in {\mathbb Z}$ such that $f(p,q) = 1$, $I'(f) = -4A$, and $J'(f) = -4B$; however, his construction is not explicit. Conversely, given an integer-matrix binary quartic form $f(X,Y)$ such that $I'$ and $J'$ are multiples of $4$ and $p,q \in {\mathbb Z}$ such that $f(p,q) = 1$, one may explicitly produce (using covariants of $f$) an integral point on the elliptic curve \eqref{eq:ECwithIJprime}.
In the next two subsections, we give a geometric explanation of Mordell's construction, which yields an explicit construction of a monic integer-matrix binary quartic form associated to an integral point on an elliptic curve.
Let $\mathcal{E}$ be an elliptic curve over ${\mathbb Q}$ with affine integral Weierstrass model
\begin{equation} \label{eq:Weierstrass}
E_{A,B}: y^2 = x^3 + A x + B
\end{equation}
with $A, B \in {\mathbb Z}$. Let $O$ denote the point at infinity. Given a point $P = (x_0,y_0) \in E_{A,B}({\mathbb Q})$, the degree $2$ divisor $O + P$ induces a map from $\mathcal{E}$ to ${\mathbb P}^1$ as a double cover ramified at four (not necessarily rational) points. In other words, we obtain a rational binary quartic form, which is easily computed \cite{cremonafisherstoll, cofreecounting}:
\begin{equation} \label{eq:BQfromintpoint}
f(X,Y) = X^4 - 6 x_0 X^2 Y^2 + 8 y_0 X Y^3 + (-4A-3x_0^2) Y^4.
\end{equation}
It is easy to check that $I'(f) = -4A$ and $J'(f) = -4B$.
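One can confirm this identity numerically; the sketch below (ours) builds the quartic \eqref{eq:BQfromintpoint} from a few sample integral points and checks that $I = 12 I' = -48A$ and $J = 432 J' = -1728B$:

```python
def invariants(a, b, c, d, e):
    """I and J of the binary quartic a X^4 + b X^3 Y + c X^2 Y^2 + d X Y^3 + e Y^4."""
    I = 12*a*e - 3*b*d + c*c
    J = 72*a*c*e - 27*a*d*d - 27*b*b*e + 9*b*c*d - 2*c**3
    return I, J

# sample curves y^2 = x^3 + A x + B with an integral point (x0, y0)
samples = [(-2, 1, 1, 0), (1, -1, 1, 1), (-7, 10, 1, 2), (0, 1, 0, 1)]
for A, B, x0, y0 in samples:
    assert y0**2 == x0**3 + A*x0 + B             # (x0, y0) lies on the curve
    f = (1, 0, -6*x0, 8*y0, -4*A - 3*x0**2)      # the quartic of eq. (BQfromintpoint)
    assert invariants(*f) == (-48*A, -1728*B)    # i.e. I'(f) = -4A, J'(f) = -4B
```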
Conversely, given a binary quartic $f(X,Y) = X^4 + 6c X^2 Y^2 + 4d X Y^3 + e Y^4$, we may easily solve for the coefficients of the elliptic curve and the integral point by equating the coefficients with \eqref{eq:BQfromintpoint}. We obtain the elliptic curve
$$E: y^2 = x^3 - \frac{3 c^2 + e}{4} x + \frac{c^3 + d^2 - c e}{4}$$
which contains the point $P = (x_0,y_0) = (-c, d/2)$. Assuming that $I'(f) = 3c^2+e$ and $J'(f) = - c^3 - d^2 + c e$ are both divisible by $4$, we immediately have that $d$ must be even, in which case the elliptic curve $E$ and the point $P$ both have integral coefficients.
It is clear that these constructions are inverse to one another. We thus obtain the explicit maps for the bijection in the following theorem:
\begin{theorem}[Mordell] \label{thm:Mordell}
The following two sets are in bijection:
\begin{itemize}
\item integral affine Weierstrass models $y^2 = x^3 + Ax + B$ of elliptic curves with integral points $(x_0,y_0)$,
\item binary quartics $X^4 + 6 c X^2 Y^2 + 8 d X Y^3 + e Y^4$ with $c,d,e \in {\mathbb Z}$ and $e \equiv c^2 \pmod{4}$.
\end{itemize}
\end{theorem}
Note that the binary quartics in Theorem \ref{thm:Mordell} are flattened.
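Both directions of the bijection are simple enough to transcribe. In the sketch below (ours, with hypothetical function names), the theorem's form $X^4 + 6c X^2 Y^2 + 8d X Y^3 + e Y^4$ is compared with the formulas above (where the $XY^3$-coefficient is $4d'$, so $d' = 2d$), giving $A = -(3c^2+e)/4$, $B = (c^3 + 4d^2 - ce)/4$, and $(x_0, y_0) = (-c, d)$:

```python
def to_quartic(A, B, x0, y0):
    """Integral point on y^2 = x^3 + A x + B  ->  (c, d, e) as in Theorem (Mordell)."""
    return (-x0, y0, -4*A - 3*x0**2)

def from_quartic(c, d, e):
    """(c, d, e) with e = c^2 (mod 4)  ->  curve coefficients and integral point."""
    assert (e - c*c) % 4 == 0
    A = -(3*c*c + e) // 4          # exact: 3c^2 + e = 4c^2 (mod 4)
    B = (c**3 + 4*d*d - c*e) // 4  # exact: c^3 - c*e = 0 (mod 4)
    return A, B, -c, d

for A, B, x0, y0 in [(-2, 1, 1, 0), (1, -1, 1, 1), (-7, 10, 1, 2)]:
    c, d, e = to_quartic(A, B, x0, y0)
    assert (e - c*c) % 4 == 0                       # the congruence condition holds
    assert from_quartic(c, d, e) == (A, B, x0, y0)  # the maps are mutually inverse
```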
\subsection{Binary quartics with representations of $1$}
We now relate the sets in Theorem \ref{thm:Mordell} with binary quartic forms with representations of $1$, which is Mordell's original correspondence \cite[Chapter 25]{mordell-diophantine}. This subsection is not needed for proving the main theorems in this paper, but we include it to give a more modern interpretation of Mordell's work.
We show that integer-matrix binary quartic forms $f(X,Y)$ with a representation of $1$ (i.e., with $p,q \in {\mathbb Z}$ with $f(p,q)=1$) may be transformed, under the standard action of ${\rm SL}_2({\mathbb Z})$, to flattened integer-matrix binary quartics. An element $g \in {\rm SL}_2({\mathbb Z})$ acts on $f(X,Y)$ by linear transformations as in \eqref{eq:SL2action} and on $(p,q)$ satisfying $f(p,q) = 1$ by $(p,q) \cdot g$.
\begin{lemma} \label{lem:bqrep1}
There is a bijection between flattened integer-matrix binary quartics
\begin{equation} \label{eq:monicandzero}
X^4 + 6c X^2 Y^2 + 4d X Y^3 + e Y^4
\end{equation}
with $c, d, e \in {\mathbb Z}$ and ${\rm SL}_2({\mathbb Z})$-equivalence classes of triples $(f,p,q)$, where $f$ is an integer-matrix binary quartic form and $p,q \in {\mathbb Z}$ with $f(p,q) = 1$.
Furthermore, restricting to flattened integer-matrix binary quartics where $d$ is even and $c^2 \equiv e \pmod{4}$ gives a bijection with triples $(f,p,q)$ where $I'(f)$ and $J'(f)$ are divisible by $4$.
\end{lemma}
\begin{proof}
Given an integer-matrix binary quartic form $f(X,Y)$ and $p,q \in {\mathbb Z}$ with $f(p,q)=1$, because $p$ and $q$ must be relatively prime, there exist integers $\alpha$ and $\beta$ with $\alpha p + \beta q = 1$. Since the action of $(\begin{smallmatrix} \alpha & \beta \\ -q & p \end{smallmatrix})$ takes $(p,q)$ to $(1,0)$, there exists an ${\rm SL}_2({\mathbb Z})$-transformation taking $f$ to a monic integer-matrix binary quartic form. Then ``completing the quartic'' (which is possible over ${\mathbb Z}$ because of the coefficients of $4$ and $6$) shows that there exists a unique ${\rm SL}_2({\mathbb Z})$-transformation of $f$ giving a binary quartic of the form \eqref{eq:monicandzero}.
Given two binary quartics $f$ and $f'$ of the above form, each with the representation $(p,q) = (1,0)$ of $1$, it is easy to check that there is no nontrivial element of ${\rm SL}_2({\mathbb Z})$ taking $(f,1,0)$ to $(f',1,0)$.
The last statement follows trivially since for the binary quartic \eqref{eq:monicandzero}, we compute $I' = 3c^2 + e$ and $J' = -c^3 - d^2 + ce$.
\end{proof}
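The proof is constructive, and the flattening procedure is easy to implement. The sketch below (ours; the bookkeeping follows the convention $g \cdot f(X,Y) = f((X,Y)\cdot g)$ of \eqref{eq:SL2action}) moves a representation of $1$ to $(1,0)$ via an ${\rm SL}_2({\mathbb Z})$-matrix with first row $(p,q)$, and then completes the quartic:

```python
from math import comb

def act(g, f):
    """Substitute (X, Y) -> (X, Y) @ g into the quartic f = (a, b, c, d, e)."""
    (p, q), (r, s) = g
    out = [0]*5
    for coef, i in zip(f, (4, 3, 2, 1, 0)):        # term coef * X^i * Y^(4-i)
        xi = [comb(i, k) * p**(i-k) * r**k for k in range(i+1)]        # (pX+rY)^i
        yj = [comb(4-i, k) * q**(4-i-k) * s**k for k in range(4-i+1)]  # (qX+sY)^(4-i)
        for k1, c1 in enumerate(xi):
            for k2, c2 in enumerate(yj):
                out[k1+k2] += coef * c1 * c2
    return tuple(out)

def flatten(f, p, q):
    """Given integer-matrix f with f(p, q) = 1, return the flattened form in its orbit."""
    # extended Euclid: find u, v with p*u + q*v = 1, so ((p,q),(-v,u)) lies in SL_2(Z)
    old_r, rr, old_u, uu, old_v, vv = p, q, 1, 0, 0, 1
    while rr != 0:
        quot = old_r // rr
        old_r, rr = rr, old_r - quot*rr
        old_u, uu = uu, old_u - quot*uu
        old_v, vv = vv, old_v - quot*vv
    u, v = (old_u, old_v) if old_r == 1 else (-old_u, -old_v)
    f1 = act(((p, q), (-v, u)), f)         # now f1(1, 0) = f(p, q) = 1, so f1 is monic
    b1 = f1[1]
    assert b1 % 4 == 0                     # "completing the quartic" is integral
    return act(((1, 0), (-b1 // 4, 1)), f1)  # kill the X^3 Y coefficient

# round trip: scramble a flattened form by some g0 in SL_2(Z), then flatten again
f0 = (1, 0, 12, 12, 5)                     # X^4 + 6*2*X^2 Y^2 + 4*3*X Y^3 + 5*Y^4
g0 = ((2, 1), (1, 1))                      # det = 1
f_scrambled = act(g0, f0)
assert f_scrambled == (93, 280, 318, 160, 30)
# (1,0) * g0^{-1} = (1,-1) is the transported representation of 1
assert flatten(f_scrambled, 1, -1) == f0
```

The final assertion illustrates the uniqueness statement of the lemma: the flattened representative recovered from the scrambled triple is exactly the form we started with.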
Combining Lemma \ref{lem:bqrep1} with Theorem \ref{thm:Mordell}, we have the following:
\begin{corollary} \label{cor:Mordellrep1}
The following sets are in bijection:
\begin{enumerate}
\item[(i)] integral affine Weierstrass models $y^2 = x^3 + Ax + B$ of elliptic curves with integral points $(x_0,y_0)$,
\item[(ii)] binary quartics $X^4 + 6 c X^2 Y^2 + 8 d X Y^3 + e Y^4$ with $c,d,e \in {\mathbb Z}$ and $e \equiv c^2 \pmod{4}$,
\item[(iii)] ${\rm SL}_2({\mathbb Z})$-equivalence classes of triples $(f,p,q)$, where $f(X,Y)$ is an integer-matrix binary quartic form with $4 \mid I'(f)$ and $4 \mid J'(f)$ and $p,q \in {\mathbb Z}$ with $f(p,q) = 1$.
\end{enumerate}
\end{corollary}
\section{Counting integral points on elliptic curves}
\subsection{Integral points and Selmer elements}
Given an elliptic curve $\mathcal{E}$ over ${\mathbb Q}$ with an integral affine Weierstrass model $E_{A,B}$ of the form \eqref{eq:Weierstrass}, we consider the sequence of maps
\begin{equation} \label{eq:EZtoSel2}
\Psi \colon E_{A,B}({\mathbb Z}) \hookrightarrow \mathcal{E}({\mathbb Q}) \to \mathcal{E}({\mathbb Q})/2\mathcal{E}({\mathbb Q}) \stackrel{\xi}{\hookrightarrow} {\rm Sel}_2(\mathcal{E})
\end{equation}
where $E_{A,B}({\mathbb Z})$ denotes the integral points on $E_{A,B}$ and ${\rm Sel}_2(\mathcal{E})$ is the $2$-Selmer group of $\mathcal{E}$.
It is well known that elements of ${\rm Sel}_2(\mathcal{E})$ may be represented as binary quartic forms $f(X,Y)$ over ${\mathbb Q}$ such that the Jacobian of the associated genus one curve $C(f): Z^2 = f(X,Y)$ is isomorphic to $\mathcal{E}$ and $C(f)$ is locally soluble. More precisely, elements of ${\rm Sel}_2(\mathcal{E})$ are in bijection with ${\rm PGL}_2({\mathbb Q})$-equivalence classes of such binary quartic forms (see, e.g., \cite{bsd, arulmanjul-bqcount, coregular}). The ${\rm PGL}_2({\mathbb Q})$-action on binary quartic forms is induced from the following twisted action of ${\rm GL}_2({\mathbb Q})$ on binary quartics: for $g \in {\rm GL}_2({\mathbb Q})$ and a binary quartic $f(X,Y)$, we have $g \cdot f(X,Y) = (\det g)^{-2} f((X,Y) \cdot g)$. The ring of ${\rm PGL}_2({\mathbb Q})$-invariants is still the polynomial ring generated by $I$ and $J$.
The map $\xi: \mathcal{E}({\mathbb Q})/2\mathcal{E}({\mathbb Q}) \hookrightarrow {\rm Sel}_2(\mathcal{E})$ sends a rational point $P \in \mathcal{E}({\mathbb Q})$ to the rational binary quartic form arising from the degree $2$ map $\mathcal{E} \to {\mathbb P}^1$ given by the divisor $O+P$ (as described in \S \ref{sec:Mordell}).
The composition $\Psi$ of the maps in \eqref{eq:EZtoSel2} is thus given by one direction of the bijection in Theorem \ref{thm:Mordell}, from an integral point $P = (x_0,y_0)\in E_{A,B}({\mathbb Z})$ to the ${\rm PGL}_2({\mathbb Q})$-equivalence class of the corresponding binary quartic form $f_P(X,Y) := X^4 - 6 x_0 X^2 Y^2 + 8 y_0 X Y^3 + (-4A-3x_0^2) Y^4$. Note that the genus one curve $C(f_P)$ associated to such a form (in fact, any monic binary quartic form) is automatically globally soluble; indeed, $f_P(1,0) = 1$ gives a rational solution. This is not surprising since, by construction, the image of $P$ in ${\rm Sel}_2(\mathcal{E})$ lies in the subset of globally soluble forms, namely the image of $\mathcal{E}({\mathbb Q})/2\mathcal{E}({\mathbb Q})$.
Writing $\mathcal{E}({\mathbb Q})\cong {\mathbb Z}^{\mathop{\mathrm{rk}}{\mathcal{E}({\mathbb Q})}}\oplus \mathcal{E}({\mathbb Q})_{\mathrm{tors}}$ and noting that $|\mathcal{E}({\mathbb Q})_{\mathrm{tors}}|\ll 1$ by Mazur's theorem \cite{mazur-eisenstein}, we see that $|\mathcal{E}({\mathbb Q})/2\mathcal{E}({\mathbb Q})|\ll 2^{\mathop{\mathrm{rk}}{\mathcal{E}({\mathbb Q})}}$ (in fact, $\leq 4 \cdot 2^{\mathop{\mathrm{rk}}{\mathcal{E}({\mathbb Q})}}$). Hence the image of $\xi$, and thus the image of the composition map $\Psi$, is of size at most
$$\ll 2^{\mathop{\mathrm{rk}}(\mathcal{E}({\mathbb Q}))}.$$
Therefore, to prove Theorem \ref{thm:the bound on the number of integral points}, it suffices to show that the size of each fibre of the map $\Psi$ is bounded as follows:
\begin{proposition}\label{prop:the fibre bound}
Let $f(X,Y) = X^4 + a_2 X^2 Y^2 + a_3 X Y^3 + a_4 Y^4 \in {\mathbb Z}[X,Y]$ be a flattened binary quartic form. The number of elements $\gamma\in {\rm PGL}_2({\mathbb Q})$ such that $\gamma\cdot f$ is flattened is
$$\ll \prod_{p^2 \mid \Delta(f)} \min \left\{4 \floor{\frac{v_p(\Delta(f))}{2}} + 1,\, 2 \cdot 10^7\right\}.$$
\end{proposition}
To prove Proposition \ref{prop:the fibre bound}, we first establish properties of any $\gamma \in {\rm PGL}_2({\mathbb Q})$ that sends a flattened binary quartic form $f$ to another flattened form.
We then show that each such $\gamma$ gives rise to a solution of a Thue equation, and invoke the fact that the number of such solutions is bounded. For example, in the simplest case when the discriminant $\Delta(f)$ is as squarefree as possible\footnote{If $f$ is a binary quartic form associated to an elliptic curve $E_{A,B}$, then $\Delta(f) = 2^{8} \Delta_{A,B} = - 2^{12} (4A^3 + 27 B^2)$, so here we mean $2^{-12} \Delta(f)$ is squarefree and not divisible by $2$.}, then the size of the fiber of $\Psi$ is bounded by $4 \floor{\frac{12}{2}} + 1 = 25$ times the number of solutions to $f(x,y) = 1$, which is uniformly bounded by $2 \cdot 10^7$ for all $f$ \cite{evertse}. (Once $\Delta(f) \gg 1$, Akhtari \cite{akhtari-quartic} gives a much better bound of $26$ for the number of solutions.)
\begin{lemma} \label{lem:gammaprops}
Let $f(X,Y) = X^4 + a_2 X^2 Y^2 + a_3 X Y^3 + a_4 Y^4 \in {\mathbb Z}[X,Y]$ be a flattened binary quartic form. For any $\gamma\in {\rm PGL}_2({\mathbb Q})$ such that $\gamma\cdot f$ is flattened, if we
write $\gamma = \left(\begin{smallmatrix} a & b\\ c& d\end{smallmatrix}\right)$ with $a,b,c,d\in {\mathbb Z}$ and $\gcd(a,b,c,d) = 1$, then we have
\begin{enumerate}[label={\normalfont (\roman*)}]
\item $\gcd(a,b) = 1$, and
\item $f(a,b) = (\det \gamma)^2$ divides the discriminant $\Delta(f)$ of $f$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\gamma\in {\rm PGL}_2({\mathbb Q})$ be such that $\gamma\cdot f$ is flattened, so $(\gamma\cdot f)(1,0) = f(a,b) (\det \gamma)^{-2} = 1$ and $\gamma \cdot f$ has zero subleading coefficient. We may represent any $\gamma \in {\rm PGL}_2({\mathbb Q})$ by $\left(\begin{smallmatrix} a & b\\ c& d\end{smallmatrix}\right)$ with $a,b,c,d\in {\mathbb Z}$ and $\gcd(a,b,c,d) = 1$.
Let $g := \gcd(a,b)$, and write $a = g\alpha$ and $b = g\beta$ with $\alpha, \beta\in {\mathbb Z}$, so $\gcd(\alpha,\beta) = 1$. Then there exist integers $\widetilde{\alpha}, \widetilde{\beta}$ such that $\alpha\widetilde{\alpha} - \beta\widetilde{\beta} = 1$. Let $\widetilde{\gamma} := \left(\begin{smallmatrix} \alpha & \beta\\ \widetilde{\beta}& \widetilde{\alpha}\end{smallmatrix}\right)\in {\rm SL}_2({\mathbb Z})$, and let $\eta := c\widetilde{\alpha} - d\widetilde{\beta}\in {\mathbb Z}$. Define
$$U := \gamma\widetilde{\gamma}^{-1} = \left(\begin{array}{cc} a & b\\ c& d\end{array}\right)\cdot \left(\begin{array}{cc} \widetilde{\alpha} & -\beta\\ -\widetilde{\beta}& \alpha\end{array}\right) = \left(\begin{array}{cc} g & 0\\ \eta& \frac{\det{\gamma}}{g}\end{array}\right),$$
implying that
$\gamma = U \widetilde{\gamma}.$
We now show that $g$ divides all the entries of $U$, namely $g^2 \mid \det{\gamma}$ and $g \mid \eta$.
Let $\widetilde{f} := \widetilde{\gamma}\cdot f\in {\mathbb Z}[X,Y]$, with no twisted action necessary since $\widetilde{\gamma}\in {\rm SL}_2({\mathbb Z})$. Write
$$\widetilde{f}(X,Y) =: \widetilde{a}_0 X^4 + \widetilde{a}_1 X^3 Y + \widetilde{a}_2 X^2 Y^2 + \widetilde{a}_3 X Y^3 + \widetilde{a}_4 Y^4\in {\mathbb Z}[X,Y].$$
Then $(\gamma\cdot f)(X,Y) = (\det \gamma)^{-2} (U\cdot \widetilde{f})(X,Y) = (\det \gamma)^{-2} \widetilde{f}\left(gX + \eta Y, \frac{\det{\gamma}}{g} Y\right)$. Expanding, we compute that the $X^4$-coefficient in $\gamma\cdot f$ is
$$(\gamma\cdot f)(1,0) = \frac{f(a,b)}{(\det{\gamma})^2} = \frac{g^4 f(\alpha,\beta)}{(\det{\gamma})^2} = \frac{g^4 \widetilde{a}_0}{(\det{\gamma})^2}.$$
Since it is also $1$ by hypothesis, we find that $\frac{(\det{\gamma})^2}{g^4} = \widetilde{a}_0\in {\mathbb Z}$. Thus $g^4$ divides $(\det{\gamma})^2$, so $g^2$ divides $\det \gamma$.
Now the $X^3 Y$-coefficient of $\gamma \cdot f$ is
$$\frac{4 g^3\eta\cdot \widetilde{a}_0 + g^2 (\det{\gamma})\cdot \widetilde{a}_1}{(\det{\gamma})^2} = 0.$$ Substituting for $\widetilde{a}_0$, we find that $\widetilde{a}_1 = -4\cdot \frac{(\det{\gamma})\cdot \eta}{g^3} \in {\mathbb Z}$.
Finally, the $X^2 Y^2$-coefficient of $\gamma \cdot f$ is
$$\frac{6 g^2 \eta^2\cdot \widetilde{a}_0 + 3 g \eta (\det{\gamma})\cdot \widetilde{a}_1 + (\det{\gamma})^2\cdot \widetilde{a}_2}{(\det{\gamma})^2} = - \frac{6\eta^2}{g^2} + \widetilde{a}_2$$
after substituting for $\widetilde{a}_0$ and $\widetilde{a}_1$.
Since $\widetilde{a}_2\in {\mathbb Z}$ and this coefficient is integral as well, we deduce that $g^2$ divides $6\eta^2$, so $g$ divides $\eta$.
Since $g$ divides all the entries of $U$, we see that $g$ divides all the entries of $U\cdot \widetilde{\gamma} = \gamma$, implying that $g$ divides $\gcd(a,b,c,d) = 1$ and thus $g=1$ as desired.
Substituting $g =1$ shows that
$\widetilde{a}_1 = -4 \eta \det \gamma,$
so $\det \gamma$ divides $\widetilde{a}_1$, and of course $(\det \gamma)^2$ divides (since it is in fact equal to) $\widetilde{a}_0.$
We thus find that $(\det\gamma)^2$ divides $\Delta(\widetilde{f}) = \Delta(f)$ (since every term of $\Delta(\widetilde{f})$ is a multiple of either $\widetilde{a}_0$ or $\widetilde{a}_1^2$).
\end{proof}
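The parenthetical claim that every term of $\Delta$ is a multiple of either the leading coefficient or the square of the $X^3Y$-coefficient can be spot-checked numerically: setting the leading coefficient to $0$, the discriminant must be divisible by the square of the $X^3Y$-coefficient. A small Python sketch (ours):

```python
import random

def disc(a, b, c, d, e):
    """Discriminant of the quartic via 27*Delta = 4*I^3 - J^2."""
    I = 12*a*e - 3*b*d + c*c
    J = 72*a*c*e - 27*a*d*d - 27*b*b*e + 9*b*c*d - 2*c**3
    num = 4*I**3 - J*J
    assert num % 27 == 0
    return num // 27

random.seed(1)
for _ in range(500):
    b, c, d, e = (random.randint(-30, 30) for _ in range(4))
    # with a = 0, every surviving term of Delta is a multiple of b^2
    if b != 0:
        assert disc(0, b, c, d, e) % (b*b) == 0
```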
\begin{proof}[Proof of Proposition \ref{prop:the fibre bound}]
Lemma \ref{lem:gammaprops} shows that for any $\gamma \in {\rm PGL}_2({\mathbb Q})$ (represented by $\left(\begin{smallmatrix} a & b\\ c & d\end{smallmatrix}\right)$ with $a,b,c,d\in {\mathbb Z}$ and $\gcd(a,b,c,d) = 1$) such that $\gamma \cdot f$ is flattened, we have that $\gcd(a,b) = 1$ and $f(a,b) = (\det{\gamma})^2$ is a square dividing $\Delta(f)$.
We now claim that the map
$$
\Phi \colon \{\gamma\in {\rm PGL}_2({\mathbb Q}) : \gamma\cdot f \textrm{ is flattened}\}
\to \{(a,b)\in {\mathbb Z}^2 : \gcd(a,b) = 1, f(a,b) = \square, f(a,b) \mid \Delta(f)\},
$$
taking $\gamma$ as above to $(a,b)$, is injective.
Indeed, if $\gamma,\gamma'\in {\rm PGL}_2({\mathbb Q})$ map to the same $(a,b)\in {\mathbb Z}^2$, write $\gamma = \left(\begin{smallmatrix} a & b\\ c& d\end{smallmatrix}\right)$ and $\gamma' = \left(\begin{smallmatrix} a & b\\ c'& d'\end{smallmatrix}\right),$ and note that
$$
\gamma'\gamma^{-1} = \left(\begin{array}{cc} 1 & 0\\ \frac{c'd - cd'}{\det{\gamma}}& 1\end{array}\right).
$$
Let $\lambda := \frac{c'd - cd'}{\det{\gamma}}\in {\mathbb Q}$. Since $$(\gamma'\cdot f)(X,Y) = ((\gamma'\gamma^{-1})\cdot (\gamma\cdot f))(X,Y) = (\gamma\cdot f)(X + \lambda Y, Y)$$ and both $\gamma\cdot f$ and $\gamma'\cdot f$ are flattened by hypothesis, it follows that $\lambda = 0$ and so $\gamma = \gamma'$, as desired.
Thus, for the map $\Phi$, the size of the domain is bounded by the size of the codomain, which is simply
\begin{equation} \label{eq:codomainsize}
\sum_{\delta^2 \mid \Delta(f)} \# \{(a,b)\in {\mathbb Z}^2 : \gcd(a,b) = 1, f(a,b) = \delta^2\}.
\end{equation}
We divide the set of primes $p^2 \mid \Delta(f)$ into two sets to obtain a hybrid bound.
Let $S$ be the set\footnote{Taking $S$ to be the empty set in this argument gives a weaker but simpler upper bound for \eqref{eq:codomainsize}, and thus for Proposition \ref{prop:the fibre bound}, of
$\displaystyle \prod_{p^2 \mid \Delta(f)} \left(4 \floor{\frac{v_p(\Delta(f))}{2}} + 1\right).$} of primes such that $2 \cdot 10^7 \leq 4 \floor{\frac{v_p(\Delta(f))}{2}} + 1$, and let
$$D := \prod_{\substack{p^2\mid \Delta(f) \\ p\not\in S}} p^{v_p(\Delta(f))}.$$
Given $\delta$ such that $\delta^2\mid \Delta(f)$, set $\nu := \gcd(\delta, D)$ and $\mu := \frac{\delta}{\nu}$.
The argument of Bombieri--Schmidt in \cite[Section VI]{bombierischmidt}, specifically the second-to-last paragraph, produces $\leq 4^{\omega(\nu)}$ many quartic forms $f_{\nu, i}$, depending only on $f$ and $\nu$, such that the number of relatively prime solutions to $f(a,b) = \delta^2 = \nu^2 \mu^2$ is bounded above by the sum of the numbers of relatively prime solutions of $f_{\nu, i}(a,b) = \mu^2$. Rewriting \eqref{eq:codomainsize} in terms of $\mu$ and $\nu$ gives
\begin{align}
\sum_{\delta^2 \mid \Delta(f)} &\# \{(a,b)\in {\mathbb Z}^2 : \gcd(a,b) = 1, f(a,b) = \delta^2\} \nonumber \\
&= \sum_{\nu^2\mid D}\sum_{\mu^2\mid \frac{\Delta(f)}{D}} \# \{(a,b)\in {\mathbb Z}^2 : \gcd(a,b) = 1, f(a,b) = \mu^2 \nu^2\}
\nonumber \\
&\leq \sum_{\nu^2\mid D}\sum_{i=1}^{4^{\omega(\nu)}} \sum_{\mu^2\mid \frac{\Delta(f)}{D}} \# \{(a,b)\in {\mathbb Z}^2 : \gcd(a,b) = 1, f_{\nu, i}(a,b) = \mu^2\}. \label{eq:expandsum}
\end{align}
To bound the number of solutions to $f_{\nu,i}(a,b) = \mu^2$, we use the following theorem of Evertse \cite[Theorem 1]{evertse}: for a number field $K$ with a finite set of places $T$ containing all infinite places and a homogeneous polynomial $g\in {\mathcal O}_K[x,y]$ of degree at least $3$, the condition $g(a,b)\in {\mathcal O}_{K,T}^\times$ has at most $(5 \cdot 10^6 \deg(g))^{\#T}$ many solutions, modulo the action of ${\mathcal O}_{K,T}^\times$. Applying this theorem here with $K = {\mathbb Q}$, $T = S \cup \{\infty\}$, and $g = f_{\nu, i}$, we see that the innermost sum of \eqref{eq:expandsum} is at most
$$
\# \{(a,b)\in {\mathbb Z}^2 : \gcd(a,b) = 1, f_{\nu, i}(a,b)\in {\mathbb Z}[S^{-1}]^\times = {\mathcal O}_{{\mathbb Q},S}^\times\} \ll (2 \cdot 10^7)^{\#S}.
$$
(Note that the condition $\gcd(a,b) = 1$ implies that only $(a,b)$ and $(-a,-b)$ are solutions in the same coset under the action of ${\mathcal O}_{K,T}^\times$.)
Hence the size \eqref{eq:codomainsize} of the codomain of $\Phi$ is
\begin{align*}
\sum_{\delta^2 \mid \Delta(f)} \# \{(a,b)\in {\mathbb Z}^2 : \gcd(a,b) = 1, f(a,b) = \delta^2\}
&\ll \sum_{\nu^2\mid D} \sum_{i=1}^{4^{\omega(\nu)}} (2 \cdot 10^7)^{\#S}
\\&\ll (2 \cdot 10^7)^{\#S} \sum_{\nu^2\mid D} 4^{\omega(\nu)}
\\&= (2 \cdot 10^7)^{\#S} \prod_{p^2\mid D} \left(4 \floor{\frac{v_p(\Delta(f))}{2}} + 1\right),
\end{align*}
completing the argument.
\end{proof}
\begin{remark} \label{rmk:fourtotwo}
When $2\leq v_p(\Delta(f)) < 4$, evidently $p\not\in S$, and either $p\nmid \nu$ (in which case there is no factor corresponding to $p$) or $v_p(\nu) = 2$. By simply enumerating cases of $f$ over ${\mathbb Q}_p$ one finds that the number of disks required for \cite[Lemma 7]{bombierischmidt} is in fact at most $3$, because at least two roots lie in the same residue disk modulo $p$. This translates into a factor of $3$ corresponding to $p$, rather than $4$, and thus gives the claim of Remark \ref{rmk:rightafterthemaintheorem} (after applying this improved bound in the following proof of Theorem \ref{thm:the bound on the number of integral points}).
\end{remark}
Having proved Proposition \ref{prop:the fibre bound}, Theorem \ref{thm:the bound on the number of integral points} follows easily.
\begin{proof}[Proof of Theorem \ref{thm:the bound on the number of integral points}.]
We showed that the map $\Psi \colon E_{A,B}({\mathbb Z})\to \mathcal{E}({\mathbb Q})/2\mathcal{E}({\mathbb Q})\subseteq {\rm Sel}_2(\mathcal{E})$, taking an integral point of $E_{A,B}$ to the ${\rm PGL}_2({\mathbb Q})$-equivalence class of its corresponding binary quartic form $f$ (by Theorem \ref{thm:Mordell}), has image of size $\ll 2^{\mathop{\mathrm{rk}}{\mathcal{E}({\mathbb Q})}}$. Recall that the binary quartic $f$ has invariants $I = -48A$ and $J = -1728B$, so $\Delta(f) =
2^{8} \Delta_{A,B}$. Thus, applying Proposition \ref{prop:the fibre bound}, we see that the size of a fibre of the map $\Psi$ is bounded by
$$\ll \prod_{p^2\mid \Delta_{A,B}} \min\left\{ 4 \floor{\frac{v_p(\Delta_{A,B})}{2}} + 1,\, 2 \cdot 10^7\right\}.$$
Combining the two estimates gives the theorem.
\end{proof}
\subsection{Bounding moments of the number of integral points on elliptic curves} \label{sec:mainthmpf}
Theorem \ref{thm:the bound on the moments} follows from ``averaging'' the bound in Theorem \ref{thm:the bound on the number of integral points} and analytic techniques; the additional crucial input is Bhargava--Shankar's result that the average size of the $5$-Selmer group of elliptic curves in $\mathcal{F}_{\mathrm{univ}}$, ordered by naive height, is bounded \cite{arulmanjul-5Sel}.
In fact, because the average size of the $5$-Selmer group is bounded in any ``large'' family \cite[Theorem 31]{arulmanjul-5Sel}, we may prove the same result for such families as well.
\begin{definition} \label{def:largefamily}
We say a subfamily ${\mathcal F} \subseteq \mathcal{F}_{\mathrm{univ}}$ of elliptic curves over ${\mathbb Q}$ is {\em defined by congruence conditions} if, for all primes $p$, there exists a closed subset $\Sigma_p$ of $\{(A,B) \in {\mathbb Z}_p^2 : \Delta_{A,B} \neq 0 \}$ whose boundary has measure $0$ such that $E_{A,B} \in {\mathcal F}$ for $(A,B) \in \Sigma_p$.
For such a family ${\mathcal F}$, let ${\rm Inv}({\mathcal F}) := \{(A,B) \in {\mathbb Z} \times {\mathbb Z}: E_{A,B} \in {\mathcal F} \}$ and let ${\rm Inv}_p({\mathcal F})$ be the $p$-adic closure of ${\rm Inv}({\mathcal F})$ in ${\mathbb Z}_p^2 \setminus \{\Delta_{A,B} = 0 \}$. The set ${\rm Inv}_\infty({\mathcal F})$ is defined to be $\{(A,B) \in {\mathbb R}^2 : \Delta_{A,B} \bowtie 0\}$, where $\bowtie$ is $>$, $<$, or $\neq$ depending on whether ${\mathcal F}$ contains only curves of positive discriminant, negative discriminant, or both, respectively. Then ${\mathcal F}$ is a {\em large family} if, for all but finitely many primes $p$, the set ${\rm Inv}_p({\mathcal F})$ contains all $(A,B) \in {\mathbb Z}_p^2$ such that $p^2 \nmid \Delta_{A,B}$.
\end{definition}
Examples of large families include $\mathcal{F}_{\mathrm{univ}}$ itself, the family $\mathcal{F}_{\mathrm{min}}$ of minimal Weierstrass curves, the family ${\mathcal F}_{\mathrm{ss}}$ of semistable elliptic curves, and any family defined by finitely many congruence conditions. We now prove the following stronger version of Theorem \ref{thm:the bound on the moments}:
\begin{theorem} \label{thm:largefamilymomentsbound}
For any large family ${\mathcal F}$ of elliptic curves and any $0 < s < \log_2 5 = 2.3219 \ldots$, we have
$$\mathrm{Avg}_{\mathcal F}( \left| E({\mathbb Z}) \right|^s ) \ll_s 1$$
where the average is taken over all elliptic curves $E$ in ${\mathcal F}$ ordered by naive height.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:largefamilymomentsbound}]
Let $\mathcal{F}^{\leq T} := \{ (A,B) \in {\rm Inv}({\mathcal F}) : \Delta_{A,B} \neq 0, H(A,B) \leq T\}$ represent the curves in ${\mathcal F}$ of naive height at most $T$. By simply applying the bound of Theorem \ref{thm:the bound on the number of integral points} and noting that $4e + 1 \leq (e + 1)^3$ for any positive integer $e$, for any $(A,B) \in \mathcal{F}^{\leq T}$ we have
$$
|E_{A,B}({\mathbb Z})|^s\ll (2^s)^{\mathop{\mathrm{rk}}{E_{A,B}({\mathbb Q})}}\cdot \prod_{p^2 \mid \Delta_{A,B}} \left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^{3s}.
$$
Summing over all $(A,B)$ in $\mathcal{F}^{\leq T}$ and then applying H\"older's inequality with dual exponent pair $(1 + \varepsilon, 1 + \varepsilon^{-1})$ yields
\begin{align}
&\sum_{(A,B)\in \mathcal{F}^{\leq T}} |E_{A,B}({\mathbb Z})|^s\ll \sum_{(A,B)\in \mathcal{F}^{\leq T}} (2^s)^{\mathop{\mathrm{rk}}{E_{A,B}({\mathbb Q})}}\cdot \prod_{p^2 \mid \Delta_{A,B}} \left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^{3s}
\nonumber\\
&\leq \left(\sum_{(A,B)\in \mathcal{F}^{\leq T}} \left(2^{(1+\varepsilon) s} \right)^{\mathop{\mathrm{rk}}{E_{A,B}({\mathbb Q})}}\right)^{\frac{1}{1+\varepsilon}}
\left(\sum_{(A,B)\in \mathcal{F}^{\leq T}} \prod_{p^2 \mid \Delta_{A,B}} \left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^{3\left(1 + \varepsilon^{-1}\right) s}\right)^{\frac{\varepsilon}{1+\varepsilon}}. \label{eq:afterholder}
\end{align}
\vspace{\baselineskip}
We bound the first term in \eqref{eq:afterholder} as follows.
Since $0 < s < \log_2 5$, we may choose $\varepsilon > 0$ such that $(1+\varepsilon)\cdot s < \log_2{5}$, or, equivalently, $0 < \varepsilon < \frac{\log_2{5}}{s} - 1$. Then $2^{(1+\varepsilon) s}\leq 5$, so
\begin{equation} \label{eq:5selinequality}
\left( 2^{(1+\varepsilon)\cdot s} \right)^{\mathop{\mathrm{rk}}{E_{A,B}({\mathbb Q})}}\leq 5^{\mathop{\mathrm{rk}}{E_{A,B}({\mathbb Q})}} \leq \left|E_{A,B}({\mathbb Q})/5E_{A,B}({\mathbb Q})\right| \leq \left|{\rm Sel}_5(E_{A,B})\right|.
\end{equation}
Bhargava and Shankar \cite[Theorem 31]{arulmanjul-5Sel} show that
$\mathrm{Avg}_{{\mathcal F}} |{\rm Sel}_5(E_{A,B})| = 6$ for any large family ${\mathcal F}$ ordered by naive height,
and combining this average with \eqref{eq:5selinequality} implies that
\begin{equation} \label{eq:boundfirstterm}
\sum_{(A,B)\in \mathcal{F}^{\leq T}} \left( 2^{(1+\varepsilon)\cdot s} \right)^{\mathop{\mathrm{rk}}{E_{A,B}({\mathbb Q})}}\leq (6 + o(1))\cdot \left| \mathcal{F}^{\leq T} \right|.
\end{equation}
In Lemma \ref{lem:boundsecondterm}, we will bound the second term in \eqref{eq:afterholder} by showing that for all $t\geq 0$,
\begin{equation} \label{eq:boundsumt}
\sum_{(A,B)\in \mathcal{F}^{\leq T}} \prod_{p^2 \mid \Delta_{A,B}} \left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^t\ll_t \left| \mathcal{F}^{\leq T} \right|.
\end{equation}
(By the Selberg--Delange method, we find that the implicit constant is $\ll O(1)^{16^t}$, but we need not be so precise.) The desired theorem follows from the two bounds \eqref{eq:boundfirstterm} and \eqref{eq:boundsumt}.
\end{proof}
\begin{lemma} \label{lem:boundsecondterm}
For any large family ${\mathcal F}$ and all $t \geq 0$, we have
$$\sum_{(A,B)\in \mathcal{F}^{\leq T}} \prod_{p^2 \mid \Delta_{A,B}} \left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^t\ll_t |\mathcal{F}^{\leq T}|.$$
\end{lemma}
\begin{proof}
Let $0 < \delta < 1$ be a parameter, to be taken small but bounded away from $0$ (in fact, $\delta = \frac{1}{100}$ suffices, though one may optimize the argument and take $\delta$ even larger).
We may use the standard trick of decomposing a product over all primes into products of small and large primes:
$$
\prod_{p^2\mid n} \left(\floor{\frac{v_p(n)}{2}} + 1\right) =
\left(\prod_{\substack{p^2\mid n \\ p < n^\delta}} \left(\floor{\frac{v_p(n)}{2}} + 1\right)\right)\cdot \left(\prod_{\substack{p^2\mid n \\ p\geq n^\delta}} \left(\floor{\frac{v_p(n)}{2}} + 1\right)\right).
$$
Now note that the number of $p\geq n^\delta$ for which $p^2 \mid n$ is trivially at most $\frac{1}{2\delta}$, and, for each such $p$, we have that $v_p(n)\leq \frac{1}{\delta}$ as well. It follows that
\begin{equation} \label{eq:largeprimebound}
\prod_{\substack{p^2 \mid n \\ p\geq n^\delta}} \left(\floor{\frac{v_p(n)}{2}} + 1\right)\leq \left(\frac{1}{2\delta}+1\right)^{\frac{1}{2\delta}} \leq \delta^{-\delta^{-1}} \ll 1.
\end{equation}
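This large-prime bound is easy to test numerically; the sketch below (ours, with a naive trial-division factorizer) checks it for $\delta = \frac{1}{4}$ over a range of $n$:

```python
def factorize(n):
    """Prime factorization of n >= 2 by trial division: {p: v_p(n)}."""
    f, p = {}, 2
    while p*p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def large_prime_factor(n, delta):
    """Product of floor(v_p/2) + 1 over primes p >= n^delta with p^2 | n."""
    prod = 1
    for p, v in factorize(n).items():
        if v >= 2 and p >= n**delta:
            prod *= v//2 + 1
    return prod

delta = 0.25
bound = (1/(2*delta) + 1) ** (1/(2*delta))   # = 3^2 = 9 for delta = 1/4
for n in range(2, 10000):
    assert large_prime_factor(n, delta) <= bound
```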
Using the large prime bound \eqref{eq:largeprimebound} and the fact that $\left|\Delta_{A,B}\right|\ll T$, the sum in \eqref{eq:boundsumt} may thus be bounded by
\begin{equation*}
\sum_{(A,B)\in \mathcal{F}^{\leq T}} \prod_{p^2 \mid \Delta_{A,B}} \left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^t\ll_t \sum_{(A,B)\in \mathcal{F}^{\leq T}} \prod_{\substack{p^2\mid \Delta_{A,B} \\ p\ll T^\delta}}
\left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^t.
\end{equation*}
We now bound the product over small primes, first for ${\mathcal F} = \mathcal{F}_{\mathrm{univ}}$.
Let $m\ll T^{\frac{1}{3}}$. Each fibre of the natural reduction map $\mathcal{F}_{\mathrm{univ}}^{\leq T}\to ({\mathbb Z}/m{\mathbb Z})^2$ is of size\footnote{This bound is unsurprising since we expect equidistribution.}
\begin{equation} \label{eq:boundreductionfiberFuniv}
\ll \left(\frac{T^{\frac{1}{3}}}{m} + 1\right)\cdot \left(\frac{T^{\frac{1}{2}}}{m} + 1\right) \ll \frac{\left|\mathcal{F}_{\mathrm{univ}}^{\leq T}\right|}{m^2}.
\end{equation}
Moreover, the number of $(\bar{A},\bar{B})\in ({\mathbb Z}/m{\mathbb Z})^2$ such that $\Delta_{\bar{A},\bar{B}} = -16(4\bar{A}^3 + 27\bar{B}^2)\equiv 0\pmod{m}$ is $$\ll m\cdot O(1)^{\omega(m)}.$$ Indeed, in ${\mathbb Z}/p^{v_p(m)}{\mathbb Z}$, by Hensel's lemma\footnote{For $p=2$ or $3$, either the valuation $v_p(m)$ is large enough (e.g., $\geq 10$), in which case Hensel's lemma applies, or else there are $O(1)$ many elements of ${\mathbb Z}/p^{v_p(m)}{\mathbb Z}$ anyway.} one has $\ll 1$ solutions for $B$ for any fixed $A$, and the Chinese remainder theorem then implies that, for fixed $A\in {\mathbb Z}/m{\mathbb Z}$, one has $\ll O(1)^{\omega(m)}$ solutions for $B\in {\mathbb Z}/m{\mathbb Z}$.
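In fact, for a prime modulus the count is exact: for $p > 3$, the number of pairs is $\sum_{A \bmod p} \#\{B : 27 B^2 \equiv -4A^3\} = (p-1) + \chi(-4 \cdot 27^{-1})\sum_{A \not\equiv 0} \chi(A) + 1 = p$, where $\chi$ is the quadratic character mod $p$ and we use $\chi(A^3) = \chi(A)$. A brute-force check (ours):

```python
def disc_zero_count(p):
    """Number of (A, B) in (Z/p)^2 with 4*A^3 + 27*B^2 = 0 mod p."""
    return sum(1 for A in range(p) for B in range(p)
               if (4*A**3 + 27*B**2) % p == 0)

# exactly p solutions for each prime p > 3, in line with << m * O(1)^omega(m)
for p in (5, 7, 11, 13, 101):
    assert disc_zero_count(p) == p
```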
Therefore, for $m\ll T^{\frac{1}{3}}$, the number of $(A,B)\in \mathcal{F}_{\mathrm{univ}}^{\leq T}$ with $m\mid \Delta_{A,B}$ is
\begin{equation} \label{eq:mdividesDiscFuniv}
\ll \frac{\left|\mathcal{F}_{\mathrm{univ}}^{\leq T}\right|}{m^2}\cdot (m\cdot O(1)^{\omega(m)}) = \left|\mathcal{F}_{\mathrm{univ}}^{\leq T}\right| \frac{O(1)^{\omega(m)}}{m}.
\end{equation}
Similarly, for any large ${\mathcal F}$, we obtain the same bounds. Indeed, from \cite[Proposition 3.16]{arulmanjul-bqcount}, for any prime $p$, the number of $(A,B) \in \mathcal{F}_{\mathrm{univ}}^{\leq T}$ with $p^2$ dividing $\Delta_{A,B}$ is $O(T^{\frac{5}{6}}p^{-\frac{3}{2}})$. Then \cite[Theorem 3.17]{arulmanjul-bqcount} implies that the number of elliptic curves with height $\leq T$ in any large family ${\mathcal F}$ is the product of local densities with bounded error:
$$
\int_{\substack{(A,B) \in {\rm Inv}_\infty({\mathcal F}) \\ H(A,B) < T}} dA\,dB = \prod_p\int_{(A,B) \in {\rm Inv}_p({\mathcal F})} dA\,dB + o(T^{\frac{5}{6}}).
(Here the left-hand side is, up to the stated error, the count $\left| \mathcal{F}^{\leq T} \right|$, rescaled by the product of the $p$-adic densities of ${\rm Inv}_p({\mathcal F})$.)
$$
Since ${\mathcal F}$ is defined by congruence conditions and contains all curves with discriminant not a multiple of $p^2$ for almost all $p$, we have that $\lim_{T\to\infty} \frac{ \left| \mathcal{F}^{\leq T} \right|}{\left| \mathcal{F}_{\mathrm{univ}}^{\leq T} \right|} \geq c_{\mathcal F} $ for some constant $0 < c_{\mathcal F} \leq 1$ depending only on ${\mathcal F}$. In other words, any large family makes up a positive proportion of $\mathcal{F}_{\mathrm{univ}}$.
Thus, the fibers of the reduction map $\mathcal{F}^{\leq T} \to ({\mathbb Z}/m{\mathbb Z})^2$ are also of size $\ll \left| \mathcal{F}^{\leq T} \right| m^{-2}$ (analogously to \eqref{eq:boundreductionfiberFuniv}). And just as in \eqref{eq:mdividesDiscFuniv}, the number of $(A,B) \in \mathcal{F}^{\leq T}$ with $m \mid \Delta_{A,B}$ is therefore
\begin{equation} \label{eq:mdividesDisc}
\ll \frac{\left|\mathcal{F}^{\leq T}\right|}{m^2}\cdot (m\cdot O(1)^{\omega(m)}) = \left|\mathcal{F}^{\leq T} \right| \frac{O(1)^{\omega(m)}}{m}.
\end{equation}
We will apply \eqref{eq:mdividesDisc} to the case of $m = d^2$ where the prime factors of $d$ satisfy $p\ll T^\delta$, i.e., $d$ is $T^\delta$-smooth. For $(A,B) \in \mathcal{F}^{\leq T}$, let
$$
n_{A,B} := \prod_{\substack{p^2\mid \Delta_{A,B} \\ p\ll T^\delta}} p^{\floor{\frac{v_p(\Delta_{A,B})}{2}}}.
$$
We claim there always exists a divisor $d_{A,B}$ of $n_{A,B}$ (and hence $d_{A,B}^2 \mid \Delta_{A,B}$)
such that
\begin{align}
\min(n_{A,B}, T^{\frac{1}{6}-\delta}) &\ll d_{A,B}\ll T^{\frac{1}{6}} \label{eq:nAB} \\
\text{ and } \quad \qquad \qquad \tau(n_{A,B}) &\ll \tau(d_{A,B})^4, \label{eq:taund}
\end{align}
where $\tau$ denotes the standard number-of-divisors function.
To see this, we first note that, for any $m\in {\mathbb Z}^+$, $x \geq 1$, and $1 \leq y\leq m$ such that $p \leq x$ for all primes $p$ dividing $m$, there is always a divisor of $m$ in the interval $[y, xy]$. Indeed, either $m$ already lies in this interval, or we may remove one prime factor of $m$ at a time, shrinking the divisor by a factor of at most $x$ at each step. Since the resulting divisors eventually drop to $1\leq y$, and no single step can shrink a divisor from above $xy$ to below $y$, some divisor must land in the interval. We will call this the `greedy argument'.
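The greedy argument is effectively an algorithm. The following short script is an illustrative sketch (the function name is ours, and the asymptotic notation of the text is replaced by exact inequalities): given an $x$-smooth $m$ and $1 \leq y \leq m$, it strips one prime factor at a time until the running divisor lands in $[y, xy]$.

```python
def greedy_divisor(m, x, y):
    """Return a divisor of m lying in [y, x*y], assuming 1 <= y <= m
    and that every prime factor of m is at most x (m is x-smooth)."""
    d = m
    while d > x * y:
        # strip one prime factor; this shrinks d by a factor of at most x,
        # so d cannot jump from above x*y to below y
        p = next(q for q in range(2, x + 1) if d % q == 0)
        d //= p
    return d

d = greedy_divisor(2 ** 10 * 3 ** 5, x=7, y=50)
assert (2 ** 10 * 3 ** 5) % d == 0 and 50 <= d <= 7 * 50
```

If the loop never runs, then $m \leq xy$ and $m \geq y$ by assumption; otherwise the last division starts from a value above $xy$ and lands at or above $y$.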
Now we apply this to $n_{A,B}$ and its divisors. First, if $n_{A,B}$ already satisfies the inequality \eqref{eq:nAB} we are done and may take $d_{A,B} := n_{A,B}$. Otherwise, there is at least one divisor $d\mid n_{A,B}$ with $T^{\frac{1}{6} - \delta}\ll d\ll T^{\frac{1}{6}}$ by using the greedy argument with $m = n_{A,B}$, $x\asymp T^\delta$, and $y\asymp T^{\frac{1}{6} - \delta}$. Thus we may take $d_{A,B}$ to be a divisor $d$ in that interval maximizing $\tau(d)$ among all divisors in the interval:
$$d_{A,B} := \mathrm{argmax} \left\{ \tau(d) : d\mid n_{A,B}, T^{\frac{1}{6} - \delta}\ll d\ll T^{\frac{1}{6}} \right\}.$$
In other words, we have $d_{A,B}$ divides $n_{A,B}$ and $T^{\frac{1}{6} - \delta}\ll d_{A,B}\ll T^{\frac{1}{6}}$, and if $d\mid n_{A,B}$ and $T^{\frac{1}{6} - \delta}\ll d\ll T^{\frac{1}{6}}$, then $\tau(d_{A,B})\geq \tau(d)$.
For any $d\mid n_{A,B}$ with $d\ll T^{\frac{1}{6}}$, there exists an integer $d'$ with $d\mid d'\mid n_{A,B}$ and $T^{\frac{1}{6} - \delta}\ll d'\ll T^{\frac{1}{6}}$ (here we may assume $n_{A,B}\gg T^{\frac{1}{6}-\delta}$, as otherwise $d_{A,B} = n_{A,B}$ and the claim is trivial). Indeed, if $d \gg T^{\frac{1}{6} - \delta}$, we take $d' = d$; otherwise, if $d\ll T^{\frac{1}{6} - \delta}$, apply the greedy argument with $m = \frac{n_{A,B}}{d}$, $x\asymp T^\delta$, $y\asymp \frac{T^{\frac{1}{6} - \delta}}{d}$ to find a divisor $e\mid \frac{n_{A,B}}{d}$ with $\frac{T^{\frac{1}{6} - \delta}}{d}\ll e\ll \frac{T^{\frac{1}{6}}}{d}$, and take $d' := de$. Note that $\tau(d)\leq \tau(d')\leq \tau(d_{A,B})$, the latter inequality by the maximality defining $d_{A,B}$.
Since $n_{A,B}^2\mid \Delta_{A,B}$ by definition and $|\Delta_{A,B}|\ll T$, we have that $n_{A,B}\ll T^{\frac{1}{2}}$. Moreover, applying the greedy argument at most two times, we may write $n_{A,B} = d_{A,B} d_1 d_2 d_3$ with $d_i\ll T^{\frac{1}{6}}$ for $i = 1, 2, 3$. Then $\tau(d_i) \leq \tau(d_{A,B})$, so $\tau(n_{A,B}) = \tau(d_{A,B} d_1 d_2 d_3)\leq \tau(d_{A,B}) \tau(d_1) \tau(d_2) \tau(d_3) \leq \tau(d_{A,B})^4$, yielding \eqref{eq:taund} and the claim.
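The decomposition just used rests on two elementary facts: a smooth number splits into boundedly many factors below a given bound, and $\tau$ is submultiplicative, $\tau(ab)\leq\tau(a)\tau(b)$. A numerical illustration (a sketch with our own function names, not part of the proof):

```python
def tau(n):
    """Number of divisors of n (naive count)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def split_below(n, bound):
    """Split n into a product of factors, each <= bound, by greedily
    packing prime factors; requires every prime factor of n <= bound."""
    parts, cur, m, p = [], 1, n, 2
    while m > 1:
        while m % p == 0:
            if cur * p > bound:   # current chunk is full: start a new one
                parts.append(cur)
                cur = 1
            cur *= p
            m //= p
        p += 1
    parts.append(cur)
    return parts

n = 2 ** 10 * 3 ** 6
parts = split_below(n, 100)
prod = 1
for q in parts:
    prod *= q
assert prod == n and all(q <= 100 for q in parts)

# submultiplicativity of tau across the parts
t = 1
for q in parts:
    t *= tau(q)
assert tau(n) <= t
```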
Choosing one such $d_{A,B}$ for each $(A,B)\in \mathcal{F}^{\leq T}$, we conclude that
\begin{align*}
\sum_{(A,B)\in \mathcal{F}^{\leq T}} \prod_{\substack{p^2\mid \Delta_{A,B} \\ p\ll T^\delta}} \left(\floor{\frac{v_p(\Delta_{A,B})}{2}} + 1\right)^t
&= \sum_{(A,B)\in \mathcal{F}^{\leq T}} \tau(n_{A,B})^t
\\&\ll_t \sum_{\substack{d\ll T^{\frac{1}{6}} \\ d\text{ $T^\delta$-smooth}}} \tau(d)^t \sum_{\substack{(A,B)\in \mathcal{F}^{\leq T} \\ d^2\mid \Delta_{A,B}}} 1
\\&\ll \left|\mathcal{F}^{\leq T}\right|\cdot \sum_{\substack{d\ll T^{\frac{1}{6}} \\ d\text{ $T^\delta$-smooth}}} \frac{O(1)^{\omega(d)} \tau(d)^t}{d^2} & \text{ by \eqref{eq:mdividesDisc}}
\\&\ll_t \left|\mathcal{F}^{\leq T}\right|
\end{align*}
as desired.
\end{proof}
\subsection{Other families} \label{sec:otherfamilies}
The arguments in Section \ref{sec:mainthmpf} may be modified appropriately to bound the averages or moments of the number of integral points on elliptic curves over ${\mathbb Q}$ in some other families for which we have finite upper bounds on the average $d$-Selmer group size for some $d > 2$. These families include the following:
\begin{align*}
{\mathcal F}_1 &:= \{y^2 + d_3 y = x^3 + d_2 x^2 + d_4 x : d_2, d_3, d_4 \in {\mathbb Z}, \, \Delta \neq 0 \} \\
{\mathcal F}_1(2) &:= \{y^2 = x^3 + d_2 x^2 + d_4 x: d_2, d_4 \in {\mathbb Z}, \, \Delta \neq 0 \}
\end{align*}
The family ${\mathcal F}_1$ (resp., ${\mathcal F}_1(2)$) has a marked point (resp., marked $2$-torsion point) at $(0,0)$, and the height $H(E)$ of a curve $E$ in either family is again a measure of the size of the coefficients, defined as $\max |d_i|^{\frac{12}{i}}$. By \cite{cofreecounting}, the average size of the $3$-Selmer group in each of these families, ordered by height, is bounded. We claim that the average number of integral points on the curves in these families is bounded, and in fact, a stronger statement holds:
\begin{theorem}
For ${\mathcal F} = {\mathcal F}_1$ or ${\mathcal F}_1(2)$ and any $0 < s < \log_2 3 = 1.5850\ldots$, we have
$$\mathrm{Avg}_{\mathcal F}( \left| E({\mathbb Z}) \right|^s ) \ll_s 1$$
where the average is taken over all elliptic curves $E$ in ${\mathcal F}$ ordered by height.
\end{theorem}
\begin{proof}
The proof follows the same outline as that of Theorem \ref{thm:largefamilymomentsbound}. Let $\mathcal{F}^{\leq T} := \{ E \in {\mathcal F} : \Delta_{E} \neq 0, H(E) \leq T\}$ denote the set of curves in ${\mathcal F}$ of height at most $T$. The bound of Theorem \ref{thm:the bound on the number of integral points} and H\"older's inequality give an inequality analogous to \eqref{eq:afterholder}:
\begin{equation}
\sum_{E \in \mathcal{F}^{\leq T}} |E({\mathbb Z})|^s\ll \left(\sum_{E \in \mathcal{F}^{\leq T}} (2^{(1+\varepsilon)\cdot s})^{\mathop{\mathrm{rk}}{E({\mathbb Q})}}\right)^{\frac{1}{1+\varepsilon}}
\left(\sum_{E\in \mathcal{F}^{\leq T}} \prod_{p^2 \mid \Delta_E} \left(\floor{\frac{v_p(\Delta_{E})}{2}} + 1\right)^{3\left(1 + \varepsilon^{-1}\right) s}\right)^{\frac{\varepsilon}{1+\varepsilon}} \label{eq:afterholder2}
\end{equation}
The first term is bounded as before, by choosing $0 < \varepsilon < \frac{\log_2 3}{s} - 1$ so that
$$
\left(2^{ (1+\varepsilon)\cdot s} \right)^{\mathop{\mathrm{rk}}{E({\mathbb Q})}}\leq 3^{\mathop{\mathrm{rk}}{E({\mathbb Q})}} \leq \left|E({\mathbb Q})/3E({\mathbb Q})\right| \leq \left|{\rm Sel}_3(E)\right|.
$$
The bounds on the average $3$-Selmer size from \cite{cofreecounting} imply that
$$
\sum_{E\in \mathcal{F}^{\leq T}} \left(2^{(1+\varepsilon)\cdot s} \right)^{\mathop{\mathrm{rk}}{E({\mathbb Q})}} \ll \left| \mathcal{F}^{\leq T} \right|.
$$
To bound the second term in \eqref{eq:afterholder2}, we imitate Lemma \ref{lem:boundsecondterm}. We claim that for all $t\geq 0$,
$$
\sum_{E \in \mathcal{F}^{\leq T}} \prod_{p^2 \mid \Delta_{E}} \left(\floor{\frac{v_p(\Delta_{E})}{2}} + 1\right)^t\ll_t \left| \mathcal{F}^{\leq T} \right|.
$$
Almost all of the proof of Lemma \ref{lem:boundsecondterm} still works, including the large prime bound; we need only show that the number of $E \in \mathcal{F}^{\leq T}$ with $m \mid \Delta_E$ is
\begin{equation} \label{eq:solutionsmodm}
\ll \left|\mathcal{F}^{\leq T} \right| O(1)^{\omega(m)}m^{-1}
\end{equation}
(as in \eqref{eq:mdividesDisc}).
For ${\mathcal F} = {\mathcal F}_1$, each fiber of the natural reduction map $\mathcal{F}^{\leq T} \to ({\mathbb Z}/m{\mathbb Z})^3$ sending $E \in {\mathcal F}_1$ to $(d_2, d_3, d_4)$ modulo $m$ is now of size $\ll \left|\mathcal{F}^{\leq T}\right|\cdot m^{-3}$ (since there are now $3$ parameters instead of $2$ for each curve in ${\mathcal F}_1$). But Hensel's lemma and the Chinese remainder theorem, in this case, show that there are $\ll m^2\cdot O(1)^{\omega(m)}$ solutions $(d_2, d_3, d_4)$ modulo $m$ to the discriminant vanishing modulo $m$. We thus still obtain the bound \eqref{eq:solutionsmodm}. The argument for ${\mathcal F}_1(2)$ is almost identical.
\end{proof}
\section*{Acknowledgement}
This work was carried out under the supervision of Tapani Hyttinen. I want to express my gratitude to him for introducing me to the topic, and for his valuable advice and support during this work. This research was supported by the Doctoral Programme in Mathematics and Statistics (Domast).
\section{Introduction}
One of the main motivations behind the study of generalized descriptive set theory is its connections with model theory. The complexity of a countable first-order theory can be measured using Borel reducibility in generalized Baire spaces: we say that $T'$ is more complex than $T$ if the isomorphism relation among models of $T$ with universe $\kappa$ ($\cong_T$) is Borel reducible to the isomorphism relation among models of $T'$ with universe $\kappa$.
The classification of theories in Shelah's stability theory gives another notion of complexity. S. Friedman, Hyttinen, Kulikov and others have studied the connection between these two notions of complexity. The results reviewed in this introduction require further assumptions, and the reader is referred to the original papers for the exact statements.\\ \\
In \cite{FHK13} it was shown that the following is consistent: if $T$ is classifiable and $T'$ is not, then $\cong_{T'}$ is not Borel reducible to $\cong_T$. In \cite{HM} it was shown, under heavy assumptions on $\kappa$, that if $T$ is classifiable and $T'$ is stable unsuperstable with OCP, then $\cong_T$ is continuously reducible to $\cong_{T'}$; if in addition $V=L$, then $\cong_{T'}$ is $\Sigma_1^1$-complete. In \cite{LS} Laskowski and Shelah studied the $\lambda$-Borel completeness of the relation $(Mod_\lambda(T),\equiv_{\infty,\aleph_0})$ when $T$ is $\omega$-stable with \textit{eni}-DOP or \textit{eni}-deep (see below).
\begin{definition}\label{D.1.1}
For any relational language $L$ with size at most $\lambda$, let $L^{\pm}=L\cup\{\neg R\mid R\in L\}$, and let $S_L^\lambda$ denote the set of $L$-structures $M$ with universe $\lambda$. Let $L(\lambda)=\{R(\bar{\alpha})\mid R\in L^{\pm},\bar{\alpha}\in \lambda^n, n=\textit{arity}(R)\}$ and endow $S_L^\lambda$ with the topology generated by the subbasis $$\mathcal{B}=\{U_{R(\bar{\alpha})}\mid R(\bar{\alpha})\in L(\lambda)\}$$ where $U_{R(\bar{\alpha})}=\{M\in S_L^\lambda\mid M\models R(\bar{\alpha})\}$.
\end{definition}
\begin{definition}\label{D.1.2}
Given a language $L$ of size at most $\lambda$, a set $K\subseteq S_L^\lambda$ is $\lambda$-\textit{Borel} if there is a $\lambda$-Boolean combination $\psi$ of $L(\lambda)$-sentences (i.e., a propositional $L_{\lambda^+,\aleph_0}$-sentence of $L(\lambda)$) such that $$K=\{M\in S_L^\lambda\mid M\models \psi\}.$$
\end{definition}
Given two relational languages $L_1$ and $L_2$ of size at most $\lambda$, a function $f:S_{L_1}^\lambda\rightarrow S_{L_2}^\lambda$ is $\lambda$-\textit{Borel} if the inverse image of every open set is $\lambda$-Borel.
\begin{definition}\label{D.1.3}
Suppose that $L_1$ and $L_2$ are two relational languages of size at most $\lambda$, and for $l=1,2$, $K_l$ is a $\lambda$-Borel subset of $S_{L_l}^\lambda$ that is invariant under $\equiv_{\infty,\aleph_0}$. We say that $(K_1,\equiv_{\infty,\aleph_0})$ is $\lambda$-\textit{Borel reducible} to $(K_2,\equiv_{\infty,\aleph_0})$, written $$(K_1,\equiv_{\infty,\aleph_0})\leq_\lambda^B (K_2,\equiv_{\infty,\aleph_0})$$ if there is a $\lambda$-Borel function $f:S_{L_1}^\lambda\rightarrow S_{L_2}^\lambda$ such that $f(K_1)\subseteq K_2$, and for all $M,N\in K_1$ it holds that $$M\equiv_{\infty,\aleph_0} N \ \text{ if and only if } \ f(M)\equiv_{\infty,\aleph_0} f(N)$$
\end{definition}
\begin{definition}\label{D.1.4}
$K$ is $\lambda$-\textit{Borel complete} for $\equiv_{\infty,\aleph_0}$ if $(K,\equiv_{\infty,\aleph_0})$ is a maximum with respect to $\leq_\lambda^B$. We call a theory $T$ $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$ if \textit{Mod}$_\lambda(T)$, the class of models of $T$ with universe $\lambda$, is $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$.
\end{definition}
Laskowski and Shelah proved the following result (\cite{LS}, Corollaries 4.13 and 6.10).
\begin{lemma}\label{L.1.5}
If $T$ is $\omega$-stable with \textit{eni}-DOP or \textit{eni}-deep, then $T$ is $\lambda$-Borel complete for $\equiv_{\infty,\aleph_0}$.
\end{lemma}
To understand this result in the context of generalized descriptive set theory, we first have to introduce some notions. Here and throughout the paper we assume that $\kappa$ is an uncountable
cardinal that satisfies $\kappa^{<\kappa}=\kappa$, $\mathcal{M}$ will denote the monster model, and for every finite tuple $a$, we will write $a\in A$ for $a\in A^{length(a)}$, unless stated otherwise.
The generalized
Baire space is the set $\kappa^\kappa$ with the bounded topology. For
every $\zeta\in \kappa^{<\kappa}$, the set
$$[\zeta]=\{\eta\in \kappa^\kappa \mid \zeta\subset \eta\}$$
is a basic open set. The open sets are of the form $\bigcup X$ where
$X$ is a collection of basic open sets. The collection of
Borel subsets of $\kappa^\kappa$ is the smallest set which
contains the basic open sets and is closed under unions and
intersections, both of length $\kappa$. A Borel set is any
element of this collection.\\
A function $f\colon \kappa^\kappa\rightarrow \kappa^\kappa$ is \emph{Borel},
if for every open set $A\subseteq \kappa^\kappa$ the inverse image
$f^{-1}[A]$ is a Borel subset of $\kappa^\kappa$. Let $E_1$ and $E_2$ be
equivalence relations on $\kappa^\kappa$. We say that $E_1$ is
\emph{Borel reducible} to $E_2$, if there is a Borel function $f\colon
\kappa^\kappa\rightarrow \kappa^\kappa$ that satisfies $(x,y)\in E_1\Leftrightarrow
(f(x),f(y))\in E_2$. We call $f$ a \emph{reduction} of $E_1$ to
$E_2$. This is denoted by $E_1\le_B E_2$ and if $f$ is continuous,
then we say that $E_1$ is \emph{continuously reducible} to $E_2$ and
this is denoted by $E_1\le_c E_2$.\\
Let $\mathcal{L}$ be a given relational vocabulary of size $\kappa$, $\mathcal{L}=\{R_{(n,m)}\mid n\in\omega\backslash\{0\},\, m\in \kappa\backslash\{0\}\}$, where $R_{(n,m)}$ is an $n$-ary relation. Fix a bijection $g:\omega\backslash\{0\}\times \kappa\backslash\{0\}\rightarrow\kappa$ such that $g\restriction \omega\backslash\{0\}\times \omega\backslash\{0\}$ is a bijection between $\omega\backslash\{0\}\times \omega\backslash\{0\}$ and $\omega$, define $P_{g(n,m)}:=R_{(n,m)}$, and rewrite $\mathcal{L}=\{P_n\mid n<\kappa\}$. Denote $g^{-1}(\alpha)$ by $(g^{-1}_1(\alpha),g^{-1}_2(\alpha))$. When we describe a complete theory $T$ in a vocabulary $L\subseteq \mathcal{L}$, we think of it as a complete $\mathcal{L}$-theory extending $T\cup \{\forall \bar{x}\neg
P_n(\bar{x})\mid P_n\in \mathcal{L}\backslash L\}$. We can code $\mathcal{L}$-structures with domain $\kappa$ as follows.
\begin{definition}\label{D.1.6}
Fix a bijection $\pi\colon \kappa^{<\omega}\to \kappa$. For every
$\eta\in \kappa^\kappa$ define the $\mathcal{L}$-structure
$\mathcal{A}_\eta$ with domain $\kappa$ as follows:
For every relation $P_m$, every tuple
$(a_1,a_2,\ldots , a_n)$ in $\kappa^n$ satisfies
$$(a_1,a_2,\ldots , a_n)\in P_m^{\mathcal{A}_\eta}\Longleftrightarrow n=g_1^{-1}(m)\textit{ and }\eta(\pi(m,a_1,a_2,\ldots,a_n))\ge 1.$$
\end{definition}
Notice that for every $\mathcal{L}$-structure $\mathcal{A}$ there exists
$\eta\in \kappa^\kappa$ with $\mathcal{A}=\mathcal{A}_\eta$; this way of coding structures can also be used to code structures in a countable language.
Since for all $\beta<\kappa$, the sets $\{\eta\in \kappa^\kappa\mid \eta(\beta)=0\}$ and $\{\eta\in \kappa^\kappa\mid \eta(\beta)>0\}$ are Borel, for all $R\in \mathcal{L}^\pm$ and $\bar{\alpha}\in \kappa^{\textit{arity}(R)}$ the set $\{\eta\in \kappa^\kappa\mid \mathcal{A}_\eta\models R(\bar{\alpha})\}$ is Borel. By the definitions of $\kappa$-Borel and of Borel, we conclude: if $K$ is a $\kappa$-Borel subset of $S^\kappa_\mathcal{L}$, then the set $\{\eta\in \kappa^\kappa\mid M=\mathcal{A}_\eta, M\in K\}$ is Borel. On the other hand, by the definition of Borel, for every basic open set $[\zeta]$ there is an $\mathcal{L}_{\kappa,\aleph_0}$-sentence $\varphi$ of $\mathcal{L}(\kappa)$ such that $[\zeta]=\{\eta\in \kappa^\kappa\mid\mathcal{A}_\eta\models\varphi\}$. Therefore, if $K\subseteq S^\kappa_\mathcal{L}$ is such that $\{\eta\in \kappa^\kappa\mid M=\mathcal{A}_\eta, M\in K\}$ is Borel, then there is an $\mathcal{L}_{\kappa^+,\aleph_0}$-sentence $\psi$ of $\mathcal{L}(\kappa)$ such that $\{\eta\in \kappa^\kappa\mid M=\mathcal{A}_\eta, M\in K\}=\{\eta\in \kappa^\kappa\mid\mathcal{A}_\eta\models\psi\}$. We conclude that $K\subseteq S^\kappa_\mathcal{L}$ is $\kappa$-Borel if and only if $\{\eta\in \kappa^\kappa\mid M=\mathcal{A}_\eta, M\in K\}$ is Borel.
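A finite analogue may make this coding concrete: with domain $n$ in place of $\kappa$ and a single binary relation, a structure is coded by the values of $\eta$ at pairs. This is an illustrative sketch only; the pairing $n\cdot a+b$ stands in for $\pi$, and the function names are ours.

```python
def encode(n, R):
    """Code a binary relation R on {0,...,n-1} as a sequence eta:
    eta(pi(a, b)) >= 1 iff R(a, b), with pi(a, b) = n*a + b."""
    return [1 if (a, b) in R else 0 for a in range(n) for b in range(n)]

def decode(n, eta):
    """Recover the structure A_eta from its code eta."""
    return {(a, b) for a in range(n) for b in range(n) if eta[n * a + b] >= 1}

R = {(0, 1), (2, 0)}
# every structure arises as A_eta for some code eta
assert decode(3, encode(3, R)) == R
```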
Let us define the equivalence relation $\equiv_{\infty,\aleph_0}^K\subset \kappa^\kappa\times\kappa^\kappa$, for every $\kappa$-Borel subset $K$ of $S^\kappa_\mathcal{L}$ invariant under $\equiv_{\infty,\aleph_0}$, by:
$(\eta,\xi)\in\ \equiv_{\infty,\aleph_0}^K$ if and only if
\begin{itemize}
\item $\mathcal{A}_\eta,\mathcal{A}_\xi\in K$ and $\mathcal{A}_\eta\ \equiv_{\infty,\aleph_0}\ \mathcal{A}_\xi$, or
\item $\mathcal{A}_\eta,\mathcal{A}_\xi\notin K$.
\end{itemize}
If $K=Mod_\kappa(T)$, then we denote the equivalence relation $\equiv_{\infty,\aleph_0}^K$ by $\equiv_{\infty,\aleph_0}^T$. From the previous observation, we can restate Lemma \ref{L.1.5} as follows:\\
\textit{If $T$ is $\omega$-stable with eni-DOP or eni-deep, then for every $\kappa$-Borel subset $K$ of $S^\kappa_\mathcal{L}$ invariant under $\equiv_{\infty,\aleph_0}$ it holds that} $$\equiv_{\infty,\aleph_0}^K\ \leq_B\ \equiv_{\infty,\aleph_0}^T.$$
Let us use the isomorphism relation to make a last observation on the relations $\equiv_{\infty,\aleph_0}^K$.
\begin{definition}[The isomorphism relation]\label{D.1.7}
Assume $T$ is a complete first order
theory in a countable vocabulary, $\mathcal{L}$. We define $\cong^\kappa_T$ as the
relation $$\{(\eta,\xi)\in \kappa^\kappa\times
\kappa^\kappa\mid (\mathcal{A}_\eta\models T, \mathcal{A}_\xi\models T,
\mathcal{A}_\eta\cong \mathcal{A}_\xi)\text{ or }
(\mathcal{A}_\eta\not\models T, \mathcal{A}_\xi\not\models T)\}.$$
\end{definition}
We will omit the superscript ``$\kappa$'' in $\cong^\kappa_T$ when it
is clear from the context. For every complete first order theory $T$ in a countable
vocabulary there is an isomorphism relation associated with $T$,
$\cong^\kappa_T$.
Given a countable vocabulary $\mathcal{L}$, define $L$ by $L=\mathcal{L}\cup\{P\}\cup\{R_\beta\mid \beta<\kappa\}$, where $P$ is a unary relation and $R_\beta$ is a binary relation for each $\beta<\kappa$. Let $T$ be a complete first order theory in $\mathcal{L}$; for every $\mathcal{A}\in Mod_\kappa(T)$ construct an $L$-structure $\bar{\mathcal{A}}$ such that:
\begin{itemize}
\item $dom(\bar{\mathcal{A}})=\kappa$,
\item $\bar{\mathcal{A}}\models P(\alpha)$ if and only if there is $\beta<\kappa$ such that $\alpha=2\beta$,
\item $\bar{\mathcal{A}}\restriction \{2\beta\mid \beta<\kappa\}$ is isomorphic to $\mathcal{A}$ as an $\mathcal{L}$--structure,
\item $\forall \beta<\kappa$, $R_\beta (x,y)$ implies $\neg P(x)\wedge P(y)$,
\item for every $\alpha<\kappa$ and every $b$ with $\neg P(b)$, there is a unique tuple $\bar{a}\in \kappa^{<\kappa}$ with $length(\bar{a})=\alpha$ and for all $\gamma<\alpha$, $P(a_\gamma)$, that satisfies: $$\forall \beta<\alpha,\ R_\beta(b,c)\Leftrightarrow c=a_\beta.$$
\item for every $\alpha<\kappa$ and every tuple $\bar{a}\in \kappa^{<\kappa}$ with $length(\bar{a})=\alpha$ and $P(a_\gamma)$ for all $\gamma<\alpha$, there is a unique element $b_{\bar{a}}$ of $\bar{\mathcal{A}}$ that satisfies: $$\forall \beta<\alpha,\ R_\beta(b_{\bar{a}},c)\Leftrightarrow \neg P(b_{\bar{a}}) \textit{ and } c=a_\beta.$$
\end{itemize}
Let $\bar{K}$ be the smallest subset of $S^\kappa_{L}$ that contains $\{\bar{\mathcal{A}}\mid \mathcal{A}\in Mod_\kappa(T)\}$ and is invariant under $\equiv_{\infty,\aleph_0}$.
Shelah's Theorem XIII.1.4 in \cite{Sh} implies the following: if $T$ is a classifiable theory, then any two models that are $\mathcal{L}_{\infty,\kappa}$-equivalent are isomorphic. In other words, if $T$ is a classifiable theory in $\mathcal{L}$, we get that $(\eta,\xi)\in\ \equiv_{\infty,\kappa}^T$ if and only if $(\eta,\xi)\in\ \cong_T$. Now, $(\eta,\xi)\in\ \cong_T$ clearly implies $\bar{\mathcal{A}}_\eta \equiv_{\infty,\aleph_0} \bar{\mathcal{A}}_\xi$; conversely $\bar{\mathcal{A}}_\eta \equiv_{\infty,\aleph_0} \bar{\mathcal{A}}_\xi$ implies $\mathcal{A}_\eta \equiv_{\infty,\kappa} \mathcal{A}_\xi$, so $\bar{\mathcal{A}}_\eta \equiv_{\infty,\aleph_0} \bar{\mathcal{A}}_\xi$ implies $(\eta,\xi)\in\ \cong_T$. We conclude that the map $f:\kappa^\kappa\rightarrow \kappa^\kappa$ given by
\begin{itemize}
\item if $\mathcal{A}_\eta\models T$, then $f(\eta)$ is a code for $\bar{\mathcal{A}}_\eta$ (i.e. $\mathcal{A}_{f(\eta)}=\bar{\mathcal{A}}_\eta$),
\item if $\mathcal{A}_\eta\not\models T$, then $f(\eta)$ is a code for $\mathcal{B}$, where $\mathcal{B}$ is a fixed $L$-structure not in $\bar{K}$.
\end{itemize}
is a reduction from $\cong_T$ to $\equiv_{\infty,\aleph_0}^{\bar{K}}$. In \cite{FHK13} (Theorem 69) it was proved that if $T$ is classifiable and not shallow, then $\cong_T$ is $\Delta_1^1$ and not Borel. Therefore, if $T$ is classifiable and not shallow, then $\equiv_{\infty,\aleph_0}^{\bar{K}}$ is not Borel. In conclusion, for many $\kappa$-Borel subsets $K$ of $S^\kappa_\mathcal{L}$ invariant under $\equiv_{\infty,\aleph_0}$, the relation $\equiv_{\infty,\aleph_0}^{K}$ is not Borel. Notice that all the relations of the form $\equiv_{\infty,\aleph_0}^{K}$ are $\Delta_1^1$; this is because $\equiv_{\infty,\aleph_0}$ is characterized by the Ehrenfeucht-Fra\"{i}ss\'e game of length $\omega$, which is a determined game.
From now on $\mathcal{L}$ will be a countable relational vocabulary, $\mathcal{L}=\{P_n\mid n<\omega\}$, the $\mathcal{L}$-structures with domain $\kappa$ will be coded as in Definition \ref{D.1.6}, and every theory is a theory in $\mathcal{L}$.
In this paper we study the complexity of classifiable theories with respect to theories with S-DOP (see below). Under heavy assumptions on $\kappa$, we show that if $T$ is classifiable and $T'$ is superstable with S-DOP, then $\cong_T$ is continuously reducible to $\cong_{T'}$.
We will work with the $\mu$-club relation to obtain this result. For every regular cardinal $\mu<\kappa$, we say that a set $A\subseteq \kappa$ is a $\mu$-club if it is unbounded and closed under $\mu$-limits.
\begin{definition}\label{D.1.8}
We say that $f,g\in \kappa^\kappa$ are $E^\kappa_{\mu\text{-club}}$ equivalent ($f\ E^\kappa_{\mu\text{-club}}\ g$) if the set $\{\alpha<\kappa|f(\alpha)=g(\alpha)\}$ contains a $\mu$-club.
\end{definition}
The following lemma is proved in \cite{HM} (Theorem 2.8) and compares the complexities of the isomorphism relation of classifiable theories with the $\mu$-club relations. We will use this lemma in the proof of the main result.
\begin{lemma}\label{L.1.9}
Assume $T$ is a classifiable theory and $\mu<\kappa$ is a regular cardinal. Then $\cong_T$ is continuously reducible to $E_{\mu\text{-club}}^\kappa$.
\end{lemma}
\section{Preliminaries}
\subsection{Coloured Trees}
Coloured trees have been very useful in the past to reduce $E_{\mu\text{-club}}^\kappa$ to $\cong_T$ for certain $\mu<\kappa$ and non-classifiable $T$; examples of this can be found in \cite{FHK13}, \cite{HM} and \cite{HK}. The trees in \cite{FHK13}, \cite{HM} and \cite{HK} are trees of height $\omega+2$; in this section we present a variation of these trees that has height $\lambda+2$, for $\lambda$ an uncountable cardinal.
For a tree $t$ and every $x\in t$, we denote by $ht(x)$ the height of $x$, i.e., the order type of $\{y\in t\mid y<x\}$. Define $t_\alpha=\{x\in t\mid ht(x)=\alpha\}$ and $t_{<\alpha}=\cup_{\beta<\alpha}t_\beta$, and denote by $x\restriction \alpha$ the unique $y\in t$ such that $y\in t_\alpha$ and $y\leq x$. If $x,y\in t$ and $\{z\in t\mid z<x\}=\{z\in t\mid z<y\}$, then we say that $x$ and $y$ are $\sim$-related, $x\sim y$, and we denote by $[x]$ the equivalence class of $x$ under $\sim$.\\
An $\alpha, \beta$-tree is a tree $t$ with the following properties:
\begin{itemize}
\item $|[x]|<\alpha$ for every $x\in t$.
\item All the branches have order type less than $\beta$ in $t$.
\item $t$ has a unique root.
\item If $x,y\in t$, $x$ and $y$ have no immediate predecessors, and $x\sim y$, then $x=y$.
\end{itemize}
\begin{definition}\label{D.2.1}
Let $\lambda$ be an uncountable cardinal. A coloured tree is a pair $(t,c)$, where $t$ is a $\kappa^+$, $(\lambda+2)$-tree and $c$ is a map $c:t_\lambda\rightarrow \kappa\backslash \{0\}$.
\end{definition}
\noindent
Two coloured trees $(t,c)$ and $(t',c')$ are isomorphic if there is a tree isomorphism $f:t\rightarrow t'$ such that for every $x\in t_\lambda$, $c(x)=c'(f(x))$.\\
Denote the set of all coloured trees by $CT^\lambda$. Let $CT^\lambda_*\subset CT^\lambda$ be the set of coloured trees in which every element of height less than $\lambda$ has infinitely many immediate successors, and every maximal branch has order type $\lambda+1$.\\
We are going to work only with elements of $CT^\lambda_*$; every time we mention a coloured tree, we mean an element of $CT^\lambda_*$.\\
We can see every coloured tree as a downward closed subset of $\kappa^{\leq \lambda}$.
\begin{definition}\label{D.2.2}
Let $(t,c)$ be a coloured tree, suppose $(I_\alpha)_{\alpha<\kappa}$ is a collection of subsets of $t$ that satisfies:
\begin{itemize}
\item for each $\alpha<\kappa$, $I_\alpha$ is a downward closed subset of $t$.
\item $\bigcup_{\alpha<\kappa}I_\alpha=t$.
\item if $\alpha<\beta<\kappa$, then $I_\alpha\subset I_\beta$.
\item if $\gamma$ is a limit ordinal, then $I_\gamma=\bigcup_{\alpha<\gamma}I_\alpha$.
\item for each $\alpha<\kappa$ the cardinality of $I_\alpha$ is less than $\kappa$.
\end{itemize}
We call $(I_\alpha)_{\alpha<\kappa}$ a filtration of $t$.
\end{definition}
Order the set $\lambda\times \kappa\times \kappa\times \kappa\times \kappa$ lexicographically, $(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5)>(\beta_1,\beta_2,\beta_3,\beta_4,\beta_5)$ if for some $1\leq k \leq 5$, $\alpha_k>\beta_k$ and for every $i<k$, $\alpha_i=\beta_i$. Order the set $(\lambda\times \kappa\times \kappa\times \kappa\times \kappa)^{\leq \lambda}$ as a tree by inclusion.\\
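This is the usual lexicographic order on $5$-tuples. As a purely illustrative aside, the rule in the definition is exactly the comparison Python's built-in tuple ordering implements (the helper name below is ours):

```python
def lex_greater(a, b):
    """a > b iff for some k, a[k] > b[k] while a[i] == b[i] for all i < k."""
    for x, y in zip(a, b):
        if x != y:
            return x > y
    return False  # equal tuples are not strictly greater

assert lex_greater((1, 0, 0, 0, 0), (0, 9, 9, 9, 9))
assert not lex_greater((1, 2, 3, 4, 5), (1, 2, 3, 4, 5))
# Python's native tuple comparison agrees with the definition
assert lex_greater((2, 1, 0, 0, 0), (2, 0, 9, 9, 9)) == \
    ((2, 1, 0, 0, 0) > (2, 0, 9, 9, 9))
```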
Define the tree $(I_f,d_f)$ as follows: $I_f$ is the set of all strictly increasing functions from some $\theta\leq \lambda$ to $\kappa$, and for each $\eta$ with domain $\lambda$, $d_f(\eta)=f(sup(rang(\eta)))$.\\
For every pair of ordinals $\alpha$ and $\beta$, $\alpha<\beta<\kappa$ and $i<\lambda$ define $$R(\alpha,\beta,i)=\bigcup_{i< j\leq \lambda}\{\eta:[i,j)\rightarrow[\alpha,\beta)|\eta \text{ strictly increasing}\}.$$
\begin{definition}\label{D.2.5}
Assume $\kappa$ is an inaccessible cardinal. If $\alpha<\beta<\kappa$ and $\alpha,\beta,\gamma\neq 0$, let $\{P^{\alpha,\beta}_\gamma\mid\gamma<\kappa\}$ be an enumeration of all downward closed subtrees of $R(\alpha,\beta,i)$ for all $i$, in such a way that each possible coloured tree appears cofinally often in the enumeration, and let the tree $P^{0,0}_0$ be $(I_f,d_f)$.
\end{definition}
This enumeration is possible because $\kappa$ is inaccessible; there are at most\\ $|\bigcup_{i<\lambda}\mathcal{P}(R(\alpha,\beta,i))|\leq \lambda\times\kappa=\kappa$ downward closed coloured subtrees, and at most $\kappa\times \kappa^{<\kappa}=\kappa$ coloured trees.\\
Denote by $Q(P^{\alpha,\beta}_\gamma)$ the unique ordinal number $i$ such that $P^{\alpha,\beta}_\gamma\subset R(\alpha,\beta,i)$.
\begin{definition}\label{D.2.6}
Assume $\kappa$ is an inaccessible cardinal.
Define for each $f\in \kappa^\kappa$ the coloured tree $(J_f,c_f)$ by the following construction.\\
For every $f\in \kappa^\kappa$ define $J_f=(J_f,c_f)$ as the tree of all $\eta: s\rightarrow \lambda\times \kappa^4$, where $s\leq \lambda$, ordered by extension, and such that the following conditions hold for all $i,j<s$:\\
Denote by $\eta_i$, $1\leq i\leq 5$, the functions from $s$ to $\kappa$ that satisfy $\eta(n)=(\eta_1(n),\eta_2(n),\eta_3(n),\eta_4(n),\eta_5(n))$.
\begin{enumerate}
\item [1.] $\eta\restriction n\in J_f$ for all $n<s$.
\item [2.]$\eta$ is strictly increasing with respect to the lexicographical order on $\lambda\times \kappa^4$.
\item [3.]$\eta_1(i)\leq \eta_1(i+1)\leq \eta_1(i)+1$.
\item [4.]$\eta_1(i)=0$ implies $\eta_2(i)=\eta_3(i)=\eta_4(i)=0$.
\item [5.]$\eta_2(i)\ge\eta_3(i)$ implies $\eta_2(i)=0$.
\item [6.]$\eta_1(i)<\eta_1(i+1)$ implies $\eta_2(i+1)\ge \eta_3(i)+\eta_4(i)$.
\item [7.]For every limit ordinal $\alpha$, $\eta_k(\alpha)=sup_{\beta<\alpha}\{\eta_k(\beta)\}$ for $k\in \{1,2\}$.
\item [8.]$\eta_1(i)=\eta_1 (j)$ implies $\eta_k (i)=\eta_k (j)$ for $k\in \{2,3,4\}$.
\item [9.]If for some $k<\lambda$, $[i,j)=\eta_1^{-1}\{k\}$, then $$\eta_5\restriction {[i,j)}\in P^{\eta_2(i),\eta_3(i)}_{\eta_4(i)}.$$
\noindent Note that 9 implies $Q(P^{\eta_2(i),\eta_3(i)}_{\eta_4(i)})=i$.
\item [10.]If $s=\lambda$, then either
\begin{itemize}
\item [(a)] there exists an ordinal number $m$ such that for every $k<m$, $\eta_1(k)<\eta_1(m)$, for every $k' \ge m$, $\eta_1(k')=\eta_1(m)$, and the color of $\eta$ is determined by $P^{\eta_2(m),\eta_3(m)}_{\eta_4(m)}$: $$c_f(\eta)=c(\eta_5\restriction {[m,\lambda)})$$ where $c$ is the colouring function of $P^{\eta_2(m),\eta_3(m)}_{\eta_4(m)}$.\\
\end{itemize}
Or
\begin{itemize}
\item [(b)] there is no such ordinal $m$ and then $c_f(\eta)=f(sup(rang(\eta_5)))$.
\end{itemize}
\end{enumerate}
\end{definition}
The following lemma is a variation of Lemma 4.7 of \cite{HM}. In \cite{HM} Lemma 4.7 refers to trees of height $\omega+2$ and the relation $E^\kappa_{\omega\text{-club}}$, nevertheless the proof is the same in both cases.
\begin{lemma}\label{L.2.7}
Assume $\kappa$ is an inaccessible cardinal, then for every $f,g\in \kappa^\kappa$ the following holds $$f\ E^\kappa_{\lambda\text{-club}}\ g \Leftrightarrow J_f\cong J_g$$
\end{lemma}
\begin{remark}\label{R.2.7a}
For each $\alpha<\kappa$ define $J_f^\alpha$ as $$J_f^\alpha=\{\eta\in J_f| rang(\eta)\subset \lambda\times(\beta)^4\text{ for some }\beta<\alpha\}.$$ Notice that $(J_f^\alpha)_{\alpha<\kappa}$ is a filtration of $J_f$ and it has the following properties:
\begin{enumerate}
\item [1.]
$
sup(rang(\eta_4))\leq sup(rang(\eta_3))=sup(rang(\eta_5))=sup(rang(\eta_2)).
$
\item [2.]When $\eta\restriction k\in J_f^\alpha$ holds for every $k\in \lambda$, $sup(rang(\eta_5))\leq \alpha$. If in addition $\eta\notin J_f^\alpha$, then
$
sup(rang(\eta_5))= \alpha.
$
\end{enumerate}
\end{remark}
From now on $\kappa$ will be an inaccessible cardinal. Let us take a look at the sets $rang(f)$ and $rang(c_f)$, more specifically at the set $\{\alpha<\kappa|f(\alpha)\in rang(c_f)\}$.
\begin{remark}\label{R.2.8}
Assume $f\in \kappa^\kappa$ and let $J_f$ be the corresponding coloured tree obtained from Definition \ref{D.2.6}. If $\eta\in J_f$ satisfies Definition \ref{D.2.6} item 10 (b), then clearly there exists $\alpha<\kappa$ such that $c_f(\eta)=f(\alpha)$. It is possible that for some $\alpha<\kappa$ there is no $\eta\in J^{\alpha+1}_f$ with $c_f(\eta)=f(\alpha)$. Nevertheless, the set $C=\{\alpha<\kappa\mid\exists \xi\in J^{\alpha+1}_f\text{ such that }\xi_1\restriction \omega=id+1, \xi_1\restriction {[\omega,\lambda)}=id\restriction {[\omega,\lambda)}\text{ and } c_f(\xi)=f(\alpha)\}$ is a $\lambda$-club. $C$ is unbounded: for every $\beta<\kappa$ we can construct a function $\eta\in J_f$ by $\beta_0=\beta$, $\eta_1\restriction \omega=id+1$, $\eta_1\restriction {[\omega,\lambda)}=id\restriction {[\omega,\lambda)}$, $\eta_2(i)=\beta_i$, $\eta_3(i)=\beta_i+1$, $\eta_4(i)=\gamma_i$ and $\eta_5=\eta_2$, where $\gamma_i$ is the least ordinal such that $P^{\beta_i,\beta_i+1}_{\gamma_i}=\{\xi:[i,i+1)\rightarrow [\beta_i,\beta_i+1)\}$, $\beta_{i+1}=\beta_i+1+\gamma_i$, and $\beta_i=\cup_{j<i}\beta_j$ for $i$ a limit ordinal; since $\kappa$ is inaccessible, $\eta\in J^{(\cup_{i<\lambda}\beta_i)+1}_f$ and $\cup_{i<\lambda}\beta_i\in C$. $C$ is $\lambda$-closed: let $\{\alpha_i\}_{i<\lambda}$ be an increasing sequence of elements of $C$; for every $i<\lambda$ let $\xi^i$ be an element of $J_f$ such that $\xi^i_1\restriction \omega=id+1$, $\xi^i_1\restriction {[\omega,\lambda)}=id$ and $sup(rang(\xi_5^i))=\alpha_i$; define $n_0=0$ and, for every $i<\lambda$, $n_{i+1}$ as the least ordinal bigger than $n_i$ such that $\alpha_i<\xi^{i+1}_2(n_{i+1})$.
The function $\xi$ defined by $\xi\restriction {[n_i,n_{i+1})}=\xi^i\restriction {[n_i,n_{i+1})}$ is an element of $J^{(\cup_{i<\lambda}\alpha_i)+1}_f$ such that $\xi_1\restriction \omega=id+1$, $\xi_1\restriction {[\omega,\lambda)}=id$ and $sup(rang(\xi_5))=\cup_{i<\lambda}\alpha_i$; therefore $f(\cup_{i<\lambda}\alpha_i)=c_f(\xi)$ and $\cup_{i<\lambda}\alpha_i\in C$.
\end{remark}
\subsection{Strong DOP}
Now we will recall the dimensional order property and the strong dimensional order property. We will also give some important properties that will be useful in the fourth section, where we construct models of theories with the strong dimensional order property.
In \cite{Sh} Shelah gives an axiomatic approach for an isolation notion, $F$, and defines the notions $F$-constructible, $F$-atomic, $F$-primary, $F$-prime and $F$-saturated.
\begin{definition}\label{D.3.1}
Denote by $F_\theta^a$ the set of pairs $(p,B)$ with $|B|<\theta$, such that for some $A\supseteq B$ and $a$, $p\in S(A)$, $a\models p$ and $stp(a,B)\vdash p$.
\end{definition}
In \cite{Sh} (Definition II 4.2 (2), and Definition V 1.1 (2) and (4)) the notions of stationarization of a type, and orthogonal types were defined as follows.
\begin{definition}\label{D.3.2}
We call $p$ a stationarization of $q$ if either $q$ is stationary and $p$ is parallel to $q$, or $q$ is complete over some $A$ and, for some $c$ realizing $q$, $p$ is parallel to $stp(c,A)$. A stationarization of $q$ over $A$ is any stationarization $p\in S(A)$ of $q$.
\end{definition}
\begin{definition}\label{D.3.3}
\begin{enumerate}
\item [1.]If $p(x_1)$, $q(x_2)$ are complete types over $A$, $p$ an $m$-type, $q$ an $n$-type, we call $p$ weakly orthogonal to $q$ if and only if $p(x_1)\cup q(x_2)$ is complete over $A$.
\item [2.]Let $p_1$ be complete or stationary and $p_2$ be complete or stationary. Then $p_1$ is orthogonal to $p_2$, $p_1\perp p_2$, if for every $A$, $dom(p_1)\cup dom(p_2)\subseteq A$, $A$ the universe of a $F_\omega^a$-saturated model, and any stationarizations $q_l$ of $p_l$, $l=1,2$ over $A$; $q_1$ is weakly orthogonal to $q_2$.
\item [3.]The type $p$ is orthogonal to the set $A$, $p\perp A$, if $p$ is orthogonal to every complete type over $A$.
\end{enumerate}
\end{definition}
The following lemma can be found in \cite{Sh} (Lemma V 1.1 (2)); it gives an equivalent characterization of weak orthogonality.
\begin{lemma}\label{L.3.4}
If $p_1=tp(a_1,A)$, and $p_2=tp(a_2,A)$, then $p_1$ is weakly orthogonal to $p_2$ if and only if $tp(a_1,A)\vdash tp(a_1,A\cup a_2) \Leftrightarrow tp(a_2,A)\vdash tp(a_2,A\cup a_1)$.
\end{lemma}
Notice that for $p_1,p_2\in S(A)$ stationary types the following holds. If $p_1=tp(a_1,A)$, and $p_2=tp(a_2,A)$, then by Lemma \ref{L.3.4} $p_1$ is weakly orthogonal to $p_2$ if and only if $a_1\downarrow_A a_2$.\\ On the other hand, if $A\subseteq B$, $p\in S(A)$ is stationary, and $q\in S(B)$ is a stationarization of $p$, then $q$ is the non-forking extension of $p$. Therefore, let $p_1,p_2\in S(A)$ be stationary. $p_1$ is orthogonal to $p_2$ if for all $a_1$, $a_2$, and $B\supseteq A$ the following holds: If $a_1\models p_1$, $a_2\models p_2$, $a_1\downarrow_AB$ and $a_2\downarrow_AB$, then $a_1\downarrow_Ba_2$.\\
By Definition \ref{D.3.3} item 3, $p\in S(B)$ is orthogonal to $A$ if $p$ is orthogonal to every $q\in S(A)$. By Definition \ref{D.3.2} and since the strong types are stationary, $p\in S(B)$ is orthogonal to $A\subseteq B$ if for all $a$ and $q\in S(A)$ such that $tp(a,B)$ is stationary, $a\models q$ and $a\downarrow_AB$, $p\perp tp(a,B)$. We conclude that a stationary type $p\in S(B)$ is orthogonal to $A$ if for all $a,b$ and $D\supset A$ the following holds: If $tp(b,B)$ is stationary, $a\models p$, $b\downarrow_AB$, $b\downarrow_BD$ and $a\downarrow_BD$, then $a\downarrow_Db$.
\begin{fact}\label{F.3.5}
Let $B,D\subseteq M$, where $M$ is an $F_\omega^a$-saturated model over $B\cup D$, and let $p\in S(M)$. If $p$ is orthogonal to $D$ and $p$ does not fork over $B\cup D$, then for every $a\models p\restriction B\cup D$ the following holds: $a\downarrow_{B\cup D}M$ implies $tp(a,M)\perp D$.
\end{fact}
\begin{proof}
Notice that since $M$ is a model, every complete type over $M$ is stationary. Let $p\in S(M)$ and $B,D\subseteq M$ be such that $p$ is orthogonal to $D$ and $p$ does not fork over $B\cup D$. Suppose, towards a contradiction, that there is $a$ such that $a\models p\restriction B\cup D$, $a\downarrow_{B\cup D}M$, and $tp(a,M)\not\perp D$. Therefore, there are $N$ and $c$, $D\subseteq N$, such that $a\downarrow_MN$, $c\downarrow_DM\cup N$, and $a\not\downarrow_Nc$.\\
Let $b$ be such that $b\models p$; then there is $f\in Aut(\mathcal{M},D\cup B)$ such that $f(a)=b$. Denote by $N'$ the image $f(N)$. Choose $b'$ such that $b'\downarrow_{B\cup D}M\cup N'$ and $stp(b',B\cup D)=stp(b,B\cup D)$. We know that $a\downarrow_{B\cup D}M$ and $a\downarrow_MN$, so by transitivity $a\downarrow_{B\cup D}M\cup N$. Therefore $a\downarrow_{B\cup D}N$, and since $f\in Aut(\mathcal{M},D\cup B)$, we conclude that $b\downarrow_{B\cup D}N'$. Since $stp(b',B\cup D)=stp(b,B\cup D)$ and $b'\downarrow_{B\cup D}N'$, we conclude that $tp(b,N'\cup B)=tp(b',N'\cup B)$, so there is $h\in Aut(\mathcal{M},N'\cup B)$ such that $h(b)=b'$. On the other hand, by the way we chose $b$, we know that $b\downarrow_{B\cup D}M$. Since $stp(b',B\cup D)=stp(b,B\cup D)$ and $b'\downarrow_{B\cup D}M$, we get $tp(b',M)=tp(b,M)=p$. We conclude that there is $F\in Aut(\mathcal{M},B\cup D)$ such that $F(a)=b'$ and $tp(b',M)\perp D$. Denote by $c'$ the image $F(c)$.\\
Choose $c''$ such that $tp(c'',N'\cup B\cup b')=tp(c',N'\cup B\cup b')$ and $c''\downarrow_{N'\cup B\cup b'}M$. Since $b'\downarrow_{B\cup N'}M$, then by transitivity we get $c''b'\downarrow_{N'\cup B}M$, so $c''\downarrow_{N'\cup B}M$. On the other hand $c\downarrow_DM\cup N$, so $c\downarrow_DB\cup N$, since $F\in Aut(\mathcal{M},B\cup D)$, we get $c'\downarrow_DB\cup N'$.
By the way we chose $c''$, we know that $tp(c'',N'\cup B)=tp(c',N'\cup B)$, therefore $c''\downarrow_DB\cup N'$, and by transitivity $c''\downarrow_DM\cup N'$.\\
We conclude that $c''\downarrow_MN'$ and $c''\downarrow_DM$, since $b'\downarrow_MN'$ and $tp(b',M)\perp D$, we get $b'\downarrow_{N'}c''$. By the way we chose $c''$ we know that $tp(c',N'\cup b')=tp(c'',N'\cup b')$, so $b'\downarrow_{N'}c'$. Since $F\in Aut(\mathcal{M},B\cup D)$, we conclude that $a\downarrow_{N}c$, a contradiction.
\end{proof}
\begin{corollary}\label{C.3.6}
A type $p\in S(B\cup C)$ is orthogonal to $C$, if for every $F_\omega^a$-primary model, $M$, over $B\cup C$ there exists a non-forking extension of $p$, $q\in S(M)$, orthogonal to $C$.
\end{corollary}
\begin{proof}
The proof follows by Definition \ref{D.3.3} item 2, Fact \ref{F.3.5} and the fact that every $F_\omega^a$-primary model over $B\cup C$ is $F_\omega^a$-primitive.
\end{proof}
In \cite{Sh} (X.2 Definition 2.1) Shelah defines the dimensional order property, DOP, as follows.
\begin{definition}\label{D.3.7}
A theory $T$ has the dimensional order property (DOP) if there are $F_{\kappa(T)}^a$-saturated models $(M_i)_{i<3}$ with $M_0\subset M_1\cap M_2$ and $M_1\downarrow_{M_0}M_2$, such that the $F_{\kappa(T)}^a$-prime model over $M_1\cup M_2$ is not $F_{\kappa(T)}^a$-minimal over $M_1\cup M_2$.
\end{definition}
In \cite{Sh} he also proves the following important lemma (X.2 Lemma 2.2).
\begin{lemma}\label{L.3.8}
Let $M_0\subset M_1\cap M_2$ be $F_{\kappa(T)}^a$-saturated models, $M_1\downarrow_{M_0}M_2$, $M$ $F_{\kappa(T)}^a$-atomic over $M_1\cup M_2$ and $F_{\kappa(T)}^a$-saturated. Then the following conditions are equivalent:
\begin{enumerate}
\item [1.]$M$ is not $F_{\kappa(T)}^a$-minimal over $M_1\cup M_2$.
\item [2.]There is an infinite indiscernible $I\subseteq M$ over $M_1\cup M_2$.
\item [3.]There is a type $p\in S(M)$ orthogonal to $M_1$ and to $M_2$, $p$ not algebraic.
\item [4.]There is an infinite $I\subseteq M$ indiscernible over $M_1\cup M_2$ such that $Av(I,M)$ is orthogonal to $M_1$ and to $M_2$.
\end{enumerate}
\end{lemma}
The rest of the results in this section will be stated and proved for the case of the $F_{\omega}^a$ isolation. Many of these results can be easily generalized to $F_{\kappa(T)}^a$ by making small changes in the proofs.\\
From now on we will work only with superstable theories. We know that for every superstable theory $T$, $\kappa(T)=\omega$.\\
The following lemma is important for understanding Definition \ref{D.3.13} below. The proof of Lemma \ref{L.3.8} given by Shelah in \cite{Sh} (X.2 Lemma 2.2) also works as a proof of the following lemma.
\begin{lemma}\label{L.3.9}
Let $M_0\subset M_1\cap M_2$ be $F_{\omega}^a$-saturated models, $M_1\downarrow_{M_0}M_2$, $M_3$ $F_{\omega}^a$-atomic over $M_1\cup M_2$ and $F_{\omega}^a$-saturated. Then the following conditions are equivalent:
\begin{enumerate}
\item [1.]There is a non-algebraic type $p\in S(M_3)$ orthogonal to $M_1$ and to $M_2$, that does not fork over $M_1\cup M_2$.
\item [2.]There is an infinite indiscernible $I\subseteq M_3$ over $M_1\cup M_2$ that is independent over $M_1\cup M_2$.
\item [3.]There is an infinite $I\subseteq M_3$ indiscernible over $M_1\cup M_2$ and independent over $M_1\cup M_2$, such that $Av(I,M_3)$ is orthogonal to $M_1$ and to $M_2$.
\end{enumerate}
\end{lemma}
The following Lemma is proved in \cite{HS99} (Theorem 2.1).
\begin{lemma}\label{L.3.10}
Let $M_0\prec M_1,M_2$ be $F_{\omega}^a$-saturated models, such that $M_1\downarrow_{M_0}M_2$. Let $M_3$ be an $F_{\omega}^a$-prime model over $M_1\cup M_2$ and let $I\subseteq M_3$ be an indiscernible over $M_1\cup M_2$ such that $Av(I,M_3)$ is orthogonal to $M_1$ and to $M_2$. If $(B_i)_{i<3}$ are sets such that:
\begin{itemize}
\item $B_0\downarrow_{M_0}M_1\cup M_2$.
\item $B_1\downarrow_{M_1\cup B_0}B_2\cup M_2$.
\item $B_2\downarrow_{M_2\cup B_0}B_1\cup M_1$.
\end{itemize}
Then $$tp(I,M_1\cup M_2) \vdash tp(I,M_1\cup M_2\cup_{i<3}B_i).$$
\end{lemma}
The following lemma shows that, if $M_1$, $M_2$, and $M_3$ are models that satisfy Definition \ref{D.3.7}, then we can find models $M_1'$, $M_2'$, and $M_3'$ that extend $M_1$, $M_2$, and $M_3$ respectively and satisfy Definition \ref{D.3.7}.
\begin{lemma}\label{L.3.11}
Let $M_0\subset M_1\cap M_2$ be $F_{\omega}^a$-saturated models, such that $M_1\downarrow_{M_0}M_2$ and $M_3$, the $F_{\omega}^a$-prime model over $M_1\cup M_2$, is not $F_{\omega}^a$-minimal over $M_1\cup M_2$.\\
If $(M'_i)_{i<3}$ are $F_{\omega}^a$-saturated models that satisfy:
\begin{itemize}
\item $\forall i<3$, $M_i\subseteq M'_i$.
\item $\forall i<3$, $M'_i\downarrow_{M_i}M_3$.
\item $M'_1\downarrow_{M'_0}M'_2$.
\end{itemize}
Then $M_3'$, the $F_{\omega}^a$-prime model over $M'_1\cup M_2'$, is not $F_{\omega}^a$-minimal over $M_1'\cup M_2'$.
\end{lemma}
\begin{proof}
By Lemma \ref{L.3.8} there is an infinite indiscernible sequence $I=(a_i)_{i<\omega}$ in $M_3$ over $M_1\cup M_2$. Since $M_3$ is $F_{\omega}^a$-atomic over $M_1\cup M_2$, then for all $n<\omega$ there exists $A_n\subseteq M_1\cup M_2$, such that $|A_n|<\kappa(T)$ and $stp((a_i)_{i\leq n},A_n)\vdash tp((a_i)_{i\leq n},M_1\cup M_2)$.\\
Since $M'_1\downarrow_{M_0'}M_2'$ and $M'_0\downarrow_{M_0}M_3$, the assumptions of Lemma \ref{L.3.10} hold for $B_i=M_i'$. Therefore $$tp(I,M_1\cup M_2)\vdash tp(I,M_1'\cup M_2'),$$
so $I$ is indiscernible over $M'_1\cup M'_2$, $stp((a_i)_{i\leq n},A_n)\vdash tp((a_i)_{i\leq n},M_1'\cup M_2')$, and $stp(a_n,A_n\cup \{a_i\}_{i<n})\vdash tp(a_n,M_1'\cup M_2'\cup \{a_i\}_{i<n})$. We conclude that $M_1'\cup M_2'\cup I$ is constructible over $M_1'\cup M_2'$.\\
Let $M_3'$ be the $F_{\omega}^a$-prime model over $M_1'\cup M_2'$ with construction $(b_i,B_i)_{i<\gamma}$, such that $b_i=a_i$ and $B_i=A_i\cup \{a_j\}_{j<i}$, for $i<\omega$.\\
Since $I$ is indiscernible over $M'_1\cup M'_2$ and $I\subseteq M_3'$, by Lemma \ref{L.3.8}, we conclude that $M_3'$ is not $F_{\omega}^a$-minimal over $M'_1\cup M'_2$.
\end{proof}
\begin{remark}\label{R.3.12}
Notice that in the previous lemma it was proved that $I$ is indiscernible over $M'_1\cup M'_2$; by Lemma \ref{L.3.8} we also obtain that $Av(I,M_3')$ is orthogonal to $M_1'$ and to $M_2'$.\\
It was also proved that for every $a_n\in I$ there exists $A_n\subseteq M_1\cup M_2$ such that $stp(a_n,A_n\cup \{a_i\}_{i<n})\vdash tp(a_n,M_1'\cup M_2'\cup \{a_i\}_{i<n})$. Therefore $a_n\downarrow_{A_n\cup \{a_i\}_{i<n}}M_1'\cup M_2'$, so $a_n\downarrow_{M_1\cup M_2\cup \{a_i\}_{i<n}}M_1'\cup M_2'$. We conclude that if $I$ is independent over $M_1\cup M_2$, then $a_n\downarrow_{M'_1\cup M'_2}\{a_i\}_{i<n}$ and $I$ is independent over $M'_1\cup M'_2$.
\end{remark}
\begin{definition}\label{D.3.13}
We say that a superstable theory $T$ has the strong dimensional order property (S-DOP) if the following holds:\\
There are $F_{\omega}^a$-saturated models $(M_i)_{i<3}$, $M_0\subset M_1\cap M_2$, such that $M_1\downarrow_{M_0}M_2$, and for every $F_{\omega}^a$-prime model $M_3$ over $M_1\cup M_2$ there is a non-algebraic type $p\in S(M_3)$, orthogonal to $M_1$ and to $M_2$, that does not fork over $M_1\cup M_2$.
\end{definition}
In \cite{HrSo} Hrushovski and Sokolovi\'c proved that the theory of differentially closed fields of characteristic zero (DCF) has eni-DOP, and therefore DOP. The reader can find an outline of this proof in \cite{Mar07}. We will show that the models used in \cite{Mar07} also witness that the theory of differentially closed fields has S-DOP. We will focus on the proof of the S-DOP property:
\textit{There are $F_{\omega}^a$-saturated models $(M_i)_{i<3}$, $M_0\subset M_1\cap M_2$, such that $M_1\downarrow_{M_0}M_2$, and for every $F_{\omega}^a$-prime model $M_3$ over $M_1\cup M_2$ there is a non-algebraic type $p\in S(M_3)$, orthogonal to $M_1$ and to $M_2$, that does not fork over $M_1\cup M_2$.}
More on DCF (proofs, definitions, references, etc.) can be found in \cite{Mar}.
\begin{definition}\label{D.3.14}
A differential field is a field $K$ with a derivation map $\delta:K\rightarrow K$ with the properties:
\begin{itemize}
\item $\delta(a+b)=\delta(a)+\delta(b)$
\item $\delta(ab)=a\delta(b)+b\delta(a)$
\end{itemize}
\end{definition}
We call $\delta(a)$ the derivative of $a$ and we denote by $\delta^n(a)$ the $n$th derivative of $a$. For a differential field $K$ we denote by $K\{x_1,x_2,\ldots,x_n\}$ the ring $$K[x_1,x_2,\ldots,x_n,\delta(x_1),\delta(x_2),\ldots,\delta(x_n),\delta^ 2(x_1),\delta^ 2(x_2),\ldots,\delta^2(x_n),\ldots]$$ The derivation map $\delta$ is extended in $K\{x_1,x_2,\ldots,x_n\}$ by $\delta(\delta^m(x_i))=\delta^{m+1}(x_i)$. We call $K\{x_1,x_2,\ldots,x_n\}$ the ring of differential polynomials over $K$.
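As a quick illustration of how the two axioms determine derivatives (a standard computation, not taken from the source), the derivation behaves like a formal derivative on powers:

```latex
% From additivity: \delta(0)=\delta(0+0)=\delta(0)+\delta(0), hence \delta(0)=0.
% From the Leibniz rule: \delta(1)=\delta(1\cdot 1)=2\delta(1), hence \delta(1)=0.
% By induction on n, using the Leibniz rule at the second step:
\delta(a^{n})=\delta(a\cdot a^{n-1})
             =a\,\delta(a^{n-1})+a^{n-1}\,\delta(a)
             =(n-1)\,a^{n-1}\delta(a)+a^{n-1}\delta(a)
             =n\,a^{n-1}\delta(a).
```

In particular, every element of the prime field has derivative $0$.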
\begin{definition}\label{D.3.15}
We say that a differential field $K$ is differentially closed if for every differential field $L\supseteq K$ and all $f_1,f_2,\ldots, f_n\in K\{x_1,x_2,\ldots,x_n\}$, whenever the system $f_1(x_1,x_2,\ldots,x_n)=f_2(x_1,x_2,\ldots,x_n)=\cdots=f_n(x_1,x_2,\ldots,x_n)=0$ has a solution in $L$, it has a solution in $K$.
\end{definition}
Let $K$ be a saturated model of DCF, $k\subseteq K$ and $a\in K^n$; we denote by $k\langle a\rangle$ the differential subfield generated by $k(a)$. If $A\subseteq K$ and for all $n$, every nonzero $f\in k\{x_1,x_2,\ldots,x_n\}$, and all distinct $a_1,a_2,\ldots,a_n\in A$ it holds that $f(a_1,a_2,\ldots,a_n)\neq 0$, then we say that $A$ is $\delta$-independent over $k$. Let us denote by $j(E)$ the $j$-invariant of the elliptic curve $E$.
\begin{theorem}\label{T.3.16}
\begin{itemize}
\item Let $A$ be an algebraically closed field of characteristic zero. For all $a\in A$ there is an elliptic curve $E$ definable over $A$ with $j(E)=a$.
\item $E\cong E_1$ if and only if $j(E)=j(E_1)$.
\end{itemize}
\end{theorem}
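For concreteness (a standard fact about elliptic curves in characteristic zero, not taken from the source), for a curve in short Weierstrass form the $j$-invariant has an explicit formula:

```latex
% For E: y^2=x^3+ax+b with discriminant \Delta=-16(4a^3+27b^2)\neq 0:
j(E)=1728\,\frac{4a^3}{4a^3+27b^2}.
```

Together with the first item of the theorem, this shows how a prescribed value of $j(E)$ can be realized by solving for suitable coefficients $a,b$ in the base field.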
For $a\in K$, let $E(a)$ be the elliptic curve defined over $K$ with $j$-invariant $a$, let $E(a)^\sharp$ be the $\delta$-closure of the torsion points and $p_a\in S(a)$ be the generic type of $E(a)^\sharp$.
For all $k\subseteq K$ denote by $k^{dif}$ the differential closure of $k$ in $K$.
\begin{theorem}[Hrushovski, Sokolovi\'c]\label{T.3.17}
Suppose $K_0$ is a differentially closed field of characteristic zero, $\{a,b\}$ is $\delta$-independent over $K_0$, $K_1=K_0\langle a\rangle^{dif}$, $K_2=K_0\langle b\rangle^{dif}$, $K=K_0\langle a,b\rangle^{dif}$, and $p$ is the non-forking extension of $p_{a+b}$ to $K$. Then $K_1\downarrow_{K_0}K_2$, $p\perp K_1$, and $p\perp K_2$.
\end{theorem}
\begin{corollary}\label{C.3.18}
DCF has the S-DOP.
\end{corollary}
\begin{proof}
Let $a$, $b$, $K_1$, $K_2$, and $p$ be as in Theorem \ref{T.3.17}. By Theorem \ref{T.3.17} it is enough to show that $p$ does not fork over $K_1\cup K_2$. By the way $p$ was defined, we know that $p$ does not fork over $a+b$, therefore $p$ does not fork over $\{a,b\}$. Since $\{a,b\}$ is $\delta$-independent over $K_0$, $K_1=K_0\langle a\rangle^{dif}$, and $K_2=K_0\langle b\rangle^{dif}$, we conclude that $p$ does not fork over $K_1\cup K_2$.
\end{proof}
\section{Construction of Models}
In this section we will use coloured trees to construct models of a superstable theory with S-DOP. To do this, we first need some basic results and some fixed notation. We will study only superstable theories with S-DOP. Instead of writing $F_\omega^a$-constructible, $F_\omega^a$-atomic, $F_\omega^a$-primary, $F_\omega^a$-prime, and $F_\omega^a$-saturated we will write $a$-constructible, $a$-atomic, $a$-primary, $a$-prime, and $a$-saturated. From now on $T$ will be a superstable theory with S-DOP.\\
By the definition of S-DOP, we know that there are $a$-saturated models $(M_i)_{i<3}$, $M_0\subset M_1\cap M_2$, such that $M_1\downarrow_{M_0}M_2$, and for every $a$-prime model $M_3$ over $M_1\cup M_2$ there is a non-algebraic type $p\in S(M_3)$ orthogonal to $M_1$ and to $M_2$ that does not fork over $M_1\cup M_2$. So $p\restriction M_1\cup M_2$ is orthogonal to $M_1$ and to $M_2$. By Lemma \ref{L.3.9}, there is an infinite $I\subseteq M_3$ indiscernible over $M_1\cup M_2$ that is independent over $M_1\cup M_2$, such that $Av(I,M_3)=p$. For this independent sequence $I$, it holds that $Av(I,M_1\cup M_2)$ is orthogonal to $M_1$ and to $M_2$.\\
We will denote by $\lambda(T)$ the least cardinal such that $T$ is $\lambda$-stable. Since $T$ is superstable, $\lambda(T)\leq 2^\omega$. We will denote by $\lambda$ the cardinal $(2^\omega)^+$.
\begin{definition}\label{D.4.1}
Let us define the dimension of an indiscernible $I$ over $A$ in $M$ by:
$$dim(I,A,M)=min\{|J|:J\text{ is equivalent to }I \text{ and }J\text{ is a maximal indiscernible over }A\text{ in }M\}.$$ If for all $J$ as above $dim(I,A,M)=|J|$, then we say that the dimension is true.
\end{definition}
The following results are important to study $a$-primary models and indiscernible sets. The proof of these results
can be found in \cite{Sh} (Lemma III 3.9 and Theorem IV 4.9).
\begin{lemma}\label{L.4.2}
If $I$ is a maximal indiscernible set over $A$ in $M$, then $|I|+\kappa(T)=dim(I,A,M)+\kappa(T)$,
and if $dim(I,A,M)\ge\kappa(T)$, then the dimension is true.
\end{lemma}
\begin{theorem}\label{T.4.3}
If $M$ is an $a$-primary model over $A$, and $I\subseteq M$ is an infinite indiscernible set over $A$, then $dim(I,A,M)=\omega$.
\end{theorem}
For any indiscernible sequence $I=\{a_i|i<\gamma\}$ and $\alpha\leq\gamma$, we will denote by $I\restriction _\alpha$ the sequence $\{a_i|i<\alpha\}$. Now, for every $f\in \kappa^\kappa$, we will use the tree $J_f$ given in Definition \ref{D.2.6} to construct the model $\mathcal{A}^f$.\\
Since $T$ has the S-DOP, by Lemma \ref{L.3.9} and Lemma \ref{L.3.10} there are $a$-saturated models $\mathcal{A}, \mathcal{B}, \mathcal{C}$ of cardinality $2^\omega$
and an indiscernible sequence $\mathcal{I}$ over $\mathcal{B}\cup \mathcal{C}$ of size $\kappa$ that is independent over $\mathcal{B}\cup \mathcal{C}$ such that
\begin{enumerate}
\item [1.]$\mathcal{A}\subset\mathcal{B}\cap\mathcal{C}$, $\mathcal{B}\downarrow_{\mathcal{A}}\mathcal{C}$.
\item [2.]$Av(\mathcal{I},\mathcal{B}\cup\mathcal{C})$ is orthogonal to $\mathcal{B}$ and to $\mathcal{C}$.
\item [3.]If $(B_i)_{i<3}$ are sets such that:
\begin{itemize}
\item [(a)]$B_0\downarrow_{\mathcal{A}}\mathcal{B}\cup\mathcal{C}$.
\item [(b)]$B_1\downarrow_{\mathcal{B}\cup B_0}B_2\cup\mathcal{C}$.
\item [(c)]$B_2\downarrow_{\mathcal{C}\cup B_0}B_1\cup\mathcal{B}$.
\end{itemize}
Then, $$tp(\mathcal{I},\mathcal{B}\cup\mathcal{C}) \vdash tp(\mathcal{I},\mathcal{B}\cup\mathcal{C}\cup_{i<3}B_i).$$
\end{enumerate}
For every $\xi\in (J_f)_{<\lambda}$ and every $\eta\in (J_f)_{\lambda}$ ($(J_f)_{<\lambda}$ and $(J_f)_{\lambda}$ are given by the definition of $t_\alpha$ at the beginning of the section Preliminaries), let $\mathcal{B}_\xi\cong_\mathcal{A}\mathcal{B}$, $\mathcal{A}\preceq \mathcal{B}_\xi$, and
$\mathcal{C}_\eta\cong_\mathcal{A}\mathcal{C}$, $\mathcal{A}\preceq \mathcal{C}_\eta$, such that the models $(\mathcal{B}_\xi)_{\xi\in (J_f)_{<\lambda}}$ and $(\mathcal{C}_\eta)_{\eta\in (J_f)_{\lambda}}$ satisfy the following:
\begin{itemize}
\item $\mathcal{B}_\xi\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\wedge\zeta\neq\xi\}.$
\item $\mathcal{C}_\eta\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\wedge\theta\neq\eta\}.$
\end{itemize}
Notice that all $\xi\in (J_f)_{<\lambda}$ and $\eta\in (J_f)_{\lambda}$ satisfy $$\mathcal{B}_\xi\cup\mathcal{C}_\eta\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\wedge\zeta\neq\xi\wedge\theta\neq\eta\}.$$
For all $\eta\in (J_f)_{\lambda}$ and every $\xi<\eta$ denote by $H_\eta$ and $H_\xi$ the isomorphisms $H_\eta:\mathcal{C}\rightarrow\mathcal{C}_\eta$, and $H_\xi:\mathcal{B}\rightarrow\mathcal{B}_\xi$, such that $H_\eta\restriction \mathcal{A}=H_\xi\restriction \mathcal{A}=id$.
\begin{fact}\label{F.4.4}
Let $H'_{\xi\eta}:\mathcal{C}\cup\mathcal{B}\rightarrow\mathcal{C}_\eta\cup\mathcal{B}_\xi$ be defined by $H'_{\xi\eta}\restriction {\mathcal{C}}=H_\eta$ and $H'_{\xi\eta}\restriction {\mathcal{B}}=H_\xi$. Then $H'_{\xi\eta}$ is an elementary map.
\end{fact}
\begin{proof}
By the way the models $\mathcal{C}_\eta$ and $\mathcal{B}_\xi$ were chosen, we know that $\mathcal{B}_\xi\downarrow_\mathcal{A}\mathcal{C}_\eta$. Since $H_\eta$ is elementary, there is an automorphism $F$ of the monster model that extends $H_\eta$, so $F^{-1}(\mathcal{B}_\xi)\downarrow_\mathcal{A}\mathcal{C}$. Since $\mathcal{B}$ and $\mathcal{B}_\xi$ are isomorphic, $tp(\mathcal{B},\mathcal{A})=tp(\mathcal{B}_\xi,\mathcal{A})$. On the other hand, since $F$ is an automorphism, we conclude that $tp(\mathcal{B},\mathcal{A})=tp(F^{-1}(\mathcal{B}_\xi),\mathcal{A})$. Since $F^{-1}(\mathcal{B}_\xi)\downarrow_\mathcal{A}\mathcal{C}$, $\mathcal{B}\downarrow_\mathcal{A}\mathcal{C}$, and $tp(\mathcal{B},\mathcal{A})$ is stationary, we conclude that $tp(\mathcal{B},\mathcal{C})=tp(F^{-1}(\mathcal{B}_\xi),\mathcal{C})$. Therefore $tp((\mathcal{B}\cup\mathcal{C}),\emptyset)=tp(\mathcal{B}_\xi\cup\mathcal{C}_\eta,\emptyset)$.
\end{proof}
Let $F_{\xi\eta}$ be an automorphism of the monster model that extends $H'_{\xi\eta}$, and denote the sequence $\mathcal{I}$ by $\{w_\alpha|\alpha<\kappa\}$. For all $\eta\in (J_f)_{\lambda}$ and every $\xi<\eta$, let $I_{\xi\eta}=\{b_\alpha|\alpha<c_f(\eta)\}$ be an indiscernible sequence over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$
of size $c_f(\eta)$ that is independent over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$ and satisfies:
\begin{itemize}
\item $tp(I_{\xi\eta},\mathcal{B}_\xi\cup\mathcal{C}_\eta)=tp(F_{\xi\eta}(\mathcal{I}\restriction c_f(\eta)),\mathcal{B}_\xi\cup\mathcal{C}_\eta)$.
\item $I_{\xi\eta}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|\zeta\neq\xi\vee\theta\neq\eta\}$.
\end{itemize}
Therefore, there is an elementary embedding $G:\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup F_{\xi\eta}(\mathcal{I}\restriction c_f(\eta))\rightarrow \mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I_{\xi\eta}$ given by $G\restriction\mathcal{B}_\xi\cup\mathcal{C}_\eta=id$ and $G(F_{\xi\eta}(\mathcal{I}\restriction c_f(\eta)))=I_{\xi\eta}$. So the map $H_{\xi\eta}:\mathcal{B}\cup\mathcal{C}\cup \mathcal{I}\restriction c_f(\eta)\rightarrow \mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I_{\xi\eta}$ given by $H_{\xi\eta}=G\circ F_{\xi\eta}$ is elementary.
\begin{remark}\label{R.4.4a}
$\mathcal{B}_\xi$, $\mathcal{C}_\eta$, and $I_{\xi\eta}$ satisfy the following:
\begin{enumerate}
\item [1.]$Av(I_{\xi\eta},\mathcal{B}_\xi\cup\mathcal{C}_\eta)$ is orthogonal to $\mathcal{B}_\xi$ and to $\mathcal{C}_\eta$.
\item [2.]If $(B_i)_{i<3}$ are sets such that:
\begin{itemize}
\item [(a)]$B_0\downarrow_{\mathcal{A}}\mathcal{B}_\xi\cup\mathcal{C}_\eta$.
\item [(b)]$B_1\downarrow_{\mathcal{B}_\xi\cup B_0}B_2\cup\mathcal{C}_\eta$.
\item [(c)]$B_2\downarrow_{\mathcal{C}_\eta\cup B_0}B_1\cup\mathcal{B}_\xi$.
\end{itemize}
Then, $$tp(I_{\xi\eta},\mathcal{B}_\xi\cup\mathcal{C}_\eta) \vdash tp(I_{\xi\eta},\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup_{i<3}B_i).$$
\item [3.]$I_{\xi\eta}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|\zeta\neq\xi\vee\theta\neq\eta\}$.
\end{enumerate}
\end{remark}
\begin{definition}\label{D.4.5}
Let $\Gamma_f$ be the set $\bigcup\{\mathcal{B}_\xi,\mathcal{C}_\eta,I_{\xi\eta}|\xi\in (J_f)_{<\lambda}\wedge\eta\in (J_f)_\lambda \wedge \xi<\eta\}$ and let $\mathcal{A}^f$ be the $a$-primary model over $\Gamma_f$.
Let $\Gamma_f^\alpha$ be the set $\bigcup\{\mathcal{B}_\xi,\mathcal{C}_\eta,I_{\xi\eta}|\xi,\eta\in J_f^\alpha \wedge \xi<\eta\}$, where $J_f^\alpha=\{\eta\in J_f|rang(\eta)\subset \lambda\times (\beta)^4 \text{ for some }\beta<\alpha\}$ (as in Remark \ref{R.2.7a}).
\end{definition}
\begin{fact}\label{F.4.6}
If $\alpha$ is such that $\alpha^{\lambda}<f(\alpha)$ and $sup(\{c_f(\eta)\}_{\eta\in J_f^\alpha})<\alpha$, then $|\Gamma_f^{\alpha+1}|= f(\alpha)$.
\end{fact}
\begin{proof}
Since $\Gamma_f^\alpha=\cup\{\mathcal{B}_\xi,\mathcal{C}_\eta,I_{\xi\eta}|\xi\in (J^\alpha_f)_{<\lambda}\wedge\eta\in (J^\alpha_f)_\lambda \wedge \xi<\eta\}$, we know that $|\Gamma_f^{\alpha+1}|\leq |J^{\alpha+1}_f|\cdot sup(\{c_f(\eta)\}_{\eta\in (J_f^{\alpha+1})_{\lambda}})$. Since $|J_f^{\alpha+1}|\leq \alpha^{\lambda}<f(\alpha)$ and $sup(\{c_f(\eta)\}_{\eta\in J_f^\alpha})<\alpha<f(\alpha)$, we get $|\Gamma_f^{\alpha+1}|\leq max(f(\alpha), sup(\{c_f(\eta)\}_{\eta\in J_f^{\alpha+1}\backslash J_f^\alpha}))$. But every $\eta\in J_f^{\alpha+1}\backslash J_f^\alpha$ with domain $\lambda$ has $rang(\eta_1)=\lambda$ and $f(\alpha)=c_f(\eta)$, otherwise $rang(\eta_5)<\alpha$ and $\eta\in J_f^\alpha$. We conclude $|\Gamma_f^{\alpha+1}|= f(\alpha)$.
\end{proof}
\begin{lemma}\label{L.4.7}
For every $\xi\in (J_f)_{<\lambda}$, $\eta\in (J_f)_\lambda$, $\xi<\eta$, let $p_{\xi\eta}$ be the type $Av(I_{\xi\eta}\restriction \omega,I_{\xi\eta}\restriction\omega\cup\mathcal{B}_\xi\cup\mathcal{C}_\eta)$. If $c_f(\eta)>\omega$, then $dim(p_{\xi\eta},\mathcal{A}^f)=c_f(\eta)$.
\end{lemma}
\begin{proof}
Denote by $S$ the set $I_{\xi\eta}\restriction\omega\cup\mathcal{B}_\xi\cup\mathcal{C}_\eta$, so $p_{\xi\eta}=Av(I_{\xi\eta}\restriction\omega,S)$.
Suppose, towards a contradiction, that $dim(p_{\xi\eta},\mathcal{A}^f)\neq c_f(\eta)$. Since $I_{\xi\eta}\subset \mathcal{A}^f$, we have $dim(p_{\xi\eta},\mathcal{A}^f)\geq c_f(\eta)$, so $dim(p_{\xi\eta},\mathcal{A}^f)> c_f(\eta)$. Therefore, there is an independent sequence $I=\{a_i|i<c_f(\eta)^+\}$ over $S$ such that $I\subset \mathcal{A}^f$ and $a\models p_{\xi\eta}$ for all $a\in I$.
\begin{claim}\label{Claim 4.7.1}
$I_{\xi\eta}\restriction\omega\cup I$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$.
\end{claim}
\begin{proof}
We will show by induction on $\alpha$ that $I_{\xi\eta}\restriction\omega\cup \{a_i|i\leq \alpha\}$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$.
Case $\alpha=0$.
Since $a_0\models p_{\xi\eta}$, $tp(a_0,S)=Av(I_{\xi\eta}\restriction\omega,S)$, so $I_{\xi\eta}\restriction\omega\cup \{a_0\}$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$.
Suppose $\alpha$ is an ordinal such that for every $\beta<\alpha$, $I_{\xi\eta}\restriction\omega\cup \{a_i|i\leq \beta\}$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$. Therefore, $I_{\xi\eta}\restriction\omega\cup \{a_i|i< \alpha\}$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$. By the way $I$ was chosen, we know that $a_\alpha\downarrow_S\{a_i|i< \alpha\}$ and $a_\alpha\models p_{\xi\eta}$. Since $I_{\xi\eta}\restriction\omega\cup \{a_i|i< \alpha\}$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$, then $Av(I_{\xi\eta}\restriction\omega, S\cup\{a_i|i< \alpha\})=Av(I_{\xi\eta}\restriction\omega\cup \{a_i|i< \alpha\}, S\cup\{a_i|i< \alpha\})$, therefore $Av(I_{\xi\eta}\restriction\omega\cup \{a_i|i< \alpha\}, S\cup\{a_i|i< \alpha\})$ does not fork over $S$. Since $Av(I_{\xi\eta}\restriction\omega\cup \{a_i|i< \alpha\}, S\cup\{a_i|i< \alpha\})$ is stationary, we conclude that $tp(a_\alpha,S\cup\{a_i|i<\alpha\})=Av(I_{\xi\eta}\restriction\omega\cup\{a_i|i<\alpha\},S\cup\{a_i|i<\alpha\})$ and $I_{\xi\eta}\restriction\omega\cup\{a_i|i\leq\alpha\}$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$.
\end{proof}
In particular $I_{\xi\eta}\restriction\omega\cup I$ is indiscernible, and $I_{\xi\eta}$ is equivalent to $I$.
\begin{claim}\label{Claim 4.7.2}
$tp(I_{\xi\eta},\mathcal{B}_\xi\cup\mathcal{C}_\eta)\vdash tp(I_{\xi\eta},\Gamma_f\backslash I_{\xi\eta})$ and $I_{\xi\eta}$ is indiscernible over $\Gamma_f\backslash I_{\xi\eta}$.
\end{claim}
\begin{proof}
Define: $$B_0=\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}\cup\bigcup\{I_{rp}|r\neq\xi\wedge p\neq\eta\}$$
$$B_1=\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}\cup\bigcup\{I_{rp}|p\neq\eta\}$$
$$B_2=\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}\cup\bigcup\{I_{rp}|r\neq\xi\}$$
Notice that by the way we chose the sequences $I_{xy}$, for every $r<p$ it holds that $$I_{rp}\downarrow_{\mathcal{B}_r\cup\mathcal{C}_p}\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta,\theta\in J_f\}\cup\bigcup\{I_{\zeta\theta}|\zeta\neq r\vee\theta\neq p\}.$$
Let $J$ be a finite subset of $\{I_{rp}|r\neq \xi\wedge p\neq \eta\}$, say $J=\{I_i|i<m\}$. Then $$I_0\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}}\mathcal{B}_\xi\cup\mathcal{C}_\eta$$
and $$I_1\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}\cup I_0}\mathcal{B}_\xi\cup\mathcal{C}_\eta,$$
by transitivity $$I_0\cup I_1\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}}\mathcal{B}_\xi\cup\mathcal{C}_\eta.$$
In general, if $n<m-1$ is such that $$\{I_i|i\leq n\}\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}}\mathcal{B}_\xi\cup\mathcal{C}_\eta,$$ then since $$I_{n+1}\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}\cup\bigcup \{I_i|i\leq n\}}\mathcal{B}_\xi\cup\mathcal{C}_\eta$$ we conclude by transitivity that $$\{I_i|i\leq n+1\}\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}}\mathcal{B}_\xi\cup\mathcal{C}_\eta.$$
We conclude $$\bigcup J\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}}\mathcal{B}_\xi\cup\mathcal{C}_\eta.$$ Because of the finite character we get that $$\bigcup \{I_{rp}|r\neq \xi\wedge p\neq \eta\}\downarrow_{\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}}\mathcal{B}_\xi\cup\mathcal{C}_\eta.$$
By the way we chose the models $\mathcal{B}_x$ and $\mathcal{C}_y$, we know that $$\mathcal{B}_\xi\cup\mathcal{C}_\eta\downarrow_\mathcal{A} \bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\},$$ by transitivity we conclude
$B_0\downarrow_{\mathcal{A}}\mathcal{B}_\xi\cup\mathcal{C}_\eta.$
Notice that for every $p\neq \eta$ with $\xi< p$, we have $$I_{\xi p}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_p}\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta,\theta\in J_f\}\cup\bigcup\{I_{\zeta\theta}|\zeta\neq \xi\vee\theta\neq p\}$$ so $$I_{\xi p}\downarrow_{\mathcal{B}_\xi\cup B_0}\mathcal{C}_\eta\cup\bigcup\{I_{\zeta\theta}|\zeta\neq \xi\vee\theta\neq p\}.$$ From this we can conclude, in a similar way as before, that for every finite $J\subseteq \{I_{\xi p}|p\neq \eta\}$ it holds that $$\bigcup J\downarrow_{\mathcal{B}_\xi\cup B_0}\mathcal{C}_\eta\cup\bigcup\{I_{\zeta\theta}|\zeta\neq \xi\}.$$ Because of the finite character we get that $$\bigcup \{I_{\xi p}|p\neq \eta\}\downarrow_{\mathcal{B}_\xi\cup B_0}\mathcal{C}_\eta\cup\bigcup\{I_{\zeta\theta}|\zeta\neq \xi\}.$$ Since $\bigcup\{\mathcal{B}_r\cup\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}\subseteq B_0$ and $\bigcup\{I_{r p}|r\neq \xi\wedge p\neq \eta\}\subseteq B_0$, we conclude $$B_1\downarrow_{\mathcal{B}_\xi\cup B_0}\mathcal{C}_\eta\cup B_2.$$
Using a similar argument, it can be proved that $$B_2\downarrow_{\mathcal{C}_\eta\cup B_0}\mathcal{B}_\xi\cup B_1.$$
To summarize, the following holds:
\begin{itemize}
\item $B_0\downarrow_{\mathcal{A}}\mathcal{B}_\xi\cup\mathcal{C}_\eta$,
\item $B_1\downarrow_{\mathcal{B}_\xi\cup B_0}\mathcal{C}_\eta\cup B_2$,
\item $B_2\downarrow_{\mathcal{C}_\eta\cup B_0}\mathcal{B}_\xi\cup B_1$,
\end{itemize}
By Remark \ref{R.4.4a} item 2, we can conclude that $tp(I_{\xi\eta},\mathcal{B}_\xi\cup\mathcal{C}_\eta) \vdash tp(I_{\xi\eta},\Gamma_f\backslash I_{\xi\eta})$, and since $I_{\xi\eta}$ is indiscernible over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$, then $I_{\xi\eta}$ is indiscernible over $\Gamma_f\backslash I_{\xi\eta}$.
\end{proof}
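The chains of reductions in the proof above use only standard properties of the nonforking relation $\downarrow$; for the reader's convenience, the two invoked most often (finite character and transitivity, stated in the form in which they are used) can be recorded as follows.

```latex
% Finite character: independence from a set is witnessed by its finite subsets.
\[
A\downarrow_B C \quad\Longleftrightarrow\quad
A'\downarrow_B C' \ \text{ for every finite } A'\subseteq A \text{ and finite } C'\subseteq C.
\]
% Transitivity, in the form used in the inductions above:
\[
A\downarrow_B C \ \text{ and } \ A'\downarrow_{B\cup A} C
\quad\Longrightarrow\quad A\cup A'\downarrow_B C.
\]
```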
By Claim \ref{Claim 4.7.1} we know that $tp(I,\mathcal{B}_\xi\cup\mathcal{C}_\eta)=tp(I_{\xi\eta},\mathcal{B}_\xi\cup\mathcal{C}_\eta)$, therefore by Claim \ref{Claim 4.7.2} $tp(I,\mathcal{B}_\xi\cup\mathcal{C}_\eta)\vdash tp(I_{\xi\eta},\Gamma_f\backslash I_{\xi\eta})$. We conclude that $tp(I,\mathcal{B}_\xi\cup\mathcal{C}_\eta)\vdash tp(I,\Gamma_f\backslash I_{\xi\eta})$ and since $I$ is indiscernible over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$, then $I$ is indiscernible over $\Gamma_f\backslash I_{\xi\eta}$.
\begin{claim}\label{Claim 4.7.3}
There are $I',I^*\subseteq I$ such that $|I'|=c_f(\eta)^+$ and $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}I_{\xi\eta}$.
\end{claim}
\begin{proof}
Let us denote the elements of $I_{\xi\eta}$ by $b_i$, $I_{\xi\eta}=\{b_i|i<c_f(\eta)\}$.
Since $T$ is superstable, we know that for every $\alpha<c_f(\eta)$ there is a finite $B_\alpha\subseteq I\cup \{b_i|i<\alpha\}$ such that $b_\alpha\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup B_\alpha}I\cup \{b_i|i<\alpha\}$. Define $I^*=(\bigcup_{\alpha<c_f(\eta)}B_\alpha)\cap I$ and $I'=I\backslash I^*$; notice that $|I^*|\leq c_f(\eta)$, so $|I'|=c_f(\eta)^+$. Because of the finite character, to prove that $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}I_{\xi\eta}$, it is enough to prove that $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}\{b_i|i<\alpha\}$ holds for every $\alpha<c_f(\eta)$. Let us prove this by induction on $\alpha>0$.
Case: $\alpha=1$.
By the way $B_0$ was chosen, we know that $b_0\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup B_0}I$, and this implies $$I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}b_0.$$
Case: $\alpha=\beta+1$.
Suppose $\beta$ is such that $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}\{b_i|i<\beta\}$ holds. By the way $B_\beta$ was chosen, we know that $b_\beta\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup B_\beta}I\cup \{b_i|i<\beta\}$ and $B_\beta\subseteq I\cup \{b_i|i<\beta\}$. Therefore $b_\beta\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I ^*\cup \{b_i|i<\beta\}}I'$ and by the induction hypothesis and transitivity, we conclude that $\{b_i|i\leq\beta\}\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I ^*}I'$. So $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}\{b_i|i<\alpha\}$.
Case: $\alpha$ is a limit ordinal.
Suppose $\alpha$ is a limit ordinal such that $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}\{b_i|i<\beta\}$ holds for every $\beta<\alpha$. Therefore, for every finite $A\subseteq \{b_i|i<\alpha\}$ we know that $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}A$. Because of the finite character, we conclude that $I'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}\{b_i|i<\alpha\}$.
\end{proof}
\begin{claim}\label{Claim 4.7.4}
$I'$ is indiscernible over $\Gamma_f\cup I^*$; in particular, $I'$ is indiscernible over $\Gamma_f$.
\end{claim}
\begin{proof}
Let $\{c_0,c_1,\ldots , c_n\}$ and $\{c_0',c'_1,\ldots , c'_n\}$ be disjoint subsets of $I'$ with $n+1$ elements, such that $i\neq j$ implies $c_i\neq c_j$ and $c'_i\neq c'_j$. We will prove that the following holds for every $m\leq n$: $$tp(\{c'_0,\ldots, c_{m-1}',c_m,c_{m+1},\ldots, c_n\},\Gamma_f\cup I^*)=tp(\{c_0',\ldots, c_{m-1}',c_m',c_{m+1},\ldots , c_n\},\Gamma_f\cup I^*).$$
By Claim \ref{Claim 4.7.3}, we know that $\{c_0,c_1,\ldots , c_n\}\cup \{c_0',c_1',\ldots , c_n'\}\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*}I_{\xi\eta}$, so $c_m\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*\cup \{c'_0,\ldots,c_{m-1}',c_{m+1},\ldots , c_n\}}I_{\xi\eta}$ and $c_m'\downarrow_{(\Gamma_f\backslash I_{\xi\eta})\cup I^*\cup \{c'_0,\ldots,c'_{m-1}, c_{m+1}, \ldots, c_n\}}I_{\xi\eta}$.
Since $\{c_m,c'_m\}\cup I^*\cup \{c'_0,\ldots,c_{m-1}',c_{m+1}, \ldots, c_n\}$ is indiscernible over $(\Gamma_f\backslash I_{\xi\eta})$ and $\{c_0,c_1,\ldots , c_n\}\cap\{c_0',c'_1,\ldots , c'_n\}=\emptyset$, we have $$c_m\models Av(I^*\cup \{c'_0,\ldots,c_{m-1}',c_{m+1},\ldots, c_n\},(\Gamma_f\backslash I_{\xi\eta})\cup I^*\cup \{c'_0,\ldots,c_{m-1}',c_{m+1},\ldots, c_n\})$$ and $$c'_m\models Av(I^*\cup \{c'_0,\ldots,c_{m-1}',c_{m+1},\ldots, c_n\},(\Gamma_f\backslash I_{\xi\eta})\cup I^*\cup \{c_0',\ldots,c_{m-1}',c_{m+1},\ldots , c_n\}).$$ Since the type $Av(I^*\cup \{c_0',\ldots,c_{m-1}',c_{m+1},\ldots , c_n\},(\Gamma_f\backslash I_{\xi\eta})\cup I^*\cup \{c_0',\ldots,c_{m-1}',c_{m+1},\ldots , c_n\})$ is stationary, we conclude that $$tp(c_m,\Gamma_f\cup I^*\cup \{c_0',\ldots,c_{m-1}',c_{m+1},\ldots , c_n\})=tp(c'_m,\Gamma_f\cup I^*\cup \{c_0',\ldots ,c_{m-1}',c_{m+1},\ldots, c_n\})$$ and $$tp(\{c'_0,\ldots, c_{m-1}',c_m,c_{m+1},\ldots, c_n\},\Gamma_f\cup I^*)=tp(\{c_0',\ldots, c_{m-1}',c_m',c_{m+1},\ldots , c_n\},\Gamma_f\cup I^*)$$ as we wanted.
Since $$tp(\{c'_0,\ldots, c_{m-1}',c_m,c_{m+1},\ldots, c_n\},\Gamma_f\cup I^*)=tp(\{c_0',\ldots, c_{m-1}',c_m',c_{m+1},\ldots , c_n\},\Gamma_f\cup I^*)$$ holds for every $m\leq n$, we conclude that $$tp(\{c_0,\ldots, c_n\},\Gamma_f\cup I^*)=tp(\{c_0',\ldots, c'_n\},\Gamma_f\cup I^*).$$ To finish the proof, let $\{c_0,c_1,\ldots , c_n\}$ and $\{c_0',c'_1,\ldots , c'_n\}$ be subsets of $I'$ with $n+1$ elements, such that $i\neq j$ implies $c_i\neq c_j$ and $c'_i\neq c'_j$. Since $I'$ is infinite, there is $\{c_0'',c''_1,\ldots , c''_n\}\subseteq I'$ such that $\{c_0'',c''_1,\ldots , c''_n\}\cap(\{c_0,c_1,\ldots , c_n\}\cup\{c_0',c'_1,\ldots , c'_n\})=\emptyset$. Therefore $$tp(\{c_0,\ldots, c_n\},\Gamma_f\cup I^*)=tp(\{c_0'',\ldots, c''_n\},\Gamma_f\cup I^*)=tp(\{c_0',\ldots, c'_n\},\Gamma_f\cup I^*),$$ and we conclude that $I'$ is indiscernible over $\Gamma_f\cup I^*$.
\end{proof}
Let $J\subset \mathcal{A}^f$ be a maximal indiscernible set over $\Gamma_f$ such that $I'\subseteq J$. By Lemma \ref{L.4.2}, $|J|+\kappa(T)=dim(J,\Gamma_f,\mathcal{A}^f)+\kappa(T)$. Since $T$ is superstable, $\kappa(T)<\omega<|J|$ and we conclude that $\kappa(T)<dim(J,\Gamma_f,\mathcal{A}^f)+\kappa(T)$. Therefore $\kappa(T)<dim(J,\Gamma_f,\mathcal{A}^f)$, and by Lemma \ref{L.4.2}, $dim(J,\Gamma_f,\mathcal{A}^f)=|J|$. So $dim(J,\Gamma_f,\mathcal{A}^f)>\omega$, a contradiction with Theorem \ref{T.4.3}.
\end{proof}
One of the key lemmas for the proof of the main results (Theorem \ref{T.4.14}) is Lemma \ref{L.4.10} (below). To prove this lemma, we will need the following lemma about $a$-saturated models and the definition of nice subsets of $\Gamma_f$.
\begin{lemma}\label{L.4.8}
If $\mathcal{N}$ is an $a$-saturated model, then for every finite $C$ and $a$, there is $b\in \mathcal{N}$ such that $stp(b,C\cap\mathcal{N})=stp(a,C\cap\mathcal{N})$ and $b\downarrow_{C\cap\mathcal{N}}C$.
\end{lemma}
\begin{proof}
Since $\mathcal{N}$ is $a$-saturated, there is a sequence $(b_i)_{i<\omega}\subseteq \mathcal{N}$, independent over $\mathcal{N}\cap C$, such that $stp(b_i,\mathcal{N}\cap C)=stp(a,\mathcal{N}\cap C)$ holds for all $i<\omega$. On the other hand, $T$ is superstable, so there is $i<\omega$ such that $\bigcup_{i\leq j}b_j\downarrow_{\mathcal{N}\cap C\cup \bigcup_{j< i}b_j}C$. Therefore $b_i\downarrow_{\mathcal{N}\cap C\cup \bigcup_{j< i}b_j}C$ holds for some $i<\omega$; by transitivity we conclude that there is $i<\omega$ such that $b_i\downarrow_{\mathcal{N}\cap C}C$.
\end{proof}
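The proof above, like Claims \ref{Claim 4.7.3} and \ref{Claim 4.10.1}, uses superstability only through the following standard local character of nonforking; we record it explicitly since it is invoked repeatedly in what follows.

```latex
% Local character for superstable T: every finite tuple is independent
% from any set over a finite subset of that set.
\[
\text{for every finite tuple } a \text{ and every set } B,
\text{ there is a finite } B_0\subseteq B \text{ such that } a\downarrow_{B_0}B.
\]
```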
Now we define the nice subsets of $\Gamma_f$. These subsets have a couple of properties that will be useful when we study the model $\mathcal{A}^f$.
\begin{definition}\label{D.4.9}
We say $X\subseteq \Gamma_f$ is nice if the following holds.
\begin{enumerate}
\item [1.]If $X\cap I_{\xi\eta}\neq \emptyset$, then $\mathcal{B}_\xi,\mathcal{C}_\eta\subset X$.
\item [2.]If $\mathcal{B}_\xi\cap X\neq \emptyset$, then $\mathcal{B}_\xi\subset X$.
\item [3.]If $\mathcal{C}_\eta\cap X\neq \emptyset$, then $\mathcal{C}_\eta\subset X$.
\item [4.]If $\xi<\eta$ and $\mathcal{B}_\xi,\mathcal{C}_\eta\subset X$, then $X\cap I_{\xi\eta}$ is infinite.
\end{enumerate}
\end{definition}
The argument for the next lemma is a variation of the argument used in the fourth section of \cite{HS99}.
\begin{lemma}\label{L.4.10}
Let $Z$ be a nice subset of $\Gamma_f$ and $d\in\Gamma_f\backslash Z$. Then for every finite subset $B$ of $Z$ there is $f\in Saut(\mathcal{M},B)$ such that $f(d)\in Z$.
\end{lemma}
\begin{proof}
Since $d$ is finite, the sets $\{I_{\xi\eta}\subseteq \Gamma_f|d\cap I_{\xi\eta}\neq \emptyset\}$, $\{\mathcal{B}_\xi\subseteq \Gamma_f|d\cap \mathcal{B}_\xi\neq \emptyset\}$, and $\{\mathcal{C}_\eta\subseteq \Gamma_f|d\cap \mathcal{C}_\eta\neq \emptyset\}$ are finite. Denote these sets by $Y_I$, $Y_\mathcal{B}$, and $Y_\mathcal{C}$, respectively.\\
Notice that since $Z$ is nice and $d\in\Gamma_f\backslash Z$, for all $\xi\in (J_f)_{<\lambda}$, $d\cap \mathcal{B}_\xi\neq \emptyset$ implies $I_{\xi\eta}\notin Z$ for all $\eta\in (J_f)_\lambda$, $\xi<\eta$. Similarly, for all $\eta\in (J_f)_\lambda$, $d\cap \mathcal{C}_\eta\neq \emptyset$ implies that $I_{\xi\eta}\notin Z$ for all $\xi\in (J_f)_{<\lambda}$, $\xi<\eta$. Therefore, there exists $d'\in \Gamma_f\backslash Z$ such that $d\subseteq d'$ and $\{I_{\xi\eta}\subseteq \Gamma_f|d'\cap I_{\xi\eta}\neq \emptyset\}$ is non-empty. Without loss of generality we can assume that $Y_I\neq\emptyset$. Notice that if $\xi\in (J_f)_{<\lambda}$ and $\eta\in (J_f)_\lambda$, $\xi<\eta$, are such that $I_{\xi\eta}\cap d\neq\emptyset$ and $\mathcal{B}_\xi\not\subseteq Z$, then there is $d'\in \Gamma_f\backslash Z$ such that $d\subseteq d'$ and $\mathcal{B}_\xi\cap d'\neq\emptyset$. Without loss of generality we can assume that for all $I_{\xi\eta}\in Y_I$ either $\mathcal{B}_\xi\subseteq Z$ or $\mathcal{B}_\xi\cap d\neq\emptyset$. Using the same argument, without loss of generality we can assume that for all $I_{\xi\eta}\in Y_I$ either $\mathcal{C}_\eta\subseteq Z$ or $\mathcal{C}_\eta\cap d\neq\emptyset$.
From the previous discussion we can conclude that we only have the following cases for the sets $Y_I$, $Y_\mathcal{C}$, and $Y_\mathcal{B}$:
\begin{enumerate}
\item [1.]$Y_I\neq\emptyset$, $Y_\mathcal{B}=Y_\mathcal{C}=\emptyset$, and $\forall I_{\xi\eta}\in Y_I(\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq Z)$.
\item [2.]$Y_I,Y_\mathcal{C}\neq\emptyset$, $Y_\mathcal{B}=\emptyset$, and $\forall I_{\xi\eta}\in Y_I(\mathcal{B}_\xi\subseteq Z)$.
\item [3.]$Y_I,Y_\mathcal{B}\neq\emptyset$, $Y_\mathcal{C}=\emptyset$, and $\forall I_{\xi\eta}\in Y_I(\mathcal{C}_\eta\subseteq Z)$.
\item [4.]$Y_I,Y_\mathcal{C},Y_\mathcal{B}\neq\emptyset$.
\end{enumerate}
It is clear that cases 1, 2, and 3 follow from case 4. We will give only the proofs of cases 1 and 4.
Case 1.\\
In this case we will prove something stronger. By induction on $|Y_I|$ we will show that there is $f\in Saut(\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I\}\cup B)$ such that $f(d)\in Z$.\\ \\
If $|Y_I|=1$:\\
Let us denote by $I_{\xi\eta}$ the only element of $Y_I$. Since $\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq Z$, the set $Z\cap I_{\xi\eta}= I'_{\xi\eta}$ is infinite and $I_{\xi\eta}\neq I'_{\xi\eta}$. Let $I^*=I'_{\xi\eta}\cap B$. By the way we chose the models $\mathcal{B}_x,\mathcal{C}_y$ and the sequences $I_{xy}$, we know that
$I_{\xi\eta}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}\Gamma_f\backslash I_{\xi\eta}$, so
$I_{\xi\eta}\backslash I^*\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I^*}\Gamma_f\backslash I_{\xi\eta}$.
By Claim \ref{Claim 4.7.2}, $I_{\xi\eta}$ is indiscernible over $\Gamma_f\backslash I_{\xi\eta}$, so there is $d'\in I'_{\xi\eta}\backslash I^*$ such that $stp(d,\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I^*)=stp(d',\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I^*)$. Therefore, we know that $$d\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I^*} I^*\cup (\Gamma_f\backslash I_{\xi\eta})$$ and $$d'\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I^*}I^*\cup (\Gamma_f\backslash I_{\xi\eta}).$$ Since $B\subseteq I^*\cup (\Gamma_f\backslash I_{\xi\eta})$, we conclude that $d$ and $d'$ have the same strong type over $\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I\}\cup B$ and there is $f\in Saut(\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta| \zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I\}\cup B)$ such that $f(d)=d'$, so $f(d)\in Z$.\\ \\
Successor case.\\
Let us suppose that if $|Y_I|=n$, then there is $f\in Saut(\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I\}\cup B)$ such that $f(d)\in Z$.\\ \\
Let $Y_I$ be such that $|Y_I|=n+1$. Let $\xi\in (J_f)_{<\lambda}$ and $\eta\in (J_f)_{\lambda}$ be such that $I_{\xi\eta}\in Y_I$, and let $d_0=d\cap I_{\xi\eta}$. By the case $|Y_I|=1$, there is $g_0\in Saut(\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|\zeta\neq\xi\vee\theta\neq\eta\}\cup B)$ such that $g_0(d_0)\in Z$. Since $|Y_I\backslash\{I_{\xi\eta}\}|=n$, by the induction hypothesis there is $g_1\in Saut(\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I\}\cup B\cup I_{\xi\eta})$ such that $g_1(d\backslash d_0)\in Z$. We conclude that $f=g_1\circ g_0$ satisfies $f(d)\in Z$ and $f\in Saut (\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I\}\cup B)$.\\
Case 4.
\begin{claim}\label{Claim 4.10.1}
For all $\mathcal{B}_\xi\subseteq \Gamma_f$ and $\mathcal{C}_\eta\subseteq \Gamma_f$, $\xi<\eta$, there are $x_\eta\subset \mathcal{C}_\eta$ and $y_\xi\subset \mathcal{B}_\xi$, both finite, that satisfy $I_{\xi\eta}\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup \mathcal{C}_\eta$.
\end{claim}
\begin{proof}
Let $I_{\xi\eta}=(r_j)_{j<|I_{\xi\eta}|}$, by the finite character, it is enough to show that there are $x_\eta\subset \mathcal{C}_\eta$ and $y_\xi\subset \mathcal{B}_\xi$, both finite, such that for every $k<|I_{\xi\eta}|$ it holds $(r_j)_{j\leq k}\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup\mathcal{C}_\eta$. We will prove this by induction on $k$.\\
Since $T$ is superstable there are $x_\eta\subset \mathcal{C}_\eta$ and $y_\xi\subset \mathcal{B}_\xi$, both finite, such that $r_0\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup\mathcal{C}_\eta$. Since $I_{\xi\eta}$ is indiscernible over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$, it holds that $r_j\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup\mathcal{C}_\eta$, for every $j<|I_{\xi\eta}|$. Fix $x_\eta$ and $y_\xi$ such that $r_j\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup\mathcal{C}_\eta$, for all $j<|I_{\xi\eta}|$.\\
Suppose $k$ is such that for every $\theta<k$, $(r_j)_{j\leq \theta}\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup\mathcal{C}_\eta$, so by the finite character we conclude $(r_j)_{j< k}\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup\mathcal{C}_\eta$.
Since $I_{\xi\eta}$ is independent over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$, it holds that $r_{k}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}(r_j)_{j< k}$. By the way $x_\eta$ and $y_\xi$ were chosen, we know that $r_k\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup \mathcal{C}_\eta$; then by transitivity $r_{k}\downarrow_{x_\eta\cup y_\xi\cup (r_j)_{j< k}}\mathcal{B}_\xi\cup\mathcal{C}_\eta$. By transitivity we conclude that $(r_j)_{j\leq k}\downarrow_{x_\eta\cup y_\xi}\mathcal{B}_\xi\cup\mathcal{C}_\eta$.
\end{proof}
By the way we chose the models $\mathcal{B}_x,\mathcal{C}_y$ and the sequences $I_{xy}$, we know that $I_{\xi\eta}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}\Gamma_f\backslash I_{\xi\eta}$. Because of the previous claim there are $x_\eta\subset \mathcal{C}_\eta$ and $y_\xi\subset \mathcal{B}_\xi$, both finite, such that $I_{\xi\eta}\downarrow_{x_\eta\cup y_\xi}\Gamma_f\backslash I_{\xi\eta}$. Without loss of generality, we can assume that $x_\eta\subseteq d\cap \mathcal{C}_\eta$ and $y_\xi\subseteq\mathcal{B}_\xi\cap B$ hold for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\notin Y_\mathcal{B}$, $\mathcal{C}_\eta\in Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. Therefore $I_{\xi\eta}\downarrow_{(B\cap\mathcal{B}_\xi)\cup (d\cap \mathcal{C}_\eta)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\notin Y_\mathcal{B}$, $\mathcal{C}_\eta\in Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. Without loss of generality, we can assume that $y_\xi\subseteq d\cap \mathcal{B}_\xi$ and $x_\eta\subseteq\mathcal{C}_\eta\cap B$ hold for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\in Y_\mathcal{B}$, $\mathcal{C}_\eta\notin Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. Therefore $I_{\xi\eta}\downarrow_{(B\cap\mathcal{C}_\eta)\cup (d\cap \mathcal{B}_\xi)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\in Y_\mathcal{B}$, $\mathcal{C}_\eta\notin Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. Without loss of generality, we can assume that $y_\xi\subseteq \mathcal{B}_\xi\cap B$ and $x_\eta\subseteq\mathcal{C}_\eta\cap B$ hold for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\notin Y_\mathcal{B}$, $\mathcal{C}_\eta\notin Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. Therefore $I_{\xi\eta}\downarrow_{B\cap(\mathcal{C}_\eta\cup \mathcal{B}_\xi)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\notin Y_\mathcal{B}$, $\mathcal{C}_\eta\notin Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$.
Since $T$ is superstable, we know there is a finite $D\subset \mathcal{A}$ such that $B\downarrow_{D}\mathcal{A}$. Without loss of generality we can assume $D\subset B\cap \mathcal{A}$, so $$B\downarrow_{B\cap \mathcal{A}}\mathcal{A}.$$
Let us define $Y_I'=\{I_{\xi\eta}\in Y_I|\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq Z\}$, and let $e=d\cap \bigcup Y_I'$. By Case 1, we know that there is $g\in Saut(\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I'\}\cup B)$ such that $
g(e)\in Z$. Let $B'=B\cup g(e)$ and $d^*=d\backslash e$. Since $I_{\xi\eta}\downarrow_{B\cap(\mathcal{C}_\eta\cup \mathcal{B}_\xi)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $I_{\xi\eta}\in Y_I'$, we know by transitivity that $e\downarrow_{B}\Gamma_f\backslash \bigcup Y_I'$. Since $B\downarrow_{B\cap \mathcal{A}}\mathcal{A}$, we conclude that $e\cup B\downarrow_{B\cap \mathcal{A}}\mathcal{A}$. Since $g\in Saut(\mathcal{M},\bigcup\{\mathcal{B}_\zeta,\mathcal{C}_\theta|\zeta\in (J_f)_{<\lambda}\wedge\theta\in (J_f)_\lambda\}\cup\bigcup\{I_{\zeta\theta}|I_{\zeta\theta}\notin Y_I'\}\cup B)$ and $B'\cap \mathcal{A}=B\cap \mathcal{A}$, we conclude that
\begin{equation}
B'\downarrow_{B'\cap \mathcal{A}}\mathcal{A}
\end{equation}
Notice that $B\cap\mathcal{C}_\eta=B'\cap\mathcal{C}_\eta$, $B\cap\mathcal{B}_\xi=B'\cap\mathcal{B}_\xi$, $d\cap \mathcal{C}_\eta=d^*\cap \mathcal{C}_\eta$, and $d\cap \mathcal{B}_\xi=d^*\cap \mathcal{B}_\xi$ hold for all $\eta$ and $\xi$. Therefore:
\begin{itemize}
\item $I_{\xi\eta}\downarrow_{(B'\cap\mathcal{B}_\xi)\cup (d^*\cap \mathcal{C}_\eta)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\notin Y_\mathcal{B}$, $\mathcal{C}_\eta\in Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$
\item $I_{\xi\eta}\downarrow_{(B'\cap\mathcal{C}_\eta)\cup (d^*\cap \mathcal{B}_\xi)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\in Y_\mathcal{B}$, $\mathcal{C}_\eta\notin Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$
\end{itemize}
Define $d_0=d^*\cap(\bigcup Y_\mathcal{C}\cup \bigcup Y_\mathcal{B}\cup \bigcup\{I_{\xi\eta}|\mathcal{B}_\xi\in Y_\mathcal{B}\wedge\mathcal{C}_\eta\in Y_\mathcal{C}\})$.
Since $d^*$ is finite, there are only finitely many independent sequences $I_{\xi\eta}\in Y_I$ that satisfy $d^*\cap I_{\xi\eta}\neq \emptyset$ and $I_{\xi\eta}\cap d_0=\emptyset$. Let $\{I_i\}_{1\leq i<m}$ be an enumeration of these independent sequences such that there is $n$, $1\leq n<m$, that satisfies:
\begin{itemize}
\item if $I_i=I_{\xi\eta}$ and $i\leq n$, then $\mathcal{C}_\eta\in Y_\mathcal{C}$.
\item if $I_i=I_{\xi\eta}$ and $n<i$, then $\mathcal{B}_\xi\in Y_\mathcal{B}$.
\end{itemize}
Denote by $d_i$ the tuples $d^*\cap I_i$ for all $1\leq i<m$. For every $1\leq i<m$ there exist $\xi\in (J_f)_{<\lambda}$ and $\eta\in (J_f)_\lambda$ such that $I_i=I_{\xi\eta}$; let us denote by $\mathcal{B}_i$ and $\mathcal{C}_i$ the models $\mathcal{B}_\xi$ and $\mathcal{C}_\eta$, respectively. Notice that $i\neq j$ does not imply $\mathcal{B}_i\neq \mathcal{B}_j$ or $\mathcal{C}_i\neq \mathcal{C}_j$.
By the way we chose the models $\mathcal{B}_x, \mathcal{C}_y$ and the sequences $I_{xy}$, we know that $I_{\xi\eta}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$, $\eta\in (J_f)_\lambda$. Let us denote by $Q$ the set $\{I_{\xi\eta}|\mathcal{B}_\xi\in Y_\mathcal{B}\wedge\mathcal{C}_\eta\in Y_\mathcal{C}\}$. Since $Q$ is finite, by transitivity we conclude that $\bigcup Q\downarrow_{\bigcup Y_\mathcal{C}\cup \bigcup Y_\mathcal{B}}\Gamma_f\backslash \bigcup Q$. Since $Y_\mathcal{C}$ is finite and $\mathcal{C}_\eta\downarrow_\mathcal{A}\bigcup\{\mathcal{C}_y,I_{xy}|y\neq \eta\}\cup\bigcup\{\mathcal{B}_x|\mathcal{B}_x\subseteq\Gamma_f\}$ holds for every $\eta\in (J_f)_\lambda$, we conclude by transitivity that $\bigcup Y_\mathcal{C}\downarrow_\mathcal{A}\bigcup\{\mathcal{C}_y,I_{xy}|\mathcal{C}_y\notin Y_\mathcal{C}\}\cup \bigcup\{\mathcal{B}_x|x\in (J_f)_{<\lambda}\}$. Therefore $\bigcup Y_\mathcal{C}\downarrow_{\bigcup Y_\mathcal{B}}\bigcup\{\mathcal{C}_y,I_{xy}|\mathcal{C}_y\notin Y_\mathcal{C}\}\cup \bigcup\{\mathcal{B}_x|x\in (J_f)_{<\lambda}\}$ and by transitivity we conclude that $$\bigcup Q\cup \bigcup Y_\mathcal{C}\downarrow_{\bigcup Y_\mathcal{B}}\bigcup\{\mathcal{C}_y,I_{xy}|\mathcal{C}_y\notin Y_\mathcal{C}\}\cup \bigcup\{\mathcal{B}_x|x\in (J_f)_{<\lambda}\}.$$ By a similar argument, we conclude that $\bigcup Y_\mathcal{B}\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_x,I_{xy}|\mathcal{B}_x\notin Y_\mathcal{B}\}\cup \bigcup\{\mathcal{C}_y|y\in (J_f)_{\lambda}\}$.
Denote by $\mathcal{W}$ the set $\bigcup\{I_{xy}|\mathcal{C}_y\notin Y_\mathcal{C}\wedge \mathcal{B}_x\notin Y_\mathcal{B}\}\cup \bigcup\{\mathcal{B}_x|\mathcal{B}_x\notin Y_\mathcal{B}\}\cup\bigcup\{\mathcal{C}_y|\mathcal{C}_y\notin Y_\mathcal{C}\}$; by transitivity we conclude that $$\bigcup Q\cup \bigcup Y_\mathcal{C}\cup\bigcup Y_\mathcal{B}\downarrow_{\mathcal{A}}\mathcal{W}.$$ Since $(\bigcup Y_\mathcal{C}\cup\bigcup Y_\mathcal{B})\cap Z=\emptyset$ and $Z$ is nice ($I_{\xi\eta}\cap Z\neq \emptyset$ implies $\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq Z$), then $Z\subseteq \mathcal{W}$. By the definition of $d_0$ we know that $d_0\subseteq \bigcup Q\cup\bigcup Y_\mathcal{C}\cup\bigcup Y_\mathcal{B}$, so we get $d_0\downarrow_{\mathcal{A}}Z$. By (1) and transitivity we conclude that
$$
d_0\downarrow_{B'\cap \mathcal{A}}B'.
$$
By Lemma \ref{L.4.8}, there is $d_0'\in \mathcal{A}$ such that $stp(d_0,B'\cap \mathcal{A})=stp(d_0',B'\cap \mathcal{A})$ and $d_0'\downarrow_{B'\cap \mathcal{A}}B'$. We conclude that $stp(d_0,B')=stp(d_0',B')$, and there is $f_0\in Saut(\mathcal{M},B')$ such that $f_0(d_0)=d_0'$.
We know that $I_{\xi\eta}\downarrow_{(B'\cap\mathcal{B}_\xi)\cup (d^*\cap \mathcal{C}_\eta)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\notin Y_\mathcal{B}$, $\mathcal{C}_\eta\in Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. Since $d^*\cap \mathcal{C}_\eta\subseteq d_0\subseteq \Gamma_f\backslash I_{\xi\eta}$ holds for all $\mathcal{C}_\eta\in Y_\mathcal{C}$, then $I_{\xi\eta}\downarrow_{(B'\cap\mathcal{B}_\xi)\cup d_0}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\notin Y_\mathcal{B}$, $\mathcal{C}_\eta\in Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. We know that $I_{\xi\eta}\downarrow_{(B'\cap\mathcal{C}_\eta)\cup (d^*\cap \mathcal{B}_\xi)}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\in Y_\mathcal{B}$, $\mathcal{C}_\eta\notin Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$. Since $d^*\cap \mathcal{B}_\xi\subseteq d_0\subseteq \Gamma_f\backslash I_{\xi\eta}$ holds for all $\mathcal{B}_\xi\in Y_\mathcal{B}$, then $I_{\xi\eta}\downarrow_{(B'\cap\mathcal{C}_\eta)\cup d_0}\Gamma_f\backslash I_{\xi\eta}$ holds for all $\xi<\eta$ that satisfy $\mathcal{B}_\xi\in Y_\mathcal{B}$, $\mathcal{C}_\eta\notin Y_\mathcal{C}$, and $I_{\xi\eta}\in Y_I$.
\begin{claim}\label{Claim 4.10.2}
There are automorphisms of the monster model $(f'_i)_{0<i<m}$ and $(f_i)_{0\leq i<m}$ that satisfy the following:
\begin{itemize}
\item For every $0<i<m$, $f_i=f'_i\circ f_{i-1}$.
\item For every $0<i\leq n$ there is $d_i'\in \mathcal{B}_i$ such that $f'_i\in Saut(\mathcal{M},B'\cup (d_j')_{j<i})$ and $f'_i(f_{i-1}(d_i))=d_i'$.
\item For every $n<i<m$ there is $d_i'\in \mathcal{C}_i$ such that $f'_i\in Saut(\mathcal{M},B'\cup (d_j')_{j<i})$ and $f'_i(f_{i-1}(d_i))=d_i'$.
\end{itemize}
\end{claim}
\begin{proof}
Notice that the automorphism $f_0$ was chosen above.
To choose the automorphisms $(f'_i)_{0<i<m}$ and $(f_i)_{0< i<m}$, let us proceed by induction over $i$. Suppose $j\leq n$ is such that there are automorphisms of the monster model $(f'_i)_{0<i<j}$ and $(f_i)_{0\leq i<j}$ that satisfy the following:
\begin{itemize}
\item For every $0<i<j$, $f_i=f'_i\circ f_{i-1}$.
\item For every $0<i<j$ there is $d_i'\in \mathcal{B}_i$ such that $f'_i\in Saut(\mathcal{M},B'\cup (d_k')_{k<i})$ and $f'_i(f_{i-1}(d_i))=d_i'$.
\end{itemize}
We know that $I_j\downarrow_{(B'\cap\mathcal{B}_j)\cup d_0}\Gamma_f\backslash I_j$, so $d_j\downarrow_{(B'\cap\mathcal{B}_j)\cup d_0}B'\cup (d_i)_{i<j}$. By the induction hypothesis we get that $f_{j-1}=f'_{j-1}\circ f'_{j-2}\circ\cdots\circ f'_1\circ f_{0}$, so $f_{j-1}(d_j)\downarrow_{(B'\cap\mathcal{B}_j)\cup d'_0}B'\cup (d_i')_{i<j}$ and
$$f_{j-1}(d_j)\downarrow_{((B'\cup (d_i')_{i<j})\cap\mathcal{B}_j)\cup d'_0}B'\cup (d_i')_{i<j}.$$
By Lemma \ref{L.4.8}, there is $d_j'\in \mathcal{B}_j$ such that $stp(f_{j-1}(d_j),(B'\cup (d_i')_{i<j})\cap \mathcal{B}_j)=stp(d'_j,(B'\cup (d_i')_{i<j})\cap \mathcal{B}_j)$ and $d_j'\downarrow_{(B'\cup (d_i')_{i<j})\cap \mathcal{B}_j}B'\cup (d_i')_{i<j}$. Therefore, $$d_j'\downarrow_{((B'\cup (d_i')_{i<j})\cap \mathcal{B}_j)\cup d_0'}B'\cup (d_i')_{i<j}$$
We conclude that $stp(f_{j-1}(d_j),B'\cup (d_i')_{i<j})=stp(d'_j,B'\cup (d_i')_{i<j})$. Then, there is $f'_j\in Saut(\mathcal{M},B'\cup (d_i')_{i<j})$ such that $f'_j(f_{j-1}(d_j))=d_j'$ and $f_j=f'_j\circ f_{j-1}$ is an automorphism.\\
Suppose $j> n$ is such that there are automorphisms of the monster model $(f'_i)_{0<i<j}$ and $(f_i)_{0\leq i<j}$ that satisfy the following:
\begin{itemize}
\item For every $0<i<j$, $f_i=f'_i\circ f_{i-1}$.
\item For every $0<i\leq n$ there is $d_i'\in \mathcal{B}_i$ such that $f'_i\in Saut(\mathcal{M},B'\cup (d_k')_{k<i})$ and $f'_i(f_{i-1}(d_i))=d_i'$.
\item For every $n<i<j$ there is $d_i'\in \mathcal{C}_i$ such that $f'_i\in Saut(\mathcal{M},B'\cup (d_k')_{k<i})$ and $f'_i(f_{i-1}(d_i))=d_i'$.
\end{itemize}
We know that $I_j\downarrow_{(B'\cap\mathcal{C}_j)\cup d_0}\Gamma_f\backslash I_j$, so $d_j\downarrow_{(B'\cap\mathcal{C}_j)\cup d_0}B'\cup (d_i)_{i<j}$. By the induction hypothesis we get that $f_{j-1}=f'_{j-1}\circ f'_{j-2}\circ\cdots\circ f'_1\circ f_{0}$, so $f_{j-1}(d_j)\downarrow_{(B'\cap\mathcal{C}_j)\cup d'_0}B'\cup (d_i')_{i<j}$ and
$$f_{j-1}(d_j)\downarrow_{((B'\cup (d_i')_{i<j})\cap\mathcal{C}_j)\cup d'_0}B'\cup (d_i')_{i<j}.$$
By Lemma \ref{L.4.8}, there is $d_j'\in \mathcal{C}_j$ such that $stp(f_{j-1}(d_j),(B'\cup (d_i')_{i<j})\cap \mathcal{C}_j)=stp(d'_j,(B'\cup (d_i')_{i<j})\cap \mathcal{C}_j)$ and $d_j'\downarrow_{(B'\cup (d_i')_{i<j})\cap \mathcal{C}_j}B'\cup (d_i')_{i<j}$. Therefore, $$d_j'\downarrow_{((B'\cup (d_i')_{i<j})\cap \mathcal{C}_j)\cup d_0'}B'\cup (d_i')_{i<j}$$
We conclude that $stp(f_{j-1}(d_j),B'\cup (d_i')_{i<j})=stp(d'_j,B'\cup (d_i')_{i<j})$. Then, there is $f'_j\in Saut(\mathcal{M},B'\cup (d_i')_{i<j})$ such that $f'_j(f_{j-1}(d_j))=d_j'$ and $f_j=f'_j\circ f_{j-1}$ is an automorphism.
\end{proof}
By Claim \ref{Claim 4.10.2}, $f_{m-1}\in Saut(\mathcal{M},B')$, so $f=f_{m-1}\circ g\in Saut(\mathcal{M},B)$. Since $g(e)\in B'$, $f_{m-1}\in Saut(\mathcal{M},B')$ and for all $0<i<m$ either $\mathcal{B}_i\subseteq Z$ or $\mathcal{C}_i\subseteq Z$, we conclude that $f(d)\in Z$.
\end{proof}
Suppose $X$ and $A$ are nice subsets of $\Gamma_f$. If $\xi$ and $\eta$ are such that $\mathcal{B}_\xi\cup\mathcal{C}_\eta\subseteq A$ and $I_{\xi\eta}\cap X\subseteq A$, then we say that $A$ is $X$-nice for $(\xi,\eta)$.
\begin{lemma}\label{L.4.11}
Suppose $Z\subseteq \Gamma_f$ is nice and $B$ is $a$-constructible over $Z$. If $X\subseteq \Gamma_f$ is a nice subset such that $Z\cup X$ is nice, then $B\cup X$ is $a$-constructible over $Z\cup X$.
\end{lemma}
\begin{proof}
Let $(Z,(a_i,B_i)_{i<\gamma})$ be an $a$-construction for $B$ over $Z$. Let $(\mathcal{D}_i)_{i<\delta}$ be an enumeration of $\{\mathcal{B}_\xi,\mathcal{C}_\eta,I_{\xi\eta}\cap X|\xi<\eta\wedge \mathcal{B}_\xi\cup\mathcal{C}_\eta\subseteq Z\cup X\}$ such that $\mathcal{B}_\xi$ and $\mathcal{C}_\eta$ appear before $I_{\xi\eta}\cap X$ in the enumeration. Let $Z^j$ be the minimal nice subset of $Z\cup X$ that contains $Z\cup\bigcup_{i\leq j}\mathcal{D}_i$ and is $X$-nice for every $(x,y)$ that satisfies either $\mathcal{B}_x\subseteq\bigcup_{i\leq j}\mathcal{D}_i\backslash Z$ or $\mathcal{C}_y\subseteq\bigcup_{i\leq j}\mathcal{D}_i\backslash Z$. First, we will show that $(Z^j,(a_i,B_i)_{i<\gamma})$ is an $a$-construction for $B\cup Z^j$ over $Z^j$, for every $j<\delta$.\\
Suppose, towards a contradiction, that $\alpha$ is the minimal ordinal such that $(Z^\alpha,(a_i,B_i)_{i<\gamma})$ is not an $a$-construction for $B\cup Z^\alpha$ over $Z^\alpha$.\\
By the minimality of $\alpha$, $(Z^\beta,(a_i,B_i)_{i<\gamma})$ is an $a$-construction for $B\cup Z^\beta$ over $Z^\beta$, for every $\beta<\alpha$. Therefore, for every $\beta<\alpha$ and $i<\gamma$, $(tp(a_i,Z_i^\beta),B_i)\in F_\omega^a$ where $Z_i^\beta=Z^\beta\cup\bigcup_{j<i}a_j$. So $(tp(a_i,\bigcup_{\beta<\alpha}Z_i^\beta),B_i)\in F_\omega^a$ for every $i<\gamma$; we conclude that $\alpha$ is not a limit ordinal. Let us denote by $Z'$ the set $Z^\beta$, for $\beta$ the predecessor of $\alpha$.\\ \\
The proof is divided into the following cases:
\begin{enumerate}
\item [1.]$\mathcal{D}_\alpha=\mathcal{C}_\eta$ for some $\mathcal{C}_\eta\subseteq X\cup Z$.
\item [2.]$\mathcal{D}_\alpha=\mathcal{B}_\xi$ for some $\mathcal{B}_\xi\subseteq X\cup Z$.
\item [3.]$\mathcal{D}_\alpha=I_{\xi\eta}\cap X$, for some $\mathcal{B}_\xi\cup\mathcal{C}_\eta\subseteq X\cup Z$.
\end{enumerate}
Case 2 is similar to Case 1, so we will show only Cases 1 and 3.\\
Case 1.\\
Since $(Z^\alpha,(a_i,B_i)_{i<\gamma})$ is not an $a$-construction over $Z^\alpha$, by the minimality of $Z^\alpha$ we have $\mathcal{C}_\eta\not \subseteq Z'$. Therefore, $I_{\xi\eta}\cap Z'=\emptyset$ for every $\xi<\eta$. Since $X\cup Z$ is nice, we know that every $\mathcal{B}_\xi\subseteq Z'$ with $\xi<\eta$ satisfies $\mathcal{B}_\xi\subseteq X$.
Let $n$ be the least ordinal such that $(Z'\cup \mathcal{C}_\eta\cup \bigcup \{I_{\xi\eta}\cap X|\xi<\eta\wedge \mathcal{B}_\xi\subseteq Z'\},(a_i,B_i)_{i\leq n})$ is not an $a$-construction over $Z'\cup \mathcal{C}_\eta\cup \bigcup \{I_{\xi\eta}\cap X|\xi<\eta\wedge \mathcal{B}_\xi\subseteq Z'\}$. Since $a$-isolation is $F_\omega^a$-isolation, $B_n$ is finite and we may assume $n<\omega$.\\
Denote by $D$ the set $\mathcal{C}_\eta\cup \bigcup \{I_{\xi\eta}\cap X|\xi<\eta\wedge \mathcal{B}_\xi\subseteq Z'\}$.
Since $(Z'\cup D,(a_i,B_i)_{i< n})$ is an $a$-construction over $Z'\cup D$, the set $C=\bigl(\bigcup_{i< n} B_i\bigr)\cap (Z'\cup D)$ is such that $stp(a_0^\frown\cdots ^\frown a_{n-1},C)\vdash tp(a_0^\frown\cdots ^\frown a_{n-1}, Z'\cup D)$. Notice that $C$ is a subset of $Z'$.\\
On the other hand, there is $b$ such that $stp(b,B_n)=stp(a_n,B_n)$ and $tp(b,Z'\cup \bigcup\{a_i|i<n\}\cup D)\neq tp(a_n,Z'\cup \bigcup\{a_i|i<n\}\cup D)$. So there are tuples $d\in D\backslash \mathcal{A}$ and $e\in Z'\cup \bigcup\{a_i|i<n\}$ that satisfy $tp(b,e\cup d)\neq tp(a_n,e\cup d)$. Denote by $W$ the set $C\cup((B_n\cup e)\cap Z')$; by Lemma \ref{L.4.10} we know that there is $g\in Saut(\mathcal{M},W)$ such that $g(d)\in Z'$. We know that $stp(a_0^\frown\cdots ^\frown a_{n-1},C)\vdash tp(a_0^\frown\cdots ^\frown a_{n-1}, Z'\cup D)$, so $a_0^\frown\cdots ^\frown a_{n-1}\downarrow_C Z'\cup D$. We conclude that $$a_0^\frown\cdots ^\frown a_{n-1}\downarrow_W d$$ and $$a_0^\frown\cdots ^\frown a_{n-1}\downarrow_W g(d).$$ Therefore $stp(d,C\cup B_n\cup e)=stp(g(d),C\cup B_n\cup e)$ and there is $f\in Saut(\mathcal{M},C\cup B_n\cup e)$ that satisfies $f(d)=g(d)$.\\
Since $tp(b,e\cup d)\neq tp(a_n, e\cup d)$ and $stp(b,B_n)=stp(a_n,B_n)$ hold, we have that $tp(f(b),e\cup f(d))\neq tp(f(a_n), e\cup f(d))$, and the strong types of $a_n,b,f(a_n)$ and $f(b)$ over $B_n$ are the same strong type. Since $(Z',(a_i,B_i)_{i<\gamma})$ is an $a$-construction, by $a$-isolation we know that $stp(a_n,B_n)\vdash tp(a_n,Z'\cup \bigcup\{a_i|i<n\})$; on the other hand, $stp(a_n,B_n)=stp(f(a_n),B_n)=stp(f(b),B_n)$, so $tp(f(a_n),Z'\cup \bigcup\{a_i|i<n\})=tp(f(b),Z'\cup \bigcup\{a_i|i<n\})$. In particular $e,f(d)\in Z'$, so $tp(f(b),e\cup f(d))=tp(f(a_n),e\cup f(d))$, a contradiction.\\
Case 3.\\
By the way $(\mathcal{D}_i)_{i<\delta}$ was defined, we know that $\mathcal{B}_\xi$ and $\mathcal{C}_\eta$ appear before $I_{\xi\eta}\cap X$ in the enumeration, so $\mathcal{B}_\xi\cup\mathcal{C}_\eta\subseteq Z'$. We have the following possibilities: either $\mathcal{B}_\xi\not\subseteq Z$, or $\mathcal{C}_\eta\not\subseteq Z$, or $\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq Z$. In the first two cases, by the way $Z'$ was defined, we know that $Z'$ is $X$-nice for $(\xi, \eta)$, so $I_{\xi\eta}\cap X\subseteq Z'$. Therefore, $Z'=Z^\alpha$ and $(Z',(a_i,B_i)_{i<\gamma})$ is an $a$-construction for $B\cup Z^\alpha$ over $Z^\alpha$, a contradiction. Therefore, we need to show only the case when $\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq Z$. Since $(Z^\alpha,(a_i,B_i)_{i<\gamma})$ is not an $a$-construction over $Z^\alpha$, we have $I_{\xi\eta}\cap X\not \subseteq Z'$.\\
Let $n$ be the least ordinal such that $(Z'\cup (I_{\xi\eta}\cap X),(a_i,B_i)_{i\leq n})$ is not an $a$-construction over $Z'\cup (I_{\xi\eta}\cap X)$. Since $a$-isolation is $F_\omega^a$-isolation, $B_n$ is finite and we may assume $n<\omega$.\\
Since $(Z'\cup (I_{\xi\eta}\cap X),(a_i,B_i)_{i< n})$ is an $a$-construction over $Z'\cup (I_{\xi\eta}\cap X)$, the set $C=\bigl(\bigcup_{i< n} B_i\bigr)\cap (Z'\cup (I_{\xi\eta}\cap X))$ is such that $stp(a_0^\frown\cdots ^\frown a_{n-1},C)\vdash tp(a_0^\frown\cdots ^\frown a_{n-1}, Z'\cup (I_{\xi\eta}\cap X))$. Notice that $C$ is a subset of $Z'$.\\
On the other hand, there is $b$ such that $stp(b,B_n)=stp(a_n,B_n)$, and $tp(b,Z'\cup \bigcup\{a_i|i<n\}\cup (I_{\xi\eta}\cap X))\neq tp(a_n,Z'\cup \bigcup\{a_i|i<n\}\cup (I_{\xi\eta}\cap X))$. Since $Z'$ is nice, then there is an infinite $I'_{\xi\eta}\subset I_{\xi\eta}\cap X$ contained in $Z'$.
Therefore, there are tuples $d\in (I_{\xi\eta}\cap X)\backslash I'_{\xi\eta}$ and $e\in Z'\cup \bigcup\{a_i|i<n\}$ that satisfy $tp(b,e\cup d)\neq tp(a_n, e\cup d)$. Denote by $W$ the set $C\cup ((B_n\cup e)\cap Z')$, by Lemma \ref{L.4.10} we know that there is $g\in Saut(\mathcal{M},W)$ such that $g(d)\in Z'$.
Since $stp(a_0^\frown\cdots ^\frown a_{n-1},C)\vdash tp(a_0^\frown\cdots ^\frown a_{n-1}, Z'\cup (I_{\xi\eta}\cap X))$, then $a_0^\frown\cdots ^\frown a_{n-1}\downarrow_C Z'\cup (I_{\xi\eta}\cap X)$. Therefore $$a_0^\frown\cdots ^\frown a_{n-1}\downarrow_W d$$ and $$a_0^\frown\cdots ^\frown a_{n-1}\downarrow_W g(d).$$
So, $stp(d,C\cup B_n\cup e)=stp(g(d),C\cup B_n\cup e)$ and there is $f\in Saut(\mathcal{M},C\cup B_n\cup e)$ that satisfies $f(d)=g(d)$.\\
Since $tp(b,e\cup d)\neq tp(a_n, e\cup d)$ and $stp(b,B_n)=stp(a_n,B_n)$ hold, we have that $tp(f(b),e\cup f(d))\neq tp(f(a_n), e\cup f(d))$, and $a_n,b,f(a_n)$ and $f(b)$ have the same strong type over $B_n$. Since $(Z',(a_i,B_i)_{i<\gamma})$ is an $a$-construction, by $a$-isolation we know that $stp(a_n,B_n)\vdash tp(a_n,Z'\cup \bigcup\{a_i|i<n\})$; on the other hand, $stp(a_n,B_n)=stp(f(a_n),B_n)=stp(f(b),B_n)$, so $tp(f(a_n),Z'\cup \bigcup\{a_i|i<n\})=tp(f(b),Z'\cup \bigcup\{a_i|i<n\})$. In particular $e,f(d)\in Z'$, so $tp(f(b),e\cup f(d))=tp(f(a_n),e\cup f(d))$, a contradiction.
Finally, since for every $\beta<\delta$ and $i<\gamma$, $(tp(a_i,Z_i^\beta),B_i)\in F_\omega^a$ where $Z_i^\beta=Z^\beta\cup\bigcup_{j<i}a_j$, we get $(tp(a_i,\bigcup_{\beta<\delta}Z_i^\beta),B_i)\in F_\omega^a$, so $(Z\cup X,(a_i,B_i)_{i<\gamma})$ is an $a$-construction for $B\cup X$ over $Z\cup X$.
\end{proof}
\begin{fact}\label{F.4.12}
If $Z\subseteq \Gamma_f$ is nice, then for every $\alpha<\kappa$ the following holds $$Z\downarrow_{Z\cap \Gamma_f^\alpha}\Gamma_f^\alpha.$$
\end{fact}
\begin{proof}
By finite character, it is enough to prove $Z\downarrow_{Z\cap \Gamma_f^\alpha}\Gamma$ for every nice set $\Gamma\subseteq \Gamma_f^\alpha$, such that $S=\{\mathcal{B}_\xi, \mathcal{C}_\eta|\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq \Gamma\}$ is a finite set.\\
In the proof of Claim \ref{Claim 4.7.2}, it was proved that for every $\xi<\eta$ the following holds $$\mathcal{B}_\xi\cup\mathcal{C}_\eta\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_r,\mathcal{C}_p|r\neq\xi\wedge p\neq\eta\}\cup\bigcup\{I_{rp}|r\neq \xi\wedge p\neq\eta\}.$$
Since $\mathcal{C}_\eta\downarrow_\mathcal{A}\mathcal{B}_\xi$, we can conclude $$\mathcal{B}_\xi\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_r,\mathcal{C}_p|r\neq\xi\}\cup\bigcup\{I_{rp}|r\neq \xi\wedge p\neq\eta\}$$ and $$\mathcal{C}_\eta\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_r,\mathcal{C}_p|p\neq\eta\}\cup\bigcup\{I_{rp}|r\neq \xi\wedge p\neq\eta\}.$$
Since $S$ is finite, by monotonicity and transitivity we can conclude that
\begin{equation}
\bigcup\{\mathcal{B}_\xi, \mathcal{C}_\eta|\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq \Gamma\backslash Z\}\downarrow_\mathcal{A}\bigcup\{\mathcal{B}_r, \mathcal{C}_p|\mathcal{B}_r,\mathcal{C}_p\not\subseteq \Gamma\backslash Z\}\cup\bigcup\{I_{rp}|\mathcal{B}_r,\mathcal{C}_p\not\subseteq \Gamma\backslash Z\}.
\end{equation}
Notice that since $Z$ is nice, from (2) we conclude that $(\bigcup S)\backslash Z\downarrow_\mathcal{A}Z$ and $(\bigcup S)\backslash Z\downarrow_{(\bigcup S)\cap Z}Z$.\\
By the way we chose the sequences $I_{rp}$, we know that for every $\xi<\eta$, the following holds $$I_{\xi\eta}\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}\bigcup\{\mathcal{B}_r,\mathcal{C}_p|r\neq\xi \wedge p\neq \eta\}\cup\bigcup\{I_{rp}|r\neq \xi\vee p\neq\eta\}.$$
Since $I_{\xi\eta}$ is independent over $\mathcal{B}_\xi\cup \mathcal{C}_\eta$, then by transitivity, $$I_{\xi\eta}\backslash Z\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}\bigcup\{\mathcal{B}_r,\mathcal{C}_p|r\neq\xi \wedge p\neq \eta\}\cup\bigcup\{I_{rp}|r\neq \xi\vee p\neq\eta\}\cup (I_{\xi\eta}\cap Z),$$
therefore $I_{\xi\eta}\backslash Z\downarrow_{\bigcup S}(\Gamma_f^\alpha\backslash I_{\xi\eta})\cup Z.$ Since $S$ is finite and $\Gamma$ is nice, by transitivity we conclude $$\bigcup\{I_{\xi\eta}\backslash Z|\mathcal{B}_\xi,\mathcal{C}_\eta\subseteq \Gamma\}\downarrow_{\bigcup S}Z.$$ Since $(\bigcup S)\backslash Z\downarrow_{(\bigcup S)\cap Z}Z$, by transitivity we conclude $\Gamma\backslash Z\downarrow_{(\bigcup S)\cap Z}Z$, therefore $\Gamma\downarrow_{\Gamma\cap Z}Z$ and $\Gamma\downarrow_{\Gamma_f^\alpha\cap Z}Z$.
\end{proof}
From the proof of this Fact we obtain the following corollary.
\begin{corollary}\label{C.4.13}
If $Z\subseteq \Gamma_f$ is nice, then for every nice set $\Gamma\subseteq \Gamma_f$ the following holds $$Z\downarrow_{Z\cap \Gamma}\Gamma.$$
\end{corollary}
Now, we are ready to prove the main result of this section. The next theorem shows that, for a certain kind of functions, the models $\mathcal{A}^f$ and $\mathcal{A}^g$ are isomorphic if and only if $J_f$ and $J_g$ are isomorphic coloured trees.
\begin{theorem}\label{T.4.14}
Assume $f,g$ are functions from $\kappa$ to $Card\cap\kappa\backslash \lambda$ such that $f(\alpha),g(\alpha)>\alpha^{++}$ and $f(\alpha),g(\alpha)>\alpha^\lambda$. Then $\mathcal{A}^f$ and $\mathcal{A}^g$ are isomorphic if and only if $f$ and $g$ are $E_{\lambda\text{-club}}^\kappa$ equivalent.
\end{theorem}
\begin{proof}
From right to left.
Assume $f$ and $g$ are $E_{\lambda\text{-club}}^\kappa$ equivalent. By Lemma \ref{L.2.7} $J_f$ and $J_g$ are isomorphic coloured trees, let $G:J_f\rightarrow J_g$ be an isomorphism. Define $\mathcal{H}_{\xi\eta}:\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup I_{\xi\eta}\rightarrow\mathcal{B}_{G(\xi)}\cup\mathcal{C}_{G(\eta)}\cup I_{G(\xi)G(\eta)}$ by $\mathcal{H}_{\xi\eta}=H_{G(\xi)G(\eta)}\circ H^{-1}_{\xi\eta}$ (where $H_{rp}$ is the elementary embedding used in the construction of $I_{rp}$), we know that $\mathcal{H}_{\xi\eta}$ is elementary.
\begin{claim}\label{Claim 4.14.1}
The map $\mathcal{H}=\bigcup_{\eta\in (J_f)_\lambda}\bigcup_{\xi\in (J_f)_{<\lambda}, \xi<\eta}\mathcal{H}_{\xi\eta}$ is elementary.
\end{claim}
\begin{proof}
Let us denote by $\mathcal{W}$ the set $\bigcup\{\mathcal{B}_\xi,\mathcal{C}_\eta|\xi\in (J_f)_{<\lambda},\eta\in (J_f)_\lambda\}$.
Let us start by showing that $\mathcal{H}\restriction \mathcal{W}$ is elementary. Let $\{D_i|i<\gamma\}$ be an enumeration of $\mathcal{W}$, we will proceed by induction to prove that $\mathcal{H}\restriction \bigcup\{D_i\mid i<\gamma\}$ is elementary. By the way $\mathcal{H}$ was defined and Fact \ref{F.4.4}, we know that $\mathcal{H}\restriction D_0$ is elementary. Let $\alpha$ be such that the map $\mathcal{H}\restriction \bigcup\{D_i\mid i\leq\beta\}$ is elementary for all $\beta<\alpha$, then the map $\mathcal{H}\restriction \bigcup\{D_i\mid i<\alpha\}$ is elementary. By the way the models $\mathcal{C}_\eta$ and $\mathcal{B}_\xi$ were chosen, we know that $D_\alpha\downarrow_\mathcal{A}\bigcup\{D_i\mid i<\alpha\}$ and by the definition of $\mathcal{H}$, $\mathcal{H}( D_\alpha)\downarrow_\mathcal{A}\mathcal{H}(\bigcup\{D_i\mid i<\alpha\})$.
Since $\mathcal{H}\restriction\bigcup\{D_i\mid i<\alpha\}$ is elementary, there is an automorphism $F$ of the monster model that extends $\mathcal{H}\restriction\bigcup\{D_i\mid i<\alpha\}$, so $F^{-1}(\mathcal{H}(D_\alpha))\downarrow_\mathcal{A}\bigcup\{D_i\mid i<\alpha\}$.
By the definition of $\mathcal{H}$, we know that $D_i$ and $\mathcal{H}(D_i)$ are isomorphic, so $tp(D_\alpha,\mathcal{A})=tp(\mathcal{H}(D_\alpha),\mathcal{A})$. On the other hand, since $F$ is an automorphism, we conclude that $tp(D_\alpha,\mathcal{A})=tp(F^{-1}(\mathcal{H}(D_\alpha)),\mathcal{A})$. Since $F^{-1}(\mathcal{H}(D_\alpha))\downarrow_\mathcal{A}\bigcup\{D_i\mid i<\alpha\}$, $D_\alpha\downarrow_\mathcal{A}\bigcup\{D_i\mid i<\alpha\}$, and $tp(D_\alpha,\mathcal{A})$ is stationary, we conclude that $tp(D_\alpha,\bigcup\{D_i\mid i<\alpha\})=tp(F^{-1}(\mathcal{H}(D_\alpha)),\bigcup\{D_i\mid i<\alpha\})$. Therefore $tp(\bigcup\{D_i\mid i\leq\alpha\},\emptyset)=tp(\mathcal{H}(\bigcup\{D_i\mid i\leq\alpha\}),\emptyset)$. We conclude that $\mathcal{H}\restriction \bigcup\{D_i\mid i\leq\alpha\}$ is elementary.
Let $\{D_i|i<\gamma\}$ be an enumeration of the set $\{I_{\xi\eta}|\xi<\eta\wedge \xi\in (J_f)_{<\lambda}\wedge \eta\in(J_f)_{\lambda}\}$; we will proceed by induction to prove that $\mathcal{H}\restriction \mathcal{W}\cup\bigcup\{D_i\mid i<\gamma\}$ is elementary. Let $\alpha$ be such that the map $\mathcal{H}\restriction \mathcal{W}\cup\bigcup\{D_i\mid i\leq\beta\}$ is elementary for all $\beta<\alpha$, then the map $\mathcal{H}\restriction \mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\}$ is elementary. Let us denote by $I_{rp}$ the sequence $D_\alpha$. By Claim \ref{Claim 4.7.2} we know that $tp(I_{G(r)G(p)},\mathcal{B}_{G(r)}\cup\mathcal{C}_{G(p)})\vdash tp(I_{G(r)G(p)},\Gamma_g\backslash I_{G(r)G(p)})$, in particular $$tp(I_{G(r)G(p)},\mathcal{B}_{G(r)}\cup\mathcal{C}_{G(p)})\vdash tp(I_{G(r)G(p)},\mathcal{H}(\mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\})).$$ Since $\mathcal{H}\restriction \mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\}$ is elementary, there is an automorphism $F$ of the monster model that extends $\mathcal{H}\restriction \mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\}$, therefore $$tp(F^{-1}(I_{G(r)G(p)}),\mathcal{B}_r\cup\mathcal{C}_p)\vdash tp(F^{-1}(I_{G(r)G(p)}),\mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\}).$$ On the other hand, $\mathcal{H}_{rp}$ is elementary, so $tp(I_{G(r)G(p)}\cup\mathcal{B}_{G(r)}\cup\mathcal{C}_{G(p)},\emptyset)=tp(I_{rp}\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)$. Since $F$ is an automorphism, we know that $tp(F^{-1}(I_{G(r)G(p)})\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)=tp(I_{rp}\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)$.
We conclude that $tp(F^{-1}(I_{G(r)G(p)}),\mathcal{B}_r\cup\mathcal{C}_p)=tp(I_{rp},\mathcal{B}_r\cup\mathcal{C}_p)$, therefore $$tp(I_{rp},\mathcal{B}_r\cup\mathcal{C}_p)\vdash tp(F^{-1}(I_{G(r)G(p)}),\mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\}).$$ So $tp(I_{rp},\mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\})=tp(F^{-1}(I_{G(r)G(p)}),\mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\})$; we conclude that $tp(I_{rp}\cup\mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\},\emptyset)=tp(I_{G(r)G(p)}\cup\mathcal{H}(\mathcal{W}\cup\bigcup\{D_i\mid i<\alpha\}),\emptyset)$ and $\mathcal{H}\restriction \mathcal{W}\cup\bigcup\{D_i\mid i\leq\alpha\}$ is elementary.
\end{proof}
Let $\bar{\mathcal{H}}$ be an automorphism that extends $\mathcal{H}$, then $\bar{\mathcal{H}}(\mathcal{A}^f)$ is $a$-primary over $\Gamma_g$. Therefore $\bar{\mathcal{H}}(\mathcal{A}^f)$ and $\mathcal{A}^g$ are isomorphic, we conclude that $\mathcal{A}^f$ and $\mathcal{A}^g$ are isomorphic.
From left to right.
Let us assume, towards a contradiction, that $f$ and $g$ are not $E_{\lambda\text{-club}}^\kappa$ equivalent and there is an isomorphism $\Pi:\mathcal{A}^f\rightarrow \mathcal{A}^g$. Without loss of generality, we can assume that $\{\alpha|f(\alpha)>g(\alpha)\wedge cf(\alpha)=\lambda\}$ is stationary.\\
Let $(\Gamma_f,(a_i^f,B_i^f)_{i<\gamma})$ be an $a$-construction of $\mathcal{A}^f$ over $\Gamma_f$. For every $\alpha$ define $\mathcal{A}_f^\alpha=\Gamma^\alpha_f\cup\bigcup\{a_i^f|i<\alpha\}$; clearly $\mathcal{A}_f^\alpha$ is not necessarily a model.\\
We say that $\alpha<\kappa$ is $f$-good if $(\Gamma^\alpha_f,(a_i^f,B_i^f)_{i<\alpha})$ is an $a$-construction over $\Gamma^\alpha_f$, $\mathcal{A}_f^\alpha$ is an $a$-primary model over $\Gamma_f^\alpha$ and $\alpha$ is a cardinal. Notice that there are club many $f$-good cardinals.\\
We say that $\alpha$ is very good if $\alpha$ is $f$-good, $f(\alpha)>g(\alpha)>\alpha^{++}$ and $\Pi(\mathcal{A}_f^\alpha)=\mathcal{A}_g^\alpha$. Notice that since there are club many $\alpha$'s satisfying $\Pi(\mathcal{A}_f^\alpha)=\mathcal{A}_g^\alpha$ and stationary many $\alpha$'s with cofinality $\lambda$ such that $f(\alpha)>g(\alpha)$, there are stationary many very good cardinals.\\
Since there are club many $\alpha$'s satisfying $sup(\{c_g(p)\}_{p\in J_g^\alpha})<\alpha$, by Remark \ref{R.2.8} we can choose a very good cardinal $\alpha$ with cofinality $\lambda$ and $\eta\in J_f$, such that the following holds:
\begin{itemize}
\item $\alpha^{\lambda}<g(\alpha)$,
\item $sup(\{c_g(p)\}_{p\in J_g^\alpha})<\alpha$,
\item there are cofinally many very good cardinals $\beta<\alpha$,
\item $\bigcup rang(\eta_1)=\lambda$ and $\bigcup rang(\eta_5)=\alpha$.
\end{itemize}
Notice that by Definition \ref{D.2.6} item 10, $c_f(\eta)=f(\alpha)$.\\
Let us choose $X\subseteq \Gamma_g$ and $Y\subseteq \gamma$ such that:
\begin{itemize}
\item $Y$ has power $2^\omega$ and is closed (i.e. for all $i\in Y$, $B_i^g\subseteq \Gamma_g\cup\bigcup_{j\in Y}a^g_j$).
\item $X$ has power $2^\omega$ and is nice.
\item $D=X\cup\bigcup\{a_i^g|i\in Y\}$ is the $a$-primary model over $X$.
\item $D^\alpha=(X\cap \Gamma_g^\alpha)\cup\bigcup\{a_i^g|i\in Y\wedge i<\alpha\}$ is the $a$-primary model over $X\cap \Gamma_g^\alpha$.
\item $\Pi(\mathcal{C}_\eta)\subseteq D$ and $\Pi(\mathcal{A})\subseteq D^\alpha$.
\item If $\xi\in (J_g)_{<\lambda}$ is such that $\mathcal{B}_\xi\subseteq X$, then for all $\zeta<\xi$, $\mathcal{B}_\zeta\subseteq X$.
\item If $\theta\in (J_g)_{\lambda}\backslash J_g^{\alpha+1}$ is such that $\mathcal{C}_\theta\subseteq X$, then for all $\zeta\in J_g^\alpha$, $\zeta<\theta$ implies that $\mathcal{B}_\zeta\subseteq X$.
\end{itemize}
Notice that since $D=X\cup\bigcup\{a_i^g|i\in Y\}$ is an $a$-construction over $X$, then for all $i\in Y$, $B_i^g\subseteq X\cup\bigcup_{j\in Y}a^g_j$.
Let $E$ be an $a$-primary model over $\Gamma_g^{\alpha+1}\cup \mathcal{A}_g^\alpha\cup D$. By the definition of $\mathcal{A}^g$, we know that $stp(a_i^g,B_i^g)\vdash tp(a_i^g,\Gamma_g\cup \bigcup\{a_j^g|j<i\})$. Since $B_i^g\subseteq X\cup\bigcup\{a_j^g|j<i\wedge j\in Y\}$ holds for every $i\in Y$, $stp(a_i^g,B_i^g)\vdash tp(a_i^g,X\cup \Gamma^\alpha_g\cup \bigcup\{a_j^g|j<\alpha\}\cup \bigcup\{a_j^g|j<i\wedge j\in Y\})$ holds for all $i\in Y\backslash \alpha$. We conclude that $D\cup \mathcal{A}_g^\alpha$ is $a$-constructible over $X\cup \mathcal{A}_g^\alpha$. Notice that $X\cup\Gamma_g^\alpha$ is nice, so by Lemma \ref{L.4.11} $X\cup \mathcal{A}_g^\alpha$ is $a$-constructible over $X\cup\Gamma_g^\alpha$. We conclude by Lemma \ref{L.4.11} that $E$ is $a$-constructible over $\Gamma_g^{\alpha+1}\cup X$. Let $F$ be an $a$-primary model over $E \cup \bigcup \{\mathcal{B}_\xi, I_{\xi\theta}|\xi<\theta\wedge \mathcal{C}_\theta\subseteq X\backslash\Gamma_g^{\alpha+1}\}$; notice that $\Gamma_g^{\alpha+1}\cup X\cup \bigcup \{\mathcal{B}_\xi, I_{\xi\theta}|\xi<\theta\wedge \mathcal{C}_\theta\subseteq X\backslash\Gamma_g^{\alpha+1}\}$ is nice, and by Lemma \ref{L.4.11} we conclude that $F$ is $a$-constructible over $\Gamma_g^{\alpha+1}\cup X\cup \bigcup \{\mathcal{B}_\xi, I_{\xi\theta}|\xi<\theta\wedge \mathcal{C}_\theta\subseteq X\backslash\Gamma_g^{\alpha+1}\}$. Let $G$ be an $a$-primary model over $\Gamma_g\cup F$. Since $F$ is $a$-constructible over $\Gamma_g^{\alpha+1}\cup X \cup \bigcup \{\mathcal{B}_\xi, I_{\xi\theta}|\xi<\theta\wedge \mathcal{C}_\theta\subseteq X\backslash\Gamma_g^{\alpha+1}\}$, by Lemma \ref{L.4.11} $G$ is $a$-primary over $\Gamma_g^{\alpha+1}\cup X\cup\bigcup \{\mathcal{B}_\xi, I_{\xi\theta}|\xi<\theta\wedge \mathcal{C}_\theta\subseteq X\backslash\Gamma_g^{\alpha+1}\}\cup \Gamma_g$. Without loss of generality, we can assume $G=\mathcal{A}^g$.\\
Since $\alpha$ is $\lambda$-cofinal, $\lambda>2^\omega$ and $|X|=2^\omega$, there is a very good $\beta<\alpha$ such that $X\cap\Gamma_g^{\alpha}\subset \Gamma_g^\beta$. Let $\xi<\eta$ be such that $\mathcal{B}_\xi\subseteq \Gamma_f^\alpha\backslash\Gamma_f^\beta$ and $\xi\notin J_f^\beta$.
\begin{claim}\label{Claim 4.14.2}
$\Pi(\mathcal{B}_\xi)\downarrow_{\Pi(\mathcal{A})}D$.
\end{claim}
\begin{proof}
Let us start by showing that $\mathcal{A}^\beta_g\downarrow_{\Gamma_g^\beta}X\cup\Gamma_g^\alpha$.\\ If $\mathcal{A}^\beta_g\not\downarrow_{\Gamma_g^\beta}X\cup\Gamma_g^\alpha$, then there are finite $a\in \mathcal{A}_g^\beta$ and $b\in (X\cup\Gamma_g^\alpha)\backslash \Gamma_g^\beta$ such that $a\not\downarrow_{\Gamma_g^\beta}b$.\\
Since $\beta$ is very good, we know that $\mathcal{A}^\beta_g$ is $a$-constructible over $\Gamma^\beta_g$, therefore it is $a$-atomic over $\Gamma^\beta_g$. So, there is a finite set $A_1\subseteq \Gamma^\beta_g$ such that $stp(a,A_1)\vdash tp(a,\Gamma^\beta_g)$.\\
Since $T$ is superstable, there is a finite set $A_2\subseteq \Gamma^\beta_g$ such that $a\cup b\downarrow _{A_2}\Gamma_g^\beta$. Denote by $A$ the set $A_1\cup A_2$. Since $\Gamma_g^\beta$ is nice, $A$ is a finite subset of $\Gamma_g^\beta$ and $b\in (X\cup\Gamma_g^\alpha)\backslash \Gamma_g^\beta$, by Lemma \ref{L.4.10} there is $\mathcal{F}\in Saut(\mathcal{M},A)$ such that $\mathcal{F}(b)\in \Gamma_g^\beta$. Therefore $stp(\mathcal{F}(a),A_1)\vdash tp(a,\Gamma^\beta_g)$ and $\mathcal{F}(a)\downarrow_{A_1}\Gamma^\beta_g$; we conclude that $\mathcal{F}(a)\downarrow_{A}\mathcal{F}(b)$ and $a\downarrow_{A}b$. Since $a\cup b\downarrow _{A_2}\Gamma_g^\beta$, then $a\cup b\downarrow _{A}\Gamma_g^\beta$. Therefore $a\downarrow_{\Gamma_g^\beta}b$, a contradiction.\\ \\
By Fact \ref{F.4.12}, we know that $X\downarrow_{\Gamma_g^\beta}\Gamma_g^\alpha$. Since $\mathcal{A}^\beta_g\downarrow_{\Gamma_g^\beta}X\cup\Gamma_g^\alpha$, then $X\downarrow_{\mathcal{A}_g^\beta}\Gamma_g^\alpha$.\\
Now let us show that $D\downarrow_{\mathcal{A}_g^\beta}\Pi(\mathcal{B}_\xi)$.
By the definition of $\mathcal{A}^g$, we know that $stp(a_i^g,B_i^g)\vdash tp(a_i^g,\Gamma_g\cup \bigcup\{a_j^g|j<i\})$. Since $B_i^g\subseteq X\cup\bigcup\{a_j^g|j<i\wedge j\in Y\}$ holds for every $i\in Y$, $stp(a_i^g,B_i^g)\vdash tp(a_i^g,X\cup \Gamma^\beta_g\cup \bigcup\{a_j^g|j<\beta\}\cup \bigcup\{a_j^g|j<i\wedge j\in Y\})$ holds for all $i\in Y\backslash \beta$. We conclude that $D\cup \mathcal{A}_g^\beta$ is $a$-constructible over $X\cup \mathcal{A}_g^\beta$; since $\mathcal{A}_g^\beta$ is $a$-saturated, then
$X\rhd_{\mathcal{A}_g^\beta}D\cup\mathcal{A}_g^\beta$. So $X\downarrow_{\mathcal{A}_g^\beta}\Gamma_g^\alpha$ implies that $D\downarrow_{\mathcal{A}_g^\beta}\Gamma_g^\alpha$. On the other hand, $\mathcal{A}_g^\alpha$ is $a$-constructible over $\mathcal{A}_g^\beta\cup\Gamma_g^\alpha$, so $\Gamma_g^\alpha\rhd_{\mathcal{A}_g^\beta}\mathcal{A}_g^\alpha$ and $D\downarrow_{\mathcal{A}_g^\beta}\mathcal{A}_g^\alpha$. By the way we chose $\mathcal{B}_\xi$ and since $\alpha$ and $\beta$ are very good, we know that $D\downarrow_{\mathcal{A}_g^\beta}\Pi(\mathcal{B}_\xi)$.\\
Now, since $\mathcal{A}_f^\beta$ is $a$-constructible over $\Gamma_f^\beta$ and $\mathcal{A}$ is $a$-saturated, then $\Gamma_f^\beta \rhd_{\mathcal{A}}\mathcal{A}_f^\beta$. Since $\mathcal{B}_\xi\cap \Gamma_f^\beta=\mathcal{A}$, by Fact \ref{F.4.12} we know that $\mathcal{B}_\xi\downarrow_\mathcal{A}\Gamma_f^\beta$, so by domination, $\mathcal{B}_\xi\downarrow_\mathcal{A}\mathcal{A}_f^\beta$. Since $\beta$ is very good, we know that $\Pi(\mathcal{B}_\xi)\downarrow_{\Pi(\mathcal{A})}\mathcal{A}_g^\beta$, so by transitivity $D\downarrow_{\Pi(\mathcal{A})}\Pi(\mathcal{B}_\xi)$. We conclude $\Pi(\mathcal{B}_\xi)\downarrow_{\Pi(\mathcal{A})}D$ as we wanted.
\end{proof}
Clearly, we also have $\Pi(\mathcal{B}_\xi)\downarrow_{\Pi(\mathcal{C}_\eta)}D$, because $\Pi(\mathcal{C}_\eta)\subseteq D$.\\
\begin{claim}\label{Claim 4.14.3}
There is $a\in I_{\xi\eta}\backslash (I_{\xi\eta}\restriction \omega)$ such that $\Pi(a)\notin E$ and $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}E$.
\end{claim}
\begin{proof}
Suppose, towards a contradiction, that for every $a\in I_{\xi\eta}\backslash (I_{\xi\eta}\restriction\omega)$, $\Pi(a)\not\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}E$. Then, for every $a\in I_{\xi\eta}\backslash (I_{\xi\eta}\restriction\omega)$ there is $b_a\in E$ such that $\Pi(a)\not\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}b_a$.\\
The model $E$ is $a$-constructible over $\Gamma_g^{\alpha+1}\cup X$, therefore $|E|\leq \lambda(T)+(|\Gamma_g^{\alpha+1}\cup X|+\omega)^{<\omega}$. Since $\lambda(T)\leq 2^\omega$ and $|X|=2^\omega$, we obtain $|E|\leq 2^\omega+ |\Gamma_g^{\alpha+1}|$; by Fact \ref{F.4.6}, we get $|E|\leq g(\alpha)$ and $|E|<f(\alpha)$. Since $|I_{\xi\eta}|=f(\alpha)$, there is $b\in E$ and an infinite subset $J=\{c_i|i<\omega\}$ of $I_{\xi\eta}\backslash (I_{\xi\eta}\restriction\omega)$ such that for every $i<\omega$, $\Pi(c_i)\not\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}b$ holds. Since $\Pi(I_{\xi\eta}\backslash (I_{\xi\eta}\restriction\omega))$ is independent over $\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)$, we get $b\not\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)\cup\{\Pi(c_j)|j<i\}}\Pi(c_i)$ for every $i<\omega$. So $T$ is not superstable, a contradiction.
\end{proof}
Notice that $\Pi(I_{\xi\eta})$ is indiscernible over $\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)$. Since $\Pi(\mathcal{B}_\xi)\downarrow_{\Pi(\mathcal{C}_\eta)}D$, then by domination we get $M_3\downarrow_{\Pi(\mathcal{C}_\eta)}D$, where $M_3$ is an $a$-primary model over $\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)$. So the models $M_0=M_0'=\Pi(\mathcal{A})$, $M_1=M_1'=\Pi(\mathcal{B}_\xi)$, $M_2=\Pi(\mathcal{C}_\eta)$ and $M_2'=D$ satisfy the assumptions of Lemma \ref{L.3.11}, therefore $\Pi(I_{\xi\eta})$ is indiscernible over $\Pi(\mathcal{B}_\xi)\cup D$. By Remark \ref{R.3.12}, if $M'_3$ is an $a$-primary model over $\Pi(\mathcal{B}_\xi)\cup D$ with $\Pi(I_{\xi\eta}\restriction \omega)\subseteq M_3'$, then $Av(\Pi(I_{\xi\eta}\restriction \omega),M_3')\perp D$ and $\Pi(I_{\xi\eta})$ is independent over $\Pi(\mathcal{B}_\xi)\cup D$. So, if $a$ is the element given in Claim \ref{Claim 4.14.3} and $\Pi(a)\notin M_3'$ holds, then $tp(\Pi(a),M_3')\perp D$.
\begin{claim}\label{Claim 4.14.4}
$tp(\Pi(a),E)\perp D$.
\end{claim}
\begin{proof}
Let $M_3'$ be an $a$-primary model over $\Pi(\mathcal{B}_\xi)\cup D$ with $\Pi(I_{\xi\eta}\restriction \omega)\subseteq M_3'$. Since $E$ is $a$-saturated, there is an elementary embedding $\mathcal{F}:M_3'\rightarrow E$ such that $\mathcal{F}\restriction \Pi(\mathcal{B}_\xi)\cup D=id$. Let $b$ be such that $b\models \mathcal{F}(Av(\Pi(I_{\xi\eta}\restriction \omega),M_3'))$; since $Av(\Pi(I_{\xi\eta}\restriction \omega),M_3')\perp D$, we get $tp(b,\mathcal{F}(M_3'))\perp D$. By the way $I_{\xi\eta}$ was chosen and Remark \ref{R.3.12}, we know that $\Pi(I_{\xi\eta})$ is independent over $\Pi(\mathcal{B}_\xi)\cup D$; by Lemma \ref{L.3.9} we conclude that $\mathcal{F}(Av(\Pi(I_{\xi\eta}\restriction \omega),M_3'))$ does not fork over $\Pi(\mathcal{B}_\xi)\cup D$. On the other hand, by Claim \ref{Claim 4.14.3} $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}E$, so $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi)\cup D}\mathcal{F}(M_3')$. By Fact \ref{F.3.5}, since $tp(b,\mathcal{F}(M_3'))\perp D$, $b\downarrow_{\Pi(\mathcal{B}_\xi)\cup D}\mathcal{F}(M_3')$ and $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi)\cup D}\mathcal{F}(M_3')$ hold, we get $tp(\Pi(a),\mathcal{F}(M_3'))\perp D$.\\
To show that $tp(\Pi(a),E)\perp D$ let $d$ and $B$ be such that $d\downarrow_DE$, $D\subseteq B$, $\Pi(a)\downarrow_EB$, and $d\downarrow_EB$.
By transitivity, $d\downarrow_DE$ and $d\downarrow_EB$ imply that $d\downarrow_DE\cup B$. By Claim \ref{Claim 4.14.3} we know that $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}E$, so by transitivity we get $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}E\cup B$. Therefore $d\downarrow_D\mathcal{F}(M_3')\cup B$ and $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi)\cup D}\mathcal{F}(M_3')\cup B$ hold, so $d\downarrow_D\mathcal{F}(M_3')$, $d\downarrow_{\mathcal{F}(M_3')} B$ and $\Pi(a)\downarrow_{\mathcal{F}(M_3')} B$ hold. Since $tp(\Pi(a),\mathcal{F}(M_3'))\perp D$, we conclude that $\Pi(a)\downarrow_Bd$.
\end{proof}
Let $I_X$ be the set $\bigcup\{\mathcal{B}_r, I_{rp}|\mathcal{B}_r\not\subseteq\Gamma_g^{\alpha+1}\wedge r<p\wedge\mathcal{C}_p\subseteq X\backslash \Gamma_g^{\alpha+1}\}$. Let us show that $D\downarrow_XI_X\cup \Gamma_g^{\alpha+1}$.\\
If $D\not\downarrow_{X}I_X\cup\Gamma_g^{\alpha+1}$, then there are finite $c\in D$ and $b\in (I_X\cup\Gamma_g^{\alpha+1})\backslash X$ such that $c\not\downarrow_{X}b$.\\
Since $D$ is $a$-constructible over $X$, it is $a$-atomic over $X$. So, there is a finite $A_1\subseteq X$ such that $stp(c,A_1)\vdash tp(c,X)$.\\
Since $T$ is superstable, there is a finite $A_2\subseteq X$ such that $c\cup b\downarrow _{A_2}X$. Denote by $A$ the set $A_1\cup A_2$. Since $X$ is nice, $A$ is a finite subset of $X$ and $b\in (I_X\cup\Gamma_g^{\alpha+1})\backslash X$, by Lemma \ref{L.4.10} there is $\mathcal{F}\in Saut(\mathcal{M},A)$ such that $\mathcal{F}(b)\in X$. Therefore $stp(\mathcal{F}(c),A_1)\vdash tp(c,X)$ and $\mathcal{F}(c)\downarrow_{A_1}X$; we conclude $\mathcal{F}(c)\downarrow_{A}\mathcal{F}(b)$ and $c\downarrow_{A}b$. Since $c\cup b\downarrow _{A_2}X$, then $c\cup b\downarrow _{A}X$. Therefore $c\downarrow_{X}b$, a contradiction.\\ \\
By Fact \ref{F.4.12}, we know that $I_X\cup X\downarrow_{X\cap \Gamma_g^{\alpha+1}}\Gamma_g^{\alpha+1}$, so $I_X\downarrow_{X}\Gamma_g^{\alpha+1}$. Since $D\downarrow_XI_X\cup \Gamma_g^{\alpha+1}$, we conclude that $I_X\downarrow_D\Gamma_g^{\alpha+1}$.\\
By the way $E$ was chosen, we know that $E$ is $a$-constructible over $D\cup \Gamma_g^{\alpha+1}$. Since $D$ is $a$-saturated, we get that $\Gamma_g^{\alpha+1}\rhd_DE$. By domination we conclude $I_X\downarrow_DE$.\\
Therefore, for every $c\in I_X$ we have that $c\downarrow_DE$. Since $c\downarrow_EE$ and $\Pi(a)\downarrow_EE$ hold, by Claim \ref{Claim 4.14.4} we conclude that $c\downarrow_E\Pi(a)$ for every $c\in I_X$. By finite character we get $I_X\downarrow_E\Pi(a).$
By the way $F$ was chosen, we know that $F$ is $a$-constructible over $I_X\cup E$, and since $E$ is $a$-saturated, we conclude that $I_X\rhd_EF.$ Therefore $F\downarrow_E\Pi(a)$.
Since $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}E$, by transitivity we conclude $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}F$.\\ \\
On the other hand, $\Pi(a)\in \mathcal{A}^g$ and $\mathcal{A}^g$ is $a$-constructible over $F\cup \Gamma_g$, so $\mathcal{A}^g$ is $a$-atomic over $F\cup \Gamma_g$ and there is a finite $B\subseteq F\cup \Gamma_g$ such that $(tp(\Pi(a),F\cup \Gamma_g),B)\in F_\omega^a$ and $\Pi(a)\in \mathcal{N}$, where $\mathcal{N}\subseteq \mathcal{A}^g$ is $a$-primary over $F\cup B$. Let $B'=B\backslash F$. There is a nice set $\mathcal{Y}$ such that $\mathcal{Y}\cap F=\mathcal{A}$, $B'\subseteq \mathcal{Y}$, $\mathcal{Y}$ is $\Gamma_g$-nice for all $(r,p)$ that satisfy $\mathcal{B}_r,\mathcal{C}_p\subset \mathcal{Y}$, and $S=\{r\in J_g\mid (r\in (J_g)_{<\lambda}\wedge \mathcal{B}_r\subset \mathcal{Y})\vee(r\in (J_g)_{\lambda}\wedge \mathcal{C}_r\subset \mathcal{Y})\}$ is finite. Define $\mathcal{X}=\{r\in J_g\mid (r\in (J_g)_{<\lambda}\wedge \mathcal{B}_r\subset X)\vee(r\in (J_g)_{\lambda}\wedge \mathcal{C}_r\subset X)\}$. Let $\bar{S}=S\cup \{r\in (J_g)_{<\lambda}\mid\exists p\in S\ (r<p)\}$ and $\bar{\mathcal{X}}=\mathcal{X}\cup \{r\in (J_g)_{<\lambda}\mid\exists p\in \mathcal{X}\ (r<p)\}$. By the way $\bar{\mathcal{X}}$ was defined, we know that for every limit ordinal $\theta<\lambda$ and $\zeta\in J_g$, if for all $\theta'<\theta$, $\zeta\restriction{\theta'}\in \bar{\mathcal{X}}$ holds, then $\zeta\restriction{\theta}\in \bar{\mathcal{X}}$. Notice that since $cf(\alpha)=\lambda$, if $\theta<\lambda$ is a limit ordinal such that for all $\theta'<\theta$, $\zeta\restriction{\theta'}\in J_g^{\alpha+1}$ holds, then $\zeta\restriction{\theta}\in J_g^{\alpha+1}$. We conclude that if $\theta<\lambda$ and $\zeta\in J_g$ are such that for all $\theta'<\theta$, $\zeta\restriction\theta'\in \bar{\mathcal{X}}\cup J_g^{\alpha+1}$ and $\zeta\restriction{\theta}\in \bar{S}\backslash (\bar{\mathcal{X}}\cup J_g^{\alpha+1})$, then $\theta$ is a successor ordinal.\\
Let $\{u_i\}_{i<f(\alpha)^+}$ be a sequence of subtrees of $J_g$ with the following properties:
\begin{itemize}
\item $u_0=\bar{S}$.
\item Every $u_i$ is a tree isomorphic to $u_0$.
\item If $i\neq j$, then $u_i\cap u_j=u_0\cap (\bar{\mathcal{X}}\cup J_g^{\alpha+1})$.
\item Every $\zeta\in dom(c_g)\cap u_0$ satisfies $c_g(\zeta)=c_g(G_i(\zeta))$, where $G_i$ is the isomorphism between $u_0$ and $u_i$.
\end{itemize}
For every $\zeta\in u_0$ and $\theta<\lambda$ such that $\zeta\restriction\theta\in \bar{\mathcal{X}}\cup J_g^{\alpha+1}$ and $\zeta\restriction{\theta+1}\in u_0\backslash (\bar{\mathcal{X}}\cup J_g^{\alpha+1})$, it holds by Definition \ref{D.2.6} that $\zeta\restriction\theta$ has $\kappa$ many immediate successors in $J_g\backslash J_g^{\alpha+1}$. Also by Definition \ref{D.2.6}, the elements of $J_g$ are all the functions $\eta:s\rightarrow \lambda\times \kappa^4$ that satisfy items 1 to 8; therefore each of these immediate successors $\zeta'$ of $\zeta\restriction\theta$ satisfies that in the set $\{r\in J_g\mid\zeta'\leq r\}$ there is a subtree isomorphic (as a coloured tree) to $\{p\in u_0\backslash (\bar{\mathcal{X}}\cup J_g^{\alpha+1})\mid \zeta\restriction (\theta+1)\leq p\}$.
This, together with the fact that $S$ is finite, gives the existence of the sequence $\{u_i\}_{i<f(\alpha)^+}$.
By the way we chose the sequence $\{u_i\}_{i<f(\alpha)^+}$, for every $i<f(\alpha)^+$, the isomorphism $G_i$ induces a coloured tree isomorphism $\bar{G}_i:\bar{\mathcal{X}}\cup J_g^{\alpha+1}\cup u_0\rightarrow \bar{\mathcal{X}}\cup J_g^{\alpha+1}\cup u_i$ such that $\bar{G}_i\restriction \bar{\mathcal{X}}\cup J_g^{\alpha+1}= id$. Let us denote by $z_i$ the tree $\bar{\mathcal{X}}\cup J_g^{\alpha+1}\cup u_i$.
Let us define $U_i=\{\mathcal{B}_r\mid r\in z_i\wedge r\in (J_g)_{<\lambda}\}\cup\{\mathcal{C}_p\mid p\in z_i\wedge p\in (J_g)_{\lambda}\}$ and $\bar{U}_i=U_i\cup \{I_{rp}\mid \mathcal{B}_r\in U_i\wedge \mathcal{C}_p\in U_i\wedge r<p\}$. Notice that $\bigcup \bar{U}_i$ is nice for all $i<f(\alpha)^+$. Since $u_i$ is isomorphic to $\bar{S}$, if $p\in z_i$ and $r<p$, then $r\in z_i$. Therefore, $\bigcup\bigcup_{j\neq i}\bar{U}_j$ is nice for all $i<f(\alpha)^+$.
\begin{claim}\label{Claim 4.14.5}
For all $i<f(\alpha)^+$ it holds that $\bigcup \bar{U}_i\downarrow_F\bigcup\bigcup_{j\neq i}\bar{U}_j$.
\end{claim}
\begin{proof}
By the way the sets $\bar{U}_i$ were constructed, we know that $(\bigcup \bar{U}_i) \cap(\bigcup \bar{U}_j)=\Gamma_g^{\alpha+1}\cup X\cup I_X$ for all $i\neq j$. Let us denote by $\mathbb{F}$ the set $\Gamma_g^{\alpha+1}\cup X\cup I_X$. By Corollary 4.13 we know that $$\bigcup \bar{U}_i\downarrow_\mathbb{F}\bigcup\bigcup_{j\neq i}\bar{U}_j.$$
Let us prove that $F\downarrow_\mathbb{F}\bigcup\bigcup_{j<f(\alpha)^+}\bar{U}_j$. Suppose it is false; then there are finite $c\in F$ and $b\in \bigcup\bigcup_{j<f(\alpha)^+}\bar{U}_j$ such that $c\not\downarrow_{\mathbb{F}}b$.\\
Since $F$ is $a$-constructable over $\mathbb{F}$, then it is $a$-atomic over $\mathbb{F}$. So, there is a finite $A_1\subseteq \mathbb{F}$ such that $stp(c,A_1)\vdash tp(c,\mathbb{F})$.\\
Since $T$ is superstable, there is a finite $A_2\subseteq \mathbb{F}$ such that $c\cup b\downarrow _{A_2}\mathbb{F}$. Denote by $A$ the set $A_1\cup A_2$. By Lemma \ref{L.4.10} there is $\mathcal{F}\in Saut(\mathcal{M},A)$ such that $\mathcal{F}(b)\in \mathbb{F}$. Therefore $stp(\mathcal{F}(c),A_1)\vdash tp(c,\mathbb{F})$, and $\mathcal{F}(c)\downarrow_{A_1}\mathbb{F}$. So $\mathcal{F}(c)\downarrow_{A}\mathcal{F}(b)$ and $c\downarrow_{A}b$. Since $c\cup b\downarrow _{A_2}\mathbb{F}$, then $c\cup b\downarrow _{A}\mathbb{F}$. Therefore $c\downarrow_{\mathbb{F}}b$, a contradiction.\\
Since $F\downarrow_\mathbb{F}\bigcup\bigcup_{j<f(\alpha)^+}\bar{U}_j$ and $\bigcup \bar{U}_i\downarrow_\mathbb{F}\bigcup\bigcup_{j\neq i}\bar{U}_j$ holds, we conclude that $\bigcup \bar{U}_i\downarrow_F\bigcup\bigcup_{j\neq i}\bar{U}_j$.
\end{proof}
The isomorphisms $(\bar{G}_i)_{i<f(\alpha)^+}$ induce the following elementary maps $\mathcal{H}^i_{rp}:\mathcal{B}_r\cup\mathcal{C}_p\cup I_{rp}\rightarrow\mathcal{B}_{\bar{G}_i(r)}\cup\mathcal{C}_{\bar{G}_i(p)}\cup I_{\bar{G}_i(r)\bar{G}_i(p)}$ for all $r,p\in z_0$ ($r\in (J_g)_{<\lambda}$ and $p\in (J_g)_{\lambda}$), given by $\mathcal{H}^i_{rp}=H_{\bar{G}_i(r)\bar{G}_i(p)}\circ H^{-1}_{rp}$. Let $\{D_i\mid i<\theta\}$ be an enumeration of $U_0$ such that if $D_i$ is a subset of $\Gamma_g^{\alpha+1}\cup X\cup I_X$ and $D_j$ is not a subset of $\Gamma_g^{\alpha+1}\cup X\cup I_X$, then $i<j$. Let $\{D'_i\mid i<\theta'\}$ be an enumeration of $\{I_{rp}\mid I_{rp}\in\bar{U}_0\}$.
\begin{claim}\label{Claim 4.14.6}
The map $\mathcal{H}_i:\bigcup\bar{U}_0\rightarrow\bigcup\bar{U}_i$ defined by $$\mathcal{H}_i=\bigcup_{\eta\in z_0\cap (J_g)_\lambda}\bigcup_{\xi\in z_0\cap (J_g)_{<\lambda}, \xi<\eta}\mathcal{H}^i_{\xi\eta}$$ is elementary.
\end{claim}
\begin{proof}
Let us start by showing that $\mathcal{H}_i\restriction \bigcup U_0$ is elementary. We will proceed by induction to prove that $\mathcal{H}_i\restriction \bigcup\{D_j\mid j<\theta\}$ is elementary. By the way $\mathcal{H}_i$ was defined and Fact \ref{F.4.4}, we know that $\mathcal{H}_i\restriction D_0$ is elementary. Let $n$ be such that the map $\mathcal{H}_i\restriction \bigcup\{D_j\mid j\leq m\}$ is elementary for all $m<n$; then the map $\mathcal{H}_i\restriction \bigcup\{D_j\mid j<n\}$ is elementary. By the way the models $\mathcal{B}_r$ and $\mathcal{C}_p$ were chosen, we know that $D_n\downarrow_\mathcal{A}\bigcup\{D_j\mid j<n\}$ and by the definition of $\mathcal{H}_i$, $\mathcal{H}_i(D_n)\downarrow_\mathcal{A}\mathcal{H}_i(\bigcup\{D_j\mid j<n\})$.
Since $\mathcal{H}_i\restriction\bigcup\{D_j\mid j<n\}$ is elementary, there is $\mathcal{F}$ an automorphism of the monster model that extends $\mathcal{H}_i\restriction\bigcup\{D_j\mid j<n\}$, so $\mathcal{F}^{-1}(\mathcal{H}_i(D_n))\downarrow_\mathcal{A}\bigcup\{D_j\mid j<n\}$.
By the definition of $\mathcal{H}_i$, we know that $D_j$ and $\mathcal{H}_i(D_j)$ are isomorphic, so $tp(D_n,\mathcal{A})=tp(\mathcal{H}_i(D_n),\mathcal{A})$. On the other hand, since $\mathcal{F}$ is an automorphism, we conclude that $tp(D_n,\mathcal{A})=tp(\mathcal{F}^{-1}(\mathcal{H}_i(D_n)),\mathcal{A})$. Since $\mathcal{F}^{-1}(\mathcal{H}_i(D_n))\downarrow_\mathcal{A}\bigcup\{D_j\mid j<n\}$, $D_n\downarrow_\mathcal{A}\bigcup\{D_j\mid j<n\}$, and $tp(D_n,\mathcal{A})$ is stationary, we conclude that $tp(D_n,\bigcup\{D_j\mid j<n\})=tp(\mathcal{F}^{-1}(\mathcal{H}_i(D_n)),\bigcup\{D_j\mid j<n\})$. Therefore $tp(\bigcup\{D_j\mid j\leq n\},\emptyset)=tp(\mathcal{H}_i(\bigcup\{D_j\mid j\leq n\}),\emptyset)$. We conclude that $\mathcal{H}_i\restriction \bigcup\{D_j\mid j\leq n\}$ is elementary.
Now we will show by induction over the indiscernible sequences that $\mathcal{H}_i\restriction \bigcup U_0\cup\bigcup\{D'_j\mid j<\theta'\}$ is elementary. Let $n$ be such that the map $\mathcal{H}_i\restriction \bigcup U_0\cup\bigcup\{D'_j\mid j\leq m\}$ is elementary for all $m<n$; then the map $\mathcal{H}_i\restriction \bigcup U_0\cup\bigcup\{D'_j\mid j<n\}$ is elementary. Let us denote by $I_{rp}$ the sequence $D'_n$. By Claim \ref{Claim 4.7.2} we know that $tp(I_{\bar{G}_i(r)\bar{G}_i(p)},\mathcal{B}_{\bar{G}_i(r)}\cup\mathcal{C}_{\bar{G}_i(p)})\vdash tp(I_{\bar{G}_i(r)\bar{G}_i(p)},\Gamma_g\backslash I_{\bar{G}_i(r)\bar{G}_i(p)})$, in particular $$tp(I_{\bar{G}_i(r)\bar{G}_i(p)},\mathcal{B}_{\bar{G}_i(r)}\cup\mathcal{C}_{\bar{G}_i(p)})\vdash tp(I_{\bar{G}_i(r)\bar{G}_i(p)},\mathcal{H}_i(\bigcup U_0\cup\bigcup\{D'_j\mid j<n\})).$$ Since $\mathcal{H}_i\restriction \bigcup U_0\cup\bigcup\{D'_j\mid j<n\}$ is elementary, there is $\mathcal{F}$ an automorphism of the monster model that extends $\mathcal{H}_i\restriction \bigcup U_0\cup\bigcup\{D'_j\mid j<n\}$, therefore $$tp(\mathcal{F}^{-1}(I_{\bar{G}_i(r)\bar{G}_i(p)}),\mathcal{B}_r\cup\mathcal{C}_p)\vdash tp(\mathcal{F}^{-1}(I_{\bar{G}_i(r)\bar{G}_i(p)}),\bigcup U_0\cup\bigcup\{D'_j\mid j<n\}).$$ On the other hand, $\mathcal{H}^i_{rp}$ is elementary, so $tp(I_{\bar{G}_i(r)\bar{G}_i(p)}\cup\mathcal{B}_{\bar{G}_i(r)}\cup\mathcal{C}_{\bar{G}_i(p)},\emptyset)=tp(I_{rp}\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)$. Since $\mathcal{F}$ is an automorphism, we know that $tp(\mathcal{F}^{-1}(I_{\bar{G}_i(r)\bar{G}_i(p)})\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)=tp(I_{rp}\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)$.
We conclude that $tp(\mathcal{F}^{-1}(I_{\bar{G}_i(r)\bar{G}_i(p)}),\mathcal{B}_r\cup\mathcal{C}_p)=tp(I_{rp},\mathcal{B}_r\cup\mathcal{C}_p)$, therefore $$tp(I_{rp},\mathcal{B}_r\cup\mathcal{C}_p)\vdash tp(\mathcal{F}^{-1}(I_{\bar{G}_i(r)\bar{G}_i(p)}),\bigcup U_0\cup\bigcup\{D'_j\mid j<n\}).$$ So $tp(I_{rp},\bigcup U_0\cup\bigcup\{D'_j\mid j<n\})=tp(\mathcal{F}^{-1}(I_{\bar{G}_i(r)\bar{G}_i(p)}),\bigcup U_0\cup\bigcup\{D'_j\mid j<n\})$, we conclude that $tp(I_{rp}\cup\bigcup U_0\cup\bigcup\{D'_j\mid j<n\},\emptyset)=tp(I_{\bar{G}_i(r)\bar{G}_i(p)}\cup\mathcal{H}_i(\bigcup U_0\cup\bigcup\{D'_j\mid j<n\}),\emptyset)$ and $\mathcal{H}_i\restriction \bigcup U_0\cup\bigcup\{D'_j\mid j\leq n\}$ is elementary.
\end{proof}
\begin{claim}\label{Claim 4.14.7}
If $\mathcal{R}: f(\alpha)^+\rightarrow f(\alpha)^+$ is a permutation, then $tp(\bigcup\bigcup_{j<i}\bar{U}_j,\Gamma_g^{\alpha+1}\cup X\cup I_X)=tp(\bigcup\bigcup_{j<i}\bar{U}_{\mathcal{R}(j)},\Gamma_g^{\alpha+1}\cup X\cup I_X)$ holds for all $i<f(\alpha)^+$.
\end{claim}
\begin{proof}
It is enough to show that the map $\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}$ is elementary. We will prove this by a double induction. By Claim \ref{Claim 4.14.6}, we know that $\mathcal{H}_{\mathcal{R}(0)}\circ \mathcal{H}_0^{-1}$ is elementary. For the successor case let $m$ be an ordinal such that $\bigcup_{j\leq m}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}$ is elementary. We will start by showing that $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}$$ is elementary.
Let $\{E_j|j<\theta\}$ be the enumeration of $U_{m+1}$ induced by $\{D_j|j<\theta\}$ and $\mathcal{H}_{m+1}$, and let $n<\theta$ be such that $E_n\not\subseteq \Gamma_g^{\alpha+1}\cup X\cup I_X$ and $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup\{E_j|j\leq w\}$$ is elementary for all $w<n$; then the map $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup\{E_j|j<n\}$$ is elementary. Then there is an automorphism $\mathcal{F}$ of the monster model that extends $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup\{E_j|j<n\}.$$
By Corollary 4.13 we know that $$E_n\downarrow_\mathcal{A}\bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup\{E_j|j<n\},$$ and by the definition of $\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}$ we know that $$\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}(E_n)\downarrow_\mathcal{A}\bigcup\bigcup_{j\leq m}\bar{U}_{\mathcal{R}(j)}\cup\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}(\bigcup\{D_j|j<n\})$$ so $$\mathcal{F}^{-1}(\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}(E_n))\downarrow_\mathcal{A}\bigcup\bigcup_{j\leq m}\bar{U}_{j}\cup\bigcup\{E_j|j<n\}.$$
By Claim \ref{Claim 4.14.6} we know that $\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}$ is elementary, so $tp(E_n,\mathcal{A})=tp(\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}(E_n),\mathcal{A})$, and since $\mathcal{F}$ is an automorphism, we get $tp(E_n,\mathcal{A})=tp(\mathcal{F}^{-1}(\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}(E_n)),\mathcal{A})$. Since the types over $\mathcal{A}$ are stationary, we conclude that $E_n$ and $\mathcal{F}^{-1}(\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}(E_n))$ have the same type over $\bigcup\bigcup_{j\leq m}\bar{U}_{j}\cup\bigcup\{E_j|j<n\}$. We conclude that $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup\{E_j|j\leq n\}$$ is elementary.
Now we will show by induction over the indiscernible sequences that $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m+1}\bar{U}_j$$ is elementary.
Let $\{E'_j|j<\theta'\}$ be the enumeration of the set $\{I_{rp}|I_{rp}\in\bar{U}_{m+1}\}$ induced by $\{D'_j|j<\theta'\}$ and $\mathcal{H}_{m+1}$, and let $n<\theta'$ be such that the map $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j\leq w\}$$ is elementary for all $w<n$; then $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j< n\}$$ is elementary. Let us denote by $I_{rp}$ the sequence $E'_n$ and by $I_{tq}$ the sequence $\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}(E'_n)$. By Claim \ref{Claim 4.7.2} we know that $tp(I_{tq},\mathcal{B}_{t}\cup\mathcal{C}_{q})\vdash tp(I_{tq},\Gamma_g\backslash I_{tq})$, in particular $$tp(I_{tq},\mathcal{B}_{t}\cup\mathcal{C}_{q})\vdash tp(I_{tq},\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}(\bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j< n\})).$$
Since $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j< n\}$$ is elementary, there is $\mathcal{F}$ an automorphism of the monster model that extends it, therefore $$tp(\mathcal{F}^{-1}(I_{tq}),\mathcal{B}_r\cup\mathcal{C}_p)\vdash tp(\mathcal{F}^{-1}(I_{tq}),\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}(\bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j< n\})).$$ On the other hand, by Claim \ref{Claim 4.14.6} we know that $\mathcal{H}_{\mathcal{R}(m+1)}\circ \mathcal{H}_{m+1}^{-1}$ is elementary, so $tp(I_{tq}\cup\mathcal{B}_{t}\cup\mathcal{C}_{q},\emptyset)=tp(I_{rp}\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)$. Since $\mathcal{F}$ is an automorphism, we know that $tp(\mathcal{F}^{-1}(I_{tq})\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)=tp(I_{rp}\cup\mathcal{B}_r\cup\mathcal{C}_p,\emptyset)$. We conclude that $tp(\mathcal{F}^{-1}(I_{tq}),\mathcal{B}_r\cup\mathcal{C}_p)=tp(I_{rp},\mathcal{B}_r\cup\mathcal{C}_p)$, therefore $$tp(I_{rp},\mathcal{B}_r\cup\mathcal{C}_p)\vdash tp(\mathcal{F}^{-1}(I_{tq}),\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}(\bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j< n\})).$$ So $I_{rp}$ and $\mathcal{F}^{-1}(I_{tq})$ have the same type over $\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}(\bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j< n\})$, we conclude that $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m}\bar{U}_j\cup\bigcup U_{m+1}\cup\bigcup\{E'_j|j\leq n\}$$ is elementary. So $$\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j\leq m+1}\bar{U}_j$$ is elementary.
For the limit case it is easy to see that, if $m$ is a limit ordinal such that $\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j<i}\bar{U}_j$ is elementary for all $i<m$, then it follows that $\bigcup_{j<f(\alpha)^+}\mathcal{H}_{\mathcal{R}(j)}\circ \mathcal{H}_j^{-1}\restriction \bigcup\bigcup_{j<m}\bar{U}_j$ is elementary.
\end{proof}
By Claim \ref{Claim 4.14.7} we know that $(\bigcup\bar{U}_i)_{i<f(\alpha)^+}$ is an indiscernible sequence over $\Gamma_g^{\alpha+1}\cup X\cup I_X$. Therefore, for all $i<f(\alpha)^+$, $stp(\bigcup\bar{U}_0,\Gamma_g^{\alpha+1}\cup X\cup I_X)=stp(\bigcup\bar{U}_i,\Gamma_g^{\alpha+1}\cup X\cup I_X)$. Let $\mathcal{G}_i: F\cup\bigcup\bar{U}_0\rightarrow F\cup\bigcup\bar{U}_i$ be given by $\mathcal{G}_i\restriction F=id$ and $\mathcal{G}_i\restriction \bigcup\bar{U}_0=\mathcal{H}_i$.
\begin{claim}\label{Claim 4.14.8}
$\mathcal{G}_i$ is elementary.
\end{claim}
\begin{proof}
Let $(\Gamma_g^{\alpha+1}\cup X\cup I_X, (c_j,C_j)_{j<\kappa})$ be an $a$-construction of $F$ over $\Gamma_g^{\alpha+1}\cup X\cup I_X$, by Lemma \ref{L.4.11}, $(\bigcup\bar{U}_i, (c_j,C_j)_{j<\kappa})$ is an $a$-construction of $F\cup \bigcup\bar{U}_i$ over $\bigcup\bar{U}_i$ (notice that $\bigcup\bar{U}_i=\Gamma_g^{\alpha+1}\cup X\cup I_X\cup \bigcup\bar{U}_i$). We will show by induction on $m$ that $\mathcal{G}_i\restriction \bigcup\bar{U}_0\cup\bigcup\{c_j\mid j\leq m\}$ is elementary. Let $m<\kappa$ be such that for all $w<m$ it holds $\mathcal{G}_i\restriction \bigcup\bar{U}_0\cup\bigcup\{c_j\mid j\leq w\}$ is elementary, and $stp(\bigcup\bar{U}_0\cup\bigcup\{c_j\mid j\leq w\},\Gamma_g^{\alpha+1}\cup X\cup I_X)=stp(\bigcup\bar{U}_i\cup\bigcup\{c_j\mid j\leq w\},\Gamma_g^{\alpha+1}\cup X\cup I_X)$, therefore $\mathcal{G}_i\restriction \bigcup\bar{U}_0\cup\bigcup\{c_j\mid j<m\}$ is elementary, and $stp(\bigcup\bar{U}_0\cup\bigcup\{c_j\mid j<m\},\Gamma_g^{\alpha+1}\cup X\cup I_X)=stp(\bigcup\bar{U}_i\cup\bigcup\{c_j\mid j<m\},\Gamma_g^{\alpha+1}\cup X\cup I_X)$. By Claim \ref{Claim 4.14.6} and since $stp(\bigcup\bar{U}_0,\Gamma_g^{\alpha+1}\cup X\cup I_X)=stp(\bigcup\bar{U}_i,\Gamma_g^{\alpha+1}\cup X\cup I_X)$ holds, we know that $0\leq m$. Since $a$-constructibility is $F^a_\omega$-constructibility, then there is $Z\subset m+1$ such that $m\in Z$ and $Z$ is closed. Therefore there is $C'\subseteq \Gamma_g^{\alpha+1}\cup X\cup I_X$ such that $stp((c_j)_{j\in Z}, C')\vdash tp((c_j)_{j\in Z}, \bigcup \bar{U}_i\cup\bigcup_{j\notin Z, j<m}c_j)$. On the other hand, there is $\bar{\mathcal{G}}\in Saut(\mathcal{M},\Gamma_g^{\alpha+1}\cup X\cup I_X)$ such that $\bar{\mathcal{G}}\restriction \bigcup\bar{U}_0\cup\bigcup\{c_j\mid j<m\}=\mathcal{G}_i\restriction \bigcup\bar{U}_0\cup\bigcup\{c_j\mid j<m\}$. 
So $stp((c_j)_{j\in Z, j<m}\ ^\frown \bar{\mathcal{G}}^{-1}(c_m), C')\vdash tp((c_j)_{j\in Z, j<m}\ ^\frown \bar{\mathcal{G}}^{-1}(c_m), \bigcup \bar{U}_0\cup\bigcup_{j\notin Z, j<m}c_j)$. Since $\bar{\mathcal{G}}\in Saut(\mathcal{M},\Gamma_g^{\alpha+1}\cup X\cup I_X)$, then $stp((c_j)_{j\in Z, j<m}\ ^\frown \bar{\mathcal{G}}^{-1}(c_m), C')=stp((c_j)_{j\in Z}, C')$, and we conclude that $tp((c_j)_{j\in Z}, \bigcup \bar{U}_0\cup\bigcup_{j\notin Z, j<m}c_j)=tp((c_j)_{j\in Z, j<m}\ ^\frown \bar{\mathcal{G}}^{-1}(c_m), \bigcup \bar{U}_0\cup\bigcup_{j\notin Z, j<m}c_j)$. Therefore $tp(\bigcup \bar{U}_0\cup\bigcup_{j\leq m}c_j,\emptyset)=tp(\bigcup \bar{U}_i\cup\bigcup_{j\leq m}c_j,\emptyset)$ and $\mathcal{G}_i\restriction \bigcup\bar{U}_0\cup\bigcup\{c_j\mid j\leq m\}$ is elementary.
\end{proof}
Let us define for all $i<f(\alpha)^+$ the model $M_i\subseteq \mathcal{A}^g$ as an $a$-primary model over $F\cup\bigcup_{j<i}M_j\cup\bigcup\bar{U}_i$, with $\mathcal{N}\subseteq M_0$, and let $b_0\in M_0$ be $\Pi(a)$ (notice that $B\subseteq \bigcup\bar{U}_0$; it was chosen such that $(tp(\Pi(a),F\cup \Gamma_g),B)\in F_\omega^a$ and $\Pi(a)\in \mathcal{N}$, $\mathcal{N}$ the $a$-primary model over $F\cup B$). For all $0<i<f(\alpha)^+$ let $\bar{\mathcal{G}}_i\in Saut(\mathcal{M},\Gamma_g^{\alpha+1}\cup X\cup I_X)$ be such that $\bar{\mathcal{G}}_i\restriction F\cup\bigcup\bar{U}_0=\mathcal{G}_i\restriction F\cup\bigcup\bar{U}_0$ and $b_i\in M_i$ be such that $stp(b_i,\mathcal{G}_i(B))=stp(\bar{\mathcal{G}}_i(\Pi(a)),\mathcal{G}_i(B))$. We know that $(tp(\Pi(a),F\cup \Gamma_g),B)\in F_\omega^a$, so by $a$-isolation and the definition of $\bar{\mathcal{G}}_i$ we conclude that $(tp(b_i,\bar{\mathcal{G}}_i(F \cup\bigcup\bar{U}_0)),\mathcal{G}_i(B))\in F_\omega^a$, so $(tp(b_i,F \cup\bigcup\bar{U}_i),\mathcal{G}_i(B))\in F_\omega^a$. Therefore $tp(b_i,F)=tp(\bar{\mathcal{G}}_i(\Pi(a)),F)$ and since $\bar{\mathcal{G}}_i$ is an automorphism that fixes $F$, we conclude that $tp(b_i,F)=tp(\Pi(a),F)$. On the other hand, $(tp(b_i,F \cup\bigcup\bar{U}_i),\mathcal{G}_i(B))\in F_\omega^a$ implies that $b_i\cup F \cup\bigcup\bar{U}_i$ is $a$-constructable over $F \cup\bigcup\bar{U}_i$, and since $F$ is $a$-saturated, $\bigcup\bar{U}_i\rhd_Fb_i\cup\bigcup\bar{U}_i$. By Claim \ref{Claim 4.14.5} we know that $\bigcup \bar{U}_i\downarrow_F\bigcup\bigcup_{j\neq i}\bar{U}_j$, so by domination we conclude that $b_i\cup\bigcup \bar{U}_i\downarrow_F\bigcup\bigcup_{j\neq i}\bar{U}_j$, in particular $b_i\downarrow_F\bigcup\bigcup_{j\neq i}\bar{U}_j$ holds for all $i<f(\alpha)^+$.
\begin{claim}\label{Claim 4.14.9}
For all $i<f(\alpha)^+$, $M_i$ is $a$-constructable over $F\cup\bigcup\bigcup_{j\leq i}\bar{U}_j$.
\end{claim}
\begin{proof}
Suppose, towards a contradiction, that it is false. Let $i<f(\alpha)^+$ be the least ordinal such that $M_i$ is not $a$-constructable over $F\cup\bigcup\bigcup_{j\leq i}\bar{U}_j$, and notice that $0<i$. Since $F$ is $a$-constructable over $\Gamma_g^{\alpha+1}\cup X\cup I_X$, by Lemma \ref{L.4.11}, $F\cup \bigcup \bar{U}_0$ is $a$-constructable over $\bigcup \bar{U}_0$, and $M_0$ is $a$-constructable over $\bigcup\bar{U}_0$.
Let $(\bigcup_{h<j}M_h\cup\bigcup\bar{U}_j,(c^j_k,C^j_k)_{k<\kappa})$ be an $a$-construction of $M_j$ over $\bigcup_{h<j}M_h\cup\bigcup\bar{U}_j$. Let us order the set $\{c_k^j\mid j\leq i,k<\kappa\}$ in a lexicographic way, i.e. $c_k^j<c_n^m$ if $j<m$, or $j=m$ and $k<n$. Since $M_i$ is not $a$-constructable over $\bigcup\bigcup_{j\leq i}\bar{U}_j$, then $(\bigcup\bigcup_{j\leq i}\bar{U}_j,(c_k^j,C_k^j)_{j\leq i,k<\kappa})$ is not an $a$-construction over $\bigcup\bigcup_{j\leq i}\bar{U}_j$. Let $j\leq i$ be the least ordinal such that $(\bigcup\bigcup_{h\leq i}\bar{U}_h,(c_k^n,C_k^n)_{n\leq j,k<\kappa})$ is not an $a$-construction over $\bigcup\bigcup_{h\leq i}\bar{U}_h$. If $j<i$, then by the minimality of $i$, we know that $(\bigcup\bigcup_{h\leq j}\bar{U}_h,(c_k^n,C_k^n)_{n\leq j,k<\kappa})$ is an $a$-construction over $\bigcup\bigcup_{h\leq j}\bar{U}_h$, and by Lemma \ref{L.4.11} $(\bigcup\bigcup_{h\leq i}\bar{U}_h,(c_k^n,C_k^n)_{n\leq j,k<\kappa})$ is an $a$-construction over $\bigcup\bigcup_{h\leq i}\bar{U}_h$, a contradiction. Therefore $j=i$ and $(\bigcup\bigcup_{h< i}\bar{U}_h,(c_k^n,C_k^n)_{n<i,k<\kappa})$ is an $a$-construction over $\bigcup\bigcup_{h< i}\bar{U}_h$, so by Lemma \ref{L.4.11} $(\bigcup\bigcup_{h\leq i}\bar{U}_h,(c_k^n,C_k^n)_{n<i,k<\kappa})$ is an $a$-construction over $\bigcup\bigcup_{h\leq i}\bar{U}_h$. We conclude that $(\bigcup\bigcup_{h\leq i}\bar{U}_h,(c_k^n,C_k^n)_{n\leq i,k<\kappa})$ is an $a$-construction over $\bigcup\bigcup_{h\leq i}\bar{U}_h$, a contradiction.
\end{proof}
By Claim \ref{Claim 4.14.9} we know that $\bigcup\bigcup_{k\leq j}\bar{U}_k\rhd_FM_j$ holds for all $j<f(\alpha)^+$, and since $b_i\downarrow_F\bigcup\bigcup_{j\neq i}\bar{U}_j$ holds for all $i<f(\alpha)^+$, then $b_i\downarrow_FM_j$ holds for all $j,i<f(\alpha)^+$, $j<i$. In particular $b_i\downarrow_F\bigcup_{k\leq j}b_k$ holds for all $j,i<f(\alpha)^+$, $j<i$. We conclude that $b_i\downarrow_F\bigcup_{j< i}b_j$ holds for all $i<f(\alpha)^+$.
Since $tp(b_i,F)=tp(\Pi(a),F)$ and $\Pi(a)\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}F$, we get that $b_i\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}F$ and by transitivity we conclude that $b_i\downarrow_{\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)}\bigcup_{j< i}b_j$. So $(b_i)_{i<f(\alpha)^+}$ is an independent sequence over $\Pi(\mathcal{B}_\xi\cup\mathcal{C}_\eta)$. Since for $i\neq j$ we know that $tp(b_i,F)=tp(b_j,F)$, the types over $F$ are stationary, and $b_i\downarrow_F\bigcup_{j< i}b_j$, then we conclude that $(b_i)_{i<f(\alpha)^+}$ is an indiscernible sequence over $F$.
For every $i<f(\alpha)^+$ let $c_i$ be $\Pi^{-1}(b_i)$. Since $\Pi$ is an isomorphism, $(c_i)_{i<f(\alpha)^+}$ is an indiscernible sequence over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$ and an independent sequence over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$; notice that $c_0=a$, so $c_0\in I_{\xi\eta}$.\\
Denote by $J$ the sequence $(c_i)_{i<f(\alpha)^+}$. Since $T$ is superstable, there is $J'\subseteq J$ of power $f(\alpha)^+$ such that $c_0\notin J'$ and $J'\downarrow_{J\restriction\omega\cup\mathcal{B}_\xi\cup\mathcal{C}_\eta}I_{\xi\eta}$.
Since $J$ is an independent sequence over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$, then $J'\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}J\restriction\omega\cup I_{\xi\eta}.$
Let us denote by $Q$ the set $\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup (I_{\xi\eta}\restriction\omega)\backslash \{c_0\}$,
so $J'\downarrow_{Q}I_{\xi\eta}$. Since $Av(I_{\xi\eta},Q)$ is stationary and $I_{\xi\eta}$ is independent over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$, we conclude that $I'=\{c_0\}\cup(I_{\xi\eta}\backslash (I_{\xi\eta}\restriction\omega))$ is indiscernible over $J'\cup Q$. In particular, $I'$ is indiscernible over $\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'$. On the other hand, $J'\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}J\restriction\omega\cup I_{\xi\eta}$ implies that $J'\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}I_{\xi\eta}$, and since $I_{\xi\eta}$ is independent over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$, we conclude that $I_{\xi\eta}$ is independent over $\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'$. In particular $I'$ is independent over $\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'$.
We will prove by induction that $J'\cup I'$ is indiscernible over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$. Let us denote by $\{d_i\mid i<f(\alpha)\}$ the sequence $I'$.
Since $c_0\in I'\cap J$, $c_0\models Av(J',\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J')$, and $I'$ is indiscernible over $J'\cup Q$, then for every $i<f(\alpha)$,
\begin{equation*}
d_i\models Av(J',\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J').
\end{equation*}
Suppose $j$ is such that for all $n<j$ the sequence $J'\cup \{d_i\mid i\leq n\}$ is indiscernible over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$, then $J'\cup \{d_i\mid i<j\}$ is indiscernible over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$, therefore $Av(J'\cup \{d_i\mid i<j\},\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'\cup \{d_i\mid i<j\})=Av(J',\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'\cup \{d_i\mid i<j\})$ and it does not fork over $\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'$. On the other hand we know that $Av(J',\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J')$ is stationary,
$d_j\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'}\{d_i\mid i<j\}$ and $d_j\models Av(J',\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J')$, we conclude that $tp(d_j,\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'\cup \{d_i\mid i<j\})=Av(J'\cup \{d_i\mid i<j\},\mathcal{B}_\xi\cup\mathcal{C}_\eta\cup J'\cup \{d_i\mid i<j\})$. Therefore $J'\cup \{d_i\mid i\leq j\}$ is indiscernible over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$. We conclude that $J'\cup I'$ is indiscernible. So $J'$ is equivalent to $I_{\xi\eta}$ and for all $d\in J'$, $d\models Av(I_{\xi\eta}\restriction \omega,I_{\xi\eta}\restriction \omega\cup \mathcal{B}_\xi\cup\mathcal{C}_\eta)$. Since $J'$ is independent over $\mathcal{B}_\xi\cup\mathcal{C}_\eta$ and $J'\downarrow_{\mathcal{B}_\xi\cup\mathcal{C}_\eta}I_{\xi\eta}$, we conclude that $J'$ is independent over $I_{\xi\eta}\restriction \omega\cup \mathcal{B}_\xi\cup\mathcal{C}_\eta$, so $dim(p_{\xi\eta},\mathcal{A}^f)\ge f(\alpha)^+$, but this contradicts Lemma \ref{L.4.7}.
\end{proof}
\begin{corollary}\label{C.4.15}
If $\kappa$ is inaccessible and $T$ is a theory with S-DOP, then $E_{\lambda\text{-club}}^\kappa\leq_c\cong_T$.
\end{corollary}
\begin{proof}
Let $f$ and $g$ be elements of $\kappa^\kappa$.
First we will construct a function $F:\kappa^\kappa\rightarrow\kappa^\kappa$ such that $f\ E^\kappa_{\lambda\text{-club}}\ g$ if and only if $\mathcal{A}^{F(f)}$ and $\mathcal{A}^{F(g)}$ are isomorphic.\\ \\
For every cardinal $\alpha<\kappa$, define $S_\alpha=\{\beta\in Card\cap\kappa|\lambda,\alpha^{+++},\alpha^\lambda<\beta\}$.
For every $\beta<\kappa$, let $\mathcal{G}_\beta$ be a bijection from $\kappa$ onto $S_\beta$. For every $f\in \kappa^\kappa$ define $F(f)$ by $F(f)(\beta)=\mathcal{G}_\beta(f(\beta))$, for every $\beta<\kappa$. Clearly $F$ is continuous, and $f\ E^\kappa_{\lambda\text{-club}}\ g$ if and only if $F(f)\ E^\kappa_{\lambda\text{-club}}\ F(g)$, i.e. if and only if $\mathcal{A}^{F(f)}$ and $\mathcal{A}^{F(g)}$ are isomorphic.\\ \\
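The continuity of $F$ can be spelled out in one line (a routine verification, not written out in the text): the value $F(f)(\beta)$ depends only on $f(\beta)$, so agreement on initial segments is preserved.

```latex
% Continuity of F: the value at \beta depends only on f(\beta).
f\restriction\alpha = g\restriction\alpha
\;\Longrightarrow\;
F(f)(\beta)=\mathcal{G}_\beta(f(\beta))=\mathcal{G}_\beta(g(\beta))=F(g)(\beta)
\ \text{ for all }\beta<\alpha
\;\Longrightarrow\;
F(f)\restriction\alpha = F(g)\restriction\alpha .
```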
Finally we need to find $\mathcal{G}:\{F(f)|f\in \kappa^\kappa\}\rightarrow\kappa^\kappa$ such that $\mathcal{A}_{\mathcal{G}(F(f))}\cong \mathcal{A}^{F(f)}$ and $f \mapsto \mathcal{G}(F(f))$ is continuous.\\
Notice that for every $f,g\in \kappa^\kappa$ and $\alpha<\kappa$, by Definition \ref{D.2.6} and the definition of $J_f^\alpha$ in Remark \ref{R.2.7a}, it holds: $$F(f)\restriction\alpha=F(g)\restriction\alpha\Leftrightarrow J_{F(f)}^\alpha=J_{F(g)}^\alpha.$$
By Definition \ref{D.4.5}, for every $f,g\in \kappa^\kappa$ and $\alpha<\kappa$ it holds:
$$J_{F(f)}^\alpha=J_{F(g)}^\alpha\Leftrightarrow \Gamma_{F(f)}^\alpha=\Gamma_{F(g)}^\alpha.$$
By the definition of $\mathcal{A}_f^\alpha$ in Theorem \ref{T.4.14}, for every $f,g\in \kappa^\kappa$ and every $\alpha<\kappa$ that is an $F(f)$-good and $F(g)$-good cardinal, it holds:
$$\Gamma_{F(f)}^\alpha=\Gamma_{F(g)}^\alpha\Leftrightarrow \mathcal{A}_{F(f)}^\alpha\cong\mathcal{A}_{F(g)}^\alpha.$$
In general, since there are club many $F(f)$-good and $F(g)$-good cardinals, by the definition of $\mathcal{A}_f^\alpha$ in Theorem \ref{T.4.14} we can construct the models $\mathcal{A}^f$ such that for every $f,g\in \kappa^\kappa$ and $\alpha<\kappa$, it holds: $$J_{F(f)}^\alpha=J_{F(g)}^\alpha\Leftrightarrow \mathcal{A}_{F(f)}^\alpha=\mathcal{A}_{F(g)}^\alpha.$$
So we can construct the models $\mathcal{A}^f$ such that for every $f,g\in \kappa^\kappa$ and $\alpha<\kappa$, it holds: $$F(f)\restriction\alpha=F(g)\restriction\alpha\Leftrightarrow \mathcal{A}_{F(f)}^\alpha=\mathcal{A}_{F(g)}^\alpha.$$
For every $f\in \kappa^\kappa$ define $C_f\subseteq Card\cap\kappa$ such that for every $\alpha\in C_f$ and every ordinal $\beta<\alpha$ it holds that $|\mathcal{A}^\beta_{F(f)}|<|\mathcal{A}^\alpha_{F(f)}|$. For every $f\in \kappa^\kappa$ and $\alpha\in C_f$ choose a bijection $E^\alpha_f:dom(\mathcal{A}^\alpha_{F(f)})\rightarrow|\mathcal{A}^\alpha_{F(f)}|$ such that for all $\beta,\alpha\in C_f$ with $\beta<\alpha$ it holds that $E_f^\beta\subseteq E_f^\alpha$. Therefore $E_f=\bigcup_{\alpha\in C_f}E_f^\alpha$ is a bijection $E_f:dom(\mathcal{A}^{F(f)})\rightarrow\kappa$, and for every $f,g\in \kappa^\kappa$ and $\alpha<\kappa$ it holds:
If $F(f)\restriction\alpha=F(g)\restriction\alpha$, then $E_f\restriction dom(\mathcal{A}_{F(f)}^\alpha)=E_g\restriction dom(\mathcal{A}_{F(g)}^\alpha)$.\\
Let $\pi$ be the bijection in Definition \ref{D.1.6}, define the function $\mathcal{G}$ by: $$\mathcal{G}(F(f))(\alpha)=\begin{cases} 1 &\mbox{if } \alpha=\pi(m,a_1,a_2,\ldots,a_n) \text{ and } \mathcal{A}^{F(f)}\models P_m(E_f^{-1}(a_1),E_f^{-1}(a_2),\ldots,E_f^{-1}(a_n))\\
0 & \mbox{otherwise.} \end{cases}$$
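The coding $\mathcal{G}$ admits a finite analogue that may make the definition concrete. In the sketch below (the names `pair` and `encode` are illustrative, and Cantor pairing stands in for the bijection $\pi$ of Definition \ref{D.1.6}), a structure with a single binary relation is turned into a $0/1$ sequence whose bit at position $\pi(a_1,a_2)$ records whether the relation holds.

```python
# Finite analogue of the coding map G (illustrative only): a structure
# with a single binary relation is encoded as a 0/1 sequence indexed by
# a pairing function.  Cantor pairing stands in for the bijection pi.

def pair(a, b):
    # Cantor pairing: a bijection from N x N onto N.
    return (a + b) * (a + b + 1) // 2 + b

def encode(domain, relation, length):
    # The enumeration E_f is played by the identity on `domain`;
    # bit pair(a1, a2) is 1 iff (a1, a2) belongs to the relation.
    code = [0] * length
    for a1 in domain:
        for a2 in domain:
            if (a1, a2) in relation:
                code[pair(a1, a2)] = 1
    return code
```

As in the continuity argument for $\mathcal{G}$, the bit at position $\pi(a_1,a_2)$ depends only on the part of the structure containing $a_1$ and $a_2$, so structures that agree on an initial segment of the enumeration produce codes that agree on the corresponding positions.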
To show that $\mathcal{G}$ is continuous, let $[\eta\restriction\alpha]$ be a basic open set and $\xi\in \mathcal{G}^{-1}[[\eta\restriction\alpha]]$. So, there is $\beta\in C_\xi$ such that for all $\gamma<\alpha$, if $\gamma=\pi(m,a_1,a_2,\ldots,a_n)$, then $E^{-1}_\xi(a_i)\in dom(\mathcal{A}_\xi^\beta)$ holds for all $i\leq n$. Since for all $\zeta\in [\xi\restriction\beta]$ it holds that $\mathcal{A}_\xi^\beta=\mathcal{A}_\zeta^\beta$, then for every $\gamma<\alpha$ that satisfies $\gamma=\pi(m,a_1,a_2,\ldots,a_n)$, it holds that: $$\mathcal{A}^{\xi}\models P_m(E_\xi^{-1}(a_1),E_\xi^{-1}(a_2),\ldots,E_\xi^{-1}(a_n))\Leftrightarrow\mathcal{A}^{\zeta}\models P_m(E_\zeta^{-1}(a_1),E_\zeta^{-1}(a_2),\ldots,E_\zeta^{-1}(a_n)).$$
We conclude that $\mathcal{G}(\zeta)\in [\eta\restriction\alpha]$ for every $\zeta\in [\xi\restriction\beta]$, and hence $\mathcal{G}$ is continuous.
\end{proof}
\begin{corollary}\label{C.4.16}
If $\kappa$ is inaccessible, $T_1$ is a classifiable theory, and $T_2$ is a superstable theory with S-DOP, then $\cong_{T_1}\leq_c\cong_{T_2}$.
\end{corollary}
\begin{proof}
It follows from Lemma \ref{L.1.9} and Corollary \ref{C.4.15}.
\end{proof}
The last corollary is related to $\Sigma_1^1$-complete relations.
\begin{definition}\label{D.4.17}
Suppose $E$ is an equivalence relation on $\kappa^\kappa$. We say that $E$ is $\Sigma_1^1$
if $E$ is the projection of a closed set in $\kappa^\kappa\times \kappa^\kappa\times \kappa^\kappa$, and that it
is $\Sigma_1^1$-complete if every $\Sigma_1^1$ equivalence relation is
Borel reducible to $E$.
\end{definition}
The following theorem is proved in \cite{HK} (Theorem 7).
\begin{theorem}\label{T.4.18}
Suppose $V=L$. Then $E^\kappa_{\mu\text{-club}}$
is $\Sigma_1^1$-complete for every regular $\mu<\kappa$.
\end{theorem}
\begin{corollary}\label{C.4.19}
Suppose $V=L$. If $\kappa$ is inaccessible and $T$ is a superstable theory with S-DOP, then $\cong_{T}$ is $\Sigma_1^1$-complete.
\end{corollary}
\begin{proof}
It follows from Corollary \ref{C.4.15} and Theorem \ref{T.4.18}.
\end{proof}
\section{Introduction\label{se:intro}}
Searching for new physics at the ``Terascale'' -- energy scales of
$\sim$ 1 TeV and beyond -- is the highest priority for particle
physics. A new, high energy, high statistics neutrino scattering
experiment running at the Tevatron at Fermi National Accelerator
Laboratory can look beyond the Standard Model at Terascale energies by
making precision electroweak measurements, direct searches for novel
phenomena, and precision QCD studies. In this article we limit the
discussion to precision electroweak measurements; QCD studies and
their impact on the precision measurements are explored in refs.
\cite{EOI, QCDPRD}. The ideas developed in this article were proposed
within the context of an expression of interest for a new neutrino
experiment, NuSOnG (Neutrino Scattering On Glass) \cite{EOI}.
\begin{figure}
{\includegraphics[width=3.in]{FeynmanESIMD.pdf}}
\caption{\label{fig:FeynmanESIMD} Left: ``elastic scattering'' (ES).
Right: ``Inverse Muon Decay'' (IMD).
}
\end{figure}
A unique and important measurement of the NuSOnG physics program is
the ratio of neutral current (NC) and charged current (CC)
neutrino-electron scattering, which probes new physics. The leading order
Feynman diagrams for these processes are shown in
Fig. \ref{fig:FeynmanESIMD}. The NC process, $\nu_{\mu}+e^-
\rightarrow \nu_{\mu}+ e^-$, called ``elastic scattering'' or ES,
provides the sensitivity to the Terascale physics. This process can
explore new physics signatures in the neutrino sector which are not
open to other, presently planned experiments. The CC process, called
``inverse muon decay'' or IMD, $\nu_{\mu}+ e^- \rightarrow \nu_e +
\mu^-$, is well understood in the Standard Model due to precision
measurement of muon decay \cite{mudk}. Since the data samples are
collected with the same beam, target and detector at the same time,
the ratio of ES to IMD events cancels many systematic errors while
maintaining a strong sensitivity to the physics of interest. Our
measurement goal for the ES to IMD ratio is a 0.7\% error, adding
systematic and statistical errors in quadrature. The high sensitivity
which we propose arises from the combined high energy and high
intensity of the NuSOnG design, leading to event samples more than an
order of magnitude higher than past experiments.
Normalizing the ES to the IMD events represents an important step
forward from past ES measurements, which have normalized neutrino-mode
ES measurements to the antineutrino mode, $\bar \nu_{\mu}+e^- \rightarrow
\bar \nu_{\mu}+ e^-$\cite{ahrens,CHARMIIsin2thw}. The improvement is
in both the experimental and the theoretical aspects of the
measurement. First, the flux contributing to IMD and $\nu$ ES is
identical, whereas neutrino and antineutrino fluxes are never
identical and so require corrections. Second, the ratio of $\nu$ ES to
$\bar \nu$ ES cancels sensitivity to Beyond Standard Model (BSM) physics
effects from the NC to CC coupling ratio, $\rho$, which are among the
primary physics goals of the NuSOnG measurement. In contrast, there
is no such cancellation in the ES to IMD ratio.
The design of this experiment, described in
Sec.~\ref{concept}, is driven both by requiring sufficient statistics
to make precision neutrino-electron scattering measurements and by the
need for a neutrino flux which does not extend below the IMD
threshold. The threshold for IMD events is
\begin{equation}
E_\nu \ge E_\mu \ge {{m_\mu^2}\over{2 m_e}} =10.9~{\rm GeV},
\end{equation}
where we have dropped the small $m_e^2$ term for simplicity.
The functional form above threshold, shown in
Fig.~\ref{fig:IMDthresh}, is given by $(1-m_\mu^2/E_{cm}^2)^2$, where
$E_{cm}$ is the center of mass energy. Thus a high energy neutrino
beam is required to obtain a high statistics sample of these events.
The flux design should provide a lower limit on the beam energy of
about 30 GeV, still well above the IMD threshold.
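The threshold energy and the suppression factor above are easy to evaluate. The following sketch (illustrative only, not experiment code; PDG-style masses are assumed inputs) checks both numbers:

```python
# Illustrative sketch: IMD kinematic threshold and the threshold
# suppression factor (1 - m_mu^2/E_cm^2)^2 quoted above.
# Masses in GeV (assumed PDG values).
m_mu = 0.1056584
m_e = 0.000511

# E_nu >= m_mu^2 / (2 m_e), dropping the small m_e^2 term as in the text.
E_thresh = m_mu**2 / (2.0 * m_e)

def imd_threshold_factor(E_nu):
    """(1 - m_mu^2/E_cm^2)^2 for a neutrino of energy E_nu (GeV) on an
    electron at rest, where E_cm^2 = m_e^2 + 2 m_e * E_nu."""
    E_cm2 = m_e**2 + 2.0 * m_e * E_nu
    if E_cm2 <= m_mu**2:
        return 0.0
    return (1.0 - m_mu**2 / E_cm2)**2

print(E_thresh)                     # ~10.9 GeV
print(imd_threshold_factor(100.0))  # ~0.79
print(imd_threshold_factor(30.0))   # ~0.40
```

The factor is only about 0.40 at 30 GeV and 0.79 at 100 GeV, which illustrates why a flux concentrated well above threshold is essential for a large IMD sample.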
\begin{figure}
{\includegraphics[width=3.in, bb=100 300 560 570]{IMD_threshold_factor.pdf}}
\caption{\label{fig:IMDthresh} Threshold factor for the IMD cross section,
as a function of neutrino energy.}
\end{figure}
Sec.~\ref{EWreview} describes the Standard Model Physics of
neutrino electroweak scattering, for both electron and quark targets.
In this section, the value of the normalization of the ES to IMD
events is further explored. The very high statistics will also permit
an electroweak measurement using the deep inelastic scattering (DIS)
data sample from NuSOnG, via the ``Paschos-Wolfenstein method'' (PW)
\cite{PW}. The best electroweak measurement using DIS events to date
comes from the NuTeV experiment, which has observed an anomaly. The
status of this result is reviewed below. Making conservative
assumptions concerning systematic improvements over NuTeV, our
measurement goal using this technique is a 0.4\% error on
$\sin^2\theta_W$, adding statistical and systematic errors in
quadrature.
In Sec.~\ref{Terascale}, we discuss NuSOnG's potential to discover or
constrain new physics through indirect probes, by making precision
measurements of SM processes to look for deviations from SM
predictions. We first frame the issue by considering in turn several
model-independent parameterizations of possible new physics and asking
what constraints will be imposed on new physics in the event NuSOnG
agrees with the SM. (1) Oblique correction parameters describe the
effects of heavy new states in vector boson loops. (2) New states may
induce higher-dimensional effective operators involving neutrinos.
Finally, (3) new states may modify the couplings of the gauge bosons
to neutrinos and leptons, including possibly violating lepton
universality. In each case we consider the ability of NuSOnG to
detect or constrain these types of deviations from the SM.
In Sec.~\ref{Models}, we examine specific models for new physics. We
begin by presenting the sensitivity to a set of new physics models.
In particular, we consider
\begin{itemize}
\item typical $Z^\prime$ models,
\item non-degenerate leptoquark models,
\item R-parity violating SUSY models,
\item extended Higgs models.
\end{itemize}
The models were selected because they are often used as benchmarks in
the literature. While this list is not exhaustive, it serves to
illustrate the possibilities. For each case, we consider how NuSOnG
compares to other measurements and note the unique contributions. We
end this section by approaching the question from the opposite view,
asking: how could the results from NuSOnG clarify the underlying
physics model, should evidence of new physics emerge from LHC in the
near future?
Two further studies which can be performed by NuSOnG are QCD
measurements and direct searches. The very large ($\sim 600$ million
event) DIS sample will allow the opportunity for precision studies of
QCD. There are many interesting measurements which can be made in
their own right and which are important to NuSOnG's Terascale physics
program. The very high flux will also permit direct searches for new
physics. Those which complement the physics discussed in this paper
include:
\begin{itemize}
\item non-unitarity in the light neutrino mixing matrix;
\item wrong-sign inverse muon decay (WSIMD),
$\bar{\nu}_\mu + e^- \rightarrow \mu^- + \bar{\nu}_e$;
\item decays of
neutrissimos, {\it i.e.,} moderately heavy neutral heavy leptons, with
masses above 45 GeV.
\end{itemize}
For more information on these studies, see refs.~\cite{EOI, QCDPRD}.
\section{Conceptual Design for the Experiment \label{concept}}
In order to discuss the physics case for a new high energy, high
statistics experiment, one must specify certain design parameters for
the beam and detector. The beam and detector should marry the best
aspects of NuTeV \cite{NuTeVbeam}, the highest energy neutrino
experiment, and Charm II \cite{CharmIIdet}, the experiment
with the largest ES sample to date. The plan presented here is not
optimized, but provides a basis for discussion. The final design of the
NuSOnG detector will be based on these concepts, and is still
under development.
In this section, we present, but do not justify, the design choices.
Later in this article, we discuss the reasoning for the choices,
particularly in Secs.~\ref{IMDvsBarNu} and \ref{justify}.
\begin{figure}
{\includegraphics[width=3.5in, bb=0 400 560 670]{fluxplots.pdf}}
\caption{The assumed energy-weighted flux, from the NuTeV Experiment
\cite{NuTeVbeam}, in neutrino mode (left) and antineutrino mode
(right). Black: muon neutrino, red: muon antineutrino,
blue: electron neutrino and antineutrino flux.}
\label{fig:beam}
\end{figure}
We will assume a beam design based on the one used by the NuTeV experiment
\cite{NuTeVbeam}, which is the most recent high energy neutrino
experiment. This experiment used 800 GeV protons on target. The beam
flux, shown in Fig.~\ref{fig:beam}, is ideal for the physics case
for several reasons. There is essentially no flux below 30 GeV,
hence all neutrinos are well above the IMD threshold. It is sign-selected:
in neutrino mode, 98.2\% of neutrino interactions were due to $\pi^+$
and $K^+$ secondaries, while in antineutrino mode 97.3\% came from
$\pi^-$ and $K^-$. The ``wrong sign'' content was very low, with a
0.03\% antineutrino contamination in neutrino mode and 0.4\% neutrino
contamination in antineutrino mode. The electron-flavor content was
1.8\% in neutrino mode and 2.3\% in antineutrino mode. The major
source of these neutrinos is $K^{\pm}_{e3}$ decay, representing 1.7\%
of the total flux in neutrino mode, and 1.6\% in antineutrino mode.
Redesign of the beamline for NuSOnG is expected to lead to modest
changes in these ratios. For example, if the decay pipe length is
1.5 km rather than 440 m, as in NuTeV, the $\pi/K$ ratio increases by 20\%
and the fractional $\nu_e$ content is reduced.
With respect to Tevatron running conditions, we will assume that
twenty times more protons on target (POT) per year can be produced for
NuSOnG compared to NuTeV. This is achieved through three times higher
intensity per pulse (or ``ping'') and nearly an order of magnitude
more pulses per spill. Our studies assume 4 $\times$ 10$^{19}$
POT/year, with 5 years of running. Preliminary studies supporting
these goals are provided in ref. \cite{pot}.
The event rates quoted below are consistent with 1.5$\times10^{20}$
protons on target in neutrino running and $0.5\times10^{20}$ protons
on target in antineutrino running. The choice to emphasize neutrino
running is driven by obtaining high statistics ES, which has a higher
cross section for neutrino scatters, and to use the IMD for
normalization -- this process only occurs in neutrino scattering. The
Standard Model forbids an IMD signal in antineutrino mode. However,
some antineutrino running is required for the physics described in the
following sections, especially the PW electroweak measurement.
The beam from such a design is highly forward directed. NuTeV was
designed so that 90\% of the neutrinos from pion decay were contained
within the detector face, where the detector was located at 1 km. For
NuSOnG, which will use a 5 m detector, $\sim$90\% of the neutrinos from pion
decay are contained at $\sim$3 km.
The optimal detector is a fine-grained calorimeter for electromagnetic
shower reconstruction followed by a toroid muon spectrometer. This
allows excellent reconstruction of the energy of the outgoing lepton
from charged current events. We employ a Charm II style design
\cite{CharmIIdet}, which uses a glass target calorimeter followed by a
toroid. We assume one inch glass panels with active detectors
interspersed for energy and position measurement. Glass provides an
optimal choice of density, low enough to allow electromagnetic showers
to be well sampled, but high enough that the detector length does not
compromise acceptance for large angle muons by the toroid. Approximately
10\% of the glass will be doped with scintillator to allow
for background studies, as discussed in Sec.~\ref{justify}.
The design introduces four identical sub-detectors of this glass-calorimeter
and toroid design,
each a total of 29 m in length (including the toroid). Between
each sub-detector is a 15 m decay
region for direct searches for new physics.
The total fiducial volume is 3 ktons.
The NuSOnG run plan, for reasons discussed in Sec.~\ref{IMDsub} and
\ref{IMDvsBarNu}, concentrates on running in neutrino mode. This
design will yield the rates shown in Table~\ref{Tab:rates}. These
rates, before cuts, are assumed throughout the rest of the discussion.
We can compare this sample to past experiments. The present highest
statistics sample for $\nu_\mu$ and $\bar \nu_\mu$ ES is from
CHARM~II, with 2677$\pm$82 events in neutrino mode and 2752$\pm$88
events in antineutrino mode \cite{CHARMIIsin2thw}. Thus the proposed
experiment will have a factor of 30 (2.5) more $\nu$($\bar
\nu$)-electron events. As an example, after cuts, the first method of
analysis described in Sec.~\ref{justify} retains 63\% of the $\nu$
sample. For deep inelastic scattering, 600M and 190M events are
expected in neutrino and antineutrino modes, respectively. After
minimal cuts to isolate DIS events \cite{SamThesis},
NuTeV had 1.62M DIS (NC+CC) events in neutrino mode and 0.35M in
antineutrino mode; thus NuSOnG has orders of magnitude more events.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{c|c}
600\/M & $\nu_\mu$ CC Deep Inelastic Scattering\\
190\/M & $\nu_\mu$ NC Deep Inelastic Scattering \\
75\/k & $\nu_\mu$ electron NC elastic scatters (ES) \\
700\/k &$\nu_\mu$ electron CC quasi-elastic scatters (IMD) \\
33\/M & $\bar \nu_\mu$ CC Deep Inelastic Scattering \\
12\/M & $\bar \nu_\mu$ NC Deep Inelastic Scattering \\
7\/k & $\bar \nu_\mu$ electron NC elastic~scatters (ES)\\
0\/k & $\bar \nu_\mu$ electron CC quasi-elastic scatters (WSIMD)\\
\end{tabular}
\end{ruledtabular}
\caption{Rates assumed for this paper. NC indicates ``neutral current''
and CC indicates ``charged current.'' \label{Tab:rates}
}
\end{table}
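The quoted gains over CHARM~II follow directly from the before-cuts rates in Table~\ref{Tab:rates}; as a quick arithmetic cross-check (illustrative only):

```python
# Back-of-the-envelope check of the quoted statistics gains:
# NuSOnG ES rates (before cuts) versus the CHARM II samples.
nusong_es_nu, nusong_es_nubar = 75_000, 7_000
charm2_es_nu, charm2_es_nubar = 2677, 2752   # CHARM II events, nu / nubar mode

gain_nu = nusong_es_nu / charm2_es_nu           # ~28, the "factor of 30" in the text
gain_nubar = nusong_es_nubar / charm2_es_nubar  # ~2.5
print(round(gain_nu, 1), round(gain_nubar, 1))
```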
The detector will incorporate several specialized regions. A region
of fine vertex-tracking facilitates measurements of the strange
sea relevant for the electroweak analysis, as described in
ref.~\cite{QCDPRD}. Two possibilities are under consideration: an
emulsion detector or a silicon detector of the style of NOMAD-STAR
\cite{NOMAD-STAR}. Both are compact and easily accommodated. For
further QCD studies, it will also be useful to intersperse alternative
target materials: C, Al, Fe, and Pb \cite{QCDPRD}.
\section{Electroweak Measurements in Neutrino Scattering\label{EWreview}}
Neutrino neutral current (NC) scattering is an ideal probe for new
physics. An experiment like NuSOnG is unique in its ability to test
the NC couplings by studying scattering of neutrinos from both
electrons and quarks. A deviation from the Standard Model predictions
in both the electron and quark measurements would present a compelling
case for new physics.
The exchange of the $Z$ boson between the neutrino $\nu$ and fermion $f$
leads to the effective interaction:
\begin{eqnarray}
\mathcal{L}
& = & -\sqrt{2}G_F
\Bigl[\, \bar{\nu}\gamma_\mu\bigl(g_V^\nu - g_A^\nu \gamma_5\bigr)\nu \,\Bigr]
\Bigl[\, \bar{f}\gamma^\mu\bigl(g_V^f - g_A^f \gamma_5\bigr)f \,\Bigr] \cr
& = & -\sqrt{2}G_F
\Bigl[\, g_L^\nu\,\bar{\nu}\gamma_\mu(1-\gamma_5)\nu
+ g_R^\nu\,\bar{\nu}\gamma_\mu(1+\gamma_5)\nu \,\Bigr] \cr
& & \qquad\qquad
\times
\Bigl[\, g_L^f \,\bar{f}\gamma^\mu(1-\gamma_5)f
+ g_R^f \,\bar{f}\gamma^\mu(1+\gamma_5)f \,\Bigr] \;, \cr
& &
\label{eq:geneffint}
\end{eqnarray}
where the Standard Model values of the couplings are:
\begin{eqnarray}
g_L^\nu & = & \sqrt{\rho}\left(+\frac{1}{2}\right) \;,\cr
g_R^\nu & = & 0\;, \cr
g_L^f & = & \sqrt{\rho}\left(I_3^f - Q^f\sin^2\theta_W \right) \;,\cr
g_R^f & = & \sqrt{\rho}\left(-Q^f\sin^2\theta_W\right) \;,
\end{eqnarray}
or equivalently,
\begin{eqnarray}
g_V^\nu \;=\; g_L^\nu + g_R^\nu & = & \sqrt{\rho}\left(+\frac{1}{2}\right)\;,\cr
g_A^\nu \;=\; g_L^\nu - g_R^\nu & = & \sqrt{\rho}\left(+\frac{1}{2}\right)\;,\cr
g_V^f \;=\; g_L^f + g_R^f & = & \sqrt{\rho}\left(I_3^f - 2Q^f\sin^2\theta_W\right) \;,\cr
g_A^f \;=\; g_L^f - g_R^f & = & \sqrt{\rho}\left(I_3^f\right) \;.
\end{eqnarray}
Here, $I_3^f$ and $Q^f$ are the weak isospin and electromagnetic
charge of fermion $f$, respectively. In these formulas, $\rho$ is the
relative coupling strength of the neutral to charged current
interactions ($\rho=1$ at tree level in the Standard Model). The weak
mixing parameter, $\sin^2 \theta_W$, is related (at tree level) to
$G_F$, $M_Z$ and $\alpha$ by
\begin{equation}
\sin^2 2 \theta_W=\frac{4 \pi \alpha }{ \sqrt{2} G_F M_Z^2} .
\end{equation}
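To make the tree-level relation concrete, it can be solved for $\sin^2\theta_W$. The sketch below is illustrative only: it uses the fine-structure constant at $q^2=0$ (an assumed input), so running of $\alpha$ and other radiative corrections are ignored and the answer differs from the corrected value of about 0.23:

```python
import math

# Solve the tree-level relation
#   sin^2(2 theta_W) = 4 pi alpha / (sqrt(2) G_F M_Z^2)
# for sin^2(theta_W). Inputs are assumed PDG-style values.
alpha = 1.0 / 137.035999   # fine-structure constant at q^2 = 0
G_F = 1.1663787e-5         # Fermi constant, GeV^-2
M_Z = 91.1876              # Z mass, GeV

s2_2t = 4.0 * math.pi * alpha / (math.sqrt(2.0) * G_F * M_Z**2)
# sin^2(2t) = 4 s^2 (1 - s^2)  =>  s^2 = (1 - sqrt(1 - sin^2(2t))) / 2
s2 = 0.5 * (1.0 - math.sqrt(1.0 - s2_2t))
print(s2)   # ~0.212 at tree level; corrections bring this to ~0.23
```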
\subsection{Neutrino Electron Elastic Scattering \label{nuesub}
}
The differential cross section for $\nu_\mu$ and $\bar \nu_\mu$
ES, defined using the coupling constants
described above, is:
\begin{eqnarray}
\frac{d\sigma}{dT} & = &
\frac{2G_F^2 m_e}{\pi}
\Biggl[
(g_L^\nu g_V^e \pm g_L^\nu g_A^e)^2
\Biggr.\cr
& & \qquad\qquad +
(g_L^\nu g_V^e \mp g_L^\nu g_A^e)^2
\left(1-\frac{T}{E_\nu}\right)^2 \cr
& & \qquad\qquad -
\Biggl.
\Bigl\{ (g_L^\nu g_V^e)^2 - (g_L^\nu g_A^e )^2 \Bigr\}
\frac{m_e T}{E_\nu^2}
\Biggr] \;.
\end{eqnarray}
The upper and lower signs correspond to the neutrino and
anti-neutrino cases, respectively. In this equation, $E_{\nu}$ is the
incident ${\nu}_{\mu}$ energy and $T$ is the electron recoil kinetic
energy.
More often in the literature, the cross section is defined in terms of
the parameters $(g_{V}^{\nu e},g_{A}^{\nu e})$, which are defined as
\begin{eqnarray}
g_V^{\nu e}
& \equiv & (2g_L^\nu g_V^e)
\;=\; \rho\left(-\frac{1}{2}+2\sin^2\theta_W\right) \;,\cr
g_A^{\nu e}
& \equiv & (2g_L^\nu g_A^e)
\;=\; \rho\left(-\frac{1}{2}\right) \;.
\end{eqnarray}
In terms of these parameters, we can write:
\begin{eqnarray}
\frac{d\sigma}{dT} & = &
\frac{G_F^2 m_e}{2\pi}
\Biggl[
(g_V^{\nu e} \pm g_A^{\nu e})^2
\Biggr. \cr
& & +
(g_V^{\nu e} \mp g_A^{\nu e})^2
\left(1-\frac{T}{E_\nu}\right)^2 \cr
& & -
\Biggl.
\Bigl\{ (g_V^{\nu e})^2 - (g_A^{\nu e} )^2 \Bigr\}
\frac{m_e T}{E_\nu^2}
\Biggr] \;.
\end{eqnarray}
When $m_e \ll E_\nu$, as is the case in NuSOnG, the third term in
these expressions can be neglected. If we introduce the variable
$y=T/E_\nu$, then
\begin{eqnarray}
\frac{d\sigma}{dy}
& = & \frac{G_{F}^{2}m_e E_\nu}{2\pi}
\left[
\left( g_{V}^{\nu e} \pm g_{A}^{\nu e} \right)^{2}
+ \left( g_{V}^{\nu e} \mp g_{A}^{\nu e} \right)^{2}
\left( 1 - y \right)^{2}
\right]\;.\cr
& &
\label{differentialcrosssection}
\end{eqnarray}
Integrating, we obtain the total cross sections
which are
\begin{eqnarray}
\sigma & = &
\frac{G_{F}^{2}m_e E_\nu}{2\pi}
\left[
\left( g_{V}^{\nu e} \pm g_{A}^{\nu e} \right)^{2}
+ \frac{1}{3}\left( g_{V}^{\nu e} \mp g_{A}^{\nu e} \right)^{2}
\right]\;.
\label{sigma_enu}
\end{eqnarray}
Note that
\begin{eqnarray}
\left( g_{V}^{\nu e} + g_{A}^{\nu e} \right)^{2}
& =& \rho^2\left(-1+2\sin^2\theta_W\right)^2
\; \cr
& = &\; \rho^2\left(1-4\sin^2\theta_W+4\sin^4\theta_W\right) \;,\cr
\left( g_{V}^{\nu e} - g_{A}^{\nu e} \right)^{2}
& = & \rho^2\left(2\sin^2\theta_W\right)^2
\; \cr
& =& \; \rho^2\left(4\sin^4\theta_W\right) \;.
\end{eqnarray}
Therefore,
\begin{eqnarray}
\sigma(\nu_{\mu}\, e) & = & \frac{G_F^2 m_e E_\nu}{2\pi}
\,\rho^2
\Biggl[ 1 - 4\sin^2\theta_W + \frac{16}{3}\sin^4\theta_W \Biggr] \;,\cr
\sigma({\bar \nu_{\mu}}\, e) & = & \frac{G_F^2 m_e E_\nu}{2\pi}
\,\frac{\rho^2}{3}
\Biggl[ 1 - 4\sin^2\theta_W + 16\sin^4\theta_W \Biggr] \;. \cr
& &
\end{eqnarray}
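As a cross-check of the $1/3$ factor and of the overall scale, the differential cross section of Eq.~(\ref{differentialcrosssection}) can be integrated numerically and compared with the closed form of Eq.~(\ref{sigma_enu}). This is a sketch under assumed inputs ($\rho=1$, $\sin^2\theta_W=0.2227$, $E_\nu=100$~GeV), not analysis code:

```python
import math

# Sketch: numerically integrate d(sigma)/dy for nu_mu e -> nu_mu e and
# compare with the closed form. All inputs are assumed values.
G_F = 1.1663787e-5      # GeV^-2
m_e = 0.000511          # GeV
GEV2_TO_CM2 = 3.894e-28 # 1 GeV^-2 = 0.3894 mb = 3.894e-28 cm^2

s2 = 0.2227             # sin^2(theta_W), assumed
gV = -0.5 + 2.0 * s2    # g_V^{nu e} with rho = 1
gA = -0.5               # g_A^{nu e} with rho = 1
E_nu = 100.0            # GeV

pref = G_F**2 * m_e * E_nu / (2.0 * math.pi)

# Closed form (upper signs: neutrino case).
sigma_closed = pref * ((gV + gA)**2 + (gV - gA)**2 / 3.0)

# Midpoint-rule integration of d(sigma)/dy over y in [0, 1].
n = 100_000
sigma_num = sum(
    pref * ((gV + gA)**2 + (gV - gA)**2 * (1.0 - (i + 0.5) / n)**2)
    for i in range(n)
) / n

assert abs(sigma_num - sigma_closed) / sigma_closed < 1e-6
print(sigma_closed * GEV2_TO_CM2)   # ~1.6e-40 cm^2 at 100 GeV
```

The linear growth of the cross section with $E_\nu$ visible in the prefactor is the reason the high beam energy drives the ES statistics.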
The ratio of the integrated cross sections for neutrino to antineutrino
electron ES is
\begin{equation}
R_{\nu/\bar \nu}
\;=\; \frac{\sigma(\nu_{\mu}\, e)}{\sigma({\bar \nu_{\mu}}\, e)}
\;=\; 3\;
\frac{1-4\sin^2\theta_W+{{16}\over{3}}\sin^4\theta_W}
{1-4\sin^2\theta_W+16\sin^4\theta_W} \;. \label{eq:nuerat}
\end{equation}
Fig. \ref{pastnu}(top) shows the results for $\sin^2 \theta_W$
from many past experiments which have used this ``$\nu/\bar \nu$ ES
ratio.''
\begin{figure}
\vspace{-0.75in}
\scalebox{0.45}{\includegraphics{pastresults.pdf}}
\vspace{-0.75in}
\caption{\label{pastnu} Measurements of $\sin^2 \theta_W$ from past
experiments. Top: neutrino-electron elastic scattering experiments.
Bottom: neutrino DIS experiments. All DIS results are adjusted to
the same charm mass (relevant for experiments not using the PW method).
The Standard Model value, indicated by the line, is $0.2227$ \cite{PDG}.}
\end{figure}
In the ratio $R_{\nu/\bar \nu}$, the dependence on $\rho$ cancels,
so the ratio directly extracts $\sin^2\theta_W$. The relationship
between the error on the ratio and the error on $\sin^2\theta_W$,
which for convenience we abbreviate as $z$, is:
\begin{eqnarray}
\delta z & = & \left( \frac{32z-12}{16z^{2}-4z+1} \right. + \nonumber \\
& ~~& \left. \frac{448z^{2}-144z-512z^{3}+12}{48z^{2}-8z-128z^{3}+256z^{4}+1}\right)^{-1}\delta R_{\nu/\bar \nu} \nonumber\\
& = & -0.103\;\delta R_{\nu/\bar \nu};\\
\delta z/z & = & -0.575\;\delta R_{\nu/\bar \nu}/R_{\nu/\bar \nu},
\label{rationunubarerr}\end{eqnarray}
for $z=0.2227$ (or $R_{\nu/\bar \nu}=1.242$).
Roughly, the fractional error on $\sin^2\theta_W$ is 60\% of the
fractional error on $R_{\nu/\bar \nu}$.
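These numbers are straightforward to reproduce. A minimal sketch (illustrative; it uses a finite-difference slope rather than the analytic expression above):

```python
# Check of Eq. (nuerat) and the quoted error propagation,
# abbreviating z = sin^2(theta_W).
def R(z):
    """nu/nubar ES cross-section ratio of Eq. (nuerat)."""
    return 3.0 * (1.0 - 4.0 * z + (16.0 / 3.0) * z * z) \
               / (1.0 - 4.0 * z + 16.0 * z * z)

z = 0.2227
print(R(z))              # ~1.242, as quoted

# Invert the slope numerically: delta z = (dR/dz)^-1 * delta R.
h = 1e-6
dR_dz = (R(z + h) - R(z - h)) / (2.0 * h)
dz_dR = 1.0 / dR_dz
print(dz_dR)             # ~ -0.103
print(dz_dR * R(z) / z)  # ~ -0.575, the fractional-error coefficient
```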
\subsection{A New Technique: Normalization Through IMD \label{IMDsub}
}
\begin{figure*}
\scalebox{0.425}{\includegraphics{y_IMD_100_200_GeV.pdf}}~\scalebox{0.425}{\includegraphics{thmu_imd.pdf}}
\vspace{-2.5in}
\caption{\label{fig:ycut} Kinematic distributions for IMD events from incident neutrino energy between 100 and 200 GeV. Left: $y$ distribution; right:
$\theta_\mu$ distribution. Black: distribution of events before cuts; Red:
distribution after cuts for analysis method 1 (see Sec.~\ref{justify}).}
\end{figure*}
An experiment such as
NuSOnG can make independent measurements of the electroweak
parameters for both $\nu_\mu$ and $\bar \nu_\mu$-electron scattering.
We can achieve this via ratios or by direct extraction of the cross
section. In the case of $\nu_\mu$-electron scattering, we
will use the ratio of the number of events in
neutrino-electron elastic scattering to inverse muon decay:
\begin{equation}
{{N(\nu_\mu e^- \rightarrow \nu_\mu e^-)}
\over{N(\nu_\mu e^- \rightarrow \mu^- \nu_e)}}
=
\frac{\sigma^{\nu e}_{NC} \times \Phi^\nu}{\sigma^{IMD} \times \Phi^{\nu}}.
\end{equation}
Because the cross section for IMD events is well determined by the
Standard Model, this ratio should have low errors and will isolate the
EW parameters from NC scattering. In the discussion below,
we will assume that the systematic error on this ratio is 0.5\%.
In the case of $\bar \nu_\mu$ data, the absolute normalization is more
complex because there is no equivalent process to inverse muon decay
(since there are no positrons in the detector). One can use the fact
that, for low exchange energy (or ``nu'') in Deep Inelastic
Scattering, the cross sections in neutrino and antineutrino scattering
approach the same constant, $A$ \cite{lownu}. This is called the ``low nu
method'' of flux extractions. For DIS events with low
energy transfer and hence low hadronic energy ($5 \lesssim E_{had}
\lesssim 10$ GeV), $N^{low~E_{had}}_{\nu DIS} = \Phi^\nu A$ and
$N^{low~E_{had}}_{\bar\nu DIS} = \Phi^{\bar \nu} A$. The result is that
the electroweak parameters can be extracted using the ratio
\begin{equation}
\frac {{N^{low~E_{had}}_{\nu DIS}}} {N^{low~E_{had}}_{\bar\nu DIS}} \times
\frac {N(\bar \nu_\mu e^- \rightarrow \bar \nu_\mu e^-)}
{N(\nu_\mu e^- \rightarrow \mu^- \nu_e)}
=
\frac {\Phi^\nu} {\Phi^{\bar \nu}} \times
\frac {\sigma^{\bar \nu e}_{NC} \times \Phi^{\bar \nu}}
{\sigma^{IMD} \times \Phi^\nu}.
\end{equation}
The first ratio cancels the DIS cross section, leaving the
energy-integrated $\nu$ to $\bar \nu$ flux ratio. The IMD events in
the denominator of the second term cancel the integrated $\nu$ flux.
The NC elastic events cancel the integrated $\bar \nu$ flux.
Because of the added layer of complexity, the antineutrino ES
measurement would have a higher systematic error than the neutrino
ES scattering measurement. The potentially higher error is one factor
leading to the plan that NuSOnG
concentrate on neutrino running for the ES studies.
As shown in Fig.~\ref{fig:IMDthresh}, IMD events have a kinematic threshold
at 10.9 GeV. These events also have other interesting kinematic properties.
The minimum energy of the outgoing muon in the lab frame is given by
\begin{equation}
E_{\mu~lab}^{min} = \frac{m_\mu^2 + m_e^2}{2 m_e}= 10.9~{\rm GeV}.
\end{equation}
In the detector described above, muons of this energy and higher will
reach the toroid spectrometer without ranging-out in the glass. An
interesting consequence is that, independent of $E_\nu$, the energy
transfer in the interaction has a maximum value of
\begin{equation}
y_{max} = 1 - \frac{10.9~{\rm GeV}}{E_\nu}.
\end{equation}
Thus at low $E_\nu$, the cutoff in $y$ is less than unity,
as shown in Fig.~\ref{fig:ycut} (left). The direct consequence of
this is a strong cutoff in angle of the outgoing muon, shown in
Fig.~\ref{fig:ycut} (right). In principle, one can reconstruct
the full neutrino energy in these events:
\begin{equation}
E_{\nu}^{IMD}=\frac{1}{2}\frac{2m_{e}E_{\mu}-m_{e}^{2}-m_{\mu}^{2}}%
{m_{e}-E_{\mu}+p_{\mu}\cos\theta_{\mu}}
\end{equation}
This formula depends on $\theta_\mu$, which is small. The reconstructed
$E_\nu$ is smeared by resolution effects as seen in
Fig.~\ref{fig:EnuIMD}. While the analysis can be done by summing over
all energies, these distributions indicate that an energy binned
analysis may be possible. This is more powerful because one can fit
for the energy dependence of backgrounds. For the illustrative
analyses below, however, we do not employ this technique.
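A simple limiting case gives a sanity check of the reconstruction formula (a sketch with assumed PDG-style masses): for a muon emitted exactly forward, the muon carries essentially the full neutrino energy, so the formula should return $E_\nu \approx E_\mu$:

```python
import math

# Sanity check of the E_nu reconstruction formula in the forward limit.
m_e = 0.000511     # GeV (assumed)
m_mu = 0.1056584   # GeV (assumed)

def E_nu_imd(E_mu, theta_mu):
    """Reconstructed neutrino energy (GeV) from the muon energy and
    lab angle, per the formula above."""
    p_mu = math.sqrt(E_mu**2 - m_mu**2)
    num = 2.0 * m_e * E_mu - m_e**2 - m_mu**2
    den = m_e - E_mu + p_mu * math.cos(theta_mu)
    return 0.5 * num / den

print(E_nu_imd(50.0, 0.0))   # ~50 GeV: forward muons saturate the energy
```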
\begin{figure}
\vspace{-0.5in}
\scalebox{0.425}{\includegraphics{enu_imd_gen_rec.pdf}}\vspace{-2.25in}
\caption{\label{fig:EnuIMD} Reconstructed neutrino energy (red) for IMD
events before cuts compared to true neutrino energy (black).}
\end{figure}
The error on $\sin^2\theta_W$ extracted from this ratio, $R_{ES/IMD}$,
assuming a Standard Model value for $\rho$, is the same as the error
on the ratio:
\begin{equation}
\frac{\delta(\sin^2\theta_W)}{\sin^2\theta_W} \approx \frac{\delta R_{ES/IMD}}{R_{ES/IMD}}.
\label{ratioesimderr}
\end{equation}
Ref.~\cite{MarcianoParsa} provides a useful summary of radiative corrections
for the ES and IMD processes, which were originally calculated
in Ref.~\cite{SSM}. The error from radiative corrections is expected to
be below 0.1\%. It is noted that to reduce the error below 0.1\%,
leading two-loop effects must be included. A new evaluation of the
radiative corrections is underway \cite{newradcor}.
\subsection{IMD Normalization vs. $\bar \nu$ Normalization
\label{IMDvsBarNu}
}
NuSOnG can measure both the $\nu/\bar \nu$ ES ratio, as in the
case of past experiments shown in
Eq.~(\ref{eq:nuerat}), as well as the ES/IMD ratio. In the case of the
former, to obtain the best measurement in a 5 year run, one would
choose a 1:3 ratio of run time in $\nu$ versus $\bar \nu$ mode. In the
latter case, one would maximize running in $\nu$ mode. The result of
the two cases is a nearly equal error on $\sin^2\theta_W$, despite
the fact that the error on the $\nu/\bar \nu$ ES is nearly twice that
of the ES/IMD ratio. To understand this, compare
Eq.~(\ref{rationunubarerr}) to Eq.~(\ref{ratioesimderr}). However, the
ES/IMD ratio is substantially stronger for reasons of physics.
Therefore, our conceptual design calls for running mainly with a $\nu$
beam. In this section we explore the issues for these two methods of
measurement further. We also justify why the precision measurement
requires high energies, only available from a Tevatron-based beam.
\subsubsection{Comparison of the Two Measurement Options}
From the point of view of physics, the ES/IMD ratio is more
interesting than the $\nu/\bar\nu$ ES ratio. This is because $\rho$
has canceled in the $\nu/\bar \nu$ ES ratio of Eq.~(\ref{eq:nuerat}),
leaving the ratio insensitive to physics which manifests itself
through changes in the NC coupling. Many of the unique physics goals
of NuSOnG, discussed in Sec.~\ref{Terascale}, depend upon sensitivity
to the NC coupling.
An equally important concern is one of systematics. The $\nu$ and $\bar \nu$
fluxes for a conventional neutrino beam are substantially different.
For the case of NuSOnG, the fluxes are compared in
Fig.~\ref{fig:beam}. Predicting the differences in these fluxes
from secondary production measurements and simulations leads to
substantial systematic errors. For beams at high energies ($>30$
GeV), such as NuSOnG, the ``low nu'' method \cite{lownu} for
determining the ratio of the neutrino to antineutrino fluxes from Deep
Inelastic events, developed by CCFR and NuTeV and described in
Sec.~\ref{IMDsub}, can be employed.
However, this leads to the criticism that one has introduced a new
process into the purely-leptonic analysis.
Neither criticism is relevant to the ES/IMD ratio. The sensitivity
to the new physics through the couplings does not cancel. Because
both processes are in neutrino mode, the flux exactly cancels, as long
as the neutrino energies are well above the IMD threshold (this will
be illustrated in the analysis presented in Sec.~\ref{justify}). This
ratio has the added advantage of needing only neutrino-mode running,
which means that very high statistics can be obtained. This is
clearly the more elegant solution.
It should be noted that nothing precludes continued running of NuSOnG
beyond the 5-year plan presented here. This run-length was selected
as ``reasonable'' for first results. If interesting physics
is observed in this first phase, an extended run in antineutrino mode
may be warranted, in which case {\it both} the ES/IMD and $\nu/\bar \nu$ ES
ratios could be measured. The latter would then constrain $\sin^2\theta_W$
in a pure neutrino measurement and the former is then used to
extract $\rho$.
To measure the ES/IMD ratio to high precision, there must be little
low energy flux. This is because the IMD has a threshold of 10.9 GeV,
and does not have substantial rate until $\sim 30$ GeV.
The low-energy cut-off in the flux (see Fig.~\ref{fig:beam})
coming from the energy-angle correlation of neutrinos from pion
decay, is ideal.
\subsubsection{Why a Tevatron-based Beam is Best for Both Options}
The ES/IMD measurement is not an option for the planned beams from the
Main Injector at Fermilab. For both presently planned Main Injector
experiments at Fermilab \cite{Minerva} and for the proposed
Project-X DUSEL beam \cite{PXDUSEL}, the
neutrino flux is peaked at $\sim$5 GeV. The majority of the flux of
these beams is below 5 GeV, and most of the flux is below the 10.9 GeV
IMD threshold. Because of this, one simply cannot use the IMD events
to normalize.
In principle, the $\nu/\bar \nu$ ES ratio could be used. However, in
practice this will have large systematics. The $\nu$ and $\bar \nu$
fluxes for a horn beam are significantly different. First principles
predictions of secondary mesons are not sufficient to reduce this
error to the precision level. The energy range is well below the deep
inelastic region where the ``low nu'' method can be applied to
accurately extract a $\bar \nu/\nu$ flux ratio. Other processes, such
as charged-current quasi-elastic scattering, could be considered for
normalization, but the differences in nuclear effects in neutrino and
antineutrino scattering for these events are not sufficiently well
understood to yield a precision measurement.
Lastly, the ES rates for the present Main Injector beams are too low
for a high statistics measurement. This is because the cross section
falls linearly with energy. Event samples on the order of 10k may be
possible with extended running in the Project X DUSEL beam in the
future. From the point of view of statistics, even though two orders
of magnitude more protons on target are supplied in such a beam, the
Tevatron provides a substantially higher rate of ES per year of
running.
Compared to the Main Injector beam, a Tevatron-based beam does not face
these issues. The choice of running in neutrino mode provides the
highest precision measurement while optimizing the physics.
\begin{table*}[tbp] \centering
\begin{ruledtabular}
\begin{tabular}
[c]{ll|cc|l}\hline
\multicolumn{2}{l|}{Quantity} & Assumed Value & Uncertainty & Source of
Estimate\\\hline
\multicolumn{2}{l|}{Muon} & & & \\
& Energy Resolution & $\delta E/E=10\%$ & 2.5\% & NuTeV testbeam measurement\\
& Energy Scale Error & $E_{rec}=1.0\times E_{true}$ & 0.5\% & NuTeV testbeam
measurement\\
& Angular Resolution & $\delta\theta=0.011/E^{0.96}$ rad & 2.5\% & Multiple
scattering fit simulation\\
\multicolumn{2}{l|}{Electron} & & & \\
& Energy Resolution & $\delta E/E=0.23/E^{0.5}$ & 1.0\% & Same as CHARM II\\
& Energy Scale Error & $E_{rec}=1.0\times E_{true}$ & 1.0\% & Scaled from
CHARM II with NuSOnG statistics\\
& Angular Resolution & $\delta\theta=0.008/E^{0.5}$ rad & 2.5\% & $2\times$ better
than CHARM II due to sampling\\
\multicolumn{2}{l|}{Flux} & & & \\
& Normalization & 1.0 & 3\% & Current total cross section uncertainty\\
& Shape Uncertainty & 1.0 & 1\% & Similar to NuTeV low-nu method\\
\multicolumn{2}{l|}{Backgrounds} & & & \\
& $\nu_\mu$ CCQE & 1.0 & 5\% & Extrapolated from NuTeV\\
& $\nu_e$ CCQE & 1.0 & 3\% & Extrapolated from CHARM II\\\hline
\end{tabular}
\end{ruledtabular}
\caption{Resolutions and systematic uncertainty estimates used in the parameterized
Monte Carlo studies. The NuTeV estimates are based on Ref. \cite{NuTeVres}
and the CHARM II
estimates from Ref. \cite{CharmIIdet}. Units for angles are radians and energies are in GeV.}\label{Resolution_syst_errors}%
\end{table*}%
\subsection{A $0.7\%$ Measurement Goal for the ES to IMD Ratio \label{justify}
}
Achieving 0.7\% precision on the ES/IMD measurement depends on reducing the
backgrounds to an acceptable level without introducing significant systematics
and while maintaining high signal statistics. Many of the
systematic uncertainties will tend to cancel. The most important background
for both the $\nu$-$e$ neutral current and IMD events comes from charged current
quasi-elastic (CCQE) scatters ($\nu_{e}n\rightarrow pe$ and $\nu_{\mu
}n\rightarrow p\mu$). These background CCQE processes have a much broader
$Q^{2}$ distribution than the signal processes and, therefore, can be partially
eliminated by kinematic cuts on the outgoing muon or electron. Initial cuts on
the scattering angle and energy of the outgoing muon or electron can easily
reduce the CCQE background by factors of 60 and 14 respectively while
retaining over 50\% of the $\nu$-$e$ neutral current and IMD signal. This leaves
events with very forward scatters and outgoing scattered protons of low
kinetic energy.
Because the NuSOnG design is at the conceptual stage and in order to be
conservative, we have developed two different strategies for achieving a 0.7\%
error. This serves as a proof of principle that this level of error, or
better, can be reached. The first method relies on detecting protons from the
quasi-elastic scatter. The second method uses the beam kinematics to cut the
low energy flux which reduces the CCQE background.
These methods were checked via two independently written parameterized Monte
Carlo simulations, which made the assumptions given in Table
\ref{Resolution_syst_errors} where both the assumed values and uncertainties
are presented. These estimates of resolutions and systematic errors are based
on previous experimental measurements or on fits to simulated data. One Monte
Carlo used the Nuance event generator \cite{Nuance} to produce events, while the other was
an independently written event generator. Both Monte Carlos include nuclear
absorption and binding effects.%
The first strategy uses the number of protons which exit the glass to
constrain the total rate of the background. In $\sim 33\%$ of the
events, a proton will exit the glass, enter a chamber and traverse the
gas. This samples protons of all energies and $Q^{2}$, since the
interactions occur uniformly throughout the glass. After initial cuts,
the protons are below 100 MeV, and therefore highly ionizing. If
we define 1 MIP as the energy deposited by a single minimum ionizing
particle, like a muon, then the protons
consistently deposit greater than 5 MIPs in the chamber. Thus, one
can identify CCQE events by requiring
$>4$ MIPs in the first chamber. The amount of remaining CCQE background
after this requirement can be measured if a fraction such as 10\% of
the detector is made from scintillating glass that can directly
identify CCQE events from light associated with the outgoing proton. A
wide range of scintillating glasses have been developed \cite{glasses}
for nuclear experiments. These glasses are not commonly used in high
energy physics experiments because the scintillation time constant is
typically on the order of 100 ns. In a neutrino experiment, which
has inherently lower rates than most particle experiments,
this is not an issue. CCQE events can be identified by the
scintillation light from the proton assuming reasonable parameters for
the glass and readout photomultiplier tubes: 450 photons/MeV, an
attenuation length of 2 m, eight phototubes per glass sheet, quantum
efficiency of the tubes of 20\%. Using the identified CCQE events from
the instrumented glass, the uncertainty in the residual background can
be reduced to 2.0\% for the IMD measurement. For the CCQE background
to the $\nu_\mu $-$e$ neutral current measurement, the uncertainty is
assumed to be 3\% for the Monte Carlo prediction. Combining all the
systematic errors leads to a $\sim $0.7\% accuracy on the $\nu$-$e$
measurement as shown in Tab.~\ref{Method_1_errors}.%
In Tab.~\ref{Method_1_errors}, the cancellation of the flux errors
should be noted. This occurs because we use the ES/IMD ratio,
as discussed in the previous section.
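The scintillation parameters quoted above can be turned into a rough photoelectron estimate. The geometric light-collection fraction is not specified in the text; the 5\% used below is purely an assumption for illustration:

```python
import math

# Rough photoelectron yield for a proton stopping in scintillating glass,
# using the parameters quoted in the text: 450 photons/MeV, 2 m attenuation
# length, 20% PMT quantum efficiency. The geometric collection fraction
# f_geom is NOT given in the text; 0.05 is a placeholder assumption.
yield_per_mev = 450.0   # photons/MeV (from text)
atten_len = 2.0         # attenuation length [m] (from text)
qe = 0.20               # PMT quantum efficiency (from text)
f_geom = 0.05           # assumed geometric collection fraction

def photoelectrons(e_dep_mev, dist_m):
    """Detected photoelectrons for e_dep_mev deposited at dist_m from a tube."""
    return e_dep_mev * yield_per_mev * math.exp(-dist_m / atten_len) * f_geom * qe

# A 50 MeV proton 1 m from the nearest tube yields O(100) photoelectrons,
# comfortably above any single-photoelectron detection threshold.
npe = photoelectrons(50.0, 1.0)
```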
\begin{table*}[tbp] \centering
\begin{ruledtabular}
\begin{tabular}
[c]{rrccc}\hline
& & IMD Uncertainty & ES Uncertainty & Uncertainty on Ratio\\\hline
\multicolumn{2}{l}{Statistical Uncertainty} & 0.18\% & 0.46\% & 0.49\%\\\hline
\multicolumn{2}{l}{Resolution Smearing} & & & \\
& $\delta$(E$_{\mu}$) $=\pm2.5\%$ & 0.00\% & 0.00\% & 0.00\%\\
& $\delta$($\theta_{\mu}$) $=\pm2.5\%$ & 0.04\% & 0.00\% & 0.04\%\\
& $\delta$(E$_{e}$) $=\pm1.5\%$ & 0.00\% & 0.01\% & 0.01\%\\
& $\delta$($\theta_{e}$) $=\pm2.5\%$ & 0.00\% & 0.09\% & 0.09\%\\\hline
\multicolumn{2}{l}{Energy Scale} & & & \\
& $\delta$(Escale$_{\mu}$) $=0.5\%$ & 0.37\% & 0.00\% & 0.37\%\\
& $\delta$(Escale$_{e}$) $=1.5\%$ & 0.00\% & 0.19\% & 0.19\%\\\hline
\multicolumn{2}{l}{Flux} & & & \\
& Normalization & 3.00\% & 3.00\% & 0.00\%\\
& High energy flux up 1\% & 0.25\% & 0.25\% & 0.00\%\\
& Low energy flux up 1\% & 0.15\% & 0.13\% & 0.02\%\\\hline
\multicolumn{2}{l}{IMD Background: statistical error} & 0.06\% & 0.00\% &
0.06\%\\
\multicolumn{2}{r}{2.0\% systematic error} & 0.26\% & 0.00\% & 0.26\%\\\hline
\multicolumn{2}{l}{$\nu_{\mu}$e Background: statistical error} & 0.00\% &
0.12\% & 0.12\%\\
\multicolumn{2}{r}{3\% systematic error} & 0.00\% & 0.19\% & 0.19\%\\\hline
& & \multicolumn{2}{r}{Total Syst. Uncertainty on Ratio} & 0.54\%\\
& & \multicolumn{2}{r}{Total Stat. Uncertainty on Ratio} & 0.51\%\\
& & \multicolumn{2}{r}{Total Uncertainty on Ratio} & 0.74\%\\\hline
\end{tabular}
\end{ruledtabular}
\caption{Estimates of the IMD and ES uncertainties using a $>5$ MIP cut on the first downstream
chamber. The columns give the errors for each process and then for the ratio. Errors are included for
statistical uncertainties and uncertainties associated with the knowledge of resolution smearing,
energy scale, flux shape, and backgrounds. The flux shape uncertainties are significantly reduced
in the ratio measurement.}\label{Method_1_errors}%
\end{table*}%
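The totals in Tab.~\ref{Method_1_errors} follow from adding the ratio-column entries in quadrature; as a cross-check:

```python
import math

# Reproduce the bottom lines of the Method-1 error table by adding the
# ratio-column entries in quadrature (all values in %).
syst = [0.00, 0.04, 0.01, 0.09,   # resolution smearing
        0.37, 0.19,               # energy scale
        0.00, 0.00, 0.02,         # flux
        0.26, 0.19]               # background systematics
stat = [0.49, 0.06, 0.12]         # signal, IMD bkg, nu_mu-e bkg statistics

def quad(xs):
    return math.sqrt(sum(x * x for x in xs))

total_syst = quad(syst)                  # ~0.54%
total_stat = quad(stat)                  # ~0.51%
total = quad([total_syst, total_stat])   # ~0.74%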
The second strategy involves reducing the relative CCQE background to
signal by using a harder flux for the analysis. This study used the
same Monte Carlos, with the resolutions listed in
Tab.~\ref{Resolution_syst_errors}, as the first analysis. The total
systematic and statistical error achieved was 0.6\%. Below, we
explain how a harder flux is obtained for the analysis. Then, we
explain how this flux improves the signal-to-background in both the ES
and IMD analyses.
The strong correlation between energy and angle at the NuSOnG detector
is used to isolate the harder flux. This is simplest to express in
the non-bend view of the beamline, where it is given for pions by
the well-known off-axis formula:
\begin{equation}
E_\nu = {{0.43 E_{\pi}}\over{1+\gamma^2 \theta^2}},
\end{equation}
where $\theta$ is the off-axis angle, $\gamma=E_\pi/m_\pi$, $E_\pi$ is
the energy of the pion and $E_\nu$ is the energy of the neutrino.
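The falloff implied by this formula can be evaluated directly; even a milliradian-scale off-axis angle pulls the neutrino energy down sharply, which is the origin of the low-energy cutoff:

```python
# Off-axis approximation for the neutrino energy from pi -> mu nu decay
# (equation above): E_nu = 0.43 E_pi / (1 + gamma^2 theta^2).
m_pi = 0.13957  # charged-pion mass [GeV]

def e_nu(e_pi, theta):
    """Neutrino energy [GeV] for a pion of energy e_pi [GeV] at angle theta [rad]."""
    gamma = e_pi / m_pi
    return 0.43 * e_pi / (1.0 + (gamma * theta) ** 2)

# A 100 GeV pion gives a 43 GeV neutrino on axis, but far less at 2 mrad.
on_axis = e_nu(100.0, 0.0)
off_axis = e_nu(100.0, 0.002)
```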
For the NuTeV beam and detector lay-out, this angle-energy dependence
resulted in the sharp cutoff of the flux for $<30$ GeV
shown in Fig. \ref{fig:beam}. Using the NuTeV G3 beam Monte Carlo \cite{NuTeVbeam},
we have shown that by selecting vertices in the
central region of the detector, one can adjust the energy where the
flux sharply cuts off. Adjusting the aperture to retain flux
above 50 GeV reduces the total event rate by 55\%.
A harder flux allows for background reduction in both the ES and the
IMD samples while maintaining the signal at high efficiency. In the
case of ES events, the background is from $\nu_e$ CCQE. The energy
distribution of the electron is substantially different in the two
cases. In the case of $\nu_e$ CCQE events, the electron carries most
of the energy of the incoming neutrino because the exchange energy in
the interaction is small. Thus the CCQE events produced by the harder
flux populate the visible energy range above 50 GeV. On the other
hand, the outgoing electron in ES events tends to populate the low
visible energy region due to the combination of a flat $y$
distribution for the process convoluted with the incident neutrino
energy spectrum. The result is that a cut on the visible energy less
than 50 GeV reduces the error from the $\nu_e$ CCQE background to a
negligible level. To understand the improvement in the IMD analysis,
consider Fig.~\ref{fig:IMDthresh}, which shows the threshold effects.
The IMD signal is also rising with energy. In contrast, the $\nu_\mu$
CCQE rate, which is the most significant background, is flat with
energy for fluxes above 1 GeV. The signal-to-background ratio is thus
greatly improved with a high energy flux. This allows looser cuts to be
applied, which in turn reduces the systematics.
These two analyses use substantially different strategies and can, in
principle, be combined. Given these preliminary studies, we feel
confident that as the detector moves from a conceptual to real design,
we will be able to achieve a better than 0.7\% error. However,
for this paper we take the conservative approach of assuming 0.7\%.
\subsection{Neutrino Quark Scattering \label{PWsection}}
Substantially higher precision has been obtained using neutrino-quark
scattering, which compares neutral-current (NC) to charged-current (CC)
scattering to extract $\sin^2 \theta_W$. However, these experiments are
subject to issues of modeling in the quark sector.
Fig.~\ref{pastnu}(bottom) reviews the history of these measurements.
The lowest systematic errors come from
implementing a ``Paschos-Wolfenstein style'' \cite{PW} analysis.
This PW technique would be
used by any future experiment, including NuSOnG. This requires high
purity
$\nu$ and $\bar \nu$ beams, for which
the following ratios of DIS events could be formed:
\begin{eqnarray}
R^\nu &=& \frac{\sigma_{NC}^\nu}{\sigma_{CC}^\nu} \\
R^{\bar \nu} &=& \frac{\sigma_{NC}^{\bar \nu}}{\sigma_{CC}^{\bar \nu}}.
\end{eqnarray}
Paschos and Wolfenstein \cite{PW} recast these as:
\begin{equation}
R^- = \frac{\sigma_{NC}^\nu - \sigma_{NC}^{\bar \nu}}{\sigma_{CC}^\nu - \sigma_{CC}^{\bar \nu}} = \frac{R^\nu - r R^{\bar \nu}}{1-r},
\label{eq:PWRminus}
\end{equation}
where $r=\sigma_{CC}^{\bar \nu}/\sigma_{CC}^{\nu}$. In $R^-$ many systematics cancel to first order, including the effects of the quark and
antiquark seas for $u, d, s$, and $c$. Charm production only enters
through $d_{valence}$ (which is Cabibbo suppressed) and at high $x$;
thus the error from the charm mass is greatly reduced.
The cross section ratios can be written in terms of the effective
neutrino-quark coupling parameters $g_L^2$ and $g_R^2$ as
\begin{eqnarray}
R^\nu &=& g_L^2+rg_R^2 \\
R^{\bar \nu} &=& g_L^2 + {1 \over r} g_R^2\\
R^- &=& g_L^2-g_R^2 = \rho^2 ({1 \over 2} - \sin^2\theta_W),
\end{eqnarray}
in which
\begin{eqnarray}
g_L^2 & = & (2 g_L^\nu g_L^u)^2 + (2 g_L^\nu g_L^d)^2~\cr
& = & \rho^2 ({1 \over 2} - \sin^2 \theta_W + {5 \over 9} \sin^4 \theta_W) \label{eq:gl}\\
g_R^2 & = & (2 g_L^\nu g_R^u)^2 + (2 g_L^\nu g_R^d)^2 \cr
& = &\rho^2({5 \over 9} \sin^4 \theta_W). \label{eq:gr}
\end{eqnarray}
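These tree-level expressions can be evaluated directly; note that the Standard Model values of $g_L^2$ and $g_R^2$ quoted later in the text include small electroweak corrections beyond this tree-level sketch:

```python
# Evaluate the effective couplings and the Paschos-Wolfenstein ratio from
# the tree-level expressions above, taking rho = 1 and the global-fit
# value sin^2(theta_W) = 0.2227 quoted in the text.
rho = 1.0
s2w = 0.2227

gL2 = rho**2 * (0.5 - s2w + (5.0 / 9.0) * s2w**2)   # ~0.305
gR2 = rho**2 * (5.0 / 9.0) * s2w**2                 # ~0.028
R_minus = gL2 - gR2   # reduces to rho^2 (1/2 - sin^2 theta_W)
```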
In a variation on the PW idea, rather than directly form $R^-$, NuTeV
fit simultaneously for $R^\nu$ and $R^{\bar \nu}$ to extract $\sin^2
\theta_W,$ obtaining the value $\sin^2\theta_W=0.2277\pm 0.00162$.
Events were classified according to the length of hits in the
scintillator planes of the NuTeV detector, with long events
identified as CC interactions and short events as NC. An important
background in the CC sample came from pion decay-in-flight, producing
a muon in a NC shower. Significant backgrounds in the NC sample came
from muons which ranged out or exited and from $\nu_e$ CC scatters
which do not have a muon and thus are classified as ``short.''
In this paper, we present the sensitivity of NuSOnG to new physics if
the NuTeV errors are reduced by a factor of $\sim 2$. This is a very
conservative estimate, since most of the improvement comes from higher
statistics. Only a 90\% improvement in the systematics is required to
reach this goal. Tab.~\ref{NuTeVerrs} argues why a 90\% reduction in
systematic error should be straightforward to achieve. It is likely
that the NuSOnG errors will be lower, but this requires detailed
study.
In Table~\ref{NuTeVerrs}, we list the errors which NuTeV identified in their
original analysis and indicate how NuSOnG will improve each error.
Many of the largest experimental systematics of
NuTeV are improved by introducing a fine-grained sampling calorimeter.
The NuTeV detector had four inches of iron between unsegmented
scintillator planes and eight inches between drift chamber planes.
Better lateral segmentation and transverse detection will
improve identification of scatters from intrinsic $\nu_e$s in
the beam and separation of CC and NC events by improved
three-dimensional shower shape
analyses. The NuTeV analyses of the intrinsic $\nu_e$ content \cite{Serge}
and of the CC/NC separation for the $\sin^2 \theta_W$ analysis
relied strictly on event length. With this said, the power
of classifying by event length is shown by the fact that the
NuTeV intrinsic $\nu_e$ analysis was sensitive to a discrepancy in
the predicted intrinsic $\nu_e$ rate which was recently resolved
with a new measurement of the $K_{e3}$ branching ratio that was
published in 2003. Details of these issues are considered in
the next section.
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{|c|c|l|} \hline
Source & NuTeV & Method of reduction in NuSOnG \\
& Error & \\ \hline \hline
Statistics & 0.00135 & Higher statistics \\ \hline \hline $\nu_e$, $\bar \nu_e$ flux prediction & 0.00039 & Improved in-situ measurement of $\bar \nu_e$ CC scatters, thereby constraining the prediction,\\
& & due to better lateral segmentation and transverse detection.\\
& & Also, improved beam design to further reduce $\bar \nu_e$ from $K^0$.\\ \hline
Interaction vertex position & 0.00030 & Better lateral segmentation. \\ \hline
Shower length model & 0.00027 & Better lateral segmentation and transverse detection \\
& & will allow more sophisticated shower identification model. \\ \hline
Counter efficiency and noise & 0.00023 & Segmented scintillator strips of the type \\
& & developed by MINOS \cite{MINOSStrips} will improve this. \\ \hline
Energy Measurement & 0.00018 & Better lateral segmentation. \\ \hline \hline
Charm production, strange sea &
0.00047 & In-situ measurement \cite{EOI, QCDPRD}. \\
\hline
$R_L$ &
0.00032 & In-situ measurement \cite{EOI, QCDPRD}.\\
\hline
$\sigma^{\bar
\nu}/\sigma^{\nu}$ & 0.00022 & Likely to be at a similar level. \\
\hline Higher Twist & 0.00014 & Recent results reduce this error \cite{Petti}. \\ \hline
Radiative Corrections & 0.00011 & New analysis underway, see text below. \\ \hline
Charm Sea & 0.00010 & Measured in-situ using wrong-sign muon production in DIS. \\
\hline
\hline Non-isoscalar target & 0.00005 & Glass is isoscalar \\
\hline
\end{tabular}
\caption{Source and value of NuTeV errors on $\sin^2 \theta_W$, and
reason why the error will be reduced in the PW-style analysis of
NuSOnG. This paper assumes NuSOnG will reduce the total NuTeV error
by a factor of two. This is achieved largely through the improved
statistical precision and requires only a 90\% reduction in the
overall NuTeV systematic error. This table argues that a better than
90\% reduction is likely, but further study, once the detector
design is complete, is required.}
\label{NuTeVerrs}
\end{ruledtabular}
\end{table*}
\subsection{The NuTeV Anomaly \label{nutevsection}
}
From Fig.~\ref{pastnu}, it is apparent that the NuTeV measurement
agrees with past neutrino scattering results, although
these have much larger errors;
however, it disagrees with the
global fits to the electroweak data, which give a Standard Model
value of $\sin^2\theta_W =0.2227$ \cite{NuTeVanomaly}.
Expressed in terms of the couplings,
NuTeV measures:
\begin{eqnarray}
g_L^2 &=& 0.30005 \pm 0.00137 \\
g_R^2 &=& 0.03076 \pm 0.00110,
\end{eqnarray}
which can be compared to the Standard Model values of $g_L^2=0.3042$ and
$g_R^2=0.0301$, respectively.
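The discrepancy is driven almost entirely by $g_L^2$; a quick pull computation with the numbers above (experimental errors only) makes this explicit:

```python
# Pulls of the NuTeV effective couplings from the Standard Model values
# quoted in the text.
gL2_meas, gL2_err, gL2_sm = 0.30005, 0.00137, 0.3042
gR2_meas, gR2_err, gR2_sm = 0.03076, 0.00110, 0.0301

pull_L = (gL2_meas - gL2_sm) / gL2_err  # ~ -3 sigma: the anomaly
pull_R = (gR2_meas - gR2_sm) / gR2_err  # ~ +0.6 sigma: consistent with SM
```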
NuTeV is one of a set of $Q^2 \ll m_Z^2$ experiments measuring
$\sin^2\theta_W$. It was performed at $Q^2 =$ 1 to 140 GeV$^2$,
$\langle Q^2_\nu \rangle=26$ GeV$^2$, $\langle Q^2_{\bar \nu} \rangle
=15$ GeV$^2$, which is also the expected range for NuSOnG. Two other
precision low $Q^2$ measurements are from atomic parity
violation \cite{APV} (APV), which samples $Q^2 \sim 0$; and SLAC E158,
a M{\o}ller scattering experiment at average $Q^2=0.026$ GeV$^2$
\cite{E158}. Using the measurements at the $Z$-pole with $Q^2=M_z^2$
to fix the value of $\sin^2 \theta_W$, and evolving to low
$Q^2$\cite{E158web}, the APV and SLAC E158 are in agreement with the
Standard Model. However, the radiative corrections to neutrino
interactions allow sensitivity to high-mass particles which are
complementary to the APV and M{\o}ller-scattering corrections. Thus,
these results may not be in conflict with NuTeV. The NuSOnG
measurement will provide valuable additional information on this
question.
Since the NuTeV result was published, more than 300 papers have been
written which cite this result.
Several
``Standard-Model'' explanations have been suggested. While
some constraints on these ideas can come from outside experiments,
it will be necessary for any future neutrino scattering experiment,
such as NuSOnG,
to be able to directly address these proposed solutions.
Also various Beyond Standard Model
explanations have been put forward; those which best explain the
result require a follow-up experiment which probes the neutral weak
couplings specifically with neutrinos, such as NuSOnG.
Here, we consider the explanations which are
``within the Standard Model'' and address the Beyond Standard
Model later.
\begin{figure}
\vspace{5mm}
\centering
\scalebox{0.4}{\includegraphics[bb=0 155 585 700]{pulls3.pdf}}
\caption{\label{pulls} Effect of various ``Standard Model''
explanations on the NuTeV anomaly. The $y$-axis is the deviation
($\delta
\sin^2\theta_W=\sin^2\theta_W^{SM}-\sin^2\theta_W^{NuTeV}$). The
solid line is the published NuTeV deviation. Thick black lines
extending from the NuTeV deviation show the range of possible pulls
from NLO QCD and various isospin violation models. Note that the
isospin violation models are mutually exclusive and so should not be
added in quadrature. They are, from left to right, the full bag
model, the meson cloud model, and the isospin QED model. }
\end{figure}
Several systematic adjustments to the NuTeV result have been
identified since the result was published but have not yet been
incorporated into a new NuTeV analysis. As discussed here, the
corrections due to the two new inputs, a new $K_{e3}$ branching ratio
and a new strange sea asymmetry, are significant in size but are in
opposite directions -- one away from and one toward the Standard Model. So a
re-analysis can be expected to yield a central value for NuTeV which
will not change significantly. However, the error is expected to
become larger.
In 2003, a new result from BNL865 \cite{Ke3} yielded a
$K_{e3}$ branching ratio which was $2.3\sigma$ larger than
past measurements and a value of $|V_{us}|^2$ which brought
the CKM matrix measurements into agreement with unitarity in the
first row \cite{865impact}. The measurement was confirmed by
CERN NA48/2 \cite{Na48}.
The resulting increased $K_{e3}$ branching ratio \cite{PDG}
increases the absolute prediction of intrinsic $\nu_e$s in the NuTeV
beam. This does not significantly change the error, because the error
on $K_{e3}$ was already included in the analysis. However, it introduces
a correction moving the NuTeV result further away from the Standard
Model, since it implies that in the original analysis, NuTeV
under-subtracted the $\nu_e$ background in the NC sample. The shift in
$\sin^2\theta_W$ can be estimated in a back-of-the-envelope calculation to
be $\sim$0.001 away from the Standard Model
\cite{SamPrivateComm}.
The final NuTeV
measurement of the difference between the strange and anti-strange sea
momentum distributions was published in 2007 \cite{DMason}. This
``strange sea asymmetry'' is defined as
\begin{equation}
xs^-(x)\equiv xs(x)-x\overline{s}(x).
\end{equation}
Because of mass suppression for the production of charm in CC scatters
from strange quarks, a difference in the momentum distributions will result
in a difference in the CC cross sections for neutrinos and
antineutrinos. Thus a correction to the denominator of
Eq.~(\ref{eq:PWRminus}) would be required. The most recent next-to-leading
order analysis finds that the asymmetry, integrated over $x$, is $0.00195\pm
0.00055\pm 0.00138$ \cite{DMason}. An integrated asymmetry of 0.007
is required to explain the published NuTeV result \cite{DMason}, and so
one can estimate that this is a shift of about 0.0014 in
$\sin^2\theta_W$ toward the Standard Model. In this case, the errors
on the NuTeV result will become larger because this effect was not
originally considered in the analysis. A very naive estimate of the
size of the increase can be derived by scaling the error on the
integrated strange sea, quoted above, and is about 0.001 toward
the Standard Model. If this naive estimate of the
systematic error is borne out, then this could
raise the NuTeV error on $\sin^2\theta_W$ from 0.0016 to 0.0018.
NuSOnG will directly address the strange sea asymmetry in
its QCD measurement program, as described in ref. \cite{QCDPRD}.
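The back-of-the-envelope scaling described above can be reconstructed as follows. The exact inputs used are not spelled out in the text; the $\sim$0.005 NuTeV deviation in $\sin^2\theta_W$ assumed below is inferred from the values quoted earlier, so this is only an illustrative sketch:

```python
import math

# Scaling: an integrated strange-sea asymmetry of 0.007 would remove the
# full NuTeV deviation in sin^2(theta_W) (~0.005, from the values quoted
# earlier in the text), so the measured asymmetry scales proportionally.
asym = 0.00195
asym_err = math.hypot(0.00055, 0.00138)  # quadrature of the quoted errors
asym_full = 0.007   # asymmetry that would explain the full result
deviation = 0.005   # assumed NuTeV deviation in sin^2(theta_W)

shift = asym / asym_full * deviation          # ~0.0014 toward the SM
shift_err = asym_err / asym_full * deviation  # ~0.001 added error

# Naively inflating the published 0.0016 error with this new systematic:
new_total = math.hypot(0.0016, shift_err)     # close to the ~0.0018 quoted
```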
In ref. \cite{diener}, additional electromagnetic radiative
corrections have been suggested as a source of the discrepancy.
However, this paper only considered the effect of these corrections on
$R^\nu$, not $R^{\bar \nu}$, and only for a fixed beam energy of $E_\nu=80$
GeV. The structure of the code from these authors has also made it
difficult to modify for use in NuTeV. This has prompted a new set of
calculations by other authors which are now under way \cite{newradcor}.
There are, as yet, only estimates for the approximate size of newly
identified effects, which are small.
The NuTeV analysis was not performed at a full NLO level in QCD; any
new experiment, such as NuSOnG, will need to undertake a full NLO
analysis. This is possible given recently published calculations
\cite{Ellis,Moch}, including those on target mass corrections
\cite{SamTargMass}. On Fig.~\ref{pulls}, we show an early estimate of
the expected size and direction of the pull \cite{nlomodels}. On this
plot, the solid horizontal line indicates the deviation of NuTeV from
the Standard Model. The thick vertical lines, which emanate from the
NuTeV deviation, show the range of pulls estimated for various
explanations. The range of pull for the NLO calculation is shown on
the left.
The last possibility is that there is large isospin violation (or
charge symmetry violation) in the nucleus. The NuTeV analysis assumed
isospin symmetry, that is, $u(x)^p = d(x)^n$ and $d(x)^p = u(x)^n$.
Isospin violation can come about from a variety of sources and is an
interesting physics question in its own right. NuSOnG's direct
constraints on isospin violation are discussed in ref. \cite{QCDPRD},
which also considers the constraints from
other experiments.
Various models for isospin violation have
been studied and their pulls range from less than $1\sigma$ away from
the Standard Model to $\sim 1 \sigma$ toward the Standard Model
\cite{isomodels}. We have chosen three examples \cite{isomodels} for
illustration on Fig.~\ref{pulls}: the full bag model, the meson cloud
model, and the isospin QED model. These are mutually exclusive
models, so only one of these can affect the NuTeV anomaly.
\section{The Terascale Physics Reach of NuSOnG\label{Terascale}}
\begin{center}
\begin{table*}[tbp]
\begin{ruledtabular}
\begin{tabular}{|l|l|}
Topic & Contribution of NuSOnG Measurement \\ \hline \hline
Oblique Corrections & Four distinct and complementary probes of $S$ and $T$.\\
& In the case of agreement with LEP/SLD: $\sim$25\% improvement in
electroweak precision. \\ \hline
Neutrino-lepton NSIs & Order of magnitude improvement in neutrino-electron effective couplings measurements.\\
& Energy scale sensitivity up to $\sim 5$ TeV at 95\% CL.\\ \hline
Neutrino-quark NSIs & Factor of two improvement in neutrino-quark effective coupling measurements.\\
& Energy scale sensitivity up to $\sim 7$ TeV at 95\% CL.\\ \hline
Mixing with Neutrissimos & 30\% improvement on the $e$-family coupling in a global fit. \\
& 75\% improvement on the $\mu$-family coupling in a global fit. \\ \hline
Right-handed Couplings & Complementary sensitivity to $g_R/g_L$ compared to LEP.\\
& Order of magnitude improvement compared to past experiments. \\
\end{tabular}
\end{ruledtabular}
\caption{Summary of NuSOnG's contribution to general Terascale physics studies.}
\label{Tab:genericsummary}
\end{table*}
\end{center}
Even when new states are too heavy to be produced at resonance in
collisions they can make their presence known indirectly, as virtual
particles which affect SM processes through interference with SM
contributions to amplitudes. The new heavy states induce small shifts
in observables from SM predictions, and conversely precise
measurements of these observables can constrain or detect new physics
at mass scales well above the energies of the colliding particles. In
this way the precision neutrino scattering measurements at NuSOnG will
place TeV-scale indirect constraints on many classes of new physics,
or perhaps detect new physics by measuring deviations from SM
predictions. The effects of new high-scale physics may be reduced to
a small number of effective operators along with corresponding
parameters which may be fit to data. Although the particular set of
operators used depends on broad assumptions about the new physics, the
approach gives a parameterization of new physics which is largely
model-independent.
For concreteness we will assume that NuSOnG will be able to measure
the neutrino ES/IMD ratio to a precision of 0.7\%, $\sigma({\bar
\nu}_\mu e)$ (normalized as per Sec.~\ref{IMDsub}) to 1.3\%, and
that NuSOnG will be able to halve the errors on NuTeV's measurement of
DIS effective couplings, to $\Delta g_L^2=0.0007$ and $\Delta
g_R^2=0.0006$ (where $g_L$ and $g_R$ were defined in Eqs.~(\ref{eq:gl})
and (\ref{eq:gr})).
We first parameterize new physics using the oblique parameters $S$ and $T$,
which is appropriate when the important effects of the new physics
appear in vacuum polarizations of gauge bosons. We next assume new
physics effects manifest as higher-dimensional operators made of SM
fermion fields. We separately consider the possibility that the gauge
couplings to neutrinos are modified. Realistic models usually
introduce several new operators with relations among the coefficients;
we consider several examples. A summary of the contributions of
NuSOnG to the study of Terascale Physics is provided in
Table~\ref{Tab:genericsummary}.
\subsection{Oblique corrections \label{obliquecorrections}}
For models of new physics in which the dominant loop corrections are
vacuum polarization corrections to the $SU(2)_L \times U(1)_Y$ gauge
boson propagators (``oblique'' corrections), the $STU$
\cite{Peskin:1991sw, Peskin:1990zt} parameterization provides a
convenient framework in which to describe the effects of new physics
on precision electroweak data. Differences between the predictions of
a new physics model and those of a reference Standard Model (with a
specified Higgs boson and top quark mass) can be expressed as nonzero
values of the oblique correction parameters $S$, $T$ and $U$. $T$ and
$U$ are sensitive to new physics that violates isospin, while $S$ is
sensitive to isospin-conserving physics. Predictions of a Standard
Model with Higgs or top masses different from the reference Standard
Model may also be subsumed into shifts in $S$ and $T$ (in many models
$U$ is much smaller than $S$ and $T$ and is largely unaffected by the
Higgs mass, so it is often omitted in fits). Within a specific model
of new physics the shift on the $ST$ plot away from the SM will be
calculable \cite{Peskin:2001rw}. For example,
\begin{itemize}
\item A heavy Standard Model Higgs boson will make a positive
contribution to $S$ and a larger negative contribution to $T$.
\item Within the space of $Z^\prime$ models, a shift in almost any direction
in $ST$ space is possible, with larger shifts for smaller $Z^\prime$
masses.
\item Models with a fourth-generation of
fermions will shift $S$ positive, and will shift $T$ positive if there
are violations of isospin.
\end{itemize}
In constructing models incorporating
several types of new physics the corresponding shifts to $S$ and $T$
combine; if contributions from different sectors are large, then they must
conspire to cancel.
\begin{figure}[ht]
\includegraphics[scale=0.4]{STfig1.pdf}
\caption{The impact of NuSOnG on the limits of $S$ and $T$.
The reference SM is $m_t=170.9$~GeV, and $m_H=115$~GeV.
1$\sigma$ bands due to NuSOnG observables are shown against the 90\%
contour from LEP/SLD. The central ellipses are the 68\% and 90\% confidence limit contours with NuSOnG included. See Eqs.~(\ref{eq:gl}) and (\ref{eq:gr}) for the definitions
of $g_L$ and $g_R$.}
\label{STplot1}
\end{figure}
The constraints on $S$ and $T$ from
the full set of precision electroweak data
strongly restrict the models of new physics which are viable.
The strongest constraints are from LEP/SLD,
which give a current bound of
\begin{eqnarray}
S & = & -0.02\pm 0.11 \;,\cr
T & = & +0.06\pm 0.13 \;,\cr
\mathrm{Corr}(S,T) & = & 0.91\;.
\end{eqnarray}
The ES and DIS measurements from NuSOnG provide four distinct and
complementary probes of $S$ and $T$,
as shown in Fig.~\ref{STplot1}.
If the target precision is achieved, and assuming the NuSOnG measurements
agree with SM predictions, NuSOnG will further
reduce the errors on $S$ and $T$ from the LEP/SLD values to
\begin{eqnarray}
S & = & -0.05\pm 0.09 \;,\cr
T & = & +0.02\pm 0.10 \;,\cr
\mathrm{Corr}(S,T) & = & 0.87\;.
\end{eqnarray}
The $\sim 25\%$ reduction in the errors is primarily due to the
improved measurement of $g_L^2$. We note that the error on $g_L^2$ is
likely to be further reduced (see Sec.~\ref{PWsection}), so this is a
conservative estimate of NuSOnG's contribution to this physics.
\subsection{Non-standard interactions}
NuSOnG will probe new physics that modifies neutrino-quark and
neutrino-electron scattering. If the masses associated to the new
degrees of freedom are much larger than the center of mass energy
($s=2m_eE_{\rm beam}\lesssim 0.5$~GeV$^2$) then modifications to these
processes are well-described by higher-dimensional effective
operators. In the context of neutrino reactions, these operators are
also referred to as non-standard interactions (NSI's). In a
model-independent effective Lagrangian approach these effective
operators are added to the SM effective Lagrangian with arbitrary
coefficients. Expressions for experimental observables can be
computed using the new effective Lagrangian, and the arbitrary
coefficients can then be constrained by fitting to data. Typically,
bounds on the magnitude of the coefficients are obtained using only one or a
few of the available effective operators. This approach simplifies
the analysis and gives an indication of the scale of constraints,
although we must be mindful of relationships among different operators
that will be imposed
by specific assumptions regarding the underlying physics.
To assess the sensitivity of NuSOnG to ``heavy'' new physics in
neutral current processes, we introduce the following effective
Lagrangian for neutrino-fermion interactions
\cite{Bandyopadhyay:2007kx,Davidson:2003ha,Mohapatra:2005wg}:
\begin{eqnarray}
\mathcal{L}_\mathrm{NSI}
& = &
-\sqrt{2}G_F
\Bigl[\, \bar{\nu}_\alpha \gamma_\sigma P_L \nu_\beta
\,\Bigr]
\Bigl[\,
\varepsilon_{\alpha\beta}^{fV}\, \bar{f}\gamma^\sigma f
-\varepsilon_{\alpha\beta}^{fA}\, \bar{f}\gamma^\sigma\gamma_5 f
\,\Bigr] \cr
& = &
-2\sqrt{2}G_F
\Bigl[\, \bar{\nu}_\alpha \gamma_\sigma P_L \nu_\beta
\,\Bigr]
\Bigl[\,
\varepsilon_{\alpha\beta}^{fL}\, \bar{f}\gamma^\sigma P_L f\cr
&& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\varepsilon_{\alpha\beta}^{fR}\, \bar{f}\gamma^\sigma P_R f
\,\Bigr] \;,
\label{NSI}
\end{eqnarray}
where $\alpha,\beta=e,\mu,\tau$ and $L,R$ represent left-chiral and
right-chiral fermion fields. If $\alpha\neq\beta$, then the
$\alpha\leftrightarrow \beta$ terms must be Hermitian conjugates of
each other, {\it i.e.}
$\varepsilon_{\beta\alpha}=\varepsilon_{\alpha\beta}^*$. NuSOnG is
sensitive to the $\beta=\mu$ couplings. This effective Lagrangian is
appropriate for parameterizing corrections to neutral current
processes; an analysis of corrections to charged-current processes
requires a different set of four-fermion operators.
Assuming $\varepsilon_{\alpha\beta}=0$ for $\alpha\neq\beta$,
we need only consider the terms $\varepsilon_{\mu\mu}^{fP}$
($P=V,A,L,R$). If we rewrite Eq.~(\ref{eq:geneffint}) as
\begin{eqnarray}
\mathcal{L} & = &
-\sqrt{2}G_F
\Bigl[\, \bar{\nu}\gamma_\mu P_L \nu
\,\Bigr]
\Bigl[\,
g_V^{\nu f}\, \bar{f}\gamma^\mu f
-g_A^{\nu f}\, \bar{f}\gamma^\mu\gamma_5 f
\,\Bigr] \cr
& = &
-2\sqrt{2}G_F
\Bigl[\, \bar{\nu}\gamma_\mu P_L \nu
\,\Bigr]
\Bigl[\,
g_L^{\nu f}\, \bar{f}\gamma^\mu P_L f \cr
&& ~~~~~~~~~~~~~~~~~~~~~~~~~~~~+g_R^{\nu f}\, \bar{f}\gamma^\mu P_R f
\,\Bigr] \;,
\label{L2}
\end{eqnarray}
where
\begin{eqnarray}
g_V^{\nu f} & = & 2g_L^\nu g_V^f
\;=\; \rho\left( I_3^f - 2 Q^f\sin^2\theta_W \right) \;,\cr
g_A^{\nu f} & = & 2g_L^\nu g_A^f
\;=\; \rho\left( I_3^f \right) \;,\cr
g_L^{\nu f} & = & 2g_L^\nu g_L^f
\;=\; \rho\left( I_3^f - Q^f\sin^2\theta_W \right)\;,\cr
g_R^{\nu f} & = & 2g_L^\nu g_R^f
\;=\; \rho\left( -Q^f\sin^2\theta_W \right)\;,
\end{eqnarray}
then we see that adding Eq.~(\ref{NSI}) to the SM Lagrangian will simply
shift the effective couplings:
\begin{eqnarray}
g_{V}^{\nu f} & \longrightarrow &
\tilde{g}_{V}^{\nu f} \;=\; g_{V}^{\nu f} + \varepsilon_{\mu\mu}^{fV}\;,\cr
g_{A}^{\nu f} & \longrightarrow &
\tilde{g}_{A}^{\nu f} \;=\; g_{A}^{\nu f} + \varepsilon_{\mu\mu}^{fA}\;,\cr
g_{L}^{\nu f} & \longrightarrow &
\tilde{g}_{L}^{\nu f} \;=\; g_{L}^{\nu f} + \varepsilon_{\mu\mu}^{fL}\;,\cr
g_{R}^{\nu f} & \longrightarrow &
\tilde{g}_{R}^{\nu f} \;=\; g_{R}^{\nu f} + \varepsilon_{\mu\mu}^{fR}\;.
\end{eqnarray}
Consequently, errors on the $g_{P}^{\nu f}$'s translate directly into
errors on the $\varepsilon_{\mu\mu}^{fP}$'s, $P=V,A$ or $P=L,R$.
\subsubsection{Neutrino-lepton NSI}
A useful review of present constraints on non-standard
neutrino-electron interactions can be found in
Ref.~\cite{Barranco}. As this paper states, and as we show below, an
improved measurement of neutrino-electron scattering is needed.
The world-average values of the neutrino-electron effective couplings, dominated by CHARM II, are
\begin{eqnarray}
g_V^{\nu e} & = & -0.040\pm 0.015 \;,\cr
g_A^{\nu e} & = & -0.507\pm 0.014 \;,\cr
\mathrm{Corr}(g_V^{\nu e},g_A^{\nu e}) & = & -0.05\;.
\label{CHARM2result}
\end{eqnarray}
The current $1\sigma$ bounds from CHARM II, Eq.~(\ref{CHARM2result}),
translate to $|\varepsilon_{\mu\mu}^{eP}|<0.01$ ($P=L,R$)
with a correlation of $0.07$~\cite{Bandyopadhyay:2007kx}.
At its stated precision goals, NuSOnG's $\nu_\mu e$ and ${\overline \nu}_\mu e$ measurements will significantly reduce the uncertainties on these NSI's, to
\begin{eqnarray}
|\varepsilon_{\mu\mu}^{eV}| & < & 0.0036\;,\cr
|\varepsilon_{\mu\mu}^{eA}| & < & 0.0019\;,\cr
\mathrm{Corr}(\varepsilon_{\mu\mu}^{eV},\varepsilon_{\mu\mu}^{eA}) & = & -0.57\;,
\label{EPSeVAbounds}
\end{eqnarray}
or in terms of the chiral couplings,
\begin{eqnarray}
|\varepsilon_{\mu\mu}^{eL}| & < & 0.0015\;,\cr
|\varepsilon_{\mu\mu}^{eR}| & < & 0.0025\;,\cr
\mathrm{Corr}(\varepsilon_{\mu\mu}^{eL},\varepsilon_{\mu\mu}^{eR}) & = & 0.64\;.
\label{EPSePbounds}
\end{eqnarray}
Even in the absence of a $\sigma(\bar{\nu}_\mu e)$ measurement
$\varepsilon_{\mu\mu}^{eL}$ and $\varepsilon_{\mu\mu}^{eR}$ can be
constrained from the $\nu_\mu e$ scattering data alone through a fit to
the recoil electron energy spectrum (see Eq.~(\ref{differentialcrosssection})).
We first consider the constraint on $\varepsilon_{\mu\mu}^{eL}$ and
$\varepsilon_{\mu\mu}^{eR}$ from the total cross section
$\sigma(\nu_\mu e)$. It is convenient to recast the effective
interaction slightly, as
\begin{widetext}
\begin{eqnarray}
\mathcal{L}_\mathrm{NSI}^{e} & = &
-2\sqrt{2}G_F
\Bigl[\, \bar{\nu}_\alpha \gamma_\sigma P_L \nu_\mu
\,\Bigr]
\Bigl[\,
\varepsilon_{\alpha\mu}^{eL}\, \bar{e}\gamma^\sigma P_L e
+\varepsilon_{\alpha\mu}^{eR}\, \bar{e}\gamma^\sigma P_R e
\,\Bigr] \cr
& = & +\frac{\sqrt{2}}{\Lambda^2}
\Bigl[\, \bar{\nu}_\alpha \gamma_\sigma P_L \nu_\mu
\,\Bigr]
\Bigl[\,
\cos\theta\, \bar{e}\gamma^\sigma P_L e
+\sin\theta\, \bar{e}\gamma^\sigma P_R e
\,\Bigr] \;.
\label{eq:leff}
\end{eqnarray}
\end{widetext}
The new physics is parameterized by two coefficients $\Lambda$ and
$\theta$. $\Lambda$ represents the broadly-defined new physics scale
while $\theta\in[0,2\pi]$ defines the relative coupling of left-chiral
and right-chiral electrons to the new physics. As an example, a
scenario with a purely ``left-handed'' $Z^{\prime}$ that couples to
leptons with coupling $g'$ would be described by $\Lambda \propto
M_{Z'}/g'$ and $\theta=0$ or $\theta=\pi$, depending on the relative
sign between $g'$ and the electroweak couplings. $\Lambda$ and
$\theta$ are related to the NSI parameters in Eq.~(\ref{NSI}) by
\begin{equation}
\varepsilon_{\alpha\mu}^{eL} = -\frac{\cos\theta}{2 G_F\Lambda^2}\;,~~~
\varepsilon_{\alpha\mu}^{eR} = -\frac{\sin\theta}{2 G_F\Lambda^2}\;.
\end{equation}
Note that we have relaxed the assumption of the previous section and no
longer take $\alpha=\mu$. At NuSOnG, new physics modifies
(pseudo)elastic neutrino--electron scattering. Here we use the word
``pseudo'' to refer to the fact that we cannot identify the flavor of
the final-state neutrino, which could be different from the incoming
neutrino flavor in the case of flavor changing neutral currents.
The shift in the total cross section is
\begin{eqnarray}
\frac{\delta\sigma(\nu_{\mu}e)}{\sigma(\nu_\mu e)}
& = &
\frac{ \left\{\, 2\, g_L^{\nu e}\, \varepsilon_{\mu\mu}^{eL}
+ (\varepsilon_{\mu\mu}^{eL})^2 \right\}
+ \frac{1}{3}
\left\{\, 2\,g_R^{\nu e}\, \varepsilon_{\mu\mu}^{eR}
+ (\varepsilon_{\mu\mu}^{eR})^2 \right\} }
{ (g_L^{\nu e})^2 + \frac{1}{3}(g_R^{\nu e})^2 } \cr
& \approx & -\left(\frac{516\,\mathrm{GeV}}{\Lambda}\right)^2
\cos(\theta-\phi) \cr
&& + 0.096\left(\frac{516\,\mathrm{GeV}}{\Lambda}\right)^4
(1+2\cos^2\theta) \;,
\label{nueshift2}
\end{eqnarray}
where
\begin{equation}
\tan\phi\;=\;\frac{g_R^{\nu e}}{3g_L^{\nu e}}\;\approx\;-0.28\;.
\end{equation}
When ${\cal O}(\varepsilon^2)$ terms are negligible, a $0.7\%$ measurement
of $\sigma(\nu_\mu e)$ translates into a
95\% confidence level bound of
\begin{equation}
\Lambda \;>\; (4.4\,\mathrm{TeV})\times\sqrt{|\cos(\theta-\phi)|}\;
\end{equation}
from elastic scattering.
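The numerical coefficients in Eq.~(\ref{nueshift2}) and the quoted reach can be cross-checked with a short script. This is a sketch assuming $\sin^2\theta_W=0.2312$, $G_F=1.1664\times10^{-5}~\mathrm{GeV}^{-2}$, and a 95\% bound taken as $1.96\sigma$ on a $0.7\%$ cross-section measurement (all illustrative assumptions):

```python
import math

# Cross-check of the numerical coefficients in the cross-section shift.
# Illustrative inputs: sin^2(theta_W) = 0.2312, G_F = 1.1664e-5 GeV^-2.
GF = 1.1664e-5
SIN2W = 0.2312
gL = -0.5 + SIN2W            # g_L^{nu e}
gR = SIN2W                   # g_R^{nu e}

D = gL**2 + gR**2 / 3.0      # denominator of the fractional shift
# Linear term:  -(scale/Lambda)^2 cos(theta - phi)
lin = math.sqrt(gL**2 + gR**2 / 9.0) / (GF * D)     # GeV^2
scale = math.sqrt(lin)                               # ~516 GeV
tan_phi = gR / (3.0 * gL)                            # ~ -0.28
# Quadratic term:  quad * (scale/Lambda)^4 * (1 + 2 cos^2 theta)
quad = (1.0 / (2.0 * GF))**2 / (3.0 * D * lin**2)    # ~0.096
# Reach of a 0.7% measurement at 95% CL (taken as 1.96 sigma),
# neglecting the O(eps^2) term:
lam95 = scale / math.sqrt(1.96 * 0.007)              # ~4.4 TeV

print(f"scale = {scale:.0f} GeV, tan(phi) = {tan_phi:+.3f}")
print(f"quadratic coefficient = {quad:.3f}, Lambda_95 = {lam95/1e3:.1f} TeV")
```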
The measurement of the electron recoil energy will allow us to do better.
Fig.~\ref{fig:eff_limit}(dark line) depicts the 95\% confidence level
sensitivity of NuSOnG to the physics described by Eq.~(\ref{eq:leff})
when $\nu_\alpha=\nu_{\mu}$, obtained after fitting the recoil
electron kinetic energy distribution.
Fig.~\ref{fig:eff_limit}(closed contour) represents how well NuSOnG
should be able to measure $\Lambda$ and $\theta$, at the 95\%
level. Weaker bounds from pseudoelastic scattering are also shown.
We have not included ``data'' from
$\bar{\nu}_{\mu}$--electron scattering. While there will be fewer of these events,
they should qualitatively improve our
ability to pin down the new physics parameters, given the distinct
dependence on $g^{\nu e}_V$ and $g^{\nu e}_A$ (see Sec.~\ref{nuesub}).
\begin{figure}
\centering
\scalebox{0.35}{\includegraphics[clip=true]{Exclusion95v2.pdf}}
\vspace{1mm}
\caption{
(DARK LINES) 95\% confidence level sensitivity of NuSOnG to new heavy
physics described by Eq.~(\ref{eq:leff}) when $\nu_\alpha=\nu_{\mu}$
(higher curve) and $\nu_\alpha\neq\nu_{\mu}$ (lower curve). (CLOSED
CONTOURS) NuSOnG measurement of $\Lambda$ and $\theta$, at the 95\%
level, assuming $\nu_\alpha=\nu_{\mu}$, $\Lambda=3.5$~TeV and $
\theta=2\pi/3$ (higher, solid contour) and $\nu_\alpha\neq\nu_{\mu}$, $
\Lambda=1$~TeV and $\theta=4\pi/3$ (lower, dashed contour). Note that
in the pseudoelastic scattering case ($\nu_\alpha\neq\nu_{\mu}$) $
\theta$ and $\pi+\theta$ are physically indistinguishable.
}
\label{fig:eff_limit}
\end{figure}
Eq.~(\ref{eq:leff}) does not include all effective dimension-six
operators that contribute to neutrino--electron (pseudo) elastic
scattering. All neglected terms either do not contribute at NuSOnG
or are assumed to be suppressed with respect to
Eq.~(\ref{eq:leff}). For example, terms proportional to a right-handed
neutrino current $\bar{\nu}_R\gamma_{\sigma}\nu_R$ lead to negligibly
small effects since neutrino masses are negligibly small and we are
dealing with neutrino beams produced by pion and muon decay ({\it
i.e.}, for all practical purposes, we have a purely left-handed muon
neutrino beam and a purely right-handed muon antineutrino
beam). Chirality violating effective operators
(e.g. $(\bar{\nu}_R\nu_L)(\bar{e}_Le_R)$), on the other hand, are
expected to be suppressed with respect to Eq.~(\ref{eq:leff}) by terms
proportional to neutrino masses and the electron mass (measured in
units of $\Lambda$). The reason is that, in the limit of massless
neutrinos or a massless electron, chiral symmetry is restored while
such operators explicitly violate it. For the same reason,
dimension-five magnetic moment-type operators
($\bar{\nu}\sigma_{\rho\sigma}\nu F^{\rho\sigma}$) have also been
neglected.
We note also that Eq.~(\ref{eq:leff}) violates
$SU(2)_L$ unless one also includes similar terms where
$\nu_L\leftrightarrow\ell_L$ ($\ell=e,\mu,\tau$).
In this case, certain
flavor combinations would be severely constrained by
electron--electron scattering and rare muon and tau decays. One way
around such constraints is to postulate that the operators in
Eq.~(\ref{eq:leff}) are dimension-eight operators proportional to
$\bar{L}H^*\gamma_{\sigma}LH$, where $L$ is the left-chiral lepton
doublet and $H$ is the Higgs scalar doublet. In this case,
$1/\Lambda^2$ should be replaced by $v^2/\Lambda^4$, where $v=246$~GeV
is the scale of electroweak symmetry breaking.
Finally, another concern is whether modifications to the charged-current neutrino--electron
(pseudo)quasi-elastic scattering ((pseudo)IMD, $\nu_{\mu}e\to\nu_{\alpha}\mu$) can render the translation of NuSOnG data into constraints or measurements of $\theta$ and $\Lambda$ less straightforward. This turns out not to be the case, since new physics contributions to $\nu_{\mu}e\to\nu_{\alpha}\mu$ are already very well constrained by precision studies of muon decay. Hence, given the provisos of the two previous paragraphs, Eq.~(\ref{eq:leff}) is expected to capture all ``heavy'' new physics effects in (pseudo)elastic neutrino--electron scattering.
\subsubsection{Neutrino-quark NSI}
We next consider the $f=u,d$ case. The change in the parameters
$g_L^2$ and $g_R^2$ (see Eqs.~(\ref{eq:gl},\ref{eq:gr})) due to the NSI's is
\begin{eqnarray}
\Delta g_L^2 & = & 2 g_L^{\nu u}\varepsilon_{\mu\mu}^{uL}
+2 g_L^{\nu d}\varepsilon_{\mu\mu}^{dL} \cr
&\;\approx\;& +0.69\,\varepsilon_{\mu\mu}^{uL} -0.85\,\varepsilon_{\mu\mu}^{dL}
\;,\cr
\Delta g_R^2 & = &
2 g_R^{\nu u}\varepsilon_{\mu\mu}^{uR}
+2 g_R^{\nu d}\varepsilon_{\mu\mu}^{dR} \cr
&\;\approx\;& -0.31\,\varepsilon_{\mu\mu}^{uR} +0.15\,\varepsilon_{\mu\mu}^{dR}
\;,
\end{eqnarray}
so only these linear combinations are constrained.
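The numerical coefficients above follow from the tree-level couplings defined earlier; a minimal sketch, assuming $\rho=1$ and $\sin^2\theta_W=0.2312$ (illustrative inputs):

```python
# Coefficients multiplying the quark NSI parameters in Delta g_L^2 and
# Delta g_R^2, from the tree-level couplings defined earlier.
# Illustrative inputs: rho = 1, sin^2(theta_W) = 0.2312.
SIN2W = 0.2312

def g_left(i3, q):           # g_L^{nu f} = I3^f - Q^f sin^2(theta_W)
    return i3 - q * SIN2W

def g_right(q):              # g_R^{nu f} = -Q^f sin^2(theta_W)
    return -q * SIN2W

cuL = 2 * g_left(+0.5, +2.0 / 3.0)    # ~ +0.69
cdL = 2 * g_left(-0.5, -1.0 / 3.0)    # ~ -0.85
cuR = 2 * g_right(+2.0 / 3.0)         # ~ -0.31
cdR = 2 * g_right(-1.0 / 3.0)         # ~ +0.15

print(f"{cuL:+.2f} {cdL:+.2f} {cuR:+.2f} {cdR:+.2f}")
```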
The bounds from NuTeV (rescaled to $1\sigma$ bounds from Ref.~\cite{Bandyopadhyay:2007kx}) are:
\begin{eqnarray}
\varepsilon_{\mu\mu}^{uL} & = & -0.0053\pm 0.0020\;,\cr
\varepsilon_{\mu\mu}^{dL} & = & +0.0043\pm 0.0016\;,\cr
|\varepsilon_{\mu\mu}^{uR}| & < & 0.0035\;,\cr
|\varepsilon_{\mu\mu}^{dR}| & < & 0.0073\;.
\end{eqnarray}
These bounds are obtained by allowing only one of the parameters to be non-zero at a time.
If NuSOnG reduces the errors on the NuTeV measurement of $g_L^2$ and $g_R^2$ by a factor of 2, the $1\sigma$ bounds on the NSI parameters are similarly reduced:
\begin{eqnarray}
|\varepsilon_{\mu\mu}^{uL}| & < & 0.001\;,\cr
|\varepsilon_{\mu\mu}^{dL}| & < & 0.0008\;, \cr
|\varepsilon_{\mu\mu}^{uR}| & < & 0.002\;, \cr
|\varepsilon_{\mu\mu}^{dR}| & < & 0.004\;.
\end{eqnarray}
In terms of a new physics scale defined as $\Lambda = 1/\sqrt{2\,G_F\,\varepsilon}$, these constraints range from
$\Lambda \;>\; 3\,\mathrm{TeV}$ to $\Lambda \;>\; 7\,\mathrm{TeV}.$
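The quoted range follows directly from this definition of $\Lambda$; a minimal sketch of the conversion (using $G_F=1.1664\times10^{-5}~\mathrm{GeV}^{-2}$):

```python
import math

GF = 1.1664e-5   # Fermi constant in GeV^-2

def scale_tev(eps):
    """New-physics scale Lambda = 1/sqrt(2 G_F eps), returned in TeV."""
    return 1.0 / math.sqrt(2.0 * GF * eps) / 1.0e3

# Projected 1-sigma bounds quoted above
for name, eps in [("uL", 0.001), ("dL", 0.0008), ("uR", 0.002), ("dR", 0.004)]:
    print(f"|eps_{name}| < {eps}  ->  Lambda > {scale_tev(eps):.1f} TeV")
```

The four projected bounds map to scales between roughly 3.3 and 7.3 TeV.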
We note that neutrino-quark scattering will also be sensitive to NSIs which
correct CC interactions. These interactions are not included in Eq.~(\ref{NSI}). If they
are important, as is the case in some of the scenarios we treat later, a new
analysis is necessary and the bounds above cannot be used. This is to be contrasted with the neutrino--lepton case, discussed in the previous subsection.
\subsection{Neutrissimos, Neutrino Mixing and Gauge Couplings}
\begin{figure}[ht]
\vspace{-1.5in}
\includegraphics[scale=0.4]{epsilon_fit.pdf}
\caption{Potential constraint on $\epsilon_e$ and $\epsilon_\mu$ from NuSOnG (see Eq.~(\ref{epsilon_ell})).
This is a two-dimensional projection of a 4 parameter fit with $S$, $T$, $\epsilon_e$ and $\epsilon_\mu$.
The green ellipse is the 90\% CL contour of a fit to all the charged-current particle decay data + LEP/SLD.}
\label{epsilon_fit}
\end{figure}
In those classes of models which include moderately heavy electroweak
gauge singlet (``neutrissimo'') states, with masses above 45 GeV, the
mixing of the $SU(2)_L$-active neutrinos and the sterile states may
lead to a suppression of the neutrino-gauge couplings. The resulting
pattern of modified interactions is distinct from those of the
previous section since they will also induce correlated shifts to the
charged-current coupling. For example, Ref.~\cite{Loinaz:2003gc}
presents models with one sterile state per active neutrino flavor and
intergenerational mixing among neutrinos. In these models the flavor
eigenstates are linear combinations of mass eigenstates, and those
mass eigenstates too heavy to be produced in final states result in
an effective suppression of the neutrino-gauge boson coupling. This
suppression may be flavor-dependent depending on the structure of the
neutrino mixing matrix. If the mass matrix contains Majorana terms,
such models permit both lepton flavor violation and lepton
universality violation.
Neutrinos couple to the $W$ and the $Z$ through interactions described by:
\begin{eqnarray}
\mathcal{L} &\;=\; &
\frac{g}{\sqrt{2}}W^-_\mu\, \bar{\ell}_L \gamma^\mu \nu_{\ell L}
+ \frac{g}{\sqrt{2}}W^+_\mu\, \bar{\nu}_{\ell L} \gamma^\mu \ell_L \cr
& & + \frac{e}{2sc}Z_\mu\, \bar{\nu}_{\ell L}\gamma^\mu \nu_{\ell L} \;,
\end{eqnarray}
where $\ell = e,\mu,\tau$.
If the neutrinos mix with gauge singlet states so that the $SU(2)_L$ interaction eigenstate is a superposition of mass eigenstates $\nu_\mathrm{\ell,light}$ and $\nu_\mathrm{\ell,heavy}$
\begin{equation}
\nu_{\ell L} \;=\;
\nu_\mathrm{\ell,light}\cos\theta_\ell + \nu_\mathrm{\ell,heavy}\sin\theta_\ell\;,
\end{equation}
then the interaction of the light states is given by
\begin{eqnarray}
\mathcal{L} &&\;=\; \cr
&&\left(\frac{g}{\sqrt{2}}W^-_\mu\, \bar{\ell}_L \gamma^\mu \nu_\mathrm{\ell,light}
+ \frac{g}{\sqrt{2}}W^+_\mu\, \bar{\nu}_\mathrm{\ell,light} \gamma^\mu \ell_L
\right)\cos\theta_\ell \cr
&&+ \left(\frac{e}{2sc}Z_\mu\, \bar{\nu}_\mathrm{\ell,light}\gamma^\mu \nu_\mathrm{\ell,light}\right)\cos^2\theta_\ell\;.
\end{eqnarray}
Defining
\begin{equation}
\epsilon_\ell \;\equiv\; 1-\cos^2\theta_\ell\;,
\end{equation}
the shift in the Lagrangian due to this mixing is
\begin{eqnarray}
\delta\mathcal{L} &\;=\;&
-\left(\frac{g}{\sqrt{2}}W^-_\mu\, \bar{\ell}_L \gamma^\mu \nu_{\ell}
+ \frac{g}{\sqrt{2}}W^+_\mu\, \bar{\nu}_{\ell} \gamma^\mu \ell_L
\right)\frac{\epsilon_\ell}{2} \nonumber \\
&-& \left(\frac{e}{2sc}Z_\mu\, \bar{\nu}_\mathrm{\ell}\gamma^\mu \nu_\mathrm{\ell}\right)\epsilon_\ell\;,
\label{epsilon_ell}
\end{eqnarray}
where we have dropped the subscript ``light" from the neutrino fields.
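The factors of $\epsilon_\ell/2$ and $\epsilon_\ell$ in Eq.~(\ref{epsilon_ell}) follow from expanding the mixing to first order:

```latex
\cos\theta_\ell \;=\; \sqrt{1-\epsilon_\ell}
\;\simeq\; 1-\frac{\epsilon_\ell}{2}+\mathcal{O}(\epsilon_\ell^2)\;,
\qquad
\cos^2\theta_\ell \;=\; 1-\epsilon_\ell\;,
```

so each charged-current vertex (one factor of $\cos\theta_\ell$) is suppressed by $\epsilon_\ell/2$, while the neutral-current vertex ($\cos^2\theta_\ell$) is suppressed by $\epsilon_\ell$.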
Lepton universality data
from $W$ decays and from charged-current
$\pi$, $\tau$ and $K$ decays \cite{Loinaz:2004qc} constrain the
differences $\epsilon_{\ell_i}-\epsilon_{\ell_j}$.
LEP/SLD and other precision electroweak
data impose additional constraints on $\epsilon_\ell$ in combination
with the oblique parameters, as will NuSOnG.
A fit to all the charged-current decay data and LEP/SLD with $S$, $T$,
$\epsilon_e$ and $\epsilon_\mu$ yields
\begin{eqnarray}
S & = & -0.05 \pm 0.11 \;,\cr
T & = & -0.44 \pm 0.28 \;,\cr
\epsilon_e & = & 0.0049 \pm 0.0022 \;,\cr
\epsilon_\mu & = & 0.0023 \pm 0.0021 \;.
\end{eqnarray}
If we now include hypothetical data from NuSOnG, assuming NuSOnG achieves its precision goals and measures central values consistent with the Standard Model, the constraints on $\epsilon_\mu$ and $\epsilon_e$ are substantially improved. In this case, the fit yields
\begin{eqnarray}
S & = & \phantom{-}0.00 \pm 0.10 \;,\cr
T & = & -0.11 \pm 0.12 \;,\cr
\epsilon_e & = & 0.0030 \pm 0.0017 \;,\cr
\epsilon_\mu & = & 0.0001 \pm 0.0012 \;.
\end{eqnarray}
Fig.~\ref{epsilon_fit} shows the projection of the four-dimensional fit
onto the $\epsilon_e$--$\epsilon_\mu$ plane; the likelihood contours are
2D projections.
Though not obvious from the figure, it is NuSOnG's improved measurement of $g_L^2$ which contributes the most to strengthening the bounds on the $\epsilon_\ell$.
In models of this class lepton
flavor violating decays such as $\mu \rightarrow e \gamma$ impose additional constraints on
products $\epsilon_{\ell_i} \epsilon_{\ell_j}$.
For example, the strong constraint from $\mu \rightarrow e \gamma$ implies $\epsilon_{e}\epsilon_{\mu} \approx 0$. This type of model has been proposed as a solution to the NuTeV anomaly. If we take only one of $\epsilon_e$ or $\epsilon_\mu$ to be nonzero (to respect the constraint from $\mu \rightarrow e \gamma$), the NuTeV value of $g_L^2$ is accommodated in the fit by best-fit values of $\epsilon$ that are large and positive and best-fit values of $T$ that are large and negative (consistent with a heavy Higgs).
\subsection{Right-handed coupling of the neutrino to the $Z$}
In the Standard Model, neutrino
couplings to the $W$- and $Z$-bosons are purely left-handed. That the
neutrino coupling to the $W$-boson and an electron is purely
left-handed is experimentally well established (evidence
includes precision measurements of pion and muon decay, nuclear
processes, etc.). By contrast, the nature of the neutrino coupling to
the $Z$ boson is, experimentally, far from being precisely established
\cite{Carena:2003aj}. The possibility of a right-handed neutrino--$Z$-boson coupling is not included in the previous
discussions, and is pursued separately in this subsection.
\begin{figure*}
\centering
\scalebox{0.8}{\includegraphics[clip=true]{NuSOnG_0035.pdf}}
\vspace{1mm}
\caption{Precision with which the right-handed neutrino--$Z$-boson
coupling can be determined by combining NuSOnG measurements of
$g_L^{\nu}$ with the indirect determination of the invisible
$Z$-boson width at LEP if (left) the $\nu+e$ scattering
measurement is consistent with the Standard Model prediction
$g_L^{\nu}=0.5$ and (right) the $\nu+e$
scattering measurement is significantly lower, $g_L^{\nu}=0.485$,
but still in agreement with the CHARM II measurement (at the one
sigma level). Contours (black, red) are one and two sigma,
respectively. The star indicates the Standard Model
expectation.}
\label{glgr}
\end{figure*}
The best measurement of the neutrino coupling to the $Z$-boson is
provided by indirect measurements of the invisible $Z$-boson width at
LEP. In units where the Standard Model neutrino--$Z$-boson couplings are
$g_L^{\nu}=0.5$, $g_R^{\nu}\equiv 0$, the LEP measurement \cite{invZ}
translates
into $(g^{\nu}_L)^2+(g^{\nu}_R)^2=0.2487\pm 0.0010$. Note that
this result places no meaningful bound on $g_R^{\nu}$.
Precise, model-independent information on $g^{\nu}_L$ can be obtained
by combining $\nu_{\mu}+e$ scattering data from CHARM II and LEP and
SLD data. Assuming model-independent couplings of the fermions to the
$Z$-boson, $\nu_{\mu} +e$ scattering measures $g_L^{\nu}=\sqrt{\rho}/2$, while
LEP and SLD measure the left and right-handed couplings of the
electron to the $Z$. The CHARM II result translates into
$|g_L^{\nu}|=0.502\pm 0.017$ \cite{Carena:2003aj}, assuming that the
charged-current weak interactions produce only left-handed
neutrinos. In spite of the good precision of the CHARM II result
(around 3.5\%), a combination of all available data allows
$|g_R^{\nu}/g_L^{\nu}|\sim 0.4$ at the two $\sigma$ confidence level
\cite{Carena:2003aj}.
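The size of this allowed ratio can be estimated with a deliberately naive combination, ignoring correlations, of the LEP circle constraint and the CHARM II $|g_L^{\nu}|$ value (a sketch, not a substitute for the full fit of \cite{Carena:2003aj}):

```python
import math

# Deliberately naive 2-sigma combination (correlations ignored) of the LEP
# invisible-width constraint (g_L)^2 + (g_R)^2 = 0.2487 +/- 0.0010 with the
# CHARM II value |g_L| = 0.502 +/- 0.017.
gl_min = 0.502 - 2 * 0.017          # smallest |g_L| allowed at 2 sigma
circle_max = 0.2487 + 2 * 0.0010    # largest (g_L)^2 + (g_R)^2 at 2 sigma

gr2_max = circle_max - gl_min**2    # largest allowed (g_R)^2
ratio = math.sqrt(gr2_max) / gl_min # allowed |g_R/g_L|, roughly 0.4

print(f"|g_R/g_L| allowed up to ~{ratio:.2f}")
```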
Significant improvement in our understanding of $g_R^{\nu}$ can only
be obtained with more precise measurements of $\nu+e$ scattering, or
with the advent of a new high intensity $e^+e^-$ collider, such as the
ILC. By combining ILC running at the $Z$-boson pole mass and at
$\sqrt{s}=170$~GeV, $|g_R^{\nu}/g_L^{\nu}|\lesssim 0.3$ could be
constrained at the two $\sigma$ level after analyzing
$e^+e^-\to\gamma+$missing energy events \cite{Carena:2003aj}.
Assuming that $g^{\nu}_L$ can be measured with 0.7\% uncertainty,
Fig.~\ref{glgr} depicts an estimate of how precisely $g_R^{\nu}$ could be constrained
once NuSOnG ``data'' is combined with LEP data. Fig.~\ref{glgr}(left) considers the hypothesis that the Standard Model expectations are correct. In this case, NuSOnG data would reveal that $g_R/g_L$ is less than 0.2 at the two sigma level. On the other hand, if $g_R/g_L=0.25$ -- in good agreement with the current CHARM II and LEP data -- NuSOnG data should reveal that $g_R\neq 0$ at more than the two sigma level, as depicted in Fig.~\ref{glgr}(right).
The capability of performing this measurement in other experiments has
been examined. The NuSOnG measurement compares favorably with, and
complements, the ILC capabilities estimated in \cite{Carena:2003aj}.
Ref.~\cite{de Gouvea:2006cb} studied measurements using other neutrino
beams, including reactor fluxes and beta beams. NuSOnG's reach is
equivalent to or exceeds the most optimistic estimates for these
various neutrino sources.
\section{Specific Theoretical Models and Experimental Scenarios \label{Models}}
If NuSOnG's measurements agree
with the SM within errors, we will place stringent constraints on new physics
models; if they disagree, it will be a signal for new physics. In the
latter case the availability of both DIS and ES channels will improve
our ability to discriminate among new physics candidates. NuSOnG will
also provide an important complement to the LHC. The LHC will provide
detailed information about the spectrum of new states directly
produced. However, measurements of the widths of these new states
will provide only limited information about their couplings. NuSOnG
will probe in multiple ways the couplings of these new states to
neutrinos and to other SM particles.
In this section we provide several case studies of NuSOnG
sensitivity to specific models of new physics. These include several
typical $Z^\prime$ models, leptoquark models, models of R-parity
violating supersymmetry, and models with extended Higgs sectors. We
examine how these will affect $\nu_\mu e$ ES and $\nu_\mu N$ DIS at
tree-level. Our list is far from exhaustive but serves to illustrate
the possibilities. We summarize our contributions in Table~\ref{Tab:modeloverview}.
\begin{center}
\begin{table*}[tbp]
\begin{ruledtabular}
\begin{tabular}{|c|l|}
Model & Contribution of NuSOnG Measurement \\ \hline \hline
Typical $Z^\prime$ Choices: $(B-xL)$,$(q-xu)$,$(d+xu)$ & At the level of, and complementary to, LEP II bounds.\\ \hline
Extended Higgs Sector & At the level of, and complementary to $\tau$ decay bounds. \\ \hline
R-parity Violating SUSY & Sensitivity to masses $\sim 2$ TeV at 95\% CL. \\
& Improves bounds on slepton couplings by $\sim 30\%$ and \\
& on some squark couplings by factors of 3-5. \\ \hline
Intergenerational Leptoquarks with non-degenerate masses & Accesses unique combinations of couplings. \\
& Also accesses coupling combinations explored by $\pi$ decay bounds, \\
& at a similar level.\\ \hline
\end{tabular}
\end{ruledtabular}
\caption{Summary of NuSOnG's contribution in the case of specific models.}
\label{Tab:modeloverview}
\end{table*}
\end{center}
The opposite way to approach this problem is to ask: in the face
of evidence for new Terascale Physics, how can we differentiate
between specific models? NuSOnG has the potential to {\em discover} new
physics through indirect probes, in the event that one or more of its
measurements definitively contradicts SM predictions. We discuss
several possible patterns of deviation of model-independent parameters
from SM predictions and some interpretations in terms of particular
models. This is presented in the context of various expectations
for LHC to illustrate how NuSOnG enhances the overall physics program.
Since the NuTeV reanalysis is ongoing, and
since the ES constraints from CHARM-II are weak, it is prudent that we
commit to no strong assumptions about the central value of the NuSOnG
measurements but instead consider all reasonable outcomes.
\subsection{Sensitivity in the Case of Specific Theoretical Models}
\begin{figure}
\vspace{-0.75in}
\scalebox{0.45}{\includegraphics{examples.pdf}}
\vspace{-2.in}
\caption{\label{massreach} Some examples of NuSOnG's 2$\sigma$
sensitivity to new high-mass particles commonly considered in the
literature. For explanation of these ranges, and further examples,
see text.}
\end{figure}
We next consider the constraints imposed by the proposed NuSOnG
measurements on explicit models of BSM
physics. An explicit model provides relations among effective
operators, which give stronger and sometimes better-motivated
constraints on new physics than are obtained by considering effective
operators one by one, though at the expense of the generality of the
conclusions. Many models can be analyzed using the
effective Lagrangian of Eq.~(\ref{NSI}), but others introduce new
operators and must be treated individually. The list of models
considered is not exhaustive, but rather illustrates the new physics
reach of NuSOnG.
\subsubsection{$Z^\prime$ models}
Massive $Z'$ fields are one of the simplest signatures of
physics beyond the Standard Model. (For a recent review, see
\cite{Langacker}.) $Z'$ vector bosons are generic in grand unified
theories and prevalent in theories that address the electroweak gauge
hierarchy. They may stabilize the weak scale directly by canceling
off quadratic divergences of Standard Model fields, as in theories of
extra-dimensions or Little Higgs theories. In supersymmetric models,
$Z'$ fields are not needed to cancel quadratic divergences, but
are still often tied to the scale of soft-breaking (and hence the
electroweak scale). In these last two cases, the $Z'$ typically has a
TeV-scale mass, and is an attractive target for NuSOnG.
If the $Z'$ mass is sufficiently large, its exchange is well-described
at NuSOnG energies by the effective operator of
Eq.~(\ref{eq:leff}). In this case, the new physics scale is related to
the $Z'$ model by $\Lambda \sim M_{Z'}/g_{Z'}$, the ratio of the $Z'$
mass to its gauge-coupling. Further model-dependence shows up in the
ratio of fermion charges under the ${\rm U(1)'}$ symmetry associated
with the $Z'$, and the presence of any $Z$--$Z'$ mixing. With reasonable
theoretical assumptions (the absence of new sources of large
flavor-changing neutral currents, the consistency of Yukawa
interactions, and anomaly cancellation with a minimal number of exotic
fermions), the number of interesting models can be reduced
substantially, to four discrete families of generic $U(1)'$ models,
each containing one free parameter, $x$ \cite{CDDT}. In
Table~\ref{Tab:zprime}, we indicate the charges of $\nu_{\mu L}, e_L,
e_R$ under these families of $U(1)'$ symmetries.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{|c|c|c|c|c|}
&$U(1)_{B-xL}$ &$U(1)_{q+xu}$& $U(1)_{10+x\bar{5}}$& $U(1)_{d-xu}$ \\ \hline
$\nu_{\mu L}, e_L$ & $-x$ & $-1$ & $x/3$ & $(-1+x)/3$ \\
$e_R$ & $-x$ & $-(2+x)/3$ & $-1/3$ & $x/3$ \\
\end{tabular}
\end{ruledtabular}
\caption{Charges of $\nu_{\mu L}, e_L, e_R$ under 4 phenomenologically viable classes of $U(1)'$ symmetries. Each value of $x$ corresponds to a different $U(1)'$ symmetry that is considered.}
\label{Tab:zprime}
\end{table}
Using the sensitivity of NuSOnG
to the scale $\Lambda$ in $\nu_{\mu}$ scattering shown in
Figure~\ref{fig:eff_limit}, we can bound the combination $M_{Z'}/g_{Z'}$ for the four families of $Z'$ models as a function of $x$. It is important to note that these bounds are competitive with the LEP-II bounds found in \cite{CDDT}, which are based on $Z'$ decays to all fermions, not just electrons and neutrinos.
\begin{figure}
\centering
\includegraphics[width=3.25in]{zprime.pdf}
\vspace{1mm}
\caption{95\% confidence level sensitivity of NuSOnG to the indicated
$Z'$ models. The charges of the electrons and neutrinos under the
underlying $U(1)'$ gauge symmetry are described in
Table~\ref{Tab:zprime}. The bounds are plotted as functions of the
parameter $x$, which scans over allowed fermion charges for each
family of $U(1)'$ symmetries, versus the ratio $M_{Z'}/g_{Z'}$. }
\label{fig:zprime}
\end{figure}
There are $Z^\prime$ models which distinguish among generations and can
affect neutrino scattering. These will be probed by NuSOnG at the TeV
scale~\cite{L1minusL2,LmuMinusLtau,Bminus3Ltau,Bminus3Le,Bminus3over2LtauplusLmu}.
Among these, $B-3L_\mu$ was suggested as a possible explanation for the
NuTeV anomaly~\cite{Ma:2001tb, Davidson:2001ji}, however, we show here
that this is not the case. Nevertheless, it remains an interesting
example to consider.
In the gauged $B-3L_\mu$ model, the $Z'$ modifies $\nu_\mu N$ DIS.
The exchange of the $Z'$ between the $\nu_\mu$ and the quarks
induces operators with coefficients
\begin{eqnarray}
\varepsilon_{\mu\mu}^{uL}\;&=&\;\varepsilon_{\mu\mu}^{uR}\;=\;
\varepsilon_{\mu\mu}^{dL}\;=\;\varepsilon_{\mu\mu}^{dR} \cr
&\;=\; &
-\frac{1}{2\sqrt{2}G_F}\frac{g_{Z'}^2}{M_{Z'}^2}
\;\equiv\;\varepsilon_{B-3L_\mu}\;.
\end{eqnarray}
These coefficients shift $g_L^2$ and $g_R^2$ by
\begin{equation}
\Delta g_L^2=\Delta g_R^2=-\frac{2 s^2}{3}\,\varepsilon_{B-3L_\mu}.
\end{equation}
Note that since $\varepsilon_{B-3L_\mu}$ is negative,
both $g_L^2$ and $g_R^2$ are shifted positive.
This, in fact, excludes gauged $B-3L_\mu$ as an explanation of the
NuTeV anomaly.
With this said, a NuSOnG measurement of $g_L^2$ and $g_R^2$ that
improves on NuTeV errors by a factor of 2 yields a $2\sigma$ bound
\begin{equation}
\frac{M_{Z'}}{g_{Z'}} \;>\; 2.2\,\mathrm{TeV}\;.
\end{equation}
This bound is comparable to, and complementary with, the existing bound
from D0, and thus interesting to consider.
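As a rough numeric cross-check of the two expressions above, a short script can evaluate the shift at the quoted bound. This is a sketch only; the values of $G_F$ and $\sin^2\theta_W$ below are standard inputs assumed here, not taken from the text:

```python
import math

G_F = 1.1664e-5   # Fermi constant, GeV^-2 (standard value, assumed)
s2 = 0.231        # sin^2(theta_W), assumed weak mixing angle input

def eps_B3Lmu(M_over_g_TeV):
    """Coefficient eps_{B-3L_mu} = -(1/(2*sqrt(2)*G_F)) * g_{Z'}^2 / M_{Z'}^2."""
    M_over_g = M_over_g_TeV * 1e3  # convert TeV -> GeV
    return -(1.0 / (2.0 * math.sqrt(2.0) * G_F)) / M_over_g**2

def delta_gL2(eps):
    """Shift Delta g_L^2 = Delta g_R^2 = -(2 s^2 / 3) * eps."""
    return -(2.0 * s2 / 3.0) * eps

eps = eps_B3Lmu(2.2)   # at the quoted 2-sigma bound M_Z'/g_Z' = 2.2 TeV
shift = delta_gL2(eps)
# eps is negative, so the shift in g_L^2 and g_R^2 is positive,
# which is the wrong sign to explain the NuTeV anomaly, as stated above.
```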
\subsubsection{Models with extended Higgs sectors}
In the Zee \cite{Zee:1980ai} and Babu-Zee \cite{Babu:1988ki} models,
an isosinglet scalar $h^+$ with hypercharge $Y=+1$ is introduced,
which couples to left-handed lepton doublets $\ell$ as
\begin{equation} \label{L_Babu}
\mathcal{L}_h
\;=\; \lambda_{ab}
\left(\,\overline{\ell^c_{aL}}\,i\sigma_2\,\ell_{bL}^{\phantom{\mathrm{T}}}\,\right) h^+ + h.c.\;,
\end{equation}
where $(ab)$ are flavor indices: $a,b=e,\mu,\tau$.
The exchange of the charged Higgs induces the effective operator of Eq.~(\ref{NSI}) with coefficients
\begin{equation}
\varepsilon_{\mu\mu}^{eL}\;=\;-\frac{1}{\sqrt{2}G_F}\frac{|\lambda_{e\mu}|^2}{M_h^2}\;,\qquad
\varepsilon_{\mu\mu}^{eR}\;=\; 0\;.
\end{equation}
From Eq.~(\ref{EPSePbounds}), the 95\% bound is
\begin{equation}
\frac{M_h}{|\lambda_{e\mu}|} \;>\; 5.2\,\mathrm{TeV}\;,
\end{equation}
which is competitive with the current bound of $5.4$~TeV from $\tau$-decay.
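The translation between a bound on $\varepsilon_{\mu\mu}^{eL}$ and the quoted mass-over-coupling limit can be sketched as follows. The numerical $\varepsilon$ limit of roughly $2.2\times 10^{-3}$ used here is inferred from the quoted 5.2~TeV bound for illustration, not read off Eq.~(\ref{EPSePbounds}) directly:

```python
import math

G_F = 1.1664e-5  # Fermi constant, GeV^-2 (assumed standard input)

def mass_over_coupling_TeV(eps_limit):
    """Invert |eps^{eL}| = (1/(sqrt(2)*G_F)) * |lambda|^2 / M_h^2
    to obtain the bound M_h/|lambda| in TeV."""
    M_over_lam = 1.0 / math.sqrt(math.sqrt(2.0) * G_F * eps_limit)
    return M_over_lam / 1e3  # GeV -> TeV

# assumed illustrative epsilon limit, chosen to reproduce ~5.2 TeV
bound = mass_over_coupling_TeV(2.24e-3)
```

A tighter $\varepsilon$ limit strengthens the mass-over-coupling bound only as its inverse square root, which is why large gains in scale reach require large gains in precision.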
\subsubsection{R-parity violating SUSY}
Assuming the particle content of the
Minimal Supersymmetric Standard Model (MSSM),
the most general R-parity violating superpotential (involving only
tri-linear couplings) has the form
\cite{Rparity_notations}
\begin{equation}
W_{\not R}
=\frac{1}{2}\lambda_{ijk}\hat{L}_i \hat{L}_j \hat{E}_k
+\lambda^{\prime}_{ijk}\hat{L}_i \hat{Q}_j \hat{D}_k
+\frac{1}{2}\lambda^{\prime\prime}_{ijk}\hat{U}_i \hat{D}_j \hat{D}_k\;,
\label{RPVlagrangian}
\end{equation}
where $\hat{L}_i$, $\hat{E}_i$, $\hat{Q}_i$, $\hat{D}_i$, and $\hat{U}_i$ are
the left-handed MSSM superfields defined in the usual fashion,
and the subscripts $i,j,k=1,2,3$ are the generation indices.
$SU(2)_L$ gauge invariance requires
the couplings $\lambda_{ijk}$ to be antisymmetric in the first two indices:
\begin{equation}
\lambda_{ijk} \;=\; -\lambda_{jik}\;.
\end{equation}
The purely baryonic operator $\hat{U}_i\hat{D}_j\hat{D}_k$ is irrelevant to neutrino scattering, so only the
9 $\lambda_{ijk}$ and 27 $\lambda'_{ijk}$ couplings are of interest.
From the $\hat{L}\hat{L}\hat{E}$ part of Eq.~(\ref{RPVlagrangian}),
slepton exchange will contribute to $\nu_\mu e$ ES at NuSOnG.
These induce four-fermion operators appearing in Eq.~(\ref{NSI}) with corresponding coefficients
\begin{eqnarray}
\varepsilon_{\mu\mu}^{eL} & \;=\; &
-\frac{1}{4\sqrt{2}G_F}\sum_{k=1}^3 \frac{|\lambda_{21k}|^2}{M_{\tilde{e}_{kR}}^2}\;,\qquad \cr
\varepsilon_{\mu\mu}^{eR} & \;=\; &
+\frac{1}{4\sqrt{2}G_F}\sum_{j=1,3} \frac{|\lambda_{2j1}|^2}{M_{\tilde{e}_{jL}}^2}\;.
\end{eqnarray}
If we place bounds on the sleptons one at a time, then
Eq.~(\ref{EPSePbounds}) translates to the $2 \sigma$ bounds shown in
Table~\ref{Tab:sleptons}, presented for slepton masses of 100~GeV. To rescale
to different masses, multiply by $\left(\frac{M}{100\,\mathrm{GeV}}\right)$.
These can be compared to the current bounds of Ref.~\cite{Barbier:2004ez};
NuSOnG improves all of them.
\begin{table}
\begin{tabular}{|c||c|l|}
\hline
\ Coupling\ \ &\ 95\% NuSOnG bound\ \ & current 95\% bound\ \ \\
\hline\hline
$|\lambda_{121}|$ & $0.03$ & \ $0.05$ ($V_{ud}$) \\
$|\lambda_{122}|$ & $0.04$ & \ $0.05$ ($V_{ud}$) \\
$|\lambda_{123}|$ & $0.04$ & \ $0.05$ ($V_{ud}$) \\
$|\lambda_{231}|$ & $0.05$ & \ $0.07$ ($\tau$ decay) \\
\hline\hline
$|\lambda'_{211}|$ & $0.05$ & \ $0.06$ ($\pi$ decay) \\
$|\lambda'_{212}|$ & $0.06$ & \ $0.06$ ($\pi$ decay) \\
$|\lambda'_{213}|$ & $0.06$ & \ $0.06$ ($\pi$ decay) \\
$|\lambda'_{221}|$ & $0.07$ & \ $0.21$ ($D$ meson decay)\ \ \\
$|\lambda'_{231}|$ & $0.07$ & \ $0.45$ ($Z\rightarrow \mu^+\mu^-$)\\
\hline
\end{tabular}
\caption{Potential bounds on the R-parity violating $LLE$ (top) and $LQD$ (bottom) couplings
from NuSOnG, assuming that only one coupling is non-zero at a time for each set.
All squark and slepton masses are set to 100~GeV. To obtain limits for
different masses, rescale by $\left(\frac{M}{100\,\mathrm{GeV}}\right) $. Current bounds are from Ref.~\cite{Barbier:2004ez}.}
\label{Tab:sleptons}
\end{table}
From the $\hat{L}\hat{Q}\hat{D}$ part of Eq.~(\ref{RPVlagrangian}), squark exchange will
contribute to both NC and CC $\nu_\mu N$ DIS.
The resulting shifts in $g_L^2$ and $g_R^2$ are
\begin{eqnarray}
\delta g_L^2 & = &
2\Bigl[\, g_L^{\nu d} \varepsilon_{\mu\mu}^{dL} - g_L^2 \varepsilon_c \,\Bigr] \;,\cr
\delta g_R^2 & = &
2\Bigl[\, g_R^{\nu d} \varepsilon_{\mu\mu}^{dR} - g_R^2 \varepsilon_c \,\Bigr]\;,
\end{eqnarray}
where
\begin{eqnarray}
\varepsilon_{\mu\mu}^{dL} & = &
-\frac{1}{4\sqrt{2}G_F}\sum_{k=1}^3 \frac{|\lambda'_{21k}|^2}{M_{\tilde{d}_{kR}}^2}\;,\cr
\varepsilon_{\mu\mu}^{dR} & = &
-\frac{1}{4\sqrt{2}G_F}\sum_{j=1}^3 \frac{|\lambda'_{2j1}|^2}{M_{\tilde{d}_{jL}}^2}
\;,\cr
\varepsilon_c & = &
+\frac{1}{4\sqrt{2}G_F}\sum_{k=1}^3 \frac{|\lambda'_{21k}|^2}{M_{\tilde{d}_{kR}}^2}
\;=\; -\varepsilon_{\mu\mu}^{dL}\;,
\end{eqnarray}
where $\varepsilon_{\mu\mu}^{dL}$ and $\varepsilon_{\mu\mu}^{dR}$ are
associated with terms of Eq.~(\ref{NSI}), while $\varepsilon_c$ is
associated with a four-fermion interaction that corrects charged
currents,
\begin{equation}
-2\sqrt{2}G_F\varepsilon_c
\Bigl[
\bigl( \overline{\mu_L} \gamma_\sigma \nu_{\mu L} \bigr)
\bigl( \overline{u_L}\gamma^\sigma d_L \bigr)
+ h.c. \;
\Bigr] \;.
\end{equation}
Substituting $\varepsilon_c=-\varepsilon_{\mu\mu}^{dL}$, the shifts in $g_L^2$ and $g_R^2$ become:
\begin{eqnarray}
\delta g_L^2 & = &
2\left(g_L^{\nu d}+g_L^2\right) \varepsilon_{\mu\mu}^{dL} \;,\cr
\delta g_R^2 & = &
2g_R^2 \varepsilon_{\mu\mu}^{dL}
+ 2g_R^{\nu d} \varepsilon_{\mu\mu}^{dR}
\;.
\end{eqnarray}
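The equivalence of the two forms of the shifts, before and after substituting $\varepsilon_c=-\varepsilon_{\mu\mu}^{dL}$, can be verified numerically. The coupling values below are placeholders chosen only for this algebra check; they are not the Standard Model values:

```python
# placeholder numerical inputs (hypothetical, for the identity check only)
gL_nud, gR_nud = 0.35, -0.15   # neutrino-d-quark effective couplings
gL2, gR2 = 0.30, 0.03          # g_L^2 and g_R^2
eps_dL, eps_dR = -2e-3, -1e-3  # NSI coefficients (negative, per the text)
eps_c = -eps_dL                # relation derived in the text

# first form: delta g^2 = 2 [ g^{nu d} * eps - g^2 * eps_c ]
dL_v1 = 2.0 * (gL_nud * eps_dL - gL2 * eps_c)
dR_v1 = 2.0 * (gR_nud * eps_dR - gR2 * eps_c)

# second form, after substituting eps_c = -eps_dL
dL_v2 = 2.0 * (gL_nud + gL2) * eps_dL
dR_v2 = 2.0 * gR2 * eps_dL + 2.0 * gR_nud * eps_dR
```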
Assuming the projected precision goals for NuSOnG on $g_L^2$ and
$g_R^2$, and allowing only one of the couplings to be nonzero at a
time, the $2 \sigma$ bounds are given in Table~\ref{Tab:sleptons} for
squark masses of 100~GeV, in all cases. To obtain limits for different masses,
one simply rescales by $\left(\frac{M}{100\,\mathrm{GeV}}\right)$.
NuSOnG's measurements are competitive with the $\pi$ decay bounds, and
improve the current bounds on the $221$ and $231$ couplings by factors
of 3 and 5, respectively.
\subsubsection{Intergenerational leptoquark models}
Measurements of $g_L^2$ and $g_R^2$ are sensitive to leptoquarks.
Because the exchange of a leptoquark can interfere with both $W$ and
$Z$ exchange processes, we cannot use the limits on the NSI's of
Eq.~(\ref{NSI}), since we must also include the effects of
the four-fermion operators associated with charged-current processes.
Instead, the interactions of leptoquarks with
ordinary matter can be described in a model-independent fashion by an
effective low-energy Lagrangian as discussed in
Refs.~\cite{BuchRuckWyler,leptoquarks} for generation-universal leptoquark couplings.
For leptoquarks to contribute to $\nu_\mu N$
DIS, they must couple second generation leptons to first generation
quarks, so we use the more general Lagrangian of
refs.~\cite{Davidson:1993qk,Honda:2007wv},
which allows the coupling constants to depend
on the generations of the quarks and leptons that couple to each
leptoquark.
We summarize the quantum numbers and couplings of the various leptoquarks fields
in Table~\ref{LQtable};
our notation conventions are those of Ref.~\cite{Honda:2007wv}.
\begin{center}
\begin{table*}[tbp]
\begin{ruledtabular}
\begin{tabular}{|c|c||c|c|c|c|c||c||c|}
\hline
\multicolumn{2}{|c|}{\ Leptoquark\ \ } &\ Spin\ \ & $\;\;F\;\;$ &
$\,SU(3)_C\,$
& $\quad I_3\quad$ & $\quad Y\quad$ & $\;\;Q_{em}\;\;$ &\ Allowed
Couplings\ \\
\hline\hline
$\;S_1\;$
& $\;S_1^0\;$
& $0$
& $-2$
& $\bar{3}$
& $\phantom{+}0$
& $\phantom{+}\frac{1}{3}$
& $\phantom{+}\frac{1}{3}$
&
$\;g_{1L}(\overline{u_L^c}e_L^{\phantom{c}}-\overline{d_L^c}\nu_L^{\phantom{c}}),\,
g_{1R}(\overline{u^c_R}e_R^{\phantom{c}})\;$ \\
\hline
$\;\tilde{S}_1\;$
& $\;\tilde{S}_1^0\;$
& $0$
& $-2$
& $\bar{3}$
& $\phantom{+}0$
& $\phantom{+}\frac{4}{3}$
& $\phantom{+}\frac{4}{3}$
& $\tilde{g}_{1R}(\overline{d_R^c}e_R^{\phantom{c}})$ \\
\hline
$\;V_{2\mu}\;$
& $\;V_{2\mu}^+\;$
& $1$
& $-2$
& $\bar{3}$
& $+\frac{1}{2}$
& $\phantom{+}\frac{5}{6}$
& $\phantom{+}\frac{4}{3}$
& $g_{2L}(\overline{d_R^c}\gamma^\mu e_L^{\phantom{c}}),
\,g_{2R}(\overline{d_L^c}\gamma^\mu e_R^{\phantom{c}}) $ \\
& $\;V_{2\mu}^-\;$
&
&
&
& $-\frac{1}{2}$
&
& $\phantom{+}\frac{1}{3}$
& $g_{2L}(\overline{d_R^c}\gamma^\mu \nu_L^{\phantom{c}}),\,
g_{2R}(\overline{u_L^c}\gamma^\mu e_R^{\phantom{c}}) $ \\
\hline
$\tilde{V}_{2\mu}\;$
& $\tilde{V}_{2\mu}^+\;$
& $1$
& $-2$
& $\bar{3}$
& $+\frac{1}{2}$
& $-\frac{1}{6}$
& $\phantom{+}\frac{1}{3}$
& $\tilde{g}_{2L}(\overline{u_R^c}\gamma^\mu e_L^{\phantom{c}})$\\
& $\;\tilde{V}_{2\mu}^-\;$
&
&
&
& $-\frac{1}{2}$
&
& $-\frac{2}{3}$
& $\tilde{g}_{2L}(\overline{u_R^c}\gamma^\mu \nu_L^{\phantom{c}})$ \\
\hline
$\;\vec{S}_3\;$
& $\;S_3^+\;$
& $0$
& $-2$
& $\bar{3}$
& $+1$
& $\phantom{+}\frac{1}{3}$
& $\phantom{+}\frac{4}{3}$
& $-\sqrt{2} g_{3L}(\overline{d_L^c}e_L^{\phantom{c}}) $ \\
& $\;S_3^0\;$
&
&
&
& $\phantom{+}0$
&
& $\phantom{+}\frac{1}{3}$
&
$-g_{3L}(\overline{u_L^c}e_L^{\phantom{c}}+\overline{d_L^c}\nu_L^{\phantom{c}})
$ \\
& $\;S_3^-\;$
&
&
&
& $-1$
&
& $-\frac{2}{3}$
& $\sqrt{2}g_{3L}(\overline{u_L^c}\nu_L^{\phantom{c}})$ \\
\hline\hline
$\;S_2\;$
& $\;S_2^+\;$
& $0$
& $0$
& $3$
& $+\frac{1}{2}$
& $\phantom{+}\frac{7}{6}$
& $\phantom{+}\frac{5}{3}$
& $h_{2L}(\overline{u_R} e_L),h_{2R}(\overline{u_L}e_R) $ \\
& $\;S_2^-\;$
&
&
&
& $-\frac{1}{2}$
&
& $\phantom{+}\frac{2}{3}$
& $h_{2L}(\overline{u_R} \nu_L),-h_{2R}(\overline{d_L}e_R) $ \\
\hline
$\;\tilde{S}_2\;$
& $\;\tilde{S}_2^+\;$
& $0$
& $0$
& $3$
& $+\frac{1}{2}$
& $\phantom{+}\frac{1}{6}$
& $\phantom{+}\frac{2}{3}$
& $\tilde{h}_{2L}(\overline{d_R}e_L)$ \\
& $\;\tilde{S}_2^-\;$
&
&
&
& $-\frac{1}{2}$
&
& $-\frac{1}{3}$
& $\tilde{h}_{2L}(\overline{d_R}\nu_L)$ \\
\hline
$\;V_{1\mu}\;$
& $\;V_{1\mu}^0\;$
& $1$
& $0$
& $3$
& $\phantom{+}0$
& $\phantom{+}\frac{2}{3}$
& $\phantom{+}\frac{2}{3}$
& $\;h_{1L}(\overline{u_L}\gamma^\mu \nu_L+\overline{d_L}\gamma^\mu
e_L),\;h_{1R}(\overline{d_R}\gamma^\mu e_R)\;$ \\
\hline
$\;\tilde{V}_{1\mu}\;$
& $\;\tilde{V}_{1\mu}^0$
& $1$
& $0$
& $3$
& $\phantom{+}0$
& $\phantom{+}\frac{5}{3}$
& $\phantom{+}\frac{5}{3}$
& $\tilde{h}_{1R}(\overline{u_R}\gamma^\mu e_R)$ \\
\hline
$\;\vec{V}_{3\mu}\;$
& $\;V_{3\mu}^+\;$
& $1$
& $0$
& $3$
& $+1$
& $\phantom{+}\frac{2}{3}$
& $\phantom{+}\frac{5}{3}$
& $\sqrt{2}h_{3L}(\overline{u_L}\gamma^\mu e_L)$ \\
& $\;V_{3\mu}^0\;$
&
&
&
& $\phantom{+}0$
&
& $\phantom{+}\frac{2}{3}$
& $h_{3L}(\overline{u_L}\gamma^\mu \nu_L-\overline{d_L}\gamma^\mu e_L) $ \\
& $V_{3\mu}^-\;$
&
&
&
& $-1$
&
& $-\frac{1}{3}$
& $\sqrt{2}h_{3L}(\overline{d_L}\gamma^\mu \nu_L)$ \\
\hline
\end{tabular}
\caption[]{Quantum numbers of scalar and vector leptoquarks with
$SU(3)_C\times SU(2)_L\times U(1)_Y$ invariant couplings to quark-lepton
pairs ($Q_{\rm em}=I_3+Y$) \cite{PDG}.}
\label{LQtable}
\end{ruledtabular}
\end{table*}
\end{center}
The four-fermion operators induced by leptoquark exchange
will affect NC and/or CC processes; at NuSOnG the effect manifests
itself in shifts of $g_L^2$ and $g_R^2$.
Assuming degenerate masses within each iso-multiplet,
the shifts in $g_L^2$ and $g_R^2$ can be written generically as
\begin{eqnarray}
\delta g_L^2 & = & C_L\,\frac{|\lambda_{LQ}^{12}|^2/M_{LQ}^2}{g^2/M_W^2}
\;=\; \frac{C_L}{4\sqrt{2}G_F}\frac{|\lambda_{LQ}^{12}|^2}{M_{LQ}^2} \;,\cr
\delta g_R^2 & = & C_R\,\frac{|\lambda_{LQ}^{12}|^2/M_{LQ}^2}{g^2/M_W^2}
\;=\; \frac{C_R}{4\sqrt{2}G_F}\frac{|\lambda_{LQ}^{12}|^2}{M_{LQ}^2} \;,
\end{eqnarray}
where $\lambda_{LQ}^{12}$ denotes the $(ij)=(12)$ coupling of the leptoquark
and $M_{LQ}$ is its mass.
In Table~\ref{LQbounds} we list the coefficients $C_L$ and $C_R$ for each
leptoquark, and in Figure~\ref{gL2gR2shifts}
we plot the dependence of $\delta g_L^2$ and $\delta g_R^2$ on
the ratio $|\lambda_{LQ}|^2/M_{LQ}^2$.
Table~\ref{LQbounds} also lists the projected NuSOnG bounds on the
coupling constants~\cite{LoinazProninTakeuchiprep}.
Existing bounds on $S_1$, $\vec{S}_3$, $V_1$, and $\vec{V}_3$ couplings
from
$R_\pi = Br(\pi\rightarrow e\nu)/Br(\pi\rightarrow \mu\nu)$ are already much
stronger, but could be circumvented for $\vec{S}_3$ and $\vec{V}_3$
if the masses within the multiplet are allowed to be non-degenerate.
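As an illustration of how the generic shift formula and Table~\ref{LQbounds} are used together, the shift in $g_L^2$ induced by an $S_1$ leptoquark at its quoted coupling bound can be evaluated. This is a sketch; $G_F$ and $\sin^2\theta_W$ are standard inputs assumed here:

```python
import math

G_F = 1.1664e-5  # Fermi constant, GeV^-2 (assumed standard input)
s2 = 0.231       # sin^2(theta_W) (assumed standard input)

def delta_g2(C, lam2, M_GeV):
    """Generic shift: delta g^2 = C/(4*sqrt(2)*G_F) * |lambda_LQ|^2 / M_LQ^2."""
    return C / (4.0 * math.sqrt(2.0) * G_F) * lam2 / M_GeV**2

# C_L for S_1 from the table: s^2 * (4/3 - (10/9) s^2)
C_L_S1 = s2 * (4.0 / 3.0 - 10.0 / 9.0 * s2)

# shift at the 95% coupling bound |g_1L|^2 = 0.0036, M_LQ = 100 GeV
shift = delta_g2(C_L_S1, 0.0036, 100.0)
```

Doubling the leptoquark mass at fixed coupling reduces the shift by a factor of four, which is the origin of the $(M_{LQ}/100\,\mathrm{GeV})^2$ rescaling of the coupling bounds quoted in the table caption.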
\begin{center}
\begin{table*}[ht]
\begin{ruledtabular}
\begin{tabular}{|c||c|c||c||c|c|}
\hline
$\;LQ\;$ & $\qquad C_L\qquad$ & $\quad C_R\quad$ &
$\;\;|\lambda_{LQ}|^2\;\;$
&\ NuSOnG 95\% bound\ \
&\ 95\% bound from $R_\pi$ \ \ \\
\hline\hline
$S_1$
& $\;s^2\left(\frac{4}{3}-\frac{10}{9}s^2\right)\;$
& $-\frac{10}{9}s^4$
& $|g_{1L}^{12}|^2$
& $0.0036$
& $0.0037$ \\
\hline
$\vec{S}_3$
& $+\frac{10}{9}s^4$
& $+\frac{10}{9}s^4$
& $|g_{3L}^{12}|^2$
& $0.010$
& $0.0008$ \\
\hline
$S_2$
& $0$
& $-\frac{8}{3}s^2$
& $|h_{2L}^{12}|^2$
& $0.0013$
& N/A \\
\hline
$\tilde{S}_2$
& $0$
& $+\frac{4}{3}s^2$
& $|\tilde{h}_{2L}^{12}|^2$
& $0.0026$
& N/A \\
\hline
$V_1$
& $\;s^2\left(\frac{4}{3}-\frac{20}{9}s^2\right)\;$
& $-\frac{20}{9}s^4$
& $|h_{1L}^{12}|^2$
& $0.0040$
& $0.0018$ \\
\hline
$\vec{V}_3$
& $\;-4s^2\left(1-\frac{5}{9}s^2\right)\;$
& $+\frac{20}{9}s^4$
& $|h_{3L}^{12}|^2$
& $0.0011$
& $0.0004$ \\
\hline
$V_2$
& $0$
& $-\frac{4}{3}s^2$
& $|g_{2L}^{12}|^2$
& $0.0026$
& N/A \\
\hline
$\tilde{V}_2$
& $0$
& $+\frac{8}{3}s^2$
& $|\tilde{g}_{2L}^{12}|^2$
& $0.0013$
& N/A \\
\hline
\end{tabular}
\caption{Potential and existing 95\% bounds on the leptoquark couplings
squared
when the leptoquark masses are set to 100~GeV. To obtain the limits for
different leptoquark masses, multiply by $(M_{LQ}/100\,\mathrm{GeV})^2$.
Existing bounds on the $S_1$, $\vec{S}_3$, $V_1$, and $\vec{V}_3$
couplings from
$R_\pi=Br(\pi\rightarrow e\nu)/Br(\pi\rightarrow \mu\nu)$ are also shown.}
\label{LQbounds}
\end{ruledtabular}
\end{table*}
\end{center}
\begin{figure}[ht]
\includegraphics[scale=0.75]{dgL2.pdf}
\includegraphics[scale=0.75]{dgR2.pdf}
\caption{Shifts in $g_L^2$ and $g_R^2$ due to leptoquarks.
Horizontal lines indicate the projected $1\sigma$ limits of NuSOnG.}
\label{gL2gR2shifts}
\end{figure}
\subsection{Interplay with LHC to Isolate the Source of New Physics}
By the time NuSOnG runs, the LHC will have accumulated a wealth of
data and will have begun to change the particle physics landscape.
The message from LHC data may be difficult to decipher, however. As
discussed below, NuSOnG will be able to help elucidate the new physics
revealed at the LHC. The discovery of a Higgs along with the
anticipated measurement of the top mass to 1 GeV precision would
effectively fix the center of the $ST$ plot and will enhance the power
of the precision electroweak data as a tool for discovering new
physics. If additional resonances are discovered at the LHC, it is
likely that little will be learned about their couplings from LHC data alone.
The NuSOnG experiment provides complementary information to LHC.
Rather than generalize, to illustrate the power of NuSOnG, two
specific examples are given here. We emphasize that these are just
two of a wide range of examples, but they serve well to demonstrate
the point. Here we have chosen examples from typical new physics
models other than $Z^\prime$ models which were discussed above, in
order to demonstrate the physics range which can be probed
by NuSOnG.
\begin{figure}[t]
\centering
{\includegraphics[width=3.25in, bb=100 100 650 500]{Plot_3.pdf}}
\caption{NuSOnG expectation in the case of a TeV-scale triplet
leptoquark. For clarity, this plot and those for the two following cases
show only the expectation from the two highest precision measurements
of NuSOnG: $g_L^2$ and $\nu$ ES.}
\label{fig:NuSOnGEW1}
\end{figure}
First, extend the Standard Model to include a non-degenerate $SU(2)_L$
triplet leptoquark ($\vec{S}_3$ or $\vec{V}_3$ in the notation of
\cite{BuchRuckWyler}), with masses in the 0.5--1.5 TeV range. At the
LHC these leptoquarks will be produced primarily in pairs through
gluon fusion, and each leptoquark will decay to a lepton and a jet
\cite{Belyaev}. The peak in the lepton-jet invariant mass
distribution will be easily detected over background. This will
provide the leptoquark masses but yield little information about their
couplings to fermions. The leptoquarks will also shift the
neutrino-nucleon effective coupling $g_L^2$ in a way that depends
sensitively on both the leptoquark couplings and masses. Such a
leptoquark-induced shift could provide an explanation for the NuTeV
anomaly \cite{Davidson:1993qk,Gabrielli:2000te,Davidson:2001ji}. In this scenario, NuSOnG would
constrain isospin violation and the strange sea to the point that they
cannot explain the NuTeV anomaly, demonstrating that the anomaly is the
result of new physics. The NuSOnG PW measurement of $\sin^2{\theta_W}$
will agree with NuTeV; $g_R^2$ and the $\nu e$ and $\overline{\nu}e$
elastic scattering measurements will agree with LEP.
Fig.~\ref{fig:NuSOnGEW1} illustrates this example. NuSOnG's
measurement of $g_L^2$ would provide a sensitive measurement of the
leptoquark couplings when combined with the LHC mass measurements as
inputs.
\begin{figure}[t]
\centering
{\includegraphics[width=3.25in, bb=100 100 650 500]{Plot_2.pdf}}
\caption{NuSOnG expectation if the NuTeV anomaly is
due to isospin violation and
there is a heavy 4th generation with isospin violation.}
\label{fig:NuSOnGEW2}
\end{figure}
\begin{figure}[t]
\centering
{\includegraphics[width=3.25in, bb=100 100 650 500]{Plot_1.pdf}}
\caption{If LHC sees a Standard Model Higgs and no evidence of
new physics, NuSOnG may reveal new physics in the neutrino sector.}
\label{fig:NuSOnGEW3}
\end{figure}
A second example is the existence of a fourth generation family. A
fourth family with non-degenerate masses ({\it i.e.} isospin
violating) is allowed within the LEP/SLD constraints \cite{Tait}. As
a model, we choose a fourth family with mass splitting on the order of
$\sim 75$ GeV and a 300 GeV Higgs. This is consistent with LEP at
1$\sigma$ and perfectly consistent with $M_W$, describing the point
(0.2,0.19) on the $ST$ plot. In this scenario, LHC will
measure the Higgs mass from the highly enhanced $H \rightarrow ZZ$
decay. An array of exotic decays which will be difficult
to fully reconstruct, such as production of 6 W's and 2 b's, will be
observed at low rates. In this scenario,
isospin violation explains the NuTeV anomaly, thus
the NuTeV PW and the NuSOnG PW measurements agree with the
$\nu$eES measurements. These three precision neutrino results, all
with ``LEP-size'' errors, can be combined and will intersect the
one-sigma edge of the LEP measurements.
Fig.~\ref{fig:NuSOnGEW2} illustrates this example. From this, the
source, a fourth generation with isospin violation, can be
demonstrated.
Lastly, while it seems unlikely, it is possible that LHC will observe
a Standard Model Higgs and no signatures of new physics. If this is
the case, it is still possible for NuSOnG to add valuable clues to new
physics. This is because the experiment is uniquely sensitive to the
neutrino sector. If a situation such as is illustrated on
Fig.~\ref{fig:NuSOnGEW3} arose, the only explanation would be new
physics unique to neutrino interactions.
\section{Summary and Conclusions}
NuSOnG is an experiment which can search for new physics from keV
through TeV energy scales, as well as make interesting QCD
measurements. This article has focussed mainly on the Terascale
physics which can be accessed through this new high energy, high
statistics neutrino scattering experiment. The case has been made
that this new neutrino experiment would be a valuable addition to the
presently planned suite of experiments with Terascale reach.
The NuSOnG experiment design draws on the heritage of the CHARM II and
CCFR/NuTeV experiments. A high energy, flavor-pure neutrino flux is
produced using 800 GeV protons from the Tevatron. The detector
consists of four modules, each composed of a finely-segmented
glass-target (SiO$_2$) calorimeter followed by a muon spectrometer.
In its five-year data acquisition period, this experiment will record
almost one hundred thousand neutrino-electron elastic scatters and
hundreds of millions of deep inelastic scattering events, exceeding the
current world data sample by more than an order of magnitude. This
experiment can address concerns related to model systematics of
electroweak measurements in neutrino-quark scattering by direct
constraints using {\it in-situ} structure function measurements.
NuSOnG will be unique among present and planned experiments for its
ability to probe neutrino couplings to Beyond Standard Model
physics. This experiment offers four distinct and complementary
probes of $S$ and $T$. Two are of high precision with the proposed
run-plan, and the precision of the other two would be improved by a
follow-up five-year antineutrino run. Neutrino-lepton non-standard
interactions can be probed with an order of magnitude improvement in
the measured effective couplings. Neutrino-quark non-standard
interactions can be probed by an improvement in the measured
neutrino-quark effective couplings of a factor of two or better. The
experiment is sensitive to new physics up to energy scales $\sim 5$
TeV at 95\% CL. The measurements are sensitive to the universality of the
couplings; improvements of 30\% in the $e$-family and of 75\% in the
$\mu$-family will allow for probes of neutrissimos. As a
unique contribution, NuSOnG measures $g_R/g_L$, which is not
accessible by other near-future experiments. This article described
NuSOnG's physics contribution under several specific models. These
included models of $Z^\prime$s, extended Higgs models, leptoquark
models and $R$-parity violating SUSY models. We also considered how,
once data are taken at LHC and NuSOnG, the underlying physics can be
extracted. The opportunity for direct searches related to these
indirect electroweak searches was also described. The conclusion of
our analysis is that a new neutrino experiment, such as NuSOnG, would
substantially enhance the presently planned Terascale program.
\bigskip
\begin{acknowledgments}
We thank the following people for their thoughtful comments on
the development of this physics case:
P. Langacker, M.~Shaposhnikov, F. Vannucci, J. Wells.
We acknowledge the support of the following funding agencies for the
authors of this paper: Deutsche Forschungsgemeinschaft, The Kavli Institute
for Theoretical Physics, The United States Department of Energy,
The United States National Science Foundation.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Nuclear shadowing (NS) in deep-inelastic scattering
(DIS) off nuclei is usually studied via
nuclear structure functions.
In the shadowing region of small Bjorken $x_{Bj}\lesssim 0.01$,
the structure function $F_2$ per nucleon turns out to be smaller
in nuclei than in a free nucleon.
This fact then affects studies of nuclear
effects, mainly in connection with the interpretation
of results from hadron-nucleus and heavy-ion experiments.
There are different treatments of
NS depending
on the reference frame.
In the infinite momentum frame of the nucleus
NS can be interpreted
as a result of parton fusion
\cite{infinite,mq-86}
leading to a reduction of the parton density
at low Bjorken $x_{Bj}$.
In the rest frame of the nucleus, however,
this phenomenon looks like
NS of the virtual photon's hadronic fluctuations
and occurs due to their multiple scattering
inside the target
(see refs.~\cite{nz-91,krt-00,n-03} and references therein,
for example).
In the rest frame of the nucleus,
the dynamics of NS is controlled by a
time scale known as the
quantum coherence time.
It results
from destructive interference of
the amplitudes for which the interaction takes place on different bound
nucleons,
and can be estimated by relying on the uncertainty principle and
Lorentz time dilation.
For the lowest Fock component
of the photon it reads,
\beq
t_c = \frac{2\,\nu}{Q^2 + M_{\bar{q}q}^2}\ ,
\label{10}
\eeq
where $\nu$ and $Q^2$ are the photon energy and virtuality,
respectively,
and $M_{\bar{q}q}$ is the effective mass of the ${\bar{q}q}$ pair.
It is usually called coherence time, but we also will use the term
coherence length (CL), since light-cone (LC) kinematics is assumed, $l_c=t_c$.
CL is related to the longitudinal momentum transfer $q_c=1/l_c$.
Note that higher Fock states containing gluons,
$|\bar{q}qG\rangle$, $|\bar{q}q2G\rangle$, ...,
have larger effective masses than the $\bar{q}q$ fluctuation,
resulting in shorter coherence times.
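For orientation, Eq.~(\ref{10}) gives coherence lengths comparable to nuclear radii at typical fixed-target energies. The kinematics below are assumed illustrative values, not quantities from the text:

```python
# coherence length l_c = 2*nu / (Q^2 + M_qq^2), converted to fm
HBARC = 0.19733  # hbar*c in GeV*fm

def coherence_length_fm(nu_GeV, Q2_GeV2, M2_GeV2):
    """Evaluate Eq. (10) and convert from GeV^-1 to fm."""
    return 2.0 * nu_GeV / (Q2_GeV2 + M2_GeV2) * HBARC

# assumed kinematics: nu = 100 GeV, Q^2 = 1 GeV^2, and an effective
# mass M_qq^2 ~ Q^2 for the lowest |qqbar> Fock component
lc = coherence_length_fm(100.0, 1.0, 1.0)
# l_c comes out of order 20 fm, i.e. larger than the radius of a heavy
# nucleus, so the fluctuation interacts coherently with many nucleons
```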
The Green function approach \cite{kst2}
is a very effective tool for the study
of NS
naturally incorporating the effects of CL.
Such a formalism
is based on the solution of
the evolution equation for the Green function.
Usually, one is forced to seek
an analytical solution of
harmonic
oscillator form (see Eq.~(\ref{142})).
This, however, requires implementing
several approximations into an otherwise rigorous quantum-mechanical
approach,
such as a constant nuclear density function (\ref{270}) and
a specific quadratic form (\ref{260}) of the dipole cross section.
We remove these approximations for the $|\bar{q}q\rangle$ Fock state
by solving
the evolution equation
numerically, using the algorithm from
ref.~\cite{n-03}.
This is also motivated by the fact that
the predictions for NS \cite{n-03}
corresponding to the lowest Fock component of the photon
differ considerably from
the approximate calculations \cite{krt-98,krt-00}
based on the harmonic oscillator Green function approach
for $l_c \lesssim R_A$
($R_A$ is the nuclear radius).
Calculations of NS
presented so far in the color dipole approach
\cite{krt-98,krt-00,n-03}
were performed assuming only
$\bar{q}q$ fluctuations of the photon and neglecting
higher Fock components containing gluons
and sea quarks.
The effects of higher Fock states
are included in the
energy dependence of the dipole cross section,
$\sigma_{\bar{q}q}(\vec{r},s)$\footnote{Here $\vec r$
represents the transverse separation of the $\bar{q}q$ photon
fluctuation
and $s$ is the center of mass energy squared.}.
In nuclei, however, these Fock states also lead to
gluon shadowing (GS), which has so far been neglected
when model predictions were compared with experimental
data.
It was demonstrated in ref.~\cite{kst2} that the contribution of
GS to overall NS
becomes effective at small $x_{Bj}\lesssim 0.01$.
This confirms the necessity of including GS
in the model predictions in the shadowing region
$0.0001\lesssim x_{Bj}\lesssim 0.01$, where
the data exist.
\section{Light-cone dipole approach to nuclear
shadowing}
\label{lc}
In the rest frame of the nucleus NS in the
total virtual photoabsorption cross section
$\sigma_{tot}^{\gamma^*A}(x_{Bj},Q^2)$
(or in the structure function $F_2^A(x_{Bj},Q^2)$)
can be decomposed over different Fock components of the
virtual photon.
Then the total photoabsorption cross section on a nucleus can be formally
represented in the form
\begin{equation}
\sigma_{tot}^{\gamma^*A}(x_{Bj},Q^2) =
A~\sigma_{tot}^{\gamma^*N}(x_{Bj},Q^2) -
\Delta\sigma_{tot}(x_{Bj},Q^2)\, ,
\label{110}
\end{equation}
where
$\Delta\sigma_{tot}(x_{Bj},Q^2) =
\Delta\sigma_{tot}({\bar{q}q}) +
\Delta\sigma_{tot}(\bar{q}qG) +
\Delta\sigma_{tot}(\bar{q}q2G) + ...$,
$x_{Bj}$ is the Bjorken variable and
$\sigma_{tot}^{\gamma^*N}(x_{Bj},Q^2)$ is the
total photoabsorption cross section on a nucleon,
\begin{equation}
\sigma_{tot}^{\gamma^*N}(x_{Bj},Q^2) =
\int d^2 r \int_{0}^{1} d\alpha\,\Bigl
| \Psi_{\bar{q}q}(\vec{r},\alpha,Q^2)\,\Bigr |^2
~\sigma_{\bar{q}q}(\vec{r},s)\, .
\label{120}
\end{equation}
The dipole cross section $\sigma_{\bar{q}q}(\vec r,s)$
in Eq.~(\ref{120})
represents the interaction of a
$\bar{q}q$ dipole of transverse separation $\vec r$ with a nucleon
\cite{zkl}. It depends on the c.m. energy squared $s$.
The dependence of $\sigma_{\bar{q}q}(\vec r,s)$ on $\vec{r}$ and energy
is universal and flavor independent,
allowing one to describe
various high energy processes
in a uniform way.
$\sigma_{\bar{q}q}(\vec r,s)$ is known to vanish quadratically
$\propto r^2$ as $r\rightarrow 0$ due to color
screening (property of color transparency \cite{zkl,bbgg}).
It cannot be predicted
reliably because of poorly known higher order
perturbative QCD (pQCD) corrections and
nonperturbative effects.
However, $\sigma_{\bar{q}q}(\vec r,s)$ can be extracted from experimental data on
DIS and structure functions using
reasonable parametrizations. In this case pQCD corrections
and nonperturbative effects are naturally included.
There are two popular parametrizations of $\sigma_{\bar{q}q}(\vec
r,s)$, GBW presented in \cite{gbw} and KST suggested in \cite{kst2}.
Detailed discussion and comparison of these two parametrizations
can be found in refs.~\cite{knst-01,n-03}.
Whereas the GBW parametrization cannot
be applied in the nonperturbative region of small $Q^2$,
the KST parametrization provides a good description of the
transition down to the limit of real photoproduction, $Q^2=0$.
Because we study the shadowing region of small $x_{Bj}\lesssim 0.01$,
and the available experimental data from the E665 and NMC collaborations cover
small and moderate values of $Q^2\lesssim 2\div 3$\,GeV$^2$, we
prefer the latter parametrization.
The KST parametrization
\cite{kst2} has the following form
containing an explicit energy dependence,
\beq
\sigma_{\bar{q}q}(r,s) = \sigma_0(s)\,\left [1 -
\exp\left ( - \frac{r^2}{R_0^2(s)}\right )\right ]\, ,
\label{kst-1}
\eeq
where
\beq
\sigma_0(s) = \sigma_{tot}^{\pi\,p}(s)\,\left
(1 + \frac{3\,R_0^2(s)}{8\,\langle r_{ch}^2\rangle_{\pi}}\right )\, .
\label{kst-2}
\eeq
In Eq.~(\ref{kst-1}) the energy-dependent radius
$R_0(s) = 0.88\,(s/s_0)^{-\lambda/2}\,\mbox{fm}$
with $\lambda = 0.28$ and $s_0 = 1000\,\mbox{GeV}^2$.
In Eq.~(\ref{kst-2})
$\sigma_{tot}^{\pi\,p}(s) = 23.6\,(s/s_0)^{0.079} +
1.432\,(s/s_0)^{-0.45}\,\mbox{mb}$ contains the Pomeron and Reggeon parts of the
$\pi p$ total cross section \cite{rpp-96}, and
$\langle r_{ch}^2\rangle_{\pi} = 0.44\,\mbox{fm}^2$
is the mean pion charge radius squared.
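As a numerical illustration (ours, not part of the original analysis of \cite{kst2}), the parametrization of Eqs.~(\ref{kst-1}) and (\ref{kst-2}) can be coded directly; all function and variable names below are our own, with $r$ in fm and $s$ in GeV$^2$:

```python
import math

# Parameters of the KST parametrization quoted in the text
LAMBDA = 0.28       # exponent of the energy dependence
S0 = 1000.0         # reference scale s_0, GeV^2
R_CH2_PI = 0.44     # mean pion charge radius squared, fm^2

def R0(s):
    """Energy-dependent radius R_0(s), in fm."""
    return 0.88 * (s / S0) ** (-LAMBDA / 2.0)

def sigma_pi_p(s):
    """Pomeron plus Reggeon fit to the pi-p total cross section, in mb."""
    return 23.6 * (s / S0) ** 0.079 + 1.432 * (s / S0) ** (-0.45)

def sigma_qq(r, s):
    """KST dipole cross section of Eqs. (kst-1), (kst-2), in mb."""
    r0 = R0(s)
    sigma0 = sigma_pi_p(s) * (1.0 + 3.0 * r0 ** 2 / (8.0 * R_CH2_PI))
    return sigma0 * (1.0 - math.exp(-(r / r0) ** 2))
```

At $s=s_0$ this gives $\sigma_0\approx 42$~mb; the small-$r$ limit is quadratic in $r$, as required by color transparency, and the cross section flattens off at $r\gg R_0(s)$.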
Several comments about the KST parametrization are in order:
{\bf i)}
The KST dipole cross section (\ref{kst-1}) is proportional to $r^2$
for $r\rightarrow 0$, but flattens off at large $r$.
{\bf ii)}
The saturation scale $R_0(s)$ decreases with increasing $s$.
The energy dependence of $\sigma_{\bar{q}q}(r,s)$
is correlated with $r$: at small $r$ the dipole cross section rises with a
hard-pomeron intercept, $0.36$, while at large separations its energy
dependence is governed by a soft-pomeron intercept, $0.08$.
{\bf iii)}
In contrast to the GBW parametrization \cite{gbw}, the energy-dependent
ansatz of Eq.~(\ref{kst-1}) allows one to describe
hadron-hadron scattering and DIS simultaneously.
This improvement at large separations leads to a worse description
of the short-distance part of the dipole cross section which is
responsible for the behavior of $F_2^p(x_{Bj},Q^2)$ at large $Q^2$.
To satisfy Bjorken scaling the dipole cross section at small $r$
must be a function of the product $s\,r$ which is not the case
for the parametrization in Eq.~(\ref{kst-1}).
{\bf iv)}
The form of Eq.~(\ref{kst-1})
successfully describes the data for DIS at small $x_{Bj}$ only up to
$Q^2\approx 10\,\mbox{GeV}^2$. Nevertheless, this interval of $Q^2$ is
sufficient for the purpose of the present paper. \\
The second ingredient,
$\Psi_{\bar{q}q}({\vec{r}},\alpha,Q^2)$,
in (\ref{120}) is
the perturbative distribution amplitude (``wave function'') of the $\bar{q}q$
Fock component of the photon.
It depends on the
photon virtuality $Q^2$ and on the fraction $\alpha$ of the photon
momentum carried by the quark.
The corresponding form
for transversely (T) and longitudinally (L) polarized photons
can be found in refs.~\cite{lc,bks-71,nz-91}:
\begin{equation}
\Psi_{\bar{q}q}^{T,L}({\vec{r}},\alpha,Q^2) =
\frac{\sqrt{N_{C}\,\alpha_{em}}}{2\,\pi}\,\,
Z_{q}\,\bar{\chi}\,\hat{O}^{T,L}\,\chi\,
K_{0}(\epsilon\,r)
\label{122}
\end{equation}
where $\chi$ and $\bar{\chi}$ are the spinors of the quark and
antiquark, respectively; $Z_{q}$ is the quark charge,
$N_{C} = 3$ is the number of colors.
$K_{0}(\epsilon r)$ is a modified Bessel
function with
\begin{equation}
\epsilon^{2} =
\alpha\,(1-\alpha)\,Q^{2} + m_{q}^{2}\ ,
\label{123}
\end{equation}
where $m_{q}$ is the quark mass.
The form of the operators $\hat{O}^{T,L}$ can be found
in \cite{kst2}.
The distribution amplitude Eq.~(\ref{122}) controls
the transverse $\bar{q}q$ separation
with the mean value,
\begin{equation}
\langle r\rangle \sim \frac{1}{\epsilon} =
\frac{1}{\sqrt{Q^{2}\,\alpha\,(1-\alpha) + m_{q}^{2}}}\,.
\label{130}
\end{equation}
For very asymmetric $\bar{q}q$ pairs, with $\alpha$ or $(1-\alpha) \mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}}
m_q^2/Q^2$, the mean transverse separation $\langle r\rangle \sim 1/m_q$ becomes
huge, since one must use current quark masses within pQCD. A popular
recipe to fix this problem is to introduce an effective quark mass
$m_{eff}\sim \Lambda_{QCD}$, which should represent the nonperturbative
interaction effects between the $q$ and $\bar{q}$.
However, we introduce this interaction explicitly
using corresponding phenomenology based on the LC Green function
approach developed in \cite{kst2}.
The Green function $G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)$
describes the
propagation of an interacting $\bar{q}q$ pair between points with
longitudinal coordinates $z_1$ and $z_2$ and with initial and final
separations $\vec{r_1}$ and $\vec{r_2}$. This
Green function satisfies the
two-dimensional Schr\"odinger equation,
\begin{equation}
i\frac{d}{dz_2}\,G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)=
\left[\frac{\epsilon^{2} - \Delta_{r_{2}}}{2\,\nu\,\alpha\,(1-\alpha)}
+V_{\bar{q}q}(z_2,\vec{r_2},\alpha)\right]
G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)\ ,
\label{135}
\end{equation}
with the boundary condition
\begin{equation}
G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)|_{z_2=z_1}=
\delta^2(\vec{r_1}-\vec{r_2})\, .
\label{136}
\end{equation}
In Eq.~(\ref{135}) $\nu$ is the photon energy and the Laplacian
$\Delta_{r}$ acts on the coordinate $r$.
For propagation of a $\bar{q}q$ pair in vacuum,
the LC potential $V_{\bar{q}q}(z_2,\vec{r_2},\alpha)$ in (\ref{135})
contains only a real part, which
is responsible for
the interaction between the $q$ and $\bar{q}$.
For the oscillator form
of this potential,
\begin{equation}
{\rm Re}\,V_{\bar{q}q}(z_2,\vec{r_2},\alpha) =
\frac{a^4(\alpha)\,\vec{r_2}\,^2}
{2\,\nu\,\alpha(1-\alpha)}\ ,
\label{140}
\end{equation}
one can obtain the analytical solution
of the evolution equation (\ref{135}) represented by the
harmonic oscillator Green function \cite{fg},
\begin{eqnarray}
G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1) =
\frac{a^2(\alpha)}{2\,\pi\,i\,
\sin(\omega\,\Delta z)}\, \exp
\left\{\frac{i\,a^2(\alpha)}{\sin(\omega\,\Delta z)}\,
\Bigl[(r_1^2+r_2^2)\,\cos(\omega\,\Delta z) -
2\,\vec{r_1}\cdot\vec{r_2}\Bigr]\right\}
\nonumber\\ \times\,\exp\left[-
\frac{i\,\epsilon^{2}\,\Delta z}
{2\,\nu\,\alpha\,(1-\alpha)}\right] \ ,
\label{142}
\end{eqnarray}
where $\Delta z=z_2-z_1$ and
\begin{equation} \omega = \frac{a^2(\alpha)}{\nu\;\alpha(1-\alpha)}\ .
\label{144}
\end{equation}
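The closed form (\ref{142}) is straightforward to evaluate with complex arithmetic. The sketch below (ours; schematic units, with $a^2(\alpha)$ and $\epsilon^2$ passed in as plain numbers) also makes explicit the symmetry of the Green function under the exchange $\vec{r_1}\leftrightarrow\vec{r_2}$:

```python
import cmath
import math

def green_ho(r2, z2, r1, z1, a2, eps2, nu, alpha):
    """Harmonic-oscillator Green function of Eq. (142).

    r1, r2 are 2-D transverse vectors given as (x, y) tuples; a2 stands
    for a^2(alpha) and eps2 for epsilon^2 (schematic units throughout).
    """
    dz = z2 - z1
    omega = a2 / (nu * alpha * (1.0 - alpha))   # Eq. (144)
    s = cmath.sin(omega * dz)
    c = cmath.cos(omega * dz)
    r1sq = r1[0] ** 2 + r1[1] ** 2
    r2sq = r2[0] ** 2 + r2[1] ** 2
    dot = r1[0] * r2[0] + r1[1] * r2[1]
    prefactor = a2 / (2.0 * math.pi * 1j * s)
    phase = (1j * a2 / s) * ((r1sq + r2sq) * c - 2.0 * dot)
    free = -1j * eps2 * dz / (2.0 * nu * alpha * (1.0 - alpha))
    return prefactor * cmath.exp(phase + free)
```

Since the exponent depends on the transverse vectors only through $r_1^2+r_2^2$ and $\vec{r_1}\cdot\vec{r_2}$, the function is symmetric in its transverse arguments.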
The shape of the function $a(\alpha)$ in Eq.~(\ref{140}) is
presented and discussed in ref.~\cite{kst2}.
The matrix element (\ref{120}) contains the LC wave function squared,
which has the following form for T and L polarizations in
the limit of vanishing interaction between $\bar{q}$ and $q$,
\begin{equation}
\Bigl |\Psi^{T}_{\bar{q}q}(\vec r,\alpha,Q^2)\,\Bigr |^2 =
\frac{2\,N_C\,\alpha_{em}}{(2\pi)^2}\,
\sum_{f=1}^{N_f}\,Z_f^2
\left[m_f^2\,K_0(\epsilon\,r)^2
+ [\alpha^2+(1-\alpha)^2]\,\epsilon^2\,K_1(\epsilon\,r)^2\right]\ ,
\label{197a}
\end{equation}
\begin{equation}
\Bigl |\Psi^{L}_{\bar{q}q}(\vec r,\alpha,Q^2)\,\Bigr |^2 =
\frac{8\,N_C\,\alpha_{em}}{(2\pi)^2}\,
\sum_{f=1}^{N_f}\,Z_f^2
\,Q^2\,\alpha^2(1-\alpha)^2\,
K_0(\epsilon\,r)^2\ ,
\label{197b}
\end{equation}
where $K_0$ and $K_1$ are the modified Bessel functions.
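These squared wave functions are easy to tabulate numerically. In the sketch below (ours; a single flavor of charge $Z_q$ and mass $m_q$, with $r$ understood in units of GeV$^{-1}$), the modified Bessel functions are computed from their integral representation $K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,dt$, so no special-function library is required:

```python
import math

ALPHA_EM = 1.0 / 137.036   # fine-structure constant
N_C = 3                    # number of colors

def bessel_k(order, x, tmax=25.0, n=4000):
    """Modified Bessel function K_order(x) by the trapezoidal rule."""
    h = tmax / n
    total = 0.5 * (math.exp(-x) +
                   math.exp(-x * math.cosh(tmax)) * math.cosh(order * tmax))
    for i in range(1, n):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(order * t)
    return total * h

def eps(alpha, Q2, mq):
    """epsilon of Eq. (123)."""
    return math.sqrt(alpha * (1.0 - alpha) * Q2 + mq ** 2)

def psi2_T(r, alpha, Q2, mq, Zq):
    """|Psi_T|^2 of Eq. (197a) for one flavor."""
    e = eps(alpha, Q2, mq)
    return (2.0 * N_C * ALPHA_EM / (2.0 * math.pi) ** 2 * Zq ** 2 *
            (mq ** 2 * bessel_k(0, e * r) ** 2 +
             (alpha ** 2 + (1.0 - alpha) ** 2) * e ** 2 *
             bessel_k(1, e * r) ** 2))

def psi2_L(r, alpha, Q2, mq, Zq):
    """|Psi_L|^2 of Eq. (197b) for one flavor."""
    e = eps(alpha, Q2, mq)
    return (8.0 * N_C * ALPHA_EM / (2.0 * math.pi) ** 2 * Zq ** 2 *
            Q2 * alpha ** 2 * (1.0 - alpha) ** 2 * bessel_k(0, e * r) ** 2)
```

The factor $\alpha^2(1-\alpha)^2$ in $|\Psi^L|^2$ visibly suppresses asymmetric $\bar{q}q$ configurations.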
The effects of the higher Fock states
$|\bar{q}qG\rangle$,
$|\bar{q}q2G\rangle$, \ldots\
are implicitly incorporated
into the energy
dependence of the dipole cross section
$\sigma_{\bar{q}q}(\vec{r},s)$,
which is naturally included in the realistic KST parametrization,
Eq.~(\ref{kst-1}).
For DIS on nuclear targets, the corresponding
formula for NS,
$\Delta\sigma_{tot}(x_{Bj},Q^2) =
\Delta\sigma_{tot}(\bar{q}q)$,
representing the shadowing correction for the lowest
$\bar{q}q$ Fock state,
has the following form \cite{krt-98}:
\begin{equation}
\Delta\sigma_{tot}(x_{Bj},Q^2) = \frac{1}{2}~{\rm Re}~\int d^2 b \int_{-\infty}^{
\infty} dz_1 ~\rho_{A}(b,z_1) \int_{z_1}^{\infty} dz_2~
\rho_A(b,z_2)
\int_{0}^{1} d\alpha ~A(z_1,z_2,\alpha)\, ,
\label{230}
\end{equation}
with
\begin{equation}
A(z_1,z_2,\alpha)
= \int d^2 r_2 ~\Psi^{*}_{\bar{q}q}(\vec{r_2},\alpha,Q^2)
~\sigma_{\bar{q}q}(r_2,s) \int d^2 r_1
~G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)
~\sigma_{\bar{q}q}(r_1,s)
~\Psi_{\bar{q}q}(\vec{r_1},\alpha,Q^2)\, .
\label{240}
\end{equation}
In Eq.~(\ref{230})
$\rho_{A}({b},z)$ represents the nuclear density function defined
at the point with longitudinal coordinate $z$ and impact
parameter $\vec{b}$.
\begin{figure}[tbh]
\special{psfile=shadowing.ps
angle=270. voffset= 55. hoffset= 30.
hscale=50. vscale=50.}
\begin{center}
\vspace{6.7cm}
\parbox{13cm}
{\caption[Delta]
{A cartoon for the shadowing term $\Delta\sigma_{tot}(x_{Bj},Q^2)
= \Delta\sigma_{tot}(\bar{q}q)$ in (\ref{230}). Propagation of the
$\bar{q}q$ pair through the nucleus is described by the Green
function $G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)$, which results
from the summation over different paths of the $\bar{q}q$ pair.}
\label{shad}}
\end{center}
\end{figure}
The shadowing term $\Delta\sigma_{tot}(x_{Bj},Q^2) =
\Delta\sigma_{tot}(\bar{q}q)$ in (\ref{230}) is illustrated in
Fig.~\ref{shad}. At the point $z_1$ the initial photon diffractively
produces the $\bar{q}q$ pair ($\gamma^*N\to \bar{q}qN$) with
transverse separation $\vec{r_1}$. The $\bar{q}q$ pair then
propagates through the nucleus along arbitrary curved trajectories,
which are summed over, and arrives at the point $z_2$ with
transverse separation $\vec{r_2}$. The initial and final separations
are controlled by the LC wave function of the $\bar{q}q$ Fock
component of the photon $\Psi_{\bar{q}q}(\vec{r},\alpha,Q^2)$. During
propagation through the nucleus the $\bar{q}q$ pair interacts with
bound nucleons via the dipole cross section $\sigma_{\bar{q}q}(r,s)$,
which depends on the local transverse separation $\vec{r}$. The
Green function $G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)$ describes
the propagation of the $\bar{q}q$ pair from $z_1$ to $z_2$.
For propagation of the $\bar{q}q$ pair in the nuclear medium,
the Green function $G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)$
again satisfies the time-dependent
two-dimensional Schr\"odinger equation (\ref{135}).
However, in this case the potential acquires
in addition an imaginary part, responsible for the
attenuation of the $\bar{q}q$ photon fluctuation in the medium,
of the following form:
\begin{equation}
{\rm Im}\,V_{\bar{q}q}(z_2,\vec r,\alpha) = -
\frac{\sigma_{\bar{q}q}(\vec r,s)}{2}\,\rho_{A}({b},z_2)\, .
\label{250}
\end{equation}
As we already mentioned above
the analytical solution of Eq.~(\ref{135}) is known only for the
harmonic oscillator potential $V_{\bar{q}q}(r)\propto r^2$,
i.e.
\beq
\sigma_{\bar{q}q}(r,s) = C(s)\,r^2\ ,
\label{260}
\eeq
and uniform nuclear density
\beq
\rho_A(b,z) = \rho_0~\Theta(R_A^2-b^2-z^2)\,
\label{270}
\eeq
in Eq.~(\ref{250}) for the imaginary part of the LC
potential.
In this case the solution of Eq.~(\ref{135}) has the same form
as Eq.~(\ref{142}), except that one should replace $\omega\Longrightarrow
\Omega$ and $a^2(\alpha)\Longrightarrow b(\alpha)$, where
\begin{equation}
\Omega =
\frac{b(\alpha)}{\nu \alpha (1 - \alpha)}
=
\frac{
\sqrt{a^4(\alpha) - i\,\rho_A(b,z)\,\nu\,\alpha\,(1 - \alpha)\,C(s)}}
{\nu\,\alpha\,(1 - \alpha)}\, .
\label{280}
\end{equation}
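Numerically the replacement is a one-liner; in the sketch below (ours, schematic units) $\Omega$ of Eq.~(\ref{280}) reduces to $\omega$ of Eq.~(\ref{144}) when the density is switched off, and acquires a negative imaginary part (absorption) for $\rho_A>0$:

```python
import cmath

def omega_vacuum(a2, nu, alpha):
    """omega of Eq. (144); a2 stands for a^2(alpha)."""
    return a2 / (nu * alpha * (1.0 - alpha))

def omega_medium(a2, rho, nu, alpha, C):
    """Complex oscillator frequency Omega of Eq. (280) in the medium;
    b_alpha is b(alpha), with b^2 = a^4 - i rho nu alpha (1-alpha) C."""
    b_alpha = cmath.sqrt(a2 ** 2 - 1j * rho * nu * alpha * (1.0 - alpha) * C)
    return b_alpha / (nu * alpha * (1.0 - alpha))
```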
The energy-dependent factor $C(s)$ in Eq.~(\ref{260})
and the mean nuclear density
$\rho_0$ in Eq.~(\ref{270}) are determined
by the procedure described in \cite{krt-00,n-03}.
In the general case, when there is no
restriction on $l_c$, in particular for $l_c\sim R_A$,
one has to take into account the variation of the transverse size
$r$ during propagation of the $\bar{q}q$ pair through the nucleus.
The overall total photoabsorption cross section on a nucleus is
$\sigma^{\gamma^*A} = \sigma_T^{\gamma^*A} +
\epsilon'\,\sigma_L^{\gamma^*A}$, where we assume the photon
polarization parameter $\epsilon'=1$.
Taking into account only the $\bar{q}q$ Fock component
of the photon,
the full expression, after summation over all flavors, colors,
helicities and spin states, has the following form \cite{bgz-98}:
\begin{eqnarray}
\sigma^{\gamma^*A}(x_{Bj},Q^2)
&=&
A\,\sigma^{\gamma^*N}(x_{Bj},Q^2)
-
\Delta\,\sigma(x_{Bj},Q^2)
\nonumber \\
&=&
A\,\int\,d^2r\,\int_{0}^{1}\,d\alpha\,\sigma_{\bar{q}q}(r,s)
\,\Biggl (\Bigl
|\Psi^T_{\bar{q}q}(\vec{r},\alpha,Q^2)\Bigr
|^2 +
\Bigl |\Psi^L_{\bar{q}q}(\vec{r},\alpha,Q^2)\Bigr
|^2\Biggr )
\nonumber \\
\,&-&\,
\frac{N_C\,\alpha_{em}}{(2\pi)^2}\sum_{f=1}^{N_f}\,Z_f^2\,\,\Re e\,
\int\,d^2b\,\int_{-\infty}^{\infty}\,dz_1\,\int_{z_1}^{\infty}\,
dz_2\,\int_{0}^{1}\,d\alpha\,\int\,d^2r_1\,\int\,d^2r_2
\nonumber\\
\times\,
\rho_A(b,z_1)&\rho_A(b,z_2)&
\sigma_{\bar{q}q}(r_2,s)
\sigma_{\bar{q}q}(r_1,s)
\,\Biggl\{\Bigl[\,\alpha^2
+ (1 -
\alpha)^2\,\Bigr]
\,\epsilon^2
\,\frac{\vec{r_1}\,\cdot\,\vec{r_2}}{r_1\,r_2}\,
K_1(\epsilon\,r_1)\,K_1(\epsilon\,r_2)
\nonumber\\
\,&+&\,\Bigl[\,m_f^2 +
4\,Q^2\,\alpha^2\,(1 -
\alpha)^2\,\Bigr]\,
K_0(\epsilon\,r_1)\,K_0(\epsilon\,r_2)\Biggr\}\,
G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)
\, .
\label{320}
\end{eqnarray}
The shape of
$\Bigl |\,\Psi^{T,L}_{\bar{q}q}(\vec{r},\alpha,Q^2)\,\Bigr |^2$
is given by Eqs.~(\ref{197a})
and (\ref{197b}), respectively.
In the high energy limit,
$l_c\gg R_A$,
the transverse separation $r$ between
$\bar{q}$ and $q$ does not vary
during propagation through the nucleus (Lorentz time dilation).
Consequently,
the kinetic
term in Eq.~(\ref{135}) can be neglected and the Green function
reads
\begin{eqnarray}
G_{\bar{q}q}(b;\vec{r_2},z_2;\vec{r_1},z_1)|_{\nu\to\infty} =
\delta(\vec{r_2}-\vec{r_1})\,\exp\Biggl[ - \frac{1}{2}\,
\sigma_{\bar{q}q}(r_2,s)\,\int_{z_1}^{z_2}\,dz\,\rho_A(b,z)\Biggr]\,.
\label{330}
\end{eqnarray}
After substitution of the expression (\ref{330}) into Eq.~(\ref{320}),
one arrives at the following result:
\begin{eqnarray}
&&\sigma^{\gamma^*A}_{npt}(x_{Bj},Q^2) =
2\,\int\,d^2b\,\int\,d^2r\,\int_0^1\,d\alpha
\left\{1 - \exp\,\Bigl[ - \frac{1}{2}\,\sigma_{\bar{q}q}(r,s)\,
T_A(b)\Bigr]\,\right\} \nonumber\\
\times\,&&\frac{2\,N_C\,\alpha_{em}}{(2\,\pi)^2}
\sum_{f=1}^{N_f}\,Z_f^2\,
\Biggl\{\Bigl[\,\alpha^2 + (1 - \alpha)^2\,\Bigr]\,\epsilon^2\,
K_1^2(\epsilon\,r)\,
+ \,\Bigl[m_f^2 + 4\,Q^2\,\alpha^2\,(1 - \alpha)^2\Bigr]
K_0^2(\epsilon\,r)\,\Biggr\} ,
\label{335}
\end{eqnarray}
where
\beq
T_A(b) = \int_{-\infty}^{\infty}\,dz\,\rho_A(b,z)\,
\label{300}
\eeq
is the nuclear thickness function, calculated with the realistic Woods-Saxon
form of the nuclear density with parameters taken from \cite{saxon}.
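For orientation, $T_A(b)$ of Eq.~(\ref{300}) is a one-dimensional integral of the Woods-Saxon profile. The sketch below (ours) uses rough lead-like parameters as illustrative placeholders; the actual calculations use the fitted parameters of the cited reference:

```python
import math

# Illustrative (not fitted) Woods-Saxon parameters, roughly lead-like
R_A = 6.62    # half-density radius, fm
A_WS = 0.55   # surface diffuseness, fm
RHO0 = 0.16   # central density, fm^-3

def rho_ws(b, z):
    """Woods-Saxon density at impact parameter b and coordinate z (fm)."""
    r = math.sqrt(b * b + z * z)
    return RHO0 / (1.0 + math.exp((r - R_A) / A_WS))

def thickness(b, zmax=30.0, n=3000):
    """Nuclear thickness T_A(b) of Eq. (300), by the trapezoidal rule."""
    h = 2.0 * zmax / n
    total = 0.5 * (rho_ws(b, -zmax) + rho_ws(b, zmax))
    for i in range(1, n):
        total += rho_ws(b, -zmax + i * h)
    return total * h
```

Central values come out near $2\rho_0 R_A\approx 2\,$fm$^{-2}$ and fall steeply for $b\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$>$}} R_A$.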
For photon polarization parameter $\epsilon'=1$,
the structure-function ratio
$F_2^A(x_{Bj},Q^2)/F_2^N(x_{Bj},Q^2)$ can be expressed via
the ratio of the total photoabsorption cross sections,
\begin{eqnarray}
\frac{F_2^A(x_{Bj},Q^2)}{F_2^N(x_{Bj},Q^2)} =
\frac{\sigma_T^{\gamma^*A}(x_{Bj},Q^2)
+ \sigma_L^{\gamma^*A}(x_{Bj},Q^2)}
{\sigma_T^{\gamma^*N}(x_{Bj},Q^2)
+ \sigma_L^{\gamma^*N}(x_{Bj},Q^2)}\, ,
\label{340}
\end{eqnarray}
where the numerator on the right-hand side (r.h.s.) is given by
Eq.~(\ref{320}), whereas the denominator can be expressed
as the first term of Eq.~(\ref{320})
divided by the mass number $A$.
The nonperturbative $\bar{q}-q$ interaction is included
by replacements $K_0(\epsilon\,r)/2\,\pi \Longrightarrow
\Phi_0(\epsilon,r,\lambda)$ and
$K_1(\epsilon\,r)/2\,\pi \Longrightarrow
\Phi_1(\epsilon,r,\lambda)$
in all perturbative expressions.
The corresponding functions $\Phi_{0,1}$ and
parameter $\lambda$
are defined in ref.~\cite{kst2}.
We solve
the evolution equation for the Green function, Eq.~(\ref{135}),
numerically, using the algorithm from ref.~\cite{n-03}.
This exact solution is performed
for the realistic KST parametrization of the dipole
cross section
(\ref{kst-1})
and for the nuclear density function
in the realistic Woods-Saxon form with parameters
taken from ref.~\cite{saxon}.
Finally, we would like to emphasize
(see also the next section)
that the $\bar{q}q$ Fock component
of the photon represents a higher twist shadowing correction
\cite{krt-00}, which vanishes at large quark masses as $1/m_f^2$.
This is not the case for the higher Fock states containing gluons,
which lead to GS. Therefore
GS represents the leading twist shadowing correction \cite{kst2}.
Moreover, the steep energy dependence of the dipole cross section
$\sigma_{\bar{q}q}(r,s)$ (see Eq.~(\ref{kst-1})),
especially at smaller dipole sizes $r$, causes
a steep energy rise of both corrections.
\section{Gluon shadowing}\label{glue-shadow}
When investigating nuclear effects at small $x_{Bj}\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 0.01$,
the higher Fock states of the photon
containing gluons also contribute to NS.
Because of the
shorter coherence time (lifetime)
of these fluctuations,
the onset of GS occurs at higher energies, i.e.\ at smaller
$x_{Bj}$ values, than that of NS coming from the
lowest $|\bar{q}q\rangle$ Fock component.
However, no data for GS are available and
one should rely on calculations.
NS for the $|\bar{q}q\rangle$ Fock component
of the photon is dominated by the T photon
polarization, because the corresponding photoabsorption
cross section is scanned at larger dipole sizes than
for the L photon polarization.
The transverse $\bar{q}q$
separation is controlled by the distribution amplitude
Eq.~(\ref{122}), with the mean value given by Eq.~(\ref{130}).
Contributions of large-size dipoles come from
asymmetric $\bar{q}q$ fluctuations of the virtual photon,
in which the quark and antiquark
carry a very large ($\alpha\to 1$) and a very small
($\alpha\to 0$) fraction of the photon momentum, or vice versa.
The LC wave function for L photons (\ref{197b})
contains a factor $\alpha^2\,(1-\alpha)^2$, which makes the
contribution from asymmetric $\bar{q}q$ configurations considerably
smaller than for T photons (see Eq.~(\ref{197a})). Consequently, in
contrast to T photons, all $\bar{q}q$ dipoles from
L photons have a size squared $\propto 1/Q^2$, and the
double-scattering term vanishes as $\propto 1/Q^4$.
The leading-twist contribution for the shadowing of L
photons arises from the $|\bar{q}qG\rangle$ Fock component
of the photon because the gluon can propagate relatively
far from the $\bar{q}q$ pair, although the $\bar{q}$-$q$ separation
is of the order $1/Q^2$. After radiation of the gluon the pair
is in an octet state and consequently the $|\bar{q}qG\rangle$
state represents a $GG$ dipole. Then the corresponding
correction to the L cross section is just GS.
The interpretation of GS is
reference-frame dependent.
In the infinite momentum frame this phenomenon looks like
gluon-gluon fusion.
Within a parton model interpretation,
the gluon clouds
of nucleons with the same impact parameter overlap at small
$x_{Bj}$ in the longitudinal direction. This allows gluons originating from
different nucleons to fuse, leading to a gluon density which is no
longer proportional to the density of nucleons. This is GS.
In the rest frame of the nucleus, the phenomenon of GS
corresponds to the process of gluon radiation, with shadowing corrections
related to multiple interactions of the radiated gluons in the nuclear
medium. This is a coherence phenomenon known as the
Landau-Pomeranchuk effect, namely the suppression of bremsstrahlung by
interference of radiation from different scattering centers. It demands
a sufficiently long coherence time of radiation, a condition equivalent
to demanding a small Bjorken $x_{Bj}$ in the parton model.
There are still very few numerical evaluations of GS
in the literature, all done in the rest frame of the nucleus.
GS can be identified
as the shadowing correction to the L cross
section coming from the $GG$ dipole representing
$|\bar{q}qG\rangle$ Fock component of the photon.
For the evaluation of GS it is important to know
the transverse size of this $GG$ dipole.
This size has been extracted in ref.~\cite{kst2}
from data for diffractive excitation of the incident
hadrons to the states of large mass, the so called triple-Pomeron region.
It was found that the mean
dipole size of the $GG$ system
(the radius of propagation of the LC gluons) is
rather small, $r_0\approx 0.3\,\,\mbox{fm}$ \cite{k3p}.
This results in a rather weak onset of GS.
The smallness of the size of quark-gluon fluctuations
has been incorporated via
a nonperturbative LC potential in the Schr\"odinger equation
for the Green function describing the propagation of a quark-gluon
system.
The strength of the potential was fixed by data on high mass
($M_X^2$) diffraction $pp\to pX$ \cite{kst2}.
This approach allows one to extend the methods of pQCD to the region
of small $Q^2\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} Q_0^2 = 4/r_0^2$.
At higher $Q^2\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$>$}} Q_0^2$, GS slowly (logarithmically) decreases, in
accordance
with expectations based on the evolution equation \cite{mq-86}.
\begin{figure}[htb]
\special{psfile=glue-shad.eps
angle=0. voffset=-520. hoffset= 0.
hscale=70. vscale=70.}
\begin{center}
\vspace{9.2cm}
\parbox{14.0cm}
{\caption[Delta]
{
The ratio of the nucleus-to-nucleon gluon densities as a function of the
thickness of the nucleus, $L=T_A(b)/\rho_0$, at $Q^2=4\,\mbox{GeV}^2$ and different fixed
values of $x_{Bj}$. The figure is taken from ref.~\cite{knst-01}.
}
\label{glue-shad}}
\end{center}
\end{figure}
We repeated the calculations of ref.~\cite{kst2} for the ratio of the gluon
densities in nuclei and in the nucleon,
\beq
R_G(x_{Bj},Q^2)=\frac{G_A(x_{Bj},Q^2)}{A\,G_N(x_{Bj},Q^2)} \approx
1 - \frac{\Delta\sigma_{tot}(\bar{q}qG)}
{\sigma_{tot}^{\gamma^*A}}\ ,
\label{RG}
\eeq
where $\Delta\sigma_{tot}(\bar{q}qG)$ is the inelastic correction to the total
cross section $\sigma_{tot}^{\gamma^*A}$ related to the creation of a
$|\bar{q}qG\rangle$ intermediate Fock state.
Further calculation details can be found in \cite{kst2}.
As an illustration of the rather weak onset of GS, here we present
$R_G(x_{Bj},Q^2)$, Eq.~(\ref{RG}), for different nuclear thicknesses
$T_A(b)$. Using the approximation of constant nuclear density (see
Eq.~(\ref{270})), $T_A(b)=\rho_0\,L$ with $L=2\,\sqrt{R_A^2 - b^2}$, the
ratio
$R_G(x_{Bj},Q^2)$ is also implicitly a function of $L$. An example of
the calculated $L$-dependence of $R_G(x_{Bj},Q^2)$ at $Q^2=4\,\,\mbox{GeV}^2$ is
depicted in Fig.~\ref{glue-shad} for different values of $x_{Bj}$.
We calculated GS only for the lowest Fock component
containing just one LC gluon.
The inclusion of higher
multigluon Fock components is still a challenge. However, their effect
can essentially be taken into account by eikonalization of the calculated
$R_G(x_{Bj},Q^2)$: the dipole
cross section, which is proportional to the gluon density at small
separations, should be renormalized everywhere,
\beq
\sigma_{\bar{q}q} \Rightarrow
R_G\,\sigma_{\bar{q}q}\ .
\label{1100}
\eeq
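The effect of the renormalization (\ref{1100}) can be illustrated in the frozen-dipole limit of Eq.~(\ref{335}): at fixed impact parameter and dipole size, the ratio of the nuclear to the $A$-scaled nucleon cross sections is $2\,[1-\exp(-\sigma T_A/2)]/(\sigma T_A)$, and replacing $\sigma\to R_G\,\sigma$ in the nuclear exponent with $R_G<1$ deepens the suppression. A schematic sketch (ours, not the full calculation):

```python
import math

def local_ratio(sigma, TA):
    """Frozen-dipole shadowing ratio 2[1 - exp(-sigma*TA/2)]/(sigma*TA)
    at fixed impact parameter (sigma and TA in matching units)."""
    x = sigma * TA
    return 2.0 * (1.0 - math.exp(-0.5 * x)) / x

def local_ratio_gs(sigma, TA, RG):
    """Same ratio after the renormalization sigma -> RG*sigma of
    Eq. (1100) in the nuclear exponent, with the free-nucleon
    denominator left unchanged."""
    x = sigma * TA
    return 2.0 * (1.0 - math.exp(-0.5 * RG * x)) / x
```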
According to Eq.~(\ref{1100}),
we will demonstrate that
GS suppresses the total photoabsorption
cross section on a nucleus, $\sigma_{tot}^{\gamma^* A}(x_{Bj},Q^2)$.
We expect a non-negligible effect of GS
in the shadowing region of small $x_{Bj}\sim 0.001\div 0.01$
and at small
and medium values of $Q^2\sim 2\div 3\,$GeV$^2$,
corresponding to the
kinematic range of the available data.
\section{Numerical results}
\label{results}
Here
we compare the available data with
realistic predictions for NS
in DIS,
based on exact numerical solutions of the
evolution equation for the Green function
corresponding to the lowest $\bar{q}q$ Fock component
of the photon.
Such a comparison is performed for the shadowing
region of small $x_{Bj}\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 0.01$.
We also take into account the contribution of GS,
which increases the overall nuclear suppression.
The effects of GS are calculated for the lowest
Fock component containing just one LC gluon.
Inclusion of higher Fock components with
more gluons is realized by eikonalization of the
calculated $R_G(x_{Bj},Q^2)$, i.e.\
using the renormalization (\ref{1100}).
\begin{figure}[htb]
\special{psfile=r-pb-all-07.eps
angle=0. voffset=-425. hoffset= 40.
hscale=60. vscale=60.}
\begin{center}
\vspace{12.9cm}
\parbox{14.0cm}
{\caption[Delta]
{Nuclear shadowing for lead.
Calculations correspond to the exact
numerical solution of the evolution equation for the Green function,
$G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)$,
for the lowest Fock component of the photon,
using the KST \cite{kst2}
parametrization of the dipole cross section and
a realistic nuclear density
function of the Woods-Saxon form \cite{saxon}.
The solid and dashed curves
represent the predictions
calculated with and without contribution of gluon
shadowing, respectively.
}
\label{r-pb-all}}
\end{center}
\end{figure}
For a numerical solution of the Schr\"odinger
equation for the Green function
we adopt an algorithm from ref.~\cite{n-03}.
Because the available data from the E665 \cite{e665} and
NMC \cite{nmc} collaborations cover the region
of small and medium values of $Q^2 \mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 4\,\,\mbox{GeV}^2$,
we use the KST parametrization of the dipole cross section
\cite{kst2}, which is valid down to the limit
of real photoproduction.
In the numerical solution
of the evolution equation for the Green function,
the imaginary part of the
LC potential (\ref{250}) contains the same KST
dipole cross section.
The nuclear density function
$\rho_{A}({b},z)$ is taken in the
realistic Woods-Saxon form with parameters from ref.~\cite{saxon}.
Because the available
data from the E665 and NMC collaborations correspond
to very small values of $Q^2 \mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 1\,\,\mbox{GeV}^2$ at
small $x_{Bj}\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 0.004$,
the nonperturbative interaction effects
between $\bar{q}$ and $q$ are included
explicitly via the real part of the LC potential of the
form (\ref{140}).
The effects of NS are studied via the $x_{Bj}$
behavior of the ratio (\ref{340}) divided by the mass
number $A$.
First, we present NS for the lead target in
Fig.~{\ref{r-pb-all}} at different fixed values of $Q^2$.
The solid and dashed curves represent the predictions
obtained with and without contribution of GS,
respectively.
One can see that, as a manifestation
of the shorter $l_c$ for higher Fock states,
the onset of GS happens at smaller
$x_{Bj}$ than that of quark shadowing.
Fig.~\ref{r-pb-all} clearly demonstrates the rather weak onset of GS in the
range $x_{Bj}\in (0.0001,\,0.01)$, where most
of the available data exist.
Besides, the effects of GS are
stronger at smaller $Q^2$, because the corresponding
Fock fluctuations of the photon have a larger
transverse size.
\begin{figure}[htb]
\special{psfile=e665-all.eps
angle=0. voffset=-420. hoffset= -10.
hscale=80. vscale=65.}
\begin{center}
\vspace{10.80cm}
\parbox{14.0cm}
{\caption[Delta]
{
Comparison of the model with experimental data
from the E665 \cite{e665} and NMC
\cite {nmc} collaborations.
Calculations correspond to the exact
numerical solution of the evolution equation for the Green function,
$G_{\bar{q}q}(\vec{r_2},z_2;\vec{r_1},z_1)$,
for the lowest Fock component of the photon,
using the KST \cite{kst2}
parametrization of the dipole cross section and
a realistic nuclear density
function of the Woods-Saxon form \cite{saxon}.
The solid and dashed curves
are calculated with and without contribution of gluon
shadowing, respectively.
}
\label{e665-all}}
\end{center}
\end{figure}
Saturation of NS
at low $x_{Bj}\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 10^{-4}$,
at the level given by Eq.~(\ref{335}),
is realized only for an energy-independent
dipole cross section
(see the parametrization (\ref{260})).
However, this is not the case for the
realistic energy-dependent KST parametrization,
Eq.~(\ref{kst-1}).
In Fig.~\ref{e665-all} we present a comparison of the
model predictions with data
from the E665 \cite{e665} and NMC \cite{nmc}
collaborations. The figure shows a quite reasonable agreement
with the experimental data, in spite of the absence of any free
parameters in the model.
One can see that the effect of GS produces
an additional NS,
which rises with the mass number $A$.
Fig.~\ref{e665-all} also demonstrates
that GS is non-negligible
already in the region $x_{Bj}\sim 0.001\div 0.01$.
The very large error bars,
especially at small $x_{Bj}\sim 10^{-4}$,
do not allow one to investigate
the effect of GS separately.
Therefore new, more accurate data on NS
in DIS at small $x_{Bj}$ are very important
for the further exploratory study of nuclear
modifications of the structure functions and
of GS.
\begin{figure}[tbh]
\special{psfile=r-pb-3-all.eps
angle=0. voffset=-355. hoffset= 30.
hscale=65. vscale=55.}
\begin{center}
\vspace{9.20cm}
\parbox{14.0cm}
{\caption[Delta]
{
Comparison of the model predictions for the ratio Pb/nucleon
obtained without (upper thick
solid line) and with GS (lower thick solid line) with other
models, versus $x_{Bj}$ at fixed $Q^2 = 3\,\,\mbox{GeV}^2$.
HKM are the results from \cite{hkm}, FS-NLO from
\cite{fs-nlo}, Bartels from \cite{bartels},
Sarcevic from \cite{sarcevic}, Frankfurt from \cite{frankfurt}
($Q^2 = 4\,\,\mbox{GeV}^2$),
Armesto from \cite{armesto}, EKS98 from \cite{eks98} and ACKLS
from \cite{ackls}.
}
\label{r-pb-3-all}}
\end{center}
\end{figure}
Finally,
Fig.~\ref{r-pb-3-all} represents
a comparison of the model predictions for NS
with the results from other models at
$Q^2 = 3\,\,\mbox{GeV}^2$ (except the results of ref.~\cite{frankfurt}
which are at $Q^2 = 4\,\,\mbox{GeV}^2$).
At small $x_{Bj} = 10^{-5}$ we predict quite large effects of GS
(compare upper and lower thick solid lines).
Note that the differences between the models rise towards
small values of $x_{Bj}$, following from their different treatment
of various nuclear effects.
The models presented in
Fig.~\ref{r-pb-3-all}
can be divided into several groups:
i) \emph{models where parton distribution functions (PDF's)
have been determined from data using the
Dokshitzer-Gribov-Lipatov-Altarelli-Parisi
(DGLAP)
evolution equation} \\
Here the nuclear PDF's have been
parametrized by performing next-to-leading order (NLO)
\cite{fs-nlo} or leading order (LO)
\cite{eks98,hkm} global
analyses of nuclear DIS and Drell-Yan data.
ii) \emph{models based on Glauber-like rescatterings}\\
Here the model of ref.~\cite{armesto} is based
on the application of a saturating ansatz for the total
$\gamma^*$-nucleon cross section in the proton.
This ansatz is then extended to the nuclear case
by inserting it into a Glauber expression.
Within the model of \cite{sarcevic},
a Glauber ansatz provides the initial
condition for DGLAP evolution.
iii) \emph{models based on Gribov inelastic shadowing} \\
Here, in ref.~\cite{ackls}, nuclear structure functions are studied
using their relation to diffraction on nucleons, which arises
from Gribov's reggeon calculus.
The model presented in \cite{frankfurt}
again employs a parametrization of hard diffraction
at the scale $Q_0^2$, which gives nuclear shadowing
through Gribov's reggeon calculus, similarly to
ref.~\cite{ackls}.
The nuclear suppression calculated at $Q^2_0$ is then
used as the initial condition for DGLAP evolution.
iv) \emph{models based on high-density QCD} \\
Here the model of ref.~\cite{bartels} is based
on a numerical solution of a non-linear
equation for small-$x_{Bj}$ evolution,
applied to
the case of nuclear targets.
Fig.~\ref{r-pb-3-all} shows large differences
in the predictions of NS
at small $x_{Bj}$ among the
different models.
This motivates obtaining
new, more accurate
data on nuclear structure functions
at the lepton-ion collider planned
at RHIC \cite{eRHIC},
which would help to discriminate among the
different models.
\section{Summary and conclusions}
\label{conclusions}
We present a short review of the color dipole
approach based
on the LC QCD Green function formalism,
which naturally incorporates the interference
effects of CT and CL.
Within this approach \cite{krt-98,krt-00,n-03}
we study NS in DIS
at small Bjorken $x_{Bj}$.
Calculations of NS corresponding
to $\bar{q}q$ component of the virtual photon
are based on an exact numerical solution
of the evolution
equation for the Green function.
This allows us to use realistic
parametrizations of the dipole cross section
(GBW \cite{gbw} and KST \cite{kst2})
and a realistic nuclear density function
of the Woods-Saxon form \cite{saxon}.
Because the available data in the shadowing
region of $x_{Bj}\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 0.01$, coming
mostly from the E665 and NMC collaborations,
cover only small and medium values
of $Q^2\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$<$}} 4\,\,\mbox{GeV}^2$, we prefer the
KST parametrization \cite{kst2}
of the dipole cross section.
On the other hand, the data obtained in
the lower part of the $x_{Bj}$ kinematic
interval correspond to very low values of
$Q^2 < 1\,\,\mbox{GeV}^2$ (the nonperturbative region).
For this reason we explicitly include
the nonperturbative interaction effects
between $\bar{q}$ and $q$ by taking into
account
the real part of the LC potential
$V_{\bar{q}q}$ (\ref{140}) in the
time-dependent two-dimensional
Schr\"odinger equation (\ref{135}).
In order to compare the realistic calculations
with data on NS, the effects of
GS are taken into account.
The same path integral technique
\cite{kst2} is applied in this case.
GS was calculated only for the lowest Fock component, $|\bar{q}qG\rangle$.
The effect of higher
Fock components containing more gluons
was essentially taken into account by eikonalization of the calculated
$R_G(x_{Bj},Q^2)$ using the renormalization (\ref{1100}).
We demonstrate that the onset of GS becomes effective
at $x_{Bj}\sim 0.01$.
GS rises
towards smaller $x_{Bj}$ because higher Fock components with more
gluons, having shorter coherence times, contribute
to the overall NS.
Such a situation is illustrated in Fig.~\ref{r-pb-all}.
Performing numerical calculations, we find that our
model is in reasonable agreement with existing
experimental data (see Fig.~\ref{e665-all}).
Large error bars and the incompatibility
of the experimental results from the E665 and NMC
collaborations do not allow
the effect of GS to be studied separately.
Therefore new, more accurate data on NS
in DIS off nuclei at still smaller $x_{Bj}\lesssim 10^{-5}$ are very important
for the further exploratory study of GS effects.
Comparison among various models shows large differences
for the Pb/nucleon ratio of the structure functions at
$x_{Bj} = 10^{-5}$ and $Q^2 = 3\,\,\mbox{GeV}^2$ (see Fig.~\ref{r-pb-3-all}).
This has a large impact on the calculation of high-$p_T$ particle
spectra
in nuclear collisions at RHIC and LHC.
Such large differences at small $x_{Bj}$ among
the different models should be testable with the
new, more precise
data on nuclear structure functions
which can
be obtained at the lepton-ion collider planned
at RHIC \cite{eRHIC}.
\medskip
\noindent
{\bf Acknowledgments}:
This work has
been supported in part by the Slovak Funding Agency, Grant No. 2/7058/27
and by grant VZ MSM 6840770039, and LC 07048 (Czech Republic).
0803.0610
\section{Introduction}
Optimal signaling through linear time--varying (LTV) channels is a challenging
task for future communication systems. For a particular realization
of the time--varying channel operator, the transmitter and receiver
design that avoids interference
is related to ``eigen--signaling''. Eigen--signaling simplifies much of the
information theoretic treatment of communication in dispersive channels.
However, it is well known that
for an ensemble of channels which are dispersive in time and frequency
such a joint diagonalization cannot be achieved, because the eigen--decompositions can
differ from one channel realization to another.
Several approaches, such as the ``basis expansion
model'' (BEM) \cite{giannakis:bem} and the canonical channel representation \cite{sayeed:jointdiversity},
have been proposed to describe eigen--signaling in some approximate sense.
A necessary prerequisite is then the characterization
of the remaining approximation errors.
A typical scenario commonly encountered
in wireless communication is signaling through a random time-varying and frequency-selective (doubly--dispersive)
channel, which in general is represented by a pseudo-differential operator $\boldsymbol{\HH}$. The abstract random
channel operating on an input signal $s:\mathbb{R}\rightarrow\mathbb{C}$ can be expressed (at least in the weak sense)
in terms of a random kernel, symbol or spreading function.
The signal $r:\mathbb{R}\rightarrow\mathbb{C}$
at the time instant $t$ at the output of the time--varying channel is then:
\begin{equation*}
r(t)=(\boldsymbol{\HH} s)(t)
\end{equation*}
It is a common assumption that knowledge of $\boldsymbol{\HH}$ at the receiver can be obtained
up to a certain accuracy by channel estimation, which
allows for coherent detection. However, channel knowledge at the transmitter
simplifies equalization and detection at the receiver and can increase the
link performance.
It can be used to perform a diagonalizing operation (i.e. eigen--signaling)
and an allocation of resources in this domain (e.g. power allocation).
From now on we shall call the first part of this description the eigenstructure of $\boldsymbol{\HH}$.
Signaling through classes of channels having a common eigenstructure could be,
in principle, interference--free and would allow for simple information-recovery algorithms
based on the received signal $r(t)$. However, for a random $\boldsymbol{\HH}$, a random eigenstructure has to be
expected in general, such that the design of the transmitter and the receiver
has to be performed jointly for ensembles of channels having different eigenstructures. Nevertheless,
interference then cannot be avoided in the communication chain. For such interference scenarios it
is important to have bounds on the distortion of a particular selected
signaling scheme. Refer for example to \cite{durisi:wssus:capacity}
for a recent application in information theory.
Initial results in this field can be found
in the literature on pseudo-differential operators
\cite{kohn:pdo:algebra,folland:harmonics:phasespace}, where the overall operator
was split up into a main part to be studied and a ``small'' operator to be controlled.
More recent results with direct application to time--varying channels
were obtained by Kozek \cite{kozek:thesis,kozek:eigenstructure}
and Matz \cite{matz:thesis}, which are connected to the notion of underspread
channels. They investigated the approximate symbol calculus of pseudo-differential
operators in this context and derived bounds for the $\Leb{2}$--norm
of the distortion which follow from the approximate product rule
in terms of Weyl symbols. We will present more details on this approach
in Section \ref{subsec:approxeigen:symbolcalculus}.
The quality of this approximation
intimately scales with the ``size'' of the spreading of the contributing
channel operators. For operators with compactly supported spreading
such a scale is $|U|$ -- the size of the spreading support $U$.
Interestingly, this approximation behavior breaks
down in their framework at a certain critical size. Channels
below this critical size are called, in their terminology, underspread and
otherwise overspread.
However, we found that previous bounds can be improved and generalized
in several directions by considering the problem of approximate eigenstructure
from another perspective, namely investigating directly the $\Leb{p}$--norm $E_p$
of the error $\boldsymbol{\HH} s-\lambda r$
for well known choices of $\lambda$. We shall focus on the case where
$\lambda$ is the symbol of the operator $\boldsymbol{\HH}$ and on the important
case where $\lambda$ is the $\Leb{2}$--minimizer, for which $E_p$ measures the
orthogonal distortion.
We believe that extensions to $p\neq2$ are important when further statistical
properties of the spreading process of the random channel operator are at hand\footnote{%
We provide further motivation and arguments in Remark \ref{rem:approxeigen:statisticalmodel} at the end of the paper.}.
Our approach will also show the connection to well known fidelity and
localization criteria related to pulse design
\cite{kozek:thesis,jung:wssuspulseshaping,jung:isit06}. In particular,
the latter is also related to the notion of localization operators \cite{daubechiez:tflocalization:geophase}.
The underspread property of doubly--dispersive channels occurs also in the context
of channel measurement and identification \cite{kailath:ltvmeasurement}.
In addition refer to the following recent articles \cite{kozek:identification:bandlimited,Pfander08:opid} for
rigorous treatments of channel identification
based on Gabor (Weyl--Heisenberg) frame theory. The authors connect
the critical time--frequency sampling density immanent in this theory to
the stability of the channel measurement. A relation between these different
notions of underspreadness has to be expected but is beyond
the scope of this paper.
The paper is organized as follows: In Section \ref{sec:timefrequencyanalysis} we shall give an
introduction to the
basics of time--frequency analysis, including the Weyl correspondence and
the spreading representation of doubly--dispersive channels. In Section \ref{sec:mainresults} we shall
consider the problem of approximate eigenstructure for operators with
spreading functions, which are supported on a common set $U$ in the time--frequency
plane having non--zero and finite
Lebesgue measure $|U|$. We present the approach for $E_2$ followed
by a summary of the main results of our analysis on $E_p$.
The detailed analysis for $E_p$ will be presented in Section \ref{sec:genanalysis}. Finally,
Section \ref{sec:numerical} contains a numerical verification of our results.
\subsection{Notation and Some Definitions}
We introduce some notation and definitions that will be used throughout the paper.
For $1\leq p<\infty$ and functions $f:\mathbb{R}^n\rightarrow\mathbb{C}$
the functionals $\lVert f\rVert_p:=\left(\int|f(t)|^pdt\right)^{1/p}$
are the usual $p$--norms
($dt$ is the Lebesgue measure on $\mathbb{R}^n$).
Furthermore, for $p=\infty$,
$\lVert f\rVert_\infty:=\text{\rm ess sup\,\,} |f(t)|$.
If $\lVert f\rVert_p$ is finite, $f$ is said to be in $\Leb{p}(\mathbb{R}^n)$.
The inner product $\langle\cdot,\cdot\rangle$
on the Hilbert space $\Leb{2}(\mathbb{R}^n)$ is given as
$\langle x,y\rangle:=\int_{\mathbb{R}^n} \bar{x}(t)y(t)dt$, where $\bar{x}(t)$ denotes
the complex conjugate of $x(t)$.
A particular dense subset of $\Leb{p}(\mathbb{R}^n)$ is
the class of Schwartz functions ${\mathcal{S}}(\mathbb{R}^n)$ (infinitely
differentiable, rapidly decreasing functions).
The notation $p'$ always denotes the dual index of $p$, i.e.
$1/p+1/p'=1$, with $p'=\infty$ if $p=1$ (and vice versa).
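The dual index appears most prominently in H\"older's inequality $|\langle f,h\rangle|\leq\lVert f\rVert_p\lVert h\rVert_{p'}$. A minimal numerical sketch (the grid, step size and random test functions are illustrative choices of ours, not from the paper) checks it for several $p$:

```python
import numpy as np

rng = np.random.default_rng(4)
dx = 0.01                       # quadrature step on [0, 1)
t = np.arange(0.0, 1.0, dx)
f = rng.standard_normal(t.size)
h = rng.standard_normal(t.size)

def lp(f, p):
    """Riemann-sum approximation of the L^p norm; p = inf gives the (essential) supremum."""
    return np.max(np.abs(f)) if np.isinf(p) else (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

# Hoelder: |<f, h>| <= ||f||_p ||h||_{p'} with 1/p + 1/p' = 1 (p' = inf for p = 1)
for p in [1.0, 1.5, 2.0, 3.0]:
    p_dual = np.inf if p == 1.0 else p / (p - 1.0)
    assert abs(np.sum(f * h) * dx) <= lp(f, p) * lp(h, p_dual) + 1e-12
```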
\section{Time--Frequency Analysis}
\label{sec:timefrequencyanalysis}
\subsection{Phase Space Displacements and Ambiguity Functions }
Several physical properties of time--varying channels (like delay and Doppler spread) are in
general related
to a time--frequency view on operators $\boldsymbol{\HH}$.
Time--frequency representations are themselves important tools in signal analysis, physics and
many other scientific areas. Among them are the
Woodward cross ambiguity function \cite{woodward:probinfradar}
and the Wigner distribution.
Ambiguity functions can be understood as
inner product representations of time--frequency shift operators.
More generally, a displacement (or shift) operator for
functions $f:\mathbb{R}^n\rightarrow\mathbb{C}$ can be
defined as:
\begin{equation}
({\boldsymbol{S}}_{\mu} f)(x):=e^{i2\pi \DotReal{\mu_2}{x}}f(x-\mu_1)
\label{eq:weyl:shift:shiftoperator}
\end{equation}
where $\mu=(\mu_1,\mu_2)\in\mathbb{R}^{2n}$ and $\mu_1,\mu_2\in\mathbb{R}^n$. In general
$\mathbb{R}^{2n}$ is called phase space.
Later on
we shall focus on $n=1$, where we have that the functions $f$ are signals in time and $\mu$ is a displacement in
time and frequency. Then the phase space is also called \emph{time--frequency plane} and
the operators ${\boldsymbol{S}}_\mu$ are \emph{time--frequency shift operators}.
There is an ambiguity as to which displacement should be performed first;
\eqref{eq:weyl:shift:shiftoperator} corresponds to the factorization
${\boldsymbol{S}}_\mu={\boldsymbol{S}}_{(0,\mu_2)}{\boldsymbol{S}}_{(\mu_1,0)}$.
However, it is well known that
a generalized view can be achieved by considering so--called $\alpha$-generalized displacements:
\begin{equation}
{\boldsymbol{S}}_{\mu}^{(\alpha)}:=
{\boldsymbol{S}}_{(0,\mu_2(\frac{1}{2}+\alpha))}
{\boldsymbol{S}}_{(\mu_1,0)}
{\boldsymbol{S}}_{(0,\mu_2(\frac{1}{2}-\alpha))}
=e^{-i2\pi(1/2-\alpha)\zeta(\mu,\mu)}{\boldsymbol{S}}_\mu
\label{eq:weyl:shift:alphageneralized}
\end{equation}
where $\zeta(\mu,\nu)=\DotReal{\mu_1}{\nu_2}$ (inner product on $\mathbb{R}^n$)
and then set ${\boldsymbol{S}}_\mu={\boldsymbol{S}}_\mu^{(1/2)}$. Usually
$\alpha$ is called \emph{polarization}.
The operators in \eqref{eq:weyl:shift:alphageneralized} act isometrically on all $\Leb{p}(\mathbb{R}^n)$,
hence are unitary on $\Leb{2}(\mathbb{R}^n)$.
Furthermore, they establish\footnote{up to unitary equivalence}
unitary representations (Schr\"odinger representation) of the Weyl--Heisenberg
group on $\Leb{2}(\mathbb{R}^n)$ (see for example \cite{folland:harmonics:phasespace}).
In physics it is common to choose the most symmetric case $\alpha=0$
and the operators are usually called Weyl operators or
Glauber displacement operators.
If we define the symplectic form as
$\eta(\mu,\nu):=\zeta(\mu,\nu)-\zeta(\nu,\mu)$,
we have the following well known \emph{Weyl commutation relation}:
\begin{equation}
\begin{split}
{\boldsymbol{S}}_{\mu}^{(\alpha)}{\boldsymbol{S}}_{\nu}^{(\beta)} =
e^{-i2\pi \eta(\mu,\nu)}{\boldsymbol{S}}_{\nu}^{(\beta)}{\boldsymbol{S}}_{\mu}^{(\alpha)}
\end{split}
\label{eq:weyl:shift:weylcommrelation}
\end{equation}
for arbitrary polarizations $\alpha$ and $\beta$.
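The commutation relation \eqref{eq:weyl:shift:weylcommrelation} can be checked numerically in a finite, cyclic analogue of the shift operators (a sketch under the assumption of a discrete model with $\mu=(m,n/N)$; this discretization is ours, not from the paper):

```python
import numpy as np

def shift(f, m, n):
    """Cyclic time-frequency shift: (S_{(m,n)} f)[k] = exp(2*pi*i*n*k/N) f[(k-m) mod N],
    a discrete analogue of S_mu with mu = (m, n/N)."""
    N = len(f)
    k = np.arange(N)
    return np.exp(2j * np.pi * n * k / N) * np.roll(f, m)

rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

m1, n1, m2, n2 = 3, 5, 7, 2
lhs = shift(shift(f, m2, n2), m1, n1)           # S_mu S_nu f
# discrete symplectic form: eta(mu, nu) = (m1*n2 - m2*n1)/N
phase = np.exp(-2j * np.pi * (m1 * n2 - m2 * n1) / N)
rhs = phase * shift(shift(f, m1, n1), m2, n2)   # e^{-2 pi i eta(mu,nu)} S_nu S_mu f
assert np.allclose(lhs, rhs)
```

The shifts commute only up to the symplectic phase, which is the discrete counterpart of the Weyl commutation relation.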
In this way a generalized (cross) ambiguity function can be defined as:
\begin{equation}
{\mathbf{A}}^{(\alpha)}_{g\gamma}(\mu)\overset{\text{def}}{=}\langle g,{\boldsymbol{S}}_\mu^{(\alpha)}\gamma\rangle
=\int_{\mathbb{R}^n}\bar{g}(x+(\frac{1}{2}-\alpha)\mu_1)\gamma(x-(\frac{1}{2}+\alpha)\mu_1)
e^{i2\pi\DotReal{\mu_2}{x}}dx
\label{eq:tfanalysis:crossamb}
\end{equation}
The function ${\mathbf{A}}^{(1/2)}_{g\gamma}$ is also known as the
\emph{Short--time Fourier transform} (sometimes also windowed Fourier transform or
Fourier--Wigner transform) of $g$
with respect to a window $\gamma$. This function is continuous for
$g\in{\mathcal{S}}(\mathbb{R}^n)$ and $\gamma\in{\mathcal{S}}'(\mathbb{R}^n)$ (the dual of
${\mathcal{S}}(\mathbb{R}^n)$, i.e. the tempered distributions).
Well-known relations for these functions, which follow
directly from the definition \eqref{eq:tfanalysis:crossamb}, are:
\begin{equation}
|{\mathbf{A}}^{(\alpha)}_{g\gamma}(\mu)|=|\langle g,{\boldsymbol{S}}_\mu^{(\alpha)}\gamma\rangle|\leq
\lVert g\rVert_2\lVert\gamma\rVert_2=\lVert{\mathbf{A}}_{g\gamma}^{(\alpha)}\rVert_2
\label{eq:tfanalysis:crossamb:properties}
\end{equation}
where the right hand side (rhs) is sometimes also called the radar uncertainty principle. For particular
weight functions $m:\mathbb{R}^{2n}\rightarrow\mathbb{R}_{+}$ the weighted $p$--norms
$\lVert{\mathbf{A}}_{g\gamma}^{(\alpha)}m\rVert_p$ are also called the modulation norms
$\lVert\gamma\rVert_{M_{m}^{p,p}}$ of $\gamma$ with respect to a Schwartz
function $g\in{\mathcal{S}}(\mathbb{R}^n)$ ($M_m^{p,p}$ is then the corresponding modulation space \cite{feichtinger:modspaces}).
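The bound \eqref{eq:tfanalysis:crossamb:properties} is easy to verify numerically via a Riemann-sum approximation of the cross ambiguity function (a sketch with Gaussian windows chosen purely for illustration):

```python
import numpy as np

dx = 0.05
x = np.arange(-10.0, 10.0, dx)

def gauss(center):
    return np.exp(-np.pi * (x - center) ** 2)

g, gamma = gauss(0.0), gauss(0.3)

def ambiguity(mu1, mu2):
    """Riemann-sum approximation of A^{(1/2)}_{g,gamma}(mu) = <g, S_mu gamma>,
    with (S_mu gamma)(x) = exp(2*pi*i*mu2*x) * gamma(x - mu1)."""
    s_mu_gamma = np.exp(2j * np.pi * mu2 * x) * gauss(0.3 + mu1)
    return np.sum(np.conj(g) * s_mu_gamma) * dx

norm = lambda f: np.sqrt(np.sum(np.abs(f) ** 2) * dx)

# Cauchy-Schwarz: |A(mu)| <= ||g||_2 ||gamma||_2 for every displacement mu
vals = [abs(ambiguity(m1, m2))
        for m1 in np.linspace(-2, 2, 9) for m2 in np.linspace(-2, 2, 9)]
assert max(vals) <= norm(g) * norm(gamma) + 1e-9
```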
Let the symplectic Fourier transform $\mathcal{F}_s F$ of a function
$F:\mathbb{R}^{2n}\rightarrow\mathbb{C}$ be defined as:
\begin{equation}
(\mathcal{F}_s F)(\mu)=\int_{\mathbb{R}^{2n}} e^{-i2\pi\eta(\nu,\mu)} F(\nu)d\nu
\label{eq:tfanalysis:symplectic:fourier}
\end{equation}
The symplectic Fourier transform of the (cross) ambiguity function $\mathcal{F}_s{\mathbf{A}}_{g\gamma}^{(\alpha)}$
is called the (cross) Wigner distribution of $g$ and $\gamma$ in polarization $\alpha$.
\subsection{Weyl Correspondence and Spreading Representation}
The operational meaning of pseudo-differential operators can
be stated with a (distributional) kernel, coordinate-based in the form of
infinite matrices or in some algebraic manner (see for example
\cite[Chapter 14]{grochenig:gaborbook}). The kernel based
description is usually written in a form like:
\begin{equation}
(\boldsymbol{\HH} \gamma)(t)=\int_{\mathbb{R}^n} h(t,t') \gamma(t')dt'
\end{equation}
with a kernel $h:\mathbb{R}^{2n}\rightarrow\mathbb{C}$ (for two Schwartz functions
$\gamma,g\in{\mathcal{S}}(\mathbb{R}^n)$ the kernel $h$ exists even as a tempered distribution, i.e.
Schwartz kernel theorem states $h\in{\mathcal{S}}'(\mathbb{R}^{2n})$ with
$\langle g,\boldsymbol{\HH}\gamma\rangle=\langle h,\bar{g}\otimes\gamma\rangle$,
see for example \cite[Thm. 14.3.4]{grochenig:gaborbook}).
However, the abstract description of $\boldsymbol{\HH}$ as a superposition of time--frequency shifts is
important and quite close to the physical modeling of time--varying channels.
We will adopt this time-frequency framework to describe the channel operators.
Let us denote by $\mathcal{T}_\infty$ the set of compact operators, i.e. every
$X\in\mathcal{T}_\infty$ can be written as $X=\sum_k s_k\langle x_k,\cdot\rangle y_k$ with
singular values $\{s_k\}$ and two orthonormal bases (singular functions) $\{x_k\}$ and $\{y_k\}$.
For $p<\infty$ the $p$th Schatten class is the set of operators
$\mathcal{T}_p:=\{X\,\,|\,\,\lVert X\rVert^p_p:=\Trace{((X^*X)^{p/2})}=\sum_k |s_k|^p<\infty\}$
where $\Trace(\cdot)$ denotes the usual trace (e.g. evaluated
in a particular basis).
Then $\mathcal{T}_p$ for $1\leq p<\infty$ are Banach spaces and
$\mathcal{T}_1\subset\mathcal{T}_p\subset\mathcal{T}_\infty$
(see for example \cite{reed:simon:fourier}).
The sets $\schattenclass_1$ and $\mathcal{T}_2$
are called the trace class and the Hilbert--Schmidt operators, respectively. The Hilbert--Schmidt operators
themselves form a Hilbert space with inner product
$\langle Y,X\rangle_{\mathcal{T}_2}:=\Trace(Y^*X)$.
For $X\in\schattenclass_1$ it holds by
properties of the trace that
$|\langle Y,X\rangle_{\mathcal{T}_2}|\leq\lVert X\rVert_1\lVert Y\rVert$, where
$\lVert\cdot\rVert$ denotes the operator norm.
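These Schatten-norm relations can be illustrated on finite matrices, where the singular values are computed directly (a minimal sketch; the random test matrices are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Y = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

def schatten(X, p):
    """Schatten p-norm: the l^p norm of the singular value sequence (p = inf: operator norm)."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.max(s) if np.isinf(p) else np.sum(s ** p) ** (1.0 / p)

# ||X||_inf <= ||X||_2 <= ||X||_1, reflecting T_1 subset T_2 subset T_inf
assert schatten(X, np.inf) <= schatten(X, 2) <= schatten(X, 1)
# Hilbert-Schmidt norm agrees with the trace formulation Tr(X^* X)^{1/2}
assert np.isclose(schatten(X, 2), np.sqrt(np.trace(X.conj().T @ X)).real)
# |<Y, X>_{T_2}| = |Tr(Y^* X)| <= ||X||_1 ||Y||  (||Y|| = operator norm)
assert abs(np.trace(Y.conj().T @ X)) <= schatten(X, 1) * schatten(Y, np.inf) + 1e-9
```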
Hence for $Y={\boldsymbol{S}}_\mu^{(\alpha)}$ given by \eqref{eq:weyl:shift:alphageneralized} one can
define analogously to the ordinary Fourier transform \cite{daubechies:integraltransform,grossmann:wignerweyliso}
a mapping $\schattenclass_1\rightarrow\Leb{2}(\mathbb{R}^{2n})$ via:
\begin{equation}
\boldsymbol{\Sigma}_{X}^{(\alpha)}(\mu)\overset{\text{def}}{=}
\langle{\boldsymbol{S}}_\mu^{(\alpha)},X\rangle_{\mathcal{T}_2}
\label{eq:tfanalysis:noncomm:fourier}
\end{equation}
In essence, the kernel $h$ of the channel operator $\boldsymbol{\HH}$ is given as
the (inverse) Fourier transform of the spreading function in the $\mu_2$ variable (see for example
\cite[Chapter 14]{grochenig:gaborbook}).
Note that $\boldsymbol{\Sigma}_{X}^{(\alpha)}(0)=\Trace(X)$ and
$|\boldsymbol{\Sigma}_{X}^{(\alpha)}(\mu)|\leq\lVert X\rVert_1$
(because $\lVert{\boldsymbol{S}}_\mu^{(\alpha)}\rVert=1$). The function
$\boldsymbol{\Sigma}_{X}^{(\alpha)}$ is sometimes
called the ``non--commutative'' Fourier transform \cite{holevo:propquantum},
characteristic function, inverse Weyl transform \cite{weyl:theoryofgroups:quantum}
or $\alpha$--generalized \emph{spreading function} of $X$ \cite{kozek:thesis,kozek:generalweyl}.
From \eqref{eq:weyl:shift:alphageneralized} it follows that
$\boldsymbol{\Sigma}_{X}^{(\alpha)}=e^{i2\pi(1/2-\alpha)\zeta(\mu,\mu)}\boldsymbol{\Sigma}_{X}^{(1/2)}$.
\begin{mylemma}[Spreading Representation]
Let $X\in\mathcal{T}_2$. Then it holds:
\begin{equation}
X
=\int_{\mathbb{R}^{2n}} \langle{\boldsymbol{S}}_\mu^{(\alpha)},X\rangle_{\mathcal{T}_2}{\boldsymbol{S}}_{\mu}^{(\alpha)} d\mu
\label{eq:tfanalysis:linop:spreading}
\end{equation}
where the integral is meant in the weak sense\footnote{
For $\lVert\boldsymbol{\Sigma}_{X}^{(\alpha)}\rVert_1<\infty$
\eqref{eq:tfanalysis:linop:spreading} is a Bochner integral.
Weak interpretation of \eqref{eq:tfanalysis:linop:spreading} as $\langle g,X\gamma\rangle$
extends the meaning of this integral
to tempered distributions
\cite[Chapter 2]{folland:harmonics:phasespace} or
\cite[Chapter 14.3]{grochenig:gaborbook}.}.
\label{lemma:tfanalysis:linop:spreading}
\end{mylemma}
The extension to the Hilbert--Schmidt operators $\mathcal{T}_2$ is
due to continuity of the mapping in \eqref{eq:tfanalysis:noncomm:fourier} and density of
$\mathcal{T}_1$ in $\mathcal{T}_2$.
A complete proof of this lemma can be found in many books on Weyl calculus
(for example matched to our notation in \cite[Chapter V]{holevo:propquantum} and \cite{Segal1963}).
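A finite-dimensional analogue of Lemma \ref{lemma:tfanalysis:linop:spreading} can be checked directly: the $N^2$ cyclic time--frequency shift matrices are orthogonal in the Hilbert--Schmidt inner product, so every $N\times N$ matrix has a spreading representation (a discrete sketch of ours, not the paper's continuous setting):

```python
import numpy as np

N = 8
k = np.arange(N)

def S(m, n):
    """Matrix of the cyclic shift (S_{(m,n)} f)[k] = exp(2*pi*i*n*k/N) f[(k-m) mod N]."""
    T = np.roll(np.eye(N), m, axis=0)             # cyclic translation by m
    M = np.diag(np.exp(2j * np.pi * n * k / N))   # modulation by n/N
    return M @ T

rng = np.random.default_rng(2)
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# Discrete spreading representation: X = (1/N) sum_{m,n} <S_{(m,n)}, X>_{HS} S_{(m,n)},
# since the N^2 shifts are HS-orthogonal with <S_a, S_a>_{HS} = N.
recon = sum(np.trace(S(m, n).conj().T @ X) * S(m, n)
            for m in range(N) for n in range(N)) / N
assert np.allclose(X, recon)
```

The factor $1/N$ plays the role of the measure $d\mu$ in \eqref{eq:tfanalysis:linop:spreading}.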
Furthermore, the following
important shift property:
\begin{equation}
\boldsymbol{\Sigma}^{(\alpha)}_{{\boldsymbol{S}}_\mu^{(\beta)}X{\boldsymbol{S}}_\mu^{(\beta)*}}(\nu)=
e^{-i2\pi\eta(\mu,\nu)}\boldsymbol{\Sigma}^{(\alpha)}_{X}(\nu)
\label{eq:tfanalysis:noncomm:shiftproperty}
\end{equation}
can be verified easily using \eqref{eq:weyl:shift:alphageneralized} and \eqref{eq:weyl:shift:weylcommrelation}.
The composition of the symplectic Fourier transform
$\mathcal{F}_s$ as defined in \eqref{eq:tfanalysis:symplectic:fourier}
with the mapping in \eqref{eq:tfanalysis:noncomm:fourier}
establishes the so called \emph{Weyl correspondence} \cite{weyl:theoryofgroups:quantum}
in a particular polarization $\alpha$ (for this generalized approach in signal
processing see also \cite{kozek:generalweyl}).
The function $\boldsymbol{L}^{(\alpha)}_{X}=\mathcal{F}_s\boldsymbol{\Sigma}^{(\alpha)}_X$ is called (generalized) \emph{Weyl symbol} of $X$.
The original Weyl symbol is $\boldsymbol{L}^{(0)}_{X}$. The cases $\alpha=\tfrac{1}{2}$ and
$\alpha=-\tfrac{1}{2}$ are also known as Kohn--Nirenberg symbol (or Zadeh's
time--varying transfer function) and Bello's
frequency--dependent modulation function \cite{bello:wssus}.
\if0
The operator $X$ can also be expressed in a kernel representation having a kernel
$k_{X}(x,y)$ defined as:
\begin{equation}
k_{X}(x,y)=(\mathcal{F}_2 \mathbf{F}^{(\alpha)}X)(y-x,p_\alpha(x,y))
\label{eq:tfanalysis:kernel}
\end{equation}
with a particular polarization map $p_\alpha(x,y):=(\frac{1}{2}-\alpha)y+(\frac{1}{2}+\alpha)x$ and
$\mathcal{F}_2$ denotes the Fourier transform in the second argument.
\fi
The Parseval identities are:
\begin{equation}
\langle X,Y\rangle_{\mathcal{T}_2}=
\langle \boldsymbol{\Sigma}^{(\alpha)}_X,\boldsymbol{\Sigma}^{(\alpha)}_Y\rangle=
\langle \boldsymbol{L}_X^{(\alpha)},\boldsymbol{L}_Y^{(\alpha)}\rangle
\label{eq:tfanalysis:weyl:parceval}
\end{equation}
for $X,Y\in\mathcal{T}_2$.
For a rank--one operator $X=\langle\gamma,\cdot\rangle g$ it follows that
$\boldsymbol{\Sigma}^{(\alpha)}_X=\bar{{\mathbf{A}}}_{g\gamma}^{(\alpha)}$
such that \eqref{eq:tfanalysis:weyl:parceval} reads in this case as:
$\langle g,Y\gamma\rangle=\langle\bar{{\mathbf{A}}}_{g\gamma}^{(\alpha)},\boldsymbol{\Sigma}^{(\alpha)}_Y\rangle$.
\section{Problem Statement and Main Results}
\label{sec:mainresults}
In this section we establish a concept which we call
the ``approximate eigenstructure''. These are sets of signals
and coefficients which fulfill the defining property
of singular values and functions up to a certain approximation error $E_p$
measured in the $p$--norm.
Part \ref{subsec:approxeigen} motivates this concept for a single channel operator.
In part \ref{subsec:approxeigen:compacteigen} of this section we then extend
this framework to a
time--frequency formulation for ``random''
time--varying channels with a common support of the spreading functions.
We consider how the approximate eigenstructure behavior scales with respect
to the particular spreading functions, which is the main problem of this paper.
Recent results in this direction are for $p=2$ and based on estimates of the approximate
product rule of Weyl symbols. We give a general formulation of this approach
and an overview of the known results for $E_2$ in part \ref{subsec:approxeigen:symbolcalculus} of this section.
After that we present in \ref{subsec:approxeigen:directapproach}
a new (direct) approach for upper-bounding $E_p$, which for our setup also yields
improved and more general estimates for $p=2$.
This part contains a summary of the main results; the more
detailed analysis is given in Section \ref{sec:genanalysis}.
\subsection{The Approximate Eigenstructure}
\label{subsec:approxeigen}
It is a common approach to describe a given channel operator $\boldsymbol{\HH}$
by a superposition of signals on which $\boldsymbol{\HH}$ acts in a rather simple way.
As already mentioned, compact operators on a Hilbert space can be formally represented as
$\boldsymbol{\HH}=\sum_{k=1}^\infty s_k\langle x_k,\cdot\rangle y_k$
with the singular values $\{s_k\}$ and singular functions
$\{x_k\}$ and $\{y_k\}$.
When transmitting an information-bearing complex data symbol $c$, for example in the form
of the signal $s=c\cdot x_k$, through $\boldsymbol{\HH}$, we know that with proper channel measurement
(obtaining $s_k$) the information can be coherently ``recovered'' from the estimate
$\langle y_k,\boldsymbol{\HH} s\rangle=s_k\cdot c$.
The crucial point here is that the transmitting device has to know and implement $\{x_k\}$ in advance.
In practical implementations, however, $\{x_k\}$ is required to be fixed and structured in some sense
(for example in the form of filterbanks).
But in general the singular functions depend explicitly on the operator
$\boldsymbol{\HH}$, i.e. they vary from one realization to another. They can be very unstructured, and it is
difficult to relate properties of $\boldsymbol{\HH}$ in such representations to physically measurable quantities.
Hence, instead of requiring $\boldsymbol{\HH} x_k=s_ky_k$ we would like
$\boldsymbol{\HH} x_k-s_ky_k$ to be ``small'' in some sense. Usually, approximation in the $\Leb{2}$--norm
is of most interest in signal design. However, there are
certain problems, such as peak power and stability issues, where stronger results are required.
Furthermore, intuitively we expect that the approximation of the singular behavior
of $\{s_k,y_k,x_k\}$ has to be ``uniform'' in more than one particular norm.
In this paper we consider the $\Leb{p}$ norms for the approximation; thus we have
the following formulation for the Hilbert space $\Leb{2}(\mathbb{R}^n)$:
\begin{mydefinition}[Approximate Eigenstructure]
Let $\epsilon$ be a given positive number. Consider $\lambda\in\mathbb{C}$ and
two functions $g,\gamma\in\Leb{2}(\mathbb{R}^n)$
with $\lVert g\rVert_2=\lVert \gamma\rVert_2=1$. If
\begin{equation}
E_p:=\lVert \boldsymbol{\HH}\gamma-\lambda g\rVert_p\leq\epsilon
\label{eq:approxeigen1}
\end{equation}
we call $\{\lambda,g,\gamma\}$ a $\Leb{p}$--approximate eigenstructure of $\boldsymbol{\HH}$
with bound $\epsilon$.
\label{def:approxeigen:Ep}
\end{mydefinition}
The set of $\lambda$'s for which there exists a $g_\lambda$ such that
$\{\lambda,g_\lambda,g_\lambda\}$ is an $\Leb{2}$--approximate eigenstructure for a common fixed
$\epsilon$ is also called the \emph{$\epsilon$--pseudospectrum}\footnote{Thanks to
T. Strohmer for informing me about this relation.} of $\boldsymbol{\HH}$.
More generally, we will also allow $g\neq \gamma$, such that
the term ``approximate singular'' functions suits our approach as well.
Obviously, for the ``true eigenstructure'' $\{s_k,y_k,x_k\}$ as defined above we have $\epsilon=0$ for each $p$ and $k$.
On the other hand, for given $g$ and $\gamma$ the minimum of the left hand side (lhs) of \eqref{eq:approxeigen1}
is achieved for $p=2$ at $\lambda=\langle g,\boldsymbol{\HH}\gamma\rangle$ such that $E_p$ for
$\{\langle g,\boldsymbol{\HH}\gamma\rangle,g,\gamma\}$ describes the amount of
\emph{orthogonal distortion} caused by $\boldsymbol{\HH}$ measured in the $p$--norm.
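The optimality of $\lambda=\langle g,\boldsymbol{\HH}\gamma\rangle$ for $p=2$, and the vanishing of $E_p$ on the true singular pairs, can be illustrated in a finite-dimensional analogue where $\boldsymbol{\HH}$ is a random matrix (an illustrative sketch of ours, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
gamma = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g /= np.linalg.norm(g)
gamma /= np.linalg.norm(gamma)

def E(lam, p=2):
    """Approximate-eigenstructure error E_p = ||H gamma - lam g||_p."""
    return np.linalg.norm(H @ gamma - lam * g, ord=p)

# lam_star = <g, H gamma> minimizes E_2 over lam (orthogonal projection onto span{g})
lam_star = np.vdot(g, H @ gamma)
for offset in [0.5, -0.5, 0.3j, 1 - 1j]:
    assert E(lam_star) <= E(lam_star + offset)

# A true singular pair of H gives a vanishing error: H v_0 = s_0 u_0
u, s, vh = np.linalg.svd(H)
assert np.isclose(np.linalg.norm(H @ vh[0].conj() - s[0] * u[:, 0]), 0.0)
```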
\subsection{The Problem Statement for Channels with Compactly Supported Spreading }
\label{subsec:approxeigen:compacteigen}
It is of general importance to what degree the Weyl symbol or a smoothed version
of it approaches the
eigen--value (or more generally singular value) characteristics of a given
channel operator $\boldsymbol{\HH}$.
Inspired from the ideas in \cite{kozek:eigenstructure} we will consider now
the following question: What is the error $E_p(\mu)$ if we approximate the action of $\boldsymbol{\HH}$ on
${\boldsymbol{S}}_\mu\gamma$ as a multiplication of ${\boldsymbol{S}}_\mu g$ by $\lambda(\mu)$?
Hence, instead of the ``true'' eigenstructure consisting of the singular values and functions of $\boldsymbol{\HH}$
we shall consider a more structured family $\{\lambda(\mu),{\boldsymbol{S}}_\mu g,{\boldsymbol{S}}_\mu\gamma\}$.
The latter intuitively probes the operator $\boldsymbol{\HH}$ locally in phase space (time--frequency),
provided $g$ and $\gamma$ are in some sense time--frequency localized around the origin.
The validity of this approximate picture, in which the function $\lambda:\mathbb{R}^{2n}\rightarrow\mathbb{C}$
now serves as a multiplicative channel, is essentially described by $E_p(\mu)$.
For example, in wireless communication ${\boldsymbol{S}}_\mu g$ and ${\boldsymbol{S}}_\mu\gamma$ could be
well--localized prototype filters at time--frequency slot $\mu$ of the receive and
transmit filterbanks of a particular communication device
and $\lambda(\mu)$ is an effective channel coefficient to be equalized.
However, with this application in mind, we are typically
confronted with \emph{random channel operators} $\boldsymbol{\HH}$ characterized by
random spreading functions $\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}$ having
a common (Lebesgue measurable) support $U$ of non--zero and finite measure
$|U|$, i.e. $0<|U|<\infty$. The assumption of a known support seems to be
the minimal a priori channel knowledge that in practice enters
the system design (e.g. of a communication device).
For example, a typical doubly--dispersive
channel model ($n=1$) for this application is that spreading
occurs in $U=[0,{\tau_d}]\times[-B_D,B_D]$ where ${\tau_d}$ and $B_D$ are the maximum
delay spread and Doppler frequency. It is then desirable to have \emph{common
prototype filters for all these channel realizations}.
It is clear that in this direction
Definition \ref{def:approxeigen:Ep} is not yet adequate. We have to measure
the approximation error relative to a certain scale of the
particular random spreading functions. In this paper we measure the approximate
eigenstructure with respect to the $\Leb{q}$--norm of the spreading function.
We believe that this approach is important for obtaining reasonable
estimates in various statistical fading and scattering environments. An example
of such an application is given in Remark \ref{rem:approxeigen:statisticalmodel}
in Section \ref{sec:numerical}.
We consider only bounded
spreading functions, such that
the operator $\boldsymbol{\HH}$ is of Hilbert--Schmidt type, i.e. $\boldsymbol{\HH}\in\mathcal{T}_2$.
To this end, let us call this set of channel operators $\text{OP}(U)$, i.e.
\begin{equation}
\text{OP}(U):=\{\,\boldsymbol{\HH}\,| \support{\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}}\subseteq U\,\text{and}\,
\sup_{\mu\in U} |\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}(\mu)|<\infty\}
\end{equation}
As already discussed for example in \cite{Pfander08:opid} the operator
class $\text{OP}(U)$ does not include limiting cases of doubly--dispersive channels like
the time--invariant channel or the identity. Generalizations,
for example in the sense of tempered distributions, are beyond the scope
of this paper. We aim at an extension of Definition \ref{def:approxeigen:Ep} for the approximate
eigenstructure which is meaningful and suited for this class of channels.
We will formulate this as our main problem of this paper:
\begin{myproblem}
Consider two functions $g,\gamma:\mathbb{R}^n\rightarrow\mathbb{C}$
with $\lVert g\rVert_2=\lVert\gamma\rVert_2=1$. Let $1\leq q\leq\infty$,
$1\leq p<\infty$ and $0<\delta<\infty$ be such that
for all operators $\boldsymbol{\HH}\in\text{OP}(U)$ it holds:
\begin{equation}
\begin{split}
E_p(\mu):
&=\lVert\boldsymbol{\HH}{\boldsymbol{S}}_\mu\gamma-\lambda(\mu){\boldsymbol{S}}_\mu g\rVert_p
\leq \delta\cdot \lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_q
\end{split}
\label{eq:approxeigen:epweyl}
\end{equation}
where the $p$--norm is with respect to the argument of the function
$\boldsymbol{\HH}{\boldsymbol{S}}_\mu\gamma-\lambda(\mu){\boldsymbol{S}}_\mu g$.
Then $\{\lambda(\mu),{\boldsymbol{S}}_\mu g,{\boldsymbol{S}}_\mu\gamma\}$ is an $\Leb{p}$-approximate eigenstructure for
{\bf all} $\boldsymbol{\HH}\in\text{OP}(U)$, each of them with
individual bound $\epsilon=\delta\cdot \lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_q$.
How small can we choose the scale $\delta$ given $g$, $\gamma$, $U$, $p$ and $q$?
What can be said about $\inf_{g,\gamma}(\delta)$?
\label{problem:approxeigen:epweyl}
\end{myproblem}
Note that, independently of the polarization $\alpha$, the operator
${\boldsymbol{S}}_\mu$ can be replaced in \eqref{eq:approxeigen:epweyl}
with any $\beta$--polarized shift
${\boldsymbol{S}}_\mu^{(\beta)}$ without change of $E_p(\mu)$.
Furthermore, as already stated in the definition of $E_p$ in \eqref{eq:approxeigen1},
we assume $\lVert g\rVert_2=\lVert\gamma\rVert_2=1$ throughout the rest of the paper.
Summarizing: how well can $\{{\boldsymbol{S}}_\mu g,{\boldsymbol{S}}_\mu\gamma\}$ serve as common
approximations (measured in the $p$--norm) to the singular functions of the operator class $\text{OP}(U)$
for fixed $U$?
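To make the objects in this problem concrete, the following is a minimal discrete sketch of our own (not the paper's construction): a toy channel built from a few spreading taps $(\tau_k,\nu_k,c_k)$, a Gaussian prototype, and $\lambda(\mu)$ chosen as the inner product $\langle{\boldsymbol{S}}_\mu g,\boldsymbol{\HH}{\boldsymbol{S}}_\mu\gamma\rangle$, which is the $E_2$--minimizer discussed later. Grid sizes and tap values are arbitrary choices for illustration.

```python
import numpy as np

# A minimal discrete sketch (illustration only): a doubly-dispersive channel
# acting as H f = sum_k c_k e^{2 pi i nu_k x} f(x - tau_k), i.e. a spreading
# "function" made of a few taps (tau_k, nu_k, c_k) inside a small box U.
N, dx = 256, 0.125
x = (np.arange(N) - N // 2) * dx

def normalize(f):
    return f / (np.linalg.norm(f) * np.sqrt(dx))

g = normalize(np.exp(-np.pi * x**2))      # Gaussian prototype, ||g||_2 = 1

def shift(f, tau, nu):
    """Time-frequency shift S_mu f = e^{2 pi i nu x} f(. - tau), tau on the grid."""
    return np.exp(2j * np.pi * nu * x) * np.roll(f, int(round(tau / dx)))

taps = [(0.0, 0.0, 0.9), (dx, 0.05, 0.08), (-dx, -0.05, 0.05)]

def H(f, taps=taps):
    return sum(c * shift(f, tau, nu) for tau, nu, c in taps)

def E2(mu, taps=taps):
    """||H S_mu g - lambda(mu) S_mu g||_2 with lambda = <S_mu g, H S_mu g>."""
    smu = shift(g, *mu)
    lam = np.vdot(smu, H(smu, taps)) * dx
    return np.linalg.norm(H(smu, taps) - lam * smu) * np.sqrt(dx)

sigma_l1 = sum(abs(c) for _, _, c in taps)   # discrete stand-in for ||Sigma_H||_1
ratio = E2((0.5, 0.25)) / sigma_l1           # small for a concentrated spreading function
```

For a single tap at the origin (a scaled identity) the residual vanishes exactly, i.e. the family is a true eigenstructure in this degenerate case.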
\subsection{Results Based on the Approximate Product Rule}
\label{subsec:approxeigen:symbolcalculus}
In previous work \cite{kozek:thesis,kozek:transferfunction,matz:thesis,matz:timefreq:transfer} results
were provided for $g=\gamma$ and (apart from \cite{matz:timefreq:characterization})
$\lambda=\boldsymbol{L}^{(\alpha)}_{\boldsymbol{\HH}}$ for the case $p=2$.
These results are obtained by considering the problem from the viewpoint of symbolic calculus
and can be summarized in the following lemma:
\begin{mylemma}
Let $\gamma=g$ and $\lambda=\boldsymbol{L}^{(\alpha)}_{\boldsymbol{\HH}}$. It holds:
\begin{equation}
E_2(\mu)
\leq\left(|\boldsymbol{L}^{(\alpha)}_{\boldsymbol{\HH}^*\boldsymbol{\HH}}(\mu)-|\boldsymbol{L}^{(\alpha)}_{\boldsymbol{\HH}}(\mu)|^2|
+\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}^*\boldsymbol{\HH}}\Omega\rVert_1+2|\boldsymbol{L}^{(\alpha)}_{\boldsymbol{\HH}}(\mu)|\cdot\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}} \Omega\,\rVert_1\right)^{\frac{1}{2}}
\label{eq:gmc:underspread:generalupperbound}
\end{equation}
where
$\Omega=|{\mathbf{A}}^{(\alpha)}_{\gamma\gamma}-1|$.
\label{lemma:gmc:underspread:generalupperbound}
\end{mylemma}
Note that the lemma does not yet require $\boldsymbol{\HH}\in\text{OP}(U)$.
The proof is given in Appendix \ref{appendix:lemma:gmc:underspread:generalupperbound:proof}.
This bound is motivated by the work of W. Kozek \cite{kozek:thesis}; however,
it is formulated here in a more general context. The first term of the bound in
\eqref{eq:gmc:underspread:generalupperbound} contains
the Weyl symbol $\boldsymbol{L}_{XY}^{(\alpha)}$ of the composition $XY$ of two operators $X$ and $Y$
($\boldsymbol{\HH}^*$ and $\boldsymbol{\HH}$ in this case), which
is the twisted multiplication \cite{pool:mathweylcorrespondence} of the symbols of the operators $X$ and $Y$.
On the level of spreading functions\footnote{%
Symbols (spreading functions) in
$\Leb{2}(\mathbb{R}^{2n})$ with twisted multiplication (convolution) are
$*$-isomorphic to the algebra of Hilbert--Schmidt operators.
},
$\boldsymbol{\Sigma}_{XY}^{(\alpha)}$ is given by the so--called
\emph{twisted convolution} $\,\natural\,_\phi$ of $\boldsymbol{\Sigma}_X^{(\alpha)}$ and $\boldsymbol{\Sigma}_Y^{(\alpha)}$
\cite{Segal1963,Kastler1965}:
\begin{equation}
\begin{split}
&\boldsymbol{\Sigma}^{(\alpha)}_{XY}(\rho)=
\int_{\mathbb{R}^{2n}} \boldsymbol{\Sigma}^{(\alpha)}_{X}(\mu)\boldsymbol{\Sigma}^{(\alpha)}_{Y}(\rho-\mu)e^{-i2\pi\phi(\mu,\rho)}d\mu
\overset{\text{def}}{=}(\boldsymbol{\Sigma}^{(\alpha)}_{X}\,\natural\,_\phi \boldsymbol{\Sigma}^{(\alpha)}_{Y})(\rho)
\end{split}
\label{eq:tfanalysis:spreading:twistedconv}
\end{equation}
with $\phi(\mu,\rho)=(\alpha+\frac{1}{2})\zeta(\mu,\rho)+(\alpha-\frac{1}{2})\zeta(\rho,\mu)-2\alpha\zeta(\mu,\mu)$.
For the polarization $\alpha=0$ it follows that $\phi(\mu,\rho)=\eta(\mu,\rho)/2$, and
$\,\natural\,_0$ is simply the conventional convolution.
Expanding $\exp(-i2\pi\phi(\mu,\rho))$ in $\mu$ as a Taylor series reveals that twisted convolutions
are weighted sums of $\,\natural\,_0$--convolutions \cite{folland:harmonics:phasespace}
related to moments of $\boldsymbol{\Sigma}^{(\alpha)}_{X}$ and $\boldsymbol{\Sigma}^{(\alpha)}_{Y}$.
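The twisted convolution \eqref{eq:tfanalysis:spreading:twistedconv} can be inspected numerically; the following Riemann-sum sketch (our own illustration) takes $n=1$ and $\alpha=0$, and assumes the symplectic form $\eta(\mu,\rho)=\mu_1\rho_2-\mu_2\rho_1$, consistent with $\phi=\eta/2$ above. Setting the phase to zero recovers the ordinary convolution $\,\natural\,_0$; in particular both versions agree at $\rho=0$, where the phase vanishes.

```python
import numpy as np

# Riemann-sum sketch of the twisted convolution natural_phi on R^2 (n = 1),
# for alpha = 0 where phi(mu, rho) = eta(mu, rho)/2; we assume the symplectic
# form eta(mu, rho) = mu_1 rho_2 - mu_2 rho_1 (a notational assumption here).
M, d = 33, 0.25
t = (np.arange(M) - M // 2) * d
T1, T2 = np.meshgrid(t, t, indexing="ij")       # mu = (T1, T2) grid
idx = np.arange(M)

def rev_shift(B, i, j):
    """Matrix with entries B(rho - mu) for rho = (t[i], t[j]), zero off-grid."""
    ia, ib = i - idx + M // 2, j - idx + M // 2
    ok_a, ok_b = (ia >= 0) & (ia < M), (ib >= 0) & (ib < M)
    Bs = np.zeros((M, M), dtype=complex)
    Bs[np.ix_(ok_a, ok_b)] = B[np.ix_(ia[ok_a], ib[ok_b])]
    return Bs

def conv(A, B, twisted=True):
    out = np.zeros((M, M), dtype=complex)
    for i in range(M):
        for j in range(M):
            # e^{-2 pi i phi(mu, rho)} with rho = (t[i], t[j])
            ph = np.exp(-1j * np.pi * (T1 * t[j] - T2 * t[i])) if twisted else 1.0
            out[i, j] = (A * rev_shift(B, i, j) * ph).sum() * d * d
    return out

A = np.exp(-(T1**2 + T2**2))                    # a toy non-negative spreading function
tw, untw = conv(A, A), conv(A, A, twisted=False)
```

Since the phase has modulus one and $A\geq 0$, the twisted result is dominated pointwise in modulus by the ordinary convolution.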
The Hausdorff--Young inequality with sharp constants
$c^2_p=p^{\frac{1}{p}}/p'^{\frac{1}{p'}}$ (and $c_1=c_\infty=1$) gives, for $1\leq p\leq 2$,
estimates on the following ``approximate product rule'' of Weyl symbols:
$\lVert\boldsymbol{L}^{(\alpha)}_{XY}-\boldsymbol{L}^{(\alpha)}_X\boldsymbol{L}^{(\alpha)}_Y\rVert_{p'}
\leq2c^{2n}_p \lVert F\rVert_p$ where
$F(\rho)=\int_{\mathbb{R}^{2n}} |\boldsymbol{\Sigma}^{(\alpha)}_{X}(\mu)\boldsymbol{\Sigma}^{(\alpha)}_{Y}(\rho-\mu)\sin(2\pi\phi(\mu,\rho))|d\mu$.
In particular, for $p=1$ we get:
\begin{equation}
\begin{split}
|\boldsymbol{L}^{(\alpha)}_{\boldsymbol{\HH}^*\boldsymbol{\HH}}-|\boldsymbol{L}^{(\alpha)}_{\boldsymbol{\HH}}|^2|\leq
2\lVert F\rVert_1\qquad\text{a.e.}
\end{split}
\label{eq:gmc:underspread:multcalc}
\end{equation}
Let us assume now that $\boldsymbol{\HH}\in\text{OP}(U)$.
With $\chi_U$ we shall denote the characteristic function of $U$ (its indicator function).
Kozek \cite[Thm. 5.6]{kozek:thesis} considered the case
$\alpha=0$, obtaining the following result:
\begin{mytheorem}[W. Kozek \cite{kozek:thesis}]
Let $U=[-\tau_0,\tau_0]\times[-\nu_0,\nu_0]$ and $\alpha=0$. If
$|U|=4\tau_0\nu_0\leq 1$ then
\begin{equation}
E_2(\mu)\leq\left(
2\sin(\frac{\pi |U|}{4})\lVert\boldsymbol{\Sigma}^{(0)}_{\boldsymbol{\HH}}\rVert^2_1+
\epsilon_\gamma\left(\lVert\boldsymbol{\Sigma}^{(0)}_{\boldsymbol{\HH}^*\boldsymbol{\HH}}\rVert_1+
2\lVert\boldsymbol{\Sigma}^{(0)}_{\boldsymbol{\HH}}\rVert^2_1\right)\right)^{\frac{1}{2}}
\label{eq:approxeigen:kozektheorem}
\end{equation}
where $\epsilon_\gamma=\lVert({\mathbf{A}}^{(0)}_{\gamma\gamma}-1)\chi_U\rVert_\infty$.
\label{thm:approxeigen:kozektheorem}
\end{mytheorem}
The proof can be found in \cite{kozek:thesis}; it also follows independently from
Lemma \ref{lemma:gmc:underspread:generalupperbound} with $\epsilon_\gamma=\lVert\Omega\chi_U\rVert_\infty$ and
$\lVert\boldsymbol{L}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_\infty\leq\lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_1$.
Further utilizing the fact
that $\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}^*\boldsymbol{\HH}}\rVert_1\leq\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert^2_1$,
Equation \eqref{eq:approxeigen:kozektheorem} can be written as:
\begin{equation}
\frac{E_2(\mu)}{\lVert\boldsymbol{\Sigma}^{(0)}_{\boldsymbol{\HH}}\rVert_1}
\leq \left(2\sin(\frac{\pi |U|}{4}) + 3\lVert({\mathbf{A}}^{(0)}_{\gamma\gamma}-1)\chi_U\rVert_\infty\right)^{\frac{1}{2}}
\label{eq:approxeigen:kozektheorem:simplified}
\end{equation}
which gives an initial answer to Problem \ref{problem:approxeigen:epweyl}.
Theorem \ref{thm:approxeigen:kozektheorem} was further extended in \cite[Thm. 2.22]{matz:thesis}
by G. Matz (see also \cite{matz:timefreq:transfer}) to a formulation in terms of weighted $1$--moments of
(not necessarily compactly supported) spreading functions. His approach
includes also different polarizations $\alpha$.
For a spreading function as in Theorem \ref{thm:approxeigen:kozektheorem} and $\alpha=0$ the results
agree with \eqref{eq:approxeigen:kozektheorem:simplified}.
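As a numerical illustration of \eqref{eq:approxeigen:kozektheorem:simplified}, the sketch below evaluates its rhs for the Gaussian prototype $\gamma(x)=2^{1/4}e^{-\pi x^2}$, using the known closed form $|{\mathbf{A}}^{(0)}_{\gamma\gamma}(\tau,\nu)|=e^{-\pi(\tau^2+\nu^2)/2}$; the particular values of $\tau_0,\nu_0$ are our own choices.

```python
import numpy as np

# Rhs of eq. (approxeigen:kozektheorem:simplified) for the Gaussian prototype
# gamma(x) = 2^{1/4} e^{-pi x^2}, whose alpha = 0 ambiguity function has the
# known closed form A(tau, nu) = e^{-pi (tau^2 + nu^2)/2} (real and positive),
# so the sup of |A - 1| over U = [-tau0, tau0] x [-nu0, nu0] sits at a corner.
def kozek_rhs(tau0, nu0):
    absU = 4 * tau0 * nu0
    assert absU <= 1, "the theorem assumes |U| <= 1"
    eps_gamma = 1 - np.exp(-np.pi * (tau0**2 + nu0**2) / 2)
    return np.sqrt(2 * np.sin(np.pi * absU / 4) + 3 * eps_gamma)

bounds = [kozek_rhs(s, s) for s in (0.05, 0.1, 0.25, 0.5)]
```

Both terms of the rhs grow with the spread, so the bound increases monotonically along this family of square regions $U$.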
Equation \eqref{eq:approxeigen:kozektheorem:simplified} can be interpreted
as saying that only the second term can be controlled
by $\gamma$ (e.g. by pulse shaping), whereas the first term of the rhs of \eqref{eq:approxeigen:kozektheorem:simplified}
is related only to the overall spread $|U|$. However, we shall show in the next section that the first
term can be eliminated
from the bound and that the second (shape--dependent) term can be further tightened.
\if0
The result of Theorem \ref{thm:approxeigen:kozektheorem}
was generalized in \cite[Thm. 2.22]{matz:thesis}
by G. Matz (see also \cite{matz:timefreq:transfer}) to a formulation in terms of weighted $1$--moments of
spreading functions
$m_{\boldsymbol{\HH}}^{(\phi)}=\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\phi\rVert_1/\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_1$
for some weight functions $\phi:\mathbb{R}^2\rightarrow\mathbb{R}_{+}$:
\begin{mytheorem}[G. Matz \cite{matz:thesis}]
Let $\boldsymbol{\HH}$ be an operator with spreading function
$\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}=\mathbf{F}^{(\alpha)}\boldsymbol{\HH}$
\begin{equation}
\frac{E_2^2}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert^2_1}\leq
2\pi\left(c_\alpha m^{\delta_{10}}_{\boldsymbol{\HH}}m^{\delta_{01}}_{\boldsymbol{\HH}}
+2|\alpha|m^{\delta_{11}}_{\boldsymbol{\HH}}\right)+
m^{(\phi)}_{\boldsymbol{\HH}^*\boldsymbol{\HH}}+2m^{(\phi)}_{\boldsymbol{\HH}}
\label{eq:approxeigen:matztheorem}
\end{equation}
where $\phi(\mu)=|1-{\mathbf{A}}_{\gamma\gamma}^{(\alpha)}(\nu)|$,
and $\delta_{kl}(\mu)=\mu_1^k\mu_2^l$ and
$c_\alpha=|\alpha+\frac{1}{2}|+|\alpha-\frac{1}{2}|$.
\label{thm:approxeigen:matztheorem}
\end{mytheorem}
This generalization
includes now different polarizations $\alpha$ and is not restricted
to the special choice of $U$. Assuming again a spreading function with limited support
$U=[-\tau_0,\tau_0]\times[-\nu_0,\nu_0]$ (as in Theorem
\ref{thm:approxeigen:kozektheorem}) one can find from
for $|\alpha|\leq 1/2$ the following simplified
bound:
\begin{equation}
\frac{E_2^2}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert^2_1}\leq
2\sin(\frac{\pi |U|}{4}c_\alpha)
+ 2\sin(\frac{\pi |U|}{2}|\alpha|)
+ 3\lVert({\mathbf{A}}_{\gamma\gamma}-1)\chi_U\rVert_\infty
\label{eq:approxeigen:matztheorem:simplified}
\end{equation}
which agrees for $\alpha=0$ with \eqref{eq:approxeigen:kozektheorem:simplified}.
\fi
\subsection{Results Based on a Direct Approach}
\label{subsec:approxeigen:directapproach}
\newcommand{\bar{\rho}}{\bar{\rho}}
\newcommand{{\bar{\rho}_\infty}}{{\bar{\rho}_\infty}}
We have considered the function $\lambda$ as the Weyl symbol
exclusively for the exponents $p=2$ and $q=1$ in Section \ref{subsec:approxeigen:symbolcalculus}. This approach
is in line with prior work of Kozek, Matz and Hlawatsch and provides
results on the approximate eigenstructure problem established in Problem \ref{problem:approxeigen:epweyl}.
To obtain further results for different values of $p$, $q$ and $\lambda$,
we shall now restart the analysis from a different perspective. In the following
we present the main results, though most of the analysis will be presented in Section
\ref{sec:genanalysis}. We use a ``smoothed'' version
of the Weyl symbol:
\begin{equation}
\lambda=\mathcal{F}_s (\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\cdot B)
\label{eq:approxeigen:lambda}
\end{equation}
where $B:\mathbb{R}^{2n}\rightarrow\mathbb{C}$ is a bounded function.
We consider two important cases:
{\bf Case C1:} Let $B={\mathbf{A}}^{(\alpha)}_{g\gamma}$ such that \eqref{eq:approxeigen:lambda}
reads as $\lambda=\boldsymbol{L}_{\boldsymbol{\HH}}^{(\alpha)}\ast\mathcal{F}_s{\mathbf{A}}^{(\alpha)}_{g\gamma}$, where
$\ast$ denotes convolution. This corresponds to the well--known
smoothing with the cross Wigner function $\mathcal{F}_s{\mathbf{A}}^{(\alpha)}_{g\gamma}$
and was already considered in \cite{matz:timefreq:characterization} (for averages
over WSSUS\footnote{Wide--sense stationary uncorrelated scattering (WSSUS)
channel model \cite{bello:wssus}}
channel ensembles). In particular,
this is exactly the orthogonal distortion:
\begin{equation}
\lambda(\mu)=\langle{\boldsymbol{S}}_\mu g,\boldsymbol{\HH}{\boldsymbol{S}}_\mu\gamma\rangle
=\left(\mathcal{F}_s(\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\cdot {\mathbf{A}}_{g\gamma}^{(\alpha)})\right)(\mu)
\end{equation}
as already mentioned in Section \ref{subsec:approxeigen}, and corresponds to the choice of the $E_2$--minimizer.
Since $\lambda$ depends in this case on $g$ and $\gamma$ we
consider here how accurately the action of operators $\boldsymbol{\HH}$ on the family $\{{\boldsymbol{S}}_\mu\gamma\}$
can be described as multiplication operators on the family $\{{\boldsymbol{S}}_\mu g\}$.
From the rule in
\eqref{eq:weyl:shift:alphageneralized}, the definition of the cross ambiguity function
in \eqref{eq:tfanalysis:crossamb} and the non--commutative Fourier
transform in \eqref{eq:tfanalysis:noncomm:fourier}
it is clear that this choice is also independent of the polarization
$\alpha$. Recall that the Weyl symbol of a rank-one operator is the Wigner distribution, so
that with this approach we again effectively compare twisted with ordinary convolution.
{\bf Case C2:} Here we consider $B=1$ such that $\lambda=\boldsymbol{L}_{\boldsymbol{\HH}}^{(\alpha)}$ is the Weyl symbol.
The function $\lambda$ is now independent of $g$ and $\gamma$.
Thus, in contrast to C1, this case is related to the ``pure'' symbol calculus.
Obviously, we have to expect now a dependency on the polarization
$\alpha$. Furthermore, this was the approach considered for $p=2$ in the previous part\footnote{However,
also there the same methodology as in \eqref{eq:approxeigen:lambda} could be applied as well.} of this section.
The first theorem parallels Theorem \ref{thm:approxeigen:kozektheorem}
and its consequence
\eqref{eq:approxeigen:kozektheorem:simplified}. We shall not yet restrict ourselves to the cases
C1 and C2. Instead we only require $B$ to be essentially bounded.
\begin{mytheorem}
Let $\boldsymbol{\HH}\in\text{OP}(U)$,
$g,\gamma\in\Leb{\infty}(\mathbb{R}^n)$ and $B\in\Leb{\infty}(\mathbb{R}^{2n})$.
For $2\leq p<\infty$ and $1\leq q\leq\infty$ (with the usual meaning for
$q=\infty$):
\begin{equation}
\frac{E_p(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q}\leq
C^{\frac{p-2}{p}}\cdot\lVert(1+|B|^2-2\Real{{\mathbf{A}}^{(\alpha)}_{g\gamma}\bar{B}})\chi_U\rVert^{1/p}_{q'/p}
\label{eq:thm:approxeigen:Ep1}
\end{equation}
where $C$ is a constant depending on $g$, $\gamma$ and $B$.
The minimum of this bound over $B$ is achieved in the case of $p=2$ for C1.
\label{thm:approxeigen:Ep1}
\end{mytheorem}
\begin{myproof}
The proof follows from the middle term of \eqref{eq:lemma:approxeigen:lemmaep3:eq1}
in Lemma \ref{lemma:approxeigen:lemmaep3} if we set $C$ as:
\begin{equation}
\begin{split}
C
&=\underset{x\in\mathbb{R}^n,\,\, \nu\in U}{\text{\rm ess sup\,\,}}|({\boldsymbol{S}}_\nu^{(\alpha)}\gamma)(x)-B(\nu)g(x)|
\leq \lVert\gamma\rVert_\infty+\lVert B\rVert_\infty\lVert g\rVert_\infty
\end{split}
\end{equation}
In Lemma \ref{lemma:approxeigen:lemmaep3} the range of $p$ is $1\leq p<\infty$. However, from the
discussion in Section \ref{subsec:uniformestimates} it is clear that
\eqref{eq:thm:approxeigen:Ep1} gives
a reasonable bound only for $p\geq 2$.
\end{myproof}
{\bf Comparison to the bound of Kozek:}
With $|1-\Real{{\mathbf{A}}_{\gamma\gamma}^{(\alpha)}}|\leq |1-{\mathbf{A}}_{\gamma\gamma}^{(\alpha)}|$
we can transform the result of the last theorem for C2 with the settings $p=2$, $q=1$
and $g=\gamma$ into a form comparable to \eqref{eq:approxeigen:kozektheorem}
and \eqref{eq:approxeigen:kozektheorem:simplified}, namely:
\begin{equation}
\frac{E_2(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_1}\leq
\left(2\lVert (1-{\mathbf{A}}^{(\alpha)}_{g\gamma})\chi_U\rVert_\infty\right)^{\frac{1}{2}}
\end{equation}
Hence this technique improves the previous bounds.
It includes different polarizations $\alpha$ and does not require any
shape or size constraints on $U$.
Interestingly, the offset in \eqref{eq:approxeigen:kozektheorem:simplified},
which does not depend on $(g,\gamma)$
and at first glance seems to be related to the notion of underspreadness,
has now disappeared.
{\bf Discussion of the critical size:}
How the bound in \eqref{eq:thm:approxeigen:Ep1} behaves in $|U|$ depends in general on the choice
of the function $B$. For example, for the case C1, $p=q=2$ and with \eqref{eq:tfanalysis:crossamb:properties}
it follows that the rhs of \eqref{eq:thm:approxeigen:Ep1} is the square root of
$|U|-\langle |{\mathbf{A}}_{g\gamma}^{(\alpha)}|^2,\chi_U\rangle$, and again with \eqref{eq:tfanalysis:crossamb:properties}
we have that:
\begin{equation}
\sqrt{|U|-\min(|U|,1)}\leq\text{rhs of \eqref{eq:thm:approxeigen:Ep1}}\leq \sqrt{|U|}.
\end{equation}
This implies that this term is of the same order as $\sqrt{|U|}$ for $|U|\gg1$
(see also Lemma \ref{lemma:approxeigen:ep:uniform} later on). The lhs of the inequality suggests that
for $|U|\leq 1$ the scaling behavior might change, i.e. $|U|=1$ is in this sense
a critical point between over- and underspread channels as introduced
in \cite{kozek:thesis}.
On the other hand, the rhs of \eqref{eq:thm:approxeigen:Ep1} does not vanish for $0<|U|\leq1$,
even though the lhs of the last inequality is zero there.
Indeed, from Theorem \ref{corr:ambbound:supp}
we have the following improved version:
\begin{equation}
\sqrt{|U|-\min(|U|e^{-\frac{|U|}{e}},1)}\leq\text{rhs of \eqref{eq:thm:approxeigen:Ep1}}\leq \sqrt{|U|}
\end{equation}
which suggests that at $|U|=e$ the behavior changes.
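The two envelopes can be inspected numerically; the short sketch below (our own illustration) also checks that $|U|e^{-|U|/e}$ attains its maximum value $1$ exactly at $|U|=e$, the point where the subtracted mass saturates.

```python
import numpy as np

# Envelopes around the rhs of eq. (thm:approxeigen:Ep1) for C1 and p = q = 2:
# sqrt(|U| - min(|U| e^{-|U|/e}, 1)) <= rhs <= sqrt(|U|).
def lower(u):
    return np.sqrt(u - np.minimum(u * np.exp(-u / np.e), 1.0))

u = np.linspace(0.01, 10.0, 1000)
lo, hi = lower(u), np.sqrt(u)

# |U| e^{-|U|/e} has its maximum value 1 exactly at |U| = e, where the
# subtracted mass saturates -- the suggested change of behavior.
peak = np.e * np.exp(-np.e / np.e)
```

In contrast to the cruder envelope with $\min(|U|,1)$, this lower envelope is strictly positive for every $|U|>0$.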
{\bf Restriction to the cases C1 and C2:} If we further restrict ourselves to $q>1$ (i.e. $q'<\infty$) we can
establish the relation to weighted norms of ambiguity functions \cite{jung:isit06}. For
simplicity let us now consider the two cases C1 ($k=1$) and C2 ($k=2$). We therefore define
the functions $A_k:\mathbb{R}^{2n}\rightarrow\mathbb{R}$ for $k=1,2$ as:
\begin{equation}
A_1:=|{\mathbf{A}}_{g\gamma}^{(\alpha)}|^2\quad\text{and}\quad A_2:=\Real{{\mathbf{A}}_{g\gamma}^{(\alpha)}}
\label{eq:approxeigen:Ak}
\end{equation}
We then have the following result:
\begin{mytheorem}
Let $\boldsymbol{\HH}\in\text{OP}(U)$ and
$g,\gamma\in\Leb{\infty}(\mathbb{R}^n)$.
For $2\leq p<\infty$, $1<q\leq\infty$ and $|U|\leq 1$ it holds:
\begin{equation}
\frac{E_p(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_{q}}\leq
C^\frac{p-2}{p}
k\left(k(|U|-\langle A_k,\chi_U\rangle)\right)^{1/\max(q',p)}
\end{equation}
where $k=1$ for C1 and $k=2$ for C2.
\label{thm:approxeigen:Ep2}
\end{mytheorem}
\begin{proof}
We now use the bound \eqref{eq:lemma:approxeigen:lemmaep3:eq2}
in Lemma \ref{lemma:approxeigen:lemmaep3} with the uniform estimate
$C_{pq}\leq k$ from Lemma \ref{lemma:approxeigen:cpb:uniform}. Again, as follows from the discussion
in Section \ref{subsec:uniformestimates}, we consider only $p\geq 2$.
\end{proof}
The assumption $|U|\leq1$ is only used to simplify the bound. Improved estimates follow
from Lemma \ref{lemma:approxeigen:lemmaep3} directly.
From the positivity of $A_1$ we observe that the orthogonal distortion
(the case C1) is always related to weighted $2$--norms of the cross ambiguity
function (the weight in this case being only $\chi_U$). For the
case C2 this can be turned into a weighted $1$--norm if
$A_2$ is positive on $U$ or fulfills certain cancellation properties. Furthermore,
the case C2 obviously depends on the polarization $\alpha$.
For particular symmetries of $U$ explicit values can be found
as shown with the following theorem (for simplicity we consider $n=1$):
\begin{mytheorem}
Let $\boldsymbol{\HH}\in\text{OP}(U)$ with $\chi_U(\mu)=\chi_U(-\mu)$, and let
$g,\gamma\in\Leb{\infty}(\mathbb{R})$ and $B\in\Leb{\infty}(\mathbb{R}^{2})$.
For $2\leq p<\infty$, $1<q\leq\infty$ and $|U|\leq 1$ it holds:
\begin{equation}
\frac{E_p(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_{q}}\leq
32^\frac{p-2}{4p}
k\left(k|U|(1-L)\right)^{1/\max(q',p)}
\label{eq:thm:approxeigen:Ep3}
\end{equation}
In general, $L\geq\lambda_{\max}(Q^*Q)^{1/k}$, where $Q$
is an operator with spreading function $\chi_U/|U|$ in
polarization $0$. Furthermore,
$L\geq l(2|U|/k)$ with $l(x)=2(1-e^{-x/2})/x$ for $U$ being a disc and
$l(x)=2\cdot\text{erf}(\sqrt{\pi x/8})^2/x$ for $U$ being a square.
\label{thm:approxeigen:Ep3}
\end{mytheorem}
\begin{proof}
We combine Lemma \ref{lemma:approxeigen:lemmaep3} and
Lemma \ref{lemma:approxeigen:localization} with the uniform estimates
$C_{pq}\leq k$ from Lemma \ref{lemma:approxeigen:cpb:uniform} such that
\begin{equation}
\frac{E_p(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_{q}}\leq
32^\frac{p-2}{4p}
k\left(k|U|(1-\lambda_{\max}(Q^*Q)^{1/k})\right)^{1/\max(q',p)}
\end{equation}
where $Q$ is a compact operator with spreading function $\chi_U/|U|$ in polarization $\alpha$.
From Lemma \ref{lemma:approxeigen:infR1:symmetric} we know
that our assumptions imply that $Q$ is Hermitian for $\alpha=0$. Therefore it holds in general
that $L=\lambda_{\max}(Q^*Q)^{1/k}=\lambda_{\max}(Q)^{2/k}$, where $\lambda_{\max}(Q)$
is at least as large as the value of the integral \eqref{eq:corr:approxeigen:lambdan} over the first Laguerre function.
We abbreviate $L=l(|U|)^{2/k}$, such that
for a disc of radius $\sqrt{|U|/\pi}$ this integral is
$l(x)=2(1-e^{-x/2})/x$ and for
a square of side length $\sqrt{|U|}$ we get $l(x)=2\cdot\text{erf}(\sqrt{\pi x/8})^2/x$.
However, Lemma \ref{lemma:approxeigen:infR1:symmetric} asserts this as an upper
bound, achieved in this case
with the Gaussian $g(x)=2^{1/4}e^{-\pi\langle x,x\rangle}$, which is tight for C2 but not for C1.
This means that for C1 the bound can be further improved by direct evaluation on Gaussians.
Indeed, from the proof of Corollary \ref{corr:approxeigen:gaussianbounds} we
know that $l(2|U|/k)\geq l(|U|)^{2/k}$ is achievable.
For $\lVert V\rVert_\infty$ we get $\lVert V\rVert_\infty=2\lVert g\rVert_\infty=32^{1/4}$.
\end{proof}
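The explicit functions $l(x)$ appearing in Theorem \ref{thm:approxeigen:Ep3}, and the improvement step $l(2|U|/k)\geq l(|U|)^{2/k}$ used in the proof, can be checked numerically; the sketch below does so for $k=1$ over an arbitrary range of areas (our own choice).

```python
import math

# The localization integrals from Theorem (thm:approxeigen:Ep3):
# disc of area x:   l(x) = 2 (1 - e^{-x/2}) / x
# square of area x: l(x) = 2 erf(sqrt(pi x / 8))^2 / x
def l_disc(x):
    return 2 * (1 - math.exp(-x / 2)) / x

def l_square(x):
    return 2 * math.erf(math.sqrt(math.pi * x / 8)) ** 2 / x

# numerical check of the proof's improvement step for C1 (k = 1):
# l(2x) >= l(x)^2 over a range of areas
ok = all(
    l_disc(2 * x) >= l_disc(x) ** 2 - 1e-12
    and l_square(2 * x) >= l_square(x) ** 2 - 1e-12
    for x in (0.1 * i for i in range(1, 40))
)
```

Both functions decrease monotonically in the area, reflecting the loss of localization for large $U$.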
This result can be extended in part to regions $U$ which are canonically equivalent
to discs and squares centered at the origin (see the discussion at the beginning
of Section \ref{subsec:approxeigen:laguerre}).
However, this holds in principle only for $p=2$ because such canonical transformations
will change the constants in \eqref{eq:thm:approxeigen:Ep3}.
\section{General Analysis and Proofs}
\label{sec:genanalysis}
With the following lemma we separate
the support of the spreading function $\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}$ from the quantity $E_p(\mu)$.
We shall make use of the
non--negative function $V:\mathbb{R}^{n}\times\mathbb{R}^{2n}\rightarrow\mathbb{R}_{+}$, defined as:
\begin{equation}
V(x,\nu):=|({\boldsymbol{S}}_\nu^{(\alpha)}\gamma)(x)-B(\nu)g(x)|\cdot \chi_U(\nu)\geq 0
\label{eq:approxeigen:defV}
\end{equation}
and of the functionals $V_p(\nu):=\lVert V(\cdot,\nu)\rVert_p$, i.e.
the usual $p$--norms in the first argument.
To simplify our analysis we shall restrict ourselves
to indicator weights $\chi_U$ (the characteristic function of $U$).
However, the same can be repeated with slight abuse of notation using
more general weights.
\begin{mylemma}
Let $\boldsymbol{\HH}\in\text{OP}(U)$,
$1\leq p<\infty$, $1\leq q\leq\infty$.
If $V(\cdot,\nu)\in\Leb{p}(\mathbb{R}^n)$ for all $\nu\in U$ then it
holds:
\begin{equation}
E_p(\mu)/\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q\leq\lVert V_p\rVert_{q'}
\label{eq:approxeigen:lemmaep1:1}
\end{equation}
whenever $\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\in\Leb{q}(\mathbb{R}^{2n})$
and $V_p\in\Leb{q'}(\mathbb{R}^{2n})$.
\label{lemma:approxeigen:lemmaep1}
\end{mylemma}
\begin{myproof}
Firstly, using Weyl's commutation rule \eqref{eq:weyl:shift:weylcommrelation} and
the definition of $\lambda$ in
\eqref{eq:approxeigen:lambda} gives:
\begin{equation}
\begin{split}
E_p(\mu)
&\overset{\eqref{eq:approxeigen:epweyl}}{=}\lVert\int_{\mathbb{R}^{2n}} d\nu\,
\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}(\nu){\boldsymbol{S}}_\nu^{(\alpha)}{\boldsymbol{S}}_\mu\gamma-
\lambda(\mu){\boldsymbol{S}}_\mu g
\rVert_p\\
&\overset{\eqref{eq:weyl:shift:weylcommrelation}}{=}
\lVert{\boldsymbol{S}}_\mu\left(
\int_{\mathbb{R}^{2n}} d\nu\,\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}(\nu)
e^{-i2\pi\eta(\nu,\mu)}
{\boldsymbol{S}}_\nu^{(\alpha)}\gamma-
\lambda(\mu) g\right)
\rVert_p \\
&\overset{\eqref{eq:approxeigen:lambda}}{=}
\lVert\int_{\mathbb{R}^{2n}} d\nu\, \boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}(\nu)
e^{-i2\pi\eta(\nu,\mu)}
({\boldsymbol{S}}_\nu^{(\alpha)}\gamma-B(\nu)g)
\rVert_p\,.\\
\end{split}
\label{eq:approxeigen:lemmaep1:proof1}
\end{equation}
Note that the $p$--norm is with respect to the argument of the functions $g$
and ${\boldsymbol{S}}_\nu^{(\alpha)}\gamma$.
The last step follows because ${\boldsymbol{S}}_\mu^{(\alpha)}$ acts
isometrically on all $\Leb{p}(\mathbb{R}^n)$.
Let $f:\mathbb{R}^n\times\mathbb{R}^{2n}\rightarrow\mathbb{C}$ be the function defined as:
\begin{equation}
f(x,\nu):=e^{-i2\pi\eta(\nu,\mu)}
\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}(\nu)[({\boldsymbol{S}}_\nu^{(\alpha)}\gamma)(x)-B(\nu)g(x)]
\end{equation}
From $\boldsymbol{\HH}\in\text{OP}(U)$ (bounded spreading functions) and $V(\cdot,\nu)\in\Leb{p}(\mathbb{R}^n)$ for all $\nu\in U$
it follows
that $f(\cdot,\nu)\in\Leb{p}(\mathbb{R}^n)$.
Then \eqref{eq:approxeigen:lemmaep1:proof1} reads for $1\leq p<\infty$, by
Minkowski's (triangle) inequality,
\begin{equation}
\begin{split}
E_p(\mu)
&=\lVert\int_{\mathbb{R}^{2n}} d\nu f(\cdot,\nu)\rVert_p
\leq\int_{\mathbb{R}^{2n}} d\nu\lVert f(\cdot,\nu)\rVert_p
=\lVert \boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\cdot V_p\rVert_1
\leq \lVert \boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q\lVert V_p\rVert_{q'}
\end{split}
\end{equation}
In the last step we used H\"older's inequality, such that
the claim of this lemma follows.
\end{myproof}
\subsection{The Relation to Ambiguity Functions}
In the next lemma we shall show that
$\lVert V_p\rVert_{q'}$ can be related to ambiguity functions, which arise
for $p=2$. We introduce $R:\mathbb{R}^{2n}\rightarrow\mathbb{R}_{+}$ as
the non--negative function:
\begin{equation}
R:=V_2^2=(1+|{\mathbf{A}}^{(\alpha)}_{g\gamma}-B|^2-|{\mathbf{A}}^{(\alpha)}_{g\gamma}|^2)\cdot\chi_U\geq 0
\label{eq:approxeigen:defR}
\end{equation}
and abbreviate $R_s:=\lVert R\rVert_s$. Using \eqref{eq:approxeigen:Ak} we
can write \eqref{eq:approxeigen:defR} for the cases C1 and C2 as $R=k(1-A_k)\cdot\chi_U$ for $k=1,2$.
From the non--negativity of $R$ it follows
that:
\begin{equation}
R_1=|U|+\lVert({\mathbf{A}}^{(\alpha)}_{g\gamma}-B)\chi_U\rVert_2^2-\lVert{\mathbf{A}}^{(\alpha)}_{g\gamma}\chi_U\rVert_2^2
\label{eq:approxeigen:R1}
\end{equation}
Hence, $R_1$ reflects an interplay between two localization criteria
in the phase space.
In particular, we get for C1 and for C2:
\begin{equation}
R_1=k(|U|-\langle A_k,\chi_U\rangle)
\label{eq:approxeigen:R1:special}
\end{equation}
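For a Gaussian prototype the localized mass $\langle A_1,\chi_U\rangle$ in \eqref{eq:approxeigen:R1:special} has a closed form on a disc; the sketch below (our own illustration, using the known closed form $|{\mathbf{A}}_{gg}|^2=e^{-\pi(\tau^2+\nu^2)}$ for $g(x)=2^{1/4}e^{-\pi x^2}$) cross-checks a Riemann sum against it.

```python
import numpy as np

# Case C1 with a Gaussian prototype g(x) = 2^{1/4} e^{-pi x^2} (n = 1):
# A_1 = |A_gg|^2 = e^{-pi (tau^2 + nu^2)}, so for a disc U of area |U|
# the closed form is <A_1, chi_U> = 1 - e^{-|U|}, i.e.
# R_1 = |U| - (1 - e^{-|U|}).  Riemann-sum cross-check:
def r1_numeric(area, M=801):
    r = np.sqrt(area / np.pi)                  # disc radius for the given area
    t = np.linspace(-r, r, M)
    d = t[1] - t[0]
    T, Nu = np.meshgrid(t, t, indexing="ij")
    disc = T**2 + Nu**2 <= r**2
    return area - (np.exp(-np.pi * (T**2 + Nu**2)) * disc).sum() * d * d

def r1_closed(area):
    return area - (1 - np.exp(-area))
```

Note that $R_1$ is positive and grows almost linearly in $|U|$ once $|U|\gg1$, in line with the interpretation of $R_1$ as a localization deficit.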
With the following lemma we shall make explicit the
relation between the bound $\lVert V_p\rVert_{q'}$ in Lemma \ref{lemma:approxeigen:lemmaep1} and the quantity $R_1$.
\begin{mylemma}
For $1\leq p<\infty$ and $1\leq q'\leq\infty$ it
holds (with the usual meaning for $q'=\infty$):
\begin{equation}
\lVert V_p\rVert_{q'}\leq \lVert V\rVert_\infty^\frac{p-2}{p}
\cdot R_{q'/p}^{1/p}
\label{eq:lemma:approxeigen:lemmaep3:eq1}
\end{equation}
Equality is achieved for $p=2$ and then
the minimum over $B$ of the rhs is achieved for C1.
For $q'<\infty$ let
$C_{pq}=R_\infty^\frac{q'-p}{q'p}$ for $p\leq q'$ and
$C_{pq}=|U|^\frac{p-q'}{q'p}$ else.
Then it holds further that:
\begin{equation}
\lVert V_p\rVert_{q'}\leq
\lVert V\rVert_\infty^\frac{p-2}{p}\cdot C_{pq}\cdot
R_1^{1/\max(p,q')}
\label{eq:lemma:approxeigen:lemmaep3:eq2}
\end{equation}
with equality for $q'=p=2$.
\label{lemma:approxeigen:lemmaep3}
\end{mylemma}
The proof can be found in the Appendix \ref{appendix:lemma:approxeigen:lemmaep3:proof}.
The main reason for this lemma, in particular for its second part, is that it opens up for
case C1 the relation to weighted norms of ambiguity functions (i.e. localization of $A_k$ on $U$).
For C2, however, we are also concerned with the question of positivity (and cancellation properties) on $U$.
We shall study these relations in more detail in Section \ref{subsec:localization}.
\subsection{Uniform Estimates}
\label{subsec:uniformestimates}
As already mentioned before, for a ``true'' eigenstructure we have $E_p=0$ for all $p$,
such that the notion of approximate eigenstructure should be in some sense uniform in $q$ and $p$.
In the first step it is therefore necessary to validate
uniform bounds for $C_{pq}$.
We observe that $\lVert V\rVert_\infty^\frac{p-2}{p}$ will then restrict
the application of Lemma \ref{lemma:approxeigen:lemmaep3} only to $p\geq 2$ because
$\lVert V\rVert_\infty$ will in general be small. For example, for C2 and
$g=\gamma$, let $|U|\rightarrow 0$
in \eqref{eq:approxeigen:defV}. This behavior has to be expected because
the ambiguity function is a $\Leb{2}$--related construction and from
$\Leb{2}$ boundedness one can only with
further decay conditions infer $\Leb{p}$--boundedness for $p<2$. Consequently we
shall restrict the following analysis to
$2\leq p<\infty$ such that $\sup\lVert V\rVert_\infty^\frac{p-2}{p}=\max(\lVert V\rVert_\infty,1)$.
For $\lVert V\rVert_\infty$ we can use for example a worst case estimate of the form
$\lVert V\rVert_\infty\leq\lVert\gamma\rVert_\infty+\lVert B\rVert_\infty\cdot\lVert g\rVert_\infty
\leq\lVert\gamma\rVert_\infty+\lVert g\rVert_\infty$
which is valid for C1
($\lVert B\rVert_\infty=\lVert{\mathbf{A}}_{g\gamma}\rVert_\infty\leq1$
by \eqref{eq:tfanalysis:crossamb:properties})
and C2.
\begin{mylemma}[Uniform Bounds for $C_{pq}$]
For $2\leq p<\infty$ and
$1<q\leq\infty$ the uniform estimate
$C_{pq}\leq k$ holds if $q'\geq p$,
where $k=1$ for C1 and $k=2$ for C2. If $q'<p$ then it holds that
$C_{pq}\leq\max(|U|,1)$.
\label{lemma:approxeigen:cpb:uniform}
\end{mylemma}
\begin{proof}
It is easily verified that
$\sup |U|^\frac{p-q'}{q'p}=\max(|U|,1)$, where the supremum is over all
$1\leq q'<p$ and $2\leq p<\infty$.
The same holds also for $1\leq p<\infty$.
Similarly, for the quantity $R_\infty^\frac{q'-p}{q'p}$ we get the
uniform estimate $\sup R_\infty^\frac{q'-p}{q'p}=\max(\sqrt{R_\infty},1)$, where $p\leq q'\leq\infty$ and $2\leq p<\infty$.
For $1\leq p<\infty$ we would get instead $\max(R_\infty,1)$.
From the non--negativity of $R$ it follows that:
\begin{equation}
R_\infty
=k(1-\underset{\nu\in U}{\text{\rm ess inf\,\,}} A_k(\nu))
\end{equation}
From \eqref{eq:tfanalysis:crossamb:properties} it follows
that the inequality $R_\infty\leq 1$ is always fulfilled for C1.
For the case C2 we instead get that $R_\infty\leq 4$ in general.
\end{proof}
The following lemma provides a simple upper bound on $E_p/\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q$
which is, for $p=2$, uniform in $g$ and $\gamma$. Thus, it will serve as a benchmark.
\begin{mylemma}[Uniform Bound for $E_p/\lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_q$]
For $1\leq p<\infty$ and
$1<q\leq\infty$ it holds:
\begin{equation}
\frac{E_p(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q}\leq
\lVert V\rVert_\infty^\frac{p-2}{p}\cdot k^{2/p}\cdot |U|^{1/q'}
\label{eq:approxeigen:ep:uniform}
\end{equation}
with $k=1$ for C1 and $k=2$ for C2.
\label{lemma:approxeigen:ep:uniform}
\end{mylemma}
\begin{proof}
We use $R_1\leq R_\infty\cdot |U|$ in \eqref{eq:lemma:approxeigen:lemmaep3:eq1} of Lemma
\ref{lemma:approxeigen:lemmaep3} and the uniform estimate $R_\infty\leq k^2$
from Lemma \ref{lemma:approxeigen:cpb:uniform}.
\end{proof}
This bound cannot be related to ambiguity functions, i.e. it gives no insight
into possible improvements due to localization.
\subsection{Weighted Norms of Ambiguity Functions and Localization}
\label{subsec:localization}
In the previous section we have shown that $R_1$ is a relevant term, which controls
the approximate eigenstructure.
In the following analysis we shall further investigate $R_1$. We are interested in
$\inf_{g,\gamma}(R_1)$ which is:
\begin{equation}
\inf_{g,\gamma} R_1=k|U|\left(1-\sup_{g,\gamma}\langle A_k,\boldsymbol{C}\rangle\right)
\label{eq:approxeigen:localization:R1}
\end{equation}
where $\boldsymbol{C}:=\chi_U/|U|$.
Thus, \eqref{eq:approxeigen:localization:R1} is a particular case
of a more general problem, where $\boldsymbol{C}$ is some arbitrary non--negative weight
function.
Let us therefore consider $\sup_{g,\gamma}\langle A_k,\boldsymbol{C}\rangle$ and
focus first only on $A_1=|{\mathbf{A}}_{g\gamma}^{(\alpha)}|^2$, which is also positive.
Since $A_1$ is quadratic in $\gamma$ we can rewrite
$\langle A_1,\boldsymbol{C}\rangle=\langle \gamma,L_{\boldsymbol{C},g}\gamma\rangle$ where this quadratic
form defines (weakly) an operator $L_{\boldsymbol{C},g}$.
Such operators are also called \emph{localization operators} \cite{daubechiez:tflocalization:geophase}
and it follows that $\sup_{\gamma}\langle A_1,\boldsymbol{C}\rangle=\lambda_{\max}(L_{\boldsymbol{C},g})$.
The eigen--functions of Gaussian localization operators ($g$ is set to be a Gaussian)
on the disc ($U$ is a disc)
are known to be Hermite functions (more generally this holds if $\boldsymbol{C}$ has elliptical symmetry).
Kozek \cite{kozek:thesis,kozek:eigenstructure} found that
for elliptical symmetry also the joint optimization results in Hermite
functions\footnote{Kozek considered $g=\gamma$. However, one can show that
for elliptical symmetry around the origin the optimum also has this property.}.
For $\boldsymbol{C}$ being Gaussian
the joint optimum ($g$ \emph{and} $\gamma$) is known explicitly \cite{jung:isit06}.
The last result is based on
a theorem, formulated in \cite{jung:isit06}, which we will need
also in this paper. Let us consider for simplicity once again the
one--dimensional case (the generalizations for $n>1$ are similar),
i.e. for $n=1$ we have:
\begin{mytheorem}
Let $\lVert g\rVert_2=\lVert\gamma\rVert_2=1$ and
$s,r\in\mathbb{R}$. Furthermore
let $\boldsymbol{C}\in\Leb{s'}(\mathbb{R}^2)$. Then the inequality:
\begin{equation}
\langle|{\mathbf{A}}^{(\alpha)}_{g\gamma}|^r, \boldsymbol{C} \rangle \leq
\left(\frac{2}{rs}\right)^{\frac{1}{s}}\lVert\boldsymbol{C}\rVert_{s'}
\label{eq:jung:fidelitybound}
\end{equation}
holds for each $s\geq\max\{1,\frac{2}{r}\}$.
\label{thm:jung:fidelitybound}
\end{mytheorem}
From \eqref{eq:weyl:shift:alphageneralized} it follows that \eqref{eq:jung:fidelitybound} does not depend
on the polarization $\alpha$.
The proof can be found in \cite{jung:isit06} and is based on a result of
E. Lieb \cite{lieb:ambbound}. Note that apart from the
normalization constraint the bound in Theorem \ref{thm:jung:fidelitybound}
does not depend anymore on $g$ and $\gamma$. Hence
for any given $\boldsymbol{C}$ the optimal bound $N_r(\boldsymbol{C})$
can be found by
\begin{equation}
N_r(\boldsymbol{C}):=\min_{\mathbb{R}\ni s\geq\max\{1,\frac{2}{r}\}}\left(
\left(\frac{2}{rs}\right)^{\frac{1}{s}}\lVert\boldsymbol{C}\rVert_{s'}
\right)
\label{eq:jung:fidelitybound:min}
\end{equation}
Equality in Theorem \ref{thm:jung:fidelitybound}
is attained for $g$, $\gamma$ and $\boldsymbol{C}$ being Gaussians (see \cite{jung:isit06}
for more details).
The following lemma states lower and upper bounds on the optimal achievable values of
the quantities $\langle A_k,\boldsymbol{C}\rangle$.
\begin{mylemma}
Let $\boldsymbol{C}:\mathbb{R}^{2n}\rightarrow\mathbb{R}_{+}$ be a non--negative weight function with $\lVert\boldsymbol{C}\rVert_1=1$.
Then it holds:
\begin{equation}
\lambda_{\max}(Q^*Q)\leq \sup_{g,\gamma}\langle A_1,\boldsymbol{C}\rangle\leq
N_2(\boldsymbol{C})
\end{equation}
for case C1 and correspondingly for case C2:
\begin{equation}
\lambda_{\max}(Q^*Q)^{1/2}=\max_{g,\gamma}\langle A_2,\boldsymbol{C}\rangle\leq
N_1(\boldsymbol{C})
\end{equation}
where $Q$ is the operator with spreading function $\boldsymbol{C}$ in polarization $\alpha$.
\label{lemma:approxeigen:localization}
\end{mylemma}
\begin{proof}
Consider first the case C1 (that is, $k=1$), which
is independent of the polarization $\alpha$. The corresponding term
$\langle A_1,\boldsymbol{C}\rangle$ is relevant in the theory of WSSUS pulse shaping \cite{jung:wssuspulseshaping}
where $\boldsymbol{C}$ is called the scattering function.
In \cite{jung:isit05} we have already pointed out that a lower bound can be obtained from
convexity. We have:
\begin{equation}
|\langle g,Q\gamma\rangle|^2\leq\langle A_1,\boldsymbol{C}\rangle\leq N_2(\boldsymbol{C})
\end{equation}
where $Q$ is a compact operator (compactness follows from the normalization) with spreading function $\boldsymbol{C}$.
The uniform upper bound follows from \eqref{eq:jung:fidelitybound:min}.
The optimum of the lower bound is achieved for $g$ and $\gamma$ being the eigen--functions
of $Q^*Q$ and $QQ^*$ corresponding to the maximal eigen--value $\lambda_{\max}(Q^*Q)$, such that
for the supremum over $g$ and $\gamma$ it follows that:
\begin{equation}
\lambda_{\max}(Q^*Q)\leq\sup_{g,\gamma}\langle A_1,\boldsymbol{C}\rangle\leq N_2(\boldsymbol{C})
\end{equation}
For the case C2 ($k=2$) we proceed as follows.
For a given $\gamma$ we have:
\begin{equation}
\langle A_2,\boldsymbol{C}\rangle=\frac{1}{2}\left(\langle Q\gamma,g\rangle+\langle g,Q\gamma\rangle\right)
\leq\lVert Q\gamma\rVert_2
\end{equation}
with equality in the last step for $g=Q\gamma/\lVert Q\gamma\rVert_2$. Choosing $\gamma$
from the eigen--space of $Q^*Q$ related to the maximal eigen--value, we get:
\begin{equation}
\lambda_{\max}(Q^*Q)^{1/2}
=\max_{g,\gamma}\langle A_2,\boldsymbol{C}\rangle\leq N_1(\boldsymbol{C})
\end{equation}
because
$\langle A_2,\boldsymbol{C}\rangle
\leq\langle|A_2|,\boldsymbol{C}\rangle\leq\langle\sqrt{A_1},\boldsymbol{C}\rangle
\leq N_1(\boldsymbol{C})$ where again $N_1$ is from
\eqref{eq:jung:fidelitybound:min}.
\end{proof}
For the particular weight function of interest in this paper, i.e. for $\boldsymbol{C}=\chi_U/|U|$, the upper
bounds can be calculated explicitly. For $n=1$ we get the following result:
\begin{mycorollary}[Norm Bounds for Flat Scattering]
Let $\boldsymbol{C}:=\chi_U/|U|$.
Then it holds that:
\begin{equation}
\langle |{\mathbf{A}}^{(\alpha)}_{g\gamma}|^r, \boldsymbol{C} \rangle <N_r(\boldsymbol{C})=
\begin{cases}
e^{-\frac{r|U|}{2e}} & |U|\leq 2e/r^*\\
\left(\frac{2}{r^*|U|}\right)^{r/r^*} & \text{else}
\end{cases}
\label{eq:ambbound:supp:best}
\end{equation}
where $r^*=\max\{r,2\}$. It is not possible to achieve equality.
\label{corr:ambbound:supp}
\end{mycorollary}
The proof is independent of $\alpha$ and can be found in \cite{jung:isit06}.
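As a cross-check (ours, not from \cite{jung:isit06}): for $\boldsymbol{C}=\chi_U/|U|$ one has $\lVert\boldsymbol{C}\rVert_{s'}=|U|^{-1/s}$, so the minimization \eqref{eq:jung:fidelitybound:min} reduces to minimizing $(2/(rs|U|))^{1/s}$ over $s\geq\max\{1,2/r\}$. A simple grid search reproduces both branches of \eqref{eq:ambbound:supp:best}:

```python
import math

def N_closed(r, U):
    # piecewise formula of the corollary, with r* = max(r, 2)
    r_star = max(r, 2.0)
    if U <= 2.0 * math.e / r_star:
        return math.exp(-r * U / (2.0 * math.e))
    return (2.0 / (r_star * U)) ** (r / r_star)

def N_numeric(r, U, s_max=50.0, n=100000):
    # minimize (2/(r*s*|U|))**(1/s) over s >= max(1, 2/r) by brute-force grid search
    s_min = max(1.0, 2.0 / r)
    best = float("inf")
    for i in range(n + 1):
        s = s_min + (s_max - s_min) * i / n
        best = min(best, (2.0 / (r * s * U)) ** (1.0 / s))
    return best
```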
\begin{myremark}
When using the WSSUS model \cite{bello:wssus} for doubly--dispersive
mobile communication channels one typically assumes
time--frequency scattering within a shape
$U=[0,\tau_d]\times[-B_d,B_d]$
such that $|U|=2B_d\tau_d\ll 1< e$, where $B_d$ denotes the maximum Doppler bandwidth
and $\tau_d$ the maximum delay spread.
Then \eqref{eq:ambbound:supp:best} predicts for a $\Leb{1}$--normalized scattering function $\boldsymbol{C}:=|U|^{-1}\chi_U$, that
the best (mean) correlation response ($r=2$) in using filter $g$ at the
receiver and $\gamma$ at the transmitter is bounded above by
$e^{-2B_d\tau_d/e}$.
\end{myremark}
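For illustration (the channel parameters below are hypothetical, chosen only as typical mobile-radio magnitudes):

```python
import math

B_d, tau_d = 100.0, 5e-6       # hypothetical: 100 Hz Doppler bandwidth, 5 us delay spread
U = 2.0 * B_d * tau_d          # |U| = 2 * B_d * tau_d = 1e-3
bound = math.exp(-U / math.e)  # upper bound e^{-2 B_d tau_d / e} on the r = 2 response
```

For such small spreads the predicted fidelity loss is well below one percent.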
\if0
Again --- Corollary \ref{corr:ambbound:supp} can be transformed into
a necessary condition on the support $|U|$ for achieving a certain level
$\delta$ of this response, i.e.
\begin{equation}
\delta\leq N_r(\boldsymbol{C})\,
\Longrightarrow\,
|U|\leq \begin{cases}
-\frac{2e}{r}\ln\delta& \delta\leq e^{-r/r^*}\\
\frac{2}{r^*}\delta^{-r^*/r}& \text{else}
\end{cases}
\end{equation}
For example -- a necessary condition to achieve a mean correlation ($r=2$) response
$\delta\geq 1/e$ is that the size of support $|U|$ of a flat scattering distribution
has to be smaller than (or equal to) $e$.
\fi
From the definition of $R_1$ in \eqref{eq:approxeigen:R1} and from
\eqref{eq:ambbound:supp:best} of Corollary \ref{corr:ambbound:supp}
we know
that for $|U|\leq e$ we have the estimate:
\begin{equation}
k|U|(1-e^{-\frac{|U|}{ke}})<\inf_{g,\gamma}(R_1)\leq k|U|(1-\lambda_{\max}(Q^*Q)^{1/k})
\end{equation}
which are implicit inequalities for $|U|$.
The restriction $|U|\leq e$ for the lower bound can be removed if the second
alternative in \eqref{eq:ambbound:supp:best} of Corollary \ref{corr:ambbound:supp} is further
studied. However, for simplicity
we have considered only the first region which is suited to our application (small $|U|$).
In particular,
with $R_1\leq R_\infty |U|$ we have
also $R_\infty\geq k(1-e^{-\frac{|U|}{k e}})$.
This proves also the assertion in \cite{jung:isit07}, i.e.
a necessary condition for $R_\infty\leq1$ is that $|U|\leq 2e\ln2$. Furthermore
for $R_\infty\rightarrow k$ the size constraint on $U$ vanishes.
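The threshold $2e\ln2$ is precisely where the lower bound $k(1-e^{-|U|/(ke)})$ with $k=2$ reaches one; a two-line numerical check (ours):

```python
import math

def R_inf_lower(U, k):
    # lower bound R_inf >= k * (1 - exp(-|U|/(k e))) derived above
    return k * (1.0 - math.exp(-U / (k * math.e)))

U_star = 2.0 * math.e * math.log(2.0)   # claimed threshold for case C2 (k = 2)
```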
\if0
It might be relevant how to fulfill this implicit requirement
for a given $R_1$. This can be found
by considering the following bound for the exponential part of the implicit inequality,
i.e. let $c_k$ a constant such that
$e^{-\frac{|U|}{ek}}\geq (1+c_k|U|)^{-1}$ holds for all $|U|\in[0,e]$.
Both sides are convex and monotone decreasing in $|U|$, agree at $|U|=0$ and with
$c_k\geq(e^{1/k}-1)/e$ this inequality is correct for all $|U|\in[0,e]$. Now --
from
\begin{equation}
|U|(1-e^{-\frac{|U|}{ek}})\leq\frac{|U|^2}{1/c_k+|U|}\leq\delta
\end{equation}
we get the explicit relation:
\begin{equation}
|U|\leq\frac{\delta}{2}+\sqrt{\frac{\delta^2}{4}+\frac{\delta}{c_k}}
\end{equation}
Summarizing, if for a given $\delta$ the size $|U|$ of the support fulfills the
last inequality the necessary condition for $R_1\leq\delta$ from Lemma \ref{lemma:approxeigen:supportonu}
is fulfilled as well.
\fi
\subsection{Even Spreading Functions and Laguerre Integrals}
\label{subsec:approxeigen:laguerre}
Simple estimates for $\langle|{\mathbf{A}}_{g\gamma}^{(\alpha)}|^r,\boldsymbol{C}\rangle$ (and therefore
also for $\lambda_{\max}(Q^*Q)$)
can be found if $\boldsymbol{C}$ exhibits certain symmetries upon canonical transformations.
Let $T:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}$ be the transformation
$T(\nu)=L\cdot\nu+c$ with a $2n\times 2n$ symplectic matrix\footnote{
This means that $\eta(L\mu,L\mu)=\eta(\mu,\mu)$ for all
$\mu$. In particular this means that $|\det(L)|=1$ such that the
measure $|U|$ is invariant under $L$.
} $L$ and a phase space translation $c\in\mathbb{R}^{2n}$. It is well known that
$|{\mathbf{A}}_{g\gamma}|=|{\mathbf{A}}_{\tilde{g}\tilde{\gamma}}\circ T|$, where $\tilde{g}$ and $\tilde{\gamma}$ are
related to $g$ and $\gamma$ by unitary transforms which depend on $T$.
See for example \cite[Chapter 4]{folland:harmonics:phasespace}
for a review on metaplectic representation. We have then:
$\langle |{\mathbf{A}}_{g\gamma}|^r,\boldsymbol{C}\rangle=\langle |{\mathbf{A}}_{\tilde{g}\tilde{\gamma}}|^r,\boldsymbol{C}\circ T^{-1}\rangle$.
In particular this means, that we can always rotate, translate and
(jointly) scale $\boldsymbol{C}$ to simple prototype shapes. For example, elliptical
(rectangular) shapes can always be transformed to discs (squares)
centered at the origin. Further symmetries can be exploited as exemplified in
the following lemma (for simplicity we consider only $n=1$):
\begin{mylemma}
Let $Q$ be the operator with spreading function $\chi_U/|U|$.
If the shape of $U$ has the symmetry $\chi_U(\mu)=\chi_U(-\mu)$ then for each $m\geq0$
it holds that:
\begin{equation}
\lambda_{\max}(Q^*Q)\geq
\left(\frac{1}{|U|}\int_U l_m(\pi(|\mu|^2))d\mu\right)^2
\end{equation}
where $|\mu|^2=\mu_1^2+\mu_2^2$ and $l_m$ is the $m$th Laguerre function.
\label{lemma:approxeigen:infR1:symmetric}
\end{mylemma}
\begin{proof}
The calculation of $\lambda_{\max}(Q^*Q)$ simplifies considerably for normal operators, since it then
involves the investigation of $Q$ only, i.e. $\lambda_{m}(Q^*Q)=|\lambda_{m}(Q)|^2$.
For an arbitrary operator $Y$ it follows
that $\boldsymbol{\Sigma}_{Y^*}^{(\alpha)}(\mu)=\bar{\boldsymbol{\Sigma}}^{(\alpha)}_{Y}(-\mu)e^{-i4\pi\alpha\zeta(\mu,\mu)}$
is the spreading function of $Y^*$ in polarization $\alpha$.
Hence, on the level of spreading functions the normality of $Y$ is equivalent to:
\begin{equation}
\boldsymbol{\Sigma}_{Y}^{(\alpha)}(\mu)\bar{\boldsymbol{\Sigma}}_Y^{(\alpha)}(\nu)=
\bar{\boldsymbol{\Sigma}}_{Y}^{(\alpha)}(-\mu)\boldsymbol{\Sigma}_{Y}^{(\alpha)}(-\nu)\cdot
e^{i4\pi\alpha(\zeta(\mu,\mu)+\zeta(\nu,\nu))}
\end{equation}
which can be verified using the rules for ${\boldsymbol{S}}_\mu^{(\alpha)}$ like \eqref{eq:weyl:shift:alphageneralized}
and \eqref{eq:weyl:shift:weylcommrelation}.
The operator $Q$ has by definition the real--valued spreading function $\chi_U/|U|$.
Hence the desired symmetry is fulfilled for $\alpha=0$.
Let $h_m$ be the $m$th Hermite function.
It is known that the ambiguity functions of Hermite functions are given by the Laguerre functions
\cite{klauder:radardesign} (see for example also \cite{folland:harmonics:phasespace}).
Obviously, the maximal eigen--value fulfills:
\begin{equation}
\lambda_{\max}(Q)\geq\langle h_m,Qh_m\rangle
=\frac{1}{|U|}\int_U\langle h_m,{\boldsymbol{S}}_\mu^{(0)}h_m\rangle d\mu
=\frac{1}{|U|}\int_U l_m(\pi|\mu|^2)d\mu
\label{eq:corr:approxeigen:lambdan}
\end{equation}
where $l_m(t)=e^{-t/2}L_m^{(0)}(t)$ are the Laguerre functions and
$L_m^{(0)}$ the Laguerre polynomials of order zero.
\end{proof}
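For a centered disc the lower bound of Lemma \ref{lemma:approxeigen:infR1:symmetric} reduces, via $t=\pi|\mu|^2$, to $\frac{1}{|U|}\int_0^{|U|}l_m(t)\,dt$. The following sketch (ours; composite Simpson rule and the three-term Laguerre recurrence) evaluates it; for $m=0$ it reproduces the closed form $2(1-e^{-|U|/2})/|U|$:

```python
import math

def laguerre_fn(m, t):
    # l_m(t) = exp(-t/2) * L_m(t), Laguerre polynomials via three-term recurrence
    L_prev, L_cur = 1.0, 1.0 - t
    if m == 0:
        poly = 1.0
    elif m == 1:
        poly = L_cur
    else:
        for j in range(1, m):
            L_prev, L_cur = L_cur, ((2.0 * j + 1.0 - t) * L_cur - j * L_prev) / (j + 1.0)
        poly = L_cur
    return math.exp(-t / 2.0) * poly

def disc_bound(m, U, n=2000):
    # (1/|U|) * int_0^{|U|} l_m(t) dt  by composite Simpson (n even)
    h = U / n
    s = laguerre_fn(m, 0.0) + laguerre_fn(m, U)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * laguerre_fn(m, i * h)
    return (h / 3.0) * s / U
```

The lemma's lower bound on $\lambda_{\max}(Q^*Q)$ is then `disc_bound(m, U) ** 2`, maximized over $m$.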
\if0
\todo{
Thus, for our case we obtain a hermitian $Q$ if
$\chi_U(\mu)=\chi_U(-\mu)$.
According to \eqref{eq:tfanalysis:kernel} the operator $Q$ has a kernel representation with the kernel:
\begin{equation}
\begin{split}
k_{U}(x,y)
=&\frac{1}{|U|}(\mathcal{F}_2 \chi_U)(y-x,p_\alpha(x,y))
=\frac{1}{|U|}\int e^{-i2\pi p_\alpha(x,y)z}\chi_U(y-x,z) dz\\
\overset{\alpha=0}{=}&
\frac{1}{|U|}\int e^{-i\pi(y+x)z}\chi_U(y-x,z) dz
\end{split}
\end{equation}
If we let $\chi_U(\nu)=\chi_{T}(\nu_1)\cdot\chi_{[-F(\nu_1),F(\nu_1)]}(\nu_2)$, we get:
\begin{equation}
\begin{split}
k_{U}(x,y)
&=\frac{\chi_{T}(y-x)}{|U|}\cdot\int_{-F(y-x)}^{F(y-x)}e^{-i\pi(y+x)z} dz\\
&=\frac{2\cdot\chi_{T}(y-x)}{|U|}\cdot\frac{\sin(\pi(y+x)F(y-x))}{\pi(y+x)}
\end{split}
\end{equation}
\subsubsection{Elliptical Support}
If we let $F(\nu_1)=\sqrt{r^2-\nu_1^2}$ and $T=[-r,r]$ such that $|U|=\pi r^2$,
we get the result for the radial symmetric problem, which
was already discussed in \cite{bracken:wignerbounds}. The kernel is then:
\begin{equation}
\begin{split}
k_{U}(x,y)
&=\frac{2\cdot\chi_{[-r,r]}(y-x)}{\pi r^2}\cdot\frac{\sin(\pi(y+x)\sqrt{r^2-(y-x)^2})}{\pi(y+x)}
\end{split}
\end{equation}
It is well known that $Q$ commutes with the differential operator $\frac{\partial^2}{\partial x^2}-x^2$
(the Hermite operator), hence also has the Hermite functions $h_n$ as its eigen--functions.
It is known that the ambiguity function of Hermite functions are given by the Laguerre functions
\cite{klauder:radardesign} (see for example also \cite{folland:harmonics:phasespace}).
The eigen--values can be therefore calculated in this case exactly as:
\begin{equation}
\lambda_n=\langle h_n,Qh_n\rangle=\int_U\langle h_n,{\boldsymbol{S}}_\mu(0)h_n\rangle d\mu
=(-1)^n\int_0^{r^2} l_n(t)dt
\end{equation}
where $l_n(t)=e^{-t/2}L_j^{(0)}(t)$ are the Laguerre functions and
$L_j^{(0)}$ the $0$th Laguerre polynomials. It follows that $l_0=1-e^{-r^2}$.
\subsubsection{Rectangular Support}
Let $F(\nu_1)=d$ and $T=[-d,d]$ such that $|U|=4d^2$. It was shown in \cite{} that
$Q$ commutes with the differential operator $\dots$ (Helmholtz equation). The kernel is then:
\begin{equation}
\begin{split}
k_{U}(x,y)
&=\frac{2\cdot\chi_{[-d,d]}(y-x)}{4d^2}\cdot\frac{\sin(\pi d(y+x))}{\pi(y+x)}
\end{split}
\end{equation}
}
\fi
\if0
\subsection{Case C2 and Positivity}
A general characterization of the requirement $A_2(U)\geq0$ is to the
authors knowledge unknown. However, some necessary condition for positivity
can be found in the following way.
The ambiguity function ${\mathbf{A}}_{g\gamma}^{(\alpha)}(\mu)$ can also be understood as the
Fourier transform of the function
$\bar{g}(\cdot+(\frac{1}{2}-\alpha)\mu_1)\gamma(\cdot-(\frac{1}{2}+\alpha)\mu_1)$ at point $\mu_2$.
From this it follows that $2A_2(\mu)$ is the Fourier transform of:
\begin{equation}
\begin{split}
T(x)=
&\bar{g}(x+(\frac{1}{2}-\alpha)\mu_1)\gamma(x-(\frac{1}{2}+\alpha)\mu_1)+\\
&g(-x+(\frac{1}{2}-\alpha)\mu_1)\bar{\gamma}(-x-(\frac{1}{2}+\alpha)\mu_1)
\end{split}
\end{equation}
at point $\mu_2$. Note that $T(x)=\bar{T}(-x)$.
Hence, positivity of $A_2(\mu)$ for a given $\mu_1$
is ensured by Bochner's theorem if $T$ is of positive type. In particular, this implies that
\begin{equation}
T(0)^2\geq T(x)T(-x)=|T(x)|^2
\end{equation}
for any $x$. For $\alpha=0$ this reads as
\begin{equation}
\left(\bar{g}(t)\gamma(-t)+g(t)\bar{\gamma}(-t)\right)^2\geq |\bar{g}(2t)\gamma(0)+g(0)\bar{\gamma}(-2t)|^2
\end{equation}
where $t=\mu_1/2$. If we assume further the symmetry $g(t)=\gamma(-t)$ and $g(0)\neq0$
we get that $|g(2t)|\leq |g(t)|^2/|g(0)|$.
\fi
\subsection{Gaussian Signaling and the Corresponding Bounds}
\label{subsec:approxeigen:gaussiansignaling}
The previous part of this section indicates that approximate eigen--functions have to be ``Gaussian--like''.
Hence it makes sense to consider Gaussian signaling explicitly. For simplicity we do this
for the time--frequency symmetric case $g=\gamma=2^{\frac{1}{4}}e^{-\pi t^2}$ and $n=1$.
We have the relation:
\begin{equation}
({\boldsymbol{S}}_\nu^{(\alpha)} g)(x)
=e^{-\pi\left(2i\eta(\nu,xe)+i(1-2\alpha)\zeta(\nu,\nu)+\nu_1^2\right)}
g(x)
\label{eq:approxeigen:numerical:gausshift}
\end{equation}
if we let $e:=(1,i)$.
According to \eqref{eq:approxeigen:lemmaep1:proof1} the error $E_p(\mu)$ can be calculated as:
\begin{equation}
\begin{split}
E_p(\mu)
=\lVert\int_{\mathbb{R}^2} d\nu\, \boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}(\nu)e^{-i2\pi\eta(\nu,\mu)}
\cdot g\cdot f(\nu,\cdot)\rVert_p\\
\end{split}
\label{eq:approxeigen:gaussian:Ep}
\end{equation}
The function $f:\mathbb{R}^2\times\mathbb{R}\rightarrow\mathbb{C}$ is defined
for a particular polarization $\alpha$ as:
\begin{equation}
\begin{split}
f(\nu,x)
&=\begin{cases}
e^{-\pi[2i\eta(\nu,xe)+i(1-2\alpha)\zeta(\nu,\nu)+\nu_1^2]}-
e^{-\pi[\frac{\langle\nu,\nu\rangle}{2}-2i\alpha\zeta(\nu,\nu)]}
& \text{for C1}\\
e^{-\pi[2i\eta(\nu,xe)+i(1-2\alpha)\zeta(\nu,\nu)+\nu_1^2]}-1
& \text{for C2}
\end{cases}
\end{split}
\label{eq:approxeigen:numberical:fnux}
\end{equation}
where we have used that the ambiguity function in polarization $\alpha$ is
${\mathbf{A}}_{gg}^{(\alpha)}(\nu)=e^{-\frac{\pi}{2}s_\alpha(\nu)}$ and
$s_\alpha(\nu):=\DotReal{\nu}{\nu}+4i\alpha\zeta(\nu,\nu)$.
The following Corollary contains the bounds specialized to the Gaussian case:
\begin{mycorollary}[Gaussian Bounds]
For the case C1 ($k=1$) and for the case C2 ($k=2$) in polarization $\alpha=0$ it holds
for any $1\leq p<\infty$ and $1\leq q\leq\infty$ that:
\begin{equation}
\frac{E_p(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_{q}}
\leq
32^{\frac{p-2}{4p}}\cdot \lVert k (1-e^{-\frac{\pi}{k}s_0})\chi_U\rVert^{1/p}_{q'/p}
\label{eq:corr:approxeigen:gaussianbounds:eq1}
\end{equation}
where $s_0(\nu):=\langle\nu,\nu\rangle$. For $q>1$ it follows
from \eqref{eq:corr:approxeigen:gaussianbounds:eq1} also:
\begin{equation}
\begin{split}
\frac{E_p(\mu)}{\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_{q}}
\leq
32^{\frac{p-2}{4p}}\cdot k\cdot
\left(k|U|\left(1-
\langle\boldsymbol{C}, e^{-\frac{\pi}{k}s_0}\rangle\right)\right)^{1/\max(q',p)}
\end{split}
\label{eq:corr:approxeigen:gaussianbounds:eq2}
\end{equation}
where $\boldsymbol{C}=\chi_U/|U|$.
\label{corr:approxeigen:gaussianbounds}
\end{mycorollary}
\begin{proof}
We use the abbreviation $A_1=e^{-\pi s_0}$ and $A_2=\Real{e^{-\frac{\pi}{2}s_\alpha}}$
as introduced in \eqref{eq:approxeigen:Ak}.
Only for $\alpha=0$ does the case C2 provide a Euclidean distance measure
in phase space.
Equation
\eqref{eq:corr:approxeigen:gaussianbounds:eq1} of the claim follows
from Lemma \ref{lemma:approxeigen:lemmaep1} and from \eqref{eq:lemma:approxeigen:lemmaep3:eq1} of
Lemma \ref{lemma:approxeigen:lemmaep3} together with
$\lVert V\rVert_\infty\leq 2\lVert g\rVert_\infty=32^{1/4}$.
If $q>1$ we can relate this further by
\eqref{eq:lemma:approxeigen:lemmaep3:eq2} of
Lemma \ref{lemma:approxeigen:lemmaep3}
to weighted norms of ambiguity functions. Using the uniform
bound $C_{pq}\leq k$ from Lemma
\ref{lemma:approxeigen:cpb:uniform} and the relation for $R_1$
in \eqref{eq:approxeigen:R1:special} we get Gaussian integrals of the form
\eqref{eq:corr:approxeigen:gaussianbounds:eq2}
which can now be solved analytically for some cases.
For example, if
$U$ is a centered disc of radius $\sqrt{|U|/\pi}$ we
get $\langle \boldsymbol{C},e^{-\frac{\pi}{k}s_0}\rangle=l(2|U|/k)$ where $l(x)=2(1-e^{-x/2})/x$.
For a centered square of length $\sqrt{|U|}$ we have instead
$l(x)=2\text{erf}(\sqrt{\pi x/8})^2/x$.
\end{proof}
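The two closed forms at the end of the proof can be cross-checked numerically (a sketch of ours; it verifies the square case against direct Simpson quadrature and evaluates the disc case):

```python
import math

def l_disc(x):
    # <C, exp(-pi s0/k)> for a centered disc, evaluated at x = 2|U|/k
    return 2.0 * (1.0 - math.exp(-x / 2.0)) / x

def l_square(x):
    # the same quantity for a centered square of side sqrt(|U|)
    return 2.0 * math.erf(math.sqrt(math.pi * x / 8.0)) ** 2 / x

def square_numeric(U, k, n=2000):
    # (1/|U|) * ( int_{-a}^{a} exp(-pi x^2/k) dx )^2 with a = sqrt(|U|)/2, by Simpson
    a = math.sqrt(U) / 2.0
    h = 2.0 * a / n
    f = lambda x: math.exp(-math.pi * x * x / k)
    s = f(-a) + f(a)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(-a + i * h)
    I = h / 3.0 * s
    return I * I / U
```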
\section{Numerical Verification}
\label{sec:numerical}
In this part we shall establish a spreading model with a finite number of random parameters.
We shall need this model to verify numerically the bounds derived in this paper.
Since several (iterated)
integrals are involved, some of which can only be computed numerically, we have to
evaluate the achieved accuracy. We aim at computing
$E_p(\mu)/\lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_q$ up to a desired accuracy $\Delta$.
In our derivation we
will assume that single definite integrals can be computed within a given
predefined error (for example using Simpson quadrature).
\subsection{Spreading Model with Finite Number of Parameters}
Let us consider a doubly--dispersive channel model with a finite number
of fading parameters $c_k$, where $k\in{\mathbb{Z}}_K^2$ and
${\mathbb{Z}}_K=\{0\dots K-1\}$. Each fading contribution has its
\emph{own doubly--dispersive} operation on the input signal, hence the model
is different from the usual (distributional) models
having a finite number of separated paths with fixed Doppler frequencies.
The spreading function $\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}$ should be of the form:
\begin{equation}
\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}(\nu)=\sum_{k\in{\mathbb{Z}}_K^2}c_k\chi_u(\nu-u(k+o))
=\sum_{k\in{\mathbb{Z}}_K^2}c_k\chi_1(\nu/u-k-o)
\label{eq:approxeigen:finitespreadingmodel}
\end{equation}
where $\chi_u(y)=\chi_{[0,u]}(y_1)\chi_{[0,u]}(y_2)$ is the characteristic function
of the square $[0,u]\times[0,u]=:[0,u]^2$ and $o=(\tfrac{1}{2},\tfrac{1}{2})$.
The supports thus form a disjoint partition of the
square $[0,Ku]^2$ with area $(Ku)^2$. In other words,
if we fix the support of the spreading function to be $|U|$, then it follows for a $K^2$--sampling
of this area that $u=\sqrt{|U|}/K$. For such a model the $q$--norm of the spreading function
as needed for the calculation of the ratio $E_p/\lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_q$
is:
$\lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_q=u^{2/q}\lVert c\rVert_q$
where $\lVert c\rVert_q:=(\sum_k |c_k|^q)^{1/q}$ is simply the $q$th vector norm of the vector
$c=(\dots,c_k,\dots)\in\mathbb{C}^{K^2}$.
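Since the $K^2$ translates of $\chi_u$ have disjoint supports of area $u^2$ each, the norm identity can be checked by an exact cell-aligned Riemann sum (a consistency sketch of ours):

```python
def sigma_qnorm(c, u, q):
    # closed form: ||Sigma||_q = u^(2/q) * (sum_k |c_k|^q)^(1/q)
    return u ** (2.0 / q) * sum(abs(ck) ** q for ck in c) ** (1.0 / q)

def sigma_qnorm_grid(c, K, u, q, sub=4):
    # integrate |Sigma|^q over [0, K u]^2 on a grid aligned with the K^2 cells;
    # exact here because Sigma is piecewise constant on the cells
    M = K * sub
    dA = (K * u / M) ** 2
    total = 0.0
    for i in range(M):
        for j in range(M):
            total += abs(c[(i // sub) * K + (j // sub)]) ** q * dA
    return total ** (1.0 / q)
```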
Let us abbreviate $l=l(k)=k+o$.
With \eqref{eq:approxeigen:numberical:fnux} we get for the integrand in
\eqref{eq:approxeigen:gaussian:Ep}:
\begin{equation}
\begin{split}
\int_{\mathbb{R}^2} \boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}(\nu)
e^{-i2\pi\eta(\nu,\mu)}g(x)f(\nu,x) d\nu
&=\sum_{k\in{\mathbb{Z}}_K^2} c_k\cdot g(x)
\underbrace{\int_{\mathbb{R}^2}\chi_1(\frac{\nu}{u}-l)e^{-i2\pi\eta(\nu,\mu)}f(\nu,x) d\nu}_{F_k(x)}\\
\end{split}
\label{eq:approxeigen:numerical:Fk:1}
\end{equation}
\if0
Let us restrict further evaluations to the case C2 for $\alpha=1/2$. The remaining cases are similar.
Then the functions $F_k(x)$ can be found as:
\begin{equation}
\begin{split}
F_k(x)
&=\int\chi_1(\frac{\nu}{u}-l)e^{-i2\pi\eta(\nu,\mu)}\left(
e^{-i2\pi[\eta(\nu,xc)+(\frac{1}{2}-\alpha)\zeta(\nu,\nu)]}g(\nu_1)-1\right)d\nu\\
&=
u^2\int_{[0,1]^2}
e^{-i2\pi u\eta(\nu+l,\mu)}(e^{-i2\pi u[\eta(\nu+l, xc)+u(\frac{1}{2}-\alpha)\zeta(\nu+l,\nu+l)]}
g(u(\nu_1+l_1))-1) d\nu\\
&=
u^2\int_{[0,1]^2}
(e^{-i2\pi u[\eta(\nu+l, \mu+xc)+u(\frac{1}{2}-\alpha)\zeta(\nu+l,\nu+l)]}
g(u(\nu_1+l_1))-e^{-i2\pi u\eta(\nu+l,\mu)}) d\nu\\
&\overset{\alpha=1/2}{=}
u^2\left(q(l_1,\mu_2+ix)
s(\mu_1+x)-e^{-i2\pi u \eta(l,\mu)}s(-\mu_2)s(\mu_1)\right)
\end{split}
\label{eq:approxeigen:numerical:Fk:1}
\end{equation}
where we used the abbreviations
\begin{equation}
\begin{split}
s(y)
:=&
\int_0^1 e^{i2\pi uyx} dx=e^{i\pi uy}\text{sinc}(u y)\\
q(y,A)
:=&
\int_0^1 e^{-i2\pi uA(s+y)}g(u(s+y))ds\\
=&
\frac{e^{-\pi A^2}}{2u}\left(\text{erf}(\sqrt{\pi}[u(y+1)+iA])-\text{erf}(\sqrt{\pi}[uy+iA])\right)
\end{split}
\end{equation}
We will use this setup to perform the remaining calculations numerically.
\fi
The approximate eigenstructure error reads now as $E_p(\mu)=\lVert\sum_{k\in{\mathbb{Z}}_K^2} c_k \cdot g \cdot F_k\rVert_p$.
For $\alpha=1/2$ and case C2 the integral in $F_k(x)$ can be calculated explicitly. In general, however,
$F_k(x)$ has to be computed numerically up to a certain accuracy $\delta$ (it is a well--defined and
definite integral). Thus, let
the computed value $\tilde{F}_k(x)$ be such that pointwise
$|\tilde{F}_k(x)-F_k(x)|\leq\delta$ for all $x$ and $k$.
We would like to use $\tilde{F}_k(x)$ instead of $F_k(x)$ to compute
the approximation $\tilde{E}_p(\mu)$ of $E_p(\mu)$. However, we have
to restrict the remaining indefinite integral over $x$ to a finite interval
$I:=[-L,L]$. With $J$ we denote its complement in $\mathbb{R}$, i.e. $J:=\mathbb{R}\setminus I$.
Observe from \eqref{eq:approxeigen:numberical:fnux} that $|f|\leq 2$, hence $|F_k|\leq 2u^2$
in \eqref{eq:approxeigen:numerical:Fk:1} and that for a Gaussian $\lVert g\rVert_p^p=1/\sqrt{p}$.
If we choose $L$ such that $\pi L^2\geq \log(2u^2/\delta)$ and $\pi L\geq1$ we have:
\begin{equation}
\begin{split}
\lVert F_k g\cdot\chi_J\rVert_p
&\leq 2u^2 \lVert g\cdot\chi_J\rVert_p
=2u^2\text{erfc}(\sqrt{\pi p}L)^{1/p}
\leq \frac{2u^2}{\left(\pi\sqrt{p}L\right)^{1/p}}e^{-\pi L^2}\\
&=\lVert g\rVert_p \frac{2u^2}{\left(\pi L\right)^{1/p}}e^{-\pi L^2}
\overset{\pi L\geq 1}{\leq}
\lVert g\rVert_p \cdot 2u^2 \cdot e^{-\pi L^2}\leq \delta\lVert g\rVert_p
\end{split}
\end{equation}
For such a chosen $L$ the integration with respect to $x$ over the interval $I=[-L,L]$ can be performed again within an
accuracy of $\delta$. This yields for the overall calculation error:
\begin{equation}
\begin{split}
|E_p(\mu)-\tilde{E}_p(\mu)|
&\leq \delta+
\sum_{k} |c_k|\left(
\lVert(F_k-\tilde{F}_k)g\cdot\chi_I\rVert_p+
\lVert F_k g\cdot\chi_J\rVert_p
\right)\\
&\leq \left(1+
2\lVert c\rVert_1\cdot\lVert g\rVert_p\right)\delta
= \left(1+
2\lVert c\rVert_1\cdot p^{-\frac{1}{2p}}\right)\delta
\end{split}
\label{eq:approxeigen:num:Ep}
\end{equation}
If we choose $\delta=\Delta\cdot\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q\cdot(1+2\lVert c\rVert_1\cdot p^{-\frac{1}{2p}})^{-1}$ (and $L$ respectively) we
can guarantee that
the error on $E_p(\mu)/\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q$ is below $\Delta$.
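The accuracy bookkeeping can be sketched as follows (ours; the cutoff rule is stated in the sufficient form $\pi L^2\geq\log(2u^2/\delta)$, $\pi L\geq1$, which guarantees the tail estimate $2u^2e^{-\pi L^2}\leq\delta$):

```python
import math

def delta_from_target(Delta, sigma_norm_q, c_l1, p):
    # delta = Delta * ||Sigma||_q / (1 + 2 ||c||_1 * p^(-1/(2p)))
    return Delta * sigma_norm_q / (1.0 + 2.0 * c_l1 * p ** (-1.0 / (2.0 * p)))

def choose_L(u, delta):
    # smallest L with pi*L^2 >= log(2u^2/delta) and pi*L >= 1
    return max(math.sqrt(math.log(2.0 * u * u / delta) / math.pi), 1.0 / math.pi)
```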
\begin{myremark}[Interference Estimates for Statistical Models]
\label{rem:approxeigen:statisticalmodel}
Consider the following example: The transmitter sends the signal ${\boldsymbol{S}}_\mu\gamma$ through the unknown
channel $\boldsymbol{\HH}$. Let us again for simplicity use
the finite--parameter spreading model \eqref{eq:approxeigen:finitespreadingmodel}
for a support $U$ of square shape. The receiver already knows
the vector of fading parameters $c$ for the spreading function $\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}$ of the channel,
the pulses $g$ and $\gamma$ and the time--frequency slot $\mu$.
The normalized $q$--norms $c_q=\lVert c\rVert_q\cdot K^{-2/q}$
of the $K^2$ fading coefficients characterize the statistical model for
the spreading such that $\lVert\boldsymbol{\Sigma}^{(\alpha)}_{\boldsymbol{\HH}}\rVert_q=|U|^{1/q}\cdot c_q$.
If the contribution of this particular slot $\mu$ is removed from the signal, the residual
$e:=\boldsymbol{\HH}{\boldsymbol{S}}_\mu\gamma-\lambda(\mu){\boldsymbol{S}}_\mu g$ remains. Let us assume that the receiver expects
further information in the span of a function $f$ (for example $f={\boldsymbol{S}}_\nu g$ could be another
slot $\nu$). The interference will be $\langle f,e\rangle$.
Let $A_f(p)=\lVert f\rVert_{p'}\cdot 32^\frac{p-2}{4p}$.
We have
\begin{equation}
|\langle f,e\rangle|\leq E_p(\mu)\cdot \lVert f\rVert_{p'}
<A_f(p)\cdot(|U|(1-L^2))^{1/\max(q',p)}\cdot |U|^{1/q}\cdot c_q
\end{equation}
With the assumption that $|U|\leq1$ we use $|U|^{1/\max(q',p)+1/q}\leq |U|$ such that:
\begin{equation}
|\langle f,e\rangle|
<A_f(p)\cdot(1-L^2)^{1/\max(q',p)}\cdot |U|\cdot c_q
\end{equation}
This means that for different statistical models (characterized by $c_q$) and functions $f$
(characterized by $\lVert f\rVert_{p'}$ in the quantity $A_f(p)$) we can
characterize the amount of interference.
\end{myremark}
\subsection{Numerical Experiments}
We now consider the case where the coefficients $c_k$ of the
vector $c\in\mathbb{C}^{K^2}$ are independent and identically normal distributed, which
corresponds to the doubly--dispersive Rayleigh fading channel. The
square shaped support $U$ has a random size $|U|$ drawn uniformly
from the interval $[10^{-3}, 10^{-2}]$, corresponding
to values of the time--frequency spread relevant in mobile
communication.
Each realization of the fading vector $c$ and of $u=\sqrt{|U|}/K$ parameterizes via \eqref{eq:approxeigen:finitespreadingmodel}
a random spreading function $\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}$ in a given polarization $\alpha$,
which itself gives rise to a random channel operator $\boldsymbol{\HH}$ by
Lemma \ref{lemma:tfanalysis:linop:spreading}.
On this random channel we
have evaluated $E_p(\mu)$ for Gaussian signaling as described previously
in Section \ref{subsec:approxeigen:gaussiansignaling}.
For each realization we have taken $\mu$ uniformly from $[-5,5]^2$.
We have calculated $N=1000$ Monte Carlo (MC) runs for different values of $p$ and $q$.
For each run $E_p(\mu)/\lVert\boldsymbol{\Sigma}_{\boldsymbol{\HH}}^{(\alpha)}\rVert_q$ has been computed (corresponding
to one point in Fig.\ref{fig:approxeigen:ep:C1:2:2} and Fig.\ref{fig:approxeigen:ep:C1:3:15}) up to an accuracy
of $\Delta=10^{-8}$.
The computed
values $E_p(\mu)$ are compared to the uniform bound in
\eqref{eq:approxeigen:ep:uniform} of Lemma \ref{lemma:approxeigen:ep:uniform}
which depends only on the support and is valid for any normalized $g$ and $\gamma$.
Improved bounds are valid only for particular $g$ and $\gamma$ like
the Laguerre/Gauss (GL) bound from Theorem \ref{thm:approxeigen:Ep3}.
Fig.\ref{fig:approxeigen:ep:C1:2:2} shows the case C1 for $p=q=2$, where we expect
the tightest results.
The GL bound improves the uniform estimate by approximately a factor of $10$. However,
the computed MC values are still below this estimate by a factor of approximately two. The
latter gap widens to a factor of approximately $10$ for $p=3$ and $q=3/2$,
as displayed in Fig.\ref{fig:approxeigen:ep:C1:3:15}.
\begin{figure}
\includegraphics[width=\linewidth]{ep_C1_2_2.eps}
\caption{\emph{Approximate Eigenstructure for the case C1, $p=2$, $q=2$:} Verification of $1000$
Monte Carlo runs with the uniform bound in Lemma \ref{lemma:approxeigen:ep:uniform} and the
optimized Laguerre/Gauss bound of Theorem \ref{thm:approxeigen:Ep3}.
}
\label{fig:approxeigen:ep:C1:2:2}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{ep_C1_3_15.eps}
\caption{\emph{Approximate Eigenstructure for the case C1, $p=3$, $q=1.5$:} Verification of $1000$
Monte Carlo runs with the uniform bound in Lemma \ref{lemma:approxeigen:ep:uniform} and the
optimized Laguerre/Gauss bound of Theorem \ref{thm:approxeigen:Ep3}.
}
\label{fig:approxeigen:ep:C1:3:15}
\end{figure}
\section{Conclusions}
In this paper we have considered doubly--dispersive channels with compactly supported spreading.
We have shown to what level of approximation error a description
in terms of simple multiplication operators is valid.
We have focused on two well known choices of such a description, i.e. the
multiplication with the (generalized) Weyl symbol of the operator and the case
of Wigner smoothing.
We found that in both cases the approximation errors can be
bounded by the size of the support of the spreading function. Our estimates improve
and generalize recent results in this direction. Furthermore we have drawn the relation to
localization operators and fidelity measures known from the theory of pulse shaping.
Finally, we have verified our estimates using Monte Carlo methods with a precise
control of the numerical uncertainties.
\section*{Acknowledgments}
\addcontentsline{toc}{section}{Acknowledgment}
The author would like to thank Gerhard Wunder,
Thomas Strohmer and Holger Boche for helpful discussions on this topic.
The author extends special thanks to the anonymous reviewers for their
constructive comments, which have improved the paper.
\section{Introduction \label{intro}}
\subsection*{Intersection problems}
Suppose $N$ is a compact smooth manifold equipped with a closed
submanifold $Q\subset N$. An {\it intersection problem} for $(N,Q)$
consists of a map $f\: P \to N$, where $P$ is a closed manifold. A
{\it solution} to the problem consists of a homotopy of $f$ to a map
$g$ satisfying $g(P) \cap Q = \emptyset$. We depict the situation by
$$
\SelectTips{cm}{}
\xymatrix{
& N - Q \ar[d] \\
P \ar[r]_f\ar@{..>}[ur]
& N\, ,
}
$$ in which we seek to find the dotted arrow making the diagram
homotopy commute. One also has a version of the above when
$P$ has a boundary whose image under $f$ is disjoint from $Q$.
We then require the deformation of $f$ to hold the
boundary fixed. Let $i_Q\: Q\subset N$ be the inclusion.
We will often denote the data by $(f,i_Q)$.
In \cite{klein-williams}, we produced an obstruction
$\chi(f)$ living in a certain bordism group whose vanishing is
necessary for the existence of a solution. Furthermore, the
obstruction was shown to be sufficient in the range $p \le 2n - 2q -
3$, where $\dim N = n, \dim Q = q$ and $\dim P = p$. We also gave a
version of the obstruction for families.
\medskip
Here, we will consider {\it equivariant} intersection problems. Suppose
$G$ is a finite group and the above manifolds are equipped with
smooth $G$-actions.
\medskip
In the equivariant setting, $i_Q\:Q \subset N$ is a $G$-submanifold
and $f\: P \to N$ is an equivariant map. We now seek a deformation
of $f$ through $G$-maps to an
equivariant map whose image is disjoint from $Q$.
The partial answers we will give to such questions
are phrased in terms of isotropy data.
If $X$ is a $G$-space, we let
$$
{\mathcal I}(G;X)
$$
denote the conjugacy classes of subgroups of $G$ which appear
as stabilizer groups of points of $X$.
\subsection*{Indexing functions}
An {\it indexing function} $\phi_\bullet$ on a $G$-space $X$ assigns to a
subgroup $H\subset G$ a locally constant function $\phi_H$ with domain
$X^H$, the fixed point set of $H$ acting on $X$, and codomain given by
the extended integers $\mathbb Z\cup \pm\infty$. It is also required to
be conjugation invariant: if $K = gHg^{-1}$ and $h\:X^H \to X^K$ is
the homeomorphism $x\mapsto gx$, then $\phi_H = \phi_K \circ h$.
If $\psi_\bullet$ is another indexing function on $X$,
and $H \subset G$ is a subgroup,
we write
$$
\phi_H \le \psi_H
$$
if
$\phi_H(x) \le \psi_H(x)$ for
all $x\in X^H$. If $\phi_H \le \psi_H$ for all $H$, then we write
$\phi_\bullet \le \psi_\bullet$.
Here are some examples:
\subsubsection*{Dimension} If $M$ is a locally smooth $G$-manifold, then for
any subgroup $H \subset G$ the components of the fixed point set $M^H$
are manifolds \cite[Ch.\ 4]{Bredon}. The dimensions of the components
can vary. If $x\in M^H$, then the dimension of the component
containing $x$ defines a locally constant function $m^H$. The
collection $m^\bullet := \{m^H\}_{H\subset G}$ is called the {\it
dimension function} of $M$. If $M^H$ is empty, our convention is to
set $m^H = -\infty$.
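To fix ideas, here is a small example (ours, not taken from the text): let $G = \mathbb Z_2$ act on $M = S^2$ by rotation through the angle $\pi$ about a fixed axis. Then $M^e = S^2$, while $M^{\mathbb Z_2}$ consists of the two poles, so
$$
m^e \,\equiv\, 2\, , \qquad m^{\mathbb Z_2} \,\equiv\, 0\, ,
$$
where both functions are constant since every component of the relevant fixed point set has the same dimension.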
\subsubsection*{Codimension}
Let $i_Q\:Q \subset N$ be as above.
Let $$\cd_\bullet(i_Q)$$ be the indexing function
on $Q$
in which $\cd_H(i_Q)(x)$ is the dimension of
the normal space
to the embedding $Q^H \subset N^H$ at $x\in Q^H$ (if $Q^H$ is empty but $N^H$ isn't,
our convention is to set $\cd_H(i_Q) = +\infty$).
\subsubsection*{Pullback}
Suppose $f\:X\to Y$ is a $G$-map.
Given an indexing function $\alpha_\bullet$ on $Y$, we
obtain an indexing function $f^*\alpha_\bullet$ on
$X$ which is given by $f^*\alpha_H(x) = \alpha_H(f(x))$.
\subsubsection*{Pushforward} Given $f\: X\to Y$ as above,
let $\beta_\bullet$ be an indexing function on $X$.
For $y \in Y$ we let $[y]$ denote the associated path component.
Let $I_{f,y}$ be the set of those $[x]$ for which $[f(x)] = [y]$. That is,
$I_{f,y}$ is the preimage of $[y]$ under $f_*\: \pi_0(X) \to \pi_0(Y)$.
Define
an indexing function $f_!\beta_\bullet$ on $Y$ by the rule
\[
f_!\beta_H(y) =
\begin{cases} \inf_{I_{f,y}} \beta_H(x) \quad &\text{ if $I_{f,y}$ is nonempty, } \\
\infty \quad &\text{ otherwise.}
\end{cases}
\]
Note $f_!f^*\alpha_\bullet \ge \alpha_\bullet$, with equality holding when
$f_*$ is a surjection, whereas $f^*f_!\beta_\bullet \le \beta_\bullet$
with equality holding when $f_*$ is an injection.
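For instance, the second inequality is checked in one step: if $x \in X^H$, then $[x]$ belongs to $I_{f,f(x)}$, so
$$
f^*f_!\beta_H(x) \,\, = \,\, \inf_{I_{f,f(x)}} \beta_H \,\, \le \,\, \beta_H(x)\, ,
$$
and when $f_*$ is injective the set $I_{f,f(x)}$ reduces to $\{[x]\}$, which forces equality.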
\subsection*{Stable intersections}
Just as in the unequivariant case,
equivariant intersections can be removed when the codimension is sufficiently
large. The equivariant intersection problem $(f,i_Q)$
is said to be {\it stable} if
$$
p^H \,\, \le \,\, f^*(i_Q)_!\cd_H(i_Q) - 1\,
$$
for every $(H) \in {\mathcal I}(G;P)$ (roughly, this means that the dimension of the transverse intersection
of $f(P^H)$ and $Q^H$ is negative).
If the intersection problem is stable,
one can use
elementary equivariant obstruction theory to show $f$
equivariantly deforms off of $Q$, yielding a solution.
\subsection*{A ``cohomological'' result}
Our first main result gives a complete obstruction
to solving equivariant intersection problems in the equivariant
{\it metastable} range. The obstruction lies
in the cohomology of $P$ with coefficients in a certain
{\it parametrized equivariant spectrum} over $N$. A reader who is not
familiar with this technology should consult \S\ref{proof-cohom}.
\begin{bigthm} \label{cohom} To an equivariant intersection problem $(f,i_Q)$,
there is a naive parametrized $G$-spectrum ${\mathcal E}(i_Q)$ over
$N$, which is constructed from the inclusion $i_Q \: Q\subset N$,
and an obstruction
$$
e_G(f) \in H^0_G(P;{\mathcal E}(i_Q))
$$
which vanishes when the intersection problem has a solution.
Conversely, if $e_G(f)=0$ and
$$
p^H \le 2f^*(i_Q)_{!}\cd_H(i_Q) - 3
$$
for all $(H) \in {\mathcal I}(G;P)$,
then the intersection problem has a solution.
\end{bigthm}
\begin{intro-rem} Theorem \ref{cohom} is an equivariant version
of \cite[{Cor.\ 3.5}]{klein-williams}.
The word {\it naive} is used here to indicate that the parametrized
spectrum is indexed over a trivial universe; the
equivariant cohomology theory of the theorem
is therefore of ``Bredon type.''
The inequalities of Theorem
\ref{cohom} define the {\it equivariant metastable range.}
When $G$ is the trivial group, one has
the sole inequality $p \le 2n -2q-3$, which is just
the unequivariant metastable range (cf.\ \cite{klein-williams}).
\end{intro-rem}
\subsection*{Homotopical equivariant bordism}
Since naive equivariant cohomology theories are not indexed
over representations, they are not fully ``stable.''
From our viewpoint, a crucial deficiency
of naive theories is their lack of Poincar\'e duality.
To get around this,
we impose additional conditions to get a more
tractable invariant residing in a theory
which does possess Poincar\'e duality. We will map the
equivariant cohomology theory of Theorem \ref{cohom}
into a similarly defined $\RO(G)$-graded one.
The additional constraints will ensure the map is injective.
Applying duality to our pushed-forward invariant, we obtain
another invariant living in $\RO(G)$-graded homology.
We then identify the homology theory with the
homotopical $G$-bordism groups of a certain $G$-space.
To a $G$-space $X$ equipped with real $G$-vector bundle $\xi$,
one has an associated {\it equivariant Thom spectrum}
$$
X^\xi \, ,
$$
whose spaces $X^\xi_V$ are indexed by representations $V$ ranging over a
complete $G$-universe ${\mathcal U}$ (compare \cite[Chap.\
XV]{May}). Here, $X^\xi_V$ denotes the Thom space of $\xi \oplus
V$. Equivalently, $X^\xi$ is the equivariant suspension spectrum
of the Thom space of $\xi$. More generally, $X^\xi$
is defined whenever $\xi$ is a {\it virtual} $G$-bundle over $X$
(see \cite[Ch.\ 9]{LMS} for details).
For a virtual $G$-representation $\alpha = V-W$, the {\it homotopical $G$-bordism group}
of $(X,\xi)$ in degree $\alpha$ is given by
$$
\Omega^G_\alpha(X;\xi) \,\, := \,\,
\colim_{U}
[S^{V+U},X^\xi_{W+U}]^G \, ,
$$
where $[S^{V+U},X^\xi_{W+U}]^G$ denotes the homotopy classes of based
$G$-maps $S^{V+U} \to X^\xi_{W+U}$ (in which $S^{V+U}$ is
the one point compactification of the direct sum of $V$ and $U$),
and the colimit is indexed over the finite dimensional
subrepresentations $U$ of ${\mathcal U}$ using the partial ordering
defined by inclusion. Actually, we will only need to consider
the case when $\alpha = 0$ is the trivial representation of rank zero.
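For later reference we record the specialization to $\alpha = 0$, obtained by taking $V = W = 0$ in the definition above:
$$
\Omega^G_0(X;\xi) \,\, = \,\, \colim_{U}\, [S^{U},X^\xi_{U}]^G\, .
$$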
\begin{intro-rems}(1).
There is a related object, ${\mathcal N}^G_\alpha(X;\xi)$, called the {\it
geometric bordism group} of $(X,\xi)$. It is generated by
$G$-manifolds $M$ equipped with $G$-map $u\:M \to X$ and a stable
$G$-bundle isomorphism $$u^*\xi\oplus \tau_M \oplus \epsilon_W \,\, \cong \,\,\epsilon_V\, ,$$
where $\epsilon_V$ denotes the $G$-bundle whose total space is
$X \times V$.
The Pontryagin-Thom construction defines a homomorphism
$$
{\mathcal N}^G_\alpha(X;\xi) \to \Omega^G_\alpha(X;\xi)\, .
$$
In contrast with the
unequivariant case, this map can fail to be an isomorphism because of
the lack of equivariant transversality (see \cite{Petrie},
\cite{CW}, \cite[{p.\ 156}]{May}).
\smallskip
{\flushleft (2).} When $\xi$ is a $G$-vector bundle (not virtual),
then $X^\xi$ is the equivariant suspension spectrum of the Thom space
of $\xi$. In particular, when $\xi$ is trivial of rank zero, we get
$\Sigma^\infty_G (X_+)$, the equivariant suspension spectrum of
$X\amalg *$. In this case, the map from the
equivariant geometric bordism group to the homotopical one is an
isomorphism \cite{Hauschild}, \cite{Kos}. The $k$-th homotopy group of
$\Sigma^\infty_G (X_+)$
coincides with $\Omega^{G,\text{fr}}_k(X)$, the $k$-dimensional
equivariant framed bordism group of $X$.
\smallskip
{\flushleft (3).} When $G = e$ is the trivial group, and $\xi$ has virtual
rank $n$, $\Omega^e_0(X;\xi) = \Omega_0(X;\xi)$ is the bordism group generated by maps
$\alpha\:M \to X$, with $M$ a compact $n$-manifold, together with a
(stable) isomorphism of $\alpha^*\xi$ with the stable normal bundle of
$M$. Note the indexing convention used here is different from the one
of \cite{klein-williams} (the latter implicitly ignored the rank of $\xi$
but indicated the dimension of the manifolds in the degree of the bordism group; thus
the group $\Omega_n(X;\xi)$ of \cite{klein-williams} coincides with
the current $\Omega_0(X;\xi)$).
\end{intro-rems}
\medskip
We now specialize to the equivariant bordism groups
arising from intersection problems. Given an
equivariant intersection problem $(f,i_Q)$, define
$$
E(f,i_Q)
$$
to be the {\it homotopy fiber product} (a.k.a.\ homotopy pullback)
of $f$ and $i_Q$. A point in $E(f,i_Q)$ is a triple $(x,\lambda,y)$ in
which $x\in P$, $y \in Q$ and $\lambda\: [0,1]\to N$ is a path such
that $\lambda(0) = f(x)$ and $\lambda(1) = y$. There is an evident
action of $G$ on $E(f,i_Q)$.
There are forgetful maps $j_P\:E(f,i_Q) \to P$ and $j_Q\: E(f,i_Q) \to Q$,
both equivariant. There is also an equivariant
map $j_N \: E(f,i_Q) \to N$ given by $(x,\lambda,y) \mapsto \lambda(1/2)$.
Using these, we obtain an equivariant virtual bundle over $E(f,i_Q)$ by
$$
\xi \,\, := \,\, j_N^*\tau_N - j_Q^*\tau_Q - j_P^*\tau_P\, .
$$
If $Q\subset N$ is held fixed, then $\xi$ is completely determined
by $f\:P \to N$.
\subsection*{A ``homological'' result}
\begin{bigthm} \label{equi-int}
Given an equivariant intersection problem $(f,i_Q)$, there is
an invariant
$$
\chi_G(f) \in \Omega^G_0(E(f,i_Q);\xi)
$$
which vanishes when $f$ is equivariantly
homotopic to a map whose image is disjoint from $Q$.
Conversely, assume $\chi_G(f) = 0$ and
\begin{itemize}
\item for each
$(H) \in {\mathcal I}(G;P)$, we have
$$
p^H \le 2f^*(i_Q)_{!}\cd_H(i_Q) - 2 \, ;
$$
\item for each $(H) \in {\mathcal I}(G;P)$ and each proper subgroup
$K \subsetneq H$, we have
$$
p^H \le f^*(i_Q)_{!}\cd_K(i_Q) - 2\, .
$$
\end{itemize}
Then $f$ is equivariantly homotopic to a map whose image is disjoint
from $Q$.
\end{bigthm}
\begin{intro-rems} (1). The assignment
$
f\mapsto \chi_G(f)
$
is a {\it global section} of a locally constant sheaf over the
equivariant mapping space $\maps(P,N)^G$. The stalk of this sheaf at
$f$ is $\Omega^G_0(E(f,i_Q);\xi)$. This explains the sense in which
$\chi_G(f)$ is an invariant: an equivariant homotopy from $f$ to another map
$f'\: P \to N$ gives rise to an isomorphism of stalks over $f$ and $f'$, and the
isomorphism transfers $\chi_G(f)$ to $\chi_G(f')$.
\smallskip
{\flushleft (2).} The second set of inequalities of Theorem \ref{equi-int}
can be regarded as a {\it gap condition}.
\smallskip
{\flushleft (3).} An advantage that Theorem \ref{equi-int} enjoys
over Theorem \ref{cohom} is that the obstruction group appearing in the former
is defined directly in terms of the maps $f\: P \to N$ and $Q \to N$.
It turns out that the obstruction group of Theorem \ref{cohom} is defined in terms of $f$ and the map $N-Q\to N$, which is not as easy to identify in terms of the input data. Furthermore,
the equivariant bordism group appearing in
Theorem \ref{equi-int} arises from a Thom spectrum indexed over a complete universe,
so more machinery is at hand for the purpose of making calculations (see \cite{May}).
\end{intro-rems}
\subsection*{Boundary conditions}
There is also a version of
Theorem \ref{equi-int} when $N$ is compact, possibly with boundary, and
$P$ is compact with boundary $\partial P\ne \emptyset$ satisfying
$f(\partial P) \cap Q = \emptyset$.
In this instance one seeks an equivariant deformation of $f$, fixed on
$\partial P$, to a new map whose image is disjoint from $Q$.
\begin{bigadd} \label{boundary} Theorem $\ref{equi-int}$ also holds
when $P$ and $N$ are compact manifolds with boundary, where it is assumed
$f(\partial P) \cap Q = \emptyset$ and $Q$ is embedded in the interior of $N$.
\end{bigadd}
\subsection*{Sparse isotropy}
When the action of $G$ on $P$ has few isotropy types, the inequalities
in Theorem \ref{equi-int} unravel somewhat.
\subsubsection*{Free actions}
Suppose the action of $G$ on $P$ is free.
Then the trivial group is the only isotropy group and the inequalities
of Theorem \ref{equi-int} reduce to a single inequality
$$
p \le 2(n-q) - 3 = 2n-2q-3\, .
$$
Furthermore, the
equivariant bordism group of Theorem \ref{equi-int} is isomorphic
to the unequivariant bordism group
$$
\Omega_0(EG \times_G E(f,i_Q);\id_{EG}\times_G\xi)\, ,
$$
where $EG \times_G E(f,i_Q)$ is the Borel construction.
This bordism group is generated by maps $u\:M \to EG \times_G E(f,i_Q)$
together with a stable isomorphism $\nu_M \cong u^*(\id_{EG}\times_G\xi)$,
where $M$ has dimension $p+q-n$ and $\nu_M$ denotes the stable normal bundle.
The identification of these groups is obtained using a transfer construction
(we omit the details).
\subsubsection*{Trivial actions}
If $P$ has a trivial $G$-action, then the only isotropy group is
$G$. In this instance, $P$ has image in $N^G$ and the intersection
problem becomes an unequivariant one, involving the map $f\:P\to N^G$
and the submanifold $Q^G \subset N^G$. Assume for simplicity that
$N^G$ and $Q^G$ are connected. Then by \cite{klein-williams}, the
intersection problem admits a solution when $\chi(f) \in
\Omega_0(E(f,i_{Q^G});\xi)$ is trivial and $p \le 2n^G -2q^G -3$.
\subsubsection*{Prime order groups} Let $G$ be a cyclic
group of prime order. By the above, we can assume both the trivial group
and $G$ appear as stabilizer groups. Then $\emptyset \neq P^G \subsetneq P$.
For simplicity, assume $Q^G$ and $N^G$ are connected.
Then the first set of inequalities of Theorem \ref{equi-int} becomes
$$
p \le 2n-2q-3\, , \qquad p^G \le 2n^G-2q^G -3\, ,
$$
and the second set amounts to the single inequality
$$
p^G \le n-q-2\, .
$$
\subsection*{Local intersection theory}
Suppose an equivariant intersection problem $(f,i_Q)$ has been
partially solved in the following sense: there is $G$-subspace $U
\subset P$ such that $f(U)$ is disjoint from $Q$. One can then ask
whether the solution extends to a larger subspace of $P$.
A {\it local equivariant intersection problem} amounts to these data.
A systematic approach to such questions is provided by
the isotropy stratification of $P$.
\subsubsection*{The isotropy stratification}
The relation of {\it subconjugacy} defines a partial ordering on ${\mathcal
I}(G;P)$: we will write
$$
(H) < (K)
$$
if $K$ is properly subconjugate to $H$. We then choose a total ordering
which is compatible with the partial ordering. Let
$$
(H_1) < (H_{2}) < \cdots <(H_\ell)
$$
be the maximal chain coming from the total ordering of ${\mathcal I}(G;P)$.
Let $P_i\subset P$ be the set of points $x$ having stabilizer group
$G_x$ in which $(G_x) \le (H_i)$. Then we have a filtration of
$G$-spaces
$$
\emptyset = P_0 \subsetneq P_1 \subsetneq P_2 \subsetneq \cdots\subsetneq P_\ell = P \, ,
$$
where each inclusion $P_i \subset P_{i+1}$ possesses the equivariant
homotopy extension property (cf.\ \cite{Davis}, \cite{Illman}).
\subsubsection*{The local obstruction}
Suppose $(f,i_Q)$ is an equivariant intersection problem
with $f(P_{i-1}) \cap Q = \emptyset$ for some $i \ge 1$. We seek a
deformation of $f$ relative to $P_{i-1}$ to a new map $f'$ such that
$f'(P_i) \cap Q = \emptyset$. The map $f'$ is then a
solution to the local problem.
Let $H$ be a representative of $(H_i)$ and let $f_H \: P_H \to N$ denote
the restriction of $f$ to $P_H$. The Weyl group $W(H) = N(H)/H$ acts
on $P^H$ and freely on $P_H$. Let ${}_{H}\xi$ be the virtual $W(H)$-bundle
over $E(f_H,i_Q)$ defined by
$$
j_N^*\tau_N - j_Q^*\tau_Q - j_{P_H}^*\tau_{P_H}\, .
$$
\begin{bigthm} \label{local} There is an invariant
$$
\chi^i_G(f) \in \Omega_0^{W(H)}(E(f_H,i_Q);{}_{H}\xi)
$$
which is trivial when the local problem at $P_i$ relative to
$P_{i-1}$ can be solved.
Conversely, assume $\chi^i_G(f) = 0$ and
$$
p^H \le 2f^*(i_Q)_{!}\cd_H(i_Q) - 3
$$
for $(H)= (H_i)$. Then
the local problem admits a solution.
\end{bigthm}
\subsubsection*{Descent} The global invariant $\chi_G(f)$
is an assemblage of all the local invariants.
Although the local invariants may contain more information,
they can fail to provide a solution to the global
question. To address this point, we will give criteria for deciding
when the vanishing of the global invariants implies the vanishing of
the local ones. In combination with Theorem \ref{local} the criteria
yield a kind of {\it descent} theory for equivariant intersection
problems.
Let $H \subset G$ be a subgroup with $(H) \in {\mathcal I}(G;P)$ and consider the inclusion
$$
P_{H} \subset P^{H}.
$$
Denote the corresponding inclusion
$E(f_H,i_Q) \subset E(f^H,i_Q)$ by $t_H$.
The map $f^H\: P^H \to N$ will denote the restriction
of $f$ to $P^H$. Define a virtual $W(H)$-bundle ${}^H\xi$ over
$E(f^{H},i_Q)$ by $j_N^*\tau_N - j_Q^*\tau_Q - j_{P^H}^*\tau_{P^H}$.
Since the pullback of ${}^H\!\xi$ along $t_H$ is ${}_H\xi$,
we get an induced homomorphism
$$
(t_H)_*\: \Omega^{W(H)}_0(E(f_{H},i_Q);{}_H\xi) \to
\Omega^{W(H)}_0(E(f^{H},i_Q);{}^H\!\xi) \, .
$$
\begin{bigthm}[``Global-to-Local''] \label{global-to-local} Assume
\begin{itemize}
\item
$f(P_{i-1}) \cap Q = \emptyset$
for some $i \ge 1$ (so $\chi_G^i(f)$ is defined).
\item $(t_H)_*$ is injective for $(H)= (H_i)$.
\end{itemize}
Then $\chi_G(f) = 0$ implies
$\chi^i_G(f)= 0$.
\end{bigthm}
\begin{bigcor}[``Descent''] \label{descent}
Let $(f,i_Q)$ be an equivariant intersection problem.
Assume
\begin{itemize}
\item $\chi_G(f) = 0$,
\item $(t_H)_*$ is injective,
\item
$p^H \le 2f^*(i_Q)_{!}\cd_H(i_Q) - 3$,
\end{itemize}
for every $(H) \in {\mathcal I}(G;P)$. Then
there is an equivariant deformation of $f$
to a map whose image is disjoint from $Q$.
\end{bigcor}
\subsection*{Applications}
\subsubsection*{Embeddings}
Suppose $f\: P \to N$ is a smooth immersion. Equipping
$P$ with a Riemannian metric, we identify
the total space
of the unit tangent disk bundle of $P$ with a compact tubular neighborhood
of the diagonal $\Delta_P \subset P \times P$. With respect
to this identification, the involution of $P \times P$ corresponds
to the one on the tangent bundle that maps a tangent vector to its negative.
Let $S(2)$ be the total space of the
unit spherical tangent bundle of $P$, and let
$P(2)$ be the effect of deleting the interior of the tubular neighborhood
from $P \times P$. Then $(P(2),S(2))$ is a free
$\mathbb Z_2$-manifold with boundary.
If we rescale the metric, then $f {\times} f$
determines an equivariant map
$$
(f(2),f(2)_{|S(2)})\:(P(2),S(2)) \to (N^{\times 2},N^{\times 2} - \Delta_N)\, ,
$$
which yields a relative $\mathbb Z_2$-equivariant intersection
problem with free domain.
The fiber product $E(f(2),i_{\Delta_N})$ in this case coincides with the
space of triples $(x,\gamma,y)$ with $x,y \in P(2)$ and $\gamma$ a path
from $f(x)$ to $f(y)$. The involution is given by $(x,\gamma,y) \mapsto
(y,\bar \gamma,x)$, where $\bar\gamma(t) := \gamma(1-t)$.
We set $E'(f,f) := E(f(2),i_{\Delta_N})$.
Applying Addendum \ref{boundary} and observing that
the action is free, we have an obstruction
$$
\mu(f) \in \Omega_0(E{\mathbb Z}_2 \times_{\mathbb Z_2}
E'(f,f);\id \times_{\mathbb Z_2}\xi)
$$
whose vanishing suffices for finding an equivariant deformation of
$f(2)$, fixed on $S(2)$, to a map whose image is disjoint from $\Delta_N$,
provided $3p+3 \le 2n$.
By a theorem of Haefliger \cite{Haefliger},
$f$ is regularly homotopic to an embedding in the metastable range $3p+3\le 2n$ if and only if the above equivariant intersection problem admits a solution.
Consequently,
\begin{bigcor}[compare {\cite[{Thm.\ 2.3}]{HQ}}]
If $f$ is regularly homotopic to an embedding, then
$\mu(f)$ is trivial.
Conversely, in the metastable range, the vanishing of
$\mu(f)$ implies $f$ is regularly homotopic to an embedding.
\end{bigcor}
\subsubsection*{Equivariant fixed point theory}
Let $M$ be a closed smooth manifold equipped with a smooth
action of a finite group $G$.
Let
$$
\maps^\flat(M,M)^G
$$
denote the space of
fixed point free $G$-maps from $M$ to itself.
Equivariant fixed point theory studies the extent to which the inclusion
$$
\maps^\flat(M,M)^G \to \maps(M,M)^G
$$
is a surjection on path components.
For an equivariant self map $f\: M \to M$, let
$$
L_f M
$$
be the space of paths $\lambda \: [0,1] \to M$
satisfying the constraint $f(\lambda(0)) = \lambda(1)$. Then
$G$ acts on $L_f M$ pointwise. Let
$(L_f M)_+$ be the effect of adding a disjoint basepoint
to $L_fM$, and finally, let
$$
\Omega^{G,\text{fr}}_0(L_f M)
$$
be the $G$-equivariant framed bordism of $L_f M$ in dimension zero.
\begin{bigthm} \label{lefschetz} There is
an invariant
$$
\ell_G(f) \in
\Omega^{G,\text{\rm fr}}_0(L_f M)
$$
which vanishes when $f$ is equivariantly homotopic to a
fixed point free map.
Conversely, assume $\ell_G(f) = 0$. If
\begin{itemize}
\item $m^H \ge 3$ for all $(H) \in {\mathcal I}(G;M)$,
\item $m^H \le m^K - 2$
for proper inclusions $K \subsetneq H$ with $K, H \in {\mathcal I}(G;M)$,
\end{itemize}
then $f$ is equivariantly homotopic to a fixed point free map.
\end{bigthm}
\begin{intro-rems} (1). The above can be regarded as an equivariant
analog of a classical theorem of Wecken \cite{Wecken}.
\smallskip
{\flushleft (2).} A formula of tom Dieck splits
$\Omega^{G,\text{\rm fr}}_0(L_f M)$
into a direct sum of unequivariant framed bordism
groups indexed over the conjugacy classes of subgroups
of $G$. The summand corresponding to a conjugacy class $(H)$
is
$$
\Omega^{\text{\rm fr}}_0(EW(H) \times_{W(H)} L_{f^H} M)\, ,
$$
where $EW(H) \times_{W(H)} L_{f^H} M$ is the Borel construction
of the Weyl group $W(H)$ acting on $L_{f^H} M$
(see \cite{tD}, \cite{May}).
Consequently, $\ell_G(f)$ decomposes as a sum
of invariants indexed in the same way.
We conjecture the Nielsen number $N(f^H)$ can
be computed from the projection of $\ell_G(f)$ onto
the displayed summand.
\smallskip
{\flushleft (3).} Our result bears close similarity to a theorem
of Fadell and Wong \cite{Fadell-Wong} (see also
\cite{Ferrario},\cite{Weber}). Their result uses
the Nielsen numbers $N(f^H)$ with $(H)\in
{\mathcal I}(G;M)$ in place of our $\ell_G(f)$.
\end{intro-rems}
\subsubsection*{Periodic Points}
A fundamental problem in discrete dynamics
is to enumerate the periodic orbits of a self map
$f\: M\to M$, where $M$ is a closed manifold.
Let $n \ge 2$ be an integer.
A point $x\in M$
is said to be {\it $n$-periodic} if $x$ is a
fixed point of the $n$-th iterate of $f$, i.e., $f^n(x) = x$.
The set of $n$-periodic points of $f$ is
denoted
$$
P_n(f)\, .
$$
The cyclic group $\mathbb Z_n$ acts on $P_n(f)$: if $t\in {\mathbb Z_n}$ is a
generator, then the action is defined by
$t\cdot x := f(x)$.
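Note this prescription does define a $\mathbb Z_n$-action: iterating the generator $n$ times gives
$$
t^n \cdot x \,\, = \,\, f^n(x) \,\, = \,\, x
$$
for every $x \in P_n(f)$, which is precisely the $n$-periodicity condition.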
The {\it homotopy $n$-periodic point set} of $f$
is the $\mathbb Z_n$-space
$$
\text{ho} P_n(f)
$$
consisting of $n$-tuples
$$
(\lambda_1,\lambda_2,...,\lambda_n)\, ,
$$
in which $\lambda_i\:[0,1] \to M$ is a path and the data are
subject to the constraints
$$
f(\lambda_{i+1}(0)) = \lambda_i(1)\, , \qquad i = 1,2,\dots
$$
Here we interpret the subscript $i$ as being taken modulo $n$.
The action of $\mathbb Z_n$ on $\text{ho}P_n(f)$ is
given by cyclic permutation of factors.
There is a map $P_n(f) \to \text{\rm ho}P_n(f)$
given by sending an $n$-periodic point $x$
to the $n$-tuple
$$
(c_x,c_{f(x)},c_{f^2(x)},...,c_{f^{n-1}(x)})
$$
in which $c_x$ denotes the constant path with value $x$.
\medskip
For a self map $f\:M \to M$, as above, let
\[
\Omega_0^{\mathbb Z_n,\text{\rm fr}}(\text{\rm ho}P_n(f))
\]
be the ${\mathbb Z}_n$-equivariant framed bordism
group of $\text{\rm ho}P_n(f)$ in dimension $0$.
\begin{bigthm}\label{periodic}
There is a homotopy theoretically defined invariant
$$
\ell_n(f) \in \Omega_0^{\mathbb Z_n,\text{\rm fr}}(\text{\rm ho}P_n(f))
$$
which is an obstruction to deforming $f$ to an $n$-periodic point
free self map.
\end{bigthm}
\begin{intro-rems}
At the time of writing, we do not know the extent to which $\ell_n(f)$ is the
complete obstruction to making $f$ $n$-periodic point free. When
$\dim M \ge 3$, Jezierski \cite{Jezierski} has shown the vanishing of
the Nielsen numbers $N(f^k)$ for all divisors $k|n$ implies $f$
is homotopic to an $n$-periodic point free map (here $f^k$ denotes
the $k$-fold composition of $f$ with itself). We conjecture that
$\ell_n(f)$ determines $N(f^k)$.
\end{intro-rems}
\subsubsection*{Periodic points and the fundamental group}
Let $\pi$ be a group equipped with an endomorphism
$\rho\: \pi\to \pi$.
Consider the equivalence relation on $\pi$ generated by the
elementary relations
$$
x \,\sim \, gx\rho^n(g)^{-1} \quad \text{and} \quad x\, \sim \, \rho(x)
$$
for $x,g \in \pi$.
Let
$$
\pi_{\rho,n}
$$
be the set of equivalence classes.
Let
$$
\mathbb Z[\pi_{\rho,n}]
$$
denote the free abelian group with basis $\pi_{\rho,n}$.
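As a sanity check (this example is ours), suppose $\rho$ is the identity. Then the second relation is vacuous and the first becomes ordinary conjugation
$$
x \,\sim\, g x g^{-1}\, , \qquad g \in \pi\, ,
$$
so $\pi_{\rho,n}$ is the set of conjugacy classes of $\pi$, independently of $n$.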
Let $f\: M \to M$ be a self map of a connected closed manifold $M$.
Fix a basepoint $*\in M$.
Choose a homotopy
class of path $[\alpha]$ from $*$ to $f(*)$.
Then $[\alpha]$ defines an isomorphism
$$
\pi_1(M,*) \cong \pi_1(M,f(*))\, .
$$
Furthermore, $f$ and $[\alpha]$ together define a homomorphism
$$
\rho\: \pi_1(M,*) \overset {f_\sharp} \to \pi_1(M,f(*)) \cong \pi_1(M,*) \, .
$$
Let $\pi = \pi_1(M,*)$.
\begin{bigthm} \label{stalk} The data consisting
of the self map $f\: M \to M$,
the choice of basepoint $*\in M$ and the homotopy
class of path $[\alpha]$ from $*$ to $f(*)$
determine an isomorphism of abelian groups
$$
\Omega_0^{\mathbb Z_n,\text{\rm fr}}(\text{\rm ho}P_n(f)) \,\, \cong \,\,
\bigoplus_{k|n} \mathbb Z[\pi_{\rho,k}]\, .
$$
With respect to this isomorphism, there is
a decomposition
$$
\ell_n(f)\,\, = \,\,\underset {k|n}\oplus \, \ell_n^k(f)\, ,
$$
in which $\ell_n^k(f)\in \mathbb Z[\pi_{\rho,k}]$.
\end{bigthm}
\section{Preliminaries \label{prelim}}
\subsection*{$G$-Universes}
The $G$-representations of this paper are assumed to come equipped
with a $G$-invariant inner product. A {\it $G$-universe} ${\mathcal U}$
is a countably infinite dimensional real representation of $G$ which
contains the trivial representation and which contains infinitely many
copies of each of its finite dimensional subrepresentations.
We will be interested in two kinds of universes. A {\it complete}
universe is one that contains infinitely many copies of
representatives for the irreducible representations of $G$ (in this
instance one can take $\mathcal U$ to be the countable direct sum of the
regular representation). A {\it trivial} universe contains only
trivial representations.
\subsection*{Spaces}
We work in the category of compactly generated topological spaces.
The empty space is $(-2)$-connected and every non-empty space is
$(-1)$-connected. A map $A \to B$ of spaces (with $B$ nonempty) is
$r$-connected if for any choice of basepoint in $B$, the
homotopy fiber with respect to this choice of basepoint is an
$(r{-}1)$-connected space. In particular, any map $A \to B$ is
$(-1)$-connected. A weak homotopy equivalence
is an $\infty$-connected map.
\subsection*{$G$-spaces}
Let $G$ be a finite group. A {\it $G$-space} is
a space $X$ equipped with a left action of $G$. A map
of $G$-spaces is a $G$-equivariant map.
Let $T$ be a transitive $G$-set.
The {\it $T$-cell} of dimension $j$ is the $G$-space
$$
T \times D^j \, ,
$$
where $G$ acts diagonally with trivial action on $D^j$.
\begin{rem} If a choice of basepoint $t\in T$ is given, then
one has a preferred isomorphism $G/H \cong T$, where
$H = G_t$ is the stabilizer of $t$. Given another choice
of basepoint $t'$, the stabilizer group $G_{t'}$ is conjugate
to $H$. We will call the conjugacy class $(H)$ the {\it type} of $T$.
Two transitive $G$-sets are isomorphic if and only if
they have the same type.
\end{rem}
Given a $G$-map $f\:T \times S^{j-1} \to Y$, one may form
$$
Y \cup_f (T \times D^j)\, .
$$
This is called a {\it $T$-cell attachment}.
If $j = 0$, we interpret the above as a disjoint union.
A {\it relative $G$-cell complex} $(X,Y)$ is a pair in which
$X$ is obtained from $Y$ by iterated equivariant cell attachments
(where we allow $T$ to vary over different transitive $G$-sets;
the collection of attached cells is allowed to be a class).
The order of attachment defines a partial ordering on the
collection of cells.
If this order is dimension preserving (i.e., no cell of dimension
$j$ is attached after a cell of dimension $j'$ when $j < j'$), then
$(X,Y)$ is a {\it relative $G$-CW complex}.
When the collection of such attachments is finite, one
says $(X,Y)$ is {\it finite}.
When $Y$ is the empty space, $X$ is a {\it $G$-cell complex} and
when the attachments are self-indexing, $X$ is a $G$-CW complex.
The {\it cellular dimension function} $d_\bullet$ for
$(X,Y)$ is the indexing function whose value at $H$ is
the maximal dimension of the cells of type $(H)$
appearing in the collection of attached cells.
We set $d_H = -\infty$ if $(X,Y)$ has no cells of type
$(H)$.
\begin{rem} Let $M$ be a closed smooth
$G$-manifold with dimension function $m^\bullet$. A result of Illman \cite{Illman} shows that
$M$ possesses
an equivariant triangulation. If $d_\bullet$ is the cellular dimension
function of this triangulation, then $d_H = m^H$ for all $H \in {\mathcal I}(G;M)$.
\end{rem}
\subsection*{Quillen model structure}
Let $T(G)$ be the category of $G$-spaces.
A morphism $f\:X \to Y$ is a {\it weak equivalence} if
for every subgroup $H\subset G$ the induced map of fixed points
$$
f^H\: X^H \to Y^H
$$
is a weak homotopy equivalence. Similarly, a morphism
$f$ is a {\it fibration}
if $f^H$ is a Serre fibration for every $H$.
A morphism $f\: X\to Y$ is a {\it cofibration} if
there is a relative $G$-cell complex $(Z,X)$ such that $Y$ is a retract
of $Z$ relative to $X$.
Let $R(G)$ be the category of {\it based} $G$-spaces.
A morphism $X\to Y$ of $R(G)$ is a weak equivalence,
cofibration or fibration if and only if it is so when
considered as a morphism of $T(G)$.
\begin{prop}[\cite{Dwyer-Kan_sing-real}, {\cite[Ch.\ VI \S5]{May}}]
With respect to the above structure,
both $T(G)$ and $R(G)$ are Quillen model categories.
\end{prop}
\subsection*{Connectivity}
One says an indexing function
$r_\bullet$ is a {\it connectivity function}
for a $G$-space $Y$ if $Y^H$ is $r_H$-connected for $H \subset G$
a subgroup (if $Y^H$ is empty, we set $r_H = -2$).
If $f\:Y \to Z$ is a morphism of $T(G)$, then a connectivity function
for $f$ is an indexing function $r_\bullet$
such that $f^H\:Y^H \to Z^H$ is an $r_H$-connected map
of spaces (one can always assume $r_H \ge -1$
since every map of spaces is at least $(-1)$-connected).
\begin{lem} \label{factorization} Let $Y \to Z$
be a fibration of $T(G)$ with connectivity function $r_\bullet$.
Suppose $(X,A)$ is a relative $G$-cell complex
with cellular dimension function $d_\bullet$. Assume
$d_H \le r_H$ for all subgroups $H \subset G$.
Then given a factorization
problem of the form
$$
\xymatrix{
A \ar[r] \ar[d] & Y \ar[d] \\
X \ar[r]\ar@{..>}[ur] & Z
}
$$
we can find an equivariant lift $X \to Y$ such that
the diagram commutes.
\end{lem}
\begin{rem} The condition $d_H \le r_H$ is automatically
satisfied if no cells of type $(H)$ occur in $(X,A)$.
\end{rem}
\begin{proof}[Proof of Lemma \ref{factorization}]
The proof proceeds by induction on the equivariant cells which
are attached to $A$ to form $X$.
The inductive step is
reduced to solving an equivariant lifting problem of the kind
$$
\xymatrix{
G/H \times S^{j-1} \ar[r] \ar[d] & Y \ar[d]^f \\
G/H \times D^j \ar[r] \ar@{..>}[ur] & Z\, ,\\
}
$$
where the horizontal maps are allowed to vary in their
equivariant homotopy class.
Now, a $G$-map $G/H \times U \to Z$,
where $U$ has trivial $G$-action, is the same thing as specifying
a map $U \to Z^H$. This means the lifting problem
reduces to an unequivariant one of the form
$$
\xymatrix{
S^{j-1} \ar[r] \ar[d] & Y^H \ar[d]^{f^H} \\
D^j \ar[r]\ar@{..>}[ur] & Z^H \\
}
$$
The latter lift exists because $f^H$ is $r_H$-connected and $j \le r_H$.
\end{proof}
\begin{cor}\label{factorize-homotopy} Consider the lifting problem
$$
\xymatrix{
A \ar[r] \ar[d] & Y \ar[d] \\
X \ar[r]\ar@{..>}[ur] & Z
}
$$
of $G$-spaces in which
\begin{itemize}
\item $Y \to Z$ is a map
with connectivity function $r_\bullet$,
\item $Y$ is cofibrant,
\item $(X,A)$ is
a relative $G$-cell complex with dimension function $d_\bullet$,
\item $d_H \le r_H$ for each subgroup $H \subset G$.
\end{itemize}
Then there is a $G$-map $X \to Y$ making the top triangle of
the diagram commute and the bottom triangle homotopy commute.
\end{cor}
\begin{proof} Factorize the map $Y \to Z$ as
$$
Y \to Y^\text{c} \to Z
$$
in which the map $Y \to Y^\text{c}$ is a cofibration and a weak equivalence
and the map $Y^\text{c} \to Z$ is a fibration. Apply Lemma
\ref{factorization}
to the diagram with $Y^\text{c}$ in place of $Y$ to get a map
$X \to Y^\text{c}$ making the diagram commute.
Since every object is fibrant, the acyclic cofibration
$Y \to Y^{\text{c}}$ admits a retraction; let $r \: Y^{\text{c}} \to Y$
be one. Let $f\: X \to Y$ be the map $X \to Y^{\text{c}}$
followed by the retraction. Then $f$ satisfies
the conclusion stated in the corollary.
\end{proof}
\subsection*{Fiberwise $G$-spaces}
Fix a $G$-space $B$. A {\it $G$-space over $B$} is a
$G$-space $X$ equipped with a $G$-map $X \to B$, usually denoted
$p_X$. A morphism
$X\to Y$ of $G$-spaces over $B$ is a $G$-map which commutes with
the structure maps $p_X$ and $p_{Y}$. Let
$$
T(B;G)
$$
be the category of $G$-spaces over $B$.
We also have a ``retractive'' version of this category, denoted
$$
R(B;G)\, .
$$
An object of the latter consists of a $G$-space $X$ and
maps $p_X\: X \to B$, $s_X \: B \to X$ such that $p_Xs_X = \text{id}_B$.
A morphism $X\to Y$ is an equivariant map compatible with both structure
maps.
In either of these categories, a morphism $X \to Y$ is said to be a
weak equivalence/cofibration/fibration if it is so when
considered as a morphism of $T(G)$
by means of the forgetful functor. With respect to these
definitions, $T(B;G)$ and $R(B;G)$ are Quillen model
categories.
An object $X$ in either of these categories is said
to be {\it $r_\bullet$-connected} if the structure map
$X \to B$ is $(r_\bullet+1)$-connected. A morphism
is said to be $r_\bullet$-connected if the underlying map
of $T(G)$ is.
The category $R(B;G)$ has {\it internal smash products}, constructed
as follows: let $X,Y \in R(B;G)$ be objects. Then
$$
X\wedge_B Y \in R(B;G)
$$
is the object given by the pushout of the diagram
$$
\begin{CD}
B @<<< X \cup_B Y @> \subset >> X \times_B Y
\end{CD}
$$
where $X\times_B Y$ is the fiber product of $X$ and $Y$.
Since $R(B;G)$ is a model category, one can form homotopy
classes of morphisms. If $X,Y\in R(B;G)$, we let
$$
[X,Y]_{R(B;G)}
$$
denote the set of homotopy classes of morphisms. Recall
the definition requires us to replace $X$ by its cofibrant approximation
and $Y$ by its fibrant approximation.
\section{The proof of Theorem \ref{cohom} \label{proof-cohom}}
\subsection*{Unreduced fiberwise suspension}
Let $E\in T(B;G)$ be an object.
The {\it unreduced fiberwise suspension} of $E$ over $B$
is the object $S_B E \in T(B;G)$ given by the double mapping
cylinder
$$
S_B E := B\times 0 \cup E\times [0,1] \cup B\times 1\, .
$$
The two evident inclusions $s_-,s_+\: B \to S_BE$ are
morphisms of $T(B;G)$.
Using $s_-$, we will consider $S_B E$ as an object of $R(B;G)$.
\subsection*{Obstruction to sectioning}
Let $B^+$ denote $B \amalg B$, considered as an object of
$R(B;G)$ using the left summand to define a section.
Then
$$
s := s_- \amalg s_+\: B^+\to S_B E
$$
is a morphism of $R(B;G)$. We consider the associated
homotopy class
$$
[s] \in [B^+,S_B E]_{R(B;G)} \, .
$$
The following proposition is an equivariant version of results of
Larmore (\cite[th.\ 4.2-4.3]{Larmore}; see also
\cite[prop.\ 3.1]{klein-williams}).
\begin{prop} \label{Larmore} Assume $E\in T(B;G)$ is fibrant. If
$E \to B$ admits an equivariant section, then $[s]$ is
trivial.
Conversely, assume
\begin{itemize}
\item $[s]$ is trivial,
\item $B$ is a $G$-cell complex with dimension
function $b_\bullet$,
\item the object $E \in T(B;G)$ is $r_\bullet$-connected, and
\item $b_\bullet \le 2r_\bullet + 1$.
\end{itemize}
Then
$E\to B$ admits an equivariant section.
\end{prop}
\begin{proof} Let $\sigma\: B \to E$ be a section. Apply
the functor $S_B$ and note $S_BB = B \times [0,1]$.
We then get a map
$$
S_B \sigma \: B\times [0,1] \to S_B E
$$
which gives a homotopy from $s_-$ to $s_+$ through morphisms
of $T(B;G)$. This is the same thing as establishing the triviality
of $[s]$.
Conversely,
the diagram
$$
\xymatrix{
E \ar[r]\ar[d] & B\ar[d]^{s_+} \\
B \ar[r]_{s_-} & S_B E
}
$$
commutes up to a preferred homotopy in the category $T(B;G)$.
As a diagram of $G$-spaces it is a homotopy pushout.
Let $H \subset G$ be a subgroup. Taking $H$-fixed points, we obtain
a homotopy pushout
$$
\xymatrix{
E^H \ar[r]\ar[d] & B^H\ar[d]^{s^H_+} \\
B^H \ar[r]_{s^H_-} & S_{B^H} E^H
}
$$
in the category $R(B^H;e)$ where $e$ is the trivial group.
Since $E$ is an $r_\bullet$-connected object, the map
$E^H \to B^H$ is $(r_H + 1)$-connected.
By the Blakers-Massey theorem (see e.g., \cite[p.\ 309]{Good_calc2}),
the second diagram is
$(2r_H+1)$-cartesian. Consequently, the first diagram
is $(2r_\bullet + 1)$-cartesian. Let $P$ denote the
homotopy inverse limit of the diagram
$$
\begin{CD}
B @> s_- >> S_B E @< s_+ << B\, .
\end{CD}
$$
Then we conclude the map $E \to P$ is $(2r_\bullet + 1)$-connected.
If we assume $[s] = 0$, then the map $P \to B$ admits a section
up to homotopy (using the universal property of the homotopy pullback).
By the assumptions on $B$ and Corollary \ref{factorize-homotopy},
the $G$-map $E\to B$ admits a
section up to homotopy. Since $E$ is fibrant,
this homotopy section can be converted into a strict section.
\end{proof}
\subsection*{Naive stabilization}
The {\it reduced fiberwise suspension} $\Sigma_B E$ of an object
$E\in R(B;G)$ is given by considering $E$ as an object
of $T(B;G)$, taking its unreduced fiberwise suspension
$S_B E$ and taking the pushout of the diagram
$$
B \leftarrow S_B B \to S_B E
$$
where $S_B B \to S_B E$ arises by applying
$S_B$ to the structure map $B\to E$.
A {\it naive parametrized $G$-spectrum} ${\mathcal E}$ is a collection of objects
$$
{\mathcal E}_n \in R(B;G)
$$
equipped with maps $\Sigma_B {\mathcal E}_n \to {\mathcal E}_{n+1}$.
\begin{ex} Let $Y \in R(B;G)$ be an object.
Its naive parametrized suspension spectrum $\Sigma_B^\infty Y$
has $n$-th object $\Sigma^n_B Y$, the $n$-th iterated fiberwise
suspension of $Y$.
\end{ex}
\begin{defn} Let $X \in T(B;G)$ be an object.
The zeroth {\it cohomology} of $X$ with coefficients in ${\mathcal E}$
is the abelian group given by
$$
H^0_G(X;{\mathcal E}) \,\, := \,\, \colim_{n\to \infty}
[\Sigma^n_B X^+,{\mathcal E}_n]_{R(B;G)}
$$
where $X^+ = X\amalg B$ and the maps in the
colimit arise from the structure maps of ${\mathcal E}$.
\end{defn}
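To fix ideas, here is how the definition specializes when the base is a point (our unwinding; the notation $\Sigma^\infty S^0$ for the naive sphere spectrum is ours):

```latex
% With B = * the categories R(B;G) and R(G) agree, fiberwise suspension
% is ordinary suspension, and for the naive suspension spectrum of S^0
% the defining colimit reads
\[
  H^0_G(X;\Sigma^\infty S^0)
    \;=\; \colim_{n\to\infty}\, [\Sigma^n X^+,\, S^n]_{R(G)}\,,
\]
% i.e., the degree-zero naive equivariant stable cohomotopy of X^+
% (only trivial representations are suspended, in contrast with the
% complete-universe theories of the later sections).
```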
\begin{rem} Assuming the maps ${\mathcal E}_n \to B$ are fibrations,
one can take the pullbacks $f^*{\mathcal E}_n \to X$ along the structure map
$f\: X \to B$. These form
a naive $G$-spectrum over $X$,
and an unraveling of the definitions gives
$$
H^0_G(X;f^*{\mathcal E}) = H^0_G(X;{\mathcal E}) \, .
$$
\end{rem}
\begin{defn} Let $X, E\in T(B;G)$ be objects
with $E$ fibrant and $X$ cofibrant. Let
$f\: X \to B$ be the structure map.
Let
$$
e(f,E) \in H^0_G(X;\Sigma_B^\infty S_B E)
$$
be the class defined by the map
$$
\begin{CD}
X^+ @> f^+ >> B^+ @> s >> S_B E\, .
\end{CD}
$$
\end{defn}
\begin{prop} Let $X,E$ and $f$ be as above. If
$E \to B$ admits an equivariant section along $f$,
then $e(f,E)$ is trivial.
Conversely, assume
\begin{itemize}
\item $e(f,E)$ is trivial,
\item $X$ is a $G$-cell complex with dimension
function $k_\bullet$,
\item $E \in T(B;G)$ is $r_\bullet$-connected, and
\item $k_\bullet \le 2r_\bullet + 1$.
\end{itemize}
Then $E \to B$ admits an equivariant section along $f$.
\end{prop}
\begin{proof} By Proposition \ref{Larmore}, it will be enough to prove
the maps
$$
\Sigma_B \: [\Sigma^n_B X^+,\Sigma^n_B S_B E]_{R(B;G)} \to
[\Sigma^{n+1}_B X^+,\Sigma^{n+1}_B S_B E]_{R(B;G)}
$$
are isomorphisms in the stated range. We will do this when
$n = 0$. The case $n >0$ is similar.
We have a map
$$
E \to \Omega_B \Sigma_B E
$$
which is adjoint to the identity. By Corollary \ref{factorize-homotopy},
it will be enough to show this morphism is
$(2r_\bullet + 1)$-connected. Let $H \subset G$ be a subgroup
and consider the map
$$
E^H \to \Omega_{B^H} \Sigma_{B^H} E^H
$$
of $R(B^H;e)$. If $b\in B^H$ is any point, we have an induced map of
fibers
$$
E_b^H \to \Omega\Sigma E_b^H \, .
$$
Since $E_b^H$ is $r_H$-connected, the Freudenthal suspension theorem
implies the last map is $(2r_H+1)$-connected. We infer that the map
$E^H \to \Omega_{B^H} \Sigma_{B^H} E^H$ is $(2r_H+1)$-connected, which is
what we needed to show.
\end{proof}
\subsection*{Proof of Theorem \ref{cohom}}
\begin{lem} \label{connectivity}
The map $(N-Q) \to N$ is $((i_Q)_{!}\cd_\bullet(i_Q)-1)$-connected.
\end{lem}
\begin{proof} Let $H \subset G$ be a subgroup. Then $(N-Q)^H = N^H - Q^H$,
and we need to compute the connectivity of the inclusion
$$
N^H - Q^H \to N^H \, .
$$
This will be done using transversality.
Consider a map of pairs
$$
\gamma\:(K,A) \to (N^H,N^H-Q^H)
$$
where $(K,A) = (D^j,S^{j-1})$ or $(S^j,\emptyset)$.
We can assume $\gamma$ is transverse to $Q^H$.
If $y \in \gamma(K) \cap Q^H$, then it must be the case that
$j \ge \cd_H(i_Q)(y)$. Therefore, $\gamma(K)$
is disjoint from $Q^H$ whenever $j < \cd_H(i_Q)(y)$
for all $y\in Q^H$. This is equivalent to requiring
$j < (i_Q)_{!}\cd_H(i_Q)$, so the conclusion follows.
\end{proof}
Let $$(N-Q) \to E \to N$$ be the effect of factorizing
$N-Q \to N$ as an acyclic cofibration followed by a fibration.
By Lemma \ref{connectivity}, $E \in T(N;G)$
is an $((i_Q)_{!}\cd_\bullet(i_Q)-2)$-connected object.
Since $N-Q$ is cofibrant, it will suffice to show
$E \to N$ admits a section along $f$.
We set ${\mathcal E}(i_Q)$ equal to the naive fiberwise suspension spectrum
$$
\Sigma^\infty_N S_N E
$$
and
$$
e_G(f) := e(f,E) \in H^0_G(P;{\mathcal E}(i_Q))\, .
$$
By the first part of Proposition \ref{Larmore},
if $E$ admits a section along $f$, then $e_G(f)$ is trivial.
Conversely, assume $e_G(f)$ is trivial.
Then
$$
e(\id_P,f^*E) \in H^0_G(P;f^*{\mathcal E}(i_Q))
$$
is also trivial. One easily checks $f^*E$ is an
$(f^*(i_Q)_{!}\cd_\bullet(i_Q)-2)$-connected object.
By the second part of Proposition \ref{Larmore},
the fibration $f^*E \to P$ admits a section
when
$$
p_\bullet \le 2f^*(i_Q)_{!}\cd_\bullet(i_Q) - 3.
$$
This completes the proof of Theorem \ref{cohom}.
\section{Naive versus Equivariant stabilization \label{naive-versus-equi}}
\subsection*{The unfibered case}
If $Y \in R(G) = R(*;G)$ is a cofibrant object, we define
$$
Q_GY = \colim_{V} \Omega^V\Sigma^V Y\, ,
$$
where $V$ ranges over the finite dimensional subrepresentations of a
complete $G$-universe ${\mathcal U}$ partially ordered
with respect to inclusion, and
$\Omega^V \Sigma^V Y$ is the space of
unequivariant based maps $S^V \to S^V\wedge Y$, where
$S^V$ is the one point compactification of $V$.
This is a $G$-space by conjugating maps by
group elements.
Consider the natural $G$-map
$$
QY \to Q_G Y\, .
$$
\begin{prop} \label{universe_change}
Assume $Y$ has connectivity function $r_\bullet$.
Then the map $QY \to Q_G Y$ is $s_\bullet$-connected,
where
$$
s_H = \inf_{K \subsetneq H} r_K\, .
$$
\end{prop}
\begin{proof} Let $H \subset G$ be a subgroup.
We must show the map of fixed points
$$
Q(Y^H) = (QY)^H \to (Q_G Y)^H
$$
is $s_H$-connected.
By the tom Dieck splitting (\cite[p.\ 203, Th.\ 1.3]{May},
\cite[th.\ 7.7]{tD}),
$$
(Q_GY)^H \,\, \simeq \,\, \prod_{(K)} Q\bigl(EW(K)_+\wedge_{W(K)}Y^K\bigr)\, ,
$$
where $(K)$ varies over the conjugacy classes of subgroups
of $H$ and $W(K)$ denotes the Weyl group. The factor
corresponding to $(K) = (H)$ gives the inclusion
$Q(Y^H) \to (Q_G Y)^H$. As $Q\bigl(EW(K)_+\wedge_{W(K)}Y^K\bigr)$
is $r_K$-connected, it follows that the inclusion is
$(\inf_{K \subsetneq H} r_K)$-connected.
\end{proof}
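Two extreme cases of the connectivity function may be worth recording (our unwinding of the formula for $s_H$, not taken from the text):

```latex
% For H = e the infimum runs over the empty set, while for G = Z/2 the
% only proper subgroup of G is e:
\[
  s_e \;=\; \inf_{K \subsetneq e} r_K \;=\; \inf \emptyset \;=\; +\infty\,,
  \qquad
  s_{\mathbb Z/2} \;=\; \inf_{K \subsetneq \mathbb Z/2} r_K \;=\; r_e\,.
\]
% In particular, QY --> Q_G Y is always a weak homotopy equivalence of
% underlying topological spaces.
```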
\subsection*{Equivariant stabilization}
Let $V$ be a finite dimensional $G$-representation equipped with
invariant inner product. We let $D(V)$
be its unit disk and $S(V)$ its unit sphere.
Let $X \in T(B;G)$ be an object.
The {\it unreduced
$V$-suspension} of $X$ over $B$ is the object
$S^V_B X$ given by
$$
S(V) \times B \cup_{S(V) \times X} D(V) \times X \, .
$$
Note the case of the trivial representation $V = \mathbb R$
recovers $S_B X$.
If $Y \in R(B;G)$ is an object, then
a reduced version of the construction is given by
$$
\Sigma^V_B Y = B \cup_{D(V) \times B} S^V_B Y \, .
$$
The {\it fiberwise $V$-loops} of $Y \in R(B;G)$ is the
object
$$
\Omega^V_B Y
$$
given by the space of pairs $(b,\lambda)$ in
which $b\in B$ and $\lambda \: S^V \to p_Y^{-1}(b)$ is a based map.
The action of $g\in G$ on $(b,\lambda)$ is given by
$(gb,g\lambda)$.
Then $(\Sigma_B^V,\Omega_B^V)$ is an adjoint functor pair.
\begin{defn} For an object $Y \in R(B;G)$, define
$$
Q_B^G Y \,\, := \colim_{V} \Omega_B^V \Sigma_B^V Y \, ,
$$
where the colimit is indexed over the finite dimensional subrepresentations of
a complete $G$-universe ${\mathcal U}$.
\end{defn}
\begin{prop} \label{fib-equi-versus-naive} Assume $Y \in R(B;G)$ is fibrant and cofibrant,
with connectivity function $r_\bullet$. Then
$$
Q_B Y \to Q_B^G Y
$$
is $s_\bullet$-connected, where
$$
s_H := \inf_{K \subsetneq H} r_K\, .
$$
\end{prop}
\begin{proof} Let $H \subset G$ be a subgroup. Then the $H$-fixed point space
of $Q_B Y$ is $Q_{B^H} Y^H$. Consider the evident map
$$
Q_{B^H} Y^H \to (Q_B^G Y)^H \, .
$$
Let $b \in B^H$ be a point. Then the associated map of homotopy fibers
at $b$ is identified with
$$
Q Y_b^H \to (Q_G Y_b)^H\, ,
$$
where $Y_b$ is the fiber of $Y \to B$ at $b$.
By Proposition \ref{universe_change}, the map of homotopy fibers is $s_H$-connected.
We conclude $(Q_B Y)^H \to (Q_B^G Y)^H$ is
also $s_H$-connected.
\end{proof}
\section{Parametrized $G$-spectra over a complete universe \label{fibered-spectra}}
Let ${\mathcal U}$ be a complete $G$-universe.
A {\it (parametrized) $G$-spectrum} ${\mathcal E}$ over $B$ indexed on ${\mathcal U}$
is a collection of objects
$$
{\mathcal E}_V \in R(B;G)
$$
indexed over the finite dimensional subrepresentations $V$ of ${\mathcal U}$
together with maps
$$
\Sigma_B^{V^\perp} {\mathcal E}_V \to {\mathcal E}_{W}
$$
for $V \subset W$, where $V^\perp$ is the orthogonal complement
of $V$ in $W$.
\begin{ex} Let $X \in R(B;G)$ be an object. The fiberwise equivariant
suspension spectrum of $X$, denoted $\Sigma^{\infty,G}_B X$, has $V$-th space
$$
Q_B^G(\Sigma^V_B X) \, .
$$
\end{ex}
These give rise to unreduced $\RO(G)$-graded cohomology
theories on $T(B;G)$.
In order to get the details of the construction right, it is
helpful to know a Quillen model structure is lurking in
the background. For this exposition, it will
suffice to explain what the weak equivalences, fibrant
and cofibrant objects are in this model structure.
The reference for this material is the book \cite{May-Sig}.
One says ${\mathcal E}$ is {\it fibrant} if each of the adjoint maps
${\mathcal E}_V \to \Omega^W_B {\mathcal E}_{V\oplus W}$ is a weak
equivalence of $R(B;G)$ and moreover, each of the maps
${\mathcal E}_W \to B$ is a fibration of $R(B;G)$. Any object
${\mathcal E}$ can be converted into a fibrant object ${\mathcal E}^\text{f}$
by a natural construction, called fibrant approximation.
A map ${\mathcal E} \to {\mathcal E'}$ is given by compatible maps
${\mathcal E}_V \to {\mathcal E'}_V$. A map is a {\it weak equivalence} if
after applying fibrant approximation, it becomes an equivalence
at each $V$. An object ${\mathcal E}$ is {\it cofibrant} if it is the retract
of an object which is obtained from the zero object by attaching
cells. Any ${\mathcal E}$ can be functorially replaced by a cofibrant object within
its weak homotopy type; this is called cofibrant approximation.
\subsection*{Cohomology and homology}
Let ${\mathcal E}$ be as above, and assume it is both fibrant and cofibrant.
Let $X \in T(B;G)$ be a cofibrant object.
The equivariant {\it cohomology} of $X$ with coefficients in $\mathcal E$
is the $\RO(G)$-graded theory on $T(B;G)$, denoted
$$
h^\bullet_G(X;\mathcal E)\, ,
$$
and defined as follows:
if $\alpha = V - W$ is a virtual representation, we set
$$
h^\alpha_G(X;\mathcal E) := [\Sigma^W_B X^+,\mathcal E_V]_{R(B;G)} \, .
$$
Similarly,
the {\it homology} of $X$ with coefficients in ${\mathcal E}$,
denoted
$$
h_\bullet^G(X;{\mathcal E})\, ,
$$
is defined by
$$
h_\alpha^G(X;{\mathcal E}):= \colim_{U}
[S^{V+U},({\mathcal E}_{W+U} \wedge_B X^+)/B]_{R(*;G)} \, ,
$$
where
$({\mathcal E}_{W+U} \wedge_B X^+)/B$
is the effect of taking the mapping cone of the section
$B \to {\mathcal E}_{W+U} \wedge_B X^+$.
Using fibrant and cofibrant approximation,
the above extends in a straightforward way to the case
of all objects $X$ and all ${\mathcal E}$ a $G$-spectrum over $B$
(we omit the details).
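As a degree check (ours, using only the two defining formulas above), setting $\alpha = 0$, i.e.\ $V = W = 0$, the two theories reduce to:

```latex
\[
  h^0_G(X;\mathcal E) \;=\; [X^+,\,{\mathcal E}_0]_{R(B;G)}\,,
  \qquad
  h_0^G(X;\mathcal E) \;=\; \colim_{U}\,
     [S^{U},\, ({\mathcal E}_{U} \wedge_B X^+)/B]_{R(*;G)}\,,
\]
% so in degree zero, cohomology is given by homotopy classes into the
% zeroth space, while homology is already stabilized over the universe.
```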
\begin{rem} \label{Po_notation} Here is an alternative approach
to the above, based on \cite{PoHu}. A map of $G$-spaces
$f: X \to Y$ induces a pullback functor
$$
f^*\: R(Y;G) \to R(X;G)
$$
given by $Z \mapsto Z\times_Y X$, with evident
structure map.
The functor $f^*$ has a right adjoint $f_*$ given by
$$
T\mapsto \underline{\secs}_Y(X\to T) \, ,
$$
where
$\underline{\secs}_Y(X\to T)$ has total space
$$
\{(y,s)|\, y\in Y, s\:X_y\to T_y\}\, .
$$
Here, $X_y$ denotes the fiber of $X \to Y$ at $y$,
$T_y$ is the fiber of the composite $T \to X \to Y$
at $y$
and $s\: X_y \to T_y$ is a based (unequivariant) map.
The functor $f^*$ also admits a left adjoint, denoted
$f_\sharp$, which is defined by
$$
T \mapsto T\cup_X Y \, .
$$
Let $\Sp(Y;G)$ be the
category of $G$-spectra over $Y$.
If we make these constructions levelwise, we
obtain functors
$$
f_*,f_\sharp\:\Sp(X;G) \to \Sp(Y;G) \, .
$$
Now take $f\: X\to *$ to be the constant map to a point,
and replace these functors by their derived versions
(using the Quillen model structure). Let ${\mathcal E}$ be
a fibred $G$-spectrum over $B$. If $X\in T(B;G)$ is an
object with structure morphism $p_X\:X\to B$,
we can take the (derived) pullback $p_X^*{\mathcal E}$,
which is a $G$-spectrum over $X$.
Then the $\RO(G)$-graded homotopy groups of the $G$-spectra
$$
f_* p_X^*{\mathcal E} \quad \text{and} \quad f_\sharp\, p_X^*{\mathcal E}
$$
yield the above cohomology and homology theories.
\end{rem}
\section{Poincar\'e Duality \label{duality-section}}
\subsection*{The orientation bundle}
Let $M$ be a $G$-manifold and $TM$ its tangent bundle.
Let $S^\tau \in R(M;G)$ be the object
defined by taking the fiberwise one point compactification of
$TM \to M$ (the section $M \to S^\tau$ is given by the zero section
of $TM$).
Define $S^{-\tau}$ to be the fiberwise functional dual of
$S^\tau$. Alternatively, one can define an unstable version of
$S^{-\tau}$ as follows: equivariantly embed $M$ in a
$G$-representation $V$ and let $\nu$ denote its normal bundle. Its
fiberwise one point compactification $S^\nu$ then represents
$S^{-\tau}$ up to a degree shift by $V$, i.e.,
$$
S^{-\tau} \simeq S^{\nu - V}\, .
$$
We call $S^{-\tau}$ the {\it orientation bundle} of $M$.
If ${\mathcal E}$ is a fibrant and cofibrant $G$-spectrum
over $M$, we set
$$
{}^{-\tau}{\!\mathcal E} \,\, := \,\, S^{-\tau} \wedge_M {\mathcal E} \, .
$$
Using the diagonal action, this is a fibred $G$-spectrum over $M$,
called the {\it twist} of ${\mathcal E}$ by the orientation bundle.
\begin{rem} The reader may object to this construction since
we haven't defined internal smash products of parametrized $G$-spectra. An
{\it ad hoc} way to define ${}^{-\tau}{\!\mathcal E}$ is to use the normal bundle
$\nu$. Let ${}^\nu{\!\mathcal E}$ be the parametrized $G$-spectrum
given by ${}^\nu{\!\mathcal E}_W = S^\nu \wedge_M {\mathcal E}_W$,
where we are using the fiberwise smash product in $R(M;G)$.
Then ${}^{-\tau}{\!\mathcal E}$
can be defined as the parametrized $G$-spectrum
whose $W$-th space is $\Omega_M^V {}^\nu{\!\mathcal E}_W$.
Alternatively, the reader is referred to \cite[Ch.\ 13]{May-Sig} for the
construction of the internal smash product.
\end{rem}
\subsection*{Poincar\'e duality}
The following is a special case of \cite[th.\ 4.9]{PoHu} and also
a special case of \cite[th.\ 19.6.1]{May-Sig}.
\begin{thm}[``Fiberwise Poincar\'e duality'']
\label{duality} Let ${\mathcal E}$ be a
$G$-spectrum over a closed smooth $G$-manifold $M$.
Then there is an isomorphism
$$
h_\bullet^G (M;{}^{-\tau}{\!\mathcal E})\,\, \cong \,\,
h^\bullet_G(M;{\mathcal E})\, .
$$
\end{thm}
\begin{rems} (1). Here it is essential that ${\mathcal E}$ be indexed
over a complete $G$-universe.
\smallskip
{\flushleft (2).} Here is how to recover Theorem \ref{duality} from
\cite[th.\ 4.9]{PoHu}. Take $f\: M \to *$ to be the constant map
to a point. Then, using the notation of
Remark \ref{Po_notation}, we have an equivalence of $G$-spectra
$$
f_\sharp (\mathcal E \wedge_M C_f^{-1}) \,\, \simeq \,\,
f_* {\mathcal E} \, ,
$$
where $C_f^{-1}$ is the orientation bundle $S^{-\tau}$.
Hence,
$$
f_\sharp {}^{-\tau\!}\mathcal E \,\, \simeq \,\,
f_* {\mathcal E} \, .
$$
Now take the equivariant homotopy groups of both sides
and use Remark \ref{Po_notation} to
obtain Theorem \ref{duality}.
\smallskip
{\flushleft (3).} Here is how to recover Theorem \ref{duality}
from \cite[th.\ 19.6.1]{May-Sig}.
Using their notation, take $M = E = B$ and
$J = {}^{-\tau}{\mathcal E}$. Then one has an
equivalence of equivariant fibred $G$-spectra over $M$
$$
J\,\, \simeq \,\, S_p \triangleright (J\wedge_M {\mathbb P_M} S^{\tau})\, .
$$
After applying homology $h_\bullet^G(M;{-})$ to both sides,
the left side becomes, in our notation, $h_\bullet^G(M;{}^{-\tau}{\mathcal E})$,
whereas the right side, after some unraveling of definitions
and rewriting, becomes $h^\bullet_G(M;{\mathcal E})$.
\end{rems}
\section{The equivariant complement formula \label{complement-section}}
As in the introduction, let $N$ be a $G$-manifold and let
$i\:Q\subset N$ be a closed $G$-submanifold. Then $N-Q \to N$
is an object of $T(N;G)$. Let
$$
S_N (N-Q) \in T(N;G)
$$
denote its fiberwise suspension. This has the equivariant homotopy type
of the complement of $Q$ in $N \times [0,1]$.
Let
$\nu$ denote the normal bundle of $Q$ in $N$.
We let $D(\nu)$ be its unit disk bundle and $S(\nu)$ its
unit sphere bundle.
\begin{lem} \label{complement} There is an equivariant weak equivalence
$$
D(\nu) \cup_{S(\nu)} N \,\, \simeq \,\, S_N (N-Q)\, .
$$
\end{lem}
\begin{proof} Identify $D(\nu)$ with a closed equivariant tubular neighborhood
of $Q$. Then we have an equivariant pushout
$$
\xymatrix{
S(\nu) \ar[r] \ar[d] & N- \interior D(\nu)\ar[d]\\
D(\nu) \ar[r] & N\, ,
}
$$
where $\interior D(\nu)$ is identified with the
interior of the tubular neighborhood
and the inclusion $N-Q \subset N-\interior D(\nu)$ is
an equivariant equivalence.
So we have an equivariant homeomorphism
$$
D(\nu) \cup_{S(\nu)} N \cong N \cup_{N-\interior D(\nu)} N
$$
and the right side has the equivariant homotopy type
of $S_N (N-Q)$, considered as an object of $R(N;G)$.
\end{proof}
The object $D(\nu) \cup_{S(\nu)} N$ is called the {\it fiberwise
equivariant Thom
space} of $\nu$ over $N$. We denote it by $$T_N(\nu)\, .$$ More generally,
for $\xi$ a virtual $G$-bundle over $Q$, one has a {\it fiberwise
equivariant Thom spectrum} $T_N(\xi)$ over $N$.
The virtual $G$-vector bundle
over $Q$ defined by $i^*\tau_N - \tau_Q$
is represented unstably by $\nu$.
Substituting this and taking fiberwise
suspension spectra of the right side of Lemma \ref{complement},
we obtain
\begin{cor}[``Complement Formula''] \label{comp-form}
There is a weak equivalence of $G$-spectra over $N$
$$
T_N(i^*\tau_N - \tau_Q) \,\, \simeq \,\,
\Sigma^{\infty,G}_N S_N(N-Q)\, .
$$
\end{cor}
Note
the left side of Corollary
\ref{comp-form} depends only on the underlying homotopy
class of the map $i\: Q \to N$.
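As a sanity check of the Complement Formula (our example, with $G$ the trivial group), take $N = \mathbb R^n$ and $Q = \{0\}$, so that $\nu$ is the rank-$n$ trivial bundle over a point:

```latex
% Both sides are homotopy pushouts of  * <--- S^{n-1} ---> *  on
% underlying spaces, since R^n and D^n are contractible and
% R^n - 0 deformation retracts to S^{n-1}:
\[
  T_N(\nu) \;=\; D^n \cup_{S^{n-1}} \mathbb R^n \;\simeq\; S^n\,,
  \qquad
  S_N(N-Q) \;=\; S_{\mathbb R^n}(\mathbb R^n - 0) \;\simeq\; S^n\,,
\]
% so the two sides of the Complement Formula agree here, as they must.
```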
\section{The proof of Theorem \ref{equi-int} and Addendum \ref{boundary}
\label{proof-equi-int}}
\begin{proof}[Proof of Theorem \ref{equi-int}]
Consider the equivariant intersection problem
$$
\SelectTips{cm}{}
\xymatrix{
& N - Q \ar[d] \\
P \ar[r]_f\ar@{..>}[ur]
& N\, ,
}
$$
from \S1. Recall $E \to N$ is the effect of converting
$N-Q \to N$ into a fibration.
Consider
$$
e(\id_P,f^*E) \in H^0_G(P;f^*{\mathcal E}(i_Q)) \, .
$$
An unraveling of definitions shows
$f^*{\mathcal E}(i_Q)$ is weakly equivalent to the
naive fiberwise suspension spectrum of $S_P f^*E$.
Since the object $S_P f^*E$
is $(f^*(i_Q)_{!}\cd_\bullet(i_Q)-1)$-connected
(cf.\ Lemma \ref{connectivity}),
by Proposition \ref{fib-equi-versus-naive} the map
$$
Q_PS_P f^* E \to Q_P^G S_P f^*E
$$
is $s_\bullet$-connected, where
$$
s_H = \inf_{K\subsetneq H} f^*(i_Q)_{!}\cd_K(i_Q) - 1 \, .
$$
Using Corollary \ref{factorize-homotopy}, we infer that the
evident homomorphism
$$
H^0_G(P;{\mathcal E}(i_Q)) \cong H^0_G(P;\Sigma^\infty_P S_P f^*E)
\to
h^0_G(P;\Sigma^{\infty,G}_P S_P f^*E)
$$
from the naive theory to the complete one
is injective when
$$
p^\bullet < s_\bullet \, .
$$
In this range, it follows the image of $e(\id_P,f^*E)$
in $h^0_G(P;\Sigma^{\infty,G}_P S_P f^*E)$ is trivial
if and only if $e(\id_P,f^*E)$ was trivial to begin with.
The next step is to identify
$h^\bullet_G(P;\Sigma^{\infty,G}_P S_P f^*E)$.
Using Lemma \ref{complement}, there is a weak equivalence
of objects
$$
S_N (N-Q)\,\, \simeq \,\, T_N(\nu) \in R(N;G)
$$
where $\nu$ is the normal bundle
of $Q$ in $N$. Consequently, there is an isomorphism
$$
h^\bullet_G(P;\Sigma^{\infty,G}_P S_P f^*E) \,\, \cong \,\,
h^\bullet_G(P;\Sigma^{\infty,G}_P f^* T_N(\nu)) \, .
$$
By Theorem \ref{duality} the group on the right
is naturally isomorphic to
$$
h_\bullet^G(P;{}^{-\tau_P} \Sigma^{\infty,G}_P f^*T_N (\nu))\, .
$$
An unraveling of the construction shows
the latter coincides with
the equivariant homotopy groups of the equivariant
Thom spectrum of the virtual bundle $\xi$ over $E(f,i_Q)$ appearing
in the introduction. In particular,
$$
\Omega^G_0(E(f,i_Q);\xi) \cong h_0^G(P;{}^{-\tau_P}
\Sigma^{\infty,G}_P f^*T_N (\nu) )\, .
$$
With respect to these identifications, we define the {\it equivariant
stable homotopy Euler characteristic}
$$
\chi_G(f) \in \Omega^G_0(E(f,i_Q);\xi)
$$
to be the unique element that corresponds to
$e(\id_P,f^*E)$.
By the above and Theorem \ref{cohom},
$\chi_G(f)$ fulfills the statement of Theorem \ref{equi-int}.
\end{proof}
\begin{proof}[Proof of Addendum \ref{boundary}]
When $N$ has a boundary and $P$ is closed, the above proof
extends without modification.
When $P$ has a boundary, one only needs to replace
Poincar\'e duality (\ref{duality}) in the
closed case with a version of Poincar\'e duality for manifolds
with boundary.
To formulate this,
let $(M,\partial M)$ be a compact smooth $G$-manifold with boundary. Then
duality in this case gives an isomorphism
$$
h_\bullet^G(M;{}^{-\tau_M}\mathcal E) \,\, \cong \,\,
h^\bullet_G(M,\partial M;\mathcal E) \, .
$$
The right side is defined as follows: for $\alpha = V -W$ and ${\mathcal E}$ fibrant and
cofibrant, define
$$
h^\alpha_G(M,\partial M;\mathcal E) \,\, := \,\,
[\Sigma^W_M (M/\!\!/\partial M),\mathcal E_V]_{R(M;G)}
$$
where $M/\!\!/\partial M$ is the double
$$
M\cup_{\partial M} M \in R(M;G)
$$
(the section $M\to M/\!\!/\partial M$ is
defined using the left summand).
\end{proof}
\section{The proof of Theorems \ref{local} and \ref{global-to-local}
\label{proof-local}}
\begin{proof}[Proof of Theorem \ref{local}]
Recall the factorization $(N-Q) \to E \to N$ in which
$(N-Q) \to E$ is an acyclic cofibration and $E\to N$ is a fibration
of $T(N;G)$.
By construction we have an isomorphism
$$
H^0_G(X;\Sigma^\infty_N S_N E) \,\, \cong \,\, [X^+,Q_N S_N E]_{R(N;G)}
$$
and an isomorphism
$$
h^0_G(X;\Sigma^{\infty,G}_N S_N E) \,\, \cong \,\,
[X^+,Q^G_N S_N E]_{R(N;G)}
$$
for any object $X \in T(N;G)$.
With respect to these identifications,
the homomorphism
$H^0_G(X;\Sigma^\infty_N S_N E)\to
h^0_G(X;\Sigma^{\infty,G}_N S_N E)$ arises from the map
$$
Q_N S_N E \to Q_N^G S_N E
$$
by applying homotopy classes $[X^+,{-}]_{R(N;G)}$.
Consider the commutative diagram of abelian groups
\begin{equation} \label{big_diagram}
\xymatrix{
[P_i/\!\!/P_{i-1}, Q_N S_N E]_{R(N;G)} \ar[r]^{j_1} \ar[d]_{\ell_1} &
[P_i^+,Q_N S_N E]_{R(N;G)} \ar[r]^{k_1}
\ar[d]^{\ell_2} & [P_{i-1}^+,Q_N S_N E]_{R(N;G)}
\ar[d]^{\ell_3} \\
[P_i/\!\!/P_{i-1}, Q^G_N S_N E]_{R(N;G)} \ar[r]_{j_2} &
[P_i^+,Q^G_N S_N E]_{R(N;G)} \ar[r]_{k_2} &
[P_{i-1}^+,Q^G_N S_N E]_{R(N;G)}
}
\end{equation}
with exact rows, where the object $P_i/\!\!/P_{i-1}$ is given
by $P_i \cup_{P_{i-1}} N$.
\begin{defn} Let $f_i\: P_i \to N$ be the restriction of $f$ to $P_i$.
Let
$$
e^i_G(f) \in [P_i/\!\!/P_{i-1},Q_N S_N E]_{R(N;G)}
$$
be the class determined by the composite
$$
\begin{CD}
P_i @> f_i >> N @> s_+ >> S_N E
\end{CD}
$$
together with the observation that its restriction
to $P_{i-1}$ has a preferred homotopy over $N$ to the
composite
$$
\begin{CD}
P_{i-1} @> f_{i-1} >> N @> s_- >> S_N E \, .
\end{CD}
$$
\end{defn}
By essentially the same argument which proves
Theorem \ref{cohom}, the map $f_i$ is equivariantly homotopic
to a map whose image is disjoint from $Q$, relative to
$P_{i-1}$, provided
\begin{itemize}
\item $e^i_G(f) = 0$ and
\item $p^H \le 2f^*(i_Q)_{!}\cd_H(i_Q) - 3$
for all $(H) \in {\mathcal I}(G;P)$.
\end{itemize}
If this is indeed the case, the equivariant homotopy extension
property can be used to obtain a new $G$-map $f'$, coinciding
with $f$ on $P_{i-1}$, and satisfying
$f'(P_i) \cap Q = \emptyset$.
In order to complete the proof of Theorem \ref{local}
we will apply a version of Poincar\'e duality.
Set $H = H_i$ and $P^H_s = P^H- P_H$.
Then the inclusion of pairs
$$
(G\cdot P^H,G\cdot P^H_s) \to (P_i,P_{i-1})
$$
is a relative $G$-homeomorphism. Recall the Weyl group
$W(H)$ acts on $P^H$ and restricts to a free action on $P_H$.
The following result follows from the existence of
equivariant tubular neighborhoods.
\begin{lem}[{\cite[\S IV]{Davis}}] \label{compactify}
The open $W(H)$-manifold $P_H$ is the interior of
a compact free $W(H)$-manifold $\bar P_H$ with
corners. Furthermore, the inclusion $P_H \subset \bar P_H$ is an
equivariant weak equivalence.
\end{lem}
Consider the left
square of diagram \eqref{big_diagram}. By Lemma
\ref{compactify}, and ``change of groups''
it maps to the square
\begin{equation} \label{little-diagram}
\xymatrix{
[\bar P_H/\!\!/\partial \bar P_H, Q_N S_N E]_{R(N;W(H))}
\ar[r]^{j'_1} \ar[d]_{\ell'_1} &
[(P^H)^+,Q_N S_N E]_{R(N;W(H))}
\ar[d]^{\ell'_2}\phantom{\, .} \\
[\bar P_H/\!\!/\partial \bar P_H, Q^G_N S_N E]_{R(N;W(H))} \ar[r]_{j'_2}
&
[(P^H)^+,Q^G_N S_N E]_{R(N;W(H))} \, .
}
\end{equation}
The proof of Theorem \ref{local} is completed in two steps.
\subsection*{Step 1} The homomorphism $\ell'_1$ is an isomorphism, since
$W(H)$ acts freely on $\bar P_H/\!\!/\partial \bar P_H$ in the
``based'' sense. This can be proved by an induction argument using
an equivariant cell decomposition, together with the observation
that the map $Q_N S_N E \to Q_N^G S_N E$ is a weak homotopy equivalence of
underlying topological spaces.
\subsection*{Step 2} There is a relative $W(H)$-homeomorphism
$$
(\bar P_H,\partial \bar P_H) \cong (P^H,P^H_s)
$$
which, together with change of groups, gives an
isomorphism
$$
[\bar P_H/\!\!/\partial \bar P_H, Q_N S_N E]_{R(N;W(H))} \,\, \cong\,\,
[P_i/\!\!/P_{i-1}, Q_N S_N E]_{R(N;G)} \, .
$$
We will consider $e^i_G(f)$ to be an element of the left
hand side. Then $\ell'_1(e^i_G(f))$ can be regarded as
an element of the relative cohomology group
$$
h^0_{W(H)}(\bar P_H,\partial \bar P_H; \Sigma^{\infty,G}_N S_N E) \, .
$$
Define $\chi^i_G(f)$ to be its Poincar\'e dual.
Using the equivariant equivalence $P_H \simeq \bar P_H$,
we can regard $\chi^i_G(f)$ as living in the
homology group
$$
h_0^{W(H)}(P_H; {}^{-\tau_{P_H}\!\!}
(\Sigma^{\infty,G}_{P_H} S_{P_H} f_H^*E))\, .
$$
As in the proof of Theorem \ref{equi-int} this homology group is
isomorphic to the equivariant bordism group
$$
\Omega^{W(H)}_0(E(f_{H},i_Q);{}_H\xi)\, .
$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{global-to-local}]
The proof uses diagrams \eqref{big_diagram} and
\eqref{little-diagram}. The homomorphism
$(t_H)_*$ is identified with
the Poincar\'e dual of the homomorphism
$j'_2$ of diagram \eqref{little-diagram}.
Therefore $(t_H)_*$ is injective if and only if $j'_2$ is.
Let
$$
\ell\: H^0_G(P;\Sigma^\infty_N S_N E)
\to
h^0_G(P;\Sigma^{\infty,G}_N S_N E)
$$
be the canonical homomorphism. Recall $\chi_G(f)$
is the Poincar\'e dual of $\ell(e_G(f))$.
The class $j'_1(e^i_G(f))$ is the one
associated with the composition
$$
\begin{CD}
(P^H)^+ \subset P^+ @> f >> N @> s >> S_N E \, ,
\end{CD}
$$
i.e., the restriction of $e_G(f)$ to $P^H$.
By hypothesis, $\chi_G(f)$ is trivial, so
$\ell'_2j'_1(e^i_G(f))$ must also be trivial
since the latter is the restriction to $P^H$
of the trivial class $\ell(e_G(f))$.
Hence
$$
j_2' \ell_1'(e^i_G(f))= \ell'_2 j'_1(e^i_G(f)) = 0\, .
$$
Furthermore, since
$j_2'$ is identified with $(t_H)_*$, and the latter
is by hypothesis injective, the vanishing of
$j_2' \ell_1'(e^i_G(f))$ implies $\ell_1'(e^i_G(f)) = 0$.
Hence, $\chi^i_G(f)$ vanishes too, as it is the Poincar\'e dual of
$\ell_1'(e^i_G(f))$.
The proof is now completed by induction on $i$ and Theorem \ref{local}.
\end{proof}
\section{The proof of Theorem \ref{lefschetz} \label{proof-lefschetz}}
Given a closed smooth $G$-manifold $M$, we have a commutative square
of equivariant mapping spaces
\begin{equation}\label{fixed_square}
\xymatrix{
\self^\flat(M)^G \ar[r]^\subset \ar[d] & \self(M)^G \ar[d] \\
\maps (M,M \times M - \Delta)^G \ar[r] & \maps (M,M \times M)^G
}
\end{equation}
where
\begin{itemize}
\item $\Delta := \Delta_M \subset M \times M$ is the diagonal,
\item $M \times M$ is given the diagonal $G$-action,
\item $\self(M)^G$ is the space of equivariant self maps of $M$,
\item $\self^\flat(M)^G$ is the subspace of fixed point free
equivariant self maps, and
\item the vertical maps of the square are given by taking
graphs and the horizontal ones are inclusions.
\end{itemize}
\begin{lem}\label{cartesian}
The square \eqref{fixed_square} is $\infty$-cartesian,
i.e., it is a homotopy pullback.
\end{lem}
\begin{proof} The following idea is used in the proof. Suppose
$X\to Y$ is a map of fibrations over $B$.
Let $X_b$ be the fiber of $X \to B$ at $b\in B$ and similarly
let $Y_b$ be the fiber of $Y \to B$. Then the diagram
$$ \xymatrix{ X_b \ar[r]\ar[d] & Y_b \ar[d]\\ X \ar[r] & Y }
$$
is $\infty$-cartesian.
We claim the first factor projection map
\begin{equation}\label{project1}
M\times M -\Delta \to M
\end{equation}
is a fibration of $T(G)$. For, let $H \subset G$
be a subgroup. Taking the induced map of
fixed point spaces yields the projection map
\begin{equation} \label{project2}
M^H\times M^H -\Delta_{M^H} \to M^H\, .
\end{equation}
Since $M^H$ is a manifold,
the map \eqref{project2} is a Serre fibration of spaces.
It follows that the map \eqref{project1} is a fibration
of $T(G)$.
Applying the functor $\maps(M,{-})^G$ to the projection
map, we infer
$$
\maps(M,M\times M - \Delta)^G \to \maps(M,M)^G
$$
is a fibration whose fiber at the identity map
of $M$ is $\self^\flat(M)^G$.
Similarly, the first factor projection $M \times M \to M$ is an equivariant fibration,
so the induced map
$$
\maps(M,M\times M)^G \to \maps(M,M)^G
$$
is a fibration whose fiber at the identity is $\maps(M,M)^G$.
It now follows easily from the first paragraph of
the proof that the square \eqref{fixed_square}
is $\infty$-cartesian.
\end{proof}
From Lemma \ref{cartesian}, the obstruction to deforming
an equivariant self map
$$
f\: M \to M
$$
to a fixed point free map coincides with
equivariantly deforming its graph
$\Gamma_f\: M \to M \times M$ off of the diagonal.
Consequently, we are reduced to the equivariant intersection problem
$$
\SelectTips{cm}{}
\xymatrix{
& M \times M - \Delta \ar[d] \\
M \ar[r]_{\Gamma_f} \ar@{..>}[ur] & M \times M\, .
}
$$
We will prove Theorem \ref{lefschetz} using Corollary \ref{descent}.
We will need to compute the codimension function of the
diagonal.
\begin{lem} \label{codim-Lefschetz} Let $i_\Delta\: \Delta\subset M \times M$ be the
inclusion. For $(H) \in {\mathcal I}(M;G)$, we have
$$
\cd_H(i_\Delta) = m^H \, .
$$
\end{lem}
\begin{proof} If $x\in \Delta = M$ is a point then
the codimension of the diagonal inclusion
$M^H_{(x)} \subset M^H_{(x)} \times M^H_{(x)}$ is clearly $m^H(x)$.
\end{proof}
By Lemma \ref{codim-Lefschetz}, the inequality
of Corollary \ref{descent} amounts to the condition
$$
m^H \le 2(\Gamma_f)^*(i_\Delta)_!m^H - 3\, .
$$
By a straightforward argument which we omit, $(\Gamma_f)^*(i_\Delta)_!m^H$
coincides with $m^H$, so the inequality becomes
$$
m^H \ge 3
$$
for $(H) \in {\mathcal I}(M;G)$.
We now turn to the problem of deciding when the homomorphisms $(t_H)_*$
are injective. What is special about the fixed point case is
that the virtual $W(H)$-bundle ${}^H\xi$, which sits over the space
$$
M^H \times_M L_f M\, ,
$$
is represented by an actual vector bundle. This vector
bundle is just the pullback of the normal bundle
of the embedding $M^H \subset M$ along the
(projection) map $M^H \times_M L_f M\to M^H$. Henceforth,
we identify ${}^H\!\xi$ with this vector bundle.
Therefore $(t_H)_*$
is identified with the homomorphism of equivariant framed
bordism groups
\begin{equation} \label{homo}
\Omega^{W(H)}_0(M_H \times_M L_f M;{}_H\xi) \to
\Omega^{W(H)}_0(M^H \times_M L_f M;{}^H\!\xi)
\end{equation}
induced by the inclusion
$$
t_H\: M_H \times_M L_f M \to M^H \times_M L_f M\, ,
$$
where both ${}_H\xi$ and ${}^H\!\xi$ are $W(H)$-vector bundles
and the pullback $t_H^*({}^H\!\xi)$ is isomorphic to ${}_H\xi$.
Hence, $(t_H)_*$
arises by taking the $W(H)$-fixed spectra
of the map of equivariant suspension spectra
\begin{equation} \label{susp-map}
\Sigma^\infty_{W(H)}(M_H \times_M L_f M)^{{}_H\xi} \to
\Sigma^\infty_{W(H)}(M^H \times_M L_f M)^{{}^H\!\xi}
\end{equation}
and then applying $\pi_0$.
The inclusion $M_H \subset M^H$ is $1$-connected, since by hypothesis
$M^H_s := M^H - M_H$ has codimension at least two in $M^H$. Consequently,
the inclusion $M_H \times_M L_f M \to M^H \times_M L_f M$ is also
$1$-connected. Furthermore, $M_H \times_M L_f M$ is $W(H)$-free.
If we apply the tom Dieck splitting to the $W(H)$-fixed points of the map
\eqref{susp-map}, we obtain maps of summands of the form
\begin{equation} \label{summand}
\Sigma^\infty ((M_H \times_M L_f M)^{{}_H\xi})^K_{hW'(K)}
\to
\Sigma^\infty ((M^H \times_M L_f M)^{{}^H\!\xi})^K_{hW'(K)} \, ,
\end{equation}
where $(K)$ ranges through the conjugacy classes of subgroups
of $W(H)$, $W'(K)$ denotes the Weyl group of $K$ in $W(H)$ and
the subscript ``${}_{hW'(K)}$'' is an abbreviation for the
Borel construction (in the notation above, we are first Thomifying,
then taking fixed points and thereafter taking the Borel construction).
If $K$ is not the trivial group, then the freeness of the action
implies the domain of \eqref{summand} is contractible,
and therefore this map induces an injection on $\pi_0$.
If $K$ is trivial, then the map takes the form
$$
\Sigma^\infty (M_H \times_M L_f M)^{{}_H\xi}_{hW(H)}
\to
\Sigma^\infty (M^H \times_M L_f M)^{{}^H\!\xi}_{hW(H)}
$$
which is evidently $1$-connected. Assembling these
injections, one sees the homomorphism \eqref{homo} is
also injective. Therefore, the homomorphism
$(t_H)_*$ appearing in the statement of Corollary \ref{descent} is injective
for every $(H) \in {\mathcal I}(M;G)$.
The proof of
Theorem \ref{lefschetz} is now completed by applying Corollary
\ref{descent}.
\section{The proof of Theorems \ref{periodic} and \ref{stalk} \label{proof-periodic}}
Let $f\: M \to M$ be a self map of a closed smooth manifold $M$.
The {\it Fuller map} of $f$ is the
$\mathbb Z_n$-equivariant self map of $M^{\times n}$ given by
$$
(x_1,...,x_n) \mapsto (f(x_{n}),f(x_1),f(x_2),...,f(x_{n-1}))
$$
(compare Fuller \cite{Fuller}). Here $n \ge 2$ and $\mathbb Z_n$ acts by
cyclic permutation of factors. The
assignment $x \mapsto (x,f(x),\dots,f^{n-1}(x))$ defines
a $\mathbb Z_n$-equivariant bijective correspondence
between the $n$-periodic point set of $f$ and the fixed point
set of $\Phi_n(f)$. In particular, $f$ is $n$-periodic point free
if and only if $\Phi_n(f)$ is fixed point free. We wish to know
whether this statement is true up to homotopy.
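The correspondence just described can be illustrated on a finite model. The following Python sketch (purely illustrative and not part of the argument; the five-point set and the map $f$ are our own choices) implements the Fuller map and checks that the $n$-periodic points of $f$ correspond exactly to the fixed points of $\Phi_n(f)$:

```python
from itertools import product

def fuller(f, n):
    # Phi_n(f): (x_1, ..., x_n) -> (f(x_n), f(x_1), ..., f(x_{n-1}))
    def phi(xs):
        return (f(xs[-1]),) + tuple(f(x) for x in xs[:-1])
    return phi

def orbit_tuple(f, x, n):
    # the assignment x -> (x, f(x), ..., f^{n-1}(x))
    out, y = [], x
    for _ in range(n):
        out.append(y)
        y = f(y)
    return tuple(out)

# a sample map on {0,...,4}: the 3-cycle (0 1 2) together with a tail 3 -> 4 -> 0
f = {0: 1, 1: 2, 2: 0, 3: 4, 4: 0}.__getitem__
n, points = 3, range(5)

phi = fuller(f, n)
fixed = {xs for xs in product(points, repeat=n) if phi(xs) == xs}
periodic = {x for x in points if orbit_tuple(f, x, n + 1)[-1] == x}

# the orbit tuples of the n-periodic points are exactly the fixed points of Phi_n(f)
assert fixed == {orbit_tuple(f, x, n) for x in periodic}
```

For this choice of $f$ the $3$-periodic points are $\{0,1,2\}$ and the fixed tuples of $\Phi_3(f)$ are precisely their orbit tuples.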
Let $\self(M)$ be the space of self maps of $M$,
and $\self(M^{\times n})^{\mathbb Z_n}$ the
space of $\mathbb Z_n$-equivariant self maps of $M^{\times n}$. The
{\it Fuller transform}
$$
\Phi_n\: \self(M) \to \self(M^{\times n})^{\mathbb Z_n}
$$
is defined by $f\mapsto \Phi_n(f)$.
Let
$$
\self^\flat_n(M) \subset \self(M)
$$
be the subspace of self maps having no $n$-periodic points.
Let
$$
\self^{\flat}(M^{\times n})^{\mathbb Z_n}
\subset \self(M^{\times n})^{\mathbb Z_n}
$$
be the subspace of equivariant
self maps of $M^{\times n}$ which are fixed point free.
Then there is a commutative diagram of spaces
\begin{equation} \label{period_diagram}
\xymatrix{
\self^\flat_n(M)
\ar[r]\ar[d] & \self(M)\ar[d] \\
\self^{\flat}(M^{\times n})^{\mathbb Z_n}
\ar[r] & \self(M^{\times n})^{\mathbb Z_n}
}
\end{equation}
where the vertical maps are given by the Fuller transform
and the horizontal ones are inclusions. The square is
cartesian, i.e., it is a pullback. We wish to understand the
extent to which it is a homotopy pullback.
\begin{ques} Is the above square
$0$-cartesian?
\end{ques}
That is, is the map from $\self^\flat_n(M)$
to the corresponding homotopy pullback a surjection on
components? If so, this would reduce the problem of studying the
$n$-periodic points of $f$ to the $\mathbb Z_n$-equivariant
fixed point theory of $\Phi_n(f)$. At the time of
writing we do not know this to be the case. Nevertheless,
we can still use the diagram to get an invariant of
self maps which is trivial when the self map is homotopic
to an $n$-periodic point free one.
\begin{defn} Set
$$
\ell_n(f) := \ell_{\mathbb Z_n}(\Phi_n(f))\, .
$$
\end{defn}
By a straightforward
calculation we omit, $\ell_n(f)$ lives in the group
$$
\Omega_0^{{\mathbb Z}_n, \text{fr}}(\text{\rm ho}P_n(f))
$$
appearing in the statement of Theorem \ref{periodic}.
It is clear that $\ell_n(f)$ vanishes when $f$ is homotopic
to an $n$-periodic point free map. If we apply
Theorem \ref{lefschetz} to $\Phi_n(f)$ we obtain
\begin{cor} Assume $\dim M \ge 3$ and $\ell_n(f) = 0$.
Then $\Phi_n(f)$ is equivariantly homotopic to a fixed point
free map.
\end{cor}
As mentioned in \S1, a result of Jezierski
\cite{Jezierski}
asserts $f$ is homotopic to an $n$-periodic point free map
if $\dim M \ge 3$ and the Nielsen numbers $N(f^k)$ vanish
for each $k$ a divisor of $n$. Conjecturally,
the invariant $\ell_n(f)$ contains at least as much information
as these Nielsen numbers (additional
evidence for this is provided below
in Theorem \ref{stalk}). If one assumes this to be the case,
then Jezierski's theorem tends to suggest that
the square \eqref{period_diagram} is $0$-cartesian.
However, we do not see any homotopy
theoretic reason why that should be true.
\begin{proof}[Proof of Theorem \ref{stalk}] The tom Dieck splitting yields a
decomposition of $\Omega_0^{\mathbb Z_n,\text{\rm fr}}(\text{\rm ho}P_n(f))$ into
summands of the form
$$
\Omega_0^{\text{\rm fr}}(E\mathbb Z_k \times_{\mathbb Z_k}\text{\rm ho}P_k(f))
$$
for $k$ a divisor of $n$, where we are using the fact
$\text{\rm ho}P_n(f)^{\mathbb Z_k} = \text{\rm ho}P_k(f)$.
Since the zeroth framed bordism of a space is the free abelian
group on its path components,
it will suffice to show $\pi_0(E\mathbb Z_k \times_{\mathbb Z_k}\text{\rm ho}P_k(f))$
is isomorphic to $\pi_{\rho,k}$. We first compute
the set of components of $\text{\rm ho}P_k(f)$.
Recall from \S1 that a point of $\text{\rm ho}P_k(f)$ is given by
a $k$-tuple of paths
$$
(\lambda_1,...,\lambda_k)
$$
subject to the constraint $f(\lambda_{i+1}(0)) = \lambda_i(1)$
where $i$ is taken modulo $k$.
Two points $(\lambda_1,...,\lambda_k)$ and
$(\gamma_1,...,\gamma_k)$ are in the same component if and only if
there are paths $\alpha_i$ having
initial point $\lambda_i(0)$ and terminal point $\gamma_i(0)$
such that the concatenated paths
$$
f(\alpha_i)\ast \lambda_{i+1} \quad \text{and } \quad
\gamma_{i+1}\ast \alpha_{i+1}
$$
are homotopic relative to their endpoints, for $i = 1,2,\dots,k$.
Since $M$ is connected, each component of $\text{\rm ho}P_k(f)$ has a point of the form
$(\lambda_1,\lambda_2,\dots,\lambda_k)$ satisfying $\lambda_i(0) = *$ and
$\lambda_i(1) = f(*)$. Let $\pi_f$ denote the set of homotopy classes of paths
in $M$ joining the basepoint $*$ to the point $f(*)$.
A choice of element $[\alpha]$ of
$\pi_f$ determines a bijection between $\pi_f$ and $\pi$.
Using this isomorphism the set of path components of
$\text{\rm ho}P_k(f)$ is a quotient of the $k$-fold cartesian product
$$
\pi \times \cdots \times \pi
$$
with respect to the equivalence relation
$$
(x_1,x_2,\dots,x_k) \, \sim \,
(g_1x_1\rho(g_2)^{-1},g_2x_2\rho(g_3)^{-1},\dots,g_kx_k\rho(g_1)^{-1})
$$
for $x_i,g_i \in \pi$. Using this relation, the $k$-tuple
$(x_1,\dots,x_k)$ is equivalent to the $k$-tuple
$$
(y,1,\dots,1)
$$
where $y = x_1\rho(x_2)\rho^2(x_3)\cdots \rho^{k-1}(x_k)$.
Furthermore, any two elements of the form $(y,1,\dots,1)$ and
$(z,1,\dots,1)$ are related precisely when $z = gy\rho^k(g)^{-1}$
for some element $g\in \pi$. Summarizing thus far, we have shown
$\pi_0(\text{\rm ho}P_k(f))$ is the quotient of $\pi$ with respect
to the equivalence relation
$$
y \, \sim\, g y \rho^k(g)^{-1}
$$
for $g,y\in \pi$.
To complete the proof of Theorem \ref{stalk}, one notes the set
of path components of the Borel construction coincides with the
coinvariants of $\mathbb Z_k$ acting on $\pi_0(\text{\rm ho}P_k(f))$. With respect
to the $k$-tuple description of $\pi_0(\text{\rm ho}P_k(f))$, the action is
induced by cyclic permutation of factors: $(x_1,x_2,\dots,x_k) \mapsto
(x_k,x_1,\dots,x_{k-1})$. If we identify this element with
$(y,1,\dots,1)$, with $y$ as above, then acting by a
generator of the cyclic group yields an element equivalent to
$(\rho(y),1,\dots,1)$. Consequently, $\pi_0(E\mathbb Z_k \times_{\mathbb
Z_k}\text{\rm ho}P_k(f))$ is obtained from $\pi_0(\text{\rm ho}P_k(f))$ by imposing the
additional relation $y \sim \rho(y)$. Hence, the set of path
components of the Borel construction is isomorphic to $\pi_{\rho,k}$.
\end{proof}
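The quotient computed in the proof can be worked out explicitly for a small example. The sketch below is illustrative only: we take $\pi = \mathbb Z_6$ (written additively, so the twisted relation $y \sim g y \rho^k(g)^{-1}$ becomes $y \sim g + y - \rho^k(g)$) with $\rho(y) = -y$ and $k=1$; these choices are ours, not drawn from the text.

```python
N, k = 6, 1  # pi = Z_6 written additively, with automorphism rho(y) = -y

def rho(y):
    return (-y) % N

def rho_pow(y, j):
    for _ in range(j):
        y = rho(y)
    return y

# union-find over the elements of pi, to take the equivalence closure
parent = list(range(N))

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

for y in range(N):
    for g in range(N):
        union(y, (g + y - rho_pow(g, k)) % N)  # twisted conjugacy relation
    union(y, rho(y))                           # extra relation from the Z_k-action

classes = {find(x) for x in range(N)}
print(len(classes))  # -> 2
```

Here $y \sim y + 2g$ identifies the even and odd residues into two classes, and the further relation $y \sim -y$ identifies nothing new, so $\pi_{\rho,1}$ has two elements in this example.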
\begin{conjecture} Let ${\mathcal N}_k(f)$ be the number
of non-zero terms in $\ell_n^k(f)$ expressed as a linear
combination of the basis elements of $\mathbb Z[\pi_{\rho,k}]$.
Then ${\mathcal N}_k(f)$ equals the Nielsen number of $f^k$.
\end{conjecture}
\section{Introduction}
Graphene, an atomically thin material made only of carbon atoms arranged
in a hexagonal lattice, was isolated only recently.\cite{Nov04,pnas}
Several reviews on the physics of graphene are already
available in the literature.\cite{Nov07,peresworld,rmp,rmpBeenakker}
At low energies, $E<1$ eV, the electronic dispersion has the form
$\epsilon(\bm k)=\pm 3tka/2$, where $t$ is the nearest neighbor
hopping integral and $a$ is the carbon-carbon distance. The effective
theory at these energy scales is that of a massless Dirac Hamiltonian
in (2+1) dimensions. If the experimental probes excite the system
within this energy range, the Dirac Hamiltonian is all there is for
describing the physics of graphene. On the other hand, for excitations
out of this energy range it is necessary to include corrections to the
Dirac Hamiltonian which will modify the energy spectrum and thus the
density of states of the system. One immediate consequence is that the
energy dispersion is no longer a function of the absolute value of the
wave-number $k$. In this paper, we will calculate the optical conductivity of graphene including the leading corrections to the Dirac cone approximation.
One of the first calculations of the optical conductivity of graphene,
using the Dirac Hamiltonian, was done by Gusynin and
Sharapov.\cite{Gusynin06} This first study was subsequently revisited
a number of times, \cite{Gusynin06PRL,Gusynin07PRL,Gusynin07} and
summarized in Ref. [\onlinecite{GusyninIJMPB}]. However, these
authors did not include non-linear effects in the calculation. Also,
the effect of disorder was treated at a phenomenological level, by
broadening the delta functions into Lorentzians of
constant width $\Gamma$. We note that in the Dirac-cone approximation, the
conductivity can also be obtained from the polarization. The
calculations for finite chemical potential and arbitrary $|\bm q|$ and
$\omega$ were done by Wunsch {\it et al.}\cite{Wunsch06} and Hwang and
Das Sarma.\cite{Hwang07}
The calculation of the optical conductivity of graphene, in the Dirac
Hamiltonian limit, including the effect of disorder in a self
consistent way was done by Peres {\it et al.}\cite{PeresPRB} and
recently also corrections due to electron-electron interaction were
discussed.\cite{Herbut08,Sachdev08} The calculation for the graphene
bilayer with disorder was done by Koshino and Ando, \cite{Koshino07}
and by Nilsson {\it et al.}.\cite {Nilsson07} The optical conductivity
of a clean bilayer was first computed by Abergel and Falko,
\cite{Falko07} and recently generalized to the
biased\cite{StauberFR,castroPRL,Morpurgo07} bilayer case by Nicol and
Carbotte.\cite{Nicol08}
Within the Boltzmann approach, the optical conductivity of graphene
was considered in Refs. [\onlinecite{PeresBZ,StauberBZ}], where the
effect of phonons and the effect of mid-gap states were included.
This approach, however, does not include transitions between the
valence and the conduction band and is, therefore, restricted to
finite doping. The voltage and the temperature dependence of the
conductivity of graphene was considered by Vasko and Ryzhii,
\cite{Vasko07} using the Boltzmann approach. The same authors have
recently computed the photoconductivity of graphene, including the
effect of acoustic phonons.\cite{Vasko08}
The effect of temperature on the optical conductivity of clean
graphene was considered by L. A. Falkovsky and
A. A. Varlamov.\cite{Falkovsky07} The far-infrared properties of clean
graphene were studied in Refs. \onlinecite{Falkovsky08} and
\onlinecite{FalkovskyB}. Also this study was restricted to the Dirac
spectrum approximation.
It is interesting to note that the conductivity of clean graphene, at
half filling and in the limit of zero temperature, is given by the universal
value $\pi e^2/(2h)$. \cite{Ludwig94,PeresIJMPB} On the other hand,
if the temperature is kept finite the conductivity goes to zero at
zero frequency, but the effect of optical phonons does not change the
value of the conductivity of clean graphene.\cite{Stauber08} This
behavior should be compared with the calculation of the DC
conductivity of disordered graphene, which for zero chemical potential
presents the value of $4e^2/(\pi h)$.\cite{PeresPRB,Ludwig94,Shon,Ando2002}
From the experimental point of view, the work of Kuzmenko {\it et al.}
\cite{Kuzmenko} studied the optical conductivity of graphite in the
energy range [0,1] eV, and showed that its behavior is close to that
predicted for clean graphene in that energy range. An explanation of
this odd fact was attempted within the Slonczewski-McClure-Weiss
model. The complex dielectric constant of graphite was studied by
Pedersen for all energy ranges.\cite{Pedersen} The infrared
spectroscopy of Landau levels in graphene was studied by Jiang {\it et
al.}\cite{Jiang} and Deacon {\it et al.}\cite{Deacon}, confirming the
magnetic field dependence of the energy levels and deducing a band
velocity for graphene of $1.1\times 10^6$ m/s. Recently, the infrared
conductivity of a single graphene sheet was
obtained.\cite{basov,Peres08}
Recent studies of graphene multilayers grown on SiC from THz to
visible optics showed a rather complex behavior\cite{George} with
values of optical conductivity close to those predicted for graphene
at infrared frequencies as well as to those measured in
graphite\cite{Kuzmenko}. This experiment\cite{George} especially
indicates the need for a graphene theory valid all the way to optical
frequencies. The absorption spectrum of multilayer graphene in high
magnetic fields was recently discussed in Ref. \onlinecite{deHeer},
including corrections to the Dirac cone approximation.
In this paper we address the question of how the conductivity of clean
graphene changes when one departs from the linear-spectrum
approximation. This is an important question for experiments done in the
visible region of the spectrum.\cite{nair} The paper is organized as
follows: in Sec. \ref{hamilt} we introduce our model and derive the
current operator; in Sec. \ref{OC} we discuss the optical conductivity
of graphene by taking into account its full density of states; in
Sec. \ref{stpprime} we discuss the effect on the optical conductivity
of a next-nearest-neighbor hopping term; in Sec. \ref{scattering} we
analyze the scattering of light by a graphene plane located at the
interface of two different dielectrics and give the transmissivity and
reflectivity curves in the visible region of the spectrum; finally in
Sec. \ref{Concl} we give our conclusions.
\section{The Hamiltonian and the current operators}
\label{hamilt}
The Hamiltonian, in tight binding form, for electrons in graphene
is written as
\begin{eqnarray}
H&=&-t\sum_{\bm R,\sigma}\sum_{\bm \delta=\bm \delta_1-\bm\delta_3}
[a^\dag_\sigma(\bm R)b_\sigma(\bm R+\bm \delta)+H.c.]\nonumber\\
&&-\frac {t'}2\sum_{\bm R,\sigma}\sum_{\bm \delta=\bm \delta_4-\bm\delta_9}
[a^\dag_\sigma(\bm R)a_\sigma(\bm R+\bm \delta)+H.c.]\nonumber\\
&&-\frac {t'}2\sum_{\bm R,\sigma}\sum_{\bm \delta=\bm \delta_4-\bm\delta_9}
[b^\dag_\sigma(\bm R)b_\sigma(\bm R+\bm \delta)+H.c.]\,,
\end{eqnarray}
where the operator $a^\dag_\sigma(\bm R)$ creates an electron in the
carbon atoms of sub-lattice $A$, whereas $b^\dag_\sigma(\bm R)$
does the same in sub-lattice $B$, $t$ is the hopping parameter connecting
first nearest neighbors, with a value
of the order of 3 eV, and $t'$ is the hopping parameter for second
nearest neighbors, with a value of the order of 0.1$t$.
The vectors $\bm\delta_i$ are represented in Fig. \ref{hopping_tp}
and have the form
\begin{equation}
\begin{array}{l}
\displaystyle{\boldsymbol{\delta}_1=\frac{a}{2}\left(1,\sqrt{3}\right)\;,\;
\boldsymbol{\delta}_2=\frac{a}{2}\left(1,-\sqrt{3}\right)\;,\;
\boldsymbol{\delta}_3= -a\left(1,0\right)}\;,\;\\ \\
\displaystyle{\boldsymbol{\delta}_4=a\left(0,\sqrt{3}\right)\;,\;\boldsymbol{\delta}_5=-\boldsymbol{\delta}_4\;,\;
\boldsymbol{\delta}_6=\frac{3a}{2}\left(1,\frac{1}{\sqrt{3}}\right)}\;,\; \\ \\
\displaystyle{\boldsymbol{\delta}_7=-\boldsymbol{\delta}_6\;,\;
\boldsymbol{\delta}_8=\frac{3a}{2}\left(1,-\frac{1}{\sqrt{3}}\right) \;,\;
\boldsymbol{\delta}_9=-\boldsymbol{\delta}_8}\;.
\end{array}
\end{equation}
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.4]{fig1.eps}
\end{center}
\caption{
(color online) Representation of the vectors $\bm\delta_i$, with $i=1,\dots,9$.
The carbon-carbon distance, $a$, and the $A$ and $B$ atoms
are also depicted.
\label{hopping_tp}}
\end{figure}
In order to obtain the current operator we modify the
hopping parameters as
\begin{equation}
t\rightarrow te^{i\frac{e}{\hbar}\bm A(t)\cdot\bm\delta}\,,
\end{equation}
and the same for $t'$. Expanding the exponential up to second order
in the vector potential $\bm A(t)$ and assuming that the electric
field is oriented along the $x$ direction, the current operator is obtained
from
\begin{equation}
j_x=-\frac{\partial H}{\partial A_x(t)}\,,
\end{equation}
leading to $j_x=j_x^P+A_x(t)j^D_x$. The operator $j_x^P$ reads
\begin{eqnarray}
j_x^P&=&\frac {tie}{\hbar}
\sum_{\bm R,\sigma}\sum_{\bm \delta=\bm \delta_1-\bm\delta_3}
[\delta_x a^\dag_\sigma(\bm R)b_\sigma(\bm R+\bm \delta)- H.c.]\nonumber\\
&+&\frac {t'ie}{2\hbar}\sum_{\bm R,\sigma}\sum_{\bm \delta=\bm \delta_4-\bm\delta_9}
[\delta_xa^\dag_\sigma(\bm R)a_\sigma(\bm R+\bm \delta)-H.c.]\nonumber\\
&+&\frac {t'ie}{2\hbar}\sum_{\bm R,\sigma}\sum_{\bm \delta=\bm \delta_4-\bm\delta_9}
[\delta_xb^\dag_\sigma(\bm R)b_\sigma(\bm R+\bm \delta)-H.c.]\,.
\end{eqnarray}
The operator $j^D_x$ can be found from the linear term in $A_x(t)$ expansion of
the Hamiltonian.
\section{The optical conductivity }
\label{OC}
\subsection{The Kubo formula}
The Kubo formula for the conductivity is given by
\begin{equation}
\sigma_{xx}(\omega) = \frac {< j^D_x>}{iA_s(\omega + i0^+)}+
\frac {\Lambda_{xx}(\omega + i0^+)}{i\hbar A_s(\omega + i0^+)}\,,
\end{equation}
with $A_s=N_cA_c$ the area of the sample, and $A_c=3\sqrt 3 a^2/2$
($a$ is the carbon-carbon distance)
the area of the unit cell,
from which it follows that
\begin{equation}
\Re\sigma_{xx}
(\omega) = D\delta(\omega) + \frac {\Im \Lambda_{xx}(\omega + i0^+)}
{\hbar\omega A_s}\,,
\end{equation}
and
\begin{equation}
\Im\sigma_{xx}
(\omega) = -\frac {< j^D_x>}{A_s\omega} - \frac {\Re \Lambda_{xx}(\omega + i0^+)}
{\hbar\omega A_s}\,,
\end{equation}
where $D$ is the charge stiffness which reads
\begin{equation}
D= -\pi \frac {<j^D_x>}{A_s} -\pi\frac {\Re \Lambda_{xx}(\omega + i0^+) }
{\hbar A_s}\,.
\label{DW}
\end{equation}
The function $\Lambda_{xx}(\omega + i0^+)$ is obtained from the
Matsubara current-current correlation function, defined as
\begin{equation}
\Lambda_{xx}(i\omega_n) = \int_0^{\hbar\beta}d\,\tau e^{i\omega_n\tau}
<T_{\tau} j^P_{x}(\tau)j^P_x(0)>\,.
\end{equation}
In what follows we start by neglecting the contribution of $t'$ to the
current operator. Its effect is analyzed later and shown to be
negligible. The function $\Im \Lambda_{xx}(\omega + i0^+)$ is given
by
\begin{eqnarray}
&&\Im \Lambda_{xx}(\omega + i0^+)=\frac {t^2e^2a^2}{8\hbar^2} \sum_{\bm k}
f[\phi(\bm k)]
\nonumber\\
&\times&
[n_F(-t\vert\phi(\bm k)\vert-\mu)
-n_F(t\vert\phi(\bm k)\vert-\mu)]\nonumber\\
&\times&[\pi \delta (\omega -2t\vert\phi(\bm k)\vert/\hbar) -
\pi \delta (\omega +2t\vert\phi(\bm k)\vert/\hbar)
]
\,,
\label{im}
\end{eqnarray}
where $n_F(x)$ is the usual Fermi distribution, $\mu$ is the chemical potential,
and the function $\Re \Lambda_{xx}(\omega + i0^+)$ is given by
\begin{eqnarray}
&&\Re \Lambda_{xx}(\omega + i0^+)=-\frac {t^2e^2a^2}{8\hbar^2}
{\cal P}
\sum_{\bm k}
f[\phi(\bm k)]
\nonumber\\
&\times&
[n_F(-t\vert\phi(\bm k)\vert-\mu)
-n_F(t\vert\phi(\bm k)\vert-\mu)]\nonumber\\
&\times&
\frac {4t\vert\phi(\bm k)\vert/\hbar}{\omega^2- (2t\vert\phi(\bm k)\vert/\hbar)^2}
\,,
\label{Re}
\end{eqnarray}
with
\begin{equation}
\label{FormFactor}
f[\phi(\bm k)] = 18-4\vert\phi(\bm k)\vert^2 + 18 \frac {[\Re\phi(\bm k)]^2-[\Im \phi(\bm k)]^2}
{\vert\phi(\bm k)\vert^2}\,,
\end{equation}
and ${\cal P}$ denoting the principal part of the integral.
The graphene energy bands are given by
$\epsilon(\bm k) = \pm t\vert\phi(\bm k)\vert$, with $\phi(\bm k)$
defined as
\begin{equation}
\phi(\bm k)=1 + e^{i\bm k\cdot(\bm\delta_1-\bm\delta_3)}+
e^{i\bm k\cdot(\bm\delta_2-\bm\delta_3)}\,.
\end{equation}
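As a quick numerical sanity check of $\phi(\bm k)$ (a sketch; the zone-corner coordinates $\bm K = (2\pi/(3a),\, 2\pi/(3\sqrt 3\, a))$ are the standard ones for this lattice orientation and are supplied by us), $|\phi|$ equals $3$ at the zone center and vanishes at the Dirac point:

```python
import cmath
import math

a = 1.0  # carbon-carbon distance (sets the length unit)
d1 = (1.5 * a, math.sqrt(3) / 2 * a)   # delta_1 - delta_3
d2 = (1.5 * a, -math.sqrt(3) / 2 * a)  # delta_2 - delta_3

def phi(kx, ky):
    # phi(k) = 1 + exp(i k.(d1-d3)) + exp(i k.(d2-d3))
    return (1
            + cmath.exp(1j * (kx * d1[0] + ky * d1[1]))
            + cmath.exp(1j * (kx * d2[0] + ky * d2[1])))

gamma = abs(phi(0.0, 0.0))  # zone center: |phi| = 3
K_point = (2 * math.pi / (3 * a), 2 * math.pi / (3 * math.sqrt(3) * a))
dirac = abs(phi(*K_point))  # zone corner: |phi| = 0 (the Dirac point)
print(gamma, dirac)  # 3.0 and a value at machine precision
```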
\subsection{The real part of the conductivity}
The expression for (\ref{im}) can almost be written in terms of the
energy dispersion $\epsilon(\bm k)$, except for the term
\begin{equation}
\label{Neglect}
\frac {[\Re\phi(\bm k)]^2-[\Im \phi(\bm k)]^2}
{\vert\phi(\bm k)\vert^2}\,.
\end{equation}
In order to proceed analytically, and for the time being
(see Section \ref{ugly}),
we approximate this term by
its value calculated in the Dirac cone approximation (see appendix A)
\begin{equation}
\frac 1 {N_c}\sum_{\bm k}\frac {[\Re\phi(\bm k)]^2-[\Im \phi(\bm k)]^2}
{\vert\phi(\bm k)\vert^2}g(\vert\phi(\bm k)\vert)\simeq 0\,,
\label{Eq_dir}
\end{equation}
where $g(\vert\phi(\bm k)\vert)$ is some given function depending only
on the modulus
of $\phi(\bm k)$.
With this approximation, we have
\begin{equation}
f[\phi(\bm k)] \simeq 18-4\vert\phi(\bm k)\vert^2\,.
\end{equation}
Introducing the density of states per spin per unit cell, $\rho(E)$,
defined as
\begin{equation}
\rho(E)=\frac 1 {N_c}\sum_{\bm k}
\delta(E-t\vert\phi_{\bm k}\vert)\,,
\label{Eqrho}
\end{equation}
the expression for the real part of the conductivity reads
\begin{eqnarray}
\Re\sigma_{xx}(\omega)=\sigma_0\frac {\pi
t^2a^2}{8A_c\hbar\omega}\rho(\hbar\omega/2) [18-(\hbar\omega)^2/t^2]
\nonumber\\ \times
\left[\tanh\frac{\hbar\omega+2\mu}{4k_BT}+\tanh\frac{\hbar\omega-2\mu}{4k_BT}
\right]\,.
\label{Eq_s}
\end{eqnarray}
Equation (\ref{Eq_s}) is essentially exact in the visible range of the
spectrum; the only missing piece is the contribution coming from
Eq. (\ref{Eq_dir}), which will later be shown to be negligible.
In the above equation $\sigma_0$ is
\begin{equation}
\sigma_0=\frac \pi 2 \frac {e^2}h\,.
\end{equation}
The momentum integral in Eq. (\ref{Eqrho})
can be performed leading to
\begin{equation}
\rho(E)=\frac{2E}
{t^2\pi^{2}}\left\{ \begin{array}{ccc}
\frac{1}{\sqrt{F(E/t)}}\mathbf{K}
\left(\frac{4E/t}{F(E/t)}\right)\,, & & 0<E<t\,,\\
\\\frac{1}{\sqrt{4E/t}}\mathbf{K}\left(\frac{F(E/t)}
{4E/t}\right)\,, & & t<E<3t\,,\end{array}\right.
\label{eq:DOS1L}
\end{equation}
where $F(x)$ is given by
\begin{equation}
F(x)=(1+x)^{2}-\frac{(x^{2}-1)^{2}}{4}\,,
\label{eq:Fe}
\end{equation}
and $\mathbf{K}(m)$ is defined as
\begin{equation}
\mathbf{K}(m)=\int^1_0 dx [(1-x^2)(1-mx^2)]^{-1/2}\,.
\label{eq:K}
\end{equation}
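Equation (\ref{eq:DOS1L}) is straightforward to evaluate numerically. The sketch below is our own implementation (the elliptic integral is computed via the arithmetic-geometric mean, a choice not made in the text); it checks the formula against the linear Dirac-cone density of states at small energy:

```python
import math

t = 1.0  # energies measured in units of the hopping t

def F(x):
    return (1 + x) ** 2 - (x * x - 1) ** 2 / 4

def ellipK(m):
    # complete elliptic integral in the parameter convention of Eq. (eq:K),
    # computed with the arithmetic-geometric mean; valid for 0 <= m < 1
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return math.pi / (2 * a)

def rho(E):
    # density of states per spin per unit cell, Eq. (eq:DOS1L)
    x = E / t
    pref = 2 * E / (t * t * math.pi ** 2)
    if 0 < x < 1:
        return pref / math.sqrt(F(x)) * ellipK(4 * x / F(x))
    if 1 < x < 3:
        return pref / math.sqrt(4 * x) * ellipK(F(x) / (4 * x))
    return 0.0

# at small E the formula reproduces the Dirac-cone result 2E/(sqrt(3) pi t^2)
E = 0.01 * t
print(rho(E), 2 * E / (math.sqrt(3) * math.pi * t * t))
```

The two printed values agree closely at $E = 0.01\,t$, as expected from the small-$E$ expansion given below.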
In Figure \ref{fig1} we give a plot of Eq. (\ref{Eq_s}) over
a large energy range, including the visible part of the spectrum
($E\in[1.0,3.1]$ eV).
\begin{figure}[t]
\begin{center}
\includegraphics*[angle=0,width=0.8\linewidth]{fig2.eps}
\caption{(color online) The optical conductivity as function of
frequency for two values of the chemical potential,
$\mu=0$ eV and $\mu=0.2$ eV, and two
temperatures $T=10$ K and $T=300$ K.
The bottom panels are a zoom close to zero frequency, showing
the frequency region where the differences in chemical potential and in temperature
are most important. We have used $t=2.7$ eV.
\label{fig1}}
\end{center}
\end{figure}
It is useful to derive from Eq. (\ref{Eq_s}) an asymptotic expansion for $\Re\sigma_{xx}(\omega)$.
For that, we expand the density of states around $E=0$ and obtain
\begin{equation}
\rho(E)\simeq \frac {2E}{\sqrt 3 \pi t^2}+\frac {2E^3}{3\sqrt 3 \pi t^4}
+\frac {10E^5}{27\sqrt 3 \pi t^6}\,.
\label{expand_rho}
\end{equation}
Using Eq. (\ref{expand_rho}) in Eq. (\ref{Eq_s}) we obtain for
the optical conductivity the approximate result
\begin{eqnarray}
\Re\sigma_{xx}(\omega)&=&\sigma_0
\left(
\frac 1 2 +\frac 1 {72}\frac {(\hbar\omega)^2}{t^2}
\right)\nonumber\\
&\times&
\left(
\tanh\frac{\hbar\omega+2\mu}{4k_BT}+
\tanh\frac{\hbar\omega-2\mu}{4k_BT}
\right)\,.
\end{eqnarray}
In the case of $\mu=0$ this expression is the same
as in Kuzmenko {\it et al.}\cite{Kuzmenko}
and in Falkovsky and Pershoguba
\cite{Falkovsky08}
if in both cases the $(\hbar\omega/t)^2$ term is neglected.
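The approximate result above can be sketched numerically as follows (illustrative code, not part of the derivation; energies in eV, $k_B$ in eV/K, and the default $t=2.7$ eV is the value used in the figures):

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant in eV/K

def re_sigma_over_sigma0(hw, mu=0.0, T=300.0, t=2.7):
    """Approximate Re(sigma_xx)/sigma_0 to order (hbar*omega/t)^2."""
    thermal = (np.tanh((hw + 2.0 * mu) / (4.0 * KB * T))
               + np.tanh((hw - 2.0 * mu) / (4.0 * KB * T)))
    return (0.5 + hw**2 / (72.0 * t**2)) * thermal

# At mu = 0 and hbar*omega >> k_B T the conductivity approaches sigma_0;
# below the Pauli-blocking threshold hbar*omega < 2*mu it is strongly suppressed
print(re_sigma_over_sigma0(1.0), re_sigma_over_sigma0(0.1, mu=0.2, T=10.0))
```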
\subsection{Correction to $\Re\sigma_{xx}(\omega)$
introduced by Eq. (\ref{Eq_dir})}
\label{ugly}
We now want to make quantitative the effect of the term given by
Eq. (\ref{Eq_dir}), which was neglected in Eq. (\ref{Eq_s}). To that end we expand the function $\phi(\bm k)$
up to third order in momentum. The expansion is
\begin{eqnarray}
\phi(\bm k)&\simeq& \frac {3a}2(k_y-ik_x)+
\frac 1 2 \left(\frac {3a}2\right)^2(k^2_x+k^2_y/3+2ik_xk_y)
\nonumber\\
&+&\frac 1 6 \left(\frac {3a}2\right)^3
(ik_x^3-k^3_y/3-3k^2_xk_y+ik^2_yk_x)\;.
\end{eqnarray}
The angular integral in Eq. (\ref{Eq_dir}) leads to
\begin{equation}
\int^{2\pi}_0d\theta\left(
[\Re\phi(\bm k)]^2-[\Im\phi(\bm k)]^2
\right)=\frac {\pi}{24}\left(\frac {3ak}2\right)^4\,,
\end{equation}
where we still assume $\vert\phi(\bm k)\vert=3ak/2$. Within this
approximation the contribution to the conductivity coming from
Eq. (\ref{Eq_dir}) has the form
\begin{equation}
\label{uglySigma}
\Re\sigma_{xx}^u(\omega)=\sigma_0\frac 1 {4! 2^4}\left(
\frac {\hbar\omega}t
\right)^2\left(
\tanh\frac {\hbar\omega+2\mu}{4k_BT}+
\tanh\frac {\hbar\omega-2\mu}{4k_BT}
\right)\,.
\end{equation}
Due to the prefactor, this contribution has only a small effect and
shows that the current operator essentially preserves the circular
symmetry found close to the $K$-points. In Figure \ref{ss0} we present
$\sigma(\omega)/\sigma_0$ as a function of frequency in the optical range, for
several values of $t$, and also discuss the numerical value of the term given in
Eq. (\ref{uglySigma}).
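At $T=0$ and $\mu=0$ the two $\tanh$ factors in Eq. (\ref{uglySigma}) reduce to $2$, so the relative size of the correction is easy to quantify (illustrative sketch, with our own naming):

```python
def sigma_u_over_sigma0(hw, t=2.7):
    """Magnitude of Eq. (uglySigma) at T = 0 and mu = 0; energies in eV."""
    # prefactor 1/(4! * 2^4) = 1/384; the two tanh factors give a factor of 2
    return (hw / t)**2 / 384.0 * 2.0

# Even at the upper edge of the visible range the correction stays below 1%
print(sigma_u_over_sigma0(3.1))
```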
\begin{figure}[t]
\begin{center}
\includegraphics*[angle=0,width=0.8\linewidth]{fig3.eps}
\caption{(color online) Left: $\sigma(\omega)/\sigma_0$ as a function
of the frequency, including both Eq. (\ref{Eq_s}) and the
correction $\Re\sigma^u_{xx}$, Eq. (\ref{uglySigma}), for several values of $t$.
Right: The correction $\Re\sigma^u_{xx}$, given by Eq. (\ref{uglySigma}), for several values of $t$.
It is clear that the contribution from this term has practically no effect on the results given by
Eq. (\ref{Eq_s}). The calculations are for zero chemical potential and for room temperature (there is no visible effect on $\sigma(\omega)/\sigma_0$ in the visible range of the spectrum, when compared
to a zero-temperature calculation).
\label{ss0}}
\end{center}
\end{figure}
\subsection{The imaginary part of the conductivity}
Neglecting the term proportional to Eq. (\ref{Neglect}), the imaginary
part of the conductivity is given by
\begin{eqnarray}
\Im\sigma_{xx}(\omega)&=&\frac 1{\hbar\omega}\frac 4 {\pi}\sigma_0
(\mu-\frac 2 9 \mu^3/t^2)
-\frac{\sigma_0}{\pi}
\log\frac {\vert \hbar\omega+2\mu\vert}{\vert\hbar\omega-2\mu \vert}
\nonumber\\
&-&\frac {\sigma_0}{36\pi}\left(\frac {\hbar\omega}t\right)^2
\log\frac {\vert \hbar\omega+2\mu\vert}{\vert\hbar\omega-2\mu \vert}\,,
\end{eqnarray}
where we have included all the terms that diverge at
$\hbar\omega=2\mu$ and the contribution from the cubic term in
frequency in the density of states. The contribution of the last term
of $f[\phi(\bm k)]$ in (\ref{FormFactor}) is given by
\begin{equation}
\Im\sigma_{xx}^u(\omega)=-\frac {\sigma_0}{18\pi}\frac 1 {4! 2^4}\left(
\frac {\hbar\omega}t
\right)^2
\log\frac {\vert \hbar\omega+2\mu\vert}{\vert\hbar\omega-2\mu \vert}\,.
\end{equation}
If we neglect the terms in $\mu^3$ and $\omega^2$ we obtain the same
expressions as those derived by Falkovsky and Pershoguba
\cite{Falkovsky08}. We note that these terms are also obtained from the
polarizability in the limit $q\rightarrow0$ since the Fermi velocity
is not $k$-dependent.\cite{Wunsch06}
\subsection{The Drude weight and the Hall coefficient}
The Drude weight (or charge stiffness)
defined by Eq. (\ref{DW}) can be computed in
different limits. In the case $\mu=0$ we are interested in its
temperature dependence. For zero temperature the
exact relation
\begin{equation}
\sum_{\bm k}\vert\phi(\bm k)\vert=\frac 1 8 \sum_{\bm k}
\frac {f[\phi(\bm k)]}{\vert\phi(\bm k)\vert}
\end{equation}
assures that $D=0$ when $\mu=0$.
In general, the Drude weight has the following form:
\begin{eqnarray}
D(T,\mu)&=&t\sigma_0\frac {4\pi^2}{3\sqrt 3}
\frac 1 {N_c}\sum_{\bm k}\left[
\vert\phi(\bm k)\vert - \frac 1 8
\frac {f[\phi(\bm k)]}{\vert\phi(\bm k)\vert}
\right]\nonumber\\
&\times&\left[\tanh\frac{t\vert\phi(\bm k)\vert +\mu}{2k_BT}
+\tanh\frac{t\vert\phi(\bm k)\vert -\mu}{2k_BT}
\right]\,.
\end{eqnarray}
In the case of finite $\mu$, the temperature dependence of $D(T,\mu)$
is negligible. In the Dirac cone approximation we obtain
\begin{equation}
D(0,\mu)=4\pi \sigma_0\mu\left[
1 -\frac 1 9 \left(\frac {\mu}t
\right)^2
\right]\,.
\end{equation}
On the other hand, at zero chemical potential the temperature dependence
of the charge stiffness is given by
\begin{equation}
D(T,0)=8\pi\ln 2\sigma_0 k_BT -4\pi\zeta(3)\sigma_0\frac{(k_BT)^3}{t^2}\,,
\end{equation}
where $\zeta(x)$ is the Riemann zeta function.
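The zero-$\mu$ temperature dependence above is easily evaluated (illustrative sketch; $\zeta(3)$ is taken from `scipy.special.zeta`, and energies are in eV):

```python
import numpy as np
from scipy.special import zeta

KB = 8.617e-5  # Boltzmann constant in eV/K

def drude_weight_over_sigma0(T, t=2.7):
    """D(T, mu=0)/sigma_0 in eV, from the low-temperature expansion above."""
    kT = KB * T
    return 8.0 * np.pi * np.log(2.0) * kT - 4.0 * np.pi * zeta(3) * kT**3 / t**2

# Even at room temperature the cubic term is a tiny correction to the linear one
print(drude_weight_over_sigma0(300.0))
```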
Zotos {\it et al.} have shown a very general relation between the Drude
weight and the Hall coefficient.\cite{Zotos1,Zotos2} This relation is
\begin{equation}
R_H=-\frac {1}{eD}\frac{\partial D}{\partial n}\,.
\label{RH}
\end{equation}
Equation (\ref{RH}) does not take into account the possibility of valley
degeneracy and therefore it has to be multiplied by two when we apply it
to graphene. In the case of a finite chemical potential we have the
following relations between the Fermi wave vector $k_F$ and the
chemical potential: $n=k_F^2/\pi$ and $\mu=2tak_F/3$. Applying
Eq. (\ref{RH}) to graphene we obtain
\begin{equation}
R_H=-\frac 2 e \frac {n^{-1/2}/2-3a^2\sqrt n/8}
{\sqrt n-a^2n^{3/2}/4}\simeq -\frac 1{en}\,.
\end{equation}
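For low densities ($a^2 n\ll 1$) the exact expression indeed reduces to $R_H\simeq -1/(en)$. The sketch below is our own check (we set $e=1$ so that the product $e n R_H\to -1$, and use $a=1.42$ \AA\ for the carbon-carbon distance):

```python
import numpy as np

def hall_coefficient(n, a=1.42e-10, e=1.0):
    """R_H from the exact expression above; n in m^-2, a the carbon-carbon distance."""
    num = 0.5 / np.sqrt(n) - 3.0 * a**2 * np.sqrt(n) / 8.0
    den = np.sqrt(n) - a**2 * n**1.5 / 4.0
    return -(2.0 / e) * num / den

# For a typical gated-graphene density the product e*n*R_H is close to -1
n = 1e16
print(hall_coefficient(n) * n)
```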
\section{Effect of $t'$ on the conductivity of graphene}
\label{stpprime}
In this section we want to discuss the effect of $t'$ on the conductivity
of graphene. One important question is what the value of $t'$ is in graphene.
Deacon {\it et al.}\cite{Deacon}
proposed that the dispersion for graphene, obtained from a tight-binding approach with non-orthogonal basis functions, is of the form
\begin{equation}
E=\pm \frac{t\vert\phi(\bm k)\vert}{1\mp s_0\vert\phi(\bm k)\vert}\,
\label{Eq_decon}
\end{equation}
with $\vert\phi(\bm k)\vert\simeq \frac 3 2 ka$, where $a$ is the carbon-carbon
distance. On the other hand, the dispersion of graphene including
$t'$ has the form
\begin{equation}
E= \pm t \frac 3 2 ka - t'\left[ \frac 9 4(ka)^2-3\right]\,.
\end{equation}
To relate $t'$ and $s_0$ we expand Eq. (\ref{Eq_decon}) as
\begin{equation}
E\simeq \pm t \vert\phi(\bm k)\vert (1\pm s_0\vert\phi(\bm k)\vert)
=\pm t\frac 3 2 ka+s_0t\frac 9 4(ka)^2 \,,
\end{equation}
which leads to $t'/t=-s_0$, with $s_0=0.13$.
For computing the conductivity of graphene we need to know the Green's
functions in the presence of $t'$. These can be written in matrix form as
\begin{eqnarray}
\label{eq:funcao2_green_rede}
\mathbbm G^0(\bm k,i\omega_n) &=& \sum_{\alpha = +,-}
\frac{1/2}
{ i\omega_n - \alpha t\vert\phi(\bm k)\vert/\hbar +
2t'[\vert\phi(\bm k)\vert^2-3]/\hbar }\nonumber\\
&\times&\left(
\begin{array}{cc}
1 & -\alpha \phi(\bm k)/\vert\phi(\bm k)\vert \\
-\alpha \phi(\bm k)^\ast/\vert\phi(\bm k)\vert & 1
\end{array}
\right),
\label{green}
\end{eqnarray}
where $\mathbbm G^0(\bm k,i\omega_n)$ stands for
\begin{equation}
\mathbbm G^0(\bm k,i\omega_n)=
\left(
\begin{array}{cc}
G_{AA}(\bm k,i\omega_n) & G_{AB}(\bm k,i\omega_n)\\
G_{BA}(\bm k,i\omega_n) & G_{BB}(\bm k,i\omega_n)
\end{array}
\right)\,.
\end{equation}
From Eq. (\ref{green}) we see that only the poles are modified, with the
coherence factors having the same form as in the case with $t'=0$.
The current operator $j_x^P=j_{x,t}^P+j_{x,t'}^P$, as derived from the tight-binding Hamiltonian
is written in momentum space as
\begin{eqnarray}
j_{x,t}^P&=&\frac {tiea}{2\hbar}\sum_\sigma\sum_{\bm k}
[(\phi(\bm k)-3)a^\dag_{\sigma,\bm k}b_{\sigma,\bm k}-\nonumber\\
&-&(\phi^\ast(\bm k)-3)b^\dag_{\sigma,\bm k}
a_{\sigma,\bm k}]\,,
\end{eqnarray}
and
\begin{eqnarray}
j_{x,t'}^P&=&\
\frac {3t'iea}{2\hbar}\sum_\sigma\sum_{\bm k}[\phi(\bm k)-\phi^\ast(\bm k)]
\times\nonumber\\
&&(a^\dag_{\sigma,\bm k}a_{\sigma,\bm k}+
b^\dag_{\sigma,\bm k}b_{\sigma,\bm k})\,.
\end{eqnarray}
The operators $j_{x,t}^P$ and $j_{x,t'}^P$ are the current operators
associated with the hopping amplitudes $t$ and $t'$, respectively. The
current-current correlation function is now a sum of three different
terms: one where we have two $j_{x,t}^P$ operators, another one where
we have a $j_{x,t}^P$ and a $j_{x,t'}^P$, and a third one with two
$j_{x,t'}^P$. This last term vanishes exactly, since it would
correspond to the current-current correlation function of a triangular
lattice. The crossed term also vanishes exactly, which can be
understood by performing a local gauge transformation on the fermionic
operators of one sub-lattice only. The first term leads to a
contribution of the same form as in Eq. (\ref{Eq_s}) but with the
numerators of the two $\tanh$ replaced by
$E_+=\hbar\omega+2t'[(\hbar\omega)^2/(4t^2)-3]+2\mu$ and
$E_-=\hbar\omega-2t'[(\hbar\omega)^2/(4t^2)-3]-2\mu$, respectively.
As a consequence, the effect of
$t'$ on the conductivity of graphene enters only through the band structure
$E_{\pm}$ in the Fermi functions. In Figure \ref{condtprime} we plot
the real part of the optical conductivity for two different values of
$\mu$, one with the Fermi energy in the conduction band and the other with
the Fermi energy in the valence band. There is a small effect near twice
the absolute value of the chemical potential, due to the breaking of
particle-hole symmetry introduced by $t'$. For optical frequencies, the effect of $t'$ is negligible.
\begin{figure}[t]
\begin{center}
\includegraphics*[angle=0,width=0.8\linewidth]{fig4.eps}
\caption{(color online) Real part of the conductivity for two values of the chemical potential
at a temperature of 45 K. The parameters used are
$t=3.1$ eV and $t'=-0.13t$. Only the energy range $\omega\in[0.1,0.3]$ eV is shown, because
only there does the difference in chemical potential have any noticeable effect.
\label{condtprime}}
\end{center}
\end{figure}
\section{The electromagnetic scattering problem}
\label{scattering}
Here we derive the reflectivity and the transmissivity of light
between two media, characterized by
electrical permittivities $\epsilon_i\epsilon_0$, with $i=1,2$,
separated by a graphene flake. The scattering
geometry is represented in Fig. \ref{fig2}, i.e., we assume the field
to propagate in the direction $\bm k=(k_x,0,k_z)$.
In the following, we assume the field to be given by $\bm
E=(E_x,0,E_z)$ ($p$ polarization). The case of $s$
polarization is addressed in Appendix \ref{app:transmissivity}.
\begin{figure}[t]
\begin{center}
\includegraphics*[angle=0,width=0.8\linewidth]{fig5.eps}
\caption{(color online) Geometry of $p$ polarized light scattering between two media with graphene
separating them. The electrical permittivities of the two media
are $\epsilon_i\epsilon_0$, with $i=1,2$.
\label{fig2}}
\end{center}
\end{figure}
The electromagnetic boundary conditions then are\cite{Jackson}
\begin{eqnarray}
(\bm D_2-\bm D_1)\cdot\bm n=\rho\,,\\
\bm n\times(\bm E_2-\bm E_1)=0\,,
\end{eqnarray}
where $\rho$ is the surface charge density, in our case the
graphene charge density. If we represent the intensity
of the incident, reflected, and transmitted electric field as
$E_i$, $E_r$, and $E_t$, respectively, the boundary conditions
can be written as
\begin{eqnarray}
\label{BCone}
(E_i-E_r)\cos\theta_1=E_t\cos\theta_2\,,\\
-\epsilon_2\epsilon_0E_t\sin\theta_2+
\epsilon_1\epsilon_0(E_i+E_r)\sin\theta_1=\rho\,,
\label{BCtwo}
\end{eqnarray}
where $\epsilon_0$ is the vacuum permittivity, $\epsilon_1$ and
$\epsilon_2$ are the relative permittivity of the two media and
$\theta_1$ and $\theta_2$ are the incident and refracted angle,
respectively. Now the continuity equation in momentum space reads
\begin{equation}
\rho(\omega)=j_x(\omega)k_x/\omega\,,
\label{continuity}
\end{equation}
and Ohm's law is
written as
\begin{equation}
j_x(\omega)=\sigma(\omega)E_x=\sigma(\omega)E_t\cos\theta_2\,.
\label{ohm}
\end{equation}
Combining Eqs. (\ref{BCone}) - (\ref{ohm}),
we arrive at the following result, valid for normal
incidence, for the transmissivity $T$
\begin{equation}
T=\sqrt{\frac{\epsilon_2}{\epsilon_1}}
\frac {4(\epsilon_1\epsilon_0)^2}{|(\sqrt{\epsilon_1\epsilon_2}
+\epsilon_1)\epsilon_0+
\sqrt{\epsilon_1}\sigma(\omega)/c|^2}\,.
\end{equation}
If we now consider both media to be vacuum and the graphene
to be at half filling ($\sigma(\omega)\simeq\sigma_0$), we obtain
\begin{equation}
T=\frac 1 {(1+\pi\alpha/2)^2}\simeq 1-\pi\alpha\,,
\end{equation}
where $\alpha=e^2/(4\pi\epsilon_0c\hbar)$ is the fine structure constant. The reflectivity is also
controlled by the fine structure constant $\alpha$. For normal incidence
it reads
\begin{equation}
R=\frac {|\sqrt{\epsilon_1\epsilon_2}\epsilon_0+\sqrt{\epsilon_1}
\sigma(\omega)/c-\epsilon_1\epsilon_0|^2}{|\sqrt{\epsilon_1\epsilon_2}\epsilon_0+\sqrt{\epsilon_1}
\sigma(\omega)/c+\epsilon_1\epsilon_0|^2}\,,
\end{equation}
and if both media are the vacuum we obtain
\begin{equation}
R=\frac {\pi^2\alpha^2}4T\,.
\end{equation}
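Numerically, with $\alpha\approx 1/137$, these two formulas give the well-known few-percent absorption of free-standing graphene (illustrative sketch, our own naming):

```python
import numpy as np

ALPHA = 1.0 / 137.036  # fine structure constant (approximate value)

# Free-standing graphene at half filling, normal incidence
T = 1.0 / (1.0 + np.pi * ALPHA / 2.0)**2   # transmissivity, close to 1 - pi*alpha
R = np.pi**2 * ALPHA**2 / 4.0 * T          # reflectivity, of order 1e-4
print(T, R, 1.0 - T - R)  # the last number is the absorbed fraction, roughly pi*alpha
```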
In Fig. \ref{transmissivity}, the transmission and reflection
coefficients for normal incidence as a function of frequency at
temperature $T=10$ K are shown, where the first medium is vacuum
($\epsilon_1=1$) and the second medium is either vacuum
($\epsilon_2=1$) or a SiO${}_2$ substrate
($\epsilon_2=\epsilon_\infty=2$, $\epsilon_\infty$ being the
high-frequency dielectric constant of SiO${}_2$). The left-hand side
shows the data for zero doping and the right-hand side for finite
doping $\mu=0.2$ eV. In Appendix \ref{app:transmissivity}, we present
the formulas for arbitrary angle of incidence.
\begin{figure}[t]
\begin{center}
\includegraphics*[angle=0,width=0.8\linewidth]{fig6.eps}
\caption{(color online) The transmissivity and reflectivity for normal incidence as a function
of frequency for $T=10$ K, where the first medium is vacuum ($\epsilon_1=1$) and the second medium is either vacuum ($\epsilon_2=1$) or a SiO${}_2$ substrate ($\epsilon_2=\epsilon_\infty=2$). Left: at zero chemical potential. Right: at finite chemical potential $\mu=0.2$ eV.
\label{transmissivity}}
\end{center}
\end{figure}
It is interesting to compare the result for graphene with that for
bilayer graphene. For the bilayer, the transmissivity is given by
\cite{Falko07}
$$
T = 1-2\pi\alpha f_2(\omega)
$$
with $f_2(\omega)$ given by
\begin{eqnarray}
f_2(\omega)&=&\frac {\hbar\omega+2t_\perp}{2(\hbar\omega+t_\perp)}
+\frac{\theta(\hbar\omega-t_\perp)}{(\hbar\omega/t_\perp)^2}
\nonumber\\
&+&
\frac{(\hbar\omega-2t_\perp)\theta(\hbar\omega-2t_\perp)}{2(\hbar\omega-t_\perp)}\,,
\end{eqnarray}
and $t_\perp$ the hopping amplitude between the graphene planes. For
frequencies much larger than $t_\perp$, which is the case in an
experiment done in the visible region of the spectrum, one obtains
\begin{equation}
f_2(\omega)\simeq 1 - \frac {t_\perp^2}{(\hbar\omega)^2}\simeq 1\,,
\end{equation}
which leads to $T \simeq 1-2\pi\alpha$. Again, as in graphene, the transmissivity is controlled
by the fine structure constant. It is interesting to note that for $\hbar\omega\ll t_\perp$
we also obtain the same result for $T$.
The appearance of the fine structure constant $\alpha$ in the two cases is connected to the spinorial structure of the electronic wave function. In other words, the reduction of the transmissivity through a clean system is caused by a universal current induced by interband transitions.
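The limits $\hbar\omega\gg t_\perp$ and $\hbar\omega\ll t_\perp$ can be checked directly (illustrative sketch; the default $t_\perp=0.4$ eV is an assumed typical value, and the step functions become simple branches):

```python
def f2(hw, tp=0.4):
    """f_2(omega) of the bilayer transmissivity; energies in eV."""
    val = (hw + 2.0 * tp) / (2.0 * (hw + tp))
    if hw > tp:                     # theta(hbar*omega - t_perp)
        val += (tp / hw)**2
    if hw > 2.0 * tp:               # theta(hbar*omega - 2*t_perp)
        val += (hw - 2.0 * tp) / (2.0 * (hw - tp))
    return val

# f2 -> 1 both far above and far below t_perp, so T ~ 1 - 2*pi*alpha in each regime
print(f2(3.0), f2(0.001))
```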
\section{Conclusions}
\label{Concl}
We have presented a detailed study of the optical properties of graphene based on the general, non-interacting tight-binding model. Special emphasis was placed on going beyond the usual Dirac-cone approximation, i.e., we included the cubic term in the density-of-states. The conductivity was thus consistently calculated to order $(\hbar\omega/t)^2$ for arbitrary chemical potential and temperature.
We also assessed the effect of the next nearest neighbor coupling $t'$ on the optical properties. We find that the additional terms to the current operator do not contribute to the conductivity and that modifications only enter through the modified energy dispersion.
Using the full conductivity of clean graphene, we determine the transmissivity and reflectivity of light that is scattered from two media with different permittivity and graphene at the interface. Our results are important for optical experiments in the visible frequency range.\cite{nair} For example, the apparent disagreement between the presented theory for graphene and experiments by Dawlaty {\it et al.}\cite{George} at visible frequencies indicates that the interlayer interaction in epitaxial-SiC graphene is significant and cannot be neglected.
\section*{Acknowledgements}
This work was supported by the ESF Science Programme INSTANS
2005-2010, and by FCT under the grant PTDC/FIS/64404/2006.
0803.0665
\section{Introduction and statement of the main result}
Let $\varphi(M^m,N^n)$ denote the minimal number of critical points
of smooth
maps between the manifolds $M^m$ and $N^n$.
When superscripts
are specified they denote the dimension of the respective manifolds.
We are interested below in the case when $m\geq n\geq 2$ and
the manifolds are compact.
The main problem concerning $\varphi$ is to characterize those
pairs of manifolds for which it is finite non-zero
and then to compute its value (see \cite{AndFun1}).
In \cite{AndFun1} the authors found that,
in small codimension $0\leq m-n-1\leq 3$, if $\varphi(M^m,N^{n+1})$ is finite
then $\varphi(M^m,N^{n+1})\in\{0,1\}$, except for the
exceptional pairs of dimensions
$(m,n+1)\in\{(2,2), (4,3), (4,2), (5,2), (6,3), (8,5)\}$.
Notice that $(5,3)$ was inadvertently included
in \cite{AndFun1} among the
exceptional pairs, but the proof carries over to this case.
Moreover, under the finiteness hypothesis,
$\varphi(M,N)=1$ if and only if $M$ is the connected sum
of a smooth fibration over $N$ with an exotic sphere and not a fibration
itself. There are two essential ingredients in this result. First, there
are local obstructions to the existence of isolated
singularities, namely the germs of smooth maps $\mathbb R^m\to \mathbb R^n$
having an isolated singularity at the origin are actually locally topologically
equivalent to a projection. Thus, these maps are topological fibrations.
Second, singular points located in a disk cluster together.
The simplest exceptional case is that of (pairs of) surfaces,
which is completely understood by elementary means
(see \cite{AF2} for explicit computations).
Very little is known in the other exceptional and generic (i.e. $m-n-1\geq 4$)
cases, and even the case of pairs of spheres is still unsettled.
In particular, it is not known whether $\varphi$ is bounded in terms
only of the dimensions, in general.
The aim of this note is to find non-trivial examples
in dimensions $(4,3)$, $(8,5)$ and $(16,9)$ inspired by the early
work of Antonelli (\cite{Ant1,Ant3}).
The smooth maps considered in \cite{Ant3} are so-called
Montgomery-Samelson fibrations with finitely many singularities where
several fibers are pinched to points. According to \cite{Tim}
these maps should be locally topologically equivalent to a cone
over the Hopf fibration, in a neighborhood of a critical point.
The main ingredient of our approach is the existence of global obstructions
of topological nature to the clustering of genuine critical points in these
dimensions. This situation seems rather exceptional and it permits us
to obtain the precise value of $\varphi$ using only basic
algebraic topology.
Our computations show that $\varphi$
can take arbitrarily large even values. Thus the
behavior of $\varphi$ is qualitatively different from
what was seen before in \cite{AndFun1}.
\begin{theorem}\label{comput}
Let $n\in \{2,4,8\}$, $e\geq c\geq 0$, with $c\neq 1$, and
$\Sigma^{2n}$ be a homotopy $2n$-sphere. If $n=2$ assume further that
$\Sigma^{4}\setminus {\rm int}(D^4)$ embeds smoothly into $S^4$, where
$D^4$ is a smooth 4-disk.
Then
\[\varphi(\Sigma^{2n}\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1},\sharp_{c} S^1\times S^{n})=2e-2c+2\]
Here $\sharp_{c} S^1\times S^{n}=S^{n+1}$ if $c=0$ and $\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1}=S^{2n}$
if $e=c=0$.
\end{theorem}
The structure of the proof of the theorem is as follows.
We prove Proposition \ref{lowbound} which
yields a lower bound for the number of critical values
derived from topological obstructions of algebraic nature.
The existence of a non-trivial lower bound is not obvious
since one might think that several singularities
could combine into a single more complicated singularity.
However, the proof uses only standard techniques of algebraic
topology.
The next step taken in section \ref{fibersum} is to construct
explicit smooth maps with any even
number of singularities. This follows by taking fiber sums of elementary
blocks of maps coming naturally from Hopf fibrations.
This construction is an immediate generalization of the one
considered by Antonelli in the case of two elementary blocks in (\cite{Ant1}, p.185-186).
Then Proposition \ref{fsum} concludes the proof.
\begin{remark}
Observe that $S^1\times S^{2n-1}$ fibers over
$S^1\times S^n$, when $n\in\{2,4,8\}$ so that the formula from Theorem \ref{comput} is still valid for
$\Sigma^{2n}=S^{2n}$, $e=0$ and $c=1$. However,
we do not know how to evaluate $\varphi$ when $e\leq c-1$.
The present methods do not work for $e\geq c=1$ either.
\end{remark}
\vspace{0.2cm}
{\small
{\bf Acknowledgements}. The authors are indebted to
Dennis Sullivan, Yuli Rudyak and Andr\'{a}s Sz\H{u}cs for useful discussions
on this topic, during the M. M. Postnikov Memorial Conference at Bedlewo,
June 2007. L. Funar was partially supported by the
ANR Repsurf: ANR-06-BLAN-0311.
C. Pintea was partially supported by the CNCSIS grant of type
A, 8/1467/2007 and partially by the CNCSIS grant PN II, ID 523/2007.
P. Zhang thanks Jianzhong Pan, Haibao Duan and Kaiming Zhao of Institute of
Mathematics of CAS in Beijing for their hospitality when part of this note
was written in the summer of 2007.
}
\section{A lower bound for the number of critical values}\label{lowbd}
\begin{proposition}\label{lowbound}
For any dimension $n\geq 2$, homotopy $2n$-sphere $\Sigma^{2n}$
and non-negative integers $e$ and $c$,
with $c\neq 1$ we have:
\[\varphi(\Sigma^{2n}\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1},\sharp_{c} S^1\times S^{n})\geq 2e-2c+2\]
Here $\sharp_{c} S^1\times S^{n}=S^{n+1}$ if $c=0$ and $\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1}=S^{2n}$
if $e=c=0$.
\end{proposition}
We will prove, more generally, the following:
\begin{proposition}\label{genca}
Let $M^{2n}$ and $N^{n+1}$ be closed connected orientable
manifolds and $n\geq 2$. Assume that
$\pi_1(M)\cong \pi_1(N)$ is a free group ${\mathbb F}(c)$ on $c$ generators,
$c\neq 1$ (with ${\mathbb F}(0)=0$),
$\pi_j(M)=\pi_j(N)=0$, for $2\leq j \leq n-1$ and $H_{n-1}(M)=0$.
Then $\varphi (M,N)\geq \beta_{n}(M)-2c+2$, where $\beta_k$ denotes the
$k$-th Betti number.
\end{proposition}
\begin{proof}
Let $B=B(f)$ denote the set of critical values of a smooth map
$f:M\to N$. We will prove that the
cardinality $|B|$ of $B(f)$ satisfies
$|B|\geq \beta_{n}(M)-2c+2$, which will imply our claim.
Set $V=f^{-1}(B(f))\subset M$. We can assume that $f$ has finitely many
critical points, since otherwise the claim of Proposition
\ref{genca} would be obviously verified.
The following two Lemmas do not depend on the homotopy assumptions
of Proposition \ref{genca}.
\begin{lemma}\label{finite}
If $A$ is a nonempty finite subset of a connected closed orientable
manifold $N^{n+1}$, then $\beta_{n}(N\setminus
A)=\beta_{n}(N)+|A|-1$.
\end{lemma}
\begin{proof}Clear from the homology exact sequence of the pair $(N,N\setminus A)$.
\end{proof}
\begin{lemma}\label{connect}
If $M^{n+q+1}$ and $N^{n+1}$ are smooth manifolds and
$f:M\rightarrow N$ is a smooth map with finitely many critical
points, then the inclusions $M\setminus
V\hookrightarrow M$ and $N\setminus B\hookrightarrow N$ are $n$-connected.
\end{lemma}
\begin{proof}
This is obvious for $N\setminus
B\hookrightarrow N$. It remains to prove
that $\pi_k(M,M\setminus V)\cong
0$ for $k\leq n$. Take $\alpha:(D^{k},
S^{k-1})\rightarrow (M, M\setminus V)$ to be an arbitrary
smooth map of pairs.
Since the critical set $C(f)$ of $f$ is finite and contained in
$V$, there exists a small homotopy of $\alpha$ relative to
the boundary such that the image $\alpha(D^k)$ avoids $C(f)$.
By compactness there exists a neighborhood $U$ of $C(f)$
consisting of disjoint balls centered at the critical points such that
$\alpha(D^k)\subset M\setminus U$. We can arrange by a small isotopy
that $V$ becomes transversal to $\partial U$.
Observe further that $V\setminus U$ consists of regular
points of $f$ and thus it is a properly embedded sub-manifold of
$M\setminus U$. General transversality arguments show that $\alpha$
can be made transverse to $V\setminus U$ by a small
homotopy. By dimension counting this means
that $\alpha(D^k)\subset M\setminus U$ is disjoint from $V$ and thus
the class of $\alpha$ in $\pi_k(M,M\setminus V)$ vanishes.
\end{proof}
The restriction of $f$ to $M\setminus V$ is a proper submersion
and thus the restriction $f|_{M\setminus V}$ is an open map.
In particular, $f(M\setminus V)\subset N\setminus B$ is an open subset.
On the other hand, the closed map lemma states that a proper map between
locally compact Hausdorff spaces is also closed. Thus $f(M\setminus V)$ is
also closed in $N\setminus B$ and hence $f(M\setminus V)=N\setminus B$.
According to Ehresmann's theorem, the restriction
$f|_{M\setminus V}$ is then a locally trivial smooth fibration over $N\setminus B$ with
compact smooth fiber $F^{n-1}$ (see \cite{Dim}).
\begin{lemma}
Assume that $c\neq 1$. Then the
generic fiber $F$ is homotopy equivalent to the $(n-1)$-sphere.
\end{lemma}
\begin{proof}
When $c=0$ the claim is a simple consequence of the homotopy sequence
of the fibration $M\setminus V\to N\setminus B$.
Let us assume henceforth that $c\geq 2$.
Consider the last terms of the homotopy exact sequence of this fibration:
\[ \to \pi_1(M\setminus V)\stackrel{f_*}{\to} \pi_1(N\setminus B)\stackrel{p}{\to} \pi_0(F)\to \pi_0(M\setminus V)\to \pi_0(N\setminus B)\]
From Lemma \ref{connect} $M\setminus V$ and $N\setminus B$ are connected and
$\pi_1(M\setminus V)\cong \pi_1(N\setminus B)\cong {\mathbb F}(c)$.
If $F$ has $d \geq 2$ connected components then the kernel $\ker p$ of $p$ is a
proper subgroup of index $d$ in the free non-abelian
group ${\mathbb F}(c)$. Thus, by the Nielsen-Schreier theorem,
$\ker p$ is a free group of rank $d(c-1)+1$, which is larger than $c$
since $(d-1)(c-1)\geq 1$ for $d,c\geq 2$.
On the other hand, by exactness of the sequence above, $\ker p$ is also the image of $f_*$ and
thus it is a group of rank at most $c$. This contradiction shows that
$F$ is connected.
If $n=2$ then $F$ is a circle, as claimed.
Let now $n>2$. We obtained above that
$f_*$ is surjective. Since finitely generated free groups are
Hopfian any surjective homomorphism ${\mathbb F}(c)\to {\mathbb F}(c)$
is also injective. Since $\pi_2(N\setminus B)\cong \pi_2(N)=0$
and $f_*$ is injective we derive that $\pi_1(F)=0$.
The remaining terms of the homotopy exact sequence of the fibration
and Lemma \ref{connect} show then that
$\pi_j(F)=0$ for $2\leq j\leq n-2$. Thus $F$ is a homotopy sphere.
\end{proof}
\begin{lemma}\label{surj}
Suppose that $B\neq\emptyset$.
\begin{enumerate}
\item We have
$H_1(N\setminus B)\cong \mathbb Z^c$, $H_{n}(N\setminus B)=\mathbb Z^{|B|+c-1}$ and $H_{n+1}(N\setminus B)=0$.
\item If $n > 2$ then $H_{n-1}(M\setminus V)=0$.
\item The homomorphism $H_n(M\setminus V)\to H_n(M)$ induced by the
inclusion map is surjective.
\end{enumerate}
\end{lemma}
\begin{proof}
The first two assertions are consequences of Lemma \ref{finite},
Lemma \ref{connect} and standard algebraic topology.
For instance, $H_1(N\setminus B)\cong H_1(N)=\mathbb Z^c$.
The last claim follows from Lemma \ref{connect} and the long exact
sequence in homology of the pair $(M,M\setminus V)$.
\end{proof}
\begin{lemma}\label{rank}
If $B\neq \emptyset$ and $c\neq 1$ then the rank of $H_{n}(M\setminus V)$ is $2c+|B|-2$.
\end{lemma}
\begin{proof}
The Gysin sequence of the fibration $M\setminus V\to N\setminus B$ (whose fiber is
a homotopy sphere) reads:
\[ \to H_m(M\setminus V)\to H_m(N\setminus B)\to H_{m-n}(N\setminus B)
\to H_{m-1}(M\setminus V)\to \]
Consider the exact subsequence
\[ H_{n+1}(N\setminus B)\to H_1(N\setminus B)\to H_{n}(M\setminus V)
\to H_n(N\setminus B)\to H_0(N\setminus B)\to H_{n-1}(M\setminus V)\]
If $n>2$ then the first and the last terms vanish.
The Euler characteristic of this subsequence is zero by exactness, and thus
the rank of $H_{n}(M\setminus V)$ is $c+(|B|+c-1)-1=2c+|B|-2$ by Lemma \ref{surj}.
When $n=2$, we can complete the exact sequence above by
adding one more term to its right, namely
$H_{1}(M\setminus V)\stackrel{f_*}{\to} H_{1}(N\setminus B)$.
However, $f_*$ is actually the map induced in homology by the isomorphism
$f_*:\pi_1(M)\to \pi_1(N)$ and thus an isomorphism itself.
The argument with the Euler characteristic can be applied again and
yields the claimed result.
\end{proof}
From Lemma \ref{rank} and Lemma \ref{surj} (3) we derive that
\[ 2c+|B|-2 \geq \beta_{n}(M)\]
and the proposition is proved.
\end{proof}
\begin{corollary}
If $M^{2n}$ is a smooth $(n-1)$-connected
closed manifold, then
$$\varphi(M,\Sigma^{n+1})\geq \beta_{n}(M)+2,$$
where $\Sigma^{n+1}$ is a homotopy sphere.
\end{corollary}
\begin{remark}
The present approach does not work for $c=1$. In fact, fibers
might have several connected components, each one being
a homotopy sphere. In the absence of an upper bound of the number of components
the Leray-Serre spectral sequence leads only to a trivial lower bound for the
number of critical values.
\end{remark}
\section{Fiber sums of suspensions of Hopf fibrations}\label{fibersum}
\begin{proposition}\label{fsum}
Let $n\in \{2,4,8\}$, $e\geq c\geq 0$, with $c\neq 1$, and
$\Sigma^{2n}$ be a homotopy $2n$-sphere. If $n=2$ assume further that
$\Sigma^{4}\setminus {\rm int}(D^4)$ embeds smoothly into $S^4$, where
$D^4$ is a smooth 4-disk.
Then
\[\varphi(\Sigma^{2n}\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1},\sharp_{c} S^1\times S^{n})\leq 2e-2c+2\]
\end{proposition}
\begin{proof}
Recall from \cite{AndFun1} that
$\varphi(S^{2n},S^{n+1})=2$ if $n=2,4$ or $8$. This is realized
by taking suspensions of both spaces in the Hopf fibration
$h:S^{2n-1}\rightarrow S^n$, where $n=2,4$ or $8$, and then
smoothing the new map at both ends. The extension
$H:S^{2n}\rightarrow S^{n+1}$ has precisely two critical
points. This is also the basic example of a Montgomery-Samelson
fibration with finitely many singularities, as considered in
\cite{Ant3}. Antonelli has considered in \cite{Ant1} manifolds
which admit maps with two critical points into spheres, by gluing together
two copies of $H$.
Our aim is to define fiber sums of Hopf fibrations
leading to other examples of pairs of manifolds with finite $\varphi$
using Antonelli's construction for more general gluing patterns.
Identify $S^{n+1}$ (and respectively $S^{2n}$) with the suspension
of $S^n$ (respectively $S^{2n-1}$) and thus equip it with the coordinates
$(x,t)$, where $|x|^2+t^2=1$, and $t\in [-1,1]$. We call the coordinate $t$
the height of the respective point.
The suspension $H$ is then given by:
\[H(x,t)=\left(\psi(|x|)h\left(\frac{x}{|x|}\right), t\right)\]
where $\psi:[0,1]\to [0,1]$ is a smooth increasing function infinitely flat at $0$ such that
$\psi(0)=0$ and $\psi(1)=1$.
Pick a number of points $x_1,x_2,\ldots, x_k\in S^{n+1}$ and
their small enough disk neighborhoods $x_i\in D_i\subset S^{n+1}$,
such that:
\begin{enumerate}
\item the projections of $D_i$ on the height coordinate axis are disjoint;
\item the $D_i$'s do not contain the two poles, i.e. their projections on the
height axis are contained in the open interval $(-1,1)$.
\end{enumerate}
Let $A_k$ be the manifold with boundary obtained by deleting from
$S^{n+1}$ the interiors of the disks $D_i$, for $1\leq i\leq k$.
Let also $B_k$ denote the preimage $H^{-1}(A_k)\subset S^{2n}$ by the
suspended Hopf map. Since $H$ restricts to a trivial fibration
over the disks $D_i$ it follows that $B_k$ is a manifold, each
one of its boundary components being diffeomorphic to $S^{n-1}\times S^n$.
Moreover, the boundary components are endowed with a natural
trivialization induced from $D_i$.
Let now $\Gamma$ be a finite connected graph.
To each vertex $v$ of valence $k$ we associate
a block $(B_v,A_v, H|_{B_v})$, which will be denoted $(B_k,A_k, H|_{B_k})$,
when we want to emphasize the dependence on the number of boundary components.
Each boundary component of $A_v$ or $B_v$ corresponds to an edge incident
to the vertex $v$.
We define the fiber sum along $\Gamma$
as the following triple $(B_{\Gamma}, A_{\Gamma}, H_{\Gamma})$:
\begin{enumerate}
\item $A_{\Gamma}$ is the result of gluing the manifolds with boundary
$A_v$, associated to the vertices $v$ of $\Gamma$, by identifying, for each
edge $e$ joining the vertices $v$ and $w$ (which might coincide)
the pair of boundary components in $A_v$ and $A_w$
corresponding to the edge $e$. The identification is made by using an orientation-reversing
diffeomorphism of the boundary spheres.
\item $B_{\Gamma}$ is the result of gluing the manifolds with boundary
$B_v$, associated to the vertices $v$ of $\Gamma$, by identifying, for each
edge $e$ joining the vertices $v$ and $w$ (which might coincide)
the boundary components in $B_v$ and $B_w$ corresponding to the pair
of boundary components in $A_{\Gamma}$ associated to $e$. Gluings in $B_{\Gamma}$ are realized by
some orientation-reversing diffeomorphisms which respect the product structure over
boundaries of $A_{v}$ and $A_{w}$.
\item As the boundary components are identified
the natural trivializations of the boundary components
of $B_v$ agree in pairs. Thus the maps $H_v$ induce a well-defined map
$H_{\Gamma}:B_{\Gamma}\to A_{\Gamma}$.
\end{enumerate}
In the case where the graph $\Gamma$ consists of two vertices joined by an edge
this construction is essentially that given in (\cite{Ant1}, p.185-186).
\begin{proposition}
The map $H_{\Gamma}:B_{\Gamma}\to A_{\Gamma}$ has $2m$ critical points, where
$m$ is the number of vertices of $\Gamma$.
\end{proposition}
\begin{proof} Clear, by construction.
\end{proof}
We say that $\Gamma$ has $c$ independent cycles if the rank of
$H_1(\Gamma)$ is $c$. This is equivalent to asking that $\Gamma$
becomes a tree only after the removal of at least $c$ edges.
Moreover, $c=e-m+1$, where $e$ denotes the number of edges and $m$ the number of vertices.
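For illustration (a hypothetical helper, not part of the paper), the number of independent cycles of a connected multigraph can be computed as the number of edges that close a cycle while greedily building a spanning tree, which agrees with the formula $c=e-m+1$:

```python
def cycle_rank(num_vertices, edges):
    # Number of independent cycles of a connected multigraph:
    # count the edges that close a cycle while greedily building
    # a spanning tree with union-find; this equals c = e - m + 1.
    parent = list(range(num_vertices))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    extra = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            extra += 1  # this edge creates an independent cycle
        else:
            parent[ru] = rv
    return extra

# triangle with a pendant vertex: e = 4, m = 4, so c = e - m + 1 = 1
edges = [(0, 1), (1, 2), (2, 0), (0, 3)]
assert cycle_rank(4, edges) == len(edges) - 4 + 1 == 1
```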
\begin{proposition}
If $\Gamma$ has $e$ edges and $c$ cycles, i.e. $e-c+1$ vertices,
then for a suitable choice of the gluing diffeomorphisms data
$B_{\Gamma}$ is diffeomorphic to $\Sigma^{2n}\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1}$ (where $\Sigma^{2n}$ is a
homotopy sphere, which is trivial when $n=2$), while
$A_{\Gamma}$ is diffeomorphic to
$\sharp_{c} S^1\times S^{n}$. Here $\sharp_{c} S^1\times S^{n}$ stands for $S^{n+1}$ when $c=0$.
\end{proposition}
\begin{proof}
The sub-blocks $A_k$ are diffeomorphic to the connected sum of $k$ copies
of disks $D^{n+1}$
out of their boundaries. When gluing together two such distinct sub-blocks
(since there is an edge in $\Gamma$ joining the corresponding vertices)
the respective pair of disks leads to a factor $D^{n+1}\cup_{\mu} D^{n+1}$, where
$\mu:S^n\to S^n$ is the identification map. If $\mu$ is a reflection
then the factor $D^{n+1}\cup_{\mu} D^{n+1}$ is the double of $D^{n+1}$ and
hence diffeomorphic to $S^{n+1}$.
When gluing all sub-blocks in the pattern of the graph $\Gamma$
the only non-trivial contribution comes from the cycles. Each cycle
of $\Gamma$ introduces a 1-handle. Thus the manifold $A_{\Gamma}$
is diffeomorphic to $\sharp_{c} S^1\times S^{n}$.
Further we have a similar result for the sub-blocks $B_k$:
\begin{lemma}
The sub-blocks $B_k$ are diffeomorphic to the connected sum of
$k$ copies of the product $S^{n}\times D^{n}$ out of their boundaries.
\end{lemma}
\begin{proof}
One obtains $B_k$ from $S^{2n}$ by deleting $k$ copies of $H^{-1}(D_i)$; each $H^{-1}(D_i)$ is
a tubular neighborhood of a (generic) fiber of $H$ and thus
diffeomorphic to $S^{n-1}\times D^{n+1}$.
When $k=1$ the generic fiber of $H$ is an $S^{n-1}$ embedded in $S^{2n}$, namely
the image of the fiber of the Hopf fibration in the
suspension sphere $S^{2n}$. The generic fiber is unknotted in $S^{2n}$, as an immediate
consequence of Haefliger's classification of smooth embeddings. In fact, according to \cite{Hae3}, any
smooth embedding of $S^{k}$ in $S^{m}$ is unknotted,
i.e. isotopic to the boundary of a standard ball, if the dimensions satisfy the meta-stable range
condition $k<\frac{2}{3}m-1$.
This implies that the complement of a regular neighborhood of the fiber is
diffeomorphic to the complement of a standard sphere and thus to
$S^{n}\times D^{n}$.
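As a quick arithmetic check (purely illustrative), the metastable-range condition $k<\frac{2}{3}m-1$ indeed holds for the fiber $S^{n-1}\subset S^{2n}$ for every $n\geq 2$, in particular for $n\in\{2,4,8\}$:

```python
def in_metastable_range(k, m):
    # Haefliger: a smooth embedding of S^k in S^m is unknotted when
    # k < (2/3) m - 1, i.e. 3 (k + 1) < 2 m (kept in integers).
    return 3 * (k + 1) < 2 * m

# the generic fiber is S^{n-1} inside S^{2n}; the condition holds
# for every n >= 2, in particular for n in {2, 4, 8}
for n in (2, 4, 8):
    assert in_metastable_range(n - 1, 2 * n)
```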
When $k\geq 2$ we remark that the fibers over the points $x_i\in D_i$ lie at
different heights and thus they are contained in disjoint slice
spheres of the suspension $S^{2n}$. This implies that these fibers
are unlinked, i.e. isotopic to the boundary of a set of disjoint standard
balls. Thus the complement of a regular neighborhood of
their union is diffeomorphic to the connected sum of their
individual complements, and therefore to the connected sum of
$k$ copies of the product
$S^{n}\times D^{n}$ out of their boundaries.
\end{proof}
Let us restrict for the moment to the case $k=1$, where we have two diffeomorphic sub-blocks
$B_v$ and $B_w$, each one having one boundary component, to be glued together.
We choose the identification diffeomorphism $\nu:\partial B_v\to \partial B_w$
to be the one from the construction of the double of $B_v$. Observe that
the maps $B_v\to A_v$ and $B_w\to A_w$ glue together to form a well-defined smooth map
$B_v\cup_{\nu}B_w\to A_v\cup_{\mu}A_w$, as already noticed in (\cite{Ant1}, p.185).
\begin{lemma}
The factor $B_v\cup_{\nu} B_w$ is diffeomorphic to $\Sigma^{2n}\sharp S^{n}\times S^{n}$, where
$\Sigma^4=S^4$.
\end{lemma}
\begin{proof}
Consider first the case $n=2$, which is the most interesting one since the result cannot follow from
general classification results. The sub-block $D^2\times S^2$ can be easily
described by a Kirby diagram (see \cite{Go}, chapter 4), which encodes its handlebody structure.
As $D^2\times S^2$ is obtained from $D^4$ by throwing away the regular
neighborhood of an unknotted circle (i.e. a $1$-handle) it can be described as the result of attaching
the dual 2-handle on an unknotted circle with framing $0$. There is also a dual
handlebody decomposition of $D^2\times S^2$ in which each $j$-handle corresponds to a $(4-j)$-handle.
The double of $D^2\times S^2$ is then described by putting together the two
handlebody descriptions (the usual one and the dual one) and thus is made of $D^4$ with
two 2-handles and finally a 4-handle capping off the boundary component.
Attaching maps of $4$-handles are orientation-preserving diffeomorphisms of $S^3$, and by a classical
result of Cerf these are isotopic to the identity. Thus there exists a unique way to attach a 4-handle
to a 4-manifold with boundary $S^3$. By the way, recall that a theorem of Laudenbach and
Poenaru (\cite{LP}) shows that there is only one way up to global diffeomorphism
to attach $3$-handles and $4$-handles to a $4$-manifold with boundary $\sharp_k S^1\times S^2$
in order to obtain a closed manifold.
Now it is easy to see that the new 2-handle (in the handlebody structure of the double
of $D^2\times S^2$) is attached along a meridian circle of the former 2-handle with
$0$ framing. Thus a Kirby diagram of the double of $D^2\times S^2$ consists of a Hopf link
with both components having framing $0$, and it is well-known that this diagram is
also that of $S^2\times S^2$. See also (\cite{Go}, Example 4.6.3) for more details.
This argument applies as well for $n\geq 3$. We have a handle decomposition of
$D^n\times S^n$ as $D^{2n}$ with one $n$-handle attached. The set of framings on
a sphere $S^{n-1}$ in $\partial D^{2n}$ is acted upon freely and transitively by
$\pi_{n-1}(O(n))$. Moreover $\pi_3(O(4))\cong \pi_7(O(8))\cong\mathbb Z\oplus \mathbb Z$ (see \cite{Mi}).
Then the $n$-handle is attached on an unknotted $(n-1)$-sphere with trivial framing,
i.e. the $(0,0)$-framing. Observe that this is the canonical framing associated to
the identity attaching map ${\rm id}_{S^{n-1}\times D^n}$ (see e.g. \cite{Go} Example 4.1.4.(d)).
Further the double of $D^n\times S^n$ is obtained by putting together
the usual handlebody and its dual. As above
we can describe the double as the result of attaching two $n$-handles and one
$2n$-handle. The dual $n$-handle is attached on a meridian $(n-1)$-sphere which links
once the former attaching $(n-1)$-sphere and has trivial framing.
The union of the two spheres is the analogue of the Hopf link in $S^{2n-1}=\partial D^{2n}$.
As it is well-known $S^n\times S^n$ can also be obtained by adding two $n$-handles
along this high-dimensional trivially-framed Hopf link and a $2n$-handle.
The only difference between the cases $n>2$ and $n=2$
is that the result of attaching a $2n$-handle for $n>2$
is not unique, as there might exist diffeomorphisms of $S^{2n-1}$ which are not isotopic to the identity.
However, detaching and then reattaching a $2n$-handle with a reflection diffeomorphism
as gluing map will create an exotic sphere (for $n\geq 4$) and thus the
double is diffeomorphic to $\Sigma^{2n}\sharp S^n\times S^n$ for some homotopy sphere
$\Sigma^{2n}$.
\end{proof}
When gluing all sub-blocks in the pattern of the graph $\Gamma$ such that each identification map
is $\nu$ then each pair of sub-blocks determines a factor $\Sigma^{2n}\sharp S^n\times S^n$.
If there are no cycles in $\Gamma$ then we obtain a connected sum of such factors, namely
$\Sigma^{2n}\sharp_e S^n\times S^n$. Finally,
the only additional non-trivial contribution comes from the cycles. Each cycle
of $\Gamma$ introduces an extra 1-handle.
Thus the manifold $B_{\Gamma}$ is diffeomorphic to $\Sigma^{2n}\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1}$.
\end{proof}
In order to prove Proposition \ref{fsum} it suffices now to show that one can attach a homotopy sphere
to the manifolds $B_{\Gamma}$ and still have the same number of critical points.
This can be realized by removing a small disk centered at
a critical point and gluing it back differently, when $n\neq 2$,
and respectively gluing back a homotopy 4-disk, when $n=2$.
We consider only those homotopy 4-disks which embed smoothly into $S^4$.
In this way, using the theorem of Huebsch and Morse for $n=2$
(see \cite{HM} and also \cite{AndFun1} where this argument is carried out in detail) we obtain
a smooth map with the same (non-zero) number of critical points, namely $2e-2c+2$.
Since the homotopy spheres form a finite abelian group under the connected sum
one can obtain this way all manifolds of the form $\Sigma^{2n}\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1}$.
\end{proof}
\begin{remark}
Recall that the group $\Theta^k$ of homotopy $k$-spheres satisfies
$\Theta^8=\Theta^{16}=\mathbb Z/2\mathbb Z$.
\end{remark}
\begin{remark}
By twisting $\mu$ by a diffeomorphism of $S^n$ which is not isotopic to the identity
(e.g. when $n=8$) one could obtain exotic sphere factors in $A_{\Gamma}$.
More interesting examples correspond to twisting $\nu$ by some orientation-preserving
diffeomorphism $\eta:S^{n-1}\times S^n\to S^{n-1}\times S^n$
which still respects the product structure.
For instance we can consider some $\eta$ induced from a map
$S^{n-1}\to SO(n+1)$ whose homotopy class is an element of $\pi_{n-1}(SO(n+1))$.
It seems that all examples obtained by twisting are still diffeomorphic to $\Sigma^{2n}\sharp_{e}S^n\times S^n\sharp_{c} S^1\times S^{2n-1}$.
\end{remark}
\section{Examples with $\varphi=1$}
The result of \cite{AndFun1} shows that
if $\varphi(M^m,N^{n+1})$ is finite and non-zero
(in small codimension and outside the exceptional dimensions) then
$\varphi(M^m,N^{n+1})=1$ and $M^m$ should be diffeomorphic
to $\Sigma^m\sharp \widehat{N}$, where $\Sigma^m$ is an exotic sphere and
$\widehat{N}$ is the total space of a smooth fibration, such that $M^m$ is not fibered over $N$.
Actually this construction might produce
non-trivial examples in any codimension.
\begin{proposition}
If $\Sigma^m$ is an exotic sphere (for $m=4$ we assume that $\Sigma^4\setminus {\rm int}(D^4)$
embeds smoothly in $S^4$) and $\widehat{N}\to N$ a smooth fibration then
$\varphi(\Sigma^m\sharp \widehat{N}, N)\in\{0,1\}$.
\end{proposition}
\begin{proof}
We obtain $\Sigma^m\sharp \widehat{N}$ from $\widehat{N}$ by excising
a ball $D^{m}$ and gluing it (or a homotopy 4-disk when $m=4$)
back by means of a suitable diffeomorphism
$h$ of its boundary.
By a classical result of Huebsch and Morse (\cite{HM}), there exists
a smooth homeomorphism $\Sigma^m\sharp \widehat{N}\to \widehat{N}$
which has only one critical point, located in the ball
$D^{m}$. This provides a smooth map $\Sigma^m\sharp \widehat{N}\to N$
with one critical point.
\end{proof}
\begin{remark}
Notice however that $\Sigma^m\sharp \widehat{N}$
might still be fibered over $N$, although not diffeomorphic
to $\widehat{N}$. This is so when
$\widehat{N}\to N$ is the Hopf fibration $S^7\to S^4$ and
$\Sigma^7\sharp \widehat{N}$ is a Milnor exotic sphere, namely
an $S^3$-fibration over $S^4$ with Euler class $\pm 1$.
\end{remark}
\begin{remark}
The manifold $M^m=\Sigma^m\sharp S^{m-n-1}\times S^{n+1}$ is not diffeomorphic
to $S^{m-n-1}\times S^{n+1}$ if $\Sigma^m$ is an exotic sphere (see
\cite{Sch}). Thus, the proposition above yields effective examples
where $\varphi=1$.
If $\Sigma^8$ is the exotic 8-sphere
which generates the group $\Theta^8=\mathbb Z/2\mathbb Z$ then
$\varphi(\Sigma^8\sharp S^3\times S^5,S^5)=1$.
In fact $M^8=\Sigma^8\sharp S^3\times S^5$
is homeomorphic but not diffeomorphic to $S^3\times S^5$.
Assume the contrary, namely that $M^8$ smoothly fibers over $S^5$.
Then the fiber should be a homotopy 3-sphere and hence $S^3$, by the
Poincar\'e Conjecture. The $S^3$-fibrations
over $S^5$ are classified by the elements of $\pi_4(SO(4))\cong \mathbb Z/2\mathbb Z\oplus \mathbb Z/2\mathbb Z$.
There exist precisely two homotopy types among the $S^3$-fibrations over $S^5$
which admit cross-sections (see \cite{JW1}, p.217). If $M^8$ is an $S^3$-fibration
then it should have a cross-section because it is homotopy equivalent to $S^3\times S^5$ and
the existence of a cross-section is a homotopy invariant
(see \cite{JW1}, p.196, \cite{JW}, p.164). However the two homotopy types correspond
to two distinct isomorphism types as spheres bundles. In fact they are classified by
the image of $\pi_4(SO(3))\cong \mathbb Z/2\mathbb Z$ into $\pi_4(SO(4))$.
This means that an $S^3$-fibration having a cross-section
is either homotopy equivalent to the trivial fibration, and then it is isomorphic to the
trivial fibration, or else it does not have the same homotopy type as $S^3\times S^5$.
Observe also that there is only one $O(4)$-equivalence class and thus precisely
two isomorphism classes of such $S^3$-fibrations without cross-sections (\cite{JW}, p.164).
In particular, non-trivial $S^3$-fibrations over $S^5$ cannot be homeomorphic to
$M^8$ and this contradiction shows that $M^8$ cannot smoothly fiber over $S^5$.
\end{remark}
\bibliographystyle{amsplain}
2202.01436
\section{Introduction}\label{sec:Overview}
Given a smooth convex body $K \subset \mathbb{R}^n$, a normal at a point $p\in \partial K$ is the line passing through $p$ and orthogonal to $\partial K$ at the point $p$.
It is conjectured that for any
convex body $K \subset \mathbb{R}^n$ there exists a point in
the interior of $K$ which is the intersection point of at least $2n$ normals from different points on the
boundary of $K$.
The concurrent normals conjecture trivially holds for $n=2$. For $n=3$ it was proven by Heil \cite{H1}, \cite{H2} via geometric methods and reproved by Pardon via topological methods.
The case $n=4$ was also settled by Pardon \cite{P}.
Recently Martinez-Maure proved for $n=3,4$ that (under some mild conditions) almost every normal
through a boundary point to a smooth convex body $K$
passes arbitrarily close to the set
of points lying on normals through at least six distinct
points of $\partial K$ \cite{M-M}.
He used Minkowski differences of smooth convex bodies, that is, the \textit{theory of hedgehogs}.
The present paper is very much motivated by \cite{M-M}. We give an alternative short proof of almost the same fact for all $n\geq 3$, see Theorem \ref{ThmMain}. Our proof is based on bifurcation theory and does not use hedgehogs.
\bigskip
\textbf{Acknowledgments.} G. Panina is supported by RFBR grant 20-01-00070A; A. Grebennikov is supported by Ministry of Science and Higher Education of the Russian Federation, agreement 075-15-2019-1619.
\newpage
\section{Some preliminaries and the main result}\label{prelim}
Let $n \ge 3$, and let $K$ be a strictly convex $C^{\infty}$-smooth compact body in $\R^n$. For a point $x \in \partial K$ we denote by $\mathcal{N}(x)$ the normal line to $\partial K$ at the point $x$.
Following \cite{M-M}, we make use of the support function.\footnote{Equivalently, one can work with the squared distance function to the boundary.}
Let $h: S^{n-1} \to \R$ be the support function of $K$. For $y \in \mathbb{R}^n$ define $$h_y: S^{n-1} \to \R,$$
$$
h_y(v) = h(v) - \langle v, y\rangle.
$$
The function $h_y$ equals the support function of $K$ after a translation which takes the origin $O$ to the point $y$.
Given a point $y$, all the normals passing through $y$ can be read off the function $h_y$:
\begin{lemma}\label{Lemma1}\cite{M-M}
A point $y$ lies on the normal $\mathcal{N}(x)$ for some $x\in \partial K$
iff
$u(x)$ is a critical point of the function $h_y$.
Here $u(x)\in S^{n-1}$ is the outer unit normal to $\partial K$ at the point $x$.
\qed
\end{lemma}
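A numerical illustration of the lemma (in the planar case, with a hypothetical ellipse; the lemma itself is stated for general $n$): for a point $y$ chosen on the normal line at a boundary point $x$, the derivative of $h_y$ vanishes at the corresponding direction $u(x)$:

```python
import math

a, b = 3.0, 2.0  # semi-axes of a hypothetical ellipse K

def h(theta):
    # support function of {x^2/a^2 + y^2/b^2 <= 1} in direction (cos t, sin t)
    return math.sqrt((a * math.cos(theta)) ** 2 + (b * math.sin(theta)) ** 2)

def boundary_point(theta):
    # the boundary point whose outer unit normal is (cos t, sin t)
    ht = h(theta)
    return (a * a * math.cos(theta) / ht, b * b * math.sin(theta) / ht)

def h_y(theta, y):
    # support function after translating the origin to y
    return h(theta) - (y[0] * math.cos(theta) + y[1] * math.sin(theta))

theta0 = 0.7
x = boundary_point(theta0)
s = 1.3  # distance of y from x along the inward normal
y = (x[0] - s * math.cos(theta0), x[1] - s * math.sin(theta0))

# y lies on the normal N(x), so theta0 must be a critical point of h_y;
# check with a central finite difference
eps = 1e-6
deriv = (h_y(theta0 + eps, y) - h_y(theta0 - eps, y)) / (2 * eps)
assert abs(deriv) < 1e-6
```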
By Morse-lemma-type arguments (\cite{Morse}, Lemma A), $h_y$ is a Morse function for almost all $y$.
Its bifurcation diagram \cite{A}, \cite{M-M} is given by the \textit{focal surface} $\mathcal{F}_K$ of the body $K$.
\begin{enumerate}
\item The focal surface
equals the locus of the centers of principal curvatures of $\partial K$. Thus it has $n-1$ \textit{sheets}. Sheet number $k$ corresponds
to the $k$-th principal curvature, assuming that the curvature radii are enumerated in ascending order:
$r_1\leq r_2 \leq\ldots\leq r_{n-1}$. Each sheet is an image of $S^{n-1}$. The sheets intersect each other and may have self-intersections and singularities.
\item The focal surface cuts the ambient space $\R^n$ into \textit{cameras} (that is, connected components of the complement of $\mathcal{F}_K$).
The type of the associated Morse functions $h_y$ depends only on the camera containing $y$.
\item Transversal crossing of exactly one of the sheets (say, the sheet number $k$) of the focal surface at its smooth point amounts to a birth (or death) of two critical points of $h_y$ whose indices are $k$ and $k-1$.
\end{enumerate}
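The dependence of the number of critical points of $h_y$ on the camera containing $y$ can be illustrated numerically in the planar case (a hypothetical ellipse; its focal set is the classical evolute): the center lies inside the evolute and receives four normals, while a far-away point receives only two:

```python
import math

a, b = 3.0, 2.0  # semi-axes of a hypothetical ellipse

def h(theta):
    return math.sqrt((a * math.cos(theta)) ** 2 + (b * math.sin(theta)) ** 2)

def num_critical_points(y, samples=20000):
    # count sign changes over one period of d/dtheta [h(theta) - <v(theta), y>],
    # i.e. the number of critical points of h_y = number of normals through y
    def f(t):
        return h(t) - (y[0] * math.cos(t) + y[1] * math.sin(t))

    def d(t, eps=1e-5):
        return (f(t + eps) - f(t - eps)) / (2 * eps)

    t0 = 0.123  # start away from the symmetry axes
    count = 0
    prev = d(t0)
    for i in range(1, samples + 1):
        cur = d(t0 + 2 * math.pi * i / samples)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

# the center lies inside the evolute: four normals pass through it
assert num_critical_points((0.0, 0.0)) == 4
# a far-away point lies outside the evolute: only two normals
assert num_critical_points((10.0, 0.0)) == 2
```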
\iffalse
\begin{proposition}\label{non-sing}
(See \cite{M-M}, Proposition 3)
For every smooth $K$ and every $u\in S^{n-1}$,
the normal to $\partial K$ at the point with the outer normal vector $u$ crosses the
singular locus of the sheet $F^k$
iff one of the following two conditions
holds:
\begin{enumerate}
\item $R_k$ is a multiple eigenvalue of the second fundamental form of $\partial K$.
\item $R_k$ is not a multiple eigenvalue, and $$\frac{\partial R_k}{\partial v_k}=0,$$ where $v_k$ is the eigenvector of the second fundamental form that corresponds to $R_k$.\qed
\end{enumerate}
\end{proposition}
\fi
\iffalse
Motivated by the above fact, we define:
\begin{definition}
We say that a convex body $K$ is \textit{generic} if:\begin{enumerate}
\item $K$ is strictly convex and $C^{\infty}$-smooth.
\item The set of points satisfying one of the conditions 1 and 2 from Proposition \ref{non-sing}
has a zero measure.
\end{enumerate}
\end{definition}
\fi
\begin{theorem}\label{ThmMain}
Let $n\geq3$, and let $K\subset \mathbb{R}^n$ be a $C^{\infty}$-smooth convex body, $x \in \partial K$. If the normal line $\mathcal{N}(x)$ does not intersect the singular locus of the focal surface $\mathcal{F}_K$
then $\mathcal{N}(x)$
contains a point $z$ such that:
\begin{enumerate}
\item $z$ is an intersection point of at least $6$ normals from different points on the
boundary of $K$.
\item The distance $|xz|$ satisfies $$r_1(x)<|xz|<r_{n-1}(x),$$ where $r_1(x)$ and $r_{n-1}(x)$ are the smallest and the largest principal curvature radii at the point $x$.
\end{enumerate}
\end{theorem}
\section{Proof of Theorem \ref{ThmMain}}\label{proofmain}
\iffalse
\begin{proposition} \textbf{DO WE NEED IT?}
Denote $U_t = h_{x - t_1u}^{-1}([0, t_1))$. Let $t_1 > t_2 > 0$, then $U_{t_1} \supset U_{t_2}$.
\end{proposition}
\begin{proof}
Indeed, suppose $h_{x - t_1u}(v) < t_1$. Then
$$
h_{x - t_2u}(v) = h_{x - t_1u}(v) + (t_2 - t_1)\langle u, v\rangle \le h_{x - t_1u}(v) + (t_2 - t_1) < t_1 + (t_2 - t_1) = t_2.
$$
\end{proof}
\fi
\begin{definition}
A one-parameter family $f_t \in C^{\infty}(S^{n-1}, \R)$, $t \in \R_{+}$, is \textit{nice} if the following properties hold:
\begin{enumerate}
\item $f_t$ depends smoothly on $t$;
\item $f_t$ is a Morse function for each $t$ except for finitely many \textit{bifurcation points} $t_1, \ldots, t_m$;
\item each of the bifurcation points is of one of the two types:
\begin{enumerate}
\item A \textit{birth/death point}. Two critical points with neighboring indices $k$ and $k-1$ collide and disappear, or, vice versa, two critical points with neighboring indices appear.
\item An \textit{index exchange point}. Two critical points with neighboring indices $k$ and $k-1$ collide
at $t=t_i$. There arises a degenerate critical point which afterwards splits into two critical points with the same indices $k$ and $k-1$.
\end{enumerate}
\end{enumerate}
\end{definition}
For a nice family, denote $T = \R_{+} \setminus \{ t_1, \ldots, t_m\}$. For each $t \in T$ and $0 \le k \le n-1$ let $C_{t, k}$ be the number of critical points of $f_t$ of index $k$, and let $N_t$ be the total number of critical points of the function $f_t$.
\iffalse The following holds for any {nice} family:
\begin{itemize}
\item $C_{t, k} = C_{t', k}$ if there are no bifurcation points between $t$ and $t'$;
\item passing a bifurcation point either keeps all the $C_{t, k}$ or changes by $\pm 2$ the value of $N_t$.
\iffalse the only possible change of values $C_{t, k}$ is the following: there exists $0 \le k_0 \le n-2$, $d \in \{ -1, 0, 1\}$ such that
$$
C_{t+\eps, k} = \begin{cases} C_{t-\eps, k} + d & \text{when $k \in \{k_0, k_0+1\},$} \\ C_{t-\eps, k} & \text{otherwise;} \end{cases}
$$
\fi
\end{itemize}
\fi
\begin{lemma}
\label{discrete-continuity}
Let $f_t$ be a {nice} family of functions. Assume also that $C_{t_1, 0} \ge 2$, $C_{t_2, n-1} \ge 2$ and
$$
\sum_{k = 1}^{n-2} C_{t, k} \ge 1 \text{ for all } t \in T \cap [t_1, t_2].
$$
Then there exists $t \in T \cap [t_1, t_2]$ such that $N_t \ge 6$.
\end{lemma}
\begin{proof} Assume the contrary, that is, that for every $t$ we have $N_t < 6$.
By the assumption of the lemma, $f_t$ has critical points other than the maximum and the minimum, so $N_t$ is at least $3$ for all $t$. Besides, $N_t$ is even, so $N_t$ necessarily equals $4$ for all $t \in T \cap[t_1,t_2]$.
Therefore there are no birth/death bifurcations on $[t_1, t_2]$. Together with the conditions on $C_{t_1, 0}$ and $C_{t_2, n-1}$, this implies that $f_t:S^{n-1} \rightarrow \mathbb{R}$ has two maxima and two minima for any $t\in T\cap [t_1, t_2]$ and no other critical points. A contradiction.
\end{proof}
Now we are ready to prove Theorem \ref{ThmMain}. Assume that the point $x$ is such that the normal $\mathcal{N}(x)$ does not meet the singularity locus of the focal surface. Denote by $u=u(x)$ the outer unit normal to $\partial K$ at the point $x$.
Then the principal curvature radii $r_1 < r_2 < \ldots < r_{n-1}$ of $\partial K$ at the point $x$ are all different, and the family of functions $\{h_{x - tu}\}_{t \in \R_{+}}$ is {nice}. The set of bifurcation points includes $\{r_1,\ldots,r_{n-1}\}$. Let us prove that there exists $r \in (r_1, r_{n-1})$ such that $x - ru$ lies on at least $6$ normals.
By Lemma \ref{Lemma1}, $u$ is a critical point of $h_{x - tu}$ for all $t$. It is easy to check that its Morse index equals $0$ when $t \in (0, r_1)$, equals $1$ when $t \in (r_1, r_2)$, \ldots, equals $n-1$ when $t \in (r_{n-1}, +\infty)$ \cite{M-M}.
We shall apply Lemma \ref{discrete-continuity} for the family $\{ h_{x - tu} \}$ on the segment $[t_1 ,t_2] := [r_1 + \eps, r_{n-1} - \eps]$ for sufficiently small $\eps > 0$. Since the Morse index of the point $u$ is neither $0$, nor $n-1$ on this segment, the condition $\sum_{k = 1}^{n-2} C_{t, k} \ge 1$ is satisfied. So, we only need to check that $C_{t_1, 0} \ge 2, C_{t_2, n-1} \ge 2$.
We will check the first inequality; the second may be verified in a similar way. The point $u$ at $t=r_1$ is an index exchange point. This means that, as $t=r_1+\varepsilon$ tends to $r_1$, there is a (local) minimum point $u_{\varepsilon}$ of $h_{x - (r_1+\varepsilon)u}$ which tends to the critical point $u$ of index $1$. We conclude that for small $\varepsilon$ the point $u_{\varepsilon}$ is not the global minimum of $h_{x - (r_1+\varepsilon)u}$. Therefore there are two local minima, that is, two distinct critical points of index $0$.
Application of Lemma \ref{discrete-continuity} completes the proof.\qed
2202.01529
\section{Introduction}
\label{sec:introduction}
Handheld gadgets like mobile phones and tablets have become part of everyday life for most of the world population. Over the course of time, these small devices have also become a great source of data, as well as computationally capable units.
Together with the recent abundance of data, both qualitatively and quantitatively, data privacy has become one of the major areas of concern. Users would love to have more engaging and personalized experiences with their gadgets, but at the same time are wary of sharing their data for building better applications. With increasingly strict rules like the General Data
Protection Regulation (GDPR)~\cite{regulation2016regulation}, private data needs to be handled with utmost care, ensuring minimal privacy leakage. At the same time, a key prerequisite for developing any learning algorithm is the availability of data, from which
salient features pertaining to the use-case can be learned.
Federated learning (FL)~\cite{mcmahan2017communication, yang2019federated, bonawitz2019towards} is a recent area of research, where the learning happens with the private data not leaving the
device.
In this setting, a shared global model under the supervision of a central orchestrator or server (called aggregator)\footnote{Since the aggregator resides in the server, in the federated learning scenario, we will call it as server, aggregator or central aggregator.} gets trained
by a federation of participating devices.
In a typical FL training environment, a base model is initialized by the server and pushed to the
participating devices. The devices update the model parameters individually, in a typical optimization process like
stochastic gradient descent (SGD) or mini-batch gradient descent involving batches and epochs, and then return the
parameter weight updates to the aggregator. The aggregator aggregates all the received weight updates and updates the
parameters of the base model. The process is repeated until the model converges satisfactorily with respect to the desired performance criterion.
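A minimal sketch of this training loop in the spirit of federated averaging (toy data, hypothetical hyperparameters, and plain NumPy; not the authors' experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data split across 5 hypothetical devices.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    devices.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    # Each device runs a few epochs of full-batch gradient descent
    # starting from the global model pushed by the server.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for _round in range(20):
    # server pushes w_global; devices train locally and return weights
    local_ws = [local_update(w_global, X, y) for X, y in devices]
    # federated averaging: weight each device by its sample count
    sizes = [len(y) for _, y in devices]
    w_global = np.average(local_ws, axis=0, weights=sizes)

assert np.allclose(w_global, true_w, atol=0.05)
```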
The FL process briefly explained above is fundamentally not the same as the original gradient descent algorithm design, although special
cases of the scheme do match some gradient descent variants. Still, there are certain learning challenges because of the non-IID nature of the data. The federated nature of the data may introduce new difficulties, such as each device holding data with a single label or even a single sample. In addition, various other factors, like the number of samples per device, the feature space, and the model complexity, all have an effect on the performance of models trained via federated learning.
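One such label-skew scenario can be simulated with a simple partitioning scheme (an illustrative sketch with a hypothetical label-to-device assignment, not a setup prescribed in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Partition a labeled dataset so that each device holds samples from
# at most two of the ten labels (an extreme, hypothetical non-IID split).
labels = rng.integers(0, 10, size=1000)
device_of_label = {lab: lab % 5 for lab in range(10)}  # 5 devices
partitions = {d: [] for d in range(5)}
for idx, lab in enumerate(labels):
    partitions[device_of_label[int(lab)]].append(idx)

# every device sees at most 2 of the 10 labels
for d, idxs in partitions.items():
    assert len({int(labels[i]) for i in idxs}) <= 2
```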
Considering the infrastructure cost involved in federated learning, the main advantage of FL follows from its primary security design: the data need not be synced to a central server~\cite{bonawitz2019towards}. Syncing data to a central server involves a cost which may not be negligible. In FL training, the major chunk of the cost is related to sending/receiving the model parameter updates to/from the central
aggregator: the more complex the model, the more parameters need to be
sent by the devices to the aggregator, and hence the more model update data needs to be synced to the
central aggregator. Even larger than the training cost is the model deployment cost, which involves not just
the devices participating in the training but all the devices to which the inference engines have to be deployed. Thus,
keeping the model size under control is essential for reducing the cost in FL.
Another aspect of FL training is security. Although the data is ensured not to leave the devices, we also have to
make sure that the updates are secure, so as to reduce the chances of inferring data from gradients~\cite{geiping2020inverting}. Typical
private AI schemes like differential privacy~\cite{dwork2011firm, dwork2014algorithmic, dwork2006calibrating, baek2021enhancing}, secure aggregation~\cite{bonawitz2017practical} and homomorphic encryption~\cite{hardy2017private} have been employed in FL
in varying degrees for secure communication of the training updates. However, adding security to the process affects the model
performance or the training duration, thus impacting the cost: the more we anonymize the data, the greater the information loss, and hence the slower or weaker the learning~\cite{mcmahan2017learning}.
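A common ingredient of such schemes is clipping each update and adding calibrated Gaussian noise, as in DP-SGD-style training. The following is a hedged sketch with hypothetical parameter values, not a complete differential-privacy mechanism (no accounting of the privacy budget):

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_update(update, clip_norm=1.0, noise_mult=0.5):
    # Clip the update to L2 norm <= clip_norm, then add Gaussian noise
    # proportional to the clipping bound (the clip-and-noise step used
    # in DP-SGD-style schemes). Parameter values here are hypothetical.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(scale=noise_mult * clip_norm, size=update.shape)
    return clipped + noise, clipped

# a large update gets scaled down to the clipping bound before noising
noisy, clipped = privatize_update(np.array([3.0, 4.0]))
assert abs(np.linalg.norm(clipped) - 1.0) < 1e-9
```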
In this work, we address two main problems:
\begin{enumerate}
\item How do federated learning optimization algorithms work in comparison to traditional gradient-descent-based
optimization techniques for specific non-IID settings involving skewness in data samples and labels?
\item How much of a cost effect does federated-learning-based training and deployment have on cloud platforms
compared to centralized learning and inference schemes?
\end{enumerate}
Recent works have looked at how federated learning performs in general for different non-IID scenarios. We try to extend their work by considering more practical scenarios related to on-device data. We first show theoretically how FL is different from centralized learning and how the federated averaging algorithm~\cite{mcmahan2017communication} is technically different from gradient descent variants.
We look at some of the special non-IID cases associated with the number of samples and the skewness of labels and see how they affect learning. We also show a comparison of the costs involved in federated learning under different scenarios and in centralized\footnote{The term ``central'', when used in the federated learning context, means the central aggregator or server. General usage of ``central'' or ``centralized'' means training with all the data available at one place.} schemes.
The rest of the manuscript is arranged as follows. Section~\ref{sec:related} discusses the related works in the area of federated learning. In section~\ref{sec:FL}, we discuss how federated learning is practically different from centralized training. The experiments supporting section~\ref{sec:FL} are presented in section~\ref{sec:exp}. We show the cost comparison between federated and centralized schemes in section~\ref{sec:cost}, followed by the conclusion.
\vspace{5pt}
\section{Related works}
\label{sec:related}
Federated learning as a concept was introduced by McMahan {\it et al.}~\cite{mcmahan2017communication} in 2016, although privacy-preserving computations were already discussed as early as the late 1970s~\cite{rivest1978data, yao1982protocols}.
There have also been earlier works on privacy preserving techniques using local data with the help of central servers~\cite{agrawal2000privacy, vaidya2008privacy}.
Initially coined for edge devices or mobiles, federated learning has also been extended to domains where the edge clients are large organizations, relatively few in number. The setting involving a federation of many edge devices with relatively few data samples per device is called cross-device federated learning. The second setting, involving a small number of large organisations that themselves act as data centers, is called cross-silo federated learning. An example of the cross-silo setting is medical image classification using hospital data~\cite{kaissis2020secure}: each hospital, acting as a client, has large non-sharable data, yet would like to learn a medical image classification model in collaboration with other hospitals. In this work, we do not analyze the learning challenges and cost effects of the cross-silo setting. The survey paper by Kairouz {\it et al.}~\cite{kairouz2019advances} discusses the different federated settings, open problems and challenges in detail.
Since its conceptualization, federated learning has been widely used in many domains. As a primary initiator, Google has incorporated federated learning in its GBoard keyboard~\cite{hard2018federated, ramaswamy2019federated, yang2018applied}, in Android messaging, and in web tracking as a replacement for third-party cookies~\cite{ravichandran2021evaluation, bindra2021building}. Cross-silo use cases include medical data segmentation and classification~\cite{sheller2020federated, kaissis2020secure}, financial risk prediction, pharmaceutical discovery, etc.
In terms of analysis of the federated learning algorithm, the original authors~\cite{mcmahan2017communication} showed FL performance in various scenarios, including IID and non-IID distributions of data, with an increasing number of rounds and an increasing number of clients. In almost all the experiments, their federated averaging method was shown to be better than the naive federated SGD algorithm. \cite{stich2018local, yu2019parallel} look at asymptotic local SGD convergence, and \cite{kairouz2019advances} extends it to the federated setting in their survey.
Different federated learning frameworks have also come up in recent years, the important ones being TensorFlow Federated~\cite{The_TensorFlow_Federated_Authors_TensorFlow_Federated_2019}, PySyft~\cite{ryffel2018generic}, Federated AI Technology Enabler~\cite{The_FATE_Authors_2019} and Leaf~\cite{caldas2019leaf}. There have also been attempts to benchmark a few of these frameworks~\cite{The_TensorFlow_Federated_Authors_TensorFlow_Federated_2019, budrionis2021benchmarking}.
In our work, we look into the training aspects of federated learning when there is label sparsity, i.e., when only a single label is available per edge device for a multi-class learning problem. We also give a detailed study of the cost factor associated with federated learning, which is not a well-covered issue. We hope that this will help FL designers come up with cost-optimal federated solutions, or even decide whether federated learning is actually required.
\vspace{5pt}
\section{Optimization in Federated `Learning'}
\label{sec:FL}
Federated learning optimization schemes use gradient updates for improving the model during training. The gradient updates happen both at the server end (global), as well as at the client end (local). The basic global
model gradient update step in federated learning is similar to any gradient update step, denoted as
\begin{equation}
\label{eqn:weight_update}
w^{t+1} = w^t - \eta_s \cdot f(gradient)
\end{equation}
What differs is how we compute the gradient function $f(gradient)$, which is basically a function of the client
gradients computed in each round of FL training.
In this section, we review the optimization schemes usually employed in federated learning and see how they
differ from the usual gradient-descent-based optimization schemes. This will help in understanding the
practical difficulties of federated learning in comparison to centralized schemes.
But first, we briefly review the traditional gradient descent schemes, which will help in mapping the federated learning
optimization schemes more clearly.
\subsection{Traditional gradient descent learning}
Any learning algorithm requires an objective function to optimize. In most classification and regression
problems, the objective is a loss function to be minimized. This loss function is an aggregation
of the individual losses associated with each sample used for training. Let the training data set be
$\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$, where $N$ is the total number of training samples and $x_i, y_i$ are the input representation and output label, respectively, of the $i$th sample. Following the notation in \cite{mcmahan2017communication}, we write the optimization problem as
\begin{equation}
\label{eqn:obj}
\min_{w \in \Re^d} f(w)\mbox{ where } f(w) = \frac{1}{N} \sum_{i=1}^{N} {l}(x_i, y_i, w)
\end{equation}
where $l(x_i, y_i, w)$ is the loss associated with input-output pair ($x_i, y_i$) with model parameter $w$. A gradient
step involves calculation of gradient of $f(w)$, denoted as
\begin{equation}
\label{eqn:grad}
g(w) = \nabla f(w) = \frac{1}{N}\sum_{i} g_i(w)
\end{equation} where
\[g_i(w) = \frac{\partial {l}(x_i, y_i, w)}{\partial w}\]
Note that while optimizing the above loss function, we may not take the entire dataset in each iteration. When only a subset of samples, whose indices are denoted by $B \subset \{1,\ldots, N\}$, is used, the loss function becomes
\begin{equation}
\label{eqn:obj_batch}
\min_{w \in \Re^d} f_B(w)\mbox{ where } f_B(w) = \frac{1}{n_B} \sum_{i \in B} {l}(x_i, y_i, w)
\end{equation}
where $n_B = |B|$.
The gradient associated with this loss function is \[g_B(w) = \nabla f_B(w) = \frac{1}{n_B}\sum_{i \in B} g_i(w)\]
In gradient-descent-based optimization schemes, each weight update step corresponds to a gradient computation over the samples in a batch. The model learns over weight updates for batches of training samples. When all samples (in batches) have been presented for training once, it corresponds to one epoch. The number of epochs is an indication of the amount of training.
\begin{itemize}
\item When all the samples in $\mathcal{D}$ are considered as one batch, it is called batch gradient
descent. This corresponds to the case $n_B = N$, recovering~\eqref{eqn:obj} and~\eqref{eqn:grad}.
\item When a randomly picked batch of samples, subscripted by $B$ is considered, it is called stochastic gradient descent.
\item When we consider only a single sample per step, which may not be stored for later, it is called online
gradient descent. This corresponds to the scenario $N \sim \infty$ and $|B| = 1$.
\end{itemize}
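For concreteness, these batch quantities can be written out for a toy squared loss $l(x_i,y_i,w)=\frac{1}{2}(w^\top x_i - y_i)^2$ (a minimal sketch; the data, dimensions and batch sizes below are illustrative assumptions, not from our experiments):

```python
import numpy as np

# Toy squared loss l(x_i, y_i, w) = 0.5 * (w.x_i - y_i)^2, so that
# g_i(w) = x_i * (w.x_i - y_i); the data here is purely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # N = 100 samples, d = 5 features
y = rng.normal(size=100)
w = np.zeros(5)

def g_B(idx, w):
    """Batch gradient g_B(w) = (1/n_B) * sum_{i in B} g_i(w)."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

g_full = g_B(np.arange(100), w)                      # batch GD: n_B = N
g_mini = g_B(rng.choice(100, 32, replace=False), w)  # stochastic mini-batch
g_online = g_B(np.array([0]), w)                     # online: |B| = 1
```

The three calls correspond to the three items above: full-batch, mini-batch and online gradient descent differ only in which index set $B$ is passed in.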
It is easy to show that when the samples in each batch are IID and drawn from the same distribution as the training data, then $E[f_B(w)] = f(w)$ and $E[g_B(w)] = g(w)$. In general, however, depending on the samples in the batch, which could follow a different distribution from the training data, the loss function minimized for a batch differs from the loss function of the entire data. Hence, a single update for a batch $B$ reduces the loss $f_B(w)$ for that batch, yet may increase the error on the full dataset, $f(w)$. Over the course of different batches, however, the loss function $f(w)$ in \eqref{eqn:obj} decreases~\cite{duda2006pattern}.
\subsection{Federated Averaging}
In the centralized setting, each batch is processed sequentially to update the model. In the federated setting, the data is distributed over clients. As mentioned earlier, two types of gradient updates happen in federated optimization: one done locally at the client side, involving the data present at the clients, and one at the server side, which aggregates the individual client updates to update the global model weights. Depending on how these two updates work, the optimization in a federated setting may match the centralized setting or could be quite different. However, the final goal is the same: to minimize the loss over the whole training data.
Let us assume there are $K$ clients and the $N$ samples are distributed over these clients, with $P_k$ the set of indices of data points on client $k$.
Thus, we can re-write the objective function \eqref{eqn:obj} as
\begin{equation}
\label{eqn:obj_fed}
f(w) = \sum_{k=1}^{K}\frac{n_k}{N} f_k(w)\mbox{ where } f_k(w) = \frac{1}{n_k} \sum_{i \in P_k} {l}(x_i, y_i, w)
\end{equation}
where $n_k = |P_k|$.
Let us assume a fraction $C$ of clients is chosen in a particular round of federated learning. When we compute one step of gradient descent at each client using all the data present on it, we call this federated algorithm \texttt{FederatedSGD}~\cite{mcmahan2017communication}. When $C = 1$, this optimizes \eqref{eqn:obj}, corresponding to batch gradient descent.
Taking the gradient of~\eqref{eqn:obj_fed} with $C=1$ gives
\[
\nabla f(w) = \sum_{k=1}^{K}\frac{n_k}{N} \nabla f_k(w)
\]
That is, the batch gradient equals the weighted average of the individual client gradients in \texttt{FederatedSGD} (with $C=1$). Hence, with an appropriate step size $\eta$, the weight update step of \texttt{FederatedSGD},
\begin{equation}
\label{eqn:Fed_wt_update}
w_{t+1} = w_t - \eta \cdot \sum_{k=1}^{K}\frac{n_k}{N} \nabla f_k(w_t)
\end{equation}
will reduce the global loss function.
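The identity behind \texttt{FederatedSGD} with $C=1$, namely that the weighted client gradients reproduce the full-batch gradient, can be checked numerically on a toy squared loss (the data and the client partition below are illustrative assumptions):

```python
import numpy as np

# Toy squared loss; data and client partition are illustrative.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(60, 4)), rng.normal(size=60)
w = rng.normal(size=4)

def grad(Xs, ys):
    """Gradient of the average squared loss over the given samples at w."""
    return Xs.T @ (Xs @ w - ys) / len(ys)

# Uneven partition of the N = 60 samples over K = 3 clients.
parts = [list(range(0, 10)), list(range(10, 35)), list(range(35, 60))]
g_fed = sum(len(p) / 60 * grad(X[p], y[p]) for p in parts)  # sum_k (n_k/N) grad f_k
g_central = grad(X, y)  # full-batch gradient; identical to g_fed
```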
Similar to the centralized setting, we can let each client pass over its data in batches of size $B$, and over its entire local data $E$ (epoch) times. This tweak to \texttt{FederatedSGD} is called \texttt{FederatedAveraging}~\cite{mcmahan2017communication}. The pseudo-code for \texttt{FederatedAveraging} is provided in Algorithms~\ref{alg:FedAvg} and~\ref{alg:ClientUpdate}. Again, when $\mathcal{B} = \{P_k\}$ for each client $k$, we recover \texttt{FederatedSGD}; thus \texttt{FederatedSGD} is a special case of \texttt{FederatedAveraging}. The $ClientUpdate()$ function in Algorithm~\ref{alg:ClientUpdate} is what actually drives the learning and its rate, as it directly involves the client-side data and its skewness, together with the batch size and the number of epochs.
\begin{algorithm}
\caption{\texttt{FederatedAveraging} as in~\cite{mcmahan2017communication}. \newline The $K$ clients are
indexed by $k$}
\begin{algorithmic}
\STATE initialize $w_0$
\FOR {each round $t = 1,2, \ldots$ }
\STATE $m \gets \max(C \cdot K, 1)$
\STATE $S_t \gets (\mbox{random set of $m$ clients})$
\FOR {each client $k \in S_t$ \textbf{in parallel}}
\STATE $w_{t+1}^k \gets ClientUpdate(k, w_t)$
\ENDFOR
\STATE $w_{t+1} \gets \sum_{k=1}^{K}\frac{n_k}{N} w_{t+1}^k$
\ENDFOR
\end{algorithmic}
\label{alg:FedAvg}
\end{algorithm}
\begin{algorithm}[H]
\caption{$ClientUpdate(k, w)$ as in~\cite{mcmahan2017communication}. \newline $B$ is the local minibatch size, $E$ is the number
of local epochs, and $\eta_c$ is the learning rate.}
\begin{algorithmic}
\STATE \COMMENT{At client $k$}
\STATE $\mathcal{B} \gets (\mbox{split $P_k$ into batches of size $B$})$
\FOR {each local epoch $i$ from $1$ to $E$}
\FOR {batch $b \in \mathcal{B}$}
\STATE $w \gets w - \eta_c \nabla l(w;b)$
\ENDFOR
\ENDFOR
\RETURN $w$ to server
\end{algorithmic}
\label{alg:ClientUpdate}
\end{algorithm}
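Algorithms~\ref{alg:FedAvg} and~\ref{alg:ClientUpdate} can be sketched end-to-end on a toy least-squares problem (a minimal simulation, not the TensorFlow Federated implementation used in our experiments; the loss, learning rate and data are illustrative assumptions):

```python
import numpy as np

def client_update(w, Xk, yk, eta_c=0.1, B=8, E=2, rng=None):
    """ClientUpdate: E local epochs of mini-batch SGD on the client's data
    (toy squared loss; B, E and eta_c are illustrative choices)."""
    rng = rng or np.random.default_rng(0)
    w = w.copy()
    for _ in range(E):
        idx = rng.permutation(len(yk))
        for b in np.array_split(idx, max(1, len(idx) // B)):
            w -= eta_c * Xk[b].T @ (Xk[b] @ w - yk[b]) / len(b)
    return w

def federated_averaging(clients, rounds=50, C=1.0, seed=0):
    """Server loop: sample max(C*K, 1) clients, run ClientUpdate on each,
    aggregate the returned weights by n_k / N."""
    rng = np.random.default_rng(seed)
    w = np.zeros(clients[0][0].shape[1])
    N = sum(len(yk) for _, yk in clients)
    for _ in range(rounds):
        m = max(int(C * len(clients)), 1)
        S = rng.choice(len(clients), m, replace=False)
        updates = {k: client_update(w, *clients[k], rng=rng) for k in S}
        w = sum(len(clients[k][1]) / N * wk for k, wk in updates.items())
    return w

# Toy usage: noiseless linear data split over 4 clients of 50 samples each.
rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ w_true
clients = [(X[i:i + 50], y[i:i + 50]) for i in range(0, 200, 50)]
w_est = federated_averaging(clients, rounds=100, C=1.0)
```

On this noiseless problem the averaged model recovers $w_{true}$; with $C=1$ and $\mathcal{B}=\{P_k\}$, $E=1$, the same loop reduces to \texttt{FederatedSGD}.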
In some variants, instead of the updated weights $w_{t+1}^k$, each client sends the difference $w_{t+1}^k - w_t$. This quantity is simply the sum of the batch gradients of that client, multiplied by the negative of the step size:
\begin{equation}
\label{eqn:weight_diff}
w_{t+1}^k -w_t = -\eta_c \sum_b \nabla l^k(w;b)
\end{equation}
where $\nabla l^k(w;b)$ denotes the gradient at client $k$ for batch $b$. The aggregator computes $f(gradient) =
-\sum_k \frac{n_k}{N} (w_{t+1}^k - w_t)$ and uses equation~\eqref{eqn:weight_update} to update the global weights. When $\eta_s = 1$, the update is equivalent to $w_{t+1} \gets \sum_{k=1}^{K}\frac{n_k}{N} w_{t+1}^k$ as in \texttt{FederatedAveraging}. The advantage of viewing the optimization as in~\eqref{eqn:weight_update}, with client updates as in~\eqref{eqn:weight_diff}, is that we can employ any of the gradient optimizers like Adam, SGD or RMSProp~\cite{sun2019survey} both at the client and at the server, which gives more handles for tuning the weights in non-IID settings.
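Viewed this way, the server can feed the aggregated weight differences into any optimizer as a pseudo-gradient; a minimal sketch with optional server-side momentum (the optimizer choice and all hyper-parameter values are illustrative assumptions, not prescriptions from the paper):

```python
import numpy as np

class ServerOptimizer:
    """Applies w_{t+1} = w_t - eta_s * f(gradient), where f(gradient) is
    the negative weighted average of client deltas w_{t+1}^k - w_t.
    Momentum is added as an illustrative server-side optimizer choice."""
    def __init__(self, eta_s=1.0, beta=0.9):
        self.eta_s, self.beta, self.v = eta_s, beta, None

    def step(self, w, client_deltas, client_sizes):
        p = np.asarray(client_sizes, dtype=float)
        p /= p.sum()  # n_k / N weights
        pseudo_grad = -sum(pi * d for pi, d in zip(p, client_deltas))
        self.v = (pseudo_grad if self.v is None
                  else self.beta * self.v + pseudo_grad)
        return w - self.eta_s * self.v

# With beta = 0 and eta_s = 1, this reduces to plain FederatedAveraging.
opt = ServerOptimizer(eta_s=1.0, beta=0.0)
w = np.zeros(2)
deltas = [np.array([0.2, -0.4]), np.array([0.6, 0.0])]  # w_{t+1}^k - w_t
w = opt.step(w, deltas, client_sizes=[10, 30])
```

Swapping the momentum rule for Adam or RMSProp only changes how the pseudo-gradient is consumed; the client side is untouched.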
We now see how the distribution of the federated data plays a role in shaping the federated ``learning''.
\subsection{Federated Data}
In the federated setting, the ``data'' is held at individual end devices. Depending on the learning problem, this data could be large or small in number, multi-labeled or single-labeled, and could even be restricted to a single sample. We look at each of these cases separately in this section.
\subsubsection*{Non-IID data}
When we have a large number of samples, like in the case of MNIST~\cite{cohen2017emnist}, or next word prediction~\cite{hard2018federated}, we cannot assume the samples present in devices to be IID, as most of the time, the data distribution would be dependent on the user of the device.
In centralized models, the mini-batch data is randomly selected, and hence we can assume $E[\nabla f_B(w)] = \nabla f(w)$. We saw that when $C=1$, \texttt{FederatedSGD} actually solves the full-training-data optimization. When $C<1$, although the cost function is similar to that of SGD, the non-IID nature of the data skews the learning towards the characteristics of the participating devices. However, it has been shown~\cite{bonawitz2019towards, nilsson2018performance} that with a larger number of aggregation rounds it converges, though possibly slightly off from the optimum obtained in centralized training.
In \texttt{FederatedAveraging}, even when $C=1$, if the number of epochs $E$ or the number of batches in $ClientUpdate$ is more than one, the expected loss differs from the full training loss because of non-IID data. The optimization solution also differs, as the gradients calculated by the end devices in $ClientUpdate$~\eqref{eqn:Fed_wt_update} are computed locally over batches and epochs. Thus the gradient update from each client is very specific to the non-IID data held by that device. If the end-device data distribution is similar to the global distribution, the learning is similar to centralized SGD steps. If it is quite dissimilar, the client gradients can be mutually noisy, which may slow down and hinder the learning.
We consider two extreme cases where the local data distribution is very different from the global distribution: (1) multiple samples, single label; (2) a single sample (single label). These types of data are common with user profile predictions such as demographics, interests, etc.
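A label-skewed partition of this kind, with each device holding samples of only one label, can be simulated as follows (a sketch; the label set and device counts are placeholders, not our experimental setup):

```python
import random
from collections import defaultdict

def single_label_partition(labels, num_devices, samples_per_device, seed=0):
    """Assign each device `samples_per_device` sample indices that all share
    one label, cycling through the label set so every class is covered."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    classes = sorted(by_label)
    devices = []
    for d in range(num_devices):
        y = classes[d % len(classes)]  # device d gets label y only
        devices.append(rng.sample(by_label[y], samples_per_device))
    return devices

labels = [i % 10 for i in range(1000)]  # toy stand-in for 10-class labels
parts = single_label_partition(labels, num_devices=20, samples_per_device=50)
```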
\subsubsection{Multi sample single label}
For device-centric profiles that do not change over time, the profile labels are unique to a device. These could be demographic attributes, which change very rarely, or user interest profiles, which may drift very slowly. In both cases, we can consider the label to be unique (at least for a predetermined time period). The features, however, may be thought of as a single feature vector or a group of vectors denoting, for instance, the hour, day, week or month.
In such cases, we have multiple samples in each device, all with the same label. Thus the updated weights from each device $k$, $w_{t+1}^k$, are directed towards strengthening the activation corresponding to its own label. In other words, devices with unique labels learn weight updates very specific to their attached labels. The server aggregation then consists of aggregating many ``label-overfitted'' weight updates. We show this effect more clearly in the experiments section.
\subsubsection{Single sample (single label)}
This is an extreme data distribution scenario where there is only one sample, and hence one label, on each device. However, for the case $E=1$ and with random selection of end devices, this actually corresponds to mini-batch SGD! The main problem here is that in federated learning, communication is costly (mainly because of the time, and also the payload), while local computation is cheap. Hence federated setups try to synchronize as many devices as possible in each aggregation. It has been shown that $400\sim1000$ devices per round works well both in terms of cost and in terms of learning~\cite{bonawitz2019towards}. In the single-sample scenario, the number of devices corresponds to the batch size in SGD. The batch size is a hyperparameter in SGD, which depends on the problem at hand. It has been observed in practice that larger batches lead to a significant degradation in model quality, as measured by the ability to generalize~\cite{keskar2016large}; typical batch sizes are $32\sim 250$.
\subsection{Effects of model complexity on learning and cost}
\begin{table*}
\small
\centering
\caption{Models in rows 1--3 are found in the literature. Models in rows 4 and 5 were designed by us taking the size factor into account. Model2 has 3 convolution layers and 2 fully connected layers. Model1 has a Global MaxPool layer (denoted GMPool) between the convolution and fully connected layers, which drastically reduces the model size. The sixth row is Model2 trained in an FL setting involving 10 devices.}
\label{tab:image_image}
\begin{tabular}{@{}lcccc@{}}\toprule
\# & Dataset & Method & Accuracy & Size(MB) \\
\midrule
1 & ImageNet (pretrain) + IMDB-WIKI (fine-tuning), Adience (testing) & AL-ResNets-34~\cite{zhang2019fine} & 66.03 & $\sim$87.3\\
2 & ImageNet (pretrain) + Adience (training and testing) & RESF-EMD~\cite{hou2016squared} & 62.2 & $\sim$103 \\
3 & ImageNet (pretrain) + IMDB-WIKI (fine-tuning), Adience (testing) & VGGF-DEX with IMDB-WIKI~\cite{rothe2018deep} & 64 & $\sim$553 \\ \midrule
4 & IMDB-WIKI~\cite{rotheimdb} dataset & Model1 (4Conv+GMPool+ 2FC) & 46.83 & 1.61 \\
5 & IMDB-WIKI dataset & Model2 (3Conv+2FC) & 49.34 & 24.03 \\
6 & IMDB-WIKI dataset & Model2 FL & 45.23 & 24.03 \\
\bottomrule
\end{tabular}
\end{table*}
Model complexity is an important parameter for any learning task. An optimal model decreases both bias and overfitting, and hence generalizes well. The complexity of a model is largely governed by the complexity of the features to be learned from the input; in other words, when the data has more inherent features, more complex models suit better. In centralized schemes, there is not much restriction on model complexity in terms of training cost: the main cost of training lies in the number of compute instances involved and the duration of training. Model complexity is not much of a factor, although we could assume complex models have longer training periods; more importantly, there is no communication cost that depends on the model.
In the federated setting, there is a communication cost associated with passing model parameters and configurations between server and clients. The larger the model, the higher the communication cost, as shown in section~\ref{sec:cost}. In addition, devices may not have spare computation power for computationally intensive training. Thus learning in the federated setting is challenging, as it needs to trade off performance against model size (cost) in many scenarios.
Take the example of image classification. Table~\ref{tab:image_image} shows the performance and model sizes of various age prediction models from images. Usually, age prediction is treated as a classification problem, where the intention is to classify subjects into different age buckets. The purpose of the table is not a comparison of models, but to understand the accuracy levels achieved by models of different complexities (sizes). As we see, complex models evidently perform well for the task; however, such models may not be suitable for federated learning, as the communication costs would be immensely high. For instance, using the assumptions mentioned in section~\ref{sec:cost}, the training cost for the low-performing, small models Model1 and Model2 in table~\ref{tab:image_image} would be $\$1111$ and $\$6290$, respectively. For the heavy models AL-ResNets-34, RESF-EMD and VGGF-DEX, the training costs are $\$20905$, $\$24532$ and $\$128482$, respectively! In addition, there would also be a huge deployment cost, directly dependent on the model size.
Hence, unlike in centralized settings, it is very important to consider model sizes in federated learning. More emphasis should be given to obtaining the most relevant and small subset of features, as well as to other model size reduction techniques like pruning or quantization~\cite{han2015deep}.
\vspace{5pt}
\section{Experimentation and results}
\label{sec:exp}
We now show the experimental results for the different scenarios discussed in the federated learning setup. We specifically show the effect of the number of samples and labels per device on overall learning.
\subsection{Data set and features}
The dataset we used is the MNIST data for handwritten digit recognition~\cite{lecun1998mnist, lecun1998gradient}. The MNIST dataset consists of 70000 handwritten images ($28\times 28 \times 1$) of the 10 digits; 60000 samples are used for training and 10000 for testing. The images were flattened to 784-dimensional vectors. The hardware setup was an Intel(R) Core(TM) i7-4770K CPU @ $3.50$GHz processor with 32 GB RAM running Ubuntu 16.04.7 LTS. No GPUs were involved in the simulations. The central models were trained using Keras~\cite{chollet2015keras}, and the federated training was simulated using TensorFlow Federated~\cite{The_TensorFlow_Federated_Authors_TensorFlow_Federated_2019}.
\subsection{Number of samples per device}
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\linewidth]{numsamples_bar.pdf}
\caption{Change in accuracy with the number of training samples per device. We see that as the number of samples increases, the test accuracy increases. Also, as the number of neural network layers is increased, federated learning shows an improved learning performance.}
\label{fig:numsamples}
\end{figure}
In federated learning, the number of training samples per device has a direct effect on training. Fig.~\ref{fig:numsamples} shows the change in accuracy with a varying number of training samples per device. We train three models with (a) no hidden layer, (b) a single hidden layer with 200 nodes, and (c) two hidden layers with 500 and 200 nodes.
We represent these networks as $[10], [200,10], [500,200,10]$, respectively\footnote{$[500,200,10]$ represents a neural network with three layers: two hidden layers with 500 and 200 nodes, and an output layer with 10 nodes.}. For comparison, we also trained these models centrally.
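The footprint of such fully connected architectures follows directly from the layer sizes (a back-of-the-envelope sketch, assuming dense layers with biases and float32 parameters):

```python
def dense_net_size(layer_sizes, bytes_per_param=4):
    """Parameter count and approximate size of a fully connected network.
    `layer_sizes` includes the input dimension, e.g. [784, 500, 200, 10]."""
    params = sum(n_in * n_out + n_out  # weight matrix + bias per layer
                 for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
    return params, params * bytes_per_param / 2**20  # (count, size in MB)

# The [500,200,10] network on flattened 784-dimensional MNIST inputs.
p, mb = dense_net_size([784, 500, 200, 10])
```

This count (just under half a million parameters, roughly 1.9 MB at float32) is what a device uploads and downloads every round, which is why model size dominates FL communication cost.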
As in centralized schemes, the more samples per device, the more generalized the per-device weight updates. With fewer samples per device, the gradient updates coming from individual devices contain more noise, and hence there is greater variance in the centrally aggregated weights. When we increase the number of layers, FL is consistent with the centralized models in learning better.
\subsection{Learning with single label}
When there are multiple samples per device, all with the same label (but different across devices), the learning takes an opposite course, i.e., better learning is obtained with fewer samples. Fig.~\ref{fig:Onelabel} shows the training and test accuracies for the $[500,200,10]$ network architecture with $10, 50, 100$ and $200$ samples per device, all with the same label.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{Singlelabel.pdf}
\caption{Single label: training with many samples per device, all with a single label, on the $[500,200,10]$ network. We see that as the number of samples increases, the training accuracy is high but the test accuracy does not improve, showing overfitting of samples.}
\label{fig:Onelabel}
\end{figure}
We see that for 10 samples per device, the training and testing accuracies are comparable. With more samples, we see clear overfitting. So how does this overfitting occur, which was not observed when we had all classes on each device?
Note that each device's learning is specific to the single label attached to the device. Hence each device's weights are tuned (overfitted) to match only the label available on it. The server is thus aggregating different ``label-overfitted'' gradient updates. This characteristic is more evident in Fig.~\ref{fig:OnelabelAccuracies}. When the number of samples is ten, there is less label-specific learning, and hence better generalization, compared to when we have more samples (200) per device.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{SingleLabelAccuracies.pdf}
\caption{Variation in accuracies: the validation accuracies over the aggregation rounds, when each device has multiple samples all with the same label. We compare the learning over rounds for 10 samples per device and 200 samples per device. When the number of samples is small, the label-specific learning is less and hence there is more generalization. With more same-label samples per device, labels are overfitted within devices, which leads to noise on aggregation. For a single sample, we see a scenario of underfitting.}
\label{fig:OnelabelAccuracies}
\end{figure}
\subsubsection*{Single sample (single label)}
In the single-label case, while we see overfitting with an increasing number of samples per device, we see underfitting when the number of samples is very low, or one. This is evident in Fig.~\ref{fig:OnelabelAccuracies}, where the learning curve is slow for the single-sample case compared to the two multi-sample single-label cases. Note that, as with ten samples, the variation in accuracies over rounds for a single sample is also relatively small compared to larger numbers of samples.
\section{Cost assessment}
\label{sec:cost}
In this section, we compare the costs for training a centralized model against federated learning settings. We consider both training and deployment cost for the respective approaches.
For the cost comparison between federated learning and centralized schemes, we use the actual Amazon Web Services (AWS) cost figures at the time of
publishing of this work~\cite{aws_pricing}.
\subsection{Federated Learning communication}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{Proto.pdf}
\caption{A typical federated learning communication sequence.}
\label{fig:FLproto}
\end{figure*}
For federated learning costs, we assume the communication mechanism presented in~\cite{bonawitz2019towards, li2020federated}. For calculating the cost, we consider the following steps for each aggregation round:
\begin{enumerate}
\item
\begin{enumerate}
\item The registered devices that are ready ping the server to participate in a training cycle.
\item The server selects a subset of these devices.
\end{enumerate} \label{item1}
\item The server sends the current model with weights and a training plan, which can be thought of as an executable that the device runs on its data with the model.
\item The devices perform local weight updates, and the updated weights are sent back to the server.
\item The server aggregates these weights and updates the global model. \label{item2}
\item[*] The above steps are repeated until a server-decided convergence criterion is met.
\end{enumerate}
The above steps~\ref{item1}--\ref{item2} are shown in Fig.~\ref{fig:FLproto}. Step~\ref{item1} can be initiated by either the client or the server. The most cost-effective scenario is for the server to ping a subset of registered devices for their readiness to participate in each training cycle. This would, however, require additional logic on the server side to choose the most plausible devices for each round.
In the centralized setup, communication and data transfer are required for syncing data to S3. The computation is done via EC2 instances, and the inferred labels are synced back to the devices. In the case of FL, the communication and data transfer are mainly for the transfer of model weights and updates between the server and the devices.
For the assessment, we assume there are around 500K devices registered for FL training, out of a total population of 12 million. The number of devices participating in each round of training is fixed at 500. The training is assumed to complete within a month, and the cost is calculated for a month.
We assume the ping messages and other request messages to be at most 1KB. The main variables in an FL setup are the model size, the number of training rounds, and the number of rounds per day (inversely proportional to the time per round). For FL training, the communication and the model/weight downloads and uploads are assumed to happen between the devices and EC2 (the server aggregator). For the final model deployment, the devices download the model from S3 directly, as this is found to be more efficient than pulling it from EC2. We account for the data transfer payload cost from AWS to the internet, and for S3 storage, reads\footnote{S3 storage and write costs are minimal; the main cost is for the reads corresponding to model downloads during deployment.} and writes. For EC2, we account for a Portal (m4.xlarge, 1)\footnote{The number specifies the number of instances.}, a Model aggregator (c5.xlarge), a Training Server (m4.xlarge), a Monitoring Node (t3.xlarge) and a Jump Box (t3.medium). The model aggregator and training server are accounted for only for the duration of the training, and the rest for the entire month. An ELB (accounted for during training), Route53 components and a VPC (NAT gateway) are also considered.
We assume a similar infrastructure for the centralized setting, except for EC2. For EC2, we account for a Portal (m4.xlarge, 1),
Ingestion machines (m4.xlarge, 3), Training Servers (m4.xlarge, 7), Tagging servers (m4.xlarge, 3), a Monitoring Node (t3.xlarge, 1) and a
Jump Box (t3.medium, 1). For the training, we assume 6 instances are employed for a day for data engineering (feature generation) and 2 instances for a day for training. The tagging is assumed to take 4 days. The main cost factor for centralized training is the data syncing cost. We assume that the 12 million devices sync on average 250 KB of data monthly per device; the label size to be synced back is assumed to be 100 Bytes per device. With these figures, the centralized training cost is $\$1967.90$. To understand the effect of data syncing: when the average data synced is assumed to be 1 MB (data containing logs specific to training), which is a reasonable value, the training cost is $\$3769.70$. There is no ``deployment'' as such, as the centrally inferred labels are synced back to the devices, which is found to be negligibly cheap (less than \$1 in our computations).
In this analysis, we have not accounted for communication losses. Also, the ELB and EC2 instances for FL are not optimized; they could be tuned further based on the load requirements.
\subsubsection*{Factors affecting cost}
The main factors that affect the FL cost are the message payloads, the weight transfer payloads, and the training time. The training time determines how long the training-related EC2 instances must be kept live. To simplify the analysis, these factors are captured using three metrics: (a) model size, (b) number of aggregation rounds, and (c) number of aggregation rounds per day.
The model size affects the data payload both during training and deployment; the number of aggregation rounds affects the message and data payloads as well as the training time; and the number of aggregation rounds per day affects how long the training EC2 instances are kept live.
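The dependence of training cost on the three metrics can be sketched as follows. All rates and the number of clients per round are assumed placeholders, not figures from the actual deployment:

```python
# Illustrative FL training-cost model over the three metrics; the
# clients-per-round count, per-GB transfer rate, and hourly EC2 rate
# are all assumed placeholders.
def fl_training_cost(model_size_kb, num_rounds, rounds_per_day,
                     clients_per_round=100,
                     transfer_rate_per_gb=0.09,
                     ec2_hourly_rate=0.42):
    # Each round, every participating client downloads and uploads the
    # model weights once (hence the factor of 2).
    payload_gb = num_rounds * clients_per_round * 2 * model_size_kb / (1024 ** 2)
    transfer_cost = payload_gb * transfer_rate_per_gb
    # Training EC2 instances stay live for num_rounds / rounds_per_day days.
    live_hours = 24 * num_rounds / rounds_per_day
    return transfer_cost + live_hours * ec2_hourly_rate
```

The first term grows with model size and number of rounds; the second shrinks as more rounds are packed into each day, which is exactly the trade-off explored in the figures below.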
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{ModelSize.pdf}
\caption{Cost as a function of model sizes. The assumed number of rounds is 200 and the number of rounds per day is 100.}
\label{fig:modelsize}
\end{figure}
Fig.~\ref{fig:modelsize} shows the effect of model size on cost. The training cost is almost the same in all cases (525.68, 533.15, 540.85, 48.55 for model sizes 15 KB, 500 KB, 1000 KB, and 15000 KB respectively). However, the deployment cost increases sharply as the model size grows. This is because of the S3 reads that must be performed by every device to which the model is deployed. Hence, it would be advisable to look for alternative deployment mechanisms that do not involve S3 reads, such as a FOTA deployment or a release deployment.
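Why deployment dominates for large models can be seen from a rough decomposition of the S3 cost into a fixed per-request term and a transfer term that scales with model size. The GET-request and egress rates below are assumed placeholders:

```python
# Rough sketch of the S3-based deployment cost; get_rate_per_1k and
# egress_rate_per_gb are assumed placeholders, not actual AWS pricing.
def deployment_cost_usd(num_devices, model_size_kb,
                        get_rate_per_1k=0.0004,
                        egress_rate_per_gb=0.09):
    request_cost = (num_devices / 1000) * get_rate_per_1k    # independent of model size
    transfer_cost = num_devices * model_size_kb / (1024 ** 2) * egress_rate_per_gb
    return request_cost + transfer_cost
```

The request term is fixed, while the transfer term grows linearly with model size; across 12 million devices, a 15000 KB model therefore costs orders of magnitude more to deploy than a 15 KB one.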
The impact of model size on cost also motivates model size reduction techniques. There are two ways in which model size can be reduced: (1) careful model design, including feature set reduction, and (2) model compression techniques. The size of the feature set often has a direct impact on the model size; for instance, bag-of-words or similar techniques with a large vocabulary inflate the model. Dimensionality reduction techniques such as Principal Component Analysis~\cite{wold1987principal} may not be applicable in the federated setting, as the entire dataset is not available for the matrix analysis. Feature hashing~\cite{weinberger2009feature} can be utilized instead, and its performance is found to be close to that of non-hashed features. There are also techniques~\cite{zhang2018lq, cheng2020survey, han2015learning} for keeping neural networks compact, which not only keep the models small but also make learning faster. \cite{cheng2020survey} surveys the various techniques on this front, and \cite{han2015learning} proposed a widely adopted pruning technique, in which the neural network evolves over time to converge to the most compact network.
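The feature-hashing idea can be sketched in a few lines: tokens are mapped into a fixed-size vector, so the model's input dimension is bounded regardless of vocabulary size. This is a minimal illustration of the technique, not the implementation used in our experiments (md5 is used here only to make the hash deterministic across runs):

```python
import hashlib

# Minimal sketch of the hashing trick: the output dimension is fixed
# (dim), no matter how large the vocabulary grows.
def hash_features(tokens, dim=1024):
    vec = [0.0] * dim
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        idx = h % dim
        # Signed hashing reduces the bias introduced by collisions.
        sign = 1.0 if (h // dim) % 2 == 0 else -1.0
        vec[idx] += sign
    return vec
```

Since the vector length is fixed at `dim`, the downstream model's parameter count no longer depends on the vocabulary, which is what makes the approach attractive for keeping federated models small.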
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{numrounds.pdf}
\caption{Cost as a function of number of aggregation rounds. The assumed Model Size is 500 KB and the number of rounds per day is 200.}
\label{fig:numrounds}
\end{figure}
In Fig.~\ref{fig:numrounds}, we show the effect of the number of rounds on cost, specifically on training. The number of rounds impacts both the training duration and the total number of messages passed between clients and servers, including the model weights and updates.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{NumRoundsPerDay.pdf}
\caption{Cost as a function of number of aggregation rounds per day. The assumed model size is 500 KB and the number of rounds is 3000.}
\label{fig:roundsperday}
\end{figure}
The effect of the number of aggregation rounds per day is shown in Fig.~\ref{fig:roundsperday}. The cost decrease comes from the shorter period for which the EC2 training instances must stay live: the more aggregations per day, the fewer days the training instances need to run. This metric is governed by the per-cycle timeout that the FL engineer specifies during training. Other factors can also restrict it, such as a shortage of participating devices during certain periods of the day (e.g., working hours), or the model complexity, which makes each gradient iteration on the clients slower.
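The relation between the per-cycle timeout, the achievable rounds per day, and the instance live period can be sketched as follows; this is purely illustrative arithmetic, assuming rounds run back-to-back:

```python
# Sketch of how the per-cycle timeout bounds rounds/day and hence the
# EC2 training-instance live period; purely illustrative.
def max_rounds_per_day(round_timeout_s):
    """Upper bound on rounds/day if rounds run back-to-back."""
    return (24 * 3600) // round_timeout_s

def training_days(num_rounds, rounds_per_day):
    """Days the training instances must stay live."""
    return num_rounds / rounds_per_day
```

For example, a 864-second timeout caps training at 100 rounds per day, so 3000 total rounds keep the training instances live for 30 days; tripling the rounds per day cuts that live period to a third.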
\vspace{3pt}
\section{Conclusion}
In this work, we have looked more closely at federated learning training for non-IID cases, where there is large skewness in the labels. We compared the different SGD schemes with federated SGD and federated averaging under different data distribution scenarios. We showed that the single-label-per-device scenario does not behave like the multi-label cases, and that care should be taken with the number of samples used for training: too many samples with the same label may actually hamper performance. We also compared the cost of FL with centralized schemes and examined the factors that affect cost, viz., model size, number of rounds, and number of rounds per day. We showed that in FL, model deployment has a large effect on cost, and product owners need to consider how to manage deployment efficiently.
\section{Acknowledgement}
We would like to thank the Samsung R$\&$D institute for providing the resources for performing this work.
\bibliographystyle{unsrt}